Dead simple {for devs} Python crawler (script) for extracting structured data from any website into CSV


In my previous post I wrote about a very basic web crawler that can randomly scour the web and mirror/download websites. Today I want to share a very simple script that can extract structured data from (almost) any website.

Use the following script to extract specific information from any website (e.g. prices, IDs, titles, phone numbers, etc.). Populate the “fields” parameter with the names and the patterns (regular expressions) of the data you want to extract. In this example, I extract product names, prices, ratings and images from Amazon.com.
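
A minimal sketch of the approach looks like this, using only the Python standard library (urllib, re, csv). The URL and the Amazon regex patterns shown are illustrative assumptions and will need adjusting to whatever markup the page actually serves:

```python
import csv
import re
import urllib.request

# Page to crawl and the fields to extract.
# Each key becomes a column in the CSV; each value is a regex with one
# capture group marking the data to pull out. These patterns are
# placeholders -- adapt them to the page's current HTML.
url = "https://www.amazon.com/s?k=laptops"
fields = {
    "title":  r'<span class="a-size-medium[^"]*">([^<]+)</span>',
    "price":  r'<span class="a-offscreen">\$([\d.,]+)</span>',
    "rating": r'<span class="a-icon-alt">([\d.]+) out of 5 stars</span>',
    "image":  r'<img[^>]+class="s-image"[^>]+src="([^"]+)"',
}

# Some sites reject requests that carry no User-Agent header.
request = urllib.request.Request(url, headers={"User-Agent": "MyCrawler/1.0"})
html = urllib.request.urlopen(request).read().decode("utf-8", errors="ignore")

# Run every pattern over the raw HTML -- no DOM parsing involved.
columns = {name: re.findall(pattern, html) for name, pattern in fields.items()}

# Write one row per matched item; output.csv is overwritten on each run.
with open("output.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(fields.keys())
    for row in zip(*columns.values()):
        writer.writerow(row)
```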

Some notes:

  • I have set a user agent name, as some websites block crawling if no user agent is present
  • No external libraries are required; the script uses only the Python standard library
  • You can define as many fields to extract as you’d like. The field name is the “key” in the “fields” parameter
  • As I use regular expressions to define where the content is, no DOM parsing is performed, so malformed HTML pages are a non-issue.
  • Each time you run the script, it will overwrite the content in output.csv

Enjoy,

Ran
