Part one of this series focuses on requesting and wrangling HTML using two of the most popular Python libraries for web scraping: requests and BeautifulSoup
After the 2016 election I became much more interested in media bias and the manipulation of individuals through advertising. This series will be a walkthrough of a web scraping project that monitors political news from both left and right wing media outlets and performs an analysis on the rhetoric being used, the ads being displayed, and the sentiment of certain topics.
In the first part of the series we'll be getting media bias data and focusing only on working locally on your computer. If you wish to learn how to deploy something like this into production, feel free to leave a comment and let me know.
Web scraping is the process of automatically extracting data from websites. Any publicly accessible web page can be analyzed and processed to extract the information you're interested in, and that data can then be stored or exported into a format that's more useful to you, be it a spreadsheet, a database, or an API. Web scraping is especially useful when the website you want data from doesn't offer an API, or when its API doesn't expose the data you need. Although scraping can be done manually, in most cases automated tools are preferred because they're faster, cheaper, and less error-prone.
Every time you load a web page you're making a request to a server, and when you're just a human with a browser there's not a lot of damage you can do. With a Python script that can execute thousands of requests a second if coded incorrectly, you could end up costing the website owner a lot of money and possibly bring down their site (see Denial-of-service attack (DoS)).
With this in mind, we want to be very careful with how we program scrapers to avoid crashing sites and causing damage. Every time we scrape a website we want to attempt to make only one request per page. We don't want to be making a request every time our parsing or other logic doesn't work out, so we need to parse only after we've saved the page locally.
If I'm just doing some quick tests, I'll usually start out in a Jupyter notebook because you can request a web page in one cell and have that web page available to every cell below it without making a new request. Since this article is available as a Jupyter notebook, you will see how it works if you choose that format.
After we make a request and retrieve a web page's content, we can store that content locally with Python's `open()` function. To do so we need to use the argument `wb`, which stands for 'write bytes'. This lets us avoid any encoding issues when saving.

Below is a function that wraps the `open()` function to reduce a lot of repetitive coding later on:
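A minimal version of that helper might look like this (the function name `save_html` is my choice here):

```python
def save_html(html, path):
    """Write raw HTML bytes to a local file so we only have to request a page once."""
    with open(path, 'wb') as f:
        f.write(html)
```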
Assume we have captured the HTML from google.com in `html`, which you'll see how to do later. After running this function we will now have a file in the same directory as this notebook called `google_com` that contains the HTML.
To retrieve our saved file we'll make another function to wrap reading the HTML back into `html`. We need to use `rb` for 'read bytes' in this case.
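A matching reader could look like this (again, `open_html` is just an illustrative name):

```python
def open_html(path):
    """Read previously saved HTML bytes back from disk."""
    with open(path, 'rb') as f:
        return f.read()

html = open_html('google_com')
```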
The open function is doing just the opposite here: it reads the HTML back out of `google_com`. If our script fails, the notebook closes, the computer shuts down, etc., we no longer need to request Google again, lessening our impact on their servers. While it doesn't matter much with Google since they have a lot of resources, smaller sites with smaller servers will benefit from this.
I save almost every page and parse later when web scraping as a safety precaution.
Each site usually has a robots.txt on the root of their domain. This is where the website owner explicitly states what bots are allowed to do on their site. Simply go to example.com/robots.txt and you should find a text file that looks something like this:
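For example, a robots.txt following the rules discussed below might look like this (a reconstructed example, not copied from any particular site):

```
User-agent: *
Crawl-delay: 10
Allow: /pages/
Disallow: /scripts/

# more user-agents and rules can follow...
```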
The User-agent field is the name of the bot and the rules that follow are what the bot should follow. Some robots.txt will have many User-agents with different rules. Common bots are googlebot, bingbot, and applebot, all of which you can probably guess the purpose and origin of.
We don't really need to provide a User-agent when scraping, so User-agent: * is what we would follow. A * means that the following rules apply to all bots (that's us).
The Crawl-delay tells us the number of seconds to wait before requests, so in this example we need to wait 10 seconds before making another request.
Allow gives us specific URLs we're allowed to request with bots, and vice versa for Disallow. In this example we're allowed to request anything in the /pages/ subfolder, which means anything that starts with example.com/pages/. On the other hand, we are disallowed from scraping anything from the /scripts/ subfolder.
Many times you'll see a * next to Allow or Disallow which means you are either allowed or not allowed to scrape everything on the site.
Sometimes there will be a disallow all pages followed by allowed pages like this:
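For instance, something like:

```
User-agent: *
Disallow: /
Allow: /pages/
```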
This means that you're not allowed to scrape anything except the subfolder /pages/. Essentially, you just want to read the rules in order where the next rule overrides the previous rule.
This project will primarily be run through a Jupyter notebook, which is done for teaching purposes and is not the usual way scrapers are programmed. After showing you the pieces, we'll put it all together into a Python script that can be run from command line or your IDE of choice.
With Python's `requests` (pip install requests) library we're getting a web page by using `get()` on the URL. The response `r` contains many things, but using `r.content` will give us the HTML. Once we have the HTML we can then parse it for the data we're interested in analyzing.
There's an interesting website called AllSides that has a media bias rating table where users can agree or disagree with the rating.
Since there's nothing in their robots.txt that disallows us from scraping this section of the site, I'm assuming it's okay to go ahead and extract this data for our project. Let's request this first page:
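A sketch of that request, reusing the `save_html` helper from earlier (the URL shown is the AllSides media bias ratings page; double-check it against the site):

```python
import requests

url = 'https://www.allsides.com/media-bias/media-bias-ratings'
r = requests.get(url)
html = r.content                    # raw bytes of the page

save_html(html, 'allsides_page')    # cache locally so we only request it once
print(html[:100])                   # quick sanity check on the page source
```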
Since we essentially have a giant string of HTML, we can print a slice of 100 characters to confirm we have the source of the page. Let's start extracting data.
We used `requests` to get the page from the AllSides server, but now we need the BeautifulSoup library (pip install beautifulsoup4) to parse the HTML and XML. When we pass our HTML to the BeautifulSoup constructor we get an object in return that we can then navigate like the original tree structure of the DOM.
This way we can find elements using names of tags, classes, IDs, and through relationships to other elements, like getting the children and siblings of elements.
We create a new BeautifulSoup object by passing the constructor our newly acquired HTML content and the type of parser we want to use:
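For example (`html.parser` ships with Python; the original notebook may use a different parser such as `lxml`):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, 'html.parser')
```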
This `soup` object defines a bunch of methods — many of which can achieve the same result — that we can use to extract data from the HTML. Let's start with finding elements.
To find elements and data inside our HTML we'll be using `select_one`, which returns a single element, and `select`, which returns a list of elements (even if only one item exists). Both of these methods use CSS selectors to find elements, so if you're rusty on how CSS selectors work, here's a quick refresher:
A CSS selector refresher
- To select elements by tag name, e.g. `<a></a>` or `<body></body>`, use the bare name of the tag: `select_one('a')` gets an anchor/link element, `select_one('body')` gets the body element.
- `.temp` gets an element with a class of temp. E.g. to get `<a class="temp"></a>` use `select_one('.temp')`.
- `#temp` gets an element with an id of temp. E.g. to get `<a id="temp"></a>` use `select_one('#temp')`.
- `.temp.example` gets an element with both the temp and example classes. E.g. to get `<a class="temp example"></a>` use `select_one('.temp.example')`.
- `.temp a` gets an anchor element nested inside a parent element with class temp. E.g. to get `<div class="temp"><a></a></div>` use `select_one('.temp a')`. Note the space between `.temp` and `a`.
- `.temp .example` gets an element with class example nested inside a parent element with class temp. E.g. to get `<div class="temp"><a class="example"></a></div>` use `select_one('.temp .example')`. Again, note the space between `.temp` and `.example`. The space tells the selector that the class after the space is a child of the class before the space.
- Ids, e.g. `<a id="one"></a>`, are unique, so you can usually use the id selector by itself to get the right element. There's no need for nested selectors when using ids.

There are many more selectors for doing various tasks, like selecting certain child elements, specific links, etc., that you can look up when needed. The selectors above get us pretty close to everything we would need for now.
Tips on figuring out how to select certain elements
Most browsers have a quick way of finding the selector for an element using their developer tools. In Chrome, we can quickly find a selector by right-clicking the element on the page and choosing Inspect, then right-clicking the highlighted node in the Elements panel and choosing Copy → Copy selector (or Copy XPath).
Sometimes it'll be a little off and we need to scan up a few elements to find the right one. Here's what it looks like to find the selector and Xpath, another type of selector, in Chrome:
Our data is housed in a table on AllSides, and by inspecting the header element we can find the code that renders the table and rows. What we need to do is `select` all the rows from the table and then parse out the information from each row.
Here's how to quickly find the table in the source code:
Simplifying the table's HTML, the structure looks like this (comments `<!-- -->` added by me):
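A rough sketch of that structure (simplified, with most attributes trimmed; the feedback cell is covered in more detail further down):

```html
<table>
  <tbody>
    <tr>  <!-- one row per news source -->
      <td class="views-field source-title">
        <a href="/news-source/abc-news">ABC News</a>  <!-- name + AllSides page link -->
      </td>
      <td class="views-field views-field-field-bias-image">
        <a href="/media-bias/left-center"><img ...></a>  <!-- bias rating -->
      </td>
      <td> ... </td>  <!-- remaining cells, including the community feedback counts -->
      <td> ... </td>
    </tr>
    <!-- more rows... -->
  </tbody>
</table>
```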
So to get each row, we just select all `<tr>` inside `<tbody>`:
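In code that's a one-liner:

```python
rows = soup.select('tbody tr')
```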
`tbody tr` tells the selector to extract all `<tr>` (table row) tags that are children of the `<tbody>` tag. If there were more than one table on this page we would have to make a more specific selector, but since this is the only table, we're good to go.
Now we have a list of HTML table rows, each of which contains four cells. Below is a breakdown of how to extract the data from each one.
The outlet name (ABC News) is the text of an anchor tag that's nested inside a `<td>` tag, which is a cell — or table data tag.
Getting the outlet name is pretty easy: just get the first row in `rows` and run a `select_one` off that object:
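Something like the following (variable names are mine):

```python
row = rows[0]

name = row.select_one('.source-title').text.strip()
print(name)
# ABC News
```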
The only class we needed to use in this case was `.source-title` since `.views-field` looks to be just a class each row is given for styling and doesn't provide any uniqueness.
Notice that we didn't need to worry about selecting the anchor tag `a` that contains the text. When we use `.text` it gets all the text in that element, and since 'ABC News' is the only text, that's all we need to do. Bear in mind that using `select` or `select_one` will give you the whole element with the tags included, so we need `.text` to give us the text between the tags.
`.strip()` ensures all the whitespace surrounding the name is removed. Many websites use whitespace as a way to visually pad the text inside elements, so using `strip()` is always a good idea.
You'll notice that we can run BeautifulSoup methods right off one of the rows. That's because the rows become their own BeautifulSoup objects when we make a select from another BeautifulSoup object. On the other hand, our `name` variable is no longer a BeautifulSoup object because we called `.text`.
We also need the link to this news source's page on AllSides. If we look back at the HTML we'll see that in this case we do want to select the anchor in order to get the `href` that contains the link, so let's do that:
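A sketch, assuming the anchor sits inside the same `.source-title` cell:

```python
allsides_page = 'https://www.allsides.com' + row.select_one('.source-title a')['href']
```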
It is a relative path in the HTML, so we prepend the site's URL to make it a link we can request later.
Getting the link was a bit different than just selecting an element. We had to access an attribute (`href`) of the element, which is done using brackets, like how we would access a Python dictionary. This will be the same for other attributes of elements, like `src` in images and videos.
We can see that the rating is displayed as an image, so how can we get the rating in words? Looking at the HTML, notice that the link surrounding the image has the text we need:
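Simplified, the cell looks something like this (attributes trimmed):

```html
<td class="views-field views-field-field-bias-image">
  <a href="/media-bias/left-center">
    <img src="..." alt="..." />
  </a>
</td>
```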
We could also pull the `alt` attribute, but the link looks easier. Let's grab it:
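Putting that together:

```python
bias = row.select_one('.views-field-field-bias-image a')['href'].split('/')[-1]
print(bias)
# left-center
```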
Here we selected the anchor tag by using the class name and tag together: `.views-field-field-bias-image` is the class of the `<td>` and `a` is for the anchor nested inside.
After that we extract the `href` just like before, but now we only want the last part of the URL for the name of the bias, so we split on slashes and get the last element of that split (left-center).
The last thing to scrape is the agree/disagree ratio from the community feedback area. The HTML of this cell is pretty convoluted due to the styling, but here's the basic structure:
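Stripped of styling it looks roughly like this (the span class names here are placeholders; check the real markup in your inspector):

```html
<td class="views-field">
  <div> ... </div>
  <div>
    <span class="agree">8355</span>
    <span class="disagree">6629</span>
  </div>
</td>
```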
The numbers we want are located in two `span` elements in the last `div`. Both `span` elements have classes that are unique in this cell, so we can use them to make the selection:
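A sketch of the selection (`.agree` and `.disagree` are the placeholder class names from the sketch above; substitute whatever the real cell uses):

```python
agree = int(row.select_one('.agree').text)
disagree = int(row.select_one('.disagree').text)

agree_ratio = agree / disagree
print(f"Agree ratio: {agree_ratio:.2f}")
```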
Using `.text` will return a string, so we need to convert them to integers in order to calculate the ratio.
Side note: If you've never seen this way of formatting print statements in Python, the `f` at the front allows us to insert variables right into the string using curly braces. The `:.2f` is a way to format floats to only show two decimal places.
If you look at the page in your browser you'll notice that they say how much the community is in agreement by using 'somewhat agree', 'strongly agree', etc. so how do we get that? If we try to select it:
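For example (the selector here is a stand-in for whatever class the agreement-text element actually uses):

```python
# Whatever selector we try for the 'somewhat agrees' text, it comes back empty:
print(row.select_one('.agreeance-text'))
# None
```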
It shows up as None because this element is rendered with Javascript and `requests` can't pull HTML rendered with Javascript. We'll be looking at how to get data rendered with JS in a later article, but since this is the only piece of information that's rendered this way, we can manually recreate the text.
To find the JS files they're using, just CTRL+F for '.js' in the page source and open the files in a new tab to look for that logic.
It turned out the logic was located in the eleventh JS file and they have a function that calculates the text and color with these parameters:
Range | Agreeance |
---|---|
$ratio > 3$ | absolutely agrees |
$2 < ratio \leq 3$ | strongly agrees |
$1.5 < ratio \leq 2$ | agrees |
$1 < ratio \leq 1.5$ | somewhat agrees |
$ratio = 1$ | neutral |
$0.67 < ratio < 1$ | somewhat disagrees |
$0.5 < ratio \leq 0.67$ | disagrees |
$0.33 < ratio \leq 0.5$ | strongly disagrees |
$ratio \leq 0.33$ | absolutely disagrees |
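Here's one way to recreate that logic in Python (a sketch based on the table above, not AllSides' actual code):

```python
def get_agreeance_text(ratio):
    """Map an agree/disagree ratio to the community feedback wording."""
    if ratio > 3:
        return 'absolutely agrees'
    elif 2 < ratio <= 3:
        return 'strongly agrees'
    elif 1.5 < ratio <= 2:
        return 'agrees'
    elif 1 < ratio <= 1.5:
        return 'somewhat agrees'
    elif ratio == 1:
        return 'neutral'
    elif 0.67 < ratio < 1:
        return 'somewhat disagrees'
    elif 0.5 < ratio <= 0.67:
        return 'disagrees'
    elif 0.33 < ratio <= 0.5:
        return 'strongly disagrees'
    else:
        return 'absolutely disagrees'
```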
Now that we have the general logic for a single row and we can generate the agreeance text, let's create a loop that gets data from every row on the first page:
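A sketch of that loop, using the selectors from above and the agreeance function we just wrote:

```python
data = []

for row in rows:
    d = {}

    d['name'] = row.select_one('.source-title').text.strip()
    d['allsides_page'] = 'https://www.allsides.com' + row.select_one('.source-title a')['href']
    d['bias'] = row.select_one('.views-field-field-bias-image a')['href'].split('/')[-1]
    d['agree'] = int(row.select_one('.agree').text)
    d['disagree'] = int(row.select_one('.disagree').text)
    d['agree_ratio'] = d['agree'] / d['disagree']
    d['agreeance_text'] = get_agreeance_text(d['agree_ratio'])

    data.append(d)
```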
In the loop we can combine any multi-step extractions into one to create the values in the least number of steps.
Our `data` list now contains a dictionary containing key information for every row.
Keep in mind that this is still only the first page. The list on AllSides is three pages long as of this writing, so we need to modify this loop to get the other pages.
Notice that the URLs for each page follow a pattern. The first page has no parameters on the URL, but the next pages do; specifically, they attach a `?page=#` to the URL where '#' is the page number.
Right now, the easiest way to get all pages is just to manually make a list of these three pages and loop over them. If we were working on a project with thousands of pages we might build a more automated way of constructing/finding the next URLs, but for now this works.
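For example (the first page has no parameter; check the exact page numbers AllSides uses in your browser):

```python
pages = [
    'https://www.allsides.com/media-bias/media-bias-ratings',
    'https://www.allsides.com/media-bias/media-bias-ratings?page=1',
    'https://www.allsides.com/media-bias/media-bias-ratings?page=2',
]
```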
According to AllSides' robots.txt we need to make sure we wait ten seconds before each request.
Our loop will:

- request each of the three pages,
- wait ten seconds between requests to respect the Crawl-delay,
- parse the rows of the table on each page just like we did above,
- and append each row's data as a dictionary to our `data` list.
Remember, we've already tested our parsing above on a page that was cached locally so we know it works. You'll want to make sure to do this before making a loop that performs requests to prevent having to reloop if you forgot to parse something.
By combining all the steps we've done up to this point and adding a loop over pages, here's how it looks:
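A sketch of the combined scraper, using the helpers, selectors, and `pages` list from above:

```python
from time import sleep

import requests
from bs4 import BeautifulSoup

data = []

for page in pages:
    r = requests.get(page)
    soup = BeautifulSoup(r.content, 'html.parser')
    rows = soup.select('tbody tr')

    for row in rows:
        d = {
            'name': row.select_one('.source-title').text.strip(),
            'allsides_page': 'https://www.allsides.com' + row.select_one('.source-title a')['href'],
            'bias': row.select_one('.views-field-field-bias-image a')['href'].split('/')[-1],
            'agree': int(row.select_one('.agree').text),
            'disagree': int(row.select_one('.disagree').text),
        }
        d['agree_ratio'] = d['agree'] / d['disagree']
        d['agreeance_text'] = get_agreeance_text(d['agree_ratio'])
        data.append(d)

    sleep(10)  # respect the ten-second Crawl-delay from robots.txt
```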
Now we have a list of dictionaries for each row on all three pages.
To cap it off, we want to get the real URL to the news source, not just the link to their presence on AllSides. To do this, we will need to get the AllSides page and look for the link.
If we go to ABC News' page there's a row of external links to Facebook, Twitter, Wikipedia, and the ABC News website. The HTML for that section looks like this:
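Simplified, it looks something like this (only the `www` class is the one we care about; the other class names are placeholders):

```html
<div>  <!-- row of external links -->
  <a class="facebook" href="...">...</a>
  <a class="twitter" href="...">...</a>
  <a class="wikipedia" href="...">...</a>
  <a class="www" href="https://abcnews.go.com">...</a>  <!-- the news source's own site -->
</div>
```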
Notice the anchor tag (`<a>`) that contains the link to ABC News has a class of 'www'. Pretty easy to get with what we've already learned:
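A sketch, assuming ABC News is the first entry in our `data` list:

```python
r = requests.get(data[0]['allsides_page'])
soup = BeautifulSoup(r.content, 'html.parser')

website_link = soup.select_one('a.www')
if website_link:                      # not every source page has this link
    print(website_link['href'])
```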
So let's make another loop to request each AllSides page and get the link for each news source. Unfortunately, some pages don't have a link to the news source in this grey bar, which brings up a good point: always account for the possibility that an element simply isn't there.

Up until now we've assumed elements exist in the tables we scraped, but it's always a good idea to program scrapers in a way that they don't break when an element goes missing.
Using `select_one` or `select` will return None or an empty list, respectively, if nothing is found, so in this loop we'll check whether we actually found the website element before trying to access its `href` attribute, which would otherwise throw an exception.
Finally, since there are 265 news source pages and the wait time between pages is 10 seconds, it's going to take ~44 minutes to do this. Instead of blindly not knowing our progress, let's use the `tqdm` library (pip install tqdm) to give us a nice progress bar:
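A sketch of that loop with the progress bar (it just wraps the `data` list we already built):

```python
from time import sleep

from tqdm import tqdm_notebook

for d in tqdm_notebook(data):
    r = requests.get(d['allsides_page'])
    soup = BeautifulSoup(r.content, 'html.parser')

    website_link = soup.select_one('a.www')
    if website_link is not None:     # some pages don't have a website link
        d['website'] = website_link['href']

    sleep(10)  # respect the Crawl-delay
```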
`tqdm` is a little weird at first, but essentially `tqdm_notebook` is just wrapping around our data list to produce a progress bar. We are still able to access each dictionary, `d`, just as we would normally. Note that `tqdm_notebook` is only for Jupyter notebooks. In regular editors you'll just `from tqdm import tqdm` and use `tqdm` instead.
So what do we have now? At this moment, `data` is a list of dictionaries, each of which contains all the data from the tables as well as the website from each individual news source's page on AllSides.
The first thing we'll want to do now is save that data to a file so we don't have to make those requests again. We'll be storing the data as JSON since it's already in that form anyway:
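For example:

```python
import json

with open('allsides.json', 'w') as f:
    json.dump(data, f)
```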
If you're not familiar with JSON, just quickly open `allsides.json` in an editor and see what it looks like. It should look almost exactly like what `data` looks like if we print it in Python: a list of dictionaries.
Before ending this article I think it would be worthwhile to actually see what's interesting about this data we just retrieved. So, let's answer a couple of questions.
Which ratings for outlets does the community *absolutely agree* on?
To find where the community absolutely agrees we can do a simple list comprehension that checks each `dict` for the agreeance text we want:
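Something like this (the exact string formatting is a matter of taste):

```python
absolutely_agree = [d for d in data if d['agreeance_text'] == 'absolutely agrees']

for d in absolutely_agree:
    print(f"{d['name']:<25} {d['bias']}")
```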
Using some string formatting we can make it look somewhat tabular. Interestingly, C-SPAN is the only center bias that the community absolutely agrees on. The others for left and right aren't that surprising.
Which ratings for outlets does the community *absolutely disagree* on?
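Same idea, different filter:

```python
absolutely_disagree = [d for d in data if d['agreeance_text'] == 'absolutely disagrees']

for d in absolutely_disagree:
    print(f"{d['name']:<25} {d['bias']}")
```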
To make analysis a little easier, we can also load our JSON data into a Pandas DataFrame as well. This is easy with Pandas since they have a simple function for reading JSON into a DataFrame.
As an aside, if you've never used Pandas (pip install pandas), Matplotlib (pip install matplotlib), or any of the other data science libraries, I would definitely recommend checking out Jose Portilla's data science course for a great intro to these tools and many machine learning concepts.
Now to the DataFrame:
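One way to load it that matches the tables below (setting the outlet name as the index):

```python
import pandas as pd

df = pd.read_json('allsides.json')
df = df.set_index('name').sort_index()

df.head()
```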
agree | agree_ratio | agreeance_text | allsides_page | bias | disagree | |
---|---|---|---|---|---|---|
name | ||||||
ABC News | 8355 | 1.260371 | somewhat agrees | https://www.allsides.com/news-source/abc-news-... | left-center | 6629 |
Al Jazeera | 1996 | 0.694986 | somewhat disagrees | https://www.allsides.com/news-source/al-jazeer... | center | 2872 |
AllSides | 2615 | 2.485741 | strongly agrees | https://www.allsides.com/news-source/allsides-0 | allsides | 1052 |
AllSides Community | 1760 | 1.668246 | agrees | https://www.allsides.com/news-source/allsides-... | allsides | 1055 |
AlterNet | 1226 | 2.181495 | strongly agrees | https://www.allsides.com/news-source/alternet | left | 562 |
agree | agree_ratio | agreeance_text | allsides_page | bias | disagree | |
---|---|---|---|---|---|---|
name | ||||||
CNBC | 1239 | 0.398905 | strongly disagrees | https://www.allsides.com/news-source/cnbc | center | 3106 |
Quillette | 45 | 0.416667 | strongly disagrees | https://www.allsides.com/news-source/quillette... | right-center | 108 |
The Courier-Journal | 64 | 0.410256 | strongly disagrees | https://www.allsides.com/news-source/courier-j... | left-center | 156 |
The Economist | 779 | 0.485964 | strongly disagrees | https://www.allsides.com/news-source/economist | left-center | 1603 |
The Observer (New York) | 123 | 0.484252 | strongly disagrees | https://www.allsides.com/news-source/observer | center | 254 |
The Oracle | 33 | 0.485294 | strongly disagrees | https://www.allsides.com/news-source/oracle | center | 68 |
The Republican | 108 | 0.392727 | strongly disagrees | https://www.allsides.com/news-source/republican | center | 275 |
It looks like much of the community disagrees strongly with certain outlets being rated with a 'center' bias.
Let's make a quick visualization of agreeance. Since there are too many news sources to plot comfortably, let's pull only those with the most votes. To do that, we can make a new column that counts the total votes and then sort by that value:
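A sketch:

```python
df['total_votes'] = df['agree'] + df['disagree']
df = df.sort_values('total_votes', ascending=False)

df.head(10)
```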
agree | agree_ratio | agreeance_text | allsides_page | bias | disagree | total_votes | |
---|---|---|---|---|---|---|---|
name | |||||||
CNN (Web News) | 22907 | 0.970553 | somewhat disagrees | https://www.allsides.com/news-source/cnn-media... | left-center | 23602 | 46509 |
Fox News | 17410 | 0.650598 | disagrees | https://www.allsides.com/news-source/fox-news-... | right-center | 26760 | 44170 |
Washington Post | 21434 | 1.682022 | agrees | https://www.allsides.com/news-source/washingto... | left-center | 12743 | 34177 |
New York Times - News | 12275 | 0.570002 | disagrees | https://www.allsides.com/news-source/new-york-... | left-center | 21535 | 33810 |
HuffPost | 15056 | 0.834127 | somewhat disagrees | https://www.allsides.com/news-source/huffpost-... | left | 18050 | 33106 |
Politico | 11047 | 0.598656 | disagrees | https://www.allsides.com/news-source/politico-... | left-center | 18453 | 29500 |
Washington Times | 18934 | 2.017475 | strongly agrees | https://www.allsides.com/news-source/washingto... | right-center | 9385 | 28319 |
NPR News | 15751 | 1.481889 | somewhat agrees | https://www.allsides.com/news-source/npr-media... | center | 10629 | 26380 |
Wall Street Journal - News | 9872 | 0.627033 | disagrees | https://www.allsides.com/news-source/wall-stre... | center | 15744 | 25616 |
Townhall | 7632 | 0.606967 | disagrees | https://www.allsides.com/news-source/townhall-... | right | 12574 | 20206 |
To make a bar plot we'll use Matplotlib with Seaborn's dark grid style:
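For example:

```python
import matplotlib.pyplot as plt
import seaborn as sns

sns.set_style('darkgrid')
```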
As mentioned above, we have too many news outlets to plot comfortably, so just make a copy of the top 25 and place it in a new `df2` variable:
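Since `df` is already sorted by total votes, this is just:

```python
df2 = df.head(25).copy()

df2.head()
```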
agree | agree_ratio | agreeance_text | allsides_page | bias | disagree | total_votes | |
---|---|---|---|---|---|---|---|
name | |||||||
CNN (Web News) | 22907 | 0.970553 | somewhat disagrees | https://www.allsides.com/news-source/cnn-media... | left-center | 23602 | 46509 |
Fox News | 17410 | 0.650598 | disagrees | https://www.allsides.com/news-source/fox-news-... | right-center | 26760 | 44170 |
Washington Post | 21434 | 1.682022 | agrees | https://www.allsides.com/news-source/washingto... | left-center | 12743 | 34177 |
New York Times - News | 12275 | 0.570002 | disagrees | https://www.allsides.com/news-source/new-york-... | left-center | 21535 | 33810 |
HuffPost | 15056 | 0.834127 | somewhat disagrees | https://www.allsides.com/news-source/huffpost-... | left | 18050 | 33106 |
With the top 25 news sources by amount of feedback, let's create a stacked bar chart where the number of agrees are stacked on top of the number of disagrees. This makes the total height of the bar the total amount of feedback.
Below, we first create a figure and axes, plot the agree bars, plot the disagree bars on top of the agrees using `bottom`, then set various text features:
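A sketch of that plot (figure size, colors, and labels are my choices here):

```python
fig, ax = plt.subplots(figsize=(12, 8))

# agree bars on the bottom, disagree bars stacked on top of them
ax.bar(df2.index, df2['agree'], label='agree')
ax.bar(df2.index, df2['disagree'], bottom=df2['agree'], label='disagree')

ax.set_title('Community feedback on AllSides bias ratings (top 25 outlets)')
ax.set_ylabel('Total feedback votes')
ax.legend()

plt.xticks(rotation=90)
plt.tight_layout()
plt.show()
```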
For a slightly more complex version, let's make a subplot for each bias and plot the respective news sources.
This time we'll make a new copy of the original DataFrame beforehand since we can plot more news outlets now.
Instead of making one axes, we'll create a new one for each bias to make six total subplots:
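A sketch of those subplots (here I plot the ten most-voted outlets per bias; tweak to taste):

```python
df3 = df.copy()   # fresh copy of the full DataFrame

fig, axes = plt.subplots(2, 3, figsize=(16, 10))

for ax, bias in zip(axes.flatten(), df3['bias'].unique()):
    # top outlets (by total votes) for this bias rating
    subset = df3[df3['bias'] == bias].nlargest(10, 'total_votes')

    ax.bar(subset.index, subset['agree'], label='agree')
    ax.bar(subset.index, subset['disagree'], bottom=subset['agree'], label='disagree')
    ax.set_title(bias)
    ax.tick_params(axis='x', labelrotation=90)

axes[0, 0].legend()
plt.tight_layout()
plt.show()
```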
Hopefully the comments help with how these plots were created. We're just looping through each unique bias and adding a subplot to the figure.
When interpreting these plots keep in mind that the y-axis has different scales for each subplot. Overall it's a nice way to see which outlets have a lot of votes and where the most disagreement is. This is what makes scraping so much fun!
We have the tools to make some fairly complex web scrapers now, but there's still the issue with Javascript rendering. This is something that deserves its own article, but for now we can do quite a lot.
There's also some project organization that needs to occur when making this into a more easily runnable program. We need to pull it out of this notebook and code in command-line arguments if we plan to run it often for updates.
These sorts of things will be addressed later when we build more complex scrapers, but feel free to let me know in the comments of anything in particular you're interested in learning about.
Web Scraping with Python: Collecting More Data from the Modern Web — Book on Amazon
Jose Portilla's Data Science and ML Bootcamp — Course on Udemy
Easiest way to get started with Data Science. Covers Pandas, Matplotlib, Seaborn, Scikit-learn, and a lot of other useful topics.
Web scraping or crawling is the process of fetching data from a third-party website by downloading and parsing the HTML code to extract the data you want.
"But you should use an API for this!"

However, not every website offers an API, and APIs don't always expose every piece of information you need. So, scraping is often the only solution for extracting a website's data.

There are many use cases for web scraping, from price monitoring and lead generation to news aggregation and market research.
The main problem is that most websites do not want to be scraped. They only want to serve content to real users using real web browsers (except Google - they all want to be scraped by Google).
So, when you scrape, you do not want to be recognized as a robot. There are two main ways to seem human: use human tools and emulate human behavior.
This post will guide you through all the tools websites use to block you and all the ways you can successfully overcome these obstacles.
When you open your browser and go to a webpage, it almost always means that you ask an HTTP server for some content. One of the easiest ways to pull content from an HTTP server is to use a classic command-line tool such as cURL.
The thing is, if you just do `curl www.google.com`, Google has many ways to know that you are not a human (for example, by looking at the headers). Headers are small pieces of information that go with every HTTP request that hits the servers. One of those pieces of information precisely describes the client making the request: this is the infamous "User-Agent" header. Just by looking at the "User-Agent" header, Google knows that you are using cURL. If you want to learn more about headers, the Wikipedia page is great. As an experiment, just go over here. This webpage simply displays the header information of your request.

Headers are easy to alter with cURL, and copying the User-Agent header of a legit browser could do the trick. In the real world, you'd need to set more than one header, but it is not difficult to artificially forge an HTTP request with cURL or any library to make it look exactly like a request made with a browser. Everybody knows this. So, to determine if you are using a real browser, websites will check for something that cURL and HTTP libraries cannot do: executing Javascript code.
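Forging that header with cURL's -H flag looks like this (the User-Agent string is just an example of a desktop Chrome UA):

```bash
curl -H "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36" https://www.google.com
```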
The concept is simple: the website embeds a Javascript snippet in its webpage that, once executed, will "unlock" the webpage. If you're using a real browser, you won't notice the difference. If you're not, you'll receive an HTML page with some obscure Javascript code in it:
Once again, this solution is not completely bulletproof, mainly because it is now very easy to execute Javascript outside of a browser with Node.js. However, the web has evolved and there are other tricks to determine if you are using a real browser.
Trying to execute Javascript snippets on the side with Node.js is difficult and not robust. And more importantly, as soon as the website has a more complicated check system or is a big single-page application, cURL and pseudo-JS execution with Node.js become useless. So the best way to look like a real browser is to actually use one.
Headless Browsers will behave like a real browser except that you will easily be able to programmatically use them. The most popular is Chrome Headless, a Chrome option that behaves like Chrome without all of the user interface wrapping it.
The easiest way to use Headless Chrome is by calling a driver that wraps all its functionality into an easy API. Selenium, Playwright, and Puppeteer are the three most famous solutions.
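For instance, with Selenium in Python (a minimal sketch; it assumes a compatible ChromeDriver is available on your machine):

```python
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument('--headless')       # run Chrome without a visible window

driver = webdriver.Chrome(options=options)
driver.get('https://www.example.com')
html = driver.page_source                # HTML after Javascript has executed
driver.quit()
```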
However, it will not be enough as websites now have tools that detect headless browsers. This arms race has been going on for a long time.
While these solutions can be easy to do on your local computer, it can be trickier to make this work at scale.
Managing lots of Chrome headless instances is one of the many problems we solve at ScrapingBee.
Everyone, especially front-end devs, knows that every browser behaves differently. Sometimes it's about rendering CSS, sometimes Javascript, and sometimes just internal properties. Most of these differences are well-known, and it is now possible to detect if a browser is actually who it pretends to be. In other words, the website asks: "do all of the browser properties and behaviors match what I know about the User-Agent sent by this browser?"
This is why there is an everlasting arms race between web scrapers who want to pass themselves as a real browser and websites who want to distinguish headless from the rest.
However, in this arms race, web scrapers tend to have a big advantage. Here is why:
Most of the time, when Javascript code tries to detect whether it's being run in headless mode, it is malware trying to evade behavioral fingerprinting. This means that the Javascript will behave nicely inside a scanning environment and badly inside real browsers. And this is why the team behind Chrome's headless mode is trying to make it indistinguishable from a real user's web browser, in order to stop malware from doing that. Web scrapers can profit from this effort.

Another thing to know is that while running 20 cURL processes in parallel is trivial, and Chrome Headless is relatively easy to use for small use cases, it can be tricky to put at scale. Because it uses lots of RAM, managing more than 20 instances of it is a challenge.
If you want to learn more about browser fingerprinting I suggest you take a look at Antoine Vastel's blog, which is entirely dedicated to this subject.
That's about all you need to know about how to pretend like you are using a real browser. Let's now take a look at how to behave like a real human.
TLS stands for Transport Layer Security and is the successor of SSL, which is basically what the "S" in HTTPS stood for.
This protocol ensures privacy and data integrity between two or more communicating computer applications (in our case, a web browser or a script and an HTTP server).
Similar to browser fingerprinting the goal of TLS fingerprinting is to uniquely identify users based on the way they use TLS.
How this protocol works can be split into two big parts.
First, when the client connects to the server, a TLS handshake happens. During this handshake, many requests are sent between the two to ensure that everyone is actually who they claim to be.
Then, if the handshake has been successful the protocol describes how the client and the server should encrypt and decrypt the data in a secure way. If you want a detailed explanation, check out this great introduction by Cloudflare.
Most of the data points used to build the fingerprint come from the TLS handshake, and if you want to see what a TLS fingerprint looks like, you can go visit this awesome online database.
On this website, you can see that the most used fingerprint last week was used 22.19% of the time (at the time of writing this article).
This number is very big and at least two orders of magnitude higher than the most common browser fingerprint. It actually makes sense as a TLS fingerprint is computed using way fewer parameters than a browser fingerprint.
Those parameters are, amongst others, the TLS version, the cipher suites the client supports, and the extensions and elliptic curves it advertises.
If you wish to know what your TLS fingerprint is, I suggest you visit this website.
Ideally, in order to increase your stealth when scraping the web, you should be changing your TLS parameters. However, this is harder than it looks.
Firstly, because there are not that many TLS fingerprints out there, simply randomizing those parameters won't work. Your fingerprint will be so rare that it will be instantly flagged as fake.
Secondly, TLS parameters are low-level stuff that rely heavily on system dependencies. So, changing them is not straight-forward.
For example, the famous Python `requests` module doesn't support changing the TLS fingerprint out of the box. Here are a few resources for changing your TLS version and cipher suite in your favorite language:
Keep in mind that most of these libraries rely on the SSL and TLS implementation of your system. OpenSSL is the most widely used, and you might need to change its version in order to completely alter your fingerprint.
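As an illustration of how low-level this gets, here is a hedged sketch of changing the cipher suites that `requests` offers by mounting a custom transport adapter (this tweaks only one part of the fingerprint, and the exact cipher strings depend on your OpenSSL build):

```python
import ssl

import requests
from requests.adapters import HTTPAdapter


class CipherAdapter(HTTPAdapter):
    """Transport adapter that restricts which TLS cipher suites are offered."""

    def __init__(self, ciphers, **kwargs):
        self._ciphers = ciphers
        super().__init__(**kwargs)

    def init_poolmanager(self, *args, **kwargs):
        ctx = ssl.create_default_context()
        ctx.set_ciphers(self._ciphers)
        kwargs['ssl_context'] = ctx
        return super().init_poolmanager(*args, **kwargs)


session = requests.Session()
session.mount('https://', CipherAdapter('ECDHE+AESGCM:ECDHE+CHACHA20'))
r = session.get('https://www.howsmyssl.com/a/check')   # reports the TLS details it sees
print(r.json())
```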
A human using a real browser will rarely request 20 pages per second from the same website. So if you want to request a lot of pages from the same website, you have to trick the website into thinking that all those requests come from different places in the world, i.e. different IP addresses. In other words, you need to use proxies.
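With `requests`, routing traffic through a proxy is just a parameter (the proxy address and credentials below are placeholders):

```python
import requests

proxies = {
    'http': 'http://user:password@proxy.example.com:3128',
    'https': 'http://user:password@proxy.example.com:3128',
}

r = requests.get('https://www.example.com', proxies=proxies)
```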
Proxies are not very expensive: ~$1 per IP. However, if you need to do more than ~10k requests per day on the same website, costs can go up quickly, with hundreds of addresses needed. One thing to consider is that proxy IPs need to be constantly monitored in order to discard the ones that stop working and replace them.
There are several proxy solutions on the market, here are the most used rotating proxy providers: Luminati Network, Blazing SEO and SmartProxy.
There are also a lot of free proxy lists, but I don't recommend using them because they are often slow and unreliable, and websites offering these lists are not always transparent about where the proxies are located. Free proxy lists are usually public, and therefore their IPs will be automatically banned by most websites. Proxy quality is important: anti-crawling services are known to maintain internal lists of proxy IPs, so any traffic coming from those IPs will be blocked. Be careful to choose proxies with a good reputation. This is why I recommend using a paid proxy network or building your own.

Another proxy type you could look into is mobile, 3G, and 4G proxies. These are helpful for scraping hard-to-scrape, mobile-first websites, like social media.

To build your own proxy you could take a look at Scrapoxy, a great open-source API that lets you build a proxy API on top of different cloud providers. Scrapoxy creates a proxy pool by spinning up instances on various cloud providers (AWS, OVH, Digital Ocean). You can then configure your client to use the Scrapoxy URL as its main proxy, and Scrapoxy will automatically assign a proxy from the pool. Scrapoxy is easily customizable to fit your needs (rate limit, blacklist, etc.), but it can be a little tedious to put in place.
You could also use the TOR network, aka, The Onion Router. It is a worldwide computer network designed to route traffic through many different servers to hide its origin. TOR usage makes network surveillance/traffic analysis very difficult. There are a lot of use cases for TOR usage, such as privacy, freedom of speech, journalists in a dictatorship regime, and of course, illegal activities. In the context of web scraping, TOR can hide your IP address, and change your bot’s IP address every 10 minutes. The TOR exit nodes IP addresses are public. Some websites block TOR traffic using a simple rule: if the server receives a request from one of the TOR public exit nodes, it will block it. That’s why in many cases, TOR won’t help you, compared to classic proxies. It's worth noting that traffic through TOR is also inherently much slower because of the multiple routing.
Sometimes proxies will not be enough. Some websites systematically ask you to confirm that you are a human with so-called CAPTCHAs. Most of the time CAPTCHAs are only displayed to suspicious IPs, so switching proxies will work in those cases. For the other cases, you'll need to use a CAPTCHA-solving service (2Captcha and Death by Captcha come to mind).

While some CAPTCHAs can be automatically resolved with optical character recognition (OCR), the most recent ones have to be solved by hand.
If you use the aforementioned services, on the other side of the API call you'll have hundreds of people resolving CAPTCHAs for as low as 20ct an hour.
But then again, even if you solve CAPTCHAs or switch proxies as soon as you see one, websites can still detect your data extraction process.

Another advanced tool used by websites to detect scraping is pattern recognition. So if you plan to scrape every ID from 1 to 10,000 at URLs like www.example.com/product/1, www.example.com/product/2, and so on, try not to request them sequentially at a constant rate; randomizing the order and the delays makes the pattern much harder to spot.

Some websites also keep statistics on browser fingerprints per endpoint. This means that if you don't vary some parameters in your headless browser while hammering a single endpoint, they might block you.

Websites also tend to monitor the origin of traffic, so if you want to scrape a Brazilian website, try not to do it with proxies in Vietnam.
But from experience, I can tell you that rate is the most important factor in “Request Pattern Recognition”, so the slower you scrape, the less chance you have of being discovered.
Sometimes, the server expects the client to be a machine. In these cases, hiding yourself is way easier.

Basically, this "trick" comes down to two things: figuring out which network requests the page (or app) makes behind the scenes, and then replaying those requests directly with your own HTTP client.
For example, let's say that I want to get all the comments of a famous social network. I notice that when I click on the “load more comments” button, this happens in my inspector:
Notice that we filter out every request except "XHR" ones to avoid noise.

When we look at which request is being made and what response we get back... bingo!

Now if we look at the "Headers" tab we should have everything we need to replay this request and understand the value of each parameter. This will allow us to make the request from a simple HTTP client.
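For example, once the endpoint and its parameters are understood, replaying the call from Python is straightforward (the URL, parameters, and headers below are entirely made up for illustration):

```python
import requests

params = {'post_id': 1234, 'offset': 20, 'limit': 10}          # hypothetical parameters
headers = {'User-Agent': 'Mozilla/5.0', 'X-Requested-With': 'XMLHttpRequest'}

r = requests.get('https://social.example.com/api/comments', params=params, headers=headers)
comments = r.json()
```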
The hardest part of this process is understanding the role of each parameter in the request. Know that you can right-click on any request in the Chrome dev tools inspector, export it in HAR format, and then import it into your favorite HTTP client (I love Paw and Postman).
This will allow you to have all the parameters of a working request laid out and will make your experimentation much faster and fun.
The same principles apply when it comes to reverse engineering a mobile app: you will want to intercept the requests your mobile app makes to the server and replay them with your code.

Doing this is hard for two reasons: intercepting the traffic requires setting up a man-in-the-middle proxy between the app and the server, and mobile apps can sign their requests or attach extra "secret" parameters that are hard to reproduce.
For example, when Pokemon Go was released a few years ago, tons of people cheated the game after reverse-engineering the requests the mobile app made.
What they did not know was that the mobile app was sending a “secret” parameter that was not sent by the cheating script. It was easy for Niantic to then identify the cheaters. A few weeks after, a massive amount of players were banned for cheating.
Also, here is an interesting example about someone who reverse-engineered the Starbucks API.
Here is a recap of all the anti-bot techniques we saw in this article:
Anti-bot technique | Counter measure | Supported by ScrapingBee |
---|---|---|
Browser Fingerprinting | Headless browsers | ✅ |
IP-rate limiting | Rotating proxies | ✅ |
Banning Data center IPs | Residential IPs | ✅ |
TLS Fingerprinting | Forge and rotate TLS fingerprints | ✅ |
Captchas on suspicious activity | All of the above | ✅ |
Systematic Captchas | Captchas-solving tools and services | ❌ |
I hope that this overview will help you understand web-scraping and that you learned a lot reading this article.
We leverage everything I talked about in this post at ScrapingBee. Our web scraping API handles thousands of requests per second without ever being blocked. If you don’t want to lose too much time setting everything up, make sure to try ScrapingBee. The first 1k API calls are on us :).
We recently published a guide about the best web scraping tools on the market, don't hesitate to take a look!