How to write a Kinopoisk scraper in Python, with source code and sample files.
How it works
This parser collects information from the pages of the site that have "list" in their address. Simply put, it is a parser of filter pages and pagination pages. What a pun :). I won't hide the fact that it works quite slowly, because in order to access the pages of this site you need so-called dynamic parsing, that is, parsing pages that are first rendered and processed by a browser.
This is the first barrier that must be overcome before gaining access to the necessary pages. The second barrier on our way is the blocking of certain IP addresses from which parsing occurs. Therefore, appropriately selected proxies are used.
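To make the idea of dynamic parsing concrete, here is a minimal sketch (not the author's code): the page is opened in a headless browser behind a proxy, the JavaScript runs, and only then is the rendered HTML read. The proxy address is a placeholder and the URL is just an example of a "list" page.

from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")                        # render without a visible window
options.add_argument("--proxy-server=http://1.2.3.4:8080")    # placeholder proxy address

driver = webdriver.Chrome(options=options)
driver.get("https://www.kinopoisk.ru/lists/movies/top250/")   # an example "list" page
html = driver.page_source   # the HTML after the browser has rendered the page
driver.quit()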
A working parser of the kinopoisk site can be downloaded from here. All you have to do is activate the virtual environment and install the necessary packages.
How to make it: installing packages
We will write this parser in Python using third-party libraries and utilities. First, we create a virtual environment and activate it. You also need to download my proxy rotator.
If you are on Windows:
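For example, with the standard venv module (assuming the environment is named venv):

python -m venv venv
venv\Scripts\activate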
If you are on a *nix system:
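Again assuming the environment is named venv:

python3 -m venv venv
source venv/bin/activate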
Now it's time to install the necessary packages.
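The full list of requirements ships with the project; judging by the code in this article, at least Selenium and pandas are needed (plus openpyxl, which pandas uses to write .xlsx files):

pip install selenium pandas openpyxl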
Create a main.py file in the project directory. Our script will not be very large, just over 100 lines of code. Here is the complete source code:
Next, to make the code easier to understand, I broke it down into structural elements and functions. Let's start with the main run function. It starts the scraping, rotates the proxies, and decides when to stop.
It also creates a virtual browser and requests the necessary URLs.
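The real run function ships with the downloadable source; below is only a rough sketch of its structure under my own assumptions (BASE_URL, make_driver and the column names are illustrative, not necessarily the author's; get_page_number, scrape_kinopoisk_list and save_to_exel are the functions discussed below):

from selenium import webdriver

BASE_URL = "https://www.kinopoisk.ru/lists/movies/top250/?page={}"

def make_driver(proxy: str) -> webdriver.Chrome:
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    options.add_argument(f"--proxy-server={proxy}")
    return webdriver.Chrome(options=options)

def run(proxies: list[str]) -> None:
    data = {"title": [], "url": []}   # columns of the future table
    proxy_index, page, last_page = 0, 1, None
    while last_page is None or page <= last_page:
        driver = make_driver(proxies[proxy_index % len(proxies)])
        try:
            driver.get(BASE_URL.format(page))
            # an explicit wait for the movie cards goes here (see the next section)
            if last_page is None:
                last_page = get_page_number(driver)
            scrape_kinopoisk_list(driver, data)
            save_to_exel(data)        # save after every page, just in case
            page += 1
        except Exception:
            proxy_index += 1          # this proxy seems blocked, take the next one
        finally:
            driver.quit()
    save_to_exel(data)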
Next, Selenium needs something to hook onto while a page is loading, so that it starts acting as soon as the corresponding elements appear on the page. In the case of Kinopoisk, these are elements with the class base-movie-main-info_link__YwtP1. The traversal of all available pages is the loop shown in the run function above.
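As for the wait itself, here is a hedged sketch of the explicit wait that would sit at the top of each iteration of that loop (only the class name comes from the article; the timeout and the function name are my own):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

def wait_for_movie_cards(driver, timeout: int = 30) -> None:
    # Block until at least one movie link is present in the DOM.
    WebDriverWait(driver, timeout).until(
        EC.presence_of_all_elements_located(
            (By.CLASS_NAME, "base-movie-main-info_link__YwtP1")
        )
    )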
It is worth noting that I save the result to the table while parsing each page, as well as after going through all the pages. That way, even if the parser suddenly "crashes", we will still get the data collected so far.
The get_page_number function is just as simple: we find the necessary element and convert its text to a number.
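The real selector is in the downloadable source; a minimal sketch, assuming the last pagination link holds the total number of pages:

from selenium.webdriver.common.by import By

def get_page_number(driver) -> int:
    # ".paginator a" is a placeholder selector for the pagination links.
    last_page_link = driver.find_elements(By.CSS_SELECTOR, ".paginator a")[-1]
    return int(last_page_link.text)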
The save_to_exel function is even simpler. Since we have the pandas package installed, all this function does is create a DataFrame and write it out as a spreadsheet.
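A minimal sketch, assuming data is a dict of columns; the output file name is a placeholder:

import pandas as pd

def save_to_exel(data: dict) -> None:
    # data is expected to look like {"title": [...], "url": [...]}.
    df = pd.DataFrame(data)
    df.to_excel("kinopoisk.xlsx", index=False)  # the file name is my placeholder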
The scrape_kinopoisk_list function searches for the necessary elements on the page and fills the dictionary.
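Only the link class is confirmed by the article; which fields the real function extracts is defined in the downloadable source. Here is a sketch that grabs the title and the link of every movie card:

from selenium.webdriver.common.by import By

def scrape_kinopoisk_list(driver, data: dict) -> None:
    cards = driver.find_elements(By.CLASS_NAME, "base-movie-main-info_link__YwtP1")
    for card in cards:
        # The first line of the card text is assumed to be the title.
        data["title"].append(card.text.split("\n")[0])
        data["url"].append(card.get_attribute("href"))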
That's all. All that's left is to call the run function. How you do that is up to you: just call it at the bottom of the script, or wrap it the way I did.
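The author's exact entry point is in the downloadable source; a typical variant could look like this (the proxy file name and its format are my assumptions):

import json

if __name__ == "__main__":
    # proxies.json is assumed to hold a plain JSON list of "ip:port" strings.
    with open("proxies.json", encoding="utf-8") as f:
        proxies = json.load(f)
    run(proxies)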
Collecting free proxies
Since kinopoisk is a website operating in Russia and the CIS (at least its audience is from there), the proxies should come from the same region so that our parser looks more like an ordinary user.
So, how and where can you get free proxies? I present to you my free proxy scraper, which can select and filter proxies by country, protocol, and proxy type.
To collect only Russian proxies using http, https, socks4 and socks5 protocols, enter the following command:
If you want to know which codes correspond to which countries, enter:
As a result, you will receive JSON files with proxy lists. All such files are located in the data directory. Everything happens in parallel, and you can stop the script at any time if you think you have collected enough.
Checking free proxies
So, we have collected hundreds of proxies, and I can guarantee you that most of them are outright garbage. We need to filter them against the target site, i.e., kinopoisk.
To check whether a proxy works on a particular site, I created a special CLI tool, which you can download from the link in the previous sentence (⊙_(⊙_⊙)_⊙). Here is how to check a list of such proxies:
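The actual name of the script is in the linked repository; with placeholder file names, the call looks roughly like this:

python proxy_checker.py -i raw_proxies.json -o checked.json -U https://www.kinopoisk.ru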
where -i is the file with the proxies you collected using my proxy scraper,
-o is the name of the results file, in which each proxy will be assigned a weight,
-U is the list of websites to check the proxies against.
More options can be viewed with the -h flag. But in this case, we are more interested in the log.txt file: it stores the results of the checks for each proxy, including how many times it successfully connected to the target site. Choose the most successful proxies and combine them into one JSON file, which you will then use when parsing.
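As an illustration of that last step, a small sketch, assuming the -o file is a JSON object that maps each proxy to its weight (check the tool's real output format before relying on this):

import json

MIN_WEIGHT = 5  # arbitrary threshold, tune it to your results

with open("checked.json", encoding="utf-8") as f:
    weights = json.load(f)  # assumed format: {"ip:port": weight, ...}

best = [proxy for proxy, weight in weights.items() if weight >= MIN_WEIGHT]

with open("proxies.json", "w", encoding="utf-8") as f:
    json.dump(best, f, indent=2)  # the list the parser above will load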
External links
Related questions
Manual scraping, what is it?
This is the process of extracting data from web resources or documents manually, that is, performed by a person without the help of any auxiliary scripts or programs.
Cloud scraping, what is it?
This is a service for collecting information from various sources and grouping it in various formats, carried out on the cloud servers of the provider of this service.
What is the best web scraping tool?
The choice of a scraping tool depends on the nature of the website and its complexity. As long as the tool can help you get the data quickly and smoothly with acceptable or zero cost, you can choose the tool you like.
This is a tutorial with an example showing how to write a scraper for online stores that bypasses blocking with proxies and proxy rotation, using Selenium and some …