This parser collects information from the pages of the site whose address contains "list" — in other words, it parses the filtering and pagination pages. What a pun :). I will not hide the fact that it works rather slowly, because accessing the pages of this site requires so-called dynamic parsing. That is, the pages must first be rendered and processed by a browser before they can be parsed.
This is the first barrier to overcome before gaining access to the necessary pages. The second barrier in our way is that the site blocks IP addresses from which scraping is detected. Therefore, appropriately selected proxies are used.
A working parser of the kinopoisk site can be downloaded from here. All you have to do is activate the virtual environment and install the necessary packages.
How to make it: installing packages
We will write this parser in Python using third-party libraries and utilities. First, we will create a virtual environment and install the packages into it. You also need to download my proxy rotator.
On Windows:
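The original commands are not shown; assuming the standard venv module (the environment name venv is my choice), the Windows setup might look like this:

```shell
python -m venv venv
venv\Scripts\activate
```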
On *nix systems:
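On *nix systems the equivalent (again, a sketch with an assumed environment name) would be:

```shell
python3 -m venv venv
source venv/bin/activate
```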
Now it's time to install the necessary packages:
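The article names Selenium and pandas; openpyxl is my assumption, since pandas needs it to write .xlsx files:

```shell
pip install selenium pandas openpyxl
```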
Create a main.py file in the project directory. Our script will not be very large, just over 100 lines of code. Here is the complete source code:
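The full listing is not reproduced here, but based on the functions discussed below, a skeleton of main.py might be organized like this (the docstrings are my summaries, not the author's code):

```python
# Hypothetical skeleton of main.py — only the function names come from
# the article; the bodies are elided.

def run():
    """Entry point: rotate proxies, drive the browser, scrape all pages."""
    ...

def get_page_number(driver):
    """Read the total number of result pages from the first page."""
    ...

def scrape_kinopoisk_list(driver, result):
    """Pull movie data from the current page into the result dict."""
    ...

def save_to_exel(result):
    """Dump the collected data into a spreadsheet via pandas."""
    ...
```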
Next, to make the code easier to understand, I broke it down into structural elements and functions. Let's start with the main run function. It kicks off the scraping, rotates the proxies, and decides when to stop.
It also creates a virtual browser and makes requests for the necessary URLs.
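The proxy-rotation part of run can be sketched as a simple round-robin over the proxy list. The names below are mine, not the author's; with Selenium and Chrome, the selected proxy would typically be passed to the browser via the real --proxy-server option:

```python
from itertools import cycle

# Hypothetical round-robin proxy pool; each new browser session
# picks the next proxy in the list, wrapping around forever.
proxy_pool = cycle([
    "1.2.3.4:8080",
    "5.6.7.8:3128",
])

def make_proxy_argument(proxy: str) -> str:
    # Chrome accepts a proxy via the --proxy-server command-line option,
    # e.g. options.add_argument(make_proxy_argument(next(proxy_pool)))
    return f"--proxy-server=http://{proxy}"

first = make_proxy_argument(next(proxy_pool))
```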
Next, Selenium needs something to wait for when a page loads, so that it starts acting as soon as the corresponding elements appear. In the case of kinopoisk, these are the elements with the class base-movie-main-info_link__YwtP1. Here is the loop that walks through all the available pages.
Note that I save the result to the table both while parsing each page and after going through all the pages. That way, even if the parser suddenly "crashes", we still keep the data collected up to that point.
The get_page_number function is just as simple. We find the necessary element and convert its text to a number:
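The exact markup of the pagination element is not shown in the article, so here is a hedged sketch of the string-to-number step: it simply pulls the last run of digits out of whatever text the element contains.

```python
import re

def get_page_number(raw_text: str) -> int:
    """Extract the trailing integer from an element's text.

    The element's exact text format is an assumption; we just take
    the last run of digits, or 0 if there are none.
    """
    digits = re.findall(r"\d+", raw_text)
    return int(digits[-1]) if digits else 0
```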
The save_to_exel function is even simpler. Since we have the pandas package installed, all it does is build a data frame and write it out as a spreadsheet:
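A minimal sketch of that idea, assuming the scraped rows are a list of dicts (the column names here are illustrative, not the author's):

```python
import pandas as pd

def build_frame(rows):
    # rows: list of dicts such as {"title": ..., "year": ...};
    # pandas turns them into a table with one column per key.
    return pd.DataFrame(rows)

def save_to_exel(rows, path="kinopoisk.xlsx"):
    # Writing .xlsx requires the openpyxl package to be installed.
    build_frame(rows).to_excel(path, index=False)
```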
The scrape_kinopoisk_list function searches for the necessary elements on the page and fills the dictionary.
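The author does this with Selenium selectors; to keep the sketch self-contained, here is the same element-matching idea shown with the standard library's html.parser instead — it collects the text of elements whose class attribute contains the generated class name mentioned above:

```python
from html.parser import HTMLParser

class MovieLinkParser(HTMLParser):
    """Collects the text of elements carrying the target class.

    A stdlib stand-in for the Selenium lookup described in the
    article; only the class name comes from the original.
    """
    TARGET = "base-movie-main-info_link__YwtP1"

    def __init__(self):
        super().__init__()
        self._in_target = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        # Flag that we are inside an element whose class matches.
        if self.TARGET in dict(attrs).get("class", ""):
            self._in_target = True

    def handle_data(self, data):
        # Grab the element's text once, then reset the flag.
        if self._in_target and data.strip():
            self.titles.append(data.strip())
            self._in_target = False

parser = MovieLinkParser()
parser.feed(
    '<a class="other">Skip</a>'
    '<a class="base-movie-main-info_link__YwtP1">Movie One</a>'
)
```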
That's all. All that's left is to call the run function. How you do it is up to you: just call it at the bottom of the script, or do it the way I did, that is:
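The author's exact invocation is not shown; a common pattern for this is the main guard, which runs the scraper only when the file is executed directly:

```python
def run() -> int:
    # Stand-in for the scraper's entry point described above;
    # here it just reports zero pages scraped.
    return 0

if __name__ == "__main__":
    run()
```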
Collecting the free proxies
Since kinopoisk is a website serving Russia and the CIS (at least, that is where its audience is), the proxies should come from the same region so that our parser looks more like an ordinary user.
So, how and where can you get free proxies? I present to you my free proxy scraper, with the ability to select and filter proxies by country, protocols used, and the type of proxies themselves.
To collect only Russian proxies using http, https, socks4 and socks5 protocols, enter the following command:
If you want to know which codes correspond to which countries, enter:
As a result, you will receive JSON files with proxy lists. All such files are placed in the data directory. Everything runs in parallel mode, and you can stop the script at any time once you think you have enough.
Checking free proxies
So, we have collected hundreds of proxies, and I can guarantee you that most of them are outright garbage. We will need to filter them against the target site, i.e., kinopoisk.
To check whether a proxy works on a particular site, I created a special CLI tool, which you can download from the link in the previous sentence (⊙_(⊙_⊙)_⊙). Here is the command to check a list of such proxies:
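The original command line is not reproduced here; only the -i, -o, and -U flags come from the article, while the script and file names below are placeholders of mine:

```shell
python proxy_checker.py -i data/proxies.json -o checked.json -U urls.txt
```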
Here -i is the file with the proxies you got using my proxy scraper,
-o is the name of the result file, where each proxy will be assigned a weight,
-U is the list of websites to check against.
More options and variants can be viewed using the -h flag. But in this case, we are more interested in the log.txt file. After all, it stores the results of checks for each proxy and how many times it successfully connected to the target site. Choose the most successful proxies and combine them into one JSON file, which you will then use to parse sites.
Terms used
Website builder ⟶ An app or web service with a collection of ready-to-use templates and tools for constructing a website.
Python programming language ⟶ An interpreted, high-level programming language with dynamic semantics, built on an object-oriented approach. It is actively used for rapid development, scripting, and gluing together parts of existing apps.
Website ⟶ Is defined as a collection of related web pages that are typically identified by a common domain name and published on at least one web server. Websites can serve various purposes and can include anything from personal blogs to business sites, e-commerce platforms, or informational resources.
Scraper ⟶ In the context of computing and web development, refers to a program or script that is designed to extract data from websites. This process is known as web scraping. Scrapers can automatically navigate through web pages, retrieve specific information, and store that data in a structured format, such as CSV, JSON, or a database.
Related questions
What is the difference between web scraping and web crawling?
Web scraping and web crawling are two related concepts. Web scraping, as mentioned above, is the process of extracting data from websites; web crawling is the systematic browsing of the World Wide Web, generally for the purpose of indexing it.
What is cloud scraping?
It is a service that collects information from various sources and groups it into various formats, with all the work carried out on the cloud servers of the service provider.
How scraping is done
It all depends on what you scrape and what you scrape with. You can scrape documents and tables, or you can scrape websites. Websites are harder to scrape than documents, because there are many of them and each has its own architecture, which greatly complicates the job.