Scraper of an online shop in Python, using the Wildberries website as an example
Introduction and overview of the scraper
Nowadays, parsing online stores is not easy. All of them are quite advanced in terms of protection from parsers and bots: they can use dynamic content, firewalls, and similar techniques. One of the best-known companies providing such protection is Cloudflare.
In this tutorial, I will show various methods of bypassing blocking technologies, using the online store Wildberries as an example. To be more specific, I will show how to build a parser based on the Selenium library and how to set up proxy rotation for it. We will use free proxies.
And as a bonus, I will provide a proxy scraper and a proxy checker. They will help you create your own proxy lists for various sites.
Writing a basic parser
First, let's walk through the parser I wrote. You will need to create and activate a virtual environment.
If it is a Windows system:
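The standard commands for this (assuming the environment is named venv):

```
python -m venv venv
venv\Scripts\activate
```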
If it is a *nix-like system:
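```
# the same, for bash/zsh
python3 -m venv venv
source venv/bin/activate
```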
After that, we will install the necessary packages:
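The article's exact requirements are not shown here, but at minimum the parser needs Selenium:

```
pip install selenium
```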
Now the imports. In your working directory, there should be a folder proxy_rotator. This package regulates the issuance of proxies on request: it picks proxies at random depending on each proxy's weight. The higher the weight, the more likely that proxy is to be selected.
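The proxy_rotator package itself ships with the article's archive, so the interface below is only my reconstruction from the description above, not the author's code:

```python
import json
import random

from selenium import webdriver
from selenium.common.exceptions import TimeoutException


class ProxyRotator:
    """Hands out proxies at random, weighted by each proxy's score."""

    def __init__(self, proxies: dict[str, float]):
        # proxies maps "host:port" -> weight; a higher weight means
        # the proxy is more likely to be picked
        self.proxies = proxies

    def get(self) -> str:
        addresses = list(self.proxies)
        weights = list(self.proxies.values())
        # random.choices draws with probability proportional to weight
        return random.choices(addresses, weights=weights, k=1)[0]
```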
Let's add a couple more utility functions (sketched after this list), such as:
- saving in json
- saving in html
- downloading a proxy
- and scraping the necessary data
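A minimal sketch of what these helpers might look like, continuing the script above; the function names and file layout are my assumptions:

```python
def save_json(data: list[dict], path: str) -> None:
    # persist scraped records as human-readable JSON
    with open(path, "w", encoding="utf-8") as f:
        json.dump(data, f, ensure_ascii=False, indent=2)


def save_html(page_source: str, path: str) -> None:
    # keep the raw page so selectors can be debugged offline
    with open(path, "w", encoding="utf-8") as f:
        f.write(page_source)


def load_proxies(path: str) -> dict[str, float]:
    # load a prepared {"host:port": weight} mapping for the rotator
    with open(path, encoding="utf-8") as f:
        return json.load(f)
```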
In my case, I decided to parse the main page and collect all prices and product titles. The parse_data function determines what to parse and where to save it.
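A sketch of what parse_data could look like. The CSS selectors here are illustrative guesses, since Wildberries changes its markup regularly; verify them in your browser's dev tools first:

```python
from selenium.webdriver.common.by import By


def parse_data(driver: webdriver.Chrome, out_path: str = "products.json") -> None:
    # grab every product card on the main page
    cards = driver.find_elements(By.CSS_SELECTOR, ".product-card")
    items = []
    for card in cards:
        title = card.find_element(By.CSS_SELECTOR, ".product-card__name").text
        price = card.find_element(By.CSS_SELECTOR, ".price__lower-price").text
        items.append({"title": title, "price": price})
    save_html(driver.page_source, "main_page.html")  # helpers from above
    save_json(items, out_path)
```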
Add these lines of code at the very bottom. They will call the run() function only if this script is run directly by the Python interpreter.
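That is the standard entry-point guard:

```python
if __name__ == "__main__":
    run()
```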
Now this is what the main run() function looks like:
First, we create a proxy rotator by loading the already prepared list into it (see the next chapter for how to create your own). Then, in a loop, we create a Selenium driver and assign a proxy to it. If the proxy is bad, a TimeoutException will pop up, which will trigger a message in the console and a proxy replacement.
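Based on that description, run() could be sketched roughly like this; the timeout value, the target URL, and the Chrome options are my assumptions:

```python
def run() -> None:
    rotator = ProxyRotator(load_proxies("proxies.json"))
    while True:
        proxy = rotator.get()
        options = webdriver.ChromeOptions()
        # route all traffic through the currently selected proxy
        options.add_argument(f"--proxy-server=http://{proxy}")
        driver = webdriver.Chrome(options=options)
        driver.set_page_load_timeout(30)
        try:
            driver.get("https://www.wildberries.ru/")
            parse_data(driver)
            break  # data collected, stop trying proxies
        except TimeoutException:
            print(f"Proxy {proxy} timed out, switching to another one")
        finally:
            driver.quit()
```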
This was the basic parser, fully ready for the Wildberries website. In the archive you will find both the proxy_rotator package and a list of ready-made proxies, although I do not guarantee that they will still work at the time you read this article.
Collecting free proxies
Since Wildberries is an online store operating in Russia and the CIS, the proxies should come from there, so that our parser looks more like an ordinary user.
So, how and where can you get free proxies? I present to you my free proxy scraper, which can select and filter proxies by country, by protocol, and by proxy type.
To collect only Russian proxies that use the HTTP and HTTPS protocols, enter the following command:
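The exact flag names depend on the author's tool, so treat this invocation as purely illustrative:

```
python proxy_scraper.py --country RU --protocol http https
```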
If you want to know which codes correspond to which countries, enter:
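Again, a hypothetical form of that command:

```
python proxy_scraper.py --list-countries
```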
As a result, you will receive JSON files with proxy lists. All such files are located in the data directory. Everything happens in parallel mode, and you can stop the script at any time once you think you have enough.
Checking free proxies
So, we have collected hundreds of proxies, and I can guarantee that most of them are outright garbage. We will need to filter them against the target site, i.e., Wildberries.
To check whether a proxy works on a particular site, I created a special CLI tool, which you can download from the link in the previous sentence (⊙_(⊙_⊙)_⊙). Here is how to check a list of such proxies; the full command follows the flag descriptions below:
- -i is the list of proxies you got using my proxy scraper
- -o is the name of the result file, where each proxy will be assigned a weight
- -U is the list of websites to check against
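Putting those flags together, the invocation might look like this (the script and file names are placeholders, not the tool's real ones):

```
python proxy_checker.py -i data/proxies_ru.json -o weighted_proxies.json -U https://www.wildberries.ru/
```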
More options and variants can be viewed using the -h flag. But in this case, we are more interested in the log.txt file: it stores the results of the checks for each proxy, including how many times it successfully connected to the target site. Choose the most successful proxies and combine them into one JSON file, which you will then use to parse sites.
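The combined file could then look something like this, matching the {"host:port": weight} format assumed for the rotator earlier (addresses are placeholders):

```
{
  "203.0.113.5:8080": 9,
  "198.51.100.17:3128": 6,
  "192.0.2.44:8000": 2
}
```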
Terms used
- VPS (Virtual Private Server) ⟶ A service that provides access to a dedicated virtual server on a specific physical machine. There can be thousands of such dedicated servers on one machine. Typically, managing such a server is no different from managing a regular, physical one.
- Website builder ⟶ An app or web service with a collection of ready-to-use templates and tools for constructing a website.
- Website ⟶ Is defined as a collection of related web pages that are typically identified by a common domain name and published on at least one web server. Websites can serve various purposes and can include anything from personal blogs to business sites, e-commerce platforms, or informational resources.
- Scraper ⟶ In the context of computing and web development, refers to a program or script that is designed to extract data from websites. This process is known as web scraping. Scrapers can automatically navigate through web pages, retrieve specific information, and store that data in a structured format, such as CSV, JSON, or a database.
- Search engine ⟶ Is a software system designed to carry out web searches. It allows users to search for information on the internet by entering keywords or phrases. The search engine then uses algorithms to index and retrieve relevant results from its database of websites.
- TCP (Transmission Control Protocol) ⟶ Is one of the main protocols of the Internet Protocol Suite. It is used for establishing a connection between networked devices, ensuring reliable data transmission over the internet or other networks.
Related questions
- How to avoid being blocked when scraping a website? Many websites would block you if you scraped them too much. To avoid being denied, you should make the scraping process more like a human browsing a website. For example, adding a delay between two requests, using proxies, or applying different scraping patterns can help you avoid being blocked.
- What is the difference between web scraping and web crawling? Web scraping and web crawling are two related concepts. Web scraping, as we mentioned above, is a process of obtaining data from websites; web crawling is systematically browsing the World Wide Web, generally for the purpose of indexing the web.
- Is scraping data from Instagram illegal? If the data you are going to collect is public and accessible to everyone, then it is definitely allowed. Plus, Instagram provides a special API, so there should be no problems.
- What is cloud scraping? It is a service for collecting information from various sources and grouping it in various formats, carried out on the cloud servers of the provider of this service.
- How is scraping done? It all depends on what you scrape and what you scrape with. You can scrape documents and tables, or you can scrape websites. Websites are more difficult to scrape than documents, because there are many websites and each has its own architecture, which greatly complicates scraping.