About duplicate and non-canonical pages while implementing a paginator
02.03.2025
03.03.2025
Introduction
Imagine you make a website for yourself; you don’t bother anyone. You have about 1,000 pages in the index. Then one day you wake up, open Google Search Console, and see this:
[Screenshot: Google Search Console page indexing report]
Most of the time I had something like 1,000 pages in Google's index: 80 articles, maybe another 100 pages about tools, 100 definitions, and 100 Q&A pages. The rest were paginator pages.
I started to worry about the "duplicate pages" on my end. After all, who would care about pagination pages? They are just collections of existing articles on the site. And so I took action.
About pagination on the website
Let's start with the paginator, or, as I call it, the Pagiscroll. The thing is that my paginator combines two types of content updates on the site. The first is classic pagination: buttons that, when pressed, load a new page. The second is so-called infinite scrolling: content is loaded as soon as the user reaches a certain point on the page.
My pagination is still the first version. Yes, I use page numbers instead of exposing the offset and the number of returned elements. Those values are still there; they are just abstracted away, and the user cannot manipulate them. (。╯︵╰。) Lord, I have enough problems with this paginator as it is.
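To make the idea concrete, here is a minimal sketch of how such a pagiscroll can work, with one loader serving both the page buttons and the infinite scroll. This is illustrative TypeScript, not my actual code: the endpoint, page size, and element IDs are all assumptions.

```typescript
// Sketch of a "pagiscroll": one loader feeds both the numbered buttons
// and the infinite scroll. All names here are hypothetical.
const PAGE_SIZE = 20; // assumed page size

// The page number stays internal; the user never sees or edits offset/limit.
let currentPage = 1;

async function loadPage(page: number): Promise<string[]> {
  // Offset and limit are derived from the abstract page number.
  const offset = (page - 1) * PAGE_SIZE;
  const res = await fetch(`/api/articles?offset=${offset}&limit=${PAGE_SIZE}`);
  return res.json();
}

function render(items: string[]): void {
  const list = document.querySelector('#list');
  items.forEach((html) => list?.insertAdjacentHTML('beforeend', html));
}

// Mode 1: explicit pagination — a button loads the next page on click.
document.querySelector('#next')?.addEventListener('click', async () => {
  currentPage += 1;
  render(await loadPage(currentPage));
});

// Mode 2: infinite scroll — load the next page when a sentinel element
// at the bottom of the list scrolls into view.
const sentinel = document.querySelector('#sentinel');
if (sentinel) {
  new IntersectionObserver(async (entries) => {
    if (entries[0].isIntersecting) {
      currentPage += 1;
      render(await loadPage(currentPage));
    }
  }).observe(sentinel);
}
```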
About the tagging system on the website
But you can't get by with just a pagiscroll (paginator + infinite scroll). You also want filtering and the ability to group the site's content by keywords. For this, there is such a thing as tagging.
Tags allowed me to group content into even smaller sets, and they also let me customize and SEO-optimize those pages later via the PagiScrollEditor tool. You can learn more about tags and categories in this article.
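In essence, a tag just narrows the same paginated feed, so every tag page becomes its own small, paginated listing. A rough sketch (the types and names here are hypothetical, only to show the idea):

```typescript
// Hypothetical model: each article carries a list of tags, and a tag page
// is just the same feed filtered before it hits the pagiscroll.
interface Article {
  title: string;
  tags: string[];
}

function filterByTag(articles: Article[], tag: string): Article[] {
  return articles.filter((article) => article.tags.includes(tag));
}

// e.g. filterByTag(allArticles, 'seo') produces the list that the
// pagiscroll then splits into pages.
```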
About duplicate pages
For a long time, the number of indexed pages stayed around 1,000, plus another 10,000 pages with mismatched canonicals or no canonical at all. But after the last paginator update, Google was able to find and index many more pages (8,000 indexed and 30,000 with problems). And, as you can see, the duplicate problem remained.
Any site with any kind of paginator, no matter how small, faces this problem, and various solutions are offered: from closing pagination pages off from indexing to clever schemes for manipulating link weight. There is actually a third option: simply do nothing. Google and other search engines do not index pages at random, and if these pages make it into search, they must be worth something.
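For reference, the first two options usually come down to a single extra tag in the page head. A sketch with hypothetical helpers (spoiler: I ended up using neither):

```typescript
// Option A: keep pagination pages out of the index entirely, while still
// letting crawlers follow the links on them.
function noindexTag(): string {
  return '<meta name="robots" content="noindex, follow">';
}

// Option B: declare a canonical URL, so that e.g. /articles?page=7
// is treated as a variant of /articles.
function canonicalTag(canonicalUrl: string): string {
  return `<link rel="canonical" href="${canonicalUrl}">`;
}
```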
My actions
And here's where I got you: I didn't do anything. Yes, the best option in my case was simply to wait and watch my site's search visibility grow, and with it the number of visitors ٩(◕‿◕。)۶. And you know what's ironic? Pagination pages, although they outnumber any other category on my site, bring the fewest impressions and clicks.
[Screenshot: site-wide search statistics for February]
So as not to be unfounded, here are the February statistics for the entire site.
I am not an SEO expert, but I believe it is all about how link weight is distributed through my site's internal linking. Articles on similar topics are collected on one page; that page, in turn, redistributes weight to weaker pages, and they make it into search results.
This, by the way, applies not only to Google Search but to Yandex as well. For the first time in a year or so, it has started sending me some traffic, and better traffic than Google's at that.

Conclusions
Now, about the conclusions that can be drawn. You shouldn't act rashly and block potentially useful pages from the index. In general, search engine blogs recommend not overthinking this and letting the engines decide whether to index such pages.