    Pagination pages were added to the Google index. What to do?

    Published: 02.03.2025 / Updated: 23.03.2026 / Reading time: 7 minutes / Views: 785

    Introduction

    Imagine you're running your own small website that nobody has heard of. You have about 1,000 pages indexed, and then one day you wake up, open Google Search Console, and see this:
    Most of the time, I had about 1,000 pages in my index: 80 articles, maybe another 100 tool pages, 100 definitions, and 100 FAQs. The rest were pages from the paginator, plus filter pages.
    On my site, filter pages can be accessed using tags and/or the time of publication. So the system itself is very simple, but as you can see, even this filtering can lead to index clutter.
    I started worrying about "low-quality pages" on my end. After all, who would be interested in pagination pages? They're simply a collection of existing articles on the site. Let's start by understanding what low-quality pages are and how pagination and filtering are related to their appearance on the site.

    About Low-Quality (Thin) Pages

    I'll start with a definition, as the term "low-quality page" will appear quite frequently later in the article.
    A low-quality (thin) page is one that search engines like Yandex, Google, or Bing consider useless due to duplication, lack of uniqueness, insufficient text, or technical errors.
    In my case, low-value pages include pagination and filter pages. There are thousands of them, and they're essentially just pass-through pages, meaning their main role is to serve as a guide to more valuable pages.
    For a long time, the number of indexed pages remained at 1,000, plus another 10,000 pages with mismatched canonicals or that were simply missing. But after the latest update of pagination on my website, Google was able to find and index many more pages (8,000 indexed and 30,000 with issues). However, as you can see, the duplicate issue remains.
    All of these indexed pages, or pages with problems, were pagination and filtering pages.
    This problem is faced by any website, no matter how small, that has implemented pagination. Various solutions are proposed:
    1. Completely block such pages from being indexed.
    2. Link to the root section of such pages on every page.
    3. Leave everything as is and let search engines decide what to keep and what to remove.
    4. Try setting up canonicalization for such pages yourself, working with the rel="canonical", rel="next", and rel="prev" link tags.
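As a sketch of what option 4 involves, here's a minimal Python helper that builds the canonical and prev/next link tags for a paginated listing. The function name and the `?page=` URL scheme are my own illustration, not the site's actual code:

```python
def head_tags(base_url: str, page: int, last_page: int) -> list[str]:
    """Build canonical/prev/next <link> tags for a paginated listing page.

    Assumes URLs like /articles?page=2; the exact URL scheme is
    hypothetical, not a specific site's implementation.
    """
    tags = [f'<link rel="canonical" href="{base_url}?page={page}">']
    if page > 1:
        # Every page except the first points back to its predecessor.
        tags.append(f'<link rel="prev" href="{base_url}?page={page - 1}">')
    if page < last_page:
        # Every page except the last points forward to its successor.
        tags.append(f'<link rel="next" href="{base_url}?page={page + 1}">')
    return tags
```

For a middle page, this emits all three tags; the first and last pages get only two, which is exactly the detail that's easy to get wrong by hand.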
    But before we move on to solving the problem of indexing paginated pages, let's take a closer look at pagination and content filtering on a website. What it is and how it works.

    About website pagination

    Let's start with pagination, or rather the pagiscroll (pagination + scroll), as I call it. My pagiscroll combines two types of content updates on the website:
    1. The first is classic pagination: there are buttons you click to go to the next page.
    2. The second is the so-called infinite scroll: new content loads automatically as soon as the user reaches a certain point on the page.
    This is simply a feature of my website; most sites have either pure pagination or infinite scroll. When I created the pagination for my website, I followed this guide (with an implementation example) and Google's recommendations.
    This is a demo of how my pagiscroll works on my website: an infinite feed (which is, in fact, very finite), plus the ability to navigate between pages using buttons.
    I should also clarify that I use so-called offset pagination, which uses specific parameters to say what to return to the user and how much. It might look like this: http://website.com/articles?items=10&offset=0, i.e. start at element 0 and return exactly ten items.
    There's also cursor pagination.
    Cursor pagination is a method of paginating data that uses a unique marker (cursor) to indicate a precise position in a data set, instead of a page number.
    More technical information about cursor pagination (and other types of pagination) can be found in the attached link.
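To make the difference between the two approaches concrete, here's a minimal Python sketch of both over an in-memory list. The field names and the id-based cursor are my assumptions for illustration; real implementations run these queries against a database:

```python
def paginate_offset(items: list, offset: int, limit: int) -> list:
    """Offset pagination: skip `offset` items, return the next `limit`."""
    return items[offset:offset + limit]


def paginate_cursor(items: list, cursor: int, limit: int):
    """Cursor pagination: return up to `limit` items whose id is strictly
    greater than `cursor`, plus the next cursor (None when exhausted).

    Assumes `items` is sorted by id ascending.
    """
    page = [it for it in items if it["id"] > cursor][:limit]
    next_cursor = page[-1]["id"] if len(page) == limit else None
    return page, next_cursor
```

The practical difference: an offset page shifts if items are inserted or deleted while the user browses, whereas a cursor pins the position to a concrete record, which is why cursor pagination is preferred for fast-changing feeds.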
    Pagination itself is very useful and, I would say, an integral part of any website, even a small one, with content. But from the perspective of SEO optimization and SERP ranking benefits, it's very ambiguous, just like the use of filters on a website.

    About website filtering

    A paginator alone (even a paginator plus infinite feed) isn't enough: you also want filtering and the ability to group website content by keywords. Pages can be filtered using tagging.
    Tagging goes by different names. On Pinterest it's pins, on Twitter and Instagram it's hashtags, and some simply call them categories. But essentially they all mean the same thing: a tag.
    Tags allowed me to group content into even smaller groups, plus they allowed me to customize and optimize them for SEO. You can learn more about tags and categories in this article.
    An example of how tagging (or filtering) works on my site.
    Filtering has exactly the same problems as pagination: low-value content, index clogging with this content, poor user experience, and implementation complexity.

    What can be done with indexed filter pages, and what did I do?

    Initially, as is customary, I simply ignored this problem. But the number of pages and errors kept growing, as you saw at the beginning of the article. Then I discovered how filter pages can affect a site's ranking.

    The Impact of Filtering Pages on Search Ranking

    The filtering technology itself doesn't directly affect search rankings, but it does so indirectly:
    1. It can lead to a lack of crawl budget. There may simply not be enough budget to check new or updated pages on the site. Learn how to monitor and manage it in the pinned article.
    2. Since these pages are generally of low value, they drag down the rankings of their parent directories and the site as a whole. For example, there's a path /en/articles/seo-optimization/, and this directory acts as a paginator for articles about SEO optimization. If this directory contains low-quality pages (e.g., /en/articles/seo-optimization/?page=1, /en/articles/seo-optimization/?page=2), those pages will drag down the rankings of the entire directory with them. Hobo Technical wrote extensively about the so-called "Topical Neighborhood" effect in URL structure, and I highly recommend reading it.
    3. It's technically difficult to implement, especially for a beginner or non-developer. And it's likely to result in numerous errors, which Google or any other search engine will perceive as a poorly performing site.
    4. It generates negative user experience factors. While this is debatable and not applicable to every site, it's a fact for a blog like mine. The most you can hope for from these pages is a +1 to page view depth, but that's all.
    These are the effects of page filtering and pagination that can occur on a site that implements them incorrectly. Now let's look at what can be done with these pages.

    What are the options for dealing with filtered pages?

    As always, everything depends on the specific site; there's no universal answer, such as simply leaving everything alone and letting Google sort out the mess.
    So, the following options are available for dealing with these pages:
    1. Do nothing. This is probably the simplest, but not the best course of action. Those who do this are relying on smart algorithms and AI to figure everything out for them. Who knows, maybe this helps someone.
    2. Do everything correctly (from a search engine and SEO perspective). That is, set a canonical link tag and add rel="next"/rel="prev". And then, again, do nothing, but now knowing you've done everything you could to help search engines understand your site.
    3. Disable indexing of all such pages. This is the most radical and also the most reliable option: you explicitly state that such pages should not be indexed.
    4. Add a canonical link tag pointing to the main filtering page, so that all of them reference a single URL. This lets you keep only the most unique and valuable filtering pages.
    5. Prevent filtering pages from being indexed, but only allow those that can be grouped into logical groups and sufficiently uniquely differentiated.
    Now let's talk about these options. I've put them in an ordered list for a reason, as I've gone through this sequence myself. I'm currently working on option 3, but I plan to try option 5 soon and group the pagination/filtering pages I need into specific clusters. This will be a separate article.
    I don't recommend trying option 2, as I doubt anyone would get it right on the first try; I only managed to implement everything correctly on the third attempt. A link to a non-existent page would appear, or a filtering page would turn up that simply returned a 500 error code...
    In short, either do nothing or simply close such pages from the index. This is much simpler and more reliable, because, as you'll see later, such pages don't contain much value.
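Closing such pages from the index (option 3) usually comes down to emitting a robots directive for every matching URL. Here's a minimal Python sketch of that decision; the blocked parameter names are illustrative, not my site's actual ones:

```python
def robots_directives(query: dict) -> dict:
    """Decide indexing directives for a URL based on its query parameters.

    Any pagination/filter parameter (names here are examples) gets
    "noindex, follow": the page is kept out of the index, but crawlers
    may still follow its links to the real content pages.
    """
    blocked_params = {"page", "tag", "items", "offset"}
    if blocked_params & set(query):
        return {
            "meta": '<meta name="robots" content="noindex, follow">',
            "header": "X-Robots-Tag: noindex, follow",
        }
    # Plain content pages stay fully indexable: no directive needed.
    return {"meta": "", "header": ""}
```

The "noindex, follow" combination is the key design choice: it removes the pass-through pages from the index without cutting off the crawl paths to the articles they link to.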

    The results of my actions with pagination pages

    At the beginning of the article, I showed the results of the first two options: doing nothing and modifying the pagination page meta tags.
    But a couple of months later, after the initial publication of this article, I made more drastic decisions regarding these pages and decided to completely block them from the index. Here are my site's Google index scores as of March 2026:
    As you can see, this isn't a quick process; it took my site a year. Most of the unindexed pages were those hidden behind the noindex meta tag.
    Furthermore, this had no impact on impressions or site visibility, even though, as you might recall, I had something like 8,000 pages indexed.
    This isn't surprising, as these are low-value pages. Here's a clear demonstration of the value of such pages (this was at the time when they were still visible in my search results):
    To back this up with numbers, here are the statistics for the entire site for February.

    Conclusions and the Future of Pagination/Filtering Pages

    Well, now for the conclusions that can be drawn. A year ago, I concluded that it wasn't a good idea to act rashly and block these pages from the index. But I wouldn't say that now. In my view, and in Google's, these were low-quality pages that didn't help the end user solve their problems.
    And I would recommend blocking such pages from the index, at least for sites like mine — that is, blogs and author sites. As the saying goes, the game isn't worth the candle. Maintaining such pages can be extremely expensive, and the benefits are almost nonexistent.
    I'll return to this article when I've managed to group similar pagination pages into a single, canonical URL and make it unique enough that even Google won't be able to say no. But that will be covered in the next article update. Stay tuned.

    Don't forget to share, like, and leave a comment :)


    Similar articles

    Monetize your files. Is it really possible earn money by sharing files?
    In this article, I will tell and show on the example of my site how you can earn money on file-sharing services. And is it even possible? Let's analyze the …

    How to implement pagination in Django + HTMx pr. 1
    In this article, I will describe how to create a paginator using Django and the HTMx library. And why it was so easy compared to the paginator on my site. …

    How to make simple paginator in Django and HTMx. Adding fitering and sorting feature. pr. 2
    In this article I will describe the process and main code blocks to add sorting and filtering feature to a paginator. This paginator is written in Django using HTMx.

    Customizing the 404 Page in Django, Two Ways. Guide
    I'll cover two ways to customize and configure a 404 response page. I'll explain the nuances and details for 400, 403, and 500 pages. I'll explain why you should do …