How to Fix Page Blocked From Indexing
Lighthouse flags pages that prevent search engines from indexing them. Learn how to identify and remove unintentional indexing blocks so your pages appear in search results.
What Lighthouse Is Telling You
When the “Page isn’t blocked from indexing” audit fails, the page carries a directive that prevents search engines from including it in their index. This is the highest-weighted SEO audit — it is weighted so heavily that failing it alone drops the SEO category score below 69%.
If this page should be publicly searchable, the indexing block needs to be removed.
Why Pages Get Blocked Accidentally
- `noindex` left in production — Adding `<meta name="robots" content="noindex">` during development and forgetting to remove it before deployment
- robots.txt blocking — A `Disallow: /` rule in `robots.txt` that was meant for staging but got deployed to production
- HTTP header middleware — Server or CDN middleware that adds `X-Robots-Tag: noindex` to responses globally instead of targeting specific paths
- Framework defaults — Some frameworks or boilerplate templates include `noindex` in the base layout as a safety measure
- Environment confusion — Using the same configuration for staging and production without differentiating robot directives
The Old Way to Fix It
- Run Lighthouse and see the “is-crawlable” audit fail
- View the page source and search for `<meta name="robots">`
- Check HTTP response headers for `X-Robots-Tag` using the DevTools Network panel
- Check `/robots.txt` for `Disallow` rules matching the page’s path
- Determine whether the block is intentional (login pages, admin panels) or accidental (public content)
- Remove the blocking directive from the meta tag, HTTP header, or robots.txt
- Wait for search engines to recrawl and reindex the page
- Verify with Google Search Console’s URL Inspection tool
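The manual checks above can be scripted. Here is a minimal sketch in JavaScript — `findIndexingBlocks` is a hypothetical helper, not a library function, and the robots.txt check is a simplified prefix match rather than a full parser:

```javascript
// Report every directive that blocks a page from indexing, given the
// page's HTML, its response headers, and the site's robots.txt.
function findIndexingBlocks(html, headers, robotsTxt, path) {
  const blocks = [];

  // 1. <meta name="robots" content="...noindex...">
  const meta = html.match(/<meta[^>]*name=["']robots["'][^>]*>/i);
  if (meta && /noindex/i.test(meta[0])) blocks.push("meta robots noindex");

  // 2. X-Robots-Tag response header
  if (/noindex/i.test(headers["x-robots-tag"] || "")) {
    blocks.push("X-Robots-Tag header noindex");
  }

  // 3. robots.txt Disallow rule whose prefix matches the path
  for (const line of robotsTxt.split("\n")) {
    const m = line.trim().match(/^Disallow:\s*(\S+)/i);
    if (m && path.startsWith(m[1])) {
      blocks.push("robots.txt " + line.trim());
      break;
    }
  }
  return blocks;
}
```

Feed it the HTML, headers, and robots.txt you already pulled from DevTools, and it tells you at a glance which of the three layers is doing the blocking.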
The Frontman Way
Tell Frontman to fix your Lighthouse issues. That is the entire workflow.
Frontman has a built-in Lighthouse tool. It runs the audit, reads the failing scores, fixes the underlying code, and re-runs the audit to verify the score went up. If issues remain, it keeps going — iterating through fixes and re-checks until the metrics pass. You do not hunt through meta tags, HTTP headers, and robots.txt to find the blocking directive. You say “fix the Lighthouse issues on this page” and Frontman handles the rest.
Key Fixes
- Remove `noindex` from meta robots — Change `<meta name="robots" content="noindex">` to `<meta name="robots" content="index, follow">` or remove the meta tag entirely (indexing is the default)
- Fix robots.txt — Remove or narrow `Disallow` rules that block important pages. Use `Allow` for specific paths within a blocked directory
- Remove X-Robots-Tag headers — Check server config (nginx, Apache, Vercel, Netlify) and CDN settings for headers that add `noindex`
- Use environment-specific config — Set `noindex` only in staging/development environments. Use environment variables: `if (process.env.NODE_ENV !== 'production') { noindex = true }`
- Audit your robots.txt — Keep `robots.txt` in version control. Review it during deployment. Use Google’s robots.txt Tester to verify
- Use Google Search Console — After fixing, use the URL Inspection tool to request reindexing and verify the page is indexable
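The environment-specific approach can be as small as one function. A sketch, assuming the same `NODE_ENV` convention as above — `robotsMetaFor` is an illustrative name, not a framework API:

```javascript
// Derive the robots meta content from the runtime environment so staging
// builds carry noindex while production stays indexable by default.
function robotsMetaFor(env) {
  return env === "production" ? "index, follow" : "noindex, nofollow";
}

// In a template or layout, something like:
// <meta name="robots" content="${robotsMetaFor(process.env.NODE_ENV)}">
```

Because the directive is computed rather than hard-coded, there is no `noindex` tag to forget to remove at deploy time.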
People Also Ask
Should some pages be noindexed?
Yes. Pages that should have noindex: login pages, admin panels, internal search results, user dashboards, thank-you pages after form submission, and paginated archives (if using rel="canonical" to the first page). Only noindex pages you intentionally want excluded from search.
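For those intentionally private pages, the cleanest pattern is to target them by path instead of setting `noindex` globally. A sketch — the prefix list and `shouldNoindex` are assumptions to adapt to your own routes:

```javascript
// noindex only intentionally private paths; everything else stays indexable.
const PRIVATE_PREFIXES = ["/admin", "/login", "/dashboard", "/thank-you"];

function shouldNoindex(path) {
  // Match the prefix exactly or as a directory, so "/administrator"
  // is not accidentally caught by "/admin".
  return PRIVATE_PREFIXES.some((p) => path === p || path.startsWith(p + "/"));
}

// Express-style middleware usage (assumed app setup):
// app.use((req, res, next) => {
//   if (shouldNoindex(req.path)) res.setHeader("X-Robots-Tag", "noindex");
//   next();
// });
```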
How long does it take for Google to reindex a page?
After removing the noindex directive, Google typically recrawls within days to weeks. You can speed this up by requesting indexing via Google Search Console’s URL Inspection tool. Sitemaps help Google discover the change faster.
Does nofollow also prevent indexing?
No. `nofollow` tells search engines not to follow links on the page — it does not prevent indexing. `noindex` prevents indexing. They are separate directives. `<meta name="robots" content="noindex, nofollow">` blocks both indexing and link following.
Can a canonical tag prevent indexing?
A `rel="canonical"` tag pointing to a different URL tells search engines that this page is a duplicate and the canonical URL is the preferred version. The current page may be dropped from the index in favor of the canonical. This is not the same as noindex — a canonical is a hint that consolidates indexing onto another page, rather than a directive that prevents indexing entirely.
You can use Frontman to automatically fix this and any other Lighthouse issue. Frontman runs the audit, reads the results, applies the fixes, and verifies the improvement — all inside the browser you are already working in. Get started with one install command.