Google Search Console flags pages that are not indexed under "Page indexing" with a reason. Most of those reasons are fixable in code; some are content issues. Here is the diagnosis and fix for the most common ones.
Redirect error
Cause: Google tried to crawl a URL from your sitemap, but the URL redirected (often to a different host like apex → www, or to HTTPS, or to a different canonical). Sitemap URLs should serve directly with HTTP 200, not 30x.
Fix: align the canonical URL across sitemap, robots.txt, the canonical link tag, and any JSON-LD @id. Pick one host (www vs apex), one protocol (always HTTPS), and one slash policy. Update the sitemap to use the canonical version. After deploying, click VALIDATE FIX in Search Console.
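You can catch these before Search Console does by walking the sitemap and flagging any URL that does not answer with a 200 directly. A minimal sketch, assuming Node 18+ (for global fetch) and a single, non-index sitemap; the sitemap URL is a placeholder:

```ts
// check-sitemap.ts: flag sitemap URLs that redirect instead of returning 200.
// Assumes Node 18+ (global fetch). SITEMAP_URL is a placeholder; use your own.
const SITEMAP_URL = "https://www.example.com/sitemap.xml";

async function main() {
  const xml = await (await fetch(SITEMAP_URL)).text();
  // Naive <loc> extraction; a real tool would use an XML parser and handle sitemap indexes.
  const urls = [...xml.matchAll(/<loc>(.*?)<\/loc>/g)].map((m) => m[1].trim());

  for (const url of urls) {
    // redirect: "manual" so we see the 30x instead of silently following it.
    // Some servers reject HEAD; switch to GET if results look wrong.
    const res = await fetch(url, { method: "HEAD", redirect: "manual" });
    if (res.status !== 200) {
      const location = res.headers.get("location");
      console.log(`${res.status} ${url}${location ? " -> " + location : ""}`);
    }
  }
}

main().catch(console.error);
```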
Alternate page with proper canonical tag
Cause: Google found a page whose canonical link points to a different URL. Google is correctly deferring to the canonical and reports the alternate URL here as a side effect.
Fix: usually none; this status is informational rather than an error. However, if you did not intend to canonicalize the page elsewhere, your canonical link is wrong. It also shows up alongside redirect chains where the canonical points to a URL that itself redirects; fix the chain.
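To spot the chained case, check whether each page's canonical target itself answers with a 200. A rough sketch under the same Node 18+ assumption; the page list is hypothetical:

```ts
// canonical-chain-check.ts: warn when a page's canonical URL redirects.
// PAGES is a hypothetical list; point it at your own URLs.
const PAGES = ["https://www.example.com/blog/post-a"];

async function canonicalOf(url: string): Promise<string | null> {
  const html = await (await fetch(url)).text();
  // Naive extraction; assumes rel comes before href inside the link tag.
  const m = html.match(/<link[^>]+rel=["']canonical["'][^>]+href=["']([^"']+)["']/i);
  return m ? m[1] : null;
}

async function main() {
  for (const page of PAGES) {
    const canonical = await canonicalOf(page);
    if (!canonical) {
      console.log(`no canonical tag: ${page}`);
      continue;
    }
    const res = await fetch(canonical, { method: "HEAD", redirect: "manual" });
    if (res.status !== 200) {
      console.log(`canonical of ${page} is ${canonical}, which answered ${res.status}`);
    }
  }
}

main().catch(console.error);
```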
Discovered – currently not indexed
Cause: Google knows about the URL but has not crawled it yet. Common when site quality signals are low or crawl budget is constrained.
Fix: improve internal linking to the URL (orphan pages get crawled last), reduce thin / duplicate content, fix any crawl errors, and submit the sitemap via Search Console. For new sites, wait — Google catches up over weeks, not hours.
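One way to find orphans is to compare the sitemap against what a shallow crawl from the homepage can actually reach. This is only a sketch: it assumes Node 18+, a small site, root-relative or absolute same-origin hrefs, and exact string matching of URLs (normalize trailing slashes before trusting the output).

```ts
// find-orphans.ts: flag sitemap URLs that internal links never reach from the homepage.
const ORIGIN = "https://www.example.com"; // placeholder origin
const SITEMAP_URL = `${ORIGIN}/sitemap.xml`;
const MAX_PAGES = 500; // this script's crawl budget, to keep runtime bounded

async function text(url: string): Promise<string> {
  return (await fetch(url)).text();
}

async function main() {
  const xml = await text(SITEMAP_URL);
  const sitemapUrls = [...xml.matchAll(/<loc>(.*?)<\/loc>/g)].map((m) => m[1].trim());

  // Breadth-first crawl from the homepage, following same-origin links only.
  const seen = new Set<string>([ORIGIN + "/"]);
  const queue = [ORIGIN + "/"];
  while (queue.length > 0 && seen.size < MAX_PAGES) {
    let html = "";
    try {
      html = await text(queue.shift()!);
    } catch {
      continue; // skip pages that fail to fetch
    }
    for (const m of html.matchAll(/href=["']([^"']+)["']/g)) {
      const raw = m[1].split(/[?#]/)[0];
      const href = raw.startsWith("/") ? ORIGIN + raw : raw;
      if (href.startsWith(ORIGIN) && !seen.has(href)) {
        seen.add(href);
        queue.push(href);
      }
    }
  }

  for (const url of sitemapUrls) {
    if (!seen.has(url)) console.log(`possible orphan: ${url}`);
  }
}

main().catch(console.error);
```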
Crawled – currently not indexed
Cause: Google crawled the page but decided not to index it. Usually a quality signal — thin content, duplicate of another page, low authority.
Fix: this is a content problem more than a technical one. Either improve the page (more depth, originality, internal links) or noindex it intentionally. Pages stuck here for months that you cannot improve are candidates for deletion or merging.
Soft 404
Cause: the page returns HTTP 200 but Google thinks it is an error page based on content (empty product list, "no results", error message in the body).
Fix: either return real content or return a real 404 status. SPAs are common offenders: the client-side router shows a "not found" view, but the server still answers 200 for every route. Make sure missing routes get a real 404 status from the server (or at minimum a noindex tag in the page), not just an empty shell.
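For a single-page app behind a Node server, the usual fix is a fallback handler that knows which routes exist and returns a real 404 for everything else. A minimal sketch assuming Express 4 and a static build in dist/; the route list is a stand-in for whatever your router actually handles:

```ts
// server.ts: serve the SPA shell, but with a 404 status for unknown routes.
import express from "express";
import path from "path";

const app = express();
const DIST = path.join(process.cwd(), "dist"); // placeholder: your SPA build output
const INDEX = path.join(DIST, "index.html");

// Hypothetical list of routes the client-side router actually handles.
// In practice, generate this from your route definitions.
const KNOWN_ROUTES = new Set(["/", "/pricing", "/blog"]);

app.use(express.static(DIST));

// Fallback: unknown paths still get the SPA shell, but with the right status.
app.use((req, res) => {
  const status = KNOWN_ROUTES.has(req.path) ? 200 : 404;
  res.status(status).sendFile(INDEX);
});

app.listen(3000);
```

The page the user sees is the same either way; what changes is the status code Googlebot records, which is what keeps the URL out of the soft 404 bucket.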
Server error (5xx)
Cause: Googlebot got a 500-level response. If the errors persist, Google slows its crawling and eventually drops the affected pages from the index.
Fix: check your error logs for hits from the Googlebot user agent and find the failing endpoint. Common causes: rate limiting that incorrectly blocks Googlebot, database timeouts, and server outages during the crawl window. Set up monitoring on the 5xx rate for your important pages.
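If your app sits behind Express, a small middleware can count 5xx responses served to a Googlebot user agent so you notice before the report does. A sketch, not a verification step (user agents can be spoofed; confirming real Googlebot requires a reverse DNS check):

```ts
// five-xx-monitor.ts: log and count 5xx responses served to Googlebot.
import express from "express";

const app = express();
let googlebot5xx = 0;

app.use((req, res, next) => {
  res.on("finish", () => {
    const ua = String(req.headers["user-agent"] ?? "");
    if (res.statusCode >= 500 && /Googlebot/i.test(ua)) {
      googlebot5xx++;
      // In production, push this to your metrics system and alert on the rate.
      console.error(`5xx to Googlebot: ${res.statusCode} ${req.method} ${req.originalUrl}`);
    }
  });
  next();
});

// ...routes go here...

app.listen(3000);
```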
Excluded by noindex tag
Cause: the page has a meta robots noindex tag or X-Robots-Tag header.
Fix: if intentional (login pages, internal tools, thank-you pages), fine, ignore it. If unintentional, find where the noindex is coming from. Common sources: staging environment configs leaking into production, or a CMS setting that defaults to noindex on the draft → publish transition.
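A quick way to trace the source is to fetch the live URL and report both places a noindex can hide: the meta robots tag and the X-Robots-Tag response header. A sketch assuming Node 18+; the URL list is a placeholder:

```ts
// noindex-check.ts: report noindex directives found in the header or the meta tag.
const URLS = ["https://www.example.com/some-page"]; // placeholder URLs

async function main() {
  for (const url of URLS) {
    const res = await fetch(url);
    const header = res.headers.get("x-robots-tag") ?? "";
    const html = await res.text();
    const meta =
      html.match(/<meta[^>]+name=["']robots["'][^>]+content=["']([^"']+)["']/i)?.[1] ?? "";
    if (/noindex/i.test(header) || /noindex/i.test(meta)) {
      console.log(url);
      console.log(`  X-Robots-Tag: ${header || "(none)"}`);
      console.log(`  meta robots:  ${meta || "(none)"}`);
    }
  }
}

main().catch(console.error);
```

If only the header carries the noindex, look at the web server or CDN config; if only the meta tag does, look at the CMS or template layer.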
Duplicate without user-selected canonical
Cause: Google found multiple URLs serving substantially the same content and you did not specify which one is canonical.
Fix: add canonical link tags pointing to the preferred URL. Common cases: paginated lists, filter/sort URL parameters, trailing-slash variants. Either canonicalize the duplicates or noindex them; note that Google no longer uses rel=next/prev as an indexing signal, so it will not resolve this.
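One pattern that helps with parameter duplicates is a helper that builds the canonical href by stripping parameters that only reorder or filter the same content. A sketch; the parameter names and origin are assumptions, adjust them to your own URL structure:

```ts
// canonical-href.ts: build a canonical URL that drops filter/sort/tracking parameters.
const STRIPPED_PARAMS = new Set(["sort", "filter", "page_size", "utm_source", "utm_medium"]);

export function canonicalHref(requestUrl: string, origin = "https://www.example.com"): string {
  const url = new URL(requestUrl, origin);
  for (const key of [...url.searchParams.keys()]) {
    if (STRIPPED_PARAMS.has(key)) url.searchParams.delete(key);
  }
  // One slash policy: strip trailing slashes so /page/ and /page collapse to one form.
  url.pathname = url.pathname.replace(/\/+$/, "") || "/";
  return url.toString();
}

// Usage in your template or SSR layer (however you render the head):
// `<link rel="canonical" href="${canonicalHref(req.originalUrl)}">`
```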
How long fixes take
After clicking VALIDATE FIX in Search Console, Google takes 1–4 weeks to recrawl and update the report. Major issues like "Redirect error" can resolve in days for sites Google crawls frequently. Discovered-not-indexed clears slowest.
If you are stuck on errors that resist fixes, that is exactly the kind of problem we untangle in our SEO audit service.