
AI Built Our Site. The SEO Audit Found 23 Problems.

We built czaban.dev with Claude. It looked right. The /seo audit scored it 59/100. Here is every issue it found and how we fixed them in one day.

Tamas Czaban

We built czaban.dev with Claude. It looked right.

The /seo audit tool scored it 59/100 and listed 23 issues. None of them were things Claude had mentioned.

That score is not a criticism of Claude. Claude did what we asked. We asked it to build a site that worked and looked professional. It delivered both. We did not ask it to audit for discoverability, so it never did.

The gap between "looks right" and "ranks" turns out to be 23 actionable issues.

The Audit

The audit runs eight parallel subagents across seven weighted categories: E-E-A-T (23%), Technical SEO (22%), On-Page (20%), Schema (10%), Performance (10%), GEO (10%), Images (5%). It scores each category and flags every gap with a specific fix.
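The audit's internal scoring logic isn't published, but the category weights above sum to 100%, so the roll-up is presumably a simple weighted average. A minimal sketch, with invented per-category scores and hypothetical key names:

```typescript
// Category weights as reported by the audit (they sum to 1.0).
// Key names are our shorthand, not the tool's actual identifiers.
const weights: Record<string, number> = {
  eeat: 0.23,
  technical: 0.22,
  onPage: 0.20,
  schema: 0.10,
  performance: 0.10,
  geo: 0.10,
  images: 0.05,
};

// Roll seven 0–100 category scores up into one overall score.
function overallScore(categoryScores: Record<string, number>): number {
  let total = 0;
  for (const [category, weight] of Object.entries(weights)) {
    total += (categoryScores[category] ?? 0) * weight;
  }
  return Math.round(total);
}
```

One consequence of the weighting: a GEO category scored at zero (as ours was before the robots.txt fix) caps the overall score at 90 even if everything else is perfect.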

Starting score: 59/100.

The issues split into two kinds. The first kind was things Claude had no way to know without being asked: logos, author bios, linking structure. The second kind was things the framework handles but only if you wire them correctly: schema generators, hreflang declarations, sitemap settings. Both kinds were present. Neither had been caught during the build.

The Unexpected Failures

Some findings were obvious in hindsight. Missing logo.png and og-default.png: two static files referenced in the Organisation JSON-LD and in the default OG meta on every page. Every social share of czaban.dev rendered no preview image. The files were listed in the schema as if they existed. They did not.

Some findings were not obvious at all.

"The fabricated bios were the most expensive issue." Claude had written founder credentials it had no basis for. "Over a decade of experience in distributed systems and cloud infrastructure" for Zsombor, who is a frontend and UX engineer. Fluent prose. Completely wrong. The /about page and Team.tsx both contained these, and we had not cross-checked them against real CVs because the prose read as credible. An E-E-A-T quality rater would have flagged them immediately.

Zsombor's location was also wrong. Founders.tsx had it hardcoded as "Greece". He had since moved to Kecskemét, Hungary. The field had been incorrect on the live site for an unknown period.

The third unexpected finding: author pages referenced in Person JSON-LD returned 404. siteConfig.ts pointed tamas.url and zsombor.url at /journal/author/tamas and /journal/author/zsombor. Those pages did not exist. Every page on the site was emitting Person schema with broken links in the url field.
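One way to make this class of bug impossible is to refuse to emit Person schema for an author path that isn't a real route. A minimal sketch of that guard; `knownRoutes` is a hypothetical stand-in for the app's actual route manifest, and the function is not the site's real personSchema generator:

```typescript
// Hypothetical route manifest — in a real app this would be derived
// from the filesystem router, not hardcoded.
const knownRoutes = new Set(["/about", "/journal"]);

// Emit schema.org Person JSON-LD, but fail loudly (at build time)
// if the url field would point at a route that does not exist.
function personSchema(name: string, authorPath: string) {
  if (!knownRoutes.has(authorPath)) {
    throw new Error(`Person schema points at missing route: ${authorPath}`);
  }
  return {
    "@context": "https://schema.org",
    "@type": "Person",
    name,
    url: `https://czaban.dev${authorPath}`,
  };
}
```

Before our fix, a guard like this would have thrown on /journal/author/tamas instead of shipping a 404 in the url field of every page's schema.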

The build surfaced none of these. Claude wrote the bios, Claude configured the schema, Claude generated the links. All of it looked correct in a local dev review. None of it was correct from a discoverability standpoint.

What We Fixed and in What Order

We ran two sprints with a re-audit gate between them.

Sprint 1 (S1–S11, 11 PRs in one session):

  • Committed logo.png and og-default.png to /public
  • Rewrote founder bios from actual CV sources; corrected Zsombor's location
  • Built /journal/author/[slug] route and pointed siteConfig author URLs at it
  • Wired the existing personSchema generator into /about for both founders (the generator existed and had tests; it had never been called from the page)
  • Added BreadcrumbList JSON-LD to portfolio, journal, and service pages
  • Rewrote 5 title tags from 14–33 chars to 50–60 chars, keyword-forward
  • Fixed sitemap: journal entries had changeFreq: weekly (should be monthly); /privacy and /terms were absent
  • Removed noIndex: true from /privacy and /terms. Both were in the sitemap but flagged as do-not-index, a contradiction: the sitemap was adding URLs to the crawl graph while the meta tag told Google to ignore them
  • Deferred the Material Symbols stylesheet to non-blocking load
  • Migrated homepage components to next/image with proper sizes attributes
  • Enriched Organisation JSON-LD with address, foundingDate, and sameAs fields
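The sitemap fixes above are small data changes. A sketch of what the corrected entries look like; the URLs, priorities, and the journal slug are illustrative, and in a real Next.js app this array would be returned from app/sitemap.ts typed as MetadataRoute.Sitemap:

```typescript
const base = "https://czaban.dev";

// Journal posts demoted from weekly to monthly; /privacy and /terms
// added (they were previously absent from the sitemap entirely).
const sitemapEntries = [
  { url: `${base}/`, changeFrequency: "weekly", priority: 1.0 },
  { url: `${base}/journal/example-post`, changeFrequency: "monthly", priority: 0.6 },
  { url: `${base}/privacy`, changeFrequency: "yearly", priority: 0.3 },
  { url: `${base}/terms`, changeFrequency: "yearly", priority: 0.3 },
];
```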

Mid-sprint re-audit: 76/100.

Sprint 2 (S12–S23, 12 more PRs):

  • Added site-wide hreflang self-references. Sprint 1 had added en + hu + x-default on /en/work/ and /hu/work/, but the homepage, journal, and every other page had no hreflang at all.
  • Fixed /hu/work returning 404. The middleware redirected /work to /en/work/custom-software-for-founders but had no equivalent for the HU variant. Hungarian visitors hit a 404.
  • Updated the Work nav anchor in Nav.tsx from /work to /en/work/custom-software-for-founders directly. Every visitor click was triggering a 308 redirect chain before the fix.
  • Added ClaudeBot, Google-Extended, OAI-SearchBot, CCBot, and Bytespider to robots.txt. The GEO category (AI crawler discoverability) had scored 0 until this landed.
  • Added X-Content-Type-Options, Referrer-Policy, and Permissions-Policy via next.config.ts. Content Security Policy was deferred: Calendly and Plausible both need exception lists and getting them wrong breaks the booking widget on every page.
  • Fixed hero LCP. Mobile LCP was 3.9s: no fetchPriority="high" on the hero <Image> component, no sizes attribute, and the hero image was 18x larger than needed. All three fixed in one slice.
  • Built journal post coverImage infrastructure and applied cover images to the three posts with the highest commercial intent.
  • Added a "Latest from Journal" section to the homepage and a secondary CTA in the hero pointing at the VitalRegistry case study.
  • Enriched author profiles: single source of truth in siteConfig.authors, both the author page and the Founders component now read from the same record.
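The AI-crawler change is the one that moved GEO off zero. A sketch of the rules object; in Next.js this shape would be returned from app/robots.ts (mirroring MetadataRoute.Robots), and the sitemap URL is illustrative:

```typescript
// The five AI crawlers named in the audit, each granted explicit access.
const aiCrawlers = ["ClaudeBot", "Google-Extended", "OAI-SearchBot", "CCBot", "Bytespider"];

const robots = {
  rules: [
    { userAgent: "*", allow: "/" },
    // One explicit allow rule per AI crawler, so their access does not
    // depend on how each bot interprets the wildcard rule.
    ...aiCrawlers.map((bot) => ({ userAgent: bot, allow: "/" })),
  ],
  sitemap: "https://czaban.dev/sitemap.xml",
};
```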

Final score: 87/100.

The Decision We Deferred

We skipped the Content Security Policy header. Calendly is the primary booking mechanism on the site. Plausible is the analytics layer. Both require exception entries in any CSP header. Getting the exception list wrong breaks the booking widget silently on every page. The non-CSP headers (content type sniffing, referrer policy, permissions policy) were safe to ship without that risk. CSP gets its own slice later, after we have a staging environment to validate against.
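For a sense of why the exception list is fragile, here is the kind of policy we would be starting from. This is a guess, not the policy we will ship: the hostnames are our best reading of what Calendly and Plausible need, and every one of them must be validated against the vendors' docs on staging before going anywhere near production.

```typescript
// ILLUSTRATIVE ONLY — hostnames are assumptions, not verified values.
// A single wrong or missing entry breaks the booking widget silently.
const csp = [
  "default-src 'self'",
  "script-src 'self' https://assets.calendly.com https://plausible.io",
  "frame-src https://calendly.com",
  "connect-src 'self' https://plausible.io",
  "style-src 'self' 'unsafe-inline' https://assets.calendly.com",
].join("; ");
```

The failure mode is the reason for the deferral: a blocked frame-src does not throw a build error or a 500, it just renders an empty booking widget.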

Deferred items have a cost. The audit records them. Shipping known gaps is a choice, not an oversight.

What the Score Means

87/100 is not a perfect site. It is a site where the known gaps are documented decisions rather than unknown omissions.

The 13-point gap is mostly CSP (deferred), cover images on six lower priority journal posts (separate content task), and a Hungarian translation surface that does not exist yet.

The movement from 59 to 87 in one day was not a dramatic rescue. It was a systematic pass through issues that were invisible during the build because the build process had no discoverability mandate.

"Claude never told us any of this." The site was built across multiple sessions. Not once did Claude flag the missing logo files, the fabricated bios, the broken author page URLs, or the noIndex conflict. These are not obscure edge cases. They are the difference between a site that appears credible to a quality rater and one that does not. Claude built for visual correctness. It had no mandate to audit for discoverability, so it did not.

The mandate has to come from you.

If you have built your site with an AI tool and have not run a structured SEO audit, your score is probably in the 50s. Not because AI tools are bad at building sites. Because discoverability requires asking questions the build process never asks by default.

Is your site built, or is it built and auditable?


We used the claude-seo skill for the audit. The /seo audit command runs eight subagents in parallel and produces a scored report with specific file level fixes. Source available on request.

