
- **GSC wake-up call.** Google Search Console flagged indexing failures. Pages missing, 404 errors piling up, organic traffic down 33%.
- **Asked AI about ourselves.** ChatGPT, Perplexity, Gemini: zero citations. One model described services we never offered.
- **AI was misrepresenting us.** Commit deef381: 30 files rebuilt. Structured data sending conflicting signals. LLMs filling gaps with hallucinations.
- **Fixed bot access.** robots.txt had no AI rules. llms.txt did not exist. Added crawler policies, hardened bot variants.
- **Cleaned structured data.** Removed fake ratings, duplicate schemas. Built 18-entity knowledge graph. GSC-driven content optimization.
- **Overhauled 21 posts.** StatBlocks, BlogTables, FAQ schemas, speakable markup. Every post made independently citable by AI.
- **Automated into tools.** AI Crawl Checker, Citation Tracker, Robots Analyzer. Each tool born from a manual audit step we were tired of repeating.
- **Radar is born.** All 12 tools in one parallel scan. Cross-tool conflict detection. LLM Answer Diff. 60-second full audit.
- **50 users validated it.** 10+ minute sessions. Return visits. Waitlist demand after token expiry. The problem was not unique to us.
- 6 months: Oct 2025 to Apr 2026
- 516 commits: every fix documented
- 12 tools: now in Radar
The moment we realized we had a problem
In October 2025, Google Search Console sent us a wake-up call. Pages were not being indexed. Crawl errors were piling up. Our organic traffic had dropped 33%, and we wrote an entire blog post about it.
But indexing issues were just the surface. The real problem revealed itself when we started asking AI assistants about our own company.
We asked ChatGPT, Perplexity, and Gemini: "What does Pixelmojo do?" The answers ranged from wrong to nonexistent. One model confidently described services we had never offered. Another did not mention us at all.
We were not just invisible. AI was making things up about us.
Phase 1: Fixing what Google told us was broken (October 2025)
The git history tells the story. Our first commits were pure triage:
| Date | Commit | What we fixed |
|---|---|---|
| Oct 2, 2025 | feat: add seo improvements and ai-native content | Basic SEO hygiene was missing |
| Oct 7, 2025 | fix(seo): resolve google search console indexing issues | GSC flagged pages not being indexed at all |
| Oct 9, 2025 | fix: resolve additional 404 errors from gsc | Broken URLs GSC kept trying to crawl |
| Oct 22, 2025 | fix(seo): fix meta tags and canonicalization | Duplicate content signals confusing crawlers |
| Oct 25, 2025 | feat(seo): add seo monitoring with gsc integration | Built monitoring because we needed to see the damage |
These fixes helped Google. But they did nothing for AI. That required a completely different approach.
Phase 2: Discovering AI could not find us (November 2025)
Once Google's issues were stabilized, we turned to the question that would change everything: can AI systems actually access our content?
The answer was no.
Our robots.txt had no rules for AI crawlers. We had no llms.txt file. Our structured data was a mess of duplicates and fabricated ratings. AI systems had no reliable way to understand what Pixelmojo actually was.
| Date | Commit | What we discovered and fixed |
|---|---|---|
| Oct 31, 2025 | feat(seo): add ai crawler support to robots.txt | AI bots had zero explicit access rules |
| Nov 5, 2025 | refactor(ai): make llms.txt machine-parseable | Our llms.txt did not even exist before this commit |
| Nov 5, 2025 | feat(seo): optimize robots.txt and llms.txt for ai discovery | Iterated on bot access policies |
| Nov 7, 2025 | feat(seo): fix ai system misrepresentation | AI was confidently stating wrong information about us |
| Nov 8, 2025 | feat(ai): harden crawler policy with variants | Different AI bots needed different handling |
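For context, llms.txt is a plain-text, Markdown-flavored file at the site root that gives language models a curated index of what a site is and which pages are canonical. A minimal sketch of the machine-parseable shape, written as a small build script (this assumes a Node/TypeScript project; the paths and links are illustrative, not our production file):

```ts
// scripts/generate-llms-txt.ts -- illustrative sketch; paths and links are examples.
// llms.txt is Markdown-flavored plain text: an H1 site name, a one-line summary
// in a blockquote, then H2 sections listing canonical pages an LLM should read.
import { writeFileSync } from 'node:fs'

const llmsTxt = `# Pixelmojo
> AI product studio building Radar, an AI visibility platform.

## Products
- [Radar](https://pixelmojo.io/radar): audit how AI systems see your site

## Writing
- [The AI Visibility Stack](https://pixelmojo.io/blog): the series documenting this rebuild
`

// Written into the static folder so it is served at /llms.txt.
writeFileSync('public/llms.txt', llmsTxt)
```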
Commit deef381 on November 7, 2025 is when the real work began. We discovered that AI systems were not just ignoring us. They were actively misrepresenting our services. The structured data on our site was sending conflicting signals, and LLMs were filling in the gaps with hallucinated information.
That single commit touched 30 files. It was not a patch. It was a ground-up rebuild of how our site communicated its identity to machines.
Phase 3: Building the playbook by being the guinea pig (February 2026)
By early 2026, we had a working hypothesis: AI visibility is not one thing. It is a stack of interconnected signals that all need to be correct simultaneously. Miss one layer and the whole thing breaks.
We spent February systematically auditing and rebuilding every layer:
Structured data cleanup:
| Date | Commit | What we fixed |
|---|---|---|
| Feb 8, 2026 | fix(seo): remove fake aggregate ratings | Fabricated review markup was poisoning our entity signals |
| Feb 8, 2026 | fix(seo): remove duplicate faqpage schema | Multiple FAQ schemas on one page confused parsers |
| Feb 8, 2026 | fix(seo): remove duplicate schemas from homepage | Homepage had 3 competing Organization schemas |
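What replaced the duplicates was one canonical Organization node, emitted once per page and referenced by @id from every other schema. A minimal sketch of that consolidated shape (field values are illustrative, not our full production markup):

```ts
// Illustrative sketch: a single canonical Organization node per page.
// Other schemas (Article, FAQPage, ...) reference its "@id" instead of each
// declaring their own competing Organization block.
export const organizationJsonLd = {
  '@context': 'https://schema.org',
  '@type': 'Organization',
  '@id': 'https://pixelmojo.io/#organization',
  name: 'Pixelmojo',
  url: 'https://pixelmojo.io',
  founder: { '@type': 'Person', name: 'Lloyd Pilapil' },
  // Deliberately no aggregateRating: fabricated review markup poisons entity signals.
}

// Rendered exactly once per page:
// <script type="application/ld+json">{JSON.stringify(organizationJsonLd)}</script>
```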
Knowledge graph and entity architecture:
| Date | Commit | What we built |
|---|---|---|
| Feb 14, 2026 | feat(seo): add knowledge graph with entity auto-mapping | Built a knowledge graph from scratch so LLMs could understand entity relationships |
| Feb 15, 2026 | feat(seo): gsc-driven content optimization | Used GSC data to identify which content AI was actually surfacing |
| Feb 20, 2026 | fix(seo): optimize meta for 3 low-ctr posts | CTR data revealed which AI-surfaced results users were ignoring |
We documented the entire knowledge graph journey in a dedicated blog post. The short version: we built an 18-entity knowledge graph that maps relationships between our services, tools, blog posts, and concepts. LLMs started citing us more accurately almost immediately.
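Conceptually, the graph is just typed entities plus typed relationships, projected into schema.org @id references at render time. A sketch of the data shape (entity names and predicates here are hypothetical; the production graph has 18 entities):

```ts
// Illustrative sketch of the entity-graph data shape. Names and predicates
// are hypothetical; the point is explicit, machine-readable relationships.
type EntityType = 'Organization' | 'Product' | 'Service' | 'Article' | 'Concept'

interface Entity {
  id: string   // stable @id used in JSON-LD, e.g. "https://pixelmojo.io/#radar"
  type: EntityType
  name: string
}

interface Relationship {
  from: string // Entity.id
  to: string   // Entity.id
  predicate: 'makes' | 'about' | 'mentions' | 'isPartOf'
}

export const entities: Entity[] = [
  { id: '#pixelmojo', type: 'Organization', name: 'Pixelmojo' },
  { id: '#radar', type: 'Product', name: 'Radar' },
]

export const relationships: Relationship[] = [
  { from: '#pixelmojo', to: '#radar', predicate: 'makes' },
]

// Each entity becomes a JSON-LD node; each relationship becomes a property
// pointing at another node's @id, so a crawler can resolve "Pixelmojo makes
// Radar" without inferring it from prose.
```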
Content optimization at scale:
We did not optimize one post at a time. We overhauled 21 blog posts in a single session, adding StatBlocks, BlogTables, FAQ schemas, speakable markup, and front-loaded definitions to every post. The goal: make every piece of content independently citable by AI systems.
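The markup side of that overhaul is ordinary schema.org JSON-LD, applied consistently. A sketch of the FAQ and speakable pieces (question text and CSS selectors are illustrative):

```ts
// Illustrative sketch: per-post FAQPage schema plus speakable markup.
// Exactly one FAQPage per URL; duplicates were what we had to strip out earlier.
export const faqJsonLd = {
  '@context': 'https://schema.org',
  '@type': 'FAQPage',
  mainEntity: [
    {
      '@type': 'Question',
      name: 'What is AI visibility?',
      acceptedAnswer: {
        '@type': 'Answer',
        text: 'How reliably AI assistants can find, understand, and cite your content.',
      },
    },
  ],
}

// speakable tells assistants which parts of the article are safe to quote directly.
export const articleJsonLd = {
  '@context': 'https://schema.org',
  '@type': 'Article',
  speakable: {
    '@type': 'SpeakableSpecification',
    cssSelector: ['.post-summary', '.post-faq'],
  },
}
```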
Bot access and crawl policy:
We discovered that blocking AI training bots while allowing retrieval bots actually improved our citation rates. Counter-intuitive, but the data was clear. The commit trail shows the iteration:
| Date | Commit | What we fixed |
|---|---|---|
| Mar 3, 2026 | fix(seo): resolve gsc crawled-not-indexed issues | Fixed redirect chains blocking both Google and AI crawlers |
| Mar 3, 2026 | fix(seo): add redirects for gsc crawl issues | 14 legacy URLs were confusing crawl budgets |
| Mar 14, 2026 | perf(core): optimize lcp from 6.5s to target <2.5s | Page speed was hurting crawl efficiency for both Google and AI bots |
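That training-versus-retrieval split looks roughly like this in a robots configuration (a sketch assuming the Next.js MetadataRoute.Robots convention; the bot list is an example, not our complete production policy):

```ts
// app/robots.ts -- illustrative sketch, assuming a Next.js App Router project.
// Retrieval/answer bots are allowed because they generate citations;
// pure training crawlers are disallowed. Bot names shown are examples only.
import type { MetadataRoute } from 'next'

export default function robots(): MetadataRoute.Robots {
  return {
    rules: [
      { userAgent: 'OAI-SearchBot', allow: '/' },      // OpenAI search/retrieval
      { userAgent: 'PerplexityBot', allow: '/' },      // Perplexity retrieval
      { userAgent: 'GPTBot', disallow: '/' },          // OpenAI training crawler
      { userAgent: 'Google-Extended', disallow: '/' }, // Gemini training opt-out
      { userAgent: '*', allow: '/' },
    ],
    sitemap: 'https://pixelmojo.io/sitemap.xml',
  }
}
```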
Phase 4: The manual process was killing us
By late February, we had a working playbook. But executing it was brutal. Every audit required:
- Manually checking robots.txt for every AI crawler variant
- Validating llms.txt format and content
- Running our domain through 4 different LLMs to check for hallucinations
- Auditing structured data with multiple tools
- Checking crawl access with different user agents
- Comparing what each AI said about us, side by side
Each full audit took 2 to 3 hours. We were doing it weekly. The spreadsheets were getting unwieldy. We started making mistakes, missing regressions, losing track of what we had already fixed.
So we did what any engineer would do: we automated it.
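The first script was modest: fetch a domain's robots.txt and report which known AI user agents it explicitly addresses. A minimal sketch in that spirit (the bot list is illustrative, and this is not the actual source of the later tool):

```ts
// Minimal sketch of the first audit script: which AI user agents does a
// site's robots.txt explicitly address? Bot list is illustrative, not exhaustive.
const AI_BOTS = ['GPTBot', 'OAI-SearchBot', 'ClaudeBot', 'PerplexityBot', 'Google-Extended']

async function checkAiCrawlRules(domain: string): Promise<Record<string, boolean>> {
  const res = await fetch(`https://${domain}/robots.txt`)
  const text = res.ok ? await res.text() : ''
  const mentioned: Record<string, boolean> = {}
  for (const bot of AI_BOTS) {
    // Bots with no explicit rule fall back to "User-agent: *", which is exactly
    // the ambiguity we wanted to surface during audits.
    mentioned[bot] = new RegExp(`^user-agent:\\s*${bot}\\s*$`, 'im').test(text)
  }
  return mentioned
}

checkAiCrawlRules('pixelmojo.io').then(console.log)
```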
Phase 5: From scripts to tools to Radar (March 2026)
The first tool we built was the AI Crawl Checker. It started as a script we ran locally. Then we realized other people might want it:
| Date | Commit | Tool shipped |
|---|---|---|
| Feb 21, 2026 | feat(tools): add ai crawl checker free tool | First tool, born from manual pain |
| Feb 22, 2026 | feat(tools): add ai citation tracker | Needed to know if fixes were actually working |
| Feb 23, 2026 | feat(tools): add readiness + robots analyzer | Automated the robots.txt audit we were doing by hand |
| Feb 24, 2026 | fix(tools): handle waf blocks in crawl checker | Learned the hard way that WAFs block audit tools |
| Feb 25, 2026 | fix(tools): use policy-based bot access instead of http status | Iterated on accuracy after real-world testing |
Each tool solved one piece of the puzzle. Once the individual tools were built, we spent two weeks integrating them into a single parallel scan architecture. The real insight came during that integration: the interactions between layers matter more than any individual layer.
A site with perfect robots.txt but broken structured data still gets hallucinated. A site with perfect schema but no llms.txt still gets ignored by some models. You need to check everything at once and see where the conflicts are.
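Architecturally, that meant running every check concurrently and then looking for contradictions across results instead of scoring each layer in isolation. A sketch of the idea (check names and result shape are hypothetical, not Radar's internal API):

```ts
// Illustrative sketch of a parallel scan with cross-check conflict detection.
// Check names and the result shape are hypothetical, not Radar's internals.
interface CheckResult {
  check: string
  ok: boolean
  detail: string
}

type Check = (domain: string) => Promise<CheckResult>

async function scan(domain: string, checks: Check[]): Promise<CheckResult[]> {
  // Run every layer at once; one slow or failing check must not block the rest.
  const settled = await Promise.allSettled(checks.map((c) => c(domain)))
  return settled.map((s, i) =>
    s.status === 'fulfilled'
      ? s.value
      : { check: checks[i].name, ok: false, detail: String(s.reason) }
  )
}

// The interesting findings live between checks: for example, robots.txt allows
// a bot that the server (WAF or CDN) then blocks with a 403.
function findConflicts(results: CheckResult[]): string[] {
  const byName = new Map(results.map((r) => [r.check, r] as const))
  const conflicts: string[] = []
  if (byName.get('robotsAllowsAiBots')?.ok && !byName.get('serverServesAiBots')?.ok) {
    conflicts.push('robots.txt allows AI crawlers, but the server blocks their requests')
  }
  return conflicts
}
```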
That is why we built Radar.
| Date | Commit | Milestone |
|---|---|---|
| Mar 8, 2026 | feat(platform): add radar ai visibility platform | Radar is born: all tools in one parallel scan |
| Mar 15, 2026 | feat(radar): enable all 12 tools by default | Expanded from 6 to 12 tools based on our own audit needs |
| Mar 26, 2026 | Radar v2 release | Added LLM Answer Diff and cross-tool conflict detection |
Phase 6: 50 strangers proved we were not alone
We opened Radar to the public expecting maybe 10 signups. As we covered in Part 1 of this series, 50 people ran full audits in the first week.
The data from those 50 users told us everything:
They were not tire-kickers. As the Part 1 engagement data shows, they averaged over 10 minutes per session. They ran multiple scans. Users came back within 48 hours after implementing fixes to re-scan and verify improvements. When tokens expired, users joined the waitlist instead of leaving. They wanted more access, not less.
The cross-tool conflict detection was the feature users mentioned most. No other platform surfaces the contradiction between what your robots.txt allows and what your server actually blocks. That single insight justified the entire platform for most users.
The problem we solved for ourselves was not unique. Every business with an online presence is about to face the same question: what is AI saying about me, and is it correct?
What 516 commits taught us
Six months. 516 commits. Every mistake documented, every fix timestamped, every iteration preserved in git history.
Here is what we learned:
AI visibility is not SEO. Traditional SEO optimizes for ranking algorithms. AI visibility optimizes for language model comprehension. Different inputs, different signals, different failure modes. You can rank #1 on Google and be completely invisible to ChatGPT.
The stack is interdependent. Fixing your robots.txt without fixing your structured data is like unlocking the front door but leaving the lights off. AI crawlers can get in, but they cannot understand what they find. Every layer depends on every other layer.
Manual audits do not scale. We are a small team and we could barely keep up with our own site. Businesses with hundreds of pages and multiple product lines cannot do this by hand. The tooling gap is real.
Scores must correlate with outcomes. We went from 0 AI citations in October 2025 to being cited by all four major LLMs by March 2026. We know Radar's scoring works because we used it on ourselves, implemented every recommendation, and watched those citations appear. That is not a theoretical claim. It is 516 commits of receipts.
Here is what the before and after actually looks like across all four providers:
| October 2025 | March 2026 |
|---|---|
| Described Pixelmojo as a "digital marketing agency specializing in social media management and content creation." | Identifies Pixelmojo as an AI product studio building Vector, Hive, and Radar. References founder Lloyd Pilapil. |
| No mention of Pixelmojo in any query about AI agencies, product studios, or AI visibility tools. | Cites pixelmojo.io directly. References Radar, free AI visibility tools, and the AI Visibility Stack blog series. |
| Zero results. Pixelmojo did not exist in Gemini's knowledge for any relevant query. | Mentions Pixelmojo when asked about AI visibility platforms. References structured data and llms.txt approach. |
| Confused Pixelmojo (AI product studio, Philippines) with Pixel Mojo (VR/AR company, Minneapolis). | Correctly distinguishes Pixelmojo from Pixel Mojo. References AI visibility tools, Radar platform, and GEO methodology. |
- 0/4 LLMs cited us in October 2025
- 4/4 LLMs cite us as of March 2026
- 516 commits in between
The tool we wished existed
Radar exists because we needed it first. Not because we saw a market opportunity. Not because an investor told us AI visibility was hot. Because we were invisible to AI, we were being misrepresented, and the manual process of fixing it was unsustainable.
Every feature in Radar maps to a problem we hit personally:
| Radar tool | The problem we had |
|---|---|
| Crawl Checker | We did not know which AI bots could access our site |
| Robots.txt Analyzer | Our bot policies were inconsistent and incomplete |
| llms.txt Validator | Our llms.txt was not machine-parseable |
| Schema Audit | We had duplicate and fabricated structured data |
| Citation Check | We could not tell if any LLM was citing us |
| Hallucination Detection | AI was confidently saying wrong things about us |
| LLM Answer Diff | Different models said different things and we could not compare |
| AI Readiness Score | We had no way to measure overall progress |
| Prompt Share-of-Voice | We did not know how often competitors appeared in AI answers |
| Source Influence Map | We could not see which sources LLMs trusted for our industry |
| AEO Audit | Our content was not structured for AI extraction |
| Reddit Radar | Community discussions were shaping AI training data without our input |
If you are wondering whether AI is making things up about your brand, it probably is. We know because it happened to us.
Start your own audit
You do not need 6 months and 516 commits. Radar runs the same 12 checks we did manually, in 60 seconds, for free.
- Check your brand free: Run a full AI visibility audit on your domain
- AI Visibility Strategy: Let us implement the fixes for you
- Contact us: Discuss your AI visibility needs
What This Series Covers
This is Part 2 of The AI Visibility Stack, a five-part series documenting how we built an AI visibility platform from scratch. Each post builds on the last, from user validation to founder story to product deep-dives.
All 5 parts published.
