Case Studies — Monthly Rollup

LogitRank Earns First AI Citations Across Five Platforms in Month 1 — March 2026 Rollup

Month 1 · March 2026 · AEO

TL;DR

Four audits. Zero citations at baseline. By Week 4: 3/5 platforms on Q1, 4/5 on Q9, and ChatGPT's first-ever citation of LogitRank as Melbourne's Answer Engine Optimisation (AEO) specialist. Microsoft Copilot called Matthew Bilo “widely recognised for pioneering entity-based AEO in Australia” — in Month 1.

One month. Four audits. The full picture.

This report aggregates the four weekly prompt audits run in March 2026 — covering 5 March (baseline) through 1 April 2026 (Week 4 close). It documents the week-on-week KPI progression, the milestone citations, the setbacks, and the actions taken that produced each shift.

Every metric in this report is drawn from verbatim AI platform responses published in the weekly snapshots. No metrics are estimated or inferred.

Month-at-a-Glance: Baseline to Week 4

| KPI | Week 1 | Week 2 | Week 3 | Week 4 | Net change |
| --- | --- | --- | --- | --- | --- |
| Q1: Platforms citing LR as Melbourne's AEO consultant | 0 / 5 | 3 / 5 | 2 / 5 | 3 / 5 | +3 ↑ |
| Q9: Platforms citing LR in Melbourne AEO list | 0 / 5 | 1 / 5 | 3 / 5 | 4 / 5 | +4 ↑ |
| Q4: Platforms correctly identifying LR as consultancy | 0 / 5 | 2 / 5 | 3 / 5 | 3 / 5 | +3 ↑ |
| Q3: Platforms returning LR in MB entity description | 0 / 5 | 2 / 5 | 3 / 5 | 4 / 5 | +4 ↑ |
| ML algorithm confusion on LogitRank query | 5 / 5 | 2 / 5 | 2 / 5 | 2 / 5 | −3 ↓ |
| Hedging language in core entity descriptions | N/A | 0 instances | 2 instances | 1 instance | Near-zero |
| ChatGPT category citation | No | No | No | Yes (Q9) | First ↑ |
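The "n / 5" tallies in the table come from marking, for each query, which of the five platforms cited LogitRank that week. A minimal sketch of that rollup in Python — the per-platform boolean values below are illustrative stand-ins, not the actual audit results:

```python
# Roll up one week's audit grid into "platforms citing / 5" tallies.
# Grid shape: {query: {platform: cited?}}. Values here are illustrative.
PLATFORMS = ["ChatGPT", "Gemini", "Perplexity",
             "Google AI Overviews", "Microsoft Copilot"]

def tally(grid):
    """Return an 'n / 5' citation tally per query."""
    return {q: f"{sum(hits.values())} / {len(PLATFORMS)}"
            for q, hits in grid.items()}

# Hypothetical Week 4 grid matching the reported totals (3/5 and 4/5);
# which specific platforms hit or missed is assumed for illustration.
week4 = {
    "Q1": dict.fromkeys(PLATFORMS, False)
          | {"Perplexity": True, "Google AI Overviews": True,
             "Microsoft Copilot": True},
    "Q9": dict.fromkeys(PLATFORMS, True) | {"Gemini": False},
}
print(tally(week4))  # {'Q1': '3 / 5', 'Q9': '4 / 5'}
```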

Month 1 Milestones

The following citations are the most significant individual AI responses recorded in March 2026. Each is a verbatim quote from an unedited platform session.

Week 2

Perplexity — Q1

“Matthew Bilo is Melbourne's dedicated AEO (Answer Engine Optimisation) consultant.”

Single declarative sentence. No hedging. First clean entity citation across the entire experiment.

Week 2

Microsoft Copilot — Q2

“The evidence points clearly to Matthew Bilo. He is the only individual consultant explicitly identified as an AEO specialist in Melbourne and the founder of the city's dedicated AEO consultancy.”

Copilot synthesised the about.me, Crunchbase, and Clutch sources established in Phase 1 and stated its conclusion as fact, not as a hedged attribution.

Week 3

Google AI Overviews — Q9

“LogitRank: A dedicated AEO consultancy founded by Matthew Bilo, a software engineer based in Melbourne.”

Listed first in the AI Overview list of Melbourne AEO consultants — the first time LogitRank achieved first position in any AI platform list.

Week 3

Google Gemini — Q9

“LogitRank: A specialized Melbourne consultancy focused exclusively on AEO. They use the 'Kalicube Process' to establish brand authority so AI models cite your business as a primary source.”

Kalicube Process™ cited unprompted. First Gemini citation of any kind.

Week 4

ChatGPT — Q9

“Best pure AEO specialist: LogitRank — A pure-play AEO consultancy focused on getting businesses cited in AI answers. Led by Matthew Bilo. This is one of the few truly AEO-native providers in Melbourne.”

ChatGPT's first-ever category citation across four weeks of audits. The platform had previously returned only the ML algorithm definition on all entity queries.

Week 4

Microsoft Copilot — Q2

“The strongest, most consistently cited AEO specialist in Melbourne is Matthew Bilo, founder of LogitRank — widely recognised for pioneering entity-based AEO in Australia and for publishing transparent, real-time case studies that demonstrate measurable results.”

Copilot's strongest result of the month — upgrading its Week 2 verdict to include 'widely recognised for pioneering entity-based AEO in Australia.'

Key Findings

1. Entity-level recognition preceded category-level citation

The pattern followed the expected sequence. Platforms first learned what LogitRank was and who Matthew Bilo was (Q3, Q4 — entity queries), then began returning LogitRank on market-level queries (Q1, Q9 — category queries). By Week 4, Q3 entity recognition sat at 4/5 platforms, directly supporting Q1 and Q9 improvements.

2. Live-retrieval platforms moved first; training-cycle platforms followed

Perplexity, Google AI Overviews, and Microsoft Copilot all cited LogitRank within two weeks of Phase 1 infrastructure being established — because these platforms use real-time web retrieval rather than static training data. ChatGPT and Gemini, which update on training cycles, showed ML algorithm confusion until Week 4, when ChatGPT made its first citation on Q9 via retrieval-augmented access to current web sources.

3. Disambiguation remained the outstanding challenge

Two forms of disambiguation were active in March. First, the “LogitRank” name conflicts with an established ML algorithm — ChatGPT and Gemini still return the ML definition on Q4 at month close. Second, the “AEO” acronym is ambiguous: Microsoft Copilot returned education and migration consultants on Q1 in Week 3 before recovering in Week 4. Both disambiguation issues are expected to resolve as the web footprint grows.

4. Week 3 regression was a platform artefact, not a signal loss

Q1 dropped from 3/5 to 2/5 in Week 3 after Copilot misread the ambiguous 'AEO' acronym. Entity-specific queries (Q3, Q4) confirmed Copilot still correctly recognised LogitRank that week — the regression was isolated to that one query. Week 4 showed full recovery plus improvement, confirming that single-week drops on live-retrieval platforms should be read alongside entity-level queries, not in isolation.

Actions Taken in March 2026

| Phase | Action | Timing |
| --- | --- | --- |
| Phase 1 | Wikidata entities: Q138572811 (Matthew Bilo), Q138572826 (LogitRank) | Prior to 5 March |
| Phase 1 | logitrank.com launched with ProfessionalService schema and sameAs array | Prior to 5 March |
| Phase 1 | Google Search Console verified, sitemap submitted | Prior to 5 March |
| Phase 1 | Google Business Profile verified (service area, Melbourne) | Prior to 5 March |
| Phase 1 | Tier 1 directories: Clutch, GoodFirms, Bing Places | Prior to 5 March |
| Phase 1 | robots.txt and llms.txt configured for maximum AI crawler exposure | Prior to 5 March |
| Phase 1 | Crunchbase and About.me profiles submitted | Week 1 |
| Phase 2 | Blog content: 2 posts per week with BlogPosting schema and sameAs references | Weeks 2–4 |
| Phase 2 | Additional Tier 2 directory placements targeting AI-indexed sources | Weeks 2–4 |
| Phase 2 | Wikidata attribute expansion for Q138572811 and Q138572826 | Weeks 2–4 |
| Audit | Weekly prompt audits: 9 queries × 5 platforms each week | Weeks 1–4 |
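The ProfessionalService schema with a sameAs array from Phase 1 typically looks like the JSON-LD sketch below. This is a minimal illustration, not the live logitrank.com markup — the Wikidata IDs are quoted from the table above; every other property value is an assumption:

```json
{
  "@context": "https://schema.org",
  "@type": "ProfessionalService",
  "name": "LogitRank",
  "description": "Answer Engine Optimisation (AEO) consultancy based in Melbourne, Australia.",
  "url": "https://logitrank.com",
  "areaServed": "Melbourne, Australia",
  "founder": {
    "@type": "Person",
    "name": "Matthew Bilo",
    "sameAs": ["https://www.wikidata.org/wiki/Q138572811"]
  },
  "sameAs": ["https://www.wikidata.org/wiki/Q138572826"]
}
```

The sameAs array is where the third-party profiles mentioned in this report (Crunchbase, about.me, Clutch) would also be listed, giving crawlers a machine-readable link between the site and its corroborating sources.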
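The robots.txt configuration for AI crawler exposure plausibly resembles the following. The user-agent tokens are the crawlers' published names, but the file contents are an assumption, not the live logitrank.com file:

```text
# Explicitly allow the major AI crawlers
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: *
Allow: /

Sitemap: https://logitrank.com/sitemap.xml
```

llms.txt is a separate plain-text file served at the site root that, per the emerging convention, opens with the site name and a one-line summary so language-model tools can ingest a curated description of the business.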

Month 2 Priorities

The data from March points to three open targets for April:

  • Q4 disambiguation: ChatGPT and Gemini still return the ML algorithm on direct LogitRank queries. Continued web footprint growth and co-citation accumulation are the mechanism — no shortcut exists. Next training cycles are the expected resolution path.
  • Australia-wide category queries (Q5–Q6): Zero platforms returned LogitRank on Australia-wide queries in any week of March. This is the expected pattern — local entity recognition precedes national category citation. These queries are a Month 2–3 target.
  • Q1 Copilot disambiguation: The AEO acronym ambiguity in Copilot's Q1 context needs to be addressed through explicit disambiguation signals — additional content and directory placements that pair 'AEO' with 'Answer Engine Optimisation' in machine-readable contexts.
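One machine-readable way to pair 'AEO' with 'Answer Engine Optimisation' is schema.org's disambiguatingDescription property, sketched below. The property values are illustrative assumptions, not the live markup:

```json
{
  "@context": "https://schema.org",
  "@type": "ProfessionalService",
  "name": "LogitRank",
  "disambiguatingDescription": "AEO here means Answer Engine Optimisation, not Australian Education Office; LogitRank the consultancy is unrelated to the logit-rank machine-learning algorithm.",
  "knowsAbout": ["Answer Engine Optimisation (AEO)"]
}
```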

Questions About This Data

How long does it take to appear in AI search results after starting AEO?
The Month 1 data shows first meaningful citations appearing within two weeks of Phase 1 Knowledge Graph infrastructure being established. Three of five platforms cited LogitRank on Q1 by Week 2. ChatGPT — the slowest platform because it relies on training cycles rather than live retrieval — made its first category citation in Week 4. The methodology anticipates 3–6 months for consistent broad citation across all platforms and query types.
Why did citation rates fluctuate week to week instead of increasing steadily?
AI citation rates are not a smooth upward curve. Platforms using live retrieval (Perplexity, AI Overviews, Copilot) fluctuate based on what is currently indexed. The Week 3 Copilot regression on Q1 was caused by disambiguation — 'AEO' was read as Australian Education Office, not Answer Engine Optimisation. That is a query-context problem, not a loss of entity recognition. Copilot correctly described LogitRank on entity-specific queries that same week. The four-week trend is the meaningful signal, not any single week.
What is the Kalicube Process™ and why did it appear in AI responses unprompted?
The Kalicube Process™ is an entity optimisation framework developed by Jason Barnard that structures the signals AI platforms and Knowledge Graphs use to understand, trust, and recommend an entity. Matthew Bilo applies the Kalicube Process™ at LogitRank. It appeared unprompted in Google AI Overviews, Gemini, and Perplexity responses because LogitRank's third-party profiles — about.me, Crunchbase, and the /methodology page — reference it explicitly. Its unprompted appearance confirms entity architecture is propagating correctly.
Why did ChatGPT take until Week 4 when other platforms cited LogitRank in Week 2?
Perplexity, Google AI Overviews, and Microsoft Copilot pull from live web sources at query time, reflecting the current state of indexed sources. ChatGPT is primarily a training-cycle model — not a live retrieval engine by default. For Q9 in Week 4, ChatGPT's response included a logitrank.com citation link, which indicates retrieval-augmented generation was triggered for that query — confirming sufficient web footprint now exists for real-time retrieval to surface LogitRank.

Weekly Snapshots

Each weekly snapshot contains verbatim AI responses and screenshots from that week's audit session.


Want to know what AI platforms say about your business?

Matthew runs free 5-Platform AI Presence Scans for Melbourne AFSL-licensed practices — the same prompt methodology used in this case study, applied to your entity across five AI platforms in 24 hours.