The editorial entity behind every Surgio article and study. We are a deliberately anonymous research bench: a contrarian bet on entity-driven authority over cult-of-personality. In place of bylined faces, we publish verifiable data sources, documented methods, and reproducible findings.
Not marketing promises: publishing standards we are building toward. Some are already enforced in code; some are planned with a specific deadline. Below is the honest status of each as of today.
| # | Principle | Status | Target |
|---|---|---|---|
| 1 | Data over opinion | 🟡 Partial | 2026-Q3 |
| 2 | Anti-hype | ✅ Live | n/a |
| 3 | Reproducibility | 🔴 Planned | 2026-Q3 |
| 4 | Transparent limitations | 🟡 Partial | 2026-Q3 |
| 5 | AI-augmented, not AI-generated | 🔴 Planned | 2026-Q3 |
| 6 | Entity over personality | ✅ Live | n/a |
| 7 | Update over delete | 🔴 Planned | 2026-Q3 |
2 of 7 principles fully implemented, 2 partial, 3 planned with a Q3 2026 target. We publish these statuses to earn reader trust, and to discipline ourselves. We update this table on every change.
The goal: every quantitative statement is sourced from original research, a 2024–2026 publication, or our own audit-harness measurement. "Most experts say" is banned. If no number exists, the claim is framed as a hypothesis, not as fact.
scripts/generate-article.js requires citation-grade facts. Most articles cite real numbers from the 5W Index, Profound, and Search Engine Land.
scripts/fact-source-check.js: a separate LLM pass that runs after drafting, extracts every number, percentage, and specific claim, and verifies that each has a source. Any unsourced claims trigger a Telegram alert and block the deploy. Target: 2026-Q3.
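Since that script is planned rather than shipped, here is a minimal sketch of the cheap pre-LLM step such a pass could start with (the function name and regexes are hypothetical, not production code): flag paragraphs that contain a number but no markdown source link, so the LLM only has to adjudicate the flagged ones.

```javascript
// Hypothetical helper for a fact-source check: flag paragraphs that
// contain a numeric claim but no markdown link to a source.
// A real pass would add an LLM step for claims regexes cannot catch.
const NUMBER_RE = /\d+(?:\.\d+)?\s*%?/; // numbers and percentages
const LINK_RE = /\[[^\]]+\]\([^)]+\)/;  // markdown [text](url) links

function findUnsourcedClaims(markdown) {
  return markdown
    .split(/\n{2,}/) // split into paragraphs on blank lines
    .map((text, index) => ({ index, text: text.trim() }))
    .filter(p => NUMBER_RE.test(p.text) && !LINK_RE.test(p.text));
}

const draft = `AEO citations grew 40% this quarter.

Per the [5W Index](https://example.com/5w), 11.29% of citations go to .org domains.`;

console.log(findUnsourcedClaims(draft)); // only the first paragraph lacks a source
```

The regex pass is deliberately over-sensitive; it is cheaper to let the LLM dismiss a false positive than to let an unsourced number through.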
We do not publish "X is the future" without caveats. We ship an llms.txt file, but we do not sell it to clients as a citation-lift tactic: as of May 2026, none of ChatGPT, Claude, Perplexity, or Google has publicly confirmed fetching it. Anti-hype means honesty about what works and what does not.
The goal: every methodology (AEO tracker, audit harness, schema checks) can be repeated. Prompts, tools, and measurement dates are public. If you cannot reproduce our finding, we did something wrong.
scripts/aeo-tracker.js and our audit harness exist and run daily. Methodologies are described textually on this page.
The @surgio-aeo organization does not exist yet. A third party cannot currently clone our repo and replicate our measurements.
Create github.com/surgio-aeo and push three OSS repositories (aeo-tracker, llms-txt-validator, faq-schema-cli) with real production code, sanitized: no API keys, no proprietary prompts. Target: 2026-Q3.
The goal: we do not know everything. Our AEO tracker currently measures only ChatGPT (22 queries, baseline 0/22). Perplexity and Claude tracking are in the backlog (they require API budget). We disclose rather than hide. When the sample size is small, we write "n=X," not "data shows."
The goal: drafts and FAQ blocks are prepared with GPT-4o-mini, but every article passes editorial review: facts verified, numbers sourced, voice tuned. No review = no publish. Not "100% human-written," but also not AI-spam.
seo-agent.js generates an article every Monday via gpt-4o-mini → build → deploy, without mandatory human review. Technically, this is AI-generated content lightly edited by an AI agent. Claiming that "every article passes editorial review" is not true right now.
Why we are anonymous: it is a conscious choice to build entity authority through Wikidata (Q139741871), data drops, and a consistent editorial voice rather than a cult of personality around a single founder. This is a contrarian play in 2026, but the 5W AI Citation Index shows 11.29% of citations go to .org and institutional sources; collective brands are cited reliably when the entity scaffolding is in place.
The goal: when data changes (which in SEO / AEO is monthly), we update the article; we do not delete it. Every article carries dateModified in its schema. When you see a recent update date, it is a real revision, not cosmetic.
dateModified is present in the schema but currently equals datePublished; no article has actually been refreshed yet. Aspirational, not real.
scripts/refresh-articles.js: a monthly cron that finds articles whose dateModified is more than 90 days old, generates a refresh draft via AI, runs it through the self-critique pass (principle #5), waits for human approval, then updates the article and dateModified and rebuilds the sitemap. Target: 2026-Q3.
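Since that cron is planned, here is a minimal sketch of the staleness test it describes (the function name and threshold constant are hypothetical; only the 90-day rule comes from the plan above):

```javascript
// Hypothetical staleness check for a refresh cron: an article qualifies
// for a refresh draft when its dateModified is more than 90 days old.
const REFRESH_AFTER_DAYS = 90;

function isStale(frontmatter, now = new Date()) {
  // Fall back to datePublished for articles never refreshed
  const modified = new Date(frontmatter.dateModified || frontmatter.datePublished);
  const ageDays = (now.getTime() - modified.getTime()) / (1000 * 60 * 60 * 24);
  return ageDays > REFRESH_AFTER_DAYS;
}

const article = { datePublished: "2026-01-10", dateModified: "2026-01-10" };
console.log(isStale(article, new Date("2026-05-01"))); // well past 90 days
console.log(isStale(article, new Date("2026-02-01"))); // only 22 days old
```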
What we actually measure, and where external numbers come from.
Our own probe that measures TTFB, transfer size, HTTPS, security headers, plus 35 HTML SEO checks and GPT-4o-mini language-aware analysis. Fires on every URL submitted via /api/audit.
Independent of the PageSpeed Insights API; pre-flight data with no caching.
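As an illustration of one slice of such a probe, here is a sketch (a hypothetical helper, not the production code) of a security-header check that runs over already-fetched response headers; the list of recommended headers is an assumption based on common practice:

```javascript
// Hypothetical security-header check for an audit probe: given response
// headers, report which commonly recommended headers are missing.
const RECOMMENDED_HEADERS = [
  "strict-transport-security",
  "content-security-policy",
  "x-content-type-options",
  "referrer-policy",
];

function missingSecurityHeaders(headers) {
  // Header names are case-insensitive, so normalize before comparing
  const present = new Set(Object.keys(headers).map(h => h.toLowerCase()));
  return RECOMMENDED_HEADERS.filter(h => !present.has(h));
}

const sampleHeaders = {
  "Strict-Transport-Security": "max-age=63072000",
  "X-Content-Type-Options": "nosniff",
};
console.log(missingSecurityHeaders(sampleHeaders));
// flags content-security-policy and referrer-policy as missing
```

Keeping the check a pure function over a headers object makes it trivial to unit-test without any network access.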
The script scripts/aeo-tracker.js queries 22 category prompts against the ChatGPT browse API (gpt-4o-search-preview) and parses the citations. It runs daily; the raw JSON is kept in the repo.
Current baseline: 0/22. Perplexity Sonar and Claude Web Search are in the backlog.
Surveys with n < 100, vendor-funded studies without a full methodology, and posts older than 12 months on fast-changing topics (AEO, AI Overviews). 2023 posts on "SEO best practices" are now disinformation.
Every headline number passes triangulation: confirmation from at least two independent sources. If only one source exists, we write "per X," not "it is known that."
The path from idea to publication.
Topics come from: (a) the content/plan.json brief queue, (b) real client questions from audits, (c) AEO tracker gap analysis, i.e. topics where we should be cited but are not.
5–15 sources are collected, dates are checked, and key numbers are triangulated. For rapidly changing topics (AEO, AI Overviews), sources must be less than 90 days old.
GPT-4o-mini drafts the piece against our editorial prompt (anti-hype, fact-density, Quick Answer block, FAQ block). Draft is stored as Markdown with frontmatter.
Fact-check (numbers especially), tone-of-voice tuning, clarity of claims, alignment with our principles. Most articles are rewritten 30–50%. Generic AI phrasing is removed.
Article + FAQPage + BreadcrumbList JSON-LD. Internal links are auto-injected from the anchor-map: 2–5 relevant cross-references per article.
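A minimal sketch of what building that Article JSON-LD could look like in Node (the helper and the surgio.example URL are illustrative; the field names follow schema.org):

```javascript
// Hypothetical builder for the Article JSON-LD described above.
function articleJsonLd({ title, published, modified, url }) {
  return {
    "@context": "https://schema.org",
    "@type": "Article",
    headline: title,
    datePublished: published,
    dateModified: modified || published, // real revisions get a later date
    mainEntityOfPage: url,
  };
}

const ld = articleJsonLd({
  title: "What is AEO?",
  published: "2026-01-10",
  modified: "2026-04-15",
  url: "https://surgio.example/blog/what-is-aeo",
});
console.log(JSON.stringify(ld, null, 2));
```

Emitting `dateModified` separately from `datePublished` is what makes the "update over delete" principle verifiable from the outside.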
UA + EN versions ship simultaneously. Sitemap is regenerated, IndexNow pings Bing/Yandex/Naver/Seznam, Telegraph mirror publishes, Telegram channel notifies.
Each article is reviewed at 90 days. If the data is stale (especially AEO/AI Overviews), it gets a full refresh with a new dateModified. If the topic is no longer relevant, it is marked archived and 301-redirected to an active replacement.
Not filler. Red lines.
Spotted a factual error, stale data, or have a topic to suggest? Our editorial channel is public.