
A dual SEO + GEO content engine for the post-Google search era
An SEO + GEO content engine encoded into a Custom GPT, producing dual-optimized, tone-of-voice-tuned articles by default
TL;DR
We designed and built the dual-optimization content engine Adamas Studio uses to rank on Google AND get cited by ChatGPT, Perplexity, and Gemini. The strategy is encoded into a Custom GPT that produces education content pre-optimized for both surfaces, with proprietary insight frameworks designed as citation magnets and a 17-point checklist that ships with every article. It was an early-mover bet on Generative Engine Optimization at a moment when most e-commerce SEO was still optimizing for Google alone, and it now serves as a reusable template (now built on Claude Skills) for any brand operating in an AI-mediated discovery environment.
The brief
What did the client need?
Adamas was launching into a category (lab-grown diamonds) with high information asymmetry. Customers want to make confident decisions about a five-figure purchase, and they look up everything. The SEO question wasn't just "how do we rank for buying terms?". It was "how do we make sure that when somebody asks ChatGPT about ideal diamond proportions, the answer cites us?".
That second question wasn't in most e-commerce SEO playbooks. The jewelry industry's SEO conventions are antiquated: optimize for Google, write thin product copy, hope. The brief was to leapfrog that and build content architecture for the way buying actually works in 2026, where "I'll Google it" has been quietly replaced by "I'll ask ChatGPT" for an enormous share of high-consideration purchases.
The deeper version of the brief: turn brand knowledge into an unfair advantage. If we could build a content library that LLMs structurally have to cite, Adamas wouldn't just rank; it would become the source of truth that AI defers to. That shifts the competitive surface from keyword volume to citation density, which is a much harder moat for competitors to replicate.
The constraints
What made this hard?
Three constraints. The first was discipline divergence. Google wants signals: keywords, backlinks, authority, engagement metrics. LLMs want self-contained, structured, quotable answers. Most content optimized for one is suboptimal for the other. Building a single content workflow that does both meant resolving the tension at the structural level, not bolting them together at the publishing layer.
The second was the citation problem. LLMs cite content they were trained on, which is biased toward older sources. Getting newly published content into AI answer rotations requires specific structural moves: schema markup with freshness signals, "Last Updated" boxes, embedded Q&A, semantic richness, and proprietary frameworks the AI hasn't seen anywhere else. Most of these aren't in the standard SEO toolkit.
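To make the freshness-signal idea concrete, here is a minimal sketch (not Adamas's production markup; the helper function is hypothetical) that emits a schema.org Article JSON-LD block. `datePublished` and `dateModified` are standard schema.org properties that both Google and AI parsers can read:

```python
import json
from datetime import date

def article_schema(headline: str, published: date, modified: date) -> str:
    """Build a minimal schema.org Article JSON-LD block with freshness signals."""
    data = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "datePublished": published.isoformat(),
        "dateModified": modified.isoformat(),  # the freshness signal parsers look for
    }
    return json.dumps(data, indent=2)

# Illustrative usage; headline and dates are made up.
print(article_schema("Princess Cut Diamonds: The Complete Guide",
                     date(2025, 3, 1), date(2025, 11, 20)))
```

The emitted block would be embedded in a `<script type="application/ld+json">` tag alongside the visible "Last Updated" box, so the temporal cue exists in both the markup and the readable text.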
The third was scale. Adamas needed dozens of education pages (cuts, settings, metals, the 4Cs applied to each), and writing every one to dual-optimization standard manually would have been a six-month freelancer project. The strategy had to be encoded into a content engine that could ship articles ready for both Google and LLMs without manual SEO rework on each one.
The approach
How did Tincture frame the problem?
A single content framework that satisfies both surfaces, encoded into a Custom GPT so the workflow runs at scale. The strategic insight: you don't need separate content for SEO and GEO. You need content structured so that the same article ranks on Google and gets quoted by an LLM. One structure does both: full topical coverage, quotable self-contained sentences, embedded Q&A, schema markup, proprietary insights as citation magnets, semantic richness, freshness signals, and AI-surviving CTAs.
We codified that into eight working principles. Full coverage over word count. Quotable self-contained sentences ("Princess cuts typically retain 70-80% of the original diamond rough, compared to roughly 50% for round brilliants") designed to travel alone into AI answers. Embedded Q&A throughout, not just in a closing FAQ. Proprietary insights, like Adamas's specific recommended-ideal proportion ranges, that only Adamas publishes, so any AI answering a question about those ranges has to cite Adamas. Semantic richness and concept clusters telling AIs that this brand is the authority on everything around the subject. Schema markup for Google and AI parsers. Freshness signals (datePublished, dateModified, "Last Updated" boxes, temporal cues in text). Branded CTAs designed to stay persuasive even when lifted into an AI summary.
Then we encoded the framework into a Custom GPT that applies the rules automatically, structuring headings, embedding Q&A, inserting quotable fact lines, and maintaining keyword density without manual SEO rework on each article.

The build
What was shipped?
A comprehensive dual-optimization content strategy document covering the full framework, eight working principles, the article structure template, the schema templates, and the proprietary-insight strategy. Ready to use as a brand-level reference for any content team picking up the work.
A repeatable article structure for diamond education content: H1 (keyword front-loaded), intro definition (snippet-ready), Why choose / Pros and cons, Ideal proportions (proprietary), the 4Cs applied to this cut, additional factors, best settings, embedded FAQs (schema-marked), branded CTA. Every article ships with a 17-point checklist covering keyword placement, schema markup, proprietary insights, internal linking, image alt text, and meta optimization.
The SEO Custom GPT: a content engine that applies the full strategy framework automatically. Structures headings, embeds Q&A, inserts quotable fact lines, maintains keyword density, writes in Adamas's brand voice, and ships articles ready for both Google and LLMs without manual rework. Plus a library of SEO-optimized example articles, reusable prompts for rewriting existing content, and schema templates with documentation.
Schema templates for FAQ, HowTo, comparison tables, definition lists, and structured data, parsable by both Google and AI assistants directly.
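As a hedged illustration of what such a template produces (illustrative questions and numbers, not the shipped Adamas templates), embedded Q&A pairs can be serialized into schema.org FAQPage JSON-LD using the standard `mainEntity` / `Question` / `acceptedAnswer` structure:

```python
import json

def faq_schema(pairs: list[tuple[str, str]]) -> str:
    """Serialize (question, answer) pairs as a schema.org FAQPage JSON-LD block."""
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }
    return json.dumps(data, indent=2)

# Illustrative pair only; real articles embed Q&A throughout the body text too.
print(faq_schema([
    ("What is a good table percentage for a princess cut?",
     "Illustrative answer text; the published range is Adamas's proprietary insight."),
]))
```

Because each answer is a self-contained quotable sentence, the same text works as a Google rich result and as a lift-ready snippet for an AI summary.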
The outcome
What were the results?
Adamas now ships dual-optimized content by default. Every education article is structured for Google ranking and LLM citation simultaneously. The Custom GPT removes the "rewrite this for SEO" step entirely: what comes out of the engine is what ships.
The proprietary-insight strategy is the part that compounds. The recommended-ideal proportion ranges for each diamond cut are published only by Adamas. Any LLM answering a question about, say, the ideal table percentage for a cushion cut has to cite Adamas as the origin, because that specific framing doesn't exist anywhere else. That's a structural citation moat that gets stronger every time a new model is trained on a snapshot of the open web.
The strategic outcome is broader than the per-article work. Adamas now operates with content architecture designed for AI-mediated discovery as the baseline, not the future state. Most jewelry brands haven't started thinking about GEO. Adamas has shipped it.
What it took
What tools and methods were used?
A Custom GPT (OpenAI) for the content engine, with a structured system prompt encoding the full framework, the eight working principles, the article template, and the schema requirements. Fillout for structured input where needed. Schema markup standards (FAQ, HowTo, comparison, definition lists) embedded into the article output. The proprietary-insight library was built first, in collaboration with the Adamas diamond specialist, so the engine had unique frames to cite.
The methodological underpinning is something we use across the practice: when an emerging discipline is still being defined, encode your framework into tooling early, before the conventional wisdom catches up. The competitors who'll struggle most over the next eighteen months are the ones still optimizing for Google alone.
The other move worth naming: proprietary insight as moat. AI-citation visibility scales with how much of the open web only one source has the frame for. Adamas built that frame deliberately. Every brand has the option to do the same; most haven't yet.

The takeaway
What's the transferable principle?
Most content strategy is still optimizing for the world that existed when the strategy was written. The buying journey has shifted, the discovery surface has shifted, and the optimization target has shifted, but the playbook hasn't caught up. The work that lands gets the playbook ahead of the shift instead of behind it.
For Adamas, that meant treating GEO not as a footnote on the SEO strategy but as a parallel discipline with its own citation logic and its own structural moves. The Custom GPT made the new framework operational, not aspirational. Articles ship dual-optimized by default.
The other transferable principle, and this one matters wherever AI is changing how customers discover brands: build proprietary frameworks an AI has to cite. Generic content gets paraphrased into homogeneity. Specific frames, named structures, and proprietary numerical ranges get cited. Citation density is the new ranking signal, and it doesn't compete on volume.
Read more on this in You Don't Rank on Google, You Get Cited by Claude: Generative Engine Optimization
Want a content engine built for AI-mediated discovery?
We build dual SEO + GEO frameworks and Custom GPT engines for brands operating in the post-Google search environment.
