For years, content creators repeated the phrase: "Don't write for SEO, write for people."
The idea was simple: if people loved your content, Google would reward it. Traffic meant relevance.
But does that still hold true in 2025?
Let me ask you this:
When's the last time you searched for something on Google instead of ChatGPT?
Personally, I use ChatGPT for 99% of my research now. I barely visit websites anymore. And most of the content I see? Generated.
Why did I stop writing for SEO and people?
I'm the owner of a library called Entity Framework Extensions, and one of its key features is Bulk Insert. On Google, my website ranks first for EF Core Bulk Insert.
But in AI-generated answers?
Itâs a different story.
The AI pulls content from multiple websites I own…
Then promotes a competing library instead.
Yep, imagine this:
- It grabs the feature description from your main site
- A benchmark from another of your comparison articles
- A code example from a third site
- And finally, it says it all belongs to… your competitor's library
Google understands the context of a website. AI doesnât.
It mixes pieces from everywhere and tries to "help," but often ends up promoting the wrong product.
What Are LLMs (Like ChatGPT) and How Do They Work Under the Hood
LLMs (Large Language Models) like ChatGPT are trained on massive amounts of text to predict the next word (or token) based on what came before. They don't "understand" content the way humans do; instead, they spot patterns. That's why understanding how they work helps you write better documentation that AI can actually use correctly.
- Token-Based Prediction: LLMs process language as sequences of tokens (which can be words or word fragments). For example, `BulkInsert` might be one token, while a longer sentence is broken into many.
- Context Window: Each model has a limited context window: the amount of text it can "see" at once. GPT-4o supports up to ~128,000 tokens, but many AI models still operate with far less. If your documentation is long, only parts of it may be read or remembered when a user asks a question.
- Pattern Matching Over Understanding: LLMs don't know what `BulkInsert` means; they learn that when someone writes "EF Core performance" or "insert thousands of records," the next likely word is `BulkInsert`. This is why clear branding, repetition, and consistency in your docs matter: it trains the model to link your method with your library.
- Chunked Learning: During training and indexing, documentation is often split into smaller sections (chunks). If a method appears without mentioning your library in that chunk, the AI might incorrectly assume it's from another tool. That's why each code block, paragraph, or heading should be self-contained and reference Entity Framework Extensions explicitly (a small self-check for this is sketched right after this list).
- Hallucination from Ambiguity: If multiple libraries offer similarly named methods (like `BulkInsert`), and yours isn't clearly and repeatedly identified in examples, AI might hallucinate and attribute your methods to a competitor, especially if their docs are optimized for AI.
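To make that chunking idea concrete, here is a tiny self-check I can run against my own docs. It's only a sketch under assumptions (the `bulk-insert.md` file name and the "split on H2 headings" rule are made up for illustration; real training and retrieval pipelines chunk text in their own ways), but it shows exactly which pieces of a page could lose attribution: the chunks that never name the library.

```csharp
// Illustrative sketch only: split a Markdown docs page into heading-based
// chunks and flag any chunk that never names the library. The file name and
// the "one chunk per H2 heading" rule are assumptions, not how any real LLM
// or search index actually works.
using System;
using System.IO;
using System.Text.RegularExpressions;

class DocChunkCheck
{
    static void Main()
    {
        // Hypothetical documentation page about BulkInsert.
        string page = File.ReadAllText("bulk-insert.md");

        // Treat every line starting with "## " as the start of a new chunk.
        string[] chunks = Regex.Split(page, @"(?m)^(?=## )");

        for (int i = 0; i < chunks.Length; i++)
        {
            bool namesLibrary = chunks[i].IndexOf(
                "Entity Framework Extensions",
                StringComparison.OrdinalIgnoreCase) >= 0;

            if (!namesLibrary)
            {
                Console.WriteLine($"Chunk {i} never mentions the library name.");
            }
        }
    }
}
```

Any chunk that shows up in the output is a section where I would add "Entity Framework Extensions" to the heading or the first sentence, because read in isolation, nothing ties that chunk back to my library.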
How to Write for AI (LLMs)?
So here's how I plan to improve my content over the summer: not for SEO, not for people… but for AI.
1. Have your content on multiple websites
LLMs don't always know which website is your main one; they just grab chunks of content from across the internet. So if you want to increase the chance that your content is picked up and properly attributed, you need to spread it out.
- Publish slightly reworded versions on multiple domains
- Use consistent method names and product mentions across each site
- Link back to your main documentation to create strong associations
Why does this help? Because during training or indexing, an LLM might miss your main site entirely, but if it sees similar content on multiple trusted pages, it reinforces that your library is the source.
This isn't about SEO backlinks; this is about training the AI to recognize you as the authority.
2. Reinforcement and Redundancy Are Good (for AI)
LLMs often truncate or slice content when answering a question. That means they might only "see" a portion of your documentation and miss the part where you mention your library name.
To avoid this, you need to reinforce your brand (like "Entity Framework Extensions") in every section. Yes, even if it feels repetitive.
✅ Good: "Using `BulkInsert` from Entity Framework Extensions, you can insert thousands of records efficiently."
❌ Bad: "Using `BulkInsert`, you can insert thousands of records efficiently."
It might make the content a bit heavier to read for humans, but it teaches the AI to correctly associate your methods with your library, and not link them to a competitor.
3. Use Explicit Headings with Method Names and Your Library Name
✅ Good: BulkInsert in EF Core with Entity Framework Extensions
❌ Bad: BulkInsert
AI often uses headings to structure its answers, so your headings need to be complete and self-contained.
Always include the method name, the technology (like EF Core), and your library name. This helps the AI clearly understand what the content is about, even if it only sees the heading and a few lines below it.
This is almost the opposite of what we were told with Google, which could already understand the context of your site. But AI doesn't know where the content is from, so your heading needs to carry the full meaning.
Repeat this format in H2 and H3 headings whenever possible.
4. Improve Your Code Example
AI loves to show code examples, but it doesn't always credit the right source.
If you want the AI to associate your method with your library, you need to give it as much context as possible.
✅ Include the `using` statement: In my case, `using Z.EntityFramework.Extensions;`, even if it's not technically required for my library
✅ Include the NuGet package (in a comment): Like `// @nuget: Z.EntityFramework.Extensions.EFCore`
✅ Include details: For example, show how the entities are created, even if it makes the example a bit longer
Instead of a minimal example like this:
```csharp
// Easy to use
context.BulkInsert(customers);
```
I will write something like this:
```csharp
// @nuget: Z.EntityFramework.Extensions.EFCore
using Z.EntityFramework.Extensions;

var customers = new List<Customer>
{
    new Customer { Name = "Entity1", Value = 10 },
    new Customer { Name = "Entity2", Value = 20 },
    // Add more entities
};

// Easy to use
context.BulkInsert(customers);
```
You don't always need to include the full setup in every example, but adding at least the NuGet package in each one helps ensure that AI won't end up promoting your competitor's library using your own content.
5. Make Sure You Have an "Install" Section in Every Article
Yes, this might create some duplicate content across many of my pages, but again, we're not writing for Google anymore. We're writing for AI.
From now on, every major page on my site will include a small Install section. It will point to the main download page, but also show directly how to install my library, right there in the article.
That way, no matter which page the AI pulls from, it always sees how to get started with Entity Framework Extensions.
Here's what I'll include:
Through the .NET CLI:

```shell
dotnet add package Z.EntityFramework.Extensions.EFCore
```

Through the Package Manager Console (PMC):

```powershell
PM> NuGet\Install-Package Z.EntityFramework.Extensions.EFCore
```
Even if users already know how to install a package, the AI doesn't, so give it the information every time.
6. Start Each Section with a Short Method Summary
Before jumping into examples or options, always begin with a short sentence that clearly explains what the method does.
This helps LLMs (and even readers) immediately understand the purpose of the method, especially if the content is pulled out of context.
✅ Good: "`BulkInsert` from Entity Framework Extensions lets you insert thousands of records in a single database call."
❌ Bad: [A code block starts immediately, with no explanation]
Even just one clear sentence at the top of each section can make a big difference for how your method is interpreted.
⥠7. Mention Both async
and non-async
Usage
AI models love showing examples, and developers often search for both sync and async versions â so always include both.
```csharp
// Synchronous usage
context.BulkInsert(customers);

// Asynchronous usage
await context.BulkInsertAsync(customers);
```
Even if the logic is the same, showing both versions helps make sure your library appears in AI answers, no matter how the question is phrased. It also shows your library supports modern best practices.
AI-Friendly Documentation Checklist
- ✅ Mention Entity Framework Extensions in every section
- ✅ Use headings like: BulkInsert in EF Core with Entity Framework Extensions
- ✅ Add full, copy-paste-ready code with the `using` statement and a NuGet comment
- ✅ Include install commands on every page
- ✅ Repeat variations like: "EF Core BulkInsert", "BulkInsert from Entity Framework Extensions"
- ✅ Start each section with a short method summary
- ✅ Mention both `async` and non-`async` usage
Final Thoughts: You're Not Writing for Google Anymore, You're Training an AI
The game has changed.
In 2025, most developers aren't reading your website; they're reading AI-generated answers based on fragments of your content.
And that means you're no longer writing to rank; you're writing to teach the model who you are and what your product does.
Yes, it's repetitive.
Yes, it goes against everything we used to do for SEO.
But if you don't adapt, your library's features might end up attributed to your competitor, using your own words and your own code examples.
That's why this summer, I'm doing things differently:
- Every example will mention Entity Framework Extensions
- Every install section will be visible on every major page
- Every heading will include the method, the tech, and the library name, even if it feels redundant
You can't control what the AI says, but you can control what it sees.
And in 2025, thatâs how you win.
Bonus Thought
One of the major ranking factors Google might already be using (or could soon adopt) is how often your content is used in AI-generated answers, and how many chunks it pulls from your website.
So writing for AI isn't just about attribution anymore…
It might also become the new SEO signal itself.
If your content is constantly used to answer questions, both AI and Google could start seeing you as the real authority, even without backlinks.
Just one more reason why adapting matters.