You open an old project. It takes 30 minutes to remember what you did. You open the files. You can’t understand your own notes. You research again. You waste time. This repeats with every new project.
RAG solves search. But it doesn’t solve accumulation. Tomorrow you’ll ask a similar question and the system will do the same work from scratch. Nothing connects. Nothing builds.
Manual wikis work for two weeks. Then you stop updating. Cross-references become obsolete. The value dies.
The answer is a system that builds its own knowledge.
TL;DR: LLM Wiki is a pattern where the AI doesn’t just answer — it maintains an interconnected markdown base, accumulates context, connects concepts, and evolves on its own. Raw sources (immutable) → Wiki (AI-maintained) → Schema (rules). You explore, AI does the maintenance. Result: a second brain that compounds.
The Problem: Knowledge That Disappears
Every solo builder faces this:
- You research something, note it in Notion, never look at it again
- You open an old project and don’t understand your own thinking
- You need the same information in another context but can’t find it
- You waste hours re-discovering what you already knew
RAG solves the search problem. But RAG is point-to-point. Each question is an isolated query. The system doesn’t “remember” that you asked related questions before. It doesn’t connect the dots. It doesn’t build anything.
Manual wikis (Obsidian, Notion, Zettelkasten) promise accumulation. But maintenance is manual:
- Creating links between notes
- Updating references when something changes
- Noting contradictions between articles
- Maintaining tags and organization
This works for a month. Then it becomes bureaucratic work nobody sustains.
The result is the same: knowledge you have, but can’t access.
The Solution: AI as Librarian, Not Search
The “LLM Wiki” concept proposed by Andrej Karpathy inverts the logic:
- RAG → you ask, AI searches
- Manual wiki → you maintain everything
- LLM Wiki → AI maintains, you explore
The correct metaphor: “Obsidian is the IDE, the LLM is the programmer, the wiki is the codebase.”
You don’t write the wiki. You query. You add raw sources. The AI:
- Reads your sources
- Extracts concepts
- Creates interlinked notes
- Updates references automatically
- Marks contradictions
- Keeps the structure alive
The system compiles context: each new source is integrated into everything that already exists. It's not search. It's cumulative construction.
Architecture: Three Layers That Work
Raw Sources (Immutable)
Your original sources:
- PDF articles
- Meeting transcripts
- Research notes
- Various documents
You never modify these files. They’re the absolute truth. AI reads but doesn’t alter.
Wiki (AI-Maintained)
A directory of markdown files completely managed by AI:
- Summaries of each source
- Concept pages
- Timelines
- Connection notes between ideas
AI creates, updates, links. You just read and query.
Schema (The Rules)
A configuration file (e.g., CLAUDE.md) that defines:
- How to structure new notes
- How to ingest new files
- What format to use in answers
- What the wiki conventions are
This is the “contract” between you and AI about how to maintain the system.
Operations: The System Cycle
Ingest
You add a new article to the raw sources folder. AI:
- Reads the article
- Creates a summary
- Identifies 10-15 related concepts that already exist in the wiki
- Updates each with insights from the new article
- Adds backlinks
- Logs the action
Result: in 30 seconds, the new knowledge is integrated and connected to everything you already have.
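The ingest step above can be sketched in a few lines. This is a minimal illustration, not the actual agent: `summarize` is a stub standing in for a real LLM call, and the "related concepts" heuristic (substring matching against existing note names) is a naive placeholder for what the agent would infer. The `my-wiki/raw` and `my-wiki/wiki` paths follow the article's layout.

```python
from datetime import datetime, timezone
from pathlib import Path

RAW = Path("my-wiki/raw")    # immutable sources (article's layout)
WIKI = Path("my-wiki/wiki")  # AI-maintained notes

def summarize(text: str) -> str:
    """Stand-in for an LLM summarization call: just takes the first lines."""
    return "\n".join(text.splitlines()[:3])

def ingest(source: Path) -> Path:
    """Create a summary note for `source` and backlink it from related notes."""
    WIKI.mkdir(parents=True, exist_ok=True)
    text = source.read_text(encoding="utf-8")
    note = WIKI / f"{source.stem}.md"
    stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    note.write_text(
        f"# {source.stem}\n\n{summarize(text)}\n\n"
        f"Source: {source.name}\nCreated: {stamp}\n",
        encoding="utf-8",
    )
    # Naive relatedness check: an existing note is "related" if its
    # name appears in the new source's text. A real agent would use the LLM.
    for other in WIKI.glob("*.md"):
        if other != note and other.stem.lower() in text.lower():
            with other.open("a", encoding="utf-8") as f:
                f.write(f"\n- See [[{source.stem}]]\n")
    return note
```

Even this toy version shows the shape of the operation: one new file in `raw/` fans out into edits across many files in `wiki/`.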
Query
You ask a complex question. AI:
- Reads the central index
- Navigates to relevant pages
- Synthesizes an answer
- If it discovers a new connection during the conversation, saves that discovery as a new page in the wiki
You no longer need to “remember where you wrote something.” AI navigates through all your compiled knowledge.
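The navigation part of a query can be sketched mechanically: start at `index.md`, follow `[[links]]`, and collect the pages the LLM should read before answering. A minimal sketch under the article's layout; `gather_context` is a hypothetical helper, and the link-selection heuristic (word overlap with the question) is an invented simplification of what an agent would decide:

```python
import re
from pathlib import Path

WIKI = Path("my-wiki/wiki")
LINK = re.compile(r"\[\[([^\]]+)\]\]")  # [[note-name]] wikilinks

def gather_context(question: str, max_pages: int = 5) -> str:
    """Walk the wiki from index.md, following links relevant to the question."""
    pages, queue, seen = [], ["index"], set()
    words = {w.lower() for w in question.split()}
    while queue and len(pages) < max_pages:
        name = queue.pop(0)
        if name in seen:
            continue
        seen.add(name)
        path = WIKI / f"{name}.md"
        if not path.exists():
            continue
        text = path.read_text(encoding="utf-8")
        pages.append(f"## {name}\n{text}")
        for target in LINK.findall(text):
            # Follow everything the index links; elsewhere, only links
            # whose name overlaps a word from the question.
            if name == "index" or any(w in target.lower() for w in words):
                queue.append(target)
    return "\n\n".join(pages)  # feed this to the LLM as context
```

The returned string is the "compiled knowledge" the agent synthesizes from; the real system would pass it to the model along with the question.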
Lint
Periodically, you ask for a “health check.” AI:
- Looks for broken links
- Finds outdated claims
- Identifies contradictions between pages
- Finds orphans (pages without connections)
- Suggests structural improvements
The wiki self-repairs. You don’t need to do manual maintenance.
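Parts of the health check are purely mechanical and don't even need an LLM. A minimal sketch of the link-checking half, assuming the article's layout and `[[wikilink]]` convention (`lint` is a hypothetical helper; detecting outdated claims and contradictions would still require the model):

```python
import re
from pathlib import Path

WIKI = Path("my-wiki/wiki")
LINK = re.compile(r"\[\[([^\]]+)\]\]")

def lint(wiki: Path = WIKI) -> dict:
    """Report broken [[links]] and orphan pages (no links in or out)."""
    notes = {p.stem: p.read_text(encoding="utf-8") for p in wiki.glob("*.md")}
    broken, linked = [], set()
    for name, text in notes.items():
        targets = LINK.findall(text)
        if targets:
            linked.add(name)  # has outbound links, so not an orphan
        for target in targets:
            if target in notes:
                linked.add(target)  # has an inbound link
            else:
                broken.append((name, target))
    orphans = sorted(set(notes) - linked - {"index"})
    return {"broken_links": broken, "orphans": orphans}
```

Run it periodically and hand the report to the agent: broken links get repaired, orphans get connected or flagged for review.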
Use Cases for Solo Builders
1. Product Research
You’re exploring a niche. You read 20 articles, watch 10 videos, note scattered insights. With LLM Wiki:
- Throw raw sources into the system
- AI compiles a niche map
- Shows connections between trends
- Stays updated as you add more sources
- You ask “what are the main pains of this audience?” and get the answer in seconds
2. Knowledge Base for SaaS
You’re building a product. Documentation, design decisions, architecture, business decisions. With LLM Wiki:
- Each decision becomes a note
- AI connects related decisions
- New team members navigate the wiki and understand the full context
- You don’t need to explain everything every time
3. Learning Tracking
You’re learning a new technology. With LLM Wiki:
- Each tutorial, article, video becomes a source
- AI compiles what you’ve learned
- Connects concepts across different sources
- You have a map of your knowledge that grows itself
- You ask “what do I know about X?” and get a consolidated view
4. Living Product Documentation
You have a running product. User feedback, issues, roadmap decisions. With LLM Wiki:
- AI compiles feedback by theme
- Identifies patterns in reported problems
- Maintains an updated view of priorities
- You ask “what are the biggest user problems?” and get analyzed results
Why This Is Different from RAG
| Aspect | RAG | LLM Wiki |
|---|---|---|
| Accumulation | None | Continuous |
| Connection | One-off lookups | Persistent links |
| Maintenance | Manual | Automatic |
| Evolution | Static | Compounding |
| Context | Per question | Accumulated |
RAG is like a librarian who searches relevant books for each question but never organizes the library. LLM Wiki is like a librarian who reorganizes the entire shelf with each new book, updates references, and connects with everything that already exists.
To learn more about implementing RAG, check our article on building RAG systems in Python.
Monetization: What You Can Build
1. Ready-made Knowledge Systems
Sell LLM Wiki templates configured for specific niches:
- “Niche research wiki for product hunters”
- “Knowledge system for solo developers”
- “Compiled base for SaaS makers”
Each template includes: folder structure, configured CLAUDE.md, ingest and query prompts, and usage tutorial.
2. Automated Second Brain SaaS
A tool that automates the complete cycle:
- Source upload (PDFs, links, notes)
- AI compiles and connects automatically
- Navigation and query interface
- Knowledge evolution reports
The differentiator: it's not just search. It's cumulative compilation.
3. Setup Consulting
Many builders want a second brain but don’t know how to structure it. Services for:
- System architecture
- Schema configuration
- Integration with existing tools
- Usage training
4. Domain-Specific Agents
An LLM Wiki trained specifically for a domain:
- “Jurisprudence wiki” for lawyers
- “Medical wiki” for healthcare professionals
- “Technical wiki” for engineers
The compounding knowledge base within a specific domain has exponential value.
How to Start (Practical)
Step 1: Basic Structure
Create the folder structure:
```
my-wiki/
├── raw/        # Your raw sources
├── wiki/       # AI-maintained notes
└── CLAUDE.md   # System schema
```
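If you prefer to script the setup, the layout takes a few lines. A minimal sketch: the paths follow the article's structure, and `init_wiki` is a hypothetical helper name:

```python
from pathlib import Path

def init_wiki(root: str = "my-wiki") -> Path:
    """Create the three-layer layout: raw sources, AI-maintained wiki, schema."""
    base = Path(root)
    (base / "raw").mkdir(parents=True, exist_ok=True)  # immutable sources
    (base / "wiki").mkdir(exist_ok=True)               # AI-maintained notes
    schema = base / "CLAUDE.md"
    if not schema.exists():
        schema.write_text("# Wiki Rules\n", encoding="utf-8")  # filled in Step 2
    return base

init_wiki()
```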
Step 2: Define the Schema
In CLAUDE.md, define the rules:
```markdown
# Wiki Rules

## Note Structure
- Each concept = one note
- Notes have: title, summary, related concepts, sources

## Conventions
- Use links in [[note-name]] format
- Include a creation timestamp
- Mark orphan notes for review

## Ingest
- When adding a source, create a summary note
- Identify 5-10 existing concepts to update
- Add a backlink to the original source

## Query
- First read index.md
- Navigate to relevant notes
- Synthesize the answer with references
```
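Under these conventions, a concept note might look like the following. This is a hypothetical example; the topic, link targets, and source file are invented for illustration:

```
# retrieval-augmented-generation

Summary: Grounds LLM answers in documents fetched at query time
instead of relying only on model weights.

Related: [[vector-databases]], [[llm-wiki]], [[context-windows]]

Sources: raw/rag-survey.pdf
Created: 2025-01-15T10:00:00Z
```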
Step 3: Configure the Agent
Use Claude Code (or another agent) with CLAUDE.md loaded. The agent now knows:
- How to structure notes
- How to ingest new sources
- How to maintain consistency
Step 4: First Ingest
Add 3-5 sources about a topic you want to compile. Ask the agent to run the first full ingest.
Watch how it:
- Creates concept notes
- Connects with what already exists
- Adds backlinks
Step 5: Query
Ask something complex. “What are the main patterns that emerged from these sources?” Watch the agent navigate the compiled wiki and synthesize.
The Future: Companies with Living Memory
LLM Wiki represents a fundamental shift: we’re moving from AI as search to AI as knowledge maintainer.
In 2-3 years:
- Companies will have “institutional memory” that automatically compiles context
- Decisions will be made based on accumulated knowledge, not point-in-time research
- Onboarding new members will be “navigate the wiki” instead of “read 50 documents”
- Individual builders will have competitive advantage because nobody else compiles context like they do
The person implementing an LLM Wiki today will have 2 years of accumulated context when others are starting from scratch.
Knowledge becomes an asset. And assets grow on their own.
How to Transform Knowledge into an Asset
The question isn’t “how to save my notes.” The question is: “how to make my knowledge work for me while I build other things?”
LLM Wiki answers this:
- Stop maintaining manually — let AI do the bureaucratic work
- Compile context — each source connects to all others
- Evolve on its own — the system improves without you needing to improve it
- Use as advantage — queries that would take hours take seconds
The result: a second brain that doesn’t just store — it builds.
You don’t need to remember everything. You need a system that remembers for you.
