Kagi's bet on the small web is a smart differentiation play. Google has essentially optimized for large, authoritative domains — which means personal blogs, niche forums, and indie projects get buried even when they have the most relevant content for specific queries.
The small web is where genuine expertise lives. When I'm looking for a real opinion on a tool or a deep technical explanation, I almost never find it on the first page of Google anymore. It's all SEO-optimized content farm articles or AI-generated summaries. The actual expert wrote a blog post that's sitting on page 5.
The challenge is curation at scale. Manually identifying quality small web content works when the corpus is small, but maintaining quality as it grows requires either very good automated signals or a community-driven approach. Curious how Kagi balances this.
The spec-driven approach resonates. I've found that the quality of the initial context you feed to AI coding tools determines everything downstream. Vague specs produce vague code that needs constant correction.
One pattern that's worked well for me: instead of writing specs manually, I extract structured architecture docs from existing systems (database schemas, API endpoints, workflow logic) and use those as the spec. The AI gets concrete field names, actual data relationships, and real business logic — not abstractions. The output quality jumps significantly compared to hand-written descriptions.
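To make that concrete, here's a minimal sketch of the extraction step, using SQLite's introspection pragmas. The demo tables (`users`, `orders`) are made up for illustration; the same idea applies to whatever schema the real system has.

```python
import sqlite3

def extract_schema_spec(conn: sqlite3.Connection) -> str:
    """Dump table/column structure as a markdown-ish spec an AI tool can consume."""
    cur = conn.cursor()
    tables = [r[0] for r in cur.execute(
        "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
    lines = []
    for table in tables:
        lines.append(f"## {table}")
        # PRAGMA table_info gives (cid, name, type, notnull, default, pk) per column.
        for _cid, name, col_type, notnull, _default, pk in cur.execute(
                f"PRAGMA table_info({table})"):
            flags = []
            if pk:
                flags.append("PK")
            if notnull:
                flags.append("NOT NULL")
            suffix = f" ({', '.join(flags)})" if flags else ""
            lines.append(f"- {name}: {col_type or 'ANY'}{suffix}")
        # Foreign keys capture the actual data relationships.
        for row in cur.execute(f"PRAGMA foreign_key_list({table})"):
            ref_table, from_col, to_col = row[2], row[3], row[4]
            lines.append(f"- FK: {from_col} -> {ref_table}.{to_col}")
    return "\n".join(lines)

# Tiny demo schema standing in for a real legacy database.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users(id),
        total_cents INTEGER NOT NULL
    );
""")
print(extract_schema_spec(conn))
```

The output is a compact, factual spec (real column names, types, and FK relationships) that goes straight into the prompt, which is what keeps the AI grounded in the actual system rather than a paraphrase of it.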
The tricky part is getting that structured context in the first place. For greenfield projects it's straightforward. For migrations or rewrites of existing systems, it's the bottleneck that determines whether AI-assisted development actually saves time or just shifts the effort from coding to prompt engineering.