Hacker News | Fourwheels2512's comments

  Interesting take, but what you're describing is sophisticated RAG with a feedback loop. The model's weights never change: it writes better notes, it
  doesn't actually know more.
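To make the distinction concrete, here is a minimal sketch of that kind of notes-with-feedback loop (all names are hypothetical, and the lexical-overlap retrieval is a stand-in for embedding search). The point it illustrates: every piece of "learning" lives in a text store that gets prepended to the prompt, while the model itself is never updated.

```python
# Minimal sketch of a notes-based memory loop (hypothetical names throughout).
# All "learning" lives in the notes store; model weights are never touched.

def tokenize(text):
    return set(text.lower().split())

class NoteMemory:
    def __init__(self):
        self.notes = []           # plain-text notes the agent has written

    def write(self, note):        # the feedback loop: append a refined note
        self.notes.append(note)

    def retrieve(self, query, k=2):
        # naive lexical-overlap scoring; real systems use embeddings
        scored = sorted(self.notes,
                        key=lambda n: len(tokenize(n) & tokenize(query)),
                        reverse=True)
        return scored[:k]

def build_prompt(memory, query):
    # retrieved notes are re-injected into the context window on every call;
    # nothing is ever internalized into the model itself
    context = "\n".join(memory.retrieve(query))
    return f"Notes:\n{context}\n\nQuestion: {query}"

mem = NoteMemory()
mem.write("deploy script lives in ops/deploy.sh")
mem.write("staging db password rotates weekly")
print(build_prompt(mem, "how do I deploy"))
```

Delete the notes store and the "knowledge" is gone, which is the sense in which the model never knew anything.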

  That works for agentic workflows. But for organizations fine-tuning models on proprietary data, it falls apart. Add a second domain and catastrophic
  forgetting destroys the first. Context windows are finite. Memory notes are lossy. The model never internalizes anything.
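Catastrophic forgetting is easy to caricature in one dimension. The toy below trains a single scalar weight with plain SGD on "domain A", then on "domain B" (a deliberately simplified model of my own devising, not anyone's real setup): the later gradients simply overwrite the earlier solution, which is the same mechanism that plays out across millions of parameters in a real net.

```python
# Toy illustration of catastrophic forgetting: one scalar weight, trained
# sequentially on two "domains" with plain SGD. Later gradients overwrite
# the earlier solution because nothing anchors it.

def train(w, target, steps=200, lr=0.1):
    for _ in range(steps):
        w -= lr * 2 * (w - target)    # gradient step on (w - target)^2
    return w

def loss(w, target):
    return (w - target) ** 2

w = 0.0
w = train(w, target=2.0)              # domain A pulls w toward 2.0
loss_a_before = loss(w, 2.0)          # ~0: domain A is learned

w = train(w, target=-1.0)             # domain B pulls w toward -1.0
loss_a_after = loss(w, 2.0)           # ~9: domain A is forgotten

print(loss_a_before, loss_a_after)
```

Replay buffers and frozen parameters are the standard mitigations, and both amount to fighting this overwrite.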

  I built the actual weight-update solution. Sequential multi-domain fine-tuning on Mistral 7B with -0.16% drift across 5 domains. No replay buffers, no
  frozen params. The model genuinely accumulates knowledge.
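The comment doesn't define "drift", so here is one plausible reading (an assumption on my part, not the modelbrew.ai metric): the mean relative change in each domain's held-out accuracy, measured after all later domains have been trained. Under that reading, a small negative number means earlier domains held up almost perfectly.

```python
# One plausible drift metric (an assumption; the comment does not define it):
# mean relative change in per-domain held-out accuracy, measured after all
# subsequent domains have been fine-tuned.

def mean_relative_drift(acc_before, acc_after):
    changes = [(a - b) / b for b, a in zip(acc_before, acc_after)]
    return 100.0 * sum(changes) / len(changes)    # as a percentage

# hypothetical numbers for 5 domains, NOT results from the comment
before = [0.90, 0.85, 0.88, 0.92, 0.81]
after  = [0.90, 0.85, 0.87, 0.92, 0.81]
print(f"{mean_relative_drift(before, after):+.2f}%")
```

Whatever the exact definition, the useful property of a relative metric like this is that it aggregates forgetting across domains into one signed number.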

  Top labs may not need continual learning for foundation models. Every organization deploying fine-tuned models on their own data absolutely does.
  Different problem, both real.

  Try it: modelbrew.ai
