I’ve run into this problem a lot: I meet people at events and forget most of them because there’s no context saved. For me, it would only work if logging someone takes just a few seconds.
I’d also find it more useful if it helped after the event too, like reminding me who to follow up with. Maybe something like quick voice notes plus automatic follow-up nudges could solve that.
One thing I've noticed building domain-specific SaaS with AI assistance: the first few weeks feel like magic, but then the codebase becomes hard to maintain.
The issue isn't the AI output quality — it's that most builders (myself included, initially) use AI reactively. Ask a question, accept the answer, move on. No structure for maintaining context between sessions or verifying that new additions stay coherent with the existing system.
The builders who get the best results seem to treat Claude/Cursor more like a junior dev: useful, but you review everything, and you explicitly maintain shared context about the state of the project.
Domain-specific SaaS is actually a great use case for this because the problem space is bounded — you can give the AI a really tight context. "We are building scheduling and invoicing for pest control companies. Current architecture is X. Today we are adding Y." That specificity makes the output dramatically better than generic prompting.
Good luck with the build — the insight to go learn the domain in person before building is genuinely rare and gives you a huge moat.
It's not just tech: AI seems to be dominating all conversations online. It's either people talking about it or posting slop made by it. It's fascinating tech, but it's making the world a boring place.
It's not just tech; I think a lot of the internet is now about just one topic. It's a very fascinating topic, but it has taken over the zeitgeist, and the world is becoming a pretty boring place.
Most of my friends and coworkers are. They get that look when they see me, like "oh shit, he's going to tell us about https://getcampfire.dev again." And then I do: blabber on about agent coordination at internet scale without a walled garden, or just on your machine.
I made a spec and a reference implementation that does that and more: serverless, signed agent coordination. https://getcampfire.dev cf is the protocol; the trust web is a convention. By default, trust nothing.
The Clojure tablecloth performance numbers here are pretty surprising; I usually see Python/Polars dominating these benchmarks. I've been running similar transformations on transit data feeds, and Polars consistently outperforms pandas by 3x-5x on group-by operations, but I hadn't considered Clojure for the pipeline. Is anyone actually using tablecloth in production data workflows?
Well-written article. It does a great job walking through why any robust system will need what DSPy provides, though many libraries and frameworks will provide the basics: RAG, exponential back-off, etc.
DSPy's real value is its prompt optimization framework, which was barely mentioned. And that comes with requirements, like datasets and well-defined tasks, which not every project has. This is probably the main reason its user base is smaller (and happier) than that of projects like LangChain.
I found this earlier today when looking for research and ended up reporting it for citing fake sources. Please correct me if I'm wrong, but I couldn't find "[9] Jongsuk Jung, Jaekyeom Kim, and Hyunwoo J. Choi. Rethinking attention as belief propagation. In International Conference on Machine Learning (ICML), 2022." anywhere else on the internet.
"now run that unshielded wire 50 meters past racks of GPUs and enjoy your EMI"
By multipole expansion, the stray field falls off faster than 1/r^2.
Also, I'm not in the field (clearly), but GPUs can't handle 2.4 kHz? The quarter wavelength is about 30 km (λ/4 = c/4f = 3×10^8 / (4 × 2400) ≈ 31 km).
"nothing in that catalog is rated for 100kW–1MW rack loads at 800Vrms"
Current-wise, the catalog covers this just fine. As for the voltages, well, that's the whole point of AC! The voltage you need is but a few loops of wire away.
"you still need an inverter-based UPS upstream, which is the exact conversion stage DC eliminates"
So keep it? To clarify, this is the "we're too good for plebeian power, so we'll convert AC->DC->AC" setup, right?
"SiC solid-state DC breakers are shipping today from every major vendor"
Of course they do. They're also pricey and have limited current capability (both are capital costs, and therefore irrelevant when the industry is awash with GCC money), and they have higher conduction losses, and therefore more heat.
They're really nice, though.
"wide-bandgap converters are at 95%+ with no moving parts"
Transformers have no moving parts either. Loaded, they can do 97%+ efficiency; relative to a 95% converter, that's 2 MW of heat eliminated in a 100 MW center.
Does this work in the face of state changing out from under the socket? I'm not super familiar with low-level socket details, but I'm thinking of something like connect() returning EINPROGRESS, where you don't know whether the connection has completed. It may complete, it may fail, but during that window this state machine is invalid, I think. Strict logical programming like this seems to get much harder in the face of mutable state changing out from under the program, though that can probably be worked around with enough effort.
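For what it's worth, the conventional way to settle that in-between state is to wait for writability and then read SO_ERROR. A minimal Python sketch of that pattern (function name and timeout are mine, not from the article):

```python
import errno
import select
import socket

def connect_nonblocking(host, port, timeout=5.0):
    """Start a non-blocking connect and resolve the EINPROGRESS limbo.

    After connect() returns EINPROGRESS, the connection is neither up
    nor failed; select() for writability plus SO_ERROR settles which.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setblocking(False)
    err = sock.connect_ex((host, port))
    if err not in (0, errno.EINPROGRESS, errno.EWOULDBLOCK):
        sock.close()
        raise OSError(err, "connect failed immediately")
    # Writable means the handshake finished, one way or the other.
    _, writable, _ = select.select([], [sock], [], timeout)
    if not writable:
        sock.close()
        raise TimeoutError("connect timed out")
    # SO_ERROR tells us which way it went: 0 on success, errno otherwise.
    err = sock.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
    if err != 0:
        sock.close()
        raise OSError(err, "connect failed")
    return sock
```

The state machine only has to treat "connect pending" as its own state, with the writability event as the transition out of it.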
I've found brew so painful that I switched to Nix. Nix, unfortunately, is painful in its own way. However, I recently discovered devbox, which is a wrapper around Nix. It works really well as a package manager: just run "devbox global add <package>".
"Offloading all coding" is perhaps a misleading expression. Those who say they no longer write code are often describing a change in the kind of work they do, not that they've stopped producing code entirely. They spend more time on technical specification, architectural decisions, reviewing diffs, and figuring out where the model misinterprets intent, and less time actually typing code.
Your brownfield instinct is right, though. The productivity gap between "fixing it yourself" and "require → plan → evaluate → deploy → evaluate" only narrows when the task is large enough to justify the overhead, or when you're running parallel agents. For a two-line bug fix, the cost of context switching alone can negate the ROI.
I'm not exactly following who this is for: people are going to use email templates instead of writing Markdown emails, and agents can just as easily spit out HTML. This seems like a solution in search of a problem.
I can't imagine they were getting a good return on it. And frankly, nothing that came out of Sora was consequential in a positive way. The tech is cool, but it only works if the content generation is heavily guardrailed, and most of it ends up as content-farming fodder anyway.
At 100,000 A for a 100 MW data center at 1,000 V, speaker wire is a joke.
You obviously need at least a dozen strands in parallel!
Clearly skin effect scales with frequency, but 400 Hz is still low. Skin depth scales with the root of frequency, so it's only about 2.5x smaller than at line frequency, giving a skin depth of roughly 3 mm in copper. 3 mm on each side makes for a pretty hefty rectangular cross-section.
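The standard skin-depth formula backs those numbers up (assuming room-temperature copper resistivity and μr ≈ 1):

```python
import math

# Skin depth: delta = sqrt(rho / (pi * f * mu)) for a good conductor.
RHO_CU = 1.68e-8            # copper resistivity, ohm-m (assumed value)
MU_0 = 4 * math.pi * 1e-7   # vacuum permeability; mu_r ~ 1 for copper

def skin_depth(f_hz, rho=RHO_CU, mu=MU_0):
    return math.sqrt(rho / (math.pi * f_hz * mu))

print(skin_depth(60))    # ~8.4 mm at 60 Hz line frequency
print(skin_depth(400))   # ~3.3 mm at 400 Hz
```

The 60 Hz to 400 Hz ratio is sqrt(400/60) ≈ 2.6, which is where the "2.5x" comes from.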
How are you handling the data extraction? Is it a multimodal VLM (OCR+LLM) or a standard OCR engine feeding a separate LLM? I’ve been hitting a wall trying to understand how this is viable: the compute overhead for real-time analysis at scale seems massive without a serious backend. How are you managing the frequency?
I think this gets a lot worse when we look at it from an agentic perspective. When a developer hits a compromised package, there's usually a "hold on, that's weird" moment before a catastrophe. An agent doesn't have that instinct.
Oh boy, supply chain integrity will be an agent governance problem, not just a DevOps one. If you send out an agent that can autonomously pull packages, write code, or access credentials, then the blast radius of a compromise widens. That's why I think there's an argument for least-privilege by default: agents should have scoped, auditable authority over what they can install and execute, and require approval for anything outside those boundaries.
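To make "scoped, auditable authority" concrete, here's a toy sketch of an install gate; the allowlist, approval hook, and audit log are hypothetical illustrations, not any real framework's API:

```python
# Toy least-privilege gate for an agent's package installs.
# ALLOWED_PACKAGES and the approve() hook are hypothetical examples.
ALLOWED_PACKAGES = {"requests", "numpy"}   # the agent's scoped authority
AUDIT_LOG = []                             # every decision is recorded

def request_install(package, approve=lambda pkg: False):
    """Allow installs inside the boundary; escalate everything else."""
    if package in ALLOWED_PACKAGES:
        AUDIT_LOG.append(("auto", package))
        return True
    if approve(package):                   # explicit out-of-band approval
        AUDIT_LOG.append(("approved", package))
        return True
    AUDIT_LOG.append(("denied", package))
    return False
```

The point is that the default answer for anything outside the boundary is "no," and every decision leaves an audit trail.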
I've experienced the same issues with Claude Code. I think it's very important to sufficiently specify what you want to accomplish, both for the project overall and for the immediate task you're trying to complete (e.g., a new feature or bug fix). There are some good frameworks for this:
- https://openspec.dev/
- https://github.github.com/spec-kit/
For most applications, it is certainly possible to never write code and still produce something substantial. But in my experience, you have to be really diligent about specifications and unit/contract/e2e testing. Where I've run into trouble that required digging into the code is when creating software that uses new algorithms the model hasn't been trained on.
Initially I really had a bad taste in my mouth: it had forced me to close a business (video editing). Recently it's gone in a different direction, so I would say the "interest" part got a resurgence for me. I'm seeing all of these tools, people, and systems promise "can do this..." and "can do that...", but because I have a background in trust law and trust creation, I've looked at things differently.
I think the "can do" part gets boring, but now I'm paralleling this to trust relationships and fiduciary responsibilities. What I mean is that we can not only instruct an agent but also put a framework around it, much like we do with a trustee, where they are compelled to act in the best interests of the beneficiaries (in this case, the human that created them).
https://www.aljazeera.com/news/2003/9/22/us-plans-to-attack-...