mark_l_watson's comments

I have also found similarities with things in the Bhagavad Gita. Paramahansa Yogananda also writes on this topic.

I was disagreeing with just your last statement, but then I did a little research: about 80% of revenue is from chatbots and 20% from APIs. So +1 on your comment.

Bear with me here, I actually do have a point to make: I took my stepdaughter out for breakfast this morning. She is a financial wizard specializing in running large cities, and to explain to her the current craziness of overspending on AI infrastructure, I described "exponential spending increases for linear economic value increases." I may be wrong about this, but I am all for targeting the sweet spot of more efficient smaller AI models that are fit to purpose for specific use cases.


You're agreeing with GP but it's a flawed premise: a chatbot doesn't have to be limited to entertainment and trivia, as anyone who has used one for productivity knows.

A 3B resident-parameter MoE allows absolutely huge savings on inference costs. I use a cloud provider for models too large to run locally, and I can't wait for them to support qwen3-coder-next, hopefully in a few days.

So much expensive inference is provided free or at large discounts - that craziness should end.


I configured Claude Code to use a local model (ollama run glm-4.7-flash) that runs really well on a 32 GB M2 Pro Mac mini. Maybe my standards are too low, but I was using that combination to clean up the code, make improvements, and add docs and tests to a bunch of old git repo experiment projects.
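
In case it is useful, here is the rough shape of that kind of setup. This is only a sketch: the endpoint and placeholder values are assumptions, the environment variable names follow Claude Code's gateway settings, and depending on your Claude Code and Ollama versions you may need an Anthropic-compatible proxy (e.g. LiteLLM) sitting between the two.

    # point Claude Code at a local endpoint instead of api.anthropic.com
    export ANTHROPIC_BASE_URL=http://localhost:11434   # Ollama's default port, or your proxy's address
    export ANTHROPIC_AUTH_TOKEN=local-placeholder      # local servers generally ignore the value
    export ANTHROPIC_MODEL=glm-4.7-flash                # whatever model you pulled with: ollama pull glm-4.7-flash
    claude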

Did you have to do anything special to get it to work? I tried, and it would just bug out: it would respond with JSON strings summarizing what I asked of it, or just get things wrong entirely. For example, I asked it to summarize what a specific .js file did and it provided me with new code it made up based on the file name...

Yes, I had to set the Ollama context size to 32K.
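
For anyone hitting the same failure mode: Ollama's default context window is small, and agentic tools blow past it quickly. A minimal sketch of raising it (the derived model name below is just an example):

    # bake a larger context into a derived model (applies to API clients too)
    cat > Modelfile <<'EOF'
    FROM glm-4.7-flash
    PARAMETER num_ctx 32768
    EOF
    ollama create glm-4.7-flash-32k -f Modelfile   # hypothetical name for the derived model

    # or, for an interactive session only:
    #   ollama run glm-4.7-flash
    #   >>> /set parameter num_ctx 32768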

Thank you, it's working as expected now!

I want one of those t-shirts.

I was incredibly lucky to have been funded to write StarLisp code for the original CM-1 machine. CM-1 was a SIMD architecture; the later models were MIMD. Think of the physical layout as a 2D grid of processors with one edge used for I/O. That was a long time ago, so I may have the details wrong.


You can order them at the bottom, lucky you :)

A friendly counterpoint: my test of new models and agentic frameworks is to copy one of my half a zillion old open source git repos for some usually very old project or experiment - I then see how effective the new infra is for refactoring and cleaning up my ancient code. After testing I usually update the old projects and I get the same warm fuzzy feeling as I do from spring cleaning my home.

I also like to generate greenfield codebases from scratch.


I like the idea of a smaller version of OpenClaw.

Minor nitpick: it looks like about 2,500 lines of TypeScript (I am on a mobile device, so my LOC estimate may be off). Also, Apple container looks really interesting.


I surprise myself: at the international AI conference AAAI in 1982 some of the swag was a bumper sticker “AI It Is For Real” that I put on my car and left it there for years.

With that tedious history out of the way: even though I respect all the good work that has gone into this, and into standalone, mostly autonomous LLM-based tools, I am starting to feel repulsed by any use of AI beyond what I directly use for research, coding, studying non-tech subjects, etc. I think tools like Gemini Deep Research and NotebookLM are superb. Tools like Claude Code and Google's Antigravity are so well done, but I find it hard to get excited about them or to use tools like these more than once or twice a week (and almost always for less than 10 minutes a session).


I was going to agree with this sentiment until I realized that I listen to a few podcasts put out by individuals who spend a lot of time producing content. For me, it comes down to podcast quality vs. how many ads.

I read years ago that Hetzner placed data centers near inexpensive power, but I understand that the EU's energy situation has deteriorated. So you are correct: they have the larger energy problem to contend with.
