Hacker News | jedisct1's comments

Qwen3-Coder-Next also remains amazing as a local model.

If you want to use small models for coding, I'd highly recommend Swival https://swival.dev which was explicitly optimized for them.


MCP servers were a fad; virtually all of them are useless, and often counterproductive for agents that can run code and execute commands directly.

When agents struggle to quickly understand how to use tools, SKILLS provide a far better solution than MCP.

The real issue is that some agents, such as Jan or Claude Desktop, support MCP yet cannot execute any commands without it. With these agents, you can't even access remote APIs, making an MCP server necessary despite its limitations.


Same experience here.

Documentation-based skills don’t really work in practice. They tend to waste tokens instead of adding value.

CLI skills are also redundant when the CLI already provides clear built-in help messages. Those help messages are usually up to date, unlike separate skills that need to be maintained independently.

If the CLI itself is confusing (and would likely be confusing for humans as well) then targeted skills can serve as a temporary workaround, a kind of band-aid.

Where skills truly shine is when agents need to understand non-generic terms and concepts: unique product names, brand-specific terminology, custom function names, and other domain-specific language.
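For example, a skill that does nothing more than map domain-specific vocabulary can be a short SKILL.md file (the frontmatter-plus-markdown format used by Claude-style agent skills; the product names and identifiers below are invented for illustration):

```markdown
---
name: acme-glossary
description: Definitions of ACME-specific product names and internal terminology. Use when the user mentions unfamiliar ACME terms.
---

# ACME glossary

- "Conveyor": our internal name for the async job queue (not a physical device).
- "Roadrunner mode": the low-latency configuration profile in `acme.toml`.
- `acme_dispatch()`: the entry point for routing jobs; not to be confused with `acme_send()`.
```

The `description` field is what the agent sees in its context window, so it should state clearly when the skill applies.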


I strongly disagree about CLI help being a good enough solution. Skills with CLIs backing them are the gold standard right now for a reason.

1. Skills let the agent know the CLI is available because they get an entry in the context window.

2. They let you provide a ton of organisational knowledge and processes that the agent would have a hard time figuring out from the CLI alone.

3. It is just more efficient to provide quick information in a skill than it is to require an agent to figure out every detail from CLI help messages alone every single time.


Ouch. OVH are also going to increase their prices.

a_int and b_int are signed values.

It makes no difference whether they're signed or unsigned. Unless the subtraction checks for overflow, or uses a wider integer type than the numbers being compared, the high bit of the difference does not always indicate which number is smaller.

e.g.

    0x8000_0000 < 0x0000_0001 for signed numbers
    0x8000_0000 - 0x0000_0001 = 0x7fff_ffff, high bit clear

They are using a wider type.

Yes, looking at the source code on GitHub now cleared that up!

I didn't see it mentioned in the article, though; maybe I missed it. If not, I think this detail would be worth including, both because it's a common mistake that less experienced readers might make, and to get ahead of nitpicky comments like mine :)


Claude is good at writing code, not so good at reasoning, and I would never trust or deploy to production something solely written by Claude.

GPT-5.2 is not as good for coding, but much better at thinking and finding bugs, inconsistencies and edge cases.

The only decent way I found to use AI agents is by doing multiple steps between Claude and GPT, asking GPT to review every step of every plan and every single code change from Claude, and manually reviewing and tweaking questions and responses both ways, until all the parties, including myself, agree. I also sometimes introduce other models like Qwen and K2 into the mix, for a different perspective.

And gosh, by doing so you immediately realize how dumb, unreliable and dangerous code generated by Claude alone is.

It's a slow and expensive process, and at the end of the day it doesn't save me time at all. But, perhaps counterintuitively, it gives me more confidence in the end result. The code is guaranteed to have tons of tests and assertions for edge cases that I may not have thought about.


The AirTag is a fantastic device.

If only it were usable with an Android phone :(


Dual-network tags certainly exist, posted above.

https://www.amazon.com/Tracker-Locator-Android-Bluetooth-Fin...


Really cool.

But how do you use it instead of Copilot in VS Code?


Would love to know myself. I recall there was a VS Code plugin that did next edits and accepted a custom model, but I can't remember which one it was.


Run a server with ollama, and use the Continue extension configured for ollama.


I'd stay away from ollama; just use llama.cpp. It is more up to date, performs better, and is more flexible.
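For reference, a minimal llama.cpp setup looks something like this (the `.gguf` file name is a placeholder; point it at whatever model you've downloaded):

```shell
# Start llama.cpp's OpenAI-compatible HTTP server on a local model.
llama-server -m ./qwen3-coder.gguf --port 8080 -c 8192

# Any OpenAI-compatible client (e.g. the Continue extension) can then
# be pointed at http://localhost:8080/v1
```

Because the server exposes an OpenAI-compatible API, most editor integrations that accept a custom base URL will work with it.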


But you can't just switch between installed models like in ollama, can you?




To write Fastly VCL code, I strongly recommend XVCL https://dip-proto.github.io/xvcl/

It makes VCL much easier to write and read.

