Hacker News | new | past | comments | ask | show | jobs | submit | bschwindHN's comments | login

Very cool! Quick question: did you use a plugin to generate the NFC antenna?

The routing and layout look nice. The end result is great! I bet it was satisfying to get it working on the first try.


I used https://eds.st.com/antenna/#/ to get an antenna that fit with a target inductance of 4.7uH, and then used https://github.com/nideri/nfc_antenna_generator to create the footprint, which I slightly modified for the board! You can read a bit more about it in the journal (JOURNAL.md)!

It was really satisfying to get everything working (especially the NFC, because I've found RF to be a bit tricky), but the eink logic was actually a bit of a gamble: I broke my only eink display while prototyping, so the production batch was the first test of the driver. Always carry spare components when designing prototypes!


HN these days is filled with people saying basically "Show HN: I had an LLM shit out something I wanted, I didn't read it, but you should!".

And then a bunch of green new accounts commenting on how it's cool and they learned something. It's just a never ending attack on our attention.


The upcoming Baochip is an RV32 chip with an MMU, I believe.

https://www.bunniestudios.com/blog/2026/baochip-1x-a-mostly-...

Edit - Oops GeorgeHahn beat me to it


The UI fits right in, in a good way!

That's a cool slot machine reskin

I agree, I love code-based CAD but there needs to be a hybrid with GUI tools because selecting stuff with a mouse will almost always be easier.

I know there is research out there (can't dig it up at the moment), but the goal would probably be to generate a robust geometric query for a selected item, so that small changes in the model don't affect which edge gets selected after subsequent operations.

So if you extruded a face upwards, and then filleted all the edges at the top of that extrusion, this hybrid tool would generate a query which gives you all the top edges, instead of whatever the indices of those edges happen to be. I can't imagine it's an easy problem, though, to generate robust queries for all possible geometry a user might select.
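To make the idea concrete, here's a toy sketch of what such a query might look like. Everything here (the `Edge` type, `top_edges_of`) is hypothetical and invented for illustration, not from any real CAD kernel:

```python
# Hypothetical sketch of a "robust geometric query" for edge selection.
# Instead of storing the raw index of a picked edge, store a predicate
# that re-selects it after the model changes.
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    id: int
    z: float      # height of the edge's midpoint
    on_face: str  # name of the face the edge bounds

def top_edges_of(face: str, edges: list[Edge]) -> list[Edge]:
    """Select all edges bounding `face` at its maximum height,
    regardless of what their indices happen to be."""
    candidates = [e for e in edges if e.on_face == face]
    if not candidates:
        return []
    top_z = max(e.z for e in candidates)
    return [e for e in candidates if abs(e.z - top_z) < 1e-9]

# After a later operation reshuffles edge IDs, the query still means
# "all edges at the top of the extrusion".
edges = [Edge(0, 0.0, "extrusion"), Edge(1, 10.0, "extrusion"),
         Edge(2, 10.0, "extrusion"), Edge(3, 5.0, "other")]
print([e.id for e in top_edges_of("extrusion", edges)])  # → [1, 2]
```

The hard part, as noted, is generating a predicate like this automatically from a mouse click, for arbitrary geometry.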


> I know there is research out there (can't dig it up at the moment), but the goal would probably be to generate a robust geometric query for a selected item, so that small changes in the model don't affect which edge gets selected after subsequent operations.

There is quite a bit of research showing that this is impossible. No matter what algorithm or heuristic you use, the second that symmetry is introduced, the query breaks down. The only way to resolve those issues is to present them to the user as an underspecified constraint, and no geometric kernel is well designed to do that.


Almost certainly running some sort of O(n^2) algorithm on the chat text every key press. Or maybe just insane hierarchies of HTML.

Either way, pretty wild that you can have billions of dollars at your disposal, your interface is almost purely text, and still manage to be a fuckup at displaying it without performance problems.
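A minimal illustration of the accidental-quadratic failure mode (not ChatGPT's actual code, obviously — just the classic shape of the bug):

```python
# If the UI rebuilds the whole transcript on every keystroke, each
# keystroke costs O(total text so far), and the total work after n
# messages is 1 + 2 + ... + n = O(n^2).
def rerender_quadratic(chat_log: list[str]) -> str:
    """Naive rerender: repeated += over the whole log, every time."""
    out = ""
    for msg in chat_log:
        out += msg + "\n"
    return out

def rerender_linear(chat_log: list[str]) -> str:
    """Same output, built in O(n) with a single join."""
    return "\n".join(chat_log) + "\n"

def total_keystroke_cost(n: int) -> int:
    """Total work if every keystroke rescans everything typed so far."""
    return sum(range(1, n + 1))

print(total_keystroke_cost(1000))  # → 500500, vs 1000 for linear work
```

The fix is always the same: do work proportional to the change (the new keystroke), not to the whole document.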


When will you all learn that merely "telling" an LLM not to do something won't deterministically prevent it from doing that thing? If you truly want it to never use those commands, you better be prepared to sandbox it to the point where it is completely unable to do the things you're trying to stop.

Even worse, explicitly telling it not to do something makes it more likely to do it. It's not intelligent. It's a probability machine writ large. If you say "don't git push --force", that command is now part of the context window, dramatically raising the probability of it being "thought" about and appearing in the output.

Like you say, the only way to stop it from doing something is to make it impossible for it to do so. Shove it in a container. Build LLM safe wrappers around the tools you want it to be able to run so that when it runs e.g. `git`, it can only do operations you've already decided are fine.
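A sketch of what such a wrapper could look like — this is a hypothetical example (the allow list, flag list, and git path are all assumptions you'd tune for your own setup), placed earlier in `PATH` than the real `git`:

```python
#!/usr/bin/env python3
"""Hypothetical LLM-safe `git` wrapper: the agent's shell finds this
before the real binary, so it can only run pre-approved operations."""
import os
import sys

# Read-only / low-risk subcommands the agent may run (illustrative list).
ALLOWED = {"status", "diff", "log", "show", "branch", "add", "commit"}

# Flags that make otherwise-safe subcommands destructive.
FORBIDDEN_FLAGS = {"--force", "-f", "--hard"}

def is_allowed(args: list[str]) -> bool:
    """True only for an allow-listed subcommand with no forbidden flags."""
    if not args or args[0] not in ALLOWED:
        return False
    return not any(a in FORBIDDEN_FLAGS for a in args[1:])

def main(argv: list[str]) -> None:
    if not is_allowed(argv):
        sys.exit(f"blocked: git {' '.join(argv)}")
    # Delegate to the real git (path is an assumption; adjust per system).
    os.execv("/usr/bin/git", ["git", *argv])
```

Deny-by-default is the point: anything not explicitly approved is blocked, rather than the other way around.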


Even even worse, angry all-caps shouting will make it more stupid, because it pushes you into a significantly stupider vector subspace full of angry all-caps shouting. The only thing that can possibly save you then is if you land in the even tinier Film Crit Hulk sub-subspace.

I touch on this a bit in the piece I wrote for normies, it helped a lot of people I know understand the tech a bit better.


Is this true for anything beyond the simplest LLM architectures? It seems like as soon as you introduce something like CoT this is no longer the case, at least in terms of mechanism, if not outcome.

This is true for prohibitions, but claude.md works really well as positive documentation. I run custom MCP servers, and documenting what each tool does and when to use it made Claude pick the right ones way more reliably. Totally different outcome than a list of NEVER DO THIS rules, though; for those you definitely need hooks or sandboxing.

Yes, but this is probabilistic. Skills, documentation, etc. help by giving it the information it needs; you are then sampling from a more correct probability distribution. Fine for docs, tips, etc., but not good enough for mandatory things.

"more reliably" is still not "reliably".

The phrase "don't give them ideas" comes to mind.

Feels like a lot of people are still treating these tools like “smart scripts” instead of systems with failure modes.

Telling it not to do something is basically just nudging probabilities. If the action is available, it’s always somewhere in the distribution.

Which is why the boundary has to be outside the model, not inside the prompt.


Agree completely. The middle ground between "please don't" and full sandboxing: run a validation script between agent steps. The agent writes code, a regex check catches banned patterns, the agent has to fix them before it can proceed. Sandboxing controls what the agent can do. Output validation controls what it gets to keep. Both are more reliable than prompt instructions.
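A minimal sketch of that validation step (the banned patterns here are illustrative, not an exhaustive or recommended list):

```python
import re

# Patterns we never want to see in agent-written shell snippets.
# Illustrative examples only; a real list would be project-specific.
BANNED = [
    re.compile(r"git\s+push\s+.*--force"),
    re.compile(r"rm\s+-rf\s+/"),
    re.compile(r"--dangerously-skip-permissions"),
]

def violations(text: str) -> list[str]:
    """Return the banned patterns found in `text`; empty means it passes.
    Run between agent steps: reject the step until this comes back clean."""
    return [p.pattern for p in BANNED if p.search(text)]
```

Regex checks are crude (easy to evade, prone to false positives), which is exactly why they complement sandboxing rather than replace it.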

That’s right, because we’re not developers anymore— we orchestrate writhing piles of insane noobs that generally know how to code, but have absolutely no instinct or common sense. This is because it’s cheaper per pile of excreted code while this is all being heavily subsidized. This is the future and anyone not enthusiastically onboard is utterly foolish.

My point is exactly that you need safeguards. (I have VMs per project, reduced command availability etc). But those details are orthogonal to this discussion.

However "Telling" has made it better, and generally the model itself has become better. Also, I've never faced a similar issue in Codex.


> sandbox it to the point where it is completely unable to do the things you're trying to stop

Why are permissions for these "agents" on a default allow model anyway?


What do you mean? By default, Claude asks for permission for every file read, every edit, every command. It gets exhausting, so many people run it with `--dangerously-skip-permissions`.

It does not ask for permission for every file read, only those outside the project and not explicitly allowed. You can bypass project edit permission requests with “allow edits”, no need for “dangerously skip permissions”. Bash commands are harder, but you can allow-list them up to a point.

> so many people run it with `--dangerously-skip-permissions`

It's on the people then, not the "agent". But why doesn't Claude come with a decent allow list, or at least remember what the user allows, so the spam is reduced?


You have the option to "always allow command `x.*`", but even then. The more control you hand over to these things, the more powerful and useful (and dangerous) they become. It's a real dilemma and yet to be solved.
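For what it's worth, Claude Code does support a persistent project-level allow/deny list via a settings file; the exact rule syntax below is from memory and may not match the current schema, so treat it as a sketch:

```json
{
  "permissions": {
    "allow": [
      "Bash(git status)",
      "Bash(git diff:*)",
      "Read(./src/**)"
    ],
    "deny": [
      "Bash(git push --force:*)"
    ]
  }
}
```

That reduces the permission spam without handing over blanket control, though the allow list still has to be curated by hand.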

I use a script wrapper for git in my PATH for Claude, but as you correctly said, I'm not sure Claude will never spawn a new zsh with a different PATH...

> They had the infrastructure and custom SoCs and everything. What a waste.

What are they wasting, exactly?


Good riddance. AI video generation is not something humanity needs.

I don't really disagree, but the proper way to think about it is that with Sora, some of that ability was democratized. Now it will be available only to the rich and powerful (and nerdy). Humanity may not need it per se, but removing that option does not automatically make things better; not if the removal applies only to a portion of the population.

Nah, that's not the "proper" way to think about it, that's just your opinion.

As it stands today, AI video generation tools like Sora suck up useful energy and produce things that are useless at best (throwaway short form videos), and harmful at worst (propaganda, deepfakes).

Rich people were always going to do what they wanted anyway, "democratizing" that doesn't make the situation better.


>Rich people were always going to do what they wanted anyway, "democratizing" that doesn't make the situation better.

total disagree.

if you put vid gen in the hands of regular people then regular people get super-powered in that they begin to recognize the frame pacing, frame counts, and typical lengths and features of an AI video.

Do you know how many people have cited AI videos in this war? We'd all be better off if all of us were better at spotting fakes, rather than allowing the fakes to elicit hardcore emotional responses from every peon on the street.


I think you're overestimating the average person. We can give people direct, scientifically-backed evidence of something, and there will still be significant groups of people fervently denying it.

The resources (money, energy, opportunity cost of engineering time) put into AI video generation are better spent elsewhere. Not pouring resources into it would hopefully stunt its progress, making AI generated propaganda lower quality and easier to spot.


Even if that were true, the little quirks of private large-scale video models would be different from those of the public, cheap ones. If anything, it would just give the public a false sense of being able to detect AI videos, while they overlook the more subtle flaws of privately made ones.

So only rich people can propagandize? How is that better?

There are a lot of things it seems only rich people can do and get away with. It doesn't mean I support it or want them to do it, but that seems to be the reality.

If I may make an analogy, it would be like looking at rich corporations dumping toxic chemicals into our waterways, and saying "wow I wish I could dump toxic chemicals in the water too, not fair!"

The point is that if a rich person wants to do it, my only hope is that they have to spend a significant amount of their resources to do it, and that there would be immense negative social pressure against them when they do.


OpenAI never gave the community the weights. They always intended to monopolize it for corporate extortion, they didn't "democratize" shit.

There are open-source alternatives:

https://mochi1ai.com/

https://wan.video/

and others. There are free to use tools also.


> democratized

I really don't think that using that term is appropriate when there's a multi-billion-dollar American megacorporation involved in the activity in question.


HN loves to abuse the term to pretend it's somehow a good thing when one human being is in control of something.

> with Sora, some of that ability was democratized

No it didn't; OpenAI had control.

Saying Sora democratised video generation is like saying that landlords democratised home ownership.


Video production is already wildly democratized. AI did not lower the barrier to entry. Digital tools already did most of the legwork.

We'll let the market decide that rather than your emotional outbursts
