Hacker News | jackb4040's comments

Normal people don't feel the need to characterize their own blog posts as "sane"


Came here to post this. Plus they're all just calls to the same LLM providers under the hood anyway


> the free speech crowd when you disagree with the US state department


There is no major US social media company that doesn't have members of the Israeli military among its executives, HN included.


I find myself wondering what people would think if we swapped "Israeli" for "Chinese" in your reply. And why this double standard exists in all our minds.

Clearly you can't run a country when your elites owe their first allegiance to somewhere else.


"It's fine if we lie because we're the good guys"


I searched their site for any information on "how" they can claim it's safe for kids. This is what I could find: https://stickerbox.com/blogs/all/ai-for-kids-a-parent-s-guid...

> No internet open browsing or open chat features.

> AI toys shouldn’t need to go online or talk to strangers to work. Offline AI keeps playtime private and focused on creativity.

> No recording or long-term data storage.

> If it’s recording, it should be clear and temporary. Kids deserve creative freedom without hidden mics or mystery data trails.

> No eavesdropping or “always-on” listening.

> Devices designed for kids should never listen all the time. AI should wake up only when it’s invited to.

> Clear parental visibility and control.

> Parents should easily see what the toy does, no confusing settings, no buried permissions.

> Built-in content filters and guardrails.

> AI should automatically block or reword inappropriate prompts and make sure results stay age-appropriate and kind.

The thing users here obviously know, and that "kid-safe" product after product has proven, is that safety filters for LLMs are generally fake. Perhaps they can exist some day, but a breakthrough like that isn't going to come from an application-layer startup like this. Trillion-dollar companies have been trying and failing for years.

All the other guardrails are fine but basically pointless if your model has any social media data in its dataset.


They fail their own checklist in that article.

> Here’s a parent checklist for safe AI play:

> [...] AI toys shouldn’t need to go online

From the FAQ:

> Can I use Stickerbox without Wi-Fi?

> You will need Wi-Fi or a hotspot connection to connect and generate new stickers.


I'm sure you are correct about being able to do some clever prompting or tricks to get it to print inappropriate stickers, but I believe in this case it may be OK.

If you consider a threat model where the threat is printing inappropriate stickers, who are the threat actors? Children attempting to circumvent the controls? If they already know about the topics they shouldn't be printing and are actively trying to get the toy to print them, they probably don't truly _need_ the guardrails at that point.

In the same way many small businesses don't opt (and most likely can't afford) to put security controls in place that are only relevant to blocking nation-state attackers, this device really only needs enough controls to prevent a child from accidentally getting an inappropriate output.

It's just a toy for kids to print stickers with, and as soon as the user is old enough to know or want to see more adult content they can just go get it on a computer.


ChatGPT has similar guardrails in place, and has now allegedly encouraged minors to commit self-harm. There is no threat actor; it's not a security issue. It's an unsolved, and as far as we know intrinsic, problem with LLMs themselves.

The word "accidentally" is slippery; our understanding of how accidents happen with software systems does not apply to LLMs.


North Korean defectors are well known for being unreliable sources. They rarely have skills or social connections and are thus massively incentivized to join the existing markets for anti-NK propaganda in both the West and India.

Think of obvious propaganda like "it's a crime to have the same haircut as Kim Jong-Un" (or to not have it, depending on the source). No one is saying life there is great, but there is a track record of fantastically untrue stories.


If they rarely have skills or social connections, how were they able to achieve the feat of escaping from the Hermit Kingdom in the first place? It seems to me that this high bar selects for people who can thrive in adverse circumstances.


The original title is "US special forces killed North Korean civilians in botched 2019 mission". Your change is... questionable


What's questionable about it? It was just shorter.


It totally changes the context?

It removes the focus on the killing of civilians and makes it "aw shucks, the mission didn't go as planned".


The bit where killing people is just an oopsie.


> The key question now is: what data do we need, exactly?

What if I told you that's not the key question, and the "more data" approach has obviously and publicly hit a wall that requires causal reasoning to move past?


Then I would disagree.


I have a node app with one-off scheduled tasks. Between node-cron and real Linux cron, I went with real cron, because node-cron just polls every second, which is extremely inefficient, and I'm on a free tier.

How does your library work in this regard? If my node server is down, will my scheduled tasks still execute? I notice you have a .start() method; what does that do? Is it polling periodically?
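For context on the polling concern: an in-process scheduler doesn't have to wake up every second; it can compute the delay to the next run and arm a single timer. A minimal sketch of that approach (purely illustrative — the names `msUntilNextRun` and `scheduleDaily` are mine, and this is not how node-cron or the linked library actually works):

```javascript
// Milliseconds from `now` until the next occurrence of hour:minute.
function msUntilNextRun(hour, minute, now = new Date()) {
  const next = new Date(now);
  next.setHours(hour, minute, 0, 0);
  if (next <= now) next.setDate(next.getDate() + 1); // already passed today
  return next - now;
}

// One timer instead of a wake-up every second; re-arm after each run.
function scheduleDaily(hour, minute, task) {
  return setTimeout(() => {
    task();
    scheduleDaily(hour, minute, task);
  }, msUntilNextRun(hour, minute));
}
```

Note this still shares real cron's main limitation in reverse: the timer lives in the node process, so if the server is down at the scheduled time, the run is simply missed unless the library persists pending jobs somewhere.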


