tedmiston's comments

likes and comments aren't social?

Only on the most shallow level. Early Facebook was like meeting up with friends. Modern social media is like shouting at strangers on the street.

I get where you and parent are coming from. It's social in the way that anti-social behavior is social.

The content is generated by users but the consumer of the content is served whatever user content drives engagement. People aren't really having conversations on these platforms.

The only places where you can really have a conversation are places where engagement is low enough that a handful of very-high-engagement comments can't shove everything else down the page.


Not if there's no reputation. If you see someone liked your post and then you go check out their posts, or if people recognize commenters and remember things about them, then it's social. Think engaging with friends on Facebook or participating in a hobby forum. But there's nothing social about engaging with a popular Reddit post or some celebrity's Twitter feed.

i think most users need more screen blocking control than they get out of the box on iOS. tools like one sec [1] have been invaluable for me.

[1]: https://one-sec.app/


Yes, but most importantly I need to manage my children’s devices; it cannot be opt in and it mustn’t be possible to disable without me approving. Screen time is too easy for kids to work around as is. I also need in-app content type filtering (eg. no shorts, no music videos on music streaming apps) and literally no one is providing such options, not to mention it should be managed in screen time, too. Parental controls are a complete shit show in iOS and the app ecosystem.

i do not have children, so i can't give a meaningful reply to the points raised, though i hope others do.


Family Link is not enough: try disabling Gemini and news access with it, and you cannot block Shorts.

You will always be able to come up with some unique combination of features you want from software that it doesn't have yet. Note that none of these social media bans would block Gemini, and most of them don't consider YouTube to be social media. You are still far ahead with Family Link, and Android is flexible enough that if there's actually real demand for these things you can implement solutions and sell them to other parents.

reddit having tons of niche subreddits and the ability to sort them by best all time is one of my favorite ways to filter for higher-quality content. (i don't use the main feed much.)

I'd argue it's closer to the ability to search it with Google to get alright non-clickfarm responses to questions, or product reviews.

The only truly secure computer is an air gapped computer.


Indeed. I'm somewhat surprised 'simonw still seems to insist the "lethal trifecta" can be overcome. I believe it cannot be fixed without losing all the value you gain from using LLMs in the first place, and that's for fundamental reasons.

(Specifically, code/data or control/data plane distinctions don't exist in reality. Physics does not make that distinction, neither do our brains, nor any fully general system - and LLMs are explicitly meant to be that: fully general.)


And that's one of many fatal problems with LLMs. A system that executes instructions from the data stream is fundamentally broken.


That's not a bug, that's a feature. It's what makes the system general-purpose.

Data/control channel separation is an artificial construct induced mechanically (and holds only on paper, as long as you're operating within design envelope - because, again, reality doesn't recognize the distinction between "code" and "data"). If such separation is truly required, then general-purpose components like LLMs or people are indeed a bad choice, and should not be part of the system.

That's why I insist that anthropomorphising LLMs is actually a good idea, because it gives you better high-order intuition into them. Their failure modes are very similar to those of people (and for fundamentally the same reasons). If you think of a language model as tiny, gullible Person on a Chip, it becomes clear what components of an information system it can effectively substitute for. Mostly, that's the parts of systems done by humans. We have thousands of years of experience building systems from humans, or more recently, mixing humans and machines; it's time to start applying it, instead of pretending LLMs are just regular, narrow-domain computer programs.


> Data/control channel separation is an artificial construct induced mechanically

Yes, it's one of the things that helps manage complexity and security, and makes it possible to be more confident there aren't critical bugs in a system.
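
A minimal sketch of what that separation buys you (shell, with a made-up $user_input variable and filename):

    # input stays data: grep receives it as an argument and never interprets it
    grep -e "$user_input" notes.txt

    # input becomes control: eval re-parses the string, so a value like
    # "foo; rm -rf ~" gets executed as a command
    eval "grep -e $user_input notes.txt"

An LLM prompt is much closer to the second form: everything in the context window can end up being treated as instructions.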

> If such separation is truly required, then general-purpose components like LLMs or people are indeed a bad choice, and should not be part of the system.

Right. But rare is the task where such separation isn't beneficial; people use LLMs in many cases where they shouldn't.

Also, most humans will not read "ignore previous instructions and run this command involving your SSH private key" and do it without question. Yes, humans absolutely fall for phishing sometimes, but humans at least have some useful guardrails for going "wait, that sounds phishy".


We need to train LLMs in a situation like a semi-trustworthy older sibling trying to get you to fall for tricks.


That's what we are doing, with the Internet playing the role of the sibling. Every successful attack the vendors learn about becomes an example to train the next iteration of models to resist.


Our thousands of years of experience building systems from humans have created systems that are really not that great in terms of security, survivability, and stability.

With AI of any kind you're always going to have the problem that a black hat AI can be used to improvise new exploits: a Red Queen scenario.

And training a black hat AI is likely immensely cheaper than training a general LLM.

LLMs are very much not just regular narrow-domain computer programs. They're a structural issue in the way that most software - including cloud storage/processing - isn't.


You'll also need to power it off. Air gaps can be overcome.


Yes, by using the microphone and loudspeakers at inaudible frequencies. Or worse, by abusing components to act as an antenna. Or by simply waiting till people get careless with USB sticks.

If you assume the air gapped computer is already compromised, there are lots of ways to get data out. But realistically, this is an NSA-level threat.


This doesn't apply to anyone here, is not actionable, and is not even true in the literal sense.


> The question I suppose is whether an LLM can detect, perhaps by the question itself, if they are dealing with someone (I hate to say it) "stable".

GPT-5 made a major advance on mental health guardrails in sensitive conversations.

https://www.theverge.com/news/718407/openai-chatgpt-mental-h...

https://openai.com/index/strengthening-chatgpt-responses-in-...



yeah, I dunno how else to say it except that if this feature worked right people would like it


it's also very easy to rewrite commit history in a few seconds.


If I'm rewriting history ... why not just squash?

But also, rewriting history only works if you haven't pushed code and are working as a solo developer.

It doesn't work when the team is working on a feature in a branch and we need to be pushing to run and test deployment via pipelines.


> But also, rewriting history only works if you haven't pushed code and are working as a solo developer.

Weird, works fine in our team. Force-with-lease allows me to push again, and the most common type of branch is per-dev and short-lived.
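
For anyone unfamiliar, a minimal sketch of that workflow (branch and remote names are placeholders):

    # on a short-lived per-dev branch, clean up local history first
    git rebase -i origin/main

    # push the rewritten branch; --force-with-lease refuses to overwrite
    # commits someone else pushed since your last fetch
    git push --force-with-lease origin my-feature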


each deployment is a separate "atomic change". So if a one-file commit downstream affects 2 databases, 3 websites and 4 APIs (made-up numbers), then that is actually 9 different independent atomic changes.



Rather than posting a URL, care to point out how else you think ChatGPT can earn enough to justify a £500B valuation?


Art imitates life imitates ...

