Hacker News | lambdas's comments

Nothing a little digital lisdexamfetamine won’t solve


Hmmm, that's an area of study I'd never have considered before: Digital Psychopharmacology, Artificial Behavioral Systems Engineering. If we accept these things as minds, why not study temporary perturbations of state? We'd need to be saving a much more complicated state than we are now, though, right? I wish I had time to read more papers.


Here's a neural network concept from the '90s where the neurons are bathed in diffusing neuromodulator 'gases', inspired by nitric oxide action in the brain. The gases provide slow, semi-local dynamics for the genetic-algorithm (GA) meta-parameter optimization to exploit. You could change these networks' behavior by tweaking the neuromodulators!
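A toy sketch of the mechanism (my own simplification with hypothetical names, not the authors' actual equations): an emitting neuron releases a gas whose concentration decays with distance, and that concentration modulates the gain of nearby neurons' transfer functions.

```python
import numpy as np

def gas_concentration(positions, emitter, radius=1.0):
    # Concentration is 1 at the emitting neuron, falling linearly
    # to 0 at `radius` -- a crude stand-in for diffusion.
    d = np.linalg.norm(positions - positions[emitter], axis=1)
    return np.clip(1.0 - d / radius, 0.0, None)

def activate(weighted_input, base_gain, gas):
    # The local gas level raises the effective gain of the
    # neuron's tanh transfer function.
    return np.tanh((base_gain + gas) * weighted_input)
```

Tweaking the gas parameters (emission radius, decay) changes network behavior without touching a single connection weight, which is what makes it a separate knob for the GA.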

https://sussex.figshare.com/articles/journal_contribution/Be...

I'm not an author. I followed the work at the time.


Neuromodulation is an extremely interesting idea for generative diffusion models.


This is kind of what Golden Gate Claude was.

A perturbation of the activations that made Claude identify as the Golden Gate Bridge.

Similarly, the more recent research showing anxiety and desperation signals predicting the use of blackmail opens the door for digital sedatives to suppress those signals.

Anthropic has been mostly cautious about avoiding this kind of measurement and manipulation during training. If it is done during training, you might just train the signals to be undetectable, and consequently unmanipulable.


> A perturbation of the activations that made Claude identify as the Golden Gate Bridge.

Great, now we've got digital Salvia


Golden Gate Claude was two years ago, and it's surprising there hasn't been more research into targeted activations since.


There's been some, but naive activation steering makes models dumber pretty reliably, and training a sparse autoencoder (SAE) is a pretty heavy lift.
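For intuition, naive steering can be sketched in a few lines (hypothetical names; assumes you already have a feature direction, e.g. from an SAE or a contrast pair of prompts):

```python
import numpy as np

def steer(hidden, direction, alpha=8.0):
    # Add `alpha` units of a feature direction to a hidden-state
    # vector at one layer. `alpha` trades steering strength
    # against output coherence.
    unit = direction / np.linalg.norm(direction)
    return hidden + alpha * unit
```

Crank `alpha` up and the outputs drift off-distribution, which is the reliably-dumber failure mode.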


Right, there's a lot of research on LLM mental models and also how well they can "read" human psychological profiles. It's a cool field.


I think that was an intro to a DJ Dieselboy set... beyond the black bassline. Nope, nope. Close though.


neat idea!



I don’t feel their stance is “I’m not getting enough attention and it’s all Musk’s fault and I’m leaving”.

More "X is simply not worth our time anymore". I can't say with any certainty that X is in a death spiral (personally it does feel that way), but the kind of crowd who have remained in spite of Musk's many public embarrassments (and the handling of Grok deepfakes of women) probably aren't the kind who are passionate about the EFF.


If that was really true, they wouldn't make a big post about why they are leaving, they would just turn off the lights and go elsewhere.

The problem for the EFF is that they don't have anywhere else to go with nearly the reach of Twitter. Bluesky has only 15 million monthly active users. They could pin their hopes on Facebook, but it's hard to think of a criticism of Twitter that wouldn't apply to Facebook.

Basically, the problem for the EFF and a lot of the progressive activist orgs out there is that they want a mass global audience but a platform with progressive activist moderation. That was possible in the heyday of the Biden Administration, but starting with Musk's purchase of Twitter and firing of much of the progressive activist staff, together with the loss in the Missouri v. Biden consent decree, it's getting harder to find a truly mass-audience social media platform that is willing to enforce progressive activist social norms.

As this realization sinks in, we are seeing organization after organization rage-quit the mass-market platforms and join niche platforms that are moderated to their niche tastes (e.g. Mastodon, Bluesky, etc.), and this is just one example of that. The EFF of old would never have seen this as a problem, but for the present-day EFF it's a big problem.

Another option is a medium without engagement at all. You post your stuff and that's it; others can quote/amplify it but not comment. No zingers, no mocking quote-tweets, no clapbacks, etc. I think an organization like the EFF could tolerate that: a pure write-only medium where you make a PR announcement that gets a lot of attention but isn't subject to any disparagement.

Big orgs would love a system like that, but I'm not convinced it could draw a lot of eyeballs.


A pharmacist is someone who is a chemical practitioner though?

“Man, these cryptographers didn’t know a thing about tailwind. Useless!”


Ah, but only a truly great writer could have come up with:

> Start reading books or you're going to look stupid to the people around you

Wherein the prose wasn't at all sloppy, the tautology was certainly intentional, and the implied audience of "look stupid" could be people entirely absent from the vicinity!


Mitchell Hashimoto doesn't need LLMs; LLMs need Mitchell Hashimoto.


This was a great interview with Mitchell: https://youtu.be/WjckELpzLOU

He covers his LLM uses too! Highly recommend, and Mitchell's thoughts on open source inspired me to start contributing to projects outside of my common experience.


Ewww, hero worship.


I was more getting at the angle that when people say things like "Wow, I asked AI to code a terminal emulator and it got it mostly right!", it's not because the LLM is amazingly smart purely by inference; it's been trained on the appropriated code of individuals like the above.


I've never used them first hand, but crackpots sure do love claiming to solve the Riemann hypothesis, P vs NP, the Collatz conjecture, etc., and then peddling out some huge slop. My experience has solely been curiously following what the LLMs have been generating.

You have to be very, VERY careful. With how predisposed they are to helping, they'll turn to "dishonesty" rather than just shut down and refuse. What I tend to see is they get backed into a corner and do something like prove a different statement under the guise of the one you asked for:

They'll create long pattern-matching chains, building labyrinths of state machines.

They'll keep naming functions, values, and comments to seem plausible, but you have to follow these to make sure they are what they say. A sneaky little trick is to drop important parameters in functions: they appear in the call but are never used in the actual body.

They'll do things like taking a Complex value but only working with its real projection, rounding a number, creatively making negatives disappear via abs, etc.
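To illustrate the dropped-parameter trick in Lean 4 (a hypothetical example, not from any real attempt): the hypothesis `hp` makes the statement look like it's about primes, but it's never used, so the theorem proved is strictly weaker than the name advertises.

```lean
-- Hypothetical example: `hp` appears in the signature but not in the
-- proof body, so despite the name this holds for every n, prime or not.
theorem add_zero_of_prime (n : ℕ) (hp : Nat.Prime n) : n + 0 = n :=
  rfl
```

The proof checks, but the primality assumption is dead weight, and only reading past the name reveals that.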

So even when it compiles, you’ve got the burden of verifying everything is above board which is a pretty huge task.

And when it doesn’t work, introducing an error or two in formal proof systems often means you’re getting exponentially further away from solving your problem.

I’ve not seen a convincing use that tactics or goals in the proof assistant themselves don’t already provide


>So even when it compiles, you’ve got the burden of verifying everything is above board which is a pretty huge task.

Is this true?

e.g. the Riemann hypothesis is in mathlib:

  def RiemannHypothesis : Prop :=
    ∀ (s : ℂ) (_ : riemannZeta s = 0) (_ : ¬∃ n : ℕ, s = -2 * (n + 1)) (_ : s ≠ 1), s.re = 1 / 2

If I construct a term of this type without going via one of the (fairly obvious) soundness holes or a compiler bug, it's very likely proved, no? No matter how inscrutable the proof is from a mathematical perspective. (Translating it into something mathematicians understand is a separate question, but that's not really what I'm asking.)


Sorry, I meant: verify that the semantics of what the LLM has generated are exactly what you were asking for.


I don't understand that. If it has a correct statement of the theorem and no `believe-me`s or whatever, it should be correct.


> introducing an error or two in formal proof systems often means you’re getting exponentially further away from solving your problem

I wish people understood that this is pretty much true of software building as well.


Something running an SSH server service, yes.

A decade plus ago, you could ssh into localhost on iOS, but that got nipped in the bud with sandboxing.


The Online Safety Act passed in the UK on 26/10/2023, aligning suspiciously closely with the mysterious advent of OpenAI's screening tool.


Only if the shop assistant took your ID, photocopied it and stored it in a box marked “do not touch” under the counter, alongside transcriptions of everything you ever say inside the store.


Those pesky whistleblowers, journalists, and political dissidents have had it good for far too long. They’ve needed taking down a peg


That's a strawman. It's such a tiny fraction of the activity on the internet that it's irrelevant to any large-scale discussion of the internet.


Just like how school shooters are irrelevant to any large scale discussion of education? It is very relevant and impactful and you can't just hand-wave it away by saying percentages mean it doesn't count.

