Hacker News | mlyle's comments

Spy chips could be just slightly different firmware for... any number of different things. It could be pretty stealthy, too.

What you describe is a software Advanced Persistent Threat, not a "spy chip" as reported by Bloomberg. People have been reverse engineering firmware since forever, and no evidence of booby-trapped firmware has been found or reported.

Us splitting hairs is moot: the claims of subversion - whether by sw or hw - were unsubstantiated and uncorroborated, and remain so to date.


Run continuously, non-delayed, but only sweep the order book at a random time every [1,2) seconds. Run for something like our current extended market hours.

Everyone gets the benefit of fast-enough execution and strong liquidity.

Crazy high-frequency gamesmanship goes away. Smart quantitative plays are still possible.
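
A rough sketch of the mechanism, assuming a hypothetical order_book object whose match_all() crosses every marketable order at once:

    import random, time

    def run_randomized_sweeps(order_book, market_is_open):
        # Orders rest and cancel continuously; execution only happens at the sweeps.
        while market_is_open():
            time.sleep(random.uniform(1.0, 2.0))  # next sweep lands at a random point in [1, 2) s
            order_book.match_all()  # hypothetical: cross all marketable orders in one batch

Since nobody can predict the next sweep instant, shaving microseconds off the wire stops paying.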


Diversifying away from a NASDAQ-tracking index as a component of my investments will be extremely costly in taxes. Maybe more costly than the gavage (as the NASDAQ/SpaceX folks seem to be betting).

And most people won't even be informed that this is happening.

Large markets need to be run in the public interest...


> high-bandwidth, low-latency mesh network in a contested electronic environment.

Hard to win at jamming, when you're further away and the opponents are frequency agile.

1. They can use directionality more effectively to their advantage

2. Inverse square law works against you (unlike e.g. jamming GPS where it works for you).

3. They can be frequency agile, strongly rejecting everything outside of the 20MHz slice they're using "right now"-- and have choices of hundreds of those slices.

Fighters already have radars that they expect to "win" with, despite radar being an inverse-fourth-power problem at longer range and against countermeasures. They can send communications-ish signals anywhere over a couple-GHz span up near X-band. The peak EIRP they put out isn't measured in kilowatts but in tens of megawatts.
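
To put rough, made-up numbers on point 2 (free-space geometry only, not a real link budget):

    # Assumed distances purely for illustration.
    friendly_link_km = 20.0     # wingman-to-fighter link distance
    jammer_standoff_km = 200.0  # standoff jammer's distance to the same receiver

    # Received power falls off as 1/R^2, so the jammer needs (R_j / R_f)^2 times
    # the friendly EIRP just to break even -- before receive-antenna directionality
    # and frequency hopping make its job harder still.
    required_power_ratio = (jammer_standoff_km / friendly_link_km) ** 2
    print(required_power_ratio)  # 100x, i.e. a 20 dB hole from geometry alone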


Fair point, “jammed” was too binary.

My concern is less total link loss than what happens under degraded or intermittent connectivity. If the wingman still depends on the manned aircraft for tasking or weapons authority, then the interesting question is how it behaves when the link is noisy rather than gone.

That feels like the real hinge in the concept.


Sometimes something good (ATT). Sometimes something bad (this terrible age-verification thing that is a huge barrier to entry for small entrants and comes with massive state surveillance risk).

In the end, all the little people are just collateral damage or occasionally they get some collateral benefits from wherever the munitions land.


If ATT had been applied uniformly, sure. But Apple has an exemption from its own rules. So, less trickle-down benefit, and more tilting the playing field wildly in their favor. Its new advertising system is doing great!

I don't think the online advertising field is tilted "wildly in Apple's favor". Yes, Apple squeaked out one area of advantage, eliminating some crushing abuse by others in the process.

In a sane world, no one would have the kind of market power that so much hinges upon their competitive actions.


Personally I've lived in the world of "small entrants" and can see that, but I think the average voter doesn't really understand that "just anybody" could have created an online service. That is, they think you have to have VC money, be based in Silicon Valley, and have connections at the app store -- that it's a right for "them" and not for "us".

This isn't about the average voter-- this is about an entrenched industry creating structural barriers to entry to protect growing monopoly power.

> Cost of enforcement matters. The exact same nominal law that is very costly to enforce has completely different costs and benefits than that same law becoming all but free to rigidly enforce.

Hey, I really like this framing. This is a topic that I've thought about from a different perspective.

We have all kinds of 18th and 19th century legal precedents about search, subpoenas, plain sight, surveillance in public spaces, etc... that really took for granted that police effort was limited and that enforcement would be imperfect.

But they break down when you read all the license plates, or you can subpoena anyone's email, or... whatever.

Making the laws rigid and having perfect enforcement has a cost-- even just the baseline loss of privacy and the squashing of innocent transgression is a cost.

(A counterpoint: a lot of selective law enforcement came down to whether you were unpopular or unprivileged in some way... cheaper and automated enforcement may take some of these effects away and make things more fair. Discretion in enforcement can lead to both more and less just outcomes).


This is my problem with Americans and their "but the constitution" arguments.

The U.S. constitution was written in an age before phones, automatic and semi-automatic rifles (at least in common use), nuclear weapons, high-bandwidth communications networks that operate at lightning speed, mass media, unbreakable encryption, and CCTV cameras.


The problem is that "all sides" agree that if the constitution were written today, surprise, surprise, it'd totally agree with them; the gun control people are sure that the 2nd wouldn't cover military weapons, and the gun lovers are sure that it would mandate tanks for everyone.

But since having 300 million people have a detailed, nuanced discussion about anything is impossible, everyone works at the edges.


I think their point was that a lot (but not all) of the existing argument boils down to “Well it should be that way because someone decided it hundreds of years ago” so if we are consciously starting again from scratch, ideally that specific argument no longer holds water. (I’d say we should instead use data based approaches, look at what has been successful in other countries, etc, although that’s slightly expanding the current topic.)

One big difference between the UK's historic constitutionalia and the US is that the UK generally recognises that we only do things a certain way because agreeing how to change them is too hard, while the US appears to think that they do things in their certain way because that's the right way to do them.

Specific examples for the UK: inducting politicians into the Privy Council in order to qualify them for security briefings, Henry VIII powers, and ministers' authority deriving from the seal they're given by the sovereign. Which would almost make as much sense if it were a marine mammal as it does being a stamp.

The thing being, they work well enough. And if you want to replace them, you need to work out what to replace them with and how.


Modern democracy starts to make a lot more sense when you realize the driving principles are "what works easy enough" and "how do we prevent getting to the point of violent revolution".

I think the fundamental issue is that a form of equality where everyone gets what was previously the worst outcome is... probably worse.

Many times when politicians get to suffer the full effects of their laws, the laws quickly change for the better.

The models don't consider these because there's considerable uncertainty as to the size of these effects and potential countervailing forces of similar magnitudes.

The fact is, for all of these other secondary effects etc... we just don't know. It's too complicated of a system.

So as a result, we've got a prediction of something between "somewhat bad" and "catastrophically-is-an-understatement bad" with a maximum likelihood estimate of "really really bad."


> ... we just don't know. It's too complicated of a system.

I wish this comment was higher up.

The big thing under-discussed about climate change is that the deeper we get into it, the harder it is to predict and understand.

I recall Dr. Richard Alley discussing how a Thwaites Glacier collapse wasn't factored into any IPCC reports, but he ultimately pointed out it was for good reason: it's simply not possible to model these things and their consequences accurately.

I don't do any climate modeling, but I do a lot of other modeling and forecasting: the biggest assumption we make in all statistical models is that the system itself more or less stays statistically similar to what it currently is and what we have seen in the past. As soon as you drop that assumption, you're increasingly in the world of wild guessing. If you wanted me to build you a RAM price prediction model 2 years ago, I could have done a pretty good job. Ask for one today and you're better off asking someone with industry experience but no modeling experience what they think might happen.

This is the hidden threat of climate change most people are completely unaware of: we know it will be bad when certain things happen, and we know those things will happen in the nearish future, but we can't really say exactly how and when they'll unfold with any meaningful confidence.


And yet we can still say something simple that is true: warming will accelerate due to non-human greenhouse gas emissions as the planet continues to warm, due to feedback loops and tipping points in the natural carbon cycle. This is an unassailable statement.


> This is an unassailable statement.

No. I believe what you're saying is very likely to be true, but we know there's both positive and negative feedback, and we don't really know how they will interplay or where all the tipping points are.

There may even be significant phase delay in these mechanisms and so we could even get oscillation.


Over time periods in excess of 10K years this is a reasonable caveat. For more human-oriented timelines, there's no negative feedback mechanism I'm aware of that would do anything close to producing an actual oscillation.

Edit: I'd be happy for you to educate me how I'm wrong btw, since that would mean I've missed something significant, which would make me happy! So please do tell me if you know of such a mechanism.


I really meant to say that there's no way to know, for any region of the CO2 vs. temperature graph, whether positive or negative feedback dominates. You're proposing it all runs away in one lump, and I'm saying that there can be chunks of runaway and then damping. An extreme case would be if things are really underdamped somewhere and we spiral down to one of these points.

There are all kinds of things that have time lags from years to centuries, though, that could cause ringing (ocean heat uptake, rates of carbon uptake as the biosphere adapts and shifts, etc).

Indeed, we have evidence of ringing in the geologic climate record-- like Dansgaard-Oeschger events. We also live with ringing in weather systems like El Nino. Warming intensifying or creating new modes of oscillation would not be that surprising.
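
A toy illustration of the phase-delay point (not a climate model, just a feedback loop that reacts to where the system was rather than where it is):

    lag = 10      # assumed delay, in steps, between forcing and response
    gain = 0.1    # assumed feedback strength
    buf = [0.0] * lag             # circular buffer holding the lagged state
    x, target = 0.0, 1.0
    trajectory = []
    for n in range(300):
        delayed = buf[n % lag]    # the state as it was `lag` steps ago
        x += gain * (target - delayed)
        buf[n % lag] = x
        trajectory.append(x)
    print(max(trajectory))        # well above 1.0: overshoot, then damped ringing

Even that trivial structure overshoots and rings; the real system has lags on many timescales stacked on top of each other.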


If those customers end up profitable, it could be tempting for nVidia to vertically integrate.

I don't think it's as easy as others say, though.


There's nothing to say that you can't build something intelligent out of them by bolting a memory on, though.

Sure, it's not how we work, but I can imagine a system where the LLM does a lot of the heavy lifting and lets more expensive, smaller networks that train during inference, plus RAG systems, learn how to do new things, keep persistent state, and plan.
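
A minimal sketch of the shape I mean, where llm() is just a stub standing in for the frozen model and the memory is state the harness reads before, and writes after, every call:

    memory = []  # persists across calls; this is the "bolted on" part

    def llm(prompt):
        # stub: imagine the big frozen model doing the heavy lifting here
        return f"(answer conditioned on {len(prompt)} chars of prompt)"

    def answer(question, remember=None):
        context = "\n".join(memory[-5:])   # naive recency-based retrieval
        reply = llm(f"Known facts:\n{context}\n\nQuestion: {question}")
        if remember is not None:           # the harness decides what persists
            memory.append(remember)
        return reply

Swap the naive retrieval for embeddings, or for a small network that actually trains at inference time, and you get the rest of the idea.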


Memory is not just bolted on top of the latest models. They undergo training on how and when to use memory effectively and how to use compaction to avoid running out of context when working on problems.


Maybe there's an analogy to our long- and short-term memory - immediate stimuli are processed in the context of deep patterns that have accreted over a lifetime. New information can absolutely challenge a lot of those patterns, but having that information reshape how we basically think takes a lot longer - more processing, more practice, etc.

In the case of the LLM, that longer-term learning / fundamental structure corresponds to the static weights produced by a finite training process, and the ability to use tools and store new insights and facts is analogous to shorter-term memory and "shallow" learning.

Perhaps periodic fine-tuning has an analogy in sleep, or even in our time spent in contemplation or practice (or even repetition) to truly "master" a new idea and incorporate it into our broader cognitive processing. We do an amazing job of doing this kind of thing on a continuous basis, while the machines (at least at this point) perform this process in discrete steps.

If our own learning process is a curve then the LLM's is a step function trying to model it. Digital vs analog.


Do you have some reading material to share on this matter?

Thanks in advance!


I don't, but look into what the creators of Codex, Gemini CLI, Claude Code, Kimi CLI, etc. have said about the models. While these harnesses are advertised as coding-specific, we know that coding ability correlates with reasoning ability.


You aren't wrong, and that is a fascinating area of research. I think the key thing is that the memory has to fundamentally influence the underlying model, or at least the response, in some way. Patching memory on top of an LLM is different from integrating it into the core model. To go back to human terms, it is like an extra bit of storage, but not directly attached to our neocortex. So in the analogy it works more like a filter than a core part of our intelligence. You think about something and assemble some thought, then it goes to this next filter layer and gets augmented, and that smaller layer is the only thing being updated.

It is still meaningful, but it narrows what the intelligence can be enough that it may not meet the threshold. Maybe it would, but it is probably too narrow. This is all strictly if we ask that it meet some human-like standard of intelligence, not the philosophical question of "what counts as intelligence" -- but we are humans. The strongest, or at least the most honest, definitions of intelligence I think exist are around our metacognitive ability to rewire the grey matter for survival, based not on immediate action-reaction but on the psychological time of analyzing the past to alter the future.


> I have built a quick demo

This is obviously five minutes of LLM-generated code.

> a regulator verifies the cryptographic proof without trusting the operator.

No, the regulator verifies that the operator signed the proof, which isn't much different from the operator simply asserting it.
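
To make that concrete (Ed25519 via the cryptography package, with a made-up claim): the check proves who produced the statement, not that the statement is true.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    operator_key = Ed25519PrivateKey.generate()
    claim = b"reserves fully backed: 1,000,000 units"  # whatever the operator chooses to assert
    sig = operator_key.sign(claim)

    # The regulator's "verification": raises InvalidSignature if the bytes were tampered
    # with, but says nothing about whether the claim matches the operator's actual books.
    operator_key.public_key().verify(sig, claim)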

