Kiboneu's comments | Hacker News

Neat. The author is about to stumble onto a secret.

> In Sum

> Abstractions. They don’t exist in assembler. Memory is read from registers and the stack and written to registers and the stack.

Abstractions do not exist, period. They are patterns, but these patterns aren’t isolated from each other. This is how a hacker is born: through this deconstruction.

It’s just like the fact that electrons and protons don’t really exist, but the patterns in energy gradients are consistent enough to give them names and model their relationships. There are still points where these models fail (QM and GR at the Planck scale, or just the classical-quantum boundary). It’s gradients all the way down, and even that is an abstraction layer.

Equipped with this understanding you can make an exploit like Rowhammer.

https://en.wikipedia.org/wiki/Row_hammer


Abstractions pretty much exist, and in assembler they matter even more because the code is so terse.

Now, there are abstractions (which exist in your brain, whatever the language) and tools to represent abstractions (in ASM you've got macros and JSR/RET; both pretty leaky).


That wasn’t my point. You almost got there when you wrote “there are abstractions (which exist in your brain, whatever the language)”. And your point on leaky abstractions is exactly the indication that they exist in your mind, not out there.

My point is that we settle with what we see for convenience/utility and base our models on that. We build real things on top of these models. Then the result meets reality. If only that transition were so simple.

When an effect jumps unexpectedly between layers of abstraction we call it an abstraction leak, as you mentioned. The correct response is to re-examine these leaks and build other frameworks to cover the edge cases, not to blame the world.

Hackers actively seek these “leaks” by suspending assumptions that arise out of the abstractions that humans tend to rely on.

I’m not surprised that my OP got downvoted. It can be very upsetting when one’s conceptual frameworks are challenged without prescription. No one even mentioned the specific example that I referenced. Well, if they can’t parse it, they don’t deserve it. Keeps me in the market.


> My point is that we settle with what we see for convenience/utility and base our models on that.

I disagree with that. My abstractions are pure in my mind (well, I hope, even if it sounds a bit pretentious). I try to get the best out of the tools I have at hand to represent them. I'm perfectly fine with leaky abstractions, mismatches, inconvenient languages, etc. I live with that. But I certainly don't actively seek these leaks. Quite the opposite :-) (well, unless I'm pursuing another goal like performance, in which case I blow abstractions away). Oh, and in case you wonder, I did write tons of assembly on 8086/80386 and 6502, and now I'm all in on rust, julia and python. I know what an abstraction is :-)

But I think we globally agree nonetheless.


I introduced my dad to claude code. He doesn’t even code, but now it’s a more welcoming and rewarding experience from the get-go. He’s happy and has become more comfortable with linux.

Occasionally I remote in to help fix something, but the coding agent really takes a load off my back, and he can start learning without knowing where the endpoints are.


Let’s not pretend anymore.

The uncomfortable truth is that people in affluent countries don’t want to change their lifestyle. Affluent countries are less affected by global warming than countries responsible for a fraction of global emissions. And the emissions from manufacturing follow the same pattern.


It's not primarily a lifestyle issue, and the problem is no longer primarily developed countries. Per-capita carbon emissions in developed countries have decreased 30% in the past 20 years, and energy demand has remained relatively constant, in fact decreasing slightly in some places due to increases in efficiency. Meanwhile, developing countries like China and India are projected to account for 35-40% of total global emissions in the next decade, far outweighing the impact of individual lifestyle choices in the West.

Of course, people should do everything they can to reduce or offset their own emissions. But the solution is going to have to be societal, keeping up with energy demand by adding more nuclear, solar, and wind to the grid.


China and India currently make up 35% of the global population. So saying they account for 35% of emissions is to be expected.

> Meanwhile, developing countries like China and India are projected to account for 35-40% of total global emissions in the next decade, far outweighing the impact of individual lifestyle choices in the West.

Sure, but that's still mostly driven by Western demand for... stuff. I'd like to see that share of global emissions if the West were still industrialised and didn't rely on China to be its factory.


80% of China's manufacturing output is consumed domestically. Of the goods exported from China, only about 15% go to the United States. While that still represents a lot of exports in absolute terms, it's simply false to say that it's "mostly driven by Western demand for stuff".

I found your claim hard to believe, so I fact checked it, and it turned out mostly correct. Thanks!

35-40% seems entirely reasonable considering their population, and considering that Africa still hasn't brought its people to the same consumption level as the West.

The EU has less than half the emissions per capita compared to the USA.

The USA has uniquely high emissions (matched only by a couple of oil states). This is not solely explained by affluence.


It also includes social pressure: try to have a kid and no car, and people will look at you like you are crazy. I even live in a country with great public transportation.

As a nomad, can confirm. Coming back to America, the scale and quality of materials is remarkable.

Could you elaborate on your observations?

[flagged]


> India and China’s contribution to CO2 emissions dwarfs the rest of the world

Such brazen, bald-faced lies. How do you sleep at night?

https://ourworldindata.org/grapher/cumulative-co2-emissions-...


I wasn't talking about historic emissions. I was talking about the current state of affairs, and in particular, the projections over the next few years, as this is what's relevant.

I'm not interested in political judgements of "past guilt." I'm interested in the trends that will matter for our children's future.


It isn't "past guilt". That CO2 hasn't gone anywhere. It's warming our world today. That's what's affecting our children's future.

If you don't want to "talk about historic emissions" then there's no point talking about China or India's emissions yesterday either. Or the day before that. Hell leave out the last 5 minutes too. Just talk about each instant.

That's clearly idiotic but it's the obvious conclusion if we accept your premise that "historic emissions" don't matter. Today's emissions are tomorrow's "historic emissions" and all historic emissions warm the planet equally.


If you understand the trends over the last decade, and the projections of the next decade, you'll understand why the actions of China and India will have significantly more impact than the actions of the West.

That's why those of us who truly care about the environment (as opposed to politics) are focusing on what we can do about China and India, as opposed to pointlessly self-flagellating about the last 100 years.


What can you do about China and India? Are you Indian or Chinese? Do you live there? They're already working on the problem. They don't have oil. They're poor. They're more exposed to climate change's consequences than any Western or developed country.

Becoming energy independent as cheaply as possible - which means renewables in 2026 - is a national security priority for them.

Tend your own garden. The developed world's past and current actions have done and continue to do harm. Work on that. If you want to push India and China to go faster, lobby your politicians to tariff high carbon emitters and subsidize developing countries' rollout of clean energy. To be clear it has to be both. That's all you can do about emissions outside your country.

It doesn't matter where emissions are reduced. They all count equally, home or abroad. Those of us who truly care about the environment (as opposed to politics) understand this.

Above all don't ever vote for politicians that deny human-caused climate change.


> They're poor.

China is a nuclear-armed superpower with the largest navy in the world.

> The developed world's past and current actions have done and continue to do harm. Work on that. If you want to push India and China to go faster, lobby your politicians to tariff high carbon emitters and subsidize developing countries' rollout of clean energy. To be clear it has to be both. That's all you can do about emissions outside your country.

That's all we can do, is it?

I don't think so!

It is through deliberate Western actions and inactions that we have allowed our own production capacity to be replaced by production capacity in China, a country that cares little for human rights, workers' rights and environmental standards.

It is through decisive economic action, like America's tariffs, that we can quickly bring production capacity back to the West. This is already happening at pace, with huge inward investment in American factories, which will use cutting-edge automation and will create many, many jobs.

Not only is this a vital geopolitical win (consider China's aggression towards Hong Kong, Taiwan and its neighbours in the South China Sea), but it is also a win for the working classes in the West, who have seen their opportunities torn up by globalist leaders in the West.


Now who's talking politics? I didn't hear anything about actually reducing emissions.

I guess that isn't surprising given the lies you started this thread with...


do you know how to read that graph? because the graph you linked shows that exactly what you call "lies" is true.

Is China often moved by western ire?

Saying "The West has deindustrialised" is incredibly disingenuous. The west's consumerism has driven a race-to-the-bottom for pricing of products in China and India.

Seeking the cheapest goods is not a practice that's isolated to Westerners.

But destroying our own industrial capacity by government policy is certainly something unique to our countries.


Yesterday I was watching an episode of Futurama where Bender (the robot) “finds” his imagination and believes that he is a D&D character. He goes around Don Quixote style.

To defeat Bender, Fry pretends to cast a destructive spell on him.

I couldn’t find the exact scene, but… https://youtube.com/watch?v=chjVoVcVi3A


GOSH am I thankful to my old self for running npm packages in containers with very specific access to the filesystem.

And that filesystem is CoW with snapshots, of course.

The story won’t end here. It will soon be time to do the same for other programming language environments too.
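
For the curious, roughly what I mean is this: a minimal sketch assuming Docker and a pinned node image; the image tag, paths and flags are illustrative, not a hardened setup.

    # Sketch: run npm inside a throwaway container that can only write to
    # the project directory. Image tag, paths and flags are illustrative.
    import subprocess
    from pathlib import Path

    project = Path.cwd()
    subprocess.run(
        [
            "docker", "run", "--rm",
            "--read-only",              # root filesystem is immutable
            "-v", f"{project}:/app",    # ONLY the project dir is writable
            "--tmpfs", "/tmp",          # scratch space for build steps
            "--tmpfs", "/root",         # npm's cache dir lives under $HOME
            "-w", "/app",
            "node:22",                  # hypothetical pinned image
            "npm", "ci",
        ],
        check=True,
    )

Install scripts can still misbehave inside /app, but at least they can't touch ~/.ssh or the rest of the disk.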


Filesystem & database snapshots are very cheap to make; you can make them every 15 minutes. You can expire old snapshots (or collapse the deltas between them) depending on the storage requirements.

That doesn't really matter, though, against an attack that takes some time to spread. If the attack was active for, let's say, 6 hours, then around 43,000 legitimate edits (at Wikipedia's typical rate of roughly two edits per second) happened between the last "clean" snapshot and the discovery of the attack. If you just revert to the last clean snapshot, you lose those legitimate edits.

> Cleaning this up is going to be an absolute forensic nightmare for the Wikimedia team since the database history itself is the active distribution vector.

Well, the worm didn't get root -- so if Wikimedia keeps snapshots or made a recent backup, probably not so much of a nightmare? Then the diffs can tell a fairly detailed forensic story, including indicators of motive.

Snapshotting is a very low-overhead operation, so you can make them very frequently and then expire them after some time.
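
For anyone curious what such a rotation looks like, here's a minimal sketch assuming btrfs; the paths, naming scheme and retention policy are all illustrative. Run it from a 15-minute timer as root.

    # Sketch of a 15-minute snapshot rotation, assuming btrfs. Paths and
    # retention policy are illustrative, not a drop-in tool.
    import subprocess
    import time
    from pathlib import Path

    SUBVOL = Path("/srv/data")         # the live subvolume (hypothetical)
    SNAPDIR = Path("/srv/.snapshots")  # where read-only snapshots land
    KEEP_SECONDS = 7 * 24 * 3600       # expire snapshots older than a week

    def take_snapshot() -> None:
        name = time.strftime("%Y%m%d-%H%M%S")
        subprocess.run(
            ["btrfs", "subvolume", "snapshot", "-r",
             str(SUBVOL), str(SNAPDIR / name)],
            check=True,  # fail loudly if the snapshot didn't happen
        )

    def expire_old() -> None:
        cutoff = time.time() - KEEP_SECONDS
        for snap in sorted(SNAPDIR.iterdir()):
            try:
                created = time.mktime(time.strptime(snap.name, "%Y%m%d-%H%M%S"))
            except ValueError:
                continue  # not one of our snapshots
            if created < cutoff:
                subprocess.run(["btrfs", "subvolume", "delete", str(snap)],
                               check=True)

    if __name__ == "__main__":
        take_snapshot()
        expire_old()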


Even if they reset to several days ago and lose, say, thousands of edits, even tens of thousands of minor edits, they're still in a pretty good place. Losing a few days of edits is less than ideal but very tolerable for Wikipedia as a whole.

At $work we're hosting business knowledge databases. Interestingly enough, if you need to revert a day or two of edits, you're better off doing it asap than postponing and mulling it over. Especially if you can keep a dump or an export around.

People usually remember what they changed yesterday and still have uploaded files and such around. It's not great, but quite possible. Maybe you need to pull a few content articles out of the broken state if they ask. No huge deal.

If you decide to roll back after a week or so, editors get really annoyed: now they're usually forced to backtrack and reconcile the state of the knowledge base, maybe you need both a current and a rolled-back system, it may have regulatory implications, and it's a huge pain in the neck.


I preach to everyone to fail as loudly and as fast as possible. Don't try to "fix" unknown errors in code. It often catches fresh graduates off guard. If you fail very loud and fast, most issues will be found asap and fixed.

I had to help out a team in the cleanup of a bug that had silently corrupted some data for a while before being found. It was too far out to roll back, and they needed all the help they could get to identify which data was real and which was wrong.
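
A toy contrast of the two styles, purely illustrative; the record format and field names are made up:

    # Illustrative only: the "quiet" anti-pattern that corrupts data
    # silently, versus failing fast. Field names are made up.

    def parse_balance_quiet(record: dict) -> float:
        # Anti-pattern: swallow the error and substitute a default.
        # The bad record is now "fixed" -- and silently wrong, forever.
        try:
            return float(record["balance"])
        except (KeyError, ValueError):
            return 0.0

    def parse_balance_loud(record: dict) -> float:
        # Fail fast: re-raise with context. The pipeline halts on the
        # first bad record, while the cause is fresh and the damage small.
        try:
            return float(record["balance"])
        except (KeyError, ValueError) as exc:
            raise ValueError(f"malformed record: {record!r}") from exc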


Nah, you can snapshot every 15 minutes. The snapshot interval depends on the frequency of changes and the capacity they take up, and it's up to them how to allocate that capacity, but it's definitely doable and there are real reasons for doing so. You can collapse the deltas between snapshots after some time to make them last longer. I'd be surprised if they don't do that.

As an aside, snapshotting would have prevented a good deal of the horror stories shared by people who give AI access to the FS. Well, as long as you don't give it root...


>Nah, you can snapshot every 15 minutes.

obviously you can. but what is the actual snapshot frequency? like, what is the timestamp of the last known good snapshot? that is what matters.

in any case, the comment you are replying to is a hypothetical, which correctly points out that even a day or two of lost edits is fine (not ideal, but fine). your reply doesn't engage with their comment at all.


> the comment you are replying to is a hypothetical, which correctly points out that even a day or two of lost edits is fine (not ideal, but fine). your reply doesn't engage with their comment at all.

I did engage, by pointing out that it was neither relevant nor a realistic scenario for a competent sysadmin. (Did you read the OP?) It's a /you/ problem if you rely on infrequent backups, especially for a service with so much flux.

> what is the actual snapshot frequency? like, what is the timestamp of the last known good snapshot?

? Why would I know what their internal operations are?


>I did engage, by pointing out that it wasn't relevant nor a realistic scenario for a competent sysadmin.

>Why would I know what their internal operations are?

i mean... you must, right? you know that once-a-day snapshots is not relevant to this specific incident. you know that their sysadmins are apparently competent. i just assumed you must have some sort of insider information to be so confident.


I think you are misreading my comments and made a bad assumption. The reason I'm confident is because this has been my bread and butter for a decade.

>The reason I'm confident is because this has been my bread and butter for a decade.

my decade of dealing with incompetent sysadmins and broken backups (if they even exist) has given me the opposite of confidence.

but i'm glad you have had a different experience


> my decade of dealing with incompetent sysadmins and broken backups (if they even exist) has given me the opposite of confidence.

Oh, I agree that the average bar is low. That's part of the reason I do it all myself.

The heuristic with wikimedia is that they've been running a PHP service that accepts and stores (anonymous) input for 25 years. That longevity, given the risk exposure they have, is an indicator that they know what they are doing, and I'm sure they've learned from recovering from all sorts of failures over the years.

Look at how quickly it was brought back up in this instance!

So, yeah. I don't think the initial hypothetical counterpoint holds water, and that's what I have been pointing out.


Kudos for very polite responses to trolling.

no one is trolling in this comment chain.

i found it off-putting that kiboneu replied to a hypothetical musing as if it were some counterpoint in a debate instead of a simple expansion on their comment. we had some comments back and forth and we both came out of it just fine. weird of you to add this little insult onto an otherwise pretty normal exchange.


FWIW I did not assume that you were trolling, and yes we did come out fine.

I have good faith, though I should get off hn now... :P

I still don't need to assume what the intent is. Troll or no troll, it works. My comments might inspire someone else to try a CoW fs. I'm also really impressed with wikimedia's technical team.


Nowadays I refuse to do any serious work that isn't in source control anywhere besides my NAS, which takes copy-on-write snapshots every 15 minutes. It has saved my butt more times than I can count.

Yeah, same here. Earlier I had a sync error that corrupted my .git, somehow. No problem; I go back 15 minutes and copy the working version.

Feels good to pat oneself on the back. Mine is sore, though. My E&O/cyber insurance likes me.


The problem isn't the granularity of the backup. Since the worm silently nukes pages, it's virtually impossible to reconcile the state before the attack with the current state, so you have to just forfeit any changes made since then and ask the contributors to do the legwork of reapplying the correct changes.

Why would nuked pages matter? Snapshots capture everything and are not part of the wikimedia software.

The nuke might be legitimate?

That's not a lot of state lost. Destructive operations are easier to replay than constructive ones.

Is Wikimedia overreacting then?

No: from what I can tell, they're being conservative, which is appropriate here. Once you've pushed the "stop bad things happening" button, there's no need to rush.

Nothing was rolled back in the db sense; I think people just used normal wiki revert tools.

It also never affected wikipedia, just the smaller meta site (used for interproject coordination).


I wonder if the bad traffic overwhelmed the good traffic enough that it's simpler to pick out some of the good traffic from the bad and replay it rather than spot all of the bad traffic.

GOD am I thankful to my old self for disabling js by default. And sticking with it.

edit: lol downvoted with no counterpoint, is it hitting a nerve?


> edit: lol downvoted with no counterpoint, is it hitting a nerve?

I have upvoted ya fwiw, and I don't understand either why people would downvote ya.

I mean, if websites work for you with js disabled and you are fine with it, then good for you. JS is somewhat of a threat vector.

Many of us are unable to live our lives without JS. I used to use librewolf, and complete and total privacy started feeling a little too uncomfortable.

Now I am on zen-browser, fwiw, which I do think has some improvements over stock firefox in terms of privacy, but I can't say this for sure. I mainly use zen because it looks really good and I just love zen.


> I mean, if websites work for you with js disabled and you are fine with it, then good for you. JS is somewhat of a threat vector.

It's also been torture; I definitely don't prescribe it. :P Like you say, it's a sanity / utility / security tradeoff. I just happen to be willing to trade off sanity for utility and security.

And yes, unfortunately I have to enable JS for some sites -- the default is to leave it disabled. And of course with cloudflare I have to whitelist it specifically for their domains (well, the non-analytics domains). But thankfully wikipedia is light and spiffy without the javascript.


What is uncomfortable about Librewolf? I thought it was basically FF without telemetry and UBO already baked in?

I appreciate librewolf, but when I used to use it, IIRC its fingerprinting protections were too strict for some websites, and you definitely have to tone them down a bit by going into the settings. Canvases don't work, and there were some other affected features too.

That being said, once again, librewolf is amazing software. I can see myself using it again, but I just find zen easier in the sense of something I can recommend, plus ubO obviously.

Personally these are aesthetic preferences more than anything. I just really like how zen looks and feels.

The answer is, sort of, just personal preference, that's all.


Exactly. On top of that, most managed flash (which is equivalent to SSD controllers) will pass all writes through a modified cyclic XOR pad in order to keep the /bit/ entropy high. I don’t think the article holds across multiple abstraction layers.

Which is the same reason storing data on an HDD doesn't add weight. You can pack the data tighter if you are writing basically balanced 1s and 0s. Thus you can pack more bytes into a given area by encoding them into patterns with even distributions, even though that means you need to write more bits.

But SSD erasing must write a constant (either all ones or all zeros). So an erased, ready-to-write SSD block will have consistently different energy than one written with a random scrambled pattern. Same for SMR HDDs -- but not for CMR.

The SSD lies :) TRIM is a scam.

What happens when you XOR a 0 with a byte in a one-time pad? You get the corresponding byte from the pad, right? And a block full of 1s or 0s is exactly the case where the scrambler needs to be used.

It's not for compression or anything like that; it's used because the cells are packed so tightly that sharp changes in energy gradients over the space will even themselves out by donating electrons to their neighbors (flipping bits). It's kind of like rowhammer.

Instead of just a one-time pad, a more complicated and often proprietary scrambler is used. But it's towards the same end: a deterministic, pseudo-random bit sequence.

If you can bypass the controller (read directly from the NAND) you will see the true values stored there, and they will likely be scrambled even if the flash controller reports otherwise. This also allows you to recover the scrambler, since you know that the other side of the XOR operation is 0.
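
A toy model of that recovery; the 16-bit LFSR here is a stand-in for whatever proprietary PRBS a real controller uses, and the seeds, widths and page sizes are all made up:

    # Toy model of NAND scrambling and pad recovery. The 16-bit LFSR is a
    # stand-in for a proprietary PRBS; seeds and sizes are made up.

    def prbs(seed: int, n: int) -> bytes:
        """Simple 16-bit Fibonacci LFSR producing n pseudo-random bytes."""
        state, out = seed & 0xFFFF, bytearray()
        for _ in range(n):
            byte = 0
            for _ in range(8):
                bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
                state = (state >> 1) | (bit << 15)
                byte = (byte << 1) | (state & 1)
            out.append(byte)
        return bytes(out)

    def scramble(data: bytes, seed: int) -> bytes:
        """What the controller does on the way to the NAND: data XOR PRBS."""
        return bytes(d ^ p for d, p in zip(data, prbs(seed, len(data))))

    page = bytes(64)                # a page we know was written as all zeros
    raw = scramble(page, 0xACE1)    # what a direct NAND read would show

    # 0 XOR pad == pad, so the raw read *is* the scrambler sequence:
    recovered_pad = raw
    assert recovered_pad == prbs(0xACE1, 64)

    # ...which descrambles any other page written with the same seed:
    secret = scramble(b"hello, flash" + bytes(52), 0xACE1)
    plain = bytes(s ^ p for s, p in zip(secret, recovered_pad))
    assert plain.startswith(b"hello, flash")

Real scramblers differ in the details (per-page seeds, wider LFSRs, whitening tied to ECC), but the XOR-a-known-zero-page trick is the same.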


TRIM might be an illusion, but block erase is very real, even though it's not exposed as a device command. What I'm trying to say is that writes are scrambled but erases aren't (not TRIM, which marks a logical block as collectable, but the physical erase that makes a physical block writable), as they happen over several megabytes at once with no way to control individual bits.

Oh yes, you’re right, I did neglect the physical block erase, which happens outside the data path.

Idea: canary fingers. You register a few digits that wipe the filesystem keys from memory so that no further biometrics can be used. It’s like what happens when you hold the power button on an iPhone, but with a fun russian roulette twist.
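
A back-of-napkin sketch of the idea; nothing here is a real biometric API, and the enrolled roles, key store and matcher are all imagined:

    # Back-of-napkin sketch. Nothing here is a real biometric API; the
    # enrolled roles, key store and matcher are all imagined.
    import os

    class KeyStore:
        def __init__(self) -> None:
            self._fs_key = os.urandom(32)   # stand-in for the filesystem key

        def wipe(self) -> None:
            self._fs_key = None             # really: zeroize in secure memory

        def unlock(self) -> bytes:
            if self._fs_key is None:
                raise PermissionError("keys wiped; biometrics disabled")
            return self._fs_key

    ENROLLED = {"right-index": "unlock", "left-pinky": "canary"}  # hypothetical

    def on_fingerprint(finger: str, store: KeyStore) -> None:
        role = ENROLLED.get(finger)
        if role == "canary":
            store.wipe()      # russian roulette: wrong finger, no more keys
        elif role == "unlock":
            store.unlock()    # normal path
        # unknown fingers fall through to passcode / rate limiting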
