Hacker News new | past | comments | ask | show | jobs | submit | alakra's comments | login

I was slightly hoping this was a piece about ninja turtles.



An absolute favorite of a B movie, and today something of a historical gem too, showing a grimy, grim Manhattan underworld (literally) that's long gone, or at least more hidden than ever.


> “I’m as mad as a hornet that SRS didn’t explain where the radioactive waste came from or if there is some kind of leak from the waste tanks that the public should be aware of,” Clements said.

This made me chuckle.


I had high hopes for IPFS, but even it has vectors for abuse.

See https://arxiv.org/abs/1905.11880 [Hydras and IPFS: A Decentralised Playground for Malware]


Can you point me at what you mean? I'm not immediately finding something that indicates that it is not fit for this use case. The fact that bad actors use it to resist those who want to shut them down is, if anything, an endorsement of its durability. There's a bit of overlap between resisting the AI scrapers and resisting the FBI. You can either have a single point of control and a single point of failure, or you can have neither. If you're after something that's both reliable and reliably censorable--I don't think that's in the cards.

That's not to say that it is a ready replacement for the web as we know it. If you have hash-linked everything, then you wind up with problems trying to link things together, for instance. Once two pages exist, you can't after-the-fact create a link between them, because if you update them to contain that link then their hashes change, so now you have to propagate the new hashes to people. This makes it difficult to do things like have a comments section at the bottom of a blog post. So you've got to handle metadata like that in some kind of extra layer--a layer which isn't hash linked and which might be susceptible to all the same problems that our current web is--and then the browser can build the page from immutable pieces, but the assembly itself ends up being dynamic (and likely sensitive to the user's preferences, e.g. dark mode as a browser thing, not a page thing).

But I still think you could move maybe 95% of the data into an immutable hash-linked world (think of these as nodes in a graph), the remaining 5% just being tuples of hashes and public keys indicating which pages are trusted by which users, which ought to be linked to which others, which are known to be the inputs and outputs of various functions, and, you know... structure stuff (these are our graph's edges).

The edges, being smaller, might be subject to different constraints than the web as we know it. I wouldn't propose that we go all the way to a blockchain where every device caches every edge, but it might be feasible for my devices to store all of the edges for the 5% of the web I care about, and your devices to store the edges for the 5% that you care about... the nodes only being summoned when we actually want to view them. The edges can be updated when our devices contact other devices (based on trust, like you know that device's owner personally) and ask "hey, what's new?"

I've sort of been freestyling on this idea in isolation, probably there's already some projects that scratch this itch. A while back I made a note to check out https://ceramic.network/ in this capacity, but I haven't gotten down to trying it out yet.
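The node/edge split described above can be sketched in a few lines. This is only a toy model of the idea, not any real IPFS or Ceramic API; the class and the signer field are illustrative names:

```python
import hashlib

class ContentStore:
    """Toy content-addressed store: immutable hash-linked nodes,
    plus a separate mutable edge layer."""
    def __init__(self):
        self.nodes = {}   # hash -> content (the immutable ~95%)
        self.edges = []   # (from_hash, to_hash, signer) tuples (the mutable ~5%)

    def put(self, content: bytes) -> str:
        # Content is addressed by its digest, so identical content
        # always maps to the same hash.
        h = hashlib.sha256(content).hexdigest()
        self.nodes[h] = content
        return h

    def link(self, src: str, dst: str, signer: str) -> None:
        # Edges live outside the hash-linked world, so adding a link
        # never changes the hash of either page.
        self.edges.append((src, dst, signer))

    def links_from(self, src: str):
        return [(dst, signer) for (a, dst, signer) in self.edges if a == src]

store = ContentStore()
post = store.put(b"a blog post")
comment = store.put(b"a comment on it")
store.link(post, comment, signer="alice-pubkey")
```

Because the edge list is the only mutable piece, a "what's new?" sync between devices only needs to exchange these small tuples, fetching node content on demand.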


This was recently posted:

Separation of Concerns in a Bug Tracker [2024]

https://news.ycombinator.com/item?id=43296422


Is this like the FlatBuffers "zero-copy" deserialization?


I'm not done reading the article yet, but nothing so far indicates that this is zero-copy--just a more efficient internal representation.


Nope. This is just a different implementation that greatly improves the speed in various ways.
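For anyone unfamiliar with the distinction being drawn here: a zero-copy design (FlatBuffers-style) reads fields directly out of the received buffer, while a conventional deserializer parses everything into fresh objects first. A toy illustration, using a made-up wire format rather than either library's actual one:

```python
import struct

# A tiny wire format: a uint32 count followed by that many little-endian uint32s.
buf = struct.pack("<I3I", 3, 10, 20, 30)

def read_copying(buf: bytes) -> list:
    """Deserialize by materializing a new list (copies every element up front)."""
    (n,) = struct.unpack_from("<I", buf, 0)
    return list(struct.unpack_from(f"<{n}I", buf, 4))

def read_zero_copy(buf: bytes, i: int) -> int:
    """Read one element directly out of the original buffer, no parse pass."""
    return struct.unpack_from("<I", buf, 4 + 4 * i)[0]

assert read_copying(buf) == [10, 20, 30]
assert read_zero_copy(buf, 2) == 30
```

The zero-copy reader never allocates a list at all, which is where the big wins come from when you only touch a few fields of a large message.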


Someone just ported this implementation to Rust. :)

[1] https://github.com/thevilledev/ChibiHash-rs


Happy to help! Someone has also written a Go implementation: https://github.com/rainrambler/ChibiHashGo


Yes. Same for me as well. It just works and is easy for my fingers to remember without consciously thinking about it.


Somewhat related, does anyone know the difference between `erlang:suspend_process` and `sys:suspend` when trying to "unschedule" a process from executing reductions? Are they the same thing?


sys:suspend is more low-level. erlang:suspend_process also tracks which processes suspended a process, so you can suspend it from multiple places, and if all suspending processes terminate, it will automatically unsuspend the suspended one. Probably erlang:suspend_process uses sys:suspend when the counter goes from 0 to 1 and sys:resume when it drops back to 0.
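The counting behavior described above can be modeled in a few lines. This is only a sketch of the semantics, not the BEAM's actual implementation:

```python
class SuspendCount:
    """Toy model: a process stays suspended while any suspender holds it,
    and automatically resumes when the suspend count drops back to zero."""
    def __init__(self):
        self.count = 0
        self.suspended = False

    def suspend(self) -> None:
        if self.count == 0:
            self.suspended = True   # the low-level suspend happens here
        self.count += 1

    def resume(self) -> None:
        self.count -= 1
        if self.count == 0:
            self.suspended = False  # the low-level resume happens here

p = SuspendCount()
p.suspend()
p.suspend()   # suspended from a second place
p.resume()    # one suspender released; still suspended
assert p.suspended
p.resume()    # last suspender released; automatically resumes
assert not p.suspended
```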


As someone who lives near the Rocky Mountains, I will say the part about getting struck by lightning from a storm that's up to 10 miles away is quite accurate.

I was out riding my bike one afternoon in Chatfield State Park in Colorado when I looked up and noticed how quickly the weather changed. Clouds had rolled over the mountains in the west and I heard lightning strike in the distance.

In the same second, I saw electricity arc between my hands and the handlebars as my fingers froze for a second. I didn't feel any pain or shock, but I got out of the area real quick.


+1 for me in Colorado

