Hacker News | twic's comments

VCs are, traditionally, people who made a lot of money in a lottery and think that makes them experts. It's virtually guaranteed they're idiots.

Not long ago, a local mad church put a packet of pamphlets through my door, one of which was this magnificent tale:

https://avesselofhonour.com/2023/06/28/48-hours-in-hell/

Some of those features show up there too. Of course, this comes from a Christian background, and draws on that. But it does have a river, and there's no river in hell in the Bible.


Where is that story from? Is George Lennox even a real person?

Of course not; it’s proselytizing material, the sort of story that might be illustrated in Chick Tracts, down to the admonition to repent and pray at the end - a ‘Ticket to Heaven’.

Surely this can be solved with financial engineering. The memory makers build more capacity, but they finance it with something like floating-rate notes linked to an index of memory prices, or even catastrophe bonds or AT1s. Or more crudely, set up special purpose vehicles to build the extra capacity, and issue convertible bonds from those; if the memory market collapses, investors don't get paid, but they do get a memory factory.


gVisor tries to be a complete kernel in userland; we are not trying to. We will consciously choose never to support a multi-process environment in the sandbox. The idea is that there are enough people running single-process containers, and they can benefit from a lighter, more secure runtime. This solution will not try to replace the kernel. For example, the Python test we run (HTTPS to some website) ends up needing only 60 syscalls implemented, not 350. I expect to add another 10-20 to support TypeScript, but this will always be strictly single-process.

Plus, the performance overhead of gVisor is substantial, 2-10us (me reading the internet); for the system I am implementing, the hot path is under 1us. And there is always the density story: my shim is currently 4KB, and the Python runtime is shared through memfd. I am working on a demo showing I can run 1000 VMs on 512 MB of RAM, each launching in under 30msec. Remember, this will never replace or be able to handle generic multi-process sandboxes; this is targeted only at single-process environments, where we can make lots of simplifying assumptions.
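For scale, the 1000-VMs-in-512-MB target implies a tiny average per-VM footprint, which is presumably why sharing the runtime through memfd matters. Back-of-envelope only, using just the numbers stated in the comment:

```python
# Rough per-VM memory budget implied by the density claim above.
total_ram = 512 * 1024 * 1024      # 512 MB for the whole fleet
vms = 1000
per_vm = total_ram / vms           # average bytes available per VM
print(f"{per_vm / 1024:.0f} KiB per VM")  # ~524 KiB each
```

At roughly half a megabyte per VM, anything as large as a Python interpreter has to be shared rather than duplicated.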

This doesn't work. For any number of significant bits, there are pairs of numbers one machine epsilon apart which will truncate to different values.
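A quick Python sketch of the failure mode: for any fixed number of significant bits, pick a value that sits exactly on a truncation boundary and the representable double one ulp below it; they truncate to different values. `truncate_sig_bits` here is a hypothetical illustration, not anyone's actual code:

```python
import math

def truncate_sig_bits(x, bits):
    # Keep only the top `bits` bits of the mantissa (round toward zero).
    m, e = math.frexp(x)                 # x == m * 2**e, with 0.5 <= m < 1
    return math.floor(m * 2**bits) * 2.0**(e - bits)

a = 1.0 + 1 / 128                        # lands exactly on an 8-bit boundary
b = math.nextafter(a, 0.0)               # one machine epsilon below a
print(truncate_sig_bits(a, 8))           # 1.0078125
print(truncate_sig_bits(b, 8))           # 1.0
```

So two values that are as close as doubles can get still hash/compare differently after truncation.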

Knowing where the transmitters are is vital. So I wonder if you could build a positioning system into them. Each transmitter transmits its own signal, but also rebroadcasts the signals it receives from the other transmitters on separate bands (these can be at lower power). If you can pick up a few transmitters, is that enough to build a model of where they are relative to each other, and then where they are relative to you?

If each transmitter picks up the rebroadcasts of its own signals, then with some assumptions about the rebroadcast lag (or measurements of it added to the signal!), that's enough to know the range to each other transmitter, right? So maybe they do that and then just broadcast the ranges (tagged on to their main signal), and any remote receiver can work it all out from there.
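Once the transmitter positions are pinned down from those pairwise ranges, a receiver's own fix is just trilateration. A toy Python sketch, assuming perfect range measurements and a flat 2D geometry; all positions and numbers here are made up for illustration:

```python
import math

def trilaterate(anchors, ranges):
    # Solve for a 2D position from three known anchor positions and
    # measured ranges, by linearizing the circle equations (Cramer's rule).
    (xa, ya), (xb, yb), (xc, yc) = anchors
    ra, rb, rc = ranges
    a1, b1 = 2 * (xb - xa), 2 * (yb - ya)
    c1 = ra**2 - rb**2 + xb**2 - xa**2 + yb**2 - ya**2
    a2, b2 = 2 * (xc - xa), 2 * (yc - ya)
    c2 = ra**2 - rc**2 + xc**2 - xa**2 + yc**2 - ya**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Receiver secretly at (3, 4); anchors at known relative positions.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
ranges = [math.dist((3, 4), a) for a in anchors]
x, y = trilaterate(anchors, ranges)      # recovers roughly (3.0, 4.0)
```

Real signals would add noise, clock error, and the multipath problems raised in the reply below, so a least-squares fit over more anchors would be needed in practice.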


> that's enough to know the range to each other transmitter, right?

Only in a flat environment without too much atmospheric distortion. As soon as you get multipath effects from, e.g., waves bouncing off buildings and mountains, the computational complexity goes through the roof. Also, I don't think you should underestimate how much the signal degrades on the "target path" vs the "direct path". The article mentions -60 dB, and I think that is fairly optimistic. The transmitter power needs to be HUGE to make it work, so it would be much easier to have stationary transmitters. Normal radars manage to do this because they are highly directional, but multistatic radars need to look in all directions at once and need to up the power as a result.
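For intuition on that figure, converting -60 dB to a linear power ratio with the standard decibel formula (nothing here beyond the number quoted above):

```python
# Decibels to linear power ratio: ratio = 10 ** (dB / 10).
ratio = 10 ** (-60 / 10)
print(ratio)   # 1e-06: the target-path echo carries a millionth of the power
```

A millionth of the transmitted power is what's left before the usual path losses, which is why the transmit power has to be so large.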


You're completely right. I've got an idea for an investment strategy. We start a company, and issue shares. We use the money to buy Teslas, and hold them as assets. As the value goes up, the value of the company will rise. Then we issue more shares, at a higher valuation, and use the money to buy more Teslas. Infinite money glitch. I call it a Drivable Asset Treasury company.


Oh, so this isn't about the Modell's collapse? https://www.nytimes.com/2020/03/11/business/modells-bankrupt...


My version of this workflow is, i take digital photos, and don't edit them.

Turns out, it's fine! The photos aren't perfect, but no amount of editing could make them perfect anyway. They look like the thing they're a picture of. That does the job. And with the time i save by not doing any editing, i have time to take more photos! Or read a book! Or sleep!


Having gone from multi-repo to monorepo recently, I'd say the opposite. A multi-repo lets you do those things incrementally. A monorepo forces you to do them in one go.

