
I'm sorry, but this is completely wrong. NPM has lock files which explicitly lock down the versions you downloaded on your first install. These are committed to source control, so all subsequent installs will use the exact same versions of dependencies, nested dependencies included.

You need to ask npm to upgrade, or delete your lock file and node_modules, to run into this issue.


With lock files, you will always be stuck with whatever version you first installed until you explicitly ask npm to upgrade, or delete your lockfile.
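
For reference, each lockfile entry pins the exact resolved tarball plus an integrity hash. Roughly what an entry looks like in a modern package-lock.json (package name, version, and hash below are made up):

    {
      "packages": {
        "node_modules/some-lib": {
          "version": "1.2.3",
          "resolved": "https://registry.npmjs.org/some-lib/-/some-lib-1.2.3.tgz",
          "integrity": "sha512-..."
        }
      }
    }

And npm ci goes a step further: it installs exactly what the lockfile says and errors out if the lockfile and package.json disagree.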


Even if they did, they're not 'breaking your app in minutes', as if all live apps which use that package are suddenly going to poll npm for deleted packages. That's absurd.


Of course that's absurd; that's not really the core of the argument, though. I would still consider it breaking my app if I now need to replace that package somehow, or pull it from some archive, before I can redeploy my application.


What use is that if the servers running the rest of the game are turned off?


Most (not all) web3 games don't have a server, just a frontend interacting with a smart contract.


And what do you think you're getting the frontend assets from? They certainly don't live on-chain, and usually aren't open source.


Yes, I realize that could be an issue, but it's way easier to address than keeping a server alive. See: https://archive.org/details/softwarelibrary_flash_games


Distribution of binaries of obsolete games is a solved problem. It costs money to leave dedicated servers running for legacy titles, but costs nothing to leave the Steam page online. Plenty of commercial games are also free to download, but cost money to register and actually play.


I think this 3-year-old topic needs to be revisited considering the improvements that Concurrent Mode will provide.


Yes, this is my prediction too. We are exerting evolutionary pressure that favours a less dangerous strain of the virus.


Given that this virus can spread well before there are any symptoms, and well after symptoms are gone, I think there is very little evolutionary pressure on it right now; it can evolve into milder strains with pretty much the same probability as more lethal ones.

If we are vigilant, then the only thing the virus might be pressured toward is even longer asymptomatic spread and even higher survivability in the air and on surfaces.


Arguably, you are consciously observing by viewing the output of the RNG.


But I can record that screen on another camera and look at that recording the next month :)

Are we now also assuming causality can go both ways in time? :) Or am I changing the recording the moment I look at it? :)

It's such a stupid theory :)


Congratulations, you just re-invented Schrödinger's cat thought experiment [1]. One interpretation of this phenomenon is that the universe exists in a superposition of states until an observer causes the collapse through observation. What you dismiss as "such a stupid theory" is an idea/paradox that has confounded quantum physicists for decades.

1. https://en.wikipedia.org/wiki/Schr%C3%B6dinger%27s_cat


I don't think anybody seriously thinks Schrödinger's cat is in superposition in the real world. It's a living macroscopic thing at room temperature; it interacts with the universe and would collapse immediately. So would the recording.


Well, you can scroll down to the interpretations in the article I linked and choose any of the ones that fit your fancy. Or you can propose a new one. But acting like you are right and all other interpretations are wrong is merely hubris. As far as I am aware, this is an open issue, and claiming that "... no one seriously thinks ..." isn't a valid or serious argument.


No, the camera that you observe one month later would be in a quantum state until you actually observe it.

The "recording" would be in multiple states until you observe the recording.


Except when we do the experiment in the real world, we can't keep the quantum superposition of macroscopic objects from collapsing, and now we're claiming it just happens because we don't know the answer.

That's the difference between declaring entangled photons to be in superposition and declaring a tossed coin to be in superposition because I covered it with my hand without looking.

One is the state of the universe; the other is my lack of knowledge.

But sure, we can go on declaring everything entangled all the way up. Then we declare that the whole universe except the conscious observer is in a superposition of states until the observer decides to observe it or not.

The problem with that is that I'm a part of the universe, and I have no proof that I have free will independent of everything else. It just feels like it :)

So assuming everything waits for me to decide seems very self-centered and unjustified by the evidence, as opposed to everything happening when it happens and my decision being predetermined like everything else.


> Except when we do the experiment in the real world, we can't keep the quantum superposition of macroscopic objects from collapsing

No, you don't actually know that, because the act of you "observing" this macroscopic object could be the cause of the object collapsing.

So the macroscopic object could have collapsed due to the fact that you observed it.

> One is the state of the universe

You have no way of knowing the state of the universe unless you have observed it. So if you are ever looking at any state of the universe, it has been observed and has already collapsed.

> Then we declare that the whole universe except the conscious observer is in a superposition of states until the observer decides to observe it or not.

Yep that's the theory.

> I have no proof that I have free will independent

Yes, that could be true as well.

> seems very self-centered and unjustified

So, here we get to the crux of the matter. No matter where we say that the wave "collapses", it is all unfalsifiable.

It could be that the observer collapses it.

It could be that it collapsed once it had any interaction with a photon or the smallest microscopic particle.

Or, as you pointed out, it could be that the universe has never collapsed, and the whole universe is in a superposition right now.

And all of these positions are equally unfalsifiable. There IS no justification that any of these are correct, and there is no way of saying which is more justified.

So, yes, it is unjustified. But it is just as unjustified as any other collapse theory.

It is a fool's errand to give arguments for why one collapse theory is worse or better than the others, since they are all equally unfalsifiable.


It seems to me that one of these positions makes much more sense because it assumes the least unproven behaviour.

We measure the wave as having collapsed as if the cat died in minute 3 of being in the box (we can calculate that from the amount of CO2 and other methods). You say "we don't know if it really collapsed in minute 3, or collapsed now and retroactively made every measurable fact look like it happened in minute 3".

To me it sounds like "the Earth can be 6000 years old; God simply made it look like it's billions of years old". Sure, I cannot experimentally distinguish between these options, but one is much simpler.

I guess with QM we at least "caught God doing the retroactive stuff" for small things, but we have no proof it has anything to do with consciousness, and it never seems to happen for macroscopic stuff.

It's like seeing a street lamp go black when I'm near it and assuming it's because I was close to it. It's an easy illusion to fall into, because you never see the lamps that go black when you're not around. But there's no justification that you/consciousness is needed.


You are once again picking a specific time as to when exactly collapse happens.

I would argue the opposite to you, actually, and instead say that the whole world is entangled, and the wave never collapses.

We can't even prove that collapse happens in the first place.

So, assuming that things never collapse is actually the position with the fewest unproven assumptions; it literally does not assume that collapse ever happens.

That's much better than saying that the wave function collapses at exactly the atomic level or something.

All of these points in time are arbitrary.

> retroactive stuff

There is nothing retroactive about any of this. Before collapse, it exists in a state of it not happening and happening at the same time. That's not retroactive.


> You are once again picking a specific time as to when exactly collapse happens.

Well, Many Worlds assumes multiple universes; that's kind of a big assumption too. But I agree it's the most elegant interpretation.

> All of these points in time are arbitrary.

If we exclude Many Worlds then collapse happens. We have to conclude it can happen for microscopic objects at least.

What I cannot understand is how people then jump to assuming it's consciousness that is the important distinction. What's the reasoning here? We can't even define it, we don't know if we have it, we don't know if it's important, so why put it in physics?

> There is nothing retroactive about any of this. Before collapse, it exists in a state of it not happening and happening at the same time. That's not retroactive.

Ok, if you don't go back and recreate everything that happened when I observe it, then you have to remember all the possible paths and choose one of them when I look at it.

If I had to code the universe, I wouldn't write it in such a way that it needs to remember everything that could happen but was never observed by a player :) Seems like a huge waste of resources if I could just as well cull the state tree early.

We even have an interaction that AFAIK cannot be isolated (gravity) - how's that working with the cat in a box? When it falls dead, it curves spacetime outside of the box differently, after all.


> AFAIK cannot be isolated (gravity) - how's that working with the cat in a box?

If you want to argue that the gravitational effect of something causes the act to be "observed" by the person, that's fine. But that's still an observation.

That is still the person "observing" the effect, because they are now affected by it. And you have no way of knowing when exactly the wave function collapsed.

It could have collapsed at any time.

So back to your original example, it would not be the camera recording it that collapses it, it would be the gravity waves being "observed", or whatever.


Restricting the movement of labour does not in any way benefit the individual. Presumably, as a member of this profession, you must realise that competition improves compensation.

You are not owned by your employers. Anything else is like a turkey voting for Christmas.


What's weird about it? In practice it just means writing classes which, instead of newing things up inside the constructor, allow them to be passed in by the caller.

This is a pretty fundamental aspect of polymorphism and composition.
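
To make that concrete, a minimal sketch (ReportService is a made-up example; java.time.Clock just stands in for any dependency):

    import java.time.Clock;
    import java.time.Instant;
    import java.time.ZoneOffset;

    // The class declares what it needs instead of newing it up itself.
    class ReportService {
        private final Clock clock;

        ReportService(Clock clock) {  // the caller passes the dependency in
            this.clock = clock;
        }

        String stamp() {
            return "generated at " + clock.instant();
        }
    }

    // Production wiring: new ReportService(Clock.systemUTC());
    // Test wiring:       new ReportService(Clock.fixed(Instant.EPOCH, ZoneOffset.UTC));

Swapping in a fixed Clock for tests is the whole trick: same class, different behavior, no framework required.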


> What's weird about it?

What I meant by weird was that OP's proposed "solution" to DI is to "do it manually", while the exact reason why we want DI in the first place is not to do it manually (or, in the way I've heard it most in my life, "to be able to change it without rebuilding").


> What I meant by weird was that OP's proposed "solution" to DI is to "do it manually", while the exact reason why we want DI in the first place is not to do it manually (or, in the way I've heard it most in my life, "to be able to change it without rebuilding")

No, the main reason we want DI is to change the behaviors based on what's passed. That's dependency injection. A framework passing the dependencies is automatic dependency injection, and that is what I'm advocating against.


> No, the main reason we want DI is to change the behaviors based on what's passed.

but which behaviours are passed depends on something external to the system - config files, etc. - so you need something that gets the information from somewhere and instantiates the correct class. And trust me, you don't want to write

    Protocol proto;
    if (config.equals("protocolA"))      // == would compare references in Java
      proto = new ProtocolA();
    else if (config.equals("protocolB"))
      proto = new ProtocolB();
    else if (config.equals("protocolC"))
      proto = new ProtocolC();
    else if (config.equals("protocolD"))
      proto = new ProtocolD();
    else
      throw new IllegalArgumentException("unknown protocol: " + config);

    return new MyObjectWithDependencies(proto);
especially when your system supports >50 protocols, and your object also needs a logger which can itself be of 12 different kinds, a file accessor which can mmap or not according to configuration & OS, etc.
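
To be fair, you can shrink that chain with a hand-rolled lookup table; here's a sketch reusing the made-up Protocol types from above. But at that point you've written a tiny DI container, you'd need one such table per kind of dependency, and keeping them all in sync with the config is exactly the wiring a framework automates.

    import java.util.Map;
    import java.util.function.Supplier;

    final class ProtocolFactory {
        // One lookup instead of an N-branch if/else chain.
        private static final Map<String, Supplier<Protocol>> REGISTRY = Map.of(
            "protocolA", ProtocolA::new,
            "protocolB", ProtocolB::new,
            "protocolC", ProtocolC::new,
            "protocolD", ProtocolD::new);

        static Protocol forConfig(String config) {
            Supplier<Protocol> factory = REGISTRY.get(config);
            if (factory == null)
                throw new IllegalArgumentException("unknown protocol: " + config);
            return factory.get();  // instantiate only the configured implementation
        }
    }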


Imo, dependency injection frameworks are distinct from the concept of DI. They do not represent DI itself, which is just a design pattern.


Agree, it reduces boilerplate and prevents you from shooting yourself in the foot when it comes to writing testable code.


It's less of a walled garden than OSX, which isn't even virtualisable.


It actually can be virtualized, but it isn't as easy as Windows. I'd argue that neither of them should be used in government though.


I'd posit that the primary reason it's not as "easy" to virtualize macOS is that VirtualBox, VMware, et al. haven't poured enormous sums of money into optimizing for macOS performance, as they have for Windows.

