
This is roughly what Django does instead, and I much prefer the Rails way. Though that might just be because I started doing MVC development with Rails.

I find it easier to spot common abstractions at the models/views/controllers level than at the resource level, and that's what frustrates me about Django's app-based development.


We keep finding new ways to make fission safer and to reduce the risk of a runaway scenario, but I'm pretty sure we'll never reach zero risk. We might reach it on paper, but human error can always happen. Chernobyl's operators thought their reactor design had zero risk of exploding; current reactors are much safer, but I'm pretty sure the risk still isn't zero.


With modern reactor designs, the inherent safety mechanisms mean that humans don't need to be in the loop to reduce reactivity or remove decay heat.

Here's an example from Argonne National Laboratory:

> In the first test, with the normal safety systems intentionally disabled and the reactor operating at full power, Planchon's team cut all electricity to the pumps that drive coolant through the core, the heart of the reactor where the nuclear chain reaction takes place. In the second test, they cut the power to the secondary coolant pump, so no heat was removed from the primary system.

"In both tests," Planchon says, "the temperature went up briefly, then the passive safety mechanisms kicked in, and it began to cool naturally. Within ten minutes, the temperature had stabilized near normal operating levels, and the reactor had shut itself down without intervention by human operators or emergency safety systems."

https://www.ne.anl.gov/About/hn/logos-winter02-psr.shtml


There is obviously a difference between:

- There are many passive systems that work in concert to prevent the fission material from having a runaway chain reaction that continues on its own,

and

- It is literally impossible within our understanding of physics for the reaction to continue without the continued application of power to the reaction chamber.

No matter how 'safe' the former gets, it's just asymptotically approaching the latter. There will always be more assumptions and caveats involved in preventing a self-sustaining reaction from continuing.

In particular, regarding that article, a lot seems to rest on the sodium cooling pool still being present while something else goes wrong. So what if an earthquake breaks it open and drains it? Or a bomb?


Their risk isn't zero, but even in the 60s the inventor of the PWR said it was not a very safe design and that designing inherently safe systems was far better.

Of course you never have zero risk. That's literally impossible, and not a standard you would apply to anything else in human existence.

The fact is, you can design nuclear power plants so safe that the chain of events you'd have to come up with to get any radiation outside the reactor safety boundary is so ridiculous that the probability of it happening is barely measurable.

Sure, if you have human error and three black swan events on the same day, the risk is not zero.

But even if you come up with these crazy events, the damage from them would be far smaller than Chernobyl, and Chernobyl itself was far less damaging than popular imagination holds.

The risk that somebody dies during construction of the reactor containment building is probably 100,000x higher, but nobody argues that we should stop building large structures.

> Chernobyl operators thought their reactor design had zero risk of exploding, current reactors are much safer but I'm pretty sure the risk isn't zero.

This is where we are with nuclear. Every debate goes back to Chernobyl. Again, in no other area do we say 'well, the Soviets thought this in the 60s, therefore we can never move past it'.

There is fundamental physics and chemistry involved, and just because some Soviet operators didn't know it doesn't mean it's unknowable.

Humanity should be living in the nuclear age. Climate change would not even be a thing if everybody had done what the French did in the 70s. And we would be much further along in space exploration if the whole world were not so reluctant to use anything nuclear.


I'm of the mind that randomness is only an approximation of complexity: if you're omniscient, randomness doesn't exist and everything is deterministic.

However, it seems to me the determinism vs. randomness debate is of the same order as the existence of god(s): we will never be able to prove or disprove it, and we can't make useful predictions from the answer either way, so we might as well agree to disagree and move on.


Do you grow your own food? Where does your zero trust end?

Science is very difficult to cheat on such high-visibility issues. There is a huge scientific consensus on climate change, from experts of all countries and diverse backgrounds. For it all to be a huge conspiracy seems like a pretty big leap of faith; that the experts are genuine is the simpler explanation.


Science is incredibly easy to cheat. Everyone is siloed and beholden to funding.

That very few people speak up about it (why would they, when their jobs are at risk) means nothing.

A really good example of someone speaking up, from a 'trusted' organisation, is this editorial in the British Medical Journal.


aaand here's the link, sorry!

editorial in the BMJ about covid: https://www.bmj.com/content/371/bmj.m4425


Your blog post might benefit from further explaining the following line: "Special relativity says that the rocket has a different coordinate frame than the space stations. In particular the x axis, which corresponds to all the points in space at the same rocket-time, is not the same as the x-axis for the space stations."

I'm on the lookout for an explanation of why FTL = time travel that I'll finally understand. However, with my limited knowledge of special relativity I can't make sense of the whole argument in your blog post.
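
For what it's worth, the part I can reproduce is the textbook simultaneity argument (my own back-of-the-envelope sketch, so it may well be off). With the Lorentz transformation

    t' = \gamma (t - v x / c^2),    \gamma = 1 / \sqrt{1 - v^2 / c^2}

a signal covering a distance L at speed u > c in the stations' frame leaves at event (t, x) = (0, 0) and arrives at (L/u, L). In a rocket frame moving at any v with c^2/u < v < c, the arrival time becomes

    t'_arrival = \gamma L (1/u - v/c^2) < 0

i.e. the signal arrives before it was sent in that frame. Where I get lost is the next step: how chaining two such devices in different frames turns "earlier in some frame" into a reply that reaches you before you sent the original message.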


Exactly! Why don't we suppose traveling/observing through the wormhole has the same time dilation?

I first heard of FTL = time travel a couple of years ago, and I would love to be able to argue it out with someone knowledgeable in order to finally understand it. I hoped this HN thread would have answers, but all the answers here seem to have holes in them.

I'm starting to wonder if FTL = time travel isn't like Schrödinger's cat: a hypothetical thought experiment terribly misunderstood by the masses.


If travel through the wormhole takes exactly as long as travel via space, what kind of wormhole is it?


Looks like the first tidal power plant was built in 1966 in Brittany: https://en.wikipedia.org/wiki/Rance_Tidal_Power_Station.

It was also the largest for 45 years until a bigger one opened in South Korea: https://en.wikipedia.org/wiki/Sihwa_Lake_Tidal_Power_Station.

Not sure if someone has an analysis of why those aren't more widespread? The existing ones are pretty massive and in very specific spots; maybe smaller installations aren't cost-effective enough.


Perhaps, like democracy, it is the least bad option.


Are people really calling this specific model of git branching THE Gitflow model? For me this was one gitflow among others, a gitflow being a git workflow.

In the original article "A successful Git branching model" I don't see a single mention telling people they should call this specific model Gitflow.

For me the word gitflow has the useful meaning of "a git branching model". Am I in the minority? Using it as the name of one specific model seems like a waste of the word.

As for the article, sure, this gitflow is one of the most shared images on the subject. However, I recommend that every team use whatever gitflow makes the most sense for them and their project. I was not aware that our industry had a problem with teams cargo-culting that specific git workflow.


Interesting. I have only ever heard "gitflow" used to refer to the specific model described in https://nvie.com/posts/a-successful-git-branching-model/ . The author also wrote a tool, "git-flow", to assist with this model https://github.com/nvie/gitflow .

I have heard the terms "git branching model" or "git workflow" used for the concept that you call "gitflow".


nvie/gitflow is a long-dead project. There's an active fork at https://github.com/petervanderdoes/gitflow-avh


Thanks for the link; at the very least, Jeff Kreeftmeijer was already using the name git-flow for that specific model back in 2010. I somehow missed all of that.

Plus nvie.com updated its 10-year-old post with a notice yesterday and also uses the name git-flow there.

I guess I'd better use the term git workflow in the future.


+1. We use as many branches as make sense for a particular project. Simple ones might have just master (plus temporary feature/bugfix branches). Only when we need an LTS version do we introduce another branch for it.

I always thought this also counted as Gitflow, but after reading this article I'm not so sure anymore.


> There’s absolutely no use case out there, where I search for something, and then I say, hey, I believe my search result will be item #3175 in the current sort order.

I actually have a use case: when the search filtering functionality is lacking or overly complicated to use. For example, with Gmail, if I can't be bothered to look up how date filtering works, then since I receive emails at a fairly constant rate I can roughly guess that item #3175 will be around the date I'm looking for.

I strongly disagree with anyone who thinks Facebook "got it right" with their timeline. In my experience it's very easy to see something interesting on the Facebook timeline, only for it to refresh and be lost forever. It can be very frustrating not to be able to get a consistent timeline.


Also from the article: "when was the last time you googled for SQL and then went to page #18375 to find that particular blog post that you were looking for?"

The key difference in use cases here is that one is searching my stuff, and one is searching everywhere. Nobody wants page #18375 of everything. People do occasionally want page #3175 of their own stuff.


Even when I'm searching other people's stuff, I don't always just want to browse through it linearly. Sometimes I want to skip around. Meaningful keys such as timestamps would be better than just page groupings, but page groupings, so long as they are stable, are still useful. Sometimes I do want to skip to page #18375 because I know that I've already browsed pages #1 through #18374 on a previous visit and I want to start where I left off. You can't typically do that with infinite scroll.


Unfortunately, the "so long as they are stable" constraint is incompatible with the idea of a mutable database backend. SQL pays a massive performance price for OFFSET, and its results are still neither fast nor stable.
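
To make that concrete, here's a toy sketch (SQLite through Python, my own made-up table, not from the article) of the trade-off: OFFSET scans past every skipped row and drifts as rows are inserted or deleted, while keyset/"seek" pagination is cheap and stable but can only step forward from a known key.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT)")
    conn.executemany(
        "INSERT INTO posts (body) VALUES (?)",
        [(f"post {i}",) for i in range(200_000)],
    )

    PAGE_SIZE = 50

    # OFFSET pagination: the engine still walks past all 158,750 skipped rows,
    # and concurrent inserts/deletes shift which rows land on which page.
    page = conn.execute(
        "SELECT id, body FROM posts ORDER BY id LIMIT ? OFFSET ?",
        (PAGE_SIZE, 3175 * PAGE_SIZE),  # "jump to page #3175"
    ).fetchall()

    # Keyset ("seek") pagination: remember the last key seen and filter on it.
    # This uses the primary-key index directly and stays stable under writes,
    # but it only supports "next page", not "jump straight to page #3175".
    last_id = page[-1][0]
    next_page = conn.execute(
        "SELECT id, body FROM posts WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, PAGE_SIZE),
    ).fetchall()

Which is roughly why many APIs expose an opaque cursor (the keyset) instead of page numbers, and keep OFFSET only for small, rarely-changing result sets.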


Which is why I browse Twitter chronologically. I hate when sites don't provide a way to edit the offset, e.g. by editing the URL.


I've had enough use cases like this to know that Gmail's number of pages is not accurate.


I think it isn't meant to be accurate. I'm not even sure it is stable across page refreshes.


Then why do they include it at all?


What matters is that there might be more search results, not their precise number.


I couldn't believe what I was reading; the Facebook timeline is easily the worst web UI out there.

The best part is when you follow a link from your timeline, then press the back button. With any normal website, you'd be back at the same position in the page where you were before. With Facebook, not only do I not get that, I get what seems to be a random position in the timeline.


Facebook having a timeline that is not stable/deterministic is a peculiarity of their implementation. The OP was referring to a simple timeline, which would not show such nondeterministic behavior.

