Hacker News | CariadKeigher's comments

Unless you have access to a mobile phone's baseband source code, you cannot really trust anything about its level of security.

This was discussed on HN a while back and comes up quite often:

https://news.ycombinator.com/item?id=10905643


Keep in mind that at best it would take maybe 1,000 years with current technology to get there with a probe or human-supporting ship. It would be highly unpopular, however, as it involves exploding nuclear bombs behind the craft to get it there that fast--that, and it would probably cost trillions to build the thing.


There's a $100 million effort to develop tiny spacecraft that are accelerated to 10%-20% the speed of light with ground-based lasers: https://en.wikipedia.org/wiki/Breakthrough_Starshot


Today at 2:55 EST Philip Lubin, who is involved with the Starshot project, will present some work on the study he is doing for NASA investigating the feasibility of this.

You can watch it live here: http://livestream.com/viewnow/NIAC2016


The craziest thing to me is that if we sent a generational spaceship there that was expected to take 1,000 years, it seems likely that better technology would allow the next generation of spaceships to get there much faster. So by the time the 1,000 year spaceship arrived, there would already be people there!


This is a known paradox, I forget what it's called though. The concept is that it's arguable that any long-term space travel is pointless because there will always be a faster ship surpassing it as the originating civilization improves its technology, and so on for that faster ship. So, it's irrational to ever launch anything...


Apple has adopted a similar philosophy with their MacBook Pro line of computers.


I haven't laughed so much in days. Thank you.


This is known as the incessant obsolescence postulate: http://arxiv.org/abs/1101.1066


>>This is a known paradox
>>So, it's irrational to ever launch anything...

That faster ship just won't appear out of the blue.

You need to launch slower ones to get to the faster ones.

It's like the original inventor of the car opting not to build it because someday there would be a Ferrari.


Improving the speed of a 1000-year ship may only require improvements in propulsion or structure (lighter ships). There are nearby incentives to create better propulsion and structure. We do not necessarily have to launch a 1000-year ship to create a 500-year ship.


But you don't have to send your people 1000 years away.


Yup -- This is precisely the answer to the question, so it remains an unresolvable paradox.


Seems like you'd reach a point where it becomes worth it even with continuous improvement. If the speed of light really is a speed limit, then you'll get to a point where the maximum theoretical improvement is still some small amount. If there's a way to go faster than light, then you'll get to a point where it only takes two seconds to get to your destination.


Even without the speed limit, you can expect to reach this point. Let's say your technology gets 10% faster every 25-year generation, you should launch a 200-year trip but not a 250-year one. We'll probably keep improving at that pace or better for a few iterations at least, but eventually space travel will be a stable technology, speed improvements will be rare and incremental, and 1000-year journeys will be justified.
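
To put rough numbers on that, here's a quick sketch under the stated assumptions (25-year generations, 10% speed gain per generation); with these particular round figures the break-even actually lands around a ~260-year trip rather than exactly between 200 and 250, but the shape of the argument is the same:

  # A trip that takes T years today takes T / 1.1**n after waiting n generations.
  def arrival(T, n, gen_years=25.0, gain=1.10):
      return n * gen_years + T / gain ** n

  for T in (200, 250, 500, 1000):
      n_best = min(range(200), key=lambda n: arrival(T, n))
      print(T, n_best, round(arrival(T, n_best)))
  # 200- and 250-year trips: launch now; a 500-year trip: wait ~7 generations
  # and still arrive ~70 years sooner; a 1000-year trip: wait ~14 generations.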


Right, barring time travel, once you can make the voyage at all, you'll always reach a point where it makes sense to depart on it. If you invent technology that takes a million years to arrive, then you have a maximum deadline of a million years. Past that point, even instantaneous travel won't be worth waiting for, if your goal is simply to get there as early as possible.


> So, it's irrational to ever launch anything...

Sounds like the interstellar version of Zeno's paradox.


On a tangent with this topic, you guys should treat yourselves to a short sci-fi story named "The Road Not Taken".


I just finished reading the entire novel series that Turtledove spun out of that short story concept. It's really good, well-researched and imaginative.


But technology that would be able to send spaceships significantly faster would probably cost significantly more, so it might not be worth pursuing given the cost.


Moore's law never stopped anyone from making chips with current technology.


Or maybe they'd rendezvous and bring them "up to speed", so to speak?


it would be better to just 'seed' the tiny starshot spacecraft with human embryos that could litter the planet and then develop on arrival.


Right, because human babies, let alone embryos, do so well on their own.


that's why we also send self-replicating nanobots along with them.


For their extensive experience in child rearing.


no problem, we train them with a convolutional neural net w/ examples from earth to provide for them. should suffice until about an age of 7 after which they'll just continue to use the nanobots to provide raw materials.


See James P. Hogan's Voyage from Yesteryear. It involves a post-scarcity civilization seeded in much the way you suggest.

https://en.wikipedia.org/wiki/Voyage_from_Yesteryear


Adults from later, faster spacecrafts could raise the babies.


Why wouldn't they just bring their own? Or make some?


we'll also send self-replicating nanobots


If you're limiting it to generational ships then there isn't much improvement to be made, we can still only accelerate at ~1G. The only variable is how long that acceleration can be maintained.



Maybe there could be a kind of factory-ship that was built to be self-improving, so during those 1000 years it would improve itself and increase its speed.


After we build the laser it might not cost that much more just to keep a constant stream of these guys going to the planet relaying their information back along the stream. Effectively giving us a streaming feed of what's happening on the planet.

A single probe in orbit could obviously do this but with this idea there's no way to slow down.


There's a way to slow down:

http://www.lunarsail.com/LightSail/rit-1.pdf

>The lightsail is built in two sections, an outer doughnut-shaped ring, and an inner circular section 30 km in diameter. This 30 km payload section of the sail has a mass of 71 metric tons, including a science payload of 26 metric tons. The remaining, ring-shaped "decel" stage has the mass of 714 metric tons, or ten times the smaller payload "stage". The central payload section of the sail is detached from the larger stage and turned around so that its reflecting surface faces the reflecting surface of the ring-shaped portion (see Fig. 4). At a time 4.3 years earlier, the laser power from the solar system was upgraded to 26 TW (there are 37 years to get ready for this increase in power). The stronger laser beam travels across the space to the larger ring sail. The increased power raises the acceleration of the ring sail to 0.2 m/s^2, and it continues to gain speed. The light reflected from the ring sail is focused onto the smaller sail, now some distance behind. The light rebounds from the 30-km sail, giving it a momentum push opposite to its velocity, and slowing it down. Since the smaller sail is 1/10 the mass of the larger one, its deceleration rate is 2.0 m/s^2, or 0.2 g. The light flux on the smaller sail has increased considerably, but it is only two-thirds of the maximum light flux that the sail can handle.


This is from the same people who announced the "Breakthrough Message", an open competition with a $1 million prize pool, whose details were "to be announced soon". [1]

The press release was made in July 2015, and there has been no communication about it since then. I'm not sure how seriously to take this group.

[1] http://www.breakthroughinitiatives.org/Initiative/2



Unless "The Moties" send one here, first :-)

Yet another comment plugging a story involving some novel interplanetary travel (this one involving a species from a red dwarf): https://en.wikipedia.org/wiki/The_Mote_in_God%27s_Eye


This should be the top comment! Not the guy poo pooing on the idea of checking out this planet.


(ignore what I said here)


Also at 10% the speed of light the probe zips by the planet in a fraction of a second. There's no slowing down in this scenario.


You can slow down - just that the probe won't remain intact. But perhaps that isn't necessary for the probe to be useful. We can also use these space probes to do geological and atmospheric surveys of the planet when they arrive.

Simply: we slam the probe into the planet, a few milliseconds after it transmits the final images. One gram going at 20% of the speed of light carries kinetic energy on the order of half a kiloton of TNT -- a small nuclear weapon's worth.

At a steep angle of entry, the huge entry glow will give a reading of the atmospheric molecular makeup. And if we can ionize some of the crust, massive space telescopes can get a spectroscopic measurement of the composition, four light years away.


Let's hope any alien race living there doesn't have the same idea and send a few probes to smash in to Earth.


That would be like Christopher Columbus, mid-Atlantic crossing, encountering a gunpowder-equipped Mayan navy going off to conquer Europe. Not likely.


now that's a way to say Hi to a foreign species! There's a good chance this would be considered an act of hostility, and if they checked the history of our current world, they would probably just decide to erase this dangerous species from the face of the universe for the greater common good.


I think we both know humanity's interstellar destiny is to become the murderous alien invaders we've faced down in so many movies.


There are ideas for slowing them down and even returning them back to Earth, at least in principle:

http://www.centauri-dreams.org/?p=31913

Nothing that could be implemented in any foreseeable future though.


Could they carry mini solar sails to slow down? Solar parachutes if you will.

Or if we sent them in single file several seconds apart they could each relay what they see to the ones behind them and then back to us. Effectively giving us a long exposure.


The laser is only on each device for 10 minutes. And there are 1,000 of them. So the hope is that at least 1 in 1,000 will make it.


if even a tiny fraction of those 1000 tiny probes hit that planet at 20% of the speed of light, could that pose a problem? it sounds like a shotgun approach, not so figuratively, using pellets loaded with plutonium.


The probes would each be only a few grams, so each would carry roughly 10^12-10^13 J of relativistic kinetic energy, or about half a kiloton to a couple of kilotons of TNT equivalent. This can be compared with the 15 kilotons for the (small, by modern standards) Little Boy atomic bomb. So it's probably noticeable for any inhabitants in the area, depending on how the energy is dissipated in the atmosphere, but still quite small on the scale of a planet.
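
For anyone who wants to check the arithmetic, here's a rough sketch assuming a 1-gram probe at 0.2c (the actual probe mass is only a design target):

  import math

  c = 2.998e8                              # speed of light, m/s
  m = 1e-3                                 # kg, assume ~1 gram
  beta = 0.2                               # v/c
  gamma = 1 / math.sqrt(1 - beta ** 2)
  ke = (gamma - 1) * m * c ** 2            # relativistic kinetic energy, joules
  print(f"{ke:.2e} J, ~{ke / 4.184e12:.2f} kt TNT")   # ~1.9e12 J, ~0.44 kt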


It should be noted that meteors explode with Hiroshima-like force fairly regularly within Earth's atmosphere, on the order of once a decade or so. It happens high up enough that nobody notices beyond an occasional light show. The meteor explosion which hurt a bunch of people in Chelyabinsk a few years back was about 500 kilotons.


And so many of those people were injured because they went to a window to see the spectacle, not knowing about the shock wave that was coming.

It made me realize that the civil defense films we watched in the '50s weren't so misguided after all:

https://www.google.com/#q=when+you+see+the+white+flash+duck+...


Indeed, that's pretty much what "duck and cover" was all about. There was even a school teacher in Chelyabinsk who apparently remembered her Cold War drills and had her students take cover, saving them from injury.


cool, good to know that raining down a bunch of probes in this manner would most likely be a pretty light show. still I wonder about the plutonium aspect of this?

it might be prudent to first consult with the legal experts and diplomats of the Galactic Council, to clear this type of activity with them first, minimize interpretation of this probing as a hostile interstellar act.


I don't think the plutonium would be enough to have any real effect, although it might irritate the inhabitants if there are any. Certainly, we've put far more plutonium into Earth's atmosphere by blowing up thousands of nuclear bombs in it, and we survived.

Clearing it with the Galactic Council definitely sounds like a good idea. Do you have their phone number handy? I seem to have deleted their contact info by accident.


>>So it's probably noticeable for any inhabitants in the area

An intelligent civilization with a 1000 year head start whose presence was just proven by a device that arrived at 10% the speed of light would pretty much scare any government on Earth.

I would assume it would scare the aliens there too.


This is the coolest project I've heard about in a while. Thanks for sharing!


I wonder what it would be like to be the Nth generation of guys-that-left-in-2016 finally arriving and be greeted by the descendants of the FTL guys that arrived 800 years before you.


This is the plot of at least one Star Trek: Enterprise episode.


Which one? I Netflix'ed my way through that whole show a couple years ago and didn't see any like that.

It does sound a little bit like "Space Seed" from TOS though, except I don't remember that ship having a particular destination. There was another TOS episode where they find a stray asteroid that turns out to actually be a generation ship inside. And there was some TNG episode where they find a ship with some 20th-century Earthers in cryogenic storage because they had just died of medical ailments. But I don't remember any ENT episodes like this, just some talk about "slow" Earth ships traveling at only Warp 1.5 or so, so that people lived their whole lives on them while on long-term trading missions (their helmsman came from one of these ships).


I'm pretty sure you're wrong and it was an episode of The Next Generation, although I've no idea what the title of the episode was... [0]

[0] I've watched the episode only once (I'm not a Trekkie :) ), but if I remember correctly a synopsis of the plot was that decades (at least...) ago an automated ship containing Klingons in cryogenic suspension was launched to colonise a distant planet; in the time following the launch of that ship, the Federation reached and colonised that planet using faster ships, unaware that the Klingon ship was on the way...

The crew of the Enterprise [D] has to try to figure out how to stop the Klingon ship from introducing the Federation colonists on the planet to the magic of orbital bombardment (a casus belli) without destroying the Klingon ship (a casus belli).


I think you're mixing up some details of that episode. The closest episode I can think of that matches that is "The Ensigns of Command", which involves a colony of humans that were isolated and underdeveloped due to interference from the planet's radiation. The colony was on a planet that technically belonged to the "Sheliak", a mysterious race the Federation had limited contact with, but who had decided to begin colonizing the planet and gave the Federation a short time span to remove the humans before they would eradicate them.


I looked it up, and the title of the episode was "The Emissary" [S2E20] (although, yes, either I remembered a few details incorrectly or the Wikipedia synopsis of the plot is incomplete).


... are you sure?

There's one episode where they visit a colony established decades previous by a warp 1 or warp 2 ship, but they don't beat the colonists to the planet.


no shortage of fiction covering that scenario...


Since we're pretty sure FTL is pure fiction and not possible in reality, that likely won't be an issue. Just because we don't know everything, doesn't mean we don't know anything.


We are, by no means, sure of that. There are numerous theoretical possibilities that fit current theory. Not least of which is the Alcubierre Drive: https://en.wikipedia.org/wiki/Alcubierre_drive


Mathematically compatible with GR (which is known to be incomplete) doesn't mean possible in reality; in fact, read your own source, it makes that clear:

> Although the metric proposed by Alcubierre is mathematically valid (in that the proposal is consistent with the Einstein field equations), it may not be physically meaningful, in which case a drive will not be possible.

That drive is nothing more than speculative science fiction.


Are there really numerous possibilities? I know about Alcubierre, what are the others?


Alcubierre's drive is a solution to Einstein's field equations which simply means the math checks out in theory for warp drive.

The resulting practical problem is that the energy requirements necessary for the solution are far greater than what we can feasibly achieve now or in the future (exotic matter's existence notwithstanding).

But the fact that a physicist was able to derive this metric (energy requirements aside) is significant. Given the history of science, I would not discount the possibility that someone else will come along in the future with another solution which lowers the energy requirements to something feasible. But we can't predict this.

But isn't it amazing that the math checks out at all? I find it inspiring...


I'm not so sure the energy requirements are all that high. Alcubierre suggested that the sort of exotic matter needed would actually be fairly easy to create. NASA's been running experiments hoping to measure it with inconclusive results:

https://en.wikipedia.org/wiki/Alcubierre_drive#Experiments

> In 2012, a NASA laboratory announced that they had constructed an interferometer that they claim will detect the spatial distortions produced by the expanding and contracting spacetime of the Alcubierre metric. The work has been described in Warp Field Mechanics 101, a NASA paper by Harold Sonny White.[5][6] Alcubierre has expressed skepticism about the experiment, saying "from my understanding there is no way it can be done, probably not for centuries if at all".

> In 2013, the Jet Propulsion Laboratory published results of a 19.6-second warp field from early Alcubierre-drive tests under vacuum conditions.[33] Results have been reported as "inconclusive".[34]


Wormholes are another. There's some debate whether or not the idea of creatable wormholes really exists in our physical models. But the idea of existing shortcuts through space time certainly does.


Wormholes aren't known to exist; they're theoretical implications of some math, not known things. It's entirely possible they don't exist at all.


Hence the "in our models".


No, but if we leave today with a trip time of 1000 years and in 100 years we invent a way to travel .2c, then the people on the ship with the new tech will arrive in 120 years instead of 1000. Don't need FTL tech to make generation ships a silly prospect.


The other thing you guys are missing is relativistic time dilation: get the ship moving fast enough and time will pass more slowly inside the ship than outside. The ship may take a century or two to get to the destination, but only a few years will have passed for the travelers.


You have to go really, really fast to get 100:1 time dilation (like, 99.99% c) -- probably fast enough to make interstellar travel hazardous in a "collide with a hydrogen atom, it hurts" kind of way.

Also, the energies involved are absurd.
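
If anyone wants to check the numbers, the Lorentz factor is easy to compute (a quick sketch; 99.99% c actually gives about 71:1, and 100:1 needs closer to 99.995% c):

  import math

  def gamma(beta):                       # beta = v/c
      return 1 / math.sqrt(1 - beta ** 2)

  print(gamma(0.9999))                   # ~70.7x dilation at 99.99% c
  print(math.sqrt(1 - 1 / 100 ** 2))     # beta needed for 100:1 -> ~0.99995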


They will still arrive after the faster ship.


Pretty sure? I'm pretty sure we don't know near enough to make any type of guess on that.


You're mistaken, and this opinion is what the second sentence was for; just because we don't know everything doesn't mean we don't know anything. FTL is likely not physically possible. That's not to say it's impossible, just that it's not likely given the vast number of things we do know about physics.


Maybe you'd get a nice little tech upgrade when you got there. And I'd hope that we'd at least develop some form of suspended animation to make things bearable.


To be honest, assuming you get there safely, I'd rather be in suspended animation on the slow ship than the fast one. I'd get all the heroism and send-offs of a pioneer, I'd have my name etched on a monument somewhere, I'd have the thrill of exploring something totally new. All that, except when I arrive all the brutal, dangerous, boring work of establishing a self-sufficient biome will already be done and I can jump immediately into the "boldly go where no one has gone before" part.


They'd just get picked up along the way.


Well, they can stop at the spaceship on their way back and then... I don't know


It's not quite that bad. Have you heard of fission-fragment rockets? Highly feasible, and travel times to Alpha Centauri on the order of decades, not millennia. Baffles me that no one talks about them.

https://en.wikipedia.org/wiki/Fission-fragment_rocket


Someone mentioned them in this discussion ~30 minutes before you posted.


Where? I'm ctrl-f-ing here and seeing nothing older than my comment.


OK, the previous ref was for nuclear pulse propulsion, which I'm sure you consider completely different.


They are extremely different. One is a highly impractical idea from the 1940s, and the other is a relatively recent and promising design that continues to be refined.


What's exactly impractical about a nuclear pulse drive - conventionally launched and activated outside the atmosphere? We know how to build that stuff safely to account for launch failures, it has been done with RTGs many times already.


Actually the 1000 year number comes from Project Orion in the 1940ties and 50ties.

https://en.wikipedia.org/wiki/Project_Orion_(nuclear_propuls...

An updated design from the 80ties calculated a time of 100 years:

https://en.wikipedia.org/wiki/Project_Longshot

And a nuclear fusion design is calculated to achieve 12% of light speed, thereby reducing time to reach the fourth nearest sun system in 46 years.

https://en.wikipedia.org/wiki/Project_Daedalus

So there are concepts that could make unmanned interstellar travel possible, even within a humans life span, it's just that it costs so much and the incentive is pretty low compared to the incentive countries had for getting objects into space. (primarily military incentives - get spy satellites and nuclear warheads into space to not fall back behind adversaries)

I believe that given a strong enough incentive humans could do it, no matter what current consensus is telling us.

Humans set out to work on reaching outer space without even having a design on how this could be achieved and we did it anyway.


At current rates, it would be 137 thousand years.

http://www.ucolick.org/~mountain/AAA/aaawiki/doku.php?id=how...


50ties - Fifty-ties -- Fifties - 50s / 50's


Thanks, English is not my native language. So what is correct, 50s or 50's?


Yeah, I figured, no worries. Your English is excellent, apart from this little oddity. Either way to put the 's' is fine, really, but I think with the apostrophe (50's) is more common.


Thanks a lot! Very kind of you.


Yeah, I smiled when I read "Just over four light-years" as if just around the corner kind of thing.

It might be just around the corner in space travel time, but those are quite a few light years still.


Also no one knows how to shield a spaceship from the interstellar medium https://en.wikipedia.org/wiki/Interstellar_travel#Interstell... over such distance.


Fission fragment rockets are well within current technology and can achieve delta-vs of 0.1c at reasonable payload fractions, getting us there in ~ 40yr.

Edit: Nuclear pulse propulsion is good for about half that (80yr, not 1000).


How does it slow down?


It doesn't, hence it's going to be used for flyby missions and not orbit/capture missions.

Many missions to the outer planets are flybys since we can't have enough fuel to actually slow down.


just curious but how would you even capture any usable signals when passing a planet at 20% lightspeed?


Well if you think about it we are capturing useful signals that move at relativistic speeds all the time.

Many galaxies we see are moving at very high speeds due to the expansion of the universe hence the doppler shift and we can take both optical and radio images of them.

I'm not an RF/optical engineer, but I would think it would be possible to capture and send some data back to Earth. Even if it's minimal, it still might be better than nothing / what we can get from Earth/our solar system; in the end we might only have to account for the doppler shift.

IIRC there have also been other tricks, like deploying very large sails and using them as drag chutes, or some mechanical trickery where you deploy a very small probe on something like a pendulum, so that most of the momentum stays with the main spacecraft and the probe is released with considerably less momentum than the rest of the craft.


The planet itself could slow the probe down quite rapidly. ;)


Would it be possible to drop a payload that would enter orbit?


The payload would need to slow down to be captured by the planet just the same. Normally you could try aerobraking and the like, but we are talking about relativistic speeds here.


To get better closeup pictures, if possible, aim the probe at the planet and live-stream pictures home all the way to impact.

Although I'm not sure these probes are at all steerable, either autonomously or remotely.


Highly unlikely, the mass of the probes will be on the order of several grams at most, and, depending on the trajectory relative to the planet, steering could require an equally impractical amount of fuel as slowing down would.


Livestream at 20% of light speed?


There may not be any significant atmosphere for aerobraking. This likely would not be known until long after the probe arrived.


We'd also have limited knowledge of the system itself. Without more accurate data about its objects and their orbits, it'd be almost impossible to plan out orbit insertions even assuming we come up with a means of slowing a probe from relativistic speeds.

Even if your probe had the computing power to run orbital mechanics calculations, there's no guarantee that it'd be in a position to actually make it work when it got there. Of course, that's ignoring the mass constraints involved.

But that doesn't discount the value of even 'just' flyby missions. The scientific benefit would be, literally, incalculable. And the data could help with followup missions, assuming we develop a drive system that could get there and slow down.


The StarShot probes are going to be about a gram. Not much weight to make a primary probe AND a payload.


slowing down from that kind of velocity is serious business. you're talking about braking from 75000km/s to ~8km/s and not blowing up in a spectacular explosion.


I just died laughing. I'm reading all the inspiring comments on how we actually could get there, and could totally see myself being on an engineering team that builds this thing, and 100 billion dollars later you ask that question and we all go: "Oh... oh crap." On a constructive note -- does anyone have a list / book of generally smart people getting sidelined by such things?


There's a book called It Looked Good On Paper full of things a bit like that, albeit on a smaller scale.


This comment left me flabbergasted. Great thinking.

What about orbiting the planet and using the lasers when the craft is orbiting towards us in blasts to slow it down gradually?


Magsails would work.


If you're going to use magsails to slow it down, why not use a magsail to get it there? The trip is basically symmetrical.

I suppose it's possible that the laser could produce a much more stable and precise trajectory, while a magsail would just slow its descent towards or accelerate it away from the sun. But I think it's more likely to be a case of the materials science and energy density being in favor of the laser method over sails.

What if we used lasers to send a fleet of laser ships with powerful one-time-use chemical lasers, then the front set of laser ships fired the same kind of beam back at the rearmost ship to slow it down?


> If you're going to use magsails to slow it down, why not use a magsail to get it there? The trip is basically symmetrical.

If you launch a strong magnet in interstellar space, it will slow down relative to the plasma by deflecting charged particles.

In order to speed up, you need to spend energy.

The symmetry is broken between accelerating and slowing down relative to interstellar plasma (Edit: or solar wind).


Ah, I had neglected the effects of interstellar plasma.

I assumed the magsail operated in a similar fashion to a solar sail, depending on the solar wind. If the destination's solar wind was sufficient to slow us down, I expected that the Sun's solar wind would be sufficient to speed us up.

Thanks for the explanation!


Imagine the political implications of choosing who will go onto that ship, as well (assuming the intent is to colonize the world and fill it up with humans)


Telephone sanitisers.


A poet.


This is a (likely) reference to Contact (1997), with Jodie Foster.

Some celestial event. No - no words. No words to describe it. Poetry! They should've sent a poet. So beautiful. So beautiful... I had no idea.


Only the richest, elites. I say we ship them immediately! /s

Let's get real, they wouldn't go anyway. It's too good for them here where they have people to do things for them.


> Only the richest, elites.

Would they actually want to go? If we are talking about a 1000 year voyage, and not assuming a major breakthrough in cryogenics or longevity, signing up for that trip means you are going to spend the rest of your life on that ship, mostly outside but near our solar system.

I don't see that being particularly enticing to rich elites.

I don't remember where I read this so I cannot properly give credit, but I saw an interesting variation on the generation ship.

The conventional approach is to send a large crew, whose job is to operate the ship, reproduce, and raise their kids to take their place, generation after generation until the ship arrives and they become colonists.

The variation would be to start with a much smaller crew and large collection of frozen embryos. You make use of the embryos when you arrive to build up the population to full colony size.

The advantage of this approach is that since there are fewer people during the trip, you have more capacity for supplies. You can better equip the ship to deal with unforeseen problems.

For instance, suppose taking the embryo approach, you can get it down to a crew of 6. You'll have 12 when the crew is overlapping with their kids. Call it 18 if the crew's parents have not yet died when the crew has their kids.

Suppose each crew member needs 3000 calories per day. Then on a 1000 year voyage, you need 18 people x 3000 calories/person/day x 365.2422 days/year x 1000 years = 19.7 billion calories.

I have a protein bar by my desk at the moment. It is 190 calories, and is about 125 mm x 30 mm x 20 mm = 75000 mm^3. So, 19.7 billion calories x 1 bar/190 calories x 75000 mm^3/bar gives a volume of 7.8 x 10^12 mm^3. Stored in a cubic storage container, this would require a container with an interior length, width, and height of 19.8 m.

The "small active crew, everyone else a frozen embryo" generation ship could start out with enough food on board to last the entire voyage, and so would not need to raise food onboard. That alone should greatly simplify things, and greatly improve the chances of making it. Of course they probably would still grow food, but now it would be for added variety and flavor, not a necessity.

(I'm not going to do the calculation to see if they could start with enough water for the whole trip. Water is very bulky and we use a lot of it, so my totally uneducated guess is that it would take too much space. However, I believe that efficient water recycling in a closed environment is something we know how to do very well, and so water should not be a problem).
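
A quick sanity check of the food arithmetic above (food "calories" here meaning kcal, and using the protein-bar dimensions as the unit of storage):

  total_kcal = 18 * 3000 * 365.2422 * 1000      # ~1.97e10 kcal over the voyage
  bars = total_kcal / 190                       # 190 kcal per bar
  volume_mm3 = bars * (125 * 30 * 20)           # ~7.8e12 mm^3
  side_m = volume_mm3 ** (1 / 3) / 1000         # cube side in metres, ~19.8 m
  print(round(total_kcal), round(bars), round(side_m, 1))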


The problem is that these people are going to be living out their entire lives on the ship. While you can maintain genetic viability by supplementing from frozen stock, maintaining a viable culture and a society that the caretakers will actually want to live in is another thing. Forcing anyone to live a culturally and socially impoverished life is cruel.

I would suspect you would want to have at least 10-20 people in each 5 year age bracket. Then people will have a fighting chance at developing their own social lives and maybe even their own culture.


That's a lot of generations of inbreeding before the approx three descendant females of childbearing age get to unfreeze the embryos and start widening the gene pool though, assuming they're up for the task of being the surrogate mother for dozens of other people's kids...

tbh I think storing enough food for the journey is going to be the least of all problems


You could do a mix of crew replacement by breeding and crew replacement by reviving frozen embryos to keep the genetic diversity up.

Or maybe an all female crew that does crew replacement either using frozen sperm or frozen embryos, and selects for female replacements.

When the ship arrives and it is time to start the colony, I don't think you'd try to grow to thousands of people quickly. I think you'd want to go slow early to make sure you understand your new environment. Maybe 12 years out, the crew switches from 1:1 replacement to 3:1. Sticking with 6 as the main crew, plus possibly up to 6 of the crew's parents still alive, plus 18 kids. I think you'd want to spend a few years based on the ship studying the planet and conducting research expeditions to figure out if the planet really is suitable for colonization and figure out dangers that unmanned probes and study from Earth may have missed. When that is done, the kids should be 18 or so, and you can start the colony with them and with their grandparents, with the main crew staying with the ship to provide support. That would give 18-24 people on planet attempting to live there, but not needing to be self-supporting yet because of the ship.

In a few years, the colony population should start naturally growing. If the babies do OK, people can be encouraged to have bigger families, with one or two per family being from the frozen embryos and the rest produced the old fashioned way.

> tbh I think storing enough food for the journey is going to be the least of all problems

Yeah, there will be a lot of problems.

Many of the hard ones will not even be technical. For instance, you'd want to have some way to prevent what happened to a colony in Larry Niven's "Known Space" universe. When the colony ship arrived the crew decided to set up the colony so that the crew was the ruling class and the colonists essentially serfs.

The ship in that Niven story wasn't a generation ship. Crew and colonists were cryogenically suspended for the trip, with the crew being automatically revived when the ship arrived. I suppose one advantage of the traditional generation ship is that it has some protection against that scenario because during the trip everyone is crew.


There's no reason they have to wait to arrive to start using the embryos.


I think you're conflating a few things here:

1.) A probe and a human spaceship are vastly different problems. Since all a probe really needs is electricity to sustain itself, you could get away with a tiny payload and some long lasting radioactive energy source, light sails or even sending the energy from earth's orbit. Such a thing would be either slow and cheap or fast (a few percent of light speed) and expensive, but not both at the same time and I doubt it would be in the trillion dollar range whatever you do.

2.) I completely agree that sending humans would currently not be feasible within a single nation's budget and the technology for that is still at the very least decades out (cryogenics, EM shielding, better propulsion systems, using mass from cheaper solar system bodies than earth etc.).

3.) 1000 years is what we'd need with conventional current technology. The theoretical limit for a nuclear pulse propulsion drive is 20% of light speed if you want to brake, or 40% for a fly-by.


Well using laser propulsion, we might be able to get a very lightweight probe up to 1/4 c using technology that isn't too far off.[0]

Coincidentally, the group working on this will be presenting some of their most recent work on this at 2:55 EST today. You can watch that live here[2].

[0] http://www.deepspace.ucsb.edu/projects/directed-energy-inter...
[1] https://www.nasa.gov/sites/default/files/atoms/files/2016_sy...
[2] http://livestream.com/viewnow/NIAC2016


Don't forget that halfway there you have to turn it around and start slowing down.


The bigger deal is: how do you deal with the ablation problem at those speeds? Given the likely density of H in interstellar space, erosion is your primary problem.


Add the Bussard ramscoop, magnetically deflect the atoms into an accelerator and expel them to add to your propulsion.


What about a solar sail[1] mixed with an ion engine[2] and nuclear power[3]? With all those things combined, you would constantly be accelerating for many years.

[1] http://www.planetary.org/explore/projects/lightsail-solar-sa... [2] http://www.nasa.gov/centers/glenn/about/fs21grc.html [3] https://en.wikipedia.org/wiki/Radioisotope_thermoelectric_ge...


I'm interested in NASA's work on the Alcubierre drive [1]. Still likely to require much more energy than we'll be able to produce in the near future, but 100 years ago we were just getting a firm handle on the basic mechanics of flight.

[1] http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/2011001...


The Alcubierre drive is just a solution to the equations of general relativity that most probably can not be achieved in practice. Sure, maybe one day we will have drives like that, but we do not have any idea currently that even remotely resembles a warp drive permitted by the laws of nature.


That would require us to create matter with a negative energy density, which has never been detected and is likely impossible.


Still fiction; it relies on magic, i.e. things not known to actually exist.


Magic operates in ways we don't understand. I think we understand negative energy density, we just don't know if it's possible in reality.


> Magic operates in ways we don't understand.

Witches would disagree. :)

> I think we understand negative energy density, we just don't know if it's possible in reality.

Exactly as in magic. When you propose things are possible only if you had some X material with the magical property necessary for the thing to work, that's speculative fiction. Especially when those constructs only make sense mathematically. Math isn't physics, but physics is math; there's an important implication there.


According to Neil deGrasse Tyson, you could send a tiny, pocket-sized payload and get it there in only a few decades. So if we just want a couple snapshots and not the sort of thing we typically send to orbit planets, it could be done in a reasonable amount of time.


Why would it be unpopular? On the scale of the space involved, any "pollution" would be literally inconsequential.

On the cost, I agree and think we have much bigger fish to fry on our own planet than to spend huge sums to discover what is in all likelihood a barren rock.


Frying every piece of electronics on the ground below the ship when you fire it up, and every satellite visible above the horizon, would be bad PR.

You could potentially boost it far away from Earth by more conventional means before turning on the nukes, but suddenly it becomes a far larger and more difficult thing.


Oh please.

Any interstellar ship like this is not going to be built on Earth's surface, it's going to be constructed in space somewhere. Building a craft that large on the surface is far more difficult than just assembling it in zero-g from components, since the stresses of leaving the gravity well through the atmosphere on a large object are huge.


The point is, not only does it need to be built in space, but it needs to be built (or moved) pretty far away from Earth before you light it up. You can't just build it in Earth orbit. And even that would be well beyond modern capabilities. A ground-launched Orion could be built now, but constructing an interstellar ship in solar orbit will require a lot more work on space access.


A ground-launched Orion could be built now, but it'd make a mess when it launches, and it'd probably be much too small for a useful interstellar mission anyway, because of that scaling problem I mentioned. The stresses are just too high, and it's pointless because you don't need a ship that rugged in space, only for transiting planetary atmospheres.

I don't think we have the capability now to build a sufficiently large ship on the ground anyway, even if we decided to say "screw it, we don't care if it's wasteful to massively overbuild this thing"; our materials science probably isn't up to the task. So we need to learn to build things in space anyway.

No, we don't really have this capability right now. But we need to assume we'll have it when we need it, and we need to work towards it, and not try any kind of missions requiring it until we do. Thinking about interstellar missions right now is really putting the cart before the horse; we haven't even gotten manned missions beyond the Moon, and even those were pretty simple (walk around, hit some golf balls, drive a rover around), not anything involving real work such as building a habitat or serious excavation or mining.

This is why I think all this talk about going to Mars is silly too; we need to be concentrating on closer things, like near-Earth asteroid retrieval and prospecting, and building a Moon base, and figuring out how to mine materials and build larger ships offworld. We need a bigger ship to go to Mars, not some little tin can that you can stick on top of a rocket; something the size of the ISS would be good, because the crew will have to be trapped in it for months, and they need stuff for landing on the surface and doing real work there. You can't do all that with something the size of the lunar modules we launched on the Saturn V.


Just build it in space.


Maybe wait for space elevators?


Didn't Project Orion determine that it's theoretically feasible to accelerate to not-insignificant fractions of c using NPP? If your journey could average even 0.05c, you can potentially make it within a human lifespan.


I thought the issues were:

  - navigation at any non-trivial fraction of *c* (not running into something that would obliterate the vessel)
  - slowing down and actually arriving where you wanted to and not overshooting it or stopping .5 LY away


About slowing down, it's pretty predictable. The idea is even to run the ship in constant acceleration.

I don't think you'll want to actually navigate that ship. It's a point and go task. But if needed, you rotate the ship and keep accelerating. But I have no idea on how much you'd be able to fix your route after you discovered it is wrong.

I think the main issues are how do we make a ship where people can live for decades? And how do we launch it from the ground?


I think speed is only one part of it; you'd be dead from radiation over such a long period of time unless you were surrounded by lead.


Proxima Centauri is 4ly away. New Horizons (the Pluto probe) was travelling at 15.73 km/second (just over 34,000 mph).

You're looking at closer to 75,000 years - not 1,000 years - to reach Proxima Centauri.

E:

Did actual maths. Closer to 75,000 not 100,000.
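
For reference, the back-of-the-envelope version of that calculation (using 4 light-years as above; Proxima is really about 4.24):

  ly_km = 9.461e12                       # km per light-year
  speed_kms = 15.73                      # New Horizons' cruise speed, km/s
  seconds = 4 * ly_km / speed_kms
  print(seconds / (3600 * 24 * 365.25))  # ~76,000 years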


This whole conversation reminds me of Europeans who have never been to the states thinking of flying over for a week, renting a car and visiting the Florida Keys, Times Square, the grand Canyon and Disneyland.

At 34k MPH it would take 75 Millenia to reach our literal stellar next door neighbor.

Boggles the mind how vast and empty space really is, when you think about it


As you mention - comprehending how large/small countries are is tough for some people. Planet-size differences even more so. Many probably don't realize how big the sun is! Then you have to comprehend that on a galactic scale - our sun is really, really tiny [0].

We exist on a tiny speck of dust; inside of a solar system that is no larger than a tiny speck of dust; inside of a galaxy that is no larger than a tiny speck of dust; inside a supercluster that is only a tiny speck of dust.

[0] http://i.imgur.com/DUzDo3k.gifv


Every time I try to comprehend the sun as it actually is -- its size and composition -- I end up completely boggled and unnerved. A vast and uncaring ball of plasma, mostly hydrogen, supporting billions of years of fusion reactions, so big that the orbit of the planet I live on causes only the slightest wobble in its position; it's no wonder the ancients worshiped the sun. There's nothing about it I can begin to grasp except by analogy.



I'll just leave THIS here for you

https://www.youtube.com/watch?v=QgNDao7m41M


Thanks to you, and to the GP. I do enjoy the analogies! I think the absolutely brain-crushing thing about objects of astronomic magnitude is trying to reverse the log-transform we use to make sense of them. For me it produces a sense of vertigo, like standing at the top of a cliff.


Okay. . . (humbly slinks away with tail between his legs)


that video totally blows my mind, I must have watched it a dozen times.


Yeah, it's pretty damn epic.


I did some rough estimation on a map recently and decided that driving from Seattle to Miami was only slightly shorter than from Paris to China.

This is a bloody big country.


New Horizons is a traditional design with chemical propulsion. Ion drives should hit ~5.5x that speed fairly easily. (http://www.extremetech.com/extreme/144296-nasas-next-ion-dri...)

A larger issue is that RTGs are not useful on a very long timescale.

ITER style fusion is likely the best power source for such missions and should hit ~1-10% of light speed fairly easily. But, building something that large is a major issue.

On the upside, we have already gone 18.1 light hours, 4.2 light years is not an unreasonable jump.


~5.5x that speed is still about 13,636~ years. Much better but still not very realistic....

18.1 light hours is about 3/4 of a light day, which is roughly 1/2000 of 4.2 light years, or in other words: about 0.05% of the way there. Going the remaining 99.95% is a massive jump!


Can you link to more information?


Not the best link but here's something to chew on:

http://www.universetoday.com/15403/how-long-would-it-take-to...

>However, despite these advantages in fuel-efficiency and specific impulse, the most sophisticated NTP concept has a maximum specific impulse of 5000 seconds (50 kN·s/kg). Using nuclear engines driven by fission or fusion, NASA scientists estimate it could take a spaceship only 90 days to get to Mars when the planet was at “opposition” – i.e. as close as 55,000,000 km from Earth.

> But adjusted for a one-way journey to Proxima Centauri, a nuclear rocket would still take centuries to accelerate to the point where it was flying a fraction of the speed of light. It would then require several decades of travel time, followed by many more centuries of deceleration before reaching its destination. All told, we're still talking about 1000 years before it reaches its destination. Good for interplanetary missions, not so good for interstellar ones.

There's talk of other drive systems being able to pull it off, but this is the only one that has actually been tested, though never built to scale.


He's probably thinking of Freeman Dyson's Project Orion: https://en.wikipedia.org/wiki/Project_Orion_(nuclear_propuls...


http://breakthroughinitiatives.org/Initiative/3 http://www.spaceflightinsider.com/missions/stephen-hawking-r...

It says the probe would be able to get there in about 20 years, travelling at 20% the speed of light; that's around 37,200 miles per second.

There's going to be a relativistic effect; I think 20 years from our perspective will be slightly shorter from the probe's point of view?
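
(For what it's worth, the dilation at 0.2c is small -- a quick check:)

  import math
  gamma = 1 / math.sqrt(1 - 0.2 ** 2)    # Lorentz factor at 20% of c, ~1.02
  print(20 / gamma)                      # ~19.6 years pass aboard the probe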


I'm reminded of the books Illium and Olympos by Dan Simmons.


I've read them--- how and why are you reminded of them by this story?


I was referring to the comment I replied to.


<Obligatory flamewar-starting "Em Drive" comment>


You're assuming that they have a vested interest in holding their Bitcoins for some greater good.


I don't know how you came to that conclusion.

My assumption is that they're selfish assholes who want to successfully launder stolen Bitcoins. Holding on to Bitcoins and laundering them through tumblers gradually would allow them to do that. But it seems like every time a thief steals a large amount of Bitcoin, they try to put it through a tumbler all at once or come up with some other scheme that doesn't work to launder it all at once.


As part of a presentation I did at a local OWASP chapter, here are some numbers based on just using CPython's hashlib to process some 14,000,000 passwords:

Intel Xeon E5-1620 3.6 GHz: SHA: 8.16 seconds, SHA256: 11.01 seconds, MD5: 8.7 seconds

AMD FX-8320 3.5 GHz: SHA: 10.63 seconds, SHA256: 13.49 seconds, MD5: 10.06 seconds

Intel Celeron N2840 2.2 GHz: SHA: 32.4 seconds, SHA256: 39.75 seconds, MD5: 28.95 seconds

Intel Pentium M 1.7 GHz: SHA: 37.98 seconds, SHA256: 48.12 seconds, MD5: 34.49 seconds

SHA512 isn't going to make it much better.
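
A minimal sketch of the kind of timing loop involved (this is not the exact harness from the presentation, and the 14M-entry password list is stood in for by generated strings):

  import hashlib, time

  passwords = [f"password{i}".encode() for i in range(1_000_000)]  # stand-in data

  for name in ("sha1", "sha256", "md5"):
      digest = getattr(hashlib, name)
      start = time.perf_counter()
      for pw in passwords:
          digest(pw).hexdigest()
      print(name, round(time.perf_counter() - start, 2), "seconds")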


There are several services (including one run by me).

https://canar.io (mine)

https://haveibeenpwned.com/

Mine lets you free-form search whereas HaveIBeenPwned is there for searching just e-mail addresses.


Yours is fun, in that it's more free-form, but HIBP seems to cover more ground.


Any developer today that is developing an application and isn't using something like Argon2, Bcrypt, or Scrypt should be considering a plan to move away from whatever they're currently using -- yesterday. There is no reason to be using anything less than those three, and continued use is, in my mind, negligence.

If at all possible you shouldn't be storing passwords to begin with and instead relying on another service for authentication.

This should be the takeaway from this article.


> If at all possible you shouldn't be storing passwords to begin with and instead relying on another service for authentication.

Should we all be using "Login with LinkedIn", then?

Passwords are always difficult to deal with even when using bcrypt. Who knows if bcrypt will still be considered secure in 5 years? How long would it take to implement a change which updates the hashing algorithm for new logins while still using the old algorithm for old logins? When should you erase all passwords from inactive users who haven't logged in and thus still use the old algorithm? (If you are interested in this problem, Django's user model uses a pretty straightforward and good approach[1]).

Outsourcing them is not the answer. It is a good idea to add that for the user's convenience but I hate it when websites only offer the option to login with "your" favorite social media. But even then, by outsourcing the passwords, you are risking your users' privacy by giving them to Google/Facebook/etc. This even compromises the privacy of users who are not using Facebook for authentication, because Facebook can see that user X visited your website (and sometimes even all URLs from that website you have visited). This is because those "Login with" and "Like" buttons always hit Facebook's and Google's servers with every webpage.

[1]: https://docs.djangoproject.com/en/1.9/topics/auth/passwords/

Edit: Forgot the link, thanks!


> Outsourcing them is not the answer

It very much is, if you're outsourcing to someone who can do it with greater competence than the average team can. Keeping current on the crypto, designing with the ability to sunset algorithms in mind, continuous pen testing, investing in physical security/network security/HSMs/you name it definitely isn't cheap or easy. Unless you're in the business of doing _that_ you're almost certainly better off having someone do it for you.

That said, I'm with you on the social logins front. I have/had? hope for OpenID Connect as an alternative so it would be great if someone neutral like Mozilla jumped on the bandwagon.


You are right, if the company you outsource your task to is actually better than you. In LinkedIn's case, outsourcing was the wrong decision because they used pretty bad "tools". You should also only outsource if you trust the other company to be "competent" and protect your interests. For instance, I would have trusted Mozilla with their OpenID alternative but not Google, Facebook, and LinkedIn (though I'm pretty sure Google knows how to keep the login data safe, I'm more worried about privacy in that case).

In this case, what I would do is use a framework that makes getting those things wrong hard. Django is a great example of that. They provide you with a generic user model that does password handling for you. They also add a few middlewares by default to protect you against CSRF, clickjacking and many more. While Django can be really slow, and hard to use when doing something "unusual", you can learn a lot from it. I don't know many frameworks that make security so much easier. In Go, you can do all those things as well, but that requires that you are aware of those security measures to use them, which is not ideal for junior developers or "fast moving startups that don't have time to invest in security measures".


Disclaimer: I work for Auth0.

I couldn't agree more with this comment.

Storing usernames and passwords is really hard. Using a secure hashing alg is trivial compared to other time-consuming problems like handling brute force attacks or other kinds of anomalies, like a distributed attack trying to use a db of leaked emails and passwords from other services.


https://en.wikipedia.org/wiki/Pluggable_authentication_modul...

That is pretty much the first thing I do when I inherit a project with authentication. You don't need to make another company your application's doorman, there are a lot of PAM backends that you can run on premises that "do it for you". If you have the competency to manage a LAMP stack - then you can likely handle a well tested and existing authentication server.

All the years in physical security might have broken my brain, because I am always surprised by how willing people are to leak information that doesn't need to be leaked. One project I was pulled into was on the precipice of uploading millions of customers' addresses to Google's geolocation API - had I not been able to bring the lead to his senses I might have made a run for the network closet.


PAM is great, and it's especially great as a layer of indirection, but I can't agree with your overall point that using PAM = problem solved. To your no harder than LAMP point, most teams can't competently manage a security critical LAMP stack. They're in good company given that big companies/governments get pwned with great regularity. Survival requires defense in depth, and that gets expensive. It's a matter of everything from policy (are there two-man change controls on firewall rules, do separate teams own the route tables and firewall, do separate teams develop/audit/deploy security critical code) to hardware (is private key material stored on an HSM, are sensitive services physically isolated, does entropy come from a hardware RNG). Most small companies aren't thinking about those things.

Also, given that the P is for pluggable, what's the backend? You wouldn't use pam_unix for users outside your org. A DB? Now you're back to square one. LDAP+Kerberos/AD? That beats the DB but it doesn't do anything for your defense in depth requirement.


> ...I can't agree with your overall point that using PAM = problem solved.

I don't think we have the same problem definition. I'm saying that it solves the problem of authentication implementation details - where the just-enough-to-be-dangerous types screw up (salting, keyspace, the keeping current on crypto part). LDAP can certainly be leveraged for defense in depth, authorization vs authentication, but that is much less off-the-shelf. This also provides some separation between the authentication server and braindead PHP scripts that barf the results of ";select * from users;".

> Also, given that the P is for pluggable, what's the backend?

Kerberos is the obvious choice for authentication, LDAP integration for authorization if you're needing a fine granularity. You'd really have to go out of your way to end up with a PAM that dumps right into a DB with a poor crypto policy - I've never seen it. You could use /etc/passwd - but you're right, you wouldn't want to... the option is nice though.

I don't disagree that a company that makes money primarily on identity management could do it better, if you assign a low value to the information that is necessarily leaked. But let me just point out the context in which we are having this conversation: LinkedIn offered such a service, as does Facebook - both have suffered breaches. While that isn't how they made their money, plenty of people used the service - following the letter of your advice, if not the spirit of it.


The point of bcrypt though is that it is (more) future proof in that the hash generation is slow as balls and can be made exponentially slower by increasing the difficulty.

Barring some kind of algorithmic weakness, this slowness translates directly into the cost of a brute-force attack. Bcrypt also needs a small block of constantly rewritten memory for its S-box tables, which makes it harder for a GPU to crack efficiently.

As for migrating passwords - you can use a fallback password hasher that checks against a previous hash should the main one fail - then once the login is successful re-hash with the newest algorithm!
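
Here's one way that fallback idea can look (a hedged sketch: the "user" record, its fields, and the legacy salted-SHA256 scheme are stand-ins for whatever you actually have; bcrypt is the PyPI bcrypt package):

    import hashlib, hmac
    import bcrypt  # PyPI 'bcrypt' package

    def legacy_check(password, salt, old_hex):
        # hypothetical legacy scheme: hex(sha256(salt + password))
        digest = hashlib.sha256((salt + password).encode()).hexdigest()
        return hmac.compare_digest(digest, old_hex)

    def verify_and_upgrade(user, password):
        if user.bcrypt_hash:                                   # already migrated
            return bcrypt.checkpw(password.encode(), user.bcrypt_hash)
        if legacy_check(password, user.salt, user.sha256_hash):
            # login succeeded against the old hash: re-hash with the newest algorithm
            user.bcrypt_hash = bcrypt.hashpw(password.encode(), bcrypt.gensalt())
            user.sha256_hash = None                            # drop the weaker hash once upgraded
            user.save()
            return True
        return False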


You can't really know how future-proof it is, though. As far as I know, nobody has proven it to be unbreakable. Right now we can't break it, we can only brute-force it, but maybe tomorrow a mathematician finds some property that lets you calculate all possible inputs of a certain length for a given hash within a reasonable time. Or maybe they find another way that doesn't involve brute force (statistics, ...).

What I'm saying is that passwords are hard in general (to store, to enforce policies properly, ...) and just because bcrypt is "unbreakable" right now doesn't mean it still will be in 5 years.

Your last paragraph describes nicely what needs to be done, but is your code ready for that? Maybe your database password/salt column only allows X characters; now you need to rebuild the database. Maybe something else expects the passwords to have that specific format (another microservice, some script, ...). Re-hashing a password can be hard if this possibility was not considered from day one.


> How long would it take to implement a change which updates the hashing algorithm for new logins while still using the old algorithm for old logins?

As long as you remember to store the cost parameter along with each hash it's just a matter of increasing the default cost and reusing the old ones.
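
With bcrypt you don't even have to store it separately, since the cost lives inside the hash string itself. A rough illustration (assuming the PyPI bcrypt package; the cost value is only an example):

    import bcrypt  # PyPI 'bcrypt' package

    CURRENT_COST = 12   # raise this over the years as hardware gets faster

    def check_and_maybe_rehash(password, stored):
        """Returns (ok, hash_to_store). Old hashes keep verifying at their old cost."""
        if not bcrypt.checkpw(password, stored):
            return False, stored
        stored_cost = int(stored.split(b'$')[2])      # e.g. b'$2b$10$...' -> 10
        if stored_cost < CURRENT_COST:
            stored = bcrypt.hashpw(password, bcrypt.gensalt(rounds=CURRENT_COST))
        return True, stored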


I was talking about the algorithm. Let's say bcrypt is considered insecure in 5 years (even with 100 iterations) and you are supposed to use another hash algorithm. Is your code and database able to handle that? If so, great but I don't think many people implement this and depending on your code, this can get difficult.


> Django's user model uses a pretty straight forward and good approach[1]

You forgot to post the link.


Not OP, but here's the official Django doc on the topic, including a section further down about upgrading the hash without needing a login:

https://docs.djangoproject.com/en/1.9/topics/auth/passwords/...

Here's a blog post which covers the same topic in an easy-to-understand form, including why computationally-expensive password hashing is important:

http://tech.marksblogg.com/passwords-in-django.html


> ...instead relying on another service for authentication.

If you are suggesting something like on-premises Kerberos, I agree - better yet: just design with PAM in mind. But if you are suggesting something like "login with facebook" then I have to disagree - unless your goal is to add a serious point of failure, major dependency, and huge source of information leakage all in one step.


Unsalted hashes in 2012 was negligence already...


I complained to my bank that their 12 character password limit suggests they are storing passwords. Their reply was little more than don't worry about it, you aren't responsible for fraud. I asked for them to add some kind of second factor authentication (I'm a fan of TOTP systems) and was told they are thinking about making that available for their business accounts.

It bothers me that my most valuable login is probably my weakest.


I'm glad they fixed this, but until relatively recently (last year), Charles Schwab had the following password requirements:

* between 6 and 8 characters

* alphanumeric

* no symbols

* case-insensitive

[1] is a nice writeup of exactly how broken this was until they changed it recently.

[1] - http://www.jeremytunnell.com/posts/swab-password-policies-an...


> If at all possible you shouldn't be storing passwords to begin with and instead relying on another service for authentication

This is wrong. Using another login service might introduce all sorts of new issues, a big one is privacy.


> There is no reason to be using anything less than [Argon2, bcrypt or scrypt] and continued use is in my mind negligence.

PBKDF2 is fine too.


How is this better than double salted sha256. I.e sha256("secret", sha256("secret2", $pwd))

The double sha/md5 would even give rainbow tables a super hard time right?


> How is this better than double salted sha256. I.e sha256("secret", sha256("secret2", $pwd))

ITYM hmac-sha256("secret", hmac-sha256("secret2", $pwd)), but that's neither here nor there; the operation requires either two (your version) or four (my HMAC version) SHA256 operations, which really isn't much: your version would make an exhaustive password search twice as expensive; mine would make it four times as expensive. Neither is very much.

Note too that salts are not secret; they must be stored with the password.

> The double sha/md5 would even give rainbow tables a super hard time right?

If you're using even a single high-entropy salt, a rainbow table is useless. What's not useless is just trying lots and lots and lots of potential passwords: first sha256("salt", sha256("salt2", "12345")), then sha256("salt", sha256("salt2", "hunter2")), then sha256("salt", sha256("salt2", "opensesame")), then sha256("salt", sha256("salt2", "love")) and on and on and on.

'But that will take forever!' you might cry. Not really: the article noted a 209.7 million hashes per second rate. At that rate, one could try every possible date in a century (36,525 possibilities: lots of people use birthdays or anniversaries in passwords) in U.S., British, German, French date formats (x4) with the top 100 male and female names prepended and appended (x400: 100 each, before & after), with and without spaces between (x2) in approximately half a second. If one adopted your approach, it'd slow it down to just over a second; under my approach, it'd be 2¼ seconds. Not very good.

PBKDF2, bcrypt, scrypt & Argon2 all attempt to slow this down by not requiring one or two or four hash operations, but rather by requiring thousands or hundreds of thousands of operations. scrypt goes even further by requiring lots of memory access, since memory access is slow on GPUs while raw hashing is very cheap. Argon2 goes even further still.

Under any of the above, it might require a full second on even a GPU cluster to verify a password, which would mean that it'd take 3 years and 8 months to try all of those possibilities earlier. While a real-world system wouldn't be tuned to take a full second on a GPU cluster (it'd be slower on a server), it might very well be tuned to take say 10 or 100 milliseconds, which is still relatively slow for someone trying every single hash but relatively fast for validating a user login.
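
If you want to feel the difference yourself, here's a tiny sketch using PBKDF2 from Python's standard library (the iteration count is purely illustrative, not a recommendation):

    import hashlib, os, time

    password = b'hunter2'          # example only
    salt = os.urandom(16)          # random per-user; stored alongside the hash, not secret

    start = time.perf_counter()
    dk = hashlib.pbkdf2_hmac('sha256', password, salt, 200000)   # thousands of SHA256 rounds per guess
    print('one verification took %.3fs' % (time.perf_counter() - start))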


PBKDF2 is not for password storage.


PBKDF2 is absolutely for password storage. In fact RFC 2898 specifically notes that use case (for KDFs in general):

> Another approach to password-based cryptography is to construct key derivation techniques that are relatively expensive, thereby increasing the cost of exhaustive search.


Do you have a source for that? It was my understanding that PBKDF2 is 'good enough for now', but not necessarily the most future-proof of techs, given how easily the algorithm is optimised for GFX cards.


Most of the attacks described in this article are not solved by any of those though right? They protect against hacking one person but if you just do these advanced dictionary attacks you can still crack people with weak passwords.

Maybe I'm missing something?


No. You cannot effectively use the techniques you can use against SHA/MD5 to attack the three I mentioned.

SHA and MD5 can be calculated entirely in registers without having to touch RAM. Bcrypt, for example, requires a few kilobytes of constantly rewritten S-box tables as part of its calculation, slowing the process down. A GPU has relatively little fast memory per core and limited bandwidth to the rest, so the work cannot be done effectively in parallel.

All one has to do with Bcrypt is just adjust the difficulty and any advances in GPU technology or whatever can be nullified.


Thanks. I thought I saw an article about a recent leak with salted bcrypted passwords where they cracked weak passwords. The conclusion was there is no way to prevent a weak password from being compromised so to prevent it require longer more complicated passwords.


A weak password is a weak password no matter how good the hashing is. I think that you're referring to this bit from last year:

http://www.pxdojo.net/2015/08/what-i-learned-from-cracking-4...

The author of this piece was doing about 156 hashes per second, and after just over five days he had only cracked 4,000 account passwords--a tiny fraction of Ashley Madison's supposed userbase. To run the roughly 14,000,000 passwords from RockYou.txt against every single user account from AM at that rate would take at least two billion years.


Weak passwords are still weak, but things like bcrypt buy you time. A bcrypt configuration might have hashing take 250ms on a server's Xeon CPU (as opposed to nanoseconds with a sha-based hash). You can try the password 'password' on say 50k hashes that were dumped, but it will take about 3.5 hours. You can get this time down by parallelizing or with better hardware like GPUs / FPGAs / ASICs, but you're still looking at a base time of 3.5 hours per attempt to go through each account. Or if you're only targeting one account, 3.5 hours to try 50k words.

If the server had bcrypt configured to take a second per password, multiply everything by 4. And so on. What I think is needed as a supplement to "use bcrypt / scrypt" is an "and use it this way so you don't accidentally open your server to DoSing or give a poor experience", because a hash that takes 10 seconds to compute on a Xeon is great from a security standpoint if your hashes get leaked, but it's a lousy user experience to wait 10 seconds to log in, and if you have to service multiple logins at once your server won't be able to do much else if you run bcrypt/scrypt synchronously.
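
One hedged way to do that tuning (assuming the PyPI bcrypt package; the target time and starting cost are just examples) is to measure on the hardware you'll actually run on rather than guessing:

    import time
    import bcrypt  # PyPI 'bcrypt' package

    def pick_cost(target_seconds=0.1, start=10):
        """Increase the work factor until one hash takes at least target_seconds here."""
        cost = start
        while True:
            t0 = time.perf_counter()
            bcrypt.hashpw(b'benchmark-only', bcrypt.gensalt(rounds=cost))
            if time.perf_counter() - t0 >= target_seconds:
                return cost
            cost += 1

    print(pick_cost())   # e.g. 12 or 13 on a typical server CPU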


> If the server had bcrypt configured to take a second per password, multiply everything by 4. And so on. What I think is needed as a supplement to "use bcrypt / scrypt" is an "and use it this way so you don't accidentally open your server to DoSing or give a poor experience", because a hash that takes 10 seconds to compute on a Xeon is great from a security standpoint if your hashes get leaked, but it's a lousy user experience to wait 10 seconds to log in, and if you have to service multiple logins at once your server won't be able to do much else if you run bcrypt/scrypt synchronously.

If you're in a situation where you're needing to rely on hashing a user's password for every action, your application has far worse problems than what password storage method is in play. Moving from an inadequate password storage method to a better one also assumes that you haven't done something completely wrong with session state.

[edit]

I misread the poster I was responding to assuming that they meant that the user was re-authenticating on each request. My thought process was that if they're storing credentials in a cookie in lieu of a session ID then that needed to be addressed first before even going down the avenue of correcting password storage.


I wasn't talking about verifying a password for every action, but simply having more than one user of your service at once.


They're not "solved", but they're made 3 to 6 orders of magnitude more effort:

See https://gist.github.com/epixoip/a83d38f412b4737e99bbef804a27...

"8x Nvidia GTX 1080 Hashcat Benchmarks"

TL;DR:

Hashtype: MD5 Speed.Dev.#.: 200.3 GH/s

Hashtype: SHA1 Speed.Dev.#.: 68771.0 MH/s

Hashtype: bcrypt, Blowfish(OpenBSD) Speed.Dev.#.: 105.7 kH/s

Hashtype: scrypt Speed.Dev.#.: 3493.6 kH/s

Hashtype: PBKDF2-HMAC-SHA512 Speed.Dev.#.: 3450.1 kH/s

Hashtype: PBKDF2-HMAC-MD5 Speed.Dev.#.: 59296.5 kH/s

You can't protect against people using "password123" or "dadada" as their passwords, but for those of us using long randomly generated passwords bcrypt makes cracking it well outside the realms of possibility for anyone short of nation-state attackers. (I bet the NSA can get quite a lot more than 100kH/s for bcrypt if they're determined enough, but I wonder if even _they_ can throw 6 orders of magnitude more compute at the task?)


Some of these don't make sense. The point of bcrypt, scrypt, and PBKDF2 is that their difficulty is configurable.

Digging into the comments on that gist, it says that the bcrypt benchmark used a workfactor of 5 (= 32 rounds). The lowest possible bcrypt workfactor is 4 (= 16 rounds).

For comparison, the default for the login hashes on OpenBSD is 8 (= 256 rounds) for standard accounts and 9 (= 512 rounds) for the root account. OpenBSD's bcrypt_pbkdf(3) function uses a workfactor of 6 (= 64 rounds) and is intended to be called with multiple rounds itself: signify(1) uses 42 rounds (a bit over the equivalent of bcrypt with a workfactor of 11), and ssh-keygen(1) defaults to 16 rounds (roughly equivalent to bcrypt with a workfactor of 10, i.e. 1024 rounds).

The point I'm trying to make here is that the bcrypt benchmark uses a ridiculously low workfactor, and it looks like the scrypt and PBKDF2 ones did too.


Yep - and even in its "ridiculously low work factor" configuration, it's around 6 orders of magnitude slower than SHA. In context, the author of that piece was trying to show off the hash rate, so you'd expect him to err on the side of making his monster 8-GPU rig look better rather than worse.

Any of bcrypt, scrypt, or PBKDF2 can easily (by design, via the hash function parameters) be made however much slower you need, so long as your normal login process is still "fast enough". I had a WordPress site a while back where I couldn't wind the bcrypt plugin up much past 11 before the inexpensive web hosting would time out before login succeeded. (And yeah, feel free to mock me for "securing" WP with bcrypt - it was mostly because I wanted to be confident that if/when the site got exploited, the hashes in the DB weren't going to be too easily attackable for anyone who'd used a decent password.)


bcrypt uses unique salts per-password. The hash outputted by the bcrypt function is actually several delimited fields that have been joined into a single string.

This means that if multiple users use a common password like 'pass123', the hashes stored in the database will still each be unique. Any attacker trying to reverse all of the password hashes will have to reverse every single one individually in a targeted manner rather than using pre-computed rainbow tables or generating hashes from a wordlist and scanning the database for matching hashes.
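
You can see this directly (a small sketch assuming the PyPI bcrypt package; 'pass123' is obviously just an example):

    import bcrypt  # PyPI 'bcrypt' package

    h1 = bcrypt.hashpw(b'pass123', bcrypt.gensalt())
    h2 = bcrypt.hashpw(b'pass123', bcrypt.gensalt())
    print(h1)                    # e.g. b'$2b$12$' + 22-char salt + 31-char digest, all in one field
    print(h1 == h2)              # False -- each hash carries its own random salt
    print(bcrypt.checkpw(b'pass123', h1), bcrypt.checkpw(b'pass123', h2))   # True True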


What are the advantages of bcrypt compared to SHA based hashes with unique salts?


General-purpose cryptographic hash functions like MD5 and SHA1 (both now broken) or SHA256 are designed to be computationally cheap, i.e. fast.

Salting protects against rainbow tables [1], but it doesn't change the fact that computing a SHA256 hash is fast.

Password hash functions like PBKDF2, bcrypt, scrypt, Argon2 are designed to be computationally expensive, to make a password-cracking endeavor take even longer.

Argon2, the winner of the Password Hashing Competition and the current state-of-the-art, for example, has two ready-made variants: Argon2d is more resistant to GPU cracking, while Argon2i is more resistant to time-memory tradeoff attacks [2].

[1] https://en.wikipedia.org/wiki/Rainbow_table

[2] https://github.com/p-h-c/phc-winner-argon2
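
For completeness, a minimal sketch assuming the argon2-cffi package (any Argon2 binding with sane defaults will look similar; the parameters are illustrative, not a recommendation):

    from argon2 import PasswordHasher   # PyPI 'argon2-cffi' package

    ph = PasswordHasher(time_cost=2, memory_cost=102400, parallelism=8)   # time and memory are both tunable
    h = ph.hash('hunter2')      # encoded string containing variant, parameters, salt and digest
    ph.verify(h, 'hunter2')     # True on success, raises VerifyMismatchError otherwise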


bcrypt is significantly slower to compute. Something like 5 or 6 orders of magnitude slower. (See my other comment in here with numbers for cracking various hash types on an 8-GPU rig...)


Can I achieve the same by applying SHA x times?


If X is millions (or even billions), maybe, but you shouldn't. Just use one of the real password algorithms. Never ever roll your own hashing system assuming it's secure enough. It won't be.


Of course. I just wanted to get an idea of the reasons without going too much into mathematical details. For projects I would just use Argon2 or bcrypt.


> And I think that email is unbeatable, because it is federated, because it's governed by standards and because in spite of all constraints, it's quite adaptable, being the kind of platform supporting short term proprietary solutions because (and not in spite of) its client/server decoupling.

E-mail's biggest problem is that many, many, many people get it wrong either through configuration snafus or a holier-than-thou approach to how it talks to other servers. It's a miracle that it has managed to function as well as it has, and it has done so only out of sheer necessity.

The problems that e-mail faces are the problems that Moxie doesn't want to deal with. If other people want to go for a protocol that has interoperability that's fine, but he wants no part in it. He has no obligation to provide a service to clients he doesn't want connecting, and he's right to demand that those who use "Signal" in their name cease its use so as not to confuse their efforts with his own.

We already saw a revolt when Signal (when it was known as "TextSecure") went away from its SMS model to a client-server one, leading to a version that still relies on SMS. The point of switching away was to further remove metadata that otherwise would have become exposed. This demonstrates that the type of people who want to go against Moxie's wishes are the type likely to get this implemented incorrectly.

> Is it unencrypted? Sure, but it doesn't matter though. Because we are willingly trading that for a capable search engine and a good web interface. Trade secrets aren't communicated over email anyway.

Your username should be enough to tell you that trade secrets are traded over e-mail routinely.

A multitude of material that should not be shared via e-mail is shared that way on a regular basis. If you work at any company that handles credit card numbers for either expenses or customer details, you'll quickly find that a search for 16-digit strings within e-mails will give results.

The problem you're neglecting to acknowledge here is that data at rest can be left unencrypted but overall has no business being unencrypted when in transit. If data from party A is meant for party B (and C, D, E, F, and so on) then any party that is not involved has no business knowing about its contents other than where it is destined to--and even that is questionable.

Those who favour convenience over security are part of a huge problem that faces the Internet.

> I think Moxie is missing the point. He's emulating WhatsApp, but you can't beat WhatsApp at their own game. Did WhatsApp really deliver encryption to 1 billion users? Well, those are 1 billion users that probably won't use Signal or chat with Signal users. Oops.

Moxie isn't trying to beat WhatsApp at its game; he in fact went and improved it by incorporating aspects of Signal into it [1]. Signal is meant to be something else and not something to directly compete with WhatsApp on. Signal and WhatsApp cater to different groups and markets.

[1] https://whispersystems.org/blog/whatsapp-complete/


First of all, I really respect Moxie's choices and am very grateful for his work. What he wants to do, it's entirely his choice. Never meant to imply otherwise.

> Those who favour convenience over security are part of a huge problem that faces the Internet.

I'm not necessarily in favor of convenience, the problem is I cannot trust a binary blob communicating with a proprietary server, even if I can trust some of the people that worked on it, at least for now. I cannot trust something like WhatsApp. Signal I can trust, because at least it is open-source and up for review, but Signal will not succeed in being popular. At least not when it makes the same design choices. You say they cater to different markets, but I don't see a difference. For example Signal considers the phone number as being the username, just as WhatsApp.

Hence I end up caring more about freedom than security. When I changed my email provider from Google Apps to FastMail, nobody noticed and I value that a lot.

> If you work at any company that has credit card numbers being used for either expenses or customer details, you'll quickly find that with a search for 16-digit strings within e-mails will give results

That may happen, but we've got strict policies in place. Nothing over email is communicated that's more important than source code. And given that source code lives in a Git repository provided by a public service, it would be ridiculous to do encrypted email, but not have behind-the-vpn on-premises Git repositories. And I know mistakes are made, etc. I still want federation more than I want end-to-end encryption.


> gratipay offers bitcoin payments, which, if they implement it right, would eliminate the middleman [...] when the financee cashes out using bitcoin

And tell me, how does one cut out the middleman when they convert their Bitcoin earnings to US dollars or whatever currency they prefer?


Jesus Christ why don't you people just start mowing the guy's lawn for Christ's sake


well then wouldn't the blades be the middleman


That can be taken more than one way


The second way wouldn't be done for Christ's sake...


A few people will donate bitcoin. A few things can be bought directly with bitcoin. On average, you'd expect that the fraction of your income paid in bitcoin would be similar to the fraction of goods and services that you can pay for with bitcoin.


> A few people will donate bitcoin. A few things can be bought directly with bitcoin. On average, you'd expect that the fraction of your income paid in bitcoin would be similar to the fraction of goods and services that you can pay for with bitcoin.

The point of donating money to the author of the piece is to encourage them to continue writing this epic story. Giving them Bitcoin is akin to leaving a tip for your server where the tip appears to be currency but is instead a Chick tract with the suggestion that you'll pray for their soul.


The difference being that others accept these "chick tracts" in exchange for goods and services.

I'm not in the mood to register with some service and hand out my CC data to give the author a buck. In contrast I'd scan a QR code in a heartbeat.


> As far as unauthorized use of campus resources, I think the university would be better served by suing AT&T for failing to adequately protect its network.

It is not the responsibility of a network service provider to protect the assets of a customer. If you're given a network link and an allocation of IP addresses, it is your responsibility to use them wisely and securely.

> Or maybe universities and organizations could teach their employees to handle network security better.

This is more sane thinking.


It should be noted that what weev was doing is nothing new and is really just this:

$ cat payload.ps | netcat -q 0 $printer_ip 9100   # pipe raw PostScript straight to the printer's raw/JetDirect port

This is what was originally posted:

https://storify.com/weev/a-small-experiment-in


Thanks, this is a much better link than the mostly content-free NYT article.

This guy is a real piece of work, sneaking offensive Nazi stuff into random places and then laughing when people get offended. It's basically the Internet version of throwing a punch at someone and then making fun of them for flinching, but with more wasted paper.

Edit: it seems he actually believes the offensive stuff he printed. I just assumed it was chosen because it would offend people the most. I don't know if that makes it better or worse.


I'd argue it's actually an IRL manifestation of some of the worst stuff that has been going on all over the web for a long time. As such, the content isn't really any different, but it's a lot more noticeable.


That's a pretty good point. A lot of people love to get online and join into 'divisive' stuff by way of anonymity. That's not the Weev style. He is, in my estimation, a textbook example of what was meant by the phrase "I disapprove of what you say, but I will defend to the death your right to say it" in the context of the US. The guy certainly knows how to test the bounds of what is or is not criminal, according to the current laws and whatnot.


I totally agree, the only new thing here is that it involves paper, rather than online comment boxes and web page defacing and such. The paper aspect seems to get stronger reactions, since it sounds like people are assuming that something coming out of a printer must be from a local person. Or maybe it's just that paper is more tangible.

