Why aren’t we using SSH for everything? (2015) (medium.com/shazow)
173 points by fforflo on April 17, 2016 | hide | past | favorite | 84 comments


Because it's not a very good protocol. Adopting it, instead of HTTP/2 and TLS 1.3, basically gets you everything that is bad about TLS (most notably, a legacy of 1990s cryptography) and everything that is bad about SSH (poor performance, extreme complexity), and leaves you with no upsides.

For instance: the HTTPS stack will, with effort, allow you to opt in to protocol forwarding, via mechanisms that were designed to prevent accidental forwarding. The SSH stack, on the other hand, requires you to opt out of arbitrary TCP/IP port forwarding. Screwing that up will, more often than not, get you owned, Phineas Phisher-style.

The things this post appreciates about SSH are equally available in the HTTPS stack (even more so in the HTTP/2 stack). Unlike SSH, with its janky Connection Protocol, HTTP has for the last 10 years been designed as an arbitrary application protocol. SSH does a better job providing interactive terminal sessions. HTTP does a better job at everything else.


Newer versions of OpenSSH have totally dumped the "90s crypto", although it's important to note the problems with older ssh stuff are mostly due to things like RSA key lengths and use of RC4 and CBC. This stands in contrast to numerous gaping flaws in TLS stacks like the use of export ciphers, message malleability, innumerable weaknesses in x509 certs that allow spoofing/mitm and other horrible shit that is trivially compromised.

Also consider attack surfaces here: how many exploits have there been on, say, openssl and apache/nginx over the past 1/5/10 years? How many on OpenSSH?

>For instance: the HTTPS stack will, with effort, allow you to opt in to protocol forwarding, via mechanisms that were designed to prevent accidental forwarding. The SSH stack, on the other hand, requires you to opt out of arbitrary TCP/IP port forwarding. Screwing that up will, more often than not, get you owned up Phineas Phisher-style.

Let's not forget that libraries like OpenSSL require you to "opt out" of shit like ECB mode and NULL ciphers...


"Also consider attack surfaces here, how many exploits have there been on say openssl and apache/nginx over the Past 1/5/10 years. How many on OpenSSH?"

This. I can't believe the top comment on this post is supporting TLS (as implemented currently) over SSH (where everyone is basically using openssh). There's pretty much a guaranteed "named" widespread TLS-based attack exposed every month or so. Whether it's due to TLS the spec or the prevalent implementations, it doesn't matter. Any application designer choosing a protocol has to contend with the fact of shitty implementations of the underlying layer when thinking of security.


Not really. TLS and SSH have essentially the same crypto problems.

Legacy SSH uses conventional DH signed with pinned RSA keys to run CBC and HMAC. Legacy TLS uses conventional DH with RSA keys to run CBC and HMAC. Both have complicated negotiation schemes that include cipher parameters nobody should ever use.

Modern TLS uses forward-secure ECDH, signed with pinned RSA keys, to boot up an AEAD. Modern SSH uses ECDH signed with pinned RSA keys to boot up an AEAD.

There are two important reasons why TLS has more attacks than SSH:

1. Contra the article's claim of SSH's ubiquity, TLS is actually ubiquitously deployed, and so older versions of the protocol by necessity live longer and cause more trouble.

2. TLS is used by browsers, and browsers support content-controlled code. You can't have BEAST or CRIME or AlFardan's RC4 attack without something to force a client to generate thousands of related connections.

But neither of these are factors for people who would consider using SSH instead of TLS. Those same people can deploy minimized configurations of TLS, and use native applications instead of browsers. See, for instance, what payments companies do with their iOS applications.

TLS has never supported ECB, by the way.
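For what it's worth, the "minimized configuration" mentioned above is a few lines with a stock TLS library. A sketch using Python's ssl module, where the cipher string is my assumption about what "minimized" should mean, not a recommendation:

```python
import ssl

# Sketch of a deliberately narrow TLS client configuration: TLS 1.2+,
# forward-secret ECDHE key exchange, AEAD ciphers only.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")
# PROTOCOL_TLS_CLIENT already enables these, but being explicit is cheap:
ctx.check_hostname = True
ctx.verify_mode = ssl.CERT_REQUIRED
```

A native client using a context like this never speaks the legacy protocol versions that the named TLS attacks rely on.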


> leave you with no upsides.

Not even close.

While TLS technically supports client-side certs, all extant implementations of it are unbearably clunky to use and all but ignored by vendors. Meanwhile the tooling around SSH keys (like ssh-agent) is seamlessly integrated with your OS and works so well that it's easy to forget it even exists.

Browser vendors completely dropped the ball on this; they dropped it so hard it continues to hurt even after all these years.


This may be true, but at least x509 will get you certificates right out of the gate (sadly without reliably working reject lists).

I've yet to get to the point of moving to "modern ssh" (ed25519 keys/certs, chacha20-poly1305 encryption etc) with certificates only. Because keys are *so* convenient. But they're also, while better than passwords, pretty bad: no expiry, no easy rotation, no easy revocation.

I will say this though: it should be quite feasible to move to modern ssh, banning keys and passwords (other than perhaps as a second factor), and moving to certs only. But clearly deployment of reasonable ssh setups is lagging behind the technological improvements.
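The server side of a certs-only setup is a short sshd_config fragment; the CA path and validity window here are illustrative, not prescriptive:

```
# /etc/ssh/sshd_config sketch: accept only certificates signed by our CA.
TrustedUserCAKeys /etc/ssh/user_ca.pub
AuthorizedKeysFile none       # ban raw per-user keys; certs only
PasswordAuthentication no
# User certificates are minted out of band, e.g.:
#   ssh-keygen -s user_ca -I alice -n alice -V +52w id_ed25519.pub
# which gives you the expiry and easy rotation that raw keys lack.
```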


> all extant implementations of [mutual auth] are unbearably clunky to use, completely ignored by all vendors.

I hope things have gotten better now, but five years ago I ran into this in spades. Pulling teeth both to implement and then to explain. Especially if it's from a cert chain, and doubly so if you want to trust only certain certificates from your CA.

And I think across the whole stack we ended up with three TLS stacks, and some HSM hardware. So I got to figure out trust stores multiple times.

I should have run screaming, but I stayed at that job an extra five months just to make sure that everyone really understood the mutual certificate auth code at a practical level. There's gotta be a better way.

I really think we need a hybrid system that is more like PGP, where you have a key chain and get advisory info from your peer group about the veracity of a CA-signed cert.


I'm not suggesting that people should use browsers. Obviously I can't be, because browsers don't do SSH.


I agree - the killer advantage of SSH is that OpenSSH is very, very good.


The article shows a chat server as an example. Imagine the same app implemented using HTTPS. Would openssl s_client be effective as a client? [I suspect not, as HTTPS != SSL, but perhaps my naive question will prompt suggestion of a better approach from someone with a clear understanding of the layers.]

Update: looks like an effective tool could be socat, which seems to be an SSL-capable take on netcat. On the server side, I reckon it would work to use stunnel. You could have xinetd host a launcher that wraps stunnel around a simple socket server. Clients could connect to that with socat.


I'm not sure I understand the comparison. openssl s_client creates a TLS connection, but doesn't drive the HTTP protocol; it's a generic bidirectional encrypted transport. Lots of chat protocols use TLS in exactly this manner.

But I'm going a step further: I think even if you're stuck with HTTPS/HTTP/2, you're still better off tunneling your application protocol over that than SSH.


Part of the appeal of the ssh chat server from the article is that you can connect to the server using an existing console tool. It's not necessary to have a purpose-built client. So I've been trying to think: how could you get that convenience, but using HTTPS instead of SSH? [based on your example above]

In an update above I've talked about socat. That's less convenient than ssh, because socat is obscure vs ssh.

A hassle I've had trying to get an SSL server going in this last hour: it seems like you need to be signed by a recognised authority. ssh is convenient in that you get solid encryption, but you only need a fingerprint, not an authority sig.


> Part of the appeal of the ssh chat server from the article is that you can connect to the server using an existing console tool.

If what you're looking for is confirmation that ssh is indeed the most ubiquitous (at least ignoring Windows) console app to support encrypted console chat apps, then yes, I think you're correct for that very narrow design goal. For just about any other set of design constraints I think you have a much harder argument to make that ssh is more usable than HTTPS.

> A hassle I've had trying to get a SSL server going in this last hour - it seems like you need to be signed by a recognised authority. ssh is convenient in that you get solid encryption, but you only need a fingerprint, not an authority sig.

What you call a hassle is a major feature of all correct TLS implementations. SSH offers no guarantee that the server you're connecting to is who they say they are. I don't count SSHFP DNS entries because (a) they're rare, and (b) DNS is trivially intercepted. DNSSEC may fix that but support is rare. Without the ability to guarantee the identity of the site you're connecting to, encryption doesn't really matter. You may just be having an encrypted conversation with an attacker.

https://letsencrypt.org/ takes away much of the hassle of getting a verified TLS certificate, and can even be fully automated, for example: https://caddyserver.com/


CAs don't guarantee identity either. If you care deeply about either, you need to be comparing fingerprints. SSH clients have built-in support for this; web browsers do not. Although I concede mechanisms like HPKP exist, they are not the default.


I would argue that automating verification via signatures is almost strictly better than manually verifying fingerprints. But for ssh, you'd have to turn off basic key authentication (of users and hosts), and force the use of a trusted CA.

Trusting a slew of CAs to identify your server, when you only actually use one CA is indeed nuts (not to mention the fragility of certificate revocation for TLS, due to a (misguided) effort to avoid denial-of-service attacks).


You're not wrong (about most ssh deployments) - but ssh absolutely supports running your own CA. Perhaps there's a start-up opportunity in running a letsencrypt-like CA for ssh certs, though. I'm sure many people would be happy to pay a fee to have better key rollover, and arguably more secure server authentication.


You can use Lynx. It is a terminal based web browser, and is older than SSH. It exists for many platforms and is not particularly obscure or difficult to use.

https://en.m.wikipedia.org/wiki/Lynx_(web_browser)


This doesn't work as a solution to the problem in focus here.

Consider: could you implement a chat app that worked as effectively as the ssh one if you used lynx as the client? No, because interaction with lynx is synchronous.

The original article is making a point that you can build effective apps that work over ssh. The post at the root of this thread raised concerns about using ssh specifically.

Hypothetically you could have a console based web browser that supported javascript and websockets, and then expose the chat server over javascript. But that wouldn't be a good solution either: it's a specialised client and a large number of layers to achieve something that the ssh chat app shows to be straightforward - a secure, generic approach to exposing async functionality to a thin console tool.


If that's the case, how can I plug the HTTP/2 and TLS 1.3 stack into Linux-PAM and use it to login to my remote servers instead of using ssh?

Or are they not suitable for that?


You could wrap TLS around telnetd (bound to localhost, with a TLS terminator in front) pretty trivially. I'm not sure I'd recommend it, but you certainly could. You could then login with openssl s_client.

Integration to the PAM-stack would be a little more complicated, but there are some tools for working with certificates[eg: 1]. I'm not aware of any that is designed to work for console-like login remotely, though.

[1] http://www.ftsafe.com/application/signon/linuxpampkcs11/


tinysshd does not have the 90's crypto by default. It is optional. Nor does it have the "extreme complexity".


> and leave you with no upsides.

The only place I've personally seen client-side certificates implemented in a browser was for StartSSL's site (which is famously janky in its workflow). In contrast, pretty much anyone who does anything on the CLI in a unix environment has a client-side ssh key set up, already good to go.


CAcert.org has an option for logging in with certificates. It works well enough.


HN discussion when this was originally posted 471 days ago:

https://news.ycombinator.com/item?id=8828543


I need to get on that Hacker News API and figure out what the half-life of an HN user is, so we can tell whether 471 days is a meaningful value. I suspect a large number of people reading this weren't around then. (I think I measured that 2/3 of posters here don't have at least 501 karma, so you can use that as a ballpark figure.)


Some users are lurkers. I created this account 1921 days ago but have only 201 karma.

That said, I don't remember the previous posting of this article, so linking to previous comments is useful. It lets me compare the discussions and see how opinions do or don't change.


You know, maybe I'm wrong, but I read the comment as a cheap way to register superiority over the OP - essentially saying "we already talked about this, so move along." That's not as valuable when "we" changes over time, and (as you pointed out) a lot of "us" don't participate in every discussion. IMHO we'd be better off if we didn't have to hear about it every time an old topic got resurrected.


> ...I read the comment as a cheap way to register superiority over the OP...

Whose comment? fjarlq's comment? If so, I also might be wrong, but I read it as "Here's old discussion on the topic. Might be worth reading to see if what you want to talk about today has already been hashed out, or if there's discussion that you might find interesting.".


Yeah, I merely wanted to reference the old discussion in the hope that it might improve the discussion today. I think it's fine when a story gets discussed multiple times.


Seconded; I'm a casual browser, but when a topic grabs my attention it's great to see there have been multiple well-trafficked threads and have links mentioned to backtrack to alternate discussions.


What? Links to old discussions (especially when they were informed) are always good. And sometimes it'll avoid a redundant rehash of points. When it comes to software like TLS/ssh, it's also very interesting to see if things that were true/important a year ago are less true/important now.

I see where you're coming from, but I think everyone is well served not always looking for ulterior motives where there are none. A link to a rich previous discussion on a topic is almost always both relevant and informative, and saves others from having to go and search for the relevant thread(s).

If there's one great failing of computer science as a field, it is the lack of history -- people just don't know what was, or how things came to be. And so we have a lot of effort wasted on recreating yesterday's mistakes today, rather than building on old solutions to create better ones for tomorrow (to partly paraphrase Alan Kay).


"I measured that 2/3 of posters here don't have at least 501 karma", is that a meaningful piece of data though? I've been lurking on here for years; rarely comment, never posted.


I made this account 1979 days ago, have 230 karma and have probably averaged over 7 visits a week. Some people just spend way more time reading than commenting.


Sometimes, people are also just asleep/on vacation and will miss the previous submission, even if their account age is greater than the "X days ago".


Because it's easier to wrap an arbitrary protocol with TLS than it is to tunnel it through ssh. Plus, TLS is already supported for the purposes you mentioned. A number of MU*s already offer TLS access, IRC and XMPP have supported TLS since forever, and there's near ubiquitous support for it in any language you want to write a client/server in. Plus, as others have mentioned, you don't have the security risks that SSH can potentially have. SSH just carries too much weight for not enough benefit.
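The "wrap an arbitrary protocol with TLS" part really is one call with a stock library. A minimal client-side sketch in Python, where the host and port are placeholders:

```python
import socket
import ssl

# Minimal TLS wrapping of an arbitrary TCP protocol (IRC, XMPP, a MUD...).
# create_default_context() verifies the server certificate and hostname.
def tls_connect(host: str, port: int) -> ssl.SSLSocket:
    ctx = ssl.create_default_context()
    raw = socket.create_connection((host, port))
    return ctx.wrap_socket(raw, server_hostname=host)
```

After that, the wrapped socket behaves like a plain one: send() and recv() carry the inner protocol unchanged.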


Ask a provider like GitHub and Heroku which git protocol they prefer: SSH or HTTPS.

The answer is HTTPS.

Managing an SSH server with millions of customer pubkeys is a pain. Python/twisted and Go have decent SSH implementations but there are quirks and side effects aplenty.

Shells are scary. You have to be very careful about how you execute the program on the remote side of the pipe. Shell command parsing and escaping is gnarly.

Windows...

HTTPS with basic auth headers is much easier to scale.

The one gotcha is that it took a while for HTTP git to get "smart" and offer as efficient transport as SSH with access to a .git dir.


It's not as simple as you make it out to be. HTTPS git still has no consistent way to cache credentials, so you're stuck either keeping it in plaintext on disk or re-authenticating every time you push.

Even though client TLS certs are technically supported by a handful of services, there's nothing for HTTPS that matches the security and convenience of ssh-agent. It has a long way to go to catch up to where SSH was ten or fifteen years ago.


git supports "credential helpers" [0] which allow arbitrary credential storage and retrieval schemes. I've successfully used it with a Yubikey, for example, although I promptly reverted to using SSH and my GPG authentication subkey when it became clear that there is no single scheme that works on all platforms I use.

[0] https://git-scm.com/docs/git-credential
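As a concrete example (helper names vary by platform), the built-in cache helper keeps credentials in memory rather than in plaintext on disk:

```
# ~/.gitconfig sketch: hold credentials in an in-memory daemon for an
# hour; "osxkeychain" or "libsecret" helpers delegate to the OS keystore
# instead, where available.
[credential]
    helper = cache --timeout=3600
```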


The parent has a great point about credentials and ssh-agent in general. Netrc isn't a great pattern.

But for git you have the right answer.

I completely forgot this aspect because on OS X you can delegate git auth to Keychain with a helper.

https://help.github.com/articles/caching-your-github-passwor...


Apparently, since the last time I checked, git actually includes a helper script that allows you to use `gpg-agent` to decrypt your creds in a way analogous to how `ssh-agent` works. It requires a bit of setup since for some reason it's disabled by default, and it has a few more moving parts (a GPG key plus username/password instead of a single keypair), but it's a lot better than it used to be.

I hope this pattern catches on for services other than git.


A better solution is to simply use `gpg-agent` as your `ssh-agent`, as it supports that protocol and it supports RSA authentication keys.

Then your public keys provide not only identity, but services can verify that identity through their web of trust.


> millions of customer pubkeys is a pain

And millions of passwords isn't because ... ? Whatever solution the industry has arrived at for storing a large number of user passwords and authenticating against them can be applied to SSH keys too. I would be greatly surprised if Github uses ~/.ssh/authorized_keys to handle SSH authentication at its scale.

> Python/twisted and Go have decent SSH implementations but

The same is true of HTTPS implementations. It's just a standard; how it's implemented is not under its control. On the other hand, we have some good implementations of HTTPS, and similarly we have good implementations of SSH too: libssh2.

> Shells are scary.

So don't use shells over the SSH protocol. Make the server execute (upon successful auth) something that isn't a shell.

> Windows...

Agreed. Until user-agents popularly support a protocol, adoption faces large obstacles.

> HTTPS with basic auth headers is much easier to scale

Easier how? You have to do less computation to verify the password? This has nothing to do with the transport, and everything to do with the cryptographic strength of the password. SSH is better here because of its good support of keys (versus TLS client certs).
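On the "millions of pubkeys" point: OpenSSH's AuthorizedKeysCommand lets sshd ask an external program (and thus a database) for a user's keys, instead of reading per-user files. A toy sketch of what such a backend does, with made-up key material:

```python
# Toy AuthorizedKeysCommand-style backend: sshd invokes a program with
# the username and reads authorized_keys-format lines from its stdout,
# so key storage can live in a database. The key data here is fake.
USER_KEYS = {
    "alice": ["ssh-ed25519 AAAAC3Nz...fake... alice@laptop"],
}

def authorized_keys_for(user: str) -> str:
    # Unknown users get an empty answer, i.e. no keys accepted.
    return "\n".join(USER_KEYS.get(user, []))

if __name__ == "__main__":
    import sys
    print(authorized_keys_for(sys.argv[1]) if len(sys.argv) > 1 else "")
```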


I prefer ssh for this purpose since I don't want to store credentials somewhere in plain text.


Why would you do that with either option?


HTTPS auth provided by Github et al. doesn't support client certs, only username/password pairs. There is no equivalent to ssh-agent for HTTPS (because authenticating to HTTPS URLs is not a standard).


I still don't see why that'd require storing of plaintext credentials.


Doesn't scale, it's very chatty as a protocol, and it's not actually very reliable. I've seen software deployments over ssh scaled out to 1,000s of target servers and it's just not pretty. Also used SiteScope for alerting, and the whole 'agentless' model (where 'ssh' is the real agent) is bad -- again, poor scalability in both the CPU and reliability domains...


the better question is why we're not using pgp for everything.

I wish I could register for services with nothing but a public key. No email address. No username. When I want to login, generate a login token and I'll sign it and give it back.


The Noise protocol is a great candidate for an everything crypto protocol.

http/2 and TLS1.3 are great but make some assumptions that might not be right for everything.

WhatsApp is using Noise now in their apps.


Some MUDs already support ssh, don't they? I know that some nethack servers do.


Evennia (Python, Twisted + Django) is one of the most (the most?) popular Python MUD servers and features SSH/SSL support: https://groups.google.com/forum/#!topic/evennia/4blKxTGQ3CE

More on Evennia itself: http://evennia.com


I think I've seen that implemented by using a standard SSH server and having the remote shell be essentially the command "telnet localhost". You would want to make sure to disable port forwarding, running other commands, etc.
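That setup is a handful of sshd_config lines; the Match user and MUD port here are hypothetical:

```
# sshd_config sketch: every login for this user runs the MUD client,
# with forwarding and other escape hatches switched off.
Match User mud
    ForceCommand /usr/bin/telnet localhost 4000
    AllowTcpForwarding no
    AllowAgentForwarding no
    X11Forwarding no
```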


1. It can't be easily grokked. How many options does it have? Dozens? Hundreds? Compare to FTP.

2. Only specialists appear to understand the PKI infrastructure which underpins this security enough to properly manage the keys; possibly with expensive appliances. Compare to FTP; it's easy to grok managing and securing a string.

3. It's leaky. When you connect by FTP you present only the creds you desire. When you connect over SSH you expose all of your identities / keys to the other server. Pretty dumb.


Any secure system (more secure than FTP, at least) will have a lot of knobs. Most of the time, it is well handled by sticking to defaults. This is something SSH has done well, and HTTPS not so well; I've had to set TLS configs on my web servers, while SSH is a simple `apt install ssh && cat /etc/ssh/ssh_host_ecdsa_key.pub`

> Only specialists appear to understand the PKI infrastructure ... Compare to FTP, it's easy to grok managing and securing a string.

It's also easy to break into a typical string-password-protected service. If FTP needed the higher security, it would have to use complex crypto. Crypto is hard.

But, as I said above, it doesn't have to be hard to use, only implement.

> It's leaky

Agreed, SSH should have some support for matching keys with hosts automatically. But it already supports doing this manually. Check out IdentityFile in ssh_config's manual.
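The manual fix looks like this in ~/.ssh/config (the host and key name are placeholders):

```
# Offer only the named key to this host, instead of every identity
# the agent holds.
Host example.com
    IdentityFile ~/.ssh/id_example
    IdentitiesOnly yes
```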


Idea: Maybe it's the "sh" in ssh that makes it so useful. Environment variables instead of "HTTP headers". envdir

Opinion: In terms of "authentication" I still think ssh has the edge over anything associated with http/https and www. Two parties should be able to authenticate to each other without involving a third.


Actually https authentication does work like ssh (at least in Mozilla Firefox), just with the extra CA-signature option. You are warned that the certificate is not signed by a CA nor whitelisted yet. So, you can whitelist it, and then all future connections can be authenticated just fine.


Clarification: By "authentication" I mean keys that the users generate using ssh-keygen. I do not mean certificates or "certificate authorities" (CAs).


TLS supports mutual authentication via X.509 certificates, and they can be self-signed on both ends with some "accept" dialogs, similar to SSH's "do you trust this host's key?" prompts.


Is generating ed25519 keys slower or faster than generating self-signed certs?

Nothing wrong with OpenSSH supporting the option to use certs. They can be useful to some users.

But the entire X.509 scheme to my knowledge was based around some idea of third party verification.

This gave rise to the business of selling CA "services". Problematic to say the least.

And still to this day, "self-signing" appears to be disfavored. Or perhaps the openssl binary is just too loaded with options for users to learn the commands to generate CA and server certs and keys.

Whether it truly is or not, ostensibly "SSL/TLS certificates" to the public seem to require third party involvement.

ed25519 keys do not have this problem. And generating them is relatively fast.


> Programmatic Data Streams

> RPC API

Please don't. Managing SSH sensibly (checking keys properly) at any greater scale is awful. SSH was never meant for general communication, which shows in how it operates at the implementation level and at the protocol level. SSH servers require a usable home directory locally and remotely (OpenSSH has some workarounds, but this quickly becomes ugly), you need to manage SSH keys, both public and private, and the protocol has plenty of options useful for an interactive administrative service, but terrible from a security standpoint (have you disabled everything unnecessary? and are you sure it was everything?). If you use SSH in non-interactive, non-supervised mode, it is a disaster waiting to happen.

> Why aren’t we using SSH for everything?

Because it is not suited for everything. It is only suited for providing shell and for some non-interactive but human-supervised tasks.


SSH is also implemented by Golang, and Erlang (and probably a bunch of other languages). You're right that traditional SSH clients tend to need long-term storage, but one could easily turn off host key caching, and look to SSHFP records (http://www.ietf.org/rfc/rfc4255.txt) instead. I'd actually argue that SSHFP records are analogous to DANE. Paired with DNSSEC, I'm not sure there's a practical attack there.

As far as the server side - apart from the storage of the keys (which is required with any cryptographic protocol), I'm not sure where the challenge comes in.

Bootstrapping trust is a problem in any situation. SSH server keys paired with an out of band mechanism for distributing public keys or SSHFP can prove to be a fairly reasonable method. I'd say somewhat more reasonable than what we had with HTTP + TLS for the longest time (until say Let's Encrypt). Even with that, certificate stores aren't particularly easy to manage.

Going back to the language-specific implementations of SSH. Those servers can be run without the necessary submodules to do shell-like stuff. I agree with you, it'd be crazy talk to use the system SSH daemon to do much of what's described by the author, but as a protocol, perhaps it is more reasonable.


By "no practical attack", you mean, "as long as you trust whichever government controls your TLD and whichever governments effectively control the global DNS root" --- which, for most of the hosts people on HN contact, means "as long as you trust the NSA".

DNSSEC is a disaster. Avoid it.

http://sockpuppet.org/blog/2015/01/15/against-dnssec/


I hate to add "me too" replies, but it is important to get the message out there that lots of really smart folks consider DNSSEC an absolute failure of such epic proportions that you shouldn't even joke about building something real on top of it.

The only thing DNSSEC has given us is widespread DDoS amplification.


What better option is there?


Not using DNSSEC and not having a false sense of security. No sense of security is better than a false one.



DNSSec would be useless if TLS were implemented in the ideal way. But, unfortunately, TLS is far from ideal.

The author states "For something as harebrained as the CA system, remarkably few criminal breaches trace back to it." But there have been many false CAs and certificates issued, and we have no idea what they've been used for (http://arstechnica.com/security/2015/03/google-warns-of-unau...). In addition to this, there exist many CAs whose primary purpose is to perform MITM-style attacks.

The author's points about DNSSec being expensive: somewhat, but more and more providers are offering DNSSec at the same price as normal DNS. Fedora is even enabling it on all their end hosts.

As far as the "Government controlled PKI": 1. What's better? Some security, or no security? 2. If the government wanted to crack DNSSec, there still exists the fact that we can share KSKs out of band for verification.

DNSSec is capable of providing better security than the current system. It does have some implementation gaps, but what do you propose alternatively?


I don't agree we should use SSH for everything, but

> Managing SSH sensibly (with checking keys properly) in any greater scale is awful.

The certificate key format introduced by OpenSSH allows for quite easy large-scale key management so long as your clients and servers are OpenSSH or golang x/crypto/ssh


Managing host keys is much more than just dumping at some point in time all the certificates in some directory, even if under version control. Servers are (can be) deployed and ramped down and reinstalled and moved from network to network all the time, and not always by you nor by somebody who cares about your configuration management system, which leads to plenty of fun in tracking what keys the servers have and should have.


SSH is a protocol, not an implementation, all your complaints are just engineering problems.


> Fortunately, as long as the proxy doesn’t have the original server’s private key, then the key fingerprint will be different.

Will it? What if the proxy server happens to have a key whose fingerprint hash is the same as the original server's key?

SSH very recently upgraded how keys are fingerprinted; the screenshots in this article show the old method, which, as I learned recently, uses MD5 hashes. (I had thought they were SHA1.) SHA-256 is what gets used now. But, for example, Ubuntu's LTS doesn't have the update. I'm glad they made the change, but I'm left wondering how key exchanges aren't somewhat vulnerable on unupgraded machines.

(e.g., fingerprints now look like:

  SHA256:7h5Q0O1Qc8G7Hv/hVIrx1d6ZTgnITwrutr1XDBYY0sg

)
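Both fingerprint formats are just digests of the public key blob; a sketch of how they're derived, with an arbitrary byte string standing in for a real key blob:

```python
import base64
import hashlib

def sha256_fingerprint(key_blob: bytes) -> str:
    # Modern OpenSSH style: base64 of the SHA-256 digest, '=' padding dropped.
    digest = hashlib.sha256(key_blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

def md5_fingerprint(key_blob: bytes) -> str:
    # Legacy style: colon-separated hex pairs of the MD5 digest.
    hexd = hashlib.md5(key_blob).hexdigest()
    return ":".join(hexd[i:i + 2] for i in range(0, len(hexd), 2))
```

Forging a matching fingerprint means finding a second key blob with the same digest, i.e. a second-preimage attack on the hash.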


I'm not sure how easy it is to forge a fingerprint. Given most of the MD5 collision work was done based on length-extension attacks, I think SSH is somewhat more tolerant to these. Nonetheless, your point stands - it is possible to forge a key that has the same fingerprint as the original server, but I cannot think of any practical way to do this.


> Nonetheless, your point stands - it is possible to forge a key that has the same fingerprint as the original server, but I cannot think of any practical way to do this.

If I understand correctly that's basically a preimage attack, and even for MD5 where you can find arbitrary colliding pairs very easily today, AFAIK no one has come up with a single preimage example or practical attack.


Right, none of the published MD5 attacks allow you to generate data to match a specific hash value. It's certainly a good idea to avoid MD5, but it's not easily exploitable in most situations.


Logging in to an IRC server over ssh could do the user verification using the user key, just like https can do client certificates. It is an underrated possibility, perhaps.


Reminds me of: "Show HN: ssh chat.shazow.net" (the github repo actually links to the medium article of this story now) https://news.ycombinator.com/item?id=8743374

I suppose the natural thing would be to marry the RobustIRC server/gateway to a golang ssh front-end, so that one could ssh in and be dumped right into a (nick-authenticated) robust irc session...:

https://github.com/robustirc/robustirc


Does SSH have PFS?


Yes, if session keys are ephemeral.

"SSH session keys are ephemeral and are exchanged between hosts using DHE."

From: https://lwn.net/Articles/572926/

DHE: http://stackoverflow.com/questions/14034508
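To make the "ephemeral" part concrete, here's a toy finite-field Diffie-Hellman exchange (demo-sized parameters, nothing like a vetted group):

```python
import secrets

# Toy Diffie-Hellman: each session draws fresh ephemeral exponents, so a
# later compromise of a host's long-term signing key reveals nothing
# about past session keys. Real SSH/TLS use vetted groups or curves.
P = 2**127 - 1   # a Mersenne prime; fine for a demo, not for real use
G = 3

def ephemeral_keypair():
    secret = secrets.randbelow(P - 2) + 1   # fresh secret per session
    return secret, pow(G, secret, P)

a_secret, a_public = ephemeral_keypair()
b_secret, b_public = ephemeral_keypair()

# Each side combines its own secret with the peer's public value.
shared_a = pow(b_public, a_secret, P)
shared_b = pow(a_public, b_secret, P)
assert shared_a == shared_b   # both sides derive the same session key
```

Because the exponents are discarded after the session, recording the traffic and later stealing the host key doesn't let you recover `shared_a`; that is the forward secrecy the linked article describes.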


[flagged]


Accusations of shillage without evidence break HN's civility rule. I've posted about this countless times.

We banned this account for serial trolling and detached this subthread from https://news.ycombinator.com/item?id=11516860 and marked it off-topic.


I've seen this accusation thrown around for years now (often privately) and I just don't see it. Maybe I'm blind.

Even if he's shilling for the government (stupid claim but whatever), what do they gain by him encouraging people to adopt better crypto habits?

This seemingly-baseless personal attack doesn't even pass the laugh test for me. I'd like to see some hard evidence.

What do government shills sound like? What are they trying to accomplish? How does Thomas line up with their modus operandi and/or endgame?

As someone who works heavily with appsec/crypto and has no business relationship with tptacek, it's obvious to me that his warnings are exclusively of the "don't do unnecessary things that often result in security foot-bullets" category.

What OpenSSH does right: public key pinning instead of a CA infrastructure. But then, you don't necessarily need the CA infrastructure for TLS.

Do all security researchers sound like government shills now?


He's got positions on the US government that rub a lot of people the wrong way (including me; then again, my positions on the US government rub a lot of other people the wrong way). That's probably where the accusation stems from. While I disagree with him a lot on politics, like you I don't see any justification for calling him a shill.

It seems a lot of people don't understand that most people who have political views they don't like hold those views because they honestly believe it is best, not because of some nefarious conspiracy.

More importantly, I've yet to see any security advice from him that is in any way suspect. Strangely enough it is possible to disagree with someone about politics and still trust their technical advice when it time and time again is demonstrably sound.


I think a better term would be apologist, not shill. For better or for worse, Thomas's interest in the law gives him a position on Federal power that is often at odds with the more old-school libertarian/anarchist members of the HN community.

Speaking for myself, his apologia of The State also irritates me quite a bit, but I try to keep my mouth shut because he contributes so much to the community.


Encryption is about 2 orders of magnitude more computing intensive than non-encryption.

It's expensive. That's why we aren't using Encryption for everything.


Being two orders of magnitude more computing intensive than doing nothing isn't really an issue.



