
> the server your home directory is on is compromised

The attack presumes your home directory is on the client side: you use scp to connect to any kind of server while ~ is the open directory on the client, i.e. where scp writes its output.

Thus, any server you connect to with scp could write to your local home dir. At university, I did this with compute clusters, my association's servers and other machines.

This breaks the SCP security model, because it gives the server covert write access to your local working directory. Normally you know which files scp touched, so you can verify they are as intended.
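
To make this concrete (file names here are hypothetical): when you run

    scp host:results.txt .

the remote scp process announces each file with a control record, and an unpatched scp client accepts whatever name the server chooses:

    C0644 42 results.txt      <- the file you asked for
    C0644 42 .bash_aliases    <- what a malicious server can send instead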



Right, I get all that (I updated to clarify that I understand the attack is against your local home dir, but I assume you have one on the remote side).

I just don’t understand the use case of scp’ing from an untrusted host.


In the B2B world there is a perhaps surprising amount of ad hoc data transfer (transactions, statements) between separate entities using scp/sftp from a cron'ed ksh script written by a junior sysadmin 8 years ago.
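
For flavor, a sketch of the sort of script in question, with every name and path made up:

    #!/bin/ksh
    # nightly_statements.ksh -- run from cron, pushes the day's CSV
    # crontab: 30 2 * * * /home/xfer/nightly_statements.ksh
    scp -q -i $HOME/.ssh/xfer_key \
        /data/export/statements_$(date +%Y%m%d).csv \
        xfer@partner.example.com:/incoming/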

Throw a Tectia server in the DMZ, fix the firewall, manage some keys (never change them) and you're good to go! Data at rest security via Zip or PGP for the ambitious.

Occasionally a big enough entity will mandate "everyone will use our data xfer spec" (and it's typically some gnarly XML-infested SOAPy animal). But there's a whole lot of automated business processes doing CSV-over-scp both intra- and inter-company.

Don't even get me started on FTP.


Whenever I have any say, I always go with CSV-over-scp called from a crontab rather than anything more complicated like SOAP. I try to harden both accounts, for instance by having the target account only accept scp connections, but this kind of bug could still be exploited to jump from one server (or company!) to another.
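
One stock-OpenSSH way to do that locking down (key and path illustrative) is a forced command in the receiving account's authorized_keys:

    # ~/.ssh/authorized_keys on the target account
    restrict,command="scp -t /incoming" ssh-ed25519 AAAA...key... xfer@source

The restrict option turns off forwarding and pty allocation, and the forced command means a connection with that key can only ever land files in /incoming, whatever path the sender asked for.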


> Occasionally a big enough entity will mandate "everyone will use our data xfer spec" (and it's typically some gnarly XML-infested SOAPy animal).

In which case, you leave your CSV-over-scp or CSV-over-FTP in place, and duct tape on another layer that handles the new data transfer spec. That way you can leave the 8-year-old process itself alone and let it keep creaking away on autopilot as it slowly fades into the twilight of "important but forgotten" systems.

It reminds me of how places like Rome still have sewers from the days of Ancient Rome in operation. Rather than replace them outright, they were just connected to the newer sewer system so the waste was diverted to treatment plants instead of directly to water sources. And they'll keep on going, only being updated when absolutely necessary.


Over the years my employers have reinstalled servers and consequently changed host keys. I believe many users just accept the new key without realizing what they might be getting themselves into.

Ideally your organization should keep updated (public) host keys somewhere, say on some HTTPS-protected website, so you can double-check yourself. How common is this?

(I mainly use ssh for interactive/tunneling use, but with a bit of bad luck the host key could change just in time for an scp. BTW, don't rsync/unison use scp sometimes?)
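
For what it's worth, the fingerprint you'd publish on such a page comes straight from standard tooling (output shape shown, actual fingerprint elided):

    $ ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub
    256 SHA256:<fingerprint> root@host (ED25519)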


Ideally they set up SSHFP DNS records with the host key fingerprints.

Haven't yet encountered it in the wild, though.
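
For anyone wanting to try it (hostname illustrative): ssh-keygen emits the records and clients opt in via ssh_config, though the records only add real protection if the zone is DNSSEC-signed.

    # on the server: print SSHFP records ready to paste into the zone file
    $ ssh-keygen -r host.example.com

    # on each client, in ~/.ssh/config or /etc/ssh/ssh_config:
    VerifyHostKeyDNS yes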


Or they automatically distribute the host key fingerprints onto employees' machines via some organization-wide internal method (LDAP, the orchestration/configuration-management tool of the month, an ssh_config pointing to a global known_hosts on a share, etc.).
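
With plain ssh_config that can be as simple as pointing everyone at a centrally managed file (path illustrative):

    # /etc/ssh/ssh_config, pushed out by config management
    GlobalKnownHostsFile /srv/corp/ssh_known_hosts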


OpenSSH also supports certificates, so you can have the host present a certificate identifying itself -- but you have to set up a CA and arrange for new hosts to securely have their host keys signed by it.
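
Roughly, with all names illustrative: the CA signs each host key once, and clients then trust the CA rather than individual host keys.

    # on the CA: sign a host key (-h marks it as a host certificate)
    $ ssh-keygen -s ca_key -I host.example.com -h \
          -n host.example.com /etc/ssh/ssh_host_ed25519_key.pub

    # in the host's sshd_config:
    HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub

    # on every client, one known_hosts line covers the whole domain:
    @cert-authority *.example.com ssh-ed25519 AAAA...ca-public-key...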


"Ideally they set up SSHFP dns records with the hostkey fingerprints ..."

We (rsync.net) are doing this in 2019. A little embarrassed we haven't done it already ...


I've been using SSH since SSH-1 was released in 1995. I've never seen anyone verify a host key, ever!

Also, when I reinstall a server and keep the name, I preserve the keys from the original server.
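
Concretely, that just means carrying /etc/ssh/ssh_host_* across the reinstall, something like:

    # before the reinstall (these are secrets -- stash them somewhere safe)
    tar czf host-keys.tgz /etc/ssh/ssh_host_*
    # after the reinstall (service name varies by system)
    tar xzf host-keys.tgz -C / && systemctl restart sshd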


I copied over results files from a university cluster to my local home dir. It wouldn't surprise me if it were possible for another user of the cluster to affect the server-side SCP program.

I trust the cluster enough to give me those results. I don't trust it with write access to .bash_aliases. That's why I did

    scp host:file .
rather than

    scp 'host:*' .


Because the remote host could be compromised/hacked even when you "trust" it, this could easily be used to jump from a compromised auxiliary external server to owning the internal laptop/desktop of a domain-wide administrator, for example.


If somebody manages to hack into a server somehow they can then contaminate hosts that attempt to scp from it. It's not the easiest exploit ever made but it's definitely pretty bad.


> I just don’t understand the use case of scp’ing from an untrusted host

You are scp'ing against an untrusted machine: your client is trusted, the server you are connecting to is not.

Sometimes circumstances push you to connect to a server that is "semi-trusted" as described in the comments above.


> I just don’t understand the use case of scp’ing from an untrusted host.

What? Why would it even be an expectation that the host ought to be trusted? Would you say that about FTP, or HTTP, or any other file-transfer protocol? What's special about SCP?



