Hacker News

It’s just not the same as having transfers available on the remote command line. I have a program I wrote that maintains a local server with a port forwarded over SSH. Then on the remote side I have a client program that sends a file (or folder) back to my local computer. It can either save the file or open it, depending on the command I ran.

This type of automatic transfer makes it very convenient to generate figures on a server (where the data lives) and then view them locally. It’s a much better workflow than having to use sshfs or scp.

What I wrote is really quite similar to an old transfer program like Zmodem, with the added feature of auto-opening a file if I choose.
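The receiver/sender pair described above could be sketched roughly like this (this is my own minimal illustration, not the commenter's actual program: a local server saves whatever arrives on a loopback port, and the remote client reaches it through a reverse tunnel such as `ssh -R 9999:localhost:9999 myserver` — the port number, the `serve`/`send_file` names, and the simple length-prefixed protocol are all assumptions for the sketch):

```python
import pathlib
import socket
import socketserver
import struct
import threading


class ReceiveHandler(socketserver.BaseRequestHandler):
    """Save one incoming file per connection.

    Assumed wire format: 2-byte big-endian name length, the file name,
    then the raw file bytes until the sender closes the connection.
    """

    def handle(self):
        (name_len,) = struct.unpack("!H", self._recv_exact(2))
        name = self._recv_exact(name_len).decode("utf-8")
        # Keep only the basename so a malicious name can't escape out_dir.
        dest = pathlib.Path(self.server.out_dir) / pathlib.Path(name).name
        with open(dest, "wb") as f:
            while chunk := self.request.recv(65536):
                f.write(chunk)

    def _recv_exact(self, n):
        buf = b""
        while len(buf) < n:
            part = self.request.recv(n - len(buf))
            if not part:
                raise ConnectionError("peer closed before header finished")
            buf += part
        return buf


def serve(port, out_dir):
    """Local side: listen on loopback; SSH's -R makes this reachable remotely."""
    srv = socketserver.ThreadingTCPServer(("127.0.0.1", port), ReceiveHandler)
    srv.out_dir = out_dir
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv


def send_file(port, path):
    """Remote side: connect to the forwarded port and stream one file back."""
    name = pathlib.Path(path).name.encode("utf-8")
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(struct.pack("!H", len(name)) + name)
        with open(path, "rb") as f:
            while chunk := f.read(65536):
                s.sendall(chunk)
```

A real version would add the "open it instead of saving it" flag (e.g. handing the saved path to `xdg-open` or `open`), but the tunnel-plus-loopback-server shape is the whole trick.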



Once you mount everything as local with rclone, transfer protocols make no sense; everything should be part of a filesystem. Plan 9 did it right: there's no difference between local and remote once you get the hang of 'bind'.


But if the analysis programs are on a different server, or you don’t want to transfer hundreds of GB of data to the local machine for processing, you still want the ability to selectively transfer individual files.

It doesn’t matter if you mount a remote server locally… unless your data is of a trivial size, you still want to do the processing remotely.

I see this as a theory vs practice difference. In theory having one unified file system is great and the way to go. In practice… there are issues.




