For background, on Sandstorm, each Gitlab project is placed in a separate grain (container), isolated from all others. To communicate with a grain at all, you must have been granted some level of access by its owner -- Sandstorm does not let you send requests to private grains to which you haven't been given access. So, private Gitlab repos hosted on Sandstorm are essentially immune to all of these vulnerabilities.
Of course, Gitlab is the kind of thing you might intentionally make public to all, e.g. to host an open source project. Therefore, it makes sense to analyze whether a public repository would be exploitable.
Privilege escalation via "impersonate" feature
On Sandstorm, authentication is handled by Sandstorm. The app receives an unspoofable header indicating which user the request came from, and what permissions they have. A well-written app uses this header on every request to authenticate the user.
Unfortunately, our Gitlab package currently uses this information only when a session first opens, then relies on the session cookie going forward. This pattern is sometimes used as a "hack" on Sandstorm to more easily integrate with existing login code designed to do upfront / one-time authentication. As such, public Gitlab repositories hosted on Sandstorm would be vulnerable.
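To make the distinction concrete, here is a minimal sketch of the per-request pattern a well-written app would use. It assumes the header names conventionally injected by sandstorm-http-bridge (`X-Sandstorm-User-Id`, `X-Sandstorm-Permissions`); treat those as an assumption, not a guaranteed API.

```python
# Sketch of per-request authentication from Sandstorm's injected headers.
# Because Sandstorm strips and re-adds these headers on every request, the
# app can trust them unconditionally -- but only if it re-reads them each
# time, instead of caching the result in a session cookie (the "hack"
# described above).

def authenticate(headers):
    """Derive the acting user from this request's Sandstorm headers."""
    user_id = headers.get("X-Sandstorm-User-Id")       # stable, unspoofable ID
    perms = headers.get("X-Sandstorm-Permissions", "")  # comma-separated list
    return {
        "user_id": user_id,  # None => anonymous visitor
        "permissions": set(p for p in perms.split(",") if p),
    }

def require(headers, permission):
    """Check a permission against THIS request, not a stale session."""
    return permission in authenticate(headers)["permissions"]
```

With this pattern, revoking a user's access (or ending an impersonation) takes effect on their very next request, because nothing about the old identity survives in the app's own state.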
Had Gitlab on Sandstorm been implemented "properly", it would not be vulnerable. That said, the "impersonate user" feature would not have worked at all. That's probably for the best: a feature like this really ought to be implemented by Sandstorm itself, which would have the ability to implement it (securely) across all apps at once, rather than have each app implement its own version.
Somewhat embarrassingly, the Sandstorm package of Gitlab actually predates the addition of this feature, so Gitlab instances on Sandstorm today aren't actually vulnerable. (Generally, if the upstream app author does not directly maintain the Sandstorm package, then the Sandstorm package will tend to fall behind. This should get better as Sandstorm gains popularity and upstream authors target it explicitly.)
Privilege escalation via notes API
Privilege escalation via project webhook API
Information disclosure via project labels
Information disclosure via new merge request page
These vulnerabilities allow a user to manipulate a private project to which they aren't supposed to have access. On Sandstorm, a private project would live in its own grain, and if you hadn't been given access then Sandstorm would deny you the ability to talk to the grain at all. Therefore, these cannot be exploited.
XSS vulnerability via branch and tag names
XSS vulnerability via custom issue tracker URL
XSS vulnerability via label drop-down
These vulnerabilities require that you have write access to one project on the server in order to launch an attack. On Sandstorm, since every repository is its own instance, the attack would be limited to the repository on which you have write access -- you would not be able to use this attack to damage someone else's Gitlab repository on which you lack write access.
XSS vulnerability via window.opener
This is a subtle phishing issue that is very widespread. However, it mostly doesn't work when the opener is a Sandstorm app: the app lives inside an iframe which is prohibited by Content-Security-Policy from browsing away from the server.
Information disclosure via milestone API
Information disclosure via snippet API
A public project hosted on Sandstorm would be vulnerable to these (leaking confidential issues attached to public milestones, and leaking private snippets attached to a public project). Sandstorm can only enforce access control at the grain level; anything finer than that is up to the app, and is subject to app bugs. I generally recommend that Sandstorm users try to put confidential data in separate grains from public data -- e.g. creating a separate issue tracker for confidential issues.
(The Sandstorm packaging again predates the milestone bug's introduction, though may be affected by the snippet issue.)
We will update the Gitlab package tomorrow. Once an update is pushed, every Sandstorm user will receive a notification within 24 hours and can apply the update with one click.
Conclusion: Sandstorm mitigated 8/11 issues by design, 2/11 by accident, and is vulnerable to 1/11. The biggest issue was only mitigated by accident in this case, although a well-behaved Sandstorm app normally wouldn't have this kind of issue by design. Overall, though, I'm disappointed in Sandstorm's performance here -- it's much worse than the usual 95% mitigation rate.
Wouldn't this be better in a blog post than as an HN comment? It seems interesting enough - but at least personally I would prefer that the HN comment thread is a discussion of the topic, rather than marketing of another product.
I always get bothered when people bring up the `curl [...] | bash` "argument" (which is usually less of an argument and more of a rude dismissal of a good product).

Sure, the script downloaded with curl should be validated. But I'm pretty sure you don't validate every tarball you download, and even if you do, you certainly wouldn't look through each line of code making sure there isn't a backdoor or something.

Any malicious person could break into a tarball download site and replace the checksums as well (the only way to mitigate that is a signature associated with the tarball maintainer), and even if something like PGP is used, the attacker could replace the instructions for obtaining the PGP key with their own generated key, or even remove the signature completely.

This is, of course, the same for curl piped into bash, but my point is that it's only slightly worse than an unsecured package site (of which I'm sure there are many). The only thing worse about it is that it's less noticeable when the script has been replaced.

Also, some scripts (such as the rvm installation script) /require/ PGP verification for the installation to even start.

In conclusion, please don't be rude and dismiss an entire product because of a single installation method, which isn't actually worse than a malicious tarball.
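The checksum point above can be made concrete: a hash published on the same site as the tarball proves only that the two files agree, not who produced them. A minimal sketch (pure hashlib, hypothetical data, no real downloads):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex digest of a downloaded artifact."""
    return hashlib.sha256(data).hexdigest()

def verify_checksum(tarball: bytes, published_hash: str) -> bool:
    """Passes whenever tarball and hash agree. An attacker who controls
    the download site can regenerate BOTH, so this defends only against
    corruption in transit, not against a compromised server."""
    return sha256_hex(tarball) == published_hash

# A compromised site serves a backdoored tarball plus a matching hash,
# and the checksum verification passes anyway:
evil = b"#!/bin/sh\n: pretend this is a backdoor\n"
assert verify_checksum(evil, sha256_hex(evil))
```

A maintainer-held signing key breaks this symmetry, because the attacker can re-hash but cannot re-sign -- which is exactly why the signature, not the checksum, is what matters.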
> I always get bothered when people bring up the `curl [...] | bash` “argument” (which is usually less of an argument and more of a rude dismissal of a good product).
Why? Package managers were created for a reason. Virtually every Linux distribution anyone would use to host a service such as Sandstorm will include a package manager.
> Not sure, but I’m pretty sure you don’t validate every tarball you download, and even if you do validate them, you certainly wouldn’t look through each line of code making sure there isn’t a backdoor or something.
I don't download tarballs. I install packages. Who downloads tarballs to install software in 2016?
Anyone actively developing for a project will likely be using git, and I'm not aware of too many projects which make use of toolchains which are so new that they're not available in any rolling release/testing distro.
> Any malicious person could break into a tarball download site, and replace the checksums as well (only way to mitigate that is using a signature that is associated with the tarball maintainer), and even if something like PGP is used, the attacker would be able to replace the instructions to obtain the PGP key with their own generated one, or even remove the signature completely.
Yes, fine. And that's why all major distributions sign their packages, so you know that it's valid. Again, it's 2016, who seriously installs software from tarballs?
> The only thing worse about it is that it’s less noticeable when the script has been replaced.
No, what is much worse than that is that you are installing software outside of your package manager. Also, companies that deploy via curl | bash typically pull in their own dependencies outside of the package manager.
It's bad form.
If your application has specific dependencies, then ship it as an appliance using LXC, Docker, or $TRENDY_CONTAINER_TECH. That way, all updates are atomic and you don't risk eff-ing the OS by pulling in a bunch of stuff outside of the package manager.
> In conclusion, please don’t be rude and dismiss an entire product because of a single installation method, which isn’t actually worse than a malicious tarball.
In conclusion, package management is a solved problem. Companies which have installers which do not make use of the distribution's package management system are lazy, and anyone who defends curl | bash is an apologist.
Seriously, it's not difficult to generate a DEB/RPM/<insert distribution package format> and create your own signed repo. Package management has been a solved problem for at least a decade.
It annoys me that companies think it's okay to have curl | bash as an installation method. If they think that will ever be acceptable in an enterprise environment where change management is important, they're delusional.
> Virtually every Linux distribution anyone would use to host a service such as Sandstorm will include a package manager.
Yes, but:
* Most of them ship on a 6-month or even 2-year release cycle whereas Sandstorm updates every week.
* Most of them will not accept a package that wants to self-containerize with its own dependencies, which means Sandstorm would instead have to test against every different distro's dependency packages every week, whereas with self-containerization we only depend on the Linux kernel API -- which is insanely stable.
* If we publish the packages from our own repo, we're back to square one: how does the user get the signing key, if not from HTTPS download?
* You should probably be installing Sandstorm in its own VM anyway.
I'm certainly not saying curl|bash is perfect. There are trade-offs. For now this is the trade-off that makes the most sense for us. Later on, maybe something else will make sense.
> If they think that will ever be acceptable in an enterprise environment where change management is important, they're delusional.
We've never had an enterprise customer comment on this (other than requesting the PGP-verified option, which we implemented). The vast majority of complaints come from HN, Twitter, and Reddit. ::shrug::
> Most of them ship on a 6-month or even 2-year release cycle whereas Sandstorm updates every week.
That's fine. Run your own repo where you control the release cycle. Puppet does this. GitLab does this. PostgreSQL does this. Sandstorm does not do this.
> Most of them will not accept a package that wants to self-containerize with its own dependencies, which means Sandstorm would instead have to test against every different distro's dependency packages every week, whereas with self-containerization we only depend on the Linux kernel API -- which is insanely stable.
GitLab ships an omnibus installer with their CE/EE product, and it works great. I don't see why Sandstorm couldn't also publish an omnibus installer which contains all the dependencies, in essence creating your own container somewhere neutral like /opt.
This way, you have atomic releases you can install. How do I select the version to install with 'curl | bash' without user interaction?
> If we publish the packages from our own repo, we're back to square one: how does the user get the signing key, if not from HTTPS download?
Publish your signing key as a distribution package. This is what most organizations do (e.g. EPEL, PostgreSQL, Puppet).
Then the user does 'apt-get install sandstorm-release', 'apt-get update', 'apt-get install sandstorm' and you have an authenticated release.
Your signing infrastructure should be secure enough that you don't have to change the signing key within the major release cycle of a Linux distribution anyway.
> You should probably be installing Sandstorm in its own VM anyway.
This isn't an excuse for 'curl | bash' installs. If they want to recommend that their customers run the product in a VM or container, they should provide appliance/container images, as well as packages.
> We've never had an enterprise customer comment on this
I am in enterprise, although I'm not a Sandstorm customer. I will tell you that one of the first considerations when deciding on a product is how well it fits into our existing infrastructure. If I can't push a repo and install a specific version of the software, tested and known working, using puppet, it's not getting deployed in our org.
Perhaps the reason none of Sandstorm's enterprise customers are asking for packaged installers is because every enterprise customer who expects a packaged installer sees Sandstorm's installation method and decides not to use it.
I agree 100% that all companies should have a signed package repository instead of tarballs.

I didn't mean to defend `curl | bash`; I just meant to explain why `curl | bash` isn't as bad as people think (versus tarballs). Package managers are definitely far superior to both tarballs and `curl | bash`.

Another thing I didn't like is how the OP seemingly dismissed an entire product with the single statement "kthxbai", but that's not relevant.

In an ideal world, companies (or even small projects) would have repositories; in an even more ideal world, they'd already be in the distro repositories.
At least it's HTTPS, so there's no MITM attack. You have to trust the Sandstorm developers either way, even if you download the source and apply all the steps manually. If the site gets compromised, the attackers can modify both the source and the website documentation for the PGP verification.
> Since every project is isolated, how do you handle operations that require data from multiple projects, such as the activity feed or merge requests?
Right now: Poorly.
But in a few months:
- We're adding an activity feed API to Sandstorm, so that you can get a unified feed and notifications across all apps.
- The Powerbox (https://sandstorm.io/how-it-works#powerbox) allows connecting grains to each other. We will define a git API which each of the git apps (Gitlab, Gitweb, Gogs) can implement, so that you can even do cross-app merge requests.
https://docs.sandstorm.io/en/latest/using/security-non-event...
Let's see how it scores here...