It is hard to understand at first, but after playing with this a bit it will make sense. As with everything, there are trade-offs compared to IEEE floats, but having more precision when numbers are close to 1 is pretty nice.
According to the Wayback Machine, the domain expired at the end of 2021. Months later it came back with the sneaky ad. Their page on GitHub, https://includeos.github.io/, does not have it. It is safe to assume whoever grabbed the domain has no association with the authors.
You're completely right. It's a squatter and he wants money. The project is not dead, however. It's currently used for research at University of Oslo, and I think we can expect much more activity in the coming years, such as ARM support.
I might add RISC-V support myself, as I am the author of libriscv. We'll see.
> You're completely right. It's a squatter and he wants money. The project is not dead, however. It's currently used for research at University of Oslo
If someone bought the domain and is re-using the same assets to put advertisements up, then the author of IncludeOS would have some real teeth in court...
Yes, we have spoken to a lawyer and it is indeed the case. It still costs money, and we are not sure how to proceed yet, outside of removing links to the domain.
It's a lot harder to make a mistake reading a Mersenne prime in binary, and if you loop your videos, one person should be able to do it quickly, so it might be a bit boring.
There are 136,279,841 ones in the binary representation of this Mersenne prime, since 2^p − 1 written in base 2 is just p ones.
That works out to about 41 million seconds' worth of footage, or about 473 days.
Given there was a gap of 6 years between the discovery of this Mersenne prime and the previous one, in theory one person could say all the digits (in binary) before we find the next Mersenne prime!
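A quick sanity check of those numbers (the reciting rate of ~3.3 binary digits per second is my assumption, chosen to roughly match the footage estimate above):

```python
# Every Mersenne prime 2**p - 1 is written in binary as p consecutive ones,
# so reciting it digit-by-digit means saying 136,279,841 ones.
ones = 136_279_841

# Assumed reciting rate in digits per second -- a guess, not a measurement.
rate = 3.3

seconds = ones / rate
days = seconds / 86_400  # 86,400 seconds per day

print(f"{seconds / 1e6:.0f} million seconds, about {days:.0f} days")
```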
Sure, if that one person can stay awake and never eat, drink, or use the restroom. Otherwise, you need to make the union happy with 8-hour shifts, two 15-minute breaks, and a minimum 12-hour turnaround before the next shift. If they work more than 8 hours in a day, they are entitled to overtime. Working a sixth day in a row is automatic 1.5x pay, and seven consecutive days is automatic 2x pay.
"The largest known prime is two raised to the power of one hundred thirty-six million two hundred seventy-nine thousand eight hundred forty-one, minus one" only took me 8.2 seconds to say aloud at a normal speaking pace. :)
> Given there was a gap of 6 years between the discovery of this mersenne prime and the previous
It was the largest gap so far. Several primes were quite close. The next one might be discovered much sooner. The previous two had a gap of a bit less than a year.
They are trying to build a decentralized and open social protocol, and one of the first things they did was remove the scrollbar from their pages for no reason at all. They are definitely on the right track.
I saw an interesting article about how a million-line production codebase was able to run simultaneously in both Python 2 and 3, so I don't see how you could call them "different languages with similar names."
Python 2 and 3 are nearly identical. Yes, there are some backward-incompatible changes, but it's nothing like the gulf between Java and JavaScript, which are related only in that they use C-like syntax. Java and C# have far more in common with each other than either has with JavaScript.
That is not correct at all. Python 2 and Python 3 have relatively few syntax differences. You could teach someone the key differences in like 10 minutes.
In fact, I would even say that the difference between Python 2 and Python 3 syntax is far smaller than the syntax difference between traditional JavaScript and the latest revisions of the language (ES6+).
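For what it's worth, the breakages most people actually hit fit in a few lines. A sketch, run under Python 3, with the Python 2 behavior noted in comments:

```python
# The most common Python 2 -> 3 differences, in one place.

# 1. print is a function, not a statement:
print("hello")            # Python 2: print "hello"

# 2. Division of ints yields a float; use // for floor division:
assert 3 / 2 == 1.5       # Python 2: 3 / 2 == 1
assert 3 // 2 == 1        # floor division, same in both versions

# 3. Text and bytes are distinct types, with explicit conversion:
text = "café"                  # str (unicode) in Python 3
data = text.encode("utf-8")    # bytes
assert isinstance(data, bytes)

# 4. Several builtins return lazy iterators instead of lists:
squares = map(lambda x: x * x, [1, 2, 3])
assert list(squares) == [1, 4, 9]   # materialize explicitly
```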
I always get bothered when people bring up the `curl [...] | bash` “argument” (which is usually less of an argument and more of a rude dismissal of a good product).
Sure, the script downloaded with curl should be validated.
But I’m pretty sure you don’t validate every tarball you download, and even if you do validate them, you certainly wouldn’t look through each line of code to make sure there isn’t a backdoor or something.
Any malicious person could break into a tarball download site and replace the checksums as well (the only way to mitigate that is a signature associated with the tarball maintainer), and even if something like PGP is used, the attacker could replace the instructions for obtaining the PGP key with a key they generated themselves, or even remove the signature completely.
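To make the checksum point concrete: verification only helps when the expected digest comes from a channel the attacker doesn't control. A minimal sketch (the script contents here are made up for illustration; this is not any real product's installer):

```python
import hashlib

def verify(payload: bytes, expected_sha256: str) -> bool:
    """Return True only if the payload hashes to the digest we expected."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

# Stand-in for a downloaded install script.
script = b"#!/bin/sh\necho installing...\n"

# If this digest is fetched from the *same* compromised site as the script,
# the attacker can simply update both, and the check proves nothing.
trusted_digest = hashlib.sha256(script).hexdigest()

assert verify(script, trusted_digest)                 # untampered: passes
assert not verify(script + b"evil", trusted_digest)   # tampered: fails
```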
This is, of course, the same for curl piped into bash, but my point is that it’s only slightly worse than an unsecured package site (and I’m sure there are many of those). The only thing worse about it is that it’s less noticeable when the script has been replaced.
Also, some scripts (such as the rvm installation script) /require/ PGP to be used for the installation to even start.
In conclusion, please don’t be rude and dismiss an entire product because of a single installation method, which isn’t actually worse than a malicious tarball.
> I always get bothered when people bring up the `curl [...] | bash` “argument” (which is usually less of an argument and more of a rude dismissal of a good product).
Why? Package managers were created for a reason. Virtually every Linux distribution anyone would use to host a service such as Sandstorm will include a package manager.
> Not sure, but I’m pretty sure you don’t validate every tarball you download, and even if you do validate them, you certainly wouldn’t look through each line of code making sure there isn’t a backdoor or something.
I don't download tarballs. I install packages. Who downloads tarballs to install software in 2016?
Anyone actively developing for a project will likely be using git, and I'm not aware of too many projects whose toolchains are so new that they're not available in any rolling-release/testing distro.
> Any malicious person could break into a tarball download site, and replace the checksums as well (only way to mitigate that is using a signature that is associated with the tarball maintainer), and even if something like PGP is used, the attacker would be able to replace the instructions to obtain the PGP key with their own generated one, or even remove the signature completely.
Yes, fine. And that's why all major distributions sign their packages, so you know that it's valid. Again, it's 2016, who seriously installs software from tarballs?
> The only thing worse about it is that it’s less noticeable when the script has been replaced.
No, what is much worse is that you are installing software outside of your package manager. Also, companies that deploy via curl | bash typically pull in their own dependencies outside of the package manager.
It's bad form.
If your application has specific dependencies, then ship it as an appliance using LXC, Docker, or $TRENDY_CONTAINER_TECH. That way, all updates are atomic and you don't risk eff-ing the OS by pulling in a bunch of stuff outside of the package manager.
> In conclusion, please don’t be rude and dismiss an entire product because of a single installation method, which isn’t actually worse than a malicious tarball.
In conclusion, package management is a solved problem. Companies which have installers which do not make use of the distribution's package management system are lazy, and anyone who defends curl | bash is an apologist.
Seriously, it's not difficult to generate a DEB/RPM/<insert distribution package format> and create your own signed repo. Package management has been a solved problem for at least a decade.
It annoys me that companies think it's okay to have curl | bash as an installation method. If they think that will ever be acceptable in an enterprise environment where change management is important, they're delusional.
> Virtually every Linux distribution anyone would use to host a service such as Sandstorm will include a package manager.
Yes, but:
* Most of them ship on a 6-month or even 2-year release cycle whereas Sandstorm updates every week.
* Most of them will not accept a package that wants to self-containerize with its own dependencies, which means Sandstorm would instead have to test against every different distro's dependency packages every week, whereas with self-containerization we only depend on the Linux kernel API -- which is insanely stable.
* If we publish the packages from our own repo, we're back to square one: how does the user get the signing key, if not from HTTPS download?
* You should probably be installing Sandstorm in its own VM anyway.
I'm certainly not saying curl|bash is perfect. There are trade-offs. For now this is the trade-off that makes the most sense for us. Later on, maybe something else will make sense.
> If they think that will ever be acceptable in an enterprise environment where change management is important, they're delusional.
We've never had an enterprise customer comment on this (other than requesting the PGP-verified option, which we implemented). The vast majority of complaints come from HN, Twitter, and Reddit. ::shrug::
> Most of them ship on a 6-month or even 2-year release cycle whereas Sandstorm updates every week.
That's fine. Run your own repo where you control the release cycle. Puppet does this. GitLab does this. PostgreSQL does this. Sandstorm does not do this.
> Most of them will not accept a package that wants to self-containerize with its own dependencies, which means Sandstorm would instead have to test against every different distro's dependency packages every week, whereas with self-containerization we only depend on the Linux kernel API -- which is insanely stable.
GitLab ships an omnibus installer with their CE/EE product, and it works great. I don't see why Sandstorm couldn't also publish an omnibus installer which contains all the dependencies, in essence creating your own container somewhere neutral like /opt.
This way, you have atomic releases you can install. How do I select the version to install with 'curl | bash' without user interaction?
> If we publish the packages from our own repo, we're back to square one: how does the user get the signing key, if not from HTTPS download?
Publish your signing key as a distribution package. This is what most organizations do (e.g. EPEL, PostgreSQL, Puppet).
Then the user does 'apt-get install sandstorm-release', 'apt-get update', 'apt-get install sandstorm' and you have an authenticated release.
Your signing infrastructure should be secure enough that you don't have to change the signing key within the major release cycle of a Linux distribution anyway.
> You should probably be installing Sandstorm in its own VM anyway.
This isn't an excuse for 'curl | bash' installs. If they want to recommend that their customers run the product in a VM or container, they should provide appliance/container images, as well as packages.
> We've never had an enterprise customer comment on this
I am in enterprise, although I'm not a sandstorm customer. I will tell you that one of the first considerations when deciding on a product is how well it fits into our existing infrastructure. If I can't push a repo and install a specific version of the software, tested and known working, using puppet, it's not getting deployed in our org.
Perhaps the reason none of Sandstorm's enterprise customers are asking for packaged installers is because every enterprise customer who expects a packaged installer sees Sandstorm's installation method and decides not to use it.
I agree 100% that all companies should have a signed package repository instead of tarballs.
I didn’t mean to defend `curl | bash`; I just meant to say why `curl | bash` isn’t as bad as people think (versus tarballs). Package managers are definitely far superior to both tarballs and `curl | bash`.
Another thing I didn’t like is how the OP seemingly dismissed an entire product with the single statement “kthxbai”, but that’s not relevant.
In an ideal world, companies (or even small projects) would have their own repositories; in an even more ideal world, they would already be in the distro repositories.
At least it is HTTPS, so no MITM attack. You still have to trust the Sandstorm developers, even if you download the source and apply all the steps manually. If the site gets compromised, the attackers can modify the source and the website's documentation for the PGP verification.