Keep in mind that it's not just a matter of comparing the JS engine. The runtime built around the engine can have a far greater impact on performance than the choice of V8 vs. JSC vs. anything else. In microbenchmarks, Bun routinely outperforms Node.js and Deno by a wide margin.
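To make that concrete: an engine-bound microbenchmark and a runtime-bound one measure very different things. Here's a rough, illustrative sketch in Node-flavored JS -- the workloads and iteration counts are arbitrary, not taken from any published benchmark:

    // Engine-bound: a pure computation loop, dominated by the JS engine (V8/JSC).
    function fib(n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

    // Runtime-bound: repeated blocking file reads, dominated by the runtime's I/O layer.
    const { readFileSync } = require('node:fs');
    function readLoop(path, iterations) {
      for (let i = 0; i < iterations; i++) readFileSync(path);
    }

    console.time('engine');  fib(30);                       console.timeEnd('engine');
    console.time('runtime'); readLoop('package.json', 1e4); console.timeEnd('runtime');

Only the second timing tells you much about the runtime built around the engine.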
The claim I responded to is that Bun is "at least twice as fast" as Deno. That sounds a lot more general than Bun being twice as fast in cherry-picked microbenchmarks. I wasn't able to find any benchmark showing meaningful differences between the two runtimes on real-world workloads. (Example: https://hackernoon.com/myth-vs-reality-real-world-runtime-pe...)
With Bun's existing OSS license and contribution model, all contributors retain their copyright and Bun retains the license to use those contributions. An acquisition of this kind cannot change the terms under which prior contributions were made without explicit agreement from all contributors. If Bun did switch to a CLA in the future then, just as with any OSS project, it would only affect contributions made after that CLA went into effect, and the impact would depend entirely on the terms of that hypothetical CLA.
Hello, thank you, but that doesn't answer my question. I'm not asking for a definition, but for information about licensing decisions for the future of Bun.
There are a number of semver-major changes included in v7. That said, the goal for this release has been improved stability and performance rather than new features, so the jump from v6 to v7 is fairly small.
I think gedy meant that "v7.0 released!" used to signal that one could expect exciting, fun features to be present, and that the new version was worth taking a look at and playing with.
With semver, it seems like a lot of "new features" are typically released in minor versions, since quite often they don't need to break compatibility in order to introduce features. So major versions are, to me, almost more of a cause for concern these days. My first thought is typically "Oh no, what part of my stack is going to break now? How much time will I spend tracking down the fix?"
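For what it's worth, the default dependency ranges encode exactly this asymmetry: a caret range pulls in new minor-version features automatically, while a major bump forces the "what's going to break" investigation. A quick sketch using the npm semver package (assuming it's installed):

    const semver = require('semver');

    // '^1.2.0' is the default caret range npm writes into package.json.
    console.log(semver.satisfies('1.3.0', '^1.2.0')); // true  -- minor features flow in silently
    console.log(semver.satisfies('2.0.0', '^1.2.0')); // false -- a major bump requires opting in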
> So major versions are, to me, almost more of a cause for concern these days. My first thought is typically "Oh no, what part of my stack is going to break now? How much time will I spend tracking down the fix?"
Isn't that exactly the point of semver?
And, assuming things will break at some point,* isn't that great? Now you know when to expect it.
Semver doesn't influence the design decisions made over a project's lifetime. It describes them.
* fair assumption, unless you're dealing with software which literally never breaks backwards compatibility.
Yeah, that's definitely the point of SemVer. The only point I'm making (and presumably the one gedy is making) is that a major version no longer feels like Christmas morning, but rather akin to "see me in my office tomorrow morning." Okay, not quite that bad, but in the same vein.
SemVer is great and helpful and I wouldn't choose anything else currently, but it lacks the built-in PR that old-school major versions seemed to have, where a major version bump usually meant you could get excited about exploring major new features. There's nothing special about a minor SemVer bump that says "major new features have been introduced." The spec only asserts that minor means new features.
That is, there's no obvious way to know that 1.1 introduced only one new method for checking status, while 1.2 introduced a new magic() method that finishes your work for you and makes all your dreams come true. :-P
Eh, no harm, no foul. :-) Clearly there was still room for misunderstanding, so I could have been clearer, and the exchange gave me the chance to flesh out my thoughts a bit better as well. :-)
To be clear, v4.2 is the stable, dependable release that most users should be targeting. Developers who want to track new development and new features, and who want more flexibility, can move up to v5. Next year at around this time, v6 will become the new stable target. The actual number of breaking changes between v4 and v6 is likely to be quite small -- things that work in v4 should continue to work in v6 and beyond. So the best advice I can give is: if you are developing modules for other developers to use in their applications, primarily target v4 for now, but keep an eye on where v5 is going and use it as your beta channel for new development.
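(If it helps to see it spelled out: a hypothetical module author in that position could declare the v4 floor using package.json's standard "engines" field, while running v5 in CI as the beta channel -- the exact range here is just an example:)

    {
      "engines": {
        "node": ">=4.0.0"
      }
    }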
For Node.js core APIs, there is a deprecation strategy that requires at least one major version cycle before anything can be removed, and even then the chances of things actually being removed are slim. We're trying to take a very cautious and conservative approach to breaking changes in core. Generally, if it works in v4, it should continue to work in v5; if it doesn't, that's a bug that should be reported and fixed.
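The same warn-before-removing pattern is exposed to userland through util.deprecate(), which gives a rough picture of how the cycle works -- the wrapped function below is a made-up example, not a real core API:

    const util = require('util');

    // Hypothetical API slated for removal in a later major version.
    function oldHelper(x) { return x * 2; }

    // The wrapper keeps working, but emits a DeprecationWarning once per process.
    const deprecatedHelper = util.deprecate(
      oldHelper,
      'oldHelper() is deprecated and will be removed in a future major version.'
    );

    deprecatedHelper(21); // still returns 42; prints the warning to stderr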
Changes in npm, on the other hand, are a different story. Technically, npm is not part of Node core; it's a utility that we bundle with core, but it has its own lifecycle, its own process, and its own separate project. The "contract" between npm and Node.js is still being worked out, and this kind of feedback is extremely useful.
> For Node.js core APIs, there is a deprecation strategy that requires at least one major version cycle before anything can be removed
Considering there were only 52 days between major versions, that criterion isn't really useful; I'd expect a timeframe longer than a single version cycle.
> Changes in npm, on the other hand, are a different story. Technically, npm is not part of Node core; it's a utility that we bundle with core, but it has its own lifecycle, its own process, and its own separate project. The "contract" between npm and Node.js is still being worked out, and this kind of feedback is extremely useful.
I really should have said that the main issue is writing code with Node and sharing it; Node has essentially been given the responsibility of being the standard set of libraries for JavaScript on the server, and having standard libraries change APIs, even in minor ways, in less than two months is typically indicative of a language pre-1.0.
But I'm hopeful you're right and things will continue working in most respects; it's just the edge cases that worry me. I don't want to spend a ton of time developing something that simply stops working in the very next version, without a good path to shipping code that works with both.
To be certain, whether or not LINK is supported is not really the issue. The file is mishandling HTTP methods in general for the sake of "optimization". Had the parser been written correctly in the first place, it ought to have been trivial for someone to add support for new extension methods like LINK, but since the parser is broken, it becomes significantly more difficult.
Oh, to be certain. The HTTP spec does have extension methods; any token is valid as a method name. (Implementing the spec in Rust has taught me a lot about HTTP.) I guess I should avoid quibbling over the particular example and focus on the bigger picture :-)
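Concretely, the spec (RFC 7230) defines method = token, so a correct parser needs nothing more than a token check instead of a hardcoded method list. A minimal JS sketch of that check -- the function name is mine:

    // RFC 7230 tchar: ! # $ % & ' * + - . ^ _ ` | ~ plus DIGIT and ALPHA
    const TOKEN = /^[!#$%&'*+\-.^_`|~0-9A-Za-z]+$/;

    function isValidMethod(method) {
      return TOKEN.test(method);
    }

    isValidMethod('LINK');  // true  -- extension methods parse for free
    isValidMethod('GE T');  // false -- space is not a tchar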