I feel like Java's IDE support is best in class. I feel like go is firmly below average.
Like, Java has great tooling for attaching a debugger, including to running processes, and stepping through code, adding conditional breakpoints, poking through the stack at any given moment.
Most Go developers seem to still be stuck in println debugging land, akin to what you get in C.
The gopls language server generally takes noticeably more memory and CPU than my IDE requires for a similarly sized Java project, and Go has various IDE features that work way slower (like "find implementations of this interface").
The JVM has all sorts of great knobs and features to help you understand memory usage and tune performance, while Go doesn't even have a "go build -debug" vs "go build -release" distinction to turn optimizations on and off. Even in your fast iteration loop, Go is making production builds (since that's the only option), and the Go team also can't add any slow optimizations because that would slow down everyone's default build times. All the other sane compilers I know let you do a slower release build to get more performance.
The Go compiler doesn't emit warnings, insisting that you instead run a separate tool (go vet), but since it's a separate tool you now have to effectively compile the code twice just to get your warnings, making it slower than if the compiler just emitted them.
Go's cgo tooling is also far from best in class, with even nodejs and ruby having better support for linking to C libraries in my opinion.
Like, it's incredibly impressive that Go managed to re-invent so many wheels so well, but they managed to reach the point where things are bearable, not "best in class".
I think the only two languages that achieved actually good IDE tooling are elisp and smalltalk, kinda a shame that they're both unserious languages.
It absolutely is. There are not many ecosystems where you can attach a debugger to a live prod system with minimal overhead, or that have something like Flight Recorder, VisualVM, etc.
> The gopls language server generally takes noticeably more memory and cpu than my IDE requires for a similarly sized java project
Okay, come on now :D Absolutely everything around Java consumes gigabytes of memory. The culture of wastefulness is real.
The Go vs Java plugins for VSCode are no comparison in terms of RAM usage.
I don't know how much the Go plugin uses, which is how it should be for all software: it means usage is low enough that I never had to worry about it.
Meanwhile, my small Java projects get OOM-killed all the time during what I assume is the compilation the plugin does in the background? We're talking several gigabytes of RAM being used for... ??? I'm not exactly surprised; I've yet to see Java software that didn't demand gigabytes as a baseline. IntelliJ is no different btw, asinine startup times during which RAM usage balloons.
Java consumes memory because collecting garbage is extra work and under most circumstances it makes no sense to rush it. Meanwhile Go will rather take time away from your code to collect garbage, decreasing throughput. If there is ample memory available, why waste energy on that?
Nonetheless, it's absolutely trivial to set a single parameter to limit memory usage and Java's GCs being absolute beasts, they will have no problem operating more often.
Also, IntelliJ is a whole IDE that caches all your code in AST form for fast lookup and stuff like that, so it has to use some extra memory by definition (though it's also configurable if you really want, but it's a classic space vs time tradeoff again).
> If there is ample memory available, why waste energy on that?
Cute argument, but there's no way for a program to know how much memory is available. And there is such a thing as "appropriate amount of memory for this task". Hint: "4GB+" / "unbounded" is not the right answer for an LSP on a <10k line project.
> Nonetheless, it's absolutely trivial to set a single parameter to limit memory usage and Java's GCs being absolute beasts, they will have no problem operating more often.
Cool, then the issues I mentioned should never have existed in the first place. But they did, and probably still do today (I can't easily test). So clearly, they're not so easy to fix for some reason.
Also, this is such programmer-speak. It's trivial to set a single parameter? Absolutely, you just need to know that's what's needed, what the parameter is, how to set it, and what to set it to. Trivial! And how exactly would I, as a user, know any of this?
I'm a decent example here, since I could tell the software is written in Java (guess how), I know about its GC tuning parameters and I could probably figure out what parameters to set. So what exact value should I set? 500MB? 1GB? 2GB? How long should I spend doing trial-and-error?
Now consider you'd be burdening every single user of the program with the above. How about we, as engineers, choose the right program parameters, so that the user doesn't have to do or worry about anything? How about that.
> a classic space vs time tradeoff again
Most of the time, just like here, "it's a tradeoff" is a weasel phrase used to excuse poor engineering. Everything is a tradeoff, so pointing that out says nothing. Tradeoffs have to be chosen wisely too, you know.
> but there's no way for a program to know how much memory is available
It usually knows the total system RAM available, and within containers it knows the resource limits. So your statement is false.
> And there is such a thing as "appropriate amount of memory for this task"
Appropriate for whom? Who's to say whether I want this app to have better throughput no matter the memory cost, or whether I'll accept slightly lower throughput in exchange for the least amount of memory used?
> Now consider you'd be burdening every single user of the program with the above
Or you know, just set one as the developer? There isn't even such a thing as a standalone JRE anymore; the prevalent way to ship a Java app is with a "JRE" made up of only the modules the app needs, started with a shell script or so. As a developer, you can trivially ship your own set of flags and parameters.
Refcounting is significantly slower under most circumstances. You are literally putting a bunch of atomic increments/decrements into your code (if you can't prove that the given object is only used from a single thread) which are crazy expensive operations on modern CPUs, evicting caches.
The mutex case is one where they're using a mutex to guard read/writes to a map.
Please show us how to write that cleanly with channels, since clearly you understand channels better than the author.
I think the golang stdlib authors could use some help too, since they prefer mutexes for basically everything (look at sync.Map, it doesn't spin off a goroutine to handle read/write requests on channels, it uses a mutex).
In fact, almost every major go project seems to end up tending towards mutexes because channels are both incredibly slow, and worse for modeling some types of problems.
... I'll also point out that channels don't necessarily save you from data races. In Rust, passing a value over a channel moves ownership, so the writer can no longer access it. In Go, it's still incredibly easy to write data races; for example, the following is likely to be one:
handleItemChannel <- item
slog.Debug("wrote item", "item", item) // <-- probably races because 'item' ownership should have been passed along.
Developers have a bad habit of adding mutable fields to plain old data objects in Go, though, so even if a type is immutable today, it's easy for a developer to introduce a race down the line. There's no way to indicate that something must be immutable at compile time, so the compiler won't help you there.
Good points. I have also heard others say the same in the past regarding Go. I know very little about Go or its language development, however.
I wonder if Go could easily add some features regarding that. There are different ways to go about it. 'final' in Java is different from 'const' in C++, for example, and Rust has borrow checking and 'const'. I think the OCaml language developers have experimented with something inspired by Rust regarding concurrency.
Rust's `const` is an actual constant, like 4 + 1 is a constant, it's 5, it's never anything else, we don't need to store it anywhere - it's just 5. In C++ `const` is a type qualifier and that keyword stands for constant but really means immutable not constant.
This results in things like you can "cast away" C++ const and modify that variable anyway, whereas obviously we can't try to modify a constant because that's not what the word constant means.
In both languages 5 += 3 is nonsense, it can't mean anything to modify 5. But in Rust we can write `const FIVE: i32 = 5;` and now FIVE is also a constant and FIVE += 3 is also nonsense and won't compile. In contrast in C++ altering an immutable "const" variable you've named FIVE is merely forbidden, once we actually do this anyway it compiles and on many platforms now FIVE is eight...
Right, I forgot that 'const' in Rust is 'constexpr'/'consteval' in C++, while absence of 'mut' is probably closer to C++ 'const', my apologies.
C++ 'constexpr' and Rust 'const' is more about compile-time execution than marking something immutable.
In Rust, it is probably also possible to do a cast like &T to *mut T. Though that might require unsafe and might cause UB if not used properly. I recall some people hoping for better ergonomics when doing casting in unsafe Rust, since it might be easy to end up with UB.
Last I heard, C++ is better regarding 'constexpr' than Rust regarding 'const', and Zig is better than both on that subject.
AFAICT although C++ now has const, constexpr, consteval and constinit, none of those mean an actual constant. In particular, constexpr is largely just boilerplate left over from an earlier idea about true compile-time constants, and so it means almost nothing today.
Yes, the C++ compile time execution could certainly be considered more powerful than Rust's and Zig's even more powerful than that. It is expected that Rust will some day ship compile time constant trait evaluations, which will mean you don't have to write awkward code that avoids e.g. iterators -- so with that change it's probably in the same ballpark as C++ 17 (maybe a little more powerful). However C++ 20 does compile-time dynamic allocation†, and I don't think that's on the horizon for Rust.
† In C++ 20 you must free these allocations inside the same compile-time expression, but that's still a lot of power compared to not being allowed to allocate. It is definitely possible that a future C++ language will find a way to sort of "grandfather in" these allocations so that somehow they can survive to runtime rather than needing to free them.
Rust does give you the option to break out the big guns by writing "procedural" aka "proc" macros which are essentially Rust that is run inside your compiler. Obviously these are arbitrarily powerful, but far too dangerous - there's a (serious) proc macro to run Python from inside your Rust program and (joke, in that you shouldn't use it even though it would work) proc macro which will try out different syntax until it finds which of several options results in a valid program...
It seems like the normal and rational choice to not include logging to me.
Java included 'java.util.logging'. No one uses it because it's bad, everyone just uses slf4j as the generic facade instead, and then some underlying implementation.
Golang included the "log" package, which was also a disaster. No one used it so they introduced log/slog recently, and still no one uses that one, everyone is still using zap, logrus, etc.
Golang is also a disaster in that the stdlib has "log/slog" as the approved logging method, but e.g. 'http.Server.ErrorLog' is the old 'log.Logger' you're not supposed to use, and for backwards-compatibility reasons it's stuck that way. The Go stdlib is full of awkward warts because it maintains backwards compatibility, and it also included a ton of extra knobs that they got wrong.
The argument of structured vs unstructured logging, arguments around log levels (glog style numeric levels, trace/debug/info/warn/error/critical or some subset, etc), how to handle lazy logging or lazy computations in logging, etc etc, it's all enough that the stdlib can't easily make a choice that will make everyone happy.
Rust as a language has provided enough that the community can make an slf4j-like facade (conveniently, the clearly named 'log' crate seems to be the winner there), so there's no reason it has to be in the stdlib.
Python includes logging, and packages generally use it. It's far from perfect, but it's important to have at least a standard interface so that all components use it in concert. When you have a proliferation of logging libraries with no standard, other packages either don't do any logging at all, or do it in idiosyncratic ways that are hard (or impossible) to integrate.
> Golang included the "log" package, which was also a disaster. No one used it so they introduced log/slog recently, and still no one uses that one, everyone is still using zap, logrus, etc.
I honestly don't know how common it is to use zap, logrus and co on new projects anymore. I know I always go for slog.
> Golang is also a disaster in that the stdlib has "log/slog" as the approved logging method, but i.e. the 'http.Server.ErrorLog' is the old 'log.Logger' you're not supposed to use, and for backwards compatibility reasons it's stuck that way. The go stdlib is full of awkward warts because it maintains backwards compatibility, and also included a ton of extra knobs that they got wrong.
This isn't a practical problem at all. First of all, the stdlib shouldn't make backwards-incompatible changes just for the sake of logging. Second, slog.NewLogLogger()[0] exists as a way to get log.Loggers to be used with things that rely on the old log.Logger.
The std has stability promises, so it's prudent to not add things prematurely.
Go has the official "flag" package as part of the stdlib, and it's so absolutely terrible that everyone uses pflag, cobra, or urfave/cli instead.
Go's stdlib is a wonderful example of why you shouldn't add things willy-nilly to the stdlib since it's full of weird warts and things you simply shouldn't use.
> and it's so absolutely terrible that everyone uses pflag, ...
This is just social media speak for "inconvenient in some cases". I have used the flag package in a lot of applications. It gets the job done and I have had no problems with it.
> since it's full of weird warts and things you simply shouldn't use.
The only software that doesn't have problems is the software that hasn't been written yet. Is that the standard one should follow, then?
That's pulling on a dependency. An ugly case, but a very specific one relating to a tricky backwards-incompatible change that the Go team fudged on back in the pre-module times.
That story doesn't generally speak to the reality that most Go packages have rather small dependency graphs.
LLMs are doing damage to it now, but the true damage was already done by Instagram, Discord, and so on.
Creating open forums and public squares for discussion and healthy communities is fun and good for the internet, but it's not profitable.
Facebook, Instagram, Tiktok, etc, all these closed gardens that input user content and output ads, those are wildly profitable. Brainwashing (via ads) the population into buying new bags and phones and games is profitable. Creating communities is not.
Ads and modern social media killed the old internet.
Who's paying $30 to run an AI agent to run a single experiment that has a 20% chance of success?
On large code-bases like this, where a lot of context gets pulled in, agents start to cost a lot very quickly, and open source projects like this are usually quite short on money.
let me list some things I can do on android which I cannot do on iOS:
* Install real mobile Firefox, including installing Firefox addons I've built for myself. Firefox on iOS is a Safari skin
* Install web browser security updates without also updating my entire OS. On Android, Firefox is an app; on iOS, Safari is part of the OS and cannot be updated independently
* Install an open source app my friend built without paying $100/year or having to reload it every 7 days
* Build and install an app without owning a macbook or other macOS device, just using linux
* Filter notifications to my garmin smartwatch by-app
* Have reliably working file syncing (i.e. Syncthing for iOS) because background tasks are something you can do well on Android, and barely at all on iOS
* Have access to the source code to debug and fix problems
* Have the ability to flash my own custom kernel / rom (not all android devices, but many)
.... Really, not being able to write and install my own app without paying apple $100, and without owning a macbook is the big dealbreaker, followed by iOS restricting APIs needed to do all sorts of things like proper notification handling, proper NFC, etc etc.
It amazes me that so many people on the "hacker news" forum are okay with their primary computing device being wildly hostile to the hacker spirit, to the desire to tinker around for fun and learn and hack on things.
Yes, this is an attempt to nerd-snipe you into giving a marginally more informed opinion, while also shaming you for being too lazy to click a single link, but not too lazy to type an entire comment.
lol didn't mean to come off rude, I just skimmed it and missed it I guess - so the answer in this case is generally no: you cannot relicense AGPLv3 code without being an original copyright holder and getting sign-offs from all the other holders.
Also, AGPLv3 is generally considered a viral license, so accessing it over a network is considered a form of distribution (which is probably why they don't like it), but relicensing it is just a core "nope" type of thing.
(also dual licensing seems like you're relicensing effectively if the purchaser doesn't have to respect the gpl license, but not as clear to me)
If you look at the link they have for proof, the change was GPLv3 to a dual-license AGPLv3 + not-really-specified license you can privately arrange.
They have to respect the original GPLv3 license, which means that Core has to continue to publish all libpebble3 changes under a GPLv3 compatible license, and they do appear to be doing so, even if they also offer a separate license for sale.
I feel like rebble is phrasing this a little misleadingly too. The neutral phrasing here would be "Pebble forked our work, and per our GPL license is continuing to make all their changes available to all users for free. If you contribute to their repo, not ours, they now require a CLA, and for code they write you can also pay them for a different license (though it's always also available for free under the GPL)"
There may be something that's real here, but "forked our library and added a CLA" feels normal and expected, not worth hostile phrasing.
The AGPLv3 is for the new code Core writes going forward, I would assume.
Distributing a mix of AGPL and GPLv3 code is pretty reasonable to do, right, and I think basically all the user's rights under the GPLv3 are being fulfilled just fine.
I agree the commercial license could be dicey, but I assume in reality it's the usual AGPL thing where it's "If you pay, you don't have to comply with the network-services bit, but you now get the code under the GPLv3, so you have to make a network service and ensure your users _never_ get binaries containing this code".
Or, possibly even more realistically, they've put that there and if anyone says "We'll pay $3M for un-encumbered code" they'll rewrite the code from scratch to make it un-encumbered by the old GPL code, and until someone says a number big enough to cover the rewrite they'll never actually do anything.
> Copyright and Licensing
A forward-looking section applying to all new changes going forward I guess.
As long as they've preserved the old copyright notice somewhere, and it's given to users who request it, it doesn't really matter what the README says, does it?
I promise I'm not a shill for them. I do think what they're doing comes off as overall not great, but not as "willful GPL violation" (they're still sharing code), and not as egregiously malicious as the blog makes it sound, so the blog author has me a little unsympathetic with their own misleading (in my opinion) phrasing of this stuff.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
Can't they? They're given the option to take the upstream code under the AGPLv3, so if they take the code as AGPLv3, they can incorporate their changes since AGPL and GPL are compatible.
Well, yeah. I'd never really thought about it before, but Linus was really prescient in not adopting the or-later clause. This sort of thing would be destructive to Linux.
The issue becomes there's little to no way to tell the difference between the two.
Additionally, if human summaries aren't copyright infringement, you can train LLMs on things such as the Wikipedia summaries. In this situation, they're still able to output "mechanical" summaries - are those legal?
> The issue becomes there's little to no way to tell the difference between the two.
If you and I write the exact same sentence, but we can prove that we did not know each other or have inspiration from each other, we both get unique copyright over the same sentence.
It has never been possible to tell the copyright status of a work without knowing who made it and what they knew when they made it.
Also, the human produced summary is likely to have been produced by people who have read purchased books (i.e. legally distributed) whereas the algorithmic production of a summary has probably been fed dubious copies of books.
Also, there is a fair-use gray area. Unlike Wikipedia, ClosedAI is for-profit, making money from this stuff, and people using the generated text do it for profit.
Are you religious? If not, you should assume that your cognition is a product of your body, a magnificent machine.
I don't think LLMs are sapient, but your argument implies that creativity is something unique to humans and therefore no machine can ever have it. If the human body IS a machine, this is a contradiction.
Now, there's a very reasonable argument to be made concerning the purpose of copyright law, but "machines can't be creative" isn't it.
Creativity is not unique to humans, but legal rights to protect creativity are unique to humans (or human-represented organizations). Humans are always a special case in law.
Selling human livers and selling cow livers are never treated the same in terms of legality. Even the difference between your liver and that of a cow is much, much smaller than the difference between your brain and Stable Diffusion. I'm sure there isn't a single biochemical reaction that is unique to humans.
Converting a raw 8K video down to 1080p is full of creative solutions, and it's perfectly fine to see the machine as doing some form of creativity when optimizing performance and file size. In a similar way, compilers are marvelous and in many cases ingenious when building binaries from source code.
It is, however, generally agreed that those do not create independent original works. At the most generous interpretation they are defined as derivatives, and under the more common interpretation they are plain copies. The creativity is also usually attributed not to the machine but to those who built the machine. Even if the programmer is unable to predict all outcomes of creatively written code, an unpredictable outcome is still attributed to the programmer.
It was ruled that our copyright law does require that a human create the work, and that only a human can hold copyright. The monkey was not given copyright over the image it took.
Monkeys obviously can be creative. However, our law has decided that human creativity is protected by copyright, and human creativity is special within the law. I don't see any contradictions or arguments about sapience that are relevant here.