I'm still sitting here shocked that a language where "err" (and the keywords used to check it) appears an order of magnitude more frequently than any other token has achieved this much popularity: https://anvaka.github.io/common-words/#?lang=go
That's what kept me from looking at the language for the last five years. I finally broke down and wrote my first Go program. Yeah, the errors (and lack of generics) are annoying, but it gets enough else right – tooling, runtime speed, compilation speed, type inference, parametric polymorphism (on interface types) – that I enjoyed it anyway. Every language has some annoyances; it's nice when there are only a few.
Not only that, but you can simply ignore errors during prototyping, and with a little editor magic you can go back and generate at least 85% of the "if err != nil { ... }" statements.
As much as people complain about it, explicitly checking errors like this (and checking them all) is almost essential for production quality enterprise code and any code running on mission critical systems (and to most project managers, all systems are mission critical).
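For anyone who hasn't written Go, this is the idiom in question: check every fallible call, wrap the error with context, and bubble it up. A minimal sketch (loadConfig and the file name are hypothetical):

    package main

    import (
        "fmt"
        "os"
    )

    // loadConfig is a hypothetical example of the idiom: every
    // fallible call is checked explicitly, and errors are wrapped
    // with context and returned rather than silently dropped.
    func loadConfig(path string) ([]byte, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return nil, fmt.Errorf("reading config %q: %w", path, err)
        }
        return data, nil
    }

    func main() {
        cfg, err := loadConfig("app.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("loaded %d bytes\n", len(cfg))
    }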
And FWIW I still prefer Go's style of error checking to languages that rely on massive try/catch blocks.
I think the "embrace failure" supervisor model in BEAM is a superior design for mission-critical systems (such as cell networks, where it originated) while resulting in literally a ton less boilerplate. You just code the "happy path" and done. Any errors are logged and the process instantly restarted by the supervisor, you can follow up on them if they present a problem.
If you were to code just this "happy path" in Go, it would actually be the unhappy path, because you'd have silent errors resulting in indeterminate state (read: horrible bugs to troubleshoot). If you believe programming is mostly about managing all possible state (as I do), then Go's strategy is way too explicit. In Erlang/Elixir/BEAM's case, as soon as state goes off the rails (read: encounters something the programmer didn't anticipate), it's logged and restarted by its supervisor process almost instantly. (and there are of course strategies for repeated restarts)
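To make the contrast concrete, here is a rough sketch of a supervisor-style restart loop in Go using panic/recover. The supervise helper and the worker are hypothetical, and goroutines share memory, so this only approximates the isolation real BEAM processes get:

    package main

    import (
        "log"
        "time"
    )

    // supervise is a loose analogue of a BEAM supervisor: run the
    // worker, log any panic, and restart it. It is a sketch of the
    // idea, not a substitute for BEAM's per-process isolation.
    func supervise(name string, worker func()) {
        go func() {
            for {
                func() {
                    defer func() {
                        if r := recover(); r != nil {
                            log.Printf("%s crashed: %v; restarting", name, r)
                        }
                    }()
                    worker() // the "happy path": no error checks inside
                }()
                time.Sleep(time.Second) // naive fixed delay between restarts
            }
        }()
    }

    func main() {
        supervise("flaky", func() {
            panic("something the programmer didn't anticipate")
        })
        time.Sleep(3 * time.Second)
    }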
People keep saying this and I struggle to understand how this represents good design. In any language I can always just eat the error and keep going, or eat the error and restart.
That doesn't fix the error, and it doesn't imply the program will work correctly.
> In any language I can always just eat the error and keep going, or eat the error and restart.
I'll defer to pmarrek on what they meant, but to me it's an issue of practical programming.
In a choice between "I will tell you what to do about errors" vs. "I will assume you only crash and restart on any error", I've found the latter to be far more efficient.
The former generally leads to a rat's nest of never-ending error specialization as unexpected or rare stuff bubbles up in UAT or, down the road, in prod.
Which isn't to say there's a right answer. There are always going to be particular situations where of course you should use one or the other.
But on the whole, as a philosophical default, fail-and-recycle-on-all-errors is a helluva lot easier to spec, code, test, and maintain for me.
I think you need to try it out to see what the big deal is. It's pretty liberating.
The thing is, there will always be the "unknown unknown" bugs, the ones you didn't even think to anticipate, and Erlang/Elixir/BEAM will always beat Go on how it behaves when those strike, because it is built to handle any possible error, not just the possibilities you're aware of.
so the argument is that there are (broadly) two classes of errors: ones that happen all the time and ones that happen hardly ever
the erlang strategy is to make it possible to continue from a known good state when the latter happens, since chances are it won't happen again. the errors that happen all the time you will encounter early/often enough that you can work out how to handle/fix them
the result of this strategy is you pretty much never write code that handles the unexpected. it turns out that's a lot of code to not have to write
Those supervisors don't always handle the errors the way you want and can't always be implemented without diminishing overall performance. It's a trade-off.
It's convenient to write this sort of optimistic code that Erlang encourages, but the error messages are often not very helpful. You'll often end up with a pattern matching failure and stacktrace that isn't very clear compared to a hand-written error message that you would get in Go.
Have you used erlang? Pattern matching is a big part of it, and pattern match failures always result in errors. So, if you have some function foo() that always returns ok or {error, Reason}, then in the "follow the happy path" style, you'd just have a line reading "ok = foo(),", and then if foo actually returns something else, you get a "crash". There's no assuming anything there, it's just the way the language works.
Yes I have. But I see the erlang blindness still runs strong. Just because you have pattern matching doesn't mean you don't have bugs caused by humans. Code can be syntactically correct and still not logically match the problem domain.
For example, what if you screw up and forget to encode that it SHOULDN'T return ok?
You know that many other frameworks also have these safeguards, right? Java server frameworks have been doing it forever. You can code dirty if you want and deal with no issues, or you can be error prone. Hell, you can do the same thing in Go. You can catch panics. It's not pretty, but you can definitely do it.
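For what it's worth, catching a panic in Go looks something like this minimal sketch (safely is a hypothetical helper); it works, but as noted, it isn't pretty:

    package main

    import "fmt"

    // safely converts any panic in f into an ordinary error.
    // Idiomatic Go reserves panics for programmer errors, which is
    // why this style is frowned upon.
    func safely(f func()) (err error) {
        defer func() {
            if r := recover(); r != nil {
                err = fmt.Errorf("recovered: %v", r)
            }
        }()
        f()
        return nil
    }

    func main() {
        err := safely(func() { panic("boom") })
        fmt.Println(err) // prints: recovered: boom
    }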
I forgot one can only point out strengths of erlang not problems with humans coding in general. all hail erlang.
Logical errors are always an issue, regardless of the language.
As for Erlang's problems, it does have a fair number of warts and special cases, but the biggest of them all is that it's still a niche language. We actually had to dump Erlang for new projects in favor of Go. That may sound ridiculous, but filling positions involving Erlang was next to impossible (even with candidates without any prior knowledge of the language).
Why wouldn't it, if you're pattern-matching correctly? And in any event, it seems far easier to get into that situation in Go (which ignores errors unless you explicitly check for them) than in Erlang (which crashes on every error because restarting is cheap).
My point is... "Pattern matching correctly"... assumes no mistakes on the coder's part. Go back and re-read my comment. Or don't. Downvote me because I dare question your skill in the context of the mighty erlang.
I would never downvote someone simply for disagreeing, and certainly not unless I was 100% sure they were wrong or it was way offtopic. (And you would be surprised how often I admit I was wrong or cede a point!)
You're basically saying "well, it's still subject to logic errors." That's a non-argument, because you can say that about 100% of languages in existence. It is not a criticism you can level just against BEAM langs. And we were never arguing that BEAM magically takes care of your logic bugs, just that it seems to handle the "set of all bugs that are not logic bugs" better.
I also apologize for erlang/elixir fans being rabid and insufferable. ;) A lot of us have dealt with many other langs (I spent years on Ruby and other OO langs) and we're enjoying the unexpected (to us) benefits of this new/old world, that's all. Joe Armstrong is basically Einstein AFAIC...
You can wrap all your code in a catch-all for unchecked exceptions and never again have to worry about errors. It's bad practice, obviously, but there are many situations in which you don't care what error occurred, only that an error occurred.
Is not what software engineers should spend time on, definitely.
Not automating ubiquitous trivial propagation with at least an explicit "rie <expr>" (rie for "return if error") is a complete engineering failure under any philosophy.
Oh, I have an idea: if the ", err" part is missing from the lvalue, insert that mantra automagically under #pragma ARIE=on. This workaround would give the designers some stats to embrace.
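"rie" and the pragma are hypothetical syntax, but for a sense of what automated propagation could look like, here is one known workaround in modern Go (generics plus panic/recover); a sketch, and decidedly non-idiomatic:

    package main

    import (
        "fmt"
        "os"
        "strconv"
    )

    // check plays the role of the hypothetical "rie <expr>": it
    // turns an error into a panic, which a deferred recover at the
    // function boundary converts back into a returned error.
    func check[T any](v T, err error) T {
        if err != nil {
            panic(err)
        }
        return v
    }

    func parseBoth(a, b string) (sum int, err error) {
        defer func() {
            if r := recover(); r != nil {
                if e, ok := r.(error); ok {
                    err = e // propagate as if by "rie"
                    return
                }
                panic(r) // not an error value: re-panic
            }
        }()
        x := check(strconv.Atoi(a))
        y := check(strconv.Atoi(b))
        return x + y, nil
    }

    func main() {
        if _, err := parseBoth("1", "oops"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }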
I consider that explicitly and manually handling every possible error, and making a conscious decision to bubble it up (return) or to interpret it is a very good use of my time as a programmer.
> Is not what software engineers should spend time on, definitely.
But hey! Your editor can easily boilerplate all that... boilerplate!
I think Go will continue to grow until we finally have metrics that say that coding something in Go might get you fast-running code but will be slower to code and hell to maintain.
If it weren't for goroutines I would not find Go very useful. I expect C++ will someday implement coroutines and/or fibers that are very well designed and more carefully and broadly thought out.
No, it's way worse, because you have to write the error handling multiple times even if it is all the same (and it usually is).
For example, in Java, I can make as many method calls as I want inside of a try block and catch an exception from any one of them in the catch block. In Go, I would need the equivalent of one catch block per method call, in the form of "if err != nil { return err }". Why repeat yourself?
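Concretely, the Go side of that comparison looks something like this (stepOne through stepThree are hypothetical stand-ins for the method calls): each call needs its own check even though the handling is identical.

    package main

    import (
        "errors"
        "fmt"
    )

    func stepOne() error   { return nil }
    func stepTwo() error   { return errors.New("step two failed") }
    func stepThree() error { return nil }

    // doAll repeats the same three-line check after every call;
    // in Java a single try/catch would cover all three.
    func doAll() error {
        if err := stepOne(); err != nil {
            return err
        }
        if err := stepTwo(); err != nil {
            return err
        }
        if err := stepThree(); err != nil {
            return err
        }
        return nil
    }

    func main() {
        fmt.Println(doAll())
    }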
If you want to ignore an error in Go, it's easy to do, even accidentally: just omit the "if err != nil" check. Both exception-handling models enable this kind of behavior. If you are going to do it, you might as well not repeat yourself while you are doing it.
You can, but you should not be forced to do so. I could easily do that in Java if I wanted to, but I also have the flexibility to handle them all in the same way, or to pass them back to the caller (which is the right thing to do 99% of the time), with a very small amount of code.
To be honest, moaning about Go idioms is like moaning that Java is object oriented or that Lisp has too many parentheses. If you don't like the idioms of a particular language then don't use that language. There are plenty of others to choose from. But moaning that language x should be more like language y is just daft, as it misses the point of why language x decided to do something different in the first place.
I'm not saying Go's error handling is better than Java's. But I've written some relatively large and complicated projects in Go (like the Linux $SHELL and REPL scripting language I'm currently coding this very moment) and error handling really is the least of the problems I've been having. Sure, Go's idioms do get in my way from time to time. But on balance Go's idioms save me more time than it wastes. Which is why I keep coming back to Go despite having used more than a dozen other languages in anger. But that's my workflow and my personal preference. Others will find the reverse to be true and I wouldn't be complaining that their language should be more like Go.
Frankly I think people waste too much time comparing languages. Just try something and if it doesn't work, move on. Don't assume your personal preference is a benchmark for how all languages should be designed as there will be a million other developers whose personal preference will exactly oppose you.
I remember that in Python, different classes/modules would have different fields on their error objects, so it was really difficult to do anything meaningful with those errors.
There is also the switch/case you need to write when catching an error (a TCP connection failure, then a DNS resolution failure, and so on); you have to know every possible kind of sub-error each time.
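For contrast, modern Go (1.13+) handles that kind of sub-error discrimination with errors.As, which walks the wrapped error chain so you don't have to know every concrete type up front. A minimal sketch (the dial target is a deliberately bogus host):

    package main

    import (
        "errors"
        "fmt"
        "net"
    )

    func main() {
        _, err := net.Dial("tcp", "no-such-host.invalid:80")
        if err == nil {
            return
        }

        // Pick out the one sub-error we care about (DNS failure)
        // and fall through to generic handling for everything else.
        var dnsErr *net.DNSError
        if errors.As(err, &dnsErr) {
            fmt.Println("DNS resolution failed for:", dnsErr.Name)
            return
        }
        fmt.Println("connection error:", err)
    }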
Frankly, I've never seen that (try / catch / ignore exception) in any commercial codebase I've worked on, running into tens of millions of lines.
The worst I've seen is people logging the error message but not the exception itself (so losing the stack trace), or causing another exception to be thrown (and losing the stack trace or cause of the original).
I don't understand this comment - Python does what I'd argue to be the "right" thing, which is that operations which can fail raise exceptions that unwind the stack. If you don't care to handle failure in a particularly granular or graceful way (e.g., you're a command-line process, or you're a server process with lots of independent requests), you can have one large catch statement around all of your work, or even just use the implicit one around the entire program provided by the runtime.
Meanwhile, well-written C code does this the "wrong" way; every external call has to be done like
    if (read(fd, buf, len) < 0) {
        /* hopefully remember to free some things */
        return NULL;
    }
Why do you think explicit error handling is a Pythonic practice?