It just isn't. There's nothing stopping other languages from being that easy but very few even try.
Go sees itself more as a total dev environment than just a language. There's integrated build tooling, package management, toolchain management, mono repo tools, testing, fuzzing, coverage, documentation, formatting, code analysis tools, performance tools...everything integrated in a single binary and it doesn't feel bloated at all.
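To make that concrete, here's a minimal sketch (package, file, and function names are made up) of how testing, fuzzing, and coverage all hang off that one binary:

```go
// add_test.go - an illustrative test-only package. With nothing extra installed:
//   go test                 runs TestAdd
//   go test -cover          adds a coverage summary
//   go test -fuzz=FuzzAdd   drives the fuzz target below (Go 1.18+)
package adder

import "testing"

func add(a, b int) int { return a + b }

func TestAdd(t *testing.T) {
	if got := add(2, 3); got != 5 {
		t.Fatalf("add(2, 3) = %d, want 5", got)
	}
}

func FuzzAdd(f *testing.F) {
	f.Add(1, 2) // seed corpus entry
	f.Fuzz(func(t *testing.T, a, b int) {
		if add(a, b) != add(b, a) {
			t.Errorf("add(%d, %d) is not commutative", a, b)
		}
	})
}
```

Formatting (`go fmt`), vetting (`go vet`), and docs (`go doc`) come out of the same binary too, which is a big part of why it doesn't feel bloated.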
You can see that a lot of modern runtimes and languages have learned from Go. Deno, Bun, and even Rust took a lot of their cues from Go. It's understood now that you need a lot more than just a compiler/runtime to be useful. In languages that lack that kind of tooling, the community is trying to make them more Go-like - `uv` for Python, for example.
Getting started in a Go project is ridiculously easy because of that integrated approach.
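Roughly, the whole thing can look like this (module path and message are made up):

```go
// hello/main.go - the entire project. After a one-time `go mod init example.com/hello`,
// `go run .` builds and runs it and `go build` drops a single self-contained binary
// in the folder - no build files to write.
package main

import "fmt"

func main() {
	fmt.Println("hello from a one-folder Go project")
}
```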
I personally like Go and appreciate its simplicity and tooling and everything, but the example given is "making a folder" and "putting a ... main() func" in it. But, like, this is exactly as easy with every single other language that I can think of.
The second part "Running go install at the root ./.." is actually terrible and risky but, still, trivial with make (a - literally - 50 year old program) or shell or just whatever.
I get that the feelz are nice and all (just go $subcmd) but... come on.
> "making a folder" and "putting a ... main() func" in it
You can't do that with python, for instance. First, you need a python interpreter on the target machine, and on top of that you need the correct version of the interpreter. If yours is too old or too new, things might break. And then you need to install all the dependencies - the correct version of each, as well. And they might not exist on your system, or might conflict with some other lib you have on your target machine.
Same problem with any other language that needs a runtime on the target machine - Java and C# included, obviously.
C/C++ dependency management is a nightmare too.
Rust is slightly better, but there was no production-ready Rust 16 years ago (or even 10 years ago).
You also need a version of the Go compiler, possibly one new enough to handle some //go:magic:comments.
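`//go:embed` is one concrete example (the ./assets directory here is hypothetical): the directive and its `embed` package arrived in Go 1.16, so an older toolchain can't build a file like this:

```go
// main.go - assumes an ./assets directory with at least one file sits next to it.
package main

import (
	"embed"
	"fmt"
)

//go:embed assets
var assets embed.FS // the directory tree is compiled into the binary

func main() {
	entries, err := assets.ReadDir("assets")
	if err != nil {
		panic(err)
	}
	fmt.Println(len(entries), "entries embedded at build time")
}
```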
I agree that static linking is great and that python sucks, but I was trying to say I can, very easily, mkdir new-py-program, stick an app.py with a __main__ in it, and do the same with new-perl-program/app.pl or my-new-c-file/main.c, etc.
For two of the three above I can even make easy, single-executable files, Go-style.
Nowadays, with uv (and probably some other tools too), it's pretty easy to ship a python program to a machine that doesn't even have python on it, so it's pretty much a solved problem today (in most cases). But 5 or 10 years ago it was a real hassle that Go solved elegantly. Yes, you can make python executables, but they are like 100 MB even for a simple hello world. It's a last-resort solution.
I don't understand your comment on magic comments. You don't need them to cross-compile a program. I was already doing that routinely 10 years ago. All I needed was a `GOOS=linux GOARCH=386 go build myprog && scp myprog myserver:`
> But, like, this is exactly as easy with every single other language that I can think of.
I mean, not exactly. Rust (or rather Cargo) requires you to declare binaries in your Cargo.toml, for example. It also, AIUI, requires a specific source layout - binaries need to be named `main.rs` or be in `src/bin`. It's a lot more ceremony and it has actively annoyed me whenever I tried out Rust.
> The second part "Running go install at the root ./.." is actually terrible and risky but, still, trivial with make (a - literally - 50 year old program) or shell or just whatever.
Again, no, it is not trivial. Using make requires you to write a Makefile. Using shell requires you to write a shell script.
I'm not saying any of this is prohibitive - or even that they should convince anyone to use Go - but it is just not true to say that other languages make this just as easy as Go.
> declare binaries in your Cargo.toml, for example. It also, AIUI, requires a specific source layout - binaries need to be named `main.rs` or be in `src/bin`.
It does not. Those are the defaults. You can configure something else if you wish. Most people just don’t bother, because there’s not really any advantage to breaking with the default locations 99% of the time.
I think one other major part is that, compared to e.g. make, the build process is more or less the same for all Go projects. There is some variation, and some (newcomers, I want to think) still like to wrap go commands in a Makefile, but it's still generally very easy to understand and very uniform across different Go projects.
The distinction, I believe, is between "possible" and "easy". Go makes a lot of very specific things easy via some of its design choices in the language and tooling.
As a counterexample, it seems like e.g. C++ is mostly concerned with making things possible and rarely with making them easy.
It’s not true of C/C++, which need changes to the build system. The same goes for Rust workspaces (which is how I’d recommend structuring monorepos), although there it is generally easy - you just need to add a small Cargo.toml file. Or you can skip the workspace, but then you still need to declare the binary, if I recall correctly.
It's true for any C/C++ project which bothers to write 10 lines of GNU `make` code.
The problem is that many projects still pander to inferior 1980s-era `make` implementations, and as such rely heavily on the abominations that are autotools and cmake.
Autotools is far too pessimistic, but there are still considerable differences between compilation environments. Tooling like CMake will always be necessary unless you are only targeting Unix and your dependency graph isn't terribly deep.
Even in that specific niche I find using a programmatically generated ninja file to be a far superior experience to GNU make.
This is the theory, yes. And then reality comes bursting through the door and you are confronted with the awfulness that is the C/C++ toolchain. Starting with multi-OS, multi-architecture builds, and then plunging into the cesspit that is managing third-party dependencies.
Let’s be real: C/C++ has nothing even approaching a sane way to do builds. It just ranges, by degrees, from slightly annoying to full-on dumpster fire.
The most universal thing in C/C++ is vendoring, IMHO.
If you are distributing source, you distribute everything. Then the build only needs a compiler and a libc. That vendored package is tested, and it works on your platform, so there's no guesswork.
That might work for small, self-contained dependencies in projects which aren't themselves libraries. But good luck vendoring your deps when one of those deps is something really large like Qt or WebKit, or if you're building a library that other applications need to be able to link against.
Yes, at Google, roughly 10 years ago - great experience. Recently with my own small projects, and on Windows (where it lacks the same charm... yet - but eventually!)
But yes, I worked there - mainly Java (back then) with GWT, some Python, Sawzall, R, and some other internal langs.