Really? My experience is that of C, C++, Go, Python, and Rust, Go BY FAR breaks code most often. (except the Python 2->3 change)
Sure, most of that is not the compiler or standard library, but dependencies. But I'm not talking random opensource library (I can't blame the core for that), but things like protobuf breaking EVERY TIME. Or x/net, x/crypto, or whatever.
But also yes, from random dependencies. It seems that language-culturally, Go authors are fine with breaking changes. Whereas I don't see that with people making Rust crates. And multiple times I've dug out C++ projects that I have not touched in 25 years, and they just work.
The stdlib has been very very stable since the first release - I still use some code from Go 1.0 days which has not evolved much.
The x/ packages are more unstable yes, that's why they're outside stdlib, though I haven't personally noticed any breakage and have never been bitten by this. What breakage did you see?
I think protobuf is notorious for breaking (but more from user changes). I don't use it I'm afraid so have no opinion on that, though it has gone through some major revisions so perhaps that's what you mean?
I don't tend to use much third party code apart from the standard library and some x libraries (most libraries are internal to the org), I'm sure if you do have a lot of external dependencies you might have a different experience.
Well, for C++ the backwards compatibility is even better. Unless you're using `gets()` or `auto_ptr`, old C++ code either just continues to compile perfectly, or was always broken.
Sure, the Go standard library is in some sense bigger, so it's nice of them to not break that. But short of a Python2->3 or Perl5->6 migration, isn't that just table stakes for a language?
The only good thing about Go is that its standard library has enough coverage to do a reasonable number of things. The only good thing. But any time you need to step outside of that, it starts a bit-rotting timer that ticks very quickly.
> though [protobuf] has gone through some major revisions so perhaps that's what you mean?
No, it seems it's broken way more often than that, requiring manual changes.
But any time you need to step outside of that, it starts a bit-rotting timer that ticks very quickly.
This is not my experience with my own or third party code. I can't remember any regressions I experienced caused by code changes to the large stdlib at all in the last decade, and perhaps one caused by changes to a third party library (sendgrid, who changed their API with breaking changes, not really a Go problem).
A 'bit-rotting timer' isn't very specific or convincing, do you have examples in mind?
I've been promoted several times for adding business value, complex or simple in implementation.
Saving every team from doing X manually, saving so and so many engineer hours per year. At cost Y per year instead in one team. The higher X and lower Y the better.
Backups did not exist. Now they do and work every time.
I wrote the top customer-appreciated feature, associated with an X increase in whatever metric.
"It was really hard" shouldn't come into it, or not as much by far.
It likely wouldn't be kinetic, but nukes didn't stop us from chipping away at the Soviet Union.
I could be wrong, but I don't buy the public story that this is about regime change. You don't topple a government with air superiority alone, and you don't do it in a matter of days. I also don't expect the US would be okay letting the Iranian people pick who comes next. We have a history of installing puppets and that similarly doesn't happen only via bombing runs.
> If Iran had deployable nukes, would they get invaded?
Honestly, maybe? Like if we had high confidence we knew where they were, and Israel consented to the attack, I could absolutely see the U.S. trying to take it out in storage.
If Iran had a nuke that could hit the U.S., I'd say no. But that's a stretch from "deployable nukes."
> Name a country that got bombed to credibly destroy the government, and had nukes
> if we had high confidence we knew where they were
That's a very big gamble. They only need to hide one on a cargo ship and the attacker is going to have a Very Bad Day.
Nobody's made that gamble yet. Yes, there's been kinetics between India and Pakistan, and Iran sending missiles at Israel, but not a credible threat to the state.
> Pedantically, Ukraine.
Not sure when you mean. Did they get bombed while they had physical control in the early 90s? They never had operational control, but now that they're being bombed they don't have even physical control of nukes.
I would say that I understand all the levels down to (but not including) what it means for an electron to repel another particle of negative charge.
But what is not possible is to understand all these levels at the same time. And that has many implications.
Humans have limits on working memory, and if I need to swap in L1 cache logic, then I can't think about TCP congestion windows, CWDM, multiple inheritance, and QoS at the same time. But I wonder what superpowers AI can bring: not because it's necessarily smarter, but because we can increase the working memory across abstraction layers.
> The first batches of Quake executables, quake.exe and vquake.exe were programmed on HP 712-60 running NeXT and cross-compiled with DJGPP running on a DEC Alpha server 2100A.
Is that accurate? I thought DJGPP only ran on and for PC-compatible x86. id had Alphas for things like running qbsp and light and vis (these took for-ever to run, so the Alpha SMP was really useful), but for building the actual DOS binaries, surely this was DJGPP on an x86 PC?
Was DJGPP able to run on Alpha for cross compilation? I'm skeptical, but I could be wrong.
I thought the same thing. There wouldn't be a huge advantage to cross-compiling in this instance since the target platform can happily run the compiler?
Running your builds on a much larger, higher performance server — using a real, decent, stable multi-user OS with proper networking — is a huge advantage.
Yes, but the gains may be lost in the logistics of shipping the build binary back to the PC for actual execution.
An incremental build of C (not C++) code is pretty fast, and was pretty fast back then too.
The q1source.zip this article links to is only 198k lines spread across 384 files. The largest file is 3391 lines. Though the linked q1source.zip is QW and WinQuake, so not exactly the DJGPP build. (To quote the README: "The original dos version of Quake should also be buildable from these sources, but we didn't bother trying".)
It's just not that big a codebase, even by 1990s standards. It was written by just a small team of amazing coders.
I mean correct me if you have actual data to prove me wrong, but my memory at the time is that build times were really not a problem. C is just really fast to build. Even back in, was it 1997, when the source code was found laying around on an ftp server or something: https://www.wired.com/1997/01/hackers-hack-crack-steal-quake...
"Shipping" wouldn't be a problem, they could just run it from a network drive. Their PCs were networked, they needed to test deathmatches after all ;)
And the compilation speed difference wouldn't be small. The HP workstations they were using were "entry level" systems with (at max spec) a 100MHz CPU. Their Alpha server had four CPUs running at probably 275MHz. I know which system I would choose for compiles.
> "Shipping" wouldn't be a problem, they could just run it from a network drive.
This is exactly the shipping I'm talking about. The gains would be so minuscule (because, again, an incremental compile was never actually slow even on the PC) and the network overhead adds up. Especially back then.
> just run it from a network drive.
It still needs to be transferred to run.
> I know which system I would choose for compiles.
All else equal, perhaps. But were you actually a developer in the 90s?
What's the problem? 1997? They were probably using a 10BaseT network, it's 10 Mbit...
Using Novell NetWare would allow you to transfer data at 1 MB/s... quake.exe is < 0.5 MB, so the transfer would take around 1 sec.
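That estimate holds up as simple arithmetic (the sizes and the ~1 MB/s effective throughput are the assumptions stated above, not measurements):

```shell
# Assumed: quake.exe under 0.5 MB, ~1 MB/s effective throughput over
# 10 Mbit Ethernet with Novell NetWare.
awk 'BEGIN { printf "%.1f s to copy quake.exe\n", 0.5 / 1.0 }'
# prints: 0.5 s to copy quake.exe
```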
Networking in that era was not a problem. I also don’t know why you’re so steadfast in claiming that builds on local PCs were anything but painfully slow.
It’s also not just a question of local builds for development — people wanted centralized build servers to produce canonical regular builds. Given the choice between a PC and large Sun, DEC, or SGI hardware, the only rational choice was the big iron.
To think that local builds were fast, and that networking was a problem, leads me to question either your memory, whether you were there, or if you simply had an extremely non-representative developer experience in the 90s.
You keep claiming it somehow incurred substantial overhead relative to the potential gains from building on a large server.
Networking was a solved problem by the mid 90s, and moving the game executable and assets across the wire would have taken ~45 seconds on 10BaseT, and ~4 seconds on 100Base-TX. Between Samba, NFS, and NetWare, supporting DOS clients was trivial.
Large, multi-CPU systems — with PCI, gigabytes of RAM, and fast SCSI disks (often in striped RAID-0 configurations) — were not marginally faster than a desktop PC. The difference was night and day.
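Those transfer estimates are just back-of-the-envelope division (the ~45 MB payload and the effective throughputs below are assumptions, not measurements):

```shell
# Assumed: ~45 MB of executable + pak files; ~1 MB/s effective on 10BaseT
# and ~11 MB/s effective on 100Base-TX (theoretical peaks: 1.25 / 12.5 MB/s).
awk 'BEGIN { mb = 45; printf "%d s on 10BaseT, %d s on 100Base-TX\n", mb/1.0, mb/11.0 }'
# prints: 45 s on 10BaseT, 4 s on 100Base-TX
```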
Did you actively work with big iron servers and ethernet deployments in the 90s? I ask because your recollection just does not remotely match my experience of that decade. My first job was deploying a campus-wide 10Base-T network and dual ISDN uplink in ~1993; by 1995 I was working as a software engineer at companies shipping for Solaris/IRIX/HP-UX/OpenServer/UnixWare/Digital UNIX/Windows NT/et al (and by the late 90s, Linux and FreeBSD).
That's exactly what you said, and it was incorrect:
> This is exactly the shipping I'm talking about. The gains would be so minuscule (because, again, an incremental compile was never actually slow even on the PC) and the network overhead adds up. Especially back then.
The network overhead was negligible. The gains were enormous.
> I mean correct me if you have actual data to prove me wrong, but my memory at the time is that build times were really not a problem.
I never had cause to build quake, but my Linux kernel builds took something like 3-4 hours on an i486. It was a bit better on the dual socket pentium I had at work, but it was still painfully slow.
I specifically remember setting up gcc cross toolchains to build Linux binaries on our big iron ultrasparc machines because the performance difference was so huge — more CPUs, much faster disks, and lots more RAM.
That gap disappeared pretty quickly as we headed into the 2000s, but in 1997 it was still very large.
I remember two huge speedups back in the day: `gcc -pipe` and `make -j`.
`gcc -pipe` worked best when you had gobs of RAM. Disk I/O was so slow, especially compared to DRAM, that the ability to bypass all those temp file steps was a god-send. So you'd always opt for the pipeline if you could fill memory.
`make -j` was the easiest parallel processing hack ever. As long as you had multiple CPUs or cores, `make -j` would fill them up and keep them all busy as much as possible. Now, you could place artificial limits such as `-j4` or `-j8` if you wanted to hold back some resources or keep interactivity. But the parallelism was another god-send when you had a big compile job.
It was often a standard but informal benchmark to see how fast your system could rebuild a Linux kernel, or a distro of XFree86.