Banks remain with COBOL because it's unsexy and stable. And then they say... let's just YOLO some vibe code into the next release sight unseen! Logic checks out.
They stick with COBOL because it runs well on the mainframe. The mainframe and sysplex architecture gives them an absurd level of stability and virtualization that I don't think the rest of the market has nearly caught up to yet. Plus having a powerful and rugged centralized controller for all of this is very useful in the banking business model.
This is the reason. IBM Mainframe business grew 60%. The modern mainframe is the best state of the art platform for computing, in both reliability and efficiency.
Also, IBM mainframes are wonderfully isolated from physical hardware. They could change processors in the next model, and users would notice a small delay as binaries were recompiled on-the-fly the first time it was used.
They surely could extract more performance from the hardware by shedding layers, but prioritized stability and compatibility.
> Also, IBM mainframes are wonderfully isolated from physical hardware. They could change processors in the next model, and users would notice a small delay as binaries were recompiled on-the-fly the first time it was used.
This was with AS/400's move from their own CISC processors to POWER; it worked because AS/400 programs are stored as intermediate code above the hardware ISA. While you could pull that off with mainframes, it'd mean recompiling actual native binary code. IBM mainframe architecture is very well defined and documented (sadly, unlike AS/400's).
At this point in time the AS/400's successor can have more cores and more memory than a Z, and likely higher performance in benchmarks, but the architecture is closer to a minicomputer than a mainframe.
If you have an app where the user must give their address when subscribing, what are the tests that can be done?
User doesn't exist, invalid character in a field, user already exists, wrong street name for the zip, wrong state for the zip, wrong house number on the street, age below threshold, age above threshold,...
Each of these examples must be run manually at least once to prove that the logic is correct, and the tester must keep a report of it.
But for each of these basic tests, the data must be in a specific state (especially for the "name already exists" case), so between tests you usually have a data preparation phase.
When you have a lot of these tests, because they span decades of logic, it takes time, especially when dealing with investments or insurance.
And usually for these tests you hire specific people whose focus is correctness, not speed.
Now imagine what happens when you're at step 89 of your test and it fails.
The dev fixes the code, fixes the automated tests... and the tester restarts from step 1.
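For contrast, here's a rough sketch of what a handful of those cases look like when automated as parameterized checks. Everything in it (the validator, the field names, the rules) is hypothetical; the real logic and test data live in the decades-old system:

    # Rough sketch of a subset of the address cases as parameterized checks.
    # The validator, field names, and rules below are made up for illustration.
    import re
    import pytest

    # Minimal stand-in reference data so the example is self-contained; a real
    # system would consult zip -> state/street tables and customer records.
    ZIP_DB = {"10001": {"state": "NY", "streets": {"W 34th St"}}}

    def validate_address(street, house_no, zip_code, state, age):
        if not re.fullmatch(r"[A-Za-z0-9 .'-]+", street):
            return "invalid character"
        entry = ZIP_DB.get(zip_code)
        if entry is None or state != entry["state"]:
            return "wrong state for zip"
        if street not in entry["streets"]:
            return "wrong street for zip"
        if not (18 <= age <= 120):
            return "age out of range"
        return "ok"

    # In a real suite each row would also need its own data-preparation step
    # (e.g. pre-creating the "user already exists" record) before it can run.
    @pytest.mark.parametrize(
        "street, house_no, zip_code, state, age, expected",
        [
            ("W 34th St",  "350", "10001", "NY",  35, "ok"),
            ("W 34th S\t", "350", "10001", "NY",  35, "invalid character"),
            ("Main St",    "350", "10001", "NY",  35, "wrong street for zip"),
            ("W 34th St",  "350", "10001", "CA",  35, "wrong state for zip"),
            ("W 34th St",  "350", "10001", "NY",  17, "age out of range"),
            ("W 34th St",  "350", "10001", "NY", 130, "age out of range"),
        ],
    )
    def test_address_validation(street, house_no, zip_code, state, age, expected):
        assert validate_address(street, house_no, zip_code, state, age) == expected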
...Sir/madam, we have test environments, and comprehensive banks of (over time, hopefully idempotent) test data. When your schtick is software verification, one of the first things you develop a damn-near fetishization for is test data management infra, and a place to store it. It's... a weird fixation, admittedly. But getting data to land in just the right place at the right time so your test case can run in a maximally parallelized fashion... *chef's kiss*... is perfection.
You couldn't score any higher on the risk factors. The training corpus for COBOL can't be all that large so the models won't understand it that well. Humans are largely out of the loop and the tooling guardrails are insufficient. Causing a billion dollar disaster with the help of a "shotgun surgeon"? Priceless.
Banks are slowly moving away from their old COBOL systems. It's about cost as much as it's about catching up with the neo-bank competition.
The main thing that makes this difficult is that in most cases the new system is supposed to be more capable. Transactional batch processing systems are replaced with event-based distributed systems. Much more difficult to get right.
I don't think learning how to write COBOL was ever a problem. Knowing that spaghetti codebase and how small changes in one place cause calamity all over the place is. Those 4 people's job is to avoid outages, not to write tons of code, or fix tons of bugs.
Honestly, there are companies that have lost the source code for some of their applications. Or they depend on components from vendors that have long ceased to exist. I remember there being a lot of consternation around being able to compile and link against binary components that have just been around forever and could never be recompiled themselves. More people "learning COBOL" was never going to be a solution to that ball and chain. And yeah, LLMs are good in the reverse engineering space too, so maybe we'll finally see movement on that in the next decade.
You're probably right, no disagreement there. But in the context of my previous comment, for the people who write COBOL today, I don't think there is a lot of work reverse engineering native code back to COBOL because the source is lost. You make a really good point, though: if AI can assist with lost code recovery, perhaps it will also assist them in migrating away from it, or in getting rid of the workarounds and complexity implemented to accommodate that previously opaque binary's behavior.
I would say, more significantly, 4 million people can read it. The changes required for any given quarter are probably minuscule, but the tricky part is getting up to speed on all those legacy patterns and architectural decisions.
A model being able to ingest the whole codebase (maybe even its VCS history!) and take you through it is almost certainly the most valuable part of all.
Not to mention the inevitable "now one-shot port that bad boy to rust" discussion.
In my experience, learning COBOL takes you a week at most, learning the "COBOLIC" (ha ha) way of your particular source base will take you a couple of months, but mastering it all including architecture will take you a year, half a year if you're really good.
One year from zero to senior doesn't sound that hard, does it? Try that with a Java codebase.
The trick is that the USA steps up the cost basis of an asset when you pass away. So if you use cheap loans your whole life, you can defer capital gains tax until it goes away.
Instead of two certainties in life being death and taxes, it's now death or taxes.
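To make the step-up concrete, here is a toy illustration with made-up numbers and a hypothetical 20% capital gains rate, ignoring estate tax, loan interest, and everything else:

    # Made-up numbers illustrating the "buy, borrow, die" / step-up-in-basis idea.
    cost_basis = 1_000_000        # what the founder originally paid for the shares
    market_value = 10_000_000     # value of the shares at death
    cap_gains_rate = 0.20         # hypothetical long-term capital gains rate

    # Sell while alive: the whole unrealized gain gets taxed.
    tax_if_sold_alive = (market_value - cost_basis) * cap_gains_rate   # $1.8M

    # Borrow against the shares and hold until death: heirs inherit at a
    # stepped-up basis equal to market value, so the embedded gain is never taxed.
    heirs_basis = market_value
    tax_after_step_up = (market_value - heirs_basis) * cap_gains_rate  # $0

    print(tax_if_sold_alive, tax_after_step_up)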
If you're bootstrapped and borrow a bunch of money to pay tax because your company got to a $10M valuation, but then the market shifts and it goes back down to $0 in later years, do you get the money back?
Even if you do, it sounds weird taxing someone for the right to create something, especially when they're still in the middle of creating it.
But honestly at this point I’m destined to buy a Steam Machine despite having a hefty Mac that could do gaming if only it were possible. Valve have been amazing about open computing and Apple are basically the enemy at this point.
It makes me wonder what using a Steam Machine for all my computing might look like, as the new home of open computing and gaming.
Had some very weird behaviour from CloudFront used purely to serve images from S3. Mostly huge slowdowns and outright failures on endpoints. It was about 15 hours ago that I noticed it by chance.
There was nothing on the AWS status pages and no alerts/errors in my console. Eventually it sped up again.
What Apple really needs to do is mimic their old policy of no fees except for games. Let everyone develop for it, and then rug pull by making the fees apply to everything.
But they can't do it twice. So the Vision Pro ends up with no ecosystem.