I was surprised that TFS was not mentioned in the story (at least not as far as I have read).
It should have existed around the same time and other parts of MS were using it. I think it was released around 2005 but MS probably had it internally earlier.
SLM (aka "slime", a shared-file-system source code control system) was used in most of MS, i.e. both the systems and apps sides.
The NT org created (well, not NT itself; IIRC some MS-internal developer tools group was in charge) and moved to Source Depot, since a shared file system doesn't scale well to thousands of users, especially if some file gets locked and you DoS the whole division.
Source Depot became the SCCS of choice (outside of Dev Division).
Then git took over, and MS had to scale git to the size of the NT repo and upstream many of the changes to git mainline.
TFS was used heavily by DevDiv, but as far as I know they never got perf to the point where Windows folk were satisfied with it on their monorepo.
It wasn't too bad for a centralized source control system tbh. Felt a lot like SVN reimagined through the prism of Microsoft's infamous NIH syndrome. I'm honestly not sure why anyone would use it over SVN unless you wanted its deep integration with Visual Studio.
After the initial TFS 1.0 hiccups, merging was way, way better than in SVN; SVN didn't track anything about merges until 1.5. Even today, git's handling of file renames has nothing on TFS.
I don't think the example was perfect, which explains your wife's reaction.
Think of it more like this: the first delivery person left their car outside, wrote 123 on it, and walked back.
The next one sees the car with 123 written on it and won't even ring the doorbell or leave a package. So you haven't gotten the package twice; it has not been delivered twice.
Sure, you can complain that there's a car outside your home, but in a digital system you won't even see it. It would also cost the delivery firm a car for every package, but that is not your problem, and again, in the digital world the cost is a lot less than a car.
There is an argument that the street would fill up with delivery vans and there would be no more room for new deliveries to you or your neighbors, but that is a limitation worth discussing. You probably can't handle an infinite number of packages delivered at the same time either, and you won't wait an infinite amount of time for any specific package.
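In code, the "number on the car" trick is just deduplication by a delivery ID. A rough sketch (the names here are made up for illustration, not from any real library):

```typescript
// Deliveries carry an ID; a retried delivery with an already-seen ID is
// acknowledged but not applied again.
type Delivery = { id: string; payload: string };

const seen = new Set<string>(); // the "cars with 123 written on them"

function handleDelivery(d: Delivery): void {
  if (seen.has(d.id)) {
    // The marker is already there: don't ring the doorbell again.
    console.log(`duplicate delivery ${d.id}, ignoring`);
    return;
  }
  seen.add(d.id);
  console.log(`delivered ${d.id}: ${d.payload}`);
}

handleDelivery({ id: "123", payload: "package" }); // delivered once
handleDelivery({ id: "123", payload: "package" }); // retry: seen, not delivered twice
```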
I don't see any reason why it shouldn't scale. It also allows you to use any backend language. If you write messy code in PHP and think it is the language's fault, then you can use something else: Rust, Go, Java, C#, Elixir. Everything should work. Just don't write messy code.
Do you really want the thousands of bugs that would occur if we shipped a real binary to each device?
And all the security problems?
Sorry, FooApp 1.1 does not support your iPhone 12. It also crashes on the Samsung Galaxy S21 and lower and has a huge security problem on Windows 10 when using an Intel Core i5.
There was/is a state management lib called MobX that was released years ago and was usually combined with React. It was kind of reactive, very easy to use, and fast. It normally only updated the components that were affected by a state change.
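For anyone who hasn't used it, this is roughly what that fine-grained reactivity looks like (a minimal sketch assuming the MobX 6 API; the store and reactions here are my own illustration):

```typescript
// Each reaction re-runs only when an observable it actually read changes,
// which is what lets MobX update only the affected React components.
import { makeAutoObservable, autorun } from "mobx";

class TodoStore {
  todos: string[] = [];
  filter = "all";

  constructor() {
    // Marks fields as observable and methods as actions.
    makeAutoObservable(this);
  }

  add(todo: string) {
    this.todos.push(todo);
  }

  setFilter(filter: string) {
    this.filter = filter;
  }
}

const store = new TodoStore();

// This reaction only reads `todos`...
autorun(() => console.log("todo count:", store.todos.length));

// ...and this one only reads `filter`.
autorun(() => console.log("filter:", store.filter));

store.add("write example"); // re-runs only the first reaction
store.setFilter("done");    // re-runs only the second
```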
It was a bit sad that it didn't get more popular and see further improvement. Some devs liked it, but we were a minority compared to those who liked/used Redux.
One strange thing they did was release a new "version" called MobX State Tree. It had a great API but was very different and slower.
Did you read the article?
The main issue with your idea is that an LLM won't know whether the algorithm it created is any good, or even whether it works at all. If it can't check that, it will never know and never get better. You could ask it to generate a number of algorithms and then choose the best one yourself, but then you have worked as a team; the LLM did not plan anything on its own.
LLMs can integrate with a sandbox to deploy and test their code and iterate on a solution until it appears to be valid. They can also integrate with web search to go out and find presumably-valid sudoku puzzles to use as test cases if they're (likely) unable to generate valid sudoku puzzles themselves. I know this is expanding the definition of "LLM" to include a sandbox or web search, but I think it's fair because it's a reasonably practical and obvious application environment for an LLM you plan to ask to do things like this, and I think LLMs with both of these integrations will be commonplace in the next 1-2 years.
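As a rough sketch of that generate-test-iterate loop (askModel and runInSandbox are hypothetical placeholders, not a real LLM or sandbox API):

```typescript
// Keep asking the model for a candidate solution, run it against test cases
// in a sandbox, and feed failures back until the tests pass or we give up.
async function solveWithFeedback(
  task: string,
  testCases: string[],
  maxRounds = 5
): Promise<string | null> {
  let feedback = "";
  for (let round = 0; round < maxRounds; round++) {
    const code = await askModel(`${task}\n${feedback}`);

    const result = await runInSandbox(code, testCases);
    if (result.allPassed) return code;

    // Include the failures in the next prompt so the model can iterate.
    feedback = `Previous attempt failed:\n${result.errors.join("\n")}`;
  }
  return null; // never converged on a passing solution
}

// Hypothetical stubs so the sketch type-checks; a real setup would call an
// LLM API and an isolated code runner here.
declare function askModel(prompt: string): Promise<string>;
declare function runInSandbox(
  code: string,
  tests: string[]
): Promise<{ allPassed: boolean; errors: string[] }>;
```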
No, I don't think LLMs can "reason and plan". But I do think they can effectively mimic (fake) "reasoning and planning" and still arrive at the same result that actual reasoning and planning would yield, for reasonably common problems of greater-than-trivial but less-than-moderate complexity.
I think pretty much all of our production AI models today are limited by their lack of ability to self-assess, "goal-seek", and mutate themselves to "excel". I'm not 100% sure what this would look like, but I can be sure they don't have any real "drive to excel". Perhaps improvements in reinforcement learning will uncover something like this, but I think there may need to be a paradigm shift before we invent something like that.