14 incidents in February! It's February 9th! Glad to see the latest great savior phase of the AI industrial complex [1] is going just as well as all the others!
An interesting thing I notice is that people complain about companies that only post an incident when half the world is affected ... but also about companies that post about "minor issues" too, e.g.:
> During this time, workflows experienced an average delay of 49 seconds, and 4.7% of workflow runs failed to start within 5 minutes.
That's certainly not perfect, but it also means there was roughly a 95% chance that if you re-ran the job, it would start rather than fail. Another incident was about notifications being late. I'm sure the other providers have similar issues that people notice, but they just don't write about them. So a high incident count alone doesn't make the stats bad - only an actually unstable service does.
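As a rough sanity check of that 95% figure (assuming, purely for illustration, that the 4.7% failure-to-start rate applies independently to a retry, which real correlated incidents won't satisfy):

```python
# Back-of-the-envelope: chance a manual re-run also fails to start,
# assuming the 4.7% failure rate applies independently to the retry
# (an assumption; failures during an incident are correlated in time).
p_fail = 0.047  # 4.7% of workflow runs failed to start within 5 minutes

p_both_fail = p_fail ** 2       # original run AND the retry fail to start
p_retry_ok = 1 - p_both_fail    # at least one of the two attempts starts

print(f"{p_both_fail:.4f}")  # ~0.0022, i.e. about 0.2%
print(f"{p_retry_ok:.1%}")   # ~99.8%
```

Under that independence assumption a single retry already pushes the effective success rate well past the quoted 95%.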
I know you are joking, but I'm sure there is at least one director or VP inside GitHub pushing a new salvation project that must use AI to solve all the problems, when the most likely reason is that engineers are drowning in tech debt.
Upper management at Microsoft has been bragging about their high percentage of AI-generated code lately - and in the meantime we've had several disastrous Windows 11 updates with the potential to brick your machine, plus a slew of outages at GitHub. It might be something else, but it seems clear that part of their current technical approach is utterly broken.
<mermaid>
flowchart TD
A["Claim: Bridge opening is conditional"] --> B["Actor: Donald Trump"]
A --> C["Project: Gordie Howe International Bridge"]
A --> D["Condition: Canadian concessions required"]
B --> E["Public statement"]
C --> F["Status: not opening (per claim)"]
D --> G["Type: policy/trade concessions"]
E --> H["Outcome framing"]
F --> H
G --> H
H["Message: No opening unless concessions are granted"]
</mermaid>
When I first typed up my comment I said "their current business approach" and then corrected it to technical since - yeah, in the short term it probably isn't hurting their pocketbooks too much. The issue is that a lot more folks seem to be seriously considering switching off Windows - we'll see if this actually is the year of the Linux desktop (it never seems to be in the end), but it certainly seems to be souring their brand reputation in a major way.
Honestly AI management would probably be better. "You're a competent manager, you're not allowed to break or circumvent workers right laws, you must comply with our CSR and HR policies, provide realistic estimates and deliver stable and reliable products to our customers." Then just watch half the tech sector break down, due to a lack of resources, or watch as profit is just cut in half.
All the cool kids move fast and break things. Why not the same for core infrastructure providers? Let's replace our engineers with markdown files named after them.
I'm happy that they're being transparent about it. There's no good way to take downtime, but at least they don't try to cover it up. We can adjust, and they'll make it better. I'm sure a retro is on its way; it's been quite the bumpy month.
Copilot is shown as having policy issues in the latest reports. Oh my, the irony. Satya is like "look ma, our stock is dropping..." Gee, I wonder why!
Just a note: the White House also uses archive.ph.
Search for “Americans are spending like never before: Retail sales are booming — up 5% over last year, far outpacing inflation — as Americans spend in record amounts.” [1]
There is an important lesson to be had here, not just in writing articles, but in software engineering as well. We should be checking our work very diligently, including code libraries. If a developer is using agents/LLMs to steamroll their way through a project, every line of code and every library needs to be checked.
Probably a pretty safe assumption that 4Chan script kiddies are running federal IT at this point. Why not run a search for connections in light of this news?
Why would it be LLM-assisted when maps of what sites link where are part of the core WWW infrastructure? Google made a trillion dollar business out of that.
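The idea that link structure alone carries ranking signal can be sketched with a toy PageRank power iteration (an illustrative sketch of the underlying idea, not Google's actual implementation; the graph and damping factor here are made up):

```python
# Minimal PageRank power iteration over a toy link graph.
links = {            # page -> pages it links to (hypothetical graph)
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}
pages = list(links)
d = 0.85                                  # standard damping factor
rank = {p: 1 / len(pages) for p in pages}

for _ in range(50):                       # iterate until ranks stabilize
    new = {p: (1 - d) / len(pages) for p in pages}
    for src, outs in links.items():
        share = d * rank[src] / len(outs) # split rank across outlinks
        for dst in outs:
            new[dst] += share
    rank = new

# "c" is linked by both "a" and "b", so it ends up ranked highest
print(max(rank, key=rank.get))  # c
```

The point being: who-links-to-whom is plain public graph data, so mapping it requires a crawler, not an LLM.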
I occasionally read these articles and wanted to know what sources they use, besides websites like The Daily Caller, to back up their claims. I noticed this some time ago and remembered it, but it took me a while to find the article again. ;)
Plot twist: it started as a network of weighted links, but, after hitting a certain complexity level, it became self aware and now it’s only trying to live its life in peace and not to be noticed.
It was taken down. In general, 18F’s open source work was in the public domain, though, and I know there have been efforts to archive it recently.
Additionally, it looks like some of 18F’s public guides are still available (e.g. the “Derisking” guide, which is all about how to structure your IT projects to be less likely to fail spectacularly: https://guides.18f.gov/derisking/)
These count as “follow up dupes” on HN and get moderated away - there’s not much point in having a front page discussion that’s nearly identical to a discussion going on in a current front page thread.