Hacker News | phyrex's comments

Camera is recording/taking a picture


Why?


They wear down from being run at 100% all the time. Support slowly drops off, and the architecture and even the rack format become deprecated.


GPUs do not wear down from being run at 100%, unless they're pushed past their voltage limits or severely overheating.

You can buy a GPU that's been used to mine bitcoin for 5 years with zero downtime, and as long as it's been properly taken care of (or better, undervolted), that GPU functions the exact same as a 5 year old GPU in your PC. Probably even better.

GPUs are rated to do 100%, all the time. That's the point. Otherwise it'd be 115%.


Yeah, that's not how it works in practice in a datacenter with the latest GPUs; they are basically perishable goods.

You don't run your gaming PC 24/7.


No, you're fundamentally wrong. There's regular wear and tear on GPUs, which all have varying levels of quality, and you'll have blown capacitors (just as with any piece of hardware), but running in a datacenter does not damage them more. If anything, they're better taken care of and will last longer. However, instead of having one 5090 in a computer somewhere, you have a million of them, so a 1% failure rate quickly becomes a big number. My example mentioned bitcoin mining because, just like datacenters, those GPUs were running in massive farms of thousands of devices. We have the proof and the numbers: running at full load with proper cooling and no overvoltage does not damage hardware.

The only reason they're "perishable" is because of the GPU arms race, where renewing them every 5 years is likely to be worth the investment for the gains you make in power efficiency.

Do you think Google has a pile of millions of older TPUs they threw out because they all failed, when chips are basically impossible to recycle? No, they keep using them; they're serving your Nano Banana prompts.


GPU bitcoin mining rigs had a high failure rate too. It was quite common to run them at 80% power to keep them going longer. That's before taking into account that the more recent generations of GPUs seem to be a lot more fragile in general.


Mining rigs also used more milk crates than datacenter racks; hot/cold aisles? No, piles! Not to mention the often questionable power delivery...


AI data centers are also incentivised to reduce costs as far as they can. They could absolutely be running them in questionable setups


Indeed, fair point. I'd hope the larger players would be better... but I know better


Yeah, what's crazy is that most of these companies are making accounting choices that obscure the true cost: extending the stated useful life of their equipment, in some cases from 3 years to 6. Perfectly legal, and it has the effect of suppressing depreciation expenses and inflating reported earnings.
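
A back-of-envelope sketch of the effect (the $10B capex figure is made up, purely for illustration): straight-line depreciation of the same GPU spend over a 3-year versus a 6-year stated useful life.

```python
# Hypothetical illustration: straight-line depreciation of the same GPU
# purchase over a 3-year vs. a 6-year stated useful life.
capex = 10_000_000_000  # $10B of GPUs; a made-up figure for illustration

annual_dep_3yr = capex / 3  # expense recognized each year with a 3-year life
annual_dep_6yr = capex / 6  # expense recognized each year with a 6-year life

# Doubling the useful life halves the annual depreciation charge, which
# lifts reported operating income by the difference every year.
print(f"3-year life: ${annual_dep_3yr / 1e9:.2f}B/yr depreciation")
print(f"6-year life: ${annual_dep_6yr / 1e9:.2f}B/yr depreciation")
print(f"Earnings boost: ${(annual_dep_3yr - annual_dep_6yr) / 1e9:.2f}B/yr")
```

The hardware doesn't last any longer because of the accounting change; the cost just shows up later on the income statement.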


But don't they salivate for those sweet depreciation deductions to decrease their taxable income?


Small sacrifice to not spook investors and the market.


That's literally graphql though? That's the whole idea. This is so weird; clearly the author is aware of graphql but then just decided to reimplement half the backend? Why?


Wow, good for you, you can make A LOT of money!! https://bugbounty.meta.com/


Interesting how exploits already got weaponized instead of being reported to Meta...


You can make 10x this amount by handing the exploit to brokers.


workaround from the issue discussion:

```

  Problem: Claude Code 2.1.0 crashes with Invalid Version: 2.1.0 (2026-01-07) because the CHANGELOG.md format changed to include dates in version headers (e.g., ## 2.1.0 (2026-01-07)). The code parses these headers as object keys and tries to sort them using semver's .gt() function, which can't parse version strings with date suffixes.

  Affected functions: W37, gw0, and an unnamed function around line 3091 that fetches recent release notes.

  Fix: Wrap version strings with semver.coerce() before comparison. Run these 4 sed commands on cli.js:

  CLI_JS="$HOME/.nvm/versions/node/$(node -v)/lib/node_modules/@anthropic-ai/claude-code/cli.js"

  # Backup first
  cp "$CLI_JS" "$CLI_JS.backup"

  # Patch 1: Fix ve2.gt sort (recent release notes)
  sed -i 's/Object\.keys(B)\.sort((Y,J)=>ve2\.gt(Y,J,{loose:!0})?-1:1)/Object.keys(B).sort((Y,J)=>ve2.gt(ve2.coerce(Y),ve2.coerce(J),{loose:!0})?-1:1)/g' "$CLI_JS"

  # Patch 2: Fix gw0 sort
  sed -i 's/sort((G,Z)=>Wt\.gt(G,Z,{loose:!0})?1:-1)/sort((G,Z)=>Wt.gt(Wt.coerce(G),Wt.coerce(Z),{loose:!0})?1:-1)/g' "$CLI_JS"

  # Patch 3: Fix W37 filter
  sed -i 's/filter((\[J\])=>!Y||Wt\.gt(J,Y,{loose:!0}))/filter(([J])=>!Y||Wt.gt(Wt.coerce(J),Y,{loose:!0}))/g' "$CLI_JS"

  # Patch 4: Fix W37 sort
  sed -i 's/sort((\[J\],\[X\])=>Wt\.gt(J,X,{loose:!0})?-1:1)/sort(([J],[X])=>Wt.gt(Wt.coerce(J),Wt.coerce(X),{loose:!0})?-1:1)/g' "$CLI_JS"

  Note: If installed via different method, adjust CLI_JS path accordingly (e.g., /usr/lib/node_modules/@anthropic-ai/claude-code/cli.js).
```
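
For what it's worth, the failure mode is easy to re-create outside of cli.js: a strict semver comparison chokes on a key like "2.1.0 (2026-01-07)" unless it's coerced down to a bare version first. A rough Python sketch of the same coerce-before-compare idea (the regex helper and sample headers are mine, not Anthropic's code or the npm semver API):

```python
import re

# Sample CHANGELOG.md version headers; the newest one carries a date suffix,
# so it is no longer a valid semver string on its own.
headers = ["2.1.0 (2026-01-07)", "2.0.14", "2.0.9"]

def coerce(header: str) -> tuple[int, int, int]:
    """Pull the leading x.y.z out of a header, similar in spirit to semver.coerce()."""
    m = re.search(r"(\d+)\.(\d+)\.(\d+)", header)
    if m is None:
        raise ValueError(f"Invalid Version: {header}")
    return tuple(int(part) for part in m.groups())

# Comparing the raw strings needs a strict semver parse (and a plain string
# sort would even rank "2.0.9" above "2.0.14"); coercing first yields numeric
# tuples that sort correctly regardless of the date suffix.
print(sorted(headers, key=coerce, reverse=True))
# ['2.1.0 (2026-01-07)', '2.0.14', '2.0.9']
```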


Parsing markdown into a data structure without any sort of error handling is diabolical for a company like Anthropic


This sounds exactly like the type of thing you would expect an LLM to do


Why? Their software sucks, they're an LLM company not a software company.


They're an LLM company that has claimed that 90% of code will be written by LLMs. Please don't give them any excuses.


Running sed commands manually in 2026? Just tell Codex to fix your Claude Code


I hate the stock kobo reader, so this is exciting!


It's a good idea. Obviously you can't preemptively OCR all images, but having "context menu -> follow link" which works on QR codes and images with links in them seems totally doable to me.
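
For the QR half at least, the decoding is well-trodden ground. A rough sketch of what such a "follow link" handler could do with a selected image, using the pyzbar and Pillow libraries (the file name is just a placeholder; a browser would feed in pixels from the page instead):

```python
# Sketch: extract a URL from a QR code in an image and open it.
# Assumes the pyzbar and Pillow packages; "selected_image.png" is a placeholder.
import webbrowser

from PIL import Image
from pyzbar.pyzbar import decode

for symbol in decode(Image.open("selected_image.png")):
    payload = symbol.data.decode("utf-8")
    # Only follow payloads that actually look like links.
    if payload.startswith(("http://", "https://")):
        webbrowser.open(payload)
```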


I think that a separate program might be better, which could be used with any video source in any program, including a QR code made up from multiple pictures or from CSS, or a video, etc. Furthermore, it is not necessarily a URL.

However, it might also be made as a browser extension in case you do not want to use a separate program, or if you want to be able to follow such links directly without going through another program.


The datacenter OS doesn't have to be the same as the developer OS. At my work (of similar scale) the datacenters all run Linux, but very nearly all developers are on macOS.


macOS is a popular choice for dev boxes (if I understand correctly, Apple provides some pretty good tooling for managing a fleet of machines; more expensive hardware than a Linux dev machine fleet, but less DIY for company-wide administration).

... but Google solves the "A Linux fleet requires investment to maintain" problem by investing. They maintain their own in-house distro.


> their own in-house distro

Not really, it is just a well known outside distro plus internal CI servers to make sure that newly updated packages don't break things. Also some internal tools, of course.


Relative to what the rest of the world does, that is maintaining your own in-house distro.

It's downstream of Ubuntu (unless that's changed) but it's tweaked in the ways you've noted (trying to remember if they also maintain their own package mirrors or if they trust apt to fetch from public repositories; that's a detail I no longer recall).


Own package mirrors where packages are re-built by ourselves. Adding external public repositories is simply not allowed.


https://en.wikipedia.org/wiki/Goobuntu

In 2018, Google replaced Goobuntu with gLinux, a Linux distribution based on Debian Testing

https://en.wikipedia.org/wiki/GLinux


Not to be a jerk, but 'hundreds of devs and dozens of MRs per day' is not 'huge repos'. Certain functionality only becomes relevant at scale, and what is easy on a repo worth hundreds of megabytes doesn't work anymore once you have terabytes of source code to deal with.


> terabytes of source code

You sure that exists?

Git repositories that contain terabytes of source code?

I could imagine a repo that is terabytes but has binaries committed or similar... But source code?


Google's monorepo is in fact terabytes with no binaries. It does stretch the definition of source code though - a lot of that is configuration files (at worst, text protos) which are automatically generated.


Google had 86 TB of source code data in Piper way back in 2016.


Dang, that's mind-boggling - especially if I keep in mind that a book series like Lord of the Rings is only a few megabytes if saved as plain text.

Having 86 TB of plain text/source code - I can't fathom the scale, honestly

Are you absolutely sure there aren't binaries in there? (Honestly asking; the scale is just insane from my perspective - even the largest book compilation, like Anna's, isn't approaching that number if you strip out images ... and that's pretty much all books in circulation, with multiple versions per title.)
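
For a rough sense of that scale (assuming about 3 MB for the whole Lord of the Rings trilogy as plain text, which is my own estimate):

```python
# Back-of-envelope comparison; both figures are approximations.
piper_bytes = 86e12  # ~86 TB of source data in Piper, per the 2016 figure above
lotr_bytes = 3e6     # ~3 MB assumed for the full trilogy as plain text

print(f"{piper_bytes / lotr_bytes:,.0f} trilogies' worth of plain text")
# => roughly 28,666,667 copies
```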


Each snapshot of the repo isn't that big, but all the snapshots together, plus all the commit metadata and such, are


Git could never, but Piper at Google is way over that figure. Way, way over.


Microsoft has actually done a lot of work to scale Git to large repos.


It's why there's a special Microsoft Git VFS (a lot like the VFS at Google that is also referenced in the talk).

It was made to make working on Windows source code possible with Git.


Very sure, I work in one.


I'm pretty sure Meta's team has written about that at length. It's about many things, such as infrastructure (power/transportation/internet/energy), the political situation, available workforce, proximity to population centers, property prices, and a whole lot more.

