rr808's comments

I'm still on 23H2; Windows Update fails every time I try. This summer I'm switching to Linux.

You need to repair your Windows installation first, sometimes after enough updates it just corrupts itself.

Ain't that the truth. My older desktop's DHCP and WLAN services no longer work. So either I reinstall Windows 10 or see if I can get what I need it to do on Windows

Not sure if you're referring here to Windows or Microsoft as a company. Could fit either /S

Something that might work is downloading the current ISO, then mounting it and running setup from there.

I work in big financials. Everything used to be built on Excel. A lot still is, but Python/JupyterHub or custom applications have taken over a lot of the complex stuff. Excel isn't really essential any more.

While less necessary with AI, Excel is still the king of data entry and basic data manipulation (sorting, filtering, updating, etc.). I’d say that SQLite with a GUI for visualization is a far stronger competitor than Jupyter at those sorts of things. You can do that stuff in Jupyter, but it’s easier in Excel.
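The sorting/filtering mentioned above really is only a few lines of SQL; here's a minimal sketch using Python's built-in sqlite3 module (the table and values are made up for illustration):

```python
import sqlite3

# An in-memory database with a toy "orders" table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [("acme", 120.0), ("globex", 75.5), ("acme", 42.0)])

# Filtering and sorting -- the Excel-style operations mentioned above.
rows = con.execute(
    "SELECT customer, amount FROM orders WHERE amount > 50 ORDER BY amount DESC"
).fetchall()
print(rows)  # [('acme', 120.0), ('globex', 75.5)]
```

A GUI front end like DB Browser for SQLite then gives you the spreadsheet-style view on top of the same file.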

Jupyter also has a janky execution model. It doesn't track dependencies between cells, so you have to be very careful about how you separate them, and rerunning the whole notebook every time seems kind of pointless versus just writing a pure Python script.
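The hidden-state problem described above can be sketched in plain Python, treating each "cell" as a function that mutates shared state (all names here are illustrative):

```python
# Each "cell" is just a function; running them out of order leaves
# stale values behind, and nothing tracks the dependency between them.
state = {}

def cell_1():
    state["rate"] = 0.05                          # define an input

def cell_2():
    state["total"] = 100 * (1 + state["rate"])    # silently depends on cell_1

cell_1()
cell_2()
assert state["total"] == 105.0

# Now "edit and re-run" cell_1 but forget to re-run cell_2:
state["rate"] = 0.10
assert state["total"] == 105.0   # total is stale, with no warning
```

This is why many teams restart the kernel and "run all" before trusting a notebook's output, at which point a plain script does the same job.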


I switched from C++ to Java/Python 20 years ago. I never really fit in; I just don't understand it when people talk about complicated frameworks to avoid multithreading/mutexes etc., when basic C++ multithreading is much simpler than RxJava or async/await or whatever the flavor of the month is.

But C++ projects are usually really boring. I want to go back, but I'm glad I left. Has anyone found a place where C++-style programming is in fashion but isn't quite C++? I hope that makes sense.


That place is C++/Lua, with most of the code in Lua. The concurrency model is thread-per-Lua-state. Interthread comms is message passing, implemented via textbook C++11 atomics and condition variables. Lua is just a small C library, so the style remains very friendly to those who like C with classes.
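The stack above is C++/Lua, but the thread-per-state, message-passing shape translates to other languages; here is a rough Python analogue where queue.Queue stands in for the condition-variable mailbox (all names illustrative, not the commenter's actual code):

```python
import threading
import queue

def worker(inbox: queue.Queue, outbox: queue.Queue) -> None:
    # Private state, analogous to one Lua state owned by one thread.
    total = 0
    while True:
        msg = inbox.get()        # blocking receive; no shared-memory locks
        if msg is None:          # sentinel: shut down
            break
        total += msg
        outbox.put(total)        # reply by message, not by shared state

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()
for n in (1, 2, 3):
    inbox.put(n)
totals = [outbox.get() for _ in range(3)]
inbox.put(None)
t.join()
print(totals)  # running totals: [1, 3, 6]
```

Because each worker owns its state outright, there is nothing to lock; the queue does the synchronization, which is the same property the C++11 atomics/condition-variable mailbox provides.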

Me too, I figured I spent more time on the computer than taking pics. Now I shoot jpg and if I have some spare time I go out and shoot. If I take a good pic I share it with basic editing if any instead of waiting to get it "perfect".

Yeah, I started with that. Eventually did an electronics engineering degree. Now I just write Python and CI config.


Still having your own hardware seems so much cheaper. Maybe even just for dev/uat environments?

Every big corporate I have worked at has a lower cost of capital than Amazon, and yet they all want to move to AWS. I just don't understand it.


I've moved two clients to colo. Dramatic cost savings. So many systems only use VMs and a few basic cloud features. Everyone knows this, but just to make the point, you can still use certain cloud products (cloud storage for example) just fine while running your primary workloads on your own hardware. Sometimes it makes perfect sense, and you just need someone to nudge you and tell you it's going to be ok.


Going to the cloud can't possibly be as cheap as owning your own hardware for obvious reasons - they have to make money somehow. Well, unless you use spot instances, which uses spare nodes. In any case, you move to the cloud despite the cost if you need the multi-region redundancy, the management/features etc. More commonly it's because the higher ups heard everybody's doing it, but oh well :D


The main thrust of the economic argument has been the cost of the system administrators who maintain the hardware. Electricity and cooling are big ongoing costs too, and when AWS launched it wasn't uncommon to order a server and wait three months for it to arrive.

I think in practice the system administrators are still in the company, just as AWS engineers now; they still keep all that platform stuff running, and you're paying AWS for its engineers as well as the electricity. The cloud has the advantage of being very quick to spin up another box, but machines these days can come with 288 cores; it's not a big stretch to maintain a sufficient surplus and the tools to let teams self-service.

Things are in a different place from when AWS first launched. AWS ought to be charging a lot less for compute, memory, and storage; their business is wildly profitable at current rates because compute per core got cheaper.


I think it's incentives. Working with AWS is good for your resume. It's also a responsibility thing. If AWS is down then it's Amazon's fault. If there's a problem with your physical server, then it's your problem.

A broader thing here is -- and you may also notice this trend in software -- employees are incentivized towards complex solutions, while business owners are incentivized towards simple solutions.

(Shiny object syndrome sold separately ;)


Everything I've read says that self-hosting doesn't become cheaper than AWS until a company reaches $1-3 million per month in spending, once all costs are accounted for. Then there is the often-overlooked aspect that a good API like AWS's lets your expensive admins actually get things done hundreds of times faster than most self-hosted IT can. It usually takes months for most companies to buy and install additional capacity.
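The break-even claim above is easy to sanity-check with back-of-envelope arithmetic. Every number below is a hypothetical placeholder, not real AWS or colo pricing:

```python
# Hypothetical monthly cloud bill at the low end of the quoted range.
cloud_monthly = 2_000_000            # $/month

# Hypothetical self-hosted equivalent, all figures invented:
server_capex = 3_000_000             # hardware, amortized over 3 years
amortized = server_capex / 36        # ~$83k/month
colo_monthly = 150_000               # rack space, power, cooling, bandwidth
staff_monthly = 10 * 20_000          # extra ops engineers, fully loaded

self_hosted = amortized + colo_monthly + staff_monthly
print(f"self-hosted: ${self_hosted:,.0f}/mo vs cloud: ${cloud_monthly:,.0f}/mo")
```

With these made-up inputs self-hosting wins on raw cost; the real argument, as the comment notes, is how much the slower provisioning and extra staffing are worth to you.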


> good API like AWS has lets your expensive admins actually get things done hundreds of times faster than how most self-hosted IT can do

Depends. APIs must take into account many more cases than our own specific use case, and I find we are often spending a lot of time going through unnecessary (for us) hoops. And that's leaving aside possible API changes.


I'm having a hard time imagining a situation where using an API is slower than not using one.


I think it can make sense for user-facing services. I host my web and database servers with AWS because unmanaged DBs can be a PITA, Amazon is peered with basically everyone, AWS is way more generous with network speeds than many dedicated / colo providers, and it’s easy to scale capacity up and down. Backend servers are hosted with cheaper providers though.


There are so many articles like these:

"We Moved from AWS to Hetzner. Cut Costs 89%. Here’s the Catch."

https://medium.com/lets-code-future/we-moved-from-aws-to-het...


You just linked AI slop.


You mean the image? The text does not sound like AI at all IMHO.


> No theory. No fluff. Just production.

ChatGPT tells me "no theory, no fluff" all the time :D


Where do you think it learned that phrasing from?


Not from people using that same phrasing twice within a few sentences.

""" No warning. No traffic spike. Just… more money gone.

That’s when I finally looked at Hetzner.

I’ve seen too many backend systems fail for the same reasons — and too many teams learn the hard way.

So I turned those incidents into a practical field manual: real failures, root causes, fixes, and prevention systems.

No theory. No fluff. Just production. """

It's clearly slop, they immediately use effectively the same one again:

""" That last line isn’t a joke. There were charges I genuinely couldn’t explain. Elastic IPs we forgot to release. Snapshots from instances that no longer existed. CloudFront distributions someone set up for testing. """

No, human writers don't repeat this pattern in every single paragraph. They use it at most once across a whole article.


Repetition is a very common tool in writing (e.g. "I have a dream").

I'm just irked that it's being called out for AI slop because "I feel it in my bones!!"

There's a good chance it was written using AI -- should that matter? If the content is wrong/sucks, say that instead. If you're going to dismiss all AI assisted writing: good luck in the next decade.


Reading the same (very annoying marketing blog) style of writing gets old fast.

It’s like suddenly all memes are just the same meme and nobody makes their own memes because “AI does it better”.

The style of writing is an intrinsic part of communication, if you can’t critique that then what is content? We’re not machines sharing pieces of data with each other.


AI-written text is not necessarily incorrect, but if the author did not take their time to remove the AI slop, they probably did not put much effort into it elsewhere. In addition, the text is often over two times longer than without the slop, which disrespects the reader's time (even worse in this case, since a significant fraction of the article is an ad for the author's books).


> I just don’t understand it

Maintaining and updating your own hardware comes with so much operational overhead compared to magically spinning up and down resources as needed. I don’t think this really needs to be said.


I dunno… for setup, yes absolutely. One time cost in time. After that, not really.


It's absolutely not a one-time cost. Once you have it, you need to hire people full time to maintain it and eventually upgrade it. Hardware fails constantly.


I've done this for decades with a full rack. Stuff fails on occasion. So what?


You need to factor in the data center, power, cooling, hands-on support, future growth, etc.

You're never just paying for the hardware.


Back then I didn't foresee the 22 GB image our Jupyter/ML stack would be in 2026. There must be a better way.


Is that Docker's fault? A basic Linux image is like 400 MB, right?


I still remember my first CPU with a heatsink. It seemed like a temporary dumb hack.


Well, it kinda was, seeing how power-efficient iPhone chips are despite hovering near the top of single-core benchmarks.


Nice, I hadn't thought of that. I love the fanless trend in Apple's MacBook Airs.


I had the same inclination back in the 90s when I upgraded from my Cyrix 486 SLC2 50 MHz without a heatsink (which seems like a no-no in retrospect) to a Cyrix MediaGX 133 MHz. The stock fan was immediately noticeable; I thought I had done something wrong.


Upgrading and Repairing PCs, 4th edition, even says directly that some shady resellers will put a heatsink on a chip they're running beyond spec, but that Intel designs all its processors to run at rated speed without one.


I had a PC with an old PII or PIII cartridge.

The CPU and heatsink were fully integrated into what looked like a NES cart, with an integrated fan and everything. It was not really possible to separate the CPU from the heatsink, as the locking mechanism that kept the cart in place on the motherboard interfaced with the heatsink assembly.

So I'm a little dubious of that no-heatsink claim.


I've never seen a Xeon without a heat sink, I don't believe they are designed to run without one.


Indeed, even the oldest, slowest Xeons shipped in SECC cartridges with integrated heatsinks.

But that was several years after the book cited by the GP was published (1994, shortly after the release of the original Pentium).


Ah I missed that on my first read, was more focused on the claim at the end of the sentence. Thanks.


The first Xeon looks to have been released in 1998, so that sounds about right.


The LG Gram is incredible: 17 inches, thin, light, and powerful. However, it has one of the worst keyboards I've ever used.


I think they're dumb but my wife loves them. The video quality is surprisingly good.

