In my personal experience, MSYS2 would work great until it didn't. Unless this has changed, from what I remember, MSYS2 compiled everything without PIC/PIE, while Windows lets you configure, system-wide, whether ASLR is applied "if supported" or always (mandatory ASLR). If that setting is anything but off, MSYS2 binaries will randomly crash with heap allocation errors, or at least they did on my system. It happened so often when I had the actual coreutils installed that I switched to uutils-coreutils, even though I knew uutils-coreutils has some discrepancies/issues. Idk if they've fixed that bug or not; I did ask them once why they didn't just enable full ASLR and get on with things, and they claimed they needed non-ASLR builds for Docker.
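For what it's worth, whether a given Windows binary opts into ASLR is recorded in its PE header: the DYNAMICBASE (and, for 64-bit high-entropy ASLR, HIGH_ENTROPY_VA) bits of the optional header's DllCharacteristics field. A minimal, stdlib-only sketch of a checker — field offsets follow the standard PE/COFF layout, and the function name is my own, not a real library API:

```python
import struct

# DllCharacteristics bits from the PE/COFF spec (winnt.h values)
IMAGE_DLLCHARACTERISTICS_HIGH_ENTROPY_VA = 0x0020
IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE = 0x0040

def pe_aslr_flags(data: bytes) -> dict:
    """Return the ASLR-related DllCharacteristics flags of a PE image.

    `data` is the raw bytes of the executable. Raises ValueError if the
    bytes don't look like a PE file.
    """
    if data[:2] != b"MZ":
        raise ValueError("not an MZ executable")
    # Offset of the PE header is stored at 0x3C in the DOS header.
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("missing PE signature")
    # Optional header starts after the 4-byte signature + 20-byte COFF header.
    opt = e_lfanew + 4 + 20
    # DllCharacteristics sits at offset 70 in both PE32 and PE32+ headers.
    (dll_chars,) = struct.unpack_from("<H", data, opt + 70)
    return {
        "dynamic_base": bool(dll_chars & IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE),
        "high_entropy_va": bool(dll_chars & IMAGE_DLLCHARACTERISTICS_HIGH_ENTROPY_VA),
    }
```

Running something like this over the .exe files in an MSYS2 install would show which binaries mandatory ASLR has to forcibly relocate.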
This looks really, really AI-generated, even if the author did try to hide it by making some grammar elements improper. Idk if that diminishes its accuracy, though.
I had to stop reading. I have become overly sensitive to LLMisms. This is definitely "ChatGPT, read this article and rewrite it in a casual tone" with little to no actual authorship. On HN we should try to get primary sources for this sort of thing.
I don't know why you are downvoted. The article is AI blogspam; it doesn't have any more factual information than e.g. https://www.darkreading.com/application-security/vercel-empl... and is full of empty LLMisms. It's depressing that people are willing to read this.
I don't have an LLM radar like you, but I felt some anxiety reading through it. Can't explain why, but the logic was not linear, and that strained me as a reader. It didn't have the obvious LLM-isms I see on YouTube videos ("not this, but that").
My natural instinct is to make sense of what I read, and if I'm presented with a word salad, it strains me. What are the empty LLMisms, so my radar can be calibrated? These are some giveaways I could spot:
> The timeline is genuinely absurd
> The timeline sequence description (Feb/March/April) is abstract and does not depict specifics reflecting human understanding.
That article you linked to didn't mention that Context.ai, where this mess originated, is a Y Combinator company. Most probably its founders are on this very web forum.
It's absolutely LLM prose, though not all of it. Maybe the author rewrote parts.
The thing that concerns me is that even at a site like HN, where a lot of people are very familiar with LLMs, it seems to be passing.
I hate to think this will become the norm, but it's not the first HN-linked post that's gotten a lot of earnest engagement despite being AI-generated (or partly AI-generated).
I'm very comfortable with AI generated code, if the humans involved are doing due diligence, but I really dislike the idea of LLM generated prose taking over more and more of the front page.
Of course it will be the new normal. Even worse, in a few years you'll be writing AI-like prose yourself, because all the AI-written articles and news you read will silently cause you to adopt that style. In a few more years, barely anybody will be able to write coherent statements themselves without the help of an LLM :)
The author did change some things to try to pass the LLM test. For example, they removed the apostrophe from I've. The problem, of course, is that this isn't enough to actually pass it, and the author would need to practically rewrite it in their own words to actually come off as natural.
And yes, I agree with you: it's sad seeing LLM-generated slop taking over the front page. I (possibly naively) hope that this trend starts reversing itself sometime soon, as HN is a valuable resource for me to discover new and fascinating things.
I mean, the signs have been there that the costs to run and operate these models weren't as simple as inference costs. And the signs were there (and, arguably, still are) that it costs way, way more than many people like to claim on Anthropic's part. So to me this price hike is not at all surprising. It was going to come eventually, and I suspect it's nowhere near over. It wouldn't surprise me if in 2-3 years the "max" plan is $800 or even $2,000.
I would not be surprised at all: a $1,000/mo tool that makes your $20,000/mo engineer a lot more productive is an easy sell.
I'm guessing we're gonna have a world like working on cars: most people won't have the expensive tools (e.g., a full hydraulic lift) for personal stuff; they'll have to make do with lesser tools.
If your company is making $1 mil per employee per year, then 10% is $100k. Even at $500k per employee or lower, it's almost always better to buy the $1,000/month tool (at a 10% productivity gain, break-even is a measly $120k revenue per employee per year).
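The break-even arithmetic above is simple under the (big) assumption that the productivity gain translates linearly into revenue. A sketch, with a hypothetical helper name:

```python
def break_even_revenue(tool_cost_monthly: float, productivity_gain: float) -> float:
    """Annual revenue per employee at which the tool pays for itself,
    assuming extra output scales linearly with revenue per employee."""
    annual_cost = tool_cost_monthly * 12
    return annual_cost / productivity_gain

# A $1,000/month tool at a 10% gain breaks even at $120k/employee/year.
print(break_even_revenue(1000, 0.10))
```

Real accounting is messier (margin vs. revenue, ramp-up time, not every employee using the tool), but the order of magnitude holds.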
It's not just about cost, it's about having the control, stability, and autonomy of on-prem. Plus you can probably repurpose that compute when employees are out of the office.
Huh? I've been seeing the "hopelessly doomed because of AI" trope practically since ChatGPT came out. It wasn't even remotely as bad as it is now, but it's been there all along.
I think you might be confusing Backblaze reading files with how Dropbox/OneDrive/Nextcloud/etc. work. NC doesn't enable this by default (I don't think), but Windows calls it virtual file support.

There is no avoiding filling the upload buffer, because Backblaze has zero control over how Dropbox downloads files. When Backblaze asks Windows to open and read a file, Windows asks Dropbox (or whatever handles the virtual files) to open it and read it; how that's done is up to the sync client. To Backblaze, your Dropbox folder is a normal directory, with all that that entails, so Backblaze thinks it can just zip through the directory and read data from disk, even though that isn't really what's happening.

I had to exclude my Nextcloud directory from my Duplicati backups for precisely this reason: my Nextcloud is hosted on my server, and Duplicati was sending it so many requests that the server started returning 500 errors.
And no, my server isn't behind Cloudflare, primarily because I don't have $200 to throw at them to proxy arbitrary TCP/UDP ports through their network, and I don't know how to tell CF "hey, only proxy this traffic but let me handle everything else" (assuming that's even possible, given that the usual flow is to put your entire domain behind them).
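The placeholder mechanism described above is actually visible in the file attributes: Windows marks cloud-only files with recall-on-open / recall-on-data-access bits. A rough sketch of how a backup tool could detect dehydrated files before forcing a download — the attribute values are the winnt.h constants, but the helper names here are hypothetical, not any real tool's API:

```python
import os

# Windows cloud-files placeholder attributes (winnt.h values); these two
# aren't exposed as named constants in Python's stat module, so they are
# defined by hand here.
FILE_ATTRIBUTE_RECALL_ON_OPEN = 0x00040000
FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS = 0x00400000
PLACEHOLDER_MASK = (FILE_ATTRIBUTE_RECALL_ON_OPEN |
                    FILE_ATTRIBUTE_RECALL_ON_DATA_ACCESS)

def is_placeholder(attrs: int) -> bool:
    """True if the attribute bits mark a file whose contents live in the
    cloud and will be downloaded (hydrated) on first read."""
    return bool(attrs & PLACEHOLDER_MASK)

def looks_cloud_only(path: str) -> bool:
    """On Windows, os.stat exposes the raw attribute bits as
    st_file_attributes; elsewhere the attribute is absent and this
    returns False."""
    st = os.stat(path)
    return is_placeholder(getattr(st, "st_file_attributes", 0))
```

A backup agent that checked this before reading could skip (or at least rate-limit) hydration instead of blindly pulling every cloud-only file down.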
Dropbox and OneDrive can handle Backblaze zipping through and opening many files. The risk is pulling down too many gigabytes at once, but that shouldn't happen, because Backblaze should only open enough for immediate upload. And if it does happen, it's very easily fixed.
If it overloads nextcloud by hitting too many files too fast, that's a legitimate issue but it's not what OP was worried about.
The issue you're missing is that the abstraction Dropbox/OneDrive/etc. provide is not that of a network file system. When an application triggers the download of a file, the client hydrates the file to the local file system and keeps it there. So if Backblaze triggers the download of a TB of files, it will consume a TB of local file system space (which may not even exist).
It does keep them permanently. Dropbox is not a NAS and does not pretend to be one.
> When you open an online-only file from the Dropbox folder on your computer, it will automatically download and become available offline. This means you’ll need to have enough hard drive space for the file to download before you can open it. You can change it back to online-only by following the instructions below.
Same exact behavior for OneDrive, though it apparently does have a Windows integration to eventually migrate unused files back to online-only if enabled.
> When you open an online-only file, it downloads to your device and becomes a locally available file. You can open a locally available file anytime, even without Internet access. If you need more space, you can change the file back to online only. Just right-click the file and select "Free up space."
This "let's not back up .git folders" thing bit me too. I had reinstalled Windows and thought, "Eh, no big deal, I'll just restore my source code directory from Backblaze." But, of course, I'm the kind of SWE who accumulates very large numbers of git repositories over time (think hundreds at least), some big, some small. Some are personal projects; some are forks of others. Either way, I had no idea that Backblaze had decided, without my consent, not to back up .git directories.

So imagine how shocked and dismayed I was when I discovered a bunch of git repositories that had the files as of the time they were backed up, but absolutely no actual git repo data, so I couldn't sync them. At all.

After that, I permanently abandoned Backblaze and migrated to IDrive E2 with Duplicati as the backup agent. Duplicati, at least, keeps everything except what I tell it not to, and doesn't make arbitrary decisions on my behalf.
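If anyone else is in this boat, a quick way to audit a restore for this failure mode is to diff which repos have a .git directory in the source tree versus the restored tree. A rough sketch — these are hypothetical helpers, not a Backblaze or Duplicati feature:

```python
import os

def find_git_repos(root):
    """Yield paths of directories that directly contain a .git folder."""
    for dirpath, dirnames, _ in os.walk(root):
        if ".git" in dirnames:
            yield dirpath

def missing_git_data(source_root, restored_root):
    """Repos whose .git directory exists in the source tree but is
    missing from the restored tree -- the failure mode described above.
    Returns paths relative to source_root."""
    restored = set(find_git_repos(restored_root))
    missing = []
    for repo in find_git_repos(source_root):
        rel = os.path.relpath(repo, source_root)
        if os.path.join(restored_root, rel) not in restored:
            missing.append(rel)
    return sorted(missing)
```

Running this right after a test restore, before you wipe the original disk, is the point at which it's still cheap to find out.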
This is exactly why random corporations need to be gone from government. Or copyright needs to be abolished, one of the two. No corporation (no matter how beloved) should ever have this kind of power. IMO the more powerful an organization becomes, the deeper the scrutiny should be.
This distinction is good in academic circles and similar (like on here). But the public (ordinary people who don't regularly visit Hacker News, or even know it exists) doesn't care. To them, AI == inequality and inequality accelerants, because it is funded and run by the richest, most powerful people on Earth. And those very people are making everything worse for everyone but themselves, not better. Nobody is going to care about academic distinctions in such circumstances.
It's because the consequences of AI are so direct and obvious, and arrive faster, whereas the inequality and job losses from other tech advances are just less direct.
That is, it's not hard to see why so many main streets in smaller towns have boarded-up retail stores when you can now get anything in about a day (max) from Amazon. But Amazon (and the other Internet giants) always paid at least semi-plausible lip service to being a boon to the small fry (see Amazon's FBA commercials, for example). Whereas now you've got folks like Altman and Amodei gleefully saying how AI will be able to do all the work of a huge portion of (mostly high-paying) jobs.
So it's not surprising that people are more up in arms about AI. And frankly, I don't think it really matters. Anger against "the tech elite" has been bubbling up for a long time now, and AI now just provides the most obvious target.
Does economics or political theory study centralization in practical terms? Not as a normative claim; I mean what the actual effects are. It just feels like we're at a centralization of power of unprecedented scale, to the point where no previous theories or models could really apply, at least not in a way that makes analytical progress. (Sure, feudalism is honestly becoming a scarier and scarier analogy, but there are still significant differences.)
I'm pretty much only thinking about these kinds of problems at my job at this point, so this is important to me in that regard