Hacker News | ftaghn's comments

> For those unaware, Linux has a very explicit policy of never breaking userspace

But it has no qualms about constantly refactoring the kernel side of the coin for the most minor of reasons, which causes endless trouble when it comes to supporting out-of-tree drivers and is the true reason why Android will never be able to provide long-term support. SoC manufacturers are not interested in the churn involved in updating their drivers to support the newer kernels that come with new Android versions. Most often it's because they want to keep their driver proprietary, but that is not the only reason; there is a cold hard truth about the nature of the market: new phones are released at a high cadence, while the process of mainlining a driver into the upstream Linux kernel is arduous and, depending on the attitudes of kernel maintainers, truly painful. No company out there could see any value in waiting for that process to end before releasing their hardware onto the market. Since they're already going to make the driver out of tree and release it as is, why would they bother? A year later, they'll repeat the process, again and again. Then Android updates, and the devices are obsoleted.

On the other hand, you can install the latest GPU driver on Windows 10 just fine, and it is the same driver as the one on 11. Windows doesn't constantly break your not-maintained-by-Microsoft drivers with newer versions of their kernel.


For the part that most people care about (which is the userspace side, that comes from the Mesa Project) you can freely update it to whatever is the latest version without really having to care about the kernel side. The only point at which the kernel side of the GPU equation actually matters for Mesa is when new HW comes out.

>>you can install the latest GPU driver on Windows 10 just fine, and it is the same driver as the one on 11

MS is certainly well known for breaking Windows drivers across major releases. The only reason the Win10 and 11 driver is the same is that Win10 and 11 are more or less the same thing.


To put a finer point on driver breakages... part of why Vista was so disastrously bad at launch was specifically because they'd drastically changed the DirectX kernel driver model to support DWM[0]. Nvidia in particular was responsible for a third of all Vista crashes, with another third being ATI (now AMD RTG). WDDM was a moving target right up until launch and it took about a year for Vista to actually be usable[1].

[0] https://learn.microsoft.com/en-us/windows-hardware/drivers/d...

[1] After a year of trying to convince people that Vista was good now (which it was), Microsoft threw up their hands and released Windows 7.


>MS is certainly well known for breaking Windows drivers across major releases. The only reason that Win10 and 11 driver is the same is because Win10 and 11 are more or less the same thing.

Not quite accurate, it depends on whether a new version of Windows has a significantly newer kernel.

Windows 2000 and Windows XP (and all its derivatives) share the NT5.x kernel, and thus drivers for any of them generally work in the others. Likewise Windows Vista/7/8/8.1 which all share the NT6.x kernel, and Windows 10 and 11 which share the NT10.0 kernel.


In other words, it's pretty common for Windows users to experience major driver breakage every time they upgrade to a new major version of Windows, because it was pretty normal to go from Windows 9x to XP (skipping 2k), then skip Vista and not upgrade until 7, then skip 8 and 8.1 and not upgrade until 10. The exceptions were mostly if you bought a new computer that had one of the more disappointing releases preinstalled, but in those cases driver breakage would have been much less of a concern anyway.


>In other words, it's pretty common for Windows users to experience major driver breakage every time they upgrade to a new major version of Windows

This is really arguing in bad faith. Windows XP was released in 2001; mainstream support ended in 2009, with the last security update in extended support coming in 2014. Vista to 8.1 covers 2006 to 2023, that's 17 years of having compatible drivers. Windows 10 was released in 2015, 8 years ago, and 11 has a compatible driver model that will go on for many years still.

What is "common" supposed to mean here? You call this common? Meanwhile, you can't even get a Google Pixel, the official Google phone, supported for more than 3 years of feature updates, with 2 more years of security updates. This is all because supporting the Linux kernel is a pain in the ass.


I'll also add that it is very common for hardware manufacturers to just provide drivers for newer versions of Windows as they come out if necessary, with the hardware ending up supported across multiple versions of Windows.

It is very seldom that you actually come across hardware without drivers, unless the hardware is so old that the manufacturer isn't selling (and thus isn't supporting) it anymore.


> variable length arrays

It was such a terrible feature that it was made optional in the C11 standard (you can be a conforming C11 compiler and not support it) and will never, ever be implemented in a Microsoft compiler (while C is not a priority for MS, do note that they did update to C11 and C17).

You can read their reasoning here:

https://devblogs.microsoft.com/cppblog/c11-and-c17-standard-...

The Linux kernel used to make use of the feature and has removed every instance of it from the code base:

https://www.phoronix.com/news/Linux-Kills-The-VLA

> Particularly over the past several cycles there has been code eliminating the kernel's usage of VLAs and that has continued so far for this Linux 4.20~5.0 cycle. There had been more than 200 spots in the kernel relying upon VLAs but now as of the latest Linux Git code it should be basically over.

While I do agree that it is wrong to consider C++ a superset of C, it is time to forget about C99's biggest mistake and treat it as if it didn't happen.


Agreed completely on all counts.

Never ever use VLAs.

My point is really just semantic, the overall argument I was responding to is intact.


Most are DIY; anything low-wattage can be turned fanless if you buy some parts and build it yourself. But OP was daft with his answer. As someone who doesn't like the Apple ecosystem and only uses PC, I can only admit defeat when it comes to power efficiency. There is no way you could make a true fanless system run as well as Apple's efficient SoCs. PC parts that are as powerful generate a lot more heat, and even with the best fanless cases and radiators there is only so much you can do to dissipate it.


Is it really that bad?

When semi-idle (just typing in a browser), turbostat on a i9-10900T system shows values that fluctuate around these values:

        PkgWatt  CorWatt  GFXWatt  RAMWatt
           4.86     3.89     0.15     1.57
           2.21     1.40     0.02     1.41
Mostly it's around the second line with the lower values. Of course, when under load, PkgWatt can go up to close to 100W (depending on the system tuning), but if the cooling solution can handle that for a sufficiently long time, is this really a problem? After all, this CPU, while somewhat dated, is not actually slow, especially for multi-threaded workloads.


> I don't know of any IDE that provides even half of what i want TBH.

> On Linux (which is my main OS these days) pretty much all options involve a text editor

You obviously haven't looked hard enough. CLion fulfills a decent amount of what you're looking for. https://www.jetbrains.com/clion/

A lot more limited, but still much more capable as an IDE than a text editor, is KDevelop. For example, your request here:

> (i haven't touched topics like VCS support and how i'd like to be able to see and use different version of the code from inside the IDE - like e.g. go back in time to a different function while the debugger is running - or anything that has to do with GUIs)

https://kdevelop.org/features/

> An especially useful feature is the Annotate border, which shows who last changed a line and when. Showing the diff which introduced this change is just one click away!

Also, an example snippet of the debugging integration:

> You can also hover the mouse over a symbol in your code, e.g. a variable; KDevelop will then show the current value of that symbol and offer to stop the program during execution the next time this variable's value changes.

https://docs.kde.org/trunk5/en/kdevelop/kdevelop/debugging-p...


> You obviously haven't looked hard enough. CLion fulfills a decent amount of what you're looking for. https://www.jetbrains.com/clion/

CLion has the problem of being a proprietary program that requires online validation, which is a big hard no for me.

Also, IMO the fact that you point out KDevelop being able to show per-line commit info and the current value of a variable, both being among the minimum you can expect from an IDE, tells me you didn't read what i wrote that i wanted, so by extension i doubt CLion does anything close to it either.

I don't "just" expect "some" debugger integration or "some" VCS integration; i explicitly wrote things like the debugger being able to call a function in the running program, or modify a function while the program is running, or being able to replace the current function with an older version of the function taken out of VCS. Among a ton of other things.

What you show KDevelop doing isn't anything special; even non-IDE "programmers' text editors" can do it.


You can still try CLion (the trial version) and see if it’s what you’re after. You don’t have to use it but IMHO complaining so much about there not being a decent IDE without even trying the proprietary ones isn’t very fruitful. CLion is what Borland was back in the 90s.


The thing is even if CLion does what i want (TBH i doubt it[0]), i wouldn't use it anyway so i don't see a reason to spend money on it. Note that my issue wasn't so much that it was proprietary (though it is an issue[0]) but that it requires an internet connection to function.

After all i still have and can use Borland C++ 5, Delphi 2 and C++ Builder 1 to this day without requiring any sort of connection or having Borland's permission to use the software i bought. I can't do the same with CLion.

[0] CLion relies a lot more than Borland did on external tools like GCC, CMake, GDB, etc, meaning that not only does it most likely not provide all the integrated functionality i mentioned, but there is also a very high chance it will stop working at some point as its dependencies change, so you do need to rely on JetBrains to keep it up to date without having access to the source code.


As much as people may hate Oracle, they employ significant contributors to the Linux kernel. https://lwn.net/Articles/915435/

Many in very important subsystems. The XFS maintainer (who just recently stepped down from this role) and contributor Darrick Wong works for Oracle. Meanwhile, btrfs is the creation of Chris Mason, who worked there until 2012. Modern filesystems on linux owe a decent debt to Oracle. Good luck running an Oracle-free linux.

I often find it interesting when people imply the company is freeloading for having chosen the path of making an RHEL clone. They cloned RHEL because an unhealthy, dependent ecosystem was built around the one, singular linux distribution, not because they are unwilling to fund work.

Giving some serious competition to Red Hat can only be a good thing.


Competition is great. Xeroxing the code and undercutting them isn’t actually competing.

Oracle went after Red Hat when Red Hat bought JBoss. Larry said as much. They didn’t like Red Hat actually competing with them, so Oracle sought to undermine RHEL & have a full stack so they didn’t have to share customer spends.


> They cloned RHEL because an unhealthy, dependent ecosystem was built around the one, singular linux distribution, not because they are unwilling to fund work.

They cloned RHEL to spite Red Hat for the JBoss acquisition in the mid-2000s.


...because Oracle themselves had bought BEA Systems, owner of Weblogic. JBoss was a competitor to Weblogic.

https://en.m.wikipedia.org/wiki/BEA_Systems


Your timeline is wrong. JBoss was acquired by Red Hat in 2006, then Oracle Linux was announced a few months later also in 2006, then Oracle bought BEA Systems in 2007.

Oracle and Red Hat both entered the bidding for JBoss and Red Hat ended up winning. Larry Ellison took this very poorly.


Interesting, thank you for the correction.

I really remembered the BEA acquisition as the prior event.


And now we have JFR in OpenJDK thanks to it.


> Granted, my experience with DEB has mostly been limited to Ubuntu - so it could be that/Canonical.

It is not specific to Canonical, but it is overridable behavior.

For the dpkg prompting on configuration file changes:

https://raphaelhertzog.com/2010/09/21/debian-conffile-config...

> You can also make those options permanent by creating /etc/apt/apt.conf.d/local:

    Dpkg::Options {
       "--force-confdef";
       "--force-confold";
    }

Along with invoking apt with -y (assume yes), this will get you close to the behavior you seek. Or as a permanent setting:

APT::Get::Assume-Yes "true";


Indeed, thanks for getting into that. I was rambling a bit and saved the space.


Sid is not, contrary to what others replied to you, a proper rolling distro. Particularly during the times when there are freezes for a new stable release, the repositories can become broken in a way that you may not be able to install software you want because its dependency chain is not fulfilled correctly. People who almost never install new software might not notice the issue, as it's not like sid will break what you already have installed -- as long as you don't say yes to a full-upgrade that shows a concerning amount of removals.

I've had experience with it and Arch in my rolling days (I am now a debian stable user, and use flatpaks and pet containers when I need newer software and feel much more at peace with my systems now), and Arch would, in my experience, never be the source of breakage. Things do break at times on a rolling release distro, but in the case of Arch, that would be because something upstream changed in their software, not because of the packaging of the distro itself.

Debian testing is not the answer either. When bugs make it to testing, they can linger for an incredibly long time. On sid, or Arch, breakage is more likely to happen, but it also gets fixed quickly.

IMHO, the current landscape has solved the whole reason that made rolling releases desirable. With flatpaks and pet containers, there's nothing you could possibly miss, while you run a stable, unchanging system underneath. If you need support for hardware so bleeding edge that there isn't a backported kernel yet, I'd still find it less effort to package my own kernel from upstream until the backport repositories support my hardware ( https://www.debian.org/doc/manuals/debian-kernel-handbook/ch... describes how to do this succinctly; it's not as much work as it sounds ) than to run a fully rolling distro on my computer.


> How's the vim ecosystem now? Is vimscript still dominant?

You can have a full neovim experience with all sorts of modern extensions without using a single line of vimscript. Some people even replace their init (neovim's vimrc) with lua, but I am of the opinion that it is a step too far, as lua isn't particularly suited to writing configuration files and the result is too verbose for my taste.


My config is a horrible Frankenstein of both.

I use lua when examples are in lua or when I need some "logic" (such as assigning defaults to a var and then passing that around/overriding). And that's embedded in vimscript.

I don't really like either. Vimscript has always been a horror to me, even though I've been using vim for some 20 years now, almost exclusively. Lua is "that thing that I should really sit down and learn. But not now, I've got stuff to finish".

What does lua, for an end user, offer that something like yaml+python cannot? I really wouldn't mind setting all sorts of flags, defaults and vars from some init.yaml, and then having some init.py to handle the few places where I do need actual logic. Why was lua picked? Why did vim build its own language instead of moving to an existing one for its config? Am I just weird for never sitting down and learning lua? Or vimscript? Or both?


Lua is faster than Python and also easier to embed in vim [1], which are both big advantages for the end user. It means fast plugins and no hassle with having the correct Python version.

Why Bram built his own language and then doubled down on it with vim9script, I don't really understand.

[1] https://neovim.discourse.group/t/why-was-lua-chosen-for-neov...


> Why Bram built his own language [...] I don't really understand.

What language would you have chosen in 1991 instead of creating vimscript? The vimscript language is a very natural extension[0] of Bill Joy's "ex" language, which had been used in vi since 1976. The history of vimscript is not weird; it's just a fairly natural continuation of existing practice. Vimscript9 is a least-friction update for modern times.

[0] https://en.wikipedia.org/wiki/Vim_(text_editor)#Vim_script


I should have written that it's something I don't really know anything about.

According to your link, scripting was only available from '98, at which point Lua and Python were already around, though I don't know how viable adopting them would have been. Funny you mention ex, because I find running ex commands from vimscript the clunkiest thing about it.

In my opinion the least friction for the ecosystem would have been to follow neovim and adopt Lua.


Independently of vim configuration, lua is a cute general-purpose language worth learning in itself. It is much simpler than python, and the documentation is outstanding.


Ah you see, you just wrap that lua in a lisp in some terse macros and there you go, instead of writing `:set formatoptions+=j` (yuck, vimscript!) or `vim.opt.formatoptions:append"j"` (vom, lua!) you can write the clearly superior `(opt formatoptions +j)` (heck yes, ivory tower).


I have a mainly lua config, but pulled in some vim files for bits and functions that were already written in vimscript.


> These projects are no joke and a few hurray contributors not lasting more than a couple of months won't cut it.

Those same contributors who remained on vim instead of neovim didn't put much effort into the vim9 ecosystem either. I can't imagine them being willing to maintain all of vim's baggage when literally nobody writes vim9script. Classic vimmers tend to stick to the old vimscript.

As you said, contributing a few patches to classic vim is one thing, but becoming an actual maintainer of the behemoth that comes with a ton of useless baggage is a whole other thing. Maintaining a programming language no one uses... Sisyphus would be proud.


> Let's see how much traction Vimscript9 will get.

In practice, none. It's a wasteland. Extensions that aim to be compatible with both vim and neovim use the old vimscript, while lua has very, very significant traction in the neovim world among people who try to bring IDE-like features to vim. The number of people willing to write extensions that only work on vim proper, which is what would happen if they used vim9script, is close to nonexistent.

> I would rather expect a EGCS/GCC "merger"

Or libav vs ffmpeg.

