Linux has a different model than Windows: it doesn't really have separate drivers. This has a number of upsides and downsides; let me mention just one.
Windows used to have a really bad reputation for stability. I'm sure a lot of this was Windows' own fault, but a lot of it was bad third-party drivers that crashed.
You might remember that Nvidia does have a driver for Linux. The approach they took isn't really supported, which is one of the reasons graphics drivers are always such a nightmare under Linux.
If your ThinkPad is new, you're likely not even using the Nvidia GPU or its driver (nouveau), or you're using only the Nvidia GPU if you have discrete mode. For older designs with weird wiring and Optimus, nouveau is very problematic even for basic use with external screens. Problems range from screen tearing (reverse PRIME, no solution) to instability, poor performance, and fundamental usability issues such as the screen not coming back on after suspend. Not that this is nouveau's fault; it is a reverse-engineered driver, after all.
AMD should, in theory, be quite good these days, especially with a new kernel. Note that the AMDGPU-PRO drivers are generally not recommended despite the "pro" in the name. The newer AMD drivers are all free and are being merged upstream continuously; PRO (AFAIK) is the older proprietary driver with specific optimisations for things like CAD, but otherwise doesn't have much use these days.
These days I use AMD, and I just haven't had to think about it. Before that I had Nvidia, and even there the open drivers were fine until I wanted to play games.
Interestingly, Windows improved its stability problem through a combination of moving drivers into user space and formal verification tools. Unlike on Linux, when your video driver crashes it usually just results in a momentary blank screen instead of a dead system.
I would not say that's true. A crashed GPU driver on Linux will most likely leave you with a locked-up screen that doesn't accept any user input.
On the contrary, Windows GPU (WDDM) drivers are restartable and can recover from crashes without even restarting the user's graphical apps. This has been the case since at least Windows 7.
One upside is simply the fact that you don’t need to hunt down drivers and install them.
I’ve had to install wifi drivers for a Windows machine before. Hunting for the right driver, downloading it, and transferring it from my phone was quite a pain, as opposed to the wifi just working on Linux.
That's not entirely true. It works that way because your distro maintainer built a kernel with nearly every module turned on to support the widest range of hardware out of the box.
I've found that it still happens almost every time I try to install a consumer Windows edition on server hardware, or server Windows on consumer hardware, even for Intel Ethernet. And it's often necessary to edit an INF file so that a perfectly functional driver won't refuse to load on such an "unsupported" configuration.
Linux does support separate drivers, in the form of out-of-tree kernel modules (via DKMS, Dynamic Kernel Module Support). But this is about default upstream support, and good upstream support is always preferable to a chaotic DKMS mess.
You could've just used DKMS to build the 4.18 kernel module on older kernels [1] (unless you run Fedora, which has the driver baked into the kernel; then you have to recompile your own kernel...).
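For readers who haven't met it: DKMS drives everything from a small `dkms.conf` placed under `/usr/src/<name>-<version>/`. A minimal sketch for carrying a backported HID driver might look like this (the package name is illustrative, not an actual published package):

```sh
PACKAGE_NAME="hid-magictrackpad2"      # illustrative name, not a real package
PACKAGE_VERSION="1.0"
BUILT_MODULE_NAME[0]="hid-magicmouse"  # module built from the bundled source
DEST_MODULE_LOCATION[0]="/kernel/drivers/hid"
AUTOINSTALL="yes"                      # rebuild automatically on kernel upgrades
```

`sudo dkms install hid-magictrackpad2/1.0` then builds and installs the module, and the AUTOINSTALL flag is what keeps it alive across kernel upgrades: exactly the catch-up work that mainlining makes unnecessary.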
Heck, you could've used the Apple Magic Trackpad 2 on previous kernels. It's just that the multitouch wouldn't work, which is the whole point of the device.
FWIW, the device doesn't work out of the box with Windows either.
They are drivers, as others have mentioned. However, I disagree with those calling it stupid or a mistake. Since Linux and most of its drivers are open source, developing Linux together with its drivers is really not that bad an idea, in my opinion. It allows the API to evolve without making your old hardware stop working. This is definitely not a theoretical concern: a lot of old hardware I had, such as scanners, never worked beyond Windows XP because the drivers were simply not compatible.
The lack of a stable ABI is a downside but also a benefit of this model. The ABI can always be improved, but often at the cost of breaking so-called out-of-tree drivers (Nvidia's proprietary driver, VMware's networking and VM monitor drivers, etc.).
Linux doesn't have a stable driver interface. There's no way to write a driver that works on a few different versions of Linux. The way you are "supposed" to do it according to the kernel developers is to open source your driver code and convince the kernel developers to merge it into the kernel itself.
Yes, this is a stupid design. They do it because it means they don't have to worry about backwards compatibility or API stability at all, which is a nice thing not to have to worry about. But it comes at the cost of bad hardware support in Linux (and difficult-to-update Android phones).
They do it so that anyone can improve the state of the drivers, not just programmers with apple.com email addresses working in secrecy and dumping a shit binary over the fence.
It is a driver. It's just that it is released for production with this version. Linux bundles most drivers, unlike say a microkernel, which would hand the job off to userspace.
True. I would just add a few notes for non-Linux users.
- Drivers are bundled with the kernel but loaded dynamically on request, i.e. supporting more devices doesn't make the running kernel any bigger or slower. The Linux kernel started out mostly static and became more and more modular; today, save for developers or early adopters, kernel rebuilds are very rare among users. Embedded boards aside, I don't recall having rebuilt a single kernel since 2.6 on normal PCs.
- Having drivers bundled with the kernel solves the problem of the piece of old hardware whose driver disk we lost while the manufacturer's site resolves to nowhere because they're no longer in business. Caring about older hardware seems of no importance in the desktop PC business, but it's not uncommon in the industrial world, where one happily trades a 100x speed loss for a 10x reliability gain and there's still a lot of old, perfectly functioning iron out there.
- Drivers are brand-free. Unless specified otherwise, they support the chipset, not the hardware brand, and they definitely don't bundle other junk, which is one of the plagues of the Windows ecosystem: five cards from five different manufacturers using the same chipset all come with their own drivers and associated tons of rubbish, because the vendors fight to splatter their name on your desktop. Under Linux, if you have five cards from five manufacturers, you need one small driver, and all software interfacing to standard device drivers can use all five cards. There's no such thing as "one card, one driver, one software". (Big exception for expensive niche proprietary hardware, of course.)
I don't see how including a driver inside the compiled kernel image can NOT make the resulting Linux OS bigger than not including it.
External modules (on the filesystem) won't, but we're talking about the bundled ones, right?
I'm old enough to have played embedded engineer using floppyfw on a spare 486 board screwed to a piece of wood along with a 1.44 MB floppy drive and a power supply cannibalized from somewhere. That was my embedded development system nearly 20 years back :)
A bit later I ditched the floppy in favor of a spanking new 4 or 8 MB (megs, not gigs) flash parallel ATA "diskonmodule".
Just checked, I'm surprised that the floppyfw page is still there.
https://www.zelow.no/floppyfw/
Speaking of small distros, I gave both DietPi and Tiny Core Linux a try on virtual machines and was amazed at how well they perform.
Bundled doesn't mean statically linked; it would be plain dumb, if not impossible, to build every driver out there into the kernel image. Nowadays just about everything is dynamically loaded, so it doesn't impact kernel size and doesn't waste memory or CPU cycles unless loaded, which happens only when necessary (e.g. when a USB device is inserted).
While it's true that a microkernel would hand the job to userspace, this is unrelated to Linux's bundling of drivers.
Linux, unlike the Windows kernel (which is also not a microkernel), does not have a stable kernel API for drivers. This means that drivers that live outside the tree have to play catch-up to every change that the kernel devs make.
When a driver is in-kernel, the person that made the changes fixes the driver as part of their change.
So Linux really encourages drivers to become open-source and submit for inclusion in the mainline kernel, just to avoid the maintenance hassle.
Some clever brain thought drivers should be part of the kernel. I personally think that's what has prevented Linux from conquering the desktop, because GPU vendors have no stable API to build upon. It's also the reason Android phones get no updates. It was a horrible decision.
Oh, THAT'S why Android gets so few updates... (pardon the sarcasm). It's all because of those damn Linux renegades, ruining software for the rest of us. Somebody ought to stop the Linux bullies preventing those poor corporations from updating their products.
> It's unreasonable to expect OEMs to keep updating the code every time the driver API is broken, especially when you have dozens of mobiles.
Nobody expects that. Once drivers are in the mainline, the OEMs don't have to do any work to keep them from getting broken when Linux devs decide to do the kind of refactoring and systemic improvements that are impossible on Windows. All Android OEMs have to do is git pull, make, and send the new OS image off to the same QA processes any other update needs before deployment.
The drivers aren't where the important trade secrets are. Those are all in the hardware or in the proprietary firmware that the driver has to upload to the hardware, or in a userspace blob as with AMD's GPU drivers. The code that actually runs in the kernel on the host CPU doesn't reveal any valuable IP given the way most hardware is designed these days.
They don't have to. They just need to open-source their drivers and mainline them, don't they? People will gladly maintain what Qualcomm et al. aren't willing to, they just need to abandon the binary blob bullshit.
Yes - very unreasonable. I'm sure it's an enormous expense. Better to keep those devices 10 security updates behind to make sure the bloatware still works. I mean, they already paid, right? Why bother updating.
Neat news. I've been thinking about trying one of those out for a while now - I really want to find a pointing/clicking device other than a traditional mouse as my hands age, but nothing's properly worked yet.
On macOS, they are absolutely awesome. The Magic Trackpad 2 has a motor for haptic feedback. Applications like OmniGraffle use this to give a subtle vibration when you are dragging an object and it aligns with another object. So you can feel the alignments when you’re dragging.
I've been a fan of Apple's trackpads for a long time, but I was still surprised at how much the addition of haptic feedback improved them. Just getting away from the hinged click mechanism was nice, but being able to adjust in software the force required to click is a really great feature, and it doesn't require support at the application level.
Yes, I’m aware. I’m just clarifying that Force Touch, which is a separate but easily confused feature, requires support from applications. “Basic clicks” are handled by the OS, and the amount of force necessary for one isn’t particularly important to apps.
I use the external Apple trackpad exclusively, it is in my opinion the only viable pointing device right now. Why Apple is the only company making these is beyond me.
I find a classical clicky-button mouse to be superior to trackpads (or Apple’s weird no-button mice) and I am extremely happy that not the whole industry is blindly copying Apple at this point.
Do you mean superior to trackpads generally or have you specifically tried and rejected the Magic Trackpad 2?
I ask because like the parent poster I love the Trackpad 2 on macOS but I hate most other trackpads when using them with Windows/Linux. Meanwhile I also hate using most mice I've tried on macOS including the Apple ones but am fine with mice on Windows/Linux.
I imagine there are probably tracking speed and acceleration settings that would make the unpleasant combinations work better but I usually give up after making a few simple tweaks.
I wasted a lot of time tweaking low-level Synaptics parameters in the config files when trying to make the Magic Trackpad work well for me on Linux, and I think I've exhausted my lifetime supply of patience for excessive pointer setting adjustments.
I still wonder how well it'd work without Apple's software to drive the thing. Apple's trackpads are always less magic when using Windows, for instance.
Yes, isn't it crazy how this seems to be something that is soooo hard to duplicate? I never feel the need for a mouse on my MacBook; switch to Windows or Linux and I immediately set out to find my mouse. (In contrast, external mice are just a pain in macOS. I feel that the acceleration is way off, like I'm mousing through sticky mud, and the methods to adjust it were removed five or six versions of OS X ago.)
It actually works very well with large screens. I use a Magic Trackpad with my 27" iMac.
> I feel the old ball type was superior in this regard, you could rely on momentum and just give it a flick for large and quick movements
That's exactly how the pointer works on macOS, for both traditional mice and trackpads. That's what makes the Magic Trackpad work on a large screen: I can do a little flick to move the pointer across the screen, and I can do it with just a finger, not my whole wrist/arm.
At work, I started off using a mouse for my multi-monitor setup, but quickly switched back to a Magic Trackpad. I think it scales rather well, because macOS provides a nice acceleration curve to cross the screen quickly while still allowing for precise movement.
On the topic of the mouse, I suspect you (along with nearly everybody else I know) are used to Windows-style tracking. Me, I'm used to macOS-style acceleration, so using mice on other machines always feels inaccurate to me.
But back to the topic, it is rather interesting how hard trackpads are to replicate on other systems with the same level of quality as Apple's own trackpads under macOS. I suspect that's why one of the recent Windows 10 updates brought in consistent APIs for trackpads and multi-touch gestures.
If Microsoft are only just getting around to it now, I've my doubts there'd be any consistency on the matter over in Linux land. Not so much for lack of trying, but in a land of multiple desktop environments, each with their own ways of doing things, and the uncertainty of whether to keep improving X or focus everything on Wayland … well, I doubt using the Magic Trackpad 2 under Linux would be all too pleasant.
It's getting better, especially now that the Ubuntu people have taken an interest, since it's the only option on recent GNOME even under X11. There's even a gsettings option to disable tap dragging, my personal peeve!
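For anyone hunting for that option: I believe it lives in the GNOME touchpad schema as `tap-and-drag` (key names can shift between GNOME releases, so treat this as a pointer, not gospel). As a dconf keyfile fragment:

```ini
[org/gnome/desktop/peripherals/touchpad]
tap-and-drag=false
```

The one-liner equivalent would be `gsettings set org.gnome.desktop.peripherals.touchpad tap-and-drag false`.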
> But back to the topic, it is rather interesting how hard trackpads are to replicate on other systems with the same level of quality as Apple's own trackpads under macOS. I suspect that's why one of the recent Windows 10 updates brought in consistent APIs for trackpads and multi-touch gestures.
That's just as subjective as the regular mice. Personally I find their touchpads miserable to use compared to the competition.
I'm being absolutely genuine when I say "what competition"?
PC notebooks still ship with tiny, unresponsive things that make two-finger scrolling a mess and multi-touch gestures drop — apparently input, the one thing one does all the time with a computer, is the place to cheap out on.
On the external front, I've got a brand new wireless Logitech multi-touch trackpad right next to me whose responsiveness still doesn't hold a candle to what I was using on a PowerBook G4 back in 2004.
Then again, it's really hard to compare. Like I say, the hardware isn't what does the magic; it's the software. macOS has had multi-touch trackpad support done right since one of the later versions of 10.4, so we're talking something like 2006 or 2007. Microsoft doesn't seem to have taken the whole thing seriously until relatively recently, with last year's Precision Touchpad hardware spec and APIs, and I suspect that's because as soon as Microsoft started doing their own Surface hardware, they realised (better late than never) that you can't rely on third parties to get this stuff right. So we'll see what comes of that.
> PC notebooks still ship with tiny, unresponsive things that make two-finger scrolling a mess and multi-touch gestures drop — apparently input, the one thing one does all the time with a computer, is the place to cheap out on.
Exactly those. I'd take them over Apple's syrupy mess any day.
Can you try to explain things more usefully than "syrupy mess"? Do you find that you cannot adjust the tracking speed to be fast enough for your liking? Do you want more or less acceleration? Are you experiencing unusually excessive input latency? Is it the physical texture of the trackpad surface that bothers you?
Replaced my 2016 MacBook Pro with a T480s, and the touchpad is okay (running Ubuntu). I don't miss the old touchpad so much for pointing, but two-finger right click doesn't work for me on Ubuntu (tapping does, but not clicking), it's still the old-fashioned hinged design, and it doesn't have inertial scrolling, which is a real drawback. But the keyboard is so much better, and the keycaps won't break and fall out. So it's still a win for me; I don't want to support Apple as a company and their planned obsolescence.
OT: out of curiosity, did you get the one with the FHD or the WQHD screen? I'm having nightmares trying to set it up with an external monitor and scale the UI properly in GNOME.
WQHD, but just for the color gamut. Using GNOME with 2x scaling and xrandr scaling of 1.25 to get an effective 1600x900 (I think). I don't have an external monitor.
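That arithmetic works out. A quick sketch of the effective-resolution math, assuming the T480s's 2560x1440 WQHD panel (1.25 is written as 5/4 to stay in integer arithmetic):

```shell
panel_w=2560 panel_h=1440   # assumed WQHD panel
gnome_scale=2               # GNOME HiDPI integer scaling halves the logical size
# xrandr --scale 1.25x1.25 then enlarges the logical desktop again:
# effective = panel * 5/4 / gnome_scale
echo "$(( panel_w * 5 / 4 / gnome_scale ))x$(( panel_h * 5 / 4 / gnome_scale ))"
```

The corresponding command is along the lines of `xrandr --output eDP-1 --scale 1.25x1.25`; the output name `eDP-1` is an assumption, so check `xrandr -q` for yours.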
Touchpads are great for clicking and scrolling, especially if you set them up to 'click' with just a tap. Alas, they're much worse for dragging, so e.g. graphic design is problematic. Even with dragging via double-tap-and-drag, which macOS supports, it's a big nuisance, especially when dragging outside the touchpad area (which you can do by changing fingers mid-drag).
I've heard good things about vertical mice: they're supposed to keep the forearm from being rotated unnaturally, and the hand position seems more relaxed. But I still have doubts about pressing the thing, as that seems to be the root of RSI.
In my experience, staying away from computers altogether works much better for the hands than any devices I've tried, which is not good news for a computer geek.
> Alas, it's much worse for dragging, so e.g. graphic design is problematic.
What kind of dragging operations are you talking about? Drag and drop operations and box selection are no more difficult on a Mac trackpad than with a mouse, and running out of trackpad area was pretty rare even on the pre-Touchbar generation before they almost doubled the size of their trackpads. Drawing might be harder, but neither device is at all good at that task; that's what Wacom is for.
I tried this recently and got fairly bad pain in my thumb (bottom towards the palm area). I’ve since switched back to just using my old vertical mouse and the pain is gone. It’s a shame because I did get used to it and quite like using the ball.
Maybe try a Kensington Expert Mouse [1]. I use these a lot. I'd suggest the wired version - the wireless one seemed to interfere with my Bluetooth keyboard (probably the keyboard's fault though).
I've used the Kensington SlimBlade for years and really love it. The only difference from the Expert Mouse is that instead of a scroll ring you just spin the ball in place. Completely eliminated the RSI I was getting in my mouse hand and is my favorite hardware for mouse-like functionality.
I've actually played quite a few FPS games with a Kensington trackball.
It takes some getting used to, but if you set the ball up so that a 180° rotation corresponds to the cursor travelling the width of the screen it seems to work well.
Disable acceleration on a trackball – whatever your use.