Authenticated Boot and Disk Encryption on Linux (0pointer.net)
343 points by Aissen on Sept 23, 2021 | 206 comments


Only had time to skim it, but even without all the details it is an excellent overview of the state of the art regarding the secure boot chain. You can learn many of the points by looking into the Android boot sequence, but this is a more generic, everything-in-one-place write-up. Especially the mechanisms behind dm-verity seem to be not widely known and, in my view, underappreciated.

> This is all so desktop/laptop focused, what about servers?

I know that some of the more powerful, Linux-based automotive embedded systems use a similar design. While important for desktops/laptops and servers, the mentioned points are crucial for systems where physical security of the hardware is limited.

Oh, and BTW, if 0pointer.net doesn't ring a bell: the author is Lennart Poettering.


If your attacker is both sophisticated and able to access your hardware directly, the game is over; nothing we can do can currently avoid this.

What we can do is address the scenarios in which an attacker is either unsophisticated or remote. Normal network security takes care of the latter case, and for unsophisticated attackers, a good FDE (Full Disk Encryption) system covers it nicely. For laptops you can either live with typing in a password at every boot, or get a USB key of some kind. For servers with FDE you might want to use something to enable remote and unattended reboots, like (shameless plug) Mandos¹.

1. https://www.recompile.se/mandos


> If your attacker is both sophisticated and able to access your hardware directly, the game is over; nothing we can do can currently avoid this.

I think with blanket statements like this it's important to define "sophisticated".

For example, what kind of resources are required to break the disk encryption of a computer protected with tpm 2.0 + secureboot + luks w/tresor* ?

I'm of course not talking about social engineering or fooling the user into typing their password on a fake system, since this is preventable.

* https://www.cs1.tf.fau.de/research/system-security-group/tre...


GP already made space for this analysis by admitting a sophisticated/unsophisticated axis, thus the contrary stance of your comment is unnecessary.

You don't really make an argument on where your example should fall on that axis, which is what I would have preferred to see.


I think it's a fair doubt.

In this case, I don't know if sophistication could mean an evil maid attack with a breadboard and $15 worth of electronic components (which was all it took to break TPM 1), or something better than what the FBI has, since they struggled to unlock a terrorist's phone. I can worry about the former, and relax about the latter.

I'm not an expert, and when someone throws out that statement with authority I'd like to know whether I'm vulnerable to a geek or to the CIA.


It's certainly possible to MITM a TPM; this is why MS built all the keys into the CPU on the Xbox One[1].

[1] https://www.platformsecuritysummit.com/2019/speaker/chen/


AFAIK that wouldn't be the case with TPM 2.0


There are millions of lines of code in a BIOS, supported by billions of transistors. A wireless component can fit inside any innocuous chip in any part of the system.

Software-wise, the BIOS can contain many flaws that can be exposed via peripherals, remotely, etc. The certificates are kept secure by unknown parties, protected by algorithms whose validity few can prove.

Physical-access attacks are rarer (from sophisticated attackers, that is). Sophisticated/dominant attackers leverage low-end attackers to do the actual physical attack; identification and knowledge of such an attacker greatly diminishes that attacker's capability and credibility. Low-end attackers are also a resource.

Remote attacks do exist. One of the common attack frameworks is a type of radar attack that exploits bugs in the hardware design. They work like RFID, where some component has an exploit that can be activated remotely by radar. Defending against such attacks is quite difficult. Other attacks include errors in manufacturing: not all chips come out of the oven equal, and a sophisticated attacker need not risk physical detection when they are capable of deducing how to exploit those errors.


> For example, what kind of resources are required to break the disk encryption of a computer protected with tpm 2.0 + secureboot + luks w/tresor* ?

There are two main vectors of attack to get access to the data. The simpler ones are those that attempt to get the key by recording it when the user types it in, like hooking into the keyboard. No real need to make it a fake system, since no keyboard that I know of has security built in to prevent MITM attacks. If we are asking what spy agencies do, I would also suspect that the camera on a laptop could be replaced with a look-alike containing a transmitter.

The other main attack vector is to target the hardware while it is running, without waiting for the owner to type in the password. Getting access to the main bus would be the primary target if the RAM is not vulnerable (USB vulnerabilities, debug pins on the motherboard, and so on). Software vulnerabilities are also an alternative, and if time is not a problem one could in theory wait until a vulnerability is found. I would assume that in mass-consumer laptops, servers and desktops all internal connections are to a degree vulnerable to maliciously designed hardware, but the main question is how to do it hot, without crashing the machine.


> break the disk encryption of a computer protected with tpm 2.0 + secureboot + luks w/tresor*

On Windows (even modern versions, IIRC) you can sniff the traffic to the TPM (or maybe forge requests? I think it was simply sniffing though) and get the keys to decrypt the HD, if there is no additional protection (like a PIN or password). I don't know if this is also applicable to Linux stacks. It's kind of weird that the interface has not been designed and/or programmed to resist such an attack, although of course password-less full disk decryption is often going to be less secure than with a password. At least Macs are not subject to the same attack, I think?


FWIW, TPM 2.0 can encrypt the TPM traffic, so the attack by physically sniffing the TPM (https://dolosgroup.io/blog/2021/7/9/from-stolen-laptop-to-in...) is no longer possible.


It can, but per the article, even with a not-so-old Windows stack this encryption is not used:

> At the time of this writing BitLocker does not utilize any encrypted communication features of the TPM 2.0 standard, which means any data coming out of the TPM is coming out in plaintext, including the decryption key for Windows. If we can grab that key, we should be able to decrypt the drive, get access to the VPN client config, and maybe get access to the internal network.


> For example, what kind of resources are required to break the disk encryption of a computer protected with tpm 2.0 + secureboot + luks w/tresor* ?

a: 1 copy of all the computer hardware present in the target system.

b: 1 secure radio link of sufficient bandwidth.

c: 1 apartment or van near the target system.

d: Sufficient equipment to interface with the target system (now in said apartment/van[c]) well enough to replicate its login UI (via radio link[b]) on the fake copy of it[a] you left in its place when you stole it.


> For example, what kind of resources are required to break the disk encryption of a computer protected with tpm 2.0 + secureboot + luks w/tresor* ?

Intel ME and the equivalent AMD PSP.

> I'm of course not talking about social engineering or fooling the user into typing their password on a fake system, since this is preventable.

As others pointed out, when the attacker has physical access it is game over. Also, network access is insecure due to management engines.


Again, blanket statements. What ME flaws?


There are dozens of known CSME vulns -- there's a list on cvedetails [0] which itself seems to be missing some (e.g. 2019-0090/2019-0091). Plenty of these could be used by a sophisticated attacker with physical access, and some can even be executed remotely.

Here's an Intel Whitepaper detailing one such vulnerability of moderate severity (CVE-2019-0090, CVSS 4.4) [1]. From the 'Potential Consequences' table:

> Unauthorized BIOS compromises OS loader and OS integrity... All use cases / user secrets protected by Intel TPM are compromised.

Other bits and pieces of firmware including AMT have been similarly plagued with vulns [2], such as CVE-2017-5689, with a CVSS score of 10!

These are often difficult to patch, and even more difficult to convince manufacturers to spend the effort testing and distributing updates for their products. My own laptop is still running a version of CSME vulnerable to many of these attacks.

[0] https://www.cvedetails.com/vulnerability-list/vendor_id-238/...

[1] https://www.intel.com/content/dam/www/public/us/en/security-...

[2] https://www.cvedetails.com/vulnerability-list/vendor_id-238/...


> nothing we can do can currently avoid this.

Unless you scope "sophisticated" as "can compromise hardware we currently believe is secure", we can get to the point where the only mechanism is a hardware keylogger (which is made far more complicated by, e.g., using a Bluetooth keyboard). This has a few problems:

1) It's relatively easy to detect: you're adding a new physical object.

2) You're only seeing a small amount of what you want to see: you have no ability to inspect the data on the disk, and if the user is using MFA then getting their passwords doesn't get you much.

3) Exfiltrating the data isn't easy. Either you need further physical access to extract the object, or you need it to broadcast data occasionally, and that's another thing that's going to make it easier to detect.

If you're able to move from "A physically present adversary can compromise the system in a way that gives them complete access" to "A physically present adversary can log your keystrokes", that's a win.


The issue isn't compromise of the remote system, but trust of that remote system. TPM 2.0 gives us the ability to make measurements before establishing trust.


Well, you can probably annoy at least some attackers by using tamper-evident seals on your PC case and acting accordingly if they are broken, because that would likely prevent them from completing the first physical attack. But that won't help in the long run.

Outstanding locks on a reinforced door and a combination of visible and hidden cameras in the hallway are probably more effective.


All tamper-evident devices can be defeated by a sophisticated attacker with physical access.

https://www.slideshare.net/MichaudEric/how-to-steal-a-nuclea...


This is incorrect


Perhaps you'd like to cite a source.


Wait, did I miss something or how does a sophisticated attacker break encryption with ~100+ bit password entropy?


I believe OP is referring to the scenario where a sophisticated attacker has access to the hardware and you continue using it afterwards. They could, for example, change the unencrypted /boot partition to log the password you use to decrypt your partition. Or, if you sign the boot partition, they could install a hardware keylogger, or do any other kind of hardware modification that defeats the security. Preventing this kind of attack is incredibly difficult. There are many means to mitigate those kinds of attacks but, for the most part, they just make things harder, so that the attacker needs to become even more sophisticated.

Full disk encryption only helps if you are worried about your hardware getting stolen.


Just wanted to add that this is generally referred to as the "Evil Maid" scenario in case someone wants to Google it.

It can be mitigated somewhat with secure boot, tpm technologies etc but due to the breadth of possible attacks it's really hard to do against a serious attacker.

Even the sound of the keyboard is often enough to give your password away.


Or encrypt the boot partition. Libreboot supports this.

(Though to be fair, Libreboot only runs on a very limited number of old devices).


No amount of software complexity defeats a hardware keylogger, though... or like, a no-touch version of that, such as a spycam in the room that records the sight and sound of you typing your password.


Security token defeats all that.


And glitter nail polish :)


If they have access to your device, they don't have to. They'll just snoop on whatever you use to enter your key.


How does one go about breaking FDE though, if it's secured by a password that's only in your mind?


Keylogging? Depending on the bar for sophistication, I'm told this could be done by a microphone at the other end of town, or by measuring the power consumption of your building from the grid... but there's probably something at the middle point between these paranoid risks and something simple anyone could place in your computer case / keyboard.


> If your attacker is both sophisticated and able to access your hardware directly, the game is over; nothing we can do can currently avoid this.

This, particularly if your hardware has Thunderbolt capabilities. It's been proven that Thunderbolt can be exploited to give an attacker direct memory access[0], and there's really no recourse for it in the spec. Physical access is going to be game-over for a very long time, unfortunately.

[0] https://thunderspy.io/#affected-apple-systems


> there's really no recourse for it in the spec

Not only is there a recourse for it, modern firmware implements it: you just use the IOMMU to restrict Thunderbolt devices to accessing only their own buffers.


An option that wasn't considered in the article at all is to have the initrd and (optionally) the secret key used for decrypting the disk on a separate device (or devices), for example on a 1 GB USB drive. This way the big disk can have full, real FDE (not LUKS). To an attacker not in possession of the USB drive, the big disk just looks like random data. And the USB drive can be kept on one's own person and easily destroyed if necessary.

The main part is having the initrd on an external, small, device. The secret can be either a regular passphrase, or stored somewhere on the small device.

Works fine on my Arch Linux system. I mount the initrd file system to /boot when I need to update it.


If you are mugged, that USB key is going with the mugger (see also police raids, or airport "inspections", e.g. when entering or leaving bad states). I prefer memorized keys, ideally with a duress passphrase, for this reason.

But hypothetical scenarios without reasonable threat modelling are a losing man's game. You can never one-up a mythical adversary. Protecting against the easy 80% of threats takes only 20% of the effort.


> This way the big disk can have full, real FDE (not LUKS).

I thought LUKS was the only way to do FDE in Linux. What other options for FDE are there?

But yes, putting /boot on an external USB drive is the only way I know of to get FDE on your hard drives. Possible with UEFI, not BIOS/MBR.


LUKS leaves encryption metadata unencrypted. For real FDE, use "plain dm-crypt" with cryptsetup: https://security.stackexchange.com/questions/109223/what-doe...
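For reference, a minimal sketch of plain mode (the device name, cipher, and key size here are examples, and every parameter must be re-supplied identically each time, since nothing is stored on disk):

    # open a headerless "plain" dm-crypt mapping; there is no header to leak
    cryptsetup open --type plain /dev/sdb big_disk \
        --cipher aes-xts-plain64 --key-size 512 --hash sha512
    # first use only: create a filesystem inside the mapping
    mkfs.ext4 /dev/mapper/big_disk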

> Possible with UEFI, not BIOS/MBR.

Why do you think that? I have this setup on a modern (UEFI) computer, but I use the old BIOS/MBR interface.


> LUKS leaves encryption metadata unencrypted.

You can detach the header and store it on a USB drive, and boot from the USB drive. No one will know you're using LUKS at that point.
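A hedged sketch of that setup (the paths are examples):

    # pre-allocate a file on the USB stick to hold the detached LUKS header
    dd if=/dev/zero of=/mnt/usb/header.img bs=1M count=16
    # format the disk with its header stored only on the stick
    cryptsetup luksFormat /dev/sda2 --header /mnt/usb/header.img
    # from then on the volume can only be opened with the stick present
    cryptsetup open /dev/sda2 root --header /mnt/usb/header.img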


> LUKS leaves encryption metadata unencrypted.

Except for ruling out plausible deniability, what problem does this pose?


Which is kinda funny on a laptop. If some authority told you to unlock a device and it did have a plausible-deniability encryption system, no one is going to believe that anyway.

For a USB drive or a file, sure, feature. For a laptop, “duh, it’s encrypted” is going to be the first thing anyone considers. Not “well, I guess the laptop has no data on it at all!”


Hence, “plausible”.


Metadata includes file names. If you have a file on your computer called proof_nsa_is_doing_illegal_stuff.pdf, that would significantly help a prosecutor trying to convict a whistleblower even if they can't get the file.


No it doesn't. LUKS encrypts a block device, and then the filesystem is instantiated on that encrypted block device. No file names are visible.


No, the LUKS header includes only encryption metadata, not file system metadata.


Does this matter?

I once went through the waste of time of getting /boot encrypted. Would rather avoid needless pain.


For servers that you need to unlock remotely, you will need to leave /boot unencrypted. But for things like a laptop, grub has supported encrypted /boot for a while now (and it works well). However, the release versions still only support LUKS v1 (it was good enough for all those years, and probably still fine for most folks mostly just worried about their laptop getting misplaced / stolen).

To enable encrypted /boot support, add to /etc/default/grub:

    GRUB_ENABLE_CRYPTODISK=y

After encrypting /boot, you don't need anything besides ensuring that the less than 2MB of unencrypted grub bootloader, which is installed before the first partition on BIOS machines or into /boot/uefi on UEFI machines, is verified. That makes 100% of the boot process trusted (and all your data too, since 100% of the disk is encrypted except for < 2MB of boot loader).

A simple way (if your threat model isn't Mossad/NSA) is just checking a hash of this tiny bit of data whenever something sketchy has occurred with your laptop, e.g. it was left unattended with airport security. Just boot from a thumb drive and check the hash (it will only change if you e.g. change your filesystem type or grub gets patched). If it's OK, then boot is still trusted, at least what is on disk; if your threat model makes you a possible target for hardware manipulation, neither this nor Poettering's proposal will help you.
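For instance, a hedged sketch (it assumes the common layout where the first partition starts at sector 2048 of /dev/sda):

    # hash the MBR plus the post-MBR gap where BIOS grub's core.img lives
    dd if=/dev/sda bs=512 count=2048 status=none | sha256sum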

It would be nice if this simple check of less than 2MB could be automated, but not in a complex and fragile way that removes freedom from the nominal owner of the computer, as Poettering is suggesting.


It matters for deniability. Whether this matters for you or not depends on your use case.


What does a laptop with a bunch of random data on its disk and a lot of evidence of use (grime, worn keys, etc.) let you deny that the same laptop with a generic Linux boot partition followed by random data does not let you deny?


Let's put it this way: say you want to prevent somebody from getting the data on a disk. Wouldn't writing random data over the entire disk be a good idea?


blkdiscard /dev/nvme0n1

No need to write anything. Better would be to use nvme tools to securely erase it ("format" is the terminology used)
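With nvme-cli that's roughly the following (a hedged sketch; the device name is an example):

    # NVMe Format with Secure Erase Settings = 1 (user-data erase);
    # --ses=2 requests a crypto erase on drives that support it
    nvme format /dev/nvme0n1 --ses=1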


The idea is to give you plausible deniability: you can tell the cops you wiped it with random data, and there is no password.

Your flash drive can then be used as evidence that you haven't wiped it yet.

The initrd on the drive would be evidence you haven't wiped the drive.

Whether that gets you out of a jam and not in trouble for destroying evidence is another question.


So have one drive with FDE and a USB drive with /boot but in no explicit way configured to boot the first drive?

Or maybe a better setup is an internal drive with /boot and a system stripped of sensitive files, and then somewhere on the drive a hidden partition (not sure how to avoid the vanilla OS overwriting the hidden partition); then you can either boot the vanilla OS, or map the hidden partition and boot from it.


It matters for deniability and for reliability. If you have LUKS, you really don't want to corrupt the header.


Huh, I thought BIOS/MBR required /boot to be on a partition at the beginning of the MBR and nowhere else, whereas UEFI allows it to be anywhere on the disk as long as it's labeled with ESP.

So with UEFI, you just label /boot on USB as ESP. How does it work with BIOS/MBR?


That was an issue with older BIOS versions, which also often didn't fully support LBA-indexed sectors.

Any system with a UEFI BIOS that has 'CSM' (Compatibility Support Module) should support arbitrary placement of the /boot partition.

Today the main reason to put /boot early is to ease migrating to larger drives later: the start sector of many filesystems (including Windows) can be left in place, and the size of the last partition, the data/system partition, can then just be expanded.


Technically you boot from the USB, so the BIOS is happy.


I like to have each Linux distro on a standard type 83 partition if in BIOS mode, or its corresponding 0FC63DAF-8483-4772-8E79-3D69D8477DE4 GUID if in UEFI.

These are neutral dedicated storage designations for later EXTx formatting.

Windows would be on partitions of BIOS type 07 or UEFI GUID EBD0A0A2-B9E5-4433-87C0-68B6B72699C7.

Once again neutral dedicated storage designations, for later NTFS formatting.

Each distro is entirely self-contained on its own single volume, no separate /usr /home swap or anything else.

This is analogous to a Windows install.

But as intended, bootfiles can function properly from many different storage locations.

In BIOS you will need boot files on a primary partition and it will need to be the primary which is marked "active" (bootflag 80 rather than inactive default 00). You will also only be allowed to have a maximum of 4 primary partitions in BIOS. But with correct bootfiles on an active partition you can boot to OS's that are on logical partitions too if you want to put them on your bootmenu.

In BIOS you can have a /boot folder on the same primary that your distro is on and it can boot solely to the distro on that primary (no boot menu appears), or you can have more than one bootentry on the GRUB or Extlinux bootmenu, which triggers the bootmenu to appear and then you can boot to any other partition on the menu whether primary or not.

Analogous to Windows in BIOS when the BOOT folder is on the same active NTFS volume as the OS folders. In BIOS Windows will boot Linux using a BOOTMGR entry pointing to a copy of the Linux volume bootsector, stored on the NTFS (or FAT, as often seen when a commonly found "hidden" partition is used for Windows bootfiles) filesystem in the form of a file.

Windows will not address files on type 83 or 0FC63DAF-8483-4772-8E79-3D69D8477DE4 volumes, so they are hidden from Windows naturally, without having to possess the hidden attribute. These partitions will be sensibly handled in Disk Management; 0FC63DAF-8483-4772-8E79-3D69D8477DE4 will show as OEM rather than a blank table entry.

Linux can mount FAT (FAT32's preferred type is usually type 0C) or NTFS whether it is marked hidden or not. Type 1C is hidden FAT32, type 17 hidden NTFS. In BIOS Linux can boot Windows if you want to put it on your GRUB bootmenu.

It can be seen that you could have 4 primary partitions, each containing an independent Linux or Windows OS, with 4 specific boot folders, one dedicated to each OS on the same volume the OS is on. Whichever one of these partitions you designate with the bootflag will be the one that boots, and the other partitions if unhidden can then be mounted as ordinary storage volumes. You would never see a bootmenu and would have to change which primary has the bootflag in order to choose which OS to boot to.

This is good when you need to boot to something different and do a comprehensive backup of a working volume but in its static dormant state, whether bitwise or file-based backup.

Sometimes there are no spooks trying to cause mayhem, and the most reliable rapid backup & recovery methods need to be constantly rehearsed and revalidated foremost.

However, in any one (or more) of these BOOT folders on any of the primaries you can trigger the display of its multiboot menu simply by adding a second (or more) bootentry of your choice to its default boot configuration file.

You could even have identical full multiboot menus in identical BOOT folders on each volume, and even on an external bootable USB volume too.

This gets analogous to UEFI where you only use one /EFI/BOOT folder on a special C12A7328-F81F-11D2-BA4B-00A0C93EC93B GUID ESP partition (but if not found then the first FAT volume the firmware recognizes), formatted FAT32 and readable by all.

But in the EFI\BOOT folder there is just a .EFI file or 3 (like BOOTX64.EFI), which is what each OS install strategically renames its EFI bootfile as, and which then points to its own specific \EFI\Microsoft or \EFI\debian etc. subfolder, which takes it from there. The Microsoft subfolder will contain its bootmenu; better Linux distributions have it this way too (i.e. you can edit their GRUB bootmenu when booted to Windows because you can see it in the desired EFI\Linux subfolder), but some distros only link (during bootup) from their subfolder to their bootfiles on their EXT partition, which can then only be edited when booted to Linux.

Now in UEFI I have not found a way to get Windows bootmenu to boot Linux. There should be a BCDEDIT switch that should do this like there is with BIOS BOOTMGR. If Microsoft starts to get serious about interfacing Linux this would be the first deficiency they will correct, so we will know if that occurs.

It can be seen that Microsoft SecureBoot was developed in quite a stifling way when it comes to Linux. For instance, if you need a live Linux distro to be bootable on any default PC which has Microsoft SecureBoot enabled, you definitely need to have a Microsoft-signed shim or something like that. Machine owner keys are good for all the top-secret operators and serious geeks, but ordinary power users want things like that to just work with the same default Microsoft-signed key that Windows uses, without having to insert custom keys into the mainboard firmware. The only key that's built into every modern PC is signed by Microsoft, simply because it's a PC, not a Mac.

Often the Microsoft-signed Linux SecureBoot shim will be renamed to BOOTX64.EFI which is what the firmware is looking for and from there the next bootfiles are signed by the distro.

The problem is that not every Linux distro uses the same key/signature so there are some different shims, one from each major distro that overcame the SecureBoot obstacle by having a shim signed by Microsoft.

This really makes UEFI highly defective when it comes to running live distros or multibooting, compared to how useful the same electronics are when SecureBoot is disabled or in BIOS mode. Except for those PCs that actually can enable SecureBoot in BIOS mode, how thoughtful.

What's really needed from the author is fundamentally to get all distros onto the same shim first of all, so the Microsoft-signed spare keys for future Linux use built into the UEFI firmware do not all get blacklisted as fast in case events like that come along more often. In that case it's a lot more sensible to blacklist a single key for all distros rather than a bunch of keys at once because each major distro has a different one.

Then move on to address much further the deep security needs of the most sensitive users, whether they are using Microsoft-signed shims, or their own machine owner keys which negate the need for shims when they are in use.

Edit: corrected BOOTYMGR


Now we are back to boot floppies of yore. Oh how the wheel of IT turns.


There are basically zero scenarios where using raw dm-crypt is recommendable over using LUKS(2) with a detached header.

Some (good) distros even have built-in support for detached headers. I actually use detached headers on a number of my systems and it basically just works out of the box, compared to what I'd imagine you had to do to make raw dm-crypt work.

Also, what is it with people who are determined to be smarter than LUKS? I've met another person online who was quite insistent that dm-crypt must be safer. Strangely, they stopped replying when I asked how they were securely storing a properly long AES key that they were then entering at boot, remotely or otherwise.


Clevis/Tang ("network-bound disk encryption") is likely better: https://www.redhat.com/en/blog/easier-way-manage-disk-decryp...


Clevis/Tang are really cool technologies that enable a new way of doing FDE. I'm surprised they aren't talked about more.

With Clevis you can use Shamir Secret Sharing to divide the FDE key into multiple parts. The parts can then be put in multiple places: the device's TPM, a server that will only reply to devices on the local network (Tang), a hardware token, etc.

If you put one part of the key on the TPM and the other on the Tang server, you now have a device that can be automatically rebooted in a data center without needing to enter an unlock password at boot. But if the device is removed from the data center its contents will be encrypted.
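As a hedged sketch, the TPM+Tang split described above can be set up roughly like this (the device path and Tang URL are placeholders):

    # require both pins (t=2 of 2): the local TPM and a Tang server
    # reachable only inside the data center
    clevis luks bind -d /dev/sda2 sss \
        '{"t": 2, "pins": {"tpm2": {}, "tang": {"url": "http://tang.example.com"}}}'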

Pretty neat!


Do you have any more details? My first thought is "how simple", then "what am I missing?"


Firstly, you'd probably want to make backups of the USB device with the initrd, in case one of them fails.

To mount the decrypted disk during boot, it's necessary to make the kernel aware of the fact that it's something that should be mounted. There are several ways to do this, two of which are:

1. The most preferable way, I think, would be to use something like kpartx from multipath-tools. Kpartx is a very simple C program, and device mapper is used directly in that case. You could also use partprobe. So the command is something like `kpartx -a disk` or `partprobe disk`.

2. Another option is to use LVM, as described here: https://wiki.archlinux.org/title/Dm-crypt/Encrypting_an_enti... (there are also other encryption options described there). Using LVM is a solution that works more out-of-the-box, but, of course, then you always depend on LVM, which may not be something you want.


Have to lug a usb stick around with you and guard it 24/7 for equivalent security?


I use a DataTraveler 2000 to store my kernel+initrd; the only inconvenience is having to unlock and insert it whenever I boot my laptop.

If I use the keyfile feature, booting from the dt2000 will automatically unlock my FDE, making the inconvenience a bit of a wash, with arguably improved overall security. It's also nice that I can unlock the dt2000 in private or otherwise independently from the laptop, since it has a battery. Think bathroom break to unlock, insert to boot on return, kind of thing. It also supports a read-only mode, requiring switching to writable whenever updating the kernel/initrd.


Not necessarily. If this is for a desktop computer you can just leave the USB plugged in and the computer running, as normal. The threat model isn’t too different between that and a normal home/office computer - in both cases you have to trust the physical security of your home/office.

For a laptop, you’re already lugging it around anyway, just remove the USB drive whenever you shut it down.


Or, instead of a USB, you have a remote, networked, secure repository that behaves as such.


(Slightly) slower boot time, risk of data loss if your USB fails, and more complicated backup procedures.


There's nothing hard to replace in a kernel or initrd; it's just a matter of going back to the trusted source you originally used.


Yes, you should use a higher quality USB drive for this, or possibly an SLC NAND one with high write endurance. And make a backup copy or two and store them in faraday bags in a safe.


You're only going to write to the USB drive when you upgrade your kernel.

All these worries are completely overblown. I've been using the USB boot method for close to 10 years now. It's just another folder/mountpoint to back up along with your normal backup.


Anecdote != data. And unlike the other folders, if you lose this one you lose the whole system. And it's not just writes we care about, but bit flips for any reason.


> Anecdote != data

Thank you Captain Obvious.

> And unlike the other folders, if you lose this one you lose the whole system

Yeah, that's how encryption works.

> but bit flips for any reason

Way to move the goalposts. Your typical laptop/desktop does not have ECC RAM. You're also not using mirrored ZFS anyway. So not only will you never detect corruption, but you have no way to correct it either. Which, again, is totally overkill for the average user.


> Way to move the goalposts.

If your entire system is vulnerable to a single bit flip and you don't care, then you may be on the wrong website.

> Your typical laptop/desktop does not have ECC RAM. You're also not using mirrored ZFS anyway. So not only will you never detect corruption, but you have no way to correct it either.

This is HN, not your typical computer user's website. My personal desktop workstations use both registered ECC and mirrored ZFS (along with immutable reproducible builds, among other things to enhance system integrity and reliability).

And I have no compunction whatsoever about recommending on this site that people use cheap and easy methods to safeguard against LUKS header corruption, like backup USBs and faraday bags.

That's an extremely cheap form of risk management that could prevent loss of your entire hard drive due to a single bit flip. If I were recommending something prohibitively expensive relative to the potential losses, your objections might be reasonable, but I'm not.

> Which, again, is totally overkill for the average user.

Again, this isn't a site for the average user, expect folks here to practice and recommend higher standards for computer system management.


This is a thoughtful post, yet it is useful to think again about the threat scenarios.

Poettering mentions three: the obvious 'basic' one, and two advanced scenarios focused on a thief stealing the computer and then returning it. I recall exactly one case close to these latter two scenarios: when Mossad stole a Syrian laptop allegedly containing nuclear weapon program information and returned it[0].

However, I do not think Poettering's proposal is going to even annoy Mossad-level operations. The same people who can steal your hard drive and put it back without notice can, for example, replace your mouse with a BadUSB device and get all the info that way. Once they have hardware access, and are able to return the hardware without the owner noticing, they have multiple ways of getting what they want. On Linux we can't be assured of controlling the hardware Apple-style.

So this is a proposal which probably doesn't stop the advanced and rare scenarios it thinks it stops. In return, we get more complexity by splitting the system into at least three components*, and we go back into the partitioning mess ("You only gave 300GB to /home; in order to download the ISO you want, you need to repartition the system").

IMHO, the cost-benefit ratio for splitting the system isn't good enough. We should try a simpler solution with the main purpose of stopping the basic scenario. Maybe dm-crypt everything, despite the minor speed loss? Or use fs-level authentication/encryption a-la fscrypt? Either solution still allows people who want to split the system to do so.

[0] https://www.theregister.com/2009/11/06/mossad_syria_trojan_h...

* The authenticated base system, the encrypted base system, and at least one encrypted home directory. In most systems there will also be an encrypted swap partition.


> In return, we get more complexity by splitting the system into at least three components*, and going back into the partitioning mess ("You only gave 300GB to /home, in order to download the ISO you want you need to repartition the system").

Is there anything akin to APFS containers [1] in the Linux space? APFS's ability to share space between multiple filesystems is, among other features, how Apple is able to get away with splitting the filesystem up similarly to Poettering's proposal. (They don't go quite as far; they have a signed, sealed system volume plus a single user volume rather than Poettering's multiple user volumes, but the split is still there and therefore runs into the same issue of space allocation.)

[1] https://en.wikipedia.org/wiki/Apple_File_System#Partition_sc...


I think btrfs subvolumes can do this? However btrfs doesn't (yet?) support encryption.


I have been using Btrfs with FDE on Fedora Silverblue for a year, if not more.

And like the post says, I am typing two passwords.


I'm talking about filesystem level support, not block level support. FDE is block level and of course will work on any supported filesystem including btrfs.

What btrfs doesn't do is support encryption inside the filesystem a la fscrypt - where we can encrypt specific directories - or ZFS encryption, where we can encrypt specific filesystems inside a pool.

If I understand btrfs right, supporting encryption/verification on subvolume or directory levels would allow splitting the system so only part of it is encrypted/verified (so we get Poettering's performance gains), while avoiding space issues between the subvolumes.

Also, using only one password is easy on fscrypt style systems, though this may not suit everyone's threat model.


fscrypt support for btrfs is apparently under development; see the most recent comments at https://github.com/btrfs/btrfs-todo/issues/25


> IMHO, the cost-benefit ratio for splitting the system isn't good enough.

A while ago I wrote a guide on a setup that has a good cost-benefit ratio in terms of security and amount of effort required. It is a bit outdated for 2021 but overall still good, imho: https://uzakov.io/2017/05/08/pretty-good-setup-pgs/


On a quick skim, I could see three issues with this proposal:

1. While each home directory is individually encrypted, since /home itself is not encrypted (unlike with current uses of full disk encryption), the user login names can be found unencrypted on the disk.

2. Since each home directory is individually encrypted, backup systems like borgbackup running as root can no longer easily backup everything.

3. As far as I could understand, this proposal does not have a mechanism to share disk space between the system and the user directories. This goes in the opposite direction of for example https://fedoraproject.org/wiki/Changes/BtrfsByDefault in Fedora 33, which allowed the whole disk to be used for files in /home, instead of always reserving IIRC 50 gigabytes exclusively for files outside /home.


> backup systems like borgbackup running as root can no longer easily backup everything

If you're using LUKS-encrypted volumes for home directories, nothing prevents you from enrolling an extra key file entrusted to the backup program, or to the TPM (which the backup program can be allowed to access).
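For instance, a hedged sketch with cryptsetup (the device and key paths are examples):

    # create a random key file and enroll it in a spare LUKS keyslot
    dd if=/dev/urandom of=/root/backup.key bs=64 count=1
    chmod 0400 /root/backup.key
    cryptsetup luksAddKey /dev/sda3 /root/backup.key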


But that defeats the whole point, as now decrypting the system is enough to access all of the home directories. You are basically back to an FDE home partition with extra complexity.


Also, once you've got per-user encryption on ~/.ssh/authorized_keys you'll need some other mechanism for users to log in.

And don't think of having the users do their own backups: locking users' home directories means they won't be able to run their own cron jobs.


> Also, once you've got per-user encryption on ~/.ssh/authorized_keys you'll need some other mechanism for users to log in.

OpenSSH supports this through the AuthorizedKeysFile directive - it'd be quite simple for the homedir mounting tool to sync that file from the user's authorized_keys file on unmount.

You could also use SSH certificates, but that requires a CA - not ideal for the home user.


You can also store the file anywhere else, a la

    AuthorizedKeysFile /etc/ssh/authorized_keys/%u
(For example, /etc/ssh/authorized_keys/bob)


Every time I read stuff about secure boot, "evil maid" attack scenarios come up. And every time, they fail to mention the easiest one.

The attack described here involves dismantling the victim's hard drive. I have an attack that isn't defeated by secure boot, and doesn't even require dismantling anything.

Steal the original laptop. Take another, physically identical unit and leave it in its place. Copy the login screen of the original laptop onto the brand new laptop, and have it log the password and send it to you over wifi when the victim types it.

There. I stole your data. Without any security flaw. In the exact same threat model described.


That's what those countless convention stickers are for, they're basically a cryptographic hash of all the leet stuff you've attended.


This, or the 'glitter nail-polish' pseudo-holographic identifiers, ignores that if you have a physically identical laptop save for cosmetics, you can swap in the motherboard and hard disk from the replacement unit. Externally it's identical; internally it's all compromised.


Presumably the laptop could be designed such that opening the case would separate some contacts in a circuit, which in turn would clear some important secret value from memory.

If you have a separate authentication device, it could warn you that this had happened, to prevent the attack of someone opening the case to add a circuit which broadcasts your key presses, for example.

This still only reduces the problem from keeping your laptop with you at all times to keeping your authentication device with you at all times, though.


I believe this is what a TPM lock solves, though possibly not perfectly.


If you put the nail polish on the screws or seams, you could make it so you couldn't take apart the laptop to replace the motherboard without damaging them, right?


If we fixed everything else then this wouldn't be hard to solve.

Have the machine generate a TOTP which you compare to the same code generated on a phone/second device.

To prevent or at least make nearly impossible a MITM of this, the TOTP is calculated by the TPM and only while the network card(s) are off.


We haven't found a l33t who can swap a macOS motherboard in 15 minutes.


Hmm, it took me 2 hours when I had to do this a few years ago, but I'm pretty sure I could do it in 15 minutes if I practiced a few times and had 3 drills fitted with two Torx bits and one Phillips bit.

Swapping the SSD and adding a keylogger to the ribbon cable would be a lot faster too, maybe 5 minutes with practice.


I can just count the missing screws on the bottom.


The solution is to use an HSM such as the Nitrokey / Purism Librem Key (same thing), which has an LED that lights up if boot integrity is fine, including a TPM secret matching (the maid can't clone that).

https://www.youtube.com/watch?v=O_3Xf3gTzEE

https://www.youtube.com/watch?v=K1O-33pi33M

https://www.youtube.com/watch?v=SB82Ul_A1js


This is essentially the same solution, right? It boils down to having a single device that verifies the integrity of everything and never letting that device out of your sight. It's just marginally easier to do that when the device in question is an HSM rather than a laptop.


> Copy the login screen of original laptop on a brand new laptop, and have it log the password when the victim types it to you over wifi.

This is why you need mutual authentication. The easiest is with 2 passwords. You enter a password; this authenticates you to the system. Now the system presents you some secret. It may be a passphrase, something not obvious like a password prompt with a typo, or a splash screen with some pixels a bit off that are visible at the right angle; something that casual shoulder-surfing won't gather. Only when the system is authenticated to you do you enter the 2nd password to actually unlock the filesystem.

As for an "identical replacement" of a system: good luck. A bit of glitter nail polish on the screws and it will cost a fortune to do so. If you have those capabilities, you probably have the capabilities to "nicely ask me for the password".


We used glitter glue on ports for certain traveling individuals. Took pictures of the hardened glue. Very hard for a maid to replicate, be it evil or really good.


If someone is considering an "evil maid" style attack, the objective is to compromise your security without you knowing (so that you will continue using the device believing it is still secure). "Asking nicely" isn't going to accomplish that.


What for? You want to gain access to some data, or learn something, or get access to one of my clients.

"Asking nicely" is how intelligence/counter-intelligence recruits their assets - some are bought some are forced.


Hello, I'm curious, do you happen to be the person behind "phh's SuperUser?"


Replicating the chassis (including the scratches, etc.) and other laptop parts is the hardest part of your attack. I assume you don't know about the glitter nail-polish based protection?

Even "copying the login screen" is not necessarily easy.


You know the scratches on the chassis of your laptop? I definitely don't. If someone replaces my laptop with a brand new one, I'll notice something is off, but I wouldn't be surprised if I noticed /after/ typing my password. If it's replaced not with a brand new laptop but with another one of approximately the same age/usage, I would most definitely not notice.

> I assume you don't know about the nail-polish with glitter based protection?

Nope, can you explain?

But okay, you may extend my attack by saying that you exchange the motherboard between the victim and the attacker laptop, so that you don't need to replicate the chassis.

> Even "copying the login screen" is not necessarily easy.

Personally, my login screen is Ubuntu's default FDE screen, untouched, so there is literally no work involved to attack me there. I have absolutely no idea how to customize the FDE screen. But even if I did, I'd expect that it would be pretty easy to plug in an HDMI capture device to make a close-enough duplicate of the screen.


>But okay, you may extend my attack by saying that you exchange the motherboard between the victim and the attacker laptop, so that you don't need to replicate the chassis.

Modern computers have tamper detection, and if you open them you'll need to type the BIOS password.

However, replacing the motherboard is going to replace the TPM. This is easily detectable with something like tpm2_totp in the bootchain.

https://github.com/tpm2-software/tpm2-totp
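If I recall its CLI correctly (a hedged sketch from memory; check the project README for the exact subcommands), the flow is roughly:

    # one-time: generate a secret, seal it to the current PCR state, and
    # show it as a QR code to enroll in a phone authenticator app
    tpm2-totp init
    # at boot: display the current 6-digit code and compare it against
    # the phone before typing any password
    tpm2-totp show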


Now we're talking proper security, thanks.

> Modern computers has tamper detection and if you open them you'll need to type the BIOS password.

Is that somehow configurable from a Linux distribution's setup, or does it require the user to manually set a BIOS password? (And it requires the user to set a different BIOS password; if the user sets the same password for FDE and for the BIOS, then we're back to square 1.)

> However, replacing the motherboard is going to replace the TPM. This is easily detectable with something like tpm2_totp in the bootchain.

That sounds interesting. Though it still sounds totally impossible for the vast majority of users.

At that point, I don't really know what the goal of TFA is. If it's for extreme power users who want the best security, it is missing the various counter-measures mentioned in this thread. If it's about pushing distributions to have better defaults, then I think it's quite moot, because secure boot won't improve security much for average users.


>Is that somehow configurable from Linux distribution's setup, or it will require user to manually set a BIOS password? (and it requires the user to set a different bios password. if the user sets the same password for fde and for bios, then back to square 1)

Not yet? And when I said "modern computers" I should probably clarify that I'm thinking about more enterprise-grade computers, such as Thinkpads.

Thinkpads also recently got the feature to set the password from Linux userspace. But I forget where I read that, and where the patch is located :)

>That sounds interesting. Though it still sounds totally impossible for the vast majority of users.

It is. But this is why threat modelling is important. If a realistic threat scenario is someone replacing your motherboard, then tpm2_totp should be something you set up.

Listing all possible attack scenarios and assuming any generic distribution protects fully against them is a pipe dream. There needs to be some compromise between usability and security.


>> I assume you don't know about the nail-polish with glitter based protection?

>Nope, can you explain?

You take clear, semi-liquid glue that hardens. The glue has various colored Mylar flakes (aka glitter) floating in it. Slather it onto a device (we did it for exposed ports on devices). The glue is semi-liquid so it will flow reasonably. Once the glue hardens, the orientation, distribution, coloring and such of the flakes are set. Take picture(s) of the hardened glue. (Just search for "glitter glue".)

Reproducing the complexity of the glue plus glitter is very hard. A possible attack is attempting to remove it and insert it back, but the right glitter glue is quite brittle, so it's hard to remove. Heating it will make it hazy before it becomes pliable, and cooling makes it even more brittle. Breaks show up as white surface inclusions in the glue.



I think this attack is not possible to prevent without hardware tokens. At least the user will be immediately aware that something's not quite right.


This thing right here makes me uncomfortable. I don't want Microsoft signatures, or any other company's for that matter, to be involved in my boot process.

> The UEFI firmware invokes a piece of code called "shim" (which is stored in the EFI System Partition — the "ESP" — of your system), that more or less is just a list of certificates compiled into code form. The shim is signed with the aforementioned Microsoft key, that is built into all PCs/laptops.


Almost all UEFI firmware allows replacing the default public keys with your own, and then you can sign everything yourself with private keys only you possess.


In theory. But part of the UEFI boot chain is Option ROM files stored on hardware, most commonly found on dedicated graphics cards and other peripherals.

All of these files need to be authenticated, and are currently signed by Microsoft or the OEM vendor. A modern Lenovo Thinkpad T14 Gen 2 laptop has 7 OpROM files. If validation fails for the GFX card, you are essentially "soft bricking" the device, since the GFX card won't work.


An idea I just had around this, for if you run your own PK/KEK/DB chain - is it feasible to "dump" these Option ROMs?

If so, you should be able to whitelist their sha256 hashes in your db, since the db can contain an allow-list of hashes as well as an allow-list of public keys.

I'm not familiar with how to view the relevant Option ROMs, but if you can "dump" them in the format they will be verified in, you should be able to put those into your db.
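With efitools, the whitelisting step might look roughly like this (a hedged sketch; it assumes the dumped ROM is a PE image at oprom.rom and that your KEK key/cert live at the given paths):

    # wrap the ROM's hash in an EFI signature list
    hash-to-efi-sig-list oprom.rom oprom.esl
    # sign the list with the KEK so the firmware will accept the db update
    sign-efi-sig-list -k KEK.key -c KEK.crt db oprom.esl oprom.auth
    # append (-a) the signed hash entry to the db variable from userspace
    efi-updatevar -a -f oprom.auth db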

Perhaps there could even be an open/peer-vouched crowdsourced set of such DB entries maintained for popular systems?


sysfs doesn't like some Option ROMs, so using the ROM files from there won't work in some cases. Apparently they can be stored in an ACPI table... but I haven't looked at it.

Another alternative is to read the TPM2 eventlog. It should record OptionROM checksums found during boot.

    $ tpm2_eventlog ./t14_eventlog | grep "BOOT_SERVICES_DRIVER" | wc -l
    7
This is essentially what Trammell Hudson and I have been thinking about. But I don't have any available machines to test this with, and I haven't gotten around to setting up a QEMU VM to test out this theory.

https://github.com/Foxboron/sbctl/issues/85#issuecomment-886...


Good idea RE the TPM event log - that should show the definitive perceived hash of the option ROM.

Will need to see if I have any computers that require an Option ROM - somehow I suspect shipped-with-Linux Dell XPS laptops aren't going to be good test candidates.


I completely replace the entire set of secure boot keys with my own (no MS keys at all)

for my desktop PC I had to sign the graphics card option ROM digest myself (which was a pain)

however for my Thinkpad none of this was necessary as the firmware was stored inside the UEFI image


> Some corners of the community tried (unfortunately successfully to some degree) to paint TPMs/Trusted Computing/SecureBoot as generally evil technologies that stop us from using our systems the way we want. That idea is rubbish though, I think.

It's not as rubbish as the author wants to think. Just look at the current state of Android SafetyNet attestation: a myriad of apps refuse to run entirely (banking apps, games) or run with reduced quality (Netflix) on unlocked-bootloader / rooted systems. There are workarounds, but a) they are a constant cat-and-mouse game, and b) the only known workaround against online SafetyNet attestation (pretending that the device does not have a TrustZone element, forcing SafetyNet to Basic attestation) is bound to fail as soon as Google is reasonably certain that there is no hardware floating around that has the latest Android but no TrustZone element.

Additionally, look at what happened to Microsoft's ARM adventures. Locked down to the point of being utterly useless, on top of MS not providing a Rosetta equivalent.

The problem is that here the interests of hackers (who either want as-open-as-reasonably-possible devices or, like the author, as-secure-as-possible devices) and big corporate interests (who want to keep license holders like Netflix happy, who want a way to reduce successful "my device got pwned, someone stole my money, refund me!!!" claims, like banks, or who want to fight cheaters) collide in a position that's hard to make something decent out of.


I would LOVE TPMs and every other hardware security feature if they had a user override that allows me (and only me) to control what the device does. Basically the opposite of the current and foreseeable landscape.


The modern Windows ARM laptops all(?) use standard x86 Secure Boot/UEFI rules, as far as I know. You can install Linux on them pretty easily; I know at least Fedora/Anaconda supports ARM64 pretty well. The bigger problems come after that, but they are more of the same stuff, i.e. lack of drivers for peripherals, dealing with Qualcomm, etc.


Is it possible to set up an encrypted+authenticated boot that relies on a hardware token being present? In other words, only boot this OS if the Yubikey is inserted?


According to the docs ([0], [1]) systemd-cryptenroll and crypttab support FIDO2 and PKCS#11, both should be supported by the yubikey as well.

So without having tested it: I think yes, it should generally be possible, at least for encryption.

[0]: https://www.freedesktop.org/software/systemd/man/crypttab.ht... [1]: https://www.freedesktop.org/software/systemd/man/systemd-cry...
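Untested, but based on those docs the setup would be roughly (the device path is an example):

    # enroll an attached FIDO2 token into a LUKS2 keyslot
    systemd-cryptenroll --fido2-device=auto /dev/sda2
    # /etc/crypttab entry telling systemd to try the token at boot:
    #   root  /dev/sda2  -  fido2-device=auto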


It definitely is. The easiest way is to set up the token in challenge-response mode (supported by yubikey). The disk has a key which is sent to the token and the response is used to decrypt the disk.
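A hedged sketch of that flow with the yubikey-personalization tools (the slot number and challenge path are assumptions, and the same byte stream must have been used when the keyslot was enrolled):

    # send the stored challenge to the YubiKey's HMAC-SHA1 slot 2 and use
    # the response as the passphrase for a LUKS keyslot (read from stdin)
    ykchalresp -2 "$(cat /etc/luks/challenge)" \
        | cryptsetup open /dev/sda2 cryptroot --key-file=-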


Doesn't LUKS already work with Yubikeys [1]?

[1] https://github.com/cornelinux/yubikey-luks


Based on the description of that repo it does exactly what I suggested you can do.


See my top-level comment. Just put the initrd and secret on the external device.


Did you just assume that /bin and the initrd are separate by virtue of "cloning-by-initrd", and thus necessarily need to be signed differently?


In other news, operational security is just as important as technological security measures. This means not leaving your device unattended, and if you do, implementing tamper detection of some variety. This can be as simple as a battery-operated, motion-activated video recorder hidden and aimed at your equipment. You can pick one up for $25 on Amazon.


"The shim is signed with the aforementioned Microsoft key, that is built into all PCs/laptops."

This is a non-starter for me. The sheer number of Microsoft products and programs dedicated to, or endorsing, a backdoor for third parties makes it untenable for any security purpose.


Strip the signature and sign with your own key? It's possible as a part of the Secure Boot model.


That ^^ feels like an important part of the HowTo. Is there one that includes this step already?


I imagine that this is a documented generic process that works for all UEFI boot binaries. I can't point to any specific guide because I don't use UEFI in Secure Boot mode.


>Authentication of boot loaders is done via cryptographic signatures [...] the cryptographic certificates that may be used to validate these signatures are then signed by Microsoft

This is what concerns me. While Microsoft are indeed dominant, surely them signing these is a conflict of interest? Why can't there be an external body that signs these, including those for Microsoft?


The "default" boot chain is using a Microsoft CA key, but you can easily change it. In fact, the "root" for your system is the platform key, which is most probably signed by your OEM.

Pragmatically speaking, I'd be more worried about my OEM's platform key being compromised when someone leaks their UEFI firmware build tree through a ransomware attack or similar.

The biggest issue with the Microsoft "CA root" is that they sign everything - there was a good example [1] of them signing a Kaspersky rescue CD that could effectively break the secure boot chain.

The good news is you can load your own keys into your motherboard. It's only really a solution for enterprises or tech-savvy individuals, but it at least is a viable option and helps you to "own" your own platform.

[1] https://habr.com/en/post/446238/
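If you go the own-keys route, sbctl is one convenient tool for it (a sketch; the classic route uses openssl plus efitools, and sbctl also has an option to keep the Microsoft keys enrolled for dual-boot setups):

    # generate a PK/KEK/db key hierarchy and enroll it
    # (the firmware must be in Secure Boot "setup mode")
    sbctl create-keys
    sbctl enroll-keys

    # sign your bootloader/kernel with the new db key
    sbctl sign /boot/EFI/BOOT/BOOTX64.EFI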


> Why can't there be an external body that signs these

Just wait a few years, and it will be the government that decides which OSes can run on your hardware. Hint: It will be OSes that only allow "approved" apps to run.


> Why can't there be an external body that signs these, including those for Microsoft?

There can be, and I strongly suspect Microsoft would prefer there to be - reviewing and signing all the UEFI drivers consumes significant resources. Nobody else with sufficient resources to do this job has stepped forward.


Two projects that are interesting and relevant work to address portions of this space are safeboot (https://safeboot.dev/) and heads (https://osresearch.net/).


A significant issue is being able to protect your initramfs and your cmdline options during boot while still keeping the convenience of auto-unlock. By using the TPM to handle validation, you also cut yourself out of the mix by not knowing the password. Of course this comes with the risk of hardware attacks, but there is always a risk. Current distribution implementations DO NOT, EVEN WITH SECUREBOOT ON, verify the integrity of the initramfs, which can be repacked to include malicious code that will execute during boot, potentially intercepting your LUKS key. There have been a number of attempts to solve this problem, but the most complete appear to be Mortar (a project I head) and safeboot.dev

I highly recommend taking a look at either of these projects if you want to be able to improve both your convenience (through auto-unlocking) and your security (through a broadened scope of audit).

https://github.com/noahbliss/mortar

https://safeboot.dev
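For comparison, the TPM auto-unlock piece on its own can also be done with clevis (a sketch; /dev/sda2 is illustrative, and the PCR selection is exactly the brittle part these projects try to get right):

    # bind a LUKS keyslot to the TPM, sealed against PCR 7 (Secure Boot state)
    clevis luks bind -d /dev/sda2 tpm2 '{"pcr_bank":"sha256","pcr_ids":"7"}'

    # unlock manually (the dracut/initramfs hooks do this automatically at boot)
    clevis luks unlock -d /dev/sda2 -n cryptroot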


Sadly, the described scheme doesn't protect from hardware backdoors. Let's say someone confiscates your laptop, installs a backdoor that sends all keypresses using GSM, and returns it to you. Now the attacker has all your passwords.


On most Linux distros the initrd is dynamically generated and depends on the specific hardware, so it isn't possible for the initrd to be signed by the vendor. Is it possible for the TPM to verify such a dynamically created file during boot?

Also, requiring a TPM for decrypting the hard drive has a security/usability tradeoff. If the TPM dies, or possibly other components of the computer (like the motherboard), then you lose the ability to access the data on the drive altogether, whereas without the TPM binding you can still recover the data from the drive.


The TPM 2.0 spec has the ability to duplicate keys from one TPM to another for this type of use case. It of course requires the key policy to allow for duplication, so you have to plan ahead of time, but it is possible.


The article already mentions both of those challenges, and outlines practical solutions/mitigations to both--can you be more specific about what you didn't like?


To be honest, I hadn't read the whole article when I posted that; I'd only made it about halfway through. Now, having read the whole thing: regarding a failing TPM, having a recovery key seems like a fine solution, I just didn't know if that was possible.
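For the record, systemd can enroll both at once, so TPM loss doesn't lock you out (a sketch with systemd-cryptenroll on a LUKS2 volume; /dev/sda2 is illustrative):

    # bind a keyslot to the TPM, sealed against PCR 7 (Secure Boot state)
    systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/sda2

    # also enroll a randomly generated recovery key to print and store offline
    systemd-cryptenroll --recovery-key /dev/sda2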

Regarding the initrd, the idea of a base image plus extension images might work, although I still have questions. Are these extension images signed by the vendor or the host? If the former, how do DKMS modules work? That isn't an edge case; the Nvidia proprietary drivers, some wireless drivers, VirtualBox, and sysdig, for example, all rely on DKMS (at least on Ubuntu). And how does that system handle competing drivers? Should the base initrd use the nouveau drivers or the proprietary Nvidia drivers, or does the user need to load an extension image to choose? And for that matter, the base initrd you want for a server will probably be pretty different from what you want for a laptop.


A keylogger in the initramfs seems like a pretty obvious weakness to try to exploit. From there the attacker eventually gets the password manager master password, as well as any disk encryption passwords.

We could soon get to a signed grub being able to read and authenticate an fs-verity-protected /boot on which the initrd resides. Ext4 and, recently, btrfs support fs-verity, and it's more flexible than dm-verity.


Are other people experiencing too many corruption problems on encrypted disks?

I have a VeraCrypt drive, and I regularly have to run a scan-and-fix on it because it's very sensitive to power cuts, hard restarts, etc.

Also, it's slow. I put my Firefox config folder in it, but it slows the browser down.

Once I went full-disk encryption, but one day corruption happened and it stopped booting. So I went back.


Don’t generalize your problems with a specific piece of software (VeraCrypt) to all drive encryption software. Plenty of drive encryption solutions are fast, reliable, and resilient. (You could also be having a hardware problem exacerbated by adding an abstraction layer.)

I’ve used LUKS, GELI, ZFS encryption, FileVault, and BitLocker across numerous drives over the past decade (albeit in different individual proportions and timelines) with no corruption problems.


I’ve not used VeraCrypt, only normal Linux LUKS device-mapper encryption, ZFS encryption, and Windows BitLocker full disk encryption with a passphrase entered on boot (no TPM). But I haven’t had this problem with any of those.

I’d suggest either your SSD just loses data on power-off, which is a problem with some SSDs, or it’s some VeraCrypt-specific issue that I wouldn’t be familiar with.

I’d encourage you to try out one of the above options.


You probably have some fun disk caching turned on in the OS or your disk has a built in cache that it’s not properly flushing. I’ve seen this before with cheap SSDs.


Maybe; I don't know enough about this, but I've been using this VeraCrypt disk on 6 machines now, and I have the problem on each of them.

Also, I don't have any problem with non encrypted data.


Maybe a dumb question, but are you on windows? Check that Defender isn't scanning every single time data is written to your veracrypt drive. When I was on windows, the most painful slowdowns were almost invariably caused by Windows Defender.


Looks like a veracrypt problem then.


No. I run a server in my basement, and for the past 5+ years, every one of its drives have been LUKS encrypted, including external backup drives. I've had no issues with encryption (corruption, speed, or otherwise), only disk hardware failing outright.


Servers and consumer laptops are very different beasts though, so I'm not sure the comparison holds. You don't shut servers off often; they don't go to sleep, and they rarely undergo shocks, plane travel, or forced shutdowns...


Friend, LUKS is reliable. LUKS doesn't just eat data in those edge cases you mention; there would be tons of reports if that were the case. LUKS is used all over the place for non-server uses too.

I can count at least 3 devices that are all LUKS-encrypted that are used as consumer devices daily. I trip over the power cord, I let the battery die, I drop it, I fly regularly, etc, etc. Never once had a single issue.

In fact, the single time that I've had FS issues under Linux was running under Hyper-V but that's because of the slowly-being-uncovered serious issues w/ Hyper-V.


Never had a problem with LUKS over a decade of usage.

In fact, I'm actually having the opposite issue: I have a drive that is failing and LUKS is still able to decrypt it by some miracle.

> Also, it's slow.

Modern CPUs have AES instructions built-in. I would check to make sure you're utilizing them. You should not see any slowdown when using encryption.
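Two quick checks (a sketch; exact numbers vary by CPU and kernel):

    # does the CPU advertise AES-NI?
    grep -m1 -o aes /proc/cpuinfo

    # compare cipher throughput as dm-crypt would see it
    cryptsetup benchmark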


The better solution to this kind of problem is having a backup concept, with an implementation and regular testing of the backups.

The encryption exposed a problem with your disks or your system in general. Depending on what the problem was, it may still be there even though you are no longer seeing it. Now you have a possible problem and unencrypted files.


I have a backup, but this is not a solution, only a crutch. I had those problems on several machines, including brand-new ones.


I started using full disk encryption (except /boot) this year, so far no problems at all. I'm using ordinary luks on Fedora.


>Unfortunately not at all. Because distributions set up disk encryption the way they do, and only bind it to a user password, an attacker can easily duplicate the disk, and then attempt to brute force your password.

I think this "not at all" is a bit strong. With a good enough passphrase and a slow key derivation algorithm bruteforcing the key should be impossible in practice.

The key derivation takes about 2s for my root partition for instance, fast enough that it really doesn't make a difference for a legitimate user but slow enough to be hugely impractical to bruteforce, especially given that I use a long, random passphrase.
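For what it's worth, you can inspect and tune that cost (a sketch for LUKS2; the device path and the 2000 ms budget are illustrative):

    # show the PBKDF type and per-keyslot cost parameters
    cryptsetup luksDump /dev/sda2

    # re-derive a keyslot with a ~2 second unlock budget
    cryptsetup luksConvertKey --pbkdf argon2id --iter-time 2000 /dev/sda2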

On the other hand the author is entirely right that this setup is rather simple to backdoor for a sophisticated attacker, but it does a good job of protecting the data at rest.


Set up a new laptop with Ubuntu recently and opted for ZFS encryption; I was impressed that it's now a mainstream install option. I was sadly amused after setup to find that it suspends indefinitely without requiring the passphrase, i.e. it keeps unlocked volumes in memory. This makes the encryption basically pointless. (NB. This category of issue was highlighted in the recent interview with the resurfaced AlphaBay operator: after his former associate Cazes was bagged in Bangkok, an international police sting allegedly created a server issue explicitly to require him to log in to the server so they could access his credentials unencrypted. The resurfaced operator's reported solution was apparently to always power down, even on a bathroom break.)


Just a heads up, you should be aware that ZFS encryption leaks considerably more metadata than full LUKS block encryption.



There's definitely something to be said about assumptions here.

I knew all/most of the mentioned high-level details regarding Secure Boot and the TPM. However, I would have assumed that a TPM-enabled Linux distro would have authenticated everything up to the FDE password prompt at a minimum, as not doing so would seem to completely defy the point! Turns out this assumption is wrong.


The Linux kernel's Integrity Measurement Architecture can be used with the TPM to establish trust of the filesystem. ref: https://www.redhat.com/en/blog/how-use-linux-kernels-integri...


So basically one issue is that the initramfs is not validated. Is it even possible to resolve that? If a new grub version adds forced initramfs validation when run in secure boot mode, an attacker can just install an old grub which is still signed but allows an unsigned initramfs.

Of course everything but /boot should just be encrypted; there's no reason not to.


You don't need separate initramfs file these days, you can bake it into the kernel executable which you can then sign
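Two common variants of that, sketched (paths, key names, and the stub location are illustrative; the section addresses are the conventional ones for the systemd EFI stub):

    # variant 1: build the initramfs into the kernel image itself
    # (kernel .config fragment), then sign the resulting bzImage
    CONFIG_INITRAMFS_SOURCE="/path/to/initramfs.cpio"

    # variant 2: glue kernel + initrd + cmdline into one EFI binary
    objcopy \
      --add-section .cmdline=cmdline.txt --change-section-vma .cmdline=0x30000 \
      --add-section .linux=vmlinuz       --change-section-vma .linux=0x2000000 \
      --add-section .initrd=initrd.img   --change-section-vma .initrd=0x3000000 \
      /usr/lib/systemd/boot/efi/linuxx64.efi.stub unified.efi
    sbsign --key db.key --cert db.crt --output unified.signed.efi unified.efi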


True, but this is complicated for users to do themselves. It's also complicated for distributions to create, sign, and distribute a "universal" (non-host-specific) initramfs that will boot any supported configuration. Plus, that universal initramfs is quite a lot larger than the more compact host-only initramfs.

It should be possible to automate writing out a new fs-verity "boot" directory containing the locally-signed files we care about, with the signing regime enforced by a distro-signed bootloader. Like dm-verity Android images, the Merkle tree is signed by an asymmetric key pair, with the private key tossed immediately after successfully creating and signing the tree, and the public key exposed wherever convenient.
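With fsverity-utils, the local-signing step could look roughly like this (a sketch; key material names are illustrative, and kernel-side signature enforcement additionally needs the certificate loaded into the .fs-verity keyring):

    # sign a boot artifact and seal it with fs-verity
    fsverity sign /boot/vmlinuz vmlinuz.sig --key=signing.pem --cert=signing.crt
    fsverity enable /boot/vmlinuz --signature=vmlinuz.sig

    # print the measured root digest
    fsverity measure /boot/vmlinuz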


When upgrading, you have to remove old PCR states, so rolling back to an older version is detected.


I haven't read the whole essay (stopped at "In detail"). So far, I agree with most of what Lennart said, it matches my analysis, except perhaps this:

> 3. The OS configuration and state (i.e. /etc/ and /var/) must be encrypted, and authenticated before they are used. The encryption key should be bound to the TPM device; i.e system data should be locked to a security concept belonging to the system, not the user.

Not sure you actually need a TPM for this; there are a few alternatives:

- Use a generic system configuration until you get to the user prompt. This doesn't require encrypting the partition, only authenticating it.

- After user authentication, either unlock the shared /etc configuration from a password stored in the encrypted user partition (could be protected by the TPM to avoid users leaking it).

- Or, the better (IMO) option is to get rid of that "shared" /etc, and allow each user to have a unique, system-wide set of parameters (with a "safe mode" in case they need to recover). Not allowing users to install arbitrary software while there are tools like overlayfs is a bit backwards IMO, unless there's a company policy, but in that case capabilities can be dropped before, or the user can be limited on a case-by-case basis.


I would have preferred that the post started with the scenarios against which one wants to fortify/harden, rather than with an "everything on Linux sucks" framing. Everything sucks depending on the perspective; Windows, Mac, and Android are not perfect either.


Running the FDE (or partial encryption) key off the TPM would include the risk that a hardware defect on the mainboard renders the disks unreadable, right? Since it would be quite hard to get that key out of it.

Guess that's what recovery keys are for...


If I've understood this right, the biggest problems are outlined in the section titled "TPM PCR Brittleness". I'm not convinced that any of the three proposed solutions are good.


I keep wondering: why do we really need authenticated boot?

If you encrypt the entire disk, including the boot partition, there's no way for an adversary to make any changes anyway.


Does FreeBSD do a better job addressing the issues he raises?


I did not read the article, but the FreeBSD bsdinstall(8) installer gives you the option to use GELI full disk encryption with ZFS on top of it.

It's kinda similar to LUKS on Linux and will ask you for the password in the bootloader.
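The manual equivalent of what the installer sets up is roughly (a sketch; ada0p3 is an illustrative target partition):

    # initialize GELI with AES-XTS, prompting for the passphrase at boot (-b)
    geli init -e AES-XTS -l 256 -b ada0p3
    geli attach ada0p3

    # put ZFS on the decrypted .eli device
    zpool create zroot ada0p3.eli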

Here are the details:

https://docs.freebsd.org/en/books/handbook/disks/#disks-encr...

Regards.


Can a TPM be shared by two operating systems? As in, dual boot configurations? Is any sort of deconfliction required and does the relevant spec account for this?


> Can a TPM be shared by two operating systems?

Yes.

> Is any sort of deconfliction required

No.


I’m pleasantly surprised to realize this is from Poettering, as it gives me hope that it may actually find its way to being adopted. What he says is right: Linux is way behind others (except maybe the BSDs) in authenticated boot.


“With LUKS, the encryption is as good as your password is strong.” Well that was anticlimactic. I thought my entire setup would have to be overhauled.


Did you read the whole thing? LUKS (or FDE in general) is not enough for the attacks presented. Partly because FDE is rarely ever the “full” disk.


I did read it. The “attacks presented” are of concern to basically nobody.


[flagged]


Instead of attacking the person (ad-hominem), how about discussing the technical merits and deficiencies of the proposed solution to a real problem that no Linux distro has tackled so far?

I'm sure the NSA is happy about unauthenticated initrds. I'd be very surprised if such manipulation was never used in practice.


I don't think it's an ad hominem, because this is a viable attack vector. What better way to get access to countless systems than by compromising the supply chain?


Which operating system doesn't have this attack vector? Is there one that somehow doesn't have a supply chain?


Their accusations are bold and brash, but this is not an ad-hominem.


Instead of discussing the ideas presented, the _person_ behind the ideas is discussed. In my book, that's pretty close to an ad-hominem, even if the accusations are only implied.


> take over kernel, initrd and bootloader

Huh? There aren't really any kernel or bootloader changes proposed here, just signing them and using existing functionality within them. (Okay, I guess he proposes adjusting the kernel to optionally have a way to authenticate parameters, but is that really "taking over"?)

As for the initrd, he's just talking about adjusting the way they're generated to be more static and modular, to simplify signing. Yes, there's some tooling for this proposed under the systemd organizational umbrella, but systemd isn't a monolithic thing.

Regardless, and just like systemd, you don’t have to use any of this if you don’t want to.


Signal is partly funded by the US government:

https://www.opentech.fund/results/supported-projects/open-wh...


So is Tor. Is there any other reason to suspect foul play?


This is all totally unsubstantiated, but...

Signal's reliance on phone numbers, and its focus on e2e-encrypted communication, facilitates meta-information gathering for the governments of the world.

As for Poettering, he is a long-time Red Hat employee, and Red Hat supplies virtually all governments of the world.


I kinda think the claims about not encrypting /usr are fairly dubious. What's to stop an attacker from replacing a lib with a signed, known-vulnerable lib? As far as I know there's no mechanism for revocation.

What's to stop this happening to the kernel and/or boot loader? Maybe the TPM, but idk.


From the article:

> The OS binary resources (i.e. /usr/) must be authenticated before being booted into. (But don't need to be encrypted, since everyone has the same anyway, there's nothing to hide here.)

Authentication prevents manipulation. Encryption is not necessary for that (though encryption often also authenticates).


Keeping it on a separate block device allows dm-verity to authenticate the device. The root-hash value would be signed and secured, so injecting a signed+known library wouldn't work. It would refuse to mount.

It should be noted that keeping it unencrypted is only for the traditional Linux package manager use case.
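Roughly, the veritysetup flow (a sketch; the device paths are illustrative, and in the scheme described it's the printed root hash that gets signed/sealed):

    # build the Merkle hash tree for the read-only /usr device
    veritysetup format /dev/vg/usr /dev/vg/usr-hashes
    # -> prints the root hash; that value is what you sign/seal

    # at boot, map the device; any tampered block fails verification on read
    veritysetup open /dev/vg/usr usr-verity /dev/vg/usr-hashes <root-hash>
    mount -o ro /dev/mapper/usr-verity /usr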



I don't think a 5-year-old project linked without context is appropriate when talking about modern boot chain security.

It doesn't even support stubs so it fails at the first threat scenario described in this post.


I disagree. UEFI boot security is not that recent, and it hasn't changed so much as to render earlier approaches less secure.


It is less secure: neither the initramfs nor the microcode is signed.


I only wish that package maintainers (outside the OS distro) installed directly into /usr/local/bin, but instead they wanted "in" on the /usr/bin placement. So we lost there, as the separation would have facilitated secondary signing of non-OS packages by the OS distro.

It doesn't help that Fedora started muddling (merging) /bin and /usr/bin, so secondary signing for non-OS vendors just went out the window. https://freedesktop.org/wiki/Software/systemd/separate-usr-i...

Even systemd is just now complaining about that during boot (if the /usr partition somehow got invalidated during bootup).

This multi-tiered */bin layout is a hallmark of Unix (and codified in the Filesystem Hierarchy Standard: https://refspecs.linuxfoundation.org/FHS_3.0/fhs-3.0.pdf ).

— Although the systemd designer seems to disagree with this FHS approach. (IMHO, he needs to look deeper into the overall package-signing architecture of OS distros, and not just Fedora's.) https://www.freedesktop.org/wiki/Software/systemd/TheCaseFor...

Now if Debian would just arrest the slide into a single binary directory, all would be good.

Linux distros are about to ignore this future need for multi-island, multi-signed, encrypted/immutable binaries.

No embedded Linux distro should want to be Windows-ized, much less desktop Linux, unless they too seek this single-island approach to immutably-verified/encrypted binaries from the OS, vendors, and customers.


The problem is: where is the line drawn? Everything (and I mean everything) in many Linux distros (including Fedora) is just a package. There’s no base system, unlike in most BSDs.

Further, each of those packages can be (and often is) updated independently of the others, somewhat confounding attempts at signing the root of that hierarchy. macOS signs the whole base system, but that also means that updating a single bit in it is a big lift (not that that's bad, but it's certainly different).

Personally, I agree that having some signed base system is valuable, but I also struggle to find attack vectors it prevents that a signed initrd (which is, in a sense, a base system) doesn't.


Yes, the initrd is essentially a clone of /bin, et al.

The initrd's /bin is probably going to need to be signed differently from the main root mount point.


initrd is a (very) small subset, far from a clone.


Yeah, let's move to the complete opposite side: one /bin for each package. You can have that today, in NixOS.


We know that package containerization (a la Apple, or NixOS) didn't scale well either for the vendors/maintainers who want to include their products in multiple OS distros.

And currently it would probably be very difficult to implement signed, immutable binaries across multiple OS distros and multiple vendors.

So the sweet spot of convergence is probably hovering around some hardware/OS/vendor grouping.



