Hacker News | stackghost's comments

Put it in tomato sauce for pasta. Just a tablespoon or so.

Okay I know we're not supposed to complain about downvotes but c'mon it's actually delicious, doesn't taste like fish, and just adds umami. Don't knock it until you try it!

I didn’t downvote you but fish sauce does taste strongly like rancid fish to some people, even in trace quantities. Nothing about that flavor profile is delicious. There is nothing stealthy about it either if you are one of those people; you can immediately detect that disgusting note on the first bite.

I love anchovies and use a lot of them in many of the dishes I cook (including tomato sauce). Fish sauce ruins everything it touches for me. It isn't lack of exposure either; I lived on Vietnamese home-cooking for many years. I eat a lot of weird and pungent things, but I can't fathom why anyone would want to put that fish sauce in their food. Also, some types of fish sauce from around the world don't have this effect for whatever reason.

I’m pretty sure from observation that it is a gene-linked thing, like the cilantro sensitivity. While rare, even some Vietnamese people seem to fall into this group, despite fish sauce being part of their cuisine.


>Business objectives should override engineering policies when the two are in conflict, at least if you're a business owner who wants to make money.

This bush-league attitude is why people insinuate that most software development is not "real engineering".

When Boeing or NASA lets making money get in the way of good engineering practice, people die.


> most software development is not "real engineering".

Most software development doesn't have anywhere near the real world impact of the Boeing/NASA engineering you reference.

Good engineering practice recognizes the risks and scales the effort to match them.

A CRUD app for internal users has a different set of requirements than a revenue generating SaaS app, just like a backyard fence has different building criteria than a highway bridge.


Sure, I understand the stakes are lower for blog plugins than for aircraft.

But being a professional means you do the thing even when the stakes are low. You don't decide to cut corners because you feel like it, or because it's more profitable. Mullenweg is not professional.


That's not what being a professional means at all.

You adjust your approach depending on the stakes. That shouldn't be a controversial take.

You're using "cutting corners" as a pejorative, but ultimately if the stakes are low, you may -- perfectly reasonably -- decide to allocate less time/resources to particular activities, and more to others. You can call that "cutting corners", and you'd be right, but there's nothing necessarily wrong about that: it depends on the circumstances. And there's certainly nothing "unprofessional" about it.

For the mostly-vibe-coded script to reencode a bunch of my own video files to save disk space, I skimmed the result to make sure that it wasn't going to overwrite or delete anything it shouldn't. Cutting corners? Absolutely. Perfectly fine and sufficient? Absolutely.

For the software that I write that I intend to distribute to others, that could cause data loss or other unpleasant problems for them if I get it wrong, I write the code myself, I understand how it works, and I might write tests and/or get someone else to review it, depending on my own judgment of what needs to be done.

Recognizing the difference between the situations in the prior two paragraphs is what it means to be a professional.


>You adjust your approach depending on the stakes. That shouldn't be a controversial take.

At no time have I suggested that one cannot adjust one's approach. That's a straw man you invented.

I'm refuting the point that business considerations should always trump engineering considerations because profit.


Sure, but in this case, the engineering consideration was whether a specific plugin should be added to the list of other suggested plugins. It was literally just a business decision of whether to configure it to be one of the featured options users might want to install.

> But being a professional means you do the thing even when the stakes are low.

Not the way I understand "being a professional." All engineering, and all professions, entail the balancing of interests. There are some hard and fast rules*, like "don't do things that will kill your users." And there are some other things that are more guidelines than absolutes, such as "we don't ship feature changes in release candidates." Serious organizations understand that sometimes guidelines like the latter need to be violated for overriding business purposes.

*Even the "don't kill your users" thing is not an absolute. No car is perfectly safe, for example. We could add three more feet of crumple zone to the front and the back, but we don't, because even in safety, tradeoffs have to be considered.


What does cutting corners have to do with the topic at hand? The situation isn't about devs getting the time to do something right; it's about programmers making a non-engineering decision that was overruled by the business, in the business's best interest. That's perfectly reasonable.

> But being a professional means you do the thing even when the stakes are low.

Being a professional means that you adjust what things you do according to the stakes.

For example, in software dev, you usually have tests for the code. Do you have tests for the tests? No? Why not? Why aren't you doing "the thing?"

In chip development, I usually had tests for the tests, because the stakes were higher. But I didn't usually have tests for the tests for the tests.


"died in a blogging accident"

Although humorous, this also suggests an interesting line of thought that is completely tangential to the engineering discussion.

Blogging accidents, usually involving TikTok videos, selfie sticks, and rugged terrain or wild predators, get all the attention, but they probably pale in comparison to the medical and mental health issues faced by the average blogger.

For comparison, studies of police officers in the US have found that heart disease, cancer, and suicide are the leading causes of deaths.


RIP American democracy, we hardly knew ye.

>Is there some obvious reason not to measure requests per minute rather than second?

It's much less obtuse to say something like "average req/min" or whatever, but then again you can't write a cool blog post about misusing an SI unit for radioactivity and shoving it into a nonsensical context.


I use Rails for many of my side projects. Because of the emphasis on convention over configuration, Rails codebases tend to be succinct with minimal boilerplate, which keeps context windows small. That in turn makes it great for agent-assisted work.

For web stuff, with server-side rendering and partials it means minimal requirement to touch the hot mess that is JavaScript, and you can build PWAs that feel native pretty easily with Hotwire.

Ruby is slow as fuck though, so there's a tradeoff there.


Not really slow since YJIT, I think 3.1?

YJIT is amazing but for me, JRuby and TruffleRuby were the real game changers.

For anything "slow" I can put it in Sidekiq and just run the worker code with TruffleRuby.

I have high hopes for ZJIT but I think TruffleRuby is the project that proves that Ruby the language doesn't have to be slow and the project is still getting better.

If ZJIT, JRuby or TruffleRuby can get within 5-10% of speed of Go without having to rewrite code I would be very happy. I don't think TruffleRuby is far off that now.


Ah yeah I'm only vaguely familiar with Go. Didn't realize the speed differential was this drastic.

Even with yjit it's still more than twice as slow as even Go, to say nothing of C# AOT, which depending on the benchmarks is like 4x as fast.

In the context of breaking into phones and laptops, "state-level actor" usually implies a team of people with NSA-type forensic capabilities. That is, they have deep expertise in infosec and related topics, access to 0days that the security apparatus has hoarded and kept secret for their own use, and they may have bespoke hardware to facilitate attacking the device.

A random cop might have access to a Cellebrite machine but they can't just call up the NSA and ask them to break into some drug dealer's macbook.


Fair enough. Though they certainly could still break in if the laptop isn't encrypted, so this tool is only useful when combined with disk encryption.

Hey maybe he actually is a genius. After all, he committed securities fraud at least twice and so far has suffered zero consequences.

He's not a genius.

The elder generation of politicians he manipulated were educated, in their day, in historical allegory and gospel.

Leadership is ignorant and clueless, with no idea how to "check the work," so to speak. It didn't matter to them: trickle-down made them rich. They ultimately began to encourage it.

Congress is predominantly nihilists who pretend to believe in American norms to secure power.


I decided to treat it like golf and aim for the lowest score possible.

Nice, my personal site is scored as "not ready"! Low score of 17, a personal best!

Maybe they meant "a file-based system" because they all run Plan 9.

Is there a reason why adoption has been so abysmally slow? Like surely all the big players have updated their networking equipment by now, and surely every piece of enterprise-grade kit sold in the last 20 years has supported v6.

The only arguments I've ever heard against ipv6 that made any sense are that:

1: it's hard to remember addresses, which is mayyyyybe valid for homelab enthusiast types, but for medium scale and up you ought to have a service that hands out per-machine hostnames, so the v6 address becomes merely an implementation detail that you can more or less ignore unless you're grepping logs. I have this on my home network with a whopping 15 devices, and it's easy.

and 2: with v6 you can't rely on NAT as an ersatz firewall because suddenly your printer that used to be fat dumb and happy listening on 192.168.1.42 is now accidentally globally-routable and North Korean haxors are printing black and white Kim Il Sung propaganda in your home office and using up all your toner. And while this example was clearly in jest there's a nugget of truth that if your IOT devices don't have globally-routable addresses they're a bit harder to attack, even though NAT isn't a substitute for a proper firewall.

But both of these are really only valid for DIY homelab enthusiast types. I honestly have no idea why other people resist ipv6.


The big reason is that domestic ISPs don't want to switch (not just in the US, but everywhere really.)

Data centers and most physical devices made the jump pretty early (I don't recall a time when the VPS providers I used didn't offer IPv6, and every device I've used in the last two decades has supported it, besides some retro handhelds), but domestic ISPs have been lagging behind. Mobile networks are switching en masse because they are running into the internal limits of IPv4.

Domestic ISPs don't have that pressure; unlike mobile networks (where 1 connection needing an IP = 1 device), they have an extra layer in place (1 connection needing an IP = 1 router and intranet), which significantly reduces that pressure.

The lifespan of domestic-ISP-provided hardware is also completely unbound by anything resembling a security patch cycle, cost amortization, or value depreciation. If an ISP supplies a device, unless it fundamentally breaks to the point where it quite literally doesn't work anymore (basically hardware failure), it's going to be in place forever. It took over 10 years to kill WEP in favor of WPA on consumer-grade hardware. To support IPv6, domestic ISPs would need to do a mass recall of all their ancient tech, and they don't want to, because there's no real pressure to do it.

IPv6 exists concurrently with IPv4, so it's easier for ISPs to make anyone wanting to host things pay extra for an IPv4 address (externalizing an ever increasing cost on sysadmins as the IP space runs out of addresses) rather than upgrade the underlying tech. The internet default for user facing stuff is still IPv4, not IPv6.

If you want to force IPv6 adoption, major sites basically need to stop routing over IPv4. Let's say Google becomes inaccessible over IPv4 - I guarantee you that within a year, ISPs will suddenly see a much greater shift towards IPv6.


It's frustrating that even brand-new Unifi devices that claim to support IPv6 are actually pretty broken when you try to use it. So it could be another 10 years from right now, unless they can patch it up in software.

Interesting, what's broken for you? I have some unifi gear and it handles v6 no problem.

Not parent but:

Prefix delegation is completely broken for example

I had to switch to OpenWRT on my Ubiquiti gear to have something that works (which, on the other hand, is also easier to configure, so I'm probably going to stay there).


Except that is completely wrong. Consumer/residential networks have significantly higher IPv6 adoption rates than corporate/enterprise networks. That is why you see such clear patterns (weekend vs. weekday) in the adoption graphs.

There are still a lot that have not.

Sure, the data plane supports it - but what about the management plane?

I wouldn't be surprised if ISPs did all the management tasks through a 30-year-old homebrew pile of technical debt, with lots of things relying on basic assumptions like "every connection has exactly one ip address, which is 32 bits long".

Porting all of that to support ipv6 can easily be a multi-year project.


This is true. About 10-15 years ago I worked for an old ISP/mobile carrier that started in the 80s. They had basically every system you could think of still running, from decently modern VMware with Windows and Linux to HP-UX, OpenVMS, SunOS, AIX, etc. You could walk around and see hardware 30 years old still going; I think one console router had an uptime of 14 years or so. One time I opened a cabinet and found a Pentium 1 desktop PC on the floor, still running and connected, serving some webpage. The old SMSC from the 80s on DEC hardware was still in its racks, though not operational; they didn't need the space, as the room couldn't provide enough power or cooling for more than a few modern racks.

The planning program for fiber, transmission, racks, etc. required such an old Java that new security bugs didn't apply to it, and it looked and worked like an old mainframe program.

The core network team supported IPv6 for a long time, but that part is rather easy. The hard part is the customer edge and CPE and the stack to manage it, which may have a lifetime of two decades.


> Porting all of that to support ipv6 can easily be a multi-year project.

FWIW, as someone who has done exactly this in a megacorp (sloshing through homebrew technical debt with 32-bit assumptions baked in), the initial wave to get the most important systems working was measured in person-months. The long tail was a slog, of course, but it's not an all-or-nothing proposition.


Comcast actually implemented IPv6 10-15 years ago so that they could unify the management of all of their cable modems. Prior to that, they had many regional networks with modems assigned management IPs in overlapping private IPv4 ranges.

> it's hard to remember addresses

We desperately need a standardized protocol to look up addresses via names. Something hierarchical, maybe.

> with v6 you can't rely on NAT as an ersatz firewall

Why would you not just use a regular firewall? Any device that is able to act as a NAT could act as a firewall, with less complexity at that.


>Why would you not just use a regular firewall?

No idea, but people do it. Every time this comes up on HN there are dozens of comments about how they like hiding their devices behind a NAT, for security.


Just because people regularly bring up a non sequitur doesn't mean there actually is a problem.

"I have a device acting as both a NAT and a stateful firewall, why are you making me switch to IPv6 and in the process drop both the NAT and the stateful firewall?" is a non sequitur.


I think we're talking about two different things, or maybe I just don't understand your reply.

What I'm saying is this: there exist people in the hobbyist space who believe that when their devices have only private IPv4 addresses, such as those in 192.168.0.0/16, this meaningfully increases their network security, and that if their Raspberry Pi has a globally routable v6 address, this weakens it. That belief is bogus, because NAT is orthogonal to network security considerations, but it contributes to IPv6 hesitancy.
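A minimal sketch with Python's stdlib ipaddress (the printer and ULA addresses are made up; the GUA is Cloudflare's public resolver, used purely as an example of a globally routable address) showing that "private vs. global" is just an address property, and that it's the firewall, not the addressing, that provides the security boundary:

```python
import ipaddress

# A NAT-hidden RFC 1918 IPv4 address vs. a globally routable IPv6 one.
printer_v4 = ipaddress.ip_address("192.168.1.42")
gua_v6 = ipaddress.ip_address("2606:4700:4700::1111")  # a real GUA, for illustration
ula_v6 = ipaddress.ip_address("fd12:3456:789a::1")     # ULA: v6's "private" space

print(printer_v4.is_private, printer_v4.is_global)  # True False
print(ula_v6.is_private, ula_v6.is_global)          # True False
print(gua_v6.is_private, gua_v6.is_global)          # False True
```

Note that v6 still gives you non-global addressing (ULA, fc00::/7) if you want it; the difference is that it's opt-in rather than forced on you by address scarcity.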


Has it been abysmally slow? What's the par time for migrating millions of independent networks, managed by as many independent uncoordinated administrators, to a new layer 3 protocol?

We've never done this before at this scale. Maybe this is just how long it takes?


> But both of these are really only valid for DIY homelab enthusiast types. I honestly have no idea why other people resist ipv6.

Simple. The "homelab enthusiast types" are those that usually push new technologies.

This is one they don't care about, so they don't push it. Other people don't care about any technology if it's not pushed on them.


Nothing stops you running a NAT for v6 too; it's just that people tend to choose not to when given the choice.

I set up NAT66 recently with DHCPv6. The IPv4 and IPv6 addresses are practically the same, except IPv6 has a prefix and a double colon as the last separator.

This really should be how SOHO routers do IPv6 out of the box.

Most people don't want 1:1 addressing for their entire home or office.
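For what it's worth, RFC 4193 asks you to build the ULA prefix from a random 40-bit Global ID under fd00::/8 rather than hand-picking something memorable, precisely so two sites are unlikely to collide. A quick sketch in Python (stdlib only):

```python
import secrets
import ipaddress

# RFC 4193: ULA prefix = fd (8 bits) + random 40-bit Global ID -> a /48 to carve up.
global_id = secrets.randbits(40)
prefix_int = (0xFD << 120) | (global_id << 80)  # host bits stay zero
ula48 = ipaddress.IPv6Network((prefix_int, 48))

# Sanity check: always inside the ULA range.
assert ula48.subnet_of(ipaddress.IPv6Network("fc00::/7"))
print(ula48)  # e.g. fd3c:9a2e:71b4::/48 (random each run)
```

NAT66 on top of a prefix like this gives you the familiar IPv4-style setup the parent describes.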


Q on your setup

Are you using a ULA prefix for the NAT66/DHCPv6? Are you also allowing GUA address assignment via SLAAC? I'm wondering how it works out with source selection.


IPv6 is a recursive WTF. It might _look_ like a conservative expansion of IPv4, but it's really not. A lot of operational experience and practices from IPv4 don't apply to IPv6.

For example, in IPv4 each host has one local net address, and the gateway uses NAT to let it speak with the Internet. Simple and clean.

In IPv6 each host has multiple global addresses. But if your global connection goes down, these addresses are supposed to be withdrawn. So your hosts can end up with _no_ addresses. ULA was invented to solve this, but the source selection rules are STILL being debated: https://www.ietf.org/archive/id/draft-ietf-6man-rfc6724-upda...

Then there's DHCP. With IPv4 the almost-universal DHCP serves as an easy way to do network inspection. With IPv6 there's literally _nothing_ similar. Stateful DHCPv6 is not supported on Android (because its engineers are hell-bent on blocking it). And even when it's supported, the protocol doesn't require clients to identify themselves with a human-readable hostname.

Then there's IP fragmentation and PMTU that are a burning trash fire. Or the IPv6 extension headers. Or....

In short, there are VERY good reasons why IPv6 has been floundering.


> For example, in IPv4 each host has one local net address, and the gateway uses NAT to let it speak with the Internet. Simple and clean.

No, that’s not the IPv4 design. That’s an incredibly ugly hack to cope with IPv4 address shortage. It was never meant to work this way. IPv6 fixes this to again work like the original, simpler design, without ”local” addresses or NAT.

> In IPv6 each host has multiple global addresses.

Not necessarily. You can quite easily give each host one, and only one, static IPv6 address, just like with old-style IPv4.


Hyrum's law. That's how IPv4 is being used in practice.

> You can quite easily give each host one, and only one, static IPv6 address, just like with old-style IPv4.

You literally CAN NOT. On Android there's no way to put in a static IPv6 or even use stateful DHCPv6.


> Hyrum's law. That's how IPv4 is being used in practice.

It's still very ugly to mess with the ports that way.

The only clean NAT is 1:1 IP NAT.


> You literally CAN NOT. On Android there's no way to put in a static IPv6 or even use stateful DHCPv6.

Blame the closed and proprietary Android platform for that; not IPv6.


The problem here is the IPv6 design. It has multiple ways of configuration, and ALL of them suck.

Manual address input is clumsy because of IPv6 address length, stateless RA is limited and doesn't allow network introspection, stateless DHCP is pointless, stateful DHCP is not supported by the most widely deployed OS. There's also prefix delegation that needs stateful DHCP.


> For example, in IPv4 each host has one local net address, and the gateway uses NAT to let it speak with the Internet. Simple and clean.

This is a troll right? NAT is a lot of things, but "simple and clean" is definitely not one of them. It causes complications at every step of the process.

Pure IPv6 is so much cleaner.

I will say that DHCPv6 is probably misnamed. It does not fill the same niche as IPv4 DHCP, and this causes a lot of confusion for people who are new to IPv6. It should probably be called DPDP (Dynamic Prefix Distribution Protocol) or something like that. It's for routers, not hosts.

In theory you should be using multicast DNS to find local hostnames, but in practice the tooling around this is somewhat underbaked.
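The routers-not-hosts point is easiest to see with prefix delegation: the ISP delegates, say, a /56 to your router, and the router hands a /64 to each LAN segment. A sketch with Python's stdlib ipaddress (the prefix is from the documentation range, purely illustrative):

```python
import ipaddress

# Hypothetical prefix delegated to the router via DHCPv6-PD.
delegated = ipaddress.IPv6Network("2001:db8:1200::/56")

# The router carves out one /64 per LAN segment for SLAAC to advertise.
lans = list(delegated.subnets(new_prefix=64))
print(len(lans))   # 256 possible /64s from one /56
print(lans[0])     # 2001:db8:1200::/64
```

Individual hosts then self-configure inside those /64s via SLAAC; DHCPv6-PD never talks to them directly.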


> This is a troll right? NAT is a lot of things, but "simple and clean" is definitely not one of them. It causes complications at every step of the process.

I invite you to try this challenge: https://news.ycombinator.com/item?id=47796992

This is something that can be done with consumer-grade routers in _minutes_ with zero configuration from endpoints apart from the usual WiFi password.

NAT is a _superior_ design in practice. It can be chained transparently, it moves all the stateful routing complexity to the border router, it enforces network isolation. And most importantly, IT ACTUALLY WORKS.


> For example, in IPv4 each host has one local net address, and the gateway uses NAT to let it speak with the Internet. Simple and clean.

I assume you mean "interface", not "host". Because it's absolutely not true that a host can only have one "local net address".

EDIT: a brief Google also confirms that a single interface isn't restricted to one address either: sudo ip address add <ip-address>/<prefix-length> dev <interface>


> For example, in IPv4 each host has one local net address, and the gateway uses NAT to let it speak with the Internet. Simple and clean.

If you think NAT is "simple and clean", you may wish to investigate STUN/TURN/ICE. An entire stack of protocols (and accompanying infrastructure) had to be invented to deal with NAT.

Heaven help you if your ISP uses CG-NAT.


I can type entire SIP handshakes from memory. And by now I'm convinced that STUN/TURN are a superior solution to IPv6, even with CGNAT.

Others agree with me. Don't believe me? Try to find a SIP provider in the US that has IPv6 connectivity. Go on. Try it.


>For example, in IPv4 each host has one local net address

Most of my home devices have multiple v4 addresses, not counting 127.0.0.1, so this assumption is incorrect.


> Then there's IP fragmentation and PMTU that are a burning trash fire.

It's not significantly worse on v6 compared to v4. Yes, in theory you can send v4 packets without DF and helpful routers will fragment for you. In practice, nobody wants that: endpoints don't like reassembling and may drop fragments, and routers have a limited CPU budget off the fast path. Fragmenting a too-big packet is off the fast path, so it may be dropped rather than fragmented; and with DF set, an ICMP may not always be sent, and some routers are configured in ways where they can't ever send an ICMP.

PMTUd blackholes suck just as much on v4 and v6. 6rd tunnels maybe make it a bit easier to hit if you advertise mtu 1500 and are really mtu 1480 because of a tunnel, but there's plenty of derpy networks out there for v4 as well.


> but there's plenty of derpy networks out there for v4 as well.

God yes, I've helped so many users on PPPoE by telling them to set their MTU to something lower...


In my case, I set the MTU of the physical NIC to 1508 and kept the PPPoE interface at 1500. Best of both worlds. Needs the ISP to support it though.
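The arithmetic behind those numbers, as a quick sanity check (RFC 4638 "baby jumbo frames" is the feature the ISP has to support for the second variant):

```python
# PPPoE adds 8 bytes inside the Ethernet payload: 6-byte PPPoE header + 2-byte
# PPP protocol ID.
ETH_PAYLOAD = 1500
PPPOE_OVERHEAD = 6 + 2

# Standard setup: shrink the PPP interface MTU to fit inside a normal frame.
ppp_mtu = ETH_PAYLOAD - PPPOE_OVERHEAD
print(ppp_mtu)  # 1492

# RFC 4638 setup: grow the physical NIC instead and keep PPP at a full 1500.
nic_mtu = ETH_PAYLOAD + PPPOE_OVERHEAD
print(nic_mtu)  # 1508
```

The 1492 variant is what triggers the PMTU blackhole complaints upthread; the 1508 variant makes PPPoE invisible to everything above it.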

IPv4 allows fragmentation by middleboxes, which in practice papers over a lot of PMTU issues.

The IPv6 failing was not taking advantage of the new protocol to properly engineer fragmentation handling. But wait, there's more! IPv6 also has braindead extension headers that require routers to do expensive pointer chasing, so packets with them are just dropped in the public Net. So we are stuck with the current mess without any way to fix it.

People are trying: https://datatracker.ietf.org/doc/rfc9268/ but it's futile. It's waaaay too late and too fundamental.


> IPv4 allows fragmentation by the middleboxes, which in practice papers around a lot of PMTU issues.

In theory yes; but actual packets are 99%+ flagged DF. Reassembly is costly, so many servers drop fragmented packets, or have tiny reassembly buffers. Back when I ran a 10G download server, I would see about 2 fragmented packets per minute, unless I was getting DDoSed with chargen reflection, so I would use a very small reassembly buffer and that avoided me burning excessive cpu on garbage, while still trying to handle people with terrible networks.

Router fragmentation is also expensive and not fast path, so there's pretty limited capacity for in path fragmentation.

I think I agree with you, that RFC you linked seems awfully hopeful... unlikely to actually happen. Better endpoint probing is probably where we're going to end up. Or things like QUIC where if you don't have the required minimum MTU, too bad so sad.


> For example, in IPv4 each host has one local net address, and the gateway uses NAT to let it speak with the Internet. Simple and clean.

That's only true for smalltime home networks. Try to merge 2 company IPv4 networks with overlapping RFC1918 ranges like 10.0.0.0/8. We'll talk again in 10 years when you are done sorting out that mess ;)
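The merge pain is easy to demonstrate with stdlib ipaddress (addresses made up; the two ULA prefixes stand in for two companies that each generated their own random RFC 4193 Global ID):

```python
import ipaddress

# Two companies that both picked 10.0.0.0/8 internally: guaranteed collision.
company_a = ipaddress.ip_network("10.0.0.0/8")
company_b = ipaddress.ip_network("10.20.0.0/16")
print(company_a.overlaps(company_b))  # True

# Two ULA sites with independently generated 40-bit Global IDs collide with
# probability on the order of 2**-40 instead.
site_a = ipaddress.ip_network("fd3c:9a2e:71b4::/48")
site_b = ipaddress.ip_network("fd91:0c55:e2a7::/48")
print(site_a.overlaps(site_b))  # False
```

With v6 you can route between the two sites as-is; with overlapping RFC 1918 space you're renumbering or double-NATting for years.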

> In IPv6 each host has multiple global addresses. But if your global connection goes down, these addresses are supposed to be withdrawn. So your hosts can end up with _no_ addresses.

Only a problem for home users with frequently changing dialup networks from a stupid ISP. And even then: Your host can still have ULA and link-local addresses (fe80::<mangled-mac-address>).

> ULA was invented to solve this, but the source selection rules are STILL being debated: https://www.ietf.org/archive/id/draft-ietf-6man-rfc6724-upda...

RFC6724 is still valid, they are only debating a slight update that doesn't affect a lot.

> Then there's DHCP.

DHCPv6 is an abomination. But not for the reasons you are enumerating.

> With IPv4 the almost-universal DHCP serves as an easy way to do network inspection.

IPv4 DHCP isn't a sensible means to do network inspection. Any rogue client can steal any IP and MAC address combination by sniffing a little ARP broadcast traffic. Any rogue client can issue itself any IPv4 address, and even well-behaved clients will sometimes use 169.254.0.0/16 (APIPA) if they somehow didn't see a DHCP answer. If you want something sensible, you need 802.1X with some strong cryptographic identity for host authentication.

> Stateful DHCPv6 is not supported on Android (because its engineers are hell-bent on preventing IPv6).

Yes, that is grade-A-stupid stubbornness. On the other hand, see below for the privacy-hostname thing in IPv4 and the randomized privacy MAC addresses that mobile devices use nowadays. Even if Android implemented stateful DHCPv6, you would never be able to reliably track mobile devices on your network, because all those identifiers will be randomized and any "state" will only last a short time. If you want reliable state, you need secure authentication, like 802.1X on Ethernet or WPA-Enterprise on Wi-Fi, and then bind that identity to the addresses assigned/observed on that port.

> With IPv6 there's literally _nothing_ similar.

Of course there is. DHCPv6 can do everything that IPv4 DHCP can do (by now; it took some time until they e.g. included MAC addresses as an option field). But in the case of clients like Android that don't do DHCPv6 properly, you still have better odds in IPv6: IPv6 nodes are required to implement multicast (unlike in IPv4, where multicast was optional). So you can find all the nodes in some network scope just by issuing an all-nodes link-local multicast ping on an interface, like:

  ping6 ff02::1%eth0

There are also other scopes, like site-local:

  ping6 ff05::1%eth0

https://www.iana.org/assignments/ipv6-multicast-addresses/ip...

(The interface ID (like eth0, eno1, "Wired Network", ...) is necessary here because your machine usually has multiple interfaces and all of those will support those multicast ranges, so the kernel cannot automatically choose for you.)

> And even when it's supported, the protocol doesn't require clients to identify themselves with a human-readable hostname.

DHCP option 12 ("hostname") is an option in IPv4. Clients can leave it out if they like. There is also such a thing as "privacy hostname" which is a thing mobile devices do to get around networks that really want option 12 to be set, but don't want to be trackable. So the hostname field will be something like "mobile-<daily_random>".

What you skipped are the really stupid problems with DHCPv6 which make it practically useless in many situations. DHCPv6 by default doesn't include the MAC address in requests. DHCPv6 forwarders may add that option, but in lots of equipment this is still a very recent addition (though the RFC is 10 years old by now). So if you unbox some new hardware, it will identify itself by some nonsensical hostname (useless), an interface identifier (IAID; useless, because it may be derived from the MAC address, but it may also be totally random for each request) and a host identifier (DUID; useless, for the same reason). What's even more stupid, the IAID can be derived from a MAC address that belongs to a different interface than the one the request is issued on.

So in the big-company use case of unboxing 282938 new laptops with a MAC address sticker, you've got no chance whatsoever of finding out which is which, because neither IAID nor DUID are in any way predictable. You'll have to boot the installer, grab the laptop's serial number somewhere in DMI, and correlate with that sticker: tons of extra hassle and fragility, because the DHCPv6 people thought that nobody should use MAC addresses anymore...


> That's only true for smalltime home networks. Try to merge 2 company IPv4 networks with overlapping RFC1918 ranges like 10.0.0.0/8. We'll talk again in 10 years when you are done sorting out that mess ;)

Look, I've been doing IPv6 for 20 years, starting with a 6to4 tunnel and then moving to HE.net before getting native connectivity. I'm probably one of the first people who started using Asterisk for SIP on an actual IPv6-enabled segmented network.

I _know_ all the pitfalls of IPv6 and IPv4. And at this point, I'm 100% convinced that NAT+IPv4 is not just an accidental artifact but a better solution for most practical purposes.

> What you skipped are the really stupid problems with DHCPv6 which make it practically useless in many situations: DHCPv6 by default doesn't include the MAC address in requests.

Yes. DUIDs were another stupid idea. As I said, IPv6 is a cascade of recursive WTFs at every step of the way.

And let me reiterate: I'm not interested in academic "but acshually" reasons. I know that you can run IPv4 with DHCP giving out publicly routable IPv4 addresses to every host in the internal network without NAT. Or that you can do NAT on IPv6, or laboriously type static IPv6 addresses into your config.

What matters is the actual operational practice. Do you want a challenge? Try to do this:

1. An IPv6 network for a small office with printers, TVs, and perhaps a bunch of smart lightbulbs.

2. With two Internet uplinks. One of them a cellular modem and another one a fiber connection.

3. You want failover support, ideally in a way that does not interrupt Zoom meetings or at least not for more than a couple of seconds.

4. No NAT (because otherwise why bother with IPv6?).

Go on, try that. This is something that I can do in 10 minutes using an off-the-shelf consumer/prosumer router and IPv4. With zero configuration for the clients, apart from typing the WiFi password.


Well, I can do that with OpenWRT. No idea which prosumer devices already implement this, but it isn't rocket science: Announce the Prefix of the currently active connection, invalidate the other one. This will interrupt all your TCP connections, but they are toast anyway; most software should handle this just fine. It's quite the same as a WiFi-to-cellular handover.
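With e.g. radvd, that announce-and-deprecate step looks roughly like this (interface name and prefixes are made up, and hosts are free to not honor the fast invalidation):

```
# radvd.conf sketch (interface and prefixes are hypothetical):
# deprecate the dead uplink's prefix by announcing it with a zero
# preferred lifetime while the surviving uplink's prefix stays preferred.
interface br-lan {
    AdvSendAdvert on;
    prefix 2001:db8:1::/64 {        # prefix of the failed uplink
        AdvPreferredLifetime 0;     # deprecated: not used for new connections
        AdvValidLifetime 7200;      # RFC 4862 sec. 5.5.3 limits how fast this can drop
    };
    prefix 2001:db8:2::/64 {        # prefix of the active uplink
        AdvPreferredLifetime 1800;
        AdvValidLifetime 7200;
    };
};
```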

> Announce the Prefix of the currently active connection, invalidate the other one

And this doesn't actually work in practice. Prefix deprecation is a best-effort feature that is not implemented correctly in tons of devices, including such rarely used niche operating systems as macOS. Rapidly invalidating a prefix even technically violates RFC 4862 (section 5.5.3), which says hosts must ignore an unauthenticated RA that tries to cut a prefix's remaining valid lifetime below two hours.

As usual, IETF only recently woke up to that reality: https://datatracker.ietf.org/doc/html/rfc8978

I highly recommend actually trying what I proposed. Not in a theoretical hand-wavy way, but actually setting it all up and verifying that it works. I did not pose this challenge in a "gotcha" way. I really was not able to make it work cleanly with either Mikrotik or OpenWRT routers.


How do the working IPv6 deployments cope with these issues?

The simple answer is: they just don't deploy IPv6.

These days you can use ULA and third-party monitoring tools instead of DHCP.


The reason: Skill issue.

"Is there a reason why adoption has been so abysmally slow?"

Just the obvious one: the people who designed IPv6 didn't design for backwards compatibility.


How so? The same working group published e.g. https://www.rfc-editor.org/rfc/rfc1933, and it's hard to see how v6 could have been designed for backwards compatibility in ways that it wasn't already.

I've asked lots of people to describe a more backwards-compatible design, and generally the best they can manage is to copy the way v6 does things, ending up with the same problems v6 has. This has happened so often that the only reasonable conclusion is that it can't really be done any better than it was.


> Just the obvious one: the people who designed IPv6 didn't design for backwards compatibility.

Nor for easy transition.


> 1: it's hard to remember addresses

fd::1 is a perfectly valid internal IPv6 address (along with fd::2 ... fd::n)


fd::1 is somewhere in the reserved ::/8 space, where various things like the old IPv4-mapped addresses and localhost reside. What you probably mean is something like fd00::1, but you shouldn't use that either, because fd00::/8 is the unique local address (ULA) block, which is only probabilistically unique: you are supposed to create a /48 prefix by appending 40 random bits to fd00::/8. Of course, if your fair dice roll lands on all zeroes, and you are OK with probable collisions in case of a network merge, you are fine ;)
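Generating those 40 random bits is a one-liner; a sketch of the RFC 4193 scheme (the `random_ula_prefix` helper is made up for illustration):

```python
# Sketch of RFC 4193 ULA generation: fd00::/8 plus a 40-bit random
# "Global ID" gives a probabilistically unique /48 for your site.
import ipaddress
import secrets

def random_ula_prefix() -> str:
    global_id = secrets.token_bytes(5)  # the 40 random bits
    groups = [
        0xFD00 | global_id[0],              # first group: fdXX
        (global_id[1] << 8) | global_id[2],
        (global_id[3] << 8) | global_id[4],
    ]
    prefix = ":".join(f"{g:x}" for g in groups) + "::/48"
    # Sanity check: the result must land inside the ULA block.
    assert ipaddress.IPv6Network(prefix).subnet_of(ipaddress.IPv6Network("fd00::/8"))
    return prefix

print(random_ula_prefix())  # e.g. fd3a:9c41:7b02::/48
```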

In home networks, the idea of merging with someone else's network is... most certainly not worth worrying about. Maybe you marry someone or become roommates with someone who also picked fd00::/8? And you still want two separate subnets? Other than that I don't see a scenario where it matters.

Granted, if you're doing this in a corporate setting (where merging with someone else's address space is a lot more realistic), then yes definitely pick a random 40 bits. But at home? Who cares. Same as using 192.168.1.0/24 instead of a random 10.0.0.0/24 subnet... it's not worth worrying about.


My router and my girlfriend's router (in different flats) are connected to each other with a WireGuard tunnel, so I can print on her printer. Non-colliding addresses make this a lot easier.

But yes, renumbering also isn't a lot of work.
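For what it's worth, the tunnel itself is only a few lines of WireGuard config per router. Everything below (keys, endpoint, both ULA prefixes) is made up for illustration:

```ini
# /etc/wireguard/wg0.conf on router A (all values hypothetical)
[Interface]
PrivateKey = <router-A-private-key>
Address = fd12:3456:789a:1::1/64
ListenPort = 51820

[Peer]
PublicKey = <router-B-public-key>
Endpoint = router-b.example.net:51820
# Route the other flat's entire ULA /48 through the tunnel;
# this only works cleanly because the two /48s don't collide.
AllowedIPs = fd9b:8765:4321::/48
PersistentKeepalive = 25
```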


> Like surely all the big players have updated their networking equipment by now

My home isp can't even do symmetrical gigabit, let alone ipv6...


That's extremely common unless you're on "active" fiber (vs GPON, DOCSIS 3, DSL, most fixed wireless, satellite, mobile, etc.)

Your wifi isn't symmetrical either.


Those are designed to have a static asymmetrical bandwidth though; the *dm split gives the ISP side more of the possible shared bandwidth. Wifi bandwidth is shared and dynamic, so a client can use all of it.

> Those are designed to have static asymmetrical bandwidth though

Yes, that's why I said that?

> *dm split

No idea what you're trying to say here.


Ignore all the excuses like longer addresses and incompatible hardware. The actual reason is that everyone hates change.
