So I am in this situation right now. My father-in-law is flying here (Houston) from the Philippines, and one of the cheapest flights is with Qatar Airways, which connects in Doha. The Doha->IAH leg normally flies directly over Ukraine (https://flightaware.com/live/flight/id/QTR713-1647752880-sch...). Looking at FlightAware, recent flights have been adjusted to fly just west of the Ukrainian border, but that's still too close for comfort for me.
This. I honestly think the FCC will have to mandate its adoption and give a hard date for the termination of IPv4 for it to work. Both will need to occur.
Yeah, NTL/Virgin Media in the UK does the same: their IPs geolocate to where the node/head end is. In a city that's not going to be specific enough to uniquely identify you, but it's still weird seeing ads for places that aren't that far away.
On the other hand, the IPv4/v6 addresses on my A&A connection geolocate to either London or Bracknell (where their office is), about 400 miles away. I get a lot of pointless ads for things in Surrey that I have no intention of visiting.
I have never used Google Search, but the other day someone used it in front of me, and at the bottom I saw something like "PIN code for approximating your current location for local results". That scared me big time, because it was my home PIN code; my small city has maybe 30 of them, so this narrows me down to a single one, which I am not comfortable with.
Right, but is Google doing this with the information they get from your IP address or something else entirely? Is it just coincidence that your IP address corresponds to your ISP’s office which happens to be relatively local?
With loose enough permissions your browser has a geolocation API that, depending on your device, will be a hell of a lot more accurate (if you have Wi-Fi hardware it can use that to work out where it is relative to the known locations of the SSIDs it can see, or straight-out use GPS).
None of this has anything to do with IPv6 - you give away some location information with your username and profile on this very site, for example.
I believe Google has its own IP geolocation database, likely seeded from all their apps that have location access: the location given at the bottom of the search results page is always far more accurate than any other IP geolocator I've seen, and there are others on my Wi-Fi network who use Google services with location enabled.
I've worked in the cloud hosting industry for a decade and a half. The entire time, we were warned about the IPv4 shortage and how we needed to switch to IPv6 soon(tm). Well, things haven't changed. Everyone is dragging their feet on IPv6 adoption from hosting providers, ISPs, hardware manufacturers, and software developers. I predicted this years ago and always said that it would require a government mandate to move on from IPv4.
I honestly believe we are going to ramp up NAT in the coming years before really doing away with IPv4.
Some countries did exactly that, China for example. Most of the infrastructure, ISP networks, and even user applications here are now IPv6, or ought to be within a few years [1].
Also, when your country's population is such that the entire IPv4 address space would allow only about three addresses per resident, and that's before subtracting all the reserved and multicast ranges...
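The arithmetic behind that claim is quick to check (using a rough 1.4 billion population figure, and ignoring reserved ranges as the comment does):

```python
# Back-of-the-envelope: the entire IPv4 space shared across China's population.
total_ipv4 = 2 ** 32              # ~4.29 billion addresses, ignoring reserved ranges
population = 1_400_000_000        # rough recent figure for China

per_person = total_ipv4 / population
print(f"{per_person:.2f} addresses per resident")  # ~3.07
```

Subtracting reserved, multicast, and private space only pushes that number lower.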
Benevolent leader is the best case of government, it is just improbable and of course it is too risky for any dissenter, and the successor is never as good. So people go for inclusive forms of government, which produces average case results more often.
NAT is ramping up on client side. Many home-internet connections are now NATted twice - in CPE, then again in CGN.
On the server side, in contrast, NAT is winding down. 15 years ago it was common to have either DMZ-style NAT, or on AWS you had to have NAT (they call it EIP). Nowadays, having a CDN or cloud-native load balancer in front of your server is increasingly common, and behind those, the server just doesn't need a public IP (maybe only a shared outbound NAT for OS updates). That is, if you have a server at all (and haven't moved to Lambda, S3, etc...)
It’s hard to tell sometimes what is going on. I just learned for instance that the cable modem provided by Comcast switched to NAT - and my router is also doing NAT - and my business firewall also does NAT. So at least 3 layers now.
If they are doing CGNAT further into the infrastructure, how would I even be able to tell at this point? I’m assuming someone would also block ICMP just so it would be less embarrassing, but who knows.
Comcast does generally seem to be moving towards IPv6 at least, which is helpful.
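On the "how would I even be able to tell" question: one common heuristic is that CGNAT deployments use the shared address space 100.64.0.0/10 reserved by RFC 6598, so a router WAN address in that range (or any private address that differs from your public IP) suggests another NAT layer upstream. A minimal sketch of that check:

```python
import ipaddress

# RFC 6598 shared address space, reserved specifically for carrier-grade NAT
CGN_RANGE = ipaddress.ip_network("100.64.0.0/10")

def looks_like_cgnat(wan_ip: str) -> bool:
    """Heuristic: a WAN address in 100.64.0.0/10 strongly suggests CGNAT.
    A private (RFC 1918) WAN address that differs from your public IP
    also implies an extra NAT layer upstream."""
    addr = ipaddress.ip_address(wan_ip)
    return addr in CGN_RANGE or addr.is_private

print(looks_like_cgnat("100.72.13.5"))   # True: classic CGN address
print(looks_like_cgnat("8.8.8.8"))       # False: ordinary public address
```

It's only a heuristic; an ISP could in principle run CGN on other address space, in which case comparing your WAN IP against a what-is-my-IP service is the fallback.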
How do ipv6-only customers reach ipv4 hosts? Wouldn't some 6to4 gateway count as CGN?
I've had this problem in the past with Vodafone: sometimes their AFTR (?) would go down, but all IPv6-enabled hosts were still reachable. Only the IPv4 internet was unreachable. It took months for me to figure that out, and I still don't know any workaround in case it happens again.
Every modem provided by Comcast supports dual stack broadband and IPv6 only for management by default. The latter is transparent to customer and is for internal use only. IPv6 only for management has no impact on dual stack broadband. If your modem is in bridge mode (Wi-Fi router functionality disabled) then you need to ensure that your broadband router supports IPv6 specifically DHCPv6 for the acquisition of IA-NA and IA-PD.
Highly unlikely we will ever see the day IPv4 is not used at all. There are too many legacy systems in place, so dual stack will always be required. The value of IPv4 may drop as it relates to the price people pay; however, it will play a key role for decades to come.
As for the government mandate, also not possible. It would take our major ISPs over a decade to make this work, and the lobbyists would never allow it.
With that said, the DoD did make an interesting decision recently, putting 175 million of its IPs into the routing tables.
I spent some time trying to upgrade my home network to primarily-IPv6 (mainly so I could more easily address internal computers from the outside). I was pretty unimpressed with the results; I expect to have to run dual stack for the foreseeable future.
I just don't get it. We already have regular hygiene programs to remediate legacy stuff: remove weak encryption methods, scan for CVEs and patch old versions, etc. IPv6 isn't any harder to use than IPv4 except for storing a larger IP address. Really, there's no excuse, and that goes double for anyone using a modern stack instead of legacy.
All this is because IPv6 addresses are too long. If they’d made it 48 or 64 bits we would be fully converted by now. We are dragging because people hate using it.
I’ve been saying this for years. Nobody gets it because geeks don’t get ergonomics.
I've said it for years too. It's not JUST because they're long. Years ago (and maybe even today?) there were also hardware issues with keeping large sets of addresses for routing. I'm not an expert on this, but I seem to remember reading that larger ISPs couldn't keep all their routing rules in memory because of IPv6 address sizes (maybe I'm WAY off).
But, yes, generally, you're right. It's been seen from the very beginning as "a big move". If every address A.B.C.D was addressable as 0.A.B.C.D, and we opened up another 255 * 4 billion addresses... we'd have been converted a long time ago. And we'd have been better at actually implementing 'upgrades' because they'd be already done/completed - it wouldn't be a 'monumental task(tm)'.
We don't need every atom in the universe to be able to have 16 public addresses.
> (I'm not an expert on this - I seem to remember reading about this years ago - larger ISPs not being able to keep all their routing rules in memory because of IPv6 address sizes - maybe I'm WAY off).
In modern routers (the last 10-15ish years), routing table capacity has been roughly the same for IPv4 and IPv6.
Modern, ISP-grade routers have their control and forwarding planes separated onto different (usually redundant) hardware components.
The control plane is responsible for keeping the state of routes (which routes do I receive from a routing protocol? what is my next hop according to rule XYZ? etc.).
The forwarding plane is responsible for forwarding packets across interfaces.
Forwarding lookups happen against the forwarding table, and a lookup is almost never for a single dedicated address (especially in IPv6). Lookups happen at the subnet level, and IPv6 has a "standard" subnet size that leaves half of the address space for the subnet itself: the first 64 bits are used for network differentiation, while the remaining 64 bits are used to create host-specific addresses.
This cuts down on TCAM size considerably, because the router doesn't need to store 128 bits of information per host, but only a 64-bit prefix (plus its length) covering a very large group of hosts.
Besides this, IPv6 has another advantage: fragmenting routes is far more difficult than in IPv4.
Usually, organisations get a /56, the ISP usually handles /48's and RIPE/IANA etc work with /32.
This all keeps the IPv6 routing table far smaller than the IPv4 routing table, which was one of the reasons IPv6 was invented in the first place.
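The hierarchy described above can be sketched with Python's `ipaddress` module. The prefix sizes (/32 -> /48 -> /56 -> /64) are the ones the comment names; real RIR and ISP policies vary, and 2001:db8::/32 is just the documentation prefix standing in for a real allocation:

```python
import ipaddress

# Hypothetical allocation chain: RIR /32 -> ISP /48 -> customer site /56 -> LAN /64
rir_block = ipaddress.ip_network("2001:db8::/32")   # documentation prefix as a stand-in

isp_allocs_per_rir  = 2 ** (48 - 32)   # /48s inside a /32
sites_per_isp_alloc = 2 ** (56 - 48)   # /56s inside a /48
lans_per_site       = 2 ** (64 - 56)   # /64s inside a /56

print(isp_allocs_per_rir)    # 65536
print(sites_per_isp_alloc)   # 256
print(lans_per_site)         # 256

# A single routing-table entry for the /32 covers every one of those subnets:
customer = ipaddress.ip_network("2001:db8:ab:cd00::/56")
print(customer.subnet_of(rir_block))  # True
```

That last check is the whole point: the global table only needs the aggregate, not the millions of more-specific prefixes under it.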
> But, yes, generally, you're right. It's been seen from the very beginning as "a big move". If every address A.B.C.D was addressable as 0.A.B.C.D, and we opened up another 255 * 4 billion addresses... we'd have been converted a long time ago. And we'd have been better at actually implementing 'upgrades' because they'd be already done/completed - it wouldn't be a 'monumental task(tm)'.
Would this actually change the amount of "monumentalism" in switching IPv4 for something else? Backwards compatibility with larger address sizes (be it 128 bits, 33 bits, or whatever) is not possible, because IPv4 stacks can only handle a 32-bit address space. Updating those is about as monumental a task as implementing IPv6, considering you would still need two network-layer stacks on each device to handle both IPv4 and the "IPv4+" version.
> in modern (last 10 - 15 ish years) routing table size has been roughly the same for IPv4 and IPv6.
Really? I see 700k routes v4 and 70k v6 routes.
IPv6 will keep routing table size smaller since they can preallocate HUGE subnets to every AS (AS is what people would call an ISP pretty much) so that they only have to split their subnets by geolocation.
What I meant to say was that in modern routers, the theoretical IPv4 and IPv6 routing table sizes can be the same. There is no difference in the maximum number of routes in the routing table between the two protocols.
> If every address A.B.C.D was addressable as 0.A.B.C.D, and we opened up another 255 * 4 billion addresses... we'd have been converted a long time ago.
That has nothing to do with the address being long, but with being compatible.
In designing ZeroTier I put a ton of effort into creating a secure P2P layer with addresses that are only 40 bits long. This effort continues with new solutions being worked on to maintain security while allowing more openness and federation.
It would have been much easier to use long addresses that are long hashes of keys. Having only 40 bits means we need two layers of defense in depth to prevent intentional collision: a work function to make the cost substantial (about USD $8M per collision on today’s public cloud) and a single source of truth for lookup that still supports federation. You could punt on all that with 128 or 256 bit addresses.
Yet I did it because I was quite aware that it was very necessary for usability. I have had many people tell me they love that they can type a ZeroTier address.
I would bet anyone that if the addresses had been gigantic we’d have 1/10 the adoption.
Software is first and foremost for people to use. Most of the complexity in software exists for this reason.
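Rough numbers behind that 40-bit trade-off (my back-of-the-envelope arithmetic, not ZeroTier's published figures): brute-forcing a key that hashes to one chosen 40-bit address takes on the order of 2^40 key generations, while a collision between some pair of random addresses becomes likely around the birthday bound of 2^20 participants:

```python
import math

ADDR_BITS = 40
space = 2 ** ADDR_BITS                  # ~1.1e12 possible addresses

# Brute-forcing a key whose address matches one chosen target:
# on average, about the whole space's worth of keygen+hash attempts.
expected_tries_targeted = space

# Birthday bound: a collision between *some* pair of random addresses
# becomes likely once roughly sqrt(space) addresses exist.
birthday_threshold = math.isqrt(space)  # 2**20 = 1,048,576

print(f"{expected_tries_targeted:.3e}")  # ~1.100e+12
print(birthday_threshold)                # 1048576
```

This is why the defense-in-depth layers mentioned above (a work function plus an authoritative lookup) are needed at 40 bits, while 128- or 256-bit hash-based addresses could skip them.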
ZeroTier has a flat address space governed by a single algorithm. The Internet is a loose hierarchy of independently-managed networks. These problems have quite different addressing requirements.
Analogy: ZeroTier is to https://plus.codes/ as IPv6 is to mailing addresses. A mailing address is pretty long, but you can use its structure to route the mail efficiently.
The Internet is governed by a single algorithm: IP routing. Short IP addresses are a lot easier than short cryptographic addresses.
Adding 16 or 32 more bits to IPv4 would have been trivial. The existing IPv4 address space becomes 0.0.n.n.n.n or perhaps 0.n.n.n.n.0 if you wanted to give every existing IP 256 addresses to assign while also multiplying the IP space by 256.
You're describing 6to4, where the existing IPv4 address space becomes 2002:nnnn:nnnn::/48. You can treat the 80 bit suffix as 8 bits when designing a network.
Problem is, stacking the new protocol on top of IPv4 was never very reliable, so 6to4 is mostly dead now. It would've worked a bit better if the Internet had used 2002::/16 exclusively.
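The 6to4 mapping is mechanical enough to show in a few lines; Python's `ipaddress` module even exposes the reverse direction via the `sixtofour` property (192.0.2.1 here is just a documentation address):

```python
import ipaddress

def to_6to4_prefix(v4: str) -> ipaddress.IPv6Network:
    """Embed an IPv4 address in the 2002::/16 space: each v4 host gets a /48."""
    v4_int = int(ipaddress.IPv4Address(v4))
    # Top 16 bits are 0x2002, the next 32 bits are the IPv4 address.
    return ipaddress.IPv6Network(((0x2002 << 112) | (v4_int << 80), 48))

prefix = to_6to4_prefix("192.0.2.1")
print(prefix)  # 2002:c000:201::/48

# Round trip: the stdlib can extract the embedded IPv4 address again.
host = ipaddress.IPv6Address("2002:c000:201::1")
print(host.sixtofour)  # 192.0.2.1
```

This is also a concrete picture of the "80-bit suffix you can treat as small" point: everything after the embedded IPv4 address is yours to subnet.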
Adding 16 bits or 32 bits doesn't matter: The networking stack of every device would still need to be updated to understand the new address structure (just like IPv6!) You can't magically fit 48 bits in a 32 bit field.
IPv6 was the correct long term approach. You wouldn't want to pick only 48 bits and have to do this again in 20 years.
Yes. I'm saying if we had to update every device anyway, we might as well do it right and not some short term solution (48-bit addressing or whatever.)
IMO it's because they used stupid colons in the syntax instead of sticking with periods. Nobody likes hitting the shift key, especially so rapidly and while typing numbers.
DNS names already conflict with v4 addresses, and we deal with that ambiguity just fine.
For an actual conflict, someone would need to be using hostnames that had at least 16 segments, none of which were longer than 4 characters. Putting the burden on someone who wants to use extremely deep hostnames that look like bare IP addresses to type a trailing . on their hostname seems plenty reasonable to me. And if they want to use resolv.conf:search while still typing in 16 segments of a hostname, then that ambiguity could be resolved with a leading period.
I suspect the real reason is people who wanted to be able to write ad-hoc parsers using strchr().
We deal with it by requiring v4 addresses to be entirely numeric, which... well, it's possible for v6 but would make it even more annoying to type v6 addresses out.
No, that is not how it is dealt with. A DNS hostname can be entirely numeric as well. For example, add 'search in-addr.arpa' to your resolv.conf.
We deal with the ambiguity by making it clear that if you expect to use DNS names that look like IPv4 addresses, you're going to experience the pain of unexpected behavior. I see no reason this general expectation couldn't also have been set for 16-segment hostnames that look like hexadecimal IP addresses.
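The disambiguation convention being described can be sketched in a few lines: if a string parses as a valid IPv4 literal, it is treated as an address and never looked up in DNS, which is exactly why all-numeric hostnames get the "unexpected behavior" end of the deal:

```python
import ipaddress

def classify(name: str) -> str:
    """Mimic the common resolver convention: a string that parses as an
    IPv4 literal is treated as an address, never looked up in DNS."""
    try:
        ipaddress.IPv4Address(name)
        return "ipv4 literal"
    except ValueError:
        return "hostname"

print(classify("192.0.2.7"))    # ipv4 literal
print(classify("4.3.2.1"))      # ipv4 literal: an all-numeric DNS name loses
print(classify("300.2.3.4"))    # hostname: 300 is out of range for an octet
print(classify("example.com"))  # hostname
```

The same "address literal wins" rule could have been extended to a hypothetical period-separated 16-segment hex form, with deep numeric hostnames disambiguated by a trailing or leading dot as suggested above.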
Alternatively, a full IPv6 address without any '..' abbreviation could have been defined to start with a period. Then there would be no ambiguity.
It's way cheaper to retire back in the home country after you've made your money in the states. A side benefit is that you can rent out the property before retirement which nets you a small income and keeps the property maintained. My in-laws own about a dozen properties in the Philippines, half of which they rent out, and a couple are being used by their kids. Collectively they make enough from the rentals to get by.
I have a different perspective. Having gas during freeze in Houston allowed us to boil water while the power is out. It not only allowed us to have warm water for taking a bath it also allowed us to boil the water to make it potable as the water was unsafe to drink during that time. Had we not had gas it would have been much worse for us.
This is a really good point: the cities that are banning natural gas in new construction have not, as far as I know, invested in more reliable electric infrastructure to compensate for the lost redundancy. Are they considering the consequences of eliminating a redundant energy supply and leaving a single point of failure? More people will inevitably freeze to death, and that human cost should be accounted for, along with the climate crisis, in finding the best path forward.
BTW, in my experience natural gas infrastructure is no more reliable than electricity. In winter 2011, our gas was shut off across northern New Mexico for a week, due to high demand under record cold temperatures and the need to retain pressure in the pipeline so it could function at all. Wood saved the day then, and seems to me a more distributed and robust emergency solution. Wood is super dirty of course, so it seems plausible that retaining residential gas hookups as a backup to the electric supply could be better for the environment by reducing wood use. Have the cities that are banning natural gas done this analysis?
For safety gas needs to be shut off during many emergencies. Solar panels and battery backups are probably better as they are less dangerous and less prone to common-mode failures than gas. As you mention, a woodpile is also a nice low-tech and low-risk backup.
Unless you have a modicum of electrical knowledge, a willingness to break some small safety regulations/laws, or, lacking the former, a thirst for danger.
Having grown up in an area that commonly has power outages, a gas range is a must for me for all of those reasons. But they are secondary to how much nicer gas is to cook on than any electric range I have tried.
Makes sense. It seems like it would be best to create a fault tolerant society and have multiple forms of energy production instead of going all in on a single one. This can be done in an eco-friendly way, too. I don't think there will be one energy "winner" unless there's a huge breakthrough. We're one big solar flare away from needing paper and pencil for a few weeks.
On the other hand, on the west coast the likely cause of water contamination issues is a major earthquake. In which case gas would most likely be out longer than power.
The only reliable solution is to have a backup that's not reliant on infrastructure, i.e., everyone should have a camp stove and a couple of bottles of propane for it, or an alternative self-reliant solution that's good for three or so days.
The lawsuit is not ridiculous. These are marketed as games of skill, which implies there's always the ability to win on every turn. This is obviously false, and people lose tons of money on this thinking it's fair.