As your parents age, you should convince them to transfer their assets into a trust where they still maintain control, but withdrawals can optionally require approval by a spouse or another family member. The trust has many other benefits, but it is especially good protection against fraud, since it can disassociate the holder's identity from the assets and impose specific conditions on withdrawals. It can also provide a clean transfer of ownership in the event of a death. I am sorry this happened to you; it is becoming more common in the US too. And all of these "companies" seem to establish bank accounts and addresses in Delaware…
IPv4 isn't perfect, but it was designed to solve a specific set of problems.
IPv6 was designed by political process: go around the room and solve each engineer's pet peeve in turn to rally enough support to move the proposal forward. And once a bunch of computer people realized how hard politics was, they swore never to do it again and made the address size so laughably large that the problem was "solved" once and for all.
I firmly believe that if they had adopted any other strategy where addresses could be meaningfully understood and worked with by the least skilled network operators, we would have had "IPv6" adoption 10 years ago.
My personal preference would have been to open up class E space (240-255.*) and claw back the 6 /8s Amazon is hoarding, be smarter about allocations going forward, and make fees logarithmic based on the number of addresses you hold.
Only if by "political process" you mean a bunch of people got together (physically and virtually) and debated the options and chose what they thought was best. The criteria for choosing IPng were documented:
> I firmly believe that if they had adopted any other strategy where addresses could be meaningfully understood and worked with by the least skilled network operators, we would have had "IPv6" adoption 10 years ago.
The primary reason for IPng was >32 bits of address space. The only way to make them shorter is to have fewer bits, which completely defeats the purpose of the endeavour.
There was no way to move from 32-bits to >32-bits without every network stack of every device element (host, gateway, firewall, application, etc) getting new code. Anything that changed the type and size of sockaddr->sa_family (plus things like new DNS resource record types: A is 32-bit only; see addrinfo->ai_family) would require new code.
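To make that concrete, here is a minimal Python sketch of the same point the C sockaddr argument makes: the moment a second address family exists, every consumer of addresses needs a second code path, because even the shape of the address tuple differs per family.

```python
import socket

# getaddrinfo returns one entry per (family, socktype, protocol) combination.
# Code written against AF_INET alone breaks on (or silently drops) AF_INET6
# results: the sockaddr tuple has a different shape in each family.
for family, socktype, proto, canonname, sockaddr in socket.getaddrinfo(
        "example.com", 443, proto=socket.IPPROTO_TCP):
    if family == socket.AF_INET:
        host, port = sockaddr                        # 2-tuple for IPv4
        print("IPv4 (A record):   ", host)
    elif family == socket.AF_INET6:
        host, port, flowinfo, scope_id = sockaddr    # 4-tuple for IPv6
        print("IPv6 (AAAA record):", host)
```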
This is mostly sharpshooting, but I will address your last point:
> There was no way to move from 32-bits to >32-bits without every network stack of every device element (host, gateway, firewall, application, etc) getting new code. Anything that changed the type and size of sockaddr->sa_family (plus things like new DNS resource record types: A is 32-bit only; see addrinfo->ai_family) would require new code.
That is simply not true. We had one bit left (the reserved/"evil" bit) in IPv4 headers that could have been used to flag that the first N bytes of the payload were an additional "IPv4.1" header carrying extra routing information. Packets would continue to transit existing networks, and "4.1"-capable boxes at the edges could read the additional information to make further routing decisions inside a network. It would have effectively used IPv4 as the core transport network, with each connected network (think ASN) holding a handful of routed /32s.
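Purely to illustrate the shape of the idea (this "4.1" header format is invented here for illustration; nothing like it was ever standardized):

```python
import struct

# Hypothetical "IPv4.1" extension header, carried as the first bytes of the
# payload and flagged by the reserved bit in the real IPv4 header. Legacy
# routers forward on the outer /32 untouched; "4.1"-aware edge boxes strip
# this off and route on the inner address within the destination network.
V41_HEADER = struct.Struct("!BB2x4s")  # version, hops, pad, inner v4 address

def wrap(inner_addr: bytes, payload: bytes) -> bytes:
    return V41_HEADER.pack(1, 0, inner_addr) + payload

def unwrap(data: bytes) -> tuple[bytes, bytes]:
    version, hops, inner = V41_HEADER.unpack_from(data)
    return inner, data[V41_HEADER.size:]
```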
Overlay networks are widely deployed and have very minor technical issues.
But that would have only addressed the numbering exhaustion issues. Engineers often get caught in the "well if I am changing this code anyway" trap.
An explicit goal of IPv6, considered as important as the address expansion, was simplification of the packet header: fewer fields, correctly aligned (unlike in the IPv4 header), in order to enable faster hardware routing.
The scheme you describe fails to achieve this goal.
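For anyone who hasn't compared them, the difference is visible from the struct layouts alone (a sketch; field names abbreviated into the format strings):

```python
import struct

# IPv4: 13 fields in the first 20 bytes, then variable-length options (per
# the IHL field), plus a checksum that must be recomputed at every hop.
IPV4_BASE = struct.Struct("!BBHHHBBH4s4s")   # 20 bytes, options may follow
# IPv6: always exactly 40 bytes, 8 fields, no checksum, no inline options;
# anything extra is chained behind it via the Next Header field.
IPV6 = struct.Struct("!IHBB16s16s")          # 40 bytes, fixed

print(IPV4_BASE.size, IPV6.size)  # 20 40
```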
I am glad you brought this up, that is another big issue with IPv6. A lot of the problems it was trying to solve literally don't exist anymore.
Header processing and alignment were an issue in the 90s when routers repurposed generic components. Now we have modern custom ASICs that can handle IPv4 inside of a GRE tunnel on a VLAN over MPLS at line rate. I have switches in my house that do 780 Gbps.
For its time, IPv6 was well designed, much better than IPv4, which is unsurprising given all the experience accumulated over many years of using IPv4.
The designers of IPv6 made only one mistake, but it was a huge one. The IPv4 address space should have been included in the IPv6 space, allowing transparent intercommunication between any IP addresses, regardless of whether they were old IPv4 addresses or new IPv6 addresses.
This is the mistake that has made the transition to IPv6 so slow.
> The IPv4 address space should have been included in the IPv6 space, allowing transparent intercommunication between any IP addresses, regardless of whether they were old IPv4 addresses or new IPv6 addresses.
How would you have implemented that differently from the NAT64 that actually exists, which shoves all IPv4 addresses into 64:ff9b::/96?
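For reference, the RFC 6052 embedding in question is entirely mechanical; a sketch using Python's ipaddress module:

```python
import ipaddress

NAT64 = ipaddress.IPv6Network("64:ff9b::/96")  # RFC 6052 well-known prefix

def embed(v4: str) -> ipaddress.IPv6Address:
    """Place the 32-bit IPv4 address in the low bits of the /96 prefix."""
    return ipaddress.IPv6Address(
        int(NAT64.network_address) | int(ipaddress.IPv4Address(v4)))

def extract(v6: ipaddress.IPv6Address) -> ipaddress.IPv4Address:
    """Recover the IPv4 address from the low 32 bits."""
    return ipaddress.IPv4Address(int(v6) & 0xFFFF_FFFF)

print(embed("192.0.2.33"))                                   # 64:ff9b::c000:221
print(extract(ipaddress.IPv6Address("64:ff9b::c000:221")))   # 192.0.2.33
```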
> That is simply not true. We had one bit left (the reserved/"evil" bit) in IPv4 headers […]
Great, there's an extra bit in the IPv4 packet header.
I was talking about the data structures in operating systems: are there any extra bits in the sockaddr structure to signal things to applications? If not, an entirely new struct needs to be deployed.
And that doesn't even get into having to deploy new DNS code everywhere.
They didn't use the reserved bit, because there's a field already meant for this purpose: the next-protocol field. Set it to 0x29 (protocol 41) and it indicates that the payload is an encapsulated v6 packet. With 6to4, every v4 address gets a /48 of v6 space tunnelled to it via this mechanism, and any two v4 addresses can talk v6 between them (including the entire networks behind those addresses).
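A sketch of the arithmetic behind that, again with Python's ipaddress module (which even ships a .sixtofour accessor for these addresses):

```python
import ipaddress

def sixtofour_prefix(v4: str) -> ipaddress.IPv6Network:
    """The 2002:VVVV:VVVV::/48 that 6to4 assigns to each public IPv4
    address; traffic to it rides plain IPv4 with protocol 41 (0x29)."""
    v4int = int(ipaddress.IPv4Address(v4))
    return ipaddress.IPv6Network(((0x2002 << 112) | (v4int << 80), 48))

print(sixtofour_prefix("192.0.2.33"))                       # 2002:c000:221::/48
print(ipaddress.IPv6Address("2002:c000:221::1").sixtofour)  # 192.0.2.33
```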
If doing basically exactly what you suggested isn't enough to stop you from complaining about v6's designers, how could they possibly have done any better?
This would require new software and new ASICs on all hosts and routers and wouldn't be compatible with the old system. If you're going to cause all those things, might as well add 96 new bits instead of just 2 new bits, so you won't have the same problem again soon.
IPv6 is literally just IPv4 + longer addresses + really minor tweaks (like no checksum) + things you don't have to use (like SLAAC). Is that not what you wanted? What did you want?
And what's wrong with a newer version of a thing solving all the problems people had with it...?
There are more people than IPv4 addresses, so the pigeonhole principle says you can't give every person an IPv4 address, never mind when you add servers as well. Expanding the address space by 6% (class E is 16 of the 256 /8s, i.e. 6.25%) does absolutely nothing to solve anything, and I'm confused about why you think it would.
It finally clicked for me when I worked out it was 2^64 subnets. You have a common prefix, your /48, which isn't much longer than an IPv4 address, especially as it seems everything is under 2001::/16, which means you basically have to remember a 32-bit network prefix, just like 12.45.67.8/32.
That becomes 2001:0c2d:4308::/48 instead
After that you just need to remember the subnet number and the host number. If you remember that 12.45.67.8 maps to 192.168.13.7 you might have
2001:0c2d:4308:13::7
So subnet “13” and host “7”
It’s not much different to remembering 12.45.67.8 > 192.168.13.7
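If it helps, the hex-conversion trick is mechanical enough to script; a sketch (note the /48 here is derived from the IPv4 address purely as a mnemonic; real /48s come from your RIR or ISP):

```python
import ipaddress

def mnemonic_v6(public_v4: str, subnet: int, host: int) -> ipaddress.IPv6Address:
    """Hex-encode the IPv4 address into the 32 bits after a 2001::/16-style
    prefix, then append subnet and host. Passing subnet/host as hex (0x13,
    0x7) keeps the digits readable as their decimal lookalikes."""
    v4 = int(ipaddress.IPv4Address(public_v4))
    prefix = (0x2001 << 112) | (v4 << 80)
    return ipaddress.IPv6Address(prefix | (subnet << 64) | host)

print(mnemonic_v6("12.45.67.8", 0x13, 0x7))  # 2001:c2d:4308:13::7
```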
What a fucking joke. They are going to charge me, by the minute, for running a script I wrote on MY server, merely because it is launched by their service, which I am already paying an outrageous amount for just to have a private repository. It never ends.
Nice write-up. It would be great if the authors could follow up with a detailed technical walkthrough of how to use the various tooling to figure out what an extension is really doing.
Could one just feed the extension and a good prompt to Claude to do this? Seems like automation CAN sniff this kind of stuff out pretty easily.
Seriously. I don’t get all the concern over the verbosity. At least in Java you can tell what the hell is going on. And the tools… so good. Right now I am in Python/TypeScript world. And let me tell you, the productivity and ease of Java and C# are sorely missed!
Agreed. The whole idea that they are in it for the “service” is really naive. If we want honest, hardworking, and qualified people to do the job, the salaries should be in the 500k to 1.5m (Senate) range. Then knock out the corruption.
According to this link [1] 200k would put someone above the 80th percentile and not far below the 95th percentile in terms of household income for the DC metro area, so I don't see how that could be considered "underpaid", especially when you consider the benefits.
The type of person who gets elected to Congress is likely to be far above average in charisma/intelligence/skill, and hence underpaid relative to what they could attain in the private sector.
I mean, it sounds dumb, but returns from further wealth are logarithmic; after a certain point, the only thing you can buy more of is power. And you’ve already got that, in this case!
If you’re in a situation where getting more wealth could endanger your power, it makes sense not to wealth-max since, again, what else could you buy with it? But you need to get into the “what else could you buy with it” regime for this reasoning to make sense.
Someone who only has basic needs gets there pretty early. But even the relatively unenlightened don’t need the second jet except, yannow, for power.
i'd support that if and only if it required congresspeople to divest from individual stocks. with perhaps some compromise like...maybe they could hold "total market" index funds, or are only able to trade once a year, or etc
This is pretty easy to work around via VPNs etc. I guess it’s another barrier… for now. But it forces escalating tactics by the other side to appear legit. So if people come to trust the “source” information, it might actually end up worse in the long run, since the sources are all spoofed. You would need a more advanced system using signatures and real-life verification to actually know a source, similar to a CA with all of its drawbacks. Point being, this move is kind of a wash.
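The signature part, at least, is off-the-shelf; a minimal sketch with the Python cryptography package (the out-of-band key vouching is the hard, CA-like part):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A source signs what it publishes; readers verify against a public key they
# obtained through some real-life verification step (the CA-like part).
source_key = Ed25519PrivateKey.generate()
public_key = source_key.public_key()

post = b"statement attributed to this source"
signature = source_key.sign(post)

try:
    public_key.verify(signature, post)  # raises InvalidSignature if forged
    print("post genuinely comes from the holder of this key")
except InvalidSignature:
    print("spoofed or tampered")
```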
It was not expected, and it's a one-trick pony, but it worked. A lot of accounts that were passing as being from one country turned out to be content farmers from a third-world country with a GDP per capita of $2,000.
It was partly fueled by Twitter's idea of paying content creators, which turned the whole thing into an engagement-bait party: it gave an economic incentive to countries with cheap, idle workforces to work 9 to 5 posting whatever got likes, without even understanding it, even when it was political.