So reading the article, you're right: it's technically anycast, but only at the /24 level, since most networks filter BGP announcements more specific than a /24. An individual /32 maps to a specific datacenter (so, effectively, unicast). In a hypothetical world where BGP could route /32s, it wouldn't be anycast.
I wasn't precise, but what I meant was more akin to a single IP shared by multiple datacenters in different regions (from a BGP perspective), which I don't think Cloudflare has. That would be the egress parallel of ingress anycast: a single IP that can be routed to multiple destinations (even if, at the BGP level, it's the entire aggregate that is anycast).
It also wouldn't explain the OP, because they are seeing the same source IP from many (presumably) different source locations, whereas with the Cloudflare scheme each location would have a different source IP.
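To make the /24-versus-/32 distinction concrete, here's a toy Python sketch of the two views. Every prefix, PoP name, and mapping below is invented for illustration, not Cloudflare's actual layout:

    from ipaddress import ip_address, ip_network

    # BGP only carries the aggregate: every PoP announces the same /24,
    # so which PoP an outside viewer reaches depends on where they are.
    ANYCAST_PREFIX = ip_network("198.51.100.0/24")

    # Inside the operator's network, each /32 is pinned to one
    # datacenter, so at that granularity it behaves like unicast.
    INTERNAL_MAP = {
        ip_address("198.51.100.7"): "fra1",
        ip_address("198.51.100.8"): "sjc1",
    }

    def pop_for_viewer(viewer_region: str) -> str:
        """BGP's view: the nearest PoP wins for the whole /24."""
        return {"eu": "fra1", "us": "sjc1"}[viewer_region]

    def datacenter_for(addr: str) -> str:
        """Internal view: a given /32 always lands in one datacenter."""
        a = ip_address(addr)
        assert a in ANYCAST_PREFIX
        return INTERNAL_MAP[a]

    print(pop_for_viewer("eu"), pop_for_viewer("us"))  # fra1 sjc1
    print(datacenter_for("198.51.100.7"))              # always fra1

The point being: BGP only ever sees the aggregate, so the "anycast-ness" lives at the /24 level, while the per-/32 pinning happens inside the operator's own network.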
To be clear, they definitely use ingress anycast (i.e. anycast on external traffic coming into Cloudflare). The main question was whether they (meaningfully) used egress anycast (multiple Cloudflare servers in different regions using the same IP to make requests out to the internet).
Since you mentioned DDoS, I'm assuming you are talking about ingress anycast?
It doesn't really matter if they're doing that for this purpose, though. Cloudflare (or any other AS) has no fine-grained control over where your packets to their anycast IPs will actually go. A given server's response packets will only go to one of their PoPs; which one depends on the server's location and network configuration (and could change at any time). Even if several of their PoPs tried to fetch from the same origin server, all but one would be unable to maintain a TCP connection without tunneling shenanigans.
Tunneling shenanigans are fine for ACKs, but they're inefficient, so it's pretty unlikely they are doing this for ingress object traffic.
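Here's a minimal sketch of the TCP point above, with made-up PoP names and a stand-in for BGP path selection: the origin's network routes all reply packets for the anycast prefix back to exactly one PoP, so only that PoP's handshake can complete.

    # Toy model: two PoPs both send a SYN to the same origin from a
    # shared anycast source IP. The origin's SYN-ACK follows BGP best
    # path back to ONE PoP; the other never sees its reply.
    def route_back_to_pop(origin_location: str) -> str:
        """Stand-in for BGP best-path selection at the origin's network."""
        return {"frankfurt": "fra1", "san-jose": "sjc1"}[origin_location]

    def handshake_completes(pop: str, origin_location: str) -> bool:
        return route_back_to_pop(origin_location) == pop

    for pop in ("fra1", "sjc1"):
        ok = handshake_completes(pop, "frankfurt")
        print(pop, "connected" if ok else "SYN-ACK routed elsewhere")
    # fra1 connected
    # sjc1 SYN-ACK routed elsewhere

Without tunneling, the losing PoP's connection state just times out, which is the inefficiency being pointed at here.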
I struggle to understand the pushback against AI features. As long as the feature isn't intrusive, it seems like a minor addition, and may even be useful to some people. LLMs are here to stay; there is no denying that at this point.
I guess it depends on your definition of "intrusive".
I have no interest in any of the AI features that have been added to the UIs of Meta products (WhatsApp and Messenger), yet I still see prompts for them and modified UIs trying to get me to engage with Meta AI.
Same goes with Gemini poking its head into various spots in the UIs of the Google products I use.
There are now UI spots I can accidentally tap/click and get dropped into a chat with an AI in various things I use on a daily basis.
There are also more "calls to action" for AI features, more "hey do you wanna try AI here?" prompts, etc.
It's not just the addition of AI features, it's all the modern, transparent desperation-for-metrics-to-go-up UX bits that come with it.
And yes, some of these things were around before this wave of AI launches, but (a) that doesn't make it better, and (b) all the AI features are seemingly the same across apps, so now we have bunches of apps all pushing the same "feature" at us.
I agree with you that the push towards them is annoying. (Google's "Your phone has new exciting features.")
In this case, Calibre does not seem to introduce any such annoyances (probably because it is FOSS, so there's no pressure for adoption), but people are upset anyway.
There are many features I don't use in various software, but it never made me complain that a new icon/menu entry appeared.
I think there are "classes" of features people have disliked. E.g. every social media app added "stories" at some point, using up screen real estate. Same goes for "shorts/reels/etc".
It's one thing when a feature gets added to an app.
It's another thing when it happens in a context where every app is doing it (or something similar), and you see it in every facet of your tech life.
WhatsApp has now told me twice about Lisa Blackpink. I wanted to write to my friend Lisa, whom I talk to on Instagram and don't have on WhatsApp. So searching for her on WhatsApp gives me 2 unrelated contacts, and then Meta Ducking AI suggestions, of which the top one is Lisa Blackpink. Then, further down the screen (hidden by the keyboard), I can see chats where I've mentioned her to mutual friends, but fucking nooo, it's more important that Fuckerberg shoves AI down our throats.
WhatsApp should release their most-searched AI terms; I bet they would correlate with the most common names among WhatsApp users...
My problem is that all AI features are currently wildly underpriced by tech giants who are providing subsidies in the hopes of us becoming reliant upon them. Not to mention it means we’re feeding all kinds of our own behavioural data to these giants with very little visibility.
Any new feature should face a very simple cost/benefit analysis. The average user currently can’t do that with AI. I think AI in some form is inevitable. But what we see today (hey, here’s a completely free feature we added!) is unsustainable both economically and environmentally.
Actually, frontier-lab pricing is well above actual cost. Look up the prices for e.g. Kimi K2 on OpenRouter to see the real "unsubsidized" costs. They can be up to an order of magnitude lower.
This is totally true and points to why Calibre's feature adds value. However, I think the big players see exactly what you see and are scrambling to become people's go-to first. I believe this is for two main reasons. The first is that it's the only way they know how to win, and they don't see any option other than winning. The second is that they want the data that comes with it so they can monetize it. People switching to local models has a chance to take all that away, so cloud providers are doing everything they can to make their models easier to use and more integrated.
If they added a menu item “Kick a puppy”, and every time you clicked it, a puppy somewhere got kicked, would your response be “oh, well, I don’t like kicking puppies, so I just won’t click it, no big deal”?
> I struggle to understand the pushback against AI features.
To develop and sustain these "AI features", human intelligence - manifested as countless hours of work published online and elsewhere - was appropriated and used without permission, further increasing the asymmetry of knowledge/power between those with political power and those without (mostly vulnerable, marginalized cohorts).
People are annoyed because the rollout of AI has been very intrusive. It's being added to everything, even where it doesn't make sense. It's this generation's version of having an app for every website. Does Calibre really need its own AI chatbox when I can ask ChatGPT the same question in a browser?
People don't want it, plain and simple. Yet again here, like a thousand times before, it still gets forced on users. I don't know who's orchestrating this madness, but it's pathetic.
It's pretty much that with the UX. Instead of opening a terminal, you can hit a shortcut (Option+L) and select the time in a GUI within the browser!
Delta.Chat is really underappreciated: open source and distributed. I recommend you at least look into it.
Signal, on the other hand, is a closed "open source" ecosystem (you cannot run your own server or client), still requires a phone number (-_-), and its open-source side does not have a great track record (I remember periods when, for example, the server code was not updated in the public repo).
But yeah, if you want the more popular option, Signal is the one.
You obviously must trust that the server runs this configuration, but you can always run your own chatmail server, or regular Postfix. (If you don't need "federation" with other mail servers, you don't even need port 25 open.)
However, you can also configure your app to delete messages from the server sooner.
»By default, Delta Chat stores all messages locally on your device. If you e.g. want to save storage space at your mail provider, you can configure Delta Chat to delete old already-received messages on the server automatically. They still remain on your device until you delete them there, too.«
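For a rough idea of what that setting amounts to under the hood, here is a generic IMAP sketch in Python. The host, credentials, and 30-day cutoff are placeholders, and Delta Chat's real implementation is its own; this just shows the server-side deletion the quoted docs describe:

    import imaplib
    from datetime import date, timedelta

    # Delete messages older than 30 days from the server; local copies
    # in the client are untouched, as the quoted docs describe.
    # Note: %b must produce English month names for IMAP date syntax.
    cutoff = (date.today() - timedelta(days=30)).strftime("%d-%b-%Y")

    with imaplib.IMAP4_SSL("imap.example.org") as imap:
        imap.login("user@example.org", "password")  # placeholder creds
        imap.select("INBOX")
        _, data = imap.search(None, "BEFORE", cutoff)
        for num in data[0].split():
            imap.store(num, "+FLAGS", r"\Deleted")  # mark for deletion
        imap.expunge()  # actually remove the flagged messages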
It certainly looks like that. For such a technical article, the descriptions of the failure modes are very vague, and the code examples do not make much sense. I would expect a more in-depth dive into the problems, but all we got is this mostly generic stuff.