gardenerik's comments

Cloudflare actually does anycast for egress too, if that is what you meant: https://blog.cloudflare.com/cloudflare-servers-dont-own-ips-...


So reading the article, you’re right: it’s technically anycast, but only at the /24 level, to work around BGP limitations. An individual /32 maps to a specific datacenter (so basically unicast). In a hypothetical world where BGP could route /32s, it wouldn’t be anycast.

I wasn’t precise, but what I meant was more akin to a single IP shared by multiple datacenters in different regions (from a BGP perspective), which I don’t think Cloudflare has. This is the general parallel of ingress anycast: a single IP that can be routed to multiple destinations (whereas in Cloudflare’s scheme, even though the entire aggregate is anycast at the BGP level, each individual IP effectively is not).

It would also not explain the OP, because they are seeing the same source IP, but from many (presumably) different source locations, whereas with the Cloudflare scheme each location would have a different source IP.
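
A toy sketch of how I read their scheme (made-up datacenter names and RFC 5737 example addresses, not Cloudflare's actual implementation): the whole /24 is announced from everywhere, but each /32 inside it is assigned to exactly one datacenter, so return traffic for a given egress IP has a single well-defined home.

    # Illustration only: anycast /24 aggregate, unicast-in-practice /32s.
    # Addresses and datacenter names are made up.
    EGRESS_ASSIGNMENTS = {
        "203.0.113.1": "datacenter-ams",
        "203.0.113.2": "datacenter-sfo",
        "203.0.113.3": "datacenter-sin",
    }

    def owning_datacenter(ip: str) -> str:
        """The one location that actually terminates traffic for this /32."""
        return EGRESS_ASSIGNMENTS.get(ip, "unassigned")

    # Every datacenter announces 203.0.113.0/24 via BGP, yet:
    print(owning_datacenter("203.0.113.2"))  # -> datacenter-sfo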


To my knowledge, anycast is very much a thing Cloudflare uses. It allows splitting traffic per region, which, in the case of DDoS, is a good thing.


To be clear, they definitely use ingress anycast (i.e. anycast on external traffic coming into Cloudflare). The main question was whether they (meaningfully) used egress anycast (multiple Cloudflare servers in different regions using the same IP to make requests out to the internet).

Since you mentioned DDoS, I’m assuming you are talking about ingress anycast?


It doesn't really matter if they're doing that for this purpose, though. Cloudflare (or any other AS) has no fine-grained control over where your packets to their anycast IPs will actually go. A given server's response packets will only go to one of their PoPs; which one depends on the server's location and network configuration (and could change at any time). Even if multiple of their PoPs tried to fetch from the same server, all but one would be unable to maintain a TCP connection without tunneling shenanigans.

Tunneling shenanigans are fine for ACKs, but they're inefficient, which makes it pretty unlikely that they are doing this for ingress object traffic.


I struggle to understand the pushback against AI features. As long as the feature isn't intrusive, it seems like a minor addition, and may even be useful to some people. LLMs are here to stay, there is no denying that at this point.


I guess it depends on your definition of "intrusive".

I have no interest in any of the AI features that have been added to the UIs of Meta products (WhatsApp and Messenger), yet I still see prompts for them and UIs modified to try to get me to engage with Meta AI.

Same goes with Gemini poking its head into various spots in the UIs of the Google products I use.

There are now UI spots I can accidentally tap/click and get dropped into a chat with an AI in various things I use on a daily basis.

There are also more "calls to action" for AI features, more "hey do you wanna try AI here?" prompts, etc.

It's not just the addition of AI features, it's all the modern, transparent desperation-for-metrics-to-go-up UX bits that come with it.

And yes, some of these things were around before this wave of AI launches, but a- that doesn't make it better, and b- all the AI features are seemingly the same across apps, so now we have bunches of apps all pushing the same "feature" at us.


I agree with you that the push towards them is annoying. (Google's "Your phone has new exciting features.")

In this case, Calibre does not seem to introduce any such annoyances (probably because it is FOSS, so there is no pressure for adoption), but people are upset anyway.

There are many features I don't use in various software, but it never made me complain that a new icon/menu entry appeared.


I think there are "classes" of features people have disliked. E.g. every social media app added "stories" at some point, using up screen real estate. Same goes for "shorts/reels/etc".

It's one thing when a feature gets added to an app.

It's another thing when it happens in a context where every app is doing it (or something similar), and you see it in every facet of your tech life.


WhatsApp has now told me twice about Lisa Blackpink. I wanted to write to my friend Lisa; I talk to her on Instagram and don't have her on WhatsApp. So searching for her on WhatsApp gives me 2 unrelated contacts, and then Meta Ducking AI suggestions, of which the top one is Lisa Blackpink. Then, further down the screen (hidden by the keyboard), I can see chats where I've mentioned her to mutual friends, but fucking nooo, it's more important that Fuckerberg shoves AI down our throats.

WhatsApp should release their most searched AI terms; I bet they would correlate with the most common names among WhatsApp users...


My problem is that all AI features are currently wildly underpriced by tech giants who are providing subsidies in the hopes of us becoming reliant upon them. Not to mention it means we’re feeding all kinds of our own behavioural data to these giants with very little visibility.

Any new feature should face a very simple cost/benefit analysis. The average user currently can’t do that with AI. I think AI in some form is inevitable. But what we see today (hey, here’s a completely free feature we added!) is unsustainable both economically and environmentally.


Actually, frontier-lab pricing is way more expensive than the actual cost. Look up the prices for e.g. Kimi K2 on OpenRouter to see the real “unsubsidized” costs. They can be up to an order of magnitude lower.


Summarizing text can very easily be done by local AI. Low-powered, and free. For this type of task, there is essentially no reason to pay.
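
For example, here is a rough sketch against a locally running Ollama server (assumes the default port 11434 and that you have already pulled some small model; "llama3.2" is just a placeholder):

    # Minimal local summarization via Ollama's HTTP API; no cloud, no cost.
    import json
    import urllib.request

    def summarize(text: str, model: str = "llama3.2") -> str:
        payload = json.dumps({
            "model": model,
            "prompt": f"Summarize the following text in three sentences:\n\n{text}",
            "stream": False,  # one JSON object back instead of a token stream
        }).encode()
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    print(summarize("Paste a chapter or article here..."))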


This is totally true and points to why Calibre's feature adds value. However, I think the big players see exactly what you see and are scrambling to become people's go-to first. I believe this is for two main reasons. The first is that it's the only way they know how to win, and they don't see any option other than winning. The second is that they want the data that comes with it so they can monetize it. People switching to local models has a chance to take all that away, so cloud providers are doing everything they can to make their models easier to use and more integrated.


Which is not what is happening here. I think a lot of people’s objections would be resolved by a local model.


> Currently, calibre users have a choice of commercial providers, or running models locally using LM Studio or Ollama.

The choice is yours. If you want local models, you can do that.


If they added a menu item “Kick a puppy”, and every time you clicked it, a puppy somewhere got kicked, would your response be “oh, well, I don’t like kicking puppies, so I just won’t click it, no big deal”?


Why is it obvious they are here to stay?


> I struggle to understand the pushback against AI features.

To develop and sustain these "AI features", human intelligence - manifested as countless hours of work published online and elsewhere - was appropriated and used without permission to further increase the asymmetry of knowledge/power between those with political power and those without (mostly vulnerable, marginalized cohorts).


For my part it is the uneasy feeling of maybe being tracked and “sharing” private text/media either by accident or by the software’s malice.

Most devs want to put AI on their CV so they have strong personal incentives to circumvent what is best for their users.

Would you like to have LLM connections to Google from your OSS torrent client?

We can see the writing on the wall, but still not like it.


People are annoyed because the rollout of AI has been very intrusive. It's being added to everything, even when it doesn't make sense. It's this generation's version of having an app for every website. Does Calibre really need its own AI chatbox when I can ask the same question to ChatGPT in a browser?


Do you really need AI integration in your IDE when you can just use the ChatGPT chatbox in your browser?

Having it built-in allows Calibre to add context to the prompt automagically.
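
I don't know exactly what Calibre injects, but the idea is something like this (purely illustrative; the fields and helper are hypothetical, not Calibre's actual internals):

    # Hypothetical sketch: an app can prepend library context the user
    # would otherwise have to copy-paste into a browser chatbox by hand.
    def build_prompt(question: str, book: dict) -> str:
        context = (
            f"Title: {book['title']}\n"
            f"Authors: {', '.join(book['authors'])}\n"
            f"Tags: {', '.join(book['tags'])}\n"
        )
        return f"{context}\nUser question: {question}"

    book = {"title": "Dune", "authors": ["Frank Herbert"], "tags": ["sci-fi"]}
    print(build_prompt("Recommend similar books from my library.", book))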


Someone wasted their time adding this feature when I could just throw some book titles at any of the other chatbots instead.

Perhaps to the detriment of some compatibility features that got put on the back burner.


>LLMs are here to stay, there is no denying that at this point.

You make LLMs sound like a stalker, or your mom's abusive live-in boyfriend


People don't want it, plain and simple. Yet again here, like a thousand times before, it still gets forced on users. I don't know who's orchestrating this madness, but it's pathetic.


These seem like simple text rendered using some handwriting-style Google Fonts. Or is there something I am missing?


The icon reminds me very much of Kagi's.


haha, good catch!


How does it compare to something like good old at[1] that spawns a browser?

[1] https://linux.die.net/man/1/at
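
For reference, the at(1) version of "open a browser at a given time" is roughly this (driven from Python here just to show the moving parts; assumes atd is running and a graphical session so xdg-open can find your browser):

    # Schedule "open this URL in the browser" for 17:30 via at(1).
    import subprocess

    subprocess.run(
        ["at", "17:30"],                      # at reads the job from stdin
        input="xdg-open https://example.com\n",
        text=True,
        check=True,
    )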


It's pretty much that, UX-wise. Instead of opening a terminal, you can hit a shortcut (option + L) and select the time in a GUI within the browser!


Delta.Chat is really underappreciated, open-source and distributed. I recommend you at least look into it.

Signal, on the other hand, is a closed "opensource" ecosystem (you cannot run your own server or client), requires a phone number (still -_-), and the opensource part of it does not have a great track record (I remember periods where, for example, the server code in the public repo was not updated).

But yeah, if you want the more popular option, Signal is the one.


I have not noticed that problem; messages are delayed by a few seconds, but not noticeably so. I am only using chatmail / my own postfix though, YMMV.


It is deleted from the server only; it stays on your devices, and is also synced to a new device when you add one.


Where did you see this? Or did you just learn it from experience?


I cannot find it now, but that's how it was discussed at the time the chatmail server became a thing. It is also how it is configured in chatmail:

https://github.com/chatmail/relay/blob/96a1dbac08441034c5990...

https://github.com/chatmail/relay/blob/96a1dbac08441034c5990...

You obviously must trust that the server runs this configuration, but you can always run your own chatmail, or regular postfix. (If you don't need "federation" with other mail servers, you don't even need port 25 open.)

However, you can also configure your app to delete messages from the server sooner.
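
Mechanically, "delete old messages from the server" is just a standard IMAP operation; roughly like this sketch (plain imaplib, not Delta Chat's actual code; host, credentials, and the cutoff date are placeholders):

    # Illustration: remove server-side copies of messages older than a cutoff.
    import imaplib

    with imaplib.IMAP4_SSL("mail.example.org") as imap:
        imap.login("user@example.org", "password")
        imap.select("INBOX")
        # Find messages received before the cutoff and flag them for deletion.
        _, data = imap.search(None, "BEFORE", "01-Jan-2024")
        for num in data[0].split():
            imap.store(num, "+FLAGS", "\\Deleted")
        imap.expunge()  # the server actually drops the flagged messages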


The FAQ reads:

»By default, Delta Chat stores all messages locally on your device. If you e.g. want to save storage space at your mail provider, you can configure Delta Chat to delete old already-received messages on the server automatically. They still remain on your device until you delete them there, too.«

https://delta.chat/en/help#what-happens-if-i-turn-on-delete-...


FAQ: What happens if I turn on “Delete old messages from server”?

https://delta.chat/en/help#what-happens-if-i-turn-on-delete-...


The UI was actually taken from Signal in the early days[0].

[0] https://support.delta.chat/t/list-of-all-known-client-projec...


It certainly looks like that. For such a technical article, the descriptions of the failure modes are very vague, and the code examples do not make much sense. I would expect a more in-depth dive into the problems, but all we get is mostly generic stuff.

