Meanwhile, the Canonical employee who's responsible for some aspects of apt has decided to insert Rust code. Because of this, and just this, Debian dropped 4 entire architectures. https://lists.debian.org/debian-devel/2025/10/msg00285.html
>I plan to introduce hard Rust dependencies and Rust code into APT, no earlier than May 2026. This extends at first to the Rust compiler and standard library, and the Sequoia ecosystem. ... If you maintain a port without a working Rust toolchain, please ensure it has one within the next 6 months, or sunset the port. It's important for the project as whole to be able to move forward and rely on modern tools and technologies and not be held back by trying to shoehorn modern software on retro computing devices.
If you think Canonical isn't going to lead Debian around by the nose on this you haven't been paying attention.
```
Rust is already a hard requirement on all Debian release
architectures and ports except for alpha, hppa, m68k, and
sh4 (which do not provide sqv).
```
It seems to me that the APT change was just a nail in the coffin of these older architectures, which would have eventually been sunset anyway, due to sqv not being available. If you really want to run some kind of Linux on these very old machines, godspeed, but you can't expect them to be maintained forever by a project with its fingers in so many pies.
Look at how the proposal for making netplan the default network manager in Debian went. Not good, from Canonical's perspective.
Making /tmp behave the way the systemd folks want also didn't go according to plan. The behavior ended up modified somewhat because of the discussion.
Rust's influence doesn't come from Canonical per se, but from its promise to eradicate memory-related bugs. The initial hype was off the charts, but it's coming down, and the shortcomings are becoming obvious.
Canonical is trying to influence Debian, that's true, but its success isn't always a given.
It's true. The megacorporations are actively blocking communication between human persons because it increases their profit and the rules don't apply to them. But this doesn't mean you need to roll over and just take it. The benefits of hosting your own email far outweigh the problems of delivery to some megacorps.
>self-hosting email is an anachronism of a simpler internet. The good old days. They are long over.
This is only true if you are being paid to run a for-profit business or institution. For human people acting in their own interest the fight for free communication is far from over.
> The operators admitted on Discord they accidentally disrupted I2P while attempting to use the network as backup command-and-control infrastructure ...
This is crazy to me. Discord is letting literal criminals use its corporate services, in full view, to commit crimes?
I'm glad that llama.cpp and the ggml library backing it are getting consistent, reliable economic support. I'm glad that ggerganov is getting rewarded for making such excellent tools.
I am somewhat anxious about "integration with the Hugging Face transformers library" and the possible Python ecosystem entanglements that might cause. I know llama.cpp and ggml already have plenty of Python tooling, but it's not strictly required unless you're quantizing models yourself or doing other such things.
They're both underdefining what "intimate images" means and using the term "images" instead of "photos". This means they want the rule to apply to everything that can be represented visually, even if it has nothing to do with anything that happened in reality, which suggests they don't care about actual harms. The way they're using the word 'harm' seems to be more in line with 'offend'. So now in the UK, if an offensive image (like a painting) is posted on a web site (or over any other internet protocol), it is going to be "treated with the same severity as child sexual abuse and terrorism content". That's wild. And dangerous. This policy will do far more damage than any painting or other non-photo image would.
I had to google a bit, but this Guardian article[1] goes into a lot more detail than the Register piece here. When I first read the Register piece I thought this sounded too onerous and ill-defined, especially with censorship on the rise in Europe recently, but the Guardian piece made me side more with this particular policy. It doesn't sound as broad as the Register piece puts it: it seems to be specifically about revenge porn and non-consensually generated deepfake porn, not any "intimate image", which I agree would be far too broad. Of all governments, I'd suspect the current UK government to be among the most likely to one day expand these powers to, say, speech they don't like or pornography in general, but according to the Guardian article this specific policy isn't that broad yet. I think the Register piece is using "intimate image" loosely, whereas the intent of the policy is more defined and specific.
I agree that laws such as this need to be defined very carefully, but I think "images" is the appropriate term to use, rather than "photos". Generative models make it near-trivially easy to render a photo in countless styles, after all, such as paintings or sketches.
If I produced inappropriate images identifiable as a specific child victim, who obviously cannot consent to having inappropriate images generated of their likeness, then images and photos are a distinction without a difference.
What qualifies as an "intimate image"? A photo of someone in a swimsuit at the beach?
Fictional content is also covered by this law. How do we determine what fictional content counts as an intimate image of a real person? What if the creator of an AI image adds a birthmark that the real life subject doesn't have, is that sufficient differentiation to no longer count as an intimate image of a real person? What if they change the subject's eye color, too?
If you envision yourself as a potential victim of such content, I think the answers to these questions all become pretty obvious. A swimsuit photo might or might not be intimate, depending on what kind of swimsuit it is and the context in which the person posting the photo is presenting it. A birthmark you don't have or a different eye color obviously do not make a fictionalized image become "not you" because they would not reduce the violation you'd feel.
> A birthmark you don't have or a different eye color obviously do not make a fictionalized image become "not you" because they would not reduce the violation you'd feel.
What if someone claims to feel violated by an image of a person that looks totally different: different skin color, different build, different facial structure, etc?
Then they'll try to convince the authorities that the image is of them, and presumably mostly fail, although in some cases they may succeed. (If you're worried about something like the US DMCA, that's almost certainly not what's being proposed; the UK has a number of existing 48-hour takedown policies, and they all involve orders from the authorities rather than self-certified requests from random third parties.)
Presumably the same way they investigate revenge porn complaints today? I don't mean to be obtuse, I understand your questions are rhetorical, but I don't see what it is you're gesturing towards.
In the case of revenge porn, there's an image of a real person. By contrast, fictional content doesn't actually show a real person, so any attempt to prohibit fictional revenge porn must target images that merely look similar to a person. What degree of similarity is required to qualify? That dimension isn't a factor in real revenge porn.
Perhaps you're missing the context? In the incidents which led to this proposal, no judgment of similarity was necessary: the sexualized images were posted in the replies to non-sexualized images of the same person.
This isn't really a novel dimension in the first place, I don't think. It's just rarely an issue in practice, because most people who post these images do so to shame and embarrass the depicted person. No doubt there will be edge cases where a sexualized image of consenting person A gets taken down because they look similar to non-consenting person B - but is that really a big problem?
The law doesn't stipulate that the offending images have to be posted with the intent to shame or embarrass, nor that the images have to be sent directly to the person that's supposed to be depicted in the image. If that's the justification, then the legislators ought to have put wording to that effect into the law.
As you point out, this is a proposed amendment to an already existing law.
> The government said: "Plans are currently being considered by Ofcom for these kinds of images to be treated with the same severity as child sexual abuse and terrorism content, digitally marking them so that any time someone tries to repost them, they will be automatically taken down."
Unless I'm mistaken, CSAM is prohibited entirely in the UK, not just in replies to the child depicted in the abusive imagery. They explicitly say that they intend to make fictional intimate content allegedly depicting a real person to be treated the same way as CSAM.
There's nothing that suggests this new amendment prohibiting fictional content is going to be narrowly scoped to replies to the people allegedly being depicted.
I hate to appear to defend this, but generative AI has sort of collapsed the distinction between a photo and an image. I could generate an image from a photo which told the same story, then delete the photo, and now everything is peachy fine? So that could have been a motivation for "images".
Though I wonder whether existing frameworks around slander and libel could be made to address the brave new world of AI-augmented abuse.
How is blaming X's AI product victim blaming? It was allowing its users to generate CSA images, and X's first responses to the problem didn't appear to take the issue seriously.
Both this post's viewer and the older one I linked directly above have greatly exaggerated vertical scaling. It is not proportional to the distances on the map. With actual 1:1 height scale all these planes, even the stratospheric jets, would be much closer to the ground.
Currently it serves aesthetics more than accuracy when it comes to vertical scaling. A 1:1 height scale is on the feature list; it isn't as pleasing to look at, but it's definitely a must-have.
Nice to hear. Just as an aside, the one I linked already has a (deep in the settings) setting to adjust the vertical scaling. But even at minimum 0.2 ~ 0.3 scale it's still not 1:1.
So I did play around with it for a while, and one thing I kept running into is how dysfunctional the entire thing becomes at that scale. It looks surprisingly good, yet usability gets really limited and everything becomes many times harder to use; a logarithmic solution might be better here, I believe.
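To illustrate the idea: a logarithmic mapping from real altitude to display height keeps planes near the ground plane (closer to 1:1 at low altitudes) while compressing the stratospheric range so the view stays usable. This is just a sketch; the function name, the metres unit, and the tuning parameter are all my own assumptions, not anything from either viewer.

```javascript
// Hypothetical helper: map an aircraft's altitude (in metres) to a
// display height for the 3D view. log1p keeps 0 m exactly at 0 and
// compresses high altitudes smoothly instead of linearly exaggerating.
function displayHeight(altitudeMetres, scale = 1.0) {
  // scale is a tuning knob: larger values stretch the compressed range.
  return scale * Math.log1p(altitudeMetres);
}
```

With this, a jet at 12,000 m sits only modestly above one at 1,000 m, instead of towering over the map the way a fixed exaggeration factor makes it.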
The Freenode-to-Libera incident is a great example of how using protocols allows a community to mitigate most damage from bad actors, both external and internal. I'm not saying damage wasn't done by Andrew Lee during his attempted coup. IRC as a whole lost many important FOSS projects due to Lee's channel takeovers. But most of the community of daily users just moved to the new digs and continues to carry on.
It is if you were a person who was using Freenode and wanted to keep talking to the same people you've known for the last 20 years like myself. Our community still exists.
Understandably, the attacks put a sour taste in the mouths of many FOSS projects, especially those with corporate sponsors or aspects; 20 years of working fine wasn't considered. But now we're seeing that you get the same downsides with corporate services but none of the ability to effortlessly move. Going to Discord has been shown to be a long-term mistake, even if it seemed nice in the short term.
Why would you hedge yourself with a double negative here? It was because of (open) protocols, not services, that people could easily decamp and set up afresh.
Interoperability has always been paramount, but it gets forgotten so easily.
No, this is "Oat - Ultra-lightweight, semantic, zero-dependency Javascript UI component library". If it doesn't work without javascript it is not an HTML UI component library.