Ubuntu isn’t too big to target; if anything, its dominance makes it the obvious target. When you look at the trajectory over the years and some of Canonical’s decisions, it’s hard not to raise an eyebrow. Major distros like Ubuntu and Fedora didn’t scale globally without taking big tech money, and money rarely comes with no strings attached. At some point, players like Microsoft are going to expect a return on that investment.
What fearmongering has the anti-systemd crowd been selling you? Genuinely curious because I wish I wasn't running systemd. My perspective is that the things they (we?) are saying are basically correct. But the service manager works well enough that most distros have accepted the downsides.
> Is it really fearmongering when the systemd people literally founded a company to develop attestation for Linux?
Considering it has changed nothing about what they actually work on in systemd, I would give this a yes. Every time I hear "they will do this or that," it just never really happens. So far it feels more like "the boy who cried wolf" than "slippery slope" to me. But maybe I am missing something?
Many of the devs have, here and there, always added features for secure/measured boot and image-based OSes, along with things that make those more usable to daily drive (hermetic /usr/, UKIs, sysext, portable services, mkosi, DDIs, ...). A lot of this makes image-based systems more modifiable and user-accessible without compromising on the general security properties.
If they really wanted to lock Linux users in to a single blessed image of their own, they would have had a better chance while Lennart was working at Microsoft (generally the only preinstalled CA) instead of starting a "competing" company (which targets a different niche, from what I understand).
This, and locking everyone down to a single blessed Linux distro would be... rather difficult given how widespread Linux is and just how many distros exist. It is one thing for each distro to decide "Hey, let's use systemd". GNOME requires it, but that's GNOME; there is nothing stopping you from using XFCE, or i3, or KDE, or... It is a totally different thing to make every Linux distro stop working (and have said distros go along with it) because they aren't the "blessed" one.

Microsoft can pull this off because they're Microsoft and they have total control over one of the most dominant operating systems. Apple can pull this off because they're Apple and control everything from the hardware upwards. Linux is neither of these. I would go so far as to argue that the BSDs have a better chance of pulling off something like this than Lennart does. Red Hat may have a lot of influence in the Linux world, but it certainly doesn't have some secret god-mode switch it can flip to make every distro universally conform to its wants and desires.
> Is it really fearmongering when the systemd people literally founded a company to develop attestation for Linux?
Can you prove beyond a reasonable doubt that they intend to force this on you without any way of disabling it, or that they have already done so? Because unless they plan to do this (and you have concrete proof of such and not just "well they could do this" claims) or they have already done it across a significant portion of the Linux distribution ecosystem (and no, distros voluntarily switching to systemd is not forcing anyone to do anything), this is fearmongering. Simple as that.
>I find this flippancy about the greatest mystery in the universe extremely arrogant and incurious and wish it wouldn't be so prevalent.
There's absolutely nothing mysterious about human intelligence unless you refuse to give it a clear definition. All the people waffling on about AGI refuse to give a clear, measurable definition of intelligence, because if they defined it exactly, it would be possible to clearly determine whether a given machine meets that criterion or not. It's just 21st-century woo peddling.
> There's absolutely nothing mysterious about human intelligence unless you refuse to give it a clear definition
This begs the question[0] by assuming that it can be given a clear, measurable definition. A large part of the mystery of consciousness and intelligence is that it's hard to define, measure, or explain; the most characteristic aspects (i.e. those relating to a subjective experience) are, in principle, impossible to measure or verify[1]. To say that it's not mysterious once you give it a clear, measurable definition is basically saying "it's not mysterious once you remove all aspects that make it mysterious."
Entrepreneurship is like sales; it can't really be taught, only learned through practice, through trial and error. The best way for kids to learn business is through doing it.
Definitely. Unlike asking the right questions or having good taste though, it's possible to know how successful you are at business so the dynamics are definitely different.
> People don't need self-help advice, they need a fair redistribution of increased productivity.
The increased productivity is pretty much entirely coming from AI researchers and the companies investing in huge numbers of GPUs, and they are the ones receiving most of the windfalls. How is that not fair?
This presumes that the value was created by the authors and not by the people who found a way to use the structure in the training set to create intelligence. It's like crediting Wi-Fi router owners with creating the Wi-Fi Positioning System instead of the people who realized they could wardrive around to create maps and extract a new kind of value that would never have existed without them. Or crediting the Egyptians with deciphering hieroglyphics instead of the Frenchman who realized the Rosetta Stone they were using to hold up a wall could actually be used for much more.
My thinking here is coming from the paper "From Entropy to Epiplexity"[1] which partly discussed why you can train on synthetic data: it's the structure of the data that enables learning, not just the amount of "information". Authors of images and videos may have worked just as hard as authors of text training sets, but they didn't contribute to AI as much because there just wasn't the same kind of structure to discover there. It's the people who found the usable structure, not the people who accidentally generated it, who created the value.
> This presumes that the value was created by the authors and not the people who found a way to use the structure in the training set to create intelligence.
This presumes it's binary.
> Or Egyptians for deciphering hieroglyphics instead of the Frenchman who realized the Rosetta Stone they were using to hold up a wall could actually be used to do much more.
Yes, I think a lot more than 1 person deserves credit for years of painstaking research.
> It's the people who found the usable structure, not the people who accidentally generated it, who created the value.
You're naively assuming the structure is accidental.
Depends on your ethics, I guess? In my view a fair distribution is an equitable one, which is also the principle that helps keep society stable. Not that researchers and business people shouldn't be rewarded for their achievements, but the reward should be moderate and limited in time, instead of creating a permanent underclass or whatever the plan is now, IMO.
Leaving the 'aryan' and 'white' bit aside, there are mountains of things common to Indians and Iranians -- the system of classical music, musical instruments, mythological characters, food, and of course language.
You're not actually doing engineering if you're just vibe-coding, reviewing, and testing all the way down. What the hell is that? Just a weird simulacrum of software development that will break apart in unpredictable ways. Security consultants are going to have very lucrative careers in the coming years.
If I don't have experience with the underlying framework/language/thing being modified, it makes it quite difficult to trust the actual review. In this example, I haven't worked heavily with CloudFormation, so I can't call b.s. if it leaves a database instance exposed to the wider public internet rather than keeping it in my company's private VPC.
You can ask the agent to check that it doesn't leave a database instance exposed to the public, and have it present you with proof you can verify (references to the code and the relevant CloudFormation documentation). Then repeat this for all the things you'd normally want to check for in a code review.
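You can also spot-check that specific claim mechanically rather than trusting the agent's word for it. A minimal sketch of what that looks like for a JSON-format CloudFormation template: scan every `AWS::RDS::DBInstance` resource for a literal `PubliclyAccessible: true`. (The function name and sample template are made up for illustration; this doesn't resolve parameters, conditions, or intrinsic functions the way a real policy tool like cfn-lint or AWS Config would, so treat it as a smoke test, not a guarantee.)

```python
import json

def find_exposed_db_instances(template: dict) -> list:
    """Return logical IDs of RDS instances flagged as publicly accessible.

    Rough static check only: parameters, Conditions, and intrinsic
    functions (Ref, Fn::If, ...) are not resolved, so only a literal
    `"PubliclyAccessible": true` is caught.
    """
    exposed = []
    for logical_id, resource in template.get("Resources", {}).items():
        if resource.get("Type") != "AWS::RDS::DBInstance":
            continue
        props = resource.get("Properties", {})
        if props.get("PubliclyAccessible") is True:
            exposed.append(logical_id)
    return exposed

# Hypothetical template with one exposed and one private database.
template = json.loads("""
{
  "Resources": {
    "AppDb": {
      "Type": "AWS::RDS::DBInstance",
      "Properties": {"Engine": "postgres", "PubliclyAccessible": true}
    },
    "PrivateDb": {
      "Type": "AWS::RDS::DBInstance",
      "Properties": {"Engine": "postgres", "PubliclyAccessible": false}
    }
  }
}
""")

print(find_exposed_db_instances(template))  # ['AppDb']
```

The point isn't that this particular check is sufficient; it's that a reviewable, deterministic check is something you can run yourself even without deep CloudFormation experience, instead of taking the agent's self-assessment at face value.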
In that case I'm just moving the reading of the documentation from when I'm writing the YAML to when I'm doing the code review. Not saying it isn't helpful to have a pair researcher; it just seems like I'm moving things around.
Even if it reaches the end state of AGI, e.g. AI that's smarter and more capable than 90% of humans, there'll still be a huge learning curve to using it well, as anyone who's tried managing very smart humans can attest.