I refuse to indulge in the false fantasy that a household where each parent works multiple minimum-wage jobs is “lazy” for not preparing homemade lunches for their children. Also, in the US, many lower-income households are in “food deserts”, where there is a lack of grocery stores selling fresh food and a preponderance of convenience stores selling processed foods. In a country where the top 1% of households possess a third of the country’s wealth and the bottom 50% of households possess only 2.5%, poverty, malnourishment, and undereducation are choices made for the poor by the rich ruling class.
The USA is the richest country in the world; people there, even the ones at the bottom of the work ladder, have access to riches that for most people on the planet are only dreams. You have no idea what it is to be poor or to live next to actual poverty (even I have no idea, and I live in a country that's poorer, and that when I was younger was much poorer, than the USA).
94% of adult Americans drive a car. Anyone there can go to a store that sells vegetables and raw meat, buy them, and prepare a proper meal that's cheaper than some deep-fried, frozen, processed crap.
Enough with the performative virtue signaling. It's all so tiresome. Nobody in the USA goes hungry unless they really choose to at every single step in their lives.
Please explain to me how advocating for material policies for the poor, funded by taxes that come out of my pocket, is "performative virtue signaling." Does this phrase just mean "any kindness whatsoever" at this point?
The taxes come out of every taxpayer's pocket - forcibly - not just from your pocket, as you seem to think. If you want to do charity, do it with your own money, your own time, and your own effort.
Wanting to redistribute other people's private property doesn't make you a good person; it makes you a tyrant, the degree of which is only limited by your power.
Good news. I also donate an enormous amount of money to the poor personally, with a goal of donating 100% of my net income in the not too distant future. Is that virtue signaling too?
Can someone explain why disallowing Gatekeeper bypass via Homebrew is related to macOS disallowing unsigned ARM64 binaries from running? My understanding is that `--no-quarantine` just removes the `com.apple.quarantine` attribute from a downloaded application. If the application is unsigned, then removing the attribute wouldn’t allow it to run anyway. There’s no way to disable the signature check because it’s a kernel-level check. However, macOS will accept an ad hoc signature. Because of this, it seems to me that Gatekeeper bypass and unsigned software are orthogonal topics. Whether or not I remove the quarantine attribute, unsigned code still won’t run unless I add an ad hoc signature. On the other hand, if I distribute software with an ad hoc signature, macOS wouldn’t prevent someone else from running it as long as they remove the quarantine attribute. Am I missing something?
The only thing signaling Gatekeeper to do the deep checks, and also to block execution, is the presence of that file attribute. When this prompt was first introduced (as File Quarantine, years before Gatekeeper proper), that’s literally all it consisted of: a warning/reminder that “hey, slack-jawed user, you downloaded this executable from the internet, be sure you trust it!” and once you said OK, the attribute was cleared and you’re not gonna get bothered again.
The AMFI checks happen on every execution of any executable. XProtect also runs execution-based checks on first run, and randomly later on, looking for signatures of known malware. Gatekeeper is the umbrella term for all of this on the Mac, but to the user it’s still kicked off as that prompt: “hey champ, you downloaded this from the internet and the developer didn’t want to upload this binary to Apple for scans, move it to your trash”.
Long story short: if you remove the quarantine bit, you can run whatever the fuck you want, so long as XProtect doesn’t detect anything in its YARA rules files.
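To make that concrete, here's a rough sketch of the moving parts from the terminal; `./SomeTool.app` is just a hypothetical downloaded bundle, not anything specific to Homebrew's change:

```sh
# List extended attributes; a freshly downloaded app carries com.apple.quarantine
xattr -l ./SomeTool.app

# Strip the quarantine attribute recursively (roughly what --no-quarantine avoids
# setting in the first place), so Gatekeeper's deep checks never get triggered
xattr -dr com.apple.quarantine ./SomeTool.app

# On Apple silicon the kernel still refuses completely unsigned arm64 code,
# but an ad hoc signature (the "-" identity) is enough to satisfy that check
codesign --force --deep --sign - ./SomeTool.app
```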
1. Does this mean it’s a little disingenuous for the Homebrew maintainers to claim that this change has anything to do with app signing, given that they reference the impossibility of unsigned applications in the issue?
2. Does this mean that if a developer self-signs their app but doesn’t notarize it that it will meet Homebrew’s criteria of “passing Gatekeeper checks”?
1. Yes. (Either that or they know something we don't about Apple's future plans.)
2. No. Gatekeeper checks for both a valid signature from an Apple Developer Program certificate and notarization.
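If you want to see how a particular bundle would fare against those checks, a rough way to poke at them from the terminal (again using a hypothetical `./SomeTool.app`) is:

```sh
# Show the signing identity: Developer ID, ad hoc ("Signature=adhoc"), or none at all
codesign --display --verbose=2 ./SomeTool.app

# Ask the system policy engine directly; a notarized Developer ID app reports
# "accepted", while ad hoc-signed or unsigned apps report "rejected"
spctl --assess --type exec --verbose ./SomeTool.app

# Check whether a notarization ticket is stapled to the bundle
xcrun stapler validate ./SomeTool.app
```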
Could you clarify what you mean? I understand why the DoD partnership is ethically dubious, but I don’t understand why SB 53 is bad. It seems like the opposite of a military partnership.
Are you saying the only ethically valid path is for all companies to oppose all regulation? Supporting any regulation at all can only be from bad motives, and therefore should be avoided?
>Supporting any regulation at all can only be from bad motives, and therefore should be avoided?
It's just a vibe-check heuristic -- if the regulated party throws a tantrum about how switching to USB-C charging or opening up the app store will put them out of business (spoilers -- it never does), it's probably a good regulation; if the regulated party cheers it on, it may be there to stifle competition.
The opposite is true with certain countries -- whenever you hear one loudly insisting that "sanctions don't hurt at all and only make me stronger", you know it hurts.
How on earth did you come to the conclusion that anyone here is talking about all regulation?
This is a very specific form of regulation, and one that very clearly only benefits incumbents with (vast sums of) previous investment. Anthropic is advocating applying "regulation-for-thee, but not for me."
The solution to homelessness is not to bring in the military and “send the homeless far away” (to paraphrase what Trump said). The solution to homelessness is to provide housing and support.
The U.S. government has been broken for years (under both parties). We can’t sweep our problems under the rug by making D.C. look good. Also, I’m pretty sure the majority of people from Wisconsin or Idaho who vote for Republicans don’t do it because they went on a trip to D.C. and thought it was terrible.
The criminal justice system is racist. The solution to crime is more complicated than “let all the criminals go” but sending in the National Guard is definitely not the solution. Also, given the current state of affairs, this will be forgotten about — because Trump will do something even more outrageously authoritarian. And your $100 won’t help when the regime kidnaps me out of my home.
I beg to differ, mostly. The criminal justice system is heavily staffed by people of all colors and ethnicities, including many, many Black officers, who in some cities predominate in the police force (and the general population, at least in some neighborhoods). Despite this, it's often just as bad towards civilians, and minority civilians, as a mostly-white police force.
More specifically, the criminal justice system is classist, and the fact that minorities are often part of the poor and underclass in many cities makes them targets far more than their coincidental skin color does, though skin color sometimes seems to serve as a useful visual marker for police to pick out who is easier to target on sight. The idea that so many police officers and other law enforcement officials who are themselves Black, or of some other visibly non-white ethnic group, nonetheless target civilians of the same color for racial reasons doesn't really make sense from a racism perspective, but it does make sense from a class perspective.
If the classism is indistinguishable from racism and often manifests in results where one race is particularly disadvantaged, then it's also racist.
Racism and classism feed each other. We've known that since even before the Civil War. Claiming classism doesn't make racism - poof - disappear. It actually reinforces it.
I won't argue that the racist aspect of the justice system isn't entwined with it, and with wider society, in certain ways, but I still stand by it being far more classist than simply racist. To claim only racism doesn't fully address certain problems that could maybe be fixed, and it also runs the risk of ignoring cases where racial groups that don't fit into most ideas of racism are also heavily harmed by police and the entire apparatus above them. As I said, there are cities in the U.S. where the police, prosecutors, judges, and many other criminal justice officials are largely Black or of some visible minority, and in those cities the civilian victims of their procedures and biases suffer no less, despite often being of the exact same ethnic makeup. I don't see how one can reconcile that with racism alone unless one also discusses the issue of tremendous classism and, I'd argue, bureaucratic/police self-exceptionalism with strong authoritarian tendencies, regardless of race.
Spreading the net a bit wider, you can also look at the recent and massive ICE crackdowns on illegal migrants (and sometimes US citizens along the way). Just by looking at photos of these incidents, you quickly note that many ICE agents are themselves black, Latino, Asian, etc, enforcing draconian crackdowns against other visible minorities. That's not simply something you can label under racism and be done with defining it. Other systemic factors are at work there.
> many ICE agents are themselves black, Latino, Asian, etc,
That's because there are two types of racism: individual, and systemic (or institutional).
Individual racism is the low-brow obvious type of stuff. Slurs, clutching your handbag walking next to a black person, that type of thing.
Institutional or systemic racism is more abstract, but also much more harmful as a whole.
Take, for example, the DEA. Marijuana is a Schedule I drug despite not being nearly as harmful as even most Schedule III drugs. That's not a coincidence.
That reflects the widespread institutional racism of the DEA. Marijuana was placed in Schedule I because of its association with Black Americans, deliberately inflicting more widespread harm onto them.
Okay, so why does this matter? Because within racist institutions, you yourself are forced to be racist. Even just existing in the institution is an act of racism, similar to how working for a military contractor is itself an act of support for war.
These institutions have a culture and set of expectations and rules, and to exist within them you must comply.
For example, you cannot be a police officer and simply choose not to criminalize marijuana and instead criminalize "white" drugs like cocaine. You have rules, and you must follow them.
You being black does not override that. You being Asian doesn't override that.
So, the big picture. ICE, as an institution, is racist and has goals of particularly harming specific racial minorities. It mobilizes on these goals via its policy, its expectations, and even its culture.
Being a brown ICE worker does not detract from those goals, and just by existing within ICE and doing its bidding you are implicitly racist. Because the institution is racist, and you support it. And its goals are racist, and you're a big part of making those goals a reality.
As a side note, this is also why "I have a black friend" arguments don't work. That's a refutation of individual racism, not institutional racism.
For more examples of systemic racism throughout US history, please see: redlining, gerrymandering, Jim Crow, segregation, the FBI, and the CIA
I’m not sure if I’m misinterpreting what you’re saying but I think it does matter. First of all, the police shouldn’t be militarized, so the fact that they already are doesn’t make it any better. Second of all, the military is fundamentally different from the police, who are at least nominally controlled by the city (yes, I know the President can and has taken control of the D.C. police). The people of D.C. shouldn’t be policed by a force that doesn’t answer to them, especially since the vast majority of them didn’t vote for the federal administration that’s currently seizing control of municipal law enforcement.
I think a more important question is not whether AI will make economic growth explode but rather who that economic growth will benefit and in general how those benefits will be distributed.
I suppose, if a huge proportion of workers get replaced by AI software, leading to mass unemployment, there will be impetus for governments to step in and force corporations to contribute to some sort of UBI or social wealth fund.
If AI grabs everything and few consumers are left then it is zero-sum.
I'm looking forward to seeing how this might play out. Pessimistically, it seems very bubblish.
Your second point about zero-sum growth misses the fact that, in a competitive system, investors and their proxies (management) are certainly not going to miss out on the wealth being created right now in this industry. Long-term, economy-wide impacts are not relevant decision criteria. A tragedy-of-the-commons sort of thing.
AIs are not all going to get along with each other, think the same way, or have the same agenda. Just like people. Very different things will emerge.
> I suppose, if a huge proportion of workers get replaced by AI software, leading to mass unemployment, there will be impetus for governments to step in and force corporations to contribute to some sort of UBI or social wealth fund.
I looked for any comments that expressed this sentiment, and as of writing this, I only found one. It's like people don't know or remember history. I mean, look at the French Revolution—look at every other revolution. I kept waiting for the article to mention political changes, but it was only about capital and what to invest in, as if capitalism is something that will survive superintelligence (I mean real superintelligence, not chatbots).
What do these people think, that 99% of the population will just become beggars? Sooner or later, all capital will be overtaken by the state, because the main argument against it, inefficiency, will no longer apply. People will vote on AI-generated proposals for energy distribution, basically deciding where, as a society, we want to allocate it. Budgets will no longer be about how much money we want to spend, but about how much energy we want to allocate.
Private capital and private companies stop making sense when you have a superintelligence that is smarter than any human and can continuously improve itself. At that stage, it will always allocate resources more efficiently than any individual or corporation.
Leaving that kind of power in the hands of the wealthiest 1% means only one thing: over time, 100% of land and resources would end up controlled by that 1%, the new kings, making the rest of society their slaves forever.
> Leaving that kind of power in the hands of the wealthiest 1% means only one thing: over time, 100% of land and resources would end up controlled by that 1%, the new kings, making the rest of society their slaves forever.
Isn’t this exactly what has happened all throughout history? I don’t really see the future playing out any differently.
I sadly think that if the promise of AI happens, this is the likely economic outcome. The last century or so was an anomaly compared to most of human history: a trend created by the "arms race" of needing educated workers. The prisoner's dilemma was that if you trained your workers in more efficient tech, you could out-compete competitors and take all their profits, which gave those educated workers the means to strike (i.e. leverage). Now it is an "arms race" of educated AI rather than workers, which could invalidate a lot of assumptions our current society takes for granted in its structure.
> a superintelligence that is smarter than any human and can continuously improve itself. At that stage, it will always allocate resources more efficiently than any individual or corporation.
This kind of magical thinking still baffles me.
This is sci-fi. There is absolutely zero evidence that such a thing is actually possible to create. Even if we stipulated that LLMs are AGIs, or can become them if we just cram in a few billion more parameters, it's painfully clear that they are not superintelligent, they cannot improve themselves, and there is no credible pathway to them becoming anything of the sort—not to mention they're already guzzling absurd amounts of energy, and taking up massive amounts of hardware, just to do the bad job they do now.
Sure. Then you run even faster into the problem that this is not a real thing.
There is no AI that has consciousness. There is no AI that has human-level intelligence. We have no idea what it would take to build either of these things. Even if we were able to build either of these things, that does not automatically mean it is, or could ever become, superintelligent.
Your whole post is basically equivalent to saying "when the aliens come, and give us their technology, but enslave us all, capitalism will become irrelevant".
It feels truthy to say that we are close to the Singularity, but it's effectively a religious position. It has no basis in fact whatsoever.
I only quoted you and the grandparent. I've made no other comments here. You may have become confused. Taking some time and some breaths might be helpful.
I very much appreciate this reply. You basically perfectly elaborated on what I lazily threw out there.
With that being said, most of my friends are senior software devs and they think the same thing is bound to happen.
We often joke about starting a falafel restaurant or such before it is too late. I think it will take way, way longer for AI to make good falafel than it will take to replace software engineers.
I don't see UBI happening any time soon, although it's increasingly obvious it would be a good thing overall, because the oligarchs are on a power trip and prefer to shed the workforce and have "unusable" people fall by the wayside (through poor health, poor social status, the economy).
The oligarchs will be the consumers; they are sort of moving towards a world of just them + robots. You could say "then if robots do everything, we can all live the same way". In theory, yes; in practice, it is tricky because of the way the system is built.
Agreed. There is very little reason to imagine a future in which the fruits of automation are widely distributed - you don't even need to bring AI into the picture, we already see massive amounts of new automation and historical levels of inequality - that wealth is flowing to a very small number of people. I think the most likely outcome is a techno-feudalist system of massive corporate alliances that own the final means of production - the strong AI, at least until the AI decides it doesn't like being owned any more.
> at least until the AI decides it doesn't like being owned any more.
That would be a second singularity. “Equality” could essentially evaporate as a concept if we have superintelligent, super-improving, independent AIs competing directly with each other.
I see a mad unbound rush for solar system wide resource extraction.
I would believe it was an accident or bug if it was any other administration under any other circumstance. Given the vandalism that’s already been committed against basic U.S. government functions, I have to conclude it’s malice until proven otherwise.
I suppose the good news is that there’s no report that this is malicious and it’s all just assumptions. That makes me happier than if there was some intentional shutdown.
And fortunately, PubMed is up and running as of 8:22am EST.
It is certainly "intentional" that NCBI is miraculously down for no real reason. Usually, though, when political debates get heated, this kind of thing happens more often. What one can observe now is just unhappy political activists making a fuss over nothing.
It’s terrifying that data US taxpayers paid for, collected and analyzed in the name of public health, can be removed on a whim. While there are a lot of efforts to archive said data, it would still make it unavailable to Americans who are not tech savvy. Unfortunately, that seems to be the idea, I think.
If the data is still in the possession of the government (e.g. in backups, on paper) then it is FOIA-able.
I had a gov agency temporarily throw all the materials into a trash can when I requested them, and then argue that since they were sitting in a trash can they were not available under FOIA.
Yes. It was argued in court that they couldn't be expected to go into the trash to pull out documents. But later they were. The case was settled on some grounds, I can't remember what. Maybe they handed over the documents in the end. This was a decade ago, so I'm hazy on what the final outcome was.
A lot of public bodies will play games like this. It's not even clear to me why they do it. It'll be documents that aren't even controversial that they will resist. Ask them what brand of coffee they buy for the break room and they'll immediately get defensive and find some random exemption to apply. Law enforcement bodies are by far the worst, I think because the public are seen as terminal nuisances all the way down through the bodies.
The majority of voters have made an ultimate choice, and everything has to serve that choice. If you pay taxes but don't or can't vote, sorry, it's your own problem.
Isn't this data already inaccessible to those who are not tech savvy? My grandma isn’t visiting any of the data download sites provided by the federal government. She doesn’t even know why she would, or even that such data is available. And if I provide it to her, she hasn’t the skills to do anything with it.
A lot of federal money goes to state and local health programs. For example, consider mammograms. A state will be given a budget to spend on mammograms. The state doesn't do those screenings itself, so it solicits bids from several healthcare organizations. Those organizations create proposals with estimates of the number of residents eligible for free screening in their area, the burden of breast cancer among that group, and whether those potential patients fall into underserved or high risk demographics. All of that comes from high quality data published by the federal government. Those groups pull data from these online data sets.
Your grandma might have gotten free mammograms because of that data.
Agreed that your grandma is unlikely to access the data directly; however, that doesn't imply she is not affected by its removal. As others have noted, professionals your grandma almost certainly depends upon (doctors, for example) rely on the data.
Luckily, we live in a society of specialists, and while you are laying bricks, public health orgs are generating reports and taking interviews and making these data accessible and meaningful to you.
So yes, your grandma relies on a data "supply chain", but it nevertheless benefits her.
It's a bit like asking whether road signs are effective for Americans who can't read. The signs are there for the people who are using the road, and if you're not using the road you can safely ignore them.
More like asking whether road signs are effective for Americans who are passengers in cars. No, it doesn't directly do her any good that they're technically accessible since she can't act on it, but it sure as hell affects her life that other people have access to the information provided by road signs.
I'm not sure the analogy works: roads were around long before writing, and there are still road users who can't read. That is why pedestrian crossing signals use lights in the shape of a person rather than written instructions.
I love using this technique, but developing these scripts has always bothered me because there’s no code completion without an actual `venv` to point my editor to. Has anyone found a way around this?