Hacker News | wyldberry's comments

The finding is surprising, but I think their methodology is a bit flawed.

Study 1 shows a "Difference-in-Differences analysis of engagement with 154,122 posts by 1068 accounts before and after the policy change". All this tells us is that existing accounts did not show a noticeable change. It doesn't suggest anything about accounts created afterward, where the culture of Twitter appears to have shifted quite a bit from before it went private.

Basically "okay cool, existing accounts didn't change their behavior". What about new accounts? More anonymous accounts? Can we understand anything else about platform growth and interaction? What about classes of user w/ respect to verified users, anonymous accounts vs accounts tied to real identities?
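For reference, the two-period, two-group difference-in-differences design the study describes reduces to a very simple calculation. A minimal sketch (all numbers below are made up for illustration; the study's actual panel covers 154,122 posts):

```python
# Minimal two-period, two-group difference-in-differences estimate.
# All data here is hypothetical, purely to show the mechanics.
def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """DiD = (treated_post - treated_pre) - (control_post - control_pre)."""
    def mean(xs):
        return sum(xs) / len(xs)
    return (mean(treated_post) - mean(treated_pre)) - (
        mean(control_post) - mean(control_pre)
    )

# Example: engagement per post before/after the policy change.
effect = did_estimate(
    treated_pre=[10, 12, 11],   # existing accounts, before
    treated_post=[11, 13, 12],  # existing accounts, after
    control_pre=[9, 10, 11],    # comparison accounts, before
    control_post=[10, 11, 12],  # comparison accounts, after
)
print(effect)  # 0.0 - no differential change for existing accounts
```

Note what the estimator conditions on: accounts that exist in both periods. It is structurally silent about accounts created after the change, which is exactly the gap above.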

Study 2 is also too limited to draw that conclusion, because people are less likely to honestly report engagement with content or beliefs that could be punished in a given political environment. This was most astutely observed by the French Polymarket user who crushed it betting on the 2024 election using neighbor-polling methodology [0]. Essentially, it appears more reliable to ask about the preferences of a respondent's social circle than to ask the respondent directly.

[0] - https://www.cbsnews.com/news/french-whale-made-over-80-milli...
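A toy simulation of why neighbor polling can beat direct polling when respondents hide their preferences (all parameters are invented for illustration; real circles are not random samples):

```python
import random

random.seed(0)

N = 10_000
true_support = 0.52      # hypothetical true share supporting a candidate
shy_rate = 0.15          # share of supporters who misreport when asked directly

population = [random.random() < true_support for _ in range(N)]

# Direct poll: some supporters hide their preference, biasing the estimate down.
direct = sum(1 for p in population if p and random.random() > shy_rate) / N

# Neighbor poll: ask "how will your circle vote?" - respondents report
# honestly about others, so each answer approximates the local true rate.
circle_size = 10
neighbor = sum(
    sum(random.sample(population, circle_size)) / circle_size for _ in range(N)
) / N

print(f"true={true_support:.2f} direct={direct:.3f} neighbor={neighbor:.3f}")
```

In reality social circles are homophilous, which biases the neighbor estimate in its own way; this sketch only isolates the misreporting effect the comment describes.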


Given the increasing obviousness that there's functionally no oversight of NGOs and government funding, perhaps we should just set up some NGOs and get government grants for these critical services.


They can, they just need to use the EU equivalent of <app> they want. No one is forcing EU residents to use <app>.


You've got it the wrong way around. No one is forcing X to operate in the EU. If they want to do that, they need to follow EU laws.


Gentle reminder that if you're commenting on Hacker News articles, you are likely the outlier in the "why people switch browsers" reasoning. Friends and family constantly surprise me with their tech choices and how they interface with the digital world whenever I'm home on holidays.


“My friend told me Chrome was faster”


This applying to graduate degrees really does seem like the result of AMA lobbying to keep Nurse Practitioner numbers down. It is state- and program-dependent, but in some states NPs have prescribing authority, which cuts into the domain of MD/DO practice in the US. There are of course merits to the argument about NP training vs MD/DO training in pharmacology, but overall this limits American patients' access to prescribed medicine.

Congress, at the behest of AMA lobbying, kept the number of Medicare-funded residency slots capped at the same number from 1997 until the Consolidated Appropriations Act of 2021, which added 1,000 new residency slots[0]. Starting in FY 2023 (October 1, 2022), no more than 200 new positions would be added each FY, meaning the full 1,000 could be created no sooner than FY 2028 (October 1, 2027). Given the medical training timeline of 7-10 years (school, residency, fellowship), we won't see any meaningful impact from that until the mid-2030s.

The US already has a much lower physician-to-population ratio than the Nordic countries (as a comparison among wealthy, Western countries). The US has 2.97 active physicians per 1,000 population, of which 2.52 are in direct patient care[2]. For comparison, Sweden is ~5 per 1,000, Norway 4.5, Denmark 4.45, and Finland 3.8. Bonus: the Russian Federation reports 4.0 per 1,000[3]. Note these numbers are as of 2020.

In America, most people interface with doctors in order to get tests run and medicine prescribed. Reducing the incentive for RNs to move into NP roles by removing the degree's professional status will likely lower the number of prescribing individuals a patient can interface with, increasing bottlenecks and time to care.

[0] - https://www.sgu.edu/news-and-events/new-residency-slots-appr... [1] - https://pmc.ncbi.nlm.nih.gov/articles/PMC8370355/ [2] - https://www.aamc.org/data-reports/data/2023-key-findings-and... [3] - https://www.worldatlas.com/articles/countries-with-the-most-...


I haven’t seen an MD in years. I’ve only seen nurse practitioners, for at least 5 years now. Health care in the US is a deadly, expensive joke. But Fox and friends tell us how great it is compared to socialist countries! Yay!


My current working theory is that US systems are in general great, if you're smart and educated enough to not get scammed. There's a high level of knowledge you need to just exist in society without being preyed upon by some entity.

Unfortunately, healthcare is probably the most glaring example of this. It's already K-shaped based on the insurance you have (or don't have). In addition, most Americans just aren't educated enough about their own bodies and medicine to accurately convey their problems to their care team, and that's before considering whether the care team believes them.

I have a great PPO plan and spend a large amount of time each year researching care for longevity and curating a care team, or finding cash-only practices for some things. If I lost that, I'd be hosed. I can't imagine how people on HMO or Medicare plans manage.

NPs fill a very useful niche, even if that niche is "you tested positive for strep, here's your antibiotics", keeping physicians and PAs free to care for more severe cases.


> In America, most people interface with doctors in order to get tests run and medicine prescribed.

In my experience, NPs already carry a lot of this load, especially outside the medical specialties (i.e., what a GP or PCP would do).

Another example of self-harm to protect a wealthy segment of the population (doctors) at the expense of those who need medical care. It's not just time to care that's the problem. Scarcity drives up prices as well.


"ending last November" - Is the implication that a Trump presidency implies a risk of invasion from the South?

Canada has relied greatly on the United States providing a blanket defense guarantee for the continent. The Canadian military is currently operationally worthless across the board, save the cyber domain. There are many reasons for this that I'm not here to list out. However, that does come with grave geopolitical consequences, and the Canadian government has been living in the 1900s.

The USA, via Alaska, protects Canada against Russian provocation on the West Coast[0]. This is similar to the near-constant probing of NATO states' airspace, especially countries near Ukraine [1][2].

The Canadian Navy is severely underfunded (along with the rest of the Canadian Armed Forces)[3], without enough ships to actively patrol and protect its waters, especially in the North.

The northern passages are incredibly important, and will become more important as trade routes. The US interest in buying Greenland is entirely about having an Atlantic outpost to control those shipping lanes. Those routes can be significantly shorter than routes through the Suez or Panama canals.

In addition to the trade routes, the US fears a Russian-Chinese alliance because of the access it grants to the North Atlantic. Point blank: NATO cannot build ships anymore, and the PLAN's capacity is staggering. That's independent of Chinese and Russian intelligence probing of the entire West Coast.

The world has changed dramatically, and the only thing that really changed in November is that the USA is no longer pretending it can defend the mainland, defend NATO countries, and police shipping lanes on its own. The USA doesn't have the capacity to replace ships, nor does it still have the knowledge to do so.

[0] - https://www.cbsnews.com/news/russia-planes-alaska-us-fighter...

[1]- https://www.nato.int/cps/en/natohq/official_texts_237721.htm

[2]- https://en.wikipedia.org/wiki/2025_Russian_drone_incursion_i...

[3] - https://www.cbsnews.com/news/trump-greenland-panama-canal-wh...


There were literal statements of annexation. Brushed off by some as "that was a joke", but they were made.

Let's not downplay that fact.


I personally can downplay them as a joke because it is a joke. The most likely path forward for anything like that would instead be a certain oil-rich province voting itself independent and then asking the US for aid or to join.

And, if it wasn't a joke, then that's even more of a reason to consider meeting your 2% NATO agreement instead of just phoning it in.


They were not jokes, and no one laughed at them either. They were posturing and trying to be threatening. They were coupled with the start of an actual trade war and intentional attempts to weaken Canada.

Calling them jokes is just a lie, retroactively trying to make it better.


They were an objective joke/troll, but multiple psychological studies on certain mindset patterns under stress show that some people are intellectually unable to get certain types of jokes.


You really can't weaken Canada much more than it is.


It is irrelevant. It was not a joke, and it was coupled with hostile actions. Calling it a joke is massively dishonest; no one laughed or considered it funny at the time.


Millions laughed, ergo joke.


The only country to have ever invaded us is the USA.

Their national anthem is about a battle in that war.


It's downplayable because Trump isn't actually serious about it. He's serious about something until he learns what's possible. Some things are possible (absurd tariffs), other things are not (declaring war on a bordering country).


Let’s not downplay jokes. Lol


I can't take anything you say that seriously because of the rather extreme bias. "Buy Greenland"? I think seize is a better word, if you're avoiding the term invade.


This is a mean-spirited interpretation of what happens when you claim a nation-state attacker.

Generally, the government (as of now) is not paying private companies (with the possible exception of some critical-infrastructure companies) to secure things. We are in the very early stages of figuring out how to hold companies accountable for security breaches, and part of that is figuring out whether they should have stopped it.

A lot of that comes down to a few principles:

* How resourced is the defender versus the attacker?

* Who was the attacker? (Attribution matters; shoutout @ImposeCost on Twitter/X.)

* Was the victim of the attack taking all reasonable steps, showing the cause wasn't some form of gross negligence?

Nation state attacker jobs aren't particularly different from many software shops.

* You have teams of engineers/analysts whose job it is to analyze nearly every piece of software under the sun and find vulnerabilities.

* You have teams whose job it is to build the infrastructure and tooling necessary to run operations

* You have teams whose job it is to turn vulnerabilities into exploits and payloads to be deployed along that infrastructure

* You have teams of people whose job it is to be hands on keyboard running the operation(s)

Depending on the victim organization, if a top-tier country wants what you have, they are going to get it and you'll probably never know.

F5 is, at least by Q2 revenue[0], a very profitable, well-resourced company that has seen some things and been the victim of some high-profile attacks and vulns over the years. It's likely they were still outmatched because a team of people found a weakness and exploited it.

When they use verbiage like "nation-state", it's to signal that they were doing most/all of the right things and still got popped. The relevant government officials already know what happened; this is a signal to the market that they did what they were supposed to and aren't negligent.

[0] - https://www.f5.com/company/news/press-releases/earnings-q2-f...


HN can be unnecessarily vicious in these situations. Commenters have a very narrow slit through which they see companies, because they extrapolate their own understanding onto large corporations.

The attacker needs to find one fault to start attacking a system; the company needs to plug ALL of them to be successful, continually, for all updates, for all staff, for all time.

Having been on both sides of that fence, I don't envy the defenders; it is a losing battle.
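That asymmetry can be made concrete with a back-of-the-envelope model (the per-component flaw probability is invented purely for illustration): if each of n components independently has probability p of an exploitable flaw, the attacker only needs one to exist.

```python
def p_at_least_one_flaw(n, p):
    """Probability that at least one of n independent components is flawed."""
    return 1 - (1 - p) ** n

# Even with 99.9% per-component assurance, a large attack surface leaks:
for n in (100, 1_000, 10_000):
    print(n, round(p_at_least_one_flaw(n, 0.001), 3))
# prints:
# 100 0.095
# 1000 0.632
# 10000 1.0
```

The independence assumption is generous to the defender (real flaws cluster), which only makes the defender's position look better than it is.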


> Having been on both sides of that fence, I don't envy the defenders; it is a losing battle.

Being on the defenders' side, I would say it is not a losing battle.

It is a matter of convenience versus security: not using up-to-date libraries because it requires some code rewrites and “ain't nobody got time for that”; adding too much logic to functions and scope creep instead of segregating services; not microsegmenting workloads; using service accounts with full privileges because figuring out what you actually need takes too much time; and the list could go on.

I am not blaming all developers and engineering managers for this, because they might not know about all the intricacies of building secure services - part of the blame is on the ops and security people who don't understand them either and think they're secure when they are not. And those folks should know better.

And third, hubris: we have all the security solutions that are trendy now, so we're safe. Do they actually work? No one knows.


So, why I say it is a losing battle: when I look for a weakness, it's not a known CVE and it's not known to be exploited.

Many of these companies can keep up to date, assuming their vendors report correctly. The exploits that are not publicly documented are rarely fixed.


It will depend on whether gaming studios continue to invest in a Linux desktop experience. It's common to run your game server on Linux, but MS, partially through DRM support for the big media companies, creates an environment very strongly suited to shipping your game binary into a hostile environment.

This is partially why the major (effective) anti-cheats have migrated to the kernel. Windows allows big-budget games, which are often competitive games, to operate with a higher level of game integrity, which leads to more revenue generation.

MacOS is not an attainable gaming platform in general, as the people who are interested in AAA games are going to need a Pro-series or similar-quality device, which prices out a large part of the current Windows gaming audience.

As an example: it's not too expensive to buy a laptop that runs Valorant and then be funneled into the skin shop. You can get a lot more sales that way than through the crowd of people on MBPs, though perhaps the MBP crew is more likely to include whales.

note: Valorant is not supported on MacOS due to the anticheat requirement, but the hypothetical still stands.


IMO the rise of handhelds like the Steam Deck has a decent chance of pushing big publishers to consider releasing for Linux/Proton. These handhelds fit the niche between smartphone and console gamers [1], a niche that might have some growth left in it. Even the availability of Windows-first handhelds was not all bad for Linux gaming, as SteamOS and other handheld-focused Linux distros have been ported to them.

On the other hand, the anti-cheat side has really been ratcheting up, with newer releases requiring Win 11 and Secure Boot. I somewhat hope and fear we might get a blessed version of SteamOS for the Deck that is heavily locked down and has kernel/hypervisor-level anti-cheat functions added to it, essentially allowing for a boot mode similar to current consoles. While it goes against the open spirit of SteamOS, it might serve as an argument to invest a bit more in the Linux side, potentially improving the ecosystem as a whole.

Or all of it might be the usual "year of the Linux desktop" pipe dream.

[1] leaving out the Switch which is heavily focused on Nintendo IP and has comparatively weak hardware


I have a Steam Deck and run Linux on all my machines, and I am a pretty big gamer. Typically I have no problems.


Same, but I mostly play indie, older and/or singleplayer games. I now often don't even check ProtonDB when buying games, it has gotten that good. Anything AAA, multiplayer and new tends to cause problems due to anti cheats though.


Proton already runs the vast majority of games just fine. Gamers should categorically refuse rootkits and give the cold shoulder to studios that release games that require them. Anyone with a bit of maturity can do that, and nowadays there are thousands of other games to choose from.


> Gamers should categorically refuse rootkits and give the cold shoulder to studios that release games that require them. Anyone with a bit of maturity can do that, and nowadays there are thousands of other games to choose from.

the problem is, the wide masses still keep buying the latest AAA game thanks to sometimes literally hundreds of millions of euros worth of marketing (GTA V already had a $150M marketing budget well over a decade ago), and the free-to-play "whale hunter" games are even worse.

With ye olde purchased online games, like UT2004, you'd think twice before cheating; otherwise you'd get your serial number banned (sometimes not just on one server, but on an entire fleet of servers run by the same op) and you'd have to buy a new license. That alone put a floor on cheater costs.

In contrast, Fortnite or other f2p games? These are overrun by cheaters, there is no cost attached at all, so it's obvious that the only solution is to ratchet up the anti-cheat measures.

All hail capitalism and the quest for f2p developers to lure in the 1-5% of utter whales that actually bring in the money.


> MacOS is not an attainable gaming support platform in general, as the people who are interested in the AAA games are going to need a Pro series

The M5's GPU cores are expected to pick up the same 40% performance boost we just saw in the newly released iPhones.

AAA games written for the M4 already work just fine, the extra performance is needed when you are also emulating other graphics APIs and CPU instruction sets to run Windows games.

Windows on ARM has the same issues, but Prism isn't as good at x86 emulation.


Attainable isn't about benchmarks and performance; it's about the ecosystem, such as supported kernel hooks that make it worthwhile for AAA studios to invest the time in maintaining their anti-cheats and other parts of the games-as-a-service platform.

It's also about market accessibility and penetration. When the base-level MBA in its lowest RAM configuration reliably runs AAA games is when you might see more interest in the platform from those studios, because, much like the iOS market, people running Macs tend to be more readily monetized, especially through things like in-game cosmetics.


The cheapest base M4 Mac Mini has 16 Gigs of RAM and plays AAA games written for Mac today.

The performance boost is needed when you are running Windows games under emulation.

Emulation overhead is also an issue for Proton on Linux or Windows on ARM.


> Emulation overhead is also an issue for Proton on Linux

Nope, because Proton is based on Wine, which stands for "Wine Is Not an Emulator". Windows executables on Linux run natively at full speed like any other Linux program.

Wine implements the Windows ABI and is just here to answer the system calls those executables are making.

In fact, most Windows games are running faster under Linux.


I remember running Warcraft 3 under Wine at a LAN party.

At one point, during a Dota match, every single Windows machine crashed, and my Linux machine was the only one left on the server.

So not only does it run faster but it's more stable too.


Back in 2005 or so I was playing WoW under Wine, and surprisingly it was faster on my crappy PC at that time, because it used less RAM!


Sorry, but DirectX games don't work on top of the Vulkan graphics API used by Linux without an emulation layer provided by the Proton fork of Wine.

Wine may not be an emulator, but Proton includes a completely necessary translation layer if you intend to play DirectX games on Linux.

On Mac, Apple provides an open source emulation layer, D3DMetal, to translate from DirectX to Metal which is used by Wine.


DXVK, VKD3D, D3DMetal, etc. are translation layers. You're implying they're far more heavyweight than they actually are. The real reason Windows games don't run as well on Macs is that they're usually built for x86_64 instead of ARM.

As someone who has used both Windows and Linux to game on the same x86_64 device, the performance hit with Proton is pretty much negligible (and sometimes games actually run faster on Linux).


> DXVK, VKD3D, D3DMetal, etc. are translation layers.

Rosetta is a translation layer that only operates the first time you run a given x86 app on Mac, and creates an ARM translation that is written to disk and used in the future.

Does that mean it has no overhead?


There is substantially more overhead in translating to a different instruction set than in converting API calls to another API. Looking at basic benchmarks should be enough to demonstrate this. IIRC it's something like 80% of native performance when using Rosetta.

Also, Rosetta is more like a transpiler, since it basically recompiles the binary, whereas the others are literally layers that basically take calls in one API and translate them to another. They're pretty much the same thing as ANGLE.


> There is more substantial overhead translating to a different instruction set than in converting API calls to another API.

It's a one time only cost, since Rosetta only runs the first time you launch an app and the translation is written to disk to be used in the future.

That means there is no ongoing cost by your logic, and Proton translating an API call every time it is used is worse.


I'm talking about the overhead post-translation for Rosetta. Translating optimized code from one architecture to another is difficult, which is why there is a performance hit.


That's not emulation; it compiles shaders to Vulkan. DXVK commonly has a slight performance advantage over DX12 on Windows for some hardware.


> That's not emulation, it compiles shaders to vulkan.

D3DMetal compiles shaders into Metal.

So it doesn't introduce overhead?


Not very much at runtime. The main problem with Metal is that it's not really compatible with DX12 or Vulkan. DX12 and Vulkan are very similar; Metal is not. I'm sure the conversion isn't as 1:1, and you lose some performance by doing things esoterically.


You should probably read up on the subject.

> Low level Graphics APIs such as Vulkan, DirectX, Metal, and WebGPU, are converging to a model similar to the way GPUs are currently built.

https://alain.xyz/blog/comparison-of-modern-graphics-apis

Proton is not some magic software that is immune to taking a performance hit when you translate from one API to another.


Proton is not an emulator. Games running on Proton typically have slightly better GPU performance than native DX. It's mostly a compiler.

It's sort of like saying that C++ is inherently a performance hit over C because you have to do more translation. Well, no, the translation happens ahead of time, and the result of the translation might be better suited to run on the hardware. For example, C++ has semantics that allow optimizations that are impossible in C. Rust ALSO has semantics that allow optimizations that are impossible in C.

But, I'm being sloppy here. Proton is many tools, and DXVK is just one of them, WINE is another.

But of the modern graphics APIs, Metal is the most unique. And there is good reason: the M-series chips have some unique GPU hardware that allows them to do certain things faster.

It's just that those things aren't generally useful or automatic; we have to set them up. But we can't do that if we use automatic translation.


You have no idea what you're talking about and it's honestly kinda precious.


>The cheapest base M4 Mac Mini has 16 Gigs of RAM and plays AAA games written for Mac today.

The frame rates are quite low on the base M4. Cyberpunk 2077 test: https://www.youtube.com/watch?v=gID9S2hwJpU

I think you need an M4 Pro or a Max for a good gaming experience with AAA games.


Gaming is the only reason I keep buying computers with Windows.

Regarding this article, when you mentioned competitive gaming, I imagined a competition of that sort. I wonder what a Windows installation looks like at a big gaming competition that many players attend. It's never BYOD; rather, they get Windows preinstalled on high-end gaming PCs.

Do the players need to log in to their Microsoft accounts and download their cloud contents to someone else's computer? Or maybe there is a loophole for gaming contests that allows installation without a cloud login?


If you have to play games, just have a separate Windows computer for that, and do everything else on a Linux box.


It's really easy for people who work in tech, or tech-adjacent, to recommend this, but in my experience, getting anyone to try nearly anything on Linux is very rough. Friends who wanted to "take control of privacy in their life" never made it beyond a week of trying a Linux distribution.

We have decades of training in the consumer market around very simple install patterns using UIs and minimal messing with configuration. The people in gaming who overclock and tweak their settings are a small minority. Those people are the most likely to grok switching to Linux, but when they get there and find that most of their favorite apps don't work the way they're used to, they go back to Windows or Mac.

My hypothesis is that for Linux gaming to truly take off, you'll need a true desktop (not the Steam Deck, which I use weekly) that makes it a handful of clicks to get whatever you want installed and working. That means you'll need a commercially backed OS where developers maintain everything needed to support near-infinite peripheral connections for a variety of use cases, clear anti-cheat interfaces, and likely clear DRM hooks as well.


> Friends who wanted to "take control of privacy in their life" never made it beyond a week of trying to use a Linux distribution.

I wonder why. Something like Linux Mint isn't materially different from Windows in terms of UI. Any peripheral sold as "Linux compatible" will just work when you plug it in, and Steam lets you play practically any game that doesn't require an invasive rootkit (aka kernel-level anti-cheat).

I think a good first step would be to start using common FOSS programs such as Firefox, Thunderbird, VLC, LibreOffice on Windows during a transition period.


People probably feel less in control in an unfamiliar environment even if the superficial functionality is similar. I suspect this might be a greater factor for those who are somewhat tech-savvy and used to knowing their way around their computer to some degree. Once you go a bit beyond launching apps and using their UIs, the differences become apparent, bringing about a sense of unfamiliarity and a loss of a sense of control and competence.

People for whom the computer is just an appliance with limited applications (and who recognise their relationship to the computer as such) might even be better able to switch, provided that everything is set up for them. My elderly parents used a Linux box I set up for them for years at some point.


For me personally, the moment I stopped tweaking Linux endlessly was when I installed the Universal Blue images (Bazzite/Aurora/Bluefin). They made upgrading and using software so painless by providing sane defaults that I no longer feel the need to time my upgrades for after the bugs have been patched out, or to look up random commands to fix something. They are reliable enough that I feel comfortable recommending and installing them for family members, something I would not have done before.


Dual boot seems like a more obvious recommendation? Or better still, play games on linux, except those that require kernel AC?


I find it annoying not to be able to run things at the same time. I used dual boot many years ago but ran into the issue that one thing required one OS, another thing the other OS. Kept having to close things down and reboot, reboot, reboot. Nah, thanks. I'll use Linux with an offline Windows XP VM for Age of Empires and call it a day. One day, maybe, I'll use a Windows 10 VM without a Microsoft account to run modern software if the need arises.


Some forms of kernel anti-cheat make dual booting harder, too. I can't play Valorant, since that version of Vanguard requires Secure Boot, which doesn't seem to work with my dual-boot setup unless I invest more time fiddling than I care to. Easier just not to play that game.


If you can make it work, sure, but somebody will probably complain that it's too hard for the general population.


I agree, but I'm not sure that's acceptable to the general population


Fine, but the general population will have to accept whatever fate Microsoft has in mind for them.

Edit: I'd guess a lot of them just follow whatever instructions they are given, and create the online account. If Microsoft thought there was a chance of serious rebellion, they wouldn't be doing it.


These types of games are only a small part of gaming. I use a MacBook as my main machine, and I play games on my console. The majority of gaming has nothing to do with buying skins, and we should all be rejecting this nonsense anyway.


Good thing they thought of that. Disclaimer: I was at Riot during some of the Valorant dev cycle, and the stated goal in this tech blog [0] of keeping latency under 35ms was a huge one.

This was only really doable because Riot has invested significantly in buying dark fiber and peering at major locations worldwide [1][2].

[0] - https://technology.riotgames.com/news/peeking-valorants-netc... [1] - https://technology.riotgames.com/news/fixing-internet-real-t... [2] - https://technology.riotgames.com/news/fixing-internet-real-t...


I've read those articles a couple of times over the years and always found it fascinating that they actually built a backbone with dark fibre. That was ten years ago, so it would be interesting to see an update.


This is excellent material!


It is not absurd; this is the standard for rapid prototyping in the DoD. Palantir already has a strong track record for authn/z inside its systems used across the DoD, LEO, and intelligence agencies, so it's not an untread, uncharted path for these organizations.

