
Very depressing to see this next to the "LinkedIn uses 2.4GB of RAM" post.

The solar covered parking lots near me are great because they also serve as cover for your car when it’s hot and sunny.

It’s not the most cost-effective way to install solar, though. A tall structure designed to put the panels high up in the air and leave a lot of space for cars is a lot more expensive than normal rooftop solar or even field setups. This is basically a way of imposing some of the cost of clean energy on parking lots as a tax. Which may not be a bad thing in dense cities, where parking lots have their own externalities on the limited available land.


My local county is currently in a dispute with the local bar association: the county wants to upgrade the courthouse security cameras, and the sheriff wants to add audio capabilities. This includes parts of the building just outside the courtroom that counsel frequently use for brief asides with their clients (due to the lack of other private rooms). The county seems to favor adding the microphones and pinky-swearing that they won't be used and that public records requests won't be used to listen in on privileged communication, but it's obvious how difficult that would be to trust. They keep putting off a decision because they don't want to piss off the lawyers.

Looks like what you might expect in a standard marketing app from a consultancy. They probably hired someone to develop it, and that shop used its standard app architecture, which includes location tracking code and the other stuff.

Even if what they hear is inadmissible in court, parallel construction is a real thing and they will find a way to work backwards.

Hmm, green account, no comments or submissions, generated website for an unsigned app with power user features. That’s a no from me dawg.


Play store is the largest distributor of spyware and viruses for Android.

Not even a fraction of a percent of scams come from installing software normally; the overwhelming majority come from the Google Play store.


It's a very small concession. The high initial friction still means when someone comes to me with a problem and I tell them the solution is in F-Droid, they have to wait a day. Most give up and pick a different, less trustworthy solution from Google Play.

I understand why OpenAI is trying to reduce its costs, but the claim that AI crawlers aren't creating very significant load simply isn't true, especially for crawlers that ignore robots.txt and hide their identities. This is direct financial damage, and it's particularly hard on nonprofit sites that have been around a long time.

This is mostly because Americans keep electing total morons.

This looks cool enough, but it’s starting to drive me crazy how people are in such a rush to put out their macOS apps that they can’t be bothered to get a developer account and run a one-line command. It’s not hard.

I used to be sympathetic to complaints about not wanting to pay the developer account fee. But when you’re vibe coding, you’re probably paying a good chunk of change to your LLM supplier of choice every month, and the yearly developer account fee seems minor in comparison.

Also, it’s just such a bad security precedent. This page describes the error you get as “the typical macOS Gatekeeper warning”, as though it were just another piece of corporate silliness, like clicking through a EULA.


> No, this is a bad solution.

You didn't say why this is a bad solution. The government mandates that cars get safer every year and fatalities are down 78% from the 1960s. Whenever government regulates things to benefit people, people tend to benefit.

> One of the things macbook users praise the most is "build quality", which often means the solidity of the device, lack of flex, etc.

It seems like the Macbook Neo has a lot of those properties as well for a very inexpensive device that is extremely easy to repair.


I am somewhat dismayed that contracts were accepted. It feels like piling on ever more complexity to a language which has already surpassed its complexity budget, and given that the feature comes with its own set of footguns, I'm not sure that it is justified.

Here's a quote from Bjarne:

> So go back about one year, and we could vote about it before it got into the standard, and some of us voted no. Now we have a much harder problem. This is part of the standard proposal. Do we vote against the standard because there is a feature we think is bad? Because I think this one is bad. And that is a much harder problem. People vote yes because they think: "Oh we are getting a lot of good things out of this.", and they are right. We are also getting a lot of complexity and a lot of bad things. And this proposal, in my opinion is bloated committee design and also incomplete.


Incredibly small concession that doesn’t warrant this article’s absolutely insane framing: “Even less of a problem than we thought,” “very, very good news,” “already sounded perfectly manageable.”

The author is so giddy to defend this monopolistic restriction on Google’s part. Hackers can use F-Droid without annoyance, but this really does kill any chance of normies using it. They absolutely will use the worst spyware on Google Play instead, and the author seemingly loves it.


And how is this comment relevant here? The abstract lists the digestible model names, and you can find the details in the supplementary text:

> To evaluate user-facing production LLMs, we studied four proprietary models: OpenAI’s GPT-5 and GPT-4o (80), Google’s Gemini-1.5-Flash (81) and Anthropic’s Claude Sonnet 3.7 (82); and seven open-weight models: Meta’s Llama-3-8B-Instruct, Llama-4-Scout-17B-16E, and Llama-3.3-70B-Instruct-Turbo (83, 84); Mistral AI’s Mistral-7B-Instruct-v0.3 (85) and Mistral-Small-24B-Instruct-2501 (86); DeepSeek-V3 (87); and Qwen2.5-7B-Instruct-Turbo (88).

edit: It looks like OP attached the wrong link to the paper!

The article is about this Stanford study: https://www.science.org/doi/10.1126/science.aec8352

But the link in OP's post points to (what seems to be) a completely unrelated study.


Nontechnical people simply don't have any idea what LLMs are. Their only mental model comes from science fiction, plus the simple fact that we possess a theory of mind. It would be astonishing if people were able to casually not anthropomorphize LLMs, given that untold millions of years' worth of evolution of the simian neocortex is trying to convince you that anything that talks like that must be another mind similar to yours.

Also, many, many people suffer from low self-esteem, and being showered with endorsement and affirmation by something that talks like an authority figure must be very addictive.


Tim from the Copilot coding agent team here. We've now disabled these tips in pull requests created by or touched by Copilot, so you won't see this happen again for future PRs.

We've been including product tips in PRs created by Copilot coding agent. The goal was to help developers learn new ways to use the agent in their workflow. But hearing the feedback here, and on reflection, this was the wrong judgement call. We won't do something like this again.


The thruster fix is the part that gets me. They sent a command that would either revive thrusters dead since 2004 or cause a catastrophic explosion, then waited 46 hours for the round trip with zero ability to intervene. That's a production deployment with no rollback, no monitoring dashboard, and a 23-hour latency on your logs. They nailed it.

My parents ended up being forced by circumstances to move into a retirement home about five years ago. Fortunately, the place turned out to be run by people who mostly cared about their clients and so my parents' lives were basically OK, except that the food sucked (which AFAICT is par for the course at retirement homes). But a few months ago the place was acquired by a different company, which is trying to squeeze out higher profits. Staffing and services are being cut, and prices are going up. Even the food got worse, which I didn't think was even possible. The response when someone complains is, "If you don't like it you are free to leave."

Yeah, right. My barely mobile 90-year-old parents, one of whom has Parkinson's, are just going to pack up and go. They know perfectly well that they have a captive audience.

Thankfully, my mother died before the acquisition, and my father died last week, only a few months after the acquisition, so I don't have to deal with this any more. But caveat emptor: if you ever go into a retirement home, think about what will happen if they change ownership. Even if it looks great, or even acceptable, now, there is no guarantee that it will still be great, or even acceptable, tomorrow, unless you somehow manage to negotiate such a guarantee. I have no idea what a contract provision like that would even look like. But I am going to be facing this problem myself some day, so I'd love to hear ideas.


I've taken multiple Amtrak routes, all out of the Northeast Corridor but eventually crossing the country West or South.

You don't take Amtrak because you want to get there fast, and you don't really take it because it's cheaper than flying. You take it because you can, and because it's more important to you to be (comparatively) comfortable instead of rushing from A to B. You take it because of the sights, the people, the chance encounters, the proximity to city centers that airplanes can never hope to match. It's an experience in and of itself that's distinctly foreign to many Americans, and one I wholeheartedly recommend.

Sitting in a roomette, crossing from Boston to LA over a long weekend, sharing delicious meals with total strangers as the countryside whizzed by (or we sat on a siding waiting on a freight train).

Just not comparable.


A pastime I have with papers like this is to look for the part in the paper where they say which models they tested. Very often, you find either A) it's a model from one or more years ago, only just being published now, or B) they don't even say which model they are using. Best I could find in this paper:

> We evaluated 11 user-facing production LLMs: four proprietary models from OpenAI, Anthropic, and Google; and seven open-weight models from Meta, Qwen, DeepSeek, and Mistral.

(and graphs include model _sizes_, but not versions, for open weight models only.)

I can't comprehend how including what model you are testing is not commonly understood to be a basic requirement.


One of the authors (of one of the two models, not this particular paper) here. Just a clarification: these models are *not* burned into silicon. They are trained with brutal QAT but are put onto FPGAs. For AXOL1TL, the weights are "burned" in the sense that they are hard-wired in the fabric (i.e., shift-add instead of a conventional read-mul-add cycle), but not into the raw silicon, so the chip can be reprogrammed. That said, for projects like the smart pixel or HGCAL readout there are similar efforts targeting silicon (google something like "smartpixel cern" or "HGCAL autoencoder" and you will find them), and I thought this was one of them when I saw the title.

Some slides with more info: https://indico.cern.ch/event/1496673/contributions/6637931/a... The approval process for a full paper is quite lengthy in the collaboration, but a more comprehensive one is coming in the following months, if everything goes smoothly.

Regarding the exact algorithm: there are a few versions of the model deployed. The versions before v4 (the ones deployed when this article was written) are on slides 9-10. The model was trained as a plain VAE that is essentially a small MLP. At inference time, the decoder was stripped and the mu^2 term from the KL divergence was used as the anomaly score (contributions from the terms containing sigma were found to have a negligible impact on signal efficiency). In v5 we added a VICReg block before that and used the reconstruction loss instead. Everything runs in ≤2 clock cycles at a 40 MHz clock. Since v5, the hls4ml-da4ml flow (https://arxiv.org/abs/2512.01463, https://arxiv.org/abs/2507.04535) has been used for putting the model on FPGAs.
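
To make the mu^2 trick concrete, here is a minimal PyTorch-style sketch of the idea; the layer sizes and names are purely illustrative and are not the actual AXOL1TL architecture, which is the heavily quantized MLP described in the slides above:

    import torch
    import torch.nn as nn

    class TinyVAE(nn.Module):
        def __init__(self, n_in=57, n_hidden=16, n_latent=4):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
            self.mu = nn.Linear(n_hidden, n_latent)      # mean head
            self.logvar = nn.Linear(n_hidden, n_latent)  # log-variance head
            self.decoder = nn.Sequential(nn.Linear(n_latent, n_hidden), nn.ReLU(),
                                         nn.Linear(n_hidden, n_in))

        def forward(self, x):
            # Standard VAE forward pass, only used during training.
            h = self.encoder(x)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
            return self.decoder(z), mu, logvar

        def anomaly_score(self, x):
            # Inference: decoder and sigma branch are dropped; only the mu^2
            # part of the KL term is kept as the per-event score.
            return self.mu(self.encoder(x)).pow(2).sum(dim=-1)

On the FPGA, the same arithmetic ends up as hard-wired shift-adds rather than a generic matrix multiply, which is the part the hls4ml-da4ml flow takes care of.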

For CICADA, the model was trained as a VAE again, but this time distilled with a supervised loss on the anomaly score over a calibration dataset. Some slides: https://indico.global/event/8004/contributions/72149/attachm... (not up to date, but I don't know if there are newer public ones). Both the student and the teacher were conventional conv-dense models; they can be found on slides 14-15.
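
A rough sketch of what that distillation step could look like (the helper names and the loss weighting are my assumptions for illustration, not CICADA's actual training code):

    import torch
    import torch.nn.functional as F

    def distill_step(student, teacher, x, labels, opt, alpha=0.5):
        # The frozen teacher provides a target anomaly score for each event
        # in the calibration batch.
        with torch.no_grad():
            target = teacher.anomaly_score(x)
        # The student regresses the score directly (one output per event).
        pred = student(x).squeeze(-1)
        # Blend the distillation term with a supervised term that pushes
        # labelled anomalies toward high scores and background toward low ones.
        loss = alpha * F.mse_loss(pred, target) \
            + (1 - alpha) * F.binary_cross_entropy_with_logits(pred, labels.float())
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()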

Shameless plug for some of my work on running QAT (high-granularity quantization) and doing deployment (distributed arithmetic) of NNs in the context of such applications (i.e., FPGA deployment for <1 µs latency), if you are interested: https://arxiv.org/abs/2405.00645 https://arxiv.org/abs/2507.04535

Happy to take any questions.


Being removed for versions 25 to 38… honestly confirms the feminist narrative of some people being idiots, though.

Like, imagine documentation on object oriented programming being removed because it offended some functional programming folks.


I replaced the MacBook Air M1 keyboard with a $20 model from Amazon and it's been going strong for a full year. I had spilled ginger ale on the original.

The board is riveted in, but there are enough screws to hold the replacement in place. Removing the board is a shockingly violent process, but it worked for me.

Keyboard: https://www.amazon.com/dp/B0CQBVMM3X (price has gone up).

Video of rivets breaking: https://i.tonybox.net/9f2083b218d5.mp4 (you can see I missed a screw and slightly cut my hand here too).


The real frustrating part is that Cloudflare's "definition" of suspicious keeps changing and expanding. VPN users, privacy-first browsers, uncommon IP ranges, they all get flagged. The people most likely to get caught by these systems are exactly the ones who care most about their privacy, and not the bots that they are apparently targeting.

I've always said this, but AI will win a Fields Medal before it's able to manage a McDonald's.

Math seems difficult to us because it's like using a hammer (the brain) to twist in a screw (math).

LLMs are discovering a lot of new math because they are great at low-depth, high-breadth situations.

I predict that in the future people will ditch LLMs in favor of AlphaGo style RL done on Lean syntax trees. These should be able to think on much larger timescales.

Any professional mathematician will tell you that their arsenal is ~10 tricks. If we can codify those tricks as latent vectors, it's GG.


> The real frustrating part is that Cloudflare's "definition" of suspicious keeps changing and expanding.

That's... exactly expected? It's a cat and mouse game. People running botnets or AI scrapers aren't diligently setting the evil bit on their packets.


The biggest sign something is broken is when someone writes: "Thankfully, my mother died before the acquisition, and my father died last week, only a few months after the acquisition, so I don't have to deal with this any more."

Depressing to read. I'm not sure on which side.


There's a crazy story in here where Sytse invested in a click chemistry cancer research startup (Shasqi) in 2017 and ended up becoming a customer six years later.

I sincerely hope it works out for him.

