
Does Facebook actually use RSC? I thought it was mainly pushed by the Next.js/Vercel side of the React team.

No, but it's primarily because Meta has their own server infrastructure already. RSCs are essentially the React team trying to generalize the data fetching patterns from Meta's infrastructure into React itself so they can be used more broadly.

I wrote an extensive post and did a conference talk earlier this year recapping the overall development history and intent of RSCs, as best as I understand it from a mostly-external perspective:

- https://blog.isquaredsoftware.com/2025/06/react-community-20...

- https://blog.isquaredsoftware.com/2025/06/presentations-reac...


No, they don't. I think Meta is just big enough that they don't really care what's happening with React anymore, haha.

It's a delicate subject but not an unprecedented one. Automatic detection of already known CSAM images (as opposed to heuristic detection of unknown images) has been around for much longer than AI, and for that service to exist someone has to handle the actual CSAM before it's reduced to a perceptual hash in a database.

Maybe AI-based heuristic detection is more ethically/legally fraught since you'd have to stockpile CSAM to train on, rather than hashing then destroying your copy immediately after obtaining it.
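
To make the known-image matching concrete, here's a minimal sketch of the shape of that pipeline: hash the image once, store only the hash, discard the image, then compare new uploads by Hamming distance. Average hashing and the threshold here are illustrative stand-ins, not what any real scanning service (e.g. PhotoDNA) actually uses:

    def average_hash(pixels):
        # Hash an 8x8 grayscale thumbnail (a flat list of 64 values,
        # 0-255) into 64 bits: each bit records whether that pixel is
        # brighter than the image's mean brightness.
        assert len(pixels) == 64
        mean = sum(pixels) / 64
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p > mean else 0)
        return bits

    def hamming(a, b):
        # Number of differing bits between two 64-bit hashes.
        return bin(a ^ b).count("1")

    def matches_known(upload_hash, known_hashes, threshold=5):
        # Flag an upload whose hash lands within `threshold` bits of
        # any hash in the database. The slack tolerates re-encoding,
        # resizing, and other minor alterations of the same image.
        return any(hamming(upload_hash, k) <= threshold for k in known_hashes)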


> Maybe AI detection is more ethically fraught since you'd need to keep hold of the CSAM until the next training run,

Why?

The damage is already done.


Some victims feel this way. Some do not.

Why would you think that? Every distribution, every view adds damage, even if the original victim doesn't know about it (or would rather not know).

I don't think AI training on a dataset counts as a view in this context. The concern is predators getting off on what they've done, not developing tools to stop them.

Debating what counts as a view is irrelevant. Some child pornography subjects feel violated by any storage or use of their images. Government officials store and use them regardless.

I don't think that's how it works.

> The infographic had a statement about not training on CSAM and revenge porn and the like but the corpospeak it was worded in made it sound like they were promising not to do it anymore, not that they never did.

We know they did: an earlier version of the LAION dataset was found to contain CSAM after everyone had already trained their image generation models on it.

https://www.theverge.com/2023/12/20/24009418/generative-ai-i...


What does a visualization bring to the table over a book if it's executed in the most generic way possible? The decisions made when adapting one medium to another are what make the adaptation worthwhile, or not.

Unless your goal is purely to capture people who don't and won't read, as cheaply and cynically as possible.


There is a big difference between sending the story to the AI and saying "visualize this" versus carefully describing exactly what the visualization should look like, effectively only using the AI to render your vision.

> vigorously defended the use of AI (hilariously, by bragging about how many hours it took to make that ad).

Likewise with the Coca-Cola ad: the agency said in their defense that they had to sift through 70,000 video generations to assemble the few dozen shots in the final ad. And after all that sifting they still couldn't get the one element of continuity (the Coke truck) to look consistent from shot to shot, and had to manually composite over all of the Coke logos because the model kept mangling them.


I think it mostly blew up via unofficial reposts, since that original version was in French without subtitles.

This one copy on X has 27 million views after 2 days: https://x.com/pawcord/status/1998361498713038874


OK thanks, this changes things! X exaggerates how it counts views, but overall I do believe millions saw it.

OP is the original upload, but the agency reposted it with English subs after it got popular outside of France: https://www.youtube.com/watch?v=iLERt5ZkpQ4

You can tell it's great visual storytelling because you don't even need to know the words.

I guess the McDonald's ad didn't need words either, but it was just depressing and awful.


What was the McDonald's ad? Could you drop a link, perhaps?

Here's a Guardian link that tells the story and includes the ad: https://www.theguardian.com/business/2025/dec/11/mcdonalds-r...

Thank you! Goodness, that ad made me want to barf

Cheers bro!

They do, but it's irrelevant to performance nowadays since you're required to install all of the disc data to the SSD before you can play. The PS3/360 generation was the last time you could play games directly from a disc (and even then some games had an install process).

They do still have texture units, since sampling 2D and 3D grids is a useful primitive for all sorts of compute, but some other hardware is stripped back. They don't have ray tracing or video encoding units, for example.
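
For a sense of why that's useful outside graphics: the core thing a texture unit does in hardware is a filtered read from a grid at arbitrary fractional coordinates. Here's a rough software sketch of 2D bilinear filtering (the standard textbook math, not any particular GPU's implementation):

    def sample_bilinear(grid, u, v):
        # Read `grid` (a 2D list of floats) at continuous coordinates
        # (u, v), clamping to the edges and blending the four
        # surrounding texels by proximity - what the hardware does in
        # a single sample instruction.
        h, w = len(grid), len(grid[0])
        x = min(max(u, 0.0), w - 1.0)
        y = min(max(v, 0.0), h - 1.0)
        x0, y0 = int(x), int(y)
        x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
        fx, fy = x - x0, y - y0
        top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
        bot = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
        return top * (1 - fy) + bot * fy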

The more pertinent issue is that many TVs will only do VRR over HDMI 2.1, and many active DP to HDMI 2.1 adapters won't pass VRR through either.

That's also why the Switch 2 supports VRR on its internal display but not when connected to a TV: the dock can't encode an HDMI 2.1 signal. That's just Nintendo being Nintendo though, they could support it if they wanted to.

