No, but it's primarily because Meta has their own server infrastructure already. RSCs are essentially the React team trying to generalize the data fetching patterns from Meta's infrastructure into React itself so they can be used more broadly.
I wrote an extensive post and did a conference talk earlier this year recapping the overall development history and intent of RSCs, as best as I understand it from a mostly-external perspective:
It's a delicate subject but not an unprecedented one. Automatic detection of already known CSAM images (as opposed to heuristic detection of unknown images) has been around for much longer than AI, and for that service to exist someone has to handle the actual CSAM before it's reduced to a perceptual hash in a database.
Maybe AI-based heuristic detection is more ethically/legally fraught since you'd have to stockpile CSAM to train on, rather than hashing then destroying your copy immediately after obtaining it.
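For the hash-matching side, here's a minimal sketch using a simple average hash (aHash), assuming Pillow is installed and the image paths are placeholders. Production systems use far more robust proprietary hashes (PhotoDNA and the like), but the flow is the same: reduce each known image to a compact hash, keep only the hashes, and compare new images against them by bit distance.

```python
# Minimal sketch of hash-based matching with a simple average hash (aHash).
# Real systems use proprietary, more robust perceptual hashes; this only
# illustrates the "reduce to a hash, then compare" flow.
# Assumes Pillow; image paths are placeholders.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to size x size grayscale, then set one bit per pixel
    depending on whether it is brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# The database only needs the hashes, not the images themselves.
known_hashes = {average_hash("known_image.png")}

candidate = average_hash("uploaded_image.png")
# A small Hamming distance means "probably the same image" even after
# re-encoding or resizing.
if any(hamming_distance(candidate, h) <= 5 for h in known_hashes):
    print("match against known-image database")
```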
Why would you think that? Every distribution, every view is adding damage, even if the original victim doesn't know (or even would rather not know) about it.
I don't think AI training on a dataset counts as a view in this context. The concern is predators getting off on what they've done, not developing tools to stop them.
Debating what counts as a view is irrelevant. Some child pornography subjects feel violated by any storage or use of their images. Government officials store and use them regardless.
> The infographic had a statement about not training on CSAM and revenge porn and the like but the corpospeak it was worded in made it sound like they were promising not to do it anymore, not that they never did.
We know they did; an earlier version of the LAION dataset was found to contain CSAM after everyone had already trained their image generation models on it.
What does a visualization bring to the table over a book if it's executed in the most generic way possible? The decisions made when adapting one medium to another are what make it worthwhile or not.
Unless your goal is purely to capture people who don't and won't read, as cheaply and cynically as possible.
There is a big difference between sending the story to the AI and saying "visualize this" vs. carefully describing exactly what the visualization should look like and effectively only using the AI to render your vision.
> vigorously defended the use of AI (hilariously, by bragging about how many hours it took to make that ad).
Likewise with the Coca-Cola ad: the agency said in its defense that it had to sift through 70,000 video generations to assemble the few dozen shots in the final ad. And after all that sifting they still couldn't get the one element of continuity (the Coke truck) to look consistent from shot to shot, and had to manually composite over all of the Coke logos since the model kept mangling them.
They do, but it's irrelevant to performance nowadays since you're required to install all of the disc data to the SSD before you can play. The PS3/360 generation was the last time you could play games directly from a disc (and even then some games had an install process).
They do still have texture units since sampling 2D and 3D grids is a useful primitive for all sorts of compute, but some other stuff is stripped back. They don't have raytracing or video encoding units for example.
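To make "sampling 2D grids" concrete, here's a small software sketch (assuming NumPy/SciPy, with a made-up grid and sample points) of the bilinearly filtered lookup that texture units perform in hardware:

```python
# Software sketch of what a texture unit does: fetch values from a 2D grid at
# arbitrary fractional coordinates with bilinear interpolation.
# Assumes NumPy/SciPy; the grid and sample points are arbitrary examples.
import numpy as np
from scipy.ndimage import map_coordinates

grid = np.arange(16, dtype=np.float32).reshape(4, 4)  # a 4x4 "texture"

# Sample at fractional (row, col) positions; order=1 means bilinear filtering.
rows = np.array([0.5, 1.25, 2.0])
cols = np.array([0.5, 3.0, 1.75])
samples = map_coordinates(grid, [rows, cols], order=1, mode="nearest")

print(samples)  # values interpolated between neighbouring texels
```

Texture hardware does this kind of filtered, cached lookup per thread with no ALU cost, which is why it stays useful for general compute even when graphics-only blocks get stripped out.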
The more pertinent issue is that many TVs will only do VRR over HDMI 2.1, and many active DP to HDMI 2.1 adapters won't pass VRR through either.
That's also why the Switch 2 supports VRR on its internal display but not when connected to a TV - the dock can't encode an HDMI 2.1 signal. That's just Nintendo being Nintendo though; they could support it if they wanted to.