
I think it depends on the depth of the summary and its purpose. You can do quite an in-depth analysis as part of educational material, for example, which is one of the factors weighed in fair use.

I think a key thing to remember when assessing your own liability is that fair use is a defense, not an automatic, guaranteed right covering blanket uses.

Leaking spoilers of unpublished works can definitely cause market harm, and it serves no wider good in the way educational material does.

I wouldn't like to be on the receiving side of this lawsuit. At the very least it's going to be expensive to defend against.


That's the rub. When it comes to copyright, money makes right. The one with more money and willingness to go to court will win. Not who is actually legally right.


That's not just copyright, it's our entire legal system. A corporation can intentionally murder hundreds of thousands of people and get nothing but a slap on the wrist fine.


Yes please!


That's not true, or they wouldn't have settled for $1.5 billion specifically for training on pirated material.

https://apnews.com/article/anthropic-copyright-authors-settl...


As I said, the initial piracy was an issue. That is what they settled over. Your link covers this:

> A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn’t illegal but that Anthropic wrongfully acquired millions of books through pirate websites.

With more details about how they later did it legally, and that was fine, but it did not excuse the earlier piracy:

> But documents disclosed in court showed Anthropic employees’ internal concerns about the legality of their use of pirate sites. The company later shifted its approach and hired Tom Turvey, the former Google executive in charge of Google Books, a searchable library of digitized books that successfully weathered years of copyright battles.

> With his help, Anthropic began buying books in bulk, tearing off the bindings and scanning each page before feeding the digitized versions into its AI model, according to court documents. That was legal but didn’t undo the earlier piracy, according to the judge.


I understand your thoughts. I've had similar motivation problems with blogging since the release of ChatGPT. It feels like you're writing for a machine rather than for readers. I've definitely seen a decline in readers since December 2023 on older articles that previously had steady traffic for years.

Also, I just purchased LazyVim For Ambitious Developers. I've used the online edition a number of times in recent months. Thanks for your work!


Excellent deep dive and explanation of the process of tracking down and fixing it. Thanks for sharing; it was a fun read. I'll definitely keep this in mind next time I fire up Far Cry for some nostalgia!


This reminds me very much of the "old internet", late-'90s/early-2000s GeoCities fun! Love it.


That's not quite true. There is support for "strict" tables, which do have more stringent rules around types:

https://sqlite.org/stricttables.html
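For a concrete illustration, here's a minimal sketch using Python's bundled sqlite3 module (the table and column names are my own; STRICT requires SQLite 3.37.0 or newer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# STRICT tables need SQLite 3.37.0 (2021) or newer.
if sqlite3.sqlite_version_info >= (3, 37, 0):
    conn.execute("CREATE TABLE users(id INTEGER, name TEXT) STRICT")
    conn.execute("INSERT INTO users VALUES (1, 'alice')")  # fine
    try:
        # A STRICT table errors out instead of silently storing TEXT
        # in an INTEGER column, as an ordinary flexibly-typed table would.
        conn.execute("INSERT INTO users VALUES ('oops', 'bob')")
    except sqlite3.IntegrityError as e:
        print("rejected:", e)
```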


MongoDB also disables safety features by default to improve its benchmarks, but MongoDB gets lots of criticism for that while SQLite gets none.


Nanite is a lot more than just a continuous LOD system; the challenges it needed to solve go well beyond that. Continuous LOD has been used for literal decades in things like terrain. The hard parts of continuous LOD for general static meshes are silhouette preservation, UV preservation, and so on.

One of Nanite's insights was that a lot of the problems with automatic mesh decimation (major mesh deformation, poor results) just disappear when you are dealing with triangles only a few pixels, down to a single pixel, in size. The catch with small triangles is quad overdraw: graphics cards rasterize triangles in blocks of 2x2 pixels, so you end up shading pixels many times over, which is very wasteful. So the solutions they came up with were:

- switch to software rasterization for small triangles. This required a good heuristic for choosing between the hardware and software rasterization paths, and it needed newer shader stages that sit earlier in the geometry pipeline; these are hardware features that arrived with shader models 5 and 6.

- use deferred materials, which drastically improve their ability to do batched rendering.

It's actually the result of decades of hardware, software and research advancements.

The two solutions posted in recent days seem heavily focused on just the continuous LOD, without the rest of the Nanite system as a whole.

And yes, there were also challenges around the sheer amount of memory such dense meshes and their patches require. The latest NVMe streaming tech makes that a little easier, along with quantizing the vertices, which can dramatically lower memory usage at the expense of some vertex-position precision.
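To make the quad-overdraw point concrete, here's a toy calculation (my own illustration, not anything from Epic's code): the hardware shades whole 2x2 quads, so a tiny triangle pays for far more shader lanes than the pixels it actually covers.

```python
def quad_overdraw(covered_pixels):
    """Ratio of shader lanes launched to pixels actually covered.

    Hardware rasterizers shade in 2x2 pixel quads (needed for
    derivative calculations), so every touched quad costs 4 lanes
    even if the triangle covers only one of its pixels.
    """
    quads = {(x // 2, y // 2) for x, y in covered_pixels}
    return (4 * len(quads)) / len(covered_pixels)

# A single-pixel triangle lights up a full quad: 4x the shading work.
print(quad_overdraw({(5, 3)}))                           # 4.0

# A perfectly quad-aligned 2x2 footprint wastes nothing.
print(quad_overdraw({(0, 0), (1, 0), (0, 1), (1, 1)}))   # 1.0

# A thin diagonal sliver touches many quads for few pixels.
print(quad_overdraw({(i, i) for i in range(8)}))         # 2.0
```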


There are also pros and cons to this method of rendering in terms of performance. The triangulation cost imposes a significant overhead compared to traditional scene rendering, though it scales far better with scene size and detail. At that level of quality, making it viable requires a good amount of memory bandwidth and the streaming speeds only possible with modern SSDs.

So it’s only really practical because GPUs now have the power to render games at that level of fidelity, and RAM and SSD sizes and speeds in consumer gear are becoming capable of it.

Also there are significant benefits for a developer, especially if using photogrammetry or off-the-shelf high-detail models like Quixel scans, so there’s a reason Epic is going all-in.


Thanks to both of you for the detailed explanation!


Nanite does a few things:

- continuous LOD, as this library does

- software rasterization for small, down-to-single-pixel triangles, which reduces quad overdraw

- deferred materials (only material IDs and some geometry properties are written to the g-buffers during the geometry pass, with things like normal maps, base colour, roughness maps, etc. applied later with a single draw call per material)

- efficient instancing and batching of meshes and their mesh patches to let arrays of objects scale well as object count grows

- (edit, added later as I forgot) various streaming and compression techniques, like vertex quantization, to efficiently stream/load at runtime and reduce memory usage and bandwidth
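As a rough sketch of that last point (my own illustration of the general technique, not Nanite's actual encoding): positions can be snapped to a fixed-point grid inside the mesh's bounding box, cutting 12-byte float3 positions down to 6 bytes at the cost of some precision.

```python
import numpy as np

def quantize_positions(verts, bits=16):
    """Snap float32 positions to a (2**bits - 1) grid over the mesh AABB.

    Stores 3 x uint16 (6 bytes) per vertex instead of 3 x float32
    (12 bytes); decoding needs only the box origin and the grid step.
    """
    lo = verts.min(axis=0)
    extent = np.maximum(verts.max(axis=0) - lo, 1e-9)  # avoid div-by-zero
    step = extent / (2**bits - 1)
    q = np.clip(np.round((verts - lo) / step), 0, 2**bits - 1)
    return q.astype(np.uint16), lo, step

def dequantize_positions(q, lo, step):
    return (lo + q.astype(np.float32) * step).astype(np.float32)

verts = (np.random.rand(1000, 3) * 10.0).astype(np.float32)
q, lo, step = quantize_positions(verts)
err = np.abs(dequantize_positions(q, lo, step) - verts).max()
print(err <= step.max())  # worst case is about half a grid step
```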


I see, thank you!


There were also claims that Microsoft ended its HTML engine over anticompetitive behavior from Google.

https://mspoweruser.com/google-claims-edge-slowing-empty-div...

These claims aren't isolated. I think it's a bad look for Google to keep making these mistakes while under multiple investigations for monopolistic behavior.

