> I honestly can't think of any good examples where game mechanics and stories interacted in a way that gave you significant agency while still being fun. I'd love to be given contra-examples though.
Rimworld and The Sims. Both are procedural story writers.
> I felt railroaded into comically absurd black/white choices
I agree: all these AAA titles are essentially movies where you get tons of "agency" in choices that are irrelevant to the story, while the main plot is hard-scripted into a few predetermined paths.
Until we have full generative AI as a game engine, the only alternative remains the procedural approach mentioned at the beginning.
Not yet, I agree, but who is to say they couldn't?
Limiting life to cell-based biology is a somewhat lousy definition built on the only example we know. I prefer the generalized definition in "What is Life?" by Erwin Schrödinger, which currently draws the same line (at cellular biology) but could accommodate other forms of life too.
> The next step is to work with chains of Bézier curves to make up complex shapes (such as font glyphs). It will lead us to build a signed distance field. This is not trivial at all and mandates one or several dedicated articles. We will hopefully study these subjects in the not-so-distant future.
If you only want to fill a path of bezier curves (e.g. for text rendering) you can do without the "distance" part from "signed distance field" [0], leaving you with a "signed field" aka. an implicit curve [1].
Meaning: calculating only the sign (inside or outside) rather than the exact distance can be done without all the crazy iterative root finding, in an actually cheap manner, with only four multiplications and one addition per pixel / fragment / sample for a rational cubic curve [3].
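As a hedged illustration in C (not part of the original comment), a per-sample test of that shape costs exactly four multiplications and one addition; a, b, c and d stand for interpolated implicit coordinates, and the names are purely illustrative:

```c
/* Hedged sketch: per-sample inside/outside test for an implicitized cubic.
 * a, b, c, d are implicit coordinates interpolated across the triangle
 * covering the curve segment (illustrative names, not from the comment).
 * Cost: 4 multiplications + 1 addition (the subtraction). */
static inline int inside(float a, float b, float c, float d) {
    return (a * a * a) - (b * c * d) <= 0.0f;
}
```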
I wonder which method Apple is using for their recently introduced Bézier curve primitives for real-time 3D rendering in Metal. From their WWDC 2023 presentation [1]:
> Geometry such as hair, fur, and vegetation can have thousands or even millions of primitives. These are typically modeled as fine, smooth curves. Instead of using triangles to approximate these curves, you can use Metal's new curve primitives. These curves will remain smooth even as the camera zooms in. And compared to triangles, curves have a more compact memory footprint and allow faster acceleration structure builds.
> A full curve is made of a series of connected curve segments. Every segment on a curve is its own primitive, and Metal assigns each segment a unique primitive ID. Each of these segments is defined by a series of control points, which control the shape of the curve. These control points are interpolated using a set of basis functions. Depending on the basis function, each curve segment can have 2, 3, or 4 control points. Metal offers four different curve basis functions: Bezier, Catmull-Rom, B-Spline, and Linear. (...)
Finding the sign of the distance has been extremely challenging for me in many ways, so I'm very curious about the approach you're presenting. The snippet you shared has an "a³-bcd ≤ 0" formula, which is all I get without more context. Can you elaborate on it or provide resources?
The winding number logic is usually super involved, especially when multiple sub-shapes start overlapping and subtracting from each other. Is this covered by, or orthogonal to, what you are talking about?
> The winding number logic is usually super involved, especially when multiple sub-shapes start overlapping and subtracting from each other. Is this covered by, or orthogonal to, what you are talking about?
Orthogonal: the implicit curve only tells you whether you are inside or outside (the sign of the SDF), so that is technically sufficient, but usually you want more: some kind of anti-aliasing, composite shapes made of more than one Bézier curve, and boolean operators for masking / clipping. Using the stencil buffer to count the winding number lets you do all of that very easily without tessellation or decomposition at path intersections.
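For context, here is a minimal sketch of that stencil-counting fill, assuming plain OpenGL; the two draw helpers are hypothetical placeholders, not an existing API:

```c
/* Hedged sketch: two-pass "stencil, then cover" fill with the non-zero
 * winding rule. Assumes an OpenGL 2.0+ context and headers. */
#include <GL/gl.h>

void draw_path_triangles(void);  /* hypothetical: fan + curve triangles of all sub-paths */
void draw_covering_quad(void);   /* hypothetical: a quad covering the path's bounds */

void fill_path_nonzero(void) {
    glEnable(GL_STENCIL_TEST);

    /* Pass 1: accumulate winding counts in the stencil buffer, no color. */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glStencilFunc(GL_ALWAYS, 0, 0xFF);
    glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_KEEP, GL_INCR_WRAP); /* CCW: +1 */
    glStencilOpSeparate(GL_BACK,  GL_KEEP, GL_KEEP, GL_DECR_WRAP); /* CW:  -1 */
    draw_path_triangles();

    /* Pass 2: fill wherever the winding count is non-zero, clearing it. */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_NOTEQUAL, 0, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_ZERO);
    draw_covering_quad();

    glDisable(GL_STENCIL_TEST);
}
```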
> Can you elaborate on it or provide resources?
If you are interested in the theory behind implicit curve rendering and how to handle the edge cases of cubic Bézier curves, check out these papers:
> The distance is only irrelevant for plain 2D text rendering, right?
Yes, as I said it is irrelevant for text rendering, but not necessarily only in 2D: it can also be embedded in a 3D perspective as long as the text itself is planar. Meaning you can directly render text in a 3D scene this way without rendering to a texture first.
> But real shadows and lighting would require the distance aspect, no?
I think the difference is stroke vs fill, not the illumination (you could still use shadow mapping / projection). For stroking you need to either calculate an offset curve explicitly or sample it implicitly from a signed distance field. Thus the exact distance matters for stroking; for filling it does not.
Couldn’t you do stroking by doing a second fill operation on a slightly scaled down version of the first with the negative space color as the interior?
Yep, stroking is just filling the space between offset curves (aka parallel curves), and that "slightly scaled down version of the first" is the "calculate an offset curve explicitly" approach I mentioned.
Though it is very impractical, because the offset curve of a cubic Bézier curve is not a cubic Bézier curve anymore; instead it is an analytic curve of degree 10. Thus, in practice the offset curves for stroking are either approximated by polygons or implicitly sampled from signed distance fields.
One more thing: offset curves are different from classical scaling around a center point in all but the most trivial cases where such a center exists, namely regular polygons. And cubic Bézier curves can be concave, even have a self-intersecting loop or form a cusp.
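To make the "approximated by polygons" route concrete, here is a minimal sketch in C (not from the comment): uniformly sample the cubic Bézier and push each sample out along the curve normal. A real stroker would flatten adaptively and handle cusps and self-intersections, which this deliberately ignores:

```c
/* Hedged sketch: polygonal approximation of a cubic Bézier offset curve. */
#include <math.h>

typedef struct { double x, y; } Vec2;

/* Point on the cubic Bézier with control points p0..p3 at parameter t. */
static Vec2 cubic_point(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3, double t) {
    double u = 1.0 - t;
    Vec2 r;
    r.x = u*u*u*p0.x + 3*u*u*t*p1.x + 3*u*t*t*p2.x + t*t*t*p3.x;
    r.y = u*u*u*p0.y + 3*u*u*t*p1.y + 3*u*t*t*p2.y + t*t*t*p3.y;
    return r;
}

/* First derivative (tangent direction) of the same curve. */
static Vec2 cubic_tangent(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3, double t) {
    double u = 1.0 - t;
    Vec2 r;
    r.x = 3*u*u*(p1.x - p0.x) + 6*u*t*(p2.x - p1.x) + 3*t*t*(p3.x - p2.x);
    r.y = 3*u*u*(p1.y - p0.y) + 6*u*t*(p2.y - p1.y) + 3*t*t*(p3.y - p2.y);
    return r;
}

/* Write n+1 points of the offset polyline at distance d (n >= 1).
 * Cusps (zero-length tangents) are not handled in this sketch. */
static void offset_polyline(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3,
                            double d, int n, Vec2 *out) {
    for (int i = 0; i <= n; i++) {
        double t = (double)i / n;
        Vec2 p = cubic_point(p0, p1, p2, p3, t);
        Vec2 g = cubic_tangent(p0, p1, p2, p3, t);
        double len = sqrt(g.x*g.x + g.y*g.y);
        /* Unit normal = tangent rotated 90 degrees. */
        out[i].x = p.x + d * (-g.y / len);
        out[i].y = p.y + d * ( g.x / len);
    }
}
```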
I'm assuming no coercion. In my scenario, tier 1 doesn't need any of that except natural resources, because they can self-produce everything they need from those more cheaply than humans can.
If someone in tier 1, for instance, wants land from someone in tier 2, they'd have to offer something that the tier 2 person values more than the land they own.
After the trade, the tier 2 person would still be richer than they were before the trade. So tier 2 would become richer in absolute terms by trading with tier 1 in this manner.
And it's very likely that what tier 2 wants from tier 1 is whatever they need to build their own AIs.
So my argument still stands. They wouldn't be poorer than they are now.
The real limiting factor is the willingness of people to actually follow the system, which I think he mentions but doesn't examine much. It'd be interesting to see if any of the papers test the better systems to see how they fare with noncompliant passengers.
I certainly see the dendritic nature of the rock, but I am wondering if there are rocks found on Earth that look the same. Most examples of that sort of thing are more 2D patterns.
could be formed by something like stromatolites, which are presumed to be the most ancient life forms that (1) left macro fossils and (2) still exist as a species today
or just some blobby rocks
in any case, it's one more reason to go to Mars; other than, of course, that we don't really have anywhere else to go, and with 9 billion people side-eyeing each other, we're going
Its main advantages are the O(log n) time complexity for all size changes at any index, meaning you can efficiently insert and delete anywhere, and it is easy to implement copy-on-write version control on top of it.
There's a good reason for that. Almost all strings ever created in programs are either very small, immutable, or append-only - e.g. text labels in a user interface, the body of a downloaded HTTP request, or a templated HTML string, respectively. For these use cases, small string optimisations and resizable vecs are better choices. They're simpler and faster for the operations you actually care about.
The only time I've ever wanted ropes is in text editing - either in an editor or in a CRDT library. They're a good choice for text editing because they let users type anywhere in a document. But that comes at a cost: rope implementations are very complex (skip lists have similar complexity to a b-tree), and they can be quite memory inefficient too, depending on how they're implemented. They're a bad choice for small strings, immutable strings and append-only strings - which, as I said, are the most common string types.
Ropes are amazing when you need them. But they don't improve the performance of the average string, or the average program.
Yes. But the overwhelming consensus is also that complex indirect data structures just don't end up performing well on modern hardware, due to cache behaviour and branch prediction.
Only use them when the theoretical algorithmic properties make them the only tool for the job.
They have their place. Certainly B-tree data-structures are tremendously useful and usually reasonably cache friendly. And if std::deque weren't busted on MSVC, there are times where it would be very useful. Linked lists have their place as well; a classic example would be an LRU cache, which is usually implemented as a hash table interleaved with a doubly linked list.
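As a hedged sketch of that classic layout (in C, purely illustrative; insertion and eviction are omitted): keys hash to nodes, and the same nodes are threaded into a doubly linked recency list, so a hit can be promoted to the front without a second lookup or allocation:

```c
/* Hedged sketch: an LRU cache as a hash table interleaved with a doubly
 * linked list. Fixed-size open addressing keeps the sketch short. */
#include <stddef.h>

#define TABLE_SIZE 64   /* illustrative */

typedef struct Node {
    int key, value;
    int in_use;
    struct Node *prev, *next;   /* recency list links live inside the table slots */
} Node;

typedef struct {
    Node slots[TABLE_SIZE];     /* open-addressed hash table of nodes */
    Node *head, *tail;          /* most / least recently used */
} LruCache;

/* Linear-probe lookup; returns NULL on miss (no deletion in this sketch). */
static Node *lru_find(LruCache *c, int key) {
    for (size_t i = 0; i < TABLE_SIZE; i++) {
        Node *n = &c->slots[((size_t)key + i) % TABLE_SIZE];
        if (!n->in_use) return NULL;
        if (n->key == key) return n;
    }
    return NULL;
}

/* Move a node to the front of the recency list after it is touched. */
static void lru_touch(LruCache *c, Node *n) {
    if (c->head == n) return;
    if (n->prev) n->prev->next = n->next;
    if (n->next) n->next->prev = n->prev;
    if (c->tail == n) c->tail = n->prev;
    n->prev = NULL;
    n->next = c->head;
    if (c->head) c->head->prev = n;
    c->head = n;
    if (!c->tail) c->tail = n;
}
```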
But yeah. Contiguous dynamic arrays and hash tables, those are usually what you want.
If you have a small dataset, yeah, memcpy will outperform a lot of indirect pointer lookups. But that doesn't stay true once you're memcpying around megabytes of data. The trick with indirect data structures on modern hardware is to tune the size of internal nodes and leaf nodes to make the cache misses worth it. For example, binary trees are insanely inefficient on modern hardware because the internal nodes have a size of 2. If you give them a size of 64 or something, they perform much better (i.e., make a b-tree). Likewise, a lot of bad tree implementations put just a single item in the leaf nodes. It's much better to have leaves store blocks of 128 items or something, and use memcpy to move data around within the block when needed.
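A minimal sketch of such a leaf (in C, not from the comment; capacity and names are illustrative, and memmove is used rather than memcpy because the shifted ranges overlap):

```c
/* Hedged sketch: a leaf node storing a block of items, with an in-block
 * insert that shifts the tail of the block. */
#include <string.h>

#define LEAF_CAPACITY 128   /* illustrative block size */

typedef struct {
    int    items[LEAF_CAPACITY];
    size_t len;
} Leaf;

/* Insert `value` at `index`. Returns 0 on success, -1 if the leaf is full
 * (a real b-tree would split the leaf in that case). */
static int leaf_insert(Leaf *leaf, size_t index, int value) {
    if (leaf->len >= LEAF_CAPACITY || index > leaf->len)
        return -1;
    memmove(&leaf->items[index + 1], &leaf->items[index],
            (leaf->len - index) * sizeof leaf->items[0]);
    leaf->items[index] = value;
    leaf->len++;
    return 0;
}
```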
This gets you the best of both worlds.
I spent about 18 months optimising a text-based CRDT library (diamond types). We published a paper on it. By default, we store the editing history on disk. When you open a document, we reconstruct the document from scratch from a series of edits. After a while, actually applying the stream of edits to a text document became the largest performance cost. Ropes were hugely useful. There's a stack of optimisations we made there to make that replay another 10x faster or so on top of most rope implementations. Using a linear data structure? Forget it. For nontrivial workloads, you 100% want indirect data structures. But you've gotta tune them for modern hardware.
My comment is an observation about how this gets tried every few years in major libraries and is usually reverted. I agree there are use cases where these are better. But the pattern tends to be to revert to simpler data structures.
and before anybody reads only "spacetime curvature" and thinks the paper is talking about a warp drive: it is not.
Anyway, this Genergo thingy seems to be nonsense IMO, or they would have actually explained how it works.