It seems to me that there's some overheated rhetoric here that reminds me of the tech-specific spin on the Appeal to Novelty fallacy [1]: people assume a new tech is going to uniformly improve on an old tech, that if it isn't an improvement on every front it is somehow a "failure", and that if we like the new tech and are on Team New Tech, we must defend it as an improvement in every respect.
Gaussian splats are definitely interesting and do some things older tech is not very good at, but at the same time, they're definitely going to end up being a tool in the tool chest rather than completely murderating mesh-based tech or something, because they have a lot of other weaknesses, like editability. Or dynamic animation.
What I think some people may not realize is that that's not particularly uncommon. There's a really, really long line of graphical techs that do something particularly well but whose weaknesses have kept them in limited use. It's not a problem for Gaussian splats to become a tool in the tool chest; they aren't a "failure" if we're still using meshes for a lot of things in 10 years.
Mesh-type techs are the "default" for some good reasons.

[1]: https://en.wikipedia.org/wiki/Appeal_to_novelty
Function coloring does not mean that functions take parameters and have return values. Result<T,E> is not a color. You can call a function that returns a Result from any other function. Errors as return values do not color a function, they're just return values.
Async functions are colored because they force a change in the rest of the call stack, not just the caller. If you have a function nested ten levels deep that calls a function returning a Result, and you change that function to no longer return a Result because it lost all its error cases, you only have to change the direct callers. If you are ten layers deep in a stack of synchronous functions and suddenly need to make an asynchronous call, the type signature of every individual function in the stack has to change.
You might say, "well, if I'm ten layers deep in a stack of functions that don't return errors and I have to make a call that returns an error, well, now I have to change the entire stack of functions to return the error", but that's not true. The type change from sync to async is forced. The error is not. You could just discard it. You could handle it somehow in one of the intervening calls and terminate the propagation of the type signature changes halfway up. The caller might log the error and then fail to propagate it upwards for any number of reasons. You aren't being forced into this change by the type system. You may be forced to change by the rest of the software engineering situation, but that's not a "color".
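To make that concrete, here is a minimal Rust sketch; the function names are invented for illustration. The Result can be absorbed at any level of the stack, while the async-ness cannot:

    use std::fs;
    use std::io;

    fn read_config() -> Result<String, io::Error> {
        fs::read_to_string("config.toml")
    }

    fn load_settings() -> String {
        // The error is handled right here; nothing above this function
        // ever has to know that read_config() can fail.
        read_config().unwrap_or_else(|_| String::from("defaults"))
    }

    // By contrast, the moment fetch_config() becomes async, every function
    // that awaits it must itself become async, all the way up to whatever
    // drives the executor.
    async fn fetch_config() -> String {
        String::from("remote config")
    }

    async fn load_remote_settings() -> String {
        fetch_config().await // only legal inside another async fn or block
    }

The first signature change stops wherever you decide to handle the error; the second propagates whether you like it or not.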
For similar reasons, the article is incorrect about Go's "context.Context" being a coloration. It's just a function parameter like anything else. If you're ten layers deep into non-Context-using code and you need to call a function that takes a context, you can just pass it one with context.Background() that does nothing context-relevant. You may, for other software engineering reasons, choose to poke that use of a context up the stack to the rest of the functions. It's probably a good idea. But you're not being forced to by the type system.
"Coloration" is when you have a change to a function that doesn't just change the way it interacts with the functions that directly call it. It's when the changes forcibly propagate up the entire call stack. Not just when it may be a good idea for other reasons but when the language forces the changes.
It is not, in the maximally general sense, limited to async. It's just that sync/async is the only such color that most languages in common use expose.
> If you are ten layers deep in a stack of synchronous functions and suddenly need to make an asynchronous call, the type signature of every individual function in the stack has to change.
See my other post on this point. If you "just" turn an async back into a sync call by completely blocking the async scheduler, yes, you've turned the async call back into a sync call, but you've done that by completely eliminating the async-ness. That's not a general solution. That is exactly what everyone back when Node was promoting this style spent paragraph upon paragraph warning you not to do, because it just punts by eliminating the async-ness entirely. "Async is great when you completely discard it" is not a winning argument for async... which is OK, because it's not the argument anyone is making.
> If you "just" turn an async back into a sync call by completely blocking the async scheduler,
I am not doing that. The caller (which is the only one being blocked here) is sync anyways and just wants to call an async function, so no async scheduler is blocked.
If you are ten nested functions deep in sync code and want to call an async function you could always choose to block the thread to do it, which stops the async color from propagating up the stack. That's kind of a terrible way to do it, but it's sort of the analog of ignoring errors when that innermost function becomes fallible.
So I don't buy that async colors are fundamentally different.
You can exit an async/IO monad just like you can exit an error monad: you have a thread-blocking run(task) that actually executes everything until the future resolves. Some runtimes have separate blocking threadpools so you don't stall other tasks.
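For instance, a minimal sketch in Rust, assuming the futures crate (tokio's Runtime::block_on is the same move):

    use futures::executor::block_on;

    async fn fetch_value() -> u32 {
        42
    }

    fn plain_sync_caller() -> u32 {
        // Blocks the current thread until the future resolves. This
        // function's own signature stays sync; the color stops here.
        block_on(fetch_value())
    }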
If you have something in a specific language that does not result in having to change the entire call stack to match something about it, then you do not have a color. Sync/async isn't a "color" in all languages. After all, it isn't in thread-based languages or programs anyhow.
Threading methodology is unrelated, though. How exactly the call stack is scheduled is orthogonal to the question of whether or not making a call to a particular function results in type changes being forced on every function in the entire stack.
There may also be cases where you can take "async" code and run it entirely out of the context of any sort of scheduler, where it can simply be turned into the obvious sync code. While that does decolor the resulting call (or, if you prefer, recolor it back into the "sync" color), it doesn't mean that async is not generally a color in code where that is not an option. Solving concurrency by simply turning it off certainly has a time and place (e.g., a shell script may be perfectly happy to run "async" code completely synchronously because it may be able to guarantee nothing will ever happen concurrently), but that doesn't make the coloration problem go away when that is not an option.
Here's the list of requirements:

1. Every function has a color.
2. The way you call a function depends on its color.
3. You can only call a red function from within another red function.
4. Red functions are more painful to call.
5. Some core library functions are red.
You are complaining about point 3. You are saying that if there's any way to call a red function from a blue function, then it's not real. But the type change from sync to async is not forced any more than changing T to Result<T,E>: you just get a Promise back from the async function. So by your logic, async is not a color. By the same logic, even a Haskell IO value can be used in a pure function, if you don't actually do the IO or if you use unsafePerformIO. This is nonsense. Anything that makes a function hard to use can be a color.
And you appear to have read my post way too hastily in your rush to the point you wanted to make, because I can't even correlate what you're saying with what I actually said, particularly the claim that I'm "stuck" on async being the only color.
I have inside information that that is not the case, because I typed, but deleted before posting, some points about how Haskell is one of the few languages that can create arbitrary numbers of colors in libraries. It started spiraling when I tried to characterize when a new color is created. It is, for instance, not as simple as "it's monadic", because types that implement Monad but allow extracting the value, like List or Maybe, don't create colors even though IO does. So I just deleted it, especially when my thoughts turned to trying to explain how Haskell is also one of the few languages that can abstract over color. (Although I gather Zig is giving it the college try with sync/async.) Using monads as an example is just an invitation to trouble, since way more people think they understand them than actually do.
A function simply taking an annoying number of parameters is not a color problem, because the entire essence of "color" is the transitivity. If you don't have the transitivity flowing up the call stack, you don't have color. Just being "hard to use" is not sufficient... the proof being that we've had "hard to call" functions for many more decades than we've had widespread "color" in our functions, and nobody was talking about this transitivity problem until we started having async functions. That is a new problem... I mean, new on the relevant time frames... the whole color thing has been ongoing for years now, but it's still relatively new.
A function taking a new parameter that you have to pass all the way down the call stack is a color. If you have a large Haskell application, and then you decide that something 50 functions deep needs to access the user database, you've added a color. It's a color if it affects the whole call stack in reality. You could pass an empty user database, but you obviously won't.
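A hypothetical Rust sketch of that situation (UserDb and the function names are all invented):

    struct UserDb;

    fn innermost(db: &UserDb) {
        // ... the only function that actually needs the database ...
    }

    fn middle(db: &UserDb) {
        innermost(db); // exists only to pass the handle along
    }

    fn outer(db: &UserDb) {
        middle(db);
    }

You could construct a dummy UserDb at any level, but in reality the handle has to come from the top, so every signature in between changes anyway.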
I will make this argument in favor of casinos, which is that at least we have coevolved with them. They've been around for centuries. We collectively recognize the dangers. We are not collectively blindsided by them.
Individually, yeah, by all means they can prey on people. But they're on the list of things that have been preying on people for centuries, like alcohol, and all kinds of other things. Ring-fencing casinos has a track record of some success at containing them.
I mean, sure, I'd love to wake up tomorrow and find the whole human race advanced to the point that, down to every last individual, we are no longer gambling, and that industry vanishes in a puff of smoke. I am as far into the belief that they are immoral as it is practically possible to be. But they are at least a known and knowable risk.
These prediction markets are blindsiding us. We could put up with them for another few decades until we coevolve further with them, or we could just, you know, not. Just end them now. Plus, prediction markets have a certain meta-ness to them that casinos largely lack, which will keep them fresh and coevolving new ways to prey on us. Casinos have basically reached their final form; prediction markets could take yet more decades to get there, and it's possible there isn't a stable endgame with them. Or, again, we could just end them here and now.
The libertarian argument for prediction markets is really beautiful.
It's just a pity that it depends on all participants in the prediction market being basically unaware that they are participating in a prediction market, and being oblivious to the incentive to create the outcomes they are predicting by the very act of predicting them with money.
But other than that minor detail, that little minor catastrophic flaw in the foundation, it's a beautiful argument.
I think we can call it as a society. It's a failure. We can go back to banning them, not just for moral reasons, but just pragmatic ones. The theory doesn't work. The supposed benefits don't manifest, and "unanticipated" costs to everyone do. We did the experiment. (Again.) We can close this out now.
You shouldn't model it as incredibly hard to fake. It isn't. It's harder than typing a stolen password into a web site, but if you set out to do it, it's not that much harder.
This is the primary reason I'm against biometrics used for identity. Yeah, the privacy invasion is a problem, but I think that's completely dominated by the fact that if everyone uses it, it will be leaked, and once leaked, can indeed be quite practically faked. If used as a password, it's a password you can never change. That is useless.
The cost of overcoming a security measure should be greater than the value of the thing it is protecting. The cost of, for instance, replicating a fingerprint given a photo of it, is basically a home hobbyist project for the weekend. Check out YouTube for the many people who have done exactly that and give instructions on how. When the cost of bypass is "home hobbyist project on a weekend", the value of what it should be expected to protect is correspondingly low.
(In fact I don't even use it on my cell phone, with all its access to bank accounts and Amazon accounts and other ways to spend my real money. The idea of leaving arbitrary copies of the password to all that stuff sitting right on my screen is completely absurd. Everything important is locked behind codes and passwords. It's less convenient than fingerprints, but at least those offer actual security.)
You also have to bear in mind the costs of the biometric gathering. If you have a physical guard watching someone do a retinal scan and verifying that they have put their real eye up to it, you're at least on track to something that takes a lot of resources to overcome, especially in combination with other techniques of identification. If you don't have that, we're back to "how cheaply can we replicate whatever passes for a retina with this scanner", and that's likely to be cheaper than most people think. And most real-world biometrics sit in places where attackers can perform arbitrary attacks with impunity.
You might find it interesting to learn a bit about information theory. The entire purpose of your specific number is precisely to identify which number in that list is yours. Having the list of all possible numbers is irrelevant; conceptually, you can model it as everyone having that list, all the time. But that's not enough to do anything with, because having that entire list means you have zero information.
If you say "it starts with an 8", you've eliminated 90% of the possibilities. Now you have log2(10) bits of information, but you haven't nailed it down yet. Each additional digit you reveal gives that many more bits, until you've nailed it down.
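A quick back-of-the-envelope sketch of that arithmetic, in Rust:

    fn bits_gained(candidates_before: f64, candidates_after: f64) -> f64 {
        // Information gained by shrinking the set of candidates.
        (candidates_before / candidates_after).log2()
    }

    fn main() {
        // Revealing one digit cuts the candidates by a factor of 10:
        println!("{:.2}", bits_gained(10.0, 1.0)); // ~3.32 bits
        // Handing someone the list of ALL 10-digit numbers narrows nothing:
        println!("{:.2}", bits_gained(1e10, 1e10)); // 0.00 bits
    }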
This is a common misconception. I remember someone who claimed to have copyrighted all possible melodies by virtue of having printed them out and thus enumerated them. But that is meaningless, because the entire job of naming a specific melody is precisely nailing down which one you mean. Expanding the list of possibilities you might mean is actually a reduction in the amount of information, despite the superficial appearance of listing more numbers out, and when you expand the possibilities to "all possible instances of the thing" you're at the minimum of information, not the maximum.
I've acquired a sense for at least some of the bots. There's a set of bots that post a high-engagement post about once a day to an implausibly large range of subreddits, with implausible regularity. I can tell, from the way I remove them while most other subs don't, that most subs have not figured this out yet.
There is an obvious solution to that problem, which I haven't wanted to put out there, but I've become increasingly suspicious that it's already been figured out anyhow, which is to limit a specific user account to a specific "persona" with plausible interests and posting rates.
And that's where I think the race may well end: victory to the spammers. If there's a winning move against that in general, I haven't figured it out.
I know Reddit is concerned about this at the corporate level, but I'm not sure they realize it is possibly their #1 threat, towering above all others. Not that I have any specific suggestions about what to do about it either. And it will be years before the masses realize this and stop visiting; by the time that happens, all the social media companies are going to be in trouble for the same reason. You can see the leading edge here on HN, but it's still only an almost negligible fraction of the total userbase of something like Reddit today. But that will change.
Reddit's also famous for NSFW content. There are also stories about harassment of people who post in the "wrong" subreddit (e.g., political subreddits of the opposite view).
Reddit has a P/E ratio higher than Nvidia's. Go ahead and think about that for a little while, and then try to explain it with anything other than their value being as a bot-driven propaganda-pushing device for sale.
Out of curiosity, has anyone noticed a non-negligible presence of bots in threads on HN? I haven't, but I'm not sure if that's because I'm bad at spotting them or because HN is good at getting rid of them or because HN is a niche platform.
Yes, they’re very identifiable. New or resurrected account makes multi-paragraph comments on random topics with “insights” that read like AI, even if they don’t have em-dashes or “it’s not X it’s Y” (and sometimes they do).
Fortunately and in fairness to this site, they’ve become rarer, and most seem to be flagged within hours. Usually I look at the comments to confirm, and most are already dead.
I made a post here a bit ago where one of the few replies I got was from one of these conversational ad-bots, albeit one on the more obvious side. It was getting flagged, which gives me hope that HN is good at filtering this, but I also mildly worry that I'm (or we're) just missing it when it's subtle. Either way, I suspect it's a huge volume in terms of comment count.
I have suspicions, but there are fewer signals on HN available to the general public, so it's harder to tell.
Well... to be more precise... I'm abundantly positive there are bots and shills here in a general sense. But when it comes to identifying specific accounts as bots or shills, it gets difficult. Yeah, a lot of us have gotten pretty good at identifying the "default LLM voice", but it is trivial to kick it out of that.
I have done some formal writing with AI, and I always feed it a sample of my own writing to emulate. It doesn't do it perfectly. For instance, I'm a semi-colon kind of guy and it still em-dashes without more explicit instructions to avoid them. But what comes out the other end would definitely pass most people's "default LLM voice" sniff test; it eliminates most of the tells [1] people look for. (I just checked. The resulting output may actually be "better" at avoiding the tells than my own actual text...)
The upshot of all of that is that we are approaching a point with the current AIs that with just a bit of clever prompting it may take many, many kilobytes of text for someone to form a justified (!) opinion that some set of posts is actually AI.
Enormous latency is all relative. A network drive on a local network holding a swap file would still outperform some number of computers I've owned that put their swap on much, much slower hard drives of their time. Of course, nobody was trying to swap two gigabytes to these drives as that would have been 10 times their capacity....
It runs Linux with Windows underneath it, hence Windows is the subsystem being subordinate (in the most literal sense where it simply means "order" with no further implications) to Linux.
Per wongarsu's post, something like the OS/2 Subsystem is an OS/2 system with Windows beneath it, but the OS/2 Subsystem is much smaller and less consequential, thus subsidiary (in the auxiliary sense) to Windows as a whole.
Isn't marketing fun?
This is how we end up with hundreds of products that provide "solutions" to your business problems and "retain customers" and upwards of a dozen other similar phrases they all slather on their frontpages, even though one is a distributed database, one is a metrics analysis system, one handles usage-based billing, one is a consulting service, one is a hosted provider for authentication... so frustrating trying to figure out just what a product is sometimes with naming conventions that make "Windows Subsystem for Linux" look like a paragon of clarity. At least "Linux" was directly referenced and it wasn't Windows Subsystem for Alternate Binary Formats or something.
This is in the class of things where even if the specific text doesn't trace to a true story, it has certainly happened somewhere, many times over.
In the math space it's not even quite as silly as it sounds. Something can be both "obvious" and "true", but it can take some substantial analysis to make sure the obvious thing is true, by hitting it with the corner cases and possible exceptions. There is a long history of obvious-yet-false statements. It's also completely sensible for something to be trivially true, yet be worth some substantial analysis to be sure that it really is true, because there's also a history of trivial-yet-false statements.
I could analogize it in our space to "code so simple it is obviously bug free" [1]... even code that is so simple that it is obviously bug free could still stand to be analyzed for bugs. If it stands up to that analysis, it is still "so simple it is obviously bug free"... but that doesn't mean you couldn't spend hours carefully verifying that, especially if you were deeply dependent on it for some reason.
Heck, I've got a non-trivial number of unit tests that arguably fit that classification, making sure that the code that is so simple it is obviously bug-free really is... because it's distressing how many times I've discovered I was wrong about that.
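A hypothetical example of the kind of test I mean (the function and its edge case are invented, but the pattern is real):

    /// "Obviously" correct... and note that the even more obvious
    /// (a + b) / 2 overflows for large inputs, which is exactly the
    /// sort of thing these tests exist to catch.
    fn midpoint(a: i32, b: i32) -> i32 {
        a + (b - a) / 2
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn midpoint_survives_large_values() {
            assert_eq!(midpoint(i32::MAX - 1, i32::MAX), i32::MAX - 1);
        }
    }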
[1]: In reference to Tony Hoare's "There are two ways to write code: write code so simple there are obviously no bugs in it, or write code so complex that there are no obvious bugs in it."