> However, when an entire program consists of a single chain of dependent instructions, which may happen e.g. when computing certain kinds of hash functions over a file, you are doomed. There is no way to increase the performance of that program.
Even in that case, you would probably benefit from having many cores, because the user is probably running other things on the same machine, or the program is running on a runtime with e.g. garbage-collector threads. I’d venture it’s quite rare that the entire machine is waiting on a single sequential task!
> I’d venture it’s quite rare that the entire machine is waiting on a single sequential task!
But that happens all the time in video game code.
Video games may have many threads running, but there's usually a single-thread bottleneck. That's why P-cores and the very wide Zen 5 cores do so much better in video games.
JavaScript (i.e., rendering webpages) is single-thread bound, which is probably why phone makers have focused so much on building bigger cores as well. Yes, there are plenty of opportunities for parallelism in web browsers and webpages, but most of the work happens on the main JavaScript thread at the root of it all.
Car thieves have been using antennas to remotely trigger keyless entry to great success here in Canada. Many people leave their keys right by their front doors, so you just need to hold a range extender against the door and scan the cars out front until one of them opens.
> I'm not talking about whether the scheduler can block the task on IO. This is trivial.
The whole point of coroutines is that they are trivial to block and resume, right? So the way to deal with limited resources... is to block on them.
So when you need a connection, you await until one from the pool becomes available. From the point of view of the consumer it's very natural because grabbing a connection is just a standard async call, and returning it to the pool can be done the same way you would usually close it. If anything, it's a lot easier to manage them like this because unlike in a non async program you don't have to worry too much about blocking progress.
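A minimal sketch of this pattern using Python's asyncio (the `Connection` class is a hypothetical stand-in, not a real library type): a bounded pool is just a queue, and "grabbing a connection" is an ordinary await that suspends the coroutine until one is free.

```python
import asyncio

class Connection:
    """Hypothetical stand-in for a real network/database connection."""
    def __init__(self, cid):
        self.cid = cid

async def worker(name, pool, log):
    conn = await pool.get()       # suspends until a connection is available
    log.append((name, conn.cid))
    await asyncio.sleep(0.01)     # simulate doing I/O on the connection
    pool.put_nowait(conn)         # "closing" just returns it to the pool

async def demo():
    pool = asyncio.Queue()
    for i in range(2):            # a pool of only two connections
        pool.put_nowait(Connection(i))
    log = []
    # Three workers compete for two connections; the third simply waits.
    await asyncio.gather(*(worker(f"w{i}", pool, log) for i in range(3)))
    return log

log = asyncio.run(demo())
print(log)
```

From the worker's point of view the scarcity is invisible: acquiring the resource looks exactly like any other async call.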
That's not my point. The issue is that when the blocking happens, with coroutines control is yielded to the scheduler which will now schedule other tasks. Those tasks may again request and block on resources. This is where the leak is coming from. A resource pool is one way to get around this, however this stops working if you have several kinds of resources.
On the other hand, with threads the IO block is a "proper" block. No new task will be scheduled, the thread will only continue when the IO operation finishes, providing a very natural backpressure mechanism that prevents overallocation/contention.
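One common way to recover that thread-style backpressure in an async scheduler (a sketch, not the only approach) is a global admission-control semaphore: tasks past the limit suspend before they can request any downstream resources, much like a fixed thread pool would block them.

```python
import asyncio

async def fetch(i, limiter, in_flight, peak):
    async with limiter:                    # admission control: at most N tasks
        in_flight[0] += 1                  # may proceed past this point at once
        peak[0] = max(peak[0], in_flight[0])
        await asyncio.sleep(0.01)          # simulated I/O
        in_flight[0] -= 1

async def demo():
    limiter = asyncio.Semaphore(4)         # emulate "only 4 threads"
    in_flight, peak = [0], [0]
    await asyncio.gather(*(fetch(i, limiter, in_flight, peak)
                           for i in range(20)))
    return peak[0]

peak = asyncio.run(demo())
print(peak)
```

Twenty tasks are spawned but no more than four are ever in flight, so downstream resources see bounded demand instead of a thundering herd.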
We use it at work to send FSMs over the network for distributing tasks. Admittedly we always do this with continuations that haven't started yet, but there is no technical reason we couldn't serialize the already started ones (although you'd have to make sure you also send the coroutines you are currently awaiting on.)
Scala async coroutines build FSMs that are just standard objects. They aren't very commonly used, but the same approach could probably be adapted to C.
Basically the compiler compiles an async function into a very simple FSM object, where each local variable that lives longer than a yield point becomes a member of the FSM object. This is done after a transformation to a simple normal form, at which point the function basically looks like bytecode. The compiler generates a new class for every async coroutine.
The generated FSM classes are statically sized so they could in principle be stack allocated, and their instances can be manipulated like any other JVM object.
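A hand-written sketch of what that transformation produces, in Python for brevity rather than Scala (the class and method names here are illustrative, not what any compiler actually emits): the local that survives the yield point becomes a field, and control flow becomes a state switch.

```python
class AddOneFSM:
    """Hand-written analogue of what a compiler might generate for:

        async def add_one(fut):
            a = await fut      # 'a' lives across the yield point,
            return a + 1       # so it is promoted to a field
    """
    def __init__(self):
        self.state = 0
        self.a = None          # local variable promoted to a member
        self.result = None

    def resume(self, value=None):
        if self.state == 0:
            self.state = 1
            return "awaiting"  # suspend: caller resumes us with the value
        elif self.state == 1:
            self.a = value     # the awaited result arrives here
            self.result = self.a + 1
            self.state = 2
            return "done"
        raise RuntimeError("already finished")

fsm = AddOneFSM()
assert fsm.resume() == "awaiting"   # runs until the first yield point
assert fsm.resume(41) == "done"     # feed in the awaited value
print(fsm.result)                   # 42
```

Because the suspended computation is just a plain, statically-shaped object, it can be stored in arrays or serialized and shipped over the network mid-execution, which is what makes the distribution scheme described above possible.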
At $work we have a system using scala async that has all of these features (arrays of FSMs, serialization, distribution by sending to network etc.) Of course there are complications (file handles for example) and it's easier with a GC.
For context, I came across this because I was interested in how they came to acquire the four-letter Twitter handle @meta, which I wrote about here
Turns out it's harvested from the Meta company linked here that the Chan Zuckerberg Initiative acquired. Before that, the @meta handle was held by an early Twitter user from Turkey.
This is basically the unremarkable story of the "been on Twitter since 2010" profile with handle @meta.
I don't think it would be a good idea, given that you'd have to claim the winnings. It might work once or twice but not over and over again.
Additionally, in most cases I'd expect the lottery's house take to cost more than traditional laundering (smurfing, crooked banks, cash-based businesses like taxis, etc.), especially if you have to pay people to buy the tickets.
> It might work once or twice but not over and over again.
Except for when it does: there are a bunch of people who have repeatedly jackpotted state lotteries, they're usually described as 'reclusive mathematicians'. But that isn't what I'm talking about. I just checked the TX Lottery Commission's site and it looks like scratchoffs would run, worst case, a 30% return. I can't be bothered to calculate the upper bounds, but I'd expect it to be 40%-ish. That seems good to me, I especially like that you can skip the part where you have to drive out to some hotel to meet an undercover Secret Service agent pretending to be a Wells Fargo employee responding to your help wanted notice in Soldier of Fortune.
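The back-of-the-envelope math is simple enough to sketch; the prize table below is entirely hypothetical (illustrative probabilities, not actual TX Lottery Commission data) and is tuned to land near the 40%-ish figure above.

```python
# Hypothetical prize table for a $1 scratch-off ticket: prize -> probability.
# These numbers are made up for illustration, not taken from any real lottery.
prizes = {
    1:   0.10,
    5:   0.03,
    20:  0.004,
    100: 0.0007,
}

# Expected "clean" dollars recovered per dirty dollar spent on tickets.
expected_return = sum(amount * p for amount, p in prizes.items())

dirty = 100_000
clean = dirty * expected_return
print(f"expected return per dollar: {expected_return:.2f}")
print(f"${dirty:,} dirty -> ${clean:,.0f} clean")
```

The "return" on laundering is then just the payout ratio: every dollar of prize structure you can claim comes back as documented winnings.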
This could increase the pricing differential between clean and dirty electricity sources. However, most utilities don't care and so will buy the cheaper electricity, which means that there won't be a significant push to decrease fossil fuel based generation.
Unless demand from bitcoin mining starts to strain clean electricity generation, there isn't really much reason to build more renewable capacity either. The kind of thing described in the OP (the repurposing of existing facilities) is likely to be the main result for the foreseeable future.