I like the idea of exploring other models. Shared data and threads are not the only approach to parallelism.
And, usually, when you are stuck on a hard problem, it pays to take a step back and make sure you are not failing to solve the problem you actually should be solving. The problem we want to solve is not how to get rid of the GIL, nor how to improve Python performance with threads, but how to use Python more effectively on multi-core/multi-thread architectures and gain performance from that.
This is not a problem only Python has. The machines I work on most of the time (a Core i5 laptop and an Atom netbook) rarely experience loads larger than 2. There are simply not enough threads to keep them busy.
That's not to say they never get slow - they do - but I'd like to emphasize that the limiting factor here is that we are not extracting parallelism from the software already written. We stand to gain a lot from that.
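As a minimal sketch of one such alternative model: passing messages between processes instead of sharing data between threads sidesteps the GIL entirely, since each worker is a separate interpreter. The `square` function and worker count below are illustrative, not from this discussion.

```python
from multiprocessing import Pool


def square(n):
    # CPU-bound work; each call runs in a separate process,
    # so the GIL of one interpreter never blocks another.
    return n * n


if __name__ == "__main__":
    # Four workers is an arbitrary choice for this sketch;
    # Pool defaults to os.cpu_count() if omitted.
    with Pool(processes=4) as pool:
        results = pool.map(square, range(10))
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

The trade-off is that arguments and results are pickled across process boundaries, so this model favors coarse-grained work over fine-grained shared state.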