Thanks. Yes, I was thinking more about the (traditionally) CPU side of graphics or games, things like CPU skinning or skeletal animation.
I was wondering mostly about the FFI overhead. So the real question is: is it suitable for real-time, low-latency contexts? But I have your answer :)
I'd still love to find a use for it one day. If there's ever a chance you could go the Futhark route and allow compilation of CPU/GPU routines, I think it would be an ideal kind of syntax for that.
There's also a GC pause issue: reference counting is the main method, but when that doesn't do the job we fall back on a stop-the-world mark and sweep. And since we do very little analysis, it's easy for namespaces and closures to form reference loops. If you're careful about how you use those, it's possible to keep reference counting working on its own, but debugging when that fails is probably not such a nice experience. Running GC every frame might be acceptable if the stored state has few enough objects; it can take around 1ms when there's not much to do.
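To make the reference-loop failure concrete, here's a minimal sketch in Python, whose collector has the same shape (refcounting as the main method, a tracing pass as the backstop); the `Node`/`on_update` names are purely illustrative, not anything from the actual implementation:

```python
import gc

class Node:
    def __init__(self, name):
        self.name = name
        self.on_update = None  # slot for a per-frame callback

def make_node():
    node = Node("player")
    # The closure captures `node`, and `node` holds the closure:
    # a reference loop that refcounting alone can never free.
    def on_update():
        print("updating", node.name)
    node.on_update = on_update
    return node

# Careless use: drop the only external reference and the pair leaks
# (refcounts never reach zero) until the tracing pass runs.
n = make_node()
del n
print("collected:", gc.collect())  # mark-and-sweep reclaims the loop

# Careful use: break the loop by hand so refcounting suffices.
n = make_node()
n.on_update = None                 # sever the cycle first
del n                              # freed immediately by refcounting
print("collected:", gc.collect())  # nothing left for the tracer
```

That second pattern is what "being careful about how you use those" amounts to in practice; the trouble is that nothing tells you when you've missed a loop.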