I haven't. I don't have any complex math in mind (yet), just some simple transformations. The problem is that even something as simple as checking a list for potential duplicates becomes really RAM-intensive for sufficiently large lists. (I'm not even doing deep equality, just comparing metadata.)
I still have plenty more work to do on the project. I think I'll end up fanning out each list iteration into a series of smaller chunks to keep me from blowing through all the RAM on any one request.
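For what it's worth, a minimal sketch of the chunked approach, assuming each item exposes its metadata as a dict (the field names and `CHUNK_SIZE` here are made-up placeholders). The point is that only small key tuples stay resident, never the full items:

```python
import itertools

CHUNK_SIZE = 10_000  # hypothetical batch size; tune to the memory budget

def metadata_key(item):
    # Hypothetical fields -- compare whatever metadata defines a "duplicate".
    return (item["name"], item["size"], item["mtime"])

def find_duplicate_keys(items):
    """Iterate in fixed-size chunks, holding only compact key tuples in RAM."""
    seen, dupes = set(), set()
    it = iter(items)
    while chunk := list(itertools.islice(it, CHUNK_SIZE)):
        for item in chunk:
            key = metadata_key(item)
            if key in seen:
                dupes.add(key)
            else:
                seen.add(key)
    return dupes
```

If `items` is a generator (e.g. streaming rows from disk or a DB cursor), this never materializes the whole list at once, which is where most of the savings come from.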
NumPy supports lots of array math, but another way to think of it is as an API for working directly with memory (with values stored as platform types instead of Python objects).
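A quick illustration of what that buys you (exact sizes vary by platform and Python version):

```python
import sys
import numpy as np

n = 1_000_000
py_list = list(range(n))            # one heap-allocated int object per element
arr = np.arange(n, dtype=np.int64)  # one contiguous buffer of raw 8-byte ints

print(sys.getsizeof(py_list))       # ~8 MB for the list's pointer array alone
                                    # (the million int objects add ~28 MB more)
print(arr.nbytes)                   # 8000000 -- the entire payload

# Duplicate checks also get cheap once values live in one flat buffer:
print(np.unique(arr).size == n)     # True -- arange has no duplicates
```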