lreichold's comments

An interesting alternative approach to instrument sound separation is to use a fused audio + video model: given video of the instruments being played, you can perform the separation with higher fidelity.

I was fascinated by the work done by “The Sound of Pixels” project at MIT.

http://sound-of-pixels.csail.mit.edu/
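Roughly, the idea is to predict a time-frequency mask for each source, conditioned on visual features taken from the video. Not the project's actual code; just a rough sketch of that fusion step, written in Swift for TensorFlow with made-up shapes, a single conv layer per branch, and a random stand-in for the visual feature vector:

    import TensorFlow

    // Made-up shapes: a mixture spectrogram [batch, freq, time, 1] and a
    // visual feature vector [batch, channels] produced by a video network.
    let spectrogram = Tensor<Float>(randomNormal: [1, 256, 128, 1])
    let videoFeatures = Tensor<Float>(randomNormal: [1, 16])

    // Audio branch: convolve the spectrogram into a feature map.
    let audioConv = Conv2D<Float>(filterShape: (3, 3, 1, 16), padding: .same, activation: relu)
    let audioFeatures = audioConv(spectrogram)                       // [1, 256, 128, 16]

    // Fusion: weight each audio channel by the visual features, then
    // predict a soft mask over the mixture spectrogram.
    let maskConv = Conv2D<Float>(filterShape: (3, 3, 16, 1), padding: .same, activation: sigmoid)
    let mask = maskConv(audioFeatures * videoFeatures.reshaped(to: [1, 1, 1, 16]))

    // Applying the mask to the mixture gives the separated instrument.
    let separated = mask * spectrogram

The actual model works at the pixel level and is much deeper, which is what lets you click on an instrument in the frame and hear just that source.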


That’s quite clever but not really practical: instruments heard in most music produced today aren't "played" by humans.


Not local, but you can start using S4TF immediately via Google Colab.

https://colab.research.google.com/github/tensorflow/swift/bl...
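A quick cell like this (nothing Colab-specific, just a minimal example) is enough to check that both tensor ops and automatic differentiation work:

    import TensorFlow

    // Basic tensor arithmetic.
    let x = Tensor<Float>([[1, 2], [3, 4]])
    print(x + x)                                    // [[2.0, 4.0], [6.0, 8.0]]

    // Automatic differentiation: gradient of sum(x * x) with respect to x.
    let grad = gradient(at: x) { x in (x * x).sum() }
    print(grad)                                     // [[2.0, 4.0], [6.0, 8.0]]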


If I’m going to be developing, I need a compiler, not a REPL or a notebook. I haven't managed to find that.


The Dockerfile in the swift-jupyter repo is a superset of what you need. You could remove the lines dealing with Jupyter and you'd be left with a Docker container with the S4TF compiler.

https://github.com/google/swift-jupyter/blob/master/docker/D...
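For what it's worth, once the toolchain is in the container, a plain source file builds with the bundled compiler. The exact invocation depends on the toolchain version, but something along these lines:

    // main.swift -- e.g. `swiftc -O main.swift -o main` inside the container
    import TensorFlow

    // Nothing fancy; just proves the TensorFlow module is available
    // outside a REPL or notebook.
    let w = Tensor<Float>(randomNormal: [3, 3])
    print(w.sum())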


Thank you - I will give that a try!

