
I'm teaching myself deep learning at the moment, and learning about embedding vectors was the first "holy shit!" moment I had.

To me, it's fascinating that not only can you:

- represent things like words as vectors,

- map them in a multi-dimensional space, and

- use that space to find the "closest" neighbors (i.e. the most similar words)…

…but you can actually perform "mathematical" operations on them.
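A minimal sketch of that nearest-neighbor idea (the 3-dimensional vectors below are invented for illustration; real embeddings have hundreds of dimensions learned from data):

    import numpy as np

    # Toy embedding table. Coordinates are contrived so that
    # dimension 0 ~ "royalty", dimension 1 ~ "gender", dimension 2 ~ "person-ness".
    vocab = {
        "king":  np.array([0.9,  0.3,  0.5]),
        "queen": np.array([0.9, -0.3,  0.5]),
        "man":   np.array([0.1,  0.3,  0.5]),
        "woman": np.array([0.1, -0.3,  0.5]),
        "apple": np.array([0.0,  0.0, -0.9]),
    }

    def nearest(vec, exclude=()):
        # Cosine similarity: higher means the vectors point in more similar directions.
        def cos(a, b):
            return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        return max((w for w in vocab if w not in exclude),
                   key=lambda w: cos(vocab[w], vec))

    print(nearest(vocab["king"], exclude={"king"}))  # -> "queen" with these toy vectors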

The canonical example is that, if you represent "king", "queen", "man", and "woman" as vectors in your embedding space, then you can ask your model "What is king - man + woman?" and (provided it's trained appropriately) it will return "queen".
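You can try that query directly with pretrained vectors. A sketch using gensim (assuming it's installed; the GloVe model downloads on first use), whose most_similar adds and subtracts the vectors and then returns the nearest neighbors, excluding the input words:

    import gensim.downloader as api

    # Load pretrained 50-dimensional GloVe word vectors (downloaded on first use).
    kv = api.load("glove-wiki-gigaword-50")

    # Vector arithmetic: king - man + woman, then find the nearest word.
    result = kv.most_similar(positive=["king", "woman"], negative=["man"], topn=1)
    print(result)  # -> [('queen', ...)] with these vectors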

I look forward to the day when we can ask something like "What is 'Bohemian Rhapsody' - 'Queen' + 'Velvet Underground'?" If OP's model were trained on whole songs instead of previews, that would probably already be a reality!


