
You only need to write a training loop function once. Then you can just pass to it a model, dataloader, etc., just like you would if you used a training loop written by someone else in Keras. The only difference is that someone else's loop would be hidden from you behind layers of wrappers and abstraction, making it harder to modify and debug.
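A minimal sketch of that idea: the loop is written once as a plain function, then reused by passing in a model, loss, and data. All names here (`train_loop`, `loss_and_grad`, `sgd_update`) are illustrative, not any library's real API, and a pure-Python scalar "model" stands in for a real network so the example is self-contained.

```python
def train_loop(params, loss_and_grad, update, data, epochs=1):
    """Run `epochs` passes over `data`; return the mean loss per epoch."""
    history = []
    for _ in range(epochs):
        total = 0.0
        for x, y in data:
            loss, grads = loss_and_grad(params, x, y)  # forward + backward
            update(params, grads)                      # in-place parameter step
            total += loss
        history.append(total / len(data))
    return history

# Toy "model": y_hat = w * x, trained to fit y = 2x.
def loss_and_grad(params, x, y):
    err = params["w"] * x - y
    return err * err, {"w": 2.0 * err * x}  # squared error and its gradient

def sgd_update(params, grads, lr=0.05):
    for k, g in grads.items():
        params[k] -= lr * g

params = {"w": 0.0}
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
history = train_loop(params, loss_and_grad, sgd_update, data, epochs=20)
```

Swapping in a real model, optimizer, or dataloader changes only the arguments, never the loop body, which is the "write it once" point.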


It sounds like you've found something that works best for you, and that the large Keras user base has found something that works best for them.


The large Keras user base exists largely because TensorFlow sucked.


I'll agree to disagree. I find great value in the Keras API.

That claim is also a bit histrionic, in that Keras was very popular with the Theano backend before that project wound down.


Keras is great for painful libraries like Theano, which was similar to early TF. By the way, many Theano users already used a higher-level library called Lasagne, which was similar to Keras.

When I switched to TF in 2016, Keras was still in its infancy, so I wrote a lot of low-level TF code (e.g., my own batchnorm layer), but many new DL researchers struggled with TF because they didn't have Theano experience. That steep learning curve led to the rise of Keras as the friendly TF interface.

Pytorch is a different story.



