You only need to write a training loop function once. After that you just pass it a model, a dataloader, etc, exactly as you would with a training loop someone else wrote, such as the one inside Keras. The only difference is that the Keras loop is hidden from you behind layers of wrappers and abstraction, which makes it harder to modify and debug.
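For instance, here is a minimal sketch of such a loop in PyTorch (the names `fit` and `train_dl` are illustrative, not from any particular library):

```python
import torch

def fit(model, loss_fn, opt, train_dl, epochs=1):
    """Generic training loop: write it once, then reuse it with
    any model/loss/optimizer/dataloader combination."""
    model.train()
    for epoch in range(epochs):
        for xb, yb in train_dl:
            pred = model(xb)          # forward pass
            loss = loss_fn(pred, yb)
            loss.backward()           # compute gradients
            opt.step()                # update parameters
            opt.zero_grad()           # reset gradients for the next batch
        print(f"epoch {epoch}: loss {loss.item():.4f}")
```

Calling it is just `fit(model, loss_fn, torch.optim.SGD(model.parameters(), lr=0.1), train_dl)`, and because the loop is plain code you own, adding something like gradient clipping or custom logging is a one-line change rather than a fight with a framework.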
Keras was great as a front end for painful libraries like Theano, which was similar to early TF. Btw, many Theano users were already using a higher-level library called Lasagne, which played much the same role for Theano that Keras did.
When I switched to TF in 2016, Keras was still in its infancy, so I wrote a lot of low-level TF code (e.g. my own batchnorm layer), but many new DL researchers struggled with TF because they didn't have Theano experience. That steep learning curve drove the rise of Keras as the friendly TF interface.
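For a sense of what "low level" meant in practice, here is a NumPy sketch of the batchnorm forward pass at training time (just the math, not my original TF code, and omitting the running averages used at inference):

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    # x: (batch, features). Normalize each feature over the batch,
    # then apply the learned scale (gamma) and shift (beta).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```

In early TF you had to assemble this kind of thing yourself out of graph ops; Keras packaged it (and the bookkeeping around it) into a one-line layer.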