Depending on the finetuning tool you're using, you can often just start the training run and read off the estimated duration it reports. Give it around five minutes to stabilise, then check the estimate.
Axolotl is a good finetuning tool if you need one.
I really wish people developing local applications would let users specify an API endpoint. Most applications already speak an OpenAI-compatible API, and for those that don't, the browser's implementation of local model inference can be used.
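To illustrate why a configurable endpoint is cheap to support: an "OpenAI-compatible" backend is just an HTTP server that accepts a known request shape at `{base_url}/v1/chat/completions`. A minimal sketch (the base URL and model name are placeholders for whatever the user hosts locally, e.g. a llama.cpp or Ollama server):

```python
import json
from urllib.request import Request

def build_chat_request(base_url: str, model: str, prompt: str) -> Request:
    """Build a chat-completions request against a user-supplied endpoint.

    Swapping backends is just a matter of changing base_url -- the
    payload shape stays the same across OpenAI-compatible servers.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return Request(
        url=f"{base_url.rstrip('/')}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Point the same client code at any locally hosted server:
req = build_chat_request("http://localhost:8080", "local-model", "hello")
```

An application that exposes `base_url` as a setting gets local-model support essentially for free.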
I've also been looking for an "LLM browser" for iOS, i.e. an app that can work with LLM endpoints I host myself, but I haven't been able to find anything.
For those of you who haven't read to the end of the README: it's for procrastination. The idea is that if your work involves reading a lot of papers but you want to procrastinate on the sly, you can convert a book or website you'd like to read into a paper, so it looks as if you're working while you're actually enjoying your book.