One thing I thought of when I saw the demo video, that is probably on the team's radar:
There would be a lot of cool ways to improve the model through feedback: showing training images where the model is uncertain, or offering more advanced explanations for classifications flagged as incorrect, to guide the user toward gathering the training data that can improve it.
And possibly providing a summary of where it knows it works well.
There are a lot of benefits there, both for improving the models people are building and for helping users understand why their model performs the way it does.
Thanks for your suggestions here. We are always looking at ways to improve Lobe, and the feedback loop of how to improve your model is one of the most important ones for us.