There are a lot of problems that arise from lack of domain expertise, but they can be overcome with a multidisciplinary team.
The most defeating problem for pure AI teams is that they don't understand the domain well enough to know whether their data sets are representative. Humans are great at salience assessments: drawing on experience, they can ignore huge numbers of the examples and features they witness. This shapes dataset curation. When a naive ML system trains on that data, it won't appreciate the often implicit curation decisions that were made, and will thus be miscalibrated for the real world.
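To make the curation point concrete, here's a minimal sketch (all numbers, the single feature, and the disease scenario are hypothetical): a curator builds a class-balanced training set for a condition that's actually rare in the wild, and the resulting classifier's probabilities come out wildly inflated at deployment unless the prior shift is corrected for.

```python
# Hypothetical sketch: a curated 50/50 training set vs. a 1%-prevalence
# deployment population. The implicit curation decision (balancing classes)
# miscalibrates the model's predicted probabilities in the real world.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, prevalence):
    """Draw labels at the given prevalence, with one noisy feature."""
    y = (rng.random(n) < prevalence).astype(int)
    X = rng.normal(loc=y * 1.5, scale=1.0, size=n).reshape(-1, 1)
    return X, y

# Curated training set: balanced, as a well-meaning curator might build it.
X_train, y_train = sample(10_000, prevalence=0.5)
clf = LogisticRegression().fit(X_train, y_train)

# Deployment population: the real-world prevalence is only 1%.
X_test, y_test = sample(100_000, prevalence=0.01)
p_naive = clf.predict_proba(X_test)[:, 1]

# Standard prior-shift correction: rescale the predicted odds by the
# ratio of the deployment prior to the training prior.
train_prior, deploy_prior = 0.5, 0.01
odds = p_naive / (1 - p_naive)
odds *= (deploy_prior / (1 - deploy_prior)) / (train_prior / (1 - train_prior))
p_corrected = odds / (1 + odds)

print(f"deployment prevalence:                  {y_test.mean():.3f}")
print(f"mean predicted prob (naive):            {p_naive.mean():.3f}")
print(f"mean predicted prob (prior-corrected):  {p_corrected.mean():.3f}")
```

The naive mean lands far above the true 1% prevalence; the corrected one lands near it. The catch, and the point of the parent comment, is that you can only apply this fix if someone on the team knows the curation happened in the first place.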
A domain expert can offer a lot of benefits. They may know how to engineer features in a way that is resilient to these salience issues. They can immediately recognize when a system is making stupid decisions on out-of-sample data. And if the ML model allows for introspection, the domain expert can assess whether the model's representations look sensible.
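The introspection point is easy to operationalize. A hedged sketch (the feature names and data here are invented for illustration): expose a model's learned weights next to human-readable feature names, so the expert can sanity-check signs and magnitudes.

```python
# Hypothetical sketch: print a model's coefficients beside feature names
# so a domain expert can review them. The outcome depends only on the
# first three features; the fourth is deliberate junk.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
feature_names = ["age", "smoker", "systolic_bp", "zip_code_digit"]

X = rng.normal(size=(5_000, 4))
logits = 0.8 * X[:, 0] + 1.2 * X[:, 1] + 0.5 * X[:, 2]
y = (rng.random(5_000) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)

# An expert reviewing this table should expect ~zero weight on the junk
# feature; a large weight there signals leakage or a curation artifact.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>16}: {coef:+.2f}")
```

Nothing in the code knows that `zip_code_digit` is nonsense; only the human reading the table does. That asymmetry is exactly why the expert belongs in the loop.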
In scenarios where datasets actually do accurately resemble the "real world", it is possible for ML to transcend human experts. Linguistics is a pretty good example of this.
It makes sense to have a domain expert and an AI expert working together, but I'd offer two important modifications:
1) The AI expert is auxiliary here, and the domain expert is in the driver's seat. How can it be otherwise? You'd no more put the AI expert in charge than you'd put an electronic health record IT specialist in charge of the hospital's processes. The relationship needs to be outcome-focused, not technology-focused.
2) The end result is most likely to be a productivity tool that augments the abilities/accuracy/speed of human experts rather than replacing them. AGI still being more fiction than science, we aren't likely to be diagnosed by an AI radiologist in our lifetimes, nor will an AI scientist make an important scientific discovery. Ditch the hype and get to work on those productivity tools, because that's all you can do for the foreseeable future. That might seem like a disappointing reduction in ambition, but at least it's reality-based.
Unless, of course, the "domain experts" have fundamental disagreements, or have equally limited knowledge of what matters when extrapolating data beyond their own scope. E.g., in comp sci there might be multiple comparable ways to accomplish n, but which is best for reliably accomplishing an unknown or unforeseen n+1... depends.