You could, but in many cases you wouldn't want to. With a fixed compute budget, you will get superior results by relying on external tool use (where "tool" is defined liberally, and can include smaller narrow neural nets like GraphCast and AlphaGo) rather than stuffing all of those capabilities into a monolithic model.
Isn't that what the original ResNet project disproved? Rather than trying to hand-engineer what the NN should look for, just make it deep enough and give it enough training data, and it'll figure things out on its own, even better than if we had told it what to look out for.
Of course, in terms of cost and training time, we're probably a long way off from being able to replicate that in a general-purpose NN. But in theory, given enough money and time, it's presumably possible, and conceivably would produce better results.
I'm not proposing hand-engineering anything, though. I'm proposing giving the AI tools, like a calculator API, a code interpreter, search, and perhaps a suite of narrow AIs that are superhuman in niche domains. The AI with tool use should outperform a competitor AI that doesn't have access to these tools, all else equal. The reason should be intuitive: the AI with tool use can dedicate more of its compute to the reasoning that is not addressed by the available tools. I don't think my views here are inconsistent with The Bitter Lesson.
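To make that concrete, here's a rough sketch of the pattern I mean. The routing logic and tool names are made up for illustration (not any particular framework's API): the model delegates sub-tasks it's bad at, like exact arithmetic or retrieval, and spends its own compute only on the reasoning the tools don't cover.

```python
from typing import Callable

# "Tools" here are just ordinary functions; in practice they might be a
# calculator service, a code interpreter, a search index, or a narrow
# superhuman model (GraphCast-style) behind an RPC call.
TOOLS: dict[str, Callable[[str], str]] = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # demo only
    "search": lambda query: f"[top result for {query!r}]",             # stubbed
}

def route(task: str) -> str:
    """Decide whether to answer directly or delegate to a tool.

    A real system would let the model emit a structured tool call;
    this keyword check just stands in for that decision.
    """
    if any(ch.isdigit() for ch in task) and any(op in task for op in "+-*/"):
        return f"calculator -> {TOOLS['calculator'](task)}"
    if task.lower().startswith("find"):
        return f"search -> {TOOLS['search'](task)}"
    return "model answers directly, spending its compute on the hard reasoning"

if __name__ == "__main__":
    print(route("37 * 412"))
    print(route("find recent results on medium-range weather forecasting"))
    print(route("explain why tool use frees up model capacity"))
```

The point of the sketch is just the division of labor: anything a cheap, reliable external tool already does well never has to be learned or re-derived by the big model.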