There is no profit in AI safety, just as cars did not have seat belts until Ralph Nader effectively forced the issue by publishing Unsafe at Any Speed. For-profit corporations have zero interest in controlling something that is not profitable, unless, in conjunction with captured regulation, it helps them keep challengers out. And if it's open-sourced, it doesn't matter who wrote it, as long as whoever did is economically sustainable.
I'm guessing the issues will lie in cases where it appears to be doing what it's told but is only pretending to do so, with no obvious way to tell.
AI safety is barely even a tangible thing to measure like that. It's mostly just fears and a loose set of ideas about a hypothetical future AGI that we're not even close to.
So far, OpenAI's "controls" are just an ever-expanding list of no-no topics and some philosophy work around I, Robot-style rules. They also slow-walked the release of GPT-2 over fears of misinformation, spam, and deepfake-type abuse that never really materialized.
Most proposals for safety are just "slowing development" of LLMs, calls for vague government regulation, or hand-wringing over commercialization. The commercialization point is the most controversial because OpenAI claimed to be open and non-profit. But even so, the assumed correlation of less commercialization == more safety is not clear, beyond shaping what OpenAI's team spends its time on, which again is hard to translate into anything tangibly measurable for 'safety' in the near term.