"Western lie debunkers" will absolutely jump at any chance to say this is a fake, but that particular take is pathetic and indicative of problems with CJK literacy.
Unicode code points and font glyphs are not the same thing, leading to situations where one Unicode character can be rendered as a different (but similar-looking) one depending on OS and setup* -- and people enter and read characters by how they look, not by their Unicode code points.
So the document can easily end up with the Unicode code point for 置 in the source without anyone finding out, even the person who entered it, if it always renders as a simplified version (without the bottom-left vertical stroke). And it will always render as a simplified version, because everyone involved is obviously using a simplified setup.
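A quick way to see the distinction between stored code points and rendered glyphs is to inspect the code points directly. This is a minimal sketch: 置 (U+7F6E) is a single Han-unified code point whose rendered glyph varies by font and locale, while some simplified/traditional pairs (e.g. 国/國) genuinely are distinct code points.

```python
import unicodedata

# 置 is one unified code point; the glyph you see (with or without the
# bottom-left stroke) depends entirely on the font/locale, not the data.
unified = "置"
print(hex(ord(unified)))           # 0x7f6e
print(unicodedata.name(unified))   # CJK UNIFIED IDEOGRAPH-7F6E

# By contrast, some simplified/traditional pairs ARE separate code points,
# so a mismatch there is visible in the stored data itself:
print(hex(ord("国")), hex(ord("國")))   # 0x56fd 0x570b
```

So only characters in the second category leave an unambiguous trace in the file; unified characters like 置 can look "simplified" on one machine and "traditional" on another with identical bytes.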
(If you have Big Sur set up the same way as mine, you can observe this for yourself by opening the same doc, such as the "Response Plan and Procedure for Escape and Disturbance Prevention During Class Times", in Quick Look and in Pages and looking at the end of the text following the first Arabic numeral "1" on the first page. Quick Look will show you a traditional/Japanese character at the end, while Pages will have a much better layout and consistently show simplified characters.)
The sad thing is that this initial stalling tactic is effective. Some will be swayed by his simple tweets and not have patience for the subsequent "debunking of the debunk" let alone their own research. This takes away the initial impact of the release.
* Software chooses a different glyph, the font provides a different glyph than the one required by the Unicode standard, and so on. Example: https://stackoverflow.com/questions/54212157/. There was an in-depth article on CJK posted on HN some time ago, can't remember what it was called.
TL;DR yes, documents authored by CCP officials can easily have traditional Unicode code points in them, because it is completely routine for software to be set up in a way that always renders those in a simplified way.
That's interesting, I didn't realize that IMEs would silently offer you choices in different sets/styles than the one preferred by the locale, and that OS fonts could actually hide the difference.
If you're saying that an innocent error is what happened, you'd expect these weird traditional-in-simplified-context characters to appear across all sections of the documents, not clustered together in a single paragraph (since clustering would be evidence that that paragraph was written by a different author than the rest of the document).
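The clustering argument can be sketched as a toy heuristic. This is purely illustrative: the character set and the helper names are my own assumptions, and a real analysis would need an exhaustive variant table and a proper statistical test.

```python
# Tiny illustrative sample of traditional-only code points
# (characters whose simplified counterparts are distinct code points).
TRAD_ONLY = set("國書裡後")  # hypothetical sample, not exhaustive

def trad_positions(text):
    """Indices of characters from the traditional-only sample set."""
    return [i for i, ch in enumerate(text) if ch in TRAD_ONLY]

def looks_clustered(positions, text_len, window=0.1):
    """Crude test: all hits fall within a window covering 10% of the text."""
    if len(positions) < 2:
        return False
    return (positions[-1] - positions[0]) <= window * text_len

# Simulated document: simplified body with one short traditional run.
text = "简体正文" * 50 + "國書裡" + "简体正文" * 50
pos = trad_positions(text)
print(pos, looks_clustered(pos, len(text)))
```

Note that this only catches pairs with distinct code points; Han-unified characters rendered differently by fonts leave no such trace at all.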
I believe that if they can make it into a document in any number of ways (copy-paste, input method, etc.) without anyone being able to tell, then their existence alone is not an indicator.
That different authors could have written/rewritten/edited different parts of a document at different points in time is natural; what reasons are there to think otherwise?