Certainly not for that reason. Actually, as a non-native English speaker, I consider unicode identifiers one of the most useless features for a language.
There are reasons why unicode identifiers have never been widely used, and probably will never be, even though they've been around for more than a decade:
1. You're unlikely to be allowed to use them. Unicode identifiers hamper international collaboration. Do you want your Chinese colleague to commit identifiers in Chinese characters into your code, and then an Arabic colleague to add some right-to-left script on top? I doubt it :) They are unwanted in outsourcing, freelancing, open source, and any company that hires abroad, and they are a problem when asking on StackOverflow, etc.
2. There is just no problem with using English in the first place. Programmers already use English every day. 80-100% of the information (documentation, Q&A, discussion forums) on any programming topic is in English; native-language sources are lacking at best, often nonexistent. By the time someone is a professional programmer, it's way too late for a couple of translated identifiers to help them.
3. They are not as appealing as you might think. Many languages don't work the same way as English. For example, in the expression `print(line)` the word `line` may have to take a different morphological form than in `var line = ...`, and yet another form in `if '_' in line`; using one form everywhere is very unnatural and takes getting used to (and if you're going to adjust anyway, why not adjust to English instead?). See the sketch after this list.
4. Mixed-language text is simply harder to type when the second alphabet is not Latin. It's something like 3x harder when foreign words have to be inserted; I have even seen colleagues discussing code in chat in English for that reason alone. And since at least the language keywords and the standard library already use English, mixed is what it's going to be. A person must have some really serious trouble with English to tolerate that.
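A minimal sketch of point 3, assuming Python 3 and Russian as the native language (the identifier and the inflected forms are my own example, not from any real codebase):

```python
# "строка" is Russian for "line" (nominative case). As an identifier it is
# frozen in that single form, even though natural Russian would inflect it
# differently in each of these contexts.

строка = "user_id_42"   # assignment reads fine with the nominative: строка
print(строка)           # spoken Russian would want the accusative here: строку
if "_" in строка:       # and the prepositional here: в строке
    print("underscore found")
```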
Even in strictly local projects where all comments are in the native language, unicode identifiers are rarely used.
P.S. When I was reading the Swift book's section on unicode identifiers, I immediately thought: "the first paragraph of every Swift coding convention on the planet is going to be: don't use unicode identifiers".
I agree, but I think we live in an idealist bubble here on Hacker News. Some of my freelancing clients maintain a fair bit of (Unicode-free) mixed English-German code.
What would you do if you had to write one of the gazillion business software systems we never see here on HN, and you had to codify boring terms like `Boolean rightToGiveExtraordinaryNoticeOfTermination` (or the equivalent in any other language)? The client/superior has not provided you with an English translation, and your English is mediocre. I can see how non-English code is the least evil in this situation.
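A hedged sketch of that trade-off, in Python for brevity; the class and field names are invented for illustration, not taken from any real client's domain model (the German legal term is "außerordentliches Kündigungsrecht"):

```python
from dataclasses import dataclass

# Option 1: translate the legal term yourself, with mediocre English, and
# hope every future maintainer guesses the same translation.
@dataclass
class EmploymentContractTranslated:
    right_to_give_extraordinary_notice_of_termination: bool = False

# Option 2: keep the client's German term; anyone holding the contract
# documents can map it straight back to the code without a glossary.
@dataclass
class EmploymentContractNative:
    ausserordentliches_kuendigungsrecht: bool = False
```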
I also once inherited outsourced code from Russia, and it had Russian (Cyrillic) comments above every English identifier, so apparently some people don't mind switching between keyboard layouts while typing :)
There exist a lot of line-of-business Japanese applications which have methods like getKamokuList and getKogaishaList, either because the company didn't have a blessed translation for the noun or, in more than one instance in my career, because two concepts which were distinct in Japanese would have caused a hash collision when translated.
For example, if you do Japanese entrance exams, there exist two ways to say "academic subject": 教科 and 科目. Both of them are most readily translated as "subject", but the first is subject like mathematics is a subject, and the second is a subject like Algebra II is a subject. Make sense? Great.
Rather than littering our code with getSubject and getSubSubject, which was the cleanest English option available, we just went with the transliterations straight in the source code, and said "Look, if we pay your salary, you're going to learn the difference between kamoku and kyouka."
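A rough sketch of what that convention can look like in practice; the class, method bodies, and sample data below are invented for illustration, with the names adapted to Python style:

```python
class EntranceExamCatalog:
    """Japanese domain terms kept as transliterated identifiers."""

    def get_kyouka_list(self) -> list[str]:
        # 教科 (kyouka): a broad subject area, e.g. mathematics as a whole
        return ["mathematics", "science", "english"]

    def get_kamoku_list(self, kyouka: str) -> list[str]:
        # 科目 (kamoku): a concrete course within a 教科, e.g. Algebra II
        courses = {"mathematics": ["Algebra II", "Geometry"]}
        return courses.get(kyouka, [])
```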
In the early 90's - i.e. before the internet cemented the hegemony of English in software development - some colleagues worked on a codebase that was an international collaboration between the UK, Germany and France. Each nation used its own native-language identifiers for variables and functions, and there was a custom preprocessor to translate between them.
The feeling was very much that this was political far more than an aid to the programmers, although of course that was the English speakers' perspective! But it did make working with the code (which was a mess anyway) far more complicated.
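For anyone curious what such a setup might have looked like, here is a toy sketch of an identifier-translation pass in Python; the mapping table and names are invented, and the real project's tooling is unknown to me:

```python
import re

# Invented German-to-English identifier mapping; a real table would be
# maintained per project and per language pair.
DE_TO_EN = {
    "fahrplan": "timetable",
    "zugnummer": "train_number",
}

def translate_identifiers(source: str, mapping: dict[str, str]) -> str:
    """Rewrite every whole-word occurrence of a mapped identifier."""
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, mapping)) + r")\b")
    return pattern.sub(lambda m: mapping[m.group(0)], source)

print(translate_identifiers("zugnummer = fahrplan.naechster_halt()", DE_TO_EN))
# prints: train_number = timetable.naechster_halt()
```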
Still, identifiers in the local language make sense in some contexts, for example when working with terminology that has no direct equivalent in English, e.g. some very specific tax-related terms with a precise meaning in the local country. In that case, translating to a not-quite-equivalent English term will just confuse the matter.
Actually, trying to use Chinese-character identifiers opens another can of worms: the rarer characters sit outside the Basic Multilingual Plane and so tend to break tooling even in languages that claim to support unicode identifiers.
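A quick way to check which plane a character lives in, and whether Python (as one data point) accepts it in an identifier - other compilers, editors, and diff tools may still choke on supplementary-plane characters even when the language spec allows them:

```python
import unicodedata

# A BMP character vs. a CJK Extension B character (plane 2); the latter is
# the kind that old UCS-2 assumptions in tooling tend to break on.
for ch in ["变", "𠀋"]:
    plane = "BMP" if ord(ch) <= 0xFFFF else "supplementary plane"
    print(f"U+{ord(ch):04X}", unicodedata.name(ch, "<unnamed>"), plane,
          "valid Python identifier:", ch.isidentifier())
```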
I have been working in big companies that only hire locally, and they definitely use local business terms mixed with English programming terms in identifiers - formattedZugNummer in German or montantFacturé in French, for example. Unicode is a blessing here.
Almost all of my colleagues are not native English speakers, so it makes absolute sense not to waste time translating these terms (differently!) every time.
I guess it depends on how "international" the company is.
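A small sketch of that mixed style, assuming Python 3 and its unicode identifier support; the class and figures are invented ("montant facturé" is French for "invoiced amount"):

```python
from dataclasses import dataclass

@dataclass
class Facture:
    # English programming vocabulary wrapped around the untranslated
    # French business term, accent included.
    montant_facturé: float = 0.0

    def formatted_montant_facturé(self) -> str:
        return f"{self.montant_facturé:.2f} EUR"

print(Facture(montant_facturé=1234.5).formatted_montant_facturé())
# prints: 1234.50 EUR
```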
Agreed. As a French speaker (a language with only a few accents, so even ASCII is acceptable), I don't code in French anymore, even for code I know no English reader will ever see.