I’m having trouble arranging my thoughts into something more dialogic,
so I’ll just try to list them:
The keyboard mapping I use, which is more-or-less a clone of the
standard US Apple keyboard, covers most European languages that use
the Latin script and provides many common symbols, such as “®”. Of
course, different mappings provide different character sets.
Because keyboard mappings can be swapped with a couple of keystrokes on
any major OS, users who work in multiple languages can simply switch to
a mapping that supports the language they are currently writing in.
Advanced input methods, like the phonetic approach used for CJK
characters, let users access complex scripts with a common keyboard.
Some text editors and input-support software let a user type a LaTeX
symbol command and have the control sequence replaced with the
corresponding Unicode character.
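A minimal sketch of that kind of replacement, assuming a hypothetical (and tiny) symbol table; real editors ship far larger mappings and smarter triggering:

```python
import re

# Hypothetical minimal table mapping control sequences to characters.
SYMBOLS = {
    r"\alpha": "α",
    r"\pm": "±",
    r"\textregistered": "®",
}

def expand(text):
    """Replace each known control sequence with its Unicode character.

    Unknown control sequences are left untouched.
    """
    pattern = re.compile(r"\\[A-Za-z]+")
    return pattern.sub(lambda m: SYMBOLS.get(m.group(0), m.group(0)), text)

print(expand(r"x \pm \alpha"))  # → x ± α
```

In practice the replacement is usually triggered as you type (on a space or tab after the command), but a batch pass like this shows the core idea.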
The way I interpret all of this is that written words in different
languages are pretty well covered by existing input methods, but many
symbols that are not part of a given language have varying degrees of
support. As such, it is valuable to continue to provide commands that
cover these symbols.
As for the font/typographic/output considerations, these are important
reasons to continue using text commands.
Given that users’ situations will vary, I foresee a mixed usage of
Unicode symbols and text commands. There will need to be a way to
handle this gracefully—and more robustly than, say, just asking users
to prefer text commands.
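One way to handle the mixed case, sketched here under the assumption that a single canonical form is wanted: normalize raw Unicode symbols back to their text commands (or the reverse) before further processing. The table below is hypothetical and ignores LaTeX spacing subtleties after commands:

```python
# Hypothetical normalization pass: map raw Unicode symbols to text
# commands so downstream tooling sees one canonical form.
TO_COMMAND = {
    "±": r"\pm",
    "α": r"\alpha",
    "®": r"\textregistered",
}

def normalize(text):
    """Replace each known Unicode symbol with its text command."""
    return "".join(TO_COMMAND.get(ch, ch) for ch in text)

print(normalize("x ± y"))  # → x \pm y
```

The same table, inverted, would instead prefer raw Unicode; the point is that the choice can be made mechanically rather than by asking users to police themselves.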