Unicode

Unicode is a standard for text encoding.
It defines a mapping of integers (code points) to characters in various languages.
Various text encodings differ in how that integer is laid out across bytes,
but regardless of the byte layout, the assigned number/character pairing is constant.
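
A quick Python 3 sketch to make this concrete (the characters are arbitrary examples):
the code point is a plain integer, and each encoding merely packs that integer into bytes differently.

    # Code points are integers; ord()/chr() expose the mapping directly.
    assert ord("A") == 65 and chr(0x20AC) == "€"

    # The same code point, U+00E9 ('é'), laid out by three encodings:
    assert "é".encode("utf-8") == b"\xc3\xa9"              # 2 bytes
    assert "é".encode("utf-16-le") == b"\xe9\x00"          # 2 bytes, little-endian
    assert "é".encode("utf-32-le") == b"\xe9\x00\x00\x00"  # 4 bytes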

For example, UTF-8 uses the first 1-5 bits of each byte to indicate that byte's type:
whether it begins a character (and how many bytes the character occupies), or continues one.
The remaining bits are concatenated into one integer, the code point, which may span several bytes' worth of bits.
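
Below is a minimal, illustrative Python decoder for a single UTF-8 character, assuming
well-formed input (a real decoder also rejects overlong forms, surrogates, and truncated sequences):

    def decode_utf8_char(data: bytes) -> int:
        """Decode the first UTF-8 code point in `data` by hand (sketch only)."""
        first = data[0]
        if first < 0b10000000:            # 0xxxxxxx: single-byte (ASCII)
            return first
        if first >= 0b11110000:           # 11110xxx: leads a 4-byte character
            length, bits = 4, first & 0b0111
        elif first >= 0b11100000:         # 1110xxxx: leads a 3-byte character
            length, bits = 3, first & 0b1111
        elif first >= 0b11000000:         # 110xxxxx: leads a 2-byte character
            length, bits = 2, first & 0b11111
        else:                             # 10xxxxxx: continuation byte
            raise ValueError("continuation byte cannot start a character")
        for byte in data[1:length]:       # each continuation byte contributes 6 bits
            bits = (bits << 6) | (byte & 0b111111)
        return bits

    # '€' is U+20AC; its UTF-8 form is the 3 bytes e2 82 ac.
    assert decode_utf8_char("€".encode("utf-8")) == ord("€") == 0x20AC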

UTF-1, UTF-7, UTF-8, UTF-16, and UTF-32 all map to the same character set defined by Unicode.
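
For example, round-tripping a string through the UTFs that Python ships codecs for
(UTF-1 is not among them) always recovers the identical code points:

    text = "naïve €"
    for encoding in ("utf-7", "utf-8", "utf-16", "utf-32"):
        raw = text.encode(encoding)
        assert raw.decode(encoding) == text   # same string, byte layout aside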

Documentation

wikipedia: https://en.wikipedia.org/wiki/Unicode