ASCII is relatively limited in the number of characters it can represent. Extensions of ASCII eventually incorporated many Roman-alphabet characters not found in English, making it broadly usable by speakers of most European languages, but many languages have characters that do not exist in English at all. In short, there wasn't room to represent scripts like Cyrillic, Greek, and the many Asian writing systems. Those languages had their own standards for representation, but these were not compatible with ASCII or with one another. Thus, new standards were developed to allow for interoperability.

The most important of these standards is Unicode, which can use multiple bytes to represent a character and its underlying number. What's neat (and very convenient) about Unicode is that it's backward compatible: a plain ASCII file is already valid Unicode in the common UTF-8 encoding. At the same time, the multi-byte format leaves room for a huge number of characters; Unicode defines over one million possible code points.

Have you ever wondered why an emoji on Facebook, Apple, or Android looks different? Or have you ever sent a text that was misinterpreted because the receiver wasn't seeing the same emoji as you? Consider the humble poop emoji:

[Figure: the poop emoji as rendered by Apple/iOS, Google Android, Google Hangouts, Twitter.com, LG Emoji, Samsung Emoji, and Phantom Open Emoji]

The reason the emoji you see differs by platform is that emojis are graphical representations of an underlying Unicode value. Each company designs and implements its own version of an emoji, just as it might use its own font. However, everyone receives the same numerical value!

Google "poop emoji binary" and copy the UTF-8 binary value below (remove any colons or spaces).
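To make the encoding concrete, here is a minimal Python sketch (the language and the sample characters are illustrative choices, not part of the assignment) that prints the Unicode code point and UTF-8 bit pattern for a plain ASCII letter, an accented letter, and an emoji:

```python
# Minimal sketch: show each character's Unicode code point (its underlying
# number) and the UTF-8 byte sequence that encodes it.
for ch in ["A", "é", "💩"]:
    code_point = ord(ch)        # the character's underlying Unicode number
    utf8 = ch.encode("utf-8")   # the same character encoded as UTF-8 bytes
    bits = " ".join(f"{b:08b}" for b in utf8)
    print(f"{ch}  U+{code_point:04X}  {len(utf8)} byte(s)  {bits}")

# Sample output for the first character:
#   A  U+0041  1 byte(s)  01000001   <- identical to its plain ASCII value
```

Running it shows the backward compatibility directly: the ASCII letter encodes to the single byte 01000001, its ordinary ASCII value, while the emoji takes four bytes whose leading 11110 prefix is how UTF-8 marks the start of a four-byte sequence.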