(12 points - Correctness) Huffman Encoding

(a) (5 points - Correctness) Create a Huffman tree for the following distribution of characters: 'b': 21, 't': 63, 's': 23, 'd': 57, 'a': 44, 'y': 70, 'p': 41, 'r': 69, 'i': 55, 'h': 34, ' ': 12 (the last character is a space; feel free to call it 'space' or ' ' in your tree). At each split, clearly indicate which side gets '0' and which gets '1'. At each split, you should assign '0' to the subtree with the lower total weight and '1' to the other subtree. If both subtrees have the same weight, you may assign '0' and '1' arbitrarily.

(b) (2 points - Correctness)
(i) Using your tree from (a), encode the sentence "happy birthday".
(ii) Using your tree from (a), decode the sequence '11101100101110100101101010011000110'. (If there are trailing bits at the end of your decoding, explicitly state what the trailing part of the sequence is.)

(c) (5 points - Correctness) Consider our algorithm for constructing a Huffman coding tree:

1. Determine the count of each symbol in the input message.
2. Create a forest of single-node trees containing symbols and counts for each non-zero-count symbol.
3. Loop while there is more than 1 tree in the forest:
   a. Remove the two lowest-count trees.
   b. Combine these two trees into a new tree (summing their counts).
   c. Insert this new tree in the forest, and go to 3.
4. Return the one tree in the forest as the Huffman code tree.

Let n be the total number of characters in the message we're building the encoding from, u be the number of unique characters in that message, and b be the number of bytes used to represent each character (pre-encoding). Derive a reasonably tight upper bound on the running time to construct a Huffman coding tree in terms of n, u, and/or b. While we do not need a formal proof, you must give enough of an argument that it is clear how you arrived at your answer. There might be more than one acceptable answer here, depending on the justification and assumptions you make. As long as your bound is correctly justified and reasonably tight, you'll get full credit.
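The forest-merging loop in part (c) can be sketched with a binary heap. This is an illustrative sketch, not the required answer; the helper names (`huffman_tree`, `codes`) are mine, not from the problem. It also hints at where a bound comes from: counting symbols touches the whole message (O(n), or O(n·b) if you charge b bytes per character), and each of the u - 1 merges costs O(log u) heap work.

```python
import heapq
from itertools import count

def huffman_tree(freqs):
    """Build a Huffman tree from a {symbol: count} dict.

    Mirrors steps 2-4 of the algorithm: start with a forest of
    single-node trees, then repeatedly merge the two lowest-count
    trees until one remains.
    """
    tie = count()  # tie-breaker so heapq never has to compare trees
    # Forest entries: (weight, tiebreak, tree); a leaf is (symbol,)
    forest = [(w, next(tie), (sym,)) for sym, w in freqs.items()]
    heapq.heapify(forest)                     # O(u)
    while len(forest) > 1:                    # u - 1 merges total
        w0, _, t0 = heapq.heappop(forest)     # lowest count -> '0' side
        w1, _, t1 = heapq.heappop(forest)     # next lowest  -> '1' side
        # Each pop/push is O(log u), so the loop is O(u log u).
        heapq.heappush(forest, (w0 + w1, next(tie), (t0, t1)))
    return forest[0][2]

def codes(tree, prefix=""):
    """Walk the finished tree and emit {symbol: bitstring}."""
    if len(tree) == 1:                        # leaf: (symbol,)
        return {tree[0]: prefix or "0"}
    left, right = tree
    out = codes(left, prefix + "0")
    out.update(codes(right, prefix + "1"))
    return out
```

For example, `codes(huffman_tree({'a': 5, 'b': 2, 'c': 1, 'd': 1}))` gives the most frequent symbol the shortest code, and no code is a prefix of another, which is what makes the decoding in part (b)(ii) unambiguous.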