Computers represent text as bit strings and rely on encoding schemes such as ASCII, which maps a limited set of 128 characters to bit patterns but falls short when representing the world's diverse scripts and symbols. This limitation led to Unicode, a standard that assigns numeric code points to more than 137,000 characters across languages and symbol sets; Unicode itself, however, is not an encoding scheme. To turn Unicode code points into bits, encodings like UTF-8 are used: UTF-8 represents each code point with one to four bytes and keeps plain ASCII text as a one-byte subset. In Python 2, the default encoding is ASCII, which makes Unicode handling error-prone and calls for strategies like "decode early, encode late." Python 3 resolves these issues by making all strings Unicode by default, introducing a separate bytes type, and adopting UTF-8 as the standard encoding, which simplifies handling of diverse character sets in programming.
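
ASCII's 128-character limit is easy to demonstrate; a minimal Python 3 sketch, where characters inside the ASCII range encode cleanly and anything outside it raises an error:

```python
print("A".encode("ascii"))      # b'A' -- within ASCII's 128 characters

try:
    "é".encode("ascii")         # 'é' (U+00E9) has no ASCII representation
except UnicodeEncodeError as exc:
    print(exc)                  # 'ascii' codec can't encode character ...
```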
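
UTF-8's variable-length behavior can also be observed directly: each character below is a single Unicode code point, yet its UTF-8 encoding spans one to four bytes (again a small Python 3 illustration):

```python
# Each code point encodes to between one and four bytes in UTF-8.
for ch in ("A", "é", "€", "🐍"):
    encoded = ch.encode("utf-8")
    print(f"U+{ord(ch):04X} -> {len(encoded)} byte(s): {encoded!r}")
```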
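
The "decode early, encode late" strategy (often called the Unicode sandwich) keeps all text processing on decoded strings and converts to or from bytes only at program boundaries. A sketch in Python 3 terms, where the same pattern applies:

```python
raw = b"caf\xc3\xa9"            # bytes arriving from a file, socket, etc.

text = raw.decode("utf-8")      # decode early: bytes -> str at the boundary
shouted = text.upper()          # all processing operates on text, not bytes

out = shouted.encode("utf-8")   # encode late: str -> bytes only on output
print(text, shouted, out)      # café CAFÉ b'CAF\xc3\x89'
```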
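
Python 3's separation of str and bytes is strict: the two types never mix implicitly, so encoding mistakes surface as immediate errors rather than silent corruption. A minimal sketch:

```python
text = "héllo"                  # str: a sequence of Unicode code points
data = text.encode("utf-8")     # bytes: the encoded byte values

print(type(text), type(data))   # <class 'str'> <class 'bytes'>

try:
    text + data                 # mixing str and bytes is a TypeError
except TypeError as exc:
    print(exc)                  # can only concatenate str (not "bytes") to str
```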