If you've ever wondered how old-school computer game graphics worked, YouTube user iBookGuy has crafted a nifty explainer.
Back in the 1980s, even a top-flight personal computer had only 64 KB of RAM, and the video chip had to share that tiny allotment with the computer's CPU. As a result, storing more than 1 bit of color per pixel would eat up too much memory to leave room for game code, so game engineers had to get clever.
Their solution was to divide a computer screen's pixels (and there were only about 64,000 in those days) into "color cells": 8×8 blocks of pixels. A 1-bit, black-and-white bitmap took only 8 KB of RAM, and engineers figured out they could add color to each cell for only about 1 KB more. The catch was that each cell couldn't contain more than two colors.
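The memory math above can be checked in a few lines. This sketch assumes a 320×200 display (the article only says roughly 64,000 pixels, but 320×200 is the typical resolution of machines like the Commodore 64) with one color byte per 8×8 cell:

```python
# Back-of-the-envelope memory math for a 320x200 screen with 8x8 color cells.
# (Resolution and one-byte-per-cell color are assumptions, not from the article.)
WIDTH, HEIGHT = 320, 200
CELL = 8  # color cells are 8x8 pixels

pixels = WIDTH * HEIGHT                      # 64,000 pixels on screen
bitmap_bytes = pixels // 8                   # 1 bit per pixel -> 8,000 bytes (~8 KB)
cells = (WIDTH // CELL) * (HEIGHT // CELL)   # 40 x 25 = 1,000 color cells
color_bytes = cells                          # 1 byte per cell -> ~1 KB extra

print(pixels, bitmap_bytes, cells, color_bytes)
```

So full per-pixel color would have been prohibitively expensive, but one color byte per cell added only about an eighth of the bitmap's cost.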
Computer graphics artists had to be very careful to camouflage this limitation, and when you look back at their work with this knowledge, it makes their achievements way more impressive.
To add even more color, the Commodore 64 offered something called "multicolor mode," which widened the pixels in each color cell to twice their width. This meant that although each color cell could now contain 4 colors—twice as many as before—the screen had to trade away half its horizontal resolution.
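The trade-off works out neatly: doubling the pixel width halves the horizontal pixel count, and spending 2 bits per pixel instead of 1 brings the bitmap back to exactly the same size. A quick sketch, again assuming a 320×200 base resolution:

```python
# Multicolor mode: pixels are twice as wide, but each uses 2 bits (4 colors).
# (320x200 base resolution is an assumption, not stated in the article.)
WIDTH, HEIGHT = 320, 200

# Standard mode: 1 bit per pixel
standard_bitmap = (WIDTH * HEIGHT * 1) // 8      # 8,000 bytes

# Multicolor mode: half the horizontal pixels, 2 bits per pixel
multicolor_bitmap = ((WIDTH // 2) * HEIGHT * 2) // 8  # also 8,000 bytes

print(standard_bitmap, multicolor_bitmap)
```

The bitmap costs the same 8 KB either way; the extra colors are paid for entirely in resolution.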
However, color cells weren't the only early solution for producing graphics with limited memory. The Apple II used a method called artifact coloring, and Atari used CPU-driven graphics. iBookGuy has promised to delve into these in a follow-up video. For now, we can take a trip down memory lane and be grateful for the monumental advances we've made in just thirty years.