# 255 in video games

Fans of retro video games will see the number 255 show up all the time, frequently as the maximum value of some attribute. A few other numbers also show up more often than most, including 15, 63, 127, and 65,535. But why were these particular numbers so common? In this article, I explain how the limits of 8-bit hardware influenced video game developers to use small numbers in retro games, especially 255.

## 255 Limit

The central processing unit, or CPU, of a computer or video game system doesn't count in decimal, but in binary. While we may see the number 87, the computer sees the binary equivalent, 1010111. CPUs use binary because they're based on electricity, and, when using electricity, it's much easier to detect whether a current is on or off than to distinguish several discrete levels. Since off and on translate naturally to 0 and 1, binary is the easiest number system for CPUs to use. A binary digit, which can be a 0 or a 1, is called a bit; in fact, the word "bit" is short for "binary digit."
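As a quick sketch, Python's built-in `bin` and `int` functions can demonstrate the same conversion between decimal and binary described above:

```python
# Decimal 87 as the CPU "sees" it, and the conversion back.
print(bin(87))            # the binary form of decimal 87
print(int("1010111", 2))  # and the binary digits back to decimal
```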

Nearly all of the CPUs used for the first three generations of video games were 8-bit CPUs. The term "8-bit" describes the maximum number of bits the CPU can use when performing math. This means an 8-bit CPU can recognize a range of binary numbers from 0 to 11111111. If you convert the binary number 11111111 into decimal, you'll find that it's equal to 255! So, on an 8-bit CPU, 255 is the largest value natively supported by the hardware, and that's why it's so common in retro video games.

The term "byte" now almost universally refers to a unit of information consisting of 8 bits, but in the earlier days of computing, some systems used a different number of bits per byte, like the 12-bit bytes found in minicomputers made by Digital Equipment Corporation. However, every retro video game console, and all of the home computers of the time, used 8 bits to a byte.

When an 8-bit CPU adds 1 to a byte which already holds 255, the typical result is for the byte to roll back over to 0. Failing to account for this rollover is the cause of several kill screens.
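The rollover behavior can be sketched in a few lines. Masking a sum with 0xFF (binary 11111111) keeps only the low 8 bits, which is effectively what an 8-bit register does on overflow:

```python
# A minimal sketch of 8-bit wraparound arithmetic.
def add_8bit(a, b):
    # 0xFF is 11111111 in binary, i.e. 255; the mask discards any carry
    # out of the 8th bit, just like an 8-bit register would.
    return (a + b) & 0xFF

print(add_8bit(255, 1))   # rolls back over to 0
print(add_8bit(250, 10))  # 260 wraps around to 4
```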

## Other Powers of 2

In binary, each time you add a new column, you increase to the next power of 2. 10 is 2, 100 is 4, 1000 is 8, 10000 is 16, and so forth. But, as you can see from the table below, getting to 256 would require 9 bits, and 8-bit CPUs don't natively support that.

| Decimal | Binary |
| ------- | ------ |
| 2 | 10 |
| 4 | 100 |
| 8 | 1000 |
| 16 | 10000 |
| 32 | 100000 |
| 64 | 1000000 |
| 128 | 10000000 |
| 256 | 100000000 |

With only 8 bits to work with, the maximum value for any given number of bits is a number of all 1s, and, since each new column is the next power of 2, a binary number of all 1s is always one less than a power of 2, as you can see in the table below.

| Decimal | Binary |
| ------- | ------ |
| 1 | 1 |
| 3 | 11 |
| 7 | 111 |
| 15 | 1111 |
| 31 | 11111 |
| 63 | 111111 |
| 127 | 1111111 |
| 255 | 11111111 |
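The pattern in the table above can be generated directly: an n-bit value of all 1s is always 2^n - 1.

```python
# Each n-bit all-1s value equals 2**n - 1, matching the table above.
for n in range(1, 9):
    all_ones = (1 << n) - 1  # shift 1 left by n bits, then subtract 1
    print(n, all_ones, bin(all_ones))
```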

Because old hardware had so little RAM, it wasn't uncommon for a game to store multiple values in a single 8-bit byte. The developers would use some bits for one number and the rest for another. For example, consider a byte with the value 10111111, where the two bits on the left are one number and the six bits on the right are another. In binary, this would be 10 and 111111, which is 2 and 63 in decimal.
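This packing technique can be sketched with shifts and masks, using the 10111111 example above: the top 2 bits hold one value (0-3) and the bottom 6 bits hold another (0-63). The function names here are just for illustration.

```python
# A sketch of packing two values into a single byte.
def pack(high2, low6):
    # Shift the 2-bit value into the top of the byte, then OR in
    # the 6-bit value (masked to ensure it fits in 6 bits).
    return (high2 << 6) | (low6 & 0b111111)

def unpack(byte):
    # Recover the top 2 bits and the bottom 6 bits separately.
    return byte >> 6, byte & 0b111111

packed = pack(2, 63)
print(bin(packed))     # the byte 10111111 from the example
print(unpack(packed))  # the two values, 2 and 63
```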

Now that you know what to look for, you'll probably start seeing all sorts of retro games where values top out at 15, 63, 127, and 255.

Because 255 was the limit on simple values, game developers had to design their games to work within this boundary. This is why so many games cap their values at small numbers and why you don't see many retro games using huge numbers. In fact, 99 is used as a maximum limit in hundreds of games, not just because it's aesthetically pleasing, but because it's the largest number made entirely of 9s that fits in an 8-bit byte. Had retro CPUs used 10-bit bytes, which can hold a maximum value of 1,023, there probably would have been a lot more games using 999 as their maximum.

## 16-bit on an 8-bit CPU

The CPUs found in the Genesis and SNES are 16-bit. This means they can natively work with binary numbers that have 16 binary digits, which take up 2 bytes in memory, so they can handle binary numbers from 0 to 1111111111111111, or 0 to 65,535 in decimal. Although 8-bit CPUs can't handle 16-bit numbers natively, they can be programmed to process two 8-bit numbers as though they were a single 16-bit number, but this comes at the cost of taking much longer for the CPU to calculate them. So, even though they're greater than 255, the 16-bit maximum value of 65,535, and sometimes the 15-bit maximum of 32,767, show up in various 8-bit games as well.
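A sketch of how an 8-bit CPU handles 16-bit addition: each value is split into a low byte and a high byte, the low bytes are added first, and the carry from that addition is fed into the high-byte addition (this mirrors the add-with-carry instructions found on 8-bit CPUs, though the code below is just an illustration, not actual assembly).

```python
# Simulate 16-bit addition using only 8-bit pieces plus a carry flag.
def add_16bit(a, b):
    lo = (a & 0xFF) + (b & 0xFF)               # add the low bytes
    carry = lo >> 8                            # 1 if the low add overflowed
    hi = ((a >> 8) + (b >> 8) + carry) & 0xFF  # add high bytes plus the carry
    return (hi << 8) | (lo & 0xFF)

print(add_16bit(300, 500))  # 800
print(add_16bit(65535, 1))  # wraps back to 0 at the 16-bit limit
```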

Because RAM was still limited on 16-bit consoles, most 16-bit games were programmed to use 8-bit values where the designers could get away with it. Because of this, it's not uncommon to see 255 limits even in 16-bit games. For example, the 16-bit Genesis game Sonic the Hedgehog uses 8 bits to store the player's lives, since it's very unlikely the player will ever increase that number beyond 255.

## Beyond 255

Earlier, I said that 8-bit CPUs only natively support numbers no higher than 255, which is 11111111 in binary, and can simulate 16-bit math, which maxes out at 65,535. But there are countless 8-bit games where numbers go well beyond these values, so how is this possible? When this occurs, the game is probably making use of something called binary-coded decimal, or BCD. BCD stores each digit of a number in its own byte rather than storing the entire number in a single byte. For example, in binary, the number 206 would be stored in a single byte as 11001110, but in BCD, 206 would be stored in three bytes, one byte for each digit: 00000010 (2), 00000000 (0), 00000110 (6). This gives the appearance of going beyond 255, but notice how none of the three bytes used to store the BCD value goes beyond 255 itself. BCD has the benefit of supporting numbers much larger than 255, at least as far as humans are concerned, but it comes at the cost of needing more RAM and taking longer for the CPU to calculate the value.
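The one-digit-per-byte scheme described above can be sketched as a pair of conversion functions (the names `to_bcd` and `from_bcd` are my own, for illustration):

```python
# A sketch of unpacked BCD: one byte per decimal digit, as in the
# 206 -> (2, 0, 6) example above.
def to_bcd(n):
    # Each decimal digit gets its own byte; every entry stays in 0-9.
    return [int(d) for d in str(n)]

def from_bcd(digits):
    # Rebuild the decimal number from its digit bytes.
    value = 0
    for d in digits:
        value = value * 10 + d
    return value

print(to_bcd(206))          # [2, 0, 6]
print(from_bcd([2, 0, 6]))  # 206
```

Note how a score like 999,999 needs six bytes this way, which is the RAM cost mentioned above.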

8-bit games frequently used BCD for numbers that were expected to become large, especially those intended to be displayed to the player in decimal, like the score.