What Are Bits in Programming and How Do You Read Them? (Examples Included)

What are bits?

In programming, a “bit” is the smallest unit of data that can be stored on a computer.

A bit can be a 1 or a 0.

Bits are used to represent binary numbers: numbers written in base 2, using only the digits 0 and 1.

A set of 8 bits is also known as a byte.

In a sequence of 8 bits (that is, 1 byte), each bit carries a weight, and the weights double from right to left: 1, 2, 4, 8, 16, 32, 64, 128.
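
If you'd like to see those weights for yourself, here's a minimal Python sketch that prints the weight of each bit position in a byte:

```python
# Print the weight of each bit position in a byte, left to right.
# The leftmost bit is worth 2**7 = 128; the rightmost is worth 2**0 = 1.
for position in range(7, -1, -1):
    print(f"bit {position}: weight {2 ** position}")
```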

What number does this binary number represent?

Now that we understand what binary numbers are, let’s take a look at some examples.

Example 1

Here’s an example:

  0    0    0    0    0    0    1    1
128   64   32   16    8    4    2    1
The top row is your binary number, and each slot holds a single bit; there are 8 bits, which together make a byte. The bottom row shows the binary weight associated with each bit above it.

The bottom row indicates how much each 1 is worth. So in this case, there is a 1 above the binary weight 1 and another 1 above the binary weight 2.

1 + 2 = 3

The number 3 in binary looks like this:

00000011
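
One way to double-check this is with Python, whose built-in int() can parse a string of bits directly when you tell it the number is in base 2:

```python
# int() with base 2 reads the string as a binary number.
print(int("00000011", 2))  # prints 3
```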

Example 2

Let’s try another example:

  0    0    0    0    1    1    0    1
128   64   32   16    8    4    2    1

There’s a 1 above the binary weight 1, another 1 above the binary weight 4, and another 1 above the binary weight 8.

1 + 4 + 8 = 13

The number 13 in binary looks like this:

00001101
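
Here's a short Python sketch that mirrors the hand calculation above, adding up the weight of every bit that is a 1:

```python
# Add up the weight of every bit that is a 1, mirroring the worked example.
bits = "00001101"
total = 0
for position, bit in enumerate(reversed(bits)):  # position 0 is the rightmost bit
    if bit == "1":
        total += 2 ** position  # this bit's weight: 1, 2, 4, 8, ...
print(total)  # prints 13
```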

Example 3

Last example:

  1    0    1    0    0    1    0    1
128   64   32   16    8    4    2    1

There’s a 1 above the binary weight 1, another 1 above the binary weight 4, another 1 above the binary weight 32, and a final 1 above the binary weight 128.

1 + 4 + 32 + 128 = 165

The number 165 in binary looks like this:

10100101
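
And to go the other way, Python's built-in format() can render a decimal number as 8 binary digits, zero-padded on the left:

```python
# format() with "08b" renders a number as 8 binary digits, padded with zeros.
print(format(165, "08b"))  # prints 10100101
```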
