## What are bits?

In programming, a “bit” is **the smallest unit of data** that can be stored on a computer.

A bit can be **a 1 or a 0**.

Bits are used to represent **binary numbers**, which are numbers written in base 2 — that is, using only the digits 1 and 0.

A set of 8 bits is also known as a **byte**.

In a sequence of 8 bits (aka 1 byte) consisting of 1s and 0s, the **weight of each bit grows by a factor of 2 from right to left.**
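To see those weights concretely, here's a small Python sketch (Python is just one possible choice here) that builds the weight of each position in a byte, doubling from right to left:

```python
# Weight of each bit position in one byte, from the leftmost bit
# (weight 128) down to the rightmost bit (weight 1).
weights = [2 ** i for i in range(8)][::-1]
print(weights)  # [128, 64, 32, 16, 8, 4, 2, 1]
```

Each position is worth twice the one to its right, which is exactly the bottom row of the tables in the examples below.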

## What number does this binary number represent?

Now that we understand what binary numbers are, let’s take a look at some examples.

### Example 1

Here’s an example below:

| 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 128 | 64 | 32 | 16 | 8 | 4 | 2 | 1 |

The bottom row shows each bit's weight — how much each 1 in the top row is worth. In this case, there is a 1 above the binary weight 1 and another 1 above the binary weight 2.

1 + 2 = 3

The number 3 in binary looks like this:

`00000011`
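If you'd like to double-check this yourself, one way (assuming Python) is the built-in `int()` function, which can parse a string of 1s and 0s as a base-2 number:

```python
# Parse the bit string as a base-2 number.
value = int("00000011", 2)
print(value)  # 3
```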

### Example 2

Let’s try another example:

| 0 | 0 | 0 | 0 | 1 | 1 | 0 | 1 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 128 | 64 | 32 | 16 | 8 | 4 | 2 | 1 |

There’s a 1 above the binary weight 1, another 1 above the binary weight 4, and another 1 above the binary weight 8.

1 + 4 + 8 = 13

The number 13 in binary looks like this:

`00001101`
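The same sum can be written out step by step in code — a sketch (again assuming Python) that multiplies each bit by its weight and adds up the results, just like we did by hand:

```python
# Example 2, recomputed: multiply each bit by its weight and sum.
bits = [0, 0, 0, 0, 1, 1, 0, 1]
weights = [128, 64, 32, 16, 8, 4, 2, 1]
value = sum(b * w for b, w in zip(bits, weights))
print(value)  # 13
```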

### Example 3

Last example:

| 1 | 0 | 1 | 0 | 0 | 1 | 0 | 1 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 128 | 64 | 32 | 16 | 8 | 4 | 2 | 1 |

There’s a 1 above the binary weight 1, another 1 above the binary weight 4, another 1 above the binary weight 32, and a final 1 above the binary weight 128.

1 + 4 + 32 + 128 = 165

The number 165 in binary looks like this:

`10100101`
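You can also go in the other direction — from a number to its binary digits. One possible way (assuming Python) is the built-in `bin()` function, or an f-string format spec to pad the result out to a full byte:

```python
# Convert 165 back into binary digits.
print(bin(165))      # 0b10100101 (the 0b prefix marks a binary literal)
print(f"{165:08b}")  # 10100101 (zero-padded to 8 bits)
```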