03-12-2022 03:56 PM
It is known that when we want to convert an IP address between binary and decimal, we use the number 1 and its doublings: 2, 4, 8, 16, 32, 64, 128, 256. Why do we keep multiplying by 2 - why 2 specifically? Thanks
03-13-2022 09:23 AM - edited 03-13-2022 09:24 AM
Actually we start with zero, not one.
In the binary system, each position is a power of 2.
i.e. (binary = decimal)
0 = 0
1 = 1
10 = 2
100 = 4
1000 = 8
10000 = 16
100000 = 32
etc.
Each position, in binary, is a power of two (as in decimal, each position is a power of 10).
11 = 3
101 = 5
110 = 6
111 = 7
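The positional sums above can be sketched in a few lines of Python. This is a minimal illustration (the function name is mine, not from the thread): each binary digit that is 1 contributes the power of 2 for its position.

```python
def binary_to_decimal(bits: str) -> int:
    """Sum the powers of 2 for each '1' digit, rightmost position = 2**0."""
    value = 0
    for position, bit in enumerate(reversed(bits)):
        if bit == "1":
            value += 2 ** position  # each position is a power of 2
    return value

print(binary_to_decimal("111"))       # 4 + 2 + 1 = 7
print(binary_to_decimal("11000000"))  # 128 + 64 = 192, a common first octet
```

Python's built-in `int(bits, 2)` does the same thing; the loop just makes the "why 2?" explicit.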
BTW, IPv6 uses hexadecimal, where each position is a power of 16:
1 = 1
10 = 16
100 = 256
and
11 = 17
.
.
1F = 31
.
.
FF = 255
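The same idea works for hexadecimal, just with 16 as the base. A small sketch (again, the function name is illustrative): each hex digit is multiplied by the power of 16 for its position.

```python
def hex_to_decimal(digits: str) -> int:
    """Sum digit * 16**position for each hex digit, rightmost = 16**0."""
    value = 0
    for position, digit in enumerate(reversed(digits.upper())):
        value += "0123456789ABCDEF".index(digit) * 16 ** position
    return value

print(hex_to_decimal("1F"))  # 1*16 + 15 = 31
print(hex_to_decimal("FF"))  # 15*16 + 15 = 255
```

As with binary, Python's `int(digits, 16)` is the idiomatic shortcut.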
If this still doesn't make sense, try searching the Internet for primers on different number bases, especially bases 2, 8 and 16.
PS:
Personally, "entering" networking from a programming background, I was surprised to see IPv4 used dotted decimal rather than hexadecimal, as computer addresses were often represented in hexadecimal, sometimes octal. The decimal representation, I believe, makes it a bit harder to see the shifting bit positions that determine the network vs. host portions of an address.
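To illustrate that last point, here is a small sketch (the address is just an example) printing the same IPv4 address in decimal, hex, and binary. In hex, each digit covers exactly 4 bits, so prefix boundaries on 4-bit multiples line up with digit boundaries; dotted decimal hides them.

```python
addr = "192.168.1.10"  # example address, not from the thread
octets = [int(o) for o in addr.split(".")]

# Pack the four octets into one 32-bit integer.
as_int = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]

print(f"decimal : {addr}")
print(f"hex     : {as_int:08X}")   # C0A8010A - a /16 split falls between A8 and 01
print(f"binary  : {as_int:032b}")  # any prefix length is visible as a bit position
```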
03-13-2022 02:04 PM
Thanks