I worked with networked Unix boxes for a long time before I discovered that a TCP/IP network address is actually a 4-byte integer that is just represented as a dotted-decimal number. That is, an address of 129.35.42.102 is actually the number 0x81232A66 in hex, or 2166565478 in decimal. I had previously thought it was a string like “129.35.42.102”.
When I finally figured it out, I thought it was brilliant to write the addresses as dotted-decimal. I find it much easier to work with addresses that look like 129.35.42.102 instead of 0x81232A66.
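For concreteness, here is a small Python sketch of the round trip, using the standard socket and struct modules (the helper names are just for illustration):

```python
import socket
import struct

def dotted_to_int(addr: str) -> int:
    """Pack the four decimal octets into a single 32-bit integer (network byte order)."""
    return struct.unpack("!I", socket.inet_aton(addr))[0]

def int_to_dotted(value: int) -> str:
    """Unpack a 32-bit integer back into dotted-decimal notation."""
    return socket.inet_ntoa(struct.pack("!I", value))

print(hex(dotted_to_int("129.35.42.102")))  # 0x81232a66
print(int_to_dotted(0x81232A66))            # 129.35.42.102
```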
Really, any 4-byte number could be written in that format, so why was it chosen for TCP/IP addresses? Why expose the address as dotted-decimal instead of just a hex number?
Originally, each eight-bit byte of the IP address was significant, because the original IP address space was divided into “network numbers” and “node numbers” (classful networking).
There were three useful classes of IP address, determined by their leading bits (see the sketch after this list):
Class A: leading bit 0, followed by a 7-bit network number and a 24-bit node number.
Class B: leading bits 10, followed by a 14-bit network number and a 16-bit node number.
Class C: leading bits 110, followed by a 21-bit network number and an 8-bit node number.
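Here is a rough Python sketch of how that split falls out of the leading bits (the classful_split helper and its return format are just illustrative, not any standard API):

```python
def classful_split(addr: int):
    """Return (class, network number, node number) for a 32-bit IPv4 address,
    using the historical classful boundaries described above."""
    if addr >> 31 == 0b0:       # Class A: 0 + 7-bit network + 24-bit node
        return "A", (addr >> 24) & 0x7F, addr & 0xFFFFFF
    if addr >> 30 == 0b10:      # Class B: 10 + 14-bit network + 16-bit node
        return "B", (addr >> 16) & 0x3FFF, addr & 0xFFFF
    if addr >> 29 == 0b110:     # Class C: 110 + 21-bit network + 8-bit node
        return "C", (addr >> 8) & 0x1FFFFF, addr & 0xFF
    return "D/E", None, None    # multicast / reserved ranges, not covered here

# 129.35.42.102 (0x81232A66) starts with bits 10, so it was a class B address:
print(classful_split(0x81232A66))  # ('B', 291, 10854) -> network 0x0123, node 0x2A66
```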
So the boundary between “network” and “node” always fell on one of the byte boundaries, which made it natural to write the bytes out individually. On the modern Internet, Classless Inter-Domain Routing (CIDR, usually pronounced “cider”) is used, so these class distinctions don’t mean a whole lot anymore. Instead, variable-length netmasks determine the network/node boundary, which may or may not fall on a byte boundary, making the dotted-decimal format somewhat archaic.
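For comparison, a sketch of a CIDR-style split with an arbitrary prefix length — the /20 here is chosen purely to show the boundary landing in the middle of a byte:

```python
def cidr_split(addr: int, prefix_len: int):
    """Split a 32-bit address into network and host parts for an arbitrary
    prefix length, which need not fall on a byte boundary."""
    mask = (0xFFFFFFFF << (32 - prefix_len)) & 0xFFFFFFFF
    return addr & mask, addr & ~mask & 0xFFFFFFFF

network, host = cidr_split(0x81232A66, 20)  # 129.35.42.102/20, boundary mid-byte
print(hex(network), hex(host))              # 0x81232000 0xa66
```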
Incidentally, IPv6 addresses, which are four times as long, are written as eight colon-separated groups of 16 bits (two bytes each) in hex, e.g. 3401:4dc2:83b3:d2c4:1542:4b2e:1123:bbd1
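A minimal sketch of that grouping, ignoring the :: zero-compression rule that real implementations also apply (the ipv6_groups helper is hypothetical):

```python
def ipv6_groups(value: int) -> str:
    """Format a 128-bit integer as eight colon-separated 16-bit groups in hex
    (without the :: zero-compression rule real implementations also apply)."""
    return ":".join(f"{(value >> shift) & 0xFFFF:04x}" for shift in range(112, -16, -16))

print(ipv6_groups(0x34014dc283b3d2c415424b2e1123bbd1))
# 3401:4dc2:83b3:d2c4:1542:4b2e:1123:bbd1
```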