What exactly is the I.P. address?

Okay, but assuming cost had not been an issue. In any case, it’s possible to devise a theoretical system for expansion before actually paying for the hardware to fill it out.

No. Not for the machines on the real networks, anyway. Nobody cared about your little toys at home because, frankly, those things couldn’t handle a network connection and would never be able to handle a network connection. (See where this is going?)

The original IP has been remarkably resilient. It has been adapted to handle almost any contingency. If I had to pick an Internet service that really needed better forethought, it would be email and its openness to spam. Since spam became common, they’ve been trying to plug holes in a process that was incredibly trusting and lacking in security.

GSM, and cellular in general, is a totally different beast working on a different design. You can roam with the same number because your phone company has a computer that keeps track of every physical phone signed up with them, its phone number, and where (and whether) it can be found anywhere in the world. When you show up near a compatible tower, your phone does a handshake, which causes the tower owner’s computer system to find your cell provider’s and update it. Phone networks have their own connections, optimized for continuous, timely delivery rather than bursts of bulk files.
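Very roughly, the bookkeeping amounts to a lookup table the home carrier keeps current (in real networks that job belongs to things like the Home Location Register). Here’s a toy Python sketch of the idea; the names and structure are mine, not actual GSM signaling:

```python
# Toy model of cellular roaming bookkeeping -- NOT real GSM signaling.
# The home carrier keeps a registry mapping each phone to its last
# known location; visited networks report in when a phone appears.

home_registry = {}  # phone number -> (network, tower) last seen

def handshake(phone_number, visited_network, tower_id):
    """A phone appears near a compatible tower; the tower's operator
    notifies the phone's home carrier, which updates its registry."""
    home_registry[phone_number] = (visited_network, tower_id)

def route_call(phone_number):
    """To deliver a call, the home carrier looks up where the phone
    was last seen and hands the call off to that network."""
    location = home_registry.get(phone_number)
    if location is None:
        return "unreachable"
    network, tower = location
    return f"route via {network}, tower {tower}"

handshake("+1-555-0100", "RoamTel-DE", "tower-42")  # phone lands abroad
print(route_call("+1-555-0100"))  # route via RoamTel-DE, tower-42
```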

When you browse the internet from your phone, you think you are sending IP packets. Instead, a program on the phone acts like a router or tunnel and forwards your packets to the cell company’s computer/router/firewall. Depending on the company, they may give you your own real IP address outbound from there, or you may get an internal IP address and be doing NAT behind their firewall. Those public IP addresses are getting to be valuable commodities, so if you don’t need one, you may not get one.
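You can often see which situation you’re in by comparing the address your device thinks it has with the address the outside world sees. A rough Python sketch; the echo service used here (api.ipify.org) is just one example of a what’s-my-IP service:

```python
import socket
import urllib.request

# The address your device uses locally (on mobile carriers this is
# often a private or carrier-NAT range like 10.x.x.x or 100.64.x.x).
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("8.8.8.8", 80))   # sends no traffic; just selects an interface
local_ip = s.getsockname()[0]
s.close()

# The address the rest of the internet sees for you.
public_ip = urllib.request.urlopen("https://api.ipify.org").read().decode()

print("local: ", local_ip)
print("public:", public_ip)
# If the two differ, something between you and the internet is doing NAT.
```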

When memory was expensive, and even Bill Gates was (apocryphally) quoted as saying “who needs more than 640K?”, it’s not surprising that 4 bytes was considered enough address space for the whole world. Despite doomsday predictions that we would run out last year or next year, things still work. IPv6 is not only bigger, it’s more complicated and does more strange and weird things, which is probably helping to delay its acceptance.
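For a sense of scale, 4 bytes versus IPv6’s 16 bytes is not a 4x difference; it’s astronomical. A quick back-of-the-envelope in Python:

```python
# IPv4 addresses fit in 4 bytes (32 bits); IPv6 uses 16 bytes (128 bits).
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128

print(f"IPv4: {ipv4_space:,} addresses")   # 4,294,967,296 -- fewer than people alive
print(f"IPv6: {ipv6_space:.2e} addresses") # about 3.40e+38
print(f"IPv6 addresses per IPv4 address: {ipv6_space // ipv4_space:.2e}")  # 2**96, ~7.92e+28
```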

I talked to a local phone company tech support person once who said IP addresses were determined by the login in your modem or PC. However, to ensure people didn’t start setting up servers at home with permanent addresses, they could shuffle the addresses whenever they wanted or needed to. For a permanent address you needed their more expensive commercial connection. This is entirely a service provider’s decision. Some have even taken to NATing their residential service, since addresses are scarce now.

That’s what they did. In 1974. It was just a clever way to pass data around, and look what happened! It didn’t really mature and become a consumer-level thing for twenty-five years or so, and even that required a standard on top of the basic protocol.

“Why, Mr Gutenberg, what use is your movable type printing press?”

Surely the world needs only four books printed total. Maybe a couple copies of the Bible would be nice. :slight_smile:

One per city, maybe :wink:

I seem to remember setting up networks in grad school for machines with 256K of memory.

I’d say an IP address is more like a street/house address. The first three numbers are like the street (ISPs generally own blocks of addresses that share the same first three numbers) and the last one is the house number.

If you up and move your laptop to a new location, say a hotel, you then are on a new network and have both a new street address and a new IP address.
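To make that analogy concrete with Python’s standard ipaddress module (the addresses here are just documentation-range examples):

```python
import ipaddress

# A /24 network: the first three octets name the "street",
# the last octet is the "house number".
home = ipaddress.ip_interface("203.0.113.42/24")
print(home.network)   # 203.0.113.0/24  -- the street (the ISP's block)
print(home.ip)        # 203.0.113.42    -- your house on it

# Move the laptop to a hotel: new network, new address entirely.
hotel = ipaddress.ip_interface("198.51.100.7/24")
print(home.network == hotel.network)   # False -- a different street
```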

I was talking about the computers that were meant to get IP addresses when the first versions of the Internet Protocol were being developed. Little LANs and dial-up-only networks like Fido weren’t relevant to my statement.

Yeah, and I remember trying to set up the IBM networking, only to find that VisiCalc, WordStar, or WordPerfect would not load because the drivers took up too much RAM. This is where the expansion memory cards first became popular; they would swap pages of high RAM into the over-640K location to allow the network to run and still give you enough RAM (high 400Ks) to allow most popular apps to run. Novell beat out IBM because its DOS drivers were a lot simpler and needed much less low RAM.

Then the fun part was running multiple stacks to run IP and Novell protocols on the same PC. It got worse. Novell eventually lost because they were great for small nets, but crap for wider areas and useless for the world.

It may be better to think of your computer as being like a landline phone. If you take it to someone else’s house and plug it into the jack in the wall, it takes on that house’s phone number; it won’t ring when someone calls the number at your house anymore. So the IP address is like a phone number, and your computer is like a landline phone. The analogy doesn’t work with cell phones.

Luxury.

First machine I attempted to program had 3.5K RAM. And I was glad of it.

There are examples in computing history where ample room was allowed for long-term growth from the beginning, but it’s tough to predict which systems will still be in use by the time the expansion is required. Every system built involves many decisions like this, and it’s always a balance: finding a reasonable size for something, the impact it has on performance, how much time to spend analyzing each decision, the extra labor to exceed the natural limits of the current computing infrastructure, and so on. It’s constant trade-offs.