Sticky IP Address vs. Sticky MAC Address?

“Sticky MAC” is a form of MAC learning, used to capture the MAC values that are then used for MAC filtering.

You click a button (shown in your web browser, on your modem/router configuration page), and all the dynamically learned MACs, the ones the modem/router has already seen, “stick” in the MAC filtering table (are entered into the MAC filtering table).

UDP port 53 is DNS.

DHCP is UDP ports 67 and 68, server and client respectively.
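
For anyone who wants to see port 53 in action, here’s a rough sketch that hand-builds a DNS query and fires it at a resolver over UDP, using nothing but the standard library. The 192.168.0.1 resolver address is just a stand-in for whatever your router or DNS server actually is.

```python
# Minimal sketch: a hand-rolled DNS "A" record query over UDP port 53.
# The resolver address below is a made-up example value.
import socket
import struct

def build_dns_query(hostname):
    header = struct.pack("!HHHHHH",
                         0x1234,      # transaction ID (arbitrary)
                         0x0100,      # flags: standard query, recursion desired
                         1, 0, 0, 0)  # 1 question, no other records
    qname = b"".join(bytes([len(p)]) + p.encode() for p in hostname.split("."))
    question = qname + b"\x00" + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(build_dns_query("example.com"), ("192.168.0.1", 53))  # DNS = UDP 53
reply, server = sock.recvfrom(512)
print(f"{len(reply)}-byte DNS reply from {server}")
```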

so close! :slight_smile:

To be fair, I didn’t know the ports and had to look them up :slight_smile:

Yes, to clarify - the subnet mask applied to the destination IP address tells the sender whether the device is local (on the same Ethernet subnet) or outside (needs to go through the gateway).

The sender then checks the ARP cache - if it does not already know the MAC associated with that IP address, it sends an ARP broadcast, i.e. “Hello, who is 192.168.0.77 ?” and gets a response which it adds to the ARP cache. It now knows the MAC address to send the … frame … to.

If it’s external, it does the same process, but to find the default gateway instead - “Who is 192.168.0.1 ?”

The ARP cache (show it with the DOS command ARP -A) is there to avoid constantly flooding the network with ARP requests. Once your PC knows the correct MAC, odds are it’s valid for a while, so don’t keep asking. Once the entry times out, the next time it needs to talk, it re-sends the ARP request.
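
Same idea if you want to poke at it programmatically - a quick sketch that reads the kernel’s ARP table out of /proc/net/arp (Linux-specific; on Windows you’d just stick with ARP -A):

```python
# Quick sketch: dump the ARP cache the same way "arp -a" does,
# by reading the kernel's table from /proc/net/arp (Linux only).
with open("/proc/net/arp") as f:
    next(f)                                # skip the header row
    for line in f:
        fields = line.split()
        ip_addr, mac_addr, device = fields[0], fields[3], fields[5]
        print(f"{ip_addr:<16} {mac_addr}  ({device})")
```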

IP is incredibly simple at its center - either send to a local address, or send to the default gateway and let it figure out how to get to the remote address. Gateways (routers, typically) then check an internal table and send to the best route available.
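
That whole sender-side decision fits in a few lines. Here’s a sketch using Python’s ipaddress module - the interface address, mask, and gateway are made-up example values:

```python
# Minimal sketch of the sender's routing decision described above.
import ipaddress

my_interface = ipaddress.ip_interface("192.168.0.10/24")  # my IP + subnet mask
default_gateway = ipaddress.ip_address("192.168.0.1")

def next_hop(destination):
    dest = ipaddress.ip_address(destination)
    if dest in my_interface.network:   # same subnet: ARP for the host itself
        return dest
    return default_gateway             # otherwise: ARP for the gateway

print(next_hop("192.168.0.77"))   # local  -> 192.168.0.77
print(next_hop("8.8.8.8"))        # remote -> 192.168.0.1
```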

How you get those tables is a nice topic to occupy a boring season or three.

And for another example: HTML.
It is one implementation of SGML. When Tim Berners-Lee created it, he deliberately broke a cardinal rule of SGML: errors have to terminate the rendering. SGML is designed for industrial-strength, professional-quality code where mistakes can be costly, up to deadly. Given that the WWW is exactly not that, Dr. Berners-Lee chose to make his rendering fail forward. Since that meant that buggy first-generation code would (mostly) work, and so developing a web page was quick and easy, everyone happily jumped on the bandwagon. The SGML advocates were horrified. And got left sputtering in the dust.

Yes, I completely agree – I’m a very strong believer in prototyping and incremental development. However, that doesn’t mean an informal, unstructured approach to development; it can be, and at its best is, a disciplined methodology. For large software projects to be successful and reliable, they need to start with an accurate and thorough requirements specification. They need to be guided by a robust architecture that compartmentalizes the functionality into abstract functional layers and defines the interfaces between them. There needs to be a functional specification that specifies in more concrete terms the solution that will be delivered, the components that need to be built and the relationships between them, and probably something that lays out a general plan for how the incremental development will be staged and evaluated.

It’s also important from an institutional point of view that there are common standards for documentation and signoff of all these formalities. You want to institutionalize the best practices for software quality so that they’re repeatable and subject to ongoing improvement.

That said, in most cases an excess of low-level specification and documentation is NOT a best practice. Software managers who insist on “detailed design reports” practically down to the code level, code walkthroughs and the like, before a single line of code is written for some massive project, should be taken out back and shot to get them out of the way. Or believers in the once-faddish ideas around “structured programming” – take them out back and shoot them, too. I would hope that someone’s code is readable and reasonably efficient, but beyond that I wouldn’t give a rat’s ass if it’s “structured”. Structured design is what matters.

There are, however, vast differences between ordinary business applications and mission-critical ones like avionics or spacecraft software, so approaches will vary accordingly. It’s surprising how often many of the same basic principles apply, though.

Yes, so much of the internet is ad-hoc. Pieces started with a simple idea, then extra pieces were tacked on to embellish or solve problems. Sometimes the basic flaw is unfixable, once a certain level of adoption has occurred.

I had the “opportunity” way back when to look over IBM token ring as a network; it had massively overdesigned pieces. What if the ring is broken? Oh, then we loop back so it’s initially a double ring; so every station needed a double feed. The cable was complicated. It lost to Ethernet which was breathtakingly simple by comparison.

Or email. What’s a bug now was a feature back then. Many computers on the “internets” would connect once in a while by modem; so another email server could store and forward mail until a connection was made. The number of email sites was minimal, and they were all sophisticated (?) academics. Therefore, sender verification was not built into email. Now, we’re scrambling with various schemes to try to prevent email of fraudulent origins.

HTML, yes, is another - not just bad code, but broken links. An IBM-type commercial business product would never have hit the market with the solution “if a link’s broken, someone will eventually tell you…” rather than complete verification of all links every time a page is loaded…

I agree - the key to a useable system is to have a general overall design, then get the key parts working. As each part is added, it will enhance the system. Too often, systems developers try to do the equivalent of design and build the entire city before you can move in, rather than one building at a time.

Crap.

I also had to set up a DNS forwarding rule. Must have mixed up those two.

Well, at least you know I didn’t look it up. :slight_smile:

In case anyone wonders why this is even necessary, it’s because Ethernet isn’t just a signaling protocol but defines its own data link layer, and does so in terms of two protocols, a MAC layer and a Logical Link Control (LLC) layer above it. So when local TCP/IP hosts or network devices on the same LAN talk to each other, they need to use the Ethernet data link layer and hence must address their frames to MAC addresses.
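
To make that concrete: every frame on the wire starts with a destination MAC and a source MAC before anything IP-related even appears. A rough sketch of an Ethernet II header (the addresses are made up; 0x0806 is the EtherType for ARP):

```python
# Minimal sketch of an Ethernet II frame header, to illustrate that
# every frame on the LAN is addressed MAC-to-MAC.
import struct

def ethernet_header(dst_mac, src_mac, ethertype):
    to_bytes = lambda mac: bytes(int(b, 16) for b in mac.split(":"))
    return struct.pack("!6s6sH", to_bytes(dst_mac), to_bytes(src_mac), ethertype)

hdr = ethernet_header("ff:ff:ff:ff:ff:ff",   # broadcast destination (made up)
                      "00:11:22:33:44:55",   # sender's MAC (made up)
                      ethertype=0x0806)      # 0x0806 = ARP
print(hdr.hex(":"))
```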

As you mentioned earlier, DECnet didn’t have this issue and didn’t need ARP because it changed the network adapter’s MAC address to a unique DEC-specific MAC that incorporated the DECnet address in its last couple of fields. My memory of this stuff is hazy, but since MAC addresses are usually fixed manufacturer-specific things, I assume that since DECnet was only implemented on DEC products and DEC made the Ethernet controllers for them, they were built with this capability for this express purpose. TCP/IP is more like Windows: it has to be designed for hardware that it has no control over.
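
If I’m remembering the Phase IV scheme right (treat this as hazy memory, not gospel), the adapter’s MAC was set to AA-00-04-00 followed by the 16-bit DECnet address (area * 1024 + node) in little-endian byte order, something like:

```python
# Hedged sketch of my recollection of DECnet Phase IV MAC derivation:
# AA-00-04-00 plus the 16-bit node address, low byte first.
def decnet_mac(area, node):
    addr = area * 1024 + node                  # 6-bit area, 10-bit node
    return "AA:00:04:00:{:02X}:{:02X}".format(addr & 0xFF, addr >> 8)

print(decnet_mac(1, 77))   # DECnet address 1.77 -> AA:00:04:00:4D:04
```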

IBM had one overarching requirement for a LAN when it selected token ring: “Not Ethernet”. Because DEC had then entered the DEC-Xerox-Intel Ethernet alliance and was not just launching many successful products based on Ethernet, but basing major corporate strategies on it, IBM wanted something that its omnipotent marketeers could use to differentiate it in a positive way. For years, they twisted and distorted the facts to try to establish token ring’s superiority, even putting out bullshit technical articles making fake allegations in a way that today we would call “fake news”.

Ah, the Ethernet-Token Ring war.

I remember the public release in 1980 of the 1st Ethernet standard. Computer magazines (friends of IBM) were decrying what a horrible system it was and how it would never work. It clearly had this or that problem.

Funny thing was, I had been using Ethernet for a few years and was amazed at how simple it was and how well it worked. The magazine writers clearly weren’t writing based on, you know, actual knowledge or facts or anything. Probably the only time such articles ever appeared, right? :wink:

As noted, one of the common reasons for changing a MAC address is when you buy a new router. Set the new one to the old one’s address. No need to try and remember your old user name and password.

I last set a device’s MAC address a bit over a week ago. Setting up a little network thingie. Its MAC address was in software, not hardware. I made sure the new OS software (Debian*) was using the original MAC address printed on the bottom, just for consistency.

  • systemd. So, time to learn that. Uuuhhhh.

I follow what has been said here, the pros and cons. Let me ask one more thing related to this: I understand MAC spoofing is a concern if only a handful of allowed machines are whitelisted. What methods might combat MAC spoofing?

MAC spoofing is difficult to detect and prevent at the network level, since there’s nothing in the standard Ethernet frame that gives you anything identifying about the machine other than its MAC address. At higher levels, though, you can look at various characteristics of the machine and build a machine profile (what operating system it is running, what service pack level is installed, what antivirus it is running, etc.), and if a machine gets on your network that doesn’t match its profile, you can shut down the port and prevent it from accessing anything.

This is obviously a lot more sophisticated than the simple MAC whitelist that you can configure in most routers, and it requires quite a bit more effort to set up and maintain.
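
As a purely illustrative sketch of the idea (the profile fields and values here are invented for the example), the check boils down to comparing what you observe against what you previously recorded for that MAC:

```python
# Illustrative only: flag a device whose observed fingerprint doesn't
# match the profile recorded for its MAC address.
known_profiles = {
    "00:11:22:33:44:55": {"os": "Windows 11", "av": "Defender", "hostname": "FINANCE-PC-07"},
}

def looks_spoofed(mac, observed):
    expected = known_profiles.get(mac)
    if expected is None:
        return True                        # unknown MAC: treat as suspicious
    return any(observed.get(k) != v for k, v in expected.items())

print(looks_spoofed("00:11:22:33:44:55",
                    {"os": "Linux", "av": None, "hostname": "kali"}))  # True
```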

Ah, yes - IBM claimed “with Ethernet, as traffic goes up, the congestion results in so many collisions that traffic grinds to a halt… Our Energizer Bunny token keeps going and going around the ring, so traffic always gets through.”

Then there were tests in real-world situations; 90% of sites were a server and clients, so many clients would ask for data and the server would dole it out. Until a client got its data, it didn’t generate more traffic; the server was basically a traffic cop - “I’ve heard your request for data, and I’ll get to it in turn.” Meanwhile, with high traffic, Token Ring also made some workstations wait, and the server could only go so fast. So in the real world, Ethernet performed pretty much as well as Token Ring. Then along came switches, thin-wire and eventually RJ45 plugs and UTP, and the hardware too became so much simpler than TR.

Plus software. We could never fit IBM’s Rube Goldberg software into a DOS 640K machine and leave enough room to run decent applications; Novell’s IPX was a breeze by comparison.

This is the only way to really prevent MAC spoofing. Physical security is also key, to come from a different angle. You really don’t want unauthorized people having access to unprotected network access ports.

QFT.

At one time, a truly marketable skill set was, “How good are you at juggling the order of drivers loaded into EMM386 to maximize the available 640K space left?”

AFAIK (I haven’t had to test it recently), it is quite possible to change the MAC address on a PC with software. A simple Google search tells me it’s in the NIC’s “Configure” settings from the Control Panel, or you can edit the registry.
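
On Linux it’s the standard “ip link” commands under the hood. A rough sketch (run as root; “eth0” and the address below are placeholders):

```python
# Minimal sketch of changing a NIC's MAC in software on Linux via "ip link".
# Requires root; interface name and address are made-up placeholders.
import subprocess

def set_mac(interface, new_mac):
    for cmd in (["ip", "link", "set", "dev", interface, "down"],
                ["ip", "link", "set", "dev", interface, "address", new_mac],
                ["ip", "link", "set", "dev", interface, "up"]):
        subprocess.run(cmd, check=True)

set_mac("eth0", "02:00:00:aa:bb:cc")  # locally administered address
```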

Stuff like this reminds me how much the industry has changed over the course of my career. Or changing a bunch of jumpers/dip switches, or managing IRQs. Makes me feel nostalgic and old.

Comes down to the hardware. The OS may provide a facility to change the MAC address, but the hardware needs to support the capability, and the device driver needs to provide access to that capability. It isn’t a given.

Ethernet and WiFi controllers have run a very wide range of implementations and capabilities. Time was when the cheapest devices offloaded just about all of their operation onto the OS. You could go and buy a $10 Ethernet card or a multi-hundred-dollar Ethernet card. The gap has shrunk, but the basics are still there. One provided little more than the most basic interface to the network and required the host computer to do all the work; the other did the entire job and included most of the TCP/IP stack, offloading significant work from the OS. Performance differences were significant. The expensive devices are complete computers in their own right, run an embedded OS, and provide highly complex and comprehensive configuration and control capabilities.

The biggest hurdle in MAC address spoofing is finding the address you need to spoof. That requires you to be able to snoop on the network and watch for a packet to go past. On WiFi this is of course trivial. On a wired network not so much.
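
On Linux, “watching for a packet to go past” can be as blunt as a raw AF_PACKET socket (root required); the first twelve bytes of any frame you catch are the destination and source MACs. A rough sketch:

```python
# Rough sketch (Linux, run as root): grab one frame off the wire and
# pull the source MAC out of the Ethernet header. 0x0003 is ETH_P_ALL.
import socket

sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0003))
frame, _ = sock.recvfrom(65535)
dst_mac, src_mac = frame[0:6], frame[6:12]
print("saw a frame from", src_mac.hex(":"), "to", dst_mac.hex(":"))
```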

Damn, I had forgotten about how good at that I was - I could equal or outperform the commercial optimizer nine times out of ten, mostly 'cause I could tweak the heck out of DECnet cards and drivers.

802.1X is port-based network access control, but it does require quite a bit of infrastructure, as well as smart switches. Add Network Policy Enforcement and you can restrict a device to an untrusted network space until it has verified its current security settings and identity.

You won’t find this functionality in a residential home router, though.