The ability to directly access and modify memory locations doesn’t make a language powerful, but it does make it dangerous, especially in the absence of memory protection. I imagine the reason it was there was that the whole system was so primitive it might have been the only way to get certain things done.
I’ve never been a fan of “structured programming”. It always struck me as excessively prescriptivist while producing minimal benefit. The way to avoid spaghetti code, and the key to writing reliable, maintainable software, is structured design. That is something Microsoft has generally never understood, which is why Word, for instance, is a mess of unmaintainable spaghetti code and why it periodically loses its mind and does crazy things like reformatting your entire document after a minor edit.
With direct access to memory locations, you can do literally anything, so yes, that does make a language very powerful. What it doesn’t do is make it efficient. There are a lot of things that a proper, well-designed language can do very easily, that BASIC can only do through PEEK/POKE-based kludges.
I think you might have a different definition in mind than I do. Structured programming isn’t “prescriptivist”–it’s a name for the control flow constructs that almost every language has converged on today. If/then/else, do/while, parameterized functions, local variables, and so on.
The alternative to structured programming is global variables and conditional gotos everywhere. Although it’s possible to write good code this way (some people do write decent assembly code), it’s difficult. But worse, without a common dialect for how to express things in a language, it becomes almost impossible for multiple people to work on the same code.
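To make the contrast concrete, here’s a minimal C sketch (purely illustrative, with made-up names) of the same little computation written both ways: once driven by conditional gotos, once with structured constructs.

```c
#include <stdio.h>

/* Goto style: the flow is whatever the labels and jumps say it is. */
int sum_positive_goto(const int *a, int n) {
    int i = 0, total = 0;
top:
    if (i >= n) goto done;
    if (a[i] <= 0) goto next;
    total += a[i];
next:
    i++;
    goto top;
done:
    return total;
}

/* Structured style: the same logic as a loop and an if. */
int sum_positive(const int *a, int n) {
    int total = 0;
    for (int i = 0; i < n; i++) {
        if (a[i] > 0)
            total += a[i];
    }
    return total;
}

int main(void) {
    int data[] = {3, -1, 4, -1, 5};
    printf("%d %d\n", sum_positive_goto(data, 5), sum_positive(data, 5));
    return 0;
}
```

Both functions compute the same thing; the structured version just makes the shape of the computation visible at a glance, which matters a lot more once several people have to read the code.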
The purpose of a programming language is to efficiently express your goals. Structured programming is a tool for expressing very basic concepts that programmers were already doing in an ad-hoc fashion.
More to the point, it doesn’t make it easy, which I assume is what you meant. The power of a language is a measure not only of what it can do, but how easily the required code can be written. Of course a language that can poke arbitrary memory locations can in theory let you do anything the machine can do, but what you’re effectively doing is writing machine code, without even the assistance of an assembler.
My point is that code itself shouldn’t even be a particularly important issue. That’s why Musk asking his software engineers to send him “samples of your code” to judge their competence was so comically useless.
In large software systems, structured design broadly speaking is the art and science of decomposing the system functionality into a hierarchy of independent modules linked only by well-defined interfaces. Doing it right can involve a great deal of analysis and sometimes the ability to develop deep abstractions of the system components.
If that’s properly done, how “structured” the code is within any one module is relatively unimportant. Not completely unimportant, but far less important than the overall system architecture. If a module is poorly coded, it can be thrown out and rewritten since its functionality and interfaces are well defined. If the system is poorly architected, you’re screwed.
Any experienced software engineer can “smell” a piece of code and determine if it was written by someone competent or not. It’s not any one thing–it’s a gestalt. The mere appearance of a goto isn’t a strong signal, but if you see a bunch of gotos where there should be some function calls, it’s a sign that someone hasn’t properly abstracted the problem in their head and in any case isn’t striving to write clear code.
While it’s certainly possible to have a bad architecture with good code and vice versa, in practice it doesn’t happen. The people who are bad at abstractions are also bad at architecture.
Structured programming is also how those interfaces are maintained in the first place. In most cases, a function call is the interface.
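As a toy illustration (hypothetical names, C used only for concreteness), a module’s public interface can be nothing more than a handful of function declarations around an opaque type:

```c
/* queue.h: the module's entire public interface (hypothetical example). */
#ifndef QUEUE_H
#define QUEUE_H

typedef struct queue queue_t;   /* opaque: callers never see the internals */

queue_t *queue_create(int capacity);          /* returns NULL on failure   */
int      queue_push(queue_t *q, int value);   /* 0 on success, -1 if full  */
int      queue_pop(queue_t *q, int *out);     /* 0 on success, -1 if empty */
void     queue_destroy(queue_t *q);

#endif
```

As long as those calls keep their meaning, the implementation behind them can be thrown out and rewritten, which is exactly the point about well-defined interfaces.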
As someone with a good deal of experience with large system architectures, including enterprise-wide system integration, I’ll say unequivocally that the ability to design large-scale architectures and the ability to write code are two very different skills. The fact that a good programmer will write reasonably structured code and compartmentalize common functionality into subroutines bears only the most superficial resemblance to the levels of abstraction involved in large-system architectures.
Not necessarily, and certainly not in a distributed computing environment, where the interfaces will typically be implemented in message-oriented middleware like IBM MQ or an object request broker architecture like CORBA, or just custom messaging implementations over common networks.
I think the real joke here is on the tech geeks who’ve flocked to the thread to discuss arcana, not the joke T-shirt. And who along the way have collectively left the poor OP shaking his head.
I was thinking of VBScript, which is what I’m most familiar with. And it is officially deprecated as of last year.
And by “familiar with”, I mean I have taken VBS files and tweaked them for my own use. I am far from an expert with them.
Apparently, VBScript is different from either VB itself, or VBA, or VB.NET. So there are at least 4 of them. And confusingly, VB.NET is now just called VB, and the original VB is called Classic VB.
At that level, we aren’t really talking programming anymore. The enterprise-level architecture has to take into account things like “do we actually have a division that can develop this module?” It’s as much a business decision as a technological one.
The vast majority of software engineers aren’t developing enterprise-wide architectures. But lots of them have influence over things at the module level. Shitty architectures at that level can still affect hundreds of engineers. Or a lot more, if the interface is exposed to the public.
Hence the “most”. But CORBA isn’t an exception anyway. We don’t call those Remote Procedure Calls for no reason. It’s just structured programming over a network. And OOP is a layer above structured programming, so all those object-oriented distributed architectures are even more restricted than the structured programming you disdain.
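A toy sketch of that point, with no real network or middleware and entirely made-up names: the “remote” call below is an ordinary function whose body just marshals its arguments into a message and hands it to a dispatcher, which is the essence of an RPC stub.

```c
#include <stdio.h>

/* Toy RPC sketch: the caller sees an ordinary function call, but the
 * body only builds a message. "Sending" is simulated by handing the
 * message directly to a dispatcher instead of going over a network. */

enum { OP_ADD = 1 };

struct message { int op; int args[2]; int result; };

/* "Server" side: decode the message and run the actual procedure. */
static void dispatch(struct message *m) {
    if (m->op == OP_ADD)
        m->result = m->args[0] + m->args[1];
}

/* "Client" stub: looks like a local call, is really a message. */
static int remote_add(int a, int b) {
    struct message m = { OP_ADD, { a, b }, 0 };
    dispatch(&m);   /* a real system would send and await a reply here */
    return m.result;
}

int main(void) {
    printf("%d\n", remote_add(2, 3));   /* prints 5 */
    return 0;
}
```

In a real deployment the dispatch step would be a send/receive over MQ, CORBA, or a plain socket, but the caller still just sees a function call.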
Back when I was doing technical writing, the head of the firm I contracted at decided the future was Python and everything had to be written in it henceforth. So he brought in a programmer who had worked with van Rossum, the creator of Python (or so the story went). This was an old-school, button-down, nine-to-five firm, and he was anything but. He had a habit of working late and then taking naps during the regular workday, by lying under his desk with his feet blocking the [very narrow] corridor. People objected to having to step over him to get around. The head of the firm soon removed that obstacle.
Right, but we’re also absolutely not talking programming when designing the architecture of large-scale individual software systems, whether or not they involve distributed computing. In fact, the choice of programming languages might not even have been made at that point, and the decision might be that many different ones will be used.
I don’t disdain it. I just don’t think it’s very useful, for all the reasons I’ve already stated.
It reminds me quite a bit of something else that was equally overrated: the “Capability Maturity Model” (CMM) for software development (since superseded and renamed CMMI), which I consider to be 20% useful at best and at least 80% bullshit. The bullshit part is the presumption that by following its methodologies, any software team can suddenly produce stellar software. Wrong. Completely and totally wrong. The only genuinely useful part is its templates and checklists.
The entire joke is that C is crude and dangerous while Python is cool and modern. They threw in the other languages because a two-language comparison wouldn’t be that funny. In reality, C++ and Java are trying to do the same thing: map OO semantics onto a C-like language, with Java trying to be a less-sharp tool than C++.
To me the meta-joke is that everyone treats Python like it’s some sort of elegant weapon, when really it’s just the current leader of the glue-language category previously held by languages like Unix shell, Perl, and Ruby. An experienced programmer would have represented Python not as a lightsaber but as a humble roll of duct tape.
Don’t get me wrong, Python is a good tool. Duct tape is useful and important! But it’s not particularly modern or sophisticated. It just happened to figure out the right feature set that covers 80% of use cases, and made it approachable. Python didn’t try to be trendy, cute, popular, or opinionated. It tries to provide useful and standard approaches for everything, and that’s why it’s become a useful standard. For that reason I think Python will be around quite a while (though of course no language is immortal).
I still think you have a weird perception of what structured programming even is. If/then/else isn’t very useful? Function calls aren’t useful?
Well yes, a lot of that business-oriented software process stuff is utter bullshit. The difference is that structured programming was developed by actual computer scientists like Edsger Dijkstra. While CMM and related models were developed by business process schmucks. Probably as a way of funneling money to parasitic consulting companies who just happen to have the magic solution to your process woes.
I think I’d disagree with the “opinionated”. There’s a reason why people call code “Pythonic” or not. And why it’s presented as the antithesis of Perl’s “There’s more than one way to do it” approach.
That’s a virtue when it comes to learning Python, of course. There’s an idiomatic style that people avoid straying from. So the code remains fairly readable no matter who wrote it.
I do fully agree with the duct tape bit. Nothing wrong with that, either. But it doesn’t do everything.
When did I say that? Of course they’re useful! They’ve existed in some form since the earliest days of FORTRAN. My point is that mandating specific kinds of control transfer and prohibiting others (like the unconditional GO TO) is fairly useless prescriptivism: it does little if anything to enhance code quality, and it does absolutely nothing to enhance the overall system architecture, which in non-trivial applications is far, far more important.
Saying that structured programming “isn’t very useful” is the same as saying “I don’t think FORTRAN 77 should have added IF/END IF”. Structured programming is the name we gave to adding these features to languages back then.
C doesn’t prohibit goto. It’s just that the structured constructs are preferred for most uses. Goto is still very useful in certain situations. Many (perhaps most) languages retain goto.
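For example, here’s a common legitimate use of goto in C (a sketch with made-up names): jumping forward to a single cleanup path when any step of a multi-step setup fails.

```c
#include <stdio.h>
#include <stdlib.h>

/* Copy src to dst, releasing whatever was acquired if any step fails.
 * The forward gotos give one exit path instead of deeply nested ifs. */
int copy_file(const char *src, const char *dst) {
    FILE *in = NULL, *out = NULL;
    char *buf = NULL;
    int rc = -1;
    size_t n;

    in = fopen(src, "rb");
    if (!in) goto done;
    out = fopen(dst, "wb");
    if (!out) goto done;
    buf = malloc(4096);
    if (!buf) goto done;

    while ((n = fread(buf, 1, 4096, in)) > 0)
        if (fwrite(buf, 1, n, out) != n) goto done;
    rc = 0;   /* success */

done:
    free(buf);              /* free(NULL) is a no-op */
    if (out) fclose(out);
    if (in) fclose(in);
    return rc;
}

int main(int argc, char **argv) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s src dst\n", argv[0]);
        return 1;
    }
    return copy_file(argv[1], argv[2]) == 0 ? 0 : 1;
}
```

The Linux kernel’s C code uses this pattern all over the place, which is a big part of why goto has never actually gone away.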
Even Dijkstra’s famous “Go To Statement Considered Harmful” letter wasn’t calling for an outright prohibition on goto. It was pointing out that other constructs generally lead to clearer code. And it was written back in 1968, when languages like FORTRAN didn’t even have block-level IF statements.