I don’t interact with much code anymore but, when I do, I find myself slipping into a different kind of mindspace (whatever that means) when it comes to memory, concentration and visualization. Where I can’t normally remember a number more than 5 or 6 digits long, I can recall a lengthy variable name or address while interpreting code. It takes a day or two to fully fall into it, before that concentration of logically working through the project kicks in.
This. Often pretty shitty notation, but you can discern the melody.
Sheet music::algorithm
Composer::analyst
Arranger::programmer
Musician::computer (no disrespect intended, all analogies are imperfect)
I got my bachelors degree in music composition (before I realized I’d have to actually earn a living.) Eventually fell into computers and worked as a programmer/analyst. While I wouldn’t go so far as to say the thought processes are identical, the “feel” is very much the same. And it has nothing to do with language. I was/am abysmal at learning any foreign language, but made a pretty good living programming (and enjoyed it, too.)
That would be me. I was working offshore rigs and got curious about the survey team’s computer. On late night shifts, I began messing around with it (after watching them during the day) and figured out how to use Basic and even managed a few primitive color drawings out of the printer. Eventually was caught and had to endure the IT guy clutching his pearls and howling over a dirty, hard-hat wearing bubba touching his “precious”. I was banned from ever approaching it again, but remained curious.
I retired last year from a DARPA research lab, coding UAV flight controls and introducing some very basic AI to my software. I originally attended college on a music scholarship, and agree with others here that programming is closer to music composition than any other “language” skill (that I can think of).
Good on you, and the IT troglodyte should have his head boiled. Back in the 70s, when the Palo Alto Research Center (PARC) was still a Xerox research facility, they famously originated much of modern PC technology – the first PC (Xerox Alto), the first GUI, the first mouse, Ethernet, the laser printer, and much more. One of the secretaries working there was also curious about some of this wonderful work, way ahead of its time. Instead of rebuffing her curiosity, the researchers there – most of whom were highly accomplished PhDs in their fields – taught and encouraged her. Sometimes walking through the elements of your work with a bright but inexperienced person helps to clarify your own thoughts. Anyway, the upshot of this is that she eventually became a valued research contributor in her own right.
You never know where you’re going to find talent. One thing I’ve noticed in good academic environments and research institutions is that the brightest and most accomplished researchers tend to have an attitude of respect and assumption of competence when dealing with less knowledgeable people (until and unless there is good reason to believe otherwise) and it’s the lesser jackasses who tend to go around with a default attitude of disdain and superiority.
I would say that this is a misunderstanding of what lawyers do.
It makes sense for programming to meet the language requirement in education. It is important for the educated to understand the process that largely governs their daily lives. Knowing what’s behind AI is of greater value than being able to order in Spanish.
I think he’s right when applied to contract law, depending on your point of view. A well organized and clear contract prevents problems. Obviously some lawyers profit from the lack of clarity in contracts written by others just as I profited from the lack of clarity in software written by others, but assuming we’re being professionals then logical organization and clarity are an advantage in both fields.
This applies to trial law as well in some cases, especially a bench trial, where the judge needs the broader or more detailed case rather than the kind of presentation a jury might get. A judge is less likely to be swayed or distracted in the way a jury might be, and may not look so favorably on a case presented that way.
Also, I think lawyers need a better understanding of human language and human nature than programmers do when we are talking specifically about coding as opposed to the design of human interfaces. And sometimes such understanding and coding skills appear to be mutually exclusive, but now we are speaking of designing and writing code, something far more complex than just the ability to read code.
I agree with your general sentiment, but not the part about it making sense as a language requirement. It should be its own requirement; perhaps there should be technology requirements now (I think there are in some places). I will digress to add that there are benefits to learning additional human languages, but the practical benefits are already dwindling with the use of technology for translation.
But the whole point of this thread is that it’s not the same kind of thing. Just because it’s valuable doesn’t mean it should meet the language requirement.
I agree that a basic understanding of computer and networking technology is an important part of education. I respectfully disagree with the rest of your argument. First, knowing a computer “language” should not fulfill any actual language requirement in any specific educational program, because whatever it is that is thought to be beneficial in learning another language (or, for that matter, acquiring a better grasp of linguistics in general, or even just improved literacy in English) has nothing to do with learning a programming language; programming languages and natural languages are almost completely unrelated, as previously noted.
The other fallacy I think is the idea that having a grasp of how any programming language works is going to give the student a better understanding of computer science in general; it will give only a very narrow, myopic view of one very specific aspect of the field. What is needed is a high level overview course called something like “Basics of computing, programming, and networking”. Knowing only the rudiments of a simple programming language is not only very inadequate to a broader understanding of digital technology, it may indeed lead the student to misguided ideas like the old saw that “computers can only do what they’re programmed to do” and therefore AI is never “real” intelligence, which is completely wrong. It also provides no understanding of the basics of the internet, one of the most important technological foundations of our times, or the role of operating systems.
People should at least be educated enough that when they get a phone call from “Microsoft Windows” telling them their computer is infected, they should confidently be able to either laugh at them, string them along, or hang up on them.
When I taught assembler, our emphasis was not in showing how higher level languages are like assembler, but in getting the students to write assembler as if it were a higher level language. Structured programming was new back then, and we tried to get them to write loops in a way that resembled loops in Pascal.
The first assignment was a word processing program in Pascal, and we came down hard on them when they wrote spaghetti code. (We later threw out that assignment when grading.) It helped a lot. As a TA it was much easier to help the students when their structured assembly language program didn’t work than when a total mess of code didn’t work.
I wrote the same way. It really helps. One day my class asked me to write a recursive factorial calculator in assembler on the board - I could do it using structured assembler.
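For anyone curious, here is the algorithm that blackboard exercise would have implemented, sketched in Python rather than assembler (the structured-assembler version mirrors this same call/return shape, just with explicit stack and register management):

```python
def factorial(n: int) -> int:
    """Recursive factorial. In structured assembler, each recursive
    call becomes a push of the argument and a branch-and-link; the
    base case is the loop exit condition."""
    if n <= 1:                       # base case: 0! = 1! = 1
        return 1
    return n * factorial(n - 1)      # recursive case
```

The point of the exercise was that even at the assembler level, keeping this recursive structure visible (rather than hand-unrolling it into spaghetti) makes the code readable.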
So I disagree. Computer languages aren’t for the computer; they are for the programmer. They let you write faster and more accurately, and make it easy to optimize the parts of the code which slow the program down, not the parts which don’t make much of a difference. When I was in college, using PL/1 to write Multics was controversial, but that ship has sailed.
BTW, compiler optimization sometimes makes the assembly language code that comes out of the compiler look nothing like what a person would write.
Computer languages don’t work like that, in the computer. However, to a person reading them, they may very well work like that. For example, I’ve been using SAS pretty heavily for 32 years now. The dumbest syntactical error you can make is to forget the semicolon required at the end of every statement. But I can still miss this error when reading my code or somebody else’s.
Tripolar, Thudlow, Wolfpup,
Thanks for the comment.
Basically agree. The language requirement is another topic.
However, I do believe that programming is a necessary, general skill. Not so much for creating bottom-up software products, but simply for using and maintaining applications software. At the local wildlife refuge the fire crew, biologist and hydrologist all used Python and ARC-GIS. The same was true at the BLM for the fire crew, archaeologists and range managers. The Armendaris ranch had a small staff of programmers to track wildlife, record bison activity and maintain the legal boundary description of the 600 square mile ranch. And during elections we wrote various language routines to extract voter lists from the state database.
These are folks who have mud on their boots and live a long way from Silicon Valley.
Is programming a valuable skill for people in general? Sure, I can go along with that. All else being equal, a person who knows how to program is more well-rounded and capable than a person who doesn’t. Is it sufficiently valuable that it should be required for everyone to learn it? Well, maybe. There’s only room for so many required classes, and so you’d have to decide what you’re doing away with to make room, and then make the case that programming is more useful than that other thing. Is knowing how to program more useful for a teacher than knowing how to speak Spanish? That, I’d say, is a clear “no”. I teach primarily science and math, and even so, the only times I’ve ever made use of my programming knowledge while teaching have been when I’ve been teaching programming itself. There have been plenty of times that other computer skills have been useful, but not the skills that are encompassed specifically by programming. And even though my Spanish-language skills are at a Sesame Street level, I’ve had plenty of occasions where even that rudimentary skill level has been useful in teaching, and plenty more occasions where a higher degree of proficiency would have been useful.
Agreed. And like programming, a good contract lawyer has a nose for the unexpected corner cases and inadvertent ambiguities.
Any dumbass can write a contract for “You make it and I’ll buy it.” Anticipating the most likely gotchas and handling them up front while both parties are still trying to get to “yes” is far faster, cheaper, and less aggravating than the alternative of “The contract is silent on that point so let’s just sue 'em for general non-performance.”
My wife is a retired contracts lawyer. My undergrad was CS and I’ve been in and out of IT since before starting college. Why yes, our casual conversations can be just scintillatingly fascinating, which is to say infuriatingly precise.
Oh yeah, back on topic. The OP discusses allowing students to substitute programming for the language requirement.
It makes perfect sense to do so. It does not change the requirement for others.
I had a stint at Justice where I worked in the Regulations section. How I got there considering my education doesn’t make for an exciting anecdote but it was a personally interesting hiccup in my employment history.
I was sort of struck by some of the broad similarities between regulatory writing and programming, but only in the sense that the law had to be written agonizingly clearly so both language versions said the same thing. The precision in language required to avoid lacunae kind of struck me as broadly similar to the attention to detail required to avoid memory leaks.
I do want to emphasize “broadly”, though, having written it one too many times. Of course, not only was I never a lawyer, I never had meaningful legal training beyond what I learned on the job either.
Memory leaks are a relatively subtle problem in coding. The more common bugs are failure to recognize when, say, conditions X, Y, and Z all occur simultaneously – or condition Z’ which no one had even thought of. That applies to good software developers. Shitty ones, like the ones who churn out a lot of the stuff for Microsoft, tend to simply ignore boundary conditions, with the implicit attitude “meh, that won’t happen often enough to worry about”!
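A toy illustration of the simultaneous-conditions trap (the function and surcharge values are hypothetical, not from any post above): handling each condition on its own is easy; the common bug is checking a single condition first so the combined case silently falls into the wrong branch, or never writing the combined branch at all.

```python
def shipping_surcharge(oversize: bool, hazardous: bool) -> int:
    """Hypothetical surcharge rules. The combined case must be
    tested FIRST: if `oversize` were checked first, an oversize
    hazardous item would quietly get the oversize-only charge."""
    if oversize and hazardous:
        return 50    # combined case: needs its own handling and test
    if oversize:
        return 20
    if hazardous:
        return 15
    return 0
```

The developer who never thought of condition Z’ is the one who never wrote that first branch.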
I once confronted a Microsoft exec about their shitty software, and the only defense he could come up with was that they test exhaustively where it really matters, such as file systems. Oh, yeah? FAT32 had (and possibly still has) a problem where under certain conditions it will decide that you have zero disk space even when you have lots. NTFS tends to have far fewer problems due to far greater redundancy and fixability (is that a word?) inherent in its structure, but it owes its robustness – like the NT kernel itself that is now the basis of all Windows operating systems – to talent that was imported from outside of the crappy Microsoft culture, and was often in conflict with it.
For me, reading code is just a series of steps to accomplish a goal.
I’ve always needed to fully understand what needs to be accomplished. I read the steps in the code and mentally check off things that should be done. Have they initialized their work variables, read the first record from the file, checked for EOF? Then I look at the processing as it loops and reads each record. What are they doing with the data? What is the control logic? What’s the output? A new file? Updating the master file? Screen output? Is there a printed report?
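That mental checklist maps onto a very old program shape. A minimal Python sketch (the file path and the per-record work are hypothetical stand-ins): initialize the work variables, loop until EOF, do something with each record, produce the output.

```python
def process(path: str) -> dict:
    """Classic read-loop skeleton: init, read, check EOF, process,
    output. Here the 'processing' is just counting records and
    characters as a placeholder for real work."""
    total = 0                      # work variables initialized
    count = 0
    with open(path) as f:          # open the input file
        for line in f:             # loop; iteration ends at EOF
            record = line.strip()
            if not record:         # skip blank records
                continue
            total += len(record)   # "what are they doing with the data?"
            count += 1
    return {"records": count, "chars": total}   # the output step
```

Reading unfamiliar code, I'm essentially checking that each box in this skeleton is filled in somewhere.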
It’s a mental exercise and can be frustrating reading other people’s code. I often think of how I would have written it.