Is !(p && q) exactly the same as !p || !q?
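For plain bools in C or C++ the answer is yes, by De Morgan’s law (and the short-circuit evaluation works out identically in both forms). A quick exhaustive check, offered just as a sketch:

#include <cassert>

int main() {
    // Check all four combinations of p and q.
    for (bool p : {false, true})
        for (bool q : {false, true})
            assert(!(p && q) == (!p || !q));   // De Morgan's law holds
}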

Going even deeper with pointers–
At work I use systems with two entirely different architectures running at once. Just completely different memory/execution models, etc. One machine (C) manages the other (G). Sometimes C will allocate memory for G, getting back a virtual address in G’s address space. C needs to hand over that address so that G can crunch on it. What does G do with it? Well, in C/C++, G just casts the raw 64-bit number to a pointer and then works on it normally. G can address it like a pointer to a single object, or an array, or whatever it needs. Maybe it’s an array of floats, or a complex struct, or something even more advanced. Most languages don’t allow anything like this mode of operation. With some basic language extensions, you can even work with multiple address spaces at once (for example, external DRAM and internal SRAM).
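A minimal sketch of that hand-off on G’s side, with invented names (the real code obviously has more ceremony around it):

#include <cstddef>
#include <cstdint>

// Hypothetical: raw_addr is the 64-bit value C handed over, which is really
// an address in G's own address space. G just reinterprets it and goes.
void scale_buffer(std::uint64_t raw_addr, std::size_t count) {
    float* data = reinterpret_cast<float*>(raw_addr);  // now it's an ordinary pointer
    for (std::size_t i = 0; i < count; ++i)
        data[i] *= 2.0f;                               // index it like any array
}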

No. Think about a program that simply responds to a human: As long as the program is perceptually instantaneous, it isn’t wasting any human’s time, even if it’s a thousand times less efficient than something written in C would be.

This is an important distinction to make, because it points up how the world has changed since the old days: Back then, hardware tended to be so slow (and/or spread so thin among so many users) that there was a real, perceptible difference between a program written in assembly and a program written in anything else, once processing got beyond the trivial level. That’s no longer true, and pretending it is just wastes programmer time, both in the initial writing phase and in the debugging phase.

Eh, you can have all the pointers you want if you write a virtual machine in Python and use that. Boom, your integers can now be pointers. Remember that Turing completeness is about computable functions, not making physical hardware jump and shake. That’s what I was talking about in that part.

On the subject of emulation and how slow high-level languages aren’t:

Fabrice Bellard wrote a PC emulator, not in Python, but in Javascript. You can run Linux in your browser. JSLinux.

Then there’s PCjs, a whole raft of emulators written in Javascript, also running original binary code, this time MS-DOS and the VT-100 ROMs and others.

My point is twofold: Hardware is fast, and high-level languages have gotten fast in that our strategies for compiling, executing, and GCing them have improved over the last few decades. So “wasting computer time” doesn’t necessarily mean wasting human time.

Tools that are perceptually instantaneous are, in my experience, the exception rather than the rule. My benchmark is 1/60 of a second, or one screen refresh. Though perhaps I should upgrade to a 144 Hz screen.

I recently fixed a bug in a tool at work that used an O(N^2) algorithm for something that should have been O(N). When originally written, years ago, N was small and the difference imperceptible. Today, N is ~10000 and the tool takes a second or two to run.

That doesn’t sound like much, but the tool is used by hundreds of engineers, often dozens or hundreds of times per day. 2000 days * 100 engineers * 100 times/day * 1 s = 20 million seconds, or about 231 days wasted.

I fixed the problem in 2 hours (including verification, etc.–the actual fix took 5 min). So the productivity gain was something like 2500x.

A 1 s delay in tool execution does not sound like much–it’s almost instantaneous–but it adds up. I’m not even counting the real dollar cost it caused our automated systems, which may run the tool thousands of times a day and make the automation farm that much less efficient.

Many scripting languages take a second or so just to initialize their execution engine.

Sure, but when does it cease to be the original language and become a new VM for some different language? Years ago I coined the saying, “once you can show it is Turing complete, everything else is merely engineering”. If we are talking about languages, I think it is reasonable that we draw the line somewhere. Anyway, none of these integer pointers are able to reference Python or Java objects. (You could add another layer and have an indirection table, I guess.) No matter what, your language now has two incompatible pointer types that can’t be interchanged, so you have not really added mutable pointers.
I remember back in the days of Fortran IV crafting a call stack out of an array so a recursive algorithm could be implemented. I don’t think this is an argument that Fortran IV supported reentrant functions.
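For anyone who never had to do it, the trick looks roughly like this when transplanted into C++ (purely illustrative, not Fortran IV):

#include <cstddef>

// Fake the recursion with a hand-rolled "call stack" kept in a plain array,
// the way you had to in a language without reentrant calls.
unsigned long long factorial(unsigned n) {
    unsigned frames[64];                        // home-made call stack
    std::size_t top = 0;
    while (n > 1) frames[top++] = n--;          // "push" each pending call
    unsigned long long result = 1;
    while (top > 0) result *= frames[--top];    // unwind them in reverse
    return result;
}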

I think this is very true. There is a very wide spectrum. I find myself sliding from remarkably inefficient “get it done right now so I can see the results” code to incredibly carefully crafted code that will run for hundreds of machine hours and where even small gains are meaningful in terms of human time as well. The goal posts move about, but their nature stays the same.

I’d run across these before but decided to try them again. The first starts you in a basic Linux shell with a compiler. It took 9 seconds to execute “gcc hello.c”. Fast it ain’t.

It is possible to have reasonably fast JS emulators. However, they work by treating JS as basically a VM, where the target is a subset of JS that interpreters can efficiently turn into machine code. It’s clever, but a bit of a fake. No one writes code in that VM; you use compilers (C++ or whatever) to target it.

Here is a quick little demo I wrote via Emscripten, which compiles C++ to JS. It works quite well. But calling it Javascript code is a stretch–it’s fast because it targets a fast subset of JS. The actual code is unreadable.

Hardware is slow and has stagnated in the past several years. Single-thread performance has improved only if you use the various instruction set enhancements, which compilers are still bad at targeting.

Hardware is only fast if you use massive threading and are math-limited. That covers a lot of ground but for an average developer that just wants to write a naive sequential program, computers have not gotten faster at all in a long time (unless you were disk-limited, in which case SSDs have helped).

I have an example where even constant speed-up saved many millions of dollars!

For a huge amount of programming speed doesn’t matter. But there is much important code where speed is very important. And sometimes improvements in speed are desirable no matter how fast it already is (e.g. in weather forecasting).

But this may have little to do with language choice. Are there any cases where an O(N^2) algorithm can only be replaced with O(N log N) by switching languages?

I hope someone starts a “Greatest programming disasters” thread in IMHO. My contribution would be a multi-million dollar effort to perform a trivial task. I was called in at the last moment: the Shift Summary process took more than eight hours to run, so the factory couldn’t run 3 shifts a day until this was fixed!

I doubt it, but language choice can help. In my example, I replaced a linear search with a hash table. C++ conveniently includes one via STL (we also have some internal versions for specialized use). Any scripting language worth mentioning also has a hash table implementation. But C does not have one built-in, so the dev would have to roll their own if not otherwise available.
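To make the shape of that fix concrete, the pattern is roughly this (a simplified sketch with invented names, not the actual tool):

#include <string>
#include <unordered_map>
#include <vector>

struct Owner { std::string name; int id; };

// Replaces a linear scan of the owner list per record (O(N^2) overall) with a
// hash table built once, so each subsequent lookup is ~O(1).
std::vector<int> resolve_ids(const std::vector<std::string>& records,
                             const std::vector<Owner>& owners) {
    std::unordered_map<std::string, int> by_name;   // built once: O(N)
    for (const Owner& o : owners) by_name[o.name] = o.id;

    std::vector<int> ids;
    ids.reserve(records.size());
    for (const std::string& r : records)
        ids.push_back(by_name.at(r));               // throws if a record has no owner
    return ids;
}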

Excel? Not allowing for breaking out into VB, just lots of cells full of expressions referencing other cells. The world is filled with horrible, horrible spreadsheets like that.

Yes, you could use Excel to implement a new VM - just make all the cells memory locations and emulate a simple ISA. But if you don’t go there, and just use Excel without implementing a new execution engine of some form, I suspect you could easily find such cases.

I hope my post didn’t come across as too argumentative; I think it’s an interesting discussion (and the entire thread is a fun tangent at this point), and I was enjoying the analysis and looking forward to your response.

I don’t think there is one answer; you could analyze languages from a variety of different perspectives and end up with different “sameness” factors for the same sets of languages.

Oddly enough, Dr. Strangelove, Numerical Methods is my go-to example for Fortran code written in C, too. Of course, they’re also the ones who made a big deal about avoiding language quirks and idioms for easier portability, but then used a bitshift instead of a divide by 2 in their binary search program.
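(The quirk in question, reconstructed here for illustration rather than quoted from the book, is spelling the midpoint as a shift:)

#include <vector>

// Illustrative binary search; for non-negative values, (hi - lo) >> 1 and
// (hi - lo) / 2 are equivalent -- the shift is just the sort of quirk the
// authors told everyone else to avoid.
int binary_search(const std::vector<int>& v, int key) {
    int lo = 0, hi = static_cast<int>(v.size()) - 1;
    while (lo <= hi) {
        int mid = lo + ((hi - lo) >> 1);   // the quirky spelling of / 2
        if (v[mid] < key)      lo = mid + 1;
        else if (v[mid] > key) hi = mid - 1;
        else                   return mid;
    }
    return -1;                             // not found
}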

septimus, by “shift summary”, you just mean stuff like “Bob clocked in at 9:02 and clocked out at 4:57, and made 73 widgets”? How the heck do you butcher that so badly that it takes eight hours?

At its core it was even more trivial than that! … And was just the tip of the iceberg in that project’s stupidities. I really hope someone starts the IMHO thread (or in Game threads: “Describe a software fiasco even worse than that described by previous poster”) — I might win the thread.

The project was using adds to a relational table, where a log file would be more appropriate. It was too late to change that, but I sped up shift-summary by maintaining simple counters in shared-memory instead of doing complex relational database queries.

I don’t know, my browser is perceptually instantaneous when it comes to editing text and scrolling. Loading a page can cause a delay, but that’s because going out to a network is slow.

Similarly, Emacs, written largely in Emacs Lisp, not a speed demon of a language by any stretch, is instant unless I’m doing something disk-intensive.

See? This isn’t what I was talking about. That program isn’t just waiting on user input. It’s doing a lot of processing over a large dataset.

Yes, that’s what I was talking about, and now I’ve spent quite some time on a rather flip remark. :wink:

That goes to levels of abstraction, and modern languages are quite good about not letting you peel back the covers like C and C++ let you. I’ll go to my grave saying Python is strongly-typed because it insists on always having a well-defined state, even an error state, which preserves abstractions, as opposed to C and C++, where you can get at bit-level representations of any data type through a few casts.
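The sort of cover-peeling being described, as a quick C++ sketch (using memcpy so it’s at least well-defined behavior):

#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    float f = 1.0f;
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);   // copy the raw bit pattern out
    std::printf("%08x\n", bits);           // prints 3f800000 (IEEE 754 for 1.0)
}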

Ah, FORTRAN, where you, too, could change the value of 7.

As you alluded to above, my actual point was that all Turing-complete programming languages can evaluate any computable function, given “sufficient” (not “infinite”, but “sufficient”… the difference being hugely pedantic beyond even my tolerance for pedantry) storage and time.

And there will always be room for the whole spectrum of languages. There’s just more room at the top end, now, than there was, and taking advantage of it is important for program correctness and safety. We can’t play around with unsafe languages like we could when networking was intermittent and people couldn’t jiggle all the doorknobs in the IPv4 address space on a whim.

Compiling is a stress-test, yes, but simpler programs run very quickly.

It’s stagnated at the GHz level, and we have enough RAM for even capacious runtimes. That makes a big difference.

Many pages are render-limited. It’s true that network latencies can often hide other stuff, but even on a gigabit connection at work, the web is pretty slow.

It’s just a simple command line tool. It sets some entries in a little local database. Usually it’s run a few times in a row to set a handful of settings, so waiting for it to complete definitely blocks user input.

Every language with binary file IO (that is, every non-toy language) lets you get at bit representations. And most don’t even need you to go that far:
>perl -e "printf '%08x', unpack 'I', pack 'f', 1.0;"
3f800000

Personally, languages without static type safety don’t give me what I want out of a type system. Without getting into a debate as to what strong/weak typing even means, languages that defer most type checks to runtime are toys as far as I’m concerned. Useful toys, often, but at a certain point they become not good enough.

That’s “Hello, world!”, not the Linux kernel.

Well, it does mean that VM-based languages with bloated runtimes are no longer so unreasonable. I guess that’s a good thing.

Aside: Programmers sometimes look at the list of symptoms of ADHD, and assume that it’s just a description of the personality type common among programmers. Not so. I once watched an ADHD classmate programming while hyperfocused. He was literally writing

if(x == 0) y=2;
if(x == 1) y=3;
if(x == 2) y=4;
if(x == 3) y=5;
...

and would have continued that way all the way up through

if(x == 480) y = 482;

if I hadn’t stopped him.

He should have at least written a script to generate that code automatically :).

Emphatically agree.

Typing in Python is remarkably often misunderstood. As is C++'s.

C++ lets you build all sorts of things, often by violating type safety. Python lets you build all sorts of things within the language without violating safety. Both have the problem that you can end up with mutually incompatible extensions implemented by add-on libraries.

One of the foibles on the Stupidest Project Ever that I mentioned upthread (though it wouldn’t make the project’s Hundred Worst Flaws List) was code like
#define MYFLOOBLEGAK_ROSE__BAZBAZ HISFLOOBLEGAK_ROSE__BAZBAZ
#define MYFLOOBLEGAK_PINK__BAZBAZ HISFLOOBLEGAK_PINK__BAZBAZ
#define MYFLOOBLEGAK_BLUE__BAZBAZ HISFLOOBLEGAK_BLUE__BAZBAZ

#define MYFLOOBLEGAK_RU_BORED_YET__BAZBAZ HISFLOOBLEGAK_RU_BORED_YET__BAZBAZ

The final punchline for the project came when the customer insisted that the already-delivered system be ported from VAX to IBM, with an IBM compiler. The IBM compiler used only the first x and last y characters of each identifier, so the above symbols would all need to be replaced. (Yes, a competent programmer could do this minor task in a few minutes with global edits.) The project manager called me into her office, said she was going to quote $1 Million(!) as the cost to port to IBM, and asked me for my opinion. I could only shake my head in disbelief. In hindsight this was one of many missed opportunities for me — I should have offered to do the whole thing for a quarter-million or so. :slight_smile:

Today reminded me why I love C++.

I have a machine with very limited capabilities. 1024 bytes of RAM, though it goes slower if you use more than ~256. No indirect jumps. No real stack (and so no recursion, etc.). Slow branching. Heavy memory alignment restrictions. Etc. (the upside is that the machine does about 10 trillion operations per second).

I wanted to implement a message passing scheme from this machine to the main CPU. Something like printf, where you pass a format string and several arguments, but more safely. And with the result getting bundled up and sent over a bus.

I used variadic templates, which our compiler supports despite the limited machine capabilities, because most of the heavy lifting is done in the compiler. Not only is the system completely type safe (with the only type coercion happening in a small number of easily-inspected places), but it’s able to do bounds checking on the destination buffer, enforce the machine’s alignment requirements, and a few other things all at compile time. Which means that the whole data packing operation gets compiled down to a few instructions, and if you screwed something up you know it at compile time, not runtime.
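Nothing like the real code, of course, but to give a flavor of how the compile-time side of such a scheme can work, here’s a toy sketch (the buffer size and names are invented):

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <type_traits>

template <std::size_t N>
struct Message {
    std::uint8_t bytes[N];
};

// Pack a list of arguments into a fixed buffer. The allowed types and the
// total size are checked entirely at compile time via variadic templates.
template <typename... Ts>
Message<64> pack(const Ts&... args) {
    static_assert((std::is_trivially_copyable_v<Ts> && ...),
                  "only plain data can cross the bus");
    static_assert((sizeof(Ts) + ... + 0) <= 64,
                  "message exceeds the destination buffer");

    Message<64> msg{};
    std::size_t offset = 0;
    // Fold over the arguments, copying each one into the buffer in turn.
    ((std::memcpy(msg.bytes + offset, &args, sizeof(args)), offset += sizeof(args)), ...);
    return msg;
}

A call like pack(std::uint32_t{42}, 3.14f) boils down to a couple of copies the compiler can see straight through, while an oversized or non-trivial argument list is rejected before the program ever runs.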

Of course the machine is way too limited to support any kind of VM or other bytecode interpreter. It’s fast only in very particular ways, none of which are suitable for that. I could use C or a few other non-bytecode languages, but none offer anything like the template support that C++ has. The next best approach probably would have been some script-based code generation, which is a pain in the ass for all kinds of reasons (though I’ve resorted to it before).

… you’ve just described all languages which have to deal with I/O.

Type systems are, inherently, about ensuring you’re not talking nonsense. They can do this by preventing you from adding integers to floats, by preventing you from copying a variable of one struct type into a variable of another struct type, or by preventing you from adding a length in inches to an age in days, regardless of how those things are represented at the machine level. C++ sits about at the second tier, BTW: It is smart enough to know, and not too anal to admit, that adding “5 + 0.1” is both valid and the kind of headache a programmer signed up for, but isn’t usually smart enough to know that inches and days can’t be added without a conversion factor.
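To make the inches-plus-days idea concrete: that higher tier is the kind of check you only get in C++ if you wrap the raw numbers in distinct types yourself. A two-minute sketch:

// Hand-rolled "third tier": distinct wrapper types mean lengths and ages
// can no longer be mixed by accident.
struct Inches { double value; };
struct Days   { double value; };

Inches operator+(Inches a, Inches b) { return {a.value + b.value}; }
Days   operator+(Days a, Days b)     { return {a.value + b.value}; }

int main() {
    Inches height{68.0};
    Days   age{12000.0};
    Inches taller = height + Inches{2.0};   // fine
    // Inches oops = height + age;          // refuses to compile
    (void)taller; (void)age;
}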

(I have a whole rant about types versus size specifications, but I’ll let it be for now. Just note that int is not a type, if we’re being truly precise about things.)

So, every time you accept input from the outside world, the type of that input must be checked, to see if someone entered “fourtee-fahve” when you asked for age in years, or “buy me dinner first” when you asked for sex. This checking could be automated to an extent, but it is type checking, and it must take place at runtime. It’s also the most important type checking a typical program does, especially in C++, where the attitude to buffer overflows is "It Can’t Happen Hereeeqsascdsfwefsdfwesfsdwes

That is a wholly different thing from language-defined type checking, which is about ensuring that the code is properly written to handle the arrangement/structure of the data it is working on. You are talking about inspecting data content to make sure that it conforms to the requirements of the working process – basically, parsing.
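As a tiny sketch of what that runtime check amounts to in practice (an invented helper, nothing more):

#include <charconv>
#include <optional>
#include <string_view>

// Either the text really is a plausible age in years, or the caller gets nothing.
std::optional<int> parse_age(std::string_view text) {
    int age = 0;
    auto [ptr, ec] = std::from_chars(text.data(), text.data() + text.size(), age);
    if (ec != std::errc{} || ptr != text.data() + text.size())
        return std::nullopt;                 // "fourtee-fahve" ends up here
    if (age < 0 || age > 150)
        return std::nullopt;                 // parses as a number, but still nonsense
    return age;
}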