True. I taught PDP-11 assembler in grad school, and when I saw the first C tech report from Bell Labs in '74 or '75 my first reaction was “++, --? This is a high level version of the PDP-11 assembler.”
However I did use C’s predecessor BCPL on Multics a bit later.
You could say the same about assembly on some of those things (superscalar pipelining and multicore), but my point wasn’t that C was close to silicon; it was that C was close to the operating system in terms of resource management.
(The fact that a lot of OSes are written in C these days is evidence for my point.)
Resource management dictates a lot of how large programs are structured. There’s a whole language, Rust, which is really picking up steam these days because it promises memory safety without garbage collection. It does this by making memory ownership semantics an explicit part of the language syntax. That’s how important data life cycle management is once your program grows much beyond one screen of code.
In a language with a GC’d runtime, there’s no question about how memory is managed: The GC owns all of it, you get to use it for as long as you hang onto it, and once you drop it, the GC mechanism will clean it up when it gets around to it. This allows programmers to ignore memory usage details at a design level and create complex data structures with very little code.
In C, all of that’s done by hand. The heap is yours, you manage it using malloc() and free(), and you don’t even have RAII at the language level to automate any part of it. You can do RAII by hand. You can do object-orientation by hand. My point is, in C, you have to do those things by hand, and it impacts the structure of the program.
Objective-C is still C, and you’re responsible for memory management just as you are in C. The runtime (not the language per se) does manage reference counts, and that only applies to Objective-C reference types. If you alloc(), you’re still responsible for free(). All the runtime does is count your strong references, and when that count hits zero at the end of a context, it calls free() for you. Weak references to reference types are not counted, and this can lead to freeing something you don’t mean to free, or not freeing something that was meant to be freed. Crashes are still possible (dereferencing a deallocated object), and memory leaks are still possible (retain cycles are a common cause).
This is an awesome convenience (and saves a lot of lines of code): you can alloc an object on the heap in a method or function, and you don’t have to free it at the end of the context (much like a stack allocation). But if you assign it to something outside of the context, you still have to worry about weak and strong references.
In all, ARC is not something you can just forget about; you still have to worry about memory allocation to a large degree.
Although AGC (automatic garbage collection, such garbage!) came and went quickly (it’s deprecated, and not even an option with today’s tools), you can still opt out of ARC and set reference counts manually, if you want that pain. The runtime will honor your retain and release counts, and provide automatic deallocation when it hits zero. Definitely a runtime feature, and not a language feature.
Except for Java, I disagree. The others function exactly like C but have specific additions that support their overlays. The C specification is too spare to be independent from its additions. One can easily argue that stdlib is not literally part of C, in that there were versions of C for classic Macintosh OS that omitted large parts of it because they were inappropriate for the environment, but the language was still C. C is not Python or Haskell or Fortran, but whatever has C in the name is more than just related to C.
Looking at the underlying type system is really the touchstone of differentiating languages. If the type system is the same, the rest is just syntax (almost).
One of the most critical questions is simply - does your language allow you to construct a pointer?
C does, as does C++. Java absolutely does not. Nor does Python. C++ provides no real protection of type integrity, although it does put up some basic roadblocks to violating it.
Other questions include: structural versus named type identity? First-class types? Reflection? And so on.
C++ was originally written as a pre-processor for a C compiler. (Nothing wrong with that; Cython does the same, and there is a long and inglorious history of others. f2c, anyone?) However, for a long time any legal C code could be incorporated into a C++ program and be expected to work. That is still largely true.
To me, it’s simple. I know C, but I don’t know C++, Objective C, nor C#. And there are surely some who know one or two of those latter languages, but not the other(s). How can that be, if they’re the same language?
No, clearly they are not the same language at all. They share the same expression syntax and a lot of the same structure syntax, but none of these are really the core of the language. A lot of people get hung up on syntax, when it is the least important part of a language.
Like I said above, if the type systems are the same you are very close to the same language, even if the syntax is quite different.
Syntax is like a dialect. The use of begin/end versus { and } is just the accent.
As we said in the military: “If you can order a beer, a burrito, and a bimbo in the local language, you’re fluent!” For folks who don’t dev for a living, mastering the syntax and the IO basics constitutes “knowing the language”. That’s GI-level fluency.
Somebody like that can read C, C#, Objective C, and much of Java or Javascript without too much difficulty. And given a few samples in an unfamiliar C-like language, bang out useful programs in that same language up to maybe a couple hundred lines of code. C itself, like Latin, is probably the hardest because it’s the most old-school and demands a lot of “ceremony” to do the background stuff.
I’ve never written a line of C++ in my life. But I often read such code and absorb the gist, just like I can sorta read a Spanish-language newspaper. But for damn sure I’d never claim “I know C++”, unless I really wanted to look stupid in the next few minutes.
At one point back in the late '60s and early '70s, there wasn’t much more to languages than the syntax. That hasn’t really been true since the late 1970s, if even that late. But that doesn’t stop people even now from claiming “I know XYZ” when they mean “I know XYZ syntax”.
Bottom line: syntax is not what matters. Like beauty, it’s only skin deep.
Well, another view is that no one really knows all of C++. So “I know C++” is equivalent to “I know a subset of C++”. And since C is an almost perfect subset of C++, “I know C” is equivalent to “I know C++”.
Here is an observation: suppose I simply ask, “what is C?” Well, since the first compilers and books came out, the answer has changed over time, and decent compilers will let you specify precise versions like C90, C99, or C11.
Well, true. I suppose that I could write a C program, rename it as “program.cpp”, and compile it as a C++ program, and it’ll work. So in that sense, you could say that I know C++. But I still can’t do anything at all that distinguishes C++ from C.
Another way to consider whether two languages are the same: can you transliterate one to the other with nothing more complicated than a script that replaces keywords, reserved words, and other syntax terminals? This should be essentially no harder than when an interview on TV with someone speaking your language in a thick accent gets subtitled. You are allowed some simple restructuring as well; maybe you need to synthesise a do-while loop, or translate a switch to a cascading if, or fudge function return values. (No worse than Yoda-speak, really.) But past that, you should need no other translation.
The type rules should be the same, available parameter passing mechanisms should be mappable, and so on. Now, the question is, can you write a translation system that can do it both ways?
I’m sure most programmers have been down the path of needing to transliterate code at some time or another. Usually it is being given an algorithm in one language and needing to get it working in another - which usually doesn’t cause too much grief. Doing things like transliterating Matlab into Python/Numpy isn’t desperately hard. But good luck trying it the other way.
Once you get to programming in the large, almost all transliteration becomes a vastly different and harder question, because, syntax isn’t the language.
I have written stuff in Objective-C, but have run into issues that could only be handled below the object layer. In strict C. I have written utility C functions for objects to use, in traditional C. Objective-C adds some constructs that facilitate method invocations and object descriptions, which would take a traditional C programmer less than 5 minutes to ken (it is working in OOP that is initially difficult to master). But apart from those few small additions, the language is exactly like C. It compiles and runs exactly the way C does. In no way is it unlike C, other than the OOP methodology.
I was mostly joking of course, but I do think C++ is a bit of a special case. C++ is a set of fairly orthogonal extensions to C. It’s C with classes. It’s C with generic programming. It’s C with operator/function overloading. It’s C with namespaces. And more recently, it’s C with lambda functions.
All of these can be mixed and matched, and depending on the application not all of the extensions are a good fit. Classes are a great extension for embedded programming, for instance: they compile down to the same code but make it more organized. It starts getting worse if you use virtual functions, templates, etc., but there’s no need to use that extra stuff.
Francis Vaughan’s challenge applies to classes (sans inheritance) and templates (which are basically just a fancy preprocessor). It’s tougher to argue about the whole shebang, though as noted, the first C++ “compilers” were just converters to C.
This is mostly wrong: Cfront compiled C++ to C, but it didn’t use C as anything other than a compiler output. That is, Cfront was an actual C++ compiler, and it did all its own type checking and syntax checking and so on, but its output stage generated C instead of assembly language or machine code.
You can compile Pascal to C, but that doesn’t mean your Pascal compiler is a pre-processor for the C compiler.
How in the world does C# function exactly like C?
I can see you saying that about Objective-C, which is, in fact, a strict superset of ANSI C, and I can see you saying that about C++, if you don’t actually know C++, but saying it about C# is just… mind-blowing.
I’ll give you that. I was using “preprocessor” in too loose a manner. Cfront was indeed a fully fledged compiler. I was being scathing about the history of C++, as I really never liked it (despite having written many thousands of lines of code in it, including quite a bit that forms parts of currently marketed commercial software packages). But my dislike comes more from what I regard as a vastly too great reliance on much of the C heritage, such as the C preprocessor and the inherent limitations that created, and many other things that yielded a language in which you could do almost everything equally horridly.
I did mention f2c and Cython in the same breath, and they are also fully fledged compilers.
C is not a subset of C++. There are constructs in C which are invalid C++, and constructs in C which are bad C++.
Good C++ means using C++ constructs. Not all of them, but more than you can use in C.
Think of it like a job interview question: If I ask you if you know C++, and you say “yes”, based on a knowledge of C, I’m not going to be happy when I hand you some C++ code with templates and iostream functionality and you ask me why I’m trying to bit-shift using “Hello, world!” as a count.