There are really only a small number of these constructs. It’s easy to write to the common subset of both. I occasionally have a need to port straight C to C++ and the changes are always minor (usually some casting here and there) and almost always improve the code. I’m not aware of any constructs which get worse.
It all depends on the application. For some applications, good C++ really is just barely beyond good C. For others, it’s expected that one would use additional constructs (say, making a decent generics library). I would expect an interview candidate to not just have a solid grasp of C++ but to have a good grasp of when and where to use the additional functionality.
As I said, I was being somewhat tongue in cheek about “if you know C, then you know C++”; nevertheless, one doesn’t have to know every single aspect of C++ to say one can program in it.
Perhaps they should be, but templates are their own Turing-complete programming language. Not relevant, perhaps, but the point is, C++ is a deep rabbithole indeed. It isn’t just C with some extra bits.
The biggest thing which is good C but bad C++ is lack of RAII and over-use of manual memory management in general. C++ generally isn’t GC’d, but it does provide mechanisms to enforce object lifetimes which are more convenient and make easier-to-read code than malloc()/free().
The biggest thing which is different about C++ is that C++ is a bit better when you’re programming in the large. You get actual packages, with namespacing, as opposed to C, where files define scope but there is, fundamentally, one global namespace for the whole program. You get destructors, so objects can do clean-up work when they get destroyed, which saves a lot of repetitive and/or easy-to-forget code when working with things like network connections. And, of course, there’s the good parts of templates, the simple generics, which you mentioned.
C++ is bad mainly because it stuck too close to C, not because it strayed too far from it. It has all the same pointer aliasing problems and inability to be cleanly GC’d and type system escape hatches, which are acceptable in a small, simple language like C (which is really good for OSes and device drivers and not a whole lot else) but are fatal in a language which also provides a lot of abstractions and is hard to internalize even without the C-like stuff.
Yeah, ain’t that the truth.
C++98 templating is the work of Satan*. I got pretty nifty with it, and have felt dirty ever since. And don’t even mention the dark pit of Boost. C++11 has cleaned this up a great deal, but I managed to move to other arenas about when it hit.
I don’t mind C++ for the basics - simple encapsulation, a sensible, useful superset of C mostly. But when everyone decided to add templating tweaks that make it do just about anything you could imagine, it became impossible. You can’t maintain a commercial codebase that makes use of all that junk. You need an overarching project Nazi to keep programmers away from the cute stuff, otherwise you are going to end up in deep trouble.
I worry that Python is going too far down the route of tweaks and add ons that make it do everything as well.
*I invented a competition, choose any piece of legal existing production C++ code, and change a single character in the text. The winner is whoever creates the most lines of errors.
An interesting case is the .Net languages. Specifically C# and VB.Net. Certainly they came from utterly different family trees which should make transliteration hard. Conversely a huge fraction of any practical program in either language is calls into the .Net object model which they completely share.
So how transliteratable are they? Many websites offer free transliteration. Just paste your code into one text box, click [Translate] and get the output in another textbox. For free. That’s how transliteratable they are. IOW, very.
You can go either way, including round-tripping, or for fun, multiple round trips. **But**…
There are a number of corner cases of language use, and of object model use, that the languages don’t share. Not all of which are well-known. So a transliteration of code that happens to avoid the areas of non-overlap will have 100% correspondence. But if the source code touches one of those dissimilar areas, the transliterated code is (often very subtly) incorrect.
And this is in a set of languages designed from scratch to A) design out many opportunities for error and B) design in commonality of result. With some need to maintain backward compatibility of idiom to their decidedly non-common predecessors.
Punch line: In a sense, C# & VB.Net are simply different “accents” of a common language called “.Net”. But they each have some mutually unintelligible slang. Which slang is so commonly used in each accent that nobody who speaks only one accent thinks of those spots as slang. They feel like first class members of the underlying language. “False friends” indeed.
Right. Which is why there was a movement, back when all this was new (early 2000s or so), to call VB.Net “Visual Fred”, the idea being that it shared so little with VB6 (the last non-.Net version of Visual Basic) that calling it “Visual Basic .Net” was an outright lie.
I think this is because they both came from the same company: Microsoft wants everyone to use .Net. Granted. It also wants everyone to see C# as the adult language for .Net, the language you write in once your codebase “grows up” and leaves BASIC behind. So they want there to be a drop-dead simple migration path from VB.Net to C#. Both C# and VB.Net are Official Languages, so they must both support the Official Party Line in this matter… and as for VB6, well, they don’t care, they don’t have to, they’re Microsoft.
The JVM world does this better, because the only “official” language for the JVM was, and is, Java. That was true with Sun, and it’s true with Oracle. Therefore, the Other Languages, such as Scala and Clojure, are developed by people who are under no obligation to make their languages Java-Lite, and to treat Java as the Senior Partner in their relationship. Sure, Clojure allows you to call between Clojure code and Java code, but it’s off being Nearly Common Lisp On The JVM, and there’s no suggestion that you’ll ever even consider porting Clojure code to Java. Why would you? It’s already as portable as Java: it all compiles to the same bytecode! There’s no pretense of “mutual intelligibility” here, and so Clojure prospers.
I think for this situation it’s better to interpret the VM as a replacement for the underlying non-virtual machine and do the language analysis from that perspective.
So C# and VB.Net are different languages that compile and run on the .Net VM. Whereas C and Fortran are different languages that compile and run on a non-virtual machine with specific attributes (I know there are more complexities and layers than that, but it seems like the most consistent interpretation).
I’m not following the distinction between .Net and JVM worlds. Both are abstract machines that allow for increased independence from specific hardware attributes (same as NT’s HAL and as400’s VM). In Microsoft’s case (just like IBM and as400) it made sense to explicitly allow for and map multiple languages into the same abstract machine because multiple languages needed to be supported in the respective environments.
I don’t know if Sun thought about multiple languages for the JVM, but it is a nice side effect that once the JVM is defined and built you can route any language through it.
Let’s not get too carried away here. I was speaking to Francis’ attempts to define a “language”.
At the topmost level, a language is not merely a syntax. At the bottommost level, a language is not a hardware target, whether physical or virtual.
The concept of “a language” is something somewhere in the middle that addresses ideas like RAII or not, OOP or not, functional or not, GC or not, etc. for (WAG) a couple dozen mostly orthogonal parameters.
Francis’ point as I understood it, and as I attempted to support and amplify with my VB.Net/C# comparison, is that all the interesting stuff happens in those intermediate layers. Just swapping syntax around makes a dialect, not “a language” in the deep sense.
========
Aside: we’ve not seen the OP in a few days and several digressions. I wonder if he’ll be back any time soon?
It sounds like you might be saying that when a VM is involved, any differences in language should be considered more of a dialect difference. Is this correct?
Or are you just saying that in the case of VB.Net/C# specifically that the combination of VM and specific language attributes all add up to “dialect”?
How would you categorize F# compared to VB.Net? Dialects?
Why was there such a long discussion about short-circuit evaluation? !(p && q) and !p || !q are the same even with short-circuit evaluation: if p is FALSE, then q won’t be evaluated, and the whole thing comes out to TRUE; if p is TRUE, q will be evaluated, and the whole thing comes out to !q. (No matter what, p is evaluated.)
Sometimes I have ideas I just can’t express well. Perhaps the idea is stupid. Perhaps my writing is stupid. In any case my work is an addendum to a hijack to a sidetrack to the OPs thread.
With no hostility meant at anyone I’ll drop this line of conversation. The more I’ve said the less it seems to make sense to folks.
Those aren’t contradictory statements. In the end, templates can’t do anything you can’t accomplish with copy-and-pasting your code with some text replacement and simple dead-code removal. Nevertheless they are very powerful, and as you note Turing-complete.
To be clear, I’m not claiming that good C is good C++. Obviously the language offers more. Just that good C is also good “the common subset of C and C++”, and vice versa.
I don’t agree. Not that C++ is perfect, but to me it’s the only “real” language for all those reasons and more.
I use whatever language I think I’ll be most productive with, and that usually means a scripting language for quick-and-dirty stuff, C# for more sophisticated but not perf-critical stuff, and C++ when I need something industrial strength. The thing is, while some programs just cruise along fine in their domain, for others there comes a point where it’s too slow, or unmaintainable, or undebuggable, or needs some low-level HW access, or otherwise. So you move to a different language more suited to the task.
This is not always C++, but C++ is always the end point. It’s the apex predator of languages. If you can’t accomplish your goal in C++, give up hope because it can’t be done. This trend, at least for me, never goes in the other direction, unless the original program was so horribly written that its limitations were due to that instead of the language.
Pointers, user-controlled memory management, compiling down to assembly that I can step through, deterministic performance, easy support for inline assembly and intrinsics… all these things and more are the C heritage in C++ and are what I appreciate in it. That it’s been turned into a modern language with objects and generics and lambdas and all that stuff while retaining all the good parts of C is amazing to me.
Not all programs need that stuff. Most don’t, in fact. So there are languages that don’t offer those features and instead offer something else. C++ shouldn’t have gone that route, because then we wouldn’t have anything that offers what C++ does, and that would be a tremendous loss.
Currently I’m on Chapter 5.3, *Mechanics of the Method Calling Process*. I’m starting to see how methods call other methods and have been studying the coding examples in the book until I understand fully, or at least I think I understand fully, why the code was written the way it was and how the pieces of the code interact with each other. I’ve seen a bunch of nested for loops so I think I’ve got that down now - although of course not to the point of instantaneous intuitive understanding, that probably won’t happen for a while, not until I finish several of the coding assignments.
I can understand on some level probably about half of what is being said in the thread at this point - so I have nothing to add to the conversation, unless someone wants to ask me about my bluegill farm.
This thread is actually making me wish I had started doing this years ago.
I think claiming a HAL as a VM is a bit odd, but I can see some similarities.
True, but you can do that without making the languages as similar as C# and VB.Net are. For example, Java and Clojure are not very similar at all, and they both compile to JVM bytecode.
The same is true of the CLR, the VM C# and VB.Net are built on top of.
Did you know there’s a compiler which compiles Java to CLR bytecode? And that there are compilers which compile Java to machine code? My point is, none of the semantics of a language really depend on what it’s being compiled to, as long as it’s minimally reasonable. The similarity of C# and VB.Net are due to a decision Microsoft made, quite independent of their VM implementation, to make them similar, at the cost of compatibility between VB.Net and VB6. That’s it. My guess is that this was done to make for a clearer upgrade path from VB.Net to C#, and my further guess is that was done to emphasize the idea that C# is the language Microsoft really wants people to write in.
Eh. You could just as well say the same about functions.
People said the same about assembly, and C, and Common Lisp, and when it comes to Common Lisp I’m more inclined to believe them, because that comes with a macro facility the equivalent of being able to arbitrarily extend the compiler. Even templates are but a pale shadow of Common Lisp macros.
In one sense, your statement is deeply uninteresting: C++ is Turing-complete. Granted. Therefore, it’s no less, and no more, expressive than any other Turing-complete language. The rest is details, a library away, ignoring constant factors in the big-O analysis.
In another sense, C++ is a rather inexpressive language, with too few dynamic features to be convenient and too few static features to ensure safety, and C does a better job of allowing people to write device drivers at any rate. The whole reason C++ people get excited over generic functions, for example, is because the C++ type system is too primitive to allow them to express things Haskell and ML programmers take for granted, and C++ programmers are just now getting lambdas, a half-century after they were first introduced into programming.
My annoyances with C++ aside, it does sit in an odd place, being a very complex language which still prioritizes machine time over human time. It’s great if you have a lot of OS-level C++ libraries to call into, but FFIs render that less and less important, especially now that languages are coming with elaborate libraries built-in, a simple download away, or both, and in terms of the core language, programmers largely don’t want to put up with it if they don’t have to. Too many dark corners and odd interactions.
You know what the languages of heavy-duty data processing and statistics are now? R and Python. Both high-level languages. The gimmick, of course, is that both of them sit astride massive libraries written in tuned C and, in some cases, Fortran (Python can call into LAPACK) which do the computational heavy lifting. The business logic, though, is written in the high-level language, because that part doesn’t need to be fast. Given that programs written along those lines are already processing gigantic datasets, the old adage of alternating hard and soft layers, or writing only time-critical code in the low-level language, seems to have won.
Machine time is human time. Slow programs waste human time.
It partly depends on how many customers you have. As you note, fast math libraries are written in lower-level languages. It is worth putting up with the extra effort because the payoff is so large. My company has hundreds of millions of customers. An optimization that takes hundreds of hours for a 1% performance gain is worth it, because the physical product we sell is now worth millions more than before.
That said, most programming is not much more than glue. For which Python, etc. are more than sufficient. But when you find that the stuff you’re gluing together isn’t good enough, you switch to something more powerful.
Of course, there’s also the old adage that a C programmer can write C code in any language, and likewise a Fortran programmer can write Fortran code in any language. And I’m sure it’s true of many other languages as well.
Ugh, yes. I remember my first exposure to the Fast Fourier Transform through the book Numerical Recipes in C. I didn’t know it at the time, but it was a badly-translated version of Numerical Recipes in Fortran. In particular, Fortran apparently uses 1-based arrays. C is of course 0-based. But their conversion retained the 1-based offsets, with some pointer hackery to compensate. Ugh. Not a good introduction, especially since my C skills were fairly rudimentary at the time and it was hard enough grasping how pointers/arrays/etc. all worked. Not to mention that it made the FFT algorithm more difficult to understand to start with.
Actually there are a great many things that a C programmer could not do under Python or Java no matter what. As I mentioned earlier, a critical difference is that these languages will not allow you to construct a pointer. The only way you can get a pointer is to allocate something - in which case the allocation gives you back the pointer, or you assign the pointer further. You cannot construct a pointer ab initio from basic constructs, and the language prevents you from doing anything with pointer variables - you can’t modify one except by assignment from other pointer variables.
This obviously makes things like low level machine hackery impossible. For Python you can add ctypes, or use Cython, and you get controlled access to some lower level parts, but you still can’t access language level pointers, even though you could get to things like device control registers, including say DMA registers.
The only OS internals and device driver work I have done has been in C or Modula-2. One of the key things you really want in any language for device drivers and other OS work is a language with a way of tagging variables as volatile. These are typically not part of the base language, so it is hard to say a specific language, as opposed to specific implementation, is suitable for device drivers. You need to make sure that the compiler never ever places some values in registers and never ever optimises over them. Almost all modern implementations provide such pragmas now. It wasn’t always so.
The manner in which your driver code is instantiated is OS specific, and may also be rather ISA specific. You may find yourself responsible for register save and restore and stack management. None of this is difficult, but you sure as hell need a language that gives you freedom to create, mangle, and overwrite machine pointers. Given you may be running during interrupt service, you have to be very cognisant of the highly limited capabilities available and the responsibilities your code has.
Famously the LMI Lisp machines wrote a very large part of their OS internals in Lisp, including such things as the virtual memory manager.