No, to both. A subroutine or function would of course need a name and an optional argument list, but a main program didn’t need any sort of declaratory gobbledegook.
As for IMPLICIT NONE, I’m getting memory-challenged here, but I’m not sure that IMPLICIT even existed prior to FORTRAN 77. Real Men™ – many of whom were former lumberjacks and could code FORTRAN in their heads while felling a tree the size of a city skyscraper – simply found it useful that variables beginning with I, J, K, L, M, and N were integers and all others were real. In the Good Old Days you could of course explicitly declare variables as INTEGER or REAL (the only variable types that Real Programmers™ ever needed; REAL meaning single or double precision floating point and INTEGER being a catch-all for everything else). IMPLICIT, if it existed at all, was useless, and all the fancy-pants variable types that came later were for wusses.
However, reading some of the other comments, I’m reminded that my sample program would have needed a STOP and an END statement. I suspect most compilers would have generated the appropriate exit code without the STOP, but would have complained about the absence of END.
I have a great respect for your knowledge in general, Stranger, but in this case you are flat wrong.
I wasn’t talking about large enterprise applications, supported by many people over decades. I really meant any real-world GUI or database application more complex than a simple test program. You couldn’t write it without OOP.
Scroll up and look at the ‘Intro to VB’ video I posted above. It shows the creation of an ultra-simple Windows app. It has an application window with a button, and when you click the button, it shows a message box.
The button is a button object (an instance of a button class), which has properties, methods, and events.
The properties in that simple example are the button caption, the name of the button, the size and position on the form, and the color. The example shows writing code for an onclick event handler for the button object.
How would you even do that without a pre-built OOP class for buttons in the language? You’d have to make a series of complex API calls to the OS.
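For the curious, here is roughly what that involves in raw Win32 calls. This is only a sketch, compiled as C++ with the explicit ANSI (...A) API variants; the window class name and the button’s control ID are arbitrary choices of mine, and error handling is omitted:

#include <windows.h>

// Control ID for our one button: an arbitrary number we chose.
#define ID_BUTTON 1

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) {
    switch (msg) {
    case WM_COMMAND:
        // No onclick event handler: we decode the message ourselves.
        if (LOWORD(wParam) == ID_BUTTON)
            MessageBoxA(hwnd, "Hello!", "Clicked", MB_OK);
        return 0;
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProcA(hwnd, msg, wParam, lParam);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int nShow) {
    WNDCLASSA wc = {};
    wc.lpfnWndProc   = WndProc;
    wc.hInstance     = hInst;
    wc.lpszClassName = "DemoWindow";
    RegisterClassA(&wc);

    HWND hwnd = CreateWindowA("DemoWindow", "Demo", WS_OVERLAPPEDWINDOW,
                              CW_USEDEFAULT, CW_USEDEFAULT, 300, 200,
                              NULL, NULL, hInst, NULL);
    // The "button" is just a child window of the predefined BUTTON class;
    // caption, position, and size are all raw arguments, not properties.
    CreateWindowA("BUTTON", "Click me", WS_CHILD | WS_VISIBLE | BS_PUSHBUTTON,
                  20, 20, 100, 30, hwnd, (HMENU)ID_BUTTON, hInst, NULL);
    ShowWindow(hwnd, nShow);

    MSG msg;
    while (GetMessageA(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessageA(&msg);
    }
    return 0;
}

Everything VB presents as a property (caption, size, position) is a raw argument here, and the onclick “event” is a WM_COMMAND message you decode yourself.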
The same goes for database programming. If all you are doing is sending a simple SQL query to a database server and displaying the result dataset to the user, you wouldn’t need OOP.
Say
select * from customer
join branch on customer.branch_id = branch.branch_id
where active = true
order by lastname, firstname asc
But for anything more complex than displaying that bare result set, you need a query object, record object, and field objects.
You will need methods to step through records, locate records, edit records, post and cancel changes, commit and roll back transactions, and refresh the query. You may want to do sorting and filtering on the client side rather than the server side, to avoid frequently moving a large dataset across the network.
The query object will have at least basic properties like record count, field count, editing status, current row, a list of field objects belonging to the query. It will have events that can be triggered by a variety of user actions.
You may have virtual calculated fields, some fields may be read-only and some editable, fields will have display names and formatting specs. The query object and/or the fields will have validation events, scroll events, etc.
You may have a more complex query object with a master-detail relationship between two tables. The query will be linked to a database connection object, which handles things like the connection protocol, character encoding, and connection/loss-of-connection events.
In a GUI application, you may be displaying the dataset in a data grid control, which the user can click around in and manipulate. That in itself will be a complex visual control object with a large number of properties, methods, and events.
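To make that concrete, here is a hypothetical C++ sketch of such an interface (declarations only, implementations omitted). None of these names come from a real library; they are invented for illustration, loosely in the spirit of classic data-access frameworks such as Delphi’s TDataSet:

#include <functional>
#include <string>
#include <vector>

class Field {
public:
    std::string name;            // column name in the result set
    std::string displayName;     // caption shown to the user
    std::string displayFormat;   // formatting spec, e.g. "%.2f"
    bool readOnly = false;
    bool calculated = false;     // virtual field, computed client-side
    std::string asString() const;          // current value, formatted
    void setValue(const std::string& v);   // fires onValidate first
    std::function<void(Field&)> onValidate;
};

class Query {
public:
    void open(const std::string& sql);
    void refresh();
    // Navigation
    bool next();
    bool prior();
    bool locate(const std::string& field, const std::string& value);
    // Editing
    void edit();
    void post();       // write pending changes
    void cancel();     // discard pending changes
    // Transactions
    void commit();
    void rollback();
    // Properties
    int  recordCount() const;
    int  fieldCount() const;
    bool isEditing() const;
    int  currentRow() const;
    Field& fieldByName(const std::string& name);
    // Events
    std::function<void(Query&)> beforePost;
    std::function<void(Query&)> afterScroll;
private:
    std::vector<Field> fields_;
    int row_ = 0;
};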
And this is not for doing anything fancy; it’s just the normal viewing, editing, and processing of a query that you do all the time in database programming. You couldn’t do it without OOP.
You appear to be talking past each other. I think that I am the only one who is correct.
To use C++ for anything useful, you need to use a library (for things like buttons), which requires at some point that you understand the object syntax.
Also, any C++ program (excluding the ones written as C programs) will use objects as a way of abstracting away from the file structure and achieving modular programming.
But, except for using the library, no small C++ program has to use inheritance or polymorphism.
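For instance, here is a complete small C++ program, a trivial sketch: it uses library objects (std::string, std::vector) but defines no class hierarchy, no inheritance, and no virtual functions:

#include <iostream>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> lines;   // a library object, used as-is
    std::string line;                 // ditto
    while (std::getline(std::cin, line))
        lines.push_back(line);
    std::cout << lines.size() << " lines read\n";
    return 0;
}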
OOP has evolved over the years, and you can easily get into an argument about what is or isn’t part of it.
Creating large systems without OO isn’t at all impossible. The X Window System was originally written in a mix of C and CLU. GTK is all in C, as is the X Intrinsics library. I wrote GUI systems for SunOS in C. No OO at all. It wasn’t difficult.
Now CLU is very much an OO progenitor. Minimally it provided ADTs. But ADTs (whilst IMHO the most crucial part of OO) are a long way from even the simplest ideas of what came to be OO.
C++, Java, and some others all claim to derive from Smalltalk-80. Personally I feel C++ missed the entire point, but there it is. Back in the day there was a clear rivalry between the East and West coasts, as exemplified by MIT and Xerox PARC. OOPSLA was the really big deal conference. Simula 67 is another proto-OO language that cannot be ignored.
Some people (including me in the early days of C++) just used the encapsulation capability of C++ to essentially write cleaner C. The entire edifice of inheritance, and the various languages’ type systems and inheritance-flattening rules, is something of a separate problem. The question of static versus dynamic typing is another huge one.
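A sketch of what that “cleaner C” style meant in practice: the same toy stack written C-style, then as an encapsulation-only C++ class, with no inheritance or virtual dispatch anywhere:

// C style: data and functions are separate, and nothing stops a
// caller from poking at the struct fields directly.
struct StackC { int items[64]; int top; };
void stackc_push(StackC* s, int v) { s->items[s->top++] = v; }
int  stackc_pop(StackC* s)         { return s->items[--s->top]; }

// "Cleaner C" in C++: the same idea, but the data is private and
// reachable only through the member functions. Pure encapsulation.
class Stack {
public:
    void push(int v)   { items_[top_++] = v; }
    int  pop()         { return items_[--top_]; }
    bool empty() const { return top_ == 0; }
private:
    int items_[64];
    int top_ = 0;
};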
Ultimately the most important aspect of a language is its type system. There is a lot of syntactic sugar layered on top of this, but if two languages share the same type system, the gap between them shrinks massively.
In the modern world you can’t do anything much useful without a large slab of libraries. But to some extent that is a reflection of our expectations of doing stuff quickly and being able to use stuff other people have cooked up first. The APIs provided by those libraries can embody all manner of paradigms. You don’t need to have an OO API for every complicated system. Indeed you will often see old libraries wrapped in a very obvious flat OO interface that is OO in name only.
Certainly, in an academic environment you want to teach basic principles, and have students learn how things are built from scratch, and how they work internally.
But in the outside world, when you’re creating real applications that other people will use to do their jobs, under deadline pressure or while billing a client by the hour, it’s unreasonable to spend time reinventing the wheel rather than building on the work of others.
I never said they were not. My point is more that these expectations do not have to lead to OO as the only paradigm. Some is accidents of history, and some is choosing from what already exists.
This isn’t true according to the most commonly-used FORTRAN standard and compiler that I mentioned.
It might compile on something, but then I could write a compiler for any language that would accept a blank file.
I also disagree with calling it gobbledegook. The couple of lines of declaration in a C# program are useful. Almost every console program is going to use the command-line arguments, for example.
There are lots of programming paradigms (functional and logic programming come to mind), but most haven’t got anywhere near the traction of OO or plain procedural programming.
Back in the late ’80s I led a team that built an inventory planning system for an aerospace company. It had a graphical front end (IMS) and a large-ish database (DB2), and we did it all in COBOL. Not an object in sight.
Not to belabour this point, but it IS true. Specifically, AFAIK it remains true that even in most modern versions of FORTRAN there are no mandatory declaratory statements other than the “END” statement. Even the ubiquitous “PROGRAM” statement, which only came into use with FORTRAN 77, is not strictly necessary, and “IMPLICIT NONE” is purely a choice by those who consider it good practice to mimic languages requiring explicit variable declarations. FORTRAN doesn’t force you to do any of that.
But was it a GUI system? In the ’80s it was probably on a black screen 80 ASCII characters wide.
I also built a couple of big database systems in those days using dBase. I’m trying to think of the name of the development platform we used, but it eludes me for the moment. It was all DOS based, and obviously not OOP. It was certainly much simpler with a text interface, and not even a mouse. But even then we could have done it far more quickly and efficiently with OOP.
Things are very different today, though there are still a few big companies that use outdated systems.
Only a few? I thought it was common: it costs a massive amount of money to replace these old outdated systems written in COBOL or whatever, so they are maintained instead. Particularly true in government.
Lots of interesting side conversations here; surprised nobody has mentioned Haskell yet. And I don’t think I saw Lisp either.
Anyhoo, my experience agrees with “C++ lets you shoot yourself in the foot. C will load the gun and aim it for you.” C is very useful because it is low level, and is (or was?) the default for microprocessor programming. That’s its strength and its weakness. Pointers are very powerful. Memory management is very powerful. Making a mistake with pointers can be a PITA to diagnose. And the #1 error I’ve seen in C apps is memory leaks: things just stop working at random times for no apparent reason. I’d swear half the debug time in C is tracking down memory leaks. C++ mostly still lets you do that stuff, but you have to put in some extra effort to bypass the safety features.
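A minimal example of the classic failure mode (hypothetical code, compiled as C++, but the pattern is everywhere in real C):

#include <cstdlib>
#include <cstring>

// Every exit path must remember to free; miss one and you leak.
int process(const char* input) {
    char* copy = static_cast<char*>(std::malloc(std::strlen(input) + 1));
    if (copy == nullptr) return -1;
    std::strcpy(copy, input);

    if (std::strlen(copy) > 100)
        return -1;              // early return: copy is never freed

    std::free(copy);
    return 0;
}

Leak a few kilobytes per call in a long-running process and, sure enough, things stop working at random times for no apparent reason.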
C also doesn’t have much support for strings, but I forget if that was a difference between C and C++, or between the C family and other languages. What stuck in my memory about C was needing to include libraries for string work, and that “to change one character in a string, instead of changing the string, it creates an entire new string and points the variable to the new string”.
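For what it’s worth, in C itself a string is just a mutable char buffer, so changing one character edits it in place; the copy-on-change behaviour in that quote is how immutable-string languages such as Java and Python behave. A quick illustration, compiled as C++ using the C string library:

#include <cstdio>
#include <cstring>

int main() {
    char buf[] = "hallo";
    buf[1] = 'e';                 // in-place edit: no new string is made
    std::printf("%s\n", buf);     // prints "hello"

    // But anything fancier means manual calls and manual sizing:
    char out[32];
    std::strcpy(out, buf);
    std::strcat(out, ", world");  // caller must ensure out is big enough
    std::printf("%s (%zu chars)\n", out, std::strlen(out));
    return 0;
}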
… Anything else I could say is probably either off topic, or would be a personal attack on specific programming languages.
Memory-managed languages can be bad this way as well. Concatenating strings in Java causes an intermediate string to be created, which is then destroyed and garbage collected. That’s why it’s good practice to use StringBuffer objects when building up strings out of many parts, only converting to a string object at the end.
Of course in a managed language the result of bad string code is a reduction in performance, while in C/C++ it can be anything from a performance hit to a memory leak that causes strange bugs or occasional, hard-to-track crashes.
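The C++ counterpart of the StringBuffer trick is to size the destination once up front instead of letting it grow piecemeal. A minimal sketch:

#include <cstddef>
#include <string>
#include <vector>

std::string join(const std::vector<std::string>& parts) {
    std::size_t total = 0;
    for (const auto& p : parts) total += p.size();

    std::string out;
    out.reserve(total);           // one allocation up front...
    for (const auto& p : parts)
        out += p;                 // ...instead of repeated grow-and-copy
    return out;
}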
Ahem!! I mentioned LISP, although only in passing (post #34) in the context of the general slowness of interpreters, at least in the old days when computer cycles were slow and expensive. A LISP compiler was in fact written several years later – in LISP, since LISP was well suited to symbol manipulation of that sort. In a wonderful example of bootstrapping, the LISP compiler then compiled itself.
I work in C++ professionally, and it really is not that bad. Yes, there are a number of Very Bad Things one could do, but since C++11 formalized smart pointers, it’s been fairly simple to avoid the bad parts.
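A minimal sketch of what that looks like in practice (the Texture type is invented for illustration):

#include <memory>
#include <vector>

struct Texture { /* some expensive resource */ };

std::unique_ptr<Texture> load() {
    return std::make_unique<Texture>();   // ownership is explicit
}

int main() {
    std::vector<std::unique_ptr<Texture>> cache;
    cache.push_back(load());
    // No delete anywhere: each Texture is freed exactly once, at a
    // deterministic point (when its owner goes out of scope), with
    // no garbage-collection pause and no leak to track down.
    return 0;
}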
The main reason we use C++ is the bare-metal performance and the deterministic resource management (e.g. no garbage-collection pauses), which Java and Python don’t provide. C++ OOP may not be the best ever made, but it is quite passable, and I rarely find myself wanting the features it’s missing.
Java/Python/C# are really targeting different things than C++ targets, so it’s no surprise that they prioritize different features. I count Rust as the language that most successfully aims for the same features we are looking for in C++, but folds them into the language better.
Since C++ is historically in the “C family”, would the analogy not be: which LISP, and what are you (or should you be) doing with it? I only know [what is now known as] Common LISP (think Guy Steele), having used it to do real work involving manipulating programs-as-data, but I understand that some people prefer to teach students algorithms in Scheme, for instance.
As for speed, I would not be amazed if you told me your LISP implementation of whatever ran 2x slower than your C++ or Fortran version, but it would depend on the type of program, and would it even matter? What many people have emphasized is that they are running interpreted code in Python or Octave or whatever, which may execute statements orders of magnitude more slowly, but that is of no import because all the real computation happens in an optimized library routine like LAPACK (to which LISP, Python, and C++ are irrelevant).