What are the advantages of object oriented programming for a lone hobbyist coder?

It teaches you a useful job skill.

I use structured programming with QuickBASIC and QBASIC. Fast, easy to learn, much more legible than C++, and QuickBASIC has libraries.

It’s true that Python makes many of these things very simple and easy from a code perspective, but it doesn’t solve the much more difficult problem of figuring out how to model the larger system to use OO, or how to add value using OO in existing systems that are not already modeled with OO.

For the types of software I assume aceplace57 is working on based on his description (business systems), there are a few issues that would complicate his adoption of OO:
1 - His existing systems (typically large) are probably not modeled with OO currently, but rather in a typical mixed state: relational storage, some object/entity organization where it fits nicely, plus a collection of functional/action/verb-oriented code where that is more natural. That means a new dev can’t just pick up an existing OO model and code, so adding objects into that mix is a challenge, and unless it all gets rewritten, OO would only apply to low-level (inside-the-program) elements or to pieces that are somewhat independent.

2 - The methodologies used in #1 are well known, with existing solutions that work, whereas an OO approach has some challenges that have not really been solved yet. The initial naive thought was that we could just store objects and everything would be great, but there are problems with ad-hoc queries, with loading only the information needed for a specific action (as opposed to loading into memory every bit of linked info for that object, which can get extensive), and various other things.

Object DBs never really amounted to much (outside of specific problem spaces) due to these problems, and now we have ORM (object-relational mapping), which is a partial and not very good solution.
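To make the mismatch concrete, here’s a minimal hand-rolled mapping sketch using Python’s built-in sqlite3 (the `Customer` class and `load_customer` helper are illustrative, not any real ORM’s API). The relational side answers ad-hoc questions in one line; the object side needs explicit mapping code, and deciding how much of the linked graph to pull in is left entirely to us — which is exactly the gap ORMs paper over.

```python
import sqlite3

class Customer:
    """Plain object; knows nothing about how it is stored."""
    def __init__(self, cid, name, city):
        self.cid, self.name, self.city = cid, name, city

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT, city TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?, ?)",
                 [(1, "Ada", "London"), (2, "Grace", "Arlington")])

# Relational side: an ad-hoc query is trivial.
count = conn.execute(
    "SELECT COUNT(*) FROM customers WHERE city = 'London'").fetchone()[0]

# Object side: every load needs explicit mapping code, and choosing how
# much of the object's linked data to load is our problem, not the DB's.
def load_customer(conn, cid):
    row = conn.execute(
        "SELECT id, name, city FROM customers WHERE id = ?", (cid,)).fetchone()
    return Customer(*row)

ada = load_customer(conn, 1)
```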

There are significant benefits to OO for some parts of business-systems problems, and significant benefits to relational storage (and a functional model) for other parts, and we do not yet have a good marriage of the two.
It’s not easy to see how aceplace57 could get much benefit from OO, given that he is working with an existing system and an existing model.

I’ve written a bunch of scripts over the years that are in the < 30 lines of code range. Unless there’s a compelling reason to (reuse), I rarely even write functions for these. It’s just a series of steps.

I’ve written a bunch of scripts over the years that are > 30 lines but < 200 or so lines. For these, I’ll organize code into functions, but not classes.

I’ve written and maintained systems in the (and I’m just guessing here) > 50 kilolines where I wrote 90%+ of the code, and for that, I’ve organized it into classes. And of course, I’ve contributed to projects that are in the millions of lines, where we’d have been insane not to organize code into classes.

So those would be my rules of thumb. Functions help you organize code well - if the functions are well defined, you get a rigorous interface that reduces your chances of introducing bugs when you make a single isolated change. And you get reuse: if you make a change to fix a bug, it’s fixed for all users of that code, instead of having to remember how many places you made a similar logic error and go fix them all. Classes do the same thing, but more so: in addition to isolating chunks of behavior, you are also isolating chunks of data, which gives you increased confidence that you know all the places that use that data when you need to make changes.
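A tiny sketch of the two levels of isolation described above (the names `net_price` and `Account` are made up for illustration): the function gives one definition every caller shares, and the class adds a private chunk of data so every place that can touch the balance is findable in one spot.

```python
# Function level: one definition, so a bug fix here fixes every caller.
def net_price(gross, tax_rate):
    return round(gross * (1 + tax_rate), 2)

# Class level: the same isolation of behavior, plus isolation of data.
class Account:
    def __init__(self, balance=0.0):
        self._balance = balance          # only Account's methods touch this

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    @property
    def balance(self):
        return self._balance

acct = Account()
acct.deposit(100.0)
```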

Spend a short amount of time upfront reasoning about how big the project is going to be, and choose the right approach. Don’t worry too much if you get it wrong. It’s quick to refactor 50 lines into functions, or 300 lines into classes.

Some responses above give the impression (or directly state) that classes are organized around data.

That is true some of the time, but not always.

I’ve dealt with classes where the “data” within them are fairly trivial. It’s the effects that the functions have that are the core of the class.

There’s a lot more to programming than just “This function does this to some data”.

The key is to think abstractly. You figure out what the idea behind something is and you make it abstract. That helps tremendously in breaking things into reasonable pieces: classes, functions, etc.

One of the brick walls I kept butting my head into over my years as a CS prof was trying to get students to think abstractly about a problem. All too many of them were taught from an early age that abstraction = useless.

So, how would you do that?

Classes without data are basically just namespaces. Sure, that’s useful, but you’re missing out on a lot of the value if that’s all you’re using them for.
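To illustrate the distinction (both classes here are hypothetical examples): a data-free class really is just a bag of related functions, while the stateful version gives you objects you can pass around and build new objects from.

```python
import math

class Geometry:
    """No instance data: effectively just a namespace of related functions."""
    @staticmethod
    def circle_area(r):
        return math.pi * r * r

class Circle:
    """Data plus behavior: each instance carries its own state."""
    def __init__(self, r):
        self.r = r

    def area(self):
        return math.pi * self.r * self.r

    def scaled(self, k):
        # Having state makes deriving new objects from old ones natural.
        return Circle(self.r * k)
```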

Abstraction is essentially the process of generalizing and classifying, the recognition that some functional module is just a particular instance of something more generic, which leads to a generalized way of thinking about the functional architecture of a complex system. One way to teach it is by example. One could examine the architecture of a moderately complex software system that had been designed that way, and show how it leads to clean modularity and facilitates understanding and debugging.

One could examine the structure of network protocols, which are classic examples of layers of abstraction with well-defined interfaces between them, so that each layer performs a specific function and only a specific function, and talks to layers above and below only through well-specified documented interfaces. One could show how each layer can have different implementations without having any effect on the layers above or below. But in order to build that kind of structure, network designers had to think long and hard in abstract terms about all the things that a network protocol must accomplish, from the lowest signaling layer to the highest levels of application functionality. Defining abstractions is really the very heart of the process of system design.
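A toy two-layer stack can make the point in a few lines (everything here is invented for illustration, not a real protocol): the message layer talks to the transport only through a narrow send/recv interface, so the transport implementation can be swapped without the layer above noticing.

```python
class LoopbackTransport:
    """Lowest layer: moves raw bytes. Could be swapped for a socket-backed
    transport without changing anything above it."""
    def __init__(self):
        self._queue = []

    def send(self, data: bytes):
        self._queue.append(data)

    def recv(self) -> bytes:
        return self._queue.pop(0)

class MessageLayer:
    """Upper layer: deals in text, and only knows the send/recv interface."""
    def __init__(self, transport):
        self.transport = transport

    def send_text(self, text: str):
        self.transport.send(text.encode("utf-8"))

    def recv_text(self) -> str:
        return self.transport.recv().decode("utf-8")

link = MessageLayer(LoopbackTransport())
link.send_text("hello")
```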

I didn’t say no data, just a virtually trivial amount. The main purpose of such a class is to have effects rather than maintain data.

Probably part of the problem is that the word “data” is overloaded. It means something different to a computer scientist than ordinary conversation. If I am being more precise I will talk of “state”. Objects encapsulate state. But to a CS person’s thinking there is often no semantic difference - which is confusing when talking generally. (Worse is when you start talking about “information” which has a very precise theoretical definition that is quite different to its use in ordinary conversation.)

I am reminded of my old undergraduate text - Algorithms + Data Structures = Programs. Long time ago.

Back when I taught undergraduate CS, one course I taught was Algorithms & Data Structures to the second years. Really, not much had changed; the thing I tried hard to do, however, was - ironically - to divorce what was taught from the OO paradigm. This was a battle: other staff members viewed the course as primarily a course in OO methodology, not one of algorithms. I once set a class exercise in algorithmic complexity and had the students code and measure different search algorithms to look for the break points in scaling; a couple of years later another staff member had added a whole raft of OO cruft to the same exercise, IMHO obscuring the core point.

In general an OO paradigm encapsulates the state of an abstraction, and provides the mechanisms for accessing and mutating that state as part of the encapsulation. Add the subclassing mechanism and you have OO. After that you just need to codify the binding-resolution rules and you have almost your entire language.
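Those three ingredients fit in a dozen lines (the `Counter` classes are invented for illustration): encapsulated state, a subclass, and binding rules that resolve which `step` runs at call time.

```python
class Counter:
    def __init__(self):
        self._n = 0                 # encapsulated state

    def increment(self):
        self._n += self.step()      # which step() runs is resolved at runtime

    def step(self):
        return 1

    def value(self):
        return self._n

class DoubleCounter(Counter):       # subclassing
    def step(self):                 # binding rules pick this override
        return 2

c, d = Counter(), DoubleCounter()
c.increment()
d.increment()
```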

Thanks! When I was taking a class in C I used Turbo C++ while reading a sort of OOP-for-dummies book. “Instance” entered my vocabulary, and I had to force it back out so civilians could understand me, though in context its meaning is obvious.

Hi all, a poorly relative kyboshed my weekend and I’m still working through the thread. I really appreciate all the responses, even if I haven’t been that vocal in the thread.

I will just answer this, though. I’m using Treehouse, working through the Python track. I like that there’s a mix of video tutorials, quizzes and coding exercises. You do all the coding right in the browser, so nothing is required except for an internet connection.

Thanks.

This.

Complex problems of all kinds, not just software, are very often solved via modularization – breaking down a problem into smaller, manageable chunks.
Software engineering has always involved modularization, but it turns out when you break problems up in a casual way it’s possible to lose a lot of the benefits of modularizing, and end up with code that’s difficult to understand, difficult to build on and generally error-prone.

OO is just a set of principles for breaking up problems in a strict, consistent way such that you can fully enjoy the benefits of modularization.
For implementing a non-trivial program, it just makes sense to use OO, or some equivalent paradigm.

That’s my avoiding-jargon explanation.

Another thing I like, is when I occasionally get “writer’s block” and don’t know where to start with implementing something, I can just start with classes that reflect the real world system, and a program quickly takes shape.
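For instance, a first pass at a library-lending program might just name the real-world things (the `Book` and `Member` classes below are a hypothetical example): once those exist, the operations between them suggest themselves and the skeleton of the program appears.

```python
class Book:
    def __init__(self, title):
        self.title = title
        self.on_loan = False

class Member:
    def __init__(self, name):
        self.name = name
        self.loans = []

    def borrow(self, book):
        # The real-world rule falls straight out of the real-world model.
        if book.on_loan:
            raise ValueError(f"{book.title} is already on loan")
        book.on_loan = True
        self.loans.append(book)

m = Member("Sam")
m.borrow(Book("SICP"))
```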

By the same token, in procedural programming, there are often functions with no return value, or a trivial return value that you almost never care about (when was the last time you set something equal to a printf() ?).

Been coding since 1979.
Started with assembly language.
OOP is nice for big projects.
Don’t like so much for small things like interfacing with hardware.
What should be small and linear tends to get bloated and difficult to comprehend.
Objects also bring with them whole new classes of bugs: what you’ve defined as a canonical rectangle may not work so well when called with something that is close to a rectangle but is really a parallelogram. OOP changes the whole scope of error propagation.
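The classic square/rectangle puzzle is the textbook version of this kind of bug (the classes and the `stretch` helper below are illustrative): code written against the rectangle’s contract quietly misbehaves when handed a subclass that is almost, but not quite, a rectangle.

```python
class Rectangle:
    def __init__(self, w, h):
        self.w, self.h = w, h

    def set_width(self, w):
        self.w = w

    def area(self):
        return self.w * self.h

class Square(Rectangle):
    """Looks like a Rectangle, but changing one side must change both."""
    def __init__(self, side):
        super().__init__(side, side)

    def set_width(self, w):
        self.w = self.h = w

def stretch(rect):
    # Written against Rectangle's contract: widening leaves height alone.
    rect.set_width(rect.w * 2)
    return rect.area()

ok = stretch(Rectangle(2, 3))   # 2x3 widened to 4x3, as the contract says
surprise = stretch(Square(2))   # the contract predicts 4x2 = 8; we get 16
```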

The professional programming environment today is OOP. Modern computers are less limited by execution speed and memory size than were their predecessors. Even if you are programming Arduino or Android for fun, it’s all OOP. I’d say that anyone entering the field today has to start with OOP.

However, home projects are a matter of degree. I use Texas Instruments MSP430 family for instruments and gadgets. My library of assembly language subroutines suffices in place of classes. When gadgets are linked to the PC, VB6 works great for the GUI to get data into files and Excel does the rest.

I began programming at IBM in 1957. We used octal object code to write diagnostics. The advantage was that the bits were closely associated with the computer (the IBM 704) and its operation. COBOL and FORTRAN were useless for troubleshooting. Assemblers made things easier, but were one level removed from the operation of the system. That was another time. We did things differently there.

Crane