Object Oriented Programming -- What is it?

I tried to read a book about Object Oriented Programming (The Object Technology Revolution by Guttman & Matthews), but I can’t make heads or tails of it. The book talks about how objects are superior because they are more flexible and efficient, but there aren’t many concrete examples. I have no idea how to apply it to, say, a business of my own.

Can someone explain what exactly OOP is? I don’t get it.

The object oriented paradigm applies the philosophy of using single functions to resolve small portions of a larger problem.

Essentially, it takes the larger “problem” and breaks it down into several small problems, each more manageable than the last, until the level of detail is fine enough that a single action will resolve a single sub-problem.

This may or may not be an over-simplification.

I presume you are familiar with a data structure. In C you might see

struct {
    short attrib1;
    char *attrib2;
    long attrib3;
} aDatavalue;

An object is an extension of this. You want to define a thing that has the data attributes you want, but also has member functions to manipulate those attributes. The idea is that you cannot manipulate the data attributes directly, but only by the means provided via the member functions.

This provides security for the data, but also provides a means to handle an object as an object, not as a number, character string, etc. I can show you an object definition for a rail car I wrote as part of a program to generate traffic for my model railroad, if you like. Once you make the leap in understanding, it’s quite convenient to think in terms of an object of Class Car, instead of the constituent data.
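To make that concrete, here is a minimal C++ sketch of the same idea. The class name and members are just illustrative, not the actual rail car code:

#include <string>
#include <utility>

class Car {
public:
    Car(std::string type, int capacity)
        : type_(std::move(type)), capacity_(capacity) {}

    // The only way to read or change the attributes is through these members.
    const std::string& type() const { return type_; }
    int capacity() const { return capacity_; }
    void setCapacity(int c) { capacity_ = c; }

private:
    std::string type_;  // e.g. "boxcar" (illustrative)
    int capacity_;      // load capacity in tons (illustrative)
};

The data lives in the private section, so the only way in or out is through the member functions.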

Do you understand procedural programming? Because if not, you’re not going to understand OOP, which is procedural programming with fancy bells and whistles.

The advantages of OOP over pure procedural programming are numerous, but most come down to inheritance and polymorphism.

The first allows you to easily define buckets of functionality that inherit the bulk of their behavior from a list (or tree, in some cases) of parent buckets. The child can override only the behavior that distinguishes itself from the parent. In a well-structured program, this results in substantial code reuse and abstraction, cutting down on redundant code and poor design.
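A small sketch of that in C++ (the names are invented for illustration): the child inherits describe() unchanged and overrides only the one function that distinguishes it from the parent.

#include <iostream>
#include <string>

class Animal {
public:
    virtual ~Animal() = default;
    void describe() const {                      // inherited as-is by children
        std::cout << "I say: " << sound() << "\n";
    }
    virtual std::string sound() const { return "..."; }
};

class Dog : public Animal {
public:
    std::string sound() const override { return "woof"; }  // the only override
};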

Polymorphism means that different objects can behave differently in response to the same message. At one job, we had a program which downloaded and aggregated certain financial data from 2500 or so different clients. Each client used one of a dozen different formats for this data. We had a large class dedicated to processing this data and presenting an interface to it for the rest of our system. Each data format required a different subclass. Each subclass overrode the default function that parsed the data into an internal data structure. Everything else, hundreds of functions, was inherited. This meant that a piece of client code could call the “get the data” function on an object of any one of dozens of subclasses, and it would do so using the right function for that data type. The client code did not need to know anything about the underlying data format. This also makes it trivial to add new formats: just write a new subclass to handle the format, and the client code doesn’t change.
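For illustration only, here is a hypothetical C++ reconstruction of that pattern; the real system’s class names and formats aren’t given above, so these are made up:

#include <string>
#include <vector>

class FeedParser {
public:
    virtual ~FeedParser() = default;
    // The hundreds of shared functions would live here, inherited by every format.
    std::vector<double> getData(const std::string& raw) const {
        return parse(raw);  // run-time dispatch to the right subclass
    }
protected:
    virtual std::vector<double> parse(const std::string& raw) const = 0;
};

class CsvParser : public FeedParser {
protected:
    std::vector<double> parse(const std::string&) const override {
        // split on commas, convert fields to numbers ...
        return {};
    }
};

// A new data format means one new subclass; code that calls getData() never changes.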

There are other aspects of OOP that get a lot more complicated and abstract, such as interfaces, mixins, class factories (the thing described above uses one), prototypes, introspection, multimethods, aspects, assertions, unit testing, standard design patterns, and so on. But inheritance and polymorphism are the most important things to understand and form the foundation of nearly all the other stuff.

I must disagree with **CoBa**. I agree with friedo, but I think that post dives down a little too deep for a first crack at it, assuming that **sassyfras** is not a professional programmer.

Functional decomposition was the earlier method for design. You described your entire system as a function: “Generate payroll checks.” Then you figured out what the subfunctions of that big function were: Calculate salary. Calculate taxes. Calculate insurance deductions. Then you kept defining lower and lower levels of subfunctions until a subfunction was clear and small enough that you could write a single procedure or function in code to implement it.
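As a sketch (with made-up stub numbers), that decomposition might bottom out in code like this:

#include <cstdio>
#include <vector>

double calculateSalary(int /*employeeId*/)   { return 5000.0; }       // stub
double calculateTaxes(double salary)         { return salary * 0.2; } // stub
double calculateInsurance(double /*salary*/) { return 150.0; }        // stub

// The big function is just calls to its subfunctions.
void generatePayrollChecks(const std::vector<int>& employees) {
    for (int id : employees) {
        double salary = calculateSalary(id);
        double net = salary - calculateTaxes(salary) - calculateInsurance(salary);
        std::printf("check for employee %d: %.2f\n", id, net);
    }
}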

In the late 80’s/early 90’s there was a fundamental shift in thinking. In object-oriented programming and design, the system is instead thought of as a collection of objects which all interact. Each object has state (implemented simply as data owned by the object), methods (functions associated with the object that you can call to make it do something), and behavior (the outputs and state changes that occur when you invoke a method). You define a class, which is an abstract definition of the state, methods, and behavior of a type of object, and then create the objects themselves. So in the OOP model you might have a paycheck class and create a paycheck object for every employee. The paycheck might have an “Add Taxes” method to apply the tax withholdings, and a “Print” method to print it once it is finished.
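A minimal C++ sketch of that paycheck class, with invented names and a flat tax rate purely for illustration:

#include <cstdio>
#include <string>
#include <utility>

class Paycheck {
public:
    Paycheck(std::string employee, double gross)
        : employee_(std::move(employee)), amount_(gross) {}

    void addTaxes(double rate) { amount_ -= amount_ * rate; }  // changes the object's state
    void print() const {
        std::printf("Pay %s: %.2f\n", employee_.c_str(), amount_);
    }

private:
    std::string employee_;  // state owned by this object
    double amount_;
};

// One object per employee:
//   Paycheck p("Alice", 5000.0);
//   p.addTaxes(0.2);
//   p.print();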

Object-oriented design doesn’t throw away everything we used to know about software design but it adds a whole new layer with its own sets of rules and heuristics about what is “good design.”

I’d say it is. After all, procedural programming applies the same methods to break a problem down and solve pieces of it.

There are a number of competing definitions for OOP. The one I learned in college was that OOP consists of three design patterns - Encapsulation, Polymorphism, and Inheritance. That’s a definition that came from C and C++. The Smalltalk folk define OOP in terms of objects interacting via message passing. Kristen Nygaard defined it as, “A program execution is regarded as a physical model, simulating the behavior of either a real or imaginary part of the world.” More than you ever wanted to know is here.

It’s really an “I’ll know it when I see it” type of problem. When was your book written? When OOP was being pushed the promise was that you could create an Animal object, and then reuse all of that code for each new animal you wanted to create. In practice, I haven’t seen appreciably more code reuse in OOP than in procedural.

And on preview: I disagree with friedo’s list of other aspects. Unit testing, in particular, is orthogonal to the language.

Mostly it’s breaking things down into fundamental pieces (objects) that are function centers, rather than thinking about the whole process in terms of everything it is required to do.

Rather than looking at a car as a whole, in the OO philosophy you break it down.

Engine: must output power on a spinny metal thing of x diameter, and has an intake for fuel and air.

Transmission: must receive a spinny metal thing of x diameter, and output on a spinny metal thing of y diameter.

Differential: must take input from a spinny metal thing of y diameter, and output to two 5-bolt wheel mounts.

etc. In development, each piece concentrates only on its own functions, and delivers to the set requirements for input and output (interfaces). Any piece can easily be replaced by a completely different implementation of an object as long as it fits the interfaces established.
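In C++ terms, each “spinny metal thing” becomes an interface. This is a rough sketch with invented names and numbers:

class Engine {
public:
    virtual ~Engine() = default;
    virtual double outputShaftRpm() const = 0;  // the "spinny metal thing"
};

class V8Engine : public Engine {
public:
    double outputShaftRpm() const override { return 3000.0; }
};

class ElectricMotor : public Engine {  // a completely different implementation
public:
    double outputShaftRpm() const override { return 8000.0; }
};

class Transmission {
public:
    explicit Transmission(double ratio) : ratio_(ratio) {}
    // Works with any Engine that fits the interface.
    double outputRpm(const Engine& e) const { return e.outputShaftRpm() / ratio_; }
private:
    double ratio_;
};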

In the simplest terms, it makes it possible to write code using language and organization that resembles your business concepts more than the computer’s concepts, making it easier to read, change, and maintain.

Since I was working on this in the '70s, long before C++ (but after Simula 67), let me make it still simpler.

OOP at its most basic level is the idea that what is important about any data structure is how to use it, not how it is implemented. By having the language hide the details of the implementation, it lets people change things without having to change the rest of the code, and it prevents dependencies on unimportant implementation details from creeping in and causing maintenance problems.

The example used from prehistoric days was a stack. You want to create a stack, push something onto it, pop something off of it, and test it for emptiness or fullness. Given that, it shouldn’t matter whether the stack is implemented as an array, a linked list, or a hash (for some bizarre reason).
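A minimal sketch of that stack in C++, here backed by a vector; swapping in a linked list would not change a single caller:

#include <stdexcept>
#include <vector>

class Stack {
public:
    void push(int v) { data_.push_back(v); }
    int pop() {
        if (data_.empty()) throw std::runtime_error("stack is empty");
        int v = data_.back();
        data_.pop_back();
        return v;
    }
    bool empty() const { return data_.empty(); }

private:
    std::vector<int> data_;  // change to a linked list and no caller notices
};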

The other stuff mentioned here comes from the details of actually implementing this simple concept.

Instructions do one thing. Subroutines encapsulate doing the same action repeatedly. Functions encapsulate classes of subroutines that do the same general thing repeatedly, but for a few variables. Objects encapsulate a set of behavior associated with a concept that will be used many times in a program.

Some examples:

Instruction: goto a memory location
Subroutine: refreshing a screen with data from a fixed location
Function: numerical integration of a dataset
Object: a chess board, with its own evaluation functions, board state information, etc.

Usually you will say that a class is the generic description, and an object is an instance of a class. So for the last example, you might say you have a class that represents an NxN chessboard. An 8x8 chessboard in some particular state would be an instance of that class.
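In code, the distinction is just this (a toy sketch):

class ChessBoard {
public:
    explicit ChessBoard(int n) : size_(n) {}
    int size() const { return size_; }
private:
    int size_;  // plus piece positions, evaluation state, etc.
};

// ChessBoard standard(8);  // an 8x8 instance of the class
// ChessBoard mini(5);      // a 5x5 instance of the same class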

If the book reads like most of the posts here, no wonder you’re confused. I gave the simple answer already. I’m not sure I’d call OOP more flexible - hacking data structures any which way but loose is actually far more flexible, but not safer, and it is more fragile, in that small changes can break lots of things. More efficient? Maybe, especially if an efficient but confusing implementation of an access method is hidden from the user.

I’m not sure I understand your question about applying it to your business. At that level, you want an application, and how it is implemented doesn’t matter, assuming the alternatives are of equal cost and quality. In a sense any application is object oriented. In Word, if you underline a word, you don’t care how that text is represented internally.

I sort of understood Object Oriented Programming - and then they hit me with Object Oriented Design. As far as I could tell, that involved drawing diagrams full of circles, arrows, and wavy lines over and over again until somebody yelled “stop!” - except that nobody ever yelled “stop!”.

I finally solved the problem by retiring. Best decision I ever made. (And I suspect that the folks who were promoting OOP came to the very same conclusion. :) )

Fortran has had OOP support for nearly a decade now, ya know. :stuck_out_tongue:

No. But I’ll give it a shot anyway :smiley:

I’m assuming you’re already familiar with functions/subroutines, and know why they’re a good idea. I’m also assuming you’re familiar with structs or similar kinds of complex data types (methods of treating programmer-specified combinations of data as single units).

The basic, fundamental idea behind OOP is that you tend to bind a set of subroutines or functions to specific data types: this data type can hold this kind of data, and then it has these operations that you can perform on it, and that other datatype has those other operations.

A powerful (and IMHO the most important) abstraction that flows from this basic idea is that you can use the “same” operations on different datatypes (the technical term is polymorphism). For instance (and forgive the banal example), you can have different datatypes for different kinds of shapes, say squares (denoted using 4 points) and circles (denoted using 2 points), both of which have a grow() operation. In this case you really define two different operations, but you give them the same name. Then you can have a set of mixed shapes (circles and squares) and grow() each item in the set without having to know which kind of shape it is.
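Here is a rough C++ sketch of that shapes example (no language is named above, and the shapes are simplified to a single size field rather than points):

#include <memory>
#include <vector>

class Shape {
public:
    virtual ~Shape() = default;
    virtual void grow(double factor) = 0;  // same name, two different operations
};

class Circle : public Shape {
public:
    void grow(double factor) override { radius_ *= factor; }
private:
    double radius_ = 1.0;
};

class Square : public Shape {
public:
    void grow(double factor) override { side_ *= factor; }
private:
    double side_ = 1.0;
};

// Grows a mixed set without knowing which kind of shape each item is.
void growAll(std::vector<std::unique_ptr<Shape>>& shapes) {
    for (auto& s : shapes) s->grow(2.0);
}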

The reason that’s such a good idea is, that if it’s done right, the only parts of the code that know about the difference between the kinds of shapes are directly related to the particular shape types - basically, it means you have a ready-made framework for hiding internal complexities so that you can build larger systems out of recursively smaller components.

Also, OO seems to map relatively well to the way people like to think about problems (though not all problems map well to OO, and even if they seem to map well, message passing and static, class-based implementations make some fairly common problems much more difficult to solve than they have to be).

My opinion: OO is a good idea, but don’t believe anybody who tries to tell you it’s the answer to all the problems.

This is true. Everyone knows that functional programming will end poverty, cure cancer, pass the Turing test, and make delicious pumpkin pie.

Hmmmm Pumpkin pie.

But seriously: I’m very happy that the languages I use the most professionally have at least basic built-in support for both OO and functional techniques. (And good riddance to Java.)

OOP is a single name for a bunch of concepts, each of which can be useful in isolation but have always been marketed as a single design philosophy. A bunch of fundamentally different languages all claim the mantle of OO-ness: Each OO language takes a different list of features from the general menu, meaning the ‘One True Definition of OO’ is about as slippery as the ‘One True Definition of God’ but a lot more contentious. The term ‘object-oriented’ was invented by Alan Kay for Smalltalk, so let’s use his list:
1. Everything is an object. (This means every piece of data you can manipulate in the language acts the same at a fundamental level. To Kay, this meant everything worked by receiving messages.)
2. Objects communicate by sending and receiving messages (in terms of objects).
3. Objects have their own memory (in terms of objects).
4. Every object is an instance of a class (which must be an object).
5. The class holds the shared behavior for its instances (in the form of objects in a program list).
6. To eval a program list, control is passed to the first object and the remainder is treated as its message.

Now the C++ and Common Lisp programmers are all angry: the C++ programmers because it doesn’t even mention information hiding, and the Common Lisp programmers because their language’s notion of OO isn’t based around message-passing. In fact, Common Lisp programmers get really, really angry at the last point, because it makes one object special and forces it to control the whole process, whereas Common Lisp is based on a somewhat complex scheme where everything passed into a function gets a say (multiple dispatch based on generic functions). (Plus, in C++ everything is not an object, something polite society usually passes lightly over.) In fact, the list only works if you want to define OO as equal to ‘What Smalltalk does’ and leave everyone else out. This list made a lot of people very angry and has been widely regarded as a bad idea.

Paul Graham, noted blowhard, has linked to this list due to Jonathan Rees. It’s much less Smalltalk-centric and is, as a result, longer and less comprehensible:
1. Encapsulation - the ability to syntactically hide the implementation of a type. E.g. in C or Pascal you always know whether something is a struct or an array, but in CLU and Java you can hide the difference.
2. Protection - the inability of the client of a type to detect its implementation. This guarantees that a behavior-preserving change to an implementation will not break its clients, and also makes sure that things like passwords don’t leak out.
3. Ad hoc polymorphism - functions and data structures with parameters that can take on values of many different types.
4. Parametric polymorphism - functions and data structures that parameterize over arbitrary values (e.g. list of anything). ML and Lisp both have this. Java doesn’t quite because of its non-Object types.
5. Everything is an object - all values are objects. True in Smalltalk (?) but not in Java (because of int and friends).
6. All you can do is send a message (AYCDISAM) = Actors model - there is no direct manipulation of objects, only communication with (or invocation of) them. The presence of fields in Java violates this.
7. Specification inheritance = subtyping - there are distinct types known to the language with the property that a value of one type is as good as a value of another for the purposes of type correctness. (E.g. Java interface inheritance.)
8. Implementation inheritance/reuse - having written one pile of code, a similar pile (e.g. a superset) can be generated in a controlled manner, i.e. the code doesn’t have to be copied and edited. A limited and peculiar kind of abstraction. (E.g. Java class inheritance.)
9. Sum-of-product-of-function pattern - objects are (in effect) restricted to be functions that take as first argument a distinguished method key argument that is drawn from a finite set of simple names.

This list makes people happier because it’s explicitly a variety platter: language designers are like people at a huge buffet, picking and choosing and roundly deriding the others for what are, in the end, matters of taste and style. Reading both of the pages I linked to is a very good idea.

I very much doubt that depending on a language’s privacy controls to protect memory from unwanted access is secure.

Two things: It isn’t my list. It’s taken directly from the site I linked. Also, this might be reasonable if untrusted or semi-trusted client code is being passed objects where (for example) they can only access a hashed version of a password the object knows. But there are better ways to do that without objects.

In a third-generation language, data and code are conceptually separate. You have a data structure and a library of procedures. To perform a task you call a procedure and pass the data structure as a parameter.

In object-oriented programming, data and code are not separated. You create a class description that includes both data and code. You don’t pass the data structure to a procedure; instead you invoke a *method* that is part of the object.
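A small sketch of the two styles side by side, with an invented account example:

// Third-generation / procedural style: data and code are separate.
struct Account { double balance; };
void deposit(Account* a, double amount) { a->balance += amount; }

// Object-oriented style: the method travels with the data.
class AccountObj {
public:
    void deposit(double amount) { balance_ += amount; }
    double balance() const { return balance_; }
private:
    double balance_ = 0.0;
};

// Account a{0.0};  deposit(&a, 50.0);  // pass the structure to a procedure
// AccountObj b;    b.deposit(50.0);    // invoke the object's method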