A couple of questions about the Java programming language.

In some more modern languages, primitive types are objects as well. That doesn’t necessarily mean that primitives get treated awkwardly in the compiled code, however, since modern compilers are smart enough to make the required transformations.

There has been a clear trend towards making languages simple and consistent, and letting compilers do the work of figuring out how to allocate memory, arrange calculations, clean up afterwards, &c. C++ lets you create objects on the stack or on the heap at your discretion, depending on how the object is to be used. Java simplifies this by only ever creating objects on the heap. An ideal compiler would just let you create the object and then decide for itself how to allocate it, based on how it is used in the code, making “heap vs. stack” an implementation detail rather than part of the language.

This is what Go does… sort of. Things are allocated on the heap unless the compiler can prove the value won’t be referenced outside the function/method. I’m not sure exactly what the criteria for “proving” this are, but I think you’re generally safe if you don’t return the address of a value, and you don’t assign the address to any struct or pointer that can’t itself be proven to exist only inside the function.

ETA: On C++: it can be OO if you want. Hell, it can get pretty close to functional if you want. C++ is the goddamned kitchen sink language.

Man, I need to get me one of those…

:slight_smile:

It’s probably better (and more correct) to say that the variable holds the value in all cases. Objects are not values in Java; references to objects are. If you think about it like this, the picture becomes simpler: operations on variables are always operations on the direct value the variable holds. This is as true of the == operator as it is of the . operator. Pretending that objects are really values can lead to other common misconceptions, like the idea that Java has pass-by-reference, which it doesn’t.
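
For example, a throwaway sketch (the class and variable names are made up purely for illustration):

public class ValueDemo {
  static void clobber(String s) {
    s = "goodbye"; // reassigns this method's copy of the reference; the caller's variable is untouched
  }

  public static void main(String[] args) {
    String a = new String("hello");
    String b = new String("hello");

    System.out.println(a == b);      // false: == compares the reference values, which point to two different objects
    System.out.println(a.equals(b)); // true: String.equals() compares the characters

    clobber(a);
    System.out.println(a);           // still "hello": the method got a copy of the reference, not the variable itself
  }
}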

Maybe I’m being overly pedantic, but none of these things are necessary or sufficient for object-oriented programming. And certainly none of them are necessary for good programming. :wink:

Presumably, at least part of the reason for this behavior on Java’s part is its evolutionary descent from C. C treats scalars and arrays (including strings) differently for the quite simple reason that at the time, it just wasn’t practical to treat them the same. Hence also why arrays in C are (effectively) pass-by-reference even though everything else is pass-by-value.

It’s worse than that. C doesn’t have an actual array type at all. Just pointers, a rule that (pointer) p + (integer) n is the pointer to the memory slot n away from p, and some syntactic sugar that converts the expression “p[n]” to “*(p+n)” to make it look like it has arrays.

But by the time Java was conceived, C++ had been around for a while, and it supports both stack and heap creation of objects. I think they just said, “two is complicated, let’s just do one” and picked the heap because it was more flexible.

Yeah, I meant to convey that C’s arrays are a sort of hodgepodge kludge, but you expressed that better than I could.

Here’s a list of things ‘Object Oriented’ can mean. Any given OO language will pick and choose from this list like a diner at a restaurant, meaning that, overall, OO is not very well-defined. More to the point, people in Internet debates can pick and choose from the list to do things like prove Java isn’t OO, because (to choose one item) it has values which aren’t objects, such as ints.

Anyway, here’s an interesting little document which shows how to do one common definition of OO in ANSI C without actually breaking down and choosing Objective-C. (The C-like subset of C++ isn’t ANSI C. Objective-C, OTOH, actually is a strict superset of ANSI C.)

If you want to get super pedantic, you can use pretty much any Turing-complete language as a primarily functional language, or an OO one, or whatever, because you can implement the infrastructure to handle it (at the most ridiculous level, you can build a compiler in the language itself to compile an OO syntax). Though obviously, if the language isn’t very OO to begin with, at least a portion of your “backend” is going to be non-OO. That kind of robs it of what little meaning it had, though.

It sounds like you’re understanding the basics, but be aware the equals method is a method on the object written by the creator of the class. It’s up to the programmer to decide what equal means when comparing two objects. The Java machine itself doesn’t figure out how to compare two objects.

String is an object and it has the equals method written to compare the characters in the two String objects and return true if they match.

When you write your own class, you will need to override the equals method if you want it to compare anything more meaningful than object identity, which is the default behavior inherited from Object. So if you want to do this:

if (myobject1.equals(myobject2))

Then you have to override equals() in MyObject yourself.
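
A minimal sketch of what that might look like (MyObject and its id field are invented here just for illustration):

public class MyObject {
  private final int id;

  public MyObject(int id) {
    this.id = id;
  }

  @Override
  public boolean equals(Object other) {
    if (this == other) return true;                  // same instance, trivially equal
    if (!(other instanceof MyObject)) return false;  // also handles other == null
    return this.id == ((MyObject) other).id;         // you decide what "equal" means for your class
  }
}

Once that’s in place, myobject1.equals(myobject2) compares the id fields instead of falling back to Object’s default, which only checks whether the two references point to the same instance.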

I’ve been an avid low-level language user for 50 years. Higher-level languages just didn’t exist then, unless the term includes Lisp or Snobol. For a while I tried to get into C++, but the details were much too off-putting. (Lately I’ve done some non-trivial Javascript for webpages. If it is “O-O”, include me out!) In the thread to recommend student languages I almost mentioned, only half-jokingly, Fractran.

I’ll accept the claim that “O-O” is best for beginners; that makes me unqualified to participate in these threads. Nevertheless I have to rebut this:

C has excellent syntax and semantics for arrays. I am aware that “type” can be defined in such a way that C’s arrays do not qualify.

Beauty is in the eye of the beholder. Rather than hijacking this thread, let’s agree to strongly disagree.

There’s an old saying that C programmers can write C code in any language, and Fortran programmers can write Fortran code in any language. From my experience, this is true, and also applies to a great many other programming languages. Heck, I’ve seen people write C++ code in the dinky little scripting language that came with an old chat program.

Hey, I never said it wasn’t beautiful. C is actually my programming language of choice, and hodgepodge kludges can have an aesthetic all their own, so long as they work.

As I said above, when you write an equals() method, also be sure to implement hashCode(). In Java, any object can be used as a hash key (for the HashMap and Hashtable classes, for instance), and Object defines a hashCode() method for this. If you implement equals(), you also need to implement hashCode() in a way that ensures the same value is returned for any two object instances that your implementation of equals() reports as equal. The base implementation of hashCode() matches the base implementation of equals(), and probably simply returns the address of the instance.
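
To make that concrete with the invented MyObject sketch above, where equals() compares an id field, a matching hashCode() could be as simple as:

@Override
public int hashCode() {
  // must agree with equals(): two MyObjects with the same id return the same hash code
  return Integer.hashCode(id);
}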

As for the more general parts of this discussion, OK I’ll play:

I’m retired now, and mostly only program for my own purposes. I use Java. I first worked as a professional software developer in 1978, so I think I can say I have a very long perspective. I moved from mainframes to PCs/workstations to the web over that time.

I would suggest that Java is what C++ should have been: an OO language suitable for commercial development, and comfortable for the old base of C programmers out there. C++ was heavily constrained by Stroustrup’s goal that existing K&R C code would go through the C++ compiler and work. He retained a bunch of old baggage that Java made a clean break from, such as the pointer and the way it masquerades as array notation in C, the lack of a boolean primitive type (which led to C++'s sloppy way of doing conditional tests), the indeterminacy of the sizes of the primitive types, and other things. Not to mention retaining the old “struct” semantics grafted onto class definitions, and having to say “virtual” to get actual polymorphism.

What I like about Java, not necessarily OO features:

Garbage collection - I would say that 95% of the bugs I had to chase for years in C code were from memory corruption or leaks. In large commercial projects involving many people, programmers managing their own memory allocations led inevitably to these. You could run something like Purify and discover an embarrassing number of bad memory references in the standard runtime libraries you were using. When you first start coding Java, it makes your scalp crawl to do something like:

x.someMethod(new Thing());

but you get used to it, and eventually you realize how much effort you were putting into managing memory allocation in C-based code.

Java has a very good, usable thread implementation. MUCH easier to write threaded code in Java than with any lightweight process / thread model I ever used in C / C++.
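
For instance, spinning up and joining a worker thread is about this much code (a trivial sketch):

public class ThreadDemo {
  public static void main(String[] args) throws InterruptedException {
    Thread worker = new Thread(() -> System.out.println("working in " + Thread.currentThread().getName()));
    worker.start(); // runs the lambda on a new thread
    worker.join();  // wait for the worker to finish
  }
}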

It doesn’t have C++'s fragile multiple-inheritance lattice, which forces you into a lot of investigation as to WHICH method implementation you are going to wind up with. Java doesn’t support multiple inheritance for a good reason - the concept of interfaces takes care of almost all of the places where you would want it anyway. The interface concept is very useful.
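
For example (all names invented just to illustrate), a class can pick up as many interface contracts as it likes while still inheriting implementation from only one parent:

class Document {
  protected String title = "untitled";
}

interface Printable {
  void print();
}

interface Persistable {
  void save();
}

// Only one superclass, but any number of interfaces; no diamond to untangle.
class Report extends Document implements Printable, Persistable {
  public void print() { System.out.println("printing " + title); }
  public void save()  { System.out.println("saving " + title); }
}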

The way inner classes work in Java seems mondo screwed up when you first encounter it, but it’s actually useful. You can sub-encapsulate a lot of logic which needs access to the members of an instance of the defining object. If it were simply a scoping mechanism, it wouldn’t work nearly as well in a large number of the places where you use it.
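
A rough sketch (invented names) of what that buys you: the inner class can read and update the enclosing instance’s private state directly, which a plain top-level class couldn’t do without extra plumbing.

import java.util.ArrayList;
import java.util.List;

public class Playlist {
  private final List<String> tracks = new ArrayList<>();
  private int position = 0;

  public void add(String track) {
    tracks.add(track);
  }

  // Non-static inner class: each Cursor is bound to one Playlist instance
  // and can touch its private fields directly.
  public class Cursor {
    public String next() {
      return tracks.get(position++); // reads and updates the enclosing Playlist's state
    }
  }

  public Cursor cursor() {
    return new Cursor(); // implicitly tied to 'this' Playlist
  }
}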

Oh, and I’ll add exception handling. I’ve always believed that exceptions were the correct way to handle errors. It kept you from having to write a lot of dreadful error-handling / status-block-returning code to pass error information back from the low-level call that actually encountered it to the high-level call that actually knew how it should be handled. Having exception-specific try / catch / finally operations built into the language streamlines a lot of your error paths and cleanup.
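
In Java that looks something like this (the file name and the process() helper are just placeholders):

import java.io.FileReader;
import java.io.IOException;

public class ReadDemo {
  public static void main(String[] args) {
    FileReader in = null;
    try {
      in = new FileReader("data.txt"); // low-level call; an IOException from here bubbles up
      process(in);
    } catch (IOException e) {
      // handled at the level that knows what to do about the failure
      System.err.println("couldn't read data.txt: " + e.getMessage());
    } finally {
      if (in != null) {
        try { in.close(); } catch (IOException ignored) { } // cleanup runs on success and failure alike
      }
    }
  }

  static void process(FileReader in) throws IOException {
    // ... do something with the reader ...
  }
}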

I don’t like exceptions, or the try/catch framework. I think errors should sanely be handled by multiple return values (or something similar, like tuples), one of which is an error type or satisfies an error interface.

IMO, if you’re going to do something like exceptions that will interrupt the program and bubble up the stack, the only sane circumstance to “catch” such a thing is to clean up the program and kill it.

Basically, I think that if it’s worth blowing up over, it shouldn’t be recoverable. Some people complain this way is too verbose, though. Here’s what I’m talking about in Go. Functions return multiple values, so to open a file:

func DoThingsWithAFile(filename string) (int, error) {
  file, err := os.Open(filename)
  if err != nil {
    // check the error before deferring Close, since file may be nil if Open failed
    return 0, errors.New("Could not open file: " + err.Error()) // err.Error() returns the message as a string
  }
  defer file.Close() // defer causes this call to run whenever the function returns (or exits due to a panic)

  // Do stuff with the file, computing myCalculatedNumber
  return myCalculatedNumber, nil
}

Go’s “equivalent” to exceptions is the panic, which is only used when an error is fatal. That doesn’t necessarily mean your entire program can’t keep running, but at least you’ve failed at the package level. Panics are used to unwind the stack (for instance, the json decoder will panic on malformed JSON, let the panic propagate up to its top-level function, and then return an ordinary error to the caller). Here’s a blog post on it: Defer, Panic, and Recover. I much prefer this to exceptions, personally, which I always found obnoxious and overused. Especially checked exceptions.

“Exception” is just a fancy word for “goto.”