Learning Java Programming

I’m not going to argue that Java isn’t wordy; when I program in it I feel like I’m typing way too much. But, in your example, wouldn’t you just do something like the following:



int max = 0;
for (int i = 100; i <= 999; i++)
  for (int j = 100; j <= 999; j++)
    if (i * j > max) {
      String s = Integer.toString(i * j);
      // StringBuffer.equals compares references (and reverse() mutates in place),
      // so build the reversed value as a String and compare contents instead
      if (s.equals(new StringBuffer(s).reverse().toString()))
        max = i * j;
    }


I didn’t try to compile, so it could have errors, but you get the idea.
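For what it’s worth, the same search also fits Java’s own functional constructs. Here’s a sketch using Java 8+ streams (the class and method names are mine, not from the thread): generate all products, filter the palindromes, take the max.

```java
import java.util.stream.IntStream;

public class PalindromeProduct {

    // True if n's decimal representation reads the same backwards.
    static boolean isPalindrome(int n) {
        String s = Integer.toString(n);
        return s.equals(new StringBuilder(s).reverse().toString());
    }

    // Largest palindromic product of two 3-digit numbers.
    static int largestPalindromeProduct() {
        return IntStream.rangeClosed(100, 999)
                .flatMap(i -> IntStream.rangeClosed(i, 999).map(j -> i * j))
                .filter(PalindromeProduct::isPalindrome)
                .max()
                .getAsInt();
    }

    public static void main(String[] args) {
        System.out.println(largestPalindromeProduct()); // 906609
    }
}
```

Note there’s still no explicit accumulator: `max()` plays the role the mutable `max` variable played in the loop version.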

Yeah, I was gonna say the same, but refrained because of the GQ forum. Perhaps this thread should be moved to IMHO…if the OP is interested.

I’ve always (broadly) classed programming languages as 1) imperative, 2) functional (or declarative if you insist), and 3) logic (e.g., Prolog).

It does depend. For problems that are inherently side-effect free, I think that with some practice, a well-designed functional language makes it very straightforward to implement composable solutions. But it only works if you internalize the higher-level primitives those languages provide (things like map, filter, reduce, apply, etc.).
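Those primitives exist even in Java these days. A minimal toy illustration with streams (names and data are mine, purely for demonstration):

```java
import java.util.List;

public class Primitives {

    // map/filter/reduce composed into a pipeline: no loop counter,
    // no mutable accumulator.
    static int sumOfEvenSquares(List<Integer> xs) {
        return xs.stream()
                .filter(n -> n % 2 == 0)  // keep the even numbers
                .map(n -> n * n)          // square each one
                .reduce(0, Integer::sum); // fold them into a single sum
    }

    public static void main(String[] args) {
        System.out.println(sumOfEvenSquares(List.of(1, 2, 3, 4, 5, 6))); // 4 + 16 + 36 = 56
    }
}
```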

Right now I’m at a stage where, when I see code written in languages that provide both functional and imperative constructs, imperative solutions to inherently functional problems are becoming irritating, mostly because they tend to lack so much abstraction.

Yeah, probably. The sample code does for Java the opposite of what it does for J, and could also probably be improved quite a bit in terms of conciseness and directness (as you’ve shown). This is the danger of comparing languages using random sample code…

Addendum: a quick rule of thumb to see the (lack of) abstraction is to count the number of variables (especially mutating ones) you need to implement an imperative solution vs a functional one. My palindrome code above only uses 2 (or 3, if you count the argument to the palindrome? function), and none of them are altered (you can’t even do that to these particular “variables” and function arguments in Clojure).

Counting mutable variables used unnecessarily is one thing, but counting variables in general seems a bit of an arbitrary heuristic. For instance, we could rewrite all lambda expressions into point-free style in terms of various basic combinators, thus removing all the explicit variables, but would anything actually be gained by this?
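To make the point-free question concrete in Java terms (a toy example of my own): a lambda with named parameters can often be replaced by a method reference, which erases the explicit variables without changing the function at all.

```java
import java.util.List;
import java.util.function.BinaryOperator;

public class PointFree {

    // Explicit variables a and b...
    static int maxWithLambda(List<Integer> xs) {
        BinaryOperator<Integer> biggest = (a, b) -> Math.max(a, b);
        return xs.stream().reduce(biggest).orElseThrow(IllegalArgumentException::new);
    }

    // ...versus the "point-free" method reference: same function, no names.
    static int maxPointFree(List<Integer> xs) {
        return xs.stream().reduce(Math::max).orElseThrow(IllegalArgumentException::new);
    }

    public static void main(String[] args) {
        List<Integer> xs = List.of(5, 1, 9, 3);
        System.out.println(maxWithLambda(xs)); // 9
        System.out.println(maxPointFree(xs));  // 9
    }
}
```

Whether `Math::max` is clearer than `(a, b) -> Math.max(a, b)` is exactly the judgment call under discussion.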

I see it as a rule of thumb, not something that’s cast in stone, but if both alternatives still look “natural” or idiomatic, I think it does give a hint.

As an example, implementing a MAX function in terms of REDUCE and > instead of a C-style for loop and a separate “current max” variable:

pseudocode:



function max(values) {
  mymax = values[0]
  for (i = 1; i < length(values); i++) {
    if (mymax < values[i]) {
      mymax = values[i]
    }
  }
  return mymax
}


vs



function max(values) {
  return reduce(function(a, b) { return (a > b) ? a : b }, values)
}


And the second version also allows for “non-linear” collections, since it only cares that reduce() can fetch each value.
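In Java terms that generality is visible in the types: a reduce-based max only needs a source of values, so it accepts any Collection, ordered or not (a sketch; the names are mine):

```java
import java.util.Collection;
import java.util.Optional;
import java.util.Set;

public class MaxDemo {

    // reduce-based max: no index variable, no mutable "current max",
    // and it works on any Collection, not just an indexable one.
    static Optional<Integer> max(Collection<Integer> values) {
        return values.stream().reduce((a, b) -> a > b ? a : b);
    }

    public static void main(String[] args) {
        System.out.println(max(Set.of(3, 9, 4))); // Optional[9]
    }
}
```

Returning Optional also sidesteps the empty-collection question that the "start from 0" loop version quietly gets wrong.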

Of course, the second version looks a lot neater in a Lisp :slight_smile:



(defn max [values]
  (reduce (fn [a b] (if (> a b) a b)) values))


Well, some might argue that your second version has two variables (a and b) just as your first does (mymax and i). And while you are presumably thinking of mymax and i as mutable, and thus less abstracted, that interpretation is not really intrinsic to that pseudocode, I would say. But perhaps that’s because I’m so automatically willing to interpret for loops as folds, whereas others may choose not to consider the transformation between the two as an identification. At any rate, whatever. As you say, just a heuristic.

Incidentally, just for fun, I decided to rewrite my Haskell code to have absolutely no variables:


import Control.Arrow ((&&&))

maximum $ filter (uncurry (==) . (id &&& reverse) . show)
        $ map product
        $ mapM id [[100..999], [100..999]]

Of course, now it’s much less clear what’s going on without learning the particular vocabulary, whereas the earlier version was presumably quite readable…

Sure. I just picked the first 3. Actually, I skipped a C# example cos it was just horrible!

But there is something about Java (and friends) that makes people want to be verbose and obscure. People who learn Ruby or Lisp or whatever tend to write better Java too.

Hence my advice that Java is not the best language for beginners.

Can you give any examples of ways that Java is written better when someone knows Ruby? I’m still curious about your original statement regarding non-Ruby/Python languages encouraging (or something) poor design, but I’m still not sure what you are getting at. Can you list some concrete points/examples? (Alternatively I could google and read various articles on Ruby, which I have done a little, but I thought it would be quick to have you explain your experiences).

Oh, definitely, even though I took issue with the specific sample, I agree with all of this.

I still don’t know what the best language for beginners is, though. I would like to see more weaned on languages outside the imperative paradigm, for example, but I’ve mellowed out from previous dogmatism and come to think, for the most part, no matter what, any first language will do, so long as one is interested enough to keep exploring the full spectrum of possibilities afterward.

(My first language, incidentally, was QBASIC. I don’t know if that was good or bad, but it’s what I found a book on and an interpreter for as a kid, way back when. I think that’s how it is for most people; you pick a language that’s for some reason or another most “available” to you, and start with that. If it’s not the ideal, eh, you can do the ideal next. [But I am large, and contain multitudes, and, like I said, get annoyed by how set various people do get in the ways of just what they’re used to. Oh well; figuring out optimal pedagogy is hard.])

I started with Sinclair Basic for the same reason - it’s what I had.

And then you get the weird corner cases like Mercury (a functional logic language that uses narrowing instead of unification), Maude (a language based on rewriting logic), and Pure (a language based on equational logic); and then there’s Haskell’s State monad, which essentially lets you write imperative programs in a pure functional language, as well as the Logic and List monads, which let you embed Prolog-like backtracking logic programming into Haskell.

Of course, let’s not even get on to the subject of dependent types! Are we programming or doing mathematics in languages/tools like Agda, Epigram, Matita and Coq? Is there really a difference? :wink: