Learning Java Programming

And, yes, you should learn some discrete mathematics. People who are scared of mathematics are people who are scared of abstract thinking. You can’t be a good programmer if you are scared of abstract thinking, particularly thinking about the kinds of abstract things which are most relevant to programming, which are… discrete math.

Well, in the sense that anything viewed abstractly is to be considered math. But algorithms are certainly possible that have nothing to do with numbers or computers or anything else you’d normally think of when you think of math or computer science. For example, listening to a series of dots and dashes and then translating them into letters using Morse code? That’s an algorithm. Learning a particular mechanical (ambiguity-free) strategy for how to avoid ever losing at tic-tac-toe? That’s an algorithm. Following the instructions “First, if the number you would like to call is non-local, dial 1 and then the area code. Then, no matter what, dial the seven digit number. Wait until you hear either a busy tone, an answering machine, or a human asking ‘Hello’. If a busy tone, hang up, wait one hour, and then repeat this process from the beginning. If an answering machine, wait for the beep, then say your name and number, and then hang up. If a human, say ‘Hello. It’s me’…” is an algorithm. (You might object that potentially other possibilities could arise in the indicated situation… well, it’s all a matter of viewing things at the right level of abstraction. Use the abstraction where these are the only possibilities. It’s still an algorithm. Even in programming, we are always abstracting away from possibilities… the computer could glitch, suffer a power failure, etc. The level of abstraction to select is always context-dependent.)
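(Just to make the Morse code example concrete: here's a rough sketch in Java of the dot-and-dash translation step. The class name, the decode helper, and the tiny lookup table are all made up for illustration, and the table only covers a handful of letters.)

import java.util.Map;

public class MorseDemo {
    // An illustrative subset of the Morse code table, not the full alphabet.
    private static final Map<String, Character> CODE = Map.of(
            ".", 'E', "-", 'T', "...", 'S', "---", 'O', ".-", 'A', "-.", 'N');

    // Translate a space-separated series of dots and dashes into letters.
    static String decode(String signal) {
        StringBuilder out = new StringBuilder();
        for (String symbol : signal.split(" ")) {
            out.append(CODE.getOrDefault(symbol, '?'));
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(decode("... --- ...")); // prints SOS
    }
}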

You’ve convinced me. I was thinking about the limits of the mind, and practical physical limitations, but even there, it’s not as clear cut as I made it seem.

Well, surely 2 * 2 = 4 is a rather fundamental fact in discrete math. :slight_smile: But, yeah, De Morgan’s laws are just particular random facts. But programming is one place where these facts are particularly useful.

What are De Morgan’s laws, you ask? Consider bits, which can be either YES or NO. Just like there are various things we can do with numbers (add them, multiply them, and so on), there are various things we can do with bits. For example, we could conjoin (AND) them: when ANDing a bunch of bits, the result is YES if all of the bits were YES, and the result is NO if at least one of the bits was NO. So YES AND YES = YES, while YES AND NO = NO. Or we could disjoin (OR) them: when ORing a bunch of bits, the result is NO if all of the bits were NO, and the result is YES if at least one of the bits was YES. So NO OR NO = NO, while NO OR YES = YES.

If you want to be a programmer, you really should become comfortable with this. The exact terminology might be different (booleans instead of bits, 0 and 1 or FALSE and TRUE instead of NO and YES, all kinds of different names or symbols to represent AND and OR), but these concepts come up over and over and over in programming. Whenever you’re talking about something that can have one of two values, these abstract concepts come into play.

Anyway. Observe that the definition of AND is exactly like the definition of OR, except all the YESes and NOs are swapped (i.e., negated). This observation is De Morgan’s laws. Nothing more.

That is, De Morgan’s laws tell us that whenever you take the AND of a whole bunch of bits, you get the opposite result from taking the OR of their opposites, and vice versa. So, for example, YES AND YES AND NO AND YES AND NO = NO, while NO OR NO OR YES OR NO OR YES = YES. AND and OR are each other’s mirror images under the symmetry/duality swapping YES and NO. (We can phrase this same fact various ways, of course, but that’s one simple way to put it.) A fact of some importance which will at least help you wrangle down complex boolean expressions, and which you would do well to be familiar with at some intuitive level in your programming life. But knowing specifically that it’s called De Morgan duality or whatever doesn’t matter. That’s just history/terminology.
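If it helps to see that duality in code, here is a tiny sketch in Java (the class and variable names are just for illustration): &&, ||, and ! are Java's AND, OR, and NOT, and both printed comparisons come out true for every combination of inputs.

public class DeMorganDemo {
    public static void main(String[] args) {
        boolean[] values = {true, false};
        for (boolean a : values) {
            for (boolean b : values) {
                // NOT (a AND b) is always the same as (NOT a) OR (NOT b)...
                System.out.println(!(a && b) == (!a || !b)); // true
                // ...and NOT (a OR b) is always the same as (NOT a) AND (NOT b).
                System.out.println(!(a || b) == (!a && !b)); // true
            }
        }
    }
}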

The designers of Ruby and Python have gone to great lengths to make it easy to express your intention as a designer and as a programmer without worrying about the piles of cruft that build up in those other languages. There is close to no cruft, so you can think about the problem in the language of the domain instead of thinking about how Session EJBs work or when it’s safe to free your memory or what a char** pointer is pointing to.

Once you have been programming with Java for a while (or C or VB or whatever) you start to think that all that crap is natural. Java has the additional problem that the libraries are even uglier than the language, so all the examples you might want to learn from are actually bad examples.

C has a different problem in that it forces you to think too much about how computers work. If you are building a solution for Problem X, you should be thinking about Problem X, not about how computers work.

Just to re-emphasize an earlier point - if you are learning Java as a first language, you’ll be fine. Any bad habits you pick up will be fixable if you are motivated to learn and if you are not, you’ll be doomed anyway.

If you have a choice, you’d be better off with a different language but you’ll be fine with Java.

I can’t imagine learning a language from an instructor but, then again, I have never tried it.

I think that this part of your post is mostly correct. However - and I don’t want to get into language wars - if you really think that Ruby & Python are the shit, take a look at Clojure, especially its collection functions/methods and their implementation. I really don’t know of any other language that combines efficiency and ease of use so well.

Man, I just realized the irony of using this as an example of an algorithm that has nothing to do with numbers… :slight_smile:

I was hoping you would give me a concrete example or two. I haven’t used Ruby or Python, so I don’t know what I’m missing.

No. I spent much of my career working on proprietary languages. But when somebody asks specifically about learning Java and you say, “No! Learn this instead!” it’s good to tell them, “Oh, by the way, it’s a proprietary language designed to work only on Windows.” For all you know, he may be a Mac or Linux user.

Emphatically YES!

However, there’s a caveat. There are certain concepts that are pretty universal, like variables and loops. But if you start with a language like C, there will be a lot of new concepts to pick up when you go to an OOP language. And assembly languages are a world unto themselves (although not much work is done in them these days).

Learn Java, and C++, C#, and VB will be pretty easy to pick up. Even Javascript. But APL will be starting over from scratch.

Does anyone still use APL? I taught a class in it in the ’70s, but I don’t think I’ve seen it since. It was a really cool programming language.

Well, probably no one uses APL itself that much anymore, but it has modern successors like J… (Does anyone use J?)

Incidentally, J also incorporates ideas from Backus’s function-level programming, which, in particular… avoids explicit variables (in favor of combinators/point-free style), though the name caused it to often be confused with functional programming, which among other things… avoids explicit loops (preferring recursion).
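(If that distinction is fuzzy, here's a loose little sketch in Java of the "avoids explicit loops, preferring recursion" half; the function-level, point-free style doesn't translate nearly as nicely into Java, so take this as an illustration only, with made-up method names.)

public class SumDemo {
    // Imperative style: an explicit loop and a mutable accumulator.
    static int sumLoop(int n) {
        int total = 0;
        for (int i = 1; i <= n; i++) {
            total += i;
        }
        return total;
    }

    // Functional style: no loop and no mutation; the repetition is expressed by recursion.
    static int sumRecursive(int n) {
        return n == 0 ? 0 : n + sumRecursive(n - 1);
    }

    public static void main(String[] args) {
        System.out.println(sumLoop(10));      // 55
        System.out.println(sumRecursive(10)); // 55
    }
}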

IMHO, it’s probably the most important thing for a novice to learn, as it lays the foundation for anything and everything you’ll learn subsequently. (I’ll soften that a bit and say that that’s not necessarily the case if you’re just looking to be a code monkey.) As a freshman, I can say that I didn’t get that – I sold my textbook as soon as I finished that class. But it’s the only textbook that I re-bought later.

There are so many topics that you’ll use over and over again: boolean logic, set theory, basic proofs, permutations/combinations, probability, graph theory, finite state machines, grammars and languages, etc. You just can’t be proficient without it, no matter what branch of computer science you go into.

Here’s the thing, though: IMHO, discrete math is one of those things that is possible to pick up incidentally. That is, if you really get into computer work, you’ll end up knowing a lot of discrete math without even knowing you’ve learned it. However, it’s good to have it presented explicitly in one bundle.

Ugly? That seems like an odd characterization to me. My grad school OS prof quipped one time that CS is nothing but integer (i.e., discrete) math. Heh…things don’t get much cleaner than that. Difficult? Probably…from what I can tell, it’s the first isolated theory (that is, divorced from any application) many people get.

DeMorgan is basic (boolean) logic. Tedious, but necessary…do enough truth tables and/or circuit diagrams and such logic reductions will become second nature. All I’ll say concerning algorithms is that perhaps you should look at it as an exercise in definitions. You’ll be doing lots more of that…

Oh, I might as well pipe up on the “first language” thing also: either C or Scheme. Thinking about it now, perhaps those suggestions are simply dancing around the extremes – C gets one pretty close to bare metal while Scheme is/can be pretty damn abstract (i.e., lambda calculus).

My alma mater used Java and Prolog as the primary teaching languages in the first two years. The year after I started, they completely revamped the curriculum and merged all “informatics” (CS, AI, cognitive science, computational linguistics etc.) courses under a single introductory course, using Haskell, Java, C and Prolog as the teaching languages in the first two years.

Here’s an example in Ruby that I just Googled. It uses a library (Watir) to open a browser, go to a web page, find all the images, and save each of them to a file.



require 'watir'

browser = Watir::Browser.new
browser.goto('http://twitter.com')

count = 0

browser.images.each do |image|
  count += 1
  image.save("c:\\tmp\\file-#{count}.jpg")
end


Quick anecdote:
I have used Java and C# for about 15 years and I know them very well. I have used Ruby on and off for about 5 years but not enough to be good at it yet.

I teach a class on Extreme Programming where I have the students build a spreadsheet in a day using Test Driven Development. My current gig is a .Net shop so I teach the class in C#. I hadn’t taught the class for a while and hadn’t used C# for a while so I did a little rehearsal the night before.

I tried three times but was never happy with the results and then I switched and tried it in Ruby and the design just flowed from my fingers… I went back and did it again in C#, copying the design of the Ruby version and it ended up great (although about 3 times as long because of all the cruft).

I probably still have the code somewhere. I’ll try to find it.

This is a thread about learning programming, so I won’t get into language wars but I’ll comment after some background so newbies will know what we are talking about.

There are three great “schools” of programming: imperative, functional and object-oriented. A good programmer would be familiar with all these styles but would spend 98% of their time with one style. In industry, the object-oriented school has overwhelmingly “won”.

Functional programming languages (like Clojure, F# and Scala) are making a bit of a come-back, but they are also infiltrating object-oriented languages. This is one of the reasons that C# is so much better than Java and Ruby is so much better than both.

A functional language like Scheme is a great choice to learn if you just want to dabble in programming. (I’m not suggesting that functional languages are only for dabblers - just that they are especially good for dabblers)

A great way to compare programming languages is a site like Project Euler.

They regularly post new math questions that are best solved with a little programming. Once you have entered the right answer, you have access to a forum where people post their solutions in a multitude of different languages. It’s fascinating to see how people approach the problem differently in Java and Basic and Lisp and Assembler and a bunch of unfamiliar (to me) languages like J and Haskell.

The questions range in difficulty from simple to challenging.

Compare Java



int max = 0;
for (int i = 100; i <= 999; i++)
    for (int j = 100; j <= 999; j++)
        if (palin(j * i))
            if (j * i > max)
                max = j * i;

System.out.println(max);

Note: I've created 2 utility methods for Palindrome and Reverse

public static long rev(long n)
{ // This method simply returns a reversed number
    String s = "" + n;
    StringBuffer sb = new StringBuffer(s);
    sb = sb.reverse();
    s = "" + sb;

    return Long.parseLong(s);
}

public static boolean palin(int n)
{ // This method checks whether a number is a palindrome
    String s1 = "" + n;
    String s2 = "" + rev(n);
    if (s1.equals(s2))
        return true;

    return false;
}


with Ruby



max = 0
100.upto(999) { |a|
  a.upto(999) { |b|
    prod = a * b
    max = [max, prod].max if prod.to_s == prod.to_s.reverse
  }
}
puts "Maximum palindrome is #{ max }."


and J



>([:{: ]#~ (=|.&.>)) <@":"0 /:~(0:-.~[:,>:/**/)~(i.100)-.~i.1000


Which one would you recommend for a beginner? Which is easiest to read?

I don’t know about you actual programmers, but me, as someone who just tinkered with BASIC as a kid, I think the Ruby is easiest to read, but I like the Java because I can actually see how it was done, instead of what looks like having to memorize a bunch of functions.

And J looks as indecipherable as commercially obfuscated JavaScript. In fact, it looks like the language was designed to look intimidating.

I wouldn’t put those three things together like that, really. Object-oriented programming isn’t exclusive to either of the other categories. Functional programming is a specific style of declarative programming, which can be contrasted with imperative programming. OOP is more of a problem-solving methodology (representing a problem in terms of the interactions between objects) which is sensibly contrasted with procedural programming (representing a problem in terms of procedures or subroutines).

But even procedural vs. OOP isn’t a perfect breakdown: although procedural programming is strictly a sort of structured imperative programming, OOP is more abstract, and can be used in both imperative and declarative paradigms. A functional program can easily be object-oriented as well, and many popular languages that readily support programming in a functional paradigm provide object-oriented features. Examples include F#, OCaml, Common Lisp, Nemerle, and JavaScript.

It would be more precise to say that imperative programming is far and away the most commonly encountered in industry, and as a consequence nearly all object-oriented programming in industry is done with imperative languages. OOP is pretty popular – so much so that people everywhere have a nasty tendency to try to use it when it’s not at all appropriate – but its popularity is largely as a subset of imperative programming and not in contrast to it.

Just a straight-up translation to Clojure for that palindrome program:


(defn palindrome? [num]
  (= (str num) (apply str (reverse (str num)))))

(apply max (filter palindrome?
                   (for [x (range 100 1000) y (range 100 1000)] (* x y))))

Nice!

A lot of people say that functional languages are “more natural” - maybe they are for maths problems - but I have to twist my mind a little to write with them. I guess it depends on what you are used to.

The Ruby and Clojure are quite readable while still being appropriately, trivially compact, but presumably the J could be made that way as well; it just hasn’t been. And, of course, some of the extent to which it looks unreadable is simply unfamiliarity with its symbolic vocabulary, which is not really a terrible flaw.

Anyway, because I always feel the urge to do this, here’s Haskell (a very purely functional language) for the palindrome thing, though it’s quite close to the Ruby and Clojure proposals as well, since they’re all quite direct (as they should be):


maximum [c | a <- [100..999], b <- [100..999], 
             let c = a * b, 
             let s = show c, s == reverse s]

I’d originally written that all on one line (to be handed directly to the interpreter, no need for compilation), but broke it up when posting it here to look nicer in the code box.