Programmers! What language for a total novice?

Befunge!

Don’t be silly, he wants Whitespace.

Because I had no idea what the difference was. I was asking you to explain.

I’m a total novice too, but I’ve been learning ActionScript and it’s pretty easy. It has both procedural and object-oriented concepts, so I’m hoping it will help me transition to more object-oriented languages: I get exposed to OO early, but I’m not forced to use it when it gets too complicated for me as a beginner.

None of the ones whose resumes cross my desk. Maybe they’re embarrassed.

I would probably recommend Python, since it at least gives you some freedom to explore techniques, has decent support for all kinds of external libraries, and still sort of forces you into a sane style. The reason I’m not 100% certain is that I’ve hardly done any Python programming myself.

Javascript is also an interesting language for beginners, but its main problems are that browsers tend to be filled with annoyingly incompatible extensions, and that almost every book and website about it doesn’t know what it’s talking about.

OO-focused languages like C++, Java, and C# might look good on a resume, but anyone who claims OO is easy to do right is full of crap. It takes a lot of work to really learn OO (IMHO, at least a couple of years for an averagely bright programmer), and even then it doesn’t suit quite a lot of the problems programmers are trying to solve.

Hell, back in the day (1991/1992-ish), we had Pascal as our first programming course, and C as the second. C++ was around, but OOP was a special topics senior-level course, and JAVA was just some guys at Sun’s fever dream at that point.

I agree with TimeWinder: it’s best to learn HOW to program. Although I’d take it one step further: there’s learning to think about how to solve your problems procedurally, and there’s learning to do it in an object-oriented fashion.

For learning procedural programming (and as an intro to programming in general), Pascal’s still probably the best bet. For learning OOP concepts, I’d say JAVA is better than C++ or C#, if only because JAVA was built from the ground up for OOP. C# is sort of an outgrowth of C++, which grew out of C, and it lets you do things you probably shouldn’t, while JAVA’s a little more controlled.

I wouldn’t recommend Python, if only because it’s implicitly typed, which will cause fits when people move to manifestly typed languages. I also think the off-side rule is a horrid, horrid idea for delimiting blocks, and that’s one more reason not to recommend Python.

If you’re interested in Javascript, there’s an excellent book by Douglas Crockford called JavaScript: The Good Parts. It’s not a beginner’s tutorial, but it will provide you with excellent strategies for avoiding the major pitfalls. And it has an excellent overview of Javascript’s prototypal OO and functional features.

Embarrassed about what?

I know. It’s a great book, but it’s pretty embarrassing that a language as widely used (by pros and enthusiasts) as Javascript only has one really good book. And as much as I think Crockford knows what he’s talking about, I think his book is targeted at serious programmers who want to tackle Javascript. But what the language needs is a book aimed at fairly novice programmers that does what Crockford does well:

a) really explains the language mechanics correctly instead of making shit up, and
b) shows how to use those mechanics efficiently.

But there’s nothing that does that while also first tackling:

c) a good, well-founded, and correct “programming 101 for practical use”, and
d) “how to deal with the damn browser”

before going on to a & b. (PROFIT!)

How? How exactly would you define OO? OO seems to me to be incredibly simple and intuitive, but I see this sort of comment often enough to almost make me think I don’t even know what OO is. But looking it up on Wikipedia, the summary is pretty much what I thought, and though it’s not phrased in the clearest words, it’s just not that complicated a topic.

I started clustering my functions and variables into objects before I even knew what OO was. It is the most intuitive way to do it, especially if you use a language where you have to clean up memory manually. I can’t imagine how you could keep all the memory you’ve allocated straight without OO, though I know it can be and is done. The only thing that can even kind of be considered confusing is polymorphism, and even then it’s more that “polymorphism” sounds like a confusing word than that it’s a genuinely confusing subject.

Oh, this is going to be a long rant. Sorry :)

Organizing functions as “belonging to” data structures is a decent rule of thumb. And it’s at least relatively unambiguous. But not every function belongs to (only one) structure. To pick an easy and obvious example:

Should it be int.toString() or String.parseInt() or both? Maybe neither? How do you deal with user-defined types?
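
To make that concrete, here’s a rough Java sketch (the Money type is made up purely for illustration). Note that Java’s own library hedges by putting both conversions on Integer:

```java
final class Money {
    private final long cents;
    Money(long cents) { this.cents = cents; }

    // Option 1: conversion as an instance method on the source type.
    @Override public String toString() {
        return String.format("$%d.%02d", cents / 100, cents % 100);
    }

    // Option 2: conversion as a static factory on the destination type.
    static Money parse(String text) {
        String[] parts = text.replace("$", "").split("\\.");
        return new Money(Long.parseLong(parts[0]) * 100 + Long.parseLong(parts[1]));
    }

    public static void main(String[] args) {
        // Java itself couldn't decide: both int<->String conversions
        // ended up on Integer, with neither "owned" by String.
        int n = Integer.parseInt("42");
        String s = Integer.toString(n);
        System.out.println(Money.parse("$1.50")); // prints $1.50
    }
}
```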

But that’s really just the start. Take the Model-View-Controller paradigm for GUI programming: as far as I can see, the controller is not an object at all (as in, it doesn’t seem to contain any inherent data). It might “contain” some connections/callbacks, but really, it’s just a bunch of translations from user actions to model changes. Where’s the object?
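
A quick Swing sketch of what I mean (CounterModel is a made-up model class; the Swing API is real): the entire “controller” here is one stateless lambda wiring a user action to a model change.

```java
import javax.swing.JButton;

// A hypothetical model: it actually owns data.
class CounterModel {
    private int count = 0;
    void increment() { count++; }
    int getCount() { return count; }
}

class CounterWiring {
    static void wire(JButton plusButton, CounterModel model) {
        // The "controller": a translation from user action to model
        // change. No inherent data here, nothing object-like about it.
        plusButton.addActionListener(e -> model.increment());
    }
}
```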

But even that is mostly a philosophical objection. The real issue comes into play when you’re dealing with a strictly class-based OO language (like Java, C++, etc.) where everything MUST belong to a class. Then you get into the muddy realm of “Design Patterns”. I’ve read the book, and at least 50% of the designs are ridiculously complicated only because of the restrictions of sticking to class-based designs, to the point that in any language with closures, or maybe even just function pointers, the problem they’re trying to solve would hardly even make a programmer think.
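
The Strategy pattern is the poster child. A sketch (all the names are mine): the class-based version needs an interface plus a named class per behavior, while with closures the whole “pattern” collapses into passing a function.

```java
import java.util.function.DoubleUnaryOperator;

// Class-based Strategy: an interface plus one named class per behavior.
interface DiscountStrategy { double apply(double price); }

class HolidayDiscount implements DiscountStrategy {
    public double apply(double price) { return price * 0.8; }
}

class Checkout {
    static double total(double price, DiscountStrategy strategy) {
        return strategy.apply(price);
    }

    // With closures, the "pattern" evaporates: just pass a function.
    static double totalWithClosure(double price, DoubleUnaryOperator discount) {
        return discount.applyAsDouble(price);
    }

    public static void main(String[] args) {
        System.out.println(total(100.0, new HolidayDiscount()));    // 80.0
        System.out.println(totalWithClosure(100.0, p -> p * 0.8));  // 80.0
    }
}
```

Java did eventually grow lambdas, of course, which rather concedes the point.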

And I’m not going to go into the “but polymorphism is good” argument. Of course it is. It’s over-used, but it’s definitely a Good Thing. But you don’t need classes or even objects to get polymorphism; in fact, I think classes muddy the water a lot WRT polymorphism. And even if all you had were objects and classes (and not, for instance, interfaces or purely abstract classes), you still wouldn’t need them as much as you’re led to think you do.
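
For instance (a made-up sketch, not anyone’s canonical technique), a plain table of functions gives you runtime dispatch on a key with no class hierarchy anywhere:

```java
import java.util.Map;
import java.util.function.DoubleUnaryOperator;

class ShapeArea {
    // Polymorphic dispatch as a lookup table: the behavior is chosen
    // at runtime by key, with no classes or inheritance involved.
    static final Map<String, DoubleUnaryOperator> AREA = Map.of(
        "circle", r -> Math.PI * r * r,
        "square", side -> side * side
    );

    public static void main(String[] args) {
        System.out.println(AREA.get("circle").applyAsDouble(2.0));
        System.out.println(AREA.get("square").applyAsDouble(2.0));
    }
}
```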

ETA: And don’t get me started on inheritance. The most over-used feature of OO ever. And most of the OO language designers will agree with me on that.

Were they objects, or were they modules? Bundling together related functions and hiding data is good. But this is not object oriented programming. That’s module oriented programming, and has been around a long time.

It was ages ago, but if I recall correctly it was initially a mixture of structs for the data, since I needed to instantiate multiple copies, and modules for the functions, most of which ended up being fed the structs because they needed to know which specific object they were working with. I was basically trying to implement OO without knowing what OO was or the syntax for doing it properly. I can’t even remember if the language I was using at the time, VB 6.0, had proper OO.
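
In Java-ish pseudocode (my sketch, not actual VB6), the shape of that style is basically:

```java
// The "struct": bare data, instantiated as many times as needed.
class Account {
    long balanceCents;
}

// The "module": free functions that get handed the struct they act on.
class AccountOps {
    static void deposit(Account acct, long amountCents) {
        acct.balanceCents += amountCents;
    }
}
// Passing acct explicitly is exactly the implicit `this`
// that a real OO method hands you for free.
```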

Joke.
I feel like I’m reading SIGPLAN Notices again.

To say what Superfluous Parentheses said more briefly and more superficially: it is simple in concept but complicated in practice. In 1976 I took a seminar class on designing an O-O language. This was after Simula 67 and long before C++. We had to deal with all the stuff that SP mentioned from first principles, and it was damn difficult. I don’t think the language we produced was very good, but you wouldn’t expect a language designed by 20 people to be any good, would you?
The language I designed for my dissertation was very slightly object oriented, but it was meant to solve a very specific problem and not be general purpose.

OO is another ambiguous term, and poorly named; it should have been “class-oriented”, since objects are just data. The purpose of OO is to eliminate repeated functionality, which is a source of numerous bugs and creates huge maintenance issues. Otherwise, it does nothing much different from any other high-level structure in a programming language. The Model-View-Controller (a terrible design) is a class. It doesn’t have much data of its own except for state variables and references to other classes. That’s one way to use a class. Some classes do nothing but define data structures, and the typical case is a collection of properties (data) and methods (code). Nothing about OO is unique or absent from a general procedural language; OO languages just provide a simpler syntax to encode it.

Superfluous, Design Patterns annoy me too, along with MVC: all from the cuneiform school of algorithms. But please explain to me how inheritance can be overused. Poorly used, incorrectly used, and underused I understand. But its purpose is to eliminate replication of code. To my mind the only overuse is circular, where you fail to statically define anything.

Well, first off, using inheritance just to eliminate duplication is wrong on a philosophical level. Objects are either in a “this is a specialization of that” relationship or they’re not, and if they’re not, they shouldn’t be inheriting.

Also: there’s a reason many languages these days don’t support multiple inheritance; mixing different types (and especially implementations of types) into a new type gets more and more complex, because you’re restricted by all of the interface you need to maintain.

There’s another reason Java killed multiple inheritance and forces you to use interfaces instead: even when the interfaces don’t clash (and if they do, you’re usually completely screwed), implementations are much more likely to do so.
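
A sketch of the asymmetry (the names are mine): two interfaces with the same method signature coexist fine, because one body can satisfy both, whereas two parent classes each bringing their own body would force the language to invent tie-breaking rules.

```java
interface Dumpable  { String dump(); }  // made-up interfaces
interface Printable { String dump(); }  // same signature: no conflict

class Report implements Dumpable, Printable {
    // One implementation satisfies both interfaces;
    // there are no competing inherited bodies to reconcile.
    public String dump() { return "report"; }
}
```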

The fact that this strategy is even workable proves to my mind that what is generally needed is polymorphism, not inheritance.

Lastly, if you really do need to use some functionality in your new object, chances are very good that you should use composition (“has-a” relations) instead of inheritance (“is-a” relations), while possibly implementing an interface (“has-wheels” instead of “is-a-wheeled-thing”).
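
Spelled out in Java (my names, just to illustrate the shape of it):

```java
interface Wheeled {          // "has-wheels", not "is-a-wheeled-thing"
    int wheelCount();
}

class Wheels {
    private final int count;
    Wheels(int count) { this.count = count; }
    int getCount() { return count; }
}

// Composition: a Car HAS wheels and merely implements the interface,
// instead of inheriting from some WheeledThing base class.
class Car implements Wheeled {
    private final Wheels wheels = new Wheels(4);
    public int wheelCount() { return wheels.getCount(); }
}
```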

Ok, you’re mostly complaining about the deficiencies of particular OO systems, and their misuse. For instance, I consider duplication to be exact duplication, and nothing else; pretending things are the same when they are not is a misuse of inheritance. But specialization is only one form of relationship that can be used to define inheritance, and the failure of these systems to account for the breadth of relationships necessary to properly implement inheritance is a failing of the implementations. Poorly implemented polymorphism is a problem too. The main problem is that systems keep trying to exploit static definitions where only dynamic ones will be complete. The result is a requirement for the program to discretely maintain the operating logic of the structures, which was the problem OO was intended to solve.

Well, yes. But I started out by saying that OO is just not remotely as easy to do right as some people seem to think. It’s a very useful tool, but it can be mis-applied, and the typical “industry standard” OO languages are too basic in their OO mechanisms while at the same time many will force you to use classes/objects where it makes no sense.

Sounds like you’re talking about mixins or traits. But those are fundamentally different things from inheritance, and again, many “standard” languages will only allow this sort of thing by abusing inheritance, which gets tricky almost immediately.
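
Java’s default methods (added much later, in Java 8) are about the closest a “standard” language gets to a trait. Compare that to burning your single base-class slot just to share one helper; a sketch, with names mine:

```java
// Trait-ish: behavior mixed in via a default method; the class is
// still free to extend whatever it needs to.
interface Named {
    String name();
    default String greeting() { return "Hello, " + name(); }
}

class User implements Named {
    public String name() { return "alice"; }
}

// The inheritance abuse: a base class that exists only to share
// one helper method, spending the single-inheritance slot on it.
abstract class NamedBase {
    abstract String name();
    String greeting() { return "Hello, " + name(); }
}
```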

That’s true, but I do maintain that some problems are just not usefully tackled in an OO manner at all. I don’t think it’s a coincidence that some of the languages with very rich dispatching mechanisms also allow you to write completely non-OO code.