I’m currently learning Java, and I’m completely blown away by all of the things you can use to create programs. What I don’t get is: how do you learn a programming language if there’s so much you can learn?
I’m not sure if I understand, but do you just learn how to use certain methods and classes of a library that’s useful to you at the present moment? For example, if I was working on a project and needed my program to produce a certain thing, would I look online for methods and code that would let me do it?
Is this similar to learning an actual language where you don’t need to learn every single word in the dictionary, but only learn words that will be useful to you in certain situations? Such as greetings and goodbyes?
The way computer science education teaches it is that you solve coding problems as a series of subdivisions. You define the overall problem, then you define solutions to that problem (very abstract, high level solutions). Then you choose the algorithms to implement the solution.
Then you choose the coding pattern, and the actual data inputs and outputs of the specific submodules of your code that will actually implement the chosen algorithms.
Finally, when it comes time to write the code, now that you know what you need, you dig out of your memory, or out of Google searches, the specific language features you need in order to actually write working code.
Done right, you already know what you are looking for - you have a solution to your problem in mind in a universal language, hopefully written down in pseudocode, and you’re just finding the “words” in the actual language you happen to be using that say the same thing.
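To make that last step concrete with a deliberately trivial, made-up example (the names are mine, not from any particular project): say the pseudocode line is “for each order total, add it to a running sum.” Translating that into Java is just finding the loop and variable “words”:

[CODE]
import java.util.List;

public class PseudocodeExample {
    // Pseudocode: for each order total, add it to a running sum; return the sum.
    static double sumOrderTotals(List<Double> orderTotals) {
        double runningSum = 0.0;
        for (double total : orderTotals) {
            runningSum += total;
        }
        return runningSum;
    }

    public static void main(String[] args) {
        System.out.println(sumOrderTotals(List.of(19.0, 5.0, 42.5))); // 66.5
    }
}
[/CODE]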
Doing it this way, programmers don’t have much trouble shifting languages. I usually find I waste at least as much time in a language shift learning a new IDE’s quirks, and where all the crap I need is buried in deep menus, as I do learning the new syntax.
Some problems, you don’t know if a given solution will work. In that case, you do an abbreviated version of the software architecture steps I described above, and you immediately get to work coding up a small program to test a specific algorithm to attempt to solve just a tiny piece of the much larger problem.
As for OS calls and library features - I never try to learn those. What I do is when I rough out a sketch of my program, I identify the places where I’m going to need to ask an external piece of code to do something for me.
I then create my own, separate function in my code, with a name and an interface to that function that make sense to me.
So, for instance, if I want to check the current time in microseconds, I might write a function called Time getTimeMicroseconds()
“Time” is a type I define that is appropriate for storing the microseconds. That way my code and algorithm are not contaminated by the dirty business of asking whatever system I happen to be on for the time.
Then, inside the function getTimeMicroseconds() I embed whatever magic monkey dance the system I am currently on requires to spit out the current time. Some systems require arcane setting of bits in memory to enable a timer, and for me to track every timer interrupt, and other shenanigans. I look up those elements only when I need to write getTimeMicroseconds(), and I try to forget that crap the moment I’ve got the function debugged.
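For what it’s worth, here’s a minimal Java sketch of that kind of paper-thin wrapper. I’m using a plain long instead of a hand-rolled Time type just to keep it short, and on the JVM the “monkey dance” is nothing more than System.nanoTime(); on a bare-metal system, the body of this one function is where all the timer-register fiddling would be buried instead:

[CODE]
public class MicroClock {
    // All the platform-specific "magic monkey dance" lives here and nowhere else.
    public static long getTimeMicroseconds() {
        return System.nanoTime() / 1_000L;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = getTimeMicroseconds();
        Thread.sleep(10);
        System.out.println("elapsed microseconds: " + (getTimeMicroseconds() - start));
    }
}
[/CODE]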
Yes, more-or-less. Stack Exchange is popular for this, as well as any number of language- and OS-focused fora. Both references and example code are quite easily found for doing common things in common languages, once you poke around using Google a bit.
Somewhat. When I learn a new language, I go in with example code plus a tutorial and a reference to hand; I’ll copy-and-paste a lot, especially in the beginning, and through sheer repetition I learn the basics of getting a program that does something working pretty quickly. After that, I lean on reference material and example code quite a bit, especially when I’m trying to do something new and/or using a new external library.
I never stop using reference material. I still use reference material for POSIX libraries when I’m writing code in C, for example, and that’s where I’m pretty much at my most comfortable. There are just too many details to remember, and my time and effort are better spent on thought, not rote memorization.
Java, or JavaScript? Sounds like you are talking about JavaScript.
If JavaScript, you can’t learn the entire ‘language’. With the libraries out there, you never, never will. And while it is a language, it is a base to work with other tools. Like having fingers and thumbs. Google is your friend.
There is only one thing that computers can do: yes, or no. This of course is turned into if/then, do/while, and such. Of course it’s very important to learn which objects you can put these tests to, and which ones you’ll use over and over again in your particular discipline.
Eh. If you focus entirely on the core language, you can learn everything JavaScript itself does. Libraries are add-ons to the language, and you only learn those on an as-needed basis, but experienced programmers typically do learn all, or pretty much all, of the core language they’re most experienced with. (C++ programmers might not, but C++, as opposed to C, is a huge language with lots of oddities. C, in comparison, is a very simple language with relatively few dark corners.)
This is over-simplified to the point of being wrong. It’s a bad description of what transistors can do, and computers are made out of billions of transistors, so they can do quite a bit more, something machine language software relies on.
That’s probably not a bad way to look at it. Learning basic English teaches you words and general rules for how to put them together (i.e. grammar). You can then write a book with the words and rules; if you need to use a word that you don’t know you look it up in a dictionary.
When learning to program you are generally taught the basics of a language and how to put them together (what we might call programming patterns). You can then write programs using the language and patterns. As time goes on you learn more complex patterns. You can also use patterns that somebody else has written by looking up their documentation (programming book, web site, StackExchange, etc.). I’m old enough to remember what it was like to code before the internet became ubiquitous and I don’t know how I functioned.
To add on to what Derleth has already said, fancy code editors (known as IDEs, like Eclipse or TextMate) can help you. To continue your English comparison, it’s like using MS Word, which points out grammar mistakes and suggests corrections for spelling mistakes.
So, you’re saying that for every project you do, you write your own abstraction layer on top of the abstraction layer you’re given in your system or language library, right? I agree this seems to be what one must do in most modern programming languages (especially the BIG ones like Java), and I too did this a lot back in my more active programming days.
But it sure comes off seeming like a hella lot of re-invention of many wheels for every project.
My layer is paper thin. In many cases, the actual functions implementing the abstraction layer are one line of code. The reason for it is so I can pass arguments and accept data fields that make sense in the context of my design. I’m not contaminating my design with some other programmer’s work, and I don’t need to remember the arbitrary way to call some other programmer’s functions.
See, when I work up the software architecture, I call the functions that get things from the OS, from libraries, etc sensible names that make sense to me on my various levels of documentation. Then, when I write the real code, I can call them the same names and use the same interface as in my architecture.
There’s a similar reason why you redefine all your data types this way. Partly it’s so the names are more descriptive - instead of an int, you might call it integer32 or something - and partly because you can change a single line of code and fix significant numerical-precision problems in some cases.
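Java has no typedef, so a rough sketch of the same idea there is a tiny wrapper type (the names here are purely illustrative): every calculation goes through it, and switching the one field from float to double fixes the precision problem for every caller at once.

[CODE]
// Hypothetical wrapper type; all of the program's arithmetic goes through it,
// so changing the field's type is a one-line precision fix.
public final class Measurement {
    private final double value;   // was float in an earlier version

    public Measurement(double value) {
        this.value = value;
    }

    public double value() {
        return value;
    }

    public Measurement plus(Measurement other) {
        return new Measurement(this.value + other.value);
    }
}
[/CODE]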
Learning Java, or C#, is similar to learning an actual language for which there are a lot of textbooks and encyclopedias. When you want to actually do something, you look up what you need to know. Which you can understand and use because you have learned the language.
Programming was not always like this. When I first learned programming, libraries were much smaller. And much less effective. And I was often programming stuff where no library existed at all.
You wrote the algorithm on paper; then you wrote out the routines and subroutines that you needed to make it work, looking stuff up in the library (you know - that big room with all the books), as you needed to.
Once it was written out you booked time on the college computer - this might be 4am on Sunday if you were a junior; sat at the terminal and keyed all those lines of code in. Then you ran some tests and took the printouts away to debug the code and start all over again.
I used to joke - only partly in jest - that what made a language was its syntax and its type system.
If you have your head around a language’s type system, then you are on top of 80% of what it can do. I find that when swapping between languages, if you keep in mind the underpinning type system and the peculiarities in various parts of the semantics - often traceable, in the end, to choices made in the implementation - you pretty much know what the language is capable of, and roughly how to use that capability to do what you want.
Algorithms don’t change much, and the vast majority of important algorithms are some form of graph walk. The remainder are either numeric or variants on the classical concurrency abstractions.
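Just to illustrate the graph-walk point with a generic sketch (the names are mine and nothing here is language- or project-specific beyond using Java), a bare-bones breadth-first traversal over an adjacency list looks like this:

[CODE]
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

public class GraphWalk {
    // Breadth-first search: visit every node reachable from start, exactly once.
    static void bfs(Map<String, List<String>> adjacency, String start) {
        Set<String> visited = new HashSet<>();
        Queue<String> frontier = new ArrayDeque<>();
        visited.add(start);
        frontier.add(start);
        while (!frontier.isEmpty()) {
            String node = frontier.poll();
            System.out.println("visiting " + node);
            for (String next : adjacency.getOrDefault(node, List.of())) {
                if (visited.add(next)) {   // add() returns false if we've already seen it
                    frontier.add(next);
                }
            }
        }
    }

    public static void main(String[] args) {
        Map<String, List<String>> graph = Map.of(
                "a", List.of("b", "c"),
                "b", List.of("c"),
                "c", List.of("a"));
        bfs(graph, "a");
    }
}
[/CODE]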
The problem with languages like Java is indeed that the libraries and the entire mess of support infrastructure got wildly out of control. It wasn’t so much creeping featurism as an outright mass onslaught of feature production. Maintainability is sorely compromised, and code can sometimes be seriously opaque. This seems to be the fate of drinking the OO Kool-Aid to excess. Years ago I used to teach the standard data structures and algorithms course using Java. The trick was to steer everything away from the OO part of the language and try to keep people’s minds focused on the meat of the course. Otherwise you got graduates who thought that design patterns and OO were how to write programs, and had no idea what was really going on.
As noted above, Google and the various code-example and guidance sites become a constant companion. The trick is to know what it is you need to do, down to a clear technical level.
JAVA is a massively bug-ridden virus vector that is hugely inefficient and effectively useless for almost any application. It exists because of massive marketing.
Try and think of any serious application you’ve used that is Java based. Aside from the odd mobile phone app, it’s not relevant.
I’ve always thought that knowing a language isn’t what makes a programmer; it’s the knowledge of programming as a discipline or art that does it.
By analogy, if a man entered the Marine Corps in 1942 at 18 and stayed in for 25 years, he might well have been using the M1903 rifle, then the M1 Garand, then the M14, and finally the M16. What would make that guy an effective rifleman isn’t his knowledge of any one of those 4 rifles, it’s his knowledge and training in how to shoot.
Programming is much the same way; once you know how to program, the rest of it is just details that concern WHICH library/function/class/object to use in what situations. And most of that can be looked up in reference manuals, on the web, or on something like Stack Exchange, subreddits, or other online communities like that.
A huge chunk of modern programming isn’t writing custom code to do some common task, but rather just stringing together the right components and tying their parameters and outputs together correctly.
To specifically counter this: please keep learning Java. The Java language and the JVM platform are a backbone technology of the modern internet. Development in Java is far faster than development in C++, and far less error-prone. There is a real shortage of competent Java developers, and salaries are commensurate with that fact.
Compared to C++, Java is a relatively clean, well supported, object-oriented language, that is ideal for developers taking their first step into the object-oriented realm. The tools are good, the community is large, and jobs are plentiful.
(and then when you’re really solid in Java, switch to Scala. )
I’m a software developer. This is correct. Learning to code isn’t about memorizing functions or classes. If you’re trying to do that, you’re not doing it right. Learn the principles of programming - how a compiler works, how languages are structured (e.g. the difference between value and reference parameters - this can really fuck you up one day if you get it wrong), and how to divide a problem up into segments (pay attention to OOP principles so you can use them right, and not do god-code pseudo-OOP). Also, when it comes to exception handling, remember that programming isn’t Pokémon, you don’t need to catch 'em all. Catch only the ones that you can actually do something about.
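To illustrate that value-versus-reference trap with a toy Java example (my own, nothing canonical): mutating the object a parameter refers to is visible to the caller, but reassigning the parameter itself is not.

[CODE]
import java.util.ArrayList;
import java.util.List;

public class ParamDemo {
    static void mutate(List<String> names) {
        names.add("added");            // caller sees this: same underlying object
    }

    static void reassign(List<String> names) {
        names = new ArrayList<>();     // caller never sees this: only the local copy changes
        names.add("lost");
    }

    public static void main(String[] args) {
        List<String> names = new ArrayList<>();
        mutate(names);
        reassign(names);
        System.out.println(names);     // prints [added]
    }
}
[/CODE]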
Even veteran developers don’t have entire libraries memorized. We look them up when we need to.
I once was very proud of a piece of complicated code that I’d written. I showed it to a coworker who told me that there was a built-in, one-line function that would do the same thing. That’s when I learned to not reinvent the wheel. See what’s already out there and if it will meet your needs.
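The original code is long gone, so here’s a trivial stand-in for the same lesson: a hand-rolled list reversal next to the one-liner that already ships with the JDK.

[CODE]
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class WheelDemo {
    // The hand-rolled version...
    static <T> List<T> reverseByHand(List<T> input) {
        List<T> out = new ArrayList<>();
        for (int i = input.size() - 1; i >= 0; i--) {
            out.add(input.get(i));
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> nums = new ArrayList<>(List.of(1, 2, 3));
        System.out.println(reverseByHand(nums)); // [3, 2, 1]
        Collections.reverse(nums);               // ...versus the built-in one-liner
        System.out.println(nums);                // [3, 2, 1]
    }
}
[/CODE]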
This is really helpful if you want your software to run on multiple systems. Say your software needs to save files to disk. So you write a SaveFile(YourCrap) method. You want your software to run on Windows, so you add stuff so that SaveFile can talk to Windows and get your data saved. But the Macintosh is a Unix system that saves files differently, so you implement a Mac/Unix version too. Maybe you also want it to run on Android, so you write the code to do an Android file-save operation. You can now have a single piece of overall software that can run on whatever crap computer your user has. Using OOP principles, one common way to do this is a technique called Dependency Injection.
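Here’s a rough sketch of what that can look like in Java. SaveFile and the platform names come from the description above; the interface, the class names, and the use of java.nio are my own illustration. The point is that App only knows about the FileSaver interface, and the platform-specific implementation is injected from outside:

[CODE]
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// The rest of the program only ever talks to this interface.
interface FileSaver {
    void saveFile(String name, byte[] data) throws IOException;
}

// One implementation per platform; this one uses java.nio, another
// could wrap an Android- or Windows-specific API instead.
class DesktopFileSaver implements FileSaver {
    public void saveFile(String name, byte[] data) throws IOException {
        Files.write(Path.of(name), data);
    }
}

class App {
    private final FileSaver saver;   // the injected dependency

    App(FileSaver saver) {
        this.saver = saver;
    }

    void run() throws IOException {
        saver.saveFile("out.txt", "YourCrap".getBytes());
    }

    public static void main(String[] args) throws IOException {
        new App(new DesktopFileSaver()).run();  // swap in a different FileSaver per platform
    }
}
[/CODE]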
No shit. That post triggered me to check the posting date, suddenly thinking “oh, this must be a zombie”.
I might agree (somewhat) with jezzaOZ if we were just talking about Java browser applets. But I haven’t seen one of those for a while.
[QUOTE=jezzaOZ]
Try and think of any serious application you’ve used that is Java based. Aside from the odd mobile phone app, it’s not relevant.
[/QUOTE]
Are you fucking serious? Apparently, since you’re here, you’ve heard of a little thing called The Internet. You should consider learning something about how it works.
Maybe it’s just because I’m a programmer too, but it really bugs me when people who don’t know what they’re talking about show up in computing-related threads in GQ. There are enough people who do know what they’re talking about here to answer the questions. If you came around just to spout some bullshit a friend of a friend told you, shooting the shit at your first-year CS pub night, save it for some other board. If you keep your mouth shut and read some of the posts by professionals who actually know what they’re talking about, maybe you’ll learn something.