Yes, I second this. It may sound like an exaggeration but really I found the market to be more buoyant 18 months ago when I changed jobs than it was in the supposed boom years.
Conversely, I have been involved in interviewing and hiring now, and several times we’ve had the dilemma of whether to settle for someone who only has basic skills and knowledge* because no qualified candidates applied.
So, in the UK at least, if you enjoy programming I don’t think you need to be too worried about finding work right now.
*…and I’m not being a snob about such things. I too hate it when employers give a shopping list of every API they can think of and then put ESSENTIAL over it. I’m just talking about trying to hire a senior C#/.NET developer and having candidates who are unable (even with Google’s help) to write an app that can read a list of integers from an XML file.
That’s not what I’ve been finding up here in New England. Very tight market, but then again I get a lot of people thinking that because my primary focus is C++ I can’t do C# or Java. (Which, last I checked, were specifically developed so C++ guys could transition over quickly.)
I guess I can answer the other questions though. The place I got my degree from (which I won’t call a school, given that they sucked, but which is actually very well known) only awards BAs in CS. Unfortunately that means I was subjected to the greatest nightmare in the undergrad arena: the foreign language requirement. (BTW, if the school has that requirement, be VERY wary.) Anyway, being a programmer, oh I’m sorry, software engineer, has its ups and downs. I mean, I like the work and all, but often the environment can make it a headache. My current company is technically not a startup but they haven’t figured that out yet. So they do incredibly stupid things like use us developers as tech support. (Which is extra aggravating when the idiot in question is from QA and won’t troubleshoot AGAIN.) I guess you could say I’m like the others here in that I’m “wired” to be happy when I can just go in and figure stuff out, code it, and test it in peace. (But my current company isn’t very well run, so it’s hard for me to get in the zone with all the interruptions.)
I mean, it’s like a lot of other jobs: if you find a good company it can be a lot of fun. Find a stupid company and, well, it’ll suck.
This matches my perception of the market in the UK as well. I’m in the South East and there’s no recession ongoing here as far as I can see in the IT sector. Incidentally, it may just be where I am, but bioinformatics is incredibly in demand at the moment. My wife had a recruiter ask her what I did when she mentioned I was a computer scientist as there’s a massive shortage of bioinformaticians and he could give me ten jobs tomorrow if it had been me looking for a job, and not her.
I guess I’ll offer my counter and say that I’ve been out of school for about 6-7 months with a CS degree and haven’t found anything. Part of it was probably my choices in school. I didn’t do any internships or part-time with any software development companies. Rather than internships I did computer vision, graphics, and machine learning (neural nets and transfer learning) research – I have an honors thesis and everything in ML. My job experience is upper-division course TAing and grading instead of retail or code monkeying. My extra-curriculars are primarily in CS Education (both for young kids and college students) rather than programming competitions, etc. Hell, I’m currently the manager of a(n infant) open source scientific computing package. It’s just that all the companies I’ve found (and I’ve applied to quite a few) want web programming, or else low level systems programming (which is closer to computer engineering than computer science).
It’s not that there are no jobs in my area of expertise, it’s just that the intersection between my skillset and entry level positions is almost nonexistent. Either they want an advanced degree (I only have a BS), or they’re senior/lead positions that want 5-10 years of experience. All the entry level stuff wants in-depth knowledge of CSS and HTML and various web frameworks (Django, etc), which is certainly possible for a recent graduate, I know plenty of people who chose that path – it’s just outside what I focused on.
That said, this shouldn’t be an issue if you’re looking to go to school with the intent of becoming a programmer straight out of college. You can do internships instead of TAing for 400-level Intro to AI courses and take web design courses rather than computer graphics.
It may be that the pattern of where the jobs are, in terms of experience, is different than it was in the 90s.
Back then, it was all about entry-level jobs. To me at least, it seemed like the hot jobs were either at dot-com startups or in industry doing Y2K remediation.
Now, I think it’s all about mid-level people, because there was the glut for a long time there in the 2000s where not a lot of entry level people were hired. Now that things are going well again, there’s a shortage of mid-level people, but not of entry level people.
It’s possible. I’m constantly getting bombarded with mid to senior project manager jobs.
There are still “hot jobs” in tech startups. Now it’s mostly around social networking or “big data” and analytics. Or working for well-established brainiac farms like Amazon, Facebook or Google.
In my experience, most tech companies are run stupidly. The reason is that they are often founded by a really smart technical guy who hires equally smart technical guys. But running a business isn’t just about technology. A business can end up running as if every project or task is an “all hands on deck” fire drill. The problem is these really smart guys think they are so smart that they can just figure stuff out on the fly. Even worse, not only do they forego process, they actively eschew it as part of the “big corporate stuff” they disdain. So what happens is you end up with Band-Aid fix on top of Band-Aid fix.
I definitely agree with you about process, but I don’t think the cause is sneering at “big corporate stuff.” People who design innovative new stuff, be it software or hardware, have to do it differently every time. Process beyond change control and bug tracking is seen as overhead, and frequently is. People with a manufacturing background, by contrast, live and die by making and adhering to processes.
Look at Six Sigma. People didn’t get that it wasn’t something you could apply to design. On the other hand, people who mostly do design don’t get that one little deviation from the process can really screw things up.
New development methodologies seem to be a process for reducing most of the process.
That’s my experience too- no shortage of jobs for mid-level guys at all; nobody I know has had a hard time finding a job of some kind in the field, even if it’s not exactly what they want to do.
It seems to be the recent college grads and really young people bitching about the lack of jobs these days.
Really? I’m in the Boston area and I’m seeing a major downtick. Oh, and I have experience. Admittedly it’s in C++ and I don’t want to go to the North Shore. Plus, like I say, I get the idiots who think C++ guys can’t transition to C#/Java very quickly. (Which I’ve read was the entire point, so C++ guys could do that.) Admittedly I do have a job, so I guess I could be putting more effort into it. (I have had 2 interviews, but I came back both times thinking the companies were jokes and wouldn’t have worked for them if they’d hired me; they might actually be worse than my current company :D )
Hasn’t this always been the case in high-tech, though? Keeping your skill set up-to-date is crucial. C++ is still being used, but not like it was in the 90s or even early 2000s, so it doesn’t surprise me that the jobs for C/C++ coders are not particularly hot.
Like you said, you can transition (I moved from C/C++ to C# and a little java. The language differences are minor, the really big differences are going to browser-based coding), but I’ve always found you have to do that on your own time; few companies are going to hire someone with no C#/java experience and train them up, no matter how great their resume is. But a C/C++ person with many years experience who has a couple open source or personal C#/Java projects on their resume, who can demonstrate in an interview that they know the ins and outs of the new languages? Sure, that person is marketable.
True but on the other hand it’s not as though I don’t do C# where I currently work. I think part of it is that a lot of companies don’t seem to react very well when I tell them one of the best development skills is being able to “Just fucking Google it.” (Not that I say it that way but the point is being able to and knowing when to research is an important skill but sometimes I get the “Oh he can’t do everything off the top of his head” reaction.) Oh I also get the “That’s not the answer we were looking for” reaction at times. Such as when I was asked what an interface was and I pretty much said it’s a standard, published way to interact with something and most of the time implemented as a collection of functions. This resulted in his impression of “This guy doesn’t know what an interface is” which had me rolling my eyes when I heard about it afterwards. (Doubly weird since I’ve actually implemented an interface in C# and to be honest, it wasn’t particularly difficult.)
Actually, that reminds me of something that I should tell the original poster. When you finish your CS studies, don’t be surprised when you learn the generic concept and can apply it anywhere, but the guy at the interview thinks you don’t know it because you don’t use the language-specific jargon when talking about it. (For example, I’ve worked with people who didn’t really understand the difference between a list and an array because they never had CS, and they’d use the wrong one for what they were doing.)
Do you perhaps mean a linked list? A list is simply an ordered collection of values, and is often (I’d argue almost always, if it’s part of a standard library) an array under the hood. Python’s list, for instance, is basically just a smart pointer to an array – though I believe it over-allocates to avoid wasting time reallocing every time you call append. The only time I’d ever personally use an array over an array-implemented list is if I was working with something like vectors (math vectors, not C++ vectors), which are constant-length, so the fancy append functions and such are needless.
No, Perl for instance doesn’t actually have arrays, but just lists of lists. (Nothing to do with linked lists.) Most of the time it might look the same, but in some cases not understanding the difference can really get you into trouble.
Oh sorry, yes, I meant a linked list. (I guess I got thrown since in MFC it’s a CList. Is it a vector in STL?) Sorry about the confusion, but I have seen other developers use an array when they really want a linked list. (I.e. they want something re-sizable and they don’t care about being able to get to the nth element in an instant. That’s totally a linked list, and they use an array. Man, did that suck.)
In the STL a std::vector is backed by an array and a std::list is backed by a linked list.
This case is nowhere near as clear-cut as you seem to think. Arrays have the very important property that they are much more cache-friendly than linked lists. This is critical for performance.
Another problem with a linked list is that you end up calling into your memory allocator O(n) times. On some systems the memory allocator is horribly slow (this is true with the standard malloc() implementation found on most Linux systems, for example). An automatically-resizing vector implementation will amortize the calls to the allocator over many insertions, so you get O(log n) calls during ramp-up and 0 at the steady state.
Where linked lists really shine is when you need to insert or remove arbitrary elements frequently.
One of the systems that I work with uses linked lists quite effectively. It’s all in C, and macros are used to embed the linked list pointers inside the structures that will be placed in the list. Now you don’t have to allocate any memory at all to insert the structure in a list. The downside is that it’s a massive breach of encapsulation, so it’s really only a technique that one should consider when performance is absolutely paramount, or when failure to insert into a list cannot be tolerated.
That’s true, I am simplifying a lot. However, the thing with arrays, well, OK, MFC arrays, is that you can do the stuff you’re talking about with lists, but with arrays. (Which is incredibly slow.) So what I was running into was another coder who basically didn’t bother to figure out beforehand how big an array he wanted. (Which would have been OK.) Instead he added elements to the array one at a time until he was done. Behind the scenes, the first time an element was added it effectively created an array of size 1 and copied in the item. When the next element was added, a new array of size 2 was created, the one item from the first array was copied into it, the new item was added, and the first array was deleted. To add a 3rd item it’d create an array of size 3, copy everything from the size-2 array into it, and add the new item. (So basically, every time you add, the object behind the scenes creates a new, slightly larger array and copies everything over.)
Anyway, to clarify: if you realize that’s how arrays really work, you wouldn’t try to build up your array one element at a time. I think we’re both on the same page that it’s OK to do that with a list; they’re good for that. (Which is nice when you don’t actually know how many items you’ll have in the end until you actually go through the process.) However, doing that with an array is insane.
Just noticed you mentioning this. So in STL they’re actually smart about allocation over numerous insertions to keep the running time down? (On a vector.) Just curious, because the MFC array does not work that way. (It will literally allocate on every single Add, which as you can imagine is incredibly slow, especially since that means a copy every single time.) Then again, this developer’s code had more than a few O(N^2) algorithms, although he was convinced it was linear. Oh, original poster: since you’re doing CS, learn that big-O notation stuff. It’s actually extremely useful.
Yes. And just in case, you can pass your own allocator function to STL and allocate things yourself if you don’t trust (or have reason to not like) STL’s allocation func. In STL implementations I use, I trust STL’s allocator to be smart, most times.
I’m not sure I understand: a list is an abstract type; all arrays are lists, but not all lists are arrays. Some lists are implemented over arrays with some of the array-ness abstracted out (like allocation). Perl may well implement them without using arrays at all, but there’s no “difference” in the sense that there are times when you should use a list vs an array; there are just times when you should use a dense array list vs a sparse array list vs a linked list vs an array of pointers and so on.
Multi-dimensional “lists”, however, are another thing entirely and require being super careful even if you use arrays if the language doesn’t have baked-in support for multi-d arrays.