xkcd thread

Thanks for the explanation. It sounds like you do embedded systems, which is a fairly specialized area.

Of course, one might question the value of the whole premise of “solve a standard computing problem during the interview”. On the one hand, yeah, someone who does the job well is more important than someone who interviews well, and it’s nice that companies recognize that. On the other, though, writing code on a whiteboard is very different from writing it on a computer: A keyboard is not a marker, and the computer also has things like copy-and-paste and real-time syntax checking. And frankly, I wouldn’t want to hire a programmer who, when a quicksort is needed in a project they’re working on, sits down and writes one from scratch on the spot. I do want one who understands what quicksort is, when to use it, and how it works, but I would still expect them to then take a standard, existing implementation, and plug it in (possibly with custom modifications if needed).
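To illustrate the “plug in the standard implementation” point, in most languages that’s a one-liner. A Python sketch (the records and the key field are invented for the example):

```python
# Instead of hand-writing quicksort, call the library sort and
# customize it with a key function where needed.
records = [("bob", 32), ("alice", 27), ("carol", 27)]
by_age = sorted(records, key=lambda r: r[1])  # sort on the age field
print(by_age)
```

The custom modification lives in the `key` argument; the sorting algorithm itself stays the library’s problem.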

A better interview test might be to take some already-existing code that isn’t working right, and to see if the candidate can figure out why, and fix it.

I’ve been a Software Engineer for over a decade, and I don’t recall ever having to choose between sorting algorithms because it’s never been relevant in my work. For the dataset sizes I’ve worked with and the use cases I’ve addressed, any common sorting method will do just fine.

When I graduated college I could have probably discussed or coded up a quicksort, but I’ve long since forgotten which sort is which because it almost never matters. If it did become relevant at some point I’m fully capable of looking up the different algorithms and choosing the appropriate one, but I certainly couldn’t code it on a whiteboard without preparation.

Can I just say I have Knuth Vol. 3, “Sorting and Searching”, on the bookshelf? :slight_smile:

In other words, “when to use it” is “when we have larger data sets than @tofor ever deals with”. Though of course, when you just grab the pre-existing routine called “sort”, in whatever environment you’re typically working in, the chances are pretty good that it’s already quicksort or something closely related to it, anyway.

Though there are also other distinctions between different sorts, aside from efficiency. For instance, if you start with a list in one order, and some entries have the same value for the thing you’re sorting on, will they remain in their original relative order in the final sorted list? That property is called a “stable” sort. It’s a feature you sometimes care about and sometimes don’t, and some sort algorithms have it inherently, while others have it only if you add extra pieces to make them behave that way.
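Python’s built-in sort (Timsort) happens to guarantee exactly that behavior, which makes a quick demonstration easy (names and grades invented for the example):

```python
# Timsort is stable: entries with equal sort keys keep their
# original relative order.
people = [("dana", "B"), ("alex", "A"), ("chris", "B"), ("blair", "A")]
by_grade = sorted(people, key=lambda p: p[1])
# "alex" still precedes "blair", and "dana" still precedes "chris"
print(by_grade)
```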

With large datasets (or even tiny datasets for that matter), you will be using some flavor of SQL server, or other high-level database system.

You will simply never have to deal with sorting yourself. The system has already been optimized over a number of years by dozens, or hundreds, of programmers, and the bugs have been ironed out in hundreds of thousands of applications. You are not going to do any better.

The last thing you will be thinking about are the details of how the system stores, retrieves, sorts, filters, and optimizes the requests you send it. Modern database programming has long since moved beyond that.

You only have to think about how to structure a request to do what you want, and what to do with the results it sends back.

Even for tiny datasets in small standalone apps, there is SQLite. The compiled DLL is about 2 MB in size (nothing in today’s terms) and will do all kinds of sophisticated operations on data very, very fast and efficiently.
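As a rough sketch of that division of labor, Python even ships the SQLite engine in its standard library as the `sqlite3` module (the table and data here are invented):

```python
import sqlite3

# In-memory database; passing a file path instead would persist it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", 27), ("bob", 32), ("carol", 27)])

# The engine does the filtering and sorting; we only describe the request.
rows = conn.execute(
    "SELECT name FROM users WHERE age < 30 ORDER BY name").fetchall()
print(rows)
conn.close()
```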

And that is indeed a feature I have had to consider occasionally.

I do think that sorting is an excellent example task for learning purposes. It illustrates, for example, how a problem can have many solutions with different properties, and how to think through the question of which approach to take.

If you can remember to bring it to the interview, I’ll let you use it. If you instead give me a page number where the answer is, I might just give you a pass :slight_smile: .

Graphics chip drivers. Though really we’re in a similar position to video game engines, which are also written in C++ and have the same hard performance requirements, and where much of the code runs on the GPU (which is extremely fast, but has limits that CPU code does not). The specific environment we’re in is fairly specialized, but the general type is not all that rare. OS development is a similar one.

We used to have about 17 milliseconds to do all the work related to a rendered frame, but standards have improved and now it’s more like 7 milliseconds. That might involve hundreds of megabytes of “stuff”, which expands to many gigabytes once the GPU gets going on it.

When I was still interviewing entry-level programmers, I’d ask them if they knew how database indexes worked behind the scenes. If they did, great, but the best conversations came with those who didn’t. “Well, take a guess: how would you implement them?” I learned about their ability to reason and think logically, which was more useful than which books they had recently read.
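One plausible “guessed” answer fits in a few lines: a mapping from column values to row positions. That’s only a toy stand-in for the B-trees a real database uses, but it captures the core idea; the table here is invented:

```python
# A toy index: map each column value to the row numbers holding it,
# so a lookup avoids scanning the whole table.
rows = [("alice", "NY"), ("bob", "LA"), ("carol", "NY")]
city_index = {}
for i, (_, city) in enumerate(rows):
    city_index.setdefault(city, []).append(i)

print(city_index["NY"])  # row positions for NY, no full scan needed
```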

I think linked list questions are much the same. The number of programmers like Dr.Strangelove who actually need to work with them is small. But the ability to understand the implications of such a data structure, and how one might work with it, is a skill every programmer should have.
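For anyone who has only vaguely heard of them, a minimal sketch in Python (the names are invented; real code would use a library container):

```python
# A singly linked list: each node holds a value and a reference to
# the next node. No direct indexing; reaching element N means
# walking N links.
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

head = Node(1, Node(2, Node(3)))

def to_list(node):
    out = []
    while node is not None:       # follow the chain to the end
        out.append(node.value)
        node = node.next
    return out

print(to_list(head))
```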

Still, I would guess that at least 95% (maybe 99%) of all programming work in the world today consists of commercial database applications, web programming, cloud systems, mobile apps, financial systems, medical record systems, big data analysis, interfacing different commercial systems, design and modelling systems, system security, etc., etc.

Hence xkcd wanting to donate code for linked lists to a museum.

Agreed. Most code is high level, and getting higher on average. Still, I was responding to TroutMan’s comment about non-programmers and newbies. It’s really just that broad middle of programmers that may have vaguely heard of linked lists, but have never come close to implementing one, for whom the joke lands. The rest of us are confused, but for different reasons.

As for my interview style, it sounds similar to TroutMan’s, in that I try to ask open ended questions. Not knowing some specific thing isn’t a fail. How you approach problems is more important.

Is this the case? It does seem as though programmers would have learned how to do so in the early days of their learning.

It was 25 years ago, but in my high school programming class, we had to write linked list and sorting functions. Ah, good ol’ Pascal… those were the days…

I doubt that it’s anything that many actually do, but I’m surprised that it’s not something that most have learned at some point.

Yeah, I think we’re mostly familiar with linked lists and sorting algorithms, but that knowledge is rarely used directly.

25 years means you’re an old school programmer. The broad middle of programmers I’m referring to may have no formal training; they just picked up programming as they went to solve the problem at hand. They may vaguely know that there are some list types that allow direct indexing and some that don’t, and may know that linked lists tend to be the latter. But really you can go quite a ways without knowing the difference, or at least caring about the difference.

Perl is my go-to scripting language, and it has a native list type of some form. It has direct indexing as well as push/pop/shift/unshift ops, and they all seem to be pretty fast (in the sense that they don’t seem to be O(N) ops). I’ve no idea how it works internally (I’ve never bothered to download the interpreter source): whether it automatically switches between arrays and linked lists, or has some hybrid system, or whatever. Any programmer working exclusively in Python or JavaScript or whatever is likely in the same boat.
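Without speaking for Perl’s internals, the trade-offs being glossed over are easy to see in Python, where a list is a dynamic array and `collections.deque` is the container tuned for both ends:

```python
from collections import deque

# CPython lists are dynamic arrays: append/pop at the end are
# amortized O(1), but inserting at the front shifts every element.
xs = [1, 2, 3]
xs.append(4)       # fast: end of the array
xs.insert(0, 0)    # O(N): every element moves one slot

# deque supports O(1) push/pop at both ends.
d = deque([1, 2, 3])
d.appendleft(0)
d.append(4)
print(xs, list(d))
```

Both end up holding the same values; the difference only shows up in how the cost scales with size.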

You’ll never write high performance code this way, but most code is just glue. So I guess it doesn’t matter for most. Most scripting languages don’t even allow you to choose floats vs. doubles. Hmm, is it time for another rant about why programmers would benefit from understanding the internal representation of floating-point types, so they don’t get confused about why 0.1 isn’t 0.1 in floats? Where’s @Jragon, anyway…?
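The 0.1 confusion in one short sketch (Python floats are IEEE 754 doubles; 32-bit floats have the same issue, only worse):

```python
# 0.1 has no finite binary representation, so the stored double is
# only the nearest representable value.
print(f"{0.1:.20f}")       # printing extra digits exposes the rounding
print(0.1 + 0.2 == 0.3)    # False: both sides carry rounding error
ok = abs((0.1 + 0.2) - 0.3) < 1e-9   # compare with a tolerance instead
print(ok)
```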

I learned programming nearly 45 years ago on an IBM mainframe. I guess that makes me an ancient school programmer.

I’m comfortable with low-level programming, but I haven’t needed to do it for a long time.

Modern students don’t really learn it because a) they are unlikely to ever need it and b) there is so much else to learn that didn’t exist in the old days.

I’ve certainly come across younger programmers who simply don’t know how numbers and text are stored internally, or how pointers and memory addresses work.
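A quick peek at those internals, doable even from a high-level language (the values are arbitrary):

```python
import struct

# Numbers and text are ultimately just bytes.
raw = struct.pack("<I", 258)       # 258 as a 32-bit little-endian int
print(raw.hex())                   # low byte first: 02 01 00 00
encoded = "héllo".encode("utf-8")  # non-ASCII chars become multi-byte
print(encoded)
```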

Borland Delphi in the 1990s was the best and most innovative language of its time. Borland also popularized the concept of the IDE, which was copied by everyone else. And Pascal was (and still is today) a great language.

The great thing about Pascal (and Free Pascal today) is that it can be both very high level and very low level, as you choose, and both styles of programming can be mixed freely.

On the one hand, it’s just as fast and efficient as C++, and you can even include blocks of Assembler if you want to.

On the other hand, you can easily plug in, say, a large and sophisticated graphics component, or a database interface, or a web server. You can just as easily create an elaborate and smart-looking user interface, or… a linked list with pointers.

Object Pascal and Delphi live on and prosper today in the form of Lazarus, an excellent, modern open-source clone that has been constantly improved and updated over the years.

Lazarus and Free Pascal, unlike the old Delphi, are also cross-platform, and will run on practically any machine or OS.

They’ll just curse at you, BHG.

Not sure I really get this one, I’ll be honest.

The Greek letters are being used to name hurricanes after the regular alphabet is used up, and they are used for COVID variants. The comic hypothesizes a future where these letters are known only because of the terrible things named after them.

(The nanobot swarms are presumably coming.)

Things like disease variants and hurricanes get assigned names from Greek letters, because they don’t want to offend people by naming them after (for instance) places.

This comic is positing that in the far future, this will continue so long that most people will forget the original meanings of those labels, and associate alpha, beta, etc. only with lists of bad things.