How do some people graduate from college without understanding the fundamentals of their major?

I had a sadistic vision of asking electrical engineers to solve some of the OP’s problems using no aids beyond a supplied slide rule. Obviously nobody has removed one from its case in years, but they should be able to figure it out, and that would sort out who can tell picofarads from terafarads and degrees from radians.

I’m beginning to see why the international particle physics institute at the U hired my husband as an engineer when he was a year away from a B.A. in physics. He ended up spending forty years there doing engineering without an engineering degree.

I was applying for an analytical chemist job just out of college with a bachelor’s degree in chemistry. The interviewer asked me specifically how I’d test for the presence of certain ions in a solution. I told him that offhand I didn’t remember, but that I could look it up in ten minutes. Luckily, I failed that interview and went on to work in another analytical laboratory with more rational people.

Schools should teach something beyond memorization… I feel like if you had to turn a competent physicist into an engineer, or a mathematician into a biochemist, they would at least have the skills to start learning.

I have never studied Engineering of any kind, and my last Physics class was in high school in 1984 (A-Levels). I answered the first question correctly (all parts) without looking up anything.

The second problem is (as stated) a simple algebra problem. I had no problem with that either. Dunno if I would have been confused by the question without the hint that was supplied in the note.

That someone with an Electrical Engineering degree would struggle with either of these would have been mind-boggling to me when I first entered an American university 30+ years ago. But after a few months as a tutor, I quickly realized that some students can skate by without understanding the fundamentals of even the subject they are majoring in.

I suspect more than 1% of physicians could do that, but that isn’t the point. A PhD is not long removed from their studies, and has applied circuits and units often and recently.

Zwitterions are not fundamental to medical practice, and are niche even in biochemistry, and are covered in pre-med much more than medicine.

Still, your point is taken: you best remember what you use often, testing is not always reliable or free of stress, and memories fade, particularly if the material was never mastered.

You probably remember the molar mass of glucose and could recognize which things don’t belong in the Krebs cycle. Having taught MCAT preparation in Med school, I remember most of that stuff. That doesn’t make it that useful to me.

I used to ask this question of potential technician candidates:

“Would you touch the terminals of a 12v car battery with your bare hands- why or why not?”

If they answered something like “No, the battery has too many Amps in it,” I would reject them.
And, a lot of them answered in that fashion.
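For completeness, the physics behind the “right” answer (the resistance figure below is a rough, commonly quoted value for dry skin, not a measurement):

$$ I = \frac{V}{R} \approx \frac{12\ \text{V}}{100\ \text{k}\Omega} = 0.12\ \text{mA} $$

That’s well below the roughly 1 mA threshold of perception, let alone the tens of milliamps that become dangerous. The hundreds of amps the battery can deliver are irrelevant if your body won’t draw them.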

As an aside, I did notice that the figures in the OP’s Case #1 are concocted so that no calculation is necessary at all, except for some basic arithmetic in one’s head and, possibly, remembering that ln(2) ≈ 0.693.

Calculator, smartphone, Internet, etc. were permitted during the test. Even then, an answer of -ln(1/2) or ln(2) would have been perfectly fine.
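Assuming Case #1 was the classic RC discharge-to-half question (the problem itself isn’t reproduced here), the whole “calculation” is one line:

$$ V(t) = V_0 e^{-t/RC}, \qquad \tfrac{1}{2}V_0 = V_0 e^{-t/RC} \;\Rightarrow\; t = RC\ln 2 \approx 0.693\,RC $$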

I was asking about the PhD candidate. Someone graduating with a BS should remember the fundamentals. But I worked over 30 years in areas around digital logic design, and I assure you Ohm’s Law never came up. Looking at their GPA is a good indicator of basic knowledge. People who got Cs in their majors never made it to me.
I interviewed mostly new PhDs. If they had a background in our specialty, I’d ask questions to see if they truly understood it or had just been exposed to it. Observing how someone approaches a problem is a much better technique than a test.
I had a very good success rate in hiring, including hiring some people who became famous. One had an EE PhD, and I sure never asked him about Ohm’s Law during an interview.
BTW, when I got my PhD we didn’t have no stinkin’ GUIs. Asking about them would have shown a degree of cluelessness that would have made me walk.

There is always pressure on wages, regardless of profession. Just like there is pressure on component costs. Cynically, the employer wants to MAKE more money on you than you are COSTING him. And, most employees (and employers) are oblivious to the non-wage costs that are associated with employment.

Don’t YOU look at the price of goods that you purchase? How do you decide if this shirt is “worth” $X while this other may be worth $Y? Will it stand up better in the wash? Be appropriate with a wider variety of your slacks? Not show stains as obviously?

How do you decide if this (currently unknown) person is worth $X (salary/hourly)?

I had an early boss who once made the statement: “If I wanted someone to fill a need that I had TODAY, I would hire a Northeastern graduate. I hired YOU because I want someone to address TOMORROW’s needs”. Sort of a left-handed compliment; did he not think that I was capable of using today’s technology??

The curriculum in most schools/teaching establishments is geared towards today’s technology. Local employers want to hire people that they can put to work immediately.

The problem with this is that those employees are essentially obsolete (in fast paced STEM careers) before they’ve received their diploma.

The employer doesn’t seem to care as he’ll plan on replacing them in a year or two with some similarly “current level of technology” graduate.

Money is not a motivator. Money can be a DEmotivator but other issues typically factor more into job satisfaction (which correlates with job performance).

I, for example, enjoy novelty in that it lets me learn new things. Don’t ask me to work on “Model 2” of anything if I already worked on Model 1! Yeah, it may seem efficient from the employer’s standpoint (as I already know about the Model 1’s experiences, design, etc.) but it’s boring as hell, for me. The equivalent of digging a ditch – you learn something from the first one (i.e., how much you dislike digging ditches!) but the second one is just more of the same – even with a different soil type, depth, hole shape, purpose, implements, etc.

Ideally, you want people who are interestING and interestED to work for you.

I asked technical behavioral questions. Things like, to people with programming background, “what’s the hardest bug you ever had to fix?” Anyone who says they never screwed up in an interesting way is either lying or hasn’t written anything harder than “Hello world.” You can tell pretty quickly if the candidate really knows their stuff.

I agree that the generic behavioral questions you gave are useless.

I’d say “how do you think a number like 37.9 is represented, again? Cool, you own an IBM 1620, the last decimal machine I know about.”
Kids these days never had to worry about how hardware multipliers work, or about writing microcode to do multiplication on a machine without a multiply unit.
Not to give you a hard time, but things can be trickier than simple tests show.
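For anyone who hasn’t been bitten by this: on binary floating-point hardware, 37.9 has no exact representation. A couple of lines of Python (used here just because it’s handy; the same holds for IEEE-754 doubles in any language) make it visible:

```python
from decimal import Decimal

print(f"{37.9:.20f}")  # 37.89999999999999857891... : the nearest binary double
print(Decimal(37.9))   # the double's exact decimal value, digit for digit
```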

Really? The programming classes didn’t have programming assignments? The department had no class to screen out those who thought computers were cool but had no idea of how to program?
That there aren’t a lot of great programmers, I get - real programming requires real problems, not the kind you find in a class, which you do in a week by yourself.
But I agree that the best approach is to hire the best. Anyone with any experience knows that.

Not if they did all their work in the digital domain. People who design cell libraries worry about this stuff, not anyone else in the digital realm. If the person was working in analog, then I see it.

Tests like this are good for figuring out who works well under these conditions. The way I programmed in the small was to get a request for a feature, go back to my office and read the Dope for five minutes, and then write the code which my subconscious wrote for me from brain to file. Always worked the first time. In the large I wrote down the logic in pseudo-code, found the corner cases I had to consider, found logic bugs before I wrote any real code, and then did it.
Neither way would look very good in solving the programming tests mentioned here.

Not as the primary factor. If something is obviously overpriced I walk away, not because I can’t afford it, but because I don’t like being treated as a sucker. I am in a privileged position regarding money. I know I am in a minority.

It depends on how you take it, I guess, which may depend on the tone he uses while making that statement. I would think he means that he takes it as a given that I am of course capable of using today’s technology appropriately, but that, on top of that, he expects me to be able to handle tomorrow’s needs with additional knowledge, ideas, and resources as needed.

I probably would make a quick mental calculation: how much more does this person cost compared to the alternative? What are his references, and what impression does he make? Supposing I hire a cheaper person, how do the savings compare with what it could cost me additionally in money, reputation, and opportunity costs like future contracts lost if the cheaper alternative is not up to the task?
There are other factors, of course, but that would be a start.

No. Because I would have to wash my hands, later (either from the corrosive residue or the grease/grime commonly present on everything in an engine compartment).

The better question is “why does a slide rule work the way it does? I.e., why are each of the scales created the way they are?”
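The answer, for anyone who never had to learn it: the C and D scales place each number x at a distance from the index proportional to log x. Sliding one scale along the other adds lengths, and since

$$ \log a + \log b = \log(ab), $$

adding distances multiplies numbers. The other scales are the same trick applied to other functions (A and B are ruled on $2\log x$, the trig scales on $\log \sin x$, and so on).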

But, you would have known the difference between molar and molal solutions.

I never remember what the integral of a particular trig function is. I either rederive it or consult a table of integrals. It’s not worth cluttering my mind with facts that I will only rarely use. Quizzes should take these sorts of things into account.
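As an illustration of how quick the rederivation is (one substitution, nothing more):

$$ \int \tan x\,dx = \int \frac{\sin x}{\cos x}\,dx \stackrel{u=\cos x}{=} -\int \frac{du}{u} = -\ln\lvert\cos x\rvert + C $$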

Sadly, that’s not universally true. Too often, folks are taught “do this to get this result”. There is often a lack of basic understanding conveyed so the student doesn’t know how to use the knowledge beyond the stated application.

I see lots of engineers/programmers DESIGNING products (i.e., determining what they should do, how they should be used, etc.). Yet, this isn’t a skillset that is taught in most university programs. Who decided that these individuals were qualified to make such decisions?

We have a stove. It has a simple user interface: most functions are selected by turning a large knob until the “choice” is displayed, then pressing it inward to make the selection. Lather, rinse, repeat for the next choice to be made (cooking mode, cooking temperature, cooking time, what to do when time has elapsed, etc.)

I’m sure some ENGINEER decided this was a great universal interface. And, it is! But, totally inappropriate for this sort of application! Too many actions required of the user to do simple things. (Why do I have to select a particular number of HOURS, then a particular number of minutes? Why can’t I just select minutes and have the oven know that 59 minutes becomes 1 hour? And, if I want to enter 4 hours, have the oven notice how QUICKLY I spin the dial and adjust the increment so slow rotation allows fine precision while faster rotation jumps 10 or 15 minutes at a time?)
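That speed-sensitive increment is maybe a dozen lines of firmware. A minimal sketch in Python (every name and threshold here is an illustrative assumption, not any real stove’s code):

```python
import time

FINE_STEP = 1        # minutes per detent when turned slowly
COARSE_STEP = 15     # minutes per detent when spun quickly
FAST_GAP = 0.05      # seconds between detents that counts as "spinning"

_last_detent = None

def on_detent(direction, minutes):
    """Return the new timer value after one encoder click (direction is +1/-1)."""
    global _last_detent
    now = time.monotonic()
    fast = _last_detent is not None and (now - _last_detent) < FAST_GAP
    _last_detent = now
    step = COARSE_STEP if fast else FINE_STEP
    return max(0, minutes + direction * step)
```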

The same interface can obviously also be used to set a general purpose timer.

Ah, but what happens if the timer and the cook timer expire simultaneously? How do you distinguish between them?

What happens when you are USING the interface to set some parameter and an event occurs that requires your intervention/confirmation? Is your current activity interrupted? Aborted? How do you resume it, later?

The simplistic engineer’s interface conceptualization failed to account for the reality of the application.

Is there some OTHER entity, on staff, who would be better qualified to make these design decisions? Is there any credentialling process to give those skills to staff?

You can provide decimal data types. E.g., I support Big Rationals in my current project, mainly because I want users to be able to write code and not have to explain why X * 1/X isn’t ALWAYS 1 (for X != 0).
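Python’s standard library makes the point in a few lines (Fraction stands in here for the Big Rationals in my project, which are a separate implementation):

```python
from fractions import Fraction

# Find an integer where IEEE-754 doubles break the identity x * (1/x) == 1.
x = next(n for n in range(2, 1000) if n * (1.0 / n) != 1.0)
print(x, x * (1.0 / x))         # first such n (49 on typical hardware), just under 1

# With exact rationals the identity holds for every nonzero x.
print(x * Fraction(1, x) == 1)  # True
```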

Should they have to? I designed a processor with a serial multiplier (cheaper and less power hungry than a parallel approach; heavily pipelined to compensate for the inherent delay from a serial approach). The guy who used the processor (40 years ago) didn’t care how I did it; he just knew that he could count on getting an accurate result in N clocks.
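For the curious, the core idea modeled in Python (a software sketch of the technique only: no pipelining, and obviously not the actual 40-year-old design):

```python
def serial_multiply(a, b, width=8):
    """Shift-and-add multiply: one multiplier bit examined per 'clock'.

    Always takes `width` iterations, matching the fixed N-clock latency
    of serial hardware; no parallel array of adders required.
    """
    product = 0
    for bit in range(width):        # one iteration ~ one clock
        if (b >> bit) & 1:
            product += a << bit     # add the shifted multiplicand
    return product

assert serial_multiply(13, 11) == 143
```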

It is relatively easy to produce a piece of code (for a class) that meets the stated goals. You typically have a machine handy on which you can TEST the code. I suspect most “programming students” (not “computer scientists”) just hammer away at their code until it 1) compiles successfully and 2) appears to give correct answers for the test cases they can imagine.

Give a “programmer” a blank sheet of paper and a problem and he’ll often choke as he’s more accustomed to CHANGING some code (even if it doesn’t work) than CREATING something from scratch.

This is also apparent in their work style. They want a fast computer so they can “turn the crank” in a few minutes/moments and see the result of their changes. Then, they can make another guess and try again.

When I started designing software-based products, I had two shots at the code in an 8 hour day. The ONE development system was shared by 3 developers (so getting time to edit your code was difficult). You then had to burn the image into EPROMs – that you had previously erased under ultraviolet light (of course, just one EPROM programmer – on the development system – and one UV light). Finally, get access to the prototype hardware to install your EPROM set, apply power… and “wonder” why it wasn’t doing what you had planned.

That’s also true of hardware design. Contrived problems/examples don’t teach anything significant. I built a two-player “Breakout” (video) game in one of my classes. As a kid, I built a “football” game using analog computers and “logic kits” – with analog meters to display parameters like position on field, down, yards gained/lost on that play, yards remaining to first down, etc.

Real problems expose real issues that you WILL eventually face “in the wild”.

I’d be more interested in your use of invariants in the implementation, and in your inclusion of each of those potential bugs (that you think you’ve anticipated) in a formal test suite: to ensure they have been addressed, to tell the next maintainer about those hazards, and to ensure they don’t creep into the codebase later. Because getting code to work is relatively easy; proving that it will always work (even when you are no longer the maintainer) requires a fair bit more effort and discipline.
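Concretely, something like the following, with made-up names (the function and its “anticipated bug” are illustrative, not from any real codebase):

```python
def moving_average(xs, window):
    """Sliding-window mean. Invariant: len(result) == len(xs) - window + 1."""
    assert 1 <= window <= len(xs), "invariant: window must fit in the data"
    total = sum(xs[:window])
    out = [total / window]
    for i in range(window, len(xs)):
        total += xs[i] - xs[i - window]   # the off-by-one hazard lives here
        out.append(total / window)
    assert len(out) == len(xs) - window + 1
    return out

def test_window_equals_length():
    # Pins the corner case down for the next maintainer, forever.
    assert moving_average([1.0, 2.0, 3.0], 3) == [2.0]
```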

Everyone has a budget in which they must operate. Businesses, more so. If I spend $50K on “hard tooling” for an injection mold to be used in a product I’m designing, then the funds I have left for other molds are reduced. As, eventually, I will have to recover any investments in the design, I have to keep total sales, selling price, profit margin, etc. in mind to contain any extravagances in the design process.
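The arithmetic is simple but unforgiving. With made-up but plausible numbers, amortizing that $50K mold over a 25,000-unit production run adds

$$ \frac{\$50{,}000}{25{,}000\ \text{units}} = \$2\ \text{per unit} $$

to the cost of goods before the part itself costs a cent, and that $2 gets multiplied again by whatever margin structure sits between COGS and the shelf price.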

His point was well made. Most of the people I went to school with had never used a soldering iron, wire wrap gun, etc. before graduation. Anything constructed was made from kits that made this sort of thing easy to do. I was an exception in that this was also a hobby of mine.

The alternative “from Northeastern” wouldn’t know how to design a computer language, implement a compiler, design a CPU, or use any of the “future” programming languages. So, those POSSIBLE problems would only manifest later.

I just look at it as any other cost-benefit decision: is paying $X worth the services/capabilities that I expect to recover from him? Will I have a continuing need for his services – or, would it be smarter to hire a consultant for THIS need and avoid the commitment?

What if they just answered “No, because that would be stupid”?

What modern work environment works that way? When would the number of attempts to get a code working ever be relevant any more? If I’m employing programmers, I care how long it takes them to produce finished code that does what it’s supposed to do. I don’t care if they do that by meditating for three days and then typing something out in five minutes that works the first try, or if they go through six cycles of writing, debugging, and re-writing that each takes half a day.

I would ask them to explain why they thought it would be “stupid.”

That seems to have remained constant in your life, good for you! And as you have not refuted what I wrote, but just topped it, I will modestly step back.