Ah … yeah! … um … sure, who doesn’t know THAT?!
Maybe someone who didn’t look up “Capacitor” on Wikipedia? This isn’t a closed-book test.
I doubt the book would help me. Don’t worry, my only interaction with electricity involves the “on” and “off” button. LOL
Same. There are moments in my life where I feel like an idiot, and this is one of them.
I was assuming hardware. If you want to do it in software, you figure out that 37.9 * x is 379 * x / 10 and then do repeated addition – moving the decimal point as needed for each example, of course. But there are no decimal points in hardware; your datatype just maps to the correct one. We all know that von Neumann thought floating point was stupid. (My first graduate adviser was his student and also worked at IAS.)
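For the software case, a minimal sketch of that scaled-integer trick (assuming no multiply instruction and no floating point; the function name and test value are mine):

```c
/* Sketch only: 37.9 * x done as 379 * x / 10, with the multiply itself
 * reduced to repeated addition (no multiply, no floating point assumed). */
#include <stdio.h>

long scaled_multiply(long x)
{
    long acc = 0;
    for (int i = 0; i < 379; i++)   /* 379 * x by repeated addition */
        acc += x;
    return acc / 10;                /* move the "decimal point" back */
}

int main(void)
{
    printf("%ld\n", scaled_multiply(12));   /* 37.9 * 12 = 454.8 -> 454 */
    return 0;
}
```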
My point was that these simple test questions might not be so simple.
I thought that “kids these days” was a giveaway that it was kind of a joke. But computer science has grown so much since I was in grad school that lots of students don’t get the slightest background in hardware. I quit the IEEE Computer Society when years of Computer magazine had no hardware articles. They used to; I edited a special issue on hardware back when I was on the Editorial Board.
Not what I observed when I TAed an assembly language class. Lots of kids hammered away for hours without getting anywhere. And many made the same kind of mistakes. We TAs got a reputation for being geniuses for pointing this out in seconds - thanks to our seeing the same mistake a dozen times already.
Plus, one would hope that assignments after the easy initial ones would be too difficult to solve by brute force.
Changing code and making it work can be more difficult than starting from scratch. My dissertation involved tearing the guts out of the Jensen and Wirth portable Pascal compiler and making it compile my own slightly object oriented language. Portable assuming your computer used 60 bit words. I was the first person to get it to compile itself on Multics. I’ve done other major revamps. It’s not easy.
Fun fact - Wirth was allergic to variable names more than 2 or 3 letters long. He did this in the examples in his data structures book which I taught from. It took me months to document all the variables in the compiler, but grad students have tons of time.
You really want to go back to batch and desk checking? Yeek!
This was why when I was an undergrad I’d hang around the computer center in the middle of the night when you could get jobs run quickly. When I TAed we had one PDP-11 for running jobs, and a few of us stayed up all night the couple of days before an assignment was due to help when the jobs didn’t work. I’m not complaining, the students brought us Southern Comfort.
I don’t know why you want students today to suffer like we suffered.
The bugs I mentioned were not potential bugs, they were real logic bugs.
Let me tell you the real real world I had to deal with. My last project was getting fab data from our foundry, loading the new stuff, building a data base, displaying it in web pages, and doing alerts when things went out of spec. And I had a couple of months to do it, direct orders from my VP.
I called meetings of the users, and quickly found they had no idea of what they wanted, since this was the first time we were in charge of our own data. So I have to chuckle when you mention a formal test suite. Requirements changed by the week. The one thing I would have sworn was invariant (a lot would not be split across two revisions of a chip) turned out not to be.
And I got it to work. Though I had half time help from one person and part time help from some guys who knew databases better than I. (They didn’t really exist when I was in grad school, not relational ones anyway.) When I retired I was replaced by six people.
Good luck in proving software will work. You can’t even prove hardware works – you can prove that an implementation is equivalent to another one. I once went to a seminar with Professor Dijkstra when he was in his writing-correct-software phase. I concluded you could do it if you were as smart as him, but none of us were.
I was lucky enough to have only worked on cutting edge projects, never on boring done it five times projects. And that’s the kind of project I hired for.
I worked for 35 years in ways of improving chip quality (and we did) and this cost millions of bucks and lots of silicon area. Sometimes you had to convince management the investment was worth it, but we developed some good tricks. And I have only worked for very large companies.
A good team player doesn’t just do their job and wait for the others, but might be a leader without being a manager and help slower moving people. People are justifiably nervous about complaining to a manager about slowpokes, but you can usually help.
It sometimes takes people skills. One of the guys assigned to my project was young and aggressive, and he wanted to organize our weekly meetings. Being close to retirement I was fine with that since it made him more committed to the project and maybe helped in his development. Since he came from another vice presidential area, I knew he was not going to get brownie points for helping us. I also made sure he had visibility when we reported to my VP.
As for the person they wanted to replace, I can assure you that firing someone in a big company (outside of a RIF) is a big pain in the ass and takes lots of paperwork. Maybe that person was being offensive in some way. One of the hardest things about being a manager is dealing with performance problems, and dealing with stuff you can’t tell your reports.
Sadly, (unfortunately?) I don’t see it as such. I am dismayed at the quality of students coming out of universities these days. “Computer Science” now seems to be synonymous with “programmer” – someone for whom I have only a modicum of respect.
My CS education was in the EE department; I’m technically an EE. So, I am well versed in hardware (analog and digital) as well as software. The other “types” (subspecies?) of EEs included pure/traditional EE (with a smattering of CS) and something like “bioelectronic engineering” – an option I never knew anyone to take.
The point being, that I could design a processor (in hardware) or a compiler to run on that processor. (The “pure EEs” could design the processor but not write the compiler – though could probably write code to be compiled UNDER it). I opted for the CS option thinking that it was going to be about designing computers, not “writing code”. So, I had to scurry to take as many supplemental courses as possible to address my hardware interests.
But, the curriculum was designed in such a way that students could relate concepts from one “subspecies” to efforts in the other. E.g., I would use Petri Nets in my hardware designs to address the degree of parallelism I wanted to support.
Exactly. There is a lot of “bad press” about the use of pointers – like running with scissors. I don’t see anything wrong or difficult or error-prone about the use of pointers – because I see them as (eventual) “contents of the address register”. I look at a line of code and see the nominal operations (opcodes) that a processor would perform (in the absence of optimizations). No magic involved.
As I’m not a “joiner” the IEEE has never had an appeal for me. Sometimes, this is annoying as they lock up certain publications behind a paywall. But, there are always ways around those impediments.
They likely weren’t thinking/educated in terms of hardware. OTOH, most of my formal schooling was done in HLLs – PL/1, LISP, Algol, WATFOR, SNOBOL, ML, etc. At the time, I was developing “embedded” products with the “recently introduced” microprocessor devices. So, assembly language was on-the-job training. And, had real constraints – costs, execution times, maintainability, buglessness, etc.
Nowadays, it’s too easy to throw changes at your codebase until things LOOK like they are working.
That depends on the extent of the change(s) and your familiarity with the codebase or its intended functionality. A goal for this week is to modify the DHCPd service to support the use of symbolic names from ethers(5) in lieu of MAC addresses (why doesn’t it do this already? Who the hell wants to hardcode MAC addresses in a configuration file when they already exist in ethers(5) and there are library hooks to access them there??)
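For the curious, ether_hostton(3) is the sort of library hook I mean – it resolves an ethers(5) name to a MAC address. A rough sketch of the lookup a config parser would make (the host name is hypothetical):

```c
/* "printer1" is a hypothetical /etc/ethers entry. */
#include <stdio.h>
#include <netinet/ether.h>

int main(void)
{
    struct ether_addr mac;

    if (ether_hostton("printer1", &mac) != 0) {
        fprintf(stderr, "no ethers(5) entry for printer1\n");
        return 1;
    }
    printf("printer1 -> %s\n", ether_ntoa(&mac));
    return 0;
}
```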
Most “programmers” seem to be asked to make similarly small changes. And, often just look for where they HOPE the (one) change needs to be made. Changes that have far-reaching consequences will quickly swamp them.
“Mutt Licks” Heh heh. Corby was my undergrad advisor.
No. There are other tools that can automate much of this. The point was the developer should be THINKING about what he is doing and not just “checking to see if it APPEARS to work” – especially if he’s not going to develop a testing strategy for it!
Each of our CS classes effectively had their own “system”, often designed and maintained by the professor and his grad students. (MuttLicks was used campus wide as a “service” but I don’t recall it used for any courseware). It was a common hack to intentionally crash the system just before class on the day an assignment was due – to “reward” students for procrastinating (after YOU have completed your assignment, of course).
[Odd sense of humor but that’s the way it was…]
No desire for suffering. Rather, a hope that they would be more conscientious in their development styles.
There are so many more tools available, nowadays, that it is shameful that the quality of code hasn’t markedly improved! I attribute it to lack of discipline; folks not caring (or, perhaps, not being COMPENSATED enough to care?) about their work.
I promise prompt/unlimited/lifetime support for any project I’ve been contracted to design. But, only to fix bugs/deviations between the implementation and its formal, agreed upon specification! You want to add a feature or want me to figure out why YOUR modifications don’t work? Sorry, I’m busy with other projects (and have no contractual obligation to you).
Because of this attitude, I put a lot of time/effort into my designs, design methodology, test suites, documentation, etc. I need to PROVE to myself (more so than to the client) that I won’t be spending any more time on this project after release.
I keep VM images of each development environment so I can be back where I was in case a project comes back to me. The first step is restoring the DELIVERED image and then testing it against the claimed problem conditions. If I don’t see a bug manifest, then I check the customer’s image and inform him that it is “corrupt” (a polite way of telling him that he has ALTERED it). “Here, in case you lost the image I gave you at the completion of the contract, here is another copy of it and all of the other deliverables…”
My projects, now, are solely for my own enjoyment and “edification”; no time tables, no budgets, no marketing constraints, etc. This also means I am free to try different technologies without having to answer for any “miscalculations” on my part (“Ooops! That’s not quite ready for prime time”)
I use symbolic execution to try to identify corner cases in my implementations. Link my code and hardware to my documentation process. Explore new approaches to age-old problems (e.g., any “error” generates a (FILE, LINE) tuple that acts as an index into a database that lets me bind explanations and recommendations to the event without dicking with the actual code. “Error 1234? What the hell does that mean? Where is it signaled? And how do I fix it??”)
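Roughly, the mechanism looks like this (the macro and function names are mine; the database lookup is stubbed out):

```c
#include <stdio.h>

#define RAISE_ERROR(msg)  report_error(__FILE__, __LINE__, (msg))

static void report_error(const char *file, int line, const char *msg)
{
    /* In the real system the (file,line) tuple indexes a database of
     * explanations and recommendations; here we just print the key. */
    printf("ERROR key=(%s,%d): %s\n", file, line, msg);
}

int main(void)
{
    RAISE_ERROR("sensor reading out of range");
    return 0;
}
```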
A paying customer/client/employer would likely not want to incur the costs of my “experiments”.
When I went into business for myself, I was keenly aware of this. It was one of the main reasons that I stopped the 9-to-5: “The time YOU (employer) spend screwing around is actually hours of my life. While you may be willing to pay me for them, I’m not keen on spending my life that way!”
Most of my peers would advocate for time-and-materials contracts… surrendering to what they saw as an inevitability. Instead, I opted for fixed-cost contracts – which requires a clear problem statement to clarify its scope. Once you get this buy-in from a potential client, it gives you complete freedom to approach the problem however you want, knowing that you have ONLY the agreed upon time and monies available to accomplish that result.
The “unlimited support” is a natural consequence of this: “I agreed to provide the device specified and appear to have made a demonstrable mistake”.
Ingres dates to the early/mid 70’s. As my $WORK had neither the need nor resources to support a live DBMS (let alone RDBMS), I never used one.
However, my current design eliminates the filestore completely; the ONLY persistent store is implemented in the RDBMS – including downloadable software images! This seems considerably more sensible – let the RDBMS ensure the integrity of all of the data within. Why write countless parsing and domain checking routines for every possible datum in a system?
This also offers a means of providing for redundant storage.
And, lets me install upcalls so individual bits of code can register to receive notifications when parameters of interest to them are changed! No need to “reboot” when something changes some other thing!
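A minimal sketch of the upcall idea (all names here are invented for illustration, not from any real system):

```c
#include <stdio.h>
#include <string.h>

#define MAX_WATCHERS 16

typedef void (*upcall_fn)(const char *param, int new_value);

static struct { const char *param; upcall_fn fn; } watchers[MAX_WATCHERS];
static int nwatchers;

void register_upcall(const char *param, upcall_fn fn)
{
    if (nwatchers < MAX_WATCHERS) {
        watchers[nwatchers].param = param;
        watchers[nwatchers].fn = fn;
        nwatchers++;
    }
}

void set_parameter(const char *param, int new_value)
{
    /* ...persist new_value, then notify whoever registered interest... */
    for (int i = 0; i < nwatchers; i++)
        if (strcmp(watchers[i].param, param) == 0)
            watchers[i].fn(param, new_value);
}

static void on_timeout_change(const char *param, int v)
{
    printf("reconfiguring: %s is now %d (no reboot needed)\n", param, v);
}

int main(void)
{
    register_upcall("lease_timeout", on_timeout_change);
    set_parameter("lease_timeout", 3600);   /* triggers the upcall */
    return 0;
}
```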
The first step to writing correct software is to know exactly what it is supposed to do. Without a defined statement of performance/functionality, how will you ever KNOW if the implementation meets ANY goals? If you let those specifications change AND DON’T THROW EVERYTHING AWAY (to be safe) with each change, how will you know that you haven’t got some baked-in assumptions in the parts you decided to “keep”? Especially as there is such a strong temptation NOT to discard “done things”!
I actually brain-farted in an interview once like that. They were describing a trading floor where they basically wanted updates broadcast to everyone, and they asked me if I’d choose TCP or UDP to do that.
I got flustered and said TCP, mostly because I forgot which was which in the moment; I had specialized more in software development and database stuff, not networking, and had only taken the single required network course a few years before.
When wasn’t it? Back as far as 1995 when I graduated, my university had basically this sort of arrangement:
- Solving business problems with technology: Management of Information Systems, business school
- Programming / how to solve problems using computers: Computer Science, College of Engineering
- Solving problems that involved specialized hardware: Electrical Engineering, software track
- Devising the specialized hardware itself: Electrical Engineering, digital hardware track.
So if you wanted to say… write microcode for processors or work on embedded systems, you typically went EE -hardware or software track. If you wanted to be an actual developer and write software, you went computer science. If you wanted to work in corporate IT and solve business problems, you typically went MIS.
There was a LOT of overlap- in essence, a CS degree with an EE minor in digital design was nearly the same thing as the software track EE, with the only real exceptions being that CS majors got more leeway on our science credits (we weren’t required to take chemistry and physics, we could take other sciences if we so chose) and didn’t have to take all the engineering-specific stuff like engineering drawing that the EEs did. But we did have to take several digital design courses and computer design courses, as well as a bunch of math-centric optimization courses like linear programming that the EEs didn’t have to.
I am impatient. As I’ve said elsewhere, “let’s move on” (I had an employer once pull me aside and comment: “I hear you kept everyone here until 1AM last night?” “Huh? Sure. It was the first time bringing up the hardware. I assumed everyone WANTED to be here…”)
I’ve had technicians build (wirewrap) prototypes and discovered that there were too many mistakes to leave me with any confidence that they had followed my schematic accurately.
“I’ll just take it home and rebuild it, myself”
I’ve never allowed anyone else to design hardware on which I’ll be writing software. I know what I will want the hardware to do and how I will want to interface to it. And, by prototyping in foil (cheaper than hiring someone to build a single prototype), I can get the results I want with relatively little effort/time/money on my part.
[And, will then have multiple prototypes to play with for roughly that single effort!]
In my 9-to-5s, I treated any project management responsibilities I had as being an expediter’s: Tell me what you need and I’ll get it for you. I’m not going to watch over your shoulder to see that you are working, on schedule, in budget, etc. I expect YOU to know how to do those things as you are a professional. But, you (as a group) likely need someone to deal with the stuffed shirts, so that’s what my job will entail.
Yet again, let’s agree to a spec (even if you have to pay me hourly to develop one from questioning you) and then get out of my hair. If I want to subcontract some portion of the work, I will assume the responsibility of selecting the vendor, verifying the work, paying them, etc. Overhead is, by definition, a manifestation of inefficiency. It’s part of the risk I assume for you!
I’d have asked if there was a need for guaranteed delivery or acknowledgement of delivery. And, what the fabric looked like (are broadcasts routed?)
[I seem to have missed part of this post in my earlier reply?]
[CS == programmer]
My education enabled me to design a CPU, create a programming language, design a compiler for that language, write code for an application in that language – as well as design I/O devices to be controlled by it and a power supply to power it. Had I not sought out courses on chip design, those would not have been part of my curriculum – but more likely adopted by the “pure EEs”.
This was called “EE/CS”. We had no notion of a “minor”.
A “pure EE” would know a lot more detail about the “electronics” but likely not have as much focus on “designing computers”. They might have more detail about semiconductor physics, device modeling, etc. And, just a smattering of programming skills. (EVERY department/major had some amount of programming instruction, as it is essential to engineering – but at a much higher level of abstraction and with less varied exposure to different programming languages and concepts.) They would be incapable of doing a complex system design as they would be far less experienced with different programming paradigms, etc. Ask them how virtual memory can be used to implement copy-on-write and their eyes would glaze over. Ask them how to use copy-on-write to implement call-by-value semantics and they’d be hopelessly lost: call by what?
Everyone had to take a set of core courses – a couple of semesters of calculus, diffeqs, some “humanities”, phys ed, etc. Beyond that, each “course”/major diverged to highlight the skills necessary for that “specialty”.
So, I had courses in abstract algebra, artificial intelligence, advanced algorithm design, compiler/language design, etc. As well as things like “The socioeconomic impact of computer technology” (actually one of the most enlightening courses, and incredibly prophetic!). In addition to a set of core EE courses (network analysis/synthesis).
I would trust an EE to write a payroll program and not much more. Anything “complex” would quickly tax his ability to structure an effective solution. He would similarly trust me to design a switching power supply, but not a microwave amplifier or an electrical distribution network for a city.
“Real-time” would be far beyond his level of understanding – as would (software) fault tolerance. He’d not understand why cramming 8 unrelated digital I/Os onto a single port (to save on decoding hardware) was A Bad Thing.
Wasn’t the idea of the “tough question interview” a thing at Google, and the subject of a Poundstone book? Although I tend to do okay at tough abstract theoreticals, I am willing to bet that this method was only mediocre at predicting the best hires (if you looked back at things years later). Tests ain’t everything.
This thread isn’t about a tough question interview. GPAs don’t mean a damn thing if the applicant can’t handle easy questions. The OP already has one employee who is useless.
It means that if we’re interviewing you for an engineering role (which, quite literally 100% of the time we are interviewing engineers or graduates who are potential engineers), we’re asking questions about how you’ve interacted with people and other teams.
We’re supposed to follow official HR processes. There are 10 questions, all about behavior in specific circumstances. We can have free chat, get-to-know-you, etc., but the official interview is 10 questions about how you deal with other people.
No idea how we’re supposed to judge competency. From recent grads, their major, perhaps. For experienced folks, from their resume/CV.
It’s a small field, so for well-experienced people we know them anyway. For grads and less-experienced people, though, it’s a problem. And I can only speak for my particular discipline. I have no idea how people in, say, Research positions can hire qualified candidates. Per the formula, I could pass a Research position interview with flying colors and be totally incompetent.
I worked in real microprocessor design teams, and I don’t think a pure EE could design a processor any more complicated than the one in Knuth. Things have gotten really complicated since I taught computer architecture. And I don’t even know if they teach compiler writing any more. I took the class, which was damn useful. But back then SIGPLAN Notices had one or two new languages each issue. I’m not sure they are growing quite so quickly any more.
Do they teach Petri Nets any more? I’m not sure I ever used them, but I did make use of what were basically PV synchronization in a project. I doubt they teach those, since everyone works at a higher level these days.
Pointers are fine, really useful in the kinds of simulators I wrote. But people can badly screw them up. Hell, go tos are fine, except that they get misused.
They all had some basic HLL class before. This was the class which separated the real programmers from those who weren’t going to hack it. And you had to think in terms of hardware, at least at the register transfer level. But that wasn’t the problem we saw, it was normal bugs made worse by being hidden in assembler. Anyhow, as a microprogrammer assembler was an HLL for me.
The trick is if you can figure out what the code really does from reading it, and knowing which comments to believe or not believe. And how clean the code is, of course.
The justification for object oriented programming. It doesn’t always solve this problem, but it can help if done right.
I don’t agree. Back 50 years ago we had a unit on the software crisis, with multiple examples of systems costing more, taking longer, and often never working. SDI was said to be impossible because the software was too difficult. But we have orders of magnitude more software today, and most outages seem to be from attacks, not bugs. It is not perfect, of course, and I’ve said that I think people may be careless because they can fix minor bugs by pushing updates, but that our economy is not collapsing shows that we are in general writing much better software than we did 50 years ago.
Our hardware is much better too. I just have a sense of software quality, but I have (or had) data on hardware quality.
As I said, the reason for this was not the users being dumb or obnoxious, but that since this was a new effort they just didn’t know. We did look at commercial solutions, but they weren’t flexible enough, and as we learned more they would have been a disaster. Not from being bad, just from being a mismatch. Since I was also a subject matter expert in the area, I could put together a system that could handle anything. One part was a translator for the output of testers. Comments were semantically significant. And despite my best efforts, there were no standards. On the plus side they were so scared of me leaving that when I said I wanted to retire they paid me full salary for four months for working one day a week.
Since we made databases, they were free for the project. I hacked together a stopgap solution to meet the deadline, and then we migrated to a real one. We were dealing with multiple gigabytes of data a day, and we were low volume, so there was no other choice. But quals in grad school didn’t even cover databases, and my office mate was doing some basic research on them for her Ph.D.
I’m not sure you’re getting what you have to do to prove software correct. You have to describe its supposed function mathematically. You can validate with written requirements, but you can’t verify, in other words prove.
Hardware verification is a lot easier, being more constrained, but except for proving equivalency you still can’t do it. Design verification is done by throwing zillions of randomly generated code segments at simulations of our processors. Random code must be used, because engineers never think of all the corner cases. It helps to make computers when you do this. Our compute server ranch had thousands of high end processors running jobs 24/7.
I did hear a talk from someone at Microsoft about how they do software testing. They put in tons of effort, but they are hardly bug free.
Note that CPUs and expectations of them were considerably simpler 40+ years ago. E.g., no cache, no read-ahead, no speculative execution, etc.
OTOH, you can design instruction sets that make COTS processors look dog slow, by combining multiple operations into a pipelined single operation.
In one such processor, I would compute:
(dX,dY) = min(Sx/XAx, Sy/YAy) * (XAx,YAy)
in a single ~3us instruction (16 bit).
Doing this with 8b CPUs (similar price point) would take a couple of orders of magnitude longer, or more.
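Spelled out in C for comparison – the variable names follow the formula above; the 16-bit types and the sample values are my assumptions:

```c
#include <stdint.h>
#include <stdio.h>

static void step(uint16_t Sx, uint16_t Sy, uint16_t XAx, uint16_t YAy,
                 uint16_t *dX, uint16_t *dY)
{
    /* scale = min(Sx/XAx, Sy/YAy); apply it to both axes */
    uint16_t sx_scale = Sx / XAx;
    uint16_t sy_scale = Sy / YAy;
    uint16_t scale = (sx_scale < sy_scale) ? sx_scale : sy_scale;

    *dX = scale * XAx;
    *dY = scale * YAy;
}

int main(void)
{
    uint16_t dX, dY;
    step(1000, 800, 30, 40, &dX, &dY);   /* min(33,20)=20 -> dX=600, dY=800 */
    printf("dX=%u dY=%u\n", dX, dY);
    return 0;
}
```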
The downside was the lack of tools like VHDL, at that time (so, you were working on a Mentor workstation – that you didn’t possess!)
I’ve not met anyone past my generation who had such a curriculum. Sadly so, as the associated skills have value outside of compiler writing!
E.g., I have been replacing my other half’s “bookshelf stereo” with a media server. But, emulating the UI that is now so ingrained in its use – being able to point the remote at the HiFi, in an unlit room, and convince it to do exactly what you want, just by remembering the positions of particular buttons on the remote.
To do this, I have written a grammar to describe the legal command sequences recognized by the HiFi, in tokens defined by the remote’s key names. A few commands (lex/yacc) and the parser is done.
And, because it is in such an expressive form, I can alter the command bindings or add new commands (e.g., it has a 6 CD changer; the remote has a 10-key pad – so, why not a 10 CD changer? It’s all just software, so there are no hardware consequences of such a change!)
Doing it with the brute-force technique is more time consuming, more error prone, harder to maintain and comprehend, etc.
Again, I’ve not met anyone past my generation who had such a curriculum. But, they are useful for expressing dependencies in graphical ways (easier to visualize than stanzas of code).
My current project uses many (hundreds of) multicore processors. So, “jobs” execute in true parallelism. How do you express (to yourself, or a subsequent maintainer) the temporal and data dependencies, easily?
You still need semaphores/mutexes if you are working with multiple concurrent threads of execution. Even more so with multiple hardware cores/processors. But, you can bury them in other mechanisms that the application layer invokes to minimize the chances of deadlock, livelock, failure-to-lock, etc.
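One way to picture “burying” the lock – a sketch (not from any real codebase) where callers go through accessors and never see the mutex:

```c
#include <pthread.h>

#define QSIZE 64

static int queue[QSIZE];
static int head, count;
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;

/* Callers push/pop; they cannot forget to lock or lock in the wrong order. */
int queue_push(int value)
{
    pthread_mutex_lock(&qlock);
    int ok = (count < QSIZE);
    if (ok)
        queue[(head + count++) % QSIZE] = value;
    pthread_mutex_unlock(&qlock);
    return ok;
}

int queue_pop(int *value)
{
    pthread_mutex_lock(&qlock);
    int ok = (count > 0);
    if (ok) {
        *value = queue[head];
        head = (head + 1) % QSIZE;
        count--;
    }
    pthread_mutex_unlock(&qlock);
    return ok;
}

int main(void)
{
    int v = 0;
    queue_push(42);
    queue_pop(&v);          /* v is now 42; no lock visible to the caller */
    return v == 42 ? 0 : 1;
}
```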
Many coding guidelines eschew pointers. Particularly formalized standards. It’s as if someone got spooked and they all inherited the state of alarm.
However, pointers have to be abstracted further in a truly distributed system, due to NORMA. Unless you use tuples to reference the pointer and extent of the object referenced. (I’ve chosen to just use “object references” which I can freely pass around the system and access remotely)
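Something like this tuple is what I mean by an “object reference” (field names are purely illustrative):

```c
#include <stdint.h>

typedef struct {
    uint32_t node_id;    /* which host/processor owns the object        */
    uint64_t object_id;  /* handle that is only meaningful on that node */
    uint32_t length;     /* extent, so the far end can bounds-check     */
} object_ref;

/* An object_ref can be copied and passed in messages; only the owning
 * node resolves it back to a real pointer. */
```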
Gotos? Meh. You can usually restructure your code to avoid a literal goto (e.g., artificially wrap a portion of the code in a do/while loop and break out of that to fall into the “goto-ed” code).
But, if you think about the algorithm, often the need for them goes away, with planning.
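For illustration, the do/while-and-break trick described above looks roughly like this (the step functions are made up just to show the shape):

```c
#include <stdio.h>

/* Hypothetical steps, just to make the shape of the trick visible. */
static int setup(void)       { return 1; }
static int acquire(void)     { return 0; }   /* pretend this one fails */
static int do_the_work(void) { return 1; }
static void cleanup(void)    { puts("cleanup (the code the goto targeted)"); }

int process(void)
{
    int err = 0;

    do {                     /* artificial one-pass loop */
        if (!setup())       { err = 1; break; }
        if (!acquire())     { err = 2; break; }
        if (!do_the_work()) { err = 3; break; }
    } while (0);

    cleanup();               /* falls into the "goto-ed" code */
    return err;
}

int main(void)
{
    return process();
}
```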
Yes. I microcoded each instruction set in my processors – using high speed bipolar PROMs in the prototypes and a mixture of logic arrays in custom/standard-cell designs. It is just infinitely easier – esp if the folks who will be writing the code decide they want some other capability that you’ve not accommodated.
Mick & Brick and *Bit-Slice Design* were my go-to references. Along with AMD’s 29xx series datasheets.
Yes. I have grown to eschew comments embedded in the code (line endings) and, instead, rely on stanzas before each function or significant part of a function.
I rely, heavily, on assertions to make all contracts explicit. So, in addition to knowing the data types of arguments/results, you also have expressed limits on their ranges of values and other criteria that the algorithm relies on or creates. This simplifies symbolic execution by laying out all the constraints. It also acts as a run-time diagnostic to help detect things that truly can’t happen (e.g., if a caller passes an odd number to a function that expects an odd number but a memory/processor failure causes it to manifest as an EVEN number, once at/inside the function)
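A small, invented example of what I mean by a contract made explicit with assertions:

```c
#include <assert.h>
#include <stdio.h>

/* Contract: lo <= hi, both non-negative; result lies within [lo, hi]. */
static int midpoint(int lo, int hi)
{
    assert(lo >= 0 && hi >= 0);       /* preconditions the algorithm relies on */
    assert(lo <= hi);

    int mid = lo + (hi - lo) / 2;

    assert(mid >= lo && mid <= hi);   /* postcondition it creates */
    return mid;
}

int main(void)
{
    printf("%d\n", midpoint(10, 20));   /* 15 */
    return 0;
}
```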
[My overall goal is “computing as a service” – much like Mutt Licks attempted.]
What I have not been able to find an acceptable and EASY solution for is creating a roadmap of the various modules in a way that makes it easy for a future maintainer to figure out WHERE particular things are done and in what order/relationship. So far, I am relying on prose descriptions but these are subject to the same sorts of “maintenance failures” as normal comments.
Yes. Hence my use of an object-BASED design (even if not strictly OOP). You learn how you’ve been bitten in the past and adjust your style to make it harder to get similarly screwed (even if that is “by your own actions!”)
This. We have become accustomed to patching code. And, often, imposing silly, arbitrary constraints on HOW that is done – along with opening a door for the possibility of remote exploits.
Why do so many things need to be RESET for a change/update to take effect? Why not design things so they can know that a reset is coming and take whatever actions ARE APPROPRIATE TO THEM to accommodate the change without bringing the entire system down?
I do this by exploiting the object paradigm; backing each object with a server for that particular type. Then, when a service needs updating, “rewiring” the (remote) connections to the OLD service so they now reference the NEW service. And, arranging for new objects from the Factory to be instantiated with the new service backing them.
This also makes it easy to MIGRATE an object (a service is also an object) to another physical processor – insert a shim to cache connections, notify the service of a need to prepare for migration (.premigrate method), then .migrate it to another host and tell the shim to forward transactions to the new host until everything settles into place (at which time, the old service can be elided from memory and, if the hosting processor is now idle, it can be powered down – to save energy).
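A very rough sketch of the rewiring part (the struct names, the shim, and the single invoke hook are my simplification; .premigrate/.migrate are elided):

```c
#include <stdio.h>

typedef struct service {
    const char *host;
    int (*invoke)(struct service *self, int request);
} service;

typedef struct {            /* the shim: one level of indirection */
    service *target;
} shim;

static int do_invoke(service *self, int request)
{
    printf("handled request %d on %s\n", request, self->host);
    return 0;
}

int main(void)
{
    service old_svc = { "node-A", do_invoke };
    service new_svc = { "node-B", do_invoke };
    shim s = { &old_svc };

    s.target->invoke(s.target, 1);   /* goes to node-A */

    /* "migration": premigrate/migrate would happen here, then ... */
    s.target = &new_svc;             /* rewire the connection */

    s.target->invoke(s.target, 2);   /* transparently goes to node-B */
    return 0;
}
```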
Hardware has become increasingly good at “protecting” itself. E.g., the idea of parity/ECC on INTERNAL busses would have been unheard of 50 years ago. When geometries keep shrinking and processes keep improving, it makes sense to “spend” some of that to improve yield and reliability.
ISTR an anecdote that some of the earlier “DRAM controllers” reduced the reliability of their memory arrays, due to the complexity baked into the controller (and tight timing requirements)
I’d partially agree. I’m not so sure how innocent I would consider their motives.
I recall having a lengthy discussion with the director of manufacturing about the EPROM set that we were installing in the product. He thought each chip had a specific function – this chip does this while this other chip does that, etc.
The idea that functionality could be spread across “identical” chips (from the purchasing agent’s point of view) was just something he couldn’t wrap his head around. I eventually settled on a book analogy:
- This SET of N chips is the book.
- It contains M chapters – M <> N!
- Each chapter has a particular length (pages are bytes in this analogy)
- Chip #1 is pages 1 through 1000. Chip #2 is pages 1001 through 2000. etc.
- Now, where will chapter 7 lie? Ans: who knows, who cares. As long as it’s in the correct RELATIVE place in this SET of chapters on groups of pages, it will be correct. The next revision may change the lengths of these chapters or their placement in the chips – but, that’s MY job; nothing for you to fret about (except to know that you must treat the entire set as a unit, even though each has a different part number to reflect its unique contents!)
The problem was a total lack of prior experience with this sort of design approach.
[databases]
As I said, this is a new tool for me. And, my approach is likely incredibly inefficient – if your goal was JUST to store data! But, as with all other projects, I treat this as a learning experience to try new concepts and approaches that a “paying client” might shy away from (in terms of risk)
I do know that it dramatically impacts how I think about the data that I store. And which attributes, tables, tablespaces, etc. I define to support it all – me having NO interest in learning how to implement a DBMS.
Symbolic execution allows me to verify that the contract declared (in the expressions provided) is met, in all cases, for a given set of input conditions (also explicitly expressed). A “solver” then tries to find holes in the coverage that these conditions/expressions formalize.
If I declare those expressions incorrectly (by misinterpreting requirements) then all the solver will do is tell me what’s wrong with my WRONG assumptions – it can’t correct them as they represent the formalization of the contract.
E.g., if f(x) is supposed to take an even integer and return the next greater odd integer, the solver will look at the operations being executed and determine if all such input cases are handled correctly. If not, it will identify the cases that fail to meet the stated contract.
Repeat this for every function and every function that invokes each.
It is heavily compute intensive because it “interprets” your code to see what it is actually doing with the marked variables within, tying down the ends to verify the middle always fits. It relies on the compiler to provide an abstract description of each operation that it can then interpret and apply symbolically to the class of data.
Have a look at KLEE for a better idea.
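For a concrete feel, a KLEE-style harness for that f(x) example might look like this (the harness and input bounds are a sketch; klee_make_symbolic and klee_assume are KLEE’s actual intrinsics):

```c
#include <assert.h>
#include <klee/klee.h>

static int next_greater_odd(int x)
{
    return x + 1;                        /* the implementation being checked */
}

int main(void)
{
    int x;
    klee_make_symbolic(&x, sizeof(x), "x");
    klee_assume(x % 2 == 0);             /* stated input condition: x is even */
    klee_assume(x >= 0 && x < 1000000);  /* keep the search bounded */

    int r = next_greater_odd(x);

    assert(r % 2 == 1);                  /* contract: result is odd ...       */
    assert(r > x);                       /* ... and strictly greater than x   */
    return 0;
}
```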
As to MS, I suspect much of their problem is related to trying to address the future and the past on a platform they have little real control over. I can see countless bugs that are undoubtedly related to not checking the return value of some utility function. Or, checking it but not knowing how to handle it – besides ignoring it.
I can routinely crash a W7 box by trying to copy a few terabytes over SMB.
The icons on my desktop magically move (aren’t they supposed to retain their positions? Or, only on alternate weekdays?)
How long will these bugs remain in the codebase? Am I really the only person who has noticed them (Ans: No).
My particular concern is with end users – MY customers. I, as a developer, can recognize and deal with quirks in a toolchain/OS/etc. But, my customers can’t – and shouldn’t have to! So, there is extra pressure to come up with manageable subsystems lest the complexity of the gestalt overwhelm them – and me!
But, that’s what makes it fun! If it was easy, others would already have done it!
I don’t know about exams, but back in my day, we had what we called “The Photocopy Club”. It was a group who would go around and ask other students for “help” with the assignment questions, and then would compile the answers together to pass. They spread it out enough that you’d only get one or another of them asking you for help every few weeks, so you’d never really notice it until much later.
They learned just enough to answer the standard questions, but anything that required them to actually think for themselves killed them.
Wow, I know almost nothing about electronics, and I can answer those–Ohm’s law was taught when I went to the Red Cross with my dad and brother to study for our ham licenses when I was 13.
These questions serve as gatekeeper questions, and are broad enough that someone who is the real deal in the field will laugh at them and instantly respond.
When I was a young machinist looking for work, I did a similar test at one potential employer. The questions were mostly math related, but there was one that showed an image of a micrometer and asked “What is the reading on this micrometer?”
I would say that for anyone who claims to be a machinist, they had better be able to read a micrometer–if not, then they shouldn’t waste anyone’s time. It’s not complicated, but involves reading two or three different scales on the device (they didn’t include the Vernier scale, the most precise and fiddly bit). Any machinist worth hiring would know the answer instantly, and I’m sure that most of you here could read a micrometer even without formal training.
As I remarked back as an undergraduate – “there are EEs I wouldn’t trust with a toaster.”
<Jeff Foxworthy’s voice> If it’s a surprise to you that people can obtain a college degree without having any practical job skills then you must be a college graduate.</Jeff Foxworthy’s voice>