How do the other engineering professions view software engineering?

By CE I will assume that you mean Computer Engineering (which Pitt refers to as CoE so as not to confuse it with CivE and ChemE) and attempt to answer your question.

Whereas EEs often deal with large (physically big) circuits, including those that run industrial motors, and with numerous analog systems, a CoE deals more often with smaller (physically) circuits and almost entirely digital systems these days.

We design the digital logic that things like microprocessors use in order to execute instructions: the pipelines, the caches, the memory modules, the flow of ones and zeros through the ALU (which we also design) into registers and such. Then we deal with the placement of the microscopic transistors that make all this stuff become a physical chip that you can hold in your hand.
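If that sounds abstract, here’s a toy sketch in Python of the kind of datapath I mean. The opcodes, bit width, and register count are all made up for illustration; the real thing is written in an HDL and ends up as transistors, not software:

```python
# Toy model of a 4-bit datapath: a combinational ALU feeding a register
# file. Purely illustrative -- real designs are written in an HDL and
# synthesized down to transistor-level cells.

MASK = 0xF  # 4-bit datapath

def alu(op: str, a: int, b: int) -> int:
    """Combinational ALU: ones and zeros in, ones and zeros out."""
    if op == "ADD":
        return (a + b) & MASK
    if op == "AND":
        return a & b
    if op == "XOR":
        return a ^ b
    raise ValueError(f"unknown opcode {op!r}")

registers = [0] * 8  # eight 4-bit registers

def execute(op: str, dst: int, src1: int, src2: int) -> None:
    """One 'instruction': route two register values through the ALU."""
    registers[dst] = alu(op, registers[src1], registers[src2])

registers[1], registers[2] = 0b1010, 0b0110
execute("ADD", 0, 1, 2)
print(bin(registers[0]))  # 0b0 -- 1010 + 0110 overflows the 4-bit width
```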

In some cases there are pre-fab parts that we work with in order to create a new device, such as taking existing things like processors and flash memory and turning them into a portable device that plays mp3 CDs. It’s not really much of a Lego method, since there’s research to do, plans to draw, changes to make, testing that has to occur, measurements to be taken, etc. before the prototype ever really happens.

Granted, there is quite a bit of overlap between EE and CoE, at least in the early stages of becoming one, until you get to the point of specializations. EEs did not have to deal with designing a CPU chip, and CoEs didn’t learn to work with ladder logic diagrams or high-voltage systems. We pack it into a quarter-inch square and run it with +3.3 VDC; they make it fill a factory and run on 480 VAC.

The math requirements were the same, the physics requirements were the same, we took the same circuits courses, and our departments shared the third floor of Benedum Hall. The majors were so similar that the College of Engineering at U Pitt would not allow an EE to take a CoE minor, or a CoE to take an EE minor, because technically you could’ve had one anyway with the classes that were already dual-CRN (meaning they had a registration number in each of two departments and could be referred to as, say, EE1185 or CoE1185).

So they’re the same, but different.

That’s almost the complete opposite of my experience as a software engineer/programmer/whatever. By far, most places I’ve worked have gone with writing their own libraries as opposed to buying or using open source.

Heck, I even had to talk one place into using SQL Server over writing their own DB.

I personally hate relying on other people’s APIs unless it’s something completely gadgety, i.e., a database or GUI controls. With anything else, I’ve been burned the few times I’ve tried them. They’re never exactly right, and you spend as much time getting them to work or customizing them as writing them from scratch would take.

I’m right on the borderline of Software Engineering and Computer Engineering. My PhD is in Computer Architecture, but my dissertation consisted of a compiler for microcode. I’ve written and managed the writing of Electronic Design Automation tools, and then used the suckers in a processor design group.

First, don’t knock software engineering. I was in grad school during the SE revolution (Dijkstra’s letter, Wirth’s paper, etc.) and it made a big difference. When I was an undergrad we were not taught decent programming techniques; by the time I was a TA we were ramming them down the throats of our students.
A lot of hardware design today is writing Verilog, which looks a lot like software. Sure, you have to get the timing right, but you don’t have to worry about GUIs. Hardware designers use libraries too - no one designs individual transistors, except for the most timing-intensive custom logic (the hardware equivalent of assembly code); everyone else uses cell libraries customized for a process. I’m not considering analog, but volts scare me - give me 1s and 0s any day.

One thing I’ve noticed is that software people think hardware is scary, and hardware people think software is scary. Having good software skills in a mostly hardware group is excellent - you can be the hero with a fairly simple Perl script.
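For a flavor of what that hero script looks like, here’s a sketch in Python rather than Perl; the log format and the PASS/FAIL convention are invented for the example:

```python
# Hypothetical "hero script": summarize pass/fail counts from a
# simulation log. The log format here is invented for illustration;
# expects lines like "TEST cache_fill ... PASS" or "... FAIL".
import re
import sys
from collections import Counter

pattern = re.compile(r"^TEST\s+(\S+).*\b(PASS|FAIL)\b")
counts = Counter()

with open(sys.argv[1]) as log:
    for line in log:
        m = pattern.search(line)
        if m:
            counts[m.group(2)] += 1
            if m.group(2) == "FAIL":
                print("failed:", m.group(1))

print(f"{counts['PASS']} passed, {counts['FAIL']} failed")
```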

Oh, and no interesting piece of software can be described by equations any more usefully than it can be modeled by a Turing machine. I thought software verification was effectively dead. If every programmer were as smart as Dijkstra there might be a chance, but the classic DeMillo, Lipton, and Perlis paper showing that most proofs of trivial programs in the literature had bugs finished off that line as a viable way of demonstrating the correctness of software. Not that hardware is any better - except for very small chunks, or for showing two versions of hardware are equivalent, throwing pseudo-random vectors at simulation models seems the best way of doing verification these days.
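A minimal sketch of what throwing pseudo-random vectors looks like, assuming a trusted golden model and a design under test (both are toy stand-ins here):

```python
# Verification by pseudo-random vectors: drive two models of the same
# unit with random inputs and compare outputs. The two adders below
# stand in for a golden model and a new design.
import random

def golden_adder(a: int, b: int) -> int:
    return (a + b) & 0xFFFF          # trusted 16-bit reference

def new_adder(a: int, b: int) -> int:
    return (a + b) & 0xFFFF          # design under test (here: identical)

random.seed(42)                      # reproducible "pseudo-random" vectors
for _ in range(100_000):
    a, b = random.getrandbits(16), random.getrandbits(16)
    assert golden_adder(a, b) == new_adder(a, b), f"mismatch on {a}, {b}"
print("100,000 vectors passed -- evidence of equivalence, not a proof")
```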

It’s probably already been said in this explosively growing thread, but I always understood the title “software engineer” to refer to someone who can really design an application from the ground up: gather the requirements, decide what kind of languages or other utilities to use for the various components, and be able to cost, plan, and manage the project, whether actually managing other people or working alone. I’ve met excellent ones, and I’d have a problem with any CE or EE who looked down on them; I’m sure there are also some who are incompetent and deserve to be looked down on by anyone.

As for the title, we’ve had a lot of discussions here about who should be able to call themselves an engineer, and during the last go-around, I came around to the view that the more restrictive use was correct and software engineers should go find themselves another title, such as software designer, builder, or architect.

As for me, this fall I plan (admissions officers willing) to start an online master’s degree in software engineering, hoping to make some sense out of the hodgepodge of computer knowledge and experience I’ve acquired over the years. Don’t blame me…that’s what the university calls the program; it’s actually an MS in software engineering. At least it’s being taught by their school of engineering and computer science, so that will hopefully inform the content of the courses.

I grabbed onto BobLibDem’s post because of the mention of physics and calculus. IMO there is practically no need for those disciplines in the development of 99.99% of the software out there, and it therefore doesn’t make sense to try to make software development into a true engineering discipline.

I haven’t used calculus once in the 35 years since I took it. I could see its utility if you are working on a project for which the knowledge base requires that sort of thing, but it’s rare.

This isn’t strictly true, at least with regard to the word ‘science’. The older meaning of the word is broader and simply means learning in any discipline, as the Latin verb ‘scire’ means ‘to know’. Sort of like our SDSAB, not all of whom are by any means scientists by the modern, narrower definition.

CE=Civil Engineer, not Computer Engineer.

About the same way they view Sanitation Engineers? :dubious:

I teach software engineering, and I will say that about half of what passes as “software engineering” in the textbooks is little more than a collection of management techniques. It may be valuable on large team projects, but it is management, not engineering. Of the remaining subject material, about half of that is “best practices” backed up largely by anecdotal evidence. Again, that doesn’t mean it’s useless, but it’s not engineering (and a lot of what other engineers do on a day-to-day basis falls into this category). The remaining 25% may qualify as “engineering” in the sense of practices extrapolated from a mathematical/scientific understanding of the field. And I think that engineers from other fields would, when introduced to those practices, feel comfortable with them as a kind of engineering.

At another level, though, engineering is a state of mind, a way of approaching problem solving that is very much at odds with the undisciplined hack-your-way-through-it approach taken by some programmers. When I talk to engineers in the traditional engineering disciplines, it’s clear that there is a certain shared attitude and approach to tackling design and analysis.

But there is one overwhelming reason why most members of the traditional engineering fields will never believe that software engineers are “real” engineers - real engineers are certified by their professional organizations. Software engineering certificates do exist, but are not widely recognized as valid and most software engineers don’t bother.

I’m an EE dropout* and a Professional Computer Programmer. Calling me a Software “Engineer” would be insulting to real engineers. I have had that title in a past job.
My wife has a bachelor’s in ME and a master’s in Computer Science. She is a Software Engineer and worthy of the name. She has been trained in system design, compiler design, and programming. Of course she was already an ME, so calling her an “Engineer” is very appropriate, and AT&T does so.
So I guess I am saying it can go either way, some Software Engineers are “True Engineers” and most of us are not.

Jim

  • I made the decision when faced with either 4 more years of taking evening classes or a jump into a much simpler Computer Programming course.

I always considered Computer Engineering to be a specialized EE. When I started off in college it was considered tougher than EE and indeed I was told that you should probably expect to put in 5 years to get your Computer Engineering Degree.

Jim

It’s hard to say whether at Pitt the CoEs or the EEs had it harder. CoE was widely regarded as being harder than CS.

Pretty much anybody whose department was located in Benedum Hall was thought to be completely nuts and there were very few people trying to get in compared to how many were flunking or dropping out.

Prove World of Warcraft.

I mean, certainly you can prove the math that you are using, and you can reduce the program to some sort of arrows-and-boxes representation, and certainly that will help in building a product that has fewer bugs. But still there is no way to reduce it such that your UML representation is going to let you know where you’re going to get a pointer error, where your scripting language opens up a potential security flaw, etc.

In engineering you know what issues you have to deal with: gravity, wind, wear, and earthquakes, for instance. With a database of building materials and their statistics, you can create a representation of a building or bridge and verify it through simulation.

If you’ve got a program you want to verify, just running it still doesn’t help, because the number of potential errors is linear in the number of lines of code.

Engineering works because for any building material you have set stats (tensile strength, elasticity, weight, etc.) that you can trust to within a certain range. Thus you can design at a high level and know that when the design is made real, it will be as reliable as it was on paper. With programming, you can’t simulate your UML (doing so amounts to building the application itself), and you can’t trust any box or arrow to actually work as described, since each is handmade and reliable only to the extent that it has been tested, and to the extent that your understanding of it matches what really happens inside it.

Though I would recommend the game Bridge Construction Set to any programmer.

Probably another reason is that in the early days of programming, most programmers likely were actual engineers, if not mathematicians, because the primitive state of programming languages then was such that only someone who was at ease in the intricate world of complex mathematics could make any headway. They spoke of a software project just as they would a physical engineering project and approached it on those terms. Higher-level languages have since taken most of the difficulty out of actually instructing a computer to do things, but the other aspects of project development, inherited from the engineering disciplines, remain. To the extent that such methodology is effective in software development, I think you could say it overlaps with actual engineering as a profession, but I personally would not choose to refer to myself as an SE.

Notwithstanding that, it’s been pointed out that in many instances, your employer decides what to call you, and there are probably many folks whose business cards say “Software Engineer” or some other title containing the word “Engineer”, but would personally agree with me.

You say that as if it were obvious that it can’t be done. Actually, it could be done, but it almost certainly would not be a cost-effective thing to do.

You are confusing multiple phases of design and development here. You could, at the UML level, prove that your design met a previously written specification.

You could later prove that your implementation satisfied the design. That would, contrary to your assertion, tell you that you were not going to get pointer errors. In fact, there’s an interesting feedback process. If you knew that you were going to need to prove the correctness of the implementation, you would probably choose to implement in a fashion that aided in the proof, keeping things simpler and cleaner in the first place.
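To make that feedback loop concrete, here’s an illustration of my own (not anything from this thread) of implementing with the proof in mind: the loop invariant is stated up front and doubles as an executable check:

```python
# Stating the loop invariant explicitly makes the code both easier to
# prove and easier to get right in the first place. Binary search is
# chosen purely as a familiar illustration.

def binary_search(xs: list[int], target: int) -> int:
    """Return an index of target in sorted xs, or -1 if absent."""
    lo, hi = 0, len(xs)              # invariant: target, if present,
    while lo < hi:                   # lies within xs[lo:hi]
        assert 0 <= lo <= hi <= len(xs)
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1             # invariant preserved: xs[mid] < target
        elif xs[mid] > target:
            hi = mid                 # invariant preserved: xs[mid] > target
        else:
            return mid
    return -1                        # invariant + empty range => absent

assert binary_search([1, 3, 5, 8], 5) == 2
assert binary_search([1, 3, 5, 8], 4) == -1
```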

And as for security flaws, it’s a tenet of computer security that security can only be demonstrated via analysis and proof - you can’t “test” for security.
These things are not impossible at all, but they are, IMO, impractical for most projects. A game is a good example of that - the cost of a software failure just isn’t high enough to justify the cost of a massive proof of correctness. Now, if we want to talk about the control software for cancer-treatment radiation dosing machines, …
The fact that we have to choose the techniques that are cost-effective, rather than just the “best” known possible technique, is part of what makes this an engineering activity rather than a purely scientific one. Engineers have to consider cost-effectiveness in any product or process they design.

This is, I have to say, a very narrow and naive view of real engineering. The truth is that some engineers have to design large complex systems out of disparate components, and such “systems engineering” is often faced with many of the same limitations on what can be known or proven as we face in software engineering. Bridges are, by comparison, relatively simple constructs. (Bridges are to traditional engineers what compilers are to software engineers - a limited application domain that has been so extensively studied that an entire mini-discipline has been developed around it, an application domain that is far better understood and can therefore be approached with far more confidence than almost anything else in the field.)

Oh, I wish the number of potential errors was actually that small. It would make our job so much easier.

It’s not written in a language that is amenable to proof.

I think you hit on something here. When I was an undergrad everyone in an engineering program had to learn to write code regardless of the choice of major. Writing code was seen as a necessary skill for the engineer. Just one of many tools used to approach a problem.

To specialize in a particular skill to the extent that it becomes a separate branch of engineering seems counter-intuitive to some. Imagine someone were to claim to be an AE (Algebra Engineer). You’d look at them like they were nuts.

I’m not saying that Software Engineer isn’t a valid title, just that it seems odd to some people.

Add to that the fact that a lot of engineers are a little irked already by the perception that the title is getting watered down by every Tom, Dick and Harry who claims to be a WhateverYouWantToCallIt-ENGINEER just because they don’t like the term “janitor” or “plumber” etc. and you have the makings of a sore spot for some folks.

I think it goes beyond cost-effectiveness. Do you know of techniques to prove assertions about thousands of asynchronous events? With an input and state space that is huge? I know of work in these areas that concerns small, well-contained examples; I don’t know of anything practical for such a large and complex program. A proof that would take the computing capacity of the US and run for thousands of years can be said to be impractical at least.
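Back-of-the-envelope, with invented but deliberately generous numbers, of why I say that:

```python
# Why exhaustive proof over a real state space is hopeless. All figures
# are invented for illustration and err on the generous side.
state_bits = 200                     # a tiny program's worth of state
states = 2 ** state_bits             # ~1.6e60 reachable states
checks_per_second = 1e18             # an exaflop of checks, wildly generous
seconds_per_year = 3.15e7

years = states / checks_per_second / seconds_per_year
print(f"{states:.2e} states -> {years:.2e} years to enumerate")
# ~5.1e+34 years, against a universe age of ~1.4e+10 years
```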

This is hardly enough. Specifications are known to be buggy. The specification and design have to be in sync. As a counterexample, techniques to prove that a high-level description and a circuit description are equivalent are well known and are commonly used in microprocessor design; yet no processor gets released without known and unknown bugs. An architecture may not be complete and consistent, and the high-level implementation of the architecture may not be correct. There may also be physical problems not modeled in the circuit description. And, having done both, I can say that processor designs are much, much simpler than large software designs.

Of course you can test for security. You can’t verify security through testing, but a tested system is more secure than an untested one.
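A crude illustration of that point; the toy parser and its planted bug are invented, and note that passing such a test would demonstrate nothing:

```python
# Crude fuzz test: random inputs can expose a crash-prone parser, but
# surviving the fuzzing proves nothing about security. The parser and
# its flaw are invented for illustration.
import random
import string

def parse_header(data: str) -> dict:
    """Toy parser with a planted flaw: blows up when the value is empty."""
    key, _, value = data.partition(":")
    return {key.strip(): value.strip()[0]}   # bug: IndexError if no value

random.seed(0)
for _ in range(1000):
    fuzz = "".join(random.choices(string.printable, k=random.randint(0, 20)))
    try:
        parse_header(fuzz)
    except IndexError:
        print(f"crash on input {fuzz!r}")    # found by testing
        break
```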

I don’t disagree - you simply strengthen my point that an essential quality of engineering is selecting cost-effective techniques from among the scientifically and mathematically possible.

The challenge to which I responded and the comments that followed upon it suggested that a mathematical proof was impossible. I disagree. It’s not impossible, merely impractical. But recognizing such an impracticality and moving on anyway is all part and parcel of what engineers have to do.

(For that matter, sometimes they have to recognize the impossible and move on anyway. As the White Queen said, “Why, sometimes I’ve believed as many as six impossible things before breakfast.” Every time we run a test case, we have to face the impossibility of knowing whether the program is going to halt and print an answer. The mathematical impossibility never stops us from actually running the tests!)
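Concretely, the way every test harness copes with that impossibility is the humble timeout. A sketch, with the infinite loop standing in for the program under test:

```python
# We can't decide in advance whether the program under test will halt,
# so in practice the harness just punts with a timeout.
import subprocess

try:
    result = subprocess.run(
        ["python3", "-c", "while True: pass"],   # stand-in that never halts
        capture_output=True,
        timeout=2,                               # our answer to undecidability
    )
    print("halted with exit code", result.returncode)
except subprocess.TimeoutExpired:
    print("gave up after 2s -- verdict: probably doesn't halt")
```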

Ah, well that’s a different issue. “Be careful what you wish for - you might get it.” I agree that I did not address the question of whether a proof was actually desirable, because that question seemed secondary to the original argument.

But here you will not get an argument from me. In my “real life”, I have tracked projects that used formal specifications and compared the number of software patches that were later issued. In many instances, the major reasons for after-release patches were changes to the original requirements or to the operating environment, bugs in the compiler and/or support libraries, and defects in the formal spec that were faithfully and accurately translated into the eventual implementation. None of these would have been avoided by proof of correctness.

On the other hand, we also found the total number of patches to be smaller than what we would have expected from projects of the same size that were not developed using formal specifications.

Our conclusion was that writing formal specs was a good idea because, for a relatively small initial investment, the total code reliability at release time was increased, but that the additional benefit that would have resulted from performing the proofs was negligible.

Sounds like you and I have rather similar backgrounds. I’ve worked in software V&V for a couple of decades, and have seen some of my work adapted by others to validation of circuit designs. I suspect that you and I are actually in violent agreement on these issues.

Well, maybe not in agreement on this one. Testing for security flaws tends to be exactly the kind of thing testing is not good for - trying to draw a conclusion that a thing is absent because we have not seen it yet. Almost all work on measuring the quality of a test set and on designing good ones begins with an assumption that the defects are not being inserted or exploited maliciously, and that’s not a viable assumption in the security area.

Still, I can see your point. Like my argument that proofs are possible but not cost-effective, I suppose one could argue that testing for security is not completely worthless, though it’s probably not effective in practice.

So?

Well, my first response is that, as an engineer and computer whatever, I’d say the proof would be so impractical as to be indistinguishable from impossible. But whether such a proof is possible depends on the boundary conditions you would require. What operating system environment is assumed, and is it stable? What latency between commands being dispatched and their reception? In a very well-defined testbench environment, it might be theoretically possible to prove that such a program was correct, but I’m not sure the theory even exists to do it in a real environment.

Writing formal specs is an excellent idea, not the least because it forces one to express the internal specs one has, and requires expressing assumptions that might not be realistic.

I think Dijkstra said that testing never proves the absence of a bug, only its presence, which was why I was careful to say that testing can never verify security. Given human fallibility and the number of assumptions one has to make to get verification to work, I’d always want to test even a completely verified system. Just to be safe.