Are incredibly complicated pieces of software built from lines-of-code up?

It happens. On more than one occasion, I have been present for - and in one case even written - the "int main(void) {}" that turned into a multi-million-line program. Sure, commercial libraries were used for some functions later on, when we found out we didn’t have the programmers to do the work, but in some cases every single line, every graphics library, every database call, everything was done from scratch.

I do some Word macro work unofficially at work. It’s not part of my job description, but it makes things vastly easier and more efficient for the proofing squad. (I know the real coders will laugh at me for bringing Visual Basic macros into a coding thread, but you work with what you’ve got. :smiley: )

For a long time, I had a setup where, for example, I’d have three macros that each had one instruction unique to the macro and three instructions common to all three. Recently I rewrote the macros so that those three did their unique instruction, then called a fourth macro that contained the three common instructions. This is ‘better’ code because:

  1. It uses a total of 9 instructions instead of 12, making things more efficient,
  2. It’s more readable,
  3. If I add a fourth common instruction, I only have to add it once instead of three times, but all three macros will get the benefit.

Both styles do the same thing, but one is more difficult to work with. And I’m only talking about a few instructions. Real code will have orders of magnitude more than that, and a 25% reduction can mean big savings.
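
Since it’s the idea that matters and not the language, here’s a quick sketch of that same refactoring in JavaScript rather than VBA (the step names are made up for illustration):

```javascript
// Hypothetical sketch of the macro refactoring: each "macro" keeps its one
// unique step, and the three common steps live in a single shared function.
function commonSteps(doc) {
  doc.push("fix-quotes");     // common step 1
  doc.push("fix-dashes");     // common step 2
  doc.push("update-fields");  // common step 3
}

function macroA(doc) { doc.push("unique-A"); commonSteps(doc); }
function macroB(doc) { doc.push("unique-B"); commonSteps(doc); }
function macroC(doc) { doc.push("unique-C"); commonSteps(doc); }
```

Adding a fourth common step now means editing commonSteps once, and all three macros pick it up automatically.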

'Scuze me? FCP runs on Mac, not Windows. I’m told during development it was once portable to other systems, but I think at this point that pretense has been dropped, and it is pure Mac. I do know that FCP is heavily dependent on QuickTime for most, perhaps all, of its video and audio I/O (although Core Graphics is probably used for some still image I/O, and perhaps Core Audio for some audio file formats).

Yeah, I once had a conversation with this guy – he just absolutely couldn’t understand why calling Windows’ SetPixel() a zillion times to draw into a buffer would be slow, e.g. calling SetPixel 345600 times to fill a 720x480 buffer should be as fast as a C loop or SIMD code, right? Right? :smack:
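
You can see the shape of the problem even without the real Win32 API. This is just an illustrative sketch - a stand-in setter, not GDI’s actual SetPixel, which is far worse because every call crosses into the graphics subsystem:

```javascript
// Illustrative only: a SetPixel-style call repeats bounds checking and
// address arithmetic for every single pixel, while the "fast" path fills
// the same buffer in one tight pass.
const W = 720, H = 480;

function setPixel(buffer, x, y, color) {
  if (x < 0 || x >= W || y < 0 || y >= H) return; // per-call bounds check
  buffer[y * W + x] = color;                      // per-call address math
}

// Slow: one function call (plus checks) per pixel -- 345,600 calls.
const buf = new Uint32Array(W * H);
for (let y = 0; y < H; y++)
  for (let x = 0; x < W; x++)
    setPixel(buf, x, y, 0xff0000);

// Fast: same result, one pass, no per-pixel call overhead.
const buf2 = new Uint32Array(W * H);
buf2.fill(0xff0000);
```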

Re: the OP.

Yes, a lot of software is done from the line level straight up, but let’s define some terms. ALL code is ultimately ‘lines of text’ in some fashion, or we’d all be entering octal bits into an array of switches. The guys that make CPUs have some sort of text representation of the tiny programs that run in the microcode. Someone comes along and writes an OS on top of that. Someone comes along and writes drivers on top of that. Someone comes along and writes a C++ (for example) environment on top of that. Someone comes along and uses the development environment to write applications. It’s turtles all the way down.

I work in a large software company, and there are certainly layers of things that application developers for this company use to get work done more quickly and consistently. A basic tool set for strings, another for graphics, more for other low-level tasks. On top of that a windowing system abstraction, all cross-platform from Mac to Windows. On top of that, libraries that handle various things that several applications in the same company need – some I/O libraries to read & write documents from/to several different formats generated by each app, so all the apps can read all the other apps’ documents. Then there are a set of libraries that all the apps use to communicate “live” to each other when they’re all running. On top of all that, and a bunch of other shared pieces, a specific app team writes code leveraging all those things.

Turtles, all the way down.

Hell yes. I’ve seen some real crap code in my time. I was able to rewrite about 100 lines of buggy code (for a very simple application) into 10. Barry Boehm, a very famous software engineering guru, did a study showing a 100-to-1 variation in programming skill. The mantra was that it was worth it to pay big bucks for the best programmers.

I don’t think anyone has mentioned open source code, another way of doing software reuse. I was doing some visualization stuff, involving drawing graphs, and I found a nice Perl package which did all the graph drawing I needed and let me concentrate on my own, customized part.

The plus is saving time and improving quality. The minus is that to draw other graphs, not supported, would be a lot of work. Reuse is fine, but you’re stuck with what the package can do.

CISC machines, like x86 architectures, still use microcode. A bit. RISC machines, like SPARCs and MIPS, don’t use microcode.
Even the CISC microcode isn’t what it used to be. With cheaper transistors, a lot of stuff that used to be done in microcode is done in hardware today. It’s much faster.

Microcode is basically done at the assembly language level. 30 years ago there was a lot of work on higher level microprogramming (I created and implemented an object oriented microprogramming language for my dissertation) but it never caught on. Microprogramming as a field is pretty much dead. The old microprogramming workshop still lives, but it is now a microarchitecture workshop.

There is a great chapter, by Grant Martin, on why the things we wanted microcode to do didn’t work. This is in a book on ways of customizing processors in Systems on a Chip which I reviewed a few months ago.

Sorry for the hijack; I enjoyed microprogramming, but I was glad I decided to move on to something else before the entire area crashed and burned with the end of the minicomputer.

Object-oriented programming needn’t involve classes, which are just one way of managing types and behavior reuse. For example, see JavaScript or any other prototype-based object-oriented language.
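
A minimal sketch of what that looks like in JavaScript - objects delegating directly to other objects through their prototype, with no class declarations anywhere:

```javascript
// Class-free OO: behavior is reused by prototype delegation, not by class.
const animal = {
  describe() { return this.name + " says " + this.sound; }
};

// rabbit inherits describe() directly from the animal object.
const rabbit = Object.create(animal);
rabbit.name = "rabbit";
rabbit.sound = "squeak";

// A further object can specialize rabbit the same way, no class needed.
const babyRabbit = Object.create(rabbit);
babyRabbit.name = "baby rabbit";
```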

ETA: I was under the assumption that FinalCut runs on Windows (as it is not specified in the OP). But there are cases of dual-platform software like Photoshop and Premiere, which will be addressed in the remainder of the post.

ETA2: All right, squeegee is part of the programming profession too, so just take the rest of my reply as ‘informational’ instead of as a ‘reply’.

That’s the wonder of using the ‘presentation code’ (or in programmer’s speak, the windowing API) of the OS. Software that runs dual-platform (for example, Photoshop) usually uses the widget library of the OS it is developed for, but its own specialised algorithms, etc., are written in its own code.

It may take some fooling around to get your code to run dual-platform, but it is possible, as long as you adhere to standard C++ (glares at Microsoft).

So for Photoshop, you can (I don’t work at Adobe, so I cannot be 100% sure) write the specialised code in C++, then pass it to another group who will do the individual port for each OS, using that OS’s windowing library and its compiler to produce the program. (On another note, it’s Visual Studio that doesn’t adhere 100% to standard C++. Software shops could use other development IDEs or compilers.)

Because regardless of OS, there is a core body of code that is specialised to the application. Once you’ve got that done, you can concentrate on porting to other OSes, using whatever libraries they provide for audio, graphics, file I/O, etc. If a company is intent on supporting more than one platform, they could even write their own ‘interface’ which deals with the intricacies of each OS, throw it at their programmers, and say, “Just use the functions from this interfacing API. It takes care of all the OS differences.”
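
A toy sketch of such an interfacing API - everything here (the backend names, drawLine) is made up for illustration, but the shape is the point: each platform gets its own backend implementing the same small set of functions, and application code only ever sees the neutral interface:

```javascript
// Hypothetical per-platform backends behind one neutral interface.
const backends = {
  win32: { drawLine: (x1, y1, x2, y2) => `GDI line ${x1},${y1}-${x2},${y2}` },
  cocoa: { drawLine: (x1, y1, x2, y2) => `Quartz line ${x1},${y1}-${x2},${y2}` },
};

function makeUI(platform) {
  const backend = backends[platform];
  // Application code calls this object; it never touches the OS directly.
  return {
    drawLine: (x1, y1, x2, y2) => backend.drawLine(x1, y1, x2, y2),
  };
}
```

The application team writes to makeUI’s interface once; only the backend table knows which OS it is running on.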

If I’m understanding you correctly, you can do this, but it’s not a good way to maintain an ongoing product. If you port an app to a platform once, you must do it again, and again, and again, as the product is updated, or simply as bugs are found. It becomes a huge PITA to maintain. OTOH, you’re quite correct that there would be a huge amount of code that has nothing to do with the UI and would be portable as-is.

This is the way to go – write a cross-platform UI API once (probably several levels deep – a top level that does windows and widgets and floating palettes, a lower-level one for general UI drawing (“I need a red line from here to here”), plus one for events (“the user clicked here! A timer went off!”), that sort of thing). Then you have the application team write to that API, once, and all the fixes (and all the bugs :slight_smile: ) are cross-platform immediately. Then for the next release you just add more stuff to the pile, rather than have a team whose sole purpose is to wade through this stuff over and over again on every release (hell, there are a couple of hundred internal releases to QA folks during development, which would be damn near impossible to do as a piecemeal port-a-thon).

Agreed, this can be maddening, when you take code from VS to gcc on Mac, and find all the little things that were close but just off a bit, and you have to fix it in gcc, take it back to VS (where it generally compiles fine with the gcc changes), then finally check in the code change (so you never check in broken code on either platform). OTOH, after some adjustment, it’s mostly manageable, and you find out which constructs each compiler hates and either avoid them or be particularly careful about them when you need them.

Sorry, one more thing:

Actually, Photoshop isn’t the best example here. If you look at the interface, it uses a number of widgets that definitely are not provided by the OS (Mac or Win) – floating, tabbed palettes that dock to each other would be an obvious example, but there are more than a few other things. It generally takes some time and hand-wringing for stuff like this to enter the OS vendor’s UI APIs, if it ever does. MS and OS X have a concept of floating windows in an application layer, but that took a while to gel and get into the OSes, and there’s quite a bit more specialized widgetry that Photoshop does with palettes. MS has a way to do tabs in dialogs (I don’t recall if OS X does), but it’s definitely their own oddball implementation. And if you look at the UIs of newer things from Adobe, say Lightroom, they seem to be going the opposite way from what MS and Apple are doing: very dark UI with little color, hideable pop-out tools, large transient floating text labels – where Apple and especially MS are going for a bright, happy, cartoony look (which I find quite annoying).

Thanks for the info; I don’t do much Windows application development, so I thought all those came with the standard Windows SDK.

I think Adobe is going the way of some developers who said “screw OS compliance - I am coming up with my own style!” by rolling their own UI interfaces.

Photoshop and Illustrator actually did something early on that most commenters here would find unusual: they wrote the code to one platform’s API and then did an in-house layer to implement that API on the other platform. Photoshop was written using MacApp, an object-oriented windowing library for the Mac, and MacApp itself was effectively ported to Windows to make it run. Same for Illustrator, except they didn’t use MacApp – large sections of the Mac “Toolbox” were duplicated on top of Windows. Later, Adobe moved towards their own internal cross-platform UI toolkit(s) that the applications (or at least large parts of them) used. The first such toolkit made a pretty good effort to make the UI widgets feel windowsy on Windows and macish on the Mac, but they were still mostly custom widgets, not the widgets provided by the OS. It seems like lately they’ve mostly abandoned trying to make their UIs fit in with the platform they run on.

This is probably the most important point. To make an analogy (which may sound like the silly old computer-as-car analogy, but bear with me, it’s actually pretty apt here), when you think technically about a car, you don’t think of every nut and bolt on it, but the thing is made of little things like nuts and bolts and raw hunks of metal. Rather than describing the individual pieces of your transmission that might be shot, you’ll think abstractly and say “my transmission needs fixing.” You divide the problem up into logical units that can be worked on independently without having to keep every detail of the whole system in your head at once, even if you happen to know how all the systems work. And sometimes, they interact in ways that cause problems that force you to think of more than one of them at once, but rarely the whole system. But they’re still all just lines of code.

Also, there have been a few allusions to programs mostly being composed of libraries that other people write, with a little code sprinkled in between as “glue”. To me, this discounts the role of the code you sprinkle in between. It’s more like, when you write a program, you use libraries to solve problems that countless others before you have already solved (e.g. how to measure the length of a string, how to draw text in a particular font, how to uncompress data), and you spend effort writing code that’s unique to your particular problem. It’s not like a jigsaw puzzle where if you get the pieces all in the right order you get the Mona Lisa. You’ve got to paint the interesting parts yourself.

>Object-oriented programming needn’t involve classes

I think I might better have said that the objects around which OOP is oriented are program entities typically able to encapsulate both variables and functions and to offer specific limited connections to the rest of the program they’re in.
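
A tiny sketch of that description in JavaScript, without any class at all - an object bundling a variable with functions and exposing only a limited connection to the rest of the program:

```javascript
// The counter variable is encapsulated in a closure; the returned object's
// two functions are the only connection the rest of the program gets.
function makeCounter() {
  let count = 0; // encapsulated state, unreachable from outside
  return {
    increment() { count += 1; },
    value() { return count; },
  };
}
```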

You might not even have to write this interface layer yourself. There are some groups working on general-purpose cross-platform interfaces, some of which are available for some semblance of free. They still don’t cover everything, so it’s likely that you’ll still have to implement particular features yourself, but they’ll take you a long way there.

This is precisely true. I guess we have someone here who did or does work for Adobe, at least in the early to mid '90s. Or maybe knows people who did. I have a feeling we’ve met; send me a private message if you can, I’d love to know. (FYI, I will be w/o internet starting mid-morning Weds through Saturday mid-day (family trip), but will check back with you.)