Software ICs (Software Engineering)

Way back when (late '80s to early '90s), I heard about a concept called Software ICs, which was supposedly what the discipline of Software Engineering hoped to produce. The idea was borrowed from other engineering disciplines, specifically EE in this case, where systems are built bottom-up from well-understood, well-tested parts, similar to commodity integrated circuits. Is this still a thing in Software Development? When I googled it, I found that the term was coined by the makers of Objective-C to describe a unit of software packaged for sale.

Thanks,
Rob

I’ve never heard of “Software ICs,” but building parts out of reusable functional blocks (“IP”) has been around for ages. It’s how Apple tailors the ARM chips it uses to meet its specific requirements.

I’m currently working with a really cool device which has an ARM Cortex M3 core and a sub-gigahertz radio from a different device glommed onto it.

Object-oriented design would seem to be the way this is done today. The reason hardware designs have some good qualities is that IC1 can’t look inside IC2 but must communicate through well-defined ports. Object-oriented design attempts to keep module 1 from looking inside or depending on module 2.
Hardware ICs are hardly as stable as this model assumes, but that is another matter.
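
To make the analogy concrete, here’s a minimal Python sketch (the class and the sensor logic are invented for illustration, not taken from any real product) of one module hiding its internals behind a well-defined interface, the way an IC exposes only its pins:

[code]
class Thermometer:
    """Module 2: the rest of the program talks to it only through read_celsius()."""

    def __init__(self, raw_offset=-40):
        self._raw_offset = raw_offset   # internal detail, like the guts of an IC

    def _read_raw(self):
        # Pretend this polls a sensor register; callers never see this step.
        return 65

    def read_celsius(self):
        """The one 'pin' other modules are allowed to connect to."""
        return self._read_raw() + self._raw_offset


# Module 1 uses only the published interface; it never reaches inside.
print(Thermometer().read_celsius())   # prints 25
[/code]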

Units of software packaged up sounds like what I would call a library (if distributed as a binary) or an API (if accessed remotely), these days. They’re intended to be used from within some other software, so they don’t include things like user interfaces, just the raw underlying logic.

The API (Application Programming Interface) is the documented interface for using the software, and when you call the API functions, you get back the result of whatever the software does. This might be distributed to run on a local machine as a library, or there might be an online interface where you post your function arguments to a web server and the server returns the results.
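
As a rough illustration of that difference in Python (the hashing example is arbitrary, and the URL below is a placeholder, not a real service):

[code]
# Local library: the code runs on your own machine.
import hashlib
digest = hashlib.sha256(b"hello").hexdigest()

# Remote API: you post your arguments to a server and it sends back the result.
# (https://api.example.com/hash is a placeholder, not a real endpoint.)
import json
import urllib.request

req = urllib.request.Request(
    "https://api.example.com/hash",
    data=json.dumps({"text": "hello"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    digest = json.load(resp)["digest"]
[/code]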

It was an idea that never really went where it was expected to go. Interchangeable units of program text never became the standard way to build large software systems.

Partly that was because building interchangeable software units suitable for building large software systems turned out to be a lot of work. But perhaps mostly it was because the world changed, and turned out to be a different place than it was in the '70s and '80s.

One of the really notable things is that there is just a lot less building of software than you would expect, given the amount of software in use. How much custom software do you see in your home or business? Any? If all the software you used were custom built, it would all be built with efficient building practices out of expensive parts: you would never build a brick building if you had to fire the bricks yourself, but bricks are sold to people who build buildings.

The interchangeable software units that we did get (ActiveX and Java Beans) are a diminishing part of the market, and aren’t what was first imagined. The “Software IC” you saw might have been a competing product, but objects like that were not what was originally imagined; they were what came out of the process.

Objects like that aren’t like the ICs that go into an electronic product; they are more like Lego blocks, Erector sets, or K’Nex, which is why the market is shrinking: in a world where every software program has already been built, it’s all a carefully crafted framework on which you hang your own drawings, not a DIY kit home.

Well, the graphics coprocessor is a bit like software filling the role of a dedicated state-machine IC.

But I guess it does exist in the graphics copro, though it’s not something where you change the software willy-nilly… You can run tasks on the copro, but only for specific purposes, like a raytrace, or a competition or public-service project (encryption cracking, SETI, that sort of thing), and you aren’t getting your main app and every applet, add-on, and plugin onto it any time soon.

The microkernel idea hasn’t come about either. The idea there was that the CPU would be run by a small, more easily debuggable microkernel, which would be far less likely to crash, and that kernel would act as a hypervisor for the real OS “kernel”… Objective-C is mostly an Apple thing these days, and back then Apple was thinking about microkernels as a way to run the classic Mac OS alongside OS X, and generally to smooth out the reliability problems. What if all the RAM is used up? Crash, turn off the power… that sort of thing is what classic Mac OS did. Instead, though, they had OS X totally replace the classic OS.
OSes now do this “hypervisor” thing, but currently it’s one full OS acting as hypervisor for another full OS.

ARM is a semiconductor IP company - it licenses designs, know-how, modules, design tools, etc., to customers who use it to custom design and make purpose-built integrated circuits. It mostly started with microprocessor technology and added to it to allow customers to address more areas. The resulting devices are often of the so-called system on a chip (SOC) nature, highly integrated devices with diverse functions supplied by various cores and other capabilities that can be added.

Not just recently, and not just at companies like ARM: semiconductor devices have long had a large software component embedded in and integral to the function of the hardware.

For products that offer variable designs or functionality, the cores and functions are building blocks, much like software libraries, and producing them requires both the hardware “blueprints” and recipes and the software that makes them work. As just one example of a non-hardware approach to customization, one product family that has been around for a long time is the FPGA, the field-programmable gate array. FPGAs are customer-configurable by software for specific applications.

Changing the topic away from semiconductors: software companies very much have internal libraries, core routines, methodologies, etc., so that whenever someone wants to do or change something, they’re not starting with a blank piece of paper.

Back in the day, the term used with high-level languages was subroutine. Subroutines were written to provide standardized code (e.g., a date manipulation routine) that could be called from different parts of a program as well as from entirely different programs. With the Y2K problem, it was obviously much easier to fix or upgrade code in a single subroutine used by many programs than to update each primary program with its own date-handling code.
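
As a toy illustration of the idea, here’s a Python sketch assuming a two-digit-year scheme like the ones behind the Y2K problem (the windowing rule is invented for the example):

[code]
def expand_year(two_digit_year):
    """Shared date subroutine: every program calls this instead of rolling
    its own two-digit-year logic, so fixing the century rule here fixes it
    everywhere at once."""
    if two_digit_year >= 70:
        return 1900 + two_digit_year
    return 2000 + two_digit_year

print(expand_year(99))  # 1999
print(expand_year(5))   # 2005
[/code]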

This is just structured programming, which ideally reduces to three rules:

[ul]
[li]Everything a program knows about doing one specific task is in a specific piece of code.[/li]
[li]No other code knows anything about doing that task.[/li]
[li]All code is organized into functions or subroutines with well-defined (that is, written down) inputs, outputs, and behaviors. All communication between different blocks of code is done by calling subroutines or functions.[/li]
[/ul]

Object-orientation is one possible way to achieve that ideal. It’s not the only way, it’s not even necessarily the best way, but it’s a popular way.

So, those “blocks of code” I mentioned above are the “Software ICs”. All programming languages designed since the 1960s support this basic level of organization to some degree.
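
For instance, here’s a sketch of those rules with no object orientation at all (the price-formatting program is made up for illustration): everything the program knows about formatting prices lives in one function, and every other block of code gets at it only by calling that function.

[code]
def format_price(cents, currency="USD"):
    """The only place in the program that knows how prices are rendered.
    Inputs: an integer number of cents and a currency code.
    Output: a display string."""
    return f"{cents // 100}.{cents % 100:02d} {currency}"

def print_receipt(line_items):
    # This block knows nothing about price formatting; it just calls the subroutine.
    for name, cents in line_items:
        print(f"{name:<20} {format_price(cents)}")

print_receipt([("coffee", 350), ("bagel", 275)])
[/code]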

I agree with Derleth.

The main reason reusable code took a long time to arrive is due to the complexity of tasks software solves. In the early days, the tasks kept getting more complex, with way too many “wires” to be reduced to anything resembling a “software IC”. It took quite a while before functional utility blocks emerged (with a lot of design effort to produce just that).

Also, hardware was slow and small. Techniques we use today wouldn’t have worked then due to these limitations, even had we known them.

By the late '80s and early '90s, though, things had started moving toward code reuse (building blocks), with extensive libraries of solved problems. The first really popular solution sets were the Unix/C standard libraries, offering robust solutions with simple interfaces for complex things like regular expressions (for text processing), all sorts of math, sorting, etc. These were very low-level software modules packaged as simple C libraries.

These days, to get an idea of the level of modularization, all you need to do is be given a task to solve using Perl or Python or Java and see the huge wealth of highly sophisticated problem solutions at your fingertips, including communications, encryption, database access, parsing, send/expect for testing external units, etc.
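
For instance, here’s a quick Python sketch using only standard-library modules (the data is made up, just to show how much comes off the shelf):

[code]
import re          # regular expressions
import hashlib     # cryptographic hashing
import sqlite3     # embedded database access
import json        # parsing / serialization

words = re.findall(r"\w+", "software ICs never quite arrived")
digest = hashlib.sha256(" ".join(words).encode()).hexdigest()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE notes (body TEXT)")
db.execute("INSERT INTO notes VALUES (?)", (json.dumps(words),))
print(digest, db.execute("SELECT body FROM notes").fetchone()[0])
[/code]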

Excuse me, but that IS the way modern software systems are built, providing services in data centers (like Amazon Web Services, etc.). OpenStack is a significant example.

Perhaps I’m misunderstanding your argument regarding the word “text”, which I don’t mean limited to source code. Who cares whether it’s source code, intermediate code, or object code, as long as it can be built like a brick into a product or service?

No argument there. Object orientation helped dramatically, as did dependency injection systems like Java Beans.
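
For anyone who hasn’t run into it, here’s a minimal sketch of the dependency-injection idea (plain Python rather than Java Beans, and the class names are invented): the component is handed its collaborators instead of building them itself, so any compatible block can be swapped in.

[code]
class SmtpMailer:
    def send(self, to, body):
        print(f"SMTP -> {to}: {body}")

class FakeMailer:
    """A stand-in with the same interface, used for testing; no network needed."""
    def send(self, to, body):
        self.last = (to, body)

class SignupService:
    def __init__(self, mailer):
        self.mailer = mailer          # the dependency is injected, not hard-coded

    def register(self, email):
        self.mailer.send(email, "Welcome!")

SignupService(SmtpMailer()).register("rob@example.com")   # "production" wiring
fake = FakeMailer()
SignupService(fake).register("rob@example.com")           # test wiring
print(fake.last)
[/code]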

Object orientation and dependency injection are just the tip of the iceberg. In today’s world, we have things like OpenStack and myriad other software solutions that are designed to be used by applications to provide services (or to provide platforms or infrastructure as a service, but let’s not go there).

The software we use today, in our computers, phones, etc., is all built on layers upon layers of reusable software blocks. That’s one reason it isn’t a whole lot faster than it might be!

It’s also a big reason for a lot of bugs, thanks to the “leaky abstraction” principle.

My knowledge is limited - not an EE or software engineer - but some of you are making me look like an expert with your speculation.

Back in the dark ages, every computer hardware maker had hardware, operating systems, and programming languages that were unique: completely different from, incompatible with, and not interoperable with anything made by another company. Software written for an IBM machine couldn’t be used without significant change on a DEC (Digital Equipment) machine, or a Burroughs, or a Univac, and on and on. IBM’s Fortran, or COBOL, or whatever, were similarly different from everyone else’s and vice versa.

The whole concept with UNIX was to create a layer that could do the heavy lifting of communicating with each type of hardware, and then allow that layer ITSELF to be programmed in mostly the same way, independent of the machine it was running on. UNIX was that layer; C was the programming language for Unix. C code and most of the other high-level languages and scripting systems in use today are mostly completely portable from one hardware type to another. Systems are open, and databases are accessible by you-name-it software written in you-name-it language from anywhere to anywhere.

Unix was proprietary to AT&T, but the original Linux was a clean reimplementation (no copyright infringement) that provided the same functionality.

Today, the same concept is used even with embedded systems, where the programming tools do the heavy lifting as far as differences between microprocessors go, and code is portable from one type of hardware to another.