What happens when the robots (peacefully) take over?

Sounds more like “The perfect is the enemy of the good”

Sure, it does not work in all contingencies; however, history has shown that capitalism falls flat on its face too, though of course less often than in the silly countries that, besides socialism, also try to make a command economy work. And one can’t help but notice the irony: those posts worked (arrived at their destination) thanks in great part to the work of web servers that, according to W3Cook, are powered by Linux/Unix. They run about 96.5 percent of the top one million domains in the world (as ranked by Alexa).

Open-source and community-oriented systems that are usually free and egalitarian. Based on the job they do and their reach on the internet, I can say that I feel more than lucky. (But always back up your stuff and help keep the competition alive.)

You haven’t been paying attention to software development trends, apparently. Software development has been moving away from central designs for decades, and ‘agile’ programming is a refutation of grand central plans, which we find never survive contact with the customer. We’ve learned not to create grand up-front designs by high-powered architects and planners, and instead we build the smallest viable product we can, then release it and respond to how the market reacts.

In software itself, we’ve discovered that borders are really important. In the early days of computing, there were no borders. All code ran in the same space, and your bad code could kill my program, or the operating system itself.

So first we learned to partition code into protected spaces. Then we learned to write object-oriented code that had isolated functions. Then we learned to code against interfaces, because the code that calls a library function doesn’t need to know how the library function works. It just needs to comply with the interface.
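
To make that concrete, here’s a toy sketch in Python (all names invented for illustration, not from any particular codebase): the calling code depends only on the interface, and any implementation that satisfies it can be swapped in without the caller changing.

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """The interface: callers depend only on these method signatures."""
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...
    @abstractmethod
    def load(self, key: str) -> str: ...

class InMemoryStorage(Storage):
    """One implementation; a database- or file-backed one could replace it."""
    def __init__(self):
        self._data = {}
    def save(self, key, value):
        self._data[key] = value
    def load(self, key):
        return self._data[key]

def record_order(store: Storage, order_id: str, payload: str) -> None:
    # The caller neither knows nor cares how save() is implemented;
    # it only relies on the interface.
    store.save(order_id, payload)

store = InMemoryStorage()
record_order(store, "order-42", "3 widgets")
print(store.load("order-42"))  # -> 3 widgets
```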

And the ‘main’ function (in languages that have it) does not act as a central authority, and does not handle all contingencies. In a modern language, the ‘main’ method may be completely empty, or it may construct a few high-level objects, but the actual control of the program is left to controllers, interrupt-driven code, etc. The move towards more decentralized logic has been going on for decades.
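
A minimal sketch of that “thin main” idea (illustrative Python, names made up): main just wires a few high-level objects together and gets out of the way, and the actual behaviour lives in the handlers that events route to later.

```python
class EventLoop:
    """Toy dispatcher: the 'center' only routes events to whoever registered."""
    def __init__(self):
        self._handlers = {}
    def register(self, event: str, handler):
        self._handlers.setdefault(event, []).append(handler)
    def dispatch(self, event: str, payload):
        for handler in self._handlers.get(event, []):
            handler(payload)

def billing_controller(order):
    print(f"billing: invoicing {order}")

def shipping_controller(order):
    print(f"shipping: scheduling {order}")

def main():
    # main constructs a few objects and hands off control; the logic
    # is decentralized into the controllers that react to events.
    loop = EventLoop()
    loop.register("order_placed", billing_controller)
    loop.register("order_placed", shipping_controller)
    loop.dispatch("order_placed", "order-42")

if __name__ == "__main__":
    main()
```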

And the reason this is true is because of the same thing that makes an economy impossible to plan and control from the top down - complexity. Software is much more complex than hardware, and as a result we have learned to architect it in a way that allows modules to have autonomy and for the overall system to emerge from the bottom-up interaction of the modules.

So at every level - hardware, operating systems, and programs - software has been trending towards compartmentalization and decentralization. We don’t use the ‘smart people think of everything’ waterfall process any more because it’s high risk and rarely works well. Instead, we’ve moved to ever more decentralized and iterative designs. You know, like a market.

Speaking of that, even inside software we find that markets are working. Token Ring networks have been designed that use pricing to have modules bid for network access based on their internal needs, allowing the overall schedule to emerge dynamically. This solves the problem we had with top-down allocation whenever the requirements changed or something new was added to the system, or when the guys designing the bandwidth allocation failed to understand the needs of the software or the customer.
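
I can’t reproduce the actual token-ring scheduler here, but the flavour of bid-based allocation is easy to sketch (illustrative Python, every name invented): each module bids out of its own budget according to how urgent its backlog is, the slot goes to the highest bidder, and the overall schedule emerges instead of being fixed up front.

```python
import random

class Module:
    def __init__(self, name: str, budget: float):
        self.name = name
        self.budget = budget
        self.queued = 0  # messages waiting to be sent

    def bid(self) -> float:
        # Bid more when the backlog is bigger, but never more than we have.
        return min(self.budget, 0.5 * self.queued)

def allocate_slot(modules):
    """Give the next transmission slot to the highest bidder and charge them."""
    winner = max(modules, key=lambda m: m.bid())
    price = winner.bid()
    winner.budget -= price
    winner.queued = max(0, winner.queued - 1)
    return winner, price

modules = [Module("vision", 10.0), Module("motion", 10.0), Module("logging", 10.0)]
for m in modules:
    m.queued = random.randint(0, 5)

for slot in range(3):
    winner, price = allocate_slot(modules)
    print(f"slot {slot}: {winner.name} wins at price {price:.2f}")
```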

Software as an industry also has virtually no regulation. You don’t need a license to be a programmer. You don’t need a degree. You don’t need government certification of your code. You don’t have to wait for government inspectors to sign off on it. There are no regulations specifying what language to use, or what best practices you must follow. The computer industry is the closest thing to a wild west we have - for good and for bad.

Thanks for the enlightening comments. I stand corrected.

Still, catastrophic failure is the result of unanticipated events. The programs you describe cannot be thoroughly tested.

Interrupt-driven systems are never used in bomb fuzes.

What quality controls are in place for the objects you invoke? What quality assurance do you use when distributing them? What incoming tests do you perform on purchased objects?
Crane

Objects are tested with their own unit tests, written by the people who most intimately understand the object, and the tests belong to the object. Integration tests are used to make sure nothing else breaks when the object is added to the code, because however much you try to isolate functional modules, there are always interaction effects.

Unit tests are executed every time the software is built, and anything that breaks as a result of a code change is detected and reported.
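
For anyone unfamiliar with the mechanics, it looks roughly like this (a minimal sketch using Python’s built-in unittest; the Conveyor class is invented for the example). The tests live beside the object they check and run on every build, so a change that breaks the object’s contract fails immediately.

```python
import unittest

class Conveyor:
    """Toy object under test: tracks items placed on a conveyor segment."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = 0

    def load(self, count: int) -> None:
        if self.items + count > self.capacity:
            raise ValueError("conveyor over capacity")
        self.items += count

class TestConveyor(unittest.TestCase):
    def test_load_within_capacity(self):
        c = Conveyor(capacity=10)
        c.load(4)
        self.assertEqual(c.items, 4)

    def test_load_over_capacity_rejected(self):
        c = Conveyor(capacity=10)
        with self.assertRaises(ValueError):
            c.load(11)

if __name__ == "__main__":
    unittest.main()  # the build runs this; any failure blocks the change
```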

When using a third-party module, it will either have its own unit tests, or if the code isn’t inspectable you devise tests that exercise all the functions you need and verify that they work. And some software we don’t test at all, trusting in the provider, market reviews and history that the quality is there. No one outside of Microsoft is going to write tests to verify that the caching inside SQL Server is bug free. We simply test that our own statements sent to the server result in the data we expected, and we hold Microsoft responsible for ensuring the quality of their own product. If that quality starts to slip, we’ll choose another vendor. Just like people do in the rest of the market. The fact that we will do so keeps Microsoft focused on their own quality.

In even more distributed systems (and most systems are becoming more distributed), and with online services like AWS or OpenID servers, we can’t test the code behind the interface, so all we care about is whether the functions exposed in the interface do what they are supposed to. So we write tests that exercise the interface. The developers of the service can change all the code behind it if they want and we don’t care - the interface abstracts away the implementation.
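
Here’s a sketch of that kind of interface test (Python, with a made-up service client and response shape, shown only to illustrate the pattern): we never look at the provider’s internals, we only assert that the interface we rely on keeps behaving the way we depend on.

```python
import unittest

# Hypothetical client for a remote service; in real life this would wrap
# calls over the network. We treat it as opaque: only the interface matters.
class PaymentServiceClient:
    def charge(self, account: str, cents: int) -> dict:
        # Stand-in for the remote call; the provider can rewrite everything
        # behind this method and these tests still define what we rely on.
        if cents <= 0:
            return {"status": "rejected", "reason": "non-positive amount"}
        return {"status": "ok", "account": account, "charged": cents}

class TestPaymentInterface(unittest.TestCase):
    """Contract tests: exercise only the behaviour we depend on."""
    def setUp(self):
        self.client = PaymentServiceClient()

    def test_successful_charge_reports_amount(self):
        result = self.client.charge("acct-1", 500)
        self.assertEqual(result["status"], "ok")
        self.assertEqual(result["charged"], 500)

    def test_zero_charge_is_rejected(self):
        result = self.client.charge("acct-1", 0)
        self.assertEqual(result["status"], "rejected")

if __name__ == "__main__":
    unittest.main()
```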

This is analogous to one of the greatest advantages of money in a free economy. It’s an abstraction that allows us to trade without having to know all the details of production of each object. It allows us to work together even when we disagree on just about everything. Arabs buy goods with Jewish workers in the supply chain. White Supremacists buy products from businesses owned by minorities, and they don’t care. Like the interface in software, all you care about is the value of the product represented by dollars, and the details of how it’s made don’t matter.

Politics is not like this. Which is why on the political front people will bicker endlessly and even go to war, while economically they are perfectly fine trading with each other. Which is also why it’s tragic that some people are trying to politicize the marketplace.

Then you are describing ever wider distribution of untested systems. That’s a formula for disaster.

Consider the Boeing 737 angle of attack issue. That’s an obvious QA problem that could have easily been anticipated. They did not test it. Running the program to see if it does what it’s supposed to do is not testing.

Robotics is not desktop stuff. Robotics controls material that moves with force. If the software simply assumes that its input data is correct, the system is on the edge of failure.

Software can be tested. Automotive ABS systems are examples of software that is well tested and not failure prone.

I don’t know how you came to the conclusion that software isn’t tested. Unit and integration tests are part of the build process. But testing doesn’t end there. There are also functional tests, customer acceptance tests, alpha tests, beta tests, yada yada.

As for robots, the software I write is for factory automation, so I know all about writing software for robots. I have no idea why you think that software that controls robots only considers its own inputs - generally physical systems like this give lots of feedback in the form of many sensors, and control software uses this feedback to keep the robots in control. This is yet another reason why software isn’t top-down control and planning - it’s more like a market where you make an input, and the input changes the outputs, and the results of the output are used to modify the inputs. That’s also how airplanes fly and remain stable - through feedback. Top down control is terrible at responding to feedback from complex systems.
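
For a concrete picture of “feedback keeps it in control,” here’s a stripped-down proportional control loop in Python (pure illustration, not production motion-control code): the command is continually corrected by the measured position rather than computed once from a plan.

```python
def proportional_control(target: float, position: float, gain: float = 1.0) -> float:
    """Command a velocity proportional to the remaining error."""
    return gain * (target - position)

def simulate_axis(target: float, steps: int = 50, dt: float = 0.1) -> float:
    position = 0.0
    for _ in range(steps):
        # Sensor feedback: read where the axis actually is...
        measured = position
        # ...and adjust the command based on the error, every cycle.
        velocity = proportional_control(target, measured)
        position += velocity * dt
    return position

print(f"final position: {simulate_axis(target=100.0):.1f} (target 100.0)")
```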

Partially true and partially one-sided.

The reality is that big design up front vs iterative just have different pros and cons and are each the best tool for different problems.

Waterfall/big design up front is the most efficient method and produces the best design when the solution is known. Agile and/or iterative methodologies are best when the solution is not known and is only discoverable by having people use the system.

In my experience, people who are good naturally apply both styles based on the specific need. Dogmatic approaches one way or the other tend to be less optimal.

Not really.

The focus is on how you manage resources and time, not the novelty of the design or product per se.

So if the world has never seen a sprocklett before, but you already know exactly all the work involved and how long it will take, then waterfall makes sense.

Meanwhile, if you need to make boxes of matches, and while it’s an established process it’s new to your team and there are some things you’ll need to research and potential risks…good reason to go agile.

Disagree, especially if we’re talking about all parts of the chain.
I know lots of great engineers who are very dismissive of agile practices, even as they benefit from agile workflows by avoiding months of “crunch time” and rewrites.

Yes, and if this is responding to Sam Stone’s statement that “[waterfall] never survive[s] contact with the customer” I agree: that’s too broad.

Yes, really.

If you know what the solution to the problem is, meaning you have the knowledge to be able to design up front, then designing up front creates both a cleaner design and less work than iteratively reworking the design and code.

When you don’t really know what the solution is, because it’s unclear which features the users will actually make use of vs the ones they will ignore, or if it’s unclear to what degree the solution actually meets the business requirements, then it’s better to iteratively chip away at portions of the problem, reworking the system as you add capabilities. This method increases the time spent re-designing and re-coding some aspects of the system and typically decreases the quality of the overall design compared to if you had all of the knowledge up front.

I’ve just never met someone I consider good who approached a problem full of unknowns by trying to guess and predict all of the solutions in advance, and designing the entire thing prior to any dev/prototype/testing/etc.

The natural inclination is to break down problems into the chunks that are known, the chunks that are less known but experience allows a high success rate using patterns and educated guesses, and the chunks that require some method of prototyping and validation.

I know there are companies that force people into a particular methodology that may not be appropriate, but I’ve never worked in one of those companies so my experiences might be somewhat skewed compared to yours.

Sam Stone,

Have you never had a user reported problem with your software?

Of course. No large piece of software is completely free of bugs. No complex user facing software ever gets all the requirements exactly correct, or has a perfect UI.

Most large software projects have hundreds to thousands of bugs, and bug triage is a constant activity before and after the software is delivered. A large amount of effort in the industry has gone into building the capacity to fix fast and release fast so that critical bugs can be patched quickly when they crop up in the field. Hotfixes and service packs are a common part of the software dev process.

Again, this is bottom-up correction to plans gone wrong. The perfect design doesn’t exist, and bug-free software generally doesn’t exist. The next best thing is shorter dev cycles and better bottom-up feedback in the form of bug reporting and user engagement for feature requests and complaints.

If you can’t release perfect software, it’s better to let the software evolve by updating it rapidly in response to complaints and suggestions from the field. This is yet another way in which software development has moved more towards bottom-up, market-like processes and away from heavy top-down control.

I wasn’t disagreeing as such, just making the distinction clear:

Agile and waterfall are project management methodologies, so what matters is how many unknowns there are from a project management point of view.

Yes this will generally be correlated with how much uncertainty there is in terms of the design of the product itself, but not necessarily, and I illustrated that with a couple of examples.

It’s not as simple as knowing or not knowing one simple nugget of information.

Yes most good engineers, indeed most everyone with some engineering experience, know that designing a novel system in isolation and expecting it to work first time is asking for trouble if not completely impossible.

But project management is a lot more than that. It at least includes when you test, how you break down and prioritize the work, and at what points you have something shippable. For sure you can be a great engineer, who understands design cannot be performed in a vacuum, without understanding any of that.

Anyway, this is all a tangent, maybe we can pull it into a separate thread.

Sam Stone,

Thanks, you made my point.

I must have missed it. What was your point again?