Yeah, this. As a person who was in IT while the previous methodologies prevailed (major releases), the idea of continuous delivery horrified me at first. How would you arrange the testing? How would you facilitate transition, user acceptance, training, and data transformation? How could all of that stuff be done on a daily or sometimes multiple-times-daily basis?
Reality: when you are doing lots of tiny updates, ‘all that stuff’ isn’t all that stuff.
My guess is that the app interacts with a backend “McDonald’s Ordering Service” API, which is on its own release schedule. Each release of the app interacts with a different version of the API, so if the backend team wants to deprecate any API calls they have to make sure all instances of the app “in the wild” are upgraded past the last version that wants to make that API call.
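To make that concrete, here’s a minimal sketch of the kind of server-side version gate that creates the “everyone in the wild must be upgraded first” constraint. The struct, the “legacy ordering endpoint”, and the version numbers are all hypothetical, not anything from McDonald’s actual backend.

```c
/* Hypothetical sketch of a backend version gate. The types, the
 * "legacy ordering endpoint", and the cutoff version are invented
 * for illustration only. */
#include <stdbool.h>
#include <stdio.h>

struct app_version { int major, minor; };

/* Oldest app build that no longer calls the legacy ordering endpoint. */
static const struct app_version MIN_WITHOUT_LEGACY = { 6, 3 };

static bool at_least(struct app_version v, struct app_version min)
{
    if (v.major != min.major)
        return v.major > min.major;
    return v.minor >= min.minor;
}

/* The backend team can only delete the legacy endpoint once every
 * client version still seen in the wild passes this check. */
static bool legacy_endpoint_still_needed(struct app_version oldest_in_wild)
{
    return !at_least(oldest_in_wild, MIN_WITHOUT_LEGACY);
}

int main(void)
{
    struct app_version oldest = { 5, 9 };  /* oldest version still phoning home */
    printf("keep legacy endpoint: %s\n",
           legacy_endpoint_still_needed(oldest) ? "yes" : "no");
    return 0;
}
```

The faster app updates roll out, the sooner that check flips and the backend can finally retire the old call.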
Banks have more mature APIs that don’t change very often, and better commitments to maintaining old versions of backends, which gives them more leeway.
Incidentally, while looking up whether McDonald’s has a public API (they don’t), I found this, where someone found a way to access the internal API to check whether the ice cream machines are broken.
I used lint long before there was such a concept as CI. Certainly CI doesn’t force anyone to be sloppy, but it encourages it.
When I worked at Bell Labs I worked on and managed EDA software, mostly for internal use, where people could call the developers up and ask for bug fixes. That was a problem when you wanted your software to be a production product rather than an experiment.
I switched jobs and became a customer. IBM was trying to sell their EDA software, and when I met with them, one feature they talked about was the ability to talk directly to developers about new features. Uh, no thanks. They finally sold the software and team to a production EDA company who knew what they were doing.
Certainly reasonable CI processes are not that out of control, but I’ve seen the downside from the inside.
Many of you may not remember when the entire long distance network went down. The cause was that someone messed up a switch statement in the C code that ran the telephone switch. It was too small a change to actually test. Or so they thought.
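For anyone who hasn’t run into that class of bug, here is a simplified, made-up illustration, not the actual telephone switch code: a break inside an if inside a switch exits the whole switch, silently skipping work the author assumed would always run.

```c
/* Hypothetical illustration of the bug class: the programmer expects
 * `break` to leave only the inner `if`, but in C it exits the
 * enclosing switch, skipping the shared bookkeeping below. */
#include <stdio.h>

static void handle_message(int type, int congested)
{
    switch (type) {
    case 1:
        if (congested) {
            printf("note congestion\n");
            break;   /* meant to skip one step; actually exits the switch */
        }
        printf("normal processing\n");
        /* fall through: both paths were supposed to update routing */
    case 2:
        printf("update routing state\n");   /* silently skipped when congested */
        break;
    default:
        printf("unknown message type %d\n", type);
        break;
    }
}

int main(void)
{
    handle_message(1, 1);   /* routing state never gets updated */
    handle_message(1, 0);
    return 0;
}
```

It is exactly the kind of “too small to test” edit that looks harmless in a diff.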
Sure, it doesn’t have to mean no testing.
That’s not necessarily bad practice. In some circumstances it can actually be fairly common, particularly when the customer is very large and especially when dealing with low-volume, high-value software, though it’s not usually developers who are the direct contact but something like technical marketing managers. Before the PC revolution, which made software behemoths like Microsoft and Google completely inaccessible to all but the very largest customers, collaborative relationships between customer and vendor were not uncommon.
What is EDA? Electronic Design Automation? For silicon and board layout?
I used to hold a similar view, but then Android moved to automatic updates and the app updates didn’t slow down. I think CI/CD is more likely.
Another option is that smaller shops might use it to push changing content. It’s a free distribution network and doesn’t require additional systems and processes.
Prior to automatic updates being the default, I do remember discussions about pushing out frequent updates as a way to catch up people who had opted out of a previous update. If you had a bad bug in an early version, it could be hard to get everyone off that version.
If they had said technical marketing managers I would have had no trouble with it. They set priorities for new features and non-critical bug fixes. That’s why the standard response to a request for a new feature is “that’s an interesting idea. We’ll get back to you.”
Not to mention that only a subset of developers can be trusted to even talk to customers. Collaboration is good, but it needs to be strictly controlled.
Yup. For chip testing, in particular. But we were a part of a bigger Bell Labs effort involving all kinds of EDA.
A while back, people way behind on Windows updates were susceptible to attack by those who found holes in a particular version. So automatic updating was probably good, except for those times when it messed stuff up.
I don’t know if it is still true, but big companies didn’t migrate easily. They didn’t exactly trust new operating systems. When I worked for Intel the OS on my work PC was way behind that on my home PC.
I recall a comedian some years ago who said something like, “GM, Ford, and Chrysler can put cars on the road that never need updating. Unless there’s a recall, and those are rare. But if my car was a Microsoft product, it would have to be in the shop every week to be ‘updated’.”
I think that’s what bothers people. You buy a machine, you expect it to work, it does, and it does what you want it to. Eventually, it wears out, or becomes obsolete, and you replace it. No interference from the manufacturer during the time you use it.
Yet I cannot seem to go a week without a notice from Microsoft telling me updates have been downloaded (without my permission), and should we install them now? If not now, then when? Let’s schedule a time. We can do it overnight, if you like. But there is no option for “never.” These updates are going to be installed eventually, whether you want them or not.
“If it ain’t broke, don’t fix it” works for cars, for machinery, for household appliances, for lawnmowers; for, oh hell, disposable lighters and coffee makers and so much else. Why do tech companies feel that, “If it ain’t broke, and it ain’t, we need to fix it anyway”?
Found an example of an old trope along those lines, although there are many variants:
At a recent computer exposition, Bill Gates reportedly compared the computer industry with the auto industry and stated: “If General Motors had kept up with the technology like the computer industry has, we would all be driving $25.00 cars that got 1,000 miles to the gallon.”
In response to Bill’s comments, GM issued a press release stating: “If General Motors had developed technology like Microsoft, we would all be driving cars with the following characteristics”:
- For no reason whatsoever, your car would crash twice a day.
- Every time they repainted the lines in the road, you would have to buy a new car.
- Occasionally your car would die on the freeway for no reason. You would have to pull over to the side of the road, close all of the windows, shut off the car, restart it, and reopen the windows before you could continue. For some reason, you would simply accept this.
- Occasionally, executing a maneuver such as a left turn would cause your car to shut down and refuse to restart, in which case you would have to reinstall the engine.
- Macintosh would make a car that was powered by the sun, was reliable, five times as fast and twice as easy to drive – but would run on only five percent of the roads.
- The oil, water temperature, and alternator warning lights would all be replaced by a single “General Protection Fault” warning light.
- The airbag system would ask “Are you sure?” before deploying.
- Occasionally, for no reason whatsoever, your car would lock you out and refuse to let you in until you simultaneously lifted the door handle, turned the key and grabbed hold of the radio antenna.
- Every time GM introduced a new car, car buyers would have to learn to drive all over again because none of the controls would operate in the same manner as the old car.
- You’d have to press the “Start” button to turn the engine off.
Not true anymore, thanks to the huge amount of software in cars. I think Teslas get updated all the time. I’ve had firmware upgrades done at the shop.
They definitely do a software/firmware update when I get my oil changed. And sometimes the UI is a little different.
So glad I still have an older car where the bullshit that permeates the modern software industry doesn’t affect me. I know it has “software” in the ECU but it’s invisible and never needs “updating”. Most devices I’ve owned that are subject to the plague of frequent “updating” have been either broken or degraded at one time or another.
Regards,
A Cranky Old Fart and probable future Luddite
Yeah, but if netting big bucks from advertisers in the short term comes at the cost of small incremental grumbles from users, good luck being the manager/exec trying to get stakeholders on the side of the user.
There is definitely a large amount of software churn that is unnecessary.
However, a lot of it is necessary. The car works fine, but someone else changed the material the roads are made of, the tire manufacturers shifted to octagonal tires, and all the gas stations switched to a different fuel source… and all of that happens on a short time scale.
Haha, as a software dev, I find this quite flattering.
But we get to write the code. The business decisions are not ours to make. We have management and business analysts who make those choices.
We just write the code, test it, write tests of various types for automated testing, and pass it on to QA to test it manually and automatically. We deploy it through several stages, each of which has both manual and automated testing, and often A/B testing if it is a new feature. Even though I am not a QA guy, I do a shit load of testing.
I personally like well-managed CI/CD. I think your argument is with the business and/or the designer, not the developers.
Are there any security issues if an app doesn’t update regularly? That is, does it get easier for bad actors to cause mischief when its security is out of date?