Computer “glitches”. Why?

For example, when I stream video and make the view full screen, a little message pops up at the top of the screen that says “Press Esc to exit full screen”. Sometimes that message fades away after a second or two. I imagine that’s what it’s supposed to do. But about half the time, it don’t go nowhere. Just stays.

I’ve tried toggling back to partial screen. Sometimes that makes it go away. Other times not. I’ve tried closing the program and reopening. Sometimes that sets it right, other times not so much. I’ve tried restarting the whole damn computer. The first time I tried that it worked, and I thought I’d found the cure, inconvenient though it be. But then - heh, heh - back again.

It is AGGRAVATING way more than it should be, and I know I should just take a chill pill (legal in my state :grinning:) and let it be. But what can I say? I’m too old to change my proclivity of feeling persecuted by “random” things. And this is just but ONE tiny example of such phenomena. The world, and computers in particular, seem Hell-bent on destroying my peace of mind with such petty aggravations.

So what I’m seeking - simply in an effort to give my mind an alternative to its natural tendency of falling back on persecutory “delusions” - is understanding. WHY do such things happen?? Aren’t computers machines, and don’t machines behave in rational and predictable ways? Sure, when you don’t know all the variables and rules, predictable behavior seems random. So what I’d like is a better understanding of the variables and rules at play with these kinds of GOD DAMNED ANNOYING things.

Can anyone help?

Computers are indeed generally deterministic. But the devil is in the details.

Overall it is extraordinarily difficult to create bug-free code. Systems are just too big. No individual can grasp the full extent of the design or its operation. Systems are always works in progress in some parts. In real life even the most perfect artefacts have flaws. The bar for computer systems is very, very high. But nothing is ever perfect.

A crucial theoretical point is that computers are not actually deterministic if you include time. You can break non-deterministic behaviour into two parts, choice and temporal. If you have multiple parts in your computer system running concurrently, and especially if they interact with the real world, things can happen in different orders. Then different runs of the same code can operate in different ways. This can make life really hard for designers of systems and is the source of a great deal of difficulty. Everyday use can run afoul of such problems.
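To make the temporal part concrete, here’s a tiny Python sketch (my own made-up example, not code from any real product): two threads run exactly the same instructions, yet the result can differ from run to run purely because of when the operating system happens to switch between them.

```python
import threading

counter = 0

def bump(times):
    global counter
    for _ in range(times):
        # "counter += 1" is really read-modify-write; if the other thread
        # runs in the middle, one of the updates is silently lost.
        counter += 1

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# We "should" see 400000. Depending on your Python version and machine,
# many runs come up short, and by a different amount each time.
print(counter)
```

Same program, same input, different answer - and the only variable was timing.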

Other, more mundane causes might include using uninitialised components that have near-random state when they are used. So there can be latent bugs that only crop up randomly. Modern programming languages and coding practices can help avoid such things, but it remains hard.
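Python doesn’t have uninitialised memory the way lower-level languages do, but here’s a rough analogue of that kind of latent bug (the names and scenario are entirely made up): a field that only gets set on one code path, so everything works until the path nobody exercised in testing is the one that runs.

```python
class Downloader:
    def fetch(self, ok=True):
        if not ok:
            self.last_error = "network unreachable"   # only ever set here
        return "data" if ok else None

    def report(self):
        # Latent bug: silently assumes last_error was already set.
        return f"last error: {self.last_error}"

d = Downloader()
d.fetch(ok=True)     # the happy path never sets last_error...
print(d.report())    # ...so this blows up, seemingly at random, on a good day
```

In a language like C the symptom is nastier still: you read whatever garbage happened to be sitting in that memory, which is where the near-random behaviour comes from.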

I cannot tell you the answer to your specific problem, but here is the general idea. (I have a 40+ year career in IT, though not in development of software for personal devices.)

First, there is no such thing as bug-free software. The definition of testing is looking for the bugs you expect to find. And many modern applications are so complex that you could never test all possible scenarios that could have bugs.
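A tiny made-up illustration of that point: the tests below cover every case their author thought of, and they all pass - the bug lives in the one case nobody expected.

```python
def average(values):
    return sum(values) / len(values)

# The scenarios the author expected - all pass.
assert average([2, 4, 6]) == 4
assert average([10]) == 10

# The scenario nobody thought to test:
average([])   # ZeroDivisionError - found by a user, not by the test suite
```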

A product like a streaming client has to be written to work on multiple operating systems on dozens, maybe hundreds of different devices. An app that works OK on my iPhone could barf on your Android phone. Or one that works on my Android Samsung Galaxy phone could barf on an Android Samsung tablet. So different versions of an app can have different bugs. The same version of an app could have a bug that only surfaces on some devices.

Well, no. Computers are machines but software is just a bunch of instructions written by fallible people. And machines can have bugs, too.

Long story short, while computers might be rational and predictable, they are only as good as the instructions they receive, and those instructions come from unpredictable, irrational humans. On top of that, modern websites are hideously complex, with dozens of independent programs talking to various servers to receive, buffer, display, and send back information. If those programs hit a snag, either from bad programming or from info getting lost on its way from the remote server to your computer, they can cause all kinds of unpredictable problems. From the point of view of the programmers, a little message that gets stuck is preferable to an error that crashes the whole website, or browser, or computer itself, so they try to build toward that kind of quiet failure.
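Here’s a hand-wavy Python sketch of that trade-off (everything in it is hypothetical, not how any real player is written): if the code that is supposed to hide the “Press Esc” overlay hits a snag, swallowing the error keeps the video playing - but the message just stays on screen.

```python
class Overlay:
    def __init__(self):
        self.visible = True

    def fade_out(self):
        # Imagine this occasionally fails, e.g. the page re-rendered the
        # element between scheduling the fade timer and the timer firing.
        raise RuntimeError("element no longer attached")

class Player:
    def __init__(self):
        self.overlay = Overlay()

def on_fade_timer(player):
    try:
        player.overlay.fade_out()
    except Exception:
        # Swallow the error: a stuck "Press Esc" message is judged less bad
        # than crashing the whole player. Nothing retries, so it just stays.
        pass

p = Player()
on_fade_timer(p)
print("player still running; overlay still visible:", p.overlay.visible)
```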

The bad news for you is that websites try to be as efficient as possible, so they store as much of themselves as they can locally in a browser cache. This is why, when you get an error, it might come back over and over again: your luckless browser saved the glitched message in the cache, and there it will remain until it’s overwritten - sometimes the next time you load the site, or the next time you restart the browser, or maybe never!

Good news: If you hit F5, that will reload the page you’re currently on. If that doesn’t fix the problem, try hitting Ctrl and F5 at the same time, which ignores the cache and reloads everything fresh. Nine times out of ten that will banish weird glitchy display bugs.
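A toy model of what’s going on (this is an analogy, not how any actual browser implements its cache): a saved copy keeps getting served until something forces a fresh fetch, which is roughly what Ctrl+F5 asks for.

```python
cache = {}

def fetch_from_server(url):
    return f"fresh copy of {url}"            # stand-in for a real network request

def load(url, force_refresh=False):
    if force_refresh or url not in cache:
        cache[url] = fetch_from_server(url)  # go get it again and re-save it
    return cache[url]

cache["example.com/player"] = "stale, glitched copy"   # what got saved earlier
print(load("example.com/player"))                      # plain reload: still stale
print(load("example.com/player", force_refresh=True))  # hard reload: fresh copy
```

A hard reload is the force_refresh=True path in this toy.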

I know these answers are probably small comfort, but I can tell you we all face these same problems, and if you think visiting a website can be aggravating, that’s nothing compared to trying to make one yourself.

Oh yeah? My Hello World program displayed the message “HELLLO WORLD” perfectl—oops!

You’re a genius!! That key combo rehabbed my peace of mind back to maybe the universe isn’t out to totally stick it to me. Not right now anyway :grinning:

Thanks for the info!

IMHO, there would be fewer software glitches if everyone who encountered a glitch demanded a fix or a refund. Money talks, and all.

But software companies are way ahead of the game. You probably didn’t even pay for the software you’re using to stream videos - the consumer has no individual leverage at all. Google Chrome, Mozilla Firefox, Apple’s Safari, and Microsoft Edge are all free-of-charge products provided as-is.

~Max

I was wondering this just the other day! So thank you for asking the question, and thank everyone else for dumbing down their answers enough for me to almost understand.

With Netflix, if you want your full-screen movie with nothing else, make sure your mouse arrow is at the bottom, below the screen view, after you click full view. So, the next time you do that, pay attention to where your mouse arrow is. Even if it isn’t Netflix, try that and see if it works.

I appreciate these answers. Unfortunately, they do not change my general disfavor of computers. Yeah, I know that is irrational. EVERYTHING today incorporates/relies on computers.

My complaint is that my uses are consistently EXTREMELY low level. I don’t wish to “program” my car or appliances, and I could do nearly all of my computing (perhaps other than some web surfing) on a 10-20 year old box. But today’s boxes are exceedingly complex because any one box is intended to do all things for all people - not just my silly little needs/wants. And they keep being updated in ways that change their basic function/appearance.

So instead, I basically adopt the expectation that computers are unreliable and cumbersome. At any moment, the electrons can hiccup, causing the computer to appear/act differently than it has the last thousand times. Lacking the interest/expertise to troubleshoot the unusual behavior, I generally just reboot, figuring I may lose whatever is the most recent thing I was working on.

Rather than inspiring me to develop greater expertise into how computers work, each little hiccup fuels my general disinterest. As a result, I’m making myself more and more out-of-touch…

I’m glad I could help. I’ve done a lot of website building, past and present, so that combo is a lifesaver that lets me see whether the changes I’ve made actually work, since browsers are happy to keep showing me the old version and not my update.

There are tools available to check the temporal behaviour of a number of processes or systems interacting in parallel and mathematically prove that the desired properties will be satisfied, e.g. TLA+:

https://lamport.azurewebsites.net/tla/tla.html

However, this relies on engineers knowing how to use them, and especially on abstracting/simplifying the problem into a form amenable to checking; and even so, it will not take into account unforeseen possibilities that were left out of the model in the first place.
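For the curious, here’s a brute-force Python sketch of the idea behind such tools (this is not TLA+, just a toy): enumerate every possible ordering of two tiny “processes” and flag any ordering that breaks the property we care about.

```python
from itertools import permutations

# Each "process" is a list of (name, step) pairs; each step mutates shared
# state. Both processes do a non-atomic read-then-write-back of a counter.
def make_process(pid):
    def read(state):  state[f"tmp{pid}"] = state["counter"]
    def write(state): state["counter"] = state[f"tmp{pid}"] + 1
    return [(f"P{pid}.read", read), (f"P{pid}.write", write)]

def interleavings(a, b):
    # Every ordering that keeps each process's own steps in order.
    labels = [0] * len(a) + [1] * len(b)
    for perm in set(permutations(labels)):
        ita, itb = iter(a), iter(b)
        yield [next(ita) if which == 0 else next(itb) for which in perm]

violations = 0
for schedule in interleavings(make_process(1), make_process(2)):
    state = {"counter": 0}
    for name, step in schedule:
        step(state)
    if state["counter"] != 2:      # the property we hoped always holds
        violations += 1
        print("lost update under:", [name for name, _ in schedule])
print("violating interleavings found:", violations)
```

Real model checkers such as TLC do essentially this over vastly larger state spaces, with much smarter bookkeeping - and, as noted above, they can only check the model you actually wrote down.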

My favorite reason for a glitch that I’ve heard over the years is “it might be a stuck bit”.

Some that we use all the time:

PEBCAK - Problem Exists Between Chair And Keyboard
BUOD - Bad User on Device

If any of my devices seem “glitchy”, I restart it and all is well again.

Debugging simple sequential code is astonishingly hard. The very first programmers quickly realized that they were spending far more time debugging than on the initial coding. (And their programs were tiny compared to what followed 10 years later. Never mind today.) This is a direct corollary to Turing’s Halting Problem. Think of the fastest-growing computable math function you can. The difficulty of debugging grows much, much faster than that.

But that’s nothing compared to debugging non-sequential programs. I.e., programs where different things happen depending on the timing of events. There is no remotely reasonable way to even test such programs for basic errors, since the number of combinations of possible events is just too large.
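To put a rough number on “too large” (my own back-of-the-envelope arithmetic, not from the post above): even with just two threads, the number of possible interleavings is C(2n, n) when each thread takes n steps, and that blows up almost immediately.

```python
from math import comb

# Number of ways two threads of n steps each can interleave: C(2n, n).
# At 50 steps each it is already around 10**29 orderings.
for n in (5, 10, 20, 50):
    print(f"{n:3d} steps each -> {comb(2 * n, n):,} possible interleavings")
```

And that’s only two threads with a handful of steps each; real systems have far more of both.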

I’ve seen many of the best sequential programmers be utterly out of their depth when it comes to non-sequential programming. And since concurrent stuff is all over the place (networking and multicore, in addition to old-fashioned multithreaded code), this is a problem. (And given the difficulties, I don’t think AI will even help here.)

Ever hear the saying, often incorrectly attributed to Einstein: “The definition of insanity is repeatedly doing the same thing and expecting different results”? Whoever did come up with that saying never used a computer.

Example: I attempt some action on a computer-- opening an application, performing some command, etc.

((GLITCH))

Damn…should I reboot? Eh, first let’s just try the exact same thing again…

((Works perfectly))

The reason there are glitches is that all code has to run on a nearly infinite number of hardware configurations, while sharing the computer with a nearly infinite number of other processes, while being used in possibly weird ways.
I used to get paid to look for glitches in microprocessors. Here are some interesting factoids.
I heard a presentation from someone who did system test at Dell. Each machine they made was custom configured, so for each one they built the machine, loaded all the software, and ran tests to see if Word, for instance, worked. The reason was that there were millions of potential configurations and there was no way of verifying all software worked on all of them without trying.
When you are designing a microprocessor you need to do design verification testing, to make sure there are no logic bugs. (Manufacturing issues come later.) The only way of doing this is to throw random sequences of assembly language instructions at the simulated processor, since no person can think of configurations which would sensitize a bug. Weird orderings of instructions find problems that would no doubt show up in the field. This still doesn’t give complete coverage since even a server farm of thousands of processors doesn’t give enough compute power to match a day of running a real processor. I always thought of users as a way of testing our stuff in ways we real testers couldn’t.
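A loose software analogue of that random-testing idea, in case it helps (a made-up toy, nothing like a real processor verification flow): throw random inputs at the thing under test and compare every result against a simpler “golden” reference model. The random stimulus finds the corner case no one would have written a directed test for.

```python
import random

def reference_sort(xs):
    return sorted(xs)                       # the trusted "golden model"

def sort_under_test(xs):
    out = sorted(xs)
    if len(xs) > 5 and xs[0] == xs[-1]:     # injected bug in an odd corner case
        out = out[:-1]
    return out

random.seed(0)
for trial in range(10_000):
    data = [random.randint(0, 9) for _ in range(random.randint(0, 8))]
    if sort_under_test(data) != reference_sort(data):
        print("mismatch on trial", trial, "for input", data)
        break
else:
    print("no mismatch found - which is not the same as no bug")
```

You never have to predict the bug in advance; you only have to notice a disagreement.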

Kind of. One fun bug I spent days in meetings about showed up randomly, sometimes once a year, and threw an exception that caused the system to crash. Upon rebooting it might not show up for months. (I had data.) Once we had suspect processors returned for testing, we found the best way to make the bug happen was to let Solaris sit at the prompt. The bug would show up in anywhere from hours to weeks.
We found that it happened on a signal line that was close to a power bump. We respun the chip to move the line, and the problem pretty much went away. Not everyone agreed this was the root cause, but it worked. Oh, and it was correlated to specific wafer locations. I detected this through the data and we boycotted the suspect locations and that helped reduce the fail rate.
Since the machine worked fine after a reboot this was never a big customer issue, but it drove us crazy. It only failed in a system, btw, never on a chip tester.
Not all that deterministic.
Plus, things like power droop - where the voltage on a signal gets pulled down because there is too much switching activity nearby - can look pretty nondeterministic.

You know what’s way more unreliable, unpredictable, and glitchy than computers?

People.

Yet, EVERYTHING incorporates/relies on people.

Good point. But I’m not able to figure out how to do too many things while writing ME out of the equation. Whereas there are PLENTY of things I can do reasonably well while using my computer/phone in the most rudimentary manner possible.