Would a standard retail anti-malware/AV program protect my PC from a Stuxnet-level threat?

Imagine a Stuxnet-level malware threat racing around the world, infecting millions of computers. Do retail-level antivirus/anti-malware programs such as Norton, McAfee, BitDefender, MalwareBytes, Kaspersky, etc. offer sufficient protection?

In this scenario, my day-to-day computing sees me uploading/downloading MS Word/Excel files on the Google cloud, sharing work files with colleagues via Slack or Microsoft Teams, doing Zoom teleconferences, surfing mainstream news sites, listening to hours of relaxing music via Spotify or YouTube, and horsing around afterhours on SDMB. I avoid the Dark Web, online gaming, and shady websites that involve the viewing or downloading of anything. Also this: no one else uses my computer and I would never insert someone else’s flash drive.

My inner cynic tells me that retail AV/malware protection vs. a rampaging Stuxnet-level threat, certainly in today’s hyperconnected world, would afford me little protection, especially when I enter an online work-collaboration site. On a related note, how safe is it to stream music for hours on end?

This is all hypothetical, so the answer is a definite maybe. Stuxnet isn’t the magical creation that the media portrayed. It was malware used to damage equipment controlled by computers, and from what I recall it may not have done much to infected computers beyond serving that purpose. It worked by taking advantage of security flaws in the software, which is how all malware works. There is no absolute protection against malware in the software you bring onto your computer. OTOH, OS-level security is pretty good now compared to then, and most malware is detected and prevented from spreading easily, though it can still do plenty of harm on an individual machine.

Your commercial antivirus stuff doesn’t stand a chance against a determined Western intelligence operation targeting a known individual.

I can’t pretend to understand all of the hows and whys of the attack but this seems a bit dismissive. By many accounts, Stuxnet is the most sophisticated hack known and as close to magic as we can conjure.

Stuxnet was detected by the AV companies, so I assume they’d be able to (eventually) detect the next one and build defenses against it.

The only really unusual thing about Stuxnet was the manner in which it used four separate zero-day vulnerabilities. A single zero-day is a pretty valuable commodity, and for the authors to have had the resources to corral four points to a very well-resourced entity. Zero-days have a use-by date - there is every chance they may be discovered by someone else and the hole closed. So hoarding four, whether home-grown or purchased, for this one purpose was a big deal. Someone put a lot of effort and resources into it.

But the exploits themselves were nothing special versus all the other exploits that crop up. And as noted, the worm was discovered by anti-virus groups. Because Stuxnet was intended to infect only one entity, there was incentive to keep its reproduction rate low, so it wasn’t quickly detected outside its intended attack area. The rate was so low that there is evidence it was released more than once to try to hurry things up. It never rampaged around the world. It sort of crept about and really didn’t infect that many computers, so it took a while to come to notice.

That really isn’t true. Stuxnet was very clever in the way it targeted PLC programs in a very specific and known piece of equipment. But that doesn’t make it the greatest hack ever. The type of vulnerability it exploited had been discussed in other systems well before. The writers had a clear idea of the precise system setup they were attacking, and hand-crafted the attack. A lot of traditional espionage would have got them most of the way there.

If you want clever hacks that I would rate as much more sophisticated, I would look to things like Heartbleed, and efforts like power line glitching to recover encryption keys. Those get my vote for black-belt level exploits.

All zero day exploits are a worry. And they continue to crop up. But in general computer systems are more robust. You won’t see the equivalent of the Morris Worm. Now that was a big deal, even if it didn’t exploit much more than human naivety.

In my opinion it was not. It was just software, and no software actually comes close to being magical in nature; it is only described that way by people who don’t know how it works or have a motive to be disingenuous.

Several of the subscription AV products include frequent live updates as new threats are discovered. If a virus is seen and reported with a new and different signature, the detection criteria are downloaded within hours to all customers who are online.
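As a toy illustration of how that signature matching works (the hash set here is made up - the single sample entry happens to be the SHA-256 of an empty file, purely so the example is checkable; real AV databases use byte patterns and fuzzy hashes, not just whole-file hashes):

```python
import hashlib

# Hypothetical signature set, as if refreshed from a vendor's live feed.
# The one entry below is the SHA-256 of an empty file, used only so this
# sketch can be exercised; real entries fingerprint known-bad binaries.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_known_malware(data: bytes) -> bool:
    """Return True if the file contents match a downloaded signature."""
    return hashlib.sha256(data).hexdigest() in KNOWN_BAD_SHA256
```

The appeal of this approach is that a lookup is cheap and precise, which is why vendors can push fresh signatures to every online customer within hours - but by construction it only catches what has already been seen somewhere.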

I saw one instance, years ago, where the shared-files disk for a server had started encrypting the files and folders. It started, it seems, in alphabetical order and had done folders starting with A through C before it stopped - presumably at that point, the AV began to recognize this virus or its pattern.

The manner of delivery matters, too. Many of the problems nowadays are links that people click on, often in emails. Spam and AV filters on email servers need to be up to date. (Also, your PC should be as up to date as possible.) Sometimes there’s a novel exploit route - I saw a note recently that WinRAR had a vulnerability that allowed code to execute when a user tried to open what appeared to be JPG or PDF files inside an archive.

In order to infect the Windows PCs in the Natanz facility, Stuxnet exploited no fewer than four zero-day bugs—a Windows Shortcut flaw, a bug in the print spooler, and two escalation of privilege vulnerabilities—along with a zero-day flaw in the Siemens PLCs and an old hole already used in the Conficker attack. The sheer number of vulnerabilities exploited is unusual, as typically zero-days are quickly patched in the wake of an attack and so a hacker won’t want to reveal so many in a single attack.

But apparently, to get in, it used a USB connection (the Iranian facility was not accessible from the internet).

I suppose another point for the OP is that I haven’t heard much about compromised home routers lately; so someone seeking to load a virus onto your PC needs you to somehow make the connection for them - web, email, download…

Another vulnerability worth mentioning is IoT - all those devices like Ring doorbells, smart TVs, Nest thermostats, etc. that automagically connect to outside servers behind your back. Usually those have fixed programming and sometimes don’t update themselves. If someone can figure out a way to connect to one remotely, they now have a live connection on your network from which to try to infiltrate your PC, even via a very simple method like guessing your passwords.

Once you know about a computer threat, any threat, it’s easy to stop it. At worst, the threat is based on some unintentional vulnerability in an otherwise-useful piece of software, and the rapid-response might involve temporarily disabling some features of that software for a day or two, while a real fix is developed.

Yeah, a significant proportion of malware requires human assistance to get a foothold; it’s easier to get the humans to do that when the machines are multi-purpose and connected to the internet, because they can be persuaded to click links or open email attachments, and the vulnerabilities of their online collaborative tools can be exploited. But even without that, an attacker can just fly a drone over an area where the staff sit to smoke or eat lunch, drop a USB stick containing a file called ‘Senior_Management_Salary_Reviews_Confidential’, and let some curious end user open the door.

This is also important - Antivirus software has broadly three (maybe more) ways of working:

  • Signature recognition - malware is detected on the basis of what it looks like - patterns of code in the malware files
  • Permissions engineering - malware can’t do what it wants to do, because important parts of the system are locked down or hidden (over and above the standard permissions model of the OS)
  • Heuristics - malware is detected based on the sort of thing it is trying to do - such as renaming a lot of files, making a lot of network connections, etc - actions that might be individually innocent and permissible in principle, but are suspicious en masse.

A new threat won’t be detected by signature, but the other two methods might (or might not) intercept it.
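A minimal sketch of that heuristic idea, with made-up event names and thresholds - the point is only that actions which are individually innocent become suspicious in bulk:

```python
from collections import Counter

# Hypothetical thresholds; real products weigh many more behaviours
# and combine them into a score rather than using hard cut-offs.
SUSPICIOUS_THRESHOLDS = {
    "file_rename": 100,     # mass renames look like ransomware at work
    "network_connect": 50,  # connection floods look like worm scanning
}

def is_suspicious(events: list[str]) -> bool:
    """Flag a process whose action counts cross any threshold."""
    counts = Counter(events)
    return any(counts[action] >= limit
               for action, limit in SUSPICIOUS_THRESHOLDS.items())
```

Unlike a signature, this can fire on malware no one has ever seen before - at the cost of occasional false alarms on legitimate software that happens to rename or connect a lot.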

The opportunities where a hacker can send a packet to a PC and have it allow access are few and far between, and each one is usually an extreme situation that is plugged as soon as it comes to light. (Which is why using a zero-day exploit is expensive for a hacker - it tells the companies what to fix.) Not to mention that most PCs are behind firewalls, often ones actively examining traffic. It’s not just the content (although unfortunately most web traffic is encrypted, so harder to inspect at the firewall) but the patterns. If a virus has to “phone home” for further downloads or instructions, those sites often become known and blocked. It’s a constant arms race. Usually it’s a very specific circumstance - i.e. if you open this specially booby-trapped ZIP file with this version of WinRAR…

USB is even worse. Some devices - USB sticks and printers, for example - can install their drivers when first connected. To some extent, signed certificates (which Stuxnet stole from legitimate sources) are a security measure to validate these programs. However, USB sticks can be set up, by more advanced actors, to substitute something more nefarious for the standard memory-stick driver. Also, recall there was a warning a while ago about plugging into random USB charger ports in public places, in case those were actually trying to load malware onto your phone. (More of a scare than anything, but…)

And AV programs will scan files and usually block them if not kosher - so your executive pay spreadsheet better be using a virus technique unknown to commercial AV to have a hope of working. Most AV looks for simple stuff, like labelling an executable as a spreadsheet, or embedding an executable inside a spreadsheet cell.

Or a USB device that looks exactly like USB storage can be made to be seen by the host machine as just a standard USB keyboard - whereupon the microcontroller inside it just sends a stream of keypresses. Most of the stuff in Windows can be done via the keyboard, so it’s relatively easy to script a series of keypresses that will launch the default browser, download an executable file from somewhere, open it, and answer ‘yes’ at the privilege escalation dialog, or perform other such actions that compromise the machine.
That sort of attack is pretty difficult to guard against, since end users want to be able to just plug in a new keyboard and start using it.

I think that if I were going that attack route, I’d make the USB device look like a keyboard and an external drive, and then use the “keyboard” for just enough keystrokes to run a program off of the drive. No network connection needed that way, and almost no windows opening that would be visible to the human who plugged in the drive.

One important point, though, is that customized hardware like that wouldn’t be contagious. It’d only spread as fast as you could manufacture and plant the fake thumb drives. It might be worthwhile for an espionage agency trying to sabotage a very specific high-value target (like Stuxnet did), but it won’t spread like wildfire and bring the world to its knees.

Yeah. The hardware is the Trojan horse to get the soldiers through the outermost gate(s). After that, any spread depends on the payload app(s) exploiting whatever internal network inside whichever perimeter(s) exist wherever the USB device(s) first got plugged in and went live.

The point, though, is that once a piece of software is installed, the sky’s the limit. It can infect other devices on the same internal network, if that’s the programming. It can download more pernicious software - plenty of malware uses the person’s contact list, or builds a list of every address that’s sent you email, then sends the infectious material out to them.

The catch is to not get noticed. The more things the malware does, the more likely something will call attention to it. Downloading additional malware? The source site may be flagged already. Reading too many files? The AV may flag that. Are you sure your hack can disable AV surveillance to allow it to continue to run? The AV program may be watching for, and proof against, tricks that try to terminate it. Excessive email volume will trigger warnings. The email server has its own AV variation specifically looking for certain patterns.

This is why often computer viruses are very like their biological namesake. One appears everywhere to take advantage of a newly discovered vulnerability before it is guarded and vaccinated against, then reappears sporadically when it may find the occasional victim not yet properly protected.

The keyboard part is very creative. But again, even running a program triggers AV reviews, and the “Allow?” prompt does not appear in the same console window - especially if the AV also provides a prompt. Making a successful virus today is not a trivial programming exercise.

Not the sky. A fake thumb drive can use the keyboard trick to gain full access to the computer it’s plugged into, and if that computer contains credentials for accessing some other computers, it can gain access to those computers, too. But that’ll just get that network. Getting access to one computer doesn’t give you the ability to infect any other networks: If it did, we’d all be screwed, because every hacker already has full access to one computer.

I saw a security audit once, a while ago - it took the auditors less than 20 minutes to get administrator access. This gave us some serious clues about what to change. For example: DO NOT GIVE ACCOUNTS THE SAME PASSWORD. A lot of generic accounts (like the PC that scans entry cards) may still have a shared one hanging around. If the user has admin access to their machine and a hacker gets on, the hacker can download the security database, which includes the encrypted tokens for anyone who has logged in during the last month - and a dictionary attack can then often find the network admin password if the network administrator has been on that PC recently (i.e., doing maintenance). Turn off the option to enumerate usernames from the server (i.e., send “who is #1?”, get the full username; repeat until you have a list of users…). Regularly review who has admin access - too often, diagnosing software issues includes adding full permissions and then forgetting to remove them.

Microsoft has gone a long way toward mitigating some of these issues by removing insecure defaults, but it’s still complex out there. For example: disable Ethernet ports not in use, and filter unauthorized firewall traffic. Today you can build a simple PC the size of a USB key with an Ethernet plug. Find a wall socket in a hidden corner of a building, plug this in, and it phones home - now the attacker is effectively sitting at a PC in your building, behind the firewall, free to explore at leisure.

IT security is a huge and complex business today.

If you have a system where hiding the usernames represents any increase in security at all, then your security is already so weak as to be practically nonexistent. As soon as anyone in your organization emails anyone outside it, everyone knows the format by which ordinary users’ usernames are formed from their human names, and it’s easy to find the human names of people (especially important people) who work at a company. Meanwhile, there are probably also usernames called “admin”, “root”, “system”, “techsupport”, and so on in your system.

Meanwhile, yes, many intranets (WiFi and/or Ethernet) are set up in such a way that having a toehold in any computer on the intranet makes it much easier to access everything else on the intranet. Which is also a sign that your intranet design is bad.

One of the cautions was “rename the administrator account to something different.” (Some even suggested then creating a powerless account named “administrator”.) Just because something might be obvious does not mean you have to hand it to hackers on a silver platter. Unless you’re a prime target, the idea is that if you are more difficult to hack than the next guy, they will move on to an easier target.

Also, just knowing the email is “Bob.Smith@mycompany.com” does not reveal whether the username is bob, bsmith, bobs, roberts, rsmith, rjsmith, etc. - or even guarantee the Windows domain name is “mycompany.com”. Make them work for every bit. The security company demonstrated that by enumerating every name in the user list, they found several accounts, mostly utility accounts, where the username and password were the same. Since they did not want to lock people out, they limited their tests to only five simple passwords per user. You can set utility accounts to disallow interactive logins, and set accounts to disallow remote login (i.e., over the network rather than at the console). Disable sharing of the C: drive so the security database is not easily accessible. Enforce access rules - especially, the backups should be protected (duh).
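To show how many formats an attacker has to try for each name, here’s a toy generator of common corporate username conventions (this list of formats is my assumption, not exhaustive, and real auditors work from much larger pattern lists):

```python
def username_candidates(first: str, last: str) -> list[str]:
    """Plausible login-name formats derived from a human name."""
    f, l = first.lower(), last.lower()
    return [
        f,              # bob
        f + l[0],       # bobs
        f[0] + l,       # bsmith
        l + f[0],       # smithb
        f + "." + l,    # bob.smith
        f + "_" + l,    # bob_smith
    ]
```

Each extra uncertainty (username format, domain name, lockout policy) multiplies the attacker’s work, which is the whole point of not handing any of it over for free.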

It’s like your house: you can lock the doors. If someone is determined to get in and willing to use a crowbar or break doors, they will - but the average burglar will give up and go elsewhere.