Websites: "powered by XYZ" - bad security choice?

It’s really common to see declarations like “powered by Wordpress/Joomla/Whatever” on CMS-based websites now - aren’t these sites creating greater security exposure by assisting attackers in targeting their exploits?

Since this is GQ, factually, yes.

However, I imagine that the people who make the software are more concerned with making money than keeping stuff secure. If security holes aren’t big news and keeping customers away, then they’re happy enough to promote themselves loud and hard.

(Note that the SDMB is “Powered by vBulletin® Version 3.7.3”. A quick check reveals that the very next version has a release note identifying that a security hole was present in the previous version.)

Maybe a little, but finding out which CMS a site is using isn’t usually that hard. You can examine their site structure, source code, cookies, etc. and look for hints.
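To make that concrete, here is a minimal sketch of that kind of hint-scanning. The marker strings below are a few well-known giveaways (WordPress's generator meta tag and `/wp-content/` paths, Joomla's generator tag, vBulletin's footer text); real fingerprinting tools use databases of hundreds of these, so treat this as illustrative only.

```python
import re

# A handful of common CMS giveaways - illustrative, not exhaustive.
CMS_HINTS = {
    "WordPress": [r'<meta name="generator" content="WordPress',
                  r"/wp-content/", r"wordpress_logged_in"],
    "Joomla":    [r'<meta name="generator" content="Joomla',
                  r"/media/jui/"],
    "vBulletin": [r"Powered by vBulletin", r"bbsessionhash"],
}

def guess_cms(html: str, cookies: str = "") -> list[str]:
    """Return the CMSes whose markers appear in the page source or cookies."""
    blob = html + " " + cookies
    return [cms for cms, patterns in CMS_HINTS.items()
            if any(re.search(p, blob) for p in patterns)]

page = ('<html><head><meta name="generator" content="WordPress 5.9"></head>'
        '<body><img src="/wp-content/uploads/logo.png"></body></html>')
print(guess_cms(page))  # -> ['WordPress']
```

Note that none of this depends on a "powered by" line being present - the structure of the site gives it away anyway.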

There are also sites that can help you guess the CMS being used. If your only defense against attackers is removing the CMS’s logo, well… it’s not very safe.

In theory, nearly all of that can be modified to obscure the identity of the CMS (although I’m sure few people do).

Sure - I wouldn’t dream of using it as an actual line of defence - but the point is that Google indexes the ‘powered by [whatever]’ line, so leaving it in means an attacker who has chosen their exploit before their target can search for tagged sites; if the text were simply removed, they wouldn’t even know the site existed.

Attackers don’t find out what CMS is being used by reading something. They have a list of vulnerable pages and directories and just hit a domain with all of their scripts. If they get in they get in. If they don’t, they move on.
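A sketch of that "spray and pray" pattern: the script walks a list of well-known default paths and records which ones respond. The fetcher here is a stand-in dict so the example runs offline; the paths are common defaults for the CMSes named in this thread, and a real script would simply issue HTTP requests.

```python
# Well-known default paths - a real probe list would be far longer.
COMMON_PROBES = [
    "/wp-login.php",       # WordPress
    "/administrator/",     # Joomla
    "/admincp/index.php",  # vBulletin
]

def probe(domain: str, fetch) -> list[str]:
    """Return the probe paths that respond with 200 on the domain."""
    return [path for path in COMMON_PROBES if fetch(domain, path) == 200]

# Fake "internet" for the demo: one WordPress site, everything else 404s.
SITES = {("blog.example", "/wp-login.php"): 200}
fetch = lambda domain, path: SITES.get((domain, path), 404)

print(probe("blog.example", fetch))    # -> ['/wp-login.php']
print(probe("static.example", fetch))  # -> []
```

If it gets a hit, the script moves on to its exploits; if not, the next domain on the list.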

None of the sites on my company’s Web server use a CMS, and the servers don’t even run PHP. Yet the sites’ logs contain thousands of probe hits, one 404 after another: 404; 404; 404.
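You can see this pattern by filtering your own access logs. A small sketch, using invented Apache common-log-format lines (the IPs and timestamps are made up, but the shape - endless requests for CMS paths that were never installed - matches what the post describes):

```python
import re
from collections import Counter

# Invented Apache common-log-format lines for the demo.
LOG = """\
203.0.113.5 - - [10/Oct/2008:13:55:36 -0700] "GET /wp-login.php HTTP/1.1" 404 209
203.0.113.5 - - [10/Oct/2008:13:55:37 -0700] "GET /administrator/ HTTP/1.1" 404 209
198.51.100.7 - - [10/Oct/2008:14:01:02 -0700] "GET /index.html HTTP/1.1" 200 5120
"""

line_re = re.compile(r'"(?:GET|POST) (\S+) [^"]*" (\d{3})')

def count_404_probes(log: str) -> Counter:
    """Tally requested paths that returned 404."""
    hits = Counter()
    for line in log.splitlines():
        m = line_re.search(line)
        if m and m.group(2) == "404":
            hits[m.group(1)] += 1
    return hits

print(count_404_probes(LOG))  # the two CMS probes show up; the real page doesn't
```

Run against a real log, the top entries are almost always CMS admin pages the server has never hosted.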

Hackers don’t need a list of sites with a CMS; they need a list of domains and a list of vulnerabilities.

How are those lists of domains/vulnerable pages/directories compiled in the first place?

Anyone can download a copy of any free CMS, install it and sit there and try to find vulnerabilities.

You can also find out about vulnerabilities by following the CMS’s upgrades. If something gets patched, you can assume it was vulnerable. You can also assume that 99% of the users of the software have not updated their code.
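That reasoning can be sketched in a few lines: parse the advertised version out of a "Powered by …" footer and compare it against the first release that patched a known hole. The version numbers below echo the vBulletin example earlier in the thread and are illustrative only, not a real advisory lookup.

```python
import re

def parse_version(footer: str) -> tuple:
    """Extract a dotted version number from a 'Powered by' footer."""
    m = re.search(r"Version\s+([\d.]+)", footer)
    return tuple(int(x) for x in m.group(1).split(".")) if m else ()

def looks_vulnerable(footer: str, first_fixed: str) -> bool:
    """True if the advertised version predates the first patched release."""
    v = parse_version(footer)
    return bool(v) and v < tuple(int(x) for x in first_fixed.split("."))

# Illustrative: the thread notes the SDMB footer and that the *next*
# release carried a security fix.
footer = "Powered by vBulletin® Version 3.7.3"
print(looks_vulnerable(footer, "3.7.4"))  # -> True
```

Since most installs lag behind the patched release, the changelog alone tells an attacker which doors are probably still open.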

Hackers hack shit for fun and profit. It’s what they do. They’re also very nice and work in large groups, and share their findings with others who use the info and build on it.

If obscurity about what system you’re using represents any non-negligible increase in your total security, then your total security is weak enough to be already useless. It’d be like trying to improve the security of a padlock by hiding the name of the lock manufacturer. If knowing the name of the manufacturer makes any difference, it’s a lousy lock to begin with. Proper padlock security comes from getting a high-quality lock (where you don’t care if attackers know it’s high-quality) and then being careful about who you give the keys to. Proper computer security works the same way.

You can scan IP ranges and do reverse DNS, or just query DNS servers for the list of hosts they are authoritative for. That is how Google finds new sites to index (if you don’t register the site first).

Or just use Google, since they’ve done all the hard work.


Yeah, point taken - I guess if it were the case that individual, malicious/personal handcrafted attacks significantly predominated over automated script-based ones, it would be worth obscuring the machinery of the site.

No, you (generally) can’t. And no, they don’t.

Quoting for truth. This is how it’s done.

You’re right. My excuse (such as it is) is that when I used to do such things, you could (host -a -l would happily hand you a zone transfer). Of course, that was in the ’90s, when people on the internet trusted each other, no one had considered email relays and spoofing, and it was before AltaVista made finding things on the internet much easier.

Now, if you want to look for subdomains, you use Google. But it is true that you can get a large amount of data about hosts out of WHOIS searches and DIG/NSLOOKUP tools. It does not take long for a web host (even one not indexed) to be probed in a variety of ways that will reveal enough of the underlying structure for a specific attack. I regularly scan my server logs to see what is hitting Apache, and I serve very little, and nothing public.