DesertNet? As in DesertNet, located in Arizona? Do TPTB send the hamsters south for the winter, or are we depending on a bunch of senior hamsters who can’t live on Social Security alone?
Curious as to what OS, database, and web server package the server runs. Best bang for your buck is probably a Linux OS, Postgres or MySQL for the DB, and the Apache web server.
It’s always been MySQL and Apache.
Jenny
your humble TubaDiva
Administrator
Might be time to try Postgres or MariaDB.
MariaDB is a fork of MySQL: MariaDB - Wikipedia
DesertNet was originally started as a platform for alternative newspapers; the Reader was one of the founders.
I think we’ve always been with them.
Jenny
your humble TubaDiva
Administrator
The current flavor of MySQL is 10.0.21-MariaDB-log.
Jenny
your humble TubaDiva
Administrator
IANADB expert either, but everybody keeps telling me 22 million posts is a bunch. We’re discussing with TPTB how we can optimize that load. It is taking longer to figure this out and implement a fix than any of us would like.
Jenny
your humble TubaDiva
Administrator
Because of my local time zone, I’m usually logged in between 10 pm and 6 am Chicago time, and I’m having pretty much the same experiences as those of you on the other side of the clock. So the former seems to be the case.
Back up the database, then start deleting records; if the problem goes away with a smaller database, that proves the DB is the problem. Delete the oldest 25%, then 50%, etc. (rough sketch below).
22 million records sounds like a lot, but I worked with databases of around 1 billion records using Oracle DB on very big/fast Linux servers. The DBAs had to really optimize those DBs, and we had programs that took 12 hours to run even when the data was spread across 10 different servers, each server getting 10% of the data. Banks, airlines, stock brokers, etc. have giant databases too.
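To make that concrete, something along these lines would do it. This is only a sketch: the table and column names (post, dateline as a Unix timestamp) are my guess at the vBulletin schema, and the cutoff date is just an example, so verify both, and keep a tested backup, before running anything.

-- Sketch only: assumes a vBulletin-style post table with a
-- Unix-timestamp dateline column. Verify the real schema first.

-- See how many rows would go if everything before 2005 were dropped.
SELECT COUNT(*) AS doomed_rows
FROM post
WHERE dateline < UNIX_TIMESTAMP('2005-01-01');

-- If that looks right (and the backup is verified), delete them.
DELETE FROM post
WHERE dateline < UNIX_TIMESTAMP('2005-01-01');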
Cut the junk traffic that’s chewing up CPU cycles: the CIDR blocks of useless scrapers, bots, search engines not named “Google” or “Bing”, SEO bots from SEMrush/Moz/etc., Amazon AWS and Microsoft Azure, and spam/scraper/hacking-friendly hosts like ColoCrossing, Psychz Networks, ServerMania, Yesup, OVH, Hetzner, Frantech, Eonix, Leaseweb/Nobis/Ubiquity, etc. (example Apache rules below).
Give up vBulletin 3, and consider XenForo or IPB.
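For the blocking part, assuming the board is still fronted by Apache, rules along these lines in the server config or .htaccess would drop that traffic (Apache 2.4 syntax; older 2.2 installs use Order/Deny directives instead). The CIDR ranges below are documentation placeholders, not the actual netblocks of any of those hosts, so substitute the real ones.

# Sketch only: deny a few example CIDR ranges (placeholders, not real netblocks).
<RequireAll>
    Require all granted
    Require not ip 192.0.2.0/24
    Require not ip 198.51.100.0/24
    Require not ip 203.0.113.0/24
</RequireAll>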
You want us to delete content from the board?
Jenny
your humble TubaDiva
I assume that’s what the backup is for?
AIUI, deleted threads aren’t actually deleted; they go to a cornfield, right? How many tens of thousands of chunks of spam, socks, etc. have been dumped in the cornfield over the years? Would anything really be lost if you plowed the field?
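If the cornfield is just vBulletin’s soft-delete flag, someone with database access could size it with a query along these lines. Fair warning that this is a guess at the schema: IIRC vBulletin 3 marks soft-deleted posts with visible = 2 in the post table, but that’s worth checking before trusting the numbers.

-- Sketch only: assumes vBulletin 3's post table, where (IIRC)
-- visible = 1 is a normal post and visible = 2 is soft-deleted.
SELECT visible, COUNT(*) AS posts
FROM post
GROUP BY visible;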
For better or worse, I’m on the Microsoft side of things. In a situation like this, we’d use trace logs and analyze them with Profiler, which would tell us in detail which queries or operations were problematic, and we’d look for evidence of blocking or deadlocks. To rule out the Web server (which is not my area), I imagine we would look for CPU load, RAM utilization, disk activity, etc.
There must be some MySQL equivalent.
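From what I’ve read, the rough equivalents are the slow query log, plus SHOW FULL PROCESSLIST for catching long-running or blocked queries in the act. Something like this would switch it on at runtime (the 2-second threshold is just an example, and the settings can also go in my.cnf so they survive a restart):

-- Sketch: enable the slow query log on a running MariaDB/MySQL server.
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 2;

-- Then, while the board is crawling, see what's actually running:
SHOW FULL PROCESSLIST;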
Yes, you restore everything from the backup once the problem is fixed. I thought that was obvious, but I guess not.
There’s still the question of why the problem happens to some users more than others, and at some times more than others. If the problem were just a database that’s too big, I’d expect it to be slow for everyone all the time. It could be a combination of things, like a big database and a large number of concurrent users.
If it’s a problem with concurrency, it might be possible for us (the SDMB user community) to reproduce it by creating a test thread and all trying to post to it at the same time. What do people think of this idea?
TubaDiva, do you know whether the people who are investigating this have reproduced the problem, or have observed it happening?
Deleting records temporarily should be very easy to try; one SQL statement will do it.
If it turns out the fix is going to be too expensive, deleting records for good may be the only way to improve performance. There is no cost to doing that, except I suppose people may be unhappy. For me, I don’t care if everything older than, say, 5 or 10 years is deleted.
How do you propose deleting records temporarily? Do it in a transaction which you later roll back? If the number of records is significant, this would likely lock up the table entirely, making the database unusable by any other transaction.
To delete records temporarily and still have a usable system, you’d have to copy the records into a different table before deleting them, commit the transaction, and then copy the records back when you’re done testing.
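Roughly like this, though it’s only a sketch: the table and column names (post, dateline) are assumptions about the vBulletin schema, and on a 22-million-row table each step would still take a while, so it would need a maintenance window.

-- Sketch: park the old rows in an archive table instead of rolling back.
-- Assumes a post table with a Unix-timestamp dateline column; verify first.
CREATE TABLE post_archive LIKE post;

INSERT INTO post_archive
SELECT * FROM post
WHERE dateline < UNIX_TIMESTAMP('2005-01-01');

DELETE FROM post
WHERE dateline < UNIX_TIMESTAMP('2005-01-01');

-- ... run the performance test ...

-- Put everything back when the test is over.
INSERT INTO post
SELECT * FROM post_archive;

DROP TABLE post_archive;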
You don’t have to roll it back, although I guess you could if you wanted to.
The simple way is: after the test is done, wipe the database and restore all records from the backup. You’d have to keep users out of the system while the test is running; they can be let back in when the DB is fully restored.