I have zero idea about this case, but it seemed like there was a question about why cases involving purely electronic evidence take so long, and I can maybe help answer that in the larger context…
So, not specific to this case, but I did work in the electronic discovery industry for some time. This is the industry (yes, entire industry) that exists around the process of turning over relevant electronic data between different parties who are hostile to each other (e.g. because of a lawsuit, or a DOJ investigation, etc). It is notoriously expensive and slow (and inaccurate!).
Yes, the data exists on computers… of a sort. The evidence needed is the proverbial needle in a stack of needles: you are searching for some incriminating email or document among potentially many, many other emails and documents. The main problem is sheer volume.
Imagine a lawsuit against a large manufacturer that employs tens of thousands of people and produces many thousands of products. If someone successfully files a discovery motion against them to turn over “all documents and correspondence pertaining to defects in product X”, that search is going to be over a LOT of documents. Depending on the time ranges involved and the number of people named in the motion, the results could easily be several terabytes of email and document data.
All of this data needs to be loaded into a system where a team of lawyers can search and review everything in it: this means that email archives need to be pulled and exported from whatever mail server/tape backup/network storage/etc system. This data has to be dumped onto hard drives and shipped, and loaded into whatever system is to be used by the lawyers - this means processing whatever format the IT dept used to dump the data in the first place. The data then has to be processed extensively: Exotic document formats need to be converted into searchable form. Email headers need to be normalized and indexed and cross referenced. Images need to be OCR’d (people scan documents quite a bit!). Any foreign language needs to be identified for special handling/indexing. All text needs to be indexed so it is searchable.
(All of this data processing is slow, and requires a reasonable amount of computing power. I worked on systems that involved dozens of servers dedicated to this task, working around the clock.)
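To give a flavor of what that processing stage involves, here is a toy Python sketch of an ingestion pipeline: walk a dump directory, pull text out of each file (OCR for scanned images, header + body extraction for email), flag the language, and build a searchable index. This is my own illustration, not how any real e-discovery platform works; pytesseract/Pillow and langdetect are assumed third-party dependencies, the “collected_data” folder is hypothetical, and real systems handle hundreds of formats across dozens of servers.

```python
# Toy sketch of an e-discovery ingestion pipeline (illustrative only).
# Assumes pytesseract + Pillow for OCR and langdetect for language ID;
# everything else is the Python standard library.
import email
import os
from collections import defaultdict
from email import policy

from PIL import Image          # assumed dependency (OCR)
import pytesseract             # assumed dependency (OCR)
from langdetect import detect  # assumed dependency (language identification)

inverted_index = defaultdict(set)  # token -> set of document paths


def extract_text(path):
    """Turn one collected file into searchable text."""
    ext = os.path.splitext(path)[1].lower()
    if ext in (".png", ".jpg", ".jpeg", ".tif", ".tiff"):
        # Scanned documents are everywhere; OCR them.
        return pytesseract.image_to_string(Image.open(path))
    if ext == ".eml":
        # Normalize email: pull the headers and the plain-text body.
        with open(path, "rb") as f:
            msg = email.message_from_binary_file(f, policy=policy.default)
        headers = "From: {}\nTo: {}\nDate: {}\nSubject: {}".format(
            msg["From"], msg["To"], msg["Date"], msg["Subject"])
        body = msg.get_body(preferencelist=("plain",))
        return headers + "\n" + (body.get_content() if body else "")
    # Fallback: treat anything else as plain text. Real pipelines convert
    # dozens of "exotic" formats (old office suites, databases, archives...).
    with open(path, errors="replace") as f:
        return f.read()


def ingest(path):
    text = extract_text(path)
    try:
        language = detect(text)  # flag foreign-language docs for special handling
    except Exception:
        language = "unknown"
    for token in text.lower().split():
        inverted_index[token].add(path)
    return language


def search(term):
    return sorted(inverted_index.get(term.lower(), ()))


if __name__ == "__main__":
    for root, _dirs, files in os.walk("collected_data"):  # hypothetical dump dir
        for name in files:
            ingest(os.path.join(root, name))
    print(search("defect"))  # the needle-in-a-stack-of-needles part
```

Even in this toy form you can see why it gets slow: every single file has to be opened, converted, and tokenized before anyone can run a single search over it.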
Lawyers for the company then need to review all of these documents and identify any that are covered under attorney-client privilege, or otherwise fall outside of the criteria for discovery. Finally, once they’ve gathered all of the documents to be turned over, the whole bundle needs to be itemized and exported into a format that can be shipped to the party who filed the motion in the first place… depending on the case, this may not be much data, or it can be massive. I’ve been involved in cases with millions of pages turned over.
All of that data gets dumped onto new hard drives, and shipped.
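For a rough idea of what the “itemized and exported” step looks like, here is a minimal sketch: copy the reviewed, non-privileged documents into an output folder and write a manifest alongside them. The field names, folder layout, and document-numbering scheme here are all made up for illustration; actual productions follow whatever format the parties have negotiated (which, as noted below, is itself a source of gamesmanship).

```python
# Toy sketch of the production/export step (illustrative only).
import csv
import hashlib
import os
import shutil


def produce(documents, out_dir="production_001"):
    """documents: iterable of dicts like
    {"path": ..., "custodian": ..., "privileged": bool, "redacted": bool}"""
    os.makedirs(out_dir, exist_ok=True)
    with open(os.path.join(out_dir, "manifest.csv"), "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["doc_id", "original_name", "custodian", "redacted", "sha256"])
        doc_id = 0
        for doc in documents:
            if doc["privileged"]:
                continue  # withheld; in practice these get listed on a privilege log
            doc_id += 1
            dest_name = "DOC{:07d}{}".format(doc_id, os.path.splitext(doc["path"])[1])
            shutil.copy(doc["path"], os.path.join(out_dir, dest_name))
            with open(doc["path"], "rb") as src:
                digest = hashlib.sha256(src.read()).hexdigest()
            writer.writerow([dest_name, os.path.basename(doc["path"]),
                             doc["custodian"], doc["redacted"], digest])
```

The receiving side then gets a bundle exactly like this and has to load it into their own review platform, which is where the next step comes in.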
At this point, the party that filed for discovery gets to have THEIR team of lawyers comb over everything they received to actually look for evidence. Usually, they use a software product similar to the one used to locate the documents in the first place, so that data needs to be loaded, indexed, etc. all over again…
Also, sometimes documents are redacted during this process, which is typically manual and very slow. There is also a fair amount of gamesmanship that goes on about the format the data is handed over in, which helps keep the process slow.
Again, I don’t think this necessarily is what happened here, and the amount of data may have been fairly small in this case. But in general, this is a thing.