Also, I don’t think the rules that apply to true voice traffic traveling over copper telephone lines are the same rules that apply to VoIP or “public” internet traffic. Isn’t it the user’s responsibility to encrypt/decrypt their own IP traffic if they want it to be secure?
No you wouldn’t. You’d just need to buy off the computer programmers who set the thing up. The system does mostly biology work, but when you need 1.6 million CPUs, you divert as much capacity as you need to spy work. The end user sees the same neat graphics and incomprehensible code.
Folding@home has been at it for more than 10 years and has had far more people join than they originally envisioned. Yet they seem to make very few scientific announcements. Same thing for SETI@home.
But it would be easier, and much, much more powerful, for them to just build their own supercomputer (which, of course, they have), using off-the-shelf chips as well as special-purpose/custom chips, FPGAs, etc.
True dat. In the short run. But 10 and 20 years from now they would still have access to 1 million of the most powerful home computers in the world.
I’m sorry, are you contending that the bandwidth is the insurmountable problem, or are you claiming that it would be impossible to keep under wraps? I was responding to the former argument.
I’m not claiming that it’s real. It’s probably not. I’m saying that’s more technically plausible than many of these responses believe. Voice recognition and expert systems have made tremendous strides over the past few years. “Double the killer delete select all” isn’t state of the art any more.
Sorry, my post wasn’t entirely clear. Bandwidth would be a big, but not insurmountable, problem if the voice-processing servers were located outside of the Telco networks. All of the voice traffic would need to be sent to them, which would require fat data pipes all over the place. If the servers were hosted inside the Telcos, bandwidth would be less of a problem, but greater co-operation with them would be required. Either way, a lot of people working for these companies would have to be in on the conspiracy, as it would have a major impact on their engineering operations. That’s what makes it a fantasy.
I agree the technical problems are not as great as some posters think. If I get the time today, I’ll post some technical links to YouTube’s copyright matching technology, and a pattern matching system called Autonomy. They show some of the things that are possible these days.
Did you read the links I posted?!
This is not conspiracy woo woo, the Bush administration admitted to it. There is no need to discuss it as a hypothetical, it is real.
Yes, I read it. There is a big difference between that and surveillance of the entire telephone network, which the OP is proposing. Some people look at something like that and wonder if large-scale surveillance is going on unreported. I’m arguing that it would be much harder to hide a larger-scale operation.
Something tells me the NSA/IC doesn’t have problems getting bandwidth. They pay for it. And getting translators is (usually) not too big an issue (they pay for it). Obviously, having a human/monkey listening to everything would be problematic, but if you can buy voice recognition software for your home PC (or Mac… we still like you Mac-heads, just not in that way) and it will type out your speech into easily searchable terms… do you think the intel community can’t? (A toy sketch of that transcribe-then-search idea is below.)
That’s just silly.
On the other hand, I’ve seen some government implementations that made me glad that we didn’t ask them to invent fire… they would think that was too complicated.
I’ve always figured we all might as well be trusting or paranoid as we choose (kind of like being married); hypotheses don’t matter, and you’ll never really know for sure.
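For what it’s worth, the searchable-transcripts point is easy to picture as a pipeline: speech-to-text, then plain keyword search. Here’s a minimal Python sketch, where transcribe() is a hypothetical stand-in for any speech-recognition engine and the watchlist terms are made up:

```python
# Toy sketch of "transcribe, then search". transcribe() is a
# hypothetical placeholder for a real speech-to-text engine; it
# returns a canned transcript so the example runs end to end.
WATCHLIST = {"package", "meeting"}  # illustrative terms, not real ones

def transcribe(audio: bytes) -> str:
    # Placeholder for a real speech-recognition engine.
    return "the package arrives before the meeting on friday"

def flag_call(audio: bytes) -> bool:
    """Return True if the transcript mentions any watchlist term."""
    words = set(transcribe(audio).lower().split())
    return not words.isdisjoint(WATCHLIST)

print(flag_call(b"..."))  # True: transcript hits two watchlist terms
```

Once the audio is text, the expensive part is over; the search itself is trivial.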
Wouldn’t the amount of bandwidth required to monitor every phone conversation be equal to the amount of bandwidth required to conduct every phone conversation? In other words, the total surveillance system would require infrastructure roughly equal in size to the actual phone system.
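A back-of-envelope check of that claim (the figures here are my assumptions, not anything from the thread): uncompressed digital voice runs 64 kbps per channel under standard G.711 PCM, so mirroring every call in progress really does amount to a second phone network.

```python
# Back-of-envelope: bandwidth needed to mirror every call in progress.
# Assumptions (mine, for illustration): 64 kbps per voice channel
# (standard G.711 PCM) and a hypothetical 3 million simultaneous calls.
KBPS_PER_CALL = 64
CONCURRENT_CALLS = 3_000_000

total_gbps = KBPS_PER_CALL * CONCURRENT_CALLS / 1_000_000
print(f"Sustained mirror traffic: ~{total_gbps:.0f} Gbps")
# ~192 Gbps -- roughly a duplicate of the network's own load,
# which is exactly the poster's point.
```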
You can reason about this from a pure mathematical perspective: the rarer an event, the more accurate filters need to be to make filtering even viable. For example, there are many rare cancers that we deliberately do not test for because, even though the tests are 99% accurate, that accuracy rate would still result in thousands of false positives for every genuine case.
Same with this hypothetical system: even if we designed a filter that was correct 99.99% of the time, the output would be swamped by false positives and useless for analysis purposes.
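To make that base-rate point concrete, here’s a quick calculation in Python (every number is invented for illustration):

```python
# Base-rate sketch: why a 99.99%-accurate filter still drowns in
# false positives when the target event is rare. All figures are
# hypothetical.
daily_calls = 300_000_000   # assumed call volume per day
true_hits = 100             # assumed genuinely interesting calls
sensitivity = 0.9999        # P(flag | interesting)
specificity = 0.9999        # P(no flag | boring)

false_positives = (daily_calls - true_hits) * (1 - specificity)
true_positives = true_hits * sensitivity
precision = true_positives / (true_positives + false_positives)

print(f"False positives/day: {false_positives:,.0f}")   # ~30,000
print(f"P(interesting | flagged): {precision:.4%}")     # ~0.33%
```

So even at 99.99% accuracy, roughly 300 of every 301 flagged calls would be noise, and analysts would spend all their time chasing them.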
Wow, that sounds eerily related to Room 641A, although that’s a fiber optic thing. This is the same operation that the EFF lawsuit mentioned earlier was based on.
You could greatly reduce the number of conversations you need to analyze by not only processing individual communications but also correlating traffic patterns with those communications. If you see one phone call with some hits in it, no big deal. If you see a cluster of communication hits within two or three conversation-hops of each other, you take a deeper look.
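A crude sketch of that hop-clustering idea: treat the call records as an undirected graph and count keyword hits within k hops of a given number. The graph, the hit list, and the hop limit here are all made up:

```python
from collections import defaultdict, deque

def hits_within_k_hops(call_graph, flagged, start, k):
    """Count flagged numbers reachable from `start` in <= k call-hops."""
    seen, frontier, hits = {start}, deque([(start, 0)]), 0
    while frontier:
        node, depth = frontier.popleft()
        if node in flagged:
            hits += 1
        if depth < k:
            for peer in call_graph[node]:
                if peer not in seen:
                    seen.add(peer)
                    frontier.append((peer, depth + 1))
    return hits

# Hypothetical call records: each pair is one call between two numbers.
calls = defaultdict(set)
for a, b in [("A", "B"), ("B", "C"), ("C", "D"), ("A", "E")]:
    calls[a].add(b)
    calls[b].add(a)

flagged = {"A", "C", "D"}  # numbers whose calls had keyword hits
# Three hits within two hops of "B": that cluster gets a deeper look.
print(hits_within_k_hops(calls, flagged, "B", k=2))  # -> 3
```

A single isolated hit scores low and gets discarded; a tight cluster of hits bubbles up for human attention, which is how you cut the haystack down.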
I don’t know about voice either but I have some (first-hand, but not highly technical) knowledge of the architecture used by AT&T, at least in the Southeastern U.S. The project was referred to as CALEA or “Lawful Intercept.” I don’t know what’s designated “lawful” or how it was ultimately used, but it certainly had (and I assume still has) the ability to mirror all data traffic passing through a POP.