Calculating power of all smartphones - can it be used for anything good?

I remember sharing my desktop computer a decade or so ago for the S E T I project. I think my cell phone now has more capacity and calculating power. What would be the best idea to share it with others? has been doing this kind of stuff for almost 20 years. They don’t seem to have any cell phone clients, though:

The problem with running the CPU of your phone at 100% continuously is twofold:

Obviously you’ll run down your battery in no time at all. So you’d only want to do this when connected to power. But even then the device would get relatively hot, as they’re not designed to do CPU-intensive tasks for long stretches of time. All this heat could be bad for various components, notably the battery.

I think in the future, in cold climates, we’ll have CPU wallpaper and we’ll rent out CPU cycles to the highest bidder, so we get to heat our houses for cheap with the waste heat.

One doesn’t have to wait for the future:

While distributed computing projects like SETI@home still do valuable work, the computing landscape has dramatically changed in the years since they were conceived.

Today the total computational capacity of all SETI@home computers places it at about 155th on the list of the top 500 supercomputers. IOW, there are 155 faster individual non-distributed supercomputers. Today the world’s fastest supercomputer (Sunway TaihuLight) is about 140 times faster than all the SETI@home computers combined, and it can tackle a much more diverse array of problems.

Within two years the IBM Summit supercomputer is expected to reach 200 petaflops, which would be about 300 times faster than all the combined SETI@home computers.

In the years since those distributed computing platforms were devised, supercomputing has shifted from vector-oriented machines to massively parallel machines, which has unleashed huge performance gains and displaced the “crowd-sourced” distributed approaches far down the list of computational capacity. They are still valuable and do useful work, but much faster performance is now available and accessible than in years past.

Smartphone battery life is bad enough as it is without some barely-useful distributed project tanking it even faster.

Smartphones can be useful for things other than distributed computing projects. For example, if all of the smartphones in the cars on a highway report their location regularly, the accumulated data can tell you a lot about how fast traffic is moving along that highway. Or if you can track each consumer in a supermarket by their smartphone, you might see patterns in where people stop. Perhaps a particular endcap has people stopping there, but sales figures show people aren’t buying what’s displayed there. (Yes, it’s a little Big Brother-ish, but let’s assume that the data is anonymized.)

It’s always been true that there were faster supercomputers than the combined capacity of any public distributed computing system. The point isn’t that SETI@home and such projects provide the fastest available computing, the point is that it’s FREE. Supercomputers are massively expensive, and even renting limited time on one costs a lot of money. Projects like SETI@home, prime searches, etc. need large amounts of computing power for long periods of time, which would require a big monetary investment if it were done on commercial supercomputers.


There is a distributed sensing project for detecting earthquakes:

Of course you wouldn’t want any harm to come to your cell phone, so there could be parameters like core temp under 80°C, only when charging, only between 11 pm and 6 am…
How about earning money for the leased time? Maybe my telephone company gives me back money if I let them use my phone? I know that operators have huge computer facilities for their networks.

That’s more like what I’m thinking of. Smartphones are less valuable for their computing power than for the fact they are mobile computers with various sensors and their location can be identified fairly closely.

don’t apps like Waze and Inrix do this already?


No offense, but you needn’t feel guilty that you’re using the billions-of-calculations-per-second-capable computer in your pocket to play Candy Crush instead of solving world hunger. Computers have been commodities for at least two decades now. That means they are not only ubiquitous but cheap and disposable. Plus, the days when computer scientists of the 60s and 70s were enamored with the notion that some kind of ‘electronic brain’ supercomputer could solve all the world’s problems are long, long past. And it was never true to begin with…

Another thing. Rather than having a supercomputer at your company or in your university, you can rent one through Amazon Web Services or similar companies.

Yes, and Google Maps too (Google bought out Waze a few years ago).
There’s also a project called CRAYFIS which attempts to use a network of smartphones to observe cosmic rays. An extremely powerful cosmic ray gets absorbed in the upper atmosphere and triggers a huge cascade of particles. These can be detected by smartphone cameras.

Apparently Google Maps is becoming extremely good at predicting traffic: they basically have phones send location and speed info, and they have constructed a big database of roads, times, and traffic patterns.

They can literally tell you when traffic is worse than usual, and devise a route that will avoid predicted future jams and bottlenecks.
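To make the idea concrete, here’s a toy Python sketch of that kind of aggregation. The road segment, the reports, and the 10 mph slack threshold are all made-up illustrations, not anything Google actually does:

```python
from collections import defaultdict

# Hypothetical anonymized reports: (road_segment, hour_of_day, speed_mph)
reports = [
    ("I-90 mile 12", 8, 24), ("I-90 mile 12", 8, 31),
    ("I-90 mile 12", 14, 61), ("I-90 mile 12", 14, 58),
]

# Accumulate every reported speed per (segment, hour) bucket
history = defaultdict(list)
for segment, hour, speed in reports:
    history[(segment, hour)].append(speed)

# "Typical" speed for each segment at each hour of day
typical = {key: sum(v) / len(v) for key, v in history.items()}

def worse_than_usual(segment, hour, live_speed, slack=10):
    # Flag traffic as unusual when live speed is well below the norm
    return live_speed < typical[(segment, hour)] - slack

print(worse_than_usual("I-90 mile 12", 14, 25))  # True: 25 mph vs ~59.5 typical
```

With enough phones reporting, the same lookup table also lets a router prefer segments whose predicted speed at your arrival time is high.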

Why did you put spaces between the letters of SETI?

And I doubt the actual computer experts ever really believed it, except maybe Minsky and a few others who got too excited about the prospects of AI before we learned how hard it was…

Anyone who wants pure computational power quickly can rent time on Amazon’s big compute cluster, for example. Putting together a smaller cluster of blade servers running Linux and MPI software is similarly easy, especially given that you can buy gigabit Ethernet routers and Cat6 cable at Best Buy. A step down from that, and you’re looking at Raspberry Pis, which aren’t that dissimilar from cell phones but are a lot cheaper.

So, yes, we are carrying around a lot of pocket computers these days. Yes, most of their clock cycles are pretty much wasted on things like looking up sports scores and other utter trivialities. Ultimately, it would be more trouble than it’s worth to harness those wasted cycles. I’m reminded of an old quote about plowing with a thousand chickens instead of two strong oxen. I think Seymour Cray said it…

That would have some pretty high cluck cycles.

Could you explain to non-CS types what the difference is between vector-oriented machines and massively parallel machines?

What’s the difference between a supercomputer and a server farm?

What does distributed computing have a comparative advantage for? Absolute advantage?

This isn’t a technical hurdle. BOINC for Android already does that:

It’s just that there is very little demand for it vs. renting out compute clusters. To give you an idea of the available power, an iPhone 6 might do about 200 gigaflops (billions of math operations per second). Your average gaming console or computer has 10x that, and there’s still no real demand for using them as distributed computers (Sony tried it with the PS3 and eventually gave up). Even Bitcoin miners, who generate income from nothing but math, gave up on using off-the-shelf hardware and built their own custom chips (because the cost of electricity became too expensive after a while).

Industry doesn’t really care where the computing comes from, normally, just the $/FLOP. But that has to take into account not only raw computing power but also the overheads: electricity, management (making sure results come back reliable, consistent, and not fraudulent), coding (cell phones have different CPU types and can’t all run the same code), results transmission (who is going to pay for the data downloads and uploads?), etc. Phones just aren’t that powerful. If you think otherwise, feel free to write your own app that pays users for their computing and see if it takes off… it hasn’t really with desktop or laptop computers, both of which are far more powerful already and almost permanently connected to fast internet.

Various processing flavors/styles/approaches:
Single instruction on single set of data (basic original non-vector CPU model)

Single instruction on multiple sets of data (vector or SIMD):
The hardware is handed a whole batch of values and applies one operation (add, multiply, etc.) to all of them at the same time.
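Here’s a toy Python contrast between the two models. Python itself won’t emit real vector instructions (libraries like NumPy do, by calling into native code), so the `vector_add` helper below is just a stand-in for a single hardware SIMD operation:

```python
a = [1, 2, 3, 4]
b = [10, 20, 30, 40]

# SISD: one add instruction per element, four separate steps
scalar_out = []
for i in range(len(a)):
    scalar_out.append(a[i] + b[i])

# SIMD: conceptually ONE instruction adds all four lanes in a single step.
def vector_add(xs, ys):
    return [x + y for x, y in zip(xs, ys)]  # stand-in for one vector op

assert vector_add(a, b) == scalar_out == [11, 22, 33, 44]
```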

Symmetric Multi-Threading:
Pretend you have 2 (or more) CPUs and work on both threads of instructions more or less at the same time, to the extent that it’s possible. While reading data for thread 1, do some math on thread 2; keep hopping back and forth, looking for opportunities to maximize throughput.
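A loose software analogy, using OS threads in Python: while one thread is stalled on a slow “read,” another gets useful math done, so the total time is roughly the longer of the two rather than their sum. (Real SMT does this inside a single core, at the instruction level.)

```python
import threading
import time

result = {}

def slow_read():
    time.sleep(0.1)              # stand-in for a stalled memory/disk access
    result["data"] = [1, 2, 3]

t = threading.Thread(target=slow_read)
start = time.perf_counter()
t.start()
math_answer = sum(i * i for i in range(100_000))  # overlaps with the wait
t.join()
elapsed = time.perf_counter() - start
# elapsed is roughly the 0.1 s wait, not wait PLUS math: the two overlapped.
```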

Multiple cores:
Kind of like multiple CPUs in one package, but there is some sharing of memory etc.

GPUs (graphics processors):
Have many lightweight SIMD/vector “cores”.
Each one may perform the same operation on 32 sets of data at each step.
Memory access is optimized to feed all of these cores sequentially and write the results back out. But each core can be operating on different instructions and memory than the one next to it.
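A toy sketch of that layout, with a made-up `run_core` helper standing in for one lightweight core:

```python
WARP = 32  # lanes per lightweight core, matching the "32 sets of data" above

def run_core(op, lanes):
    """One GPU-style core: the SAME operation hits every lane in one step."""
    return [op(x) for x in lanes]

# Neighboring cores may run DIFFERENT instructions on different memory:
core_a = run_core(lambda x: x * 2, list(range(WARP)))              # doubles
core_b = run_core(lambda x: x + 100, list(range(WARP, 2 * WARP)))  # offsets

assert len(core_a) == len(core_b) == WARP
assert core_a[:3] == [0, 2, 4] and core_b[0] == 132
```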

Supercomputers vs. server farms: the key difference is the interconnects between the processors.
Speed of data transfer becomes the bottleneck. Supercomputers are optimized to make sure that processors working in parallel can communicate fast enough that they don’t slow each other down waiting for data.

Server farms have slower connections between processors, so they are more effective for problems that break into large, independent chunks of processing.

It’s a cheaper method of distributing work if the problem can be split up in a way that each processor is not very dependent on information/results from other processors.
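A small example of that kind of independent splitting: counting primes below 10,000 in four chunks. No chunk needs another chunk’s results, so the workers could be blade servers, or phones, connected by arbitrarily slow links:

```python
from concurrent.futures import ThreadPoolExecutor

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def count_primes(chunk):
    lo, hi = chunk
    # Each chunk needs nothing from any other chunk, so slow links
    # between workers cost almost nothing: embarrassingly parallel.
    return sum(is_prime(n) for n in range(lo, hi))

chunks = [(1, 2500), (2500, 5000), (5000, 7500), (7500, 10000)]
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(count_primes, chunks))

print(total)  # 1229 primes below 10,000; any split of chunks gives the same
```

(Python threads are just the sketch here; a real deployment would use separate processes or machines for true parallelism, but the split-and-sum structure is identical.)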

An extreme example of computing that does not need to be shared:
Doing your taxes
There are few dependencies between people for doing taxes, so it can be done largely independently.

An example of computing that requires shared info (probably, I don’t have experience with this):
Modeling the flow of water molecules in a pipe. Everything is connected and impacts the things around it.
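A toy version of that kind of coupled problem, as one step of 1-D diffusion: each cell’s next value depends on its neighbors, so parallel workers would have to exchange boundary values before every single step.

```python
def diffuse_step(cells):
    """One step of toy 1-D diffusion: each interior cell becomes the
    average of itself and its two neighbors. Workers splitting this list
    would need their neighbors' latest boundary values EVERY step."""
    return [
        cells[i] if i in (0, len(cells) - 1)
        else (cells[i - 1] + cells[i] + cells[i + 1]) / 3
        for i in range(len(cells))
    ]

pipe = [0.0, 0.0, 3.0, 0.0, 0.0]
pipe = diffuse_step(pipe)
print(pipe)  # [0.0, 1.0, 1.0, 1.0, 0.0] -- the blob spreads to its neighbors
```

That per-step communication is exactly what fast supercomputer interconnects are built for, and what makes such problems a poor fit for phones on the internet.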

There are many problems with this.

The first one is that you can’t actually rely on the computation being available when you need it. Telephone companies do have big computer facilities, but they need them to be running all the time, and to have capabilities that meet their needs. They can’t rely on a bunch of phones that may or may not be plugged in at any given moment. That’s why distributed computing did things like crunching away at huge data sets: no requirements for responsiveness or availability. The aliens trying to communicate with us can wait an extra few days if there aren’t enough clients online. Ma Bell’s webserver can’t.

The second one is that it’s fairly unlikely that telephones can do enough computation at scale to be worth the power it takes to run them. That power is piped to residences, run through a wasteful AC-to-DC converter, and then used on a chip that’s optimized for low power consumption and throwing a bunch of pixels around a high-definition screen, not the kinds of operations such a buyer might need. There’s a reason that if you go into a server room, you don’t see a bunch of phones running. You don’t even see boards with the kinds of chips that go into phones. Because they’re not designed for the task.

Computers are specialized devices.

Let’s analogize to cars. Imagine that self-driving cars are a thing. Totally automated, no drivers needed. So, at night, when most people are asleep, we could use lots of the auto fleet to haul freight. Except we won’t. Because cars aren’t designed for freight. And even if all the coordination problems were solved, and we didn’t have to pay drivers, it’s still not efficient to use a passenger vehicle to haul lots of stuff, because the energy costs of the 20 cars it would take to replace the hauling power of one big rig vastly outstrip the capital costs of just buying the big rig in the first place.

Companies that need more computing power won’t use our phones because it’s cheaper for them to just build a computer that does what they need.