Home cloud.
May. 7th, 2010 08:58 pm
As I move from being mostly Xeon-performance focused to Atom-performance focused in my day job, I recalled an old idea of mine. Technical, goes under the cut.
The idea I was thinking about more than a year ago now looks more feasible, and I may work on a proof of concept.
Currently the Wi-Fi and Bluetooth RTT latency between a home PC and an Atom-based mobile device is in the range of 2 to 10 msec, with a median around 4 msec. That is almost three orders of magnitude worse than Ethernet latency, but it could still be worth trying to harness the roughly 15x performance gap between mobile and desktop CPUs.
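To get a feel for when shipping work off the device pays off, here is a minimal back-of-envelope sketch. The 4 msec median RTT and the 15x CPU gap come from the paragraph above; the link throughput, task durations, and payload sizes are numbers I made up purely for illustration.

```python
# Back-of-envelope: when does offloading a task from the Atom device to
# the home PC pay off?  The 4 ms RTT and the 15x CPU gap are from the
# text above; link throughput, task times and payload sizes are made up.

RTT_S = 0.004        # median Wi-Fi/Bluetooth round trip, seconds
SPEEDUP = 15.0       # desktop vs. mobile CPU, rough ratio
LINK_MBPS = 20.0     # assumed usable wireless throughput, Mbit/s

def offload_wins(local_time_s, payload_bytes):
    """True if RTT + transfer + faster remote compute beats running
    the task locally on the Atom."""
    transfer_s = payload_bytes * 8 / (LINK_MBPS * 1e6)
    remote_time_s = RTT_S + transfer_s + local_time_s / SPEEDUP
    return remote_time_s < local_time_s

print(offload_wins(0.050, 20_000))   # 50 ms task, 20 KB payload -> True
print(offload_wins(0.005, 20_000))   # 5 ms task, same payload   -> False
```

The takeaway is unsurprising: the heavier the task relative to its payload, the more the 15x remote CPU dominates the round trip.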
Using a modified version of Google's NaCl as the deployment tool, one could potentially run untrusted code from mobile devices on the nearest compute-capable machine, e.g. the last-mile wireless access device :), or the ISP's cloud (which would add a few msecs).
There are a number of very promising mobile applications that would benefit from the extra computing power obtainable this way: mobile augmented reality, especially with image recognition, real-time raytracing, etc. The goal is to make developing such apps nearly transparent for developers, and I think I now have an API and architecture in mind that would work.
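I won't spell the actual API out here, but just to make "nearly transparent" concrete, here is a hypothetical sketch of what it could look like from the app developer's side. None of the names (offload, discover_nearby_host, recognize_objects) are real, and in the actual design the shipped payload would be a sandboxed NaCl module rather than pickled Python; this is only an illustration of the developer-facing shape.

```python
# Hypothetical sketch, not the real API: what a "nearly transparent"
# offload interface might look like to the app developer.
import pickle
import socket

def discover_nearby_host():
    """Placeholder: find the closest compute-capable machine (home PC,
    access point, ISP cloud). Returns (host, port) or None."""
    return ("192.168.1.10", 9999)   # assumed home-PC address for the sketch

def offload(func):
    """Decorator: try to run func on a nearby host; if nothing answers
    quickly, fall back to running it locally on the device."""
    def wrapper(*args, **kwargs):
        target = discover_nearby_host()
        if target is None:
            return func(*args, **kwargs)             # no helper around
        try:
            with socket.create_connection(target, timeout=0.1) as s:
                s.sendall(pickle.dumps((func.__name__, args, kwargs)))
                s.shutdown(socket.SHUT_WR)           # signal end of request
                return pickle.loads(s.makefile("rb").read())
        except (OSError, EOFError):
            return func(*args, **kwargs)             # network hiccup: degrade gracefully
    return wrapper

@offload
def recognize_objects(frame_bytes):
    ...  # heavy image-recognition work the Atom would struggle with
```

The point is that the app code stays a plain function call; whether it runs on the device or on the nearest bigger box is a runtime decision, with local execution as the graceful fallback.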
P.s. Of course one might argue that a VNC connection could do almost the same thing. However, VNC has the worst possible app deployment model for this case.
P.p.s. The real bottleneck that I've omitted from the quantitative estimates above is throughput: both the network and image decompression. This is my biggest concern now; a rough back-of-envelope check is at the end of the post.
P.p.p.s. Have I managed to outline the idea in the most cryptic way possible, so that no one will understand it? I think so.
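To put a number on the throughput worry from the P.p.s., here is a quick sketch. Every figure in it (frame size, usable Wi-Fi throughput, decode cost on the Atom) is an assumption for illustration, not a measurement.

```python
# Rough throughput check: can compressed frames arrive and decode fast
# enough for, say, 15 fps augmented reality?  All numbers are assumed.

FPS = 15
FRAME_KB = 60              # assumed JPEG-compressed VGA frame
LINK_MBPS = 20.0           # assumed usable Wi-Fi throughput
DECODE_MS_PER_FRAME = 15.0 # assumed JPEG decode cost on the Atom

net_ms = FRAME_KB * 1024 * 8 / (LINK_MBPS * 1e6) * 1000
budget_ms = 1000.0 / FPS

print(f"network : {net_ms:.1f} ms/frame")                   # ~24.6 ms
print(f"decode  : {DECODE_MS_PER_FRAME:.1f} ms/frame")
print(f"budget  : {budget_ms:.1f} ms/frame at {FPS} fps")   # ~66.7 ms
# ~25 + 15 ms already eats over half the budget, before the RTT and
# before uploading the captured frame in the other direction -- which
# is exactly why throughput, not latency, looks like the bottleneck.
```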