Verily I say unto thee, that Roy Schestowitz spake thusly:
> Along the lines of a network mesh (/a la/ OLPC), maybe you could pool
> RAM and CPU cycles. The only problem with that is the low throughput
> (wireless) and the possibility of losing connectivity. You can't
> recover from lost threads/memory buffer like you can recover from
> lost packets by retransmitting.
No, but you can split jobs into chunks across distributed clients, then
periodically poll for results, or better yet just have a socket
listening for announcements from the other clients. That way, even if
the connection is temporarily lost, the worst case is that one or more
clients sit idle for part of that period, once they run out of input
data (e.g. a render farm).
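
Here's a rough sketch of the sort of thing I mean, in Python. It's
illustrative only: the port, the GET/PUT verbs and the toy job (summing
ranges of integers, standing in for rendering frames or whatever) are
all invented. The coordinator queues chunks, hands one out per request,
and listens on the same socket for result announcements; clients
reconnect for every request, so a dropped link just costs idle time
until they can fetch more input.

    # Sketch of the chunk-and-announce scheme. All names here (the
    # address, the GET/PUT verbs, the toy job) are invented for
    # illustration. In-flight chunks are tracked so a real version
    # could re-queue them after a timeout; that timer is omitted
    # here for brevity.

    import json
    import socket
    import threading
    import time
    from queue import Queue, Empty

    HOST, PORT = "127.0.0.1", 9999    # hypothetical coordinator address
    NCHUNKS = 16

    pending = Queue()                 # chunks not yet handed out
    in_flight = {}                    # chunk id -> data, awaiting a result
    results = {}                      # chunk id -> result
    lock = threading.Lock()

    for i in range(NCHUNKS):          # toy job: sum ranges of integers
        pending.put((i, list(range(i * 100, (i + 1) * 100))))

    def handle(conn):
        # One connection per request: a client either fetches a chunk
        # (GET) or announces a result (PUT).
        with conn:
            f = conn.makefile("rw")
            msg = json.loads(f.readline())
            if msg["op"] == "GET":
                try:
                    cid, data = pending.get_nowait()
                except Empty:
                    f.write(json.dumps({"id": None}) + "\n")  # out of input
                else:
                    with lock:
                        in_flight[cid] = data
                    f.write(json.dumps({"id": cid, "data": data}) + "\n")
            elif msg["op"] == "PUT":
                with lock:
                    in_flight.pop(msg["id"], None)
                    results[msg["id"]] = msg["result"]
                f.write(json.dumps({"ok": True}) + "\n")
            f.flush()

    def serve():
        with socket.create_server((HOST, PORT)) as srv:
            while True:
                conn, _ = srv.accept()
                threading.Thread(target=handle, args=(conn,),
                                 daemon=True).start()

    def client():
        # Clients reconnect for every request, so a dropped link costs
        # only idle time; the coordinator still holds the chunk.
        while True:
            with socket.create_connection((HOST, PORT)) as s:
                f = s.makefile("rw")
                f.write(json.dumps({"op": "GET"}) + "\n")
                f.flush()
                task = json.loads(f.readline())
            if task["id"] is None:
                break                      # no more input data: go idle
            result = sum(task["data"])     # stand-in for the real work
            with socket.create_connection((HOST, PORT)) as s:
                f = s.makefile("rw")
                f.write(json.dumps({"op": "PUT", "id": task["id"],
                                    "result": result}) + "\n")
                f.flush()
                f.readline()               # wait for the ack

    if __name__ == "__main__":
        threading.Thread(target=serve, daemon=True).start()
        time.sleep(0.2)                    # let the listener come up
        workers = [threading.Thread(target=client) for _ in range(3)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        print("chunks done:", len(results),
              "total:", sum(results.values()))

The fault tolerance would come from re-issuing in-flight chunks that
never get a PUT back, which is cheap because each chunk is independent,
unlike trying to recover a lost thread or memory buffer.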
The bottleneck is network speed, of course. My gigabit network handles
raw AV quite admirably, but I wouldn't fancy trying it over 802.11.
But this is distributed computing rather than virtualisation.
Still, the distributed model might be applicable to a surprisingly
diverse range of tasks, up (or is that down) to and including the OS
itself, given the right approach.
--
K.
http://slated.org
.----
| "OOXML is a superb standard"
| - GNU/Linux traitor, Miguel de Icaza.
`----
Fedora release 7 (Moonshine) on sky, running kernel 2.6.22.1-41.fc7
23:12:36 up 54 days, 22:07, 2 users, load average: 0.03, 0.07, 0.08