Marc Feeley wrote:

> On 26-Feb-08, at 6:37 AM, Jeff Read wrote:
>
>> On Mon, Feb 25, 2008 at 10:39 PM, James Long <longster@gmail.com> wrote:
>>
>>> Heh, well, one of my side projects is a high performance graphics
>>> engine. True parallel processing is becoming extremely important;
>>> though I'm still not sure of the best way to achieve it. Using STM
>>> can incur high lock contention, but going with the Termite model
>>> incurs quite a bit of overhead with serializing/deserializing,
>>> copying, and TCP packet construction/destruction. It's possible you
>>> could avoid the TCP overhead for local processes using pipes of some
>>> sort.
>
<pre wrap=""><!---->
It is possible to interface to Gambit's low-level I/O system, but it
is not documented and non-obvious. I will put together a brief
documentation so that you can experiment with this.
Marc
</pre>
</blockquote>

Not sure if it is relevant, but Nvidia's CUDA comes to mind...

"The GeForce GPU, for example, can act as a co-processor to the CPU, has
its own 16K-bit memory and runs more than 128,000 instruction threads in
parallel, he said. Groups of threads can also work together to
accomplish one task."
http://www.pcworld.com/article/id,132206-page,1-c,graphicsboards/article.html

-Bob-