A quick update and a follow-up question. I was able to successfully call into the CUDA code from compiled Scheme. (Hooray!)
But now I don't seem to be able to load the object file from the interpreter, even when I start it as gsc.
I even tried renaming my shared object (built with -fPIC) to end in .o1, but still no luck with (load "myobject.o") or (load "myobject.o1").
Is there a special call to load a shared library into the interpreter? Alternatively, is there a way to invoke an interpreter REPL from within a compiled executable?
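To make the question concrete, here is the kind of thing I am hoping exists. This is an untested sketch: the -L path is a placeholder for a typical CUDA install, and ##repl is an internal, undocumented Gambit name that I have not verified.

;; Sketch only (untested).
;;
;; 1. Build a (load)-able .o1 with gsc, linking the CUDA object file in at
;;    build time rather than trying to load it directly (path is a guess):
;;
;;      gsc -o prog.o1 -ld-options \
;;        "obj/x86_64/release/matrixMul.cu.o -L/usr/local/cuda/lib64 -lcudart" \
;;        prog.scm
;;
;; 2. Invoke an interpreter REPL from within a compiled executable by
;;    calling Gambit's internal REPL procedure at the end of the program:
(c-declare "extern double foo(int a, int b);")
(define foo (c-lambda (int int) double "foo"))
(println (foo 11 22))
(##repl) ;; drop into an interactive REPL inside the compiled program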
Thanks.
Jason
## Demonstration of the problem loading the .o file (full compilation steps and source attached for the curious)
jaten@afarm:~/NVIDIA_GPU_Computing_SDK/C/src/matrixMul$ gsc
Gambit v4.6.0

> (load "obj/x86_64/release/matrixMul.cu")
*** ERROR IN (console)@1.1 -- No such file or directory
(load "obj/x86_64/release/matrixMul.cu")
1> (load "obj/x86_64/release/matrixMul.cu.o1")
*** WARNING -- Could not find C function: "____20_matrixMul_2e_cu_2e_o1"
*** ERROR IN (console)@2.1 -- /home/jaten/NVIDIA_GPU_Computing_SDK/C/src/matrixMul/obj/x86_64/release/matrixMul.cu.o1: only ET_DYN and ET_EXEC can be loaded
(load "obj/x86_64/release/matrixMul.cu.o1")
2>
1> (load "obj/x86_64/release/matrixMul.cu.o")
*** ERROR IN "obj/x86_64/release/matrixMul.cu.o"@1.5 -- Illegal character: #\x02
1> (load "obj/x86_64/release/matrixMul.cu")
*** ERROR IN (console)@4.1 -- No such file or directory
(load "obj/x86_64/release/matrixMul.cu")
1> (load "prog.scm")
*** ERROR IN "prog.scm"@2.1 -- Interpreter does not support ##c-declare
1>
*** EOF again to exit
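If I am reading those errors right, two separate things fail above. First, matrixMul.cu.o is a relocatable ELF object, while (load ...) hands the file to dlopen(), which accepts only shared objects (ET_DYN) or executables (ET_EXEC). Second, even for a genuine shared object, Gambit looks up a generated init function (the ____20_matrixMul_2e_cu_2e_o1 in the warning) that only gsc-compiled modules define. The ELF type at least is easy to check; I would expect this to report REL rather than DYN:

jaten@afarm:~/NVIDIA_GPU_Computing_SDK/C/src/matrixMul$ readelf -h obj/x86_64/release/matrixMul.cu.o | grep Type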
jaten@afarm:~/NVIDIA_GPU_Computing_SDK/C/src/matrixMul$ cat prog.scm
(c-declare "extern double foo(int a, int b);")
(define foo (c-lambda (int int) double "foo"))
(println (foo 11 22)) ;; test it...
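The "Interpreter does not support ##c-declare" error above is expected: c-declare and c-lambda are FFI forms that only work in compiled code, so prog.scm cannot be loaded as source. If the .o1 build sketched earlier works, loading the compiled module from the interpreter should then be just (again, an assumption I have not verified):

> (load "prog") ;; resolves to prog.o1 and runs the c-lambda-backed foo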
jaten@afarm:~/NVIDIA_GPU_Computing_SDK/C/src/matrixMul$ ./prog
Device 0: "GeForce GTX 460" with Compute 2.1 capability
Error when parsing command line argument string.
Using Matrix Sizes: A(80 x 160), B(80 x 80), C(80 x 160)
Run Kernels...
matrixMul, Throughput = 79.8960 GFlop/s, Time = 0.00003 s, Size = 2048000 Ops, NumDevsUsed = 1, Workgroup = 256
Check against Host computation...
PASSED
foo() called: Test code integrating Gambit Scheme with nVIDIA CUDA-C code complete!
Answer to foo(11,22) = .02268041237113402
jaten@afarm:~/NVIDIA_GPU_Computing_SDK/C/src/matrixMul$
On Sun, Feb 13, 2011 at 6:47 PM, Jason E. Aten <j.e.aten@gmail.com> wrote:
I'll try to answer, but feel free to ask for clarification if I don't illuminate the topic.
nvcc is really just a coordinator. It drives a series of tool invocations, producing intermediate code (PTX files) that can then be just-in-time compiled into final, architecture-specific (.cubin) code by the NVIDIA libraries. So in a sense, yes, it compiles source code, but it first splits out the kernel code to be targeted at the GPU device architecture(s); later in the compilation sequence everything gets merged back into one file.
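For concreteness, the stages can also be invoked separately; these flags are quoted from memory, so verify them against your SDK version:

nvcc -ptx matrixMul.cu -o matrixMul.ptx      # device code -> PTX intermediate
nvcc -cubin matrixMul.cu -o matrixMul.cubin  # PTX -> architecture-specific binary
nvcc -c matrixMul.cu -o matrixMul.cu.o       # host object with device code embedded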
I attach an example of a C++ file that includes, at the top in the comments, the 15 commands issued by the nvcc compiler coordinator. The final (15th) command is the link stage, producing an executable that depends on several shared libraries. The matrixMul.cu example from the SDK is consolidated into one file, matrixMul.cu.txt, with the commands used to assemble the final binary broken out at the top of the file.
Thanks again.
Best regards, Jason
-- Jason E. Aten, Ph.D.
Perhaps I don't understand how nvcc and CUDA work. Is it the case that all source code is compiled by nvcc and then linked to create an executable, or is there some other preprocessor/compiler/linker/loader that is involved?
Marc