[gambit-list] embedding CUDA-C (for nVIDIA GPU usage) in Gambit?

Jason E. Aten j.e.aten at gmail.com
Sun Feb 13 21:46:24 EST 2011


A quick update and a follow-up question.  I was able to successfully call
into the CUDA code from compiled Scheme. (Hooray!)

But now I don't seem to be able to load the object file from the
interpreter, even when I start it as gsc.

I even tried renaming my shared object (built with -fPIC) to end in .o1,
but still no luck with (load "myobject.o") or (load "myobject.o1").

Is there a special call to load a shared library into the interpreter?
 Alternatively, is there a way to invoke an interpreter REPL from within a
compiled executable?
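
(For what it's worth, my reading of the errors below is that the file I
renamed to .o1 is evidently still a plain relocatable object, which is why
dlopen complains that "only ET_DYN and ET_EXEC can be loaded", and that even
a genuine shared object would presumably still lack the module-initialization
function Gambit looks up by name in the loaded file, hence the warning about
"____20_matrixMul_2e_cu_2e_o1".)

To make the second question concrete, here is roughly what I am after.  This
is only a sketch; the (##repl) name is my guess at the call, not something I
have found documented:

;; prog-interactive.scm (hypothetical): the same FFI glue as prog.scm, but
;; instead of exiting after the test, drop into a REPL so foo can be called
;; by hand from the compiled executable
(c-declare "extern double foo(int a, int b);")
(define foo (c-lambda (int int) double "foo"))
(println (foo 11 22))            ;; same smoke test as before
(##repl)                         ;; assuming a call like this starts a REPL

;; ...and for the first question, from a plain gsc session, something like:
;; > (load "prog")   ; hoping this picks up a Gambit-built prog.o1 that wraps
;; > (foo 11 22)     ; the CUDA object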

Thanks.

Jason

## demonstration of problem loading .o file (full compilation steps and
source attached for the curious)

jaten@afarm:~/NVIDIA_GPU_Computing_SDK/C/src/matrixMul$ gsc
Gambit v4.6.0

> (load "obj/x86_64/release/matrixMul.cu")
*** ERROR IN (console)@1.1 -- No such file or directory

(load "obj/x86_64/release/matrixMul.cu")
1> (load "obj/x86_64/release/matrixMul.cu.o1")
*** WARNING -- Could not find C function: "____20_matrixMul_2e_cu_2e_o1"
*** ERROR IN (console)@2.1 --
/home/jaten/NVIDIA_GPU_Computing_SDK/C/src/matrixMul/obj/x86_64/release/matrixMul.cu.o1:
only ET_DYN and ET_EXEC can be loaded
(load "obj/x86_64/release/matrixMul.cu.o1")
2>
1>
> (load "obj/x86_64/release/matrixMul.cu.o")
*** ERROR IN "obj/x86_64/release/matrixMul.cu.o"@1.5 -- Illegal character:
#\x02
1>
> (load "obj/x86_64/release/matrixMul.cu")
*** ERROR IN (console)@4.1 -- No such file or directory
(load "obj/x86_64/release/matrixMul.cu")
1>
> (load "prog.scm")
*** ERROR IN "prog.scm"@2.1 -- Interpreter does not support ##c-declare
1>
>
*** EOF again to exit




jaten@afarm:~/NVIDIA_GPU_Computing_SDK/C/src/matrixMul$ cat prog.scm

(c-declare "extern double foo(int a, int b);")
(define foo (c-lambda (int int) double "foo"))
(println (foo 11 22)) ;; test it...


jaten@afarm:~/NVIDIA_GPU_Computing_SDK/C/src/matrixMul$ ./prog
Device 0: "GeForce GTX 460" with Compute 2.1 capability
Error when parsing command line argument string.

Using Matrix Sizes: A(80 x 160), B(80 x 80), C(80 x 160)

Run Kernels...

matrixMul, Throughput = 79.8960 GFlop/s, Time = 0.00003 s, Size = 2048000
Ops, NumDevsUsed = 1, Workgroup = 256

Check against Host computation...

PASSED

foo() called: Test code integrating Gambit Scheme with nVIDIA CUDA-C code
complete!
Answer to foo(11,22) =
.02268041237113402
jaten@afarm:~/NVIDIA_GPU_Computing_SDK/C/src/matrixMul$






On Sun, Feb 13, 2011 at 6:47 PM, Jason E. Aten <j.e.aten at gmail.com> wrote:

> I'll try to answer, but feel free to ask for clarification if I don't
> illuminate the topic.
>
> nvcc is really just a coordinator for many other tools: it makes a series of
> calls, producing intermediate code (PTX files) that can then be just-in-time
> compiled into final, architecture-specific (.cubin) code by the NVIDIA
> libraries.  In a sense, yes, it compiles source code, but it breaks out the
> kernel code to be targeted at the GPU device architecture(s).  Later in the
> compilation sequence everything gets merged back into one file.
>
> I attach an example of a C++ file that includes, in the comments at the top,
> the 15 commands issued by nvcc, the NVIDIA compiler coordinator.  The final
> (15th) command is the link stage, producing an executable that depends on
> several shared libraries.  The matrixMul.cu example from the SDK is
> consolidated into one file, matrixMul.cu.txt, with the commands used to
> assemble the final binary broken out at the top of the file.
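>
> In outline (the exact 15 commands are in the attached file), the flow is
> roughly:
>
>   matrixMul.cu  --(gcc -E + cudafe/cudafe++: host/device split)-->  .cudafe1.cpp (host) + .gpu (device)
>   .gpu          --(gcc -E, then nvopencc)-->  matrixMul.compute_20.ptx
>   .ptx          --(ptxas -arch=sm_20)-->      matrixMul.compute_20.sm_20.cubin
>   .ptx + .cubin --(fatbin)-->                 matrixMul.fatbin.c  (embedded in the host code)
>   host .cpp     --(gcc -c, then g++ link)-->  ./matrixMul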
>
> Thanks again.
>
> Best regards,
> Jason
>
>
> --
> Jason E. Aten, Ph.D.
>
>
>> Perhaps I don't understand how nvcc and CUDA work.  Is it the case that
>> all source code is compiled by nvcc and then linked to create an executable,
>> or is there some other preprocessor/compiler/linker/loader that is involved?
>>
>> Marc
>


-- 
Jason E. Aten, Ph.D.
-------------- next part --------------
/* Original location: ~/NVIDIA_GPU_Computing_SDK/C/src/matrixMul/matrixMul.cu
   NB: this code will only work on CUDA devices of compute capability >= 2.0

   jea: begin compilation instructions / breakdown of the commands issued, discerned by adding
   NVCCFLAGS       := -v -keep
   in the ../../common/common.mk makefile include.

# The sequence of 15 commands used to produce an executable for a CUDA 2.1 device
#  is documented below. (Version 1.0 device commands are also produced, but are omitted
#  by default by the NVIDIA makefiles and the compiler coordinator nvcc.)
#
# INPUT: this file, matrixMul.cu (I integrated the files matrixMul_kernel.cu and matrixMul.h
#        into this file matrixMul.cu to make it a single file as input).
#
# OUTPUT: the matrixMul executable binary, which loads the shared NVIDIA libraries 
#           ( -lcutil_x86_64 -lshrutil_x86_64 -lcuda -lcudart  )


export  _SPACE_= 
export  _CUDART_=cudart
export  _HERE_=/usr/local/cuda/bin
export  _THERE_=/usr/local/cuda/bin
export  _TARGET_SIZE_=64
export  TOP=/usr/local/cuda/bin/..
export  LD_LIBRARY_PATH=/usr/local/cuda/bin/../lib:/usr/local/cuda/bin/../extools/lib:/usr/local/cula/lib64:/usr/lib
export  PATH=/usr/local/cuda/bin/../open64/bin:/usr/local/cuda/bin:/usr/local/bin:/usr/local/cuda/bin:/home/jaten/uns/bin:/usr/local/bin:/usr/local/cuda/bin:/home/jaten/uns/bin:/usr/local/bin:/usr/local/cuda/bin:/home/jaten/uns/bin:/usr/local/bin:/usr/local/cuda/bin:/home/jaten/uns/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
export  INCLUDES="-I/usr/local/cuda/bin/../include -I/usr/local/cuda/bin/../include/cudart"  
export  LIBRARIES="-L/usr/local/cuda/bin/../lib64 -lcudart"
export  CUDAFE_FLAGS=
export  OPENCC_FLAGS=
export  PTXAS_FLAGS=

# (0) input: matrixMul.cu         (host code)
#     input: matrixMul_kernel.cu  (device code, where is this read?)
#     input: matrixMul.h          (included by both of the above)

# (1) 1st gcc outputs : matrixMul.compute_20.cpp1.ii

gcc -D__CUDA_ARCH__=200 -E -x c++ -DCUDA_DOUBLE_MATH_FUNCTIONS  "-I/usr/local/cuda/include" "-I/usr/local/cuda/include/cudart"   -I. -D__CUDACC__ -C  -fno-strict-aliasing -O2 -I"." -I"/usr/local/cuda/include" -I"../../common/inc" -I"../../../shared//inc" -D"UNIX" -include "cuda_runtime.h" -m64 -o "matrixMul.compute_20.cpp1.ii" "matrixMul.cu" 



# (2) CUDA Front End : output: matrixMul.compute_20.cudafe1.c matrixMul.compute_20.cudafe1.gpu matrixMul.compute_20.cudafe1.stub.c

cudafe --m64 --gnu_version=40405 -tused --no_remove_unneeded_entities  --gen_c_file_name "matrixMul.compute_20.cudafe1.c" --stub_file_name "matrixMul.compute_20.cudafe1.stub.c" --gen_device_file_name "matrixMul.compute_20.cudafe1.gpu" --include_file_name "matrixMul.fatbin.c" "matrixMul.compute_20.cpp1.ii" 

# (3) output:  matrixMul.compute_20.cpp2.i

gcc -D__CUDA_ARCH__=200 -E -x c -DCUDA_DOUBLE_MATH_FUNCTIONS  "-I/usr/local/cuda/include" "-I/usr/local/cuda/include/cudart"   -I. -D__CUDACC__ -C  -fno-strict-aliasing -O2 -D__CUDA_PREC_DIV -D__CUDA_PREC_SQRT -I"." -I"/usr/local/cuda/include" -I"../../common/inc" -I"../../../shared//inc" -m64 -o "matrixMul.compute_20.cpp2.i" "matrixMul.compute_20.cudafe1.gpu" 



# (4) output:  matrixMul.compute_20.cudafe2.c matrixMul.compute_20.cudafe2.gpu matrixMul.compute_20.cudafe2.stub.c

cudafe --m64 --gnu_version=40405 --c  --gen_c_file_name "matrixMul.compute_20.cudafe2.c" --stub_file_name "matrixMul.compute_20.cudafe2.stub.c" --gen_device_file_name "matrixMul.compute_20.cudafe2.gpu" --include_file_name "matrixMul.fatbin.c" "matrixMul.compute_20.cpp2.i" 



# (5)  output:  matrixMul.compute_20.cpp3.i 

gcc -D__CUDA_ARCH__=200 -E -x c -DCUDA_DOUBLE_MATH_FUNCTIONS  "-I/usr/local/cuda/include" "-I/usr/local/cuda/include/cudart"   -I. -D__CUDABE__  -fno-strict-aliasing -O2 -D__CUDA_PREC_DIV -D__CUDA_PREC_SQRT -I"." -I"/usr/local/cuda/include" -I"../../common/inc" -I"../../../shared//inc" -m64 -o "matrixMul.compute_20.cpp3.i" "matrixMul.compute_20.cudafe2.gpu" 



# (6) output:  matrixMul.hash

filehash -s " " "matrixMul.compute_20.cpp3.i" > "matrixMul.hash"


# (7) output:  matrixMul.cpp4.ii

gcc -E -x c++ "-I/usr/local/cuda/include" "-I/usr/local/cuda/include/cudart"   -I. -D__CUDACC__ -C  -fno-strict-aliasing -O2 -I"." -I"/usr/local/cuda/include" -I"../../common/inc" -I"../../../shared//inc" -D"UNIX" -include "cuda_runtime.h" -m64 -o "matrixMul.cpp4.ii" "matrixMul.cu" 



# (8) output:  matrixMul.compute_20.cudafe1.cpp

cudafe++ --m64 --gnu_version=40405 --parse_templates  --gen_c_file_name "matrixMul.compute_20.cudafe1.cpp" --stub_file_name "matrixMul.compute_20.cudafe1.stub.c" "matrixMul.cpp4.ii" 


# (9) output:  matrixMul.compute_20.ptx

nvopencc  -TARG:compute_20 -m64 -CG:ftz=0 -CG:prec_div=1 -CG:prec_sqrt=1  "matrixMul.compute_20" "matrixMul.compute_20.cpp3.i"  -o "matrixMul.compute_20.ptx"


# (10) output:  matrixMul.compute_20.sm_20.cubin

ptxas  -arch=sm_20 -m64  "matrixMul.compute_20.ptx"  -o "matrixMul.compute_20.sm_20.cubin" 



# (11) output: matrixMul.cpp4.ii  (again; same as step 7)

gcc -E -x c++ "-I/usr/local/cuda/include" "-I/usr/local/cuda/include/cudart"   -I. -D__CUDACC__ -C  -fno-strict-aliasing -O2 -I"." -I"/usr/local/cuda/include" -I"../../common/inc" -I"../../../shared//inc" -D"UNIX" -include "cuda_runtime.h" -m64 -o "matrixMul.cpp4.ii" "matrixMul.cu" 


# (12) output: matrixMul.fatbin.c

fatbin --key="15530d92dba8868a" --source-name="matrixMul.cu" --usage-mode=" " --embedded-fatbin="matrixMul.fatbin.c" "--image=profile=compute_20,file=matrixMul.compute_20.ptx" "--image=profile=sm_20@compute_20,file=matrixMul.compute_20.sm_20.cubin"


# (13) output: matrixMul.cu.cpp

gcc -D__CUDA_ARCH__=200 -E -x c++ -DCUDA_DOUBLE_MATH_FUNCTIONS  "-I/usr/local/cuda/include" "-I/usr/local/cuda/include/cudart"   -I.  -fno-strict-aliasing -O2 -D__CUDA_PREC_DIV -D__CUDA_PREC_SQRT -I"." -I"/usr/local/cuda/include" -I"../../common/inc" -I"../../../shared//inc" -m64 -o "matrixMul.cu.cpp" "matrixMul.compute_20.cudafe1.cpp" 


# (14) output: obj/x86_64/release/matrixMul.cu.o

gcc -c -x c++ "-I/usr/local/cuda/include" "-I/usr/local/cuda/include/cudart"   -I. -fno-strict-aliasing -O2 -I"." -I"/usr/local/cuda/include" -I"../../common/inc" -I"../../../shared//inc" -fpreprocessed -m64 -o "obj/x86_64/release/matrixMul.cu.o" "matrixMul.cu.cpp" 


# (15) link into ./matrixMul

# g++ -m64 -o ./matrixMul  obj/x86_64/release/matrixMul.cu.o  -L/usr/local/cuda/lib64 -L/home/jaten/NVIDIA_GPU_Computing_SDK/C/lib/ -L/home/jaten/NVIDIA_GPU_Computing_SDK/shared/lib/    -lcutil_x86_64 -lshrutil_x86_64 -lcuda -lcudart 



# (16) use Gambit Scheme's gsc to compile:
/usr/local/Gambit-C/bin/gsc -exe  -ld-options "-L/usr/local/cuda/lib64 -L/home/jaten/NVIDIA_GPU_Computing_SDK/C/lib/ -L/home/jaten/NVIDIA_GPU_Computing_SDK/shared/lib/    -lcutil_x86_64 -lshrutil_x86_64 -lcuda -lcudart" prog.scm  obj/x86_64/release/matrixMul.cu.o 

where prog.scm is simply these 3 lines:

(c-declare "extern double foo(int a, int b);")
(define foo (c-lambda (int int) double "foo"))
(println (foo 11 22)) ;; test it...
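
# (untested guess at a loadable variant) Dropping -exe should make gsc emit
# prog.o1 instead of an executable; whether the CUDA object and libraries can
# simply be passed through -ld-options like this is something I still need to
# verify:
#
# /usr/local/Gambit-C/bin/gsc -ld-options "obj/x86_64/release/matrixMul.cu.o -L/usr/local/cuda/lib64 -L/home/jaten/NVIDIA_GPU_Computing_SDK/C/lib/ -L/home/jaten/NVIDIA_GPU_Computing_SDK/shared/lib/ -lcutil_x86_64 -lshrutil_x86_64 -lcuda -lcudart" prog.scm
#
# after which, from gsc:  (load "prog")  and then  (foo 11 22)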


=========================== results of Gambit Scheme compilation and testing in interpreter

jaten@afarm:~/NVIDIA_GPU_Computing_SDK/C/src/matrixMul$ gsc
Gambit v4.6.0

> (load "obj/x86_64/release/matrixMul.cu")
*** ERROR IN (console)@1.1 -- No such file or directory
(load "obj/x86_64/release/matrixMul.cu")
1> (load "obj/x86_64/release/matrixMul.cu.o1")
*** WARNING -- Could not find C function: "____20_matrixMul_2e_cu_2e_o1"
*** ERROR IN (console)@2.1 -- /home/jaten/NVIDIA_GPU_Computing_SDK/C/src/matrixMul/obj/x86_64/release/matrixMul.cu.o1: only ET_DYN and ET_EXEC can be loaded
(load "obj/x86_64/release/matrixMul.cu.o1")
2> 
1> 
> (load "obj/x86_64/release/matrixMul.cu.o")
*** ERROR IN "obj/x86_64/release/matrixMul.cu.o"@1.5 -- Illegal character: #\x02
1> 
> (load "obj/x86_64/release/matrixMul.cu")
*** ERROR IN (console)@4.1 -- No such file or directory
(load "obj/x86_64/release/matrixMul.cu")
1> 
> (load "prog.scm")
*** ERROR IN "prog.scm"@2.1 -- Interpreter does not support ##c-declare
1> 
> 
*** EOF again to exit
jaten@afarm:~/NVIDIA_GPU_Computing_SDK/C/src/matrixMul$ cat prog.scm 

(c-declare "extern double foo(int a, int b);")

(define foo (c-lambda (int int) double "foo"))

(println (foo 11 22)) ;; test it...


;; gsc -exe -ld-options "-lCUDA" prog.scm foo.c

;; ./prog


jaten@afarm:~/NVIDIA_GPU_Computing_SDK/C/src/matrixMul$ ./prog
Device 0: "GeForce GTX 460" with Compute 2.1 capability
Error when parsing command line argument string.

Using Matrix Sizes: A(80 x 160), B(80 x 80), C(80 x 160)

Run Kernels...

matrixMul, Throughput = 79.8960 GFlop/s, Time = 0.00003 s, Size = 2048000 Ops, NumDevsUsed = 1, Workgroup = 256

Check against Host computation...

PASSED 

foo() called: Test code integrating Gambit Scheme with nVIDIA CUDA-C code complete!
Answer to foo(11,22) =
.02268041237113402
jaten@afarm:~/NVIDIA_GPU_Computing_SDK/C/src/matrixMul$ 


=========================== end results of Gambit compilation


   jea: end compilation breakdown
   jea: begin input of the matrixMul.cu file (which originally #included matrixMul_kernel.cu and matrixMul.h, but those are now built into this file for simplicity of transport)

*/ 


/*
 * Copyright 1993-2010 NVIDIA Corporation.  All rights reserved.
 *
 * Please refer to the NVIDIA end user license agreement (EULA) associated
 * with this source code for terms and conditions that govern your use of
 * this software. Any use, reproduction, disclosure, or distribution of
 * this software and related documentation outside the terms of the EULA
 * is strictly prohibited.
 *
 */

/* Matrix multiplication: C = A * B.
 * Host code.
 *
 * This sample implements matrix multiplication as described in Chapter 3
 * of the programming guide.
 * It has been written for clarity of exposition to illustrate various CUDA
 * programming principles, not with the goal of providing the most
 * performant generic kernel for matrix multiplication.
 *
 * CUBLAS provides high-performance matrix multiplication.
 * See also:
 * V. Volkov and J. Demmel, "Benchmarking GPUs to tune dense linear algebra,"
 * in Proc. 2008 ACM/IEEE Conf. on Supercomputing (SC '08),
 * Piscataway, NJ: IEEE Press, 2008, pp. Art. 31:1-11. 
 *
 */

// Utilities and system includes
#include <shrUtils.h>
#include "cutil_inline.h"

// includes, kernels
//jea put it all in one file for now, instead of: include <matrixMul_kernel.cu>

////////////////////////////////////////
////////////////////////////////////////
//////// begin kernel, from matrixMul_kernel.cu
////////////////////////////////////////
////////////////////////////////////////
/*
 * Copyright 1993-2010 NVIDIA Corporation.  All rights reserved.
 *
 * Please refer to the NVIDIA end user license agreement (EULA) associated
 * with this source code for terms and conditions that govern your use of
 * this software. Any use, reproduction, disclosure, or distribution of
 * this software and related documentation outside the terms of the EULA
 * is strictly prohibited.
 *
 */

/* Matrix multiplication: C = A * B.
 * Device code.
 */

#ifndef _MATRIXMUL_KERNEL_H_
#define _MATRIXMUL_KERNEL_H_

#include <stdio.h>
//// include "matrixMul.h"

//// begin matrixMul.h

#ifndef _MATRIXMUL_H_
#define _MATRIXMUL_H_

// Thread block size
#define BLOCK_SIZE 16

// Basic Matrix dimensions (can be amplified by command line switch)
// (chosen as multiples of the thread block size for simplicity)
#define WA (5  * BLOCK_SIZE) // Matrix A width
#define HA (10 * BLOCK_SIZE) // Matrix A height
#define WB (5  * BLOCK_SIZE) // Matrix B width
#define HB WA  // Matrix B height
#define WC WB  // Matrix C width 
#define HC HA  // Matrix C height

#endif // _MATRIXMUL_H_



/// end matrixMul.h

//// resume matrixMul_kernel.cu

#define CHECK_BANK_CONFLICTS 0
#if CHECK_BANK_CONFLICTS
#define AS(i, j) cutilBankChecker(((float*)&As[0][0]), (BLOCK_SIZE * i + j))
#define BS(i, j) cutilBankChecker(((float*)&Bs[0][0]), (BLOCK_SIZE * i + j))
#else
#define AS(i, j) As[i][j]
#define BS(i, j) Bs[i][j]
#endif

////////////////////////////////////////////////////////////////////////////////
//! Matrix multiplication on the device: C = A * B
//! wA is A's width and wB is B's width
////////////////////////////////////////////////////////////////////////////////
__global__ void
matrixMul( float* C, float* A, float* B, int wA, int wB)
{
    // Block index
    int bx = blockIdx.x;
    int by = blockIdx.y;

    // Thread index
    int tx = threadIdx.x;
    int ty = threadIdx.y;

    // Index of the first sub-matrix of A processed by the block
    int aBegin = wA * BLOCK_SIZE * by;

    // Index of the last sub-matrix of A processed by the block
    int aEnd   = aBegin + wA - 1;

    // Step size used to iterate through the sub-matrices of A
    int aStep  = BLOCK_SIZE;

    // Index of the first sub-matrix of B processed by the block
    int bBegin = BLOCK_SIZE * bx;

    // Step size used to iterate through the sub-matrices of B
    int bStep  = BLOCK_SIZE * wB;

    // Csub is used to store the element of the block sub-matrix
    // that is computed by the thread
    float Csub = 0;

    // Loop over all the sub-matrices of A and B
    // required to compute the block sub-matrix
    for (int a = aBegin, b = bBegin;
             a <= aEnd;
             a += aStep, b += bStep) {

        // Declaration of the shared memory array As used to
        // store the sub-matrix of A
        __shared__ float As[BLOCK_SIZE][BLOCK_SIZE];

        // Declaration of the shared memory array Bs used to
        // store the sub-matrix of B
        __shared__ float Bs[BLOCK_SIZE][BLOCK_SIZE];

        // Load the matrices from device memory
        // to shared memory; each thread loads
        // one element of each matrix
        AS(ty, tx) = A[a + wA * ty + tx];
        BS(ty, tx) = B[b + wB * ty + tx];

        // Synchronize to make sure the matrices are loaded
        __syncthreads();

        // Multiply the two matrices together;
        // each thread computes one element
        // of the block sub-matrix
        for (int k = 0; k < BLOCK_SIZE; ++k)
            Csub += AS(ty, k) * BS(k, tx);

        // Synchronize to make sure that the preceding
        // computation is done before loading two new
        // sub-matrices of A and B in the next iteration
        __syncthreads();
    }

    // Write the block sub-matrix to device memory;
    // each thread writes one element
    int c = wB * BLOCK_SIZE * by + BLOCK_SIZE * bx;
    C[c + wB * ty + tx] = Csub;
}

#endif // #ifndef _MATRIXMUL_KERNEL_H_


////////////////////////////////////////
////////////////////////////////////////
//////// end kernel, from matrixMul_kernel.cu
////////////////////////////////////////
////////////////////////////////////////

static char *sSDKsample = "matrixMul";

////////////////////////////////////////////////////////////////////////////////
// declaration, forward
void runTest(int argc, char** argv);
void randomInit(float*, int);
void printDiff(float*, float*, int, int, int, float);

extern "C"
void computeGold(float*, const float*, const float*, unsigned int, unsigned int, unsigned int);

// jea: used to be in matrixMul_gold.cpp, but simplify a little by including it here.
void
computeGold(float* C, const float* A, const float* B, unsigned int hA, unsigned int wA, unsigned int wB)
{
    for (unsigned int i = 0; i < hA; ++i)
        for (unsigned int j = 0; j < wB; ++j) {
            double sum = 0;
            for (unsigned int k = 0; k < wA; ++k) {
                double a = A[i * wA + k];
                double b = B[k * wB + j];
                sum += a * b;
            }
            C[i * wB + j] = (float)sum;
        }
}

// test calling from Gambit Scheme
extern double foo(int x, int y);

double foo(int x, int y)
{
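   // jea: passing argc=0, argv=NULL to runTest; presumably that is why
   // shrGetCmdLineArgumenti prints "Error when parsing command line argument
   // string." (seen in the transcript above), leaving the default size
   // multiplier of 1 in effect.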
   runTest(0, 0);
   printf("foo() called: Test code integrating Gambit Scheme with nVIDIA CUDA-C code complete!\nAnswer to foo(%d,%d) =\n",x,y);
   return x/(1.0+y*y);
}

////////////////////////////////////////////////////////////////////////////////
// Program main
////////////////////////////////////////////////////////////////////////////////
int orig_main(int argc, char** argv)
{
    printf("[ %s ]\n", sSDKsample);

    shrSetLogFileName ("matrixMul.txt");
    shrLog("%s Starting...\n\n", argv[0]);

    runTest(argc, argv);

   // shrEXIT(argc, (const char**)argv);
   return 0;
}

////////////////////////////////////////////////////////////////////////////////
//! Run a simple test for CUDA
////////////////////////////////////////////////////////////////////////////////
void runTest(int argc, char** argv)
{
#if 0
    if(shrCheckCmdLineFlag(argc, (const char**)argv, "device"))
    {
        cutilDeviceInit(argc, argv);
    }
    else
    {
        cudaSetDevice(cutGetMaxGflopsDeviceId());
    }
#endif // 0

    cudaSetDevice(cutGetMaxGflopsDeviceId());

    int devID;
    cudaDeviceProp props;

    // get number of SMs on this GPU
    cutilSafeCall(cudaGetDevice(&devID));
    cutilSafeCall(cudaGetDeviceProperties(&props, devID));

    printf("Device %d: \"%s\" with Compute %d.%d capability\n", devID, props.name, props.major, props.minor);

	// set seed for rand()
    srand(2006);

    // Optional Command-line multiplier for matrix sizes
    unsigned int uiWA, uiHA, uiWB, uiHB, uiWC, uiHC;
    int iSizeMultiple = 1;
    shrGetCmdLineArgumenti(argc, (const char**)argv, "sizemult", &iSizeMultiple); 
    iSizeMultiple = CLAMP(iSizeMultiple, 1, 10);

	// For GPUs with fewer # of SM's, we limit the maximum size of the matrix
	if (props.multiProcessorCount <= 4) {
		uiWA = 2 * BLOCK_SIZE * iSizeMultiple;
		uiHA = 4 * BLOCK_SIZE * iSizeMultiple;
		uiWB = 2 * BLOCK_SIZE * iSizeMultiple;
		uiHB = 4 * BLOCK_SIZE * iSizeMultiple;
		uiWC = 2 * BLOCK_SIZE * iSizeMultiple;
		uiHC = 4 * BLOCK_SIZE * iSizeMultiple;
	} else {
		uiWA = WA * iSizeMultiple;
		uiHA = HA * iSizeMultiple;
		uiWB = WB * iSizeMultiple;
		uiHB = HB * iSizeMultiple;
		uiWC = WC * iSizeMultiple;
		uiHC = HC * iSizeMultiple;
	}
    shrLog("\nUsing Matrix Sizes: A(%u x %u), B(%u x %u), C(%u x %u)\n\n", 
            uiWA, uiHA, uiWB, uiHB, uiWC, uiHC);

    // allocate host memory for matrices A and B
    unsigned int size_A = uiWA * uiHA;
    unsigned int mem_size_A = sizeof(float) * size_A;
    float* h_A = (float*)malloc(mem_size_A);
    unsigned int size_B = uiWB * uiHB;
    unsigned int mem_size_B = sizeof(float) * size_B;
    float* h_B = (float*)malloc(mem_size_B);

    // initialize host memory
    randomInit(h_A, size_A);
    randomInit(h_B, size_B);

    // allocate device memory
    float* d_A;
    cutilSafeCall(cudaMalloc((void**) &d_A, mem_size_A));
    float* d_B;
    cutilSafeCall(cudaMalloc((void**) &d_B, mem_size_B));

    // copy host memory to device
    cutilSafeCall(cudaMemcpy(d_A, h_A, mem_size_A,
                              cudaMemcpyHostToDevice) );
    cutilSafeCall(cudaMemcpy(d_B, h_B, mem_size_B,
                              cudaMemcpyHostToDevice) );

    // allocate device memory for result
    unsigned int size_C = uiWC * uiHC;
    unsigned int mem_size_C = sizeof(float) * size_C;
    float* d_C;
    cutilSafeCall(cudaMalloc((void**) &d_C, mem_size_C));

    // allocate host memory for the result
    float* h_C = (float*) malloc(mem_size_C);
    

    // setup execution parameters
    dim3 threads(BLOCK_SIZE, BLOCK_SIZE);
    dim3 grid(uiWC / threads.x, uiHC / threads.y);

    // kernel warmup
    matrixMul<<< grid, threads >>>(d_C, d_A, d_B, uiWA, uiWB);
    cudaThreadSynchronize();
    
    // create and start timer
    shrLog("Run Kernels...\n\n");
    unsigned int timer = 0;
    cutilCheckError(cutCreateTimer(&timer));
    cutilCheckError(cutStartTimer(timer));

    // execute the kernel
    int nIter = 30;
    for (int j = 0; j < nIter; j++) 
		{
            matrixMul<<< grid, threads >>>(d_C, d_A, d_B, uiWA, uiWB);
        }

    // check if kernel execution generated an error
    cutilCheckMsg("Kernel execution failed");

    cudaThreadSynchronize();
    // stop and destroy timer
    cutilCheckError(cutStopTimer(timer));
    double dSeconds = cutGetTimerValue(timer)/((double)nIter * 1000.0);
    double dNumOps = 2.0 * (double)uiWA * (double)uiHA * (double)uiWB;
    double gflops = 1.0e-9 * dNumOps/dSeconds;

    // Log throughput, etc.
    shrLogEx(LOGBOTH | MASTER, 0, "matrixMul, Throughput = %.4f GFlop/s, Time = %.5f s, Size = %.0f Ops, NumDevsUsed = %d, Workgroup = %u\n", 
            gflops, dSeconds, dNumOps, 1, threads.x * threads.y);
    cutilCheckError(cutDeleteTimer(timer));

    // copy result from device to host
    cutilSafeCall(cudaMemcpy(h_C, d_C, mem_size_C,
                              cudaMemcpyDeviceToHost) );

    // compute reference solution
    shrLog("\nCheck against Host computation...\n\n");    
    float* reference = (float*)malloc(mem_size_C);
    computeGold(reference, h_A, h_B, uiHA, uiWA, uiWB);

    // check result
    shrBOOL res = shrCompareL2fe(reference, h_C, size_C, 1.0e-6f);
    if (res != shrTRUE) 
    {
        printDiff(reference, h_C, uiWC, uiHC, 100, 1.0e-5f);
    }
    shrLog("%s \n\n", (shrTRUE == res) ? "PASSED" : "FAILED");

    // clean up memory
    free(h_A);
    free(h_B);
    free(h_C);
    free(reference);
    cutilSafeCall(cudaFree(d_A));
    cutilSafeCall(cudaFree(d_B));
    cutilSafeCall(cudaFree(d_C));

    cudaThreadExit();
}

// Allocates a matrix with random float entries.
void randomInit(float* data, int size)
{
    for (int i = 0; i < size; ++i)
        data[i] = rand() / (float)RAND_MAX;
}

void printDiff(float *data1, float *data2, int width, int height, int iListLength, float fListTol)
{
    shrLog("Listing first %d Differences > %.6f...\n", iListLength, fListTol);
    int i,j,k;
    int error_count=0;
    for (j = 0; j < height; j++) 
    {
        if (error_count < iListLength)
        {
            shrLog("\n  Row %d:\n", j);
        }
        for (i = 0; i < width; i++) 
        {
            k = j * width + i;
            float fDiff = fabs(data1[k] - data2[k]);
            if (fDiff > fListTol) 
            {                
                if (error_count < iListLength)
                {
                    shrLog("    Loc(%d,%d)\tCPU=%.5f\tGPU=%.5f\tDiff=%.6f\n", i, j, data1[k], data2[k], fDiff);
                }
                error_count++;
            }
        }
    }
    shrLog(" \n  Total Errors = %d\n\n", error_count);
}

