Text Summarization using Sequence-to-Sequence model in Tensorflow and GPU computing: Part 2 – AWS P2 Instance Installation

Merry Christmas!

I finally got my toys for Christmas – AWS P2 instances, yay!  It snowed again in Chicago this weekend, so it was a perfect time to try the cloud.  This article is a log of my first encounter with a multi-GPU platform.  I discuss the advantages of multi-GPU hardware for the Deep Learning package Tensorflow, trying out the Seq2Seq attention model and a Convolutional Neural Network, with their applications in text summarization and image classification.

This article is mainly about the architecture, and is a follow-up to my previous post:

Text Summarization using Sequence-to-Sequence model in Tensorflow and GPU computing: Part I – How to get things running

Operating System: Ubuntu Linux 16.04 64-bit
Tensorflow r 0.11  (at the time this blog was written, TF r 0.12 had been released, but I downgraded to 0.11 because of compatibility issues)
Anaconda Python 2.7 64-bit

AWS Platform

p2.xlarge   / 1 GPU  / 4 vCPU  / 61 GB RAM

p2.8xlarge  / 8 GPU  / 32 vCPU / 488 GB RAM

p2.16xlarge / 16 GPU / 64 vCPU / 732 GB RAM

1. Install CUDA on P2 instance


I followed my earlier post to set up CUDA and Tensorflow.  I never expected a home run, because I knew there would always be some hurdles, but I still ended up spending half a day figuring out all the problems. Here are the major ones and their solutions:

1.1 Disable Nouveau

To disable Nouveau on AWS, I added the following lines to the /etc/modprobe.d/blacklist-nouveau.conf file:

blacklist nouveau 
blacklist lbm-nouveau 
options nouveau modeset=0 
alias nouveau off 
alias lbm-nouveau off

1.2 Update the kernel

$ sudo apt-get install linux-source 
$ sudo apt-get install linux-headers-3.13.0-37-generic

1.3 Test CUDA

Once CUDA was installed correctly, I used ./deviceQuery to determine that the GPU model in the P2 instance is the Tesla K80.  I had expected a Titan Pascal :O.


2. Install Tensorflow r 0.11.0

2.1  Some issues in compilation and installation

While installing r 0.12.0 for the first time, ./configure finished successfully, but

"bazel build -c opt //tensorflow/tools/pip_package:build_pip_package"

it reported an error:

 ......./tensorflow/tensorflow/python/BUILD:1806:1: in cc_library rule //tensorflow/python:tf_session_helper: non-test target '//tensorflow/python:tf_session_helper' depends on testonly target '//tensorflow/python:construction_fails_op' and doesn't have testonly attribute set

The workaround was to comment out these two lines of .../tensorflow/python/BUILD:

 1811 ":construction_fails_op",
 1813 ":test_ops_kernels",

after which the build went fine.

2.2  CXXABI_1.3.8 not found error

When testing the TF installation in Python, I ran into a "CXXABI_1.3.8 not found" error. The fix was to verify that the system libstdc++ actually provides CXXABI_1.3.8, and then copy it over the one shipped with Anaconda:

strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep CXXABI_1.3.8
cp /usr/lib/x86_64-linux-gnu/libstdc++.so.6 /home/ubuntu/anaconda2/lib/

2.3  Why I still use r 0.11.0

At the time I tried this, Tensorflow r 0.12 had been released, but it popped up too many warnings and error messages when running the text summarization models, so I decided to stick with r 0.11.0.

3.  Tensorflow on multi-GPU instance

3.1 Tensorflow supports no more than 8 GPUs

When running the text summarization model on the p2.16xlarge instance, I ran into an error. A dig on github (https://github.com/tensorflow/tensorflow/issues/5789) indicates that current releases of Tensorflow do not fully support more than 8 GPU devices.  To avoid the error on platforms with more than 8 GPUs, I restricted the visible GPUs in the Python code:

os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3,4,5,6,7"

Of course, the remaining GPUs are wasted, so there seems to be no advantage in using the p2.16xlarge instance for Tensorflow.
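Since the device list has to be capped before Tensorflow initializes CUDA, a small helper can build the CUDA_VISIBLE_DEVICES string; this is my own sketch, and `cap_visible_gpus` is a hypothetical name, not an API:

```python
import os

def cap_visible_gpus(n_available, limit=8):
    """Build and export a CUDA_VISIBLE_DEVICES string with at most `limit` GPUs.

    Must run before Tensorflow creates a session, since the variable is read
    when the process first initializes CUDA.
    """
    n = min(n_available, limit)
    devices = ",".join(str(i) for i in range(n))
    os.environ["CUDA_VISIBLE_DEVICES"] = devices
    return devices

print(cap_visible_gpus(16))  # on a p2.16xlarge: 0,1,2,3,4,5,6,7
```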

3.2 Text summarization performance comparison

My ultimate goal was to optimize the platform for large-scale text summarization training.  With that in mind, I reran the experiment using the toy data mentioned in my earlier post.  The model was trained on 20 articles, with the number of epochs set to 5000.

To my surprise, multi-GPU architectures did not bring significant improvement to the Text Summarization model.  I compared 7 settings:

  • P1 GPU:  AWS p2.xlarge instance with a single Tesla K80 GPU
  • P8 n_gpu=8:  AWS p2.8xlarge instance with eight Tesla K80 GPUs, setting --num_gpus=8 explicitly
  • P8 n_gpu=2:  AWS p2.8xlarge instance with eight Tesla K80 GPUs, setting --num_gpus=2 explicitly
  • P8 n_gpu=0:  AWS p2.8xlarge instance with eight Tesla K80 GPUs, setting --num_gpus=0 explicitly
  • P16 GPU:  AWS p2.16xlarge instance with sixteen Tesla K80 GPUs, setting --num_gpus=0 explicitly.  Because Tensorflow has difficulty handling more than 8 GPUs, I set os.environ["CUDA_VISIBLE_DEVICES"]="0,1,2,3,4,5,6,7"
  • P16 CPU: AWS p2.16xlarge instance with sixteen Tesla K80 GPUs, where the train function of the seq2seq_attention model uses tf.device('/cpu:0').  All other settings use tf.device('/gpu:0')
  • GTX 970: my own desktop machine with a single GTX 970 GPU and a 6-core AMD CPU

As the picture shows, the multi-GPU architecture does not bring a significant efficiency improvement. A closer look at the model code gives me the impression that the text summarization model does not exploit GPU parallelism: the "_Train" function seems to use only the first GPU / CPU device.  The model was not written with multiple GPUs in mind, so perhaps it simply hasn't exploited those advantages yet.  It would have to be rewritten in a GPU-specific divide-and-conquer manner, exploiting parallelism at all stages across multiple GPUs.  That sounds like a lot of work, and I am not an expert in low-level programming.  Maybe for the same effort it would be easier to build a similar model structure in Caffe, given its better multi-GPU support?

def _Train(model, data_batcher):
  """Runs model training."""
  with tf.device('/gpu:0'):  # was ('/cpu:0')
      saver = tf.train.Saver()
      summary_writer = tf.train.SummaryWriter(FLAGS.train_dir)
      sv = tf.train.Supervisor(logdir=FLAGS.log_root, is_chief=True,
                               saver=saver, summary_op=None,
                               save_summaries_secs=60,
                               save_model_secs=FLAGS.checkpoint_secs,
                               global_step=model.global_step)
      sess = sv.prepare_or_wait_for_session(
          config=tf.ConfigProto(allow_soft_placement=True))
      running_avg_loss = 0
      step = 0
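For contrast, a multi-tower rewrite would pin one model replica per device instead of placing everything on the first GPU. A minimal placement helper, purely my own sketch (`tower_devices` is not part of the model code), could look like:

```python
def tower_devices(n_towers, n_gpus):
    """Round-robin assignment of model towers to Tensorflow device strings."""
    if n_gpus == 0:
        # fall back to the CPU when no GPU is available
        return ['/cpu:0'] * n_towers
    return ['/gpu:%d' % (i % n_gpus) for i in range(n_towers)]

print(tower_devices(4, 2))  # ['/gpu:0', '/gpu:1', '/gpu:0', '/gpu:1']
```

Each tower's ops would then be built inside `with tf.device(...)` using its assigned string, which is the pattern Tensorflow's own multi-GPU examples follow.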

With a bit of disappointment, I continued to explore other models.  In the next section, I will show another (better) example in Tensorflow that can actually benefit from a high-end multi-GPU architecture.

3.3 CNN Image classification benchmark CIFAR-10

CIFAR-10 is a CNN (convolutional neural network) demo included in Tensorflow as an image classification benchmark. The problem is to classify RGB 32×32 pixel images across 10 categories: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck.


Tensorflow provides a multi-GPU version algorithm for model training. According to Tensorflow documentation:

"In a workstation with multiple GPU cards, each GPU will have similar speed and contain enough memory to run an entire CIFAR-10 model. Thus, we opt to design our training system in the following manner:

  • Place an individual model replica on each GPU.
  • Update model parameters synchronously by waiting for all GPUs to finish processing a batch of data.

Note that each GPU computes inference as well as the gradients for a unique batch of data. This setup effectively permits dividing up a larger batch of data across the GPUs."


Therefore, the advantage of the multi-GPU architecture here is to train on a larger dataset within approximately the same amount of time, and more training data can lead to more accurate predictions.  In other words, for the same amount of data, more GPUs reduce the training time while keeping model performance more or less the same. Due to time limitations, I only trained models for 10K steps.
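The synchronous scheme the documentation describes can be sketched in a few lines of numpy: each replica computes gradients on its own batch, the gradients are averaged, and one shared update is applied. This is an illustration only; the names and numbers are mine:

```python
import numpy as np

def synchronous_step(params, per_gpu_grads, lr=0.1):
    """One synchronous data-parallel update: average the gradients computed
    by the replicas on their separate batches, then apply a single update."""
    avg_grad = np.mean(per_gpu_grads, axis=0)
    return params - lr * avg_grad

params = np.array([1.0, 2.0])
grads = [np.array([0.2, 0.4]), np.array([0.6, 0.8])]  # two GPU replicas
print(synchronous_step(params, grads))  # averaged gradient is [0.4, 0.6]
```

Because the gradients are averaged over all replicas, doubling the number of GPUs doubles the effective batch size per update without changing the update rule.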

The CIFAR-10 model code lives in the Tensorflow source tree under tensorflow/models/image/cifar10/.  The script cifar10_multi_gpu_train.py is the multi-GPU training algorithm, and cifar10_eval.py evaluates the trained model (auto-saved as a checkpoint), reporting the precision of 10-class image classification on the 10K held-out CIFAR test images.  To specify the number of GPUs in training, I used:

python cifar10_multi_gpu_train.py --num_gpus=2

Tensorflow r 0.12.0 changed these scripts relative to r 0.11.0, so if the Tensorflow package was compiled and installed as r 0.11.0 there will be compatibility issues.  I extracted these scripts from the archived r 0.11.0 release.

             1 GPU       2 GPU       3 GPU       4 GPU       6 GPU        8 GPU
             (20K imgs)  (40K imgs)  (60K imgs)  (80K imgs)  (120K imgs)  (160K imgs)
Time (min)   20          23          30          32          42           52
Precision    0.815       0.826       0.826       0.832       0.832        0.836
Images/sec   1022        2064        2422        2719        3000         3347
Sec/batch    0.125       0.068       0.047       0.045       0.040        0.035


As shown in the figure above, with more images used in training, the model precision increased from 0.815 (20K training images) to 0.836 (160K training images). While the training data increased 8 times, thanks to multi-GPU parallelization the training time only increased about 2.6 times.  The authors of the CIFAR model report a best precision of about 0.86 when trained for 1M steps.


The second figure compares data processing speed (images/sec and sec/batch) against the number of GPUs. It is very difficult to obtain linear speed-up on a multi-GPU platform: the 8-GPU configuration only achieved about a 3.3x speed-up (3347 vs. 1022 images/sec) compared to the single-GPU system.
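The scaling numbers can be derived directly from the throughput row of the table above:

```python
# Images/sec from the benchmark table, keyed by GPU count
throughput = {1: 1022, 2: 2064, 3: 2422, 4: 2719, 6: 3000, 8: 3347}

# speed-up relative to the single-GPU configuration
speedup = {g: round(t / float(throughput[1]), 2) for g, t in throughput.items()}
print(speedup[8])  # about 3.27, far from the ideal 8x
```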

4. Conclusion

AWS P2 instances are powerful platforms for exploiting GPU computation. Unfortunately, the software seems to lag behind and cannot yet benefit from the high-end multi-GPU architecture of the p2.16xlarge instances.

The current text summarization model in Tensorflow has not exploited the full potential of multi-GPU system, and hopefully smart contributors will restructure the code in future releases.

One side note: each time I stopped/restarted a P2 instance, or created a new instance from a saved AMI image with CUDA/Tensorflow preinstalled, the environment broke again and again.  I ended up recompiling and reinstalling so many times that I can now set everything up with my eyes closed.  It was big fun, and it will be even more fun when the Amazon AWS bill arrives :).

Is my data big enough for GPU computing? A PyCuda experiment

PyCUDA,  Python,  C/C++, Nvidia CUDA,  GPU Computing

GPU: Geforce GTX 970 4G
CPU: AMD FX 6300
Memory: 8G

Operating System: Ubuntu Linux 16.04 64-bit

We often hear that GPU computing is faster than CPU computing.  There is no doubt that for most Deep Learning algorithms the GPU outperforms the CPU in efficiency.  But what about other everyday computations?  Exploiting the GPU keeps getting easier, thanks to improved development platforms and API access interfaces, and an intriguing paradigm is to combine the power of a CPU cluster with GPU-powered nodes for very complex computation.  When data is parallelized and assigned to each node, the algorithm should be able to pick the most effective computational device. The answer depends on many factors, but one of them is surely empirical: how big is the data? Is it big enough to gain an advantage from the GPU?  In this blog, I set up a PyCuda experiment to explore the best CPU-GPU switching point on my old desktop for the matrix multiplication problem.

0.  Install NVIDIA CUDA

I have discussed the steps for installing NVIDIA CUDA in my previous post (https://wordpress.com/post/eilianyu.wordpress.com/177).

1.  Compile and Install Numpy and OpenBLAS

To fairly compare performance between GPU and CPU, we have to ensure Numpy is linked against OpenBLAS. Otherwise the comparison would be "flawed", because we would not have fully exploited the opportunities provided by the underlying hardware, as suggested by this interesting article (https://people.eecs.berkeley.edu/~sangjin/2013/02/12/CPU-GPU-comparison.html).

Please follow  https://hunseblog.wordpress.com/2014/09/15/installing-numpy-and-openblas/  to install Numpy and OpenBLAS.

(I hit a "Program received signal SIGILL: Illegal instruction." error after upgrading from an AMD Phenom to an FX because of the architectural change.  Make sure to download the latest version of OpenBLAS and set the TARGET correctly, as suggested in this post: https://github.com/xianyi/OpenBLAS/issues/382 .)
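To see why the linkage matters, here is a quick, self-contained illustration (my own, not from the benchmark code): the BLAS-backed np.dot and a naive Python loop compute the same product at wildly different speeds, so an unlinked numpy would similarly handicap the CPU side of the comparison.

```python
import time
import numpy as np

n = 200
a = np.random.randn(n, n)
b = np.random.randn(n, n)

t0 = time.time()
c_blas = a.dot(b)               # dispatched to whatever BLAS numpy is linked to
t_blas = time.time() - t0

t0 = time.time()
c_naive = np.zeros((n, n))
for i in range(n):              # naive row-by-column product in Python
    for j in range(n):
        c_naive[i, j] = np.sum(a[i, :] * b[:, j])
t_naive = time.time() - t0

print(np.allclose(c_blas, c_naive))  # same result, wildly different speed
```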

After installation, please double-check that Numpy is correctly linked to OpenBLAS:

>>> import numpy as np
>>> np.__config__.show()
    libraries = ['openblas', 'openblas']
    library_dirs = ['/opt/openblas/lib']
    define_macros = [('HAVE_CBLAS', None)]
    language = c
    libraries = ['openblas', 'openblas']
    library_dirs = ['/opt/openblas/lib']
    define_macros = [('HAVE_CBLAS', None)]
    language = c
    libraries = ['openblas', 'openblas']
    library_dirs = ['/opt/openblas/lib']
    define_macros = [('HAVE_CBLAS', None)]
    language = c
    libraries = ['openblas', 'openblas']
    library_dirs = ['/opt/openblas/lib']
    define_macros = [('HAVE_CBLAS', None)]
    language = c


$ locate lapack_lite.so

$ ldd /usr/local/lib/python2.7/dist-packages/numpy-1.11.2-py2.7-linux-x86_64.egg/numpy/linalg/lapack_lite.so
    linux-vdso.so.1 =>  (0x00007ffceef67000)
    libopenblas.so.0 => /opt/openblas/lib/libopenblas.so.0 (0x00007f8ea1b45000)
    libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f8ea18fd000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f8ea152d000)
    libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f8ea121d000)
    libgfortran.so.3 => /usr/lib/x86_64-linux-gnu/libgfortran.so.3 (0x00007f8ea0eed000)
    /lib64/ld-linux-x86-64.so.2 (0x00005578bbd00000)
    libquadmath.so.0 => /usr/lib/x86_64-linux-gnu/libquadmath.so.0 (0x00007f8ea0cad000)
    libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f8ea0a95000)

2.  Install PyCuda

Obtain and install PyCuda from Andreas Klöckner’s web page.

PyCUDA provides an interface for accessing Nvidia's CUDA parallel computation API from Python.  Note that it is just an interface: the core kernels are still written in CUDA's C/C++-based programming model.

PyCUDA has a series of dependencies, so make sure to install them all.

3.  Install OpenCV and OpenMP

To fully exploit the platform, I install OpenMP to enhance OpenCV with multithreading. OpenCV is available at http://docs.opencv.org/2.4/doc/tutorials/introduction/linux_install/linux_install.html.  OpenMP is available at http://bisqwit.iki.fi/story/howto/openmp/.

To keep the discussion concise, I skip the troubleshooting for the OpenCV and OpenMP setup. Once they are correctly installed, I use the following command to compile the C++ code:

$ g++ -fpic -fopenmp rbf_kernel_simple.cpp -o rbf_kernel_simple.o


4.  A simple test comparing CPU and GPU

My experiment starts with a very simple (naive) function. The goal is to take a large one-dimensional float array x[i], where i is the index, update each element to the new value x[i]*x[i] - 0.25, and repeat this admittedly pointless loop 1000 times.  The overall Python code is as follows:

import os
os.environ['LD_LIBRARY_PATH'] = '/opt/openblas/lib/'
import numpy
import time
from pycuda.compiler import SourceModule
import pycuda.autoinit
from numpy import linalg as LA

mod = SourceModule("""
__global__ void cudakernel(float *buf)
{
   int i = threadIdx.x + blockIdx.x * blockDim.x;
   buf[i] = 1.0f * i / 10000;
   for (int j = 0; j < 1000; j++)
      buf[i] = buf[i] * buf[i] - 0.25f;
}
""")
a = numpy.ones(1000)  # construct an array of 1000 elements
a = a.astype(numpy.float32)

import pycuda.driver as cuda
t1 = time.time()
a_gpu = cuda.mem_alloc(a.nbytes)
cuda.memcpy_htod(a_gpu, a)

func = mod.get_function("cudakernel")
func(a_gpu, block = (250, 1, 1), grid=(4,1))
a_doubled = numpy.empty_like(a)
cuda.memcpy_dtoh(a_doubled, a_gpu)

t2 = time.time()-t1
print "GPU time used:", t2*1000, " ms "

t3 = time.time()

import ctypes as C
my_test_lib = C.CDLL('/home/cyberyu/PycharmProjects/mpi_pycuda_test/cudakernelmp.o')
outdata = numpy.zeros(a.size, dtype=numpy.double)
a_cpued = my_test_lib.serial_add(C.c_void_p(a.ctypes.data), C.c_int(1000), C.c_int(2000), C.c_void_p(outdata.ctypes.data))

t4 = time.time()-t3
print "CPU time needed:", t4*1000, " ms "
print "Error is " + str(LA.norm(a_doubled - outdata))

The code above uses PyCUDA's SourceModule facility to compile the CUDA C code into a cubin file and load that module into the current CUDA context. The CUDA C code is wrapped in mod = SourceModule(""" ... """).   Notice that the first line of the kernel computes the array index from the threadIdx, blockIdx, and blockDim variables, which are controlled by CUDA's block and grid parallelism mechanism.   Understanding threads, blocks, and grids, and more importantly designing heuristics that automatically scale with the data, is critical to (and the beauty of) CUDA development.  Some nice references online:



The relationship of block and thread in single dimensional assignment is illustrated in the first reference document by Samuel S. Cho:


In my code, I chose a block size of (250, 1, 1) and a grid size of (4, 1), because we are processing a one-dimensional array.  In the first dimension, the block size multiplied by the grid size must equal the array length 1000; otherwise the parallel computation will not produce the correct result (this can be checked via the error variable).  Other settings are also possible, such as:

func(a_gpu, block = (100, 1, 1), grid=(10,1))
func(a_gpu, block = (500, 1, 1), grid=(2,1))
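The divisibility constraint can be captured in a small helper; this is my own sketch, not part of PyCUDA:

```python
def grid_for(n_elements, block_x):
    """Grid size so that block_x * grid covers n_elements exactly."""
    if n_elements % block_x != 0:
        raise ValueError("block size must divide the array length")
    return (n_elements // block_x, 1)

print(grid_for(1000, 250))  # (4, 1), matching block=(250, 1, 1), grid=(4, 1)
```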

The first part of the code measures the GPU processing time.  In the second part, I measure the CPU processing time using a separate C file, cudakernelmp.c:

void serial_add(float *indatav, int n_count, int m_count, float *outdatav){
    const double *indata = (double *) indatav;  /* unused in this toy kernel */
    double *outdata = (double *) outdatav;
    (void) indata;

    #pragma omp parallel for
    for(int i = 0; i < n_count; i++){
        outdata[i] = 1.0f * i / n_count;

        for(int j = 0; j < m_count; j++)
            outdata[i] = outdata[i] * outdata[i] - 0.25f;
    }
}

Then I compile it into a shared library:

gcc -shared -fPIC -fopenmp cudakernelmp.c -o cudakernelmp.o

It is then called from the Python code above via ctypes (my_test_lib.serial_add).

The output of the python code is:

GPU time used: 1.57809257507  ms 
CPU time needed: 21.1148262024  ms 
Error is 8.85293575516e-08

It shows the GPU is over 13 times faster than the CPU on this naive task (1.58 ms vs. 21.1 ms), and the difference (L2 norm) between the results is negligible.
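The tiny L2 error is no accident: the update x ← x² − 0.25 contracts toward the attracting fixed point (1 − √2)/2 ≈ −0.2071 (its derivative 2x has magnitude below 1 there), so after 1000 iterations the CPU and GPU land on essentially the same values regardless of rounding. A pure-numpy rerun of the kernel's arithmetic (my own check, not in the original post) shows the convergence:

```python
import numpy as np

# Mirror the GPU kernel's arithmetic on the CPU: buf[i] = i/10000, then iterate.
buf = np.arange(1000, dtype=np.float32) / np.float32(10000.0)
for _ in range(1000):
    buf = buf * buf - np.float32(0.25)

# attracting fixed point of x -> x^2 - 0.25 is (1 - sqrt(2))/2 ~ -0.2071
fixed_point = (1 - np.sqrt(2)) / 2
print(np.max(np.abs(buf - fixed_point)))  # every element has converged
```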

5.  Comparing CPU and GPU for Matrix Multiplication

The real advantage of GPU computing lies in complicated matrix computation. In the next experiment, I adapt the matrix multiplication example from Andreas Klöckner's PyCuda library. The original algorithm is listed at:


Unlike the previous one-dimensional experiment, we now need to manage the assignment of threads to blocks and grids in 2D space.  The relationship is illustrated as follows (taken from Samuel S. Cho's lecture slides):



I made small changes to the dynamic assignment of block and grid sizes. To keep it simple, I consider square matrix multiplication only; I haven't tested it on non-square matrices.
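The matrix sizes in the benchmark are all 2**k × 16, so they tile evenly into the 16×16 thread blocks the kernel assumes. A small helper (my own sketch; `launch_shape` is a hypothetical name) shows the block/grid shapes the kernel call needs:

```python
TILE_SIZE = 16

def launch_shape(matrix_size, tile=TILE_SIZE):
    """Block and grid shapes for a tiled matmul kernel; the matrix must tile evenly."""
    assert matrix_size % tile == 0, "matrix size must be a multiple of the tile"
    blocks = matrix_size // tile
    return (tile, tile, 1), (blocks, blocks)

print(launch_shape(64))  # ((16, 16, 1), (4, 4))
```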

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from __future__ import division

"""
Multiplies two square matrices together using multiple blocks and shared memory.
Each thread block is assigned a "tile" of the resulting matrix and is responsible
for generating the elements in that tile. Each thread in a block computes one element
of the tile.
"""

import numpy as np
from numpy import linalg as la
import time
import matplotlib.pylab as plt

from pycuda import driver, compiler, gpuarray, tools
# -- initialize the device
import pycuda.autoinit

kernel_code_template = """
__global__ void MatrixMulKernel(float *A, float *B, float *C)
{
  const uint wA = %(MATRIX_SIZE)s;
  const uint wB = %(MATRIX_SIZE)s;

  // Block index
  const uint bx = blockIdx.x;
  const uint by = blockIdx.y;

  // Thread index
  const uint tx = threadIdx.x;
  const uint ty = threadIdx.y;

  // Index of the first sub-matrix of A processed by the block
  const uint aBegin = wA * %(BLOCK_SIZE)s * by;
  // Index of the last sub-matrix of A processed by the block
  const uint aEnd = aBegin + wA - 1;
  // Step size used to iterate through the sub-matrices of A
  const uint aStep = %(BLOCK_SIZE)s;

  // Index of the first sub-matrix of B processed by the block
  const uint bBegin = %(BLOCK_SIZE)s * bx;
  // Step size used to iterate through the sub-matrices of B
  const uint bStep = %(BLOCK_SIZE)s * wB;

  // The element of the block sub-matrix that is computed by the thread
  float Csub = 0;
  // Loop over all the sub-matrices of A and B required to
  // compute the block sub-matrix
  for (int a = aBegin, b = bBegin;
       a <= aEnd;
       a += aStep, b += bStep)
  {
    // Shared memory for the sub-matrices of A and B
    __shared__ float As[%(BLOCK_SIZE)s][%(BLOCK_SIZE)s];
    __shared__ float Bs[%(BLOCK_SIZE)s][%(BLOCK_SIZE)s];

    // Load the matrices from global memory to shared memory;
    // each thread loads one element of each matrix
    As[ty][tx] = A[a + wA * ty + tx];
    Bs[ty][tx] = B[b + wB * ty + tx];
    // Synchronize to make sure the matrices are loaded
    __syncthreads();

    // Multiply the two matrices together;
    // each thread computes one element of the block sub-matrix
    for (int k = 0; k < %(BLOCK_SIZE)s; ++k)
      Csub += As[ty][k] * Bs[k][tx];

    // Synchronize to make sure that the preceding computation is done
    // before loading two new sub-matrices of A and B in the next iteration
    __syncthreads();
  }

  // Write the block sub-matrix to global memory;
  // each thread writes one element
  const uint c = wB * %(BLOCK_SIZE)s * by + %(BLOCK_SIZE)s * bx;
  C[c + wB * ty + tx] = Csub;
}
"""

def benchmarkCPU(scale):
    rsCPU = []

    print 'Start CPU processing'

    for scaleFactor in xrange(scale):

        # load the matrices
        a_cpu = np.load('testmat_{}.npz'.format(scaleFactor))['arr_0']
        b_cpu = np.load('testmat_{}.npz'.format(scaleFactor))['arr_1']

        MATRIX_SIZE = 2**(scaleFactor) * 16
        print "==" * 100
        print 'Loading matrix size of ' + str(MATRIX_SIZE)

        # compute reference on the CPU to verify GPU computation
        at1 = time.time()
        c_cpu = np.dot(a_cpu, b_cpu)
        at2 = time.time()
        dt12 = (at2 - at1)*1000
        print "CPU time used:", dt12, " ms "
        rsCPU.append(dt12)

        # save the results in npz
        np.savez('cpu_res_{}.npz'.format(scaleFactor), c_cpu)

    return rsCPU

def benchmarkGPU(scale):
    rsGPU = []
    rsCOPY = []
    print 'Start GPU processing'

    # define size of blocks and tiles sub-matrix
    # (we assume that the block size is same as tile size)
    TILE_SIZE = 16

    for scaleFactor in xrange(scale):

        MATRIX_SIZE = 2 ** (scaleFactor) * 16

        print "==" * 100
        print 'Loading Matrix size of ' + str(MATRIX_SIZE)

        # load the matrices
        a_cpu = np.load('testmat_{}.npz'.format(scaleFactor))['arr_0']
        b_cpu = np.load('testmat_{}.npz'.format(scaleFactor))['arr_1']

        at1 = time.time()
        a_gpu = gpuarray.to_gpu(a_cpu)
        b_gpu = gpuarray.to_gpu(b_cpu)

        at2 = time.time()
        dt12 = (at2 - at1)*1000

        print "COPY time used:", dt12, " ms "
        rsCOPY.append(dt12)

        # create empty gpu array for the result (C = A * B)
        c_gpu = gpuarray.empty((MATRIX_SIZE, MATRIX_SIZE), np.float32)

        # get the kernel code from the template
        # by specifying the constants MATRIX_SIZE and BLOCK_SIZE
        kernel_code = kernel_code_template % {
            'MATRIX_SIZE': MATRIX_SIZE,
            'BLOCK_SIZE': TILE_SIZE,
        }

        # compile the kernel code
        mod = compiler.SourceModule(kernel_code)

        # get the kernel function from the compiled module
        matrixmul = mod.get_function("MatrixMulKernel")

        # call the kernel on the card
        matrixmul(
            # inputs
            a_gpu, b_gpu,
            # output
            c_gpu,
            # grid of multiple blocks
            # (Andreas' original code is grid = (MATRIX_SIZE // TILE_SIZE, MATRIX_SIZE // TILE_SIZE))
            grid=(MATRIX_SIZE // TILE_SIZE, MATRIX_SIZE // TILE_SIZE),
            # block of multiple threads
            block=(TILE_SIZE, TILE_SIZE, 1),
        )

        # copy result from GPU
        re = c_gpu.get()

        at3 = time.time()
        dt23 = (at3 - at2)*1000
        print "GPU time used:", dt23, " ms "
        rsGPU.append(dt23)

        np.savez('gpu_res_{}.npz'.format(scaleFactor), re)

    return [rsGPU, rsCOPY]

def calErr(scale):
    rsErr = []

    print 'Comparing Error'

    for scaleFactor in xrange(scale):
        res_cpu = np.load('cpu_res_{}.npz'.format(scaleFactor))['arr_0']
        res_gpu = np.load('gpu_res_{}.npz'.format(scaleFactor))['arr_0']

        err = la.norm(res_cpu - res_gpu)
        rsErr.append(err)

    return rsErr

def generate_mat(scale):
     # generate some large matrices and store them as npz files
     # I can only try scaleFactor = 9 because of the memory limit of my GPU card.

     print 'Generating Matrices'

     for scaleFactor in xrange(scale):
         MATRIX_SIZE = 2 ** (scaleFactor) * 16
         a_cpu = np.random.randn(MATRIX_SIZE, MATRIX_SIZE).astype(np.float32)
         b_cpu = np.random.randn(MATRIX_SIZE, MATRIX_SIZE).astype(np.float32)
         np.savez('testmat_{}.npz'.format(scaleFactor), a_cpu, b_cpu)

def main():

    GSCALE = 10

    # generate_mat(GSCALE) must have been run once beforehand to create the npz files

    rsCPU = benchmarkCPU(GSCALE)
    rs = benchmarkGPU(GSCALE)
    rsGPU = rs[0]
    rsCopy = rs[1]
    rsErr= calErr(GSCALE)

    labels = [2**(x)*16 for x in xrange(GSCALE)]
    plt.plot(xrange(GSCALE), rsCPU,'b-', label="CPU processing time")
    plt.plot(xrange(GSCALE), rsGPU,'r-', label="GPU processing time")
    plt.plot(xrange(GSCALE), rsCopy, 'y-', label="Copy processing time")
    plt.xticks(xrange(GSCALE), labels, rotation='vertical')

    plt.yscale('log')
    plt.grid(True, which="major", linestyle="dotted")

    plt.ylabel("Response time (msec), log scale", fontsize=9)
    plt.xlabel("Matrix size", fontsize=9)


    plt.legend(loc='upper left', fancybox=True, shadow=True, prop=dict(size=8))

    ax2 = plt.twinx()
    ax2.set_ylabel('Error', color='g')
    ax2.plot(xrange(GSCALE), rsErr, 'g-', label="Norm difference")

    plt.show()

if __name__ == "__main__":
    main()
The program generates 10 pairs of matrices, from 16×16 to 8192×8192, and computes their products using both the CUDA tiled matrix multiplication algorithm and the numpy dot product. The matrices are pre-generated, saved as numpy npz files, and loaded back for benchmarking.  CPU and GPU processing times are compared on matrices of the same sizes, with the copy time from CPU to GPU tracked separately. Finally, the difference between the CPU and GPU results is evaluated as the L2 norm of the product matrices.


Matrix size            16      32    64    128   256   512   1024   2048    4096     8192
GPU (ms)               170.27  1.76  1.36  1.39  0.98  2.59  9.24   45.35   303.45   2134.40
Copy CPU→GPU (ms)      0.38    0.71  0.51  0.53  0.52  1.23  3.23   8.64    31.01    116.77
CPU (ms)               0.42    0.01  0.03  0.74  0.55  3.21  50.41  199.14  1675.36  11114.58
Error (L2 norm)        0.00    0.00  0.00  0.00  0.00  0.00  0.02   0.08    0.31     1.22


As shown in the figure above (notice that the timing is represented in log scale), GPU computing does not bring any advantage on multiplication of smaller matrices, and the overhead of Copy time is significant.  When the matrix size becomes larger than 512,  GPU computing outperforms CPU on efficiency.