benchmark optimization (1)


source code: whetstone.c

baseline compiler flags: -std=c89 -DDP -DROLL -lm

no warnings, no errors

Let's begin.

GCC First:

1. Simply compile and run:
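For example, with the baseline flags (the output file name b.out is just a choice here):

gcc -std=c89 -DDP -DROLL whetstone.c -lm -o b.out
./b.out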

Rolled Double  Precision 703148 Kflops ; 2048 Reps

2. 703148 Kflops is too slow, so we add the -O4 flag to optimize the loops, then compile again and run it:
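That is, the same command with the extra flag:

gcc -std=c89 -DDP -DROLL -O4 whetstone.c -lm -o b.out
./b.out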

Rolled Double  Precision 4177105 Kflops ; 2048 Reps

better now!

Now let's try these flags:

gcc -std=c89 -DDP  -DROLL -O4 -ffast-math -funroll-all-loops -mavx whetstone.c -fopenmp -lm -o b.out

-ffast-math makes things faster but sacrifices floating-point accuracy.

-mavx means using the AVX instruction set.
5340310 Kflops now!

 

ICC Next:

1. Simply compile and run:

Rolled Double  Precision 4636137 Kflops ; 2048 Reps

This seems good at first. Adding the -O3 flag doesn't make the program any faster, so we start thinking about parallel methods.

The -xHost flag gives an improvement of about 14%.
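For example, a possible command line (the output name b.out is arbitrary; -lm is kept for symmetry with the gcc build, although icc links its own math library):

icc -std=c89 -DDP -DROLL -xHost whetstone.c -lm -o b.out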

2. Parallel methods:

First of all we have to run VTune Amplifier XE. It is installed in /opt/intel/vtune_amplifier_xe_xxx/bin64; run /opt/intel/vtune_amplifier_xe_xxx/bin64/amplxe-gui and the GUI window will appear. (Here xxx stands for the version of vtune_amplifier_xe.)

Run this command (as root):

root# echo 0 > /proc/sys/kernel/yama/ptrace_scope

Then refer to the tutorial hotspots_amplxe_lin.pdf.
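If the GUI is inconvenient, the command-line collector shipped in the same bin64 directory can be used instead, and the result directory it creates can then be opened in the GUI (the binary name b.out is from the build above):

/opt/intel/vtune_amplifier_xe_xxx/bin64/amplxe-cl -collect hotspots -- ./b.out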

It shows these hotspots:

[screenshot: VTune hotspot analysis]

It also shows the CPU utilization:

[screenshot: VTune CPU utilization]

Poor! Now we have to consider parallelizing it.
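Since -fopenmp is already on the gcc command line above, OpenMP is one obvious way to do that. Here is a minimal sketch (not the actual Whetstone source) of how an independent hot loop can be spread over threads once the profiler has pointed at it:

#include <omp.h>

/* hypothetical hot loop: iterations are independent, so
   they can be divided among threads with a single pragma */
void scale(const double *x, double *y, int n, double k)
{
    int i;
#pragma omp parallel for
    for (i = 0; i < n; i++)
        y[i] = k * x[i];
}

Whether a given Whetstone module can be split like this depends on its loop-carried dependencies; modules that accumulate into a single variable need a reduction clause instead.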


CUDA learning (2): a simple parallel CUDA program


Now we use a function add(), with this code:
__global__ void add(int *a, int *b, int *c) {
    *c = *a + *b;
}

add() runs on the device, so a, b, and c must point to device memory.

Host code cannot dereference those pointers, but it can allocate and manage memory on the GPU:

we use cudaMalloc(), cudaFree(), and cudaMemcpy() to handle device memory.

Now here is a simple, complete program:

#include <stdio.h>

__global__ void add(int *a, int *b, int *c)
{
    *c = *a + *b;
}

int main(void)
{
    int a, b, c;             // host copies of a, b, c
    int *d_a, *d_b, *d_c;    // device copies of a, b, c
    int size = sizeof(int);

    // Allocate space for device copies of a, b, c
    cudaMalloc((void **)&d_a, size);
    cudaMalloc((void **)&d_b, size);
    cudaMalloc((void **)&d_c, size);

    a = 2;
    b = 7;
    cudaMemcpy(d_a, &a, size, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, &b, size, cudaMemcpyHostToDevice);

    // Launch add() kernel on GPU
    add<<<1,1>>>(d_a, d_b, d_c);

    // Copy result back to host
    cudaMemcpy(&c, d_c, size, cudaMemcpyDeviceToHost);

    // Cleanup
    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    return 0;
}

So how do we run code in parallel on the device?

Change "add<<< 1, 1 >>>();" to "add<<< N, 1 >>>();": instead of executing add() once, this executes it N times in parallel, where N is the number of blocks.

Vector Addition on the Device

Terminology: each parallel invocation of add() is referred to as a block. Each invocation can refer to its block index using blockIdx.x.

Then we change the add() function:

__global__ void add(int *a, int *b, int *c) {
    c[blockIdx.x] = a[blockIdx.x] + b[blockIdx.x];
}

 

Since a, b, and c are now three arrays of N ints, main() has to change as well: allocate N * sizeof(int) on the device, copy whole arrays back and forth, and launch N blocks.

We can use a helper function such as random_ints() to fill the input arrays; a sketch follows below.
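A minimal sketch of the updated program, assuming N = 512 and a user-supplied random_ints() helper (it is not a CUDA library function) that fills an array with random integers:

#include <stdio.h>
#include <stdlib.h>

#define N 512

__global__ void add(int *a, int *b, int *c)
{
    c[blockIdx.x] = a[blockIdx.x] + b[blockIdx.x];
}

// hypothetical helper (not part of CUDA): fill an array with random ints
void random_ints(int *x, int n)
{
    int i;
    for (i = 0; i < n; i++)
        x[i] = rand() % 100;
}

int main(void)
{
    int *a, *b, *c;          // host copies of a, b, c
    int *d_a, *d_b, *d_c;    // device copies of a, b, c
    int size = N * sizeof(int);

    // Allocate space for device copies of a, b, c
    cudaMalloc((void **)&d_a, size);
    cudaMalloc((void **)&d_b, size);
    cudaMalloc((void **)&d_c, size);

    // Allocate space for host copies and set up input values
    a = (int *)malloc(size); random_ints(a, N);
    b = (int *)malloc(size); random_ints(b, N);
    c = (int *)malloc(size);

    // Copy inputs to device
    cudaMemcpy(d_a, a, size, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, b, size, cudaMemcpyHostToDevice);

    // Launch add() kernel on GPU with N blocks
    add<<<N,1>>>(d_a, d_b, d_c);

    // Copy result back to host and check one element
    cudaMemcpy(c, d_c, size, cudaMemcpyDeviceToHost);
    printf("c[0] = %d + %d = %d\n", a[0], b[0], c[0]);

    // Cleanup
    free(a); free(b); free(c);
    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    return 0;
}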
