CUDA Learning (2) – A Simple Parallel CUDA Program


Now we use an add() function, with the following code:

__global__ void add(int *a, int *b, int *c) {
    *c = *a + *b;
}

add() runs on the device, so a, b, and c must point to device memory.

Host code cannot dereference device pointers, but it can allocate memory on the GPU:

we use cudaMalloc(), cudaFree(), and cudaMemcpy() to manage device memory, much like malloc(), free(), and memcpy() on the host.

Here is a complete simple program:

#include <stdio.h>

__global__ void add(int *a, int *b, int *c)
{
    *c = *a + *b;
}

int main(void)
{
    int a, b, c;            // host copies of a, b, c
    int *d_a, *d_b, *d_c;   // device copies of a, b, c
    int size = sizeof(int);

    // Allocate space for device copies of a, b, c
    cudaMalloc((void **)&d_a, size);
    cudaMalloc((void **)&d_b, size);
    cudaMalloc((void **)&d_c, size);

    // Set up input values and copy them to the device
    a = 2;
    b = 7;
    cudaMemcpy(d_a, &a, size, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, &b, size, cudaMemcpyHostToDevice);

    // Launch add() kernel on GPU
    add<<<1,1>>>(d_a, d_b, d_c);

    // Copy result back to host and print it
    cudaMemcpy(&c, d_c, size, cudaMemcpyDeviceToHost);
    printf("%d + %d = %d\n", a, b, c);

    // Cleanup
    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    return 0;
}
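None of the CUDA calls above check for errors, so a failed allocation or kernel launch would go unnoticed. A common pattern (a sketch, not part of the original program) is a small checking macro around each runtime call:

```cuda
#include <stdio.h>
#include <stdlib.h>

// Abort with a message if a CUDA runtime call fails.
#define CUDA_CHECK(call)                                              \
    do {                                                              \
        cudaError_t err = (call);                                     \
        if (err != cudaSuccess) {                                     \
            fprintf(stderr, "CUDA error at %s:%d: %s\n",              \
                    __FILE__, __LINE__, cudaGetErrorString(err));     \
            exit(EXIT_FAILURE);                                       \
        }                                                             \
    } while (0)

// Usage:
//   CUDA_CHECK(cudaMalloc((void **)&d_a, size));
//   add<<<1,1>>>(d_a, d_b, d_c);
//   CUDA_CHECK(cudaGetLastError());       // catches launch errors
//   CUDA_CHECK(cudaDeviceSynchronize());  // catches execution errors
```

Kernel launches themselves return nothing, which is why cudaGetLastError() and cudaDeviceSynchronize() are checked right after the launch.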

So how do we run code in parallel on the device?

Change `add<<<1,1>>>()` to `add<<<N,1>>>()`: instead of executing add() once, this executes it N times in parallel, where N is the number of blocks.

Vector Addition on the Device

Terminology: each parallel invocation of add() is referred to as a block. Each invocation can refer to its own block index using blockIdx.x.

Then we change the add() kernel so that each block handles one element:

__global__ void add(int *a, int *b, int *c) {
    c[blockIdx.x] = a[blockIdx.x] + b[blockIdx.x];
}


Since a, b, and c now refer to arrays of N elements, main() has to change as well: we must allocate and copy N integers instead of one, and fill the input arrays, for example with a helper such as random_ints().
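Sketched below is one way the parallel main() could look. Note that random_ints() is not a CUDA library function; it is an assumed helper that we define ourselves to fill an array with random integers.

```cuda
#include <stdio.h>
#include <stdlib.h>

#define N 512

__global__ void add(int *a, int *b, int *c) {
    // Each block computes one element of the result.
    c[blockIdx.x] = a[blockIdx.x] + b[blockIdx.x];
}

// Assumed helper: fill an array with random integers.
void random_ints(int *p, int n) {
    for (int i = 0; i < n; i++)
        p[i] = rand() % 100;
}

int main(void) {
    int *a, *b, *c;            // host copies of a, b, c
    int *d_a, *d_b, *d_c;      // device copies of a, b, c
    int size = N * sizeof(int);

    // Allocate space for device copies of a, b, c
    cudaMalloc((void **)&d_a, size);
    cudaMalloc((void **)&d_b, size);
    cudaMalloc((void **)&d_c, size);

    // Allocate space for host copies and set up input values
    a = (int *)malloc(size); random_ints(a, N);
    b = (int *)malloc(size); random_ints(b, N);
    c = (int *)malloc(size);

    // Copy inputs to device
    cudaMemcpy(d_a, a, size, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, b, size, cudaMemcpyHostToDevice);

    // Launch add() kernel on GPU with N blocks
    add<<<N,1>>>(d_a, d_b, d_c);

    // Copy result back to host
    cudaMemcpy(c, d_c, size, cudaMemcpyDeviceToHost);

    // Cleanup
    free(a); free(b); free(c);
    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    return 0;
}
```

Compared with the single-element version, the only changes are the array allocations, the N-element copies, and launching with N blocks instead of one.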
