92 changes: 86 additions & 6 deletions README.md
Original file line number Diff line number Diff line change
@@ -2,13 +2,93 @@ CUDA Stream Compaction
======================

**University of Pennsylvania, CIS 565: GPU Programming and Architecture, Project 2**

* Tom Donnelly
* [LinkedIn](https://www.linkedin.com/in/the-tom-donnelly/)
* Tested on: Windows 11, AMD Ryzen 9 5900X, NVIDIA GeForce RTX 3070 (Personal Desktop)

### Description
An implementation and comparison of scan and stream compaction algorithms: CPU, naive, work-efficient, and Thrust implementations on the CPU and GPU.

### Questions and Analysis

## Roughly optimize the block sizes of each of your implementations for minimal run time on your GPU.
![](img/naive_blocksize.png)
![](img/Efficent_block_size.png)
Average runtimes were taken over 5 runs on an array of 1,048,576 elements. A block size of 128 was chosen for the naive implementation and a block size of 64 for the work-efficient implementation.

## Comparison
![](img/pot_scan.png)
![](img/npot_scan.png)
![](img/compact.png)
After around 1,048,576 elements, the CPU implementation of scan falls behind, taking the most time of any implementation. The fastest implementation is Thrust, followed by the work-efficient and naive implementations, respectively. This trend holds for both power-of-two and non-power-of-two array sizes. The GPU compact implementation is much faster than the CPU version at larger array sizes, achieving a 5.7x speedup at 268,435,456 elements.
Thrust appears to call an asynchronous malloc and memcpy in CUDA to initialize its device vectors. It then calls DeviceScanInitKernel and DeviceScanKernel to perform the exclusive scan, and likely uses shared memory to perform the parallel operations quickly.
### Explanation
All CPU implementations are serial: operations run one after another and are constrained by the speed of a single CPU thread. The memory operations are quick, so the CPU version works well for small arrays but falls off very quickly for larger ones. The naive implementation is limited by its access pattern: it launches the maximum number of threads at every offset and is bound by the number of additions performed. The work-efficient implementation launches the minimum number of threads and is likely limited by global-memory I/O on the GPU; it could be improved by using shared memory. The Thrust implementation is the fastest and is most likely hardware limited.

```

****************
** SCAN TESTS **
****************
[ 39 5 12 8 25 17 10 49 5 23 16 21 3 ... 34 0 ]
==== cpu scan, power-of-two ====
elapsed time: 0.0006ms (std::chrono Measured)
[ 0 39 44 56 64 89 106 116 165 170 193 209 230 ... 6408 6442 ]
==== cpu scan, non-power-of-two ====
elapsed time: 0.0003ms (std::chrono Measured)
[ 0 39 44 56 64 89 106 116 165 170 193 209 230 ... 6372 6383 ]
passed
==== naive scan, power-of-two ====
elapsed time: 0.077664ms (CUDA Measured)
[ 0 39 44 56 64 89 106 116 165 170 193 209 230 ... 6408 6442 ]
passed
==== naive scan, non-power-of-two ====
elapsed time: 0.082944ms (CUDA Measured)
passed
==== work-efficient scan, power-of-two ====
elapsed time: 0.169984ms (CUDA Measured)
[ 0 39 44 56 64 89 106 116 165 170 193 209 230 ... 6408 6442 ]
passed
==== work-efficient scan, non-power-of-two ====
elapsed time: 0.140256ms (CUDA Measured)
[ 0 39 44 56 64 89 106 116 165 170 193 209 230 ... 6372 6383 ]
passed
==== thrust scan, power-of-two ====
elapsed time: 0.052992ms (CUDA Measured)
[ 0 39 44 56 64 89 106 116 165 170 193 209 230 ... 6408 6442 ]
passed
==== thrust scan, non-power-of-two ====
elapsed time: 0.070656ms (CUDA Measured)
passed

*****************************
** STREAM COMPACTION TESTS **
*****************************
[ 3 3 2 0 1 3 0 3 3 3 0 1 1 ... 2 0 ]
==== cpu compact without scan, power-of-two ====
elapsed time: 0.0006ms (std::chrono Measured)
[ 3 3 2 1 3 3 3 3 1 1 1 1 1 ... 1 2 ]
passed
==== cpu compact without scan, non-power-of-two ====
elapsed time: 0.0008ms (std::chrono Measured)
[ 3 3 2 1 3 3 3 3 1 1 1 1 1 ... 3 1 ]
passed
==== cpu compact with scan ====
elapsed time: 0.0014ms (std::chrono Measured)
[ 3 3 2 1 3 3 3 3 1 1 1 1 1 ... 1 2 ]
passed
==== work-efficient compact, power-of-two ====
elapsed time: 0.16896ms (CUDA Measured)
passed
==== work-efficient compact, non-power-of-two ====
elapsed time: 0.218112ms (CUDA Measured)
passed
Press any key to continue . . .
```


* (TODO) YOUR NAME HERE
* (TODO) [LinkedIn](), [personal website](), [twitter](), etc.
* Tested on: (TODO) Windows 22, i7-2222 @ 2.22GHz 22GB, GTX 222 222MB (Moore 2222 Lab)

### (TODO: Your README)

Include analysis, etc. (Remember, this is public, so don't put
anything here that you don't want to share with the world.)

Binary file added img/Efficent_block_size.png
Binary file added img/compact.png
Binary file added img/naive_blocksize.png
Binary file added img/npot_scan.png
Binary file added img/pot_scan.png
8 changes: 4 additions & 4 deletions src/main.cpp
@@ -51,7 +51,7 @@ int main(int argc, char* argv[]) {
printDesc("naive scan, power-of-two");
StreamCompaction::Naive::scan(SIZE, c, a);
printElapsedTime(StreamCompaction::Naive::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(SIZE, c, true);
printArray(SIZE, c, true);
printCmpResult(SIZE, b, c);

/* For bug-finding only: Array of 1s to help find bugs in stream compaction or scan
@@ -71,21 +71,21 @@ int main(int argc, char* argv[]) {
printDesc("work-efficient scan, power-of-two");
StreamCompaction::Efficient::scan(SIZE, c, a);
printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(SIZE, c, true);
printArray(SIZE, c, true);
printCmpResult(SIZE, b, c);

zeroArray(SIZE, c);
printDesc("work-efficient scan, non-power-of-two");
StreamCompaction::Efficient::scan(NPOT, c, a);
printElapsedTime(StreamCompaction::Efficient::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(NPOT, c, true);
printArray(NPOT, c, true);
printCmpResult(NPOT, b, c);

zeroArray(SIZE, c);
printDesc("thrust scan, power-of-two");
StreamCompaction::Thrust::scan(SIZE, c, a);
printElapsedTime(StreamCompaction::Thrust::timer().getGpuElapsedTimeForPreviousOperation(), "(CUDA Measured)");
//printArray(SIZE, c, true);
printArray(SIZE, c, true);
printCmpResult(SIZE, b, c);

zeroArray(SIZE, c);
25 changes: 23 additions & 2 deletions stream_compaction/common.cu
@@ -18,12 +18,22 @@ void checkCUDAErrorFn(const char *msg, const char *file, int line) {
namespace StreamCompaction {
namespace Common {

__device__ int getIndex()
{
return threadIdx.x + (blockIdx.x * blockDim.x);
}

/**
* Maps an array to an array of 0s and 1s for stream compaction. Elements
* which map to 0 will be removed, and elements which map to 1 will be kept.
*/
__global__ void kernMapToBoolean(int n, int *bools, const int *idata) {
// TODO
int index = threadIdx.x + (blockIdx.x * blockDim.x);
if (index >= n)
{
return;
}
bools[index] = (idata[index] != 0);
}

/**
@@ -32,7 +42,18 @@ namespace StreamCompaction {
*/
__global__ void kernScatter(int n, int *odata,
const int *idata, const int *bools, const int *indices) {
// TODO
int index = threadIdx.x + (blockIdx.x * blockDim.x);
if (index >= n)
{
return;
}
if (bools[index] == 1)
{
int new_index = indices[index];
odata[new_index] = idata[index];
}


}

}
1 change: 1 addition & 0 deletions stream_compaction/common.h
@@ -32,6 +32,7 @@ inline int ilog2ceil(int x) {

namespace StreamCompaction {
namespace Common {
__device__ int getIndex();
__global__ void kernMapToBoolean(int n, int *bools, const int *idata);

__global__ void kernScatter(int n, int *odata,
65 changes: 60 additions & 5 deletions stream_compaction/cpu.cu
@@ -19,7 +19,17 @@ namespace StreamCompaction {
*/
void scan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
// TODO
odata[0] = idata[0];
for (int k = 1; k < n; k++)
{
odata[k] = odata[k - 1] + idata[k];
}
// shift right to convert the inclusive scan to an exclusive scan
for (int i = n - 1; i > 0; i--)
{
odata[i] = odata[i - 1];
}
odata[0] = 0;
timer().endCpuTimer();
}

@@ -30,9 +40,17 @@ namespace StreamCompaction {
*/
int compactWithoutScan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
// TODO
int num_elements = 0;
for (int i = 0; i < n; i++)
{
if (idata[i] != 0)
{
odata[num_elements] = idata[i];
num_elements++;
}
}
timer().endCpuTimer();
return -1;
return num_elements;
}

/**
@@ -42,9 +60,46 @@ namespace StreamCompaction {
*/
int compactWithScan(int n, int *odata, const int *idata) {
timer().startCpuTimer();
// TODO
int* temp_array = new int[n];
for (int i = 0; i < n; i++)
{
if (idata[i] != 0)
{
temp_array[i] = 1;
}
else
{
temp_array[i] = 0;
}
}
// inclusive scan of the boolean flags
odata[0] = temp_array[0];
for (int k = 1; k < n; k++)
{
odata[k] = odata[k - 1] + temp_array[k];
}
// shift right to convert the inclusive scan to an exclusive scan
for (int i = n - 1; i > 0; i--)
{
odata[i] = odata[i - 1];
}
odata[0] = 0;

// Scatter in place: the write index odata[i] is always <= i, so each
// scan value is only overwritten after it has been read
int num_elements = 0;
for (int i = 0; i < n; i++)
{
if (temp_array[i] == 1)
{
odata[odata[i]] = idata[i];
num_elements++;
}
}
delete[] temp_array;


timer().endCpuTimer();
return -1;
return num_elements;
}
}
}