
Thursday, May 19, 2016

mixbench on an AMD Fiji GPU

Recently, I had the pleasant opportunity of being granted a Radeon R9 Nano GPU card. This card features the Fiji GPU and seems to be a compute beast, as it sports 4096 shader units and HBM memory with bandwidth reaching 512GB/sec. Considering the card's remarkably small size and low power consumption, it proves to be a great and efficient device for handling parallel compute tasks via OpenCL (or HIP, but more on this in a later post).

AMD R9 Nano GPU card

One of the first experiments I tried on it was, of course, the mixbench microbenchmark tool. The execution results, plotted with gnuplot on the memory bandwidth/compute throughput plane, are depicted here:

mixbench-ocl-ro as executed on the R9 Nano
GPU performance effectively approaches 8 TeraFlops of single precision compute on heavily compute-intensive kernels, whereas it exceeds 450GB/sec of memory bandwidth on memory-oriented kernels.
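These figures are close to the card's theoretical limits, which follow from its nominal specifications. A back-of-the-envelope sketch (assuming the commonly quoted specs: 4096 shaders at a ~1000 MHz boost clock, and a 4096-bit HBM interface at 500 MHz DDR, i.e. 1 GT/s per pin):

```python
# Theoretical peak figures for the R9 Nano (assumed nominal specs).
shaders = 4096
clock_ghz = 1.0          # boost clock, in GHz
flops_per_cycle = 2      # one fused multiply-add per shader per cycle

peak_gflops = shaders * flops_per_cycle * clock_ghz
print(f"Peak SP compute: {peak_gflops / 1000:.3f} TFLOPS")   # 8.192 TFLOPS

bus_bits = 4096
transfer_rate_gtps = 1.0  # 500 MHz DDR -> 1 GT/s per pin
peak_bw_gbps = bus_bits / 8 * transfer_rate_gtps
print(f"Peak memory bandwidth: {peak_bw_gbps:.0f} GB/sec")   # 512 GB/sec
```

The measured ~8 TFLOPS and ~450GB/sec thus correspond to near-peak compute and roughly 88% of peak bandwidth.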

For anyone interested in trying mixbench on their CUDA/OpenCL/HIP GPU please follow the link to github:
https://github.com/ekondis/mixbench

Here is an example of execution on Ubuntu Linux:



Acknowledgement: I would like to thank the Radeon Open Compute department of AMD for kindly supplying the Radeon R9 Nano GPU card in support of our research.

Tuesday, March 17, 2015

ISA reference guide for Volcanic Islands architecture

A new GPU ISA manual is available for AMD GCN 3rd generation GPUs. It probably covers the Tonga GPU and the Carrizo APU, as it mentions that context switching is a new capability of the architecture.

You may download it here:

http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2013/07/AMD_GCN3_Instruction_Set_Architecture.pdf

Sunday, October 5, 2014

Least required GPU parallelism for kernel executions

GPUs require a vast number of threads per kernel invocation in order to utilize all of their execution units. At first thought, one might spawn just as many threads as there are shader units (a.k.a. CUDA cores or Processing Elements). However, this is not enough; the scheduling scheme must also be taken into account. Scheduling within a Compute Unit is performed by multiple schedulers, which in effect restricts the group of shader units on which a thread can execute. For instance, each Fermi SM consists of 32 shader units, but at least 64 threads are required: the SM contains 2 schedulers, the first of which can issue threads only to one group of 16 shader units and the second only to the other group. Thus a greater number of threads is needed. What about the other GPUs? What is the minimum number of threads required to enable all shader units? The answer lies in the schedulers of the compute units of each GPU architecture.

NVidia Fermi GPUs


Each SM (Compute Unit) consists of 2 schedulers. Each scheduler handles 32 threads (the WARP size), thus 2x32=64 threads are the minimum required per SM. For instance, a GTX480 with 15 CUs requires at least 960 active threads.

NVidia Kepler GPUs

Each SM (Compute Unit) consists of 4 schedulers. Each scheduler handles 32 threads (the WARP size), thus 4x32=128 threads are the minimum required per SM. A GTX660 with 8 CUs requires at least 1024 active threads.

In addition, more independent instructions are required in the instruction stream (instruction level parallelism) in order to utilize the extra 64 shaders of each CU (192 shaders in total per CU).

NVidia Maxwell GPUs

Same as Kepler: 4 schedulers per SM, each handling a 32-thread WARP, i.e. 128 threads per SM. A GTX980 with 16 CUs requires at least 2048 active threads.

The extra requirement for instruction independence does not apply here, as each Maxwell CU features exactly 128 shader units (one per thread).

AMD GCN GPUs

For AMD GCN units the requirement is even more demanding. Each CU scheduler handles threads in four groups, one per SIMD unit, which is effectively like having 4 schedulers per CU. Furthermore, the unit of thread execution is the 64-thread wavefront instead of the 32-thread WARP. Therefore each CU requires at least 4x64=256 threads. For instance, an R9-280X with 32 CUs requires a vast total of 8192 threads! This helps explain why, in many research papers, AMD GPUs fail to stand against NVidia GPUs for small problem sizes, where the number of active threads is insufficient.
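All of the per-architecture figures above reduce to one simple formula: CUs x schedulers per CU x threads per scheduling unit. A minimal sketch, using the scheduler counts and WARP/wavefront sizes stated in this post:

```python
# Minimum resident threads needed to touch every shader unit:
# number of CUs x schedulers per CU x threads per scheduling unit
# (32-thread WARP on NVidia, 64-thread wavefront on GCN).
def min_threads(cus, schedulers_per_cu, unit_size):
    return cus * schedulers_per_cu * unit_size

print(min_threads(15, 2, 32))   # Fermi GTX480    -> 960
print(min_threads(8, 4, 32))    # Kepler GTX660   -> 1024
print(min_threads(16, 4, 32))   # Maxwell GTX980  -> 2048
print(min_threads(32, 4, 64))   # GCN R9-280X     -> 8192
```

Note that this is only the floor for touching every shader unit; hiding memory and pipeline latencies typically requires several times more threads.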

Wednesday, October 23, 2013

AMD "Hawaii" compute performance extrapolation

Here is a graph of the theoretical peak performance of the current top AMD GPUs. These include the Tahiti GPU, known from the HD-7970, and the soon-to-be-released Hawaii GPU, the heart of the AMD R9-290X and R9-290. In this extrapolation each compute element in the GPU is assumed to perform 2 floating point operations per clock, i.e. 1 MAD (multiply-add) operation per clock.
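The extrapolation itself is straightforward; a sketch of the underlying arithmetic, assuming 2048 shaders for Tahiti (HD-7970) and 2816 shaders for Hawaii (the figure reported for the R9-290X), with an illustrative range of clock frequencies:

```python
# Peak single-precision throughput: shaders x 2 FLOPs/clock (1 MAD) x clock.
def peak_gflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1000.0

# Sweep a range of plausible operating frequencies for both GPUs.
for clock_mhz in (800, 900, 1000, 1100):
    print(f"{clock_mhz} MHz: "
          f"Tahiti {peak_gflops(2048, clock_mhz):.0f} GFLOPS, "
          f"Hawaii {peak_gflops(2816, clock_mhz):.0f} GFLOPS")
```

At 1000 MHz this gives 4096 GFLOPS for Tahiti and 5632 GFLOPS for Hawaii.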


Each vendor will probably provide cards operating at different frequencies, so this diagram could be helpful for anybody who intends to buy a new card for compute.

Saturday, May 11, 2013

Volcanic Islands to erupt at the end of the year?

Update:
Sadly, this post is probably based on an old speculation posted in a forum and not on reliable information. For more info click here:
http://semiaccurate.com/2013/05/20/amds-volcanic-islands-architecture/

Fascinating GPU disclosures seem to lie ahead. AMD is rumored to unveil a new GPU architecture (named Volcanic Islands) in Q4 2013 which, according to the purportedly leaked diagram seen below, will contain a vast number of parallel compute elements (4096) plus 16 serial processors. Hopefully, these serial processors will alleviate the serial-code execution bottlenecks evident on GPUs. Thus, GPU compute could be further adopted for algorithms containing interleaved serial parts.

Volcanic Islands (Serial Processing Modules and Parallel Compute Modules)
Volcanic Islands block diagram


It is not known whether this information is accurate, but it highlights the trend of GPUs towards compute. Whatever the truth is, I hope it will push AMD's rival to strengthen the GPU computing capabilities of its desktop products, which proved to be weak in its last generation in favor of gaming and efficiency.