On GPUs, streaming memory loads are typically arranged by the programmer so that consecutive threads access consecutive addresses, in order to enforce coalescing. CPUs, on the other hand, tend to favor having each individual thread access sequential addresses, which increases spatial locality and makes better use of the cache. Are there any intermediate patterns between these two extremes? And how would a CPU or GPU device behave under such an intermediate scheme?

Here is where the clmempatterns benchmark tool comes into play. This benchmark leverages OpenCL to explore memory performance under different access stride selections. A memory space is benchmarked by applying every access stride that is an integer power of 2, up to the total number of threads. For example, imagine a simplified scenario where we have to access 16 elements using a total of 4 threads. For CPUs a good choice is typically unit-stride accesses, as shown in the figure below.

Accessing memory with unit strides

Accessing memory with strides of 4, which equals the total number of threads

However, there are many intermediate cases where we can apply various strides. In this simplified example we could apply strides of 2 as shown below:

Accessing memory with strides of 2
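These patterns can be made concrete with a tiny sketch. The Python snippet below is purely illustrative (the function name and its layout are my own, not taken from the benchmark sources): it lists which of the 16 elements each of the 4 threads visits for strides 1, 2 and 4.

```python
def accesses(thread, stride, n_elements=16, n_threads=4):
    """Elements visited by one thread, in access order, for a power-of-two stride."""
    per_thread = n_elements // n_threads
    low = thread % stride    # thread-index bits below the stride position
    high = thread // stride  # thread-index bits above the stride position
    return [high * stride * per_thread + i * stride + low
            for i in range(per_thread)]

for s in (1, 2, 4):
    print(f"stride {s}:", [accesses(t, s) for t in range(4)])
```

With stride 1 each thread scans a contiguous block (the CPU-friendly extreme), with stride 4 consecutive threads touch consecutive addresses on every access (the GPU-friendly extreme), and stride 2 sits in between.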

In real examples there can be tens of different cases, since the memory space can be large, each case applying a different power-of-two stride. If we assume a total of N elements, where N is an exact power of 2, then exactly log_{2}(N) bits are required to address them. For instance, if 2^{26} elements are accessed then 26 bits are required for indexing these elements. Having a smaller number of threads to process these elements, e.g. 2^{20}, yields a thread index space of 20 bits in total. So, using a stride equal to the size of the whole thread index space leads to the following representation of the element space:

b_{25} | b_{24} | b_{23} | b_{22} | b_{21} | b_{20} | b_{19} | … | b_{01} | b_{00}

Each cell represents a bit; these 26 bits constitute the whole element space. The red bits (here the upper bits b_{25}–b_{20}) represent the part of the address that is designated by the thread stride, and the green bits (b_{19}–b_{00}) are designated by the thread index. This means that each thread uses its thread index to fill in the green part of the address and thereafter enumerates sequentially each possible value of the red part, performing a memory access for each resulting element address.
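This decomposition can be sketched in a few lines of Python (illustrative names, not code from the benchmark): the element index is assembled from the thread index, with the sequentially enumerated bits inserted at a bit offset equal to log2 of the stride. A smaller space is used here so that the bijection is cheap to check.

```python
def element_index(thread, step, shift, log_threads, log_elements):
    """Build an element index: thread bits (green) with `step` (red) at bit offset `shift`."""
    red_bits = log_elements - log_threads   # sequentially enumerated bits per thread
    low = thread & ((1 << shift) - 1)       # thread bits below the red part
    high = thread >> shift                  # thread bits above the red part
    return (high << (shift + red_bits)) | (step << shift) | low

# Every (thread, step) pair must map to a distinct element, for every shift.
log_elements, log_threads = 8, 5            # 256 elements, 32 threads
for shift in range(log_threads + 1):
    touched = {element_index(t, i, shift, log_threads, log_elements)
               for t in range(1 << log_threads)
               for i in range(1 << (log_elements - log_threads))}
    assert touched == set(range(1 << log_elements))
print("each shift covers every element exactly once")
```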

Of course there are other intermediate cases, shown below, that are tested by the benchmark:

Bit layouts for the intermediate cases: the red (stride-designated) group of bits is shifted step by step toward the least significant end of the 26-bit element index

Each one corresponds to a different shift of the red part within the whole address representation. The last one is the other extreme case, typically used on CPUs, where each thread accesses elements residing at sequential addresses.

So, what would the memory access bandwidth be in each of these cases? This is the purpose of the clmempatterns benchmark tool. In the figure below you can see memory bandwidth measurements obtained with this tool by accessing 64M int elements using 1M threads in total on a GTX-1060 GPU. As seen, any power-of-two stride of 32 or greater leads to good memory bandwidth, likely because the 32 threads of each warp then access consecutive elements, allowing fully coalesced transactions.

clmempatterns benchmark execution on GTX-1060 (64M int elements, 1M grid size, 256 workitems/workgroup, granularity: 64 elements/workitem)
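The 32-element threshold is consistent with a simple coalescing model: once the stride reaches the warp width, a warp's 32 threads fall entirely into the bits below the red part, so they read consecutive elements. The sketch below is such a model with assumed parameters (4-byte int elements, 128-byte transactions, 32-thread warps; real hardware granularities may differ), not output from clmempatterns:

```python
ELEM = 4      # bytes per int element (assumed)
SEG = 128     # bytes per memory transaction (assumed)

def index(thread, step, shift, log_n=26, log_t=20):
    """Element index for the 64M-element / 1M-thread configuration."""
    red_bits = log_n - log_t
    low = thread & ((1 << shift) - 1)
    return ((thread >> shift) << (shift + red_bits)) | (step << shift) | low

def warp_transactions(shift, warp=0, step=0):
    """Distinct 128-byte segments touched by one 32-thread warp in a single access step."""
    addrs = [index(32 * warp + j, step, shift) * ELEM for j in range(32)]
    return len({a // SEG for a in addrs})

for shift in range(21):
    print(f"stride 2^{shift:<2}: {warp_transactions(shift):2} transactions per warp access")
```

Under this model, strides of 2^5 = 32 and above need a single transaction per warp access, matching the measured plateau.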

The tool is open source and you may freely experiment with it. I would be glad to hear about any interesting results you might get.

URL: https://github.com/ekondis/clmempatterns
