Xraytrace documentation

raytracing software for x-ray standing wave calculations

GPU use

Warning: graphics card use is still in the evaluation phase, so it is reasonable to check every calculation on the CPU first (e.g. for a single angle) and only then run the full graphics card calculation.

Using a graphics card (GPU, graphics processing unit) is the cheapest way to speed up calculations that consist of repeating a simple algorithm many times. It is particularly efficient when the individual runs of the algorithm can be made independent, so that no calculation depends on the result of another. Raytracing is exactly such a case. A graphics card is a multiprocessor device that performs many calculations at the same time using a single-instruction, multiple-data (SIMD) architecture. The only difficulty in implementing raytracing algorithms on a graphics card is recursion, since the simplest raytracing algorithms are usually written recursively. Recursion is elegant, but it makes it hard to determine in advance the stack memory needed for the computation. For this reason we have implemented the raytracing on the GPU iteratively, using an explicit LIFO stack.
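The recursive-to-iterative rewrite can be illustrated with a minimal sketch (in Python for readability; the actual Xraytrace kernel is CUDA code and its ray and surface data structures differ). All names and the toy scene below are illustrative, not part of Xraytrace:

```python
from dataclasses import dataclass
from typing import Optional

MAX_DEPTH = 5  # maximum number of secondary rays followed per primary ray

@dataclass
class Hit:
    local_intensity: float  # intensity contribution at this surface
    reflectivity: float     # fraction of intensity carried by the reflected ray
    reflected_ray: int      # the secondary ray spawned at this hit

def intersect(ray: int) -> Optional[Hit]:
    # Toy "scene" for illustration: each ray hits a surface contributing
    # intensity 1.0 and reflecting 50% of the intensity into the next ray.
    if ray > 10:
        return None
    return Hit(local_intensity=1.0, reflectivity=0.5, reflected_ray=ray + 1)

def trace_recursive(ray: int, depth: int = 0) -> float:
    """Classical recursive formulation: elegant, but the call-stack depth
    needed on a GPU is hard to bound."""
    if depth >= MAX_DEPTH:
        return 0.0
    hit = intersect(ray)
    if hit is None:
        return 0.0
    return hit.local_intensity + hit.reflectivity * trace_recursive(
        hit.reflected_ray, depth + 1)

def trace_iterative(primary_ray: int) -> float:
    """Same computation with an explicit LIFO stack: each entry holds a ray,
    the accumulated weight along its path, and its depth."""
    total = 0.0
    stack = [(primary_ray, 1.0, 0)]  # (ray, weight, depth)
    while stack:
        ray, weight, depth = stack.pop()  # LIFO: last pushed, first traced
        if depth >= MAX_DEPTH:
            continue
        hit = intersect(ray)
        if hit is None:
            continue
        total += weight * hit.local_intensity
        stack.append((hit.reflected_ray, weight * hit.reflectivity, depth + 1))
    return total
```

Because the stack is an ordinary array of fixed maximum size, its memory footprint per ray is known at compile time, which is what makes this formulation suitable for a GPU kernel.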

Xraytrace supports Nvidia graphics cards via the CUDA computing environment. To switch graphics card use on, we can use this command

GPU
1

and by using 0 we would switch it off. If there are multiple graphics cards in the system, we can choose which one will run the calculation, e.g. by using the command

UGPU
1

which means that the GPU with index 1 will be used (the second GPU in the system, as GPUs are numbered from 0).
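Put together, a parameter file that runs the calculation on the second GPU in the system would contain these two settings (shown here in isolation; the remaining parameter-file entries are omitted):

```
GPU
1
UGPU
1
```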

As an example, here is the parameter file that was used to benchmark different GPU settings. By varying the CPU and GPU use and the number of rays, we can obtain a graph of the calculation times (the exact values depend on the actual processor and graphics card speed).

Dividing the CPU time by the GPU time, we obtain the speedup factor.
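The speedup factor is just the ratio of wall-clock times for the same calculation; a trivial sketch with hypothetical timing numbers (not taken from the benchmark above):

```python
# Speedup factor: ratio of CPU and GPU wall-clock times for the same run.
cpu_time = 120.0  # seconds on the CPU (hypothetical value)
gpu_time = 4.0    # seconds on the GPU for the same run (hypothetical value)

speedup = cpu_time / gpu_time
print(f"speedup factor: {speedup:.1f}x")  # 30.0x with these numbers
```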

gpu_use.txt · Last modified: 2018/01/26 19:05 by pklapetek