Xraytrace documentation

raytracing software for x-ray standing wave calculations

===== GPU use =====
  
//Warning: graphics card use is still in the evaluation phase, so it is reasonable to check every calculation on the CPU first (e.g. for a single angle) and only then run the graphics card calculation.//
  
Using a graphics card (GPU, graphics processing unit) is the cheapest way to speed up calculations that repeat a simple algorithm many times. It is most efficient when the individual runs of the algorithm can be made independent, so that no calculation depends on the result of another. Raytracing is exactly such a case. A graphics card is a multiprocessor device that performs many calculations at the same time using a single instruction, multiple data (SIMD) architecture. The only problem in implementing raytracing algorithms on a graphics card is recursion, since the simplest raytracing algorithms are usually written recursively. Recursion is elegant, but it makes it hard to determine the stack memory needed for the computation.
That is why we have implemented raytracing on the GPU iteratively, using a LIFO stack as shown below.
 +
{{ :refactorisation.png?600 |}}
 +
 +
Xraytrace supports Nvidia graphics cards via the CUDA computing environment.
To turn graphics card use on, we can run this command:
<code>
As an example, here is the {{samples:gpu.tar.gz|parameter file}} that was used to benchmark different GPU settings. If we vary the CPU and GPU use and the number of rays, we obtain a graph like this (depending on the actual processor and graphics card speed):
  
{{ :gpuscaling.png?500 |}}
  
Dividing the CPU time by the GPU time, we get the speedup factor:
  
{{ :gpuspeedup.png?480 |}}
  
gpu_use.1516981860.txt.gz · Last modified: 2018/01/26 16:51 by pklapetek