
Larrabee Cancelled - Ray Tracing Dead ?

Started by kevin, December 13, 2009, 11:22:31 PM

kevin


http://www.bit-tech.net/news/hardware/2009/12/07/intel-larrabee-cancelled/1

markel422

Is this how next generation game systems start? By introducing a more powerful graphics processor for developers to begin coding on every 5 or more years, with enough processing power to make the previous generation seem primitive?

I always wanted to know how much processing power is required for Ray Tracing. :)

kevin


   Texture mapped rasterizers have pretty much come full circle and are now (and have been for a while) treading water.  They started out as software rendering on general purpose CPUs, rushed into hardware implementations, then soon returned to specialized software rendering in the form of GPU shaders.

   The main problem with real time ray tracing is that the cost per pixel is non-linear, in particular when compared to a texture mapped pixel.   Texture mappers have a fixed cost per pixel (a transformation from texel space to screen space), so drawing a scanline of pixels has a pretty constant cost.   So the theory is, if we can make the render cores faster, the GPU can render more of these fixed cost pixels per second.  The faster this all gets, the more polygons we can draw per second, which ultimately lets developers get a better representation of a 3D scene.
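
   Just to make that "fixed cost" point concrete, here's a rough C++ sketch of an affine textured scanline. Every pixel is one texel fetch plus a couple of adds, no matter what's in the scene. The names (frameBuffer, texture, the gradient parameters) are made up for the example, not taken from any particular engine.

#include <cstdint>

// Illustrative affine textured scanline: the per-pixel work never changes.
void DrawTexturedScanline(uint32_t* frameBuffer, int screenWidth, int y,
                          int xStart, int xEnd,
                          const uint32_t* texture, int texWidth,
                          float u, float v, float duDx, float dvDx)
{
    uint32_t* dest = frameBuffer + y * screenWidth + xStart;
    for (int x = xStart; x < xEnd; ++x)
    {
        // Fixed cost: one texel fetch and two interpolator adds per pixel.
        int tx = static_cast<int>(u);
        int ty = static_cast<int>(v);
        *dest++ = texture[ty * texWidth + tx];
        u += duDx;
        v += dvDx;
    }
}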

   While texture mapping might have a fixed cost per pixel, we often have to draw the scene over and over and over again, so a lot of extra effort needs to go into the scene.   A good example of this occurs in reflections.   Effects like environment/cube mapping are very costly and require drawing the scene multiple times from the object's perspective before the scene can be drawn from the camera's perspective.   So the more objects with real reflections you have in the scene, the more brute force is needed.  The trouble is, a lot of that brute force work never actually reaches the screen, so it's just overhead for nothing.
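
   As a back-of-the-envelope illustration of that brute force (the object count and RenderScene() are hypothetical), each object with a dynamic cube map adds six full scene passes before the one pass the player actually sees:

#include <cstdio>

// Hypothetical stand-in for drawing the whole scene from some viewpoint.
static void RenderScene(const char* viewpoint) { std::printf("pass: %s\n", viewpoint); }

int main()
{
    const int reflectiveObjects = 4;   // objects with live cube map reflections
    int passes = 0;

    // Six cube map faces per reflective object, rendered before the main view.
    for (int obj = 0; obj < reflectiveObjects; ++obj)
        for (int face = 0; face < 6; ++face)
        {
            RenderScene("cube map face");
            ++passes;
        }

    RenderScene("main camera");        // the only pass that reaches the screen
    ++passes;

    std::printf("total scene passes this frame: %d\n", passes);   // prints 25
    return 0;
}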

   Ray tracing fires a ray from every pixel on the camera's viewport into the 3D scene, so we're modelling light, just backwards.  For each ray, we find the intersection point(s) with the scene geometry / light sources & reflections.   This is a pretty costly thing to do, so the render cost per pixel could be anything.   For example, if a ray hits a reflective surface inside a closed area, it'll bounce around, while on other surfaces it won't be reflected at all.  There's no real way of knowing ahead of time how much each pixel will cost.   There are lots of approximations that can be made to help reduce the cost per pixel..  Things like building better scene graphs / spatial partitioning, for example.
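
   Here's a hedged C++ sketch of that per-pixel loop.  The Scene/Hit types and the IntersectScene / ReflectRay / Shade helpers are just declarations standing in for real geometry and shading code; the point is that a pixel whose ray lands in a mirrored corner recurses several times, while its neighbour returns after one hit.

// Illustrative types only; a real tracer would flesh these out.
struct Vec3  { float x, y, z; };
struct Ray   { Vec3 origin, direction; };
struct Hit   { bool found; bool reflective; Vec3 point, normal, colour; };
struct Scene;

Hit  IntersectScene(const Scene& scene, const Ray& ray);  // closest hit along the ray
Ray  ReflectRay(const Ray& ray, const Hit& hit);          // bounce about the surface normal
Vec3 Shade(const Scene& scene, const Hit& hit);           // direct lighting at the hit point

Vec3 TraceRay(const Scene& scene, const Ray& ray, int depth)
{
    Hit hit = IntersectScene(scene, ray);
    if (!hit.found)
        return Vec3{0.0f, 0.0f, 0.0f};        // ray left the scene: a cheap pixel

    Vec3 colour = Shade(scene, hit);

    // A reflective surface spawns another ray; inside a closed mirrored area
    // this recursion is what makes one pixel far dearer than the next.
    if (hit.reflective && depth < 8)
    {
        Vec3 bounce = TraceRay(scene, ReflectRay(ray, hit), depth + 1);
        colour.x = 0.5f * (colour.x + bounce.x);
        colour.y = 0.5f * (colour.y + bounce.y);
        colour.z = 0.5f * (colour.z + bounce.z);
    }
    return colour;
}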

   All the same, it's easy to argue that there's no place for ray tracing, since all the GPU manufacturers have to do is keep making GPU cores faster.  Understatement of the century :)  And we can get a pretty good approximation of 3D via texture mapping.    But at some point, the cost/complexity starts to even out.

  Real time ray tracing is possible today on the CPU, but we don't really have enough cores to spread the workload across.  Ray tracing is one of those things that works well in parallel though, since multiple cores can each render different segments of the screen.   Dual and quad core chips aren't enough though.  We need arrays of them..
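
   A small sketch of that idea using std::thread, where each core gets its own horizontal strip of the screen.  TracePixel() here is a dummy stand-in for the per-pixel trace above, and the resolution is arbitrary.

#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

// Dummy stand-in for firing a ray through pixel (x, y) and shading it.
static uint32_t TracePixel(int x, int y) { return static_cast<uint32_t>((x ^ y) & 0xFF); }

int main()
{
    const int width = 640, height = 480;
    std::vector<uint32_t> frameBuffer(width * height);

    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;

    // One horizontal strip per core; rays are independent, so no locking
    // is needed while the strips are being rendered.
    for (unsigned c = 0; c < cores; ++c)
    {
        const int yStart = static_cast<int>(height * c / cores);
        const int yEnd   = static_cast<int>(height * (c + 1) / cores);
        workers.emplace_back([&frameBuffer, width, yStart, yEnd]() {
            for (int y = yStart; y < yEnd; ++y)
                for (int x = 0; x < width; ++x)
                    frameBuffer[y * width + x] = TracePixel(x, y);
        });
    }

    for (std::thread& w : workers)
        w.join();
    return 0;
}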