Intel pledges 80 cores in five years

Started by kevin, September 26, 2006, 04:40:28 PM


kevin

 erm... 80 cores :)

dead link removed

kevin

#1
Intel demo 80 core cpu


techy info


Ian Price

If current cards are pushing the same envelope, then surely there's a lot of wasted time/space/money in developing something like this?

Or it's bottlenecking somewhere. Either way, if the consumer isn't going to see a HUGE difference immediately, then it's going to take a while for it to catch on. And the amount of data it could theoretically handle will make it nigh-on impossible for developers to make the most of it. This could therefore kill the race stone dead (in theory): if people can't make the most of its power, then there's no point in improving on it.

Unless they want to use its power to create realistic blades of grass and individual leaves on trees, then that power will be wasted. And if they do make individual leaves and grass, then that power will be wasted :P

The future looks interesting - shame that the games don't (I can imagine 80 Core Tetris) :S
I came. I saw. I played some Nintendo.

kevin

#3
   Well, it's clear that in recent years GPUs have been morphing into something more like dedicated RISC CPUs on the video card (i.e. shaders) than a traditional dumb 'render device'.  There's only so far you can go with this design though, before you end up with an entire computer on the video card.

   The current PC is designed around the idea that data flows into system memory and out to the peripherals.  This flow is high-speed, but only fast in one direction.  Hence the changing GPU design.  Manufacturers realize that a generic approach can give vastly broader capability than a fixed implementation can (aka software rendering is better than hardware! :) ).  So we pass the GPU user-defined code fragments (shaders) and execute them on its local side (the GPU).  The shader is limited in scope though.  I.e. we have to feed everything it needs into video memory (textures / scene data and the code itself), which eats lots of bandwidth.  The CPU can't share this task, as there's no efficient way to read video memory back.
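   To make the 'user-defined code fragment' idea concrete, here's a rough CPU-side sketch of the same principle (just a toy in C++, the names and buffer layout are made up for illustration, not how a real shader pipeline is wired up): the renderer accepts a small per-pixel function from the user and runs it over the whole frame, rather than offering one fixed effect.

[code]
#include <cstdint>
#include <vector>

// A user-supplied "fragment": given x,y, return a 32-bit colour.
// (Hypothetical example - any per-pixel function would do.)
using PixelShader = uint32_t (*)(int x, int y);

// Run the user's fragment over every pixel of a software frame buffer.
void RunShader(std::vector<uint32_t>& frame, int width, int height, PixelShader shader)
{
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            frame[y * width + x] = shader(x, y);
}

// Example fragment: a simple colour gradient.
uint32_t Gradient(int x, int y)
{
    return (uint32_t(x & 255) << 16) | (uint32_t(y & 255) << 8);
}

int main()
{
    const int w = 640, h = 480;
    std::vector<uint32_t> frame(w * h);
    RunShader(frame, w, h, Gradient);   // generic beats fixed-function
}
[/code]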

    If you think about it, the GPU is becoming more like a simple general-purpose CPU core.  The problem is where the data is sitting in memory.  If we have lots of GPU cores, then we need lots of memory on the video card and massive bandwidth (i.e. power) to copy the data into video memory.  So another solution could well be to take the same approach, but on the CPU side of the wall.
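    For a rough sense of scale (numbers assumed, just for illustration): a single 1024x1024 texture at 32 bits per pixel is about 4MB, so re-uploading even a handful of those to the card every frame at 60fps already means hundreds of MB/s of bus traffic, before any scene data or shader code moves at all.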

   With a single-core CPU, it made sense to pass off as much work as possible from the main CPU to the co-processors.  Clearly that was the better approach at the time.  But if CPUs can run huge arrays of Cell-like cores, this starts to knock the existing model out of whack.

    Now, in theory we could set up software renderers on separate cores to draw parts of the scene (reflections, refractions, shadows, lighting perhaps).  While each core would probably (depending upon the task) have a slower throughput than, say, a dedicated single GPU shader core would, we can of course run many complete general-purpose cores together.  It's a bit like running two video cards in parallel, where card 1 draws part of the frame while card 2 draws the rest.  But what if we had 5, 10, 20, 30 etc. of these...
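    Something like the sketch below is what I mean by splitting the frame across cores (a toy C++ example only; FillStrip is an invented stand-in for whatever real work a core would do - reflections on one core, shadows on another, and so on): each core owns a horizontal strip of scanlines and fills it independently.

[code]
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

// Toy "renderer": fill one horizontal strip of the frame buffer.
void FillStrip(std::vector<uint32_t>& frame, int width, int yStart, int yEnd)
{
    for (int y = yStart; y < yEnd; ++y)
        for (int x = 0; x < width; ++x)
            frame[y * width + x] = 0xFF000000u | uint32_t(x ^ y);
}

int main()
{
    const int width = 640, height = 480;
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());

    std::vector<uint32_t> frame(width * height);
    std::vector<std::thread> workers;

    // Give each core its own strip, like running N video cards in parallel.
    const int strip = height / int(cores);
    for (unsigned c = 0; c < cores; ++c)
    {
        int y0 = int(c) * strip;
        int y1 = (c + 1 == cores) ? height : y0 + strip;
        workers.emplace_back(FillStrip, std::ref(frame), width, y0, y1);
    }
    for (auto& t : workers)
        t.join();
}
[/code]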

    The problem with multiple cores (CPU/GPU) is bus contention, as there are only limited pathways to memory (system/video).  So if one core is accessing memory, then the others have to wait till it's done; kind of like a bunch of people all waiting to check out the same book at the library.  One solution around this has been building pathways that are sharable.  You see this in some game consoles, where the video and CPU hardware can share memory.  The cost is that this memory is slower to access than segregated memory.  The benefit is that it's shared, and we avoid the memory shovelling (moving textures to video memory, for example).
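    The library-queue analogy in code, roughly (just a sketch; the shared counter stands in for any shared path, and the names are invented): when every worker has to funnel through one lock they end up serialised, whereas giving each worker its own chunk and merging afterwards keeps them out of each other's way.

[code]
#include <mutex>
#include <thread>
#include <vector>

// Contended path: every worker queues up on one lock (one "checkout desk").
void ContendedSum(long& total, std::mutex& m, int items)
{
    for (int i = 0; i < items; ++i)
    {
        std::lock_guard<std::mutex> lock(m); // everyone waits here
        ++total;
    }
}

// Partitioned path: each worker writes only its own slot, merge at the end.
void PrivateSum(long& mine, int items)
{
    for (int i = 0; i < items; ++i)
        ++mine;
}

int main()
{
    const int workers = 4, items = 100000;

    long total = 0;
    std::mutex m;
    std::vector<std::thread> a;
    for (int w = 0; w < workers; ++w)
        a.emplace_back(ContendedSum, std::ref(total), std::ref(m), items);
    for (auto& t : a) t.join();

    std::vector<long> partial(workers, 0);
    std::vector<std::thread> b;
    for (int w = 0; w < workers; ++w)
        b.emplace_back(PrivateSum, std::ref(partial[w]), items);
    for (auto& t : b) t.join();

    long merged = 0;
    for (long p : partial) merged += p;  // same answer, far less waiting
}
[/code]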

   It seems inevitable that if CPUs are to move into cell arrays, then they'll no doubt start offering specialised cores for video/sound processing in these super chips.  So basically some type of CPU + GPU hybrid.
