- Dave "IT"
August 8, 2005, 8:36 pm
I'm choosing a graphics card for a new PC that I'm building, primarily for
gaming. I was looking at the GeForce 6600 chipset, as it seems to be getting
fantastic benchmark scores, even compared to cards priced well above
it...sounds like a winner to me. However, there are SO many different cards
using this chipset, and prices commonly range from $169 to around $300.
What core 'numbers' am I looking at for maximum performance? Is there a big
performance difference between a card with DDR memory and DDR3 memory? Or
8.8GB/sec bandwidth compared to 16GB/sec? What about fill rate per second?
Vertices per second? Core and memory clocks?
Obviously, the higher the better in all these categories, but which ones
will REALLY make a difference? The more I look around, the more it seems
that 256MB of memory compared to 128MB really doesn't matter that much at
all.
Any help that you guys/gals can provide is greatly appreciated.
Re: Graphic Cards Comparison
" What core 'numbers' am I looking at for maximum performance? Is
there a big performance difference between a card with DDR memory and
DD3 memory? Or 8.8GB/sec bandwidth compared to 16GB/sec? What about
fill rate per second? Verticles per second? Core and Memory clocks? "
I get the feeling you haven't separated the 6600 from the 6600GT. They
are very different cards, as you'll see from this comparison:
You can see how graphics cards compare technically at
" Obviously, the higher the better in all these categories, but which
ones are the ones that will REALLY make a difference. "
They all count in different ways; pixel pipelines, memory interface
(64-bit, 128-bit, 256-bit), core clock, memory clock, amount of RAM,
shader-model support, DirectX level support etc...
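To show how a couple of those specs combine: theoretical pixel fill rate is just pipelines multiplied by core clock. Here's a quick Python sketch using the commonly quoted reference clocks for the 6600 and 6600GT (treat those figures as assumptions and double-check them against a review):

```python
def fill_rate_mpixels(pixel_pipelines, core_clock_mhz):
    # Each pipeline can write one pixel per clock, so theoretical
    # fill rate is simply pipelines * clocks per second (in Mpixels/s).
    return pixel_pipelines * core_clock_mhz

# Commonly quoted reference specs (assumed here):
# GeForce 6600:   8 pipelines, 300 MHz core
# GeForce 6600GT: 8 pipelines, 500 MHz core
print(fill_rate_mpixels(8, 300))  # 2400 Mpixels/s
print(fill_rate_mpixels(8, 500))  # 4000 Mpixels/s
```

Same pipeline count, but the GT's higher core clock alone gives it a big fill-rate lead.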
The problem is that just one *let down* in a particular spec can create
a card that suddenly has half the performance it might appear to
have. SapphireTech re-released the 9800 Pro a while back with a 128-bit
memory interface in place of the standard 256-bit, for which they
received a kicking across forums and newsgroups in return. Similarly,
many lower-end cards have 64-bit memory interfaces replacing the
standard 128-bit, which causes a massive drop in performance.
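The bandwidth figures you quoted fall straight out of bus width and effective memory clock. A minimal Python sketch (the clock figures are the commonly quoted reference specs for the plain 6600 and the 6600GT, so treat them as assumptions):

```python
def bandwidth_gb_per_s(bus_width_bits, effective_clock_mhz):
    # bytes transferred per clock * clocks per second, expressed in GB/s
    return bus_width_bits / 8 * effective_clock_mhz * 1e6 / 1e9

# Plain GeForce 6600: 128-bit bus, 275 MHz DDR (550 MHz effective)
print(bandwidth_gb_per_s(128, 550))   # 8.8 GB/s
# GeForce 6600GT: 128-bit bus, 500 MHz GDDR3 (1000 MHz effective)
print(bandwidth_gb_per_s(128, 1000))  # 16.0 GB/s
# Halve the bus to 64-bit and bandwidth halves with it:
print(bandwidth_gb_per_s(64, 550))    # 4.4 GB/s
```

That halving is exactly why the cut-down 64-bit boards fall so far behind cards that otherwise look identical on the box.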
It takes quite a bit of reading to understand the correlation between
all of these specs. You're better off first looking at comparisons like
the Tom's Hardware VGA Charts, checking the standard specs it lists for
the cards, and making sure that any card you buy isn't *cut-down* from
the listed specs.
" The more I look around, the obvious 256MB memory compared to the
128MB really doesn't seem to matter that much at all. "
There are many cards where the only difference is the amount of RAM,
yet it's often the case that the 128MB version will perform faster in
games than the 256MB one. This usually comes down to the type and speed
of RAM used to manufacture the cards, as manufacturers nearly always use
cheaper memory for the 256MB versions. When you aren't using more than
128MB (which is most of the time), the slower RAM on the 256MB cards
can't match the faster RAM on the 128MB ones.
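As a rough sanity check on why 128MB is usually enough at common settings (a back-of-envelope sketch; the triple-buffer layout is a simplifying assumption): the framebuffer itself only accounts for a few MB, so the real question is just whether a game's textures fit in the rest.

```python
def framebuffer_mb(width, height, bytes_per_pixel=4, buffers=3):
    # front buffer + back buffer + Z buffer, each 32-bit per pixel,
    # converted from bytes to MB
    return width * height * bytes_per_pixel * buffers / 2**20

# At common gaming resolutions the framebuffer is tiny,
# so nearly all of the card's RAM is free for textures:
print(framebuffer_mb(1024, 768))   # ~9 MB
print(framebuffer_mb(1600, 1200))  # ~22 MB
```

Even at 1600x1200 that leaves well over 100MB of a 128MB card for textures, which most games of the day don't exhaust.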
Lots of people still think that the amount of RAM is the most important
thing on a graphics card. In reality, it is one of the least important
specifications when games are played at the most common settings. I
pity those who buy cards like the 64-bit GeForce 6200 512MB believing
they'll get superior gaming performance from it.
Obt:- I found this press announcement quite funny.