NVIDIA GeForce 8800 Ultra - Pure muscle power for your PC
Greetings and salutations, earthlings, and welcome to yet another new NVIDIA product review. It's been discussed widely ever since... hmm, what was it, February? Today NVIDIA is launching its GeForce 8800 Ultra. Now, NVIDIA tried to keep this product as secret as it could, why? Two reasons. First, to prevent technical specifications leaking onto the web. Secondly, obviously, to be able to change specs at the last minute. See, ATI is releasing their R600 graphics card soon and the Ultra is the product that NVIDIA prepared to counteract it in the market, an allergic reaction to the R600 so to speak.
It's fair to say that the leaked R600 info you have seen has some validity in it (yes, we'll have an article soon) and yes, obviously NVIDIA corporate is scratching their heads right now asking what the heck happened with R600.
Anyway, welcome to the review. We're not going extremely in-depth today because, despite the rumors of a GX2-based 8800 (which were false), the 8800 Ultra is a respin product. This means it's technically similar to the original GeForce 8800 GTX. So no 196 shader cores or whatever the Inquirer figured it would be. No my friends, we have exactly the same stuff, yet a respin means its core is clocked faster, it has faster memory and the 128 shader processors are clocked faster.
Pricing. Initially NVIDIA set this product at a 999 USD price point, which, well, honestly, I think my pants dropped when I heard that the first time. In the latest presentation the Ultra was priced at 829 USD. And hear me now, good citizens: I'm changing the price myself and will say it'll be finalized at 699 USD/EUR. Which is still a truckload of money and way too much money just to play games, but hey, this is the high-end game. Which means completely insane prices, yet quite a number of you guys will buy it anyway. And hey, you know what? I can't blame you for being a hardcore gamer.
So what can we expect from the GeForce 8800 Ultra? I stated it already: higher core, memory and shader frequencies (I really prefer to call them shady frequencies), thus an accumulated amount of additional performance and good thermals, man look at that new cooler! And all that at 175 Watts maximum, as in this silicon revision NVIDIA claims to have some architectural advantages that got wattage down. So in my opinion that would be a slightly lower core voltage or a better cooled product. Yes my gurus, a better cooled product equals less power consumption, as a cooler chip leaks less current.
Over the next few pages we'll quickly go through the technical specs; we'll skip the in-depth DX10 part as, honestly, please read it in our reference reviews. We'll look at heat and power consumption, give the card a good run for its money with a plethora of up-to-date games, and then we'll try and torch the bugger in a tweaking session where we'll overclock the shiznit (Ed: I'm banning you from ever using that word again, Hilbert) out of it...
Follow me gang, next page.
Wazzuuup! Welcome to page two of 15 (yeah, really). Six months, people, that is how long it's been since NVIDIA released its GeForce 8800 GTS and GTX. It really took a lot of us by surprise as the performance was, and is, breathtaking. In between, the 320 MB model of the 8800 GTS was released, and a week or two ago the 8500/8600 series of DirectX 10 products followed.
Question - why are there no DX10 titles available on the market yet? What the heck was Microsoft thinking, releasing Vista proclaiming the new era in DX10 gaming? Microsoft put out some really good (and I still say photoshopped) MS Flight Simulator screenshots. Microsoft has its own game studio... so again, Microsoft, what the hell are you guys doing? Give us at least one DX10 game, dudes, I'm begging you. A quarter of an entire year has passed and after 10+ updates and three reinstalls, Vista is finally crashing just once a week. Give us a game, please? We bought four business licenses at top dollar, come on, just one game... just one...? *sighs*
Alright, back on topic. So the new big papa of graphics cards is called the GeForce 8800 Ultra, and it comes with no less than, and precisely the same as the GTX, 768 MB of memory. So how does the new and current GeForce product line shape up? Have a look:
- GeForce 8800 Ultra - $829 (announced at $999, expected to settle at $699)
- GeForce 8800 GTX - $599
- GeForce 8800 GTS 640 MB - $449
- GeForce 8800 GTS 320 MB - $300
- GeForce 8600 GTS - $219
- GeForce 8600 GT - $149
- GeForce 8500 GT - $99
- GeForce 7600 GS - $89
- GeForce 7300 GT - $89
- GeForce 7300 LE - $79
- GeForce 7100 GS - $59 but they should give it away for free
Now obviously the minute ATI's R600 and the 8800 Ultra become available I expect another shift in manufacturer suggested retail prices. Small hint: expect the 320 MB model to drop in price soon. The MSRP for the Ultra is 829 USD, but since nobody will buy it at that price, expect it to be 699 by the end of this month.
The GeForce 8800 Ultra. Try to imagine how that would sound out of the mouth of Arnold Schwarzenegger: "I just bought this GeForce 8800 Oeltra". No clue why I just wrote this? Well, guns, action, gamers, ammo, muscle power, see the parallel here? Wouldn't it be fun to have a Terminator edition of cards called Oeltra? Just like a Dodge or Pontiac, it's a muscle car for PC gaming.
Ahem, let's be geeks again and do transistors; yay! So did you know that G70/G71 (GeForce 7800/7900) each had nearly 300 million transistors? Well, G80 is a 681 million transistor and counting product. Which means performance. And the faster you clock these transistors, the faster it'll perform... or do something like spark, boom... smoke. Now, the 8800 Ultra has the same 128 streaming cores (unified shader processors) and it comes with 768 MB of GDDR3 memory.
The main differences between the GTX and Ultra: memory was clocked at 1800 MHz (2x900) on the GTX, resulting in 86.4 GB/s of memory bandwidth. This has changed. Memory now defaults to 2160 MHz (2x1080), which equals a theoretical bandwidth of 103.7 GB/s. So do the math, that's roughly 20% extra bandwidth, which is one of the most limiting factors for a high-end GPU. The memory still sits on that unusual 384-bit bus (12 pieces of 16Mx32 memory).
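For those who want to verify the math themselves, here is a minimal sketch; the formula is the standard one (effective memory clock times bus width in bytes), and the clock figures are the reference specs from the table further down:

#include <cstdio>

// Theoretical memory bandwidth = effective memory clock (transfers/s) * bus width (bytes).
// GDDR3 is double data rate, hence the x2 on the base clock.
int main() {
    const double busWidthBytes = 384.0 / 8.0;              // 384-bit bus = 48 bytes per transfer
    const double gtxEffectiveClockMHz   = 2.0 * 900.0;     // 1800 MT/s
    const double ultraEffectiveClockMHz = 2.0 * 1080.0;    // 2160 MT/s

    const double gtxGBps   = gtxEffectiveClockMHz   * 1e6 * busWidthBytes / 1e9;  // ~86.4 GB/s
    const double ultraGBps = ultraEffectiveClockMHz * 1e6 * busWidthBytes / 1e9;  // ~103.7 GB/s

    printf("GTX:   %.1f GB/s\n", gtxGBps);
    printf("Ultra: %.1f GB/s (%.0f%% more)\n", ultraGBps, (ultraGBps / gtxGBps - 1.0) * 100.0);
}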
Now here's where we'll go on a quick side-track. All reviews ramble on about unified shaders; on last-gen hardware it was Shader Model 2 and 3, pixel shaders and vertex shaders, and now we also have geometry shaders.
But do you guys even know what a shader is? Allow me to give you a quick brief on what a shader operation actually is, as very few consumers know what they really are.
Demystifying the shader
If you program or play computer games, or even recently attempted to purchase a video card, then you will no doubt have heard the terms "Vertex Shader" and "Pixel Shader", and with the new DirectX 10, "Geometry Shader". In today's reviews the reviewers tend to assume that the audience knows everything. I realized that some of you do not even have a clue what we're talking about. Sorry, that happens when you are deep into the matter consistently. Let's do a quick course on what happens inside your graphics card to be able to poop out colored pixels.
What do we need to render a three-dimensional object as 2D on your monitor? We start off by building some sort of structure that has a surface; that surface is built from triangles. Why triangles? Because they are quick to calculate. How is each triangle processed? Each triangle has to be transformed according to its relative position and orientation to the viewer. Each of the three vertices that the triangle is made up of is transformed to its proper view space position. The next step is to light the triangle by taking the transformed vertices and applying a lighting calculation for every light defined in the scene. And lastly the triangle needs to be projected to the screen in order to rasterize it. During rasterization the triangle will be shaded and textured.
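To make those steps a little more concrete, here is a heavily simplified, illustrative C++ sketch of what happens to a single vertex on its way to the screen. The structs and function names are made up for this example, and a real GPU obviously does all of this in massively parallel hardware:

struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };   // world-view-projection matrix, assumed filled in elsewhere

// Step 1: transform the vertex from object space into clip space.
Vec4 transform(const Mat4& wvp, const Vec4& v) {
    Vec4 r{};
    r.x = wvp.m[0][0]*v.x + wvp.m[0][1]*v.y + wvp.m[0][2]*v.z + wvp.m[0][3]*v.w;
    r.y = wvp.m[1][0]*v.x + wvp.m[1][1]*v.y + wvp.m[1][2]*v.z + wvp.m[1][3]*v.w;
    r.z = wvp.m[2][0]*v.x + wvp.m[2][1]*v.y + wvp.m[2][2]*v.z + wvp.m[2][3]*v.w;
    r.w = wvp.m[3][0]*v.x + wvp.m[3][1]*v.y + wvp.m[3][2]*v.z + wvp.m[3][3]*v.w;
    return r;
}

// Step 2: light the vertex; here a single directional light with a simple Lambert term.
float lambert(const Vec3& normal, const Vec3& lightDir) {
    float d = normal.x*lightDir.x + normal.y*lightDir.y + normal.z*lightDir.z;
    return d > 0.0f ? d : 0.0f;
}

// Step 3: the perspective divide projects the vertex onto the 2D screen;
// after this the rasterizer fills the triangle and shading/texturing colors the pixels.
Vec3 project(const Vec4& clip) {
    return { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w };
}

int main() {
    Mat4 wvp = {{{1,0,0,0},{0,1,0,0},{0,0,1,0},{0,0,0,1}}};   // identity, purely for illustration
    Vec4 v{1.0f, 2.0f, 3.0f, 1.0f};
    Vec4 clip  = transform(wvp, v);
    Vec3 screen = project(clip);
    float light = lambert({0,1,0}, {0,1,0});
    (void)screen; (void)light;
}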
Graphics processors like the GeForce series are able to perform a very large number of these tasks. The first generation was able to draw shaded and textured triangles in hardware. The CPU still had the burden of feeding the graphics processor with transformed and lit vertices, triangle gradients for shading and texturing, etc. Integrating the triangle setup into the chip logic was the next step and finally even transformation and lighting (TnL) was possible in hardware, reducing the CPU load considerably (everybody remembers the GeForce 256?). The big disadvantage was that a game programmer had no direct (i.e. program-driven) control over transformation, lighting and pixel rendering, because all the calculation models were fixed on the chip.
And now we finally get to the stage where we can explain shaders, as that's when they were introduced. A shader is basically nothing more than a relatively small program executed on the graphics processor to control either vertex, pixel or geometry processing, and it has become intensely important in today's visual gaming experience.
Vertex and pixel shaders allow developers to code customized transformation and lighting calculations as well as pixel coloring, or all-new geometry functionality on the fly, (post)processed in the GPU. With last-gen DirectX 9 cards there was separate dedicated core logic in the GPU for pixel and vertex code execution, thus dedicated pixel shader processors and dedicated vertex processors. With DirectX 10 something significant changed though. Not only were geometry shaders introduced, but the entire core logic changed to a unified shader architecture, a more efficient approach that allows any kind of shader to run on any of the stream processors.
GeForce 8800 GTX and Ultra have 128 stream processors. These are the shader processors I just mentioned. You likely won't follow the code itself, but allow me to show you an example of a vertex and a pixel shader, a small piece of code that is executed on the stream (shader) processors inside your GPU:
Example of a Pixel Shader:
#include "common.h"
struct v2p
{
float2 tc0 : TEXCOORD0; // base
half4 c : COLOR0; // diffuse
};
// Pixel
half4 main ( v2p I ) : COLOR
{
return I.c*tex2D (s_base,I.tc0);
}
Example of a Vertex Shader:
#include "common.h"
struct vv
{
float4 P : POSITION;
float2 tc : TEXCOORD0;
float4 c : COLOR0;
};
struct vf
{
float4 hpos : POSITION;
float2 tc : TEXCOORD0;
float4 c : COLOR0;
};
vf main (vv v)
{
vf o;
o.hpos = mul (m_WVP, v.P); // xform, input in world coords
o.tc = v.tc; // copy tc
o.c = v.c; // copy color
return o;
}
Now this code itself is not at all interesting for you and I understand it means absolutely nothing to most of you (Ed: Hey! Some of us are software engineers!), but I just wanted to show you, in a generic easy-to-understand manner, what a shader is and involves.
Okay now back to the review.
Now then, the generic "core" clock for the GTX is 575 MHz; the Ultra sits at 612 MHz. Agreed, that seems like a modest bump. But one of the most important dimensions of the GPU is the stream processors, which are clocked independently from the rest of the GPU. On the GTX they were clocked at 1350 MHz; on the Ultra we now see 1500 MHz for the stream processors, or call it the shader domain clock frequency.
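A quick back-of-the-envelope comparison of those clock bumps, just to put the respin into perspective (reference clocks only):

#include <cstdio>

int main() {
    // GeForce 8800 GTX vs. 8800 Ultra reference clocks (MHz)
    struct { const char* name; double gtx, ultra; } clocks[] = {
        { "Core",   575.0,  612.0 },
        { "Shader", 1350.0, 1500.0 },
        { "Memory", 1800.0, 2160.0 },   // effective (double data rate)
    };
    for (auto& c : clocks)
        printf("%-6s: +%.1f%%\n", c.name, (c.ultra / c.gtx - 1.0) * 100.0);
    // Core +6.4%, Shader +11.1%, Memory +20% -- which is why the net gain
    // in games ends up in the 10-15% ballpark rather than a flat 20%.
}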
Size then: just like the GeForce 8800 GTX graphics card, the Ultra is 27 cm long, you could say a well-hung piece of hardware. It's been said and explained to me by quite a number of female counterparts (render targets, as I like to call them) that size does matter (Ed: So many jokes, so little time...).
Due to the size, note that the power connectors are routed off the top edge of the graphics card instead of the end of the card, so there is no extra space required at the end of the graphics card for power cabling. But before purchasing, please check whether you can actually fit a 27 cm piece of hardware into that chassis.
Okay, so this is really all you need to know for now. It's a faster clocked respin product with the same power consumption and a new cooler. The result: 10-15% more performance.
Some generic facts:
- All NVIDIA GeForce 8800 GTX / Ultra and GeForce 8800 GTS-based graphics cards are HDCP capable.
- The GeForce 8 Series GPUs are not only the first shipping DirectX 10 GPUs, but they are also the reference GPUs for DirectX 10 API development and certification and are 100% DirectX 9 compatible.
- GeForce 8800 GPUs deliver full support for Shader Model 4.0.
- All graphics cards are being built by NVIDIA’s contract manufacturer.
- All GeForce 8800 GPUs support NVIDIA SLI technology.
- The NVIDIA GeForce 8800 GTX outputs 24 pixels per clock (ROPs); the GeForce 8800 GTS outputs 20 pixels per clock (a quick fill-rate calculation follows after this list).
- GeForce 8800 GTX requires a minimum 450W or greater system power supply (with 12V current rating of 30A).
- GeForce 8800 GTS requires a minimum 400W or greater system power supply (with 12V current rating of 26A).
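As promised above, a quick sketch of the theoretical pixel fill rates that follow from those ROP counts and the core clocks; these are peak numbers only, and real-world throughput obviously depends on what you are rendering:

#include <cstdio>

int main() {
    // pixels per clock (ROPs) * core clock = theoretical pixel fill rate
    const double gtxFill   = 24 * 575e6;  // ~13.8 Gpixels/s (GeForce 8800 GTX)
    const double gtsFill   = 20 * 500e6;  // ~10.0 Gpixels/s (GeForce 8800 GTS)
    const double ultraFill = 24 * 612e6;  // ~14.7 Gpixels/s (GeForce 8800 Ultra)
    printf("GTX: %.1f  GTS: %.1f  Ultra: %.1f Gpixels/s\n",
           gtxFill / 1e9, gtsFill / 1e9, ultraFill / 1e9);
}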
In the photo shoot we'll have a closer look at all three products and tell you a little about connectivity and also that memory mystery.
|                                | GeForce 8800 Ultra | GeForce 8800 GTX | GeForce 8800 GTS |
| Stream (Shader) Processors     | 128     | 128     | 96      |
| Core Clock (MHz)               | 612     | 575     | 500     |
| Shader Clock (MHz)             | 1500    | 1350    | 1200    |
| Memory Clock (MHz, x2)         | 1080    | 900     | 800     |
| Memory Amount                  | 768 MB  | 768 MB  | 640 MB  |
| Memory Interface               | 384-bit | 384-bit | 320-bit |
| Memory Bandwidth (GB/s)        | 103.7   | 86.4    | 64      |
| Texture Fill Rate (billion/s)  | 39.2    | 36.8    | 24      |
| HDCP                           | Yes     | Yes     | Yes     |
| Two dual-link DVI              | Yes     | Yes     | Yes     |
The Unified state of DirectX 10
We just had a brief chat about shader operations and their importance. What you also need to understand is that the microarchitecture of the new DX10 GPUs (Graphics Processing Units) has changed significantly.
Despite the fact that graphics cards are all about programmability, and thus shaders, these days, you'll notice that in today's product we'll not be talking about pixel and vertex shaders much anymore. With the move to DirectX 10 we now have unified shader technology and graphics hardware is adapting to that model; it's very promising. DirectX 10 shipped with the first public release of Windows Vista and it will definitely change the way software developers make games for Windows, very likely benefiting us gamers in terms of better gaming visuals and better overall performance.
The thing is, with DirectX 10 Microsoft has removed what we call the fixed function pipeline completely (what you guys know as 16 pixel pipelines, for example), making everything programmable instead. How does that relate to the new architecture? Have a look. The new architecture is all about programmability, and thus shaders, as we explained on the previous pages.
So DirectX 10 and its related new hardware products offer a good number of improvements. So many, actually, that it would require an article of its own. And since we are here to focus on NVIDIA's new product we'll take a shortcut at this stage in the article. In our Guru3D forums I have often seen the presumption that DX10 is only a small improvement over DX9 Shader Model 3.0. Ehm, yes and no. I say it's a huge step, as a lot of constraints are removed for the software programmers. The new model is simpler, easier to adopt and allows heaps of programmability, which in the end means a stack of new features and eye candy in your games.
Whilst I will not go into detail about the big differences, I simply would like to ask you to look at the chart below and draw your own conclusions. DX10 definitely is a large improvement, yet look at it as a good step up.
Here you can see how DirectX's Shader Models have evolved ever since DX8 Shader Model 1.
So I think what you need to understand is that DirectX 10 doesn't introduce a colossal, fundamental change in capabilities; rather, it brings expanded and new features into DirectX that will enable game developers to optimize games more thoroughly and thus deliver incrementally better visuals and better frame rates, which obviously is great.
How fast will it be adopted? Well, Microsoft is highlighting the DX10 API as God's gift to the gaming universe, yet what they forget to mention is that all developers who support DX10 will have to continue supporting DirectX 9 as well, and thus maintain two versions of the rendering code in their engine, as DirectX 10 is only available on Windows Vista and not XP, which is such a bitch as everybody refuses to buy Vista. You can understand that from a game developer's point of view this brings a considerable amount of additional workload, developing for both standards.
Regardless of the immense marketing hype, DirectX 10 just is not extraordinarily different from DirectX 9; you'll mainly see good performance benefits due to more efficiency in the GPU rather than vastly prominent visual differences, with obviously a good number of exceptions here and there. But hey, DirectX is evolving into something better, more efficient and speedier, which we need to create better visuals.
The Lumenex Engine
The one thing I want to touch on again, as I respect this move from NVIDIA, is image quality. This is a quick copy/paste from our original GeForce 8800 article last year, as, well, nothing changed in this segment.
One of the things you'll notice in the new Series 8 products is that a number of pre-existing features have become much better, and I'm not only talking about the overall performance improvements and new DX10 features. Nope, NVIDIA also had a good look at image quality. Image quality is significantly improved on GeForce 8800 GPUs over the prior generation with what NVIDIA calls the Lumenex engine.
You now have the option of 16x full-screen multisampled antialiasing quality at near 4x multisampled antialiasing performance using a single GPU, with the help of a new AA mode called Coverage Sampled Antialiasing. We'll get into this later, though pretty much this is a math-based approach: the new CSAA mode computes and stores boolean coverage at 16 sub-samples... and yes, this is the point where we lost you, right? We'll drop it.
So what you need to remember is that CSAA enhances application antialiasing modes with higher quality antialiasing. The new modes are called 8x, 8xQ, 16x, and 16xQ. The 8xQ and 16xQ modes provide first class antialiasing quality TBH.
If you pick up a GeForce 8800 GTS/GTX/Ultra then please remember this: each new AA mode can be enabled from the NVIDIA driver control panel and requires the user to select an option called "Enhance the Application Setting". Users must first turn on ANY antialiasing level within the game's own settings for the new AA modes to work, since the driver needs the game to properly allocate and enable anti-aliased rendering surfaces.
If a game does not natively support antialiasing, a user can select an NVIDIA driver control panel option called "Override Any Applications Setting", which allows any control panel AA setting to be used with the game. Also, you need to know that in a number of cases (such as the edges of stencil shadow volumes) the new antialiasing modes cannot be enabled; those portions of the scene will fall back to 4x multisampled mode. So there definitely is a bit of a tradeoff going on, as it is a "sometimes it works, sometimes it doesn't" kind of feature.
So I agree, a very confusing method. I simply would like to select in the driver which AA mode I prefer, something like "Force CSAA when applicable", yes something for NVIDIA to focus on.
But 16x quality at almost 4x performance, really good edges, really good performance, that obviously is always lovely.
One of the most heated issues with the previous generation of products compared to the competition was the fact that NVIDIA graphics cards could not render AA + HDR at the same time. Well, that was not entirely true, as it was possible with the help of shaders, as exactly four games have demonstrated. But it was a far from efficient method, a very far cry (Ed: please, no more puns!) you might say.
So what if I were to say that you can now not only push 16xAA with a single G80 graphics card, but also do full 128-bit FP (floating point) HDR! To give you a clue, the previous architecture could not do HDR + AA, but it could technically do 64-bit HDR (just like the Radeons). So NVIDIA got a good wake-up call and noticed that a lot of people were buying ATI cards just so they could do HDR & AA the way it was intended. Now the G80 will do the same, but it's even better. Look at 128-bit wide HDR as a palette of brightness/color range that is just amazing. Obviously we'll see this in games as soon as they adopt it, and believe me, they will. 128-bit precision (32-bit floating point values per component) permits almost real-life lighting and shadows. Dark objects can appear extremely dark, and bright objects can be blindingly bright, with visible details present at both extremes, in addition to rendering completely smooth gradients in between.
As stated, HDR lighting effects can now be used together with multisampled antialiasing on GeForce 8 series GPUs, along with the addition of angle-independent anisotropic filtering. The antialiasing can be used in conjunction with both FP16 (64-bit color) and FP32 (128-bit color) render targets.
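For the developers among you, here is a minimal, hedged Direct3D 9 sketch of what "HDR + AA" boils down to in practice: creating a multisampled floating-point render target. The function name is made up for this example, error handling is omitted, and the device pointer is assumed to exist; FP16 is the 64-bit format mentioned above, FP32 the 128-bit one.

#include <d3d9.h>

// 'device' is assumed to be an already-created IDirect3DDevice9*.
// On pre-G80 GeForce hardware, combining a floating-point format with
// multisampling like this would simply fail; on G80 it is supported.
IDirect3DSurface9* CreateHdrAaTarget(IDirect3DDevice9* device, UINT width, UINT height) {
    IDirect3DSurface9* rt = nullptr;
    device->CreateRenderTarget(
        width, height,
        D3DFMT_A16B16G16R16F,        // FP16 per component (64-bit); D3DFMT_A32B32G32R32F for FP32 (128-bit)
        D3DMULTISAMPLE_4_SAMPLES,    // a 4x multisampled HDR surface
        0,                           // multisample quality level
        FALSE,                       // not lockable
        &rt, nullptr);
    return rt;
}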
Improved texture quality is just something we must mention. We have all been complaining about shimmering effects and lesser filtering quality compared to the Radeon products; that's a thing of the past. NVIDIA listened and added raw horsepower for texture filtering, making it really darn good. Well... we can actually test that! Allow me to show you. See, I have this little tool called D3D AF Tester which helps me determine how good image quality is in terms of anisotropic filtering. Basically we knew that ATI has always been better at IQ compared to NVIDIA.
[AF test pattern screenshots: GeForce 7900 GTX 16xAF (HQ) | Radeon X1900 XTX 16x HQ AF | GeForce 8800 16xAF (default)]
Now have a look at the images above and let it sink in. It would go too far to explain exactly what you are looking at, but the more perfectly round the colored circle in the middle is, the better image quality will be. A perfectly round circle is perfect IQ.
Impressive, to say the least. The AF patterns are just massively better compared to previous-generation hardware. Look at that, that is default IQ; that's just really good...
Demystifying HDCP
We're going multimedia now, as not everything is about playing games these days. Your brand spanking new 8800 Ultra, 8800 GTX, 8800 GTS or 8600 GTS is HDCP compatible, but what the heck does that mean?
An HD Ready television or monitor will have either a DVI (Digital Video Interface) or HDMI (High Definition Multimedia Interface) input. Both connections provide exceptional quality; HDMI is often referred to as the digital SCART cable as it also carries audio, while DVI supplies picture only, so separate cables are needed for audio. Both HDMI and DVI support HDCP (High-bandwidth Digital Content Protection), which will be a requirement for protected content.
With Vista, when you want to play back HDCP-protected content (movies) on your monitor, the resolution could be scaled down, or even worse: your screen will go black during playback if you do not have an HDCP-capable chip working on the graphics card.
PureVideo HD
This is a copy & paste from the previous 8800 GTX article, as the video engine is 100% the same.
Ever since the previous generation of graphics cards (series 6), NVIDIA has done something really smart. They made the GPU (the graphics chip) an important factor in encoding and decoding video streams. With a special software suite called PureVideo you can offload the video encoding/decoding process from the CPU to the GPU, and best of all, it can greatly enhance image quality.
PureVideo HD is a video engine built into the GPU (this is dedicated core logic) and thus consists of dedicated GPU-based video processing hardware, software drivers and software-based players that accelerate decoding and enhance image quality of high definition video in the following formats: H.264, VC-1, WMV/WMV-HD and MPEG-2 HD.
So what are the key advantages of PureVideo? In my opinion two factors stand out. The first is offloading the CPU by letting the GPU take over a huge chunk of the workload. HDTV decoding of a TS (Transport Stream) file, for example, can be very demanding for a CPU; these media files can easily peak at 20 Mbit/sec as HDTV streams offer high-resolution playback at 1280x720p or even 1920x1080p without frame drops or image quality loss. By offloading the bigger part of that task to the graphics core, you give the CPU much more headroom to do other things, which makes your PC actually run normally. The combination of these factors offers you stutter-free, high-quality and high-resolution media playback. All standard HDTV resolutions are of course supported, among them the obvious 480p, 720p and 1080i modes and now also 1080p (p = progressive, i = interlaced).
Ever since the series 75 ForceWare drivers, PureVideo has been doing something I've been waiting on for quite some time now: 2:2 pull-down detection (inverse telecine for 50 Hz PAL material, where each film frame is carried as two interlaced fields). Along with this, the new G80 series (and this will work on G70 as well) offers HD noise reduction, which is a great feature for older converted films. And this is where we land at the second factor: image quality. PureVideo offers a large number of options that'll increase the IQ of playback. Obviously NVIDIA has some interesting filters available in the PureVideo suite, like advanced de-interlacing, which can greatly improve image quality while playing back that DVD, MPEG-2 or TS file (just some examples). Aside from that, things like color corrections should not be forgotten. All major media streams are supported by NVIDIA with PureVideo. And yes, High Definition H.264 acceleration, which will become a big, new and preferred standard, is also supported.
Paradox: You do not need PureVideo for HDTV playback and connectivity, but it is recommended if you have that dedicated hardware in your system anyway.
When you connect your PC to the HDTV screen, use the best and thus most expensive connection available. You can go with a component adapter and a 3-way RCA cable. However, and as weird as this might sound, image quality, while good, is simply not perfect; it's still analog, you know. It is a relatively cheap way to connect to an HDTV screen though. What you really want to do is go digital on the connection. Obviously you spent a lot of money on the HDCP (HD Ready) HDTV screen and PC, so invest a little more into digital connectivity. You'll notice that your videocard has a lovely DVI-I/D compatible output, so please use it!
Guru3D uses this DVI-D <-> HDMI cable. Once you boot up into Windows you'll immediately notice the difference; rich colors and good quality.
Once booted into Windows you'll likely notice some seriously bad overscan (the Pioneer screens are known and feared for this). Basically the outer segments of the screen are not being displayed on your HDTV. NVIDIA is now offering some really cool under- and overscan options. In essence you shrink the resolution a little to make it fit perfectly on your HDTV. I believe this is done by adding black borders to the video signal. The new ForceWare 75 series actually allows independent X and Y underscan control in a very simple manner. Doesn't matter, because after you've done that the only thing you can say while playing back an HDTV file is "Oh my God." The image quality is simply beautiful and seems to work extremely well with NVIDIA's recent graphics cards.
To see whether the PureVideo claims are true and the CPU indeed gets offloaded by the GPU, we put this to the test. Well, obviously it works. With an HD movie (.ts file) we logged CPU utilization during playback.
Support for .TS files is important to me personally: if your satellite box can record an MPEG stream, it'll do that in a .TS file, after which you can play back the content through Media Player / Media Center easily and without quality loss.
In our case we had a .TS episode recording of The Sopranos (which is our standard test file). This puppy is doing a 12 to 20 Mbit/sec datastream, which is one of the most difficult things to manage for a PC right now (if you do not have the proper decoders). Without GPU assistance, a mid-range AMD Athlon 64 4000+ would peak at 55-60% CPU utilization. If we used an H.264 file, it would easily hit 100%.
Once you offload it to the graphics processor things look much better with the help of PureVideo. Have a look at the graph below, where you can see the CPU at work at roughly 12-20% while decoding an HDTV .TS file:
Indeed, a huge improvement over standard decoding. We are now at a CPU utilization of 12-20%, really nice for a HD MPEG2 stream. The processor is almost doing nothing. Let me remind you again that this is a Transport Stream file with a HDTV resolution of 1920x1080i.
In combination with the new drivers you can now also decode High Definition H.264 streams. H.264 is a compression algorithm used to transmit video efficiently between endpoints and is seen as the replacement for its predecessor, H.263. What is different about H.264 is that it promises to deliver high-quality video at much lower bitrates, producing way better results than even MPEG-2, all the way up to HDTV levels.
These GeForce series 7 and 8 cards can also manage 3:2 and 2:2 pull down (inverse telecine) of SD and HD interlaced content.
New with the introduction of the GeForce 8800 we see a 10-bit display processing pipeline and also new post-processing options (these work on GeForce series 7 as well):
- Adds VC-1 & H.264 HD Spatial-Temporal De-Interlacing
- Adds VC-1 & H.264 HD Inverse Telecine
- Adds HD Noise Reduction
- Adds HD Edge enhancement
Software like WinDVD, PowerDVD and Nero ShowTime supports PureVideo from within the application. You can also buy the PureVideo software from NVIDIA for a few tenners, after which Media Player or Media Center will work with it flawlessly.
To give you an idea how intensely big one frame of 1920x1080 is at a framerate of 24 frames per second: realize that your graphics card is displaying that kind of content 24 times per second, while enhancing it in real time.
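To put a rough number on "intensely big", here is a small sketch comparing an uncompressed 1080p stream with the roughly 20 Mbit/s transport stream we actually feed the card (24-bit color assumed for the uncompressed case):

#include <cstdio>

int main() {
    const double pixels = 1920.0 * 1080.0;                    // one full HD frame
    const double bytesPerPixel = 3.0;                         // 24-bit color, uncompressed
    const double frameMB = pixels * bytesPerPixel / 1e6;      // ~6.2 MB per frame
    const double streamMBps = frameMB * 24.0;                 // ~149 MB/s at 24 fps
    const double tsMBps = 20.0 / 8.0;                         // a 20 Mbit/s transport stream = 2.5 MB/s
    printf("Uncompressed: %.1f MB/frame, %.0f MB/s at 24 fps\n", frameMB, streamMBps);
    printf("Compressed .TS stream: %.1f MB/s -- the GPU decodes and enhances the rest\n", tsMBps);
}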
The Photos
On the next few pages we'll show you some photos. The images were taken at 2560x1920 pixels and then scaled down. The camera used was a Sony DSC-F707 at 5.1 megapixels.
Here we go, the GeForce 8800 Ultra 768 MB
Okay, there she is, slightly overexposed, sorry about that. The color is actually darker. This, my friends, is the GeForce 8800 Oeltra... ehm, Ultra. It looks different, yes, but it's the same PCB, length and, well, everything else as the 8800 GTX.
And yes, Einstein, that's the backside. The card is 10.5 inches long, 27 cm for those using the metric system.
On top are the two SLI connectors that to date NVIDIA has not explained. The second SLI connector on the GeForce 8800 GTX/Ultra is hardware support for "impending future enhancements" in SLI software functionality. Don't you just love diplomacy? So that's either physics as an add-on or another SLI mode. With the current drivers only one SLI connector is actually used, and you can plug the SLI bridge into either the right or left set of connectors; it really doesn't matter which one.
DVI connectors, dual-link DVI of course. With the 7-pin HDTV-out mini-din, a user can plug an S-video cable directly into the connector, or use a dongle for YPrPb (component) or composite outputs. The prior 9-pin HDTV-out mini-din connector required a dongle to use S-video, YPrPb and composite outputs.
Here we can see the two 6-pin power connectors. Since the PCB is rather long they have been placed logically on the upper side of the card. Very clever as that'll save lots of space.
The card in the test system, an eVGA nForce 680i SLI. A rather sexy mainboard for sure. The cooler, by the way: don't get confused, it's still the old cooler, yet with a new shell. The design allows better air draw and thus better airflow.
Twilight, evening... a little lighting. Aah mon cheri, all we need is a nice Chardonnay now.
One more, but that's really it. It's time to move on to the test session, i.e. benchmarks!