Jorge-Vieira – Nov 2013

    Nvidia GeForce 8800 Ultra

    Manufacturer: Nvidia
    Price (as reviewed): £528.74 (inc VAT)

    Following the rumours that spread across the web last week, most people were expecting to see AMD unveil its latest R600 family of graphics cards today. Unfortunately for those desperate to see how R600 stacks up, that isn’t what you’re going to see rearing its head from the tubes this afternoon. Instead, Nvidia is back to launch its high-end refresh after GeForce 8800 GTX’s near seven month reign at the top of the tree.

    It’s time to say hello to GeForce 8800 Ultra.

    Nvidia hasn’t released an Ultra for two generations now, with the last one being the horribly delayed GeForce 6800 Ultra – it took far too long for that product to become available for consumers to buy. However, many would argue that the GeForce 7800 GTX 512 certainly deserved the Ultra nomenclature based on its shoddy availability.

    So, GeForce 8800 Ultra has a reputation to live up to but we’re certainly hoping that history won’t repeat itself – we’re told that it’s not going to be another limited edition. However, one thing has changed with this launch: you’re not going to be able to buy the hardware by the time you’ve read the review. Instead, you’re going to have to wait a couple of weeks, as availability is scheduled for May 15th.

    The company explained to us that the reason for this is because it is attempting to cut down on the number of pre-launch leaks. It’s also an attempt to prevent the bizarre instances where consumers have been able to buy the hardware days before its release, sparking Internet fame in some parts of the world (sorry guys! – Ed.).

    G80 with Go Faster Stripes:

    As has been the case in the past, Nvidia’s GeForce 8800 Ultra is essentially a GeForce 8800 GTX with go faster stripes and a few tweaks to the manufacturing process in its A3 revision silicon. If you’re familiar with the architecture behind Nvidia’s G80 graphics processing unit, there’s not much more to learn about the Ultra – it’s merely a clock speed bump.

    Like the GeForce 8800 GTX, there are a total of 128 1D scalar stream processors inside the 8800 Ultra. Each of these stream processors is designed to perform all of the shader operations and calculations sent to the GPU by today’s 3D graphics engines in both floating point and integer form. Also, each of G80's ALUs is capable of issuing both a MADD and a MUL instruction in the same clock cycle. In Nvidia’s GeForce 8800 GTX, the shader ALUs were clocked at 1350MHz – Nvidia has increased this clock to 1500MHz on its GeForce 8800 Ultra, which represents an 11 percent shader core frequency increase.

    The shader processors are split into eight clusters of 16 shader processors, which share four texture address units and eight texture filtering units, along with both L1 and L2 caches. Like GeForce 8800 GTX, the Ultra features a total of 32 texture address units and 64 texture filtering units. These are completely decoupled from the stream processors, meaning that it’s possible to texture whilst still processing other shader operations. G80’s texture units run at a different clock speed to the stream processors – 612MHz on the Ultra, compared to 575MHz on the GTX.

    Along with the texture units, both the render backend (ROPs) and setup engine also run at the same 612MHz clock speed. This clock frequency is what Nvidia calls the ‘core clock’, as it represents the speed at which most of the chip operates. The increase isn’t quite as healthy as the shader clock, but it does represent a bump of just over six percent.
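    Those two ratios are easy to check yourself; a quick sketch using only the clock figures quoted above (Python here purely for illustration):

```python
# Clock-speed increases of the 8800 Ultra over the 8800 GTX,
# using the frequencies quoted in the article (MHz).
def pct_increase(old, new):
    """Percentage increase from old to new."""
    return (new - old) / old * 100

shader_bump = pct_increase(1350, 1500)  # shader (stream processor) domain
core_bump = pct_increase(575, 612)      # core (ROP/texture) domain

print(f"shader: +{shader_bump:.1f}%")  # ~11.1 percent
print(f"core:   +{core_bump:.1f}%")    # ~6.4 percent
```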

    Nvidia's G80 graphics processing unit -- flow diagram
    The GeForce 8800 Ultra’s render backend features a total of six pixel output engines that are each capable of four pixels per clock, making a total of 24 ROPs (in traditional terms) – the same as GeForce 8800 GTX. With colour and Z processing, the ROPs are capable of 24 pixels per clock and with Z-only processing, they are capable of 192 pixels per clock if only a single sample is used for each pixel.
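    As a rough sanity check of those ROP figures, peak throughput is just pixels per clock multiplied by the core clock; a small sketch using the article's numbers:

```python
# Peak ROP throughput for the 8800 Ultra, per the article:
# 24 pixels/clock with colour+Z, 192 samples/clock Z-only, at 612 MHz.
CORE_CLOCK_HZ = 612e6

def fill_rate(pixels_per_clock, clock_hz=CORE_CLOCK_HZ):
    """Peak fill rate in gigapixels (or gigasamples) per second."""
    return pixels_per_clock * clock_hz / 1e9

print(f"colour+Z: {fill_rate(24):.1f} Gpixels/s")   # ~14.7
print(f"Z-only:   {fill_rate(192):.1f} Gsamples/s") # ~117.5
```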

    G80’s ROPs also support simultaneous HDR and anti-aliasing with FP16 and FP32 render targets, meaning you can use all of Nvidia’s anti-aliasing algorithms with 128-bit HDR enabled. In addition to this, you also get high quality angle-independent anisotropic filtering by default. In the past, we’ve been pretty hard on Nvidia for its shoddy texture filtering quality, but thankfully the company adopted a ‘no compromises’ stance with its GeForce 8-series architecture.

    Each of the pixel output engines is attached to its own 64-bit memory channel, amounting to a 384-bit memory bus width. Nvidia has increased the memory clocks from 900MHz (1800MHz effective) to 1080MHz (2160MHz), which equates to a 20 percent increase. This takes the GeForce 8800 Ultra’s memory bandwidth through the 100GB per second barrier to 103.7GB per second, which comes thanks to the Samsung BJ08 GDDR3 DRAM chips on the board, which are rated to 2200MHz.
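    The 103.7GB/s figure falls straight out of the bus width and effective memory clock; a quick sketch of the arithmetic:

```python
# Memory bandwidth = bus width (in bytes) x effective data rate.
# Figures from the article: 384-bit bus, 1080 MHz GDDR3 (2160 MHz effective).
def bandwidth_gb_s(bus_bits, effective_mhz):
    """Theoretical bandwidth in GB/s (decimal gigabytes)."""
    return (bus_bits / 8) * effective_mhz * 1e6 / 1e9

ultra = bandwidth_gb_s(384, 2160)  # GeForce 8800 Ultra
gtx = bandwidth_gb_s(384, 1800)    # GeForce 8800 GTX

print(f"Ultra: {ultra:.1f} GB/s")  # ~103.7
print(f"GTX:   {gtx:.1f} GB/s")    # ~86.4
```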

    The increased clock speeds aren’t all that has changed with GeForce 8800 Ultra though, as Nvidia has been busy making efficiency improvements to the 90-nanometre manufacturing process it is using. As a result, GeForce 8800 Ultra actually uses less power than the GeForce 8800 GTX. At launch, GeForce 8800 GTX had a 185W maximum TDP and over the past seven months, Nvidia has managed to get that down to 177W. However, Nvidia has gone one step further with GeForce 8800 Ultra because, despite the higher clocks, its maximum TDP is rated at 175W.

    Without further ado, let’s have a look at some hardware...

    GeForce 8800 Ultra – reference design:

    Reference design hardware is normally pretty dull and boring, but Nvidia has actually managed to create something that steps away from the norm. The first thing to note is that we see the return of the black PCB, which disappeared and was replaced by the familiar green PCBs at some point after volume ramped up on retail GeForce 8800 GTX and 8800 GTS cards.

    The colour of the PCB normally wouldn’t make a massive difference, but when a card is predominantly black anyway, it affects aesthetics a little when the PCB is reverted to a rather dull green. Of course, the colour of the dye used in the PCB isn’t going to make any difference to how this card performs and that’s going to be your primary concern on a high end graphics card.

    If you were expecting GeForce 8800 Ultra to be cheap, you’re out of luck – it’s going to set you back almost five hundred pounds once it’s available. If partners choose to overclock GeForce 8800 Ultra (and based on the past, we’d certainly expect them to do so), we can expect the card to cost the wrong side of half a grand. Yow!


    Before we suffer from too much shock though, let’s remember that this is Nvidia’s replacement for what was the fastest graphics card on the planet, so you’re going to have to pay a premium for that privilege. Normally we wouldn’t proclaim something to be the fastest graphics card on the planet without first showing you the benchmarks, but this is an exceptional case because Nvidia hasn’t had any competition at the bleeding edge since it launched GeForce 8800 GTX almost seven months ago.

    Though performance is the primary concern at the bleeding edge of graphics technology, who said you weren’t allowed to make it look good in the process? Well, Jen-Hsun Huang, Nvidia’s CEO and founder, certainly didn’t listen to whoever said that when he conceived his latest baby – the geek inside me wants to say that GeForce 8800 Ultra is an incredibly good looking piece of kit.

    Nvidia’s GeForce 8800 Ultra reference card is the same size as the GeForce 8800 GTX at just over 270mm long, meaning that it is not going to fit in every case without some hiccups along the line. It features a different heatsink to the GeForce 8800 GTX, which Nvidia says is an optimised design. Both this and the original GeForce 8800-series coolers are designed and manufactured to Nvidia’s specifications by Cooler Master.

    The new design uses the same radial blower as the older GeForce 8800-series heatsinks, but the fan is slightly offset on the 8800 Ultra to allow for much cooler air to enter the heatsink assembly. On both GeForce 8800 GTX and 8800 GTS, the radial blower was actually directly above the PWM components, which meant that the card ran hot. Whilst the GeForce 8800 Ultra isn’t exactly a tamed beast, it’s able to run at higher clock speeds than the GeForce 8800 GTX, without compromising on either acoustics or heat output because of the new offset fan.

    That’s not the only change to the heatsink design though, as Nvidia has extended the black plastic shroud along the entire length of the card; it actually looks similar to EVGA’s ACS3 cooling solution, but it’s a bit more elegant in our opinion. The remainder of the heatsink is an almost carbon copy of the GeForce 8800 GTX heatsink, with one heatpipe extracting heat from the GPU core and moving it into the fins as quickly as possible.
    Full review:

    NVIDIA GeForce 8800 Ultra Review - The Best Just Got Better

    The NVIDIA GeForce 8800 Ultra


    Usually every 3-6 months we would have a new king of the hill in graphics card performance, or at least an update in a neck-and-neck fight to the death. Since the 8800 GTX was released back in November of last year, NVIDIA has had a stranglehold on the high performance gaming market. ATI, purchased by AMD, doesn't seem to have an answer, though NVIDIA was readying a counter-attack should the need arise. Turns out it hasn't been needed, but NVIDIA got tired of sitting on their product, so we are blessed with the release of the GeForce 8800 Ultra.

    The new GeForce 8800 Ultra

    The GeForce 8800 Ultra is basically a glorified, overclocked 8800 GTX graphics card with a slightly optimized PCB and core binning in order to bring power consumption to a lower level. The G80 GPU is still 681 million transistors strong and features 128 stream processors in a unified shader architecture. We won't get into more of the details on the GPU architecture than that though, but we have TONS of information on it in our original 8800 GTX article.

    The PCB on the 8800 Ultra is the same size as on the 8800 GTX though the cooler has gone through some significant modifications to enable it to cool with the same power envelope but with less noise. And the results were noticeable (or rather, NOT) as the new cooler design was quieter than the GTX and much less noisy than the X1950 XTX's cooler.
    Here's a quick rundown on WHAT is being overclocked on the 8800 Ultra:

    • 612 MHz core clock versus 575 MHz on GTX

    • 1500 MHz shader clock versus 1350 MHz on GTX

    • 2.16 GHz memory clock versus 1.8 GHz on GTX

    That memory speed equates to a rise from 86.4 GB/s to 103.7 GB/s of theoretical bandwidth and the frame buffer remains at 768MB.
    The new fan expands a bit past the top of the PCB in order to allow for more room for air to pass over the GPU and memory. The fan is also bigger and quieter.
    Power requirements continue at dual 6-pin PCIe connections for the 8800 Ultra.
    The external connectors also remain the same including dual dual-link DVI output and a TV output with support for HDTV.
    Under the black sheath you can see the inner workings of the 8800 Ultra's cooler design.
    Here's the beastly card all stripped and naked for you to see -- notice how absolutely huge the GPU is. 681 million transistors aren't cheap!!
    So, with the 8800 Ultra in the mix, what does NVIDIA's product line up look like?

    • 8800 Ultra
    • 8800 GTX 768MB
    • 8800 GTS 640MB
    • 8800 GTS 320MB
    • 8600 GTS 256MB
    • 8600 GT 256MB
    • 8500 GT
    • 8400 GS (OEM only)
    • 8300 GS (OEM only)
    • 7600 GS
    • 7300 GT
    While no one can deny that the 8800 Ultra is an expensive graphics card, NVIDIA continues to offer a complete range of graphics cards for just about every budget and almost all are DX10 capable (excluding the 7600 and 7300).
    Full review:

    NVIDIA GeForce 8800 Ultra review - Page 1

    NVIDIA GeForce 8800 Ultra

    Pure Muscle power for your PC
    Greetings and salutations earthlings, welcome to yet another new NVIDIA product review. It's been discussed widely ever since... hmm what was it, February? Today NVIDIA is launching its GeForce 8800 Ultra. Now, NVIDIA tried to keep this product as secret as can be, why? Two reasons. First to prevent technical specifications leaking onto the web. Secondly, obviously to change specs at the last minute. See ATI is releasing their R600 graphics card soon and the Ultra is the product that NVIDIA prepared to counteract it in the market, an allergic reaction to the R600 so to speak.
    It's fair to say that the leaked R600 info you have seen has some validity in it (yes, we'll have an article soon) and yes, obviously NVIDIA corporate is scratching their heads right now asking what the heck happened with R600?
    Anyways, welcome to the review. We're not going extremely in-depth today as, despite the rumors of a GX2-based 8800 (which were false), the 8800 Ultra is a respin product. This means it's technically similar to the original GeForce 8800 GTX. So no 196 shader cores or whatever the Inquirer figured it would be. No my friends, we have exactly the same stuff, yet a respin means its core is clocked faster, it has faster memory and the 128 shader processors are clocked faster.
    Pricing. Initially NVIDIA set this product at a 999 USD price point, which, well, honestly, I think my pants dropped when I heard that the first time. In the latest presentation the Ultra was priced at 829 USD. And hear me now, good citizens, I'm changing the price myself and will say it'll be finalized at 699 USD/EUR. Which is still a truckload of money and way too much to just play games, but hey; this is the high-end game. Which means completely insane prices, yet quite a number of you guys will buy it anyway. And hey, you know what? I can't blame you for being a hardcore gamer.
    So what can we expect from the GeForce 8800 Ultra? I stated it already: higher core, memory and shader frequencies (I really prefer to call them shady frequencies), thus an accumulated amount of additional performance and good thermals; man, look at that new cooler! And all that at 175 Watts maximum, as in this silicon revision NVIDIA claims to have some architectural advantages that got wattage down. So in my opinion that would suggest slightly lower core voltage(s) or a better cooled product. Yes my gurus, a better cooled product equals less power consumption.
    Over the next few pages we'll quickly go through the technical specs, we'll skip the in-depth DX10 part as honestly please read it in our reference reviews. We'll look at heat, power consumption, give the card a good run for the money with a plethora of up-to-date games and then we'll try and torch the bugger in a tweaking session where we'll overclock the shiznit (Ed: I'm banning you from ever using that word again, Hilbert) out of it...
    Follow me gang, next page.

    Wazzuuup! Welcome to page two of 15 (yeah, really). Six months, people; that is how long it's been since NVIDIA released its GeForce 8800 GTS and GTX. It took a lot of us by surprise as the performance was, and is, breathtaking. In between, the 320 MB model of the 8800 GTS was released, and a week or two ago the 8500 / 8600 series of DirectX 10 products.
    Question - why are there no DX10 titles available on the market yet? What the heck was Microsoft thinking, releasing Vista propagating the new era in DX10 gaming? Microsoft put out some really good (and I still say photoshopped) MS Flight-Simulator screenshots. Microsoft has its own game-studio... so again, Microsoft, what the hell are you guys doing? Give us at least one DX10 game, dudes, I'm begging you. A quarter of an entire year has passed and after 10+ updates and three reinstalls, Vista is finally crashing just once a week. Give us a game, please? We bought four business licenses at top-dollar, come on, just one game... just one... ? *sighs*
    Alright, back on topic. So the new big pappa of graphics cards is called the GeForce 8800 Ultra, which comes with precisely the same 768 MB of memory as the GTX. So how does the new and current GeForce product line shape up? Have a look:

    • GeForce 8800 Ultra $999 - $829 - $699
    • GeForce 8800 GTX - $599
    • GeForce 8800 GTS 640 MB- $449
    • GeForce 8800 GTS 320 MB - $300
    • GeForce 8600 GTS - $219
    • GeForce 8600 GT - $149
    • GeForce 8500 GT - $99
    • GeForce 7600 GS - $89
    • GeForce 7300 GT - $89
    • GeForce 7300 LE - $79
    • GeForce 7100 GS - $59 but they should give it away for free

    Now obviously, the minute ATI's R600 and the 8800 Ultra become available I expect another shift in manufacturer suggested retail prices. Small hint: expect the 320MB to drop in price soon. The MSRP for the Ultra is 829 USD - but since nobody will buy it at that price, expect it to be 699 by the end of this month.
    The GeForce 8800 Ultra

    Ultra. Try to imagine how that would sound out of the mouth of Arnold Schwarzenegger: "I just bought this GeForce 8800 Oeltra". No clue why I just wrote this? Well, guns, action, gamers, ammo, muscle power; see the parallel here? Wouldn't it be fun to have a Terminator edition of cards called Oeltra? Just like the Dodge or Pontiac, it's a muscle car for PC gaming.
    Ahem, let's be geeks again and do transistors; yay! So did you know that G70/G71 (GeForce 7800/7900) each had nearly 300 million transistors? Well, G80 is a 681 million transistor and counting product. Which means performance. And the faster you clock these transistors the faster it'll perform... or do something like spark, boom... smoke. Now, the 8800 Ultra has the 128 streaming cores (unified shader processors) and it comes with 768 MB of GDDR3 memory.
    The main differences between the GTX and Ultra: memory was clocked at 1800 MHz (2x900) on the GTX, resulting in 86.4 gigabytes per second of memory bandwidth. This has changed. Memory now defaults to 2160 MHz, which equals a theoretical bandwidth of 103.7 GB/s. So do the math: that's roughly 20% extra bandwidth, which is one of the most limiting factors for a high-end GPU. The memory is still on that unusual 384-bit (12 pieces of 16Mx32 memory) bus.
    Now here's where we'll go on a quick side-track. All reviews ramble on about unified shaders; on last-gen hardware it was Shader Model 2 and 3, Pixel Shaders, Vertex Shaders, and now we have Geometry Shaders.
    But do you guys even know what a shader is? Allow me to give you a quick brief on what a shader operation actually is, as very few consumers know what they really are.
    Demystifying the shader.

    If you program or play computer games, or even recently attempted to purchase a video card, then you will have no doubt heard the terms "Vertex Shader" and "Pixel Shader", and with the new DirectX 10, "Geometry Shaders". In today's reviews the reviewers tend to assume that the audience knows everything. I realized that some of you do not even have a clue what we're talking about. Sorry, that happens when you are deep into the matter consistently. Let's do a quick course on what is happening inside your graphics card for it to be able to poop out colored pixels.
    What do we need to render a three dimensional object as 2D on your monitor? We start off by building some sort of structure that has a surface, that surface is built from triangles. Why triangles? They are quick to calculate. How's each triangle being processed? Each triangle has to be transformed according to its relative position and orientation to the viewer. Each of the three vertices that the triangle is made up of is transformed to its proper view space position. The next step is to light the triangle by taking the transformed vertices and applying a lighting calculation for every light defined in the scene. And lastly the triangle needs to be projected to the screen in order to rasterize it. During rasterization the triangle will be shaded and textured.
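    The transform, project and rasterize steps described above can be sketched in a few lines. This is a toy illustration of the per-vertex maths (the matrix and viewport conventions here are assumptions for the example, not real driver code):

```python
# Toy version of the per-vertex work described above: transform a vertex
# by a world-view-projection matrix, then map it to 2D screen pixels.
def mat_vec(m, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def project_to_screen(wvp, vertex, width, height):
    x, y, z, w = mat_vec(wvp, vertex)
    ndc_x, ndc_y = x / w, y / w           # perspective divide
    sx = (ndc_x * 0.5 + 0.5) * width      # viewport mapping
    sy = (1.0 - (ndc_y * 0.5 + 0.5)) * height
    return sx, sy

# Identity "projection" just to show the mechanics:
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
print(project_to_screen(identity, [0.0, 0.0, 0.0, 1.0], 640, 480))  # centre: (320.0, 240.0)
```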
    Graphic processors like the GeForce series are able to perform a very large amount of these tasks. The first generation was able to draw shaded and textured triangles in hardware. The CPU still had the burden to feed the graphics processor with transformed and lit vertices, triangle gradients for shading and texturing, etc. Integrating the triangle setup into the chip logic was the next step and finally even transformation and lighting (TnL) was possible in hardware, reducing the CPU load considerably (everybody remember GeForce 256 ?). The big disadvantage was that a game programmer had no direct (i.e. program driven) control over transformation, lighting and pixel rendering because all the calculation models were fixed on the chip.
    And now we finally get to the stage where we can explain shaders as that's when they got introduced.
    A shader is basically nothing more than a relatively small program executed on the graphics processor to control either vertex, pixel or geometry processing and it has become intensely important in today's visual gaming experience.
    Vertex and pixel shaders allow developers to code customized transformation and lighting calculations as well as pixel coloring, or all-new geometry functionality, on the fly, (post)processed in the GPU. With last-gen DirectX 9 cards there was separate dedicated core logic in the GPU for pixel and vertex code execution, thus dedicated pixel shader processors and dedicated vertex processors. With DirectX 10 something significant changed though. Not only were geometry shaders introduced, but the entire core logic changed to a unified shader architecture, which is a more efficient approach that allows any kind of shader to run on any of the stream processors.
    GeForce 8800 GTX and Ultra have 128 stream processors. These are the shader processors I just mentioned. And it's very unlikely that you understand what I'm about to show you, but allow me to show you an example of a Vertex and a Pixel shader. A small piece of code that is executed on the Stream (Shader) processors inside your GPU:
    Example of a Pixel Shader:
    #include "common.h"

    struct v2p
    {
        float2 tc0 : TEXCOORD0; // base texture coordinates
        half4  c   : COLOR0;    // diffuse colour
    };

    // Pixel shader: modulate the interpolated diffuse colour with the base texture
    half4 main ( v2p I ) : COLOR
    {
        return I.c * tex2D (s_base, I.tc0);
    }

    Example of a Vertex Shader:

    #include "common.h"

    struct vv
    {
        float4 P  : POSITION;
        float2 tc : TEXCOORD0;
        float4 c  : COLOR0;
    };

    struct vf
    {
        float4 hpos : POSITION;
        float2 tc   : TEXCOORD0;
        float4 c    : COLOR0;
    };

    vf main (vv v)
    {
        vf o;

        o.hpos = mul (m_WVP, v.P); // transform position by the world-view-projection matrix
        o.tc   =;            // copy texture coordinates
        o.c    = v.c;              // copy colour

        return o;
    }
    Now this code itself is not at all interesting for you and I understand it means absolutely nothing to you (Ed: Hey! Some of us are software engineers!) but I just wanted to show you in some sort of generic easy to understand manner what a shader is and involves.
    Okay now back to the review.

    Now then, the generic "core" clock for the GTX is 575 MHz. The Ultra is at 612 MHz. Agreed, that seems a little low. But one of the most important dimensions of the GPU is the stream processors, which are clocked independently from the rest of the GPU. On the GTX they were clocked at 1350 MHz; on the Ultra we see a 1500 MHz stream processor clock, or call it the shader domain clock frequency.
    Size then: just like the GeForce 8800 GTX graphics card, the Ultra is 27 cm long; you could say a well hung piece of hardware. It's been said and explained to me by quite a number of female counterparts (render targets, as I like to call them) that size does matter (Ed: So many jokes, so little time... ).
    Due to the size, note that the power connectors are routed off the top edge of the graphics card instead of the end of the card, so there is no extra space required at the end of the graphics card for power cabling. But before purchasing please check if you can insert a 27 CM piece of hardware in that chassis.
    Okay, so this is really all you need to know for now. It's a faster clocked respin product with the same power consumption and a new cooler. The result: 10-15% more performance.
    Some generic facts:

    • All NVIDIA GeForce 8800 GTX / Ultra and GeForce 8800 GTS-based graphics cards are HDCP capable.
    • The GeForce 8 Series GPUs are not only the first shipping DirectX 10 GPUs, but they are also the reference GPUs for DirectX 10 API development and certification and are 100% DirectX 9 compatible.
    • GeForce 8800 GPUs deliver full support for Shader Model 4.0.
    • All graphics cards are being built by NVIDIA’s contract manufacturer.
    • All GeForce 8800 GPUs support NVIDIA SLI technology.
    • The NVIDIA GeForce 8800 GTX has a 24 pixel per clock ROP. The GeForce 8800 GTS has a 20 pixel per clock ROP.
    • GeForce 8800 GTX requires a minimum 450W or greater system power supply (with 12V current rating of 30A).
    • GeForce 8800 GTS requires a minimum 400W or greater system power supply (with 12V current rating of 26A).

    In the photo shoot we'll have a closer look at all three products and tell you a little about connectivity and also that memory mystery.
                                      8800 Ultra    8800 GTX    8800 GTS
    Stream (Shader) Processors        128           128         96
    Core Clock (MHz)                  612           575         500
    Shader Clock (MHz)                1500          1350        1200
    Memory Clock (MHz) x2             1080          900         800
    Memory amount                     768 MB        768 MB      640 MB
    Memory Interface                  384-bit       384-bit     320-bit
    Memory Bandwidth (GB/sec)         103.7         86.4        64.0
    Texture Fill Rate (billion/sec)   39.2          36.8        24.0
    Outputs                           Two Dual link DVI (all models)
    The Unified state of DirectX 10

    We just had a brief chat about shader operations and their importance. What you also need to understand is that the microarchitecture of the new DX10 GPUs (Graphics Processing Units) has been changed significantly.
    Despite the fact that graphics cards are all about programmability, and thus shaders, these days, you'll notice in today's product that we'll not be talking about pixel and vertex shaders much anymore. With the move to DirectX 10 we now have a new technology called unified shader technology and graphics hardware will adapt to that model; it's very promising. DirectX 10 shipped at the beginning of this year with the first public release of Windows Vista. It will definitely change the way software developers make games for Windows and very likely benefit us gamers in terms of better gaming visuals and better overall performance.
    The thing is, with DirectX 10 Microsoft has removed what we call the fixed function pipeline completely (what you guys know as 16 pixel pipelines, for example), making everything programmable. How does that relate to the new architecture? Have a look.
    The new architecture is all about programmability, and thus shaders, as we explained on the previous pages.
    So DirectX 10 and its related new hardware products offer a good number of improvements. So much, actually, that it would require an article of its own. And since we are here to focus on NVIDIA's new product we'll take a shortcut at this stage in the article. In our Guru3D forums I have often seen the presumption that DX10 is only a small improvement over DX9 Shader Model 3.0. Ehm, yes and no. I say it's a huge step as a lot of constraints are removed for the software programmers. The new model is simpler, easier to adapt and allows heaps of programmability, which in the end means a stack of new features and eye candy in your games.
    Whilst I will not go into detail about the big differences I simply would like to ask you to look at the chart below and draw your own conclusions. DX10 definitely is a large improvement, yet look at it as a good step up.

    Here you can see how DirectX's Shader Models have evolved ever since DX8 Shader Model 1.
    So I think what you need to understand is that DirectX 10 doesn't introduce a colossal, fundamental change in new capabilities; yet it brings expanded and new features into DirectX that will enable game developers to optimize games more thoroughly and thus deliver incrementally better visuals and better frame rates, which obviously is great.
    How fast will it be adopted? Well, Microsoft is highlighting the DX10 API as God's gift to the gaming universe, yet what they forget to mention is that all developers who support DX10 will have to continue supporting DirectX 9 as well, and thus maintain two versions of the rendering code in their engine, as DX10 is only available on Windows Vista and not XP, which is such a bitch as everybody refuses to buy Vista.
    However, you can understand that from a game developer point of view it brings a considerable amount of additional workload to develop both standards.
    Regardless of the immense marketing hype, DirectX 10 just is not extraordinarily different from DirectX 9, you'll mainly see good performance benefits due to more efficiency in the GPU rather than vastly prominent visual differences with obviously a good number of exceptions here and there. But hey DirectX is evolving into something better, more efficient and speedier. Which we need to create better visuals.

    The Lumenex Engine

    The one thing I again want to touch, as I respect this move from NVIDIA, is Image quality. This is a quickie copy/paste from our original GeForce 8800 article last year as, well nothing changed in this segment.
    One of the things you'll notice in the new Series 8 products is that a number of pre-existing features have become much better, and I'm not only talking about the overall performance improvements and new DX10 features. Nope, NVIDIA also had a good look at image quality. Image quality is significantly improved on GeForce 8800 GPUs over the prior generation with what NVIDIA calls the Lumenex engine.
    You will now have the option of 16x full screen multisampled antialiasing quality at near 4x multisampled antialiasing performance using a single GPU, with the help of a new AA mode called Coverage Sampled Antialiasing. We'll get into this later, though; pretty much, this is a math-based approach, as the new CSAA mode computes and stores boolean coverage at 16 subsamples. And yes, this is the point where we lost you, right? We'll drop it.
    So what you need to remember is that CSAA enhances application antialiasing modes with higher quality antialiasing. The new modes are called 8x, 8xQ, 16x, and 16xQ. The 8xQ and 16xQ modes provide first class antialiasing quality TBH.
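    To give a feel for the coverage-sampling idea (and this is a simplified illustration of the general principle, not NVIDIA's actual CSAA algorithm), here is a toy sketch of how more boolean coverage samples give finer coverage steps along an edge:

```python
# Toy illustration of coverage sampling: estimate how much of a pixel an
# edge covers using cheap boolean coverage samples. More samples means
# finer coverage steps, hence smoother-looking edges.
# Simplified illustration only -- not NVIDIA's actual CSAA algorithm.
def coverage_fraction(edge_x, samples):
    """Fraction of sample points left of a vertical edge at edge_x in [0,1]."""
    hits = sum(1 for sx, sy in samples if sx < edge_x)
    return hits / len(samples)

def sample_grid(n):
    """n x n regular grid of sample positions inside a unit pixel."""
    step = 1.0 / n
    return [((i + 0.5) * step, (j + 0.5) * step) for i in range(n) for j in range(n)]

# An edge at x=0.3: 16 coverage samples resolve it more finely than 4.
print(coverage_fraction(0.3, sample_grid(4)))  # 0.25 (16 samples: finer steps)
print(coverage_fraction(0.3, sample_grid(2)))  # 0.5  (4 samples: coarse)
```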
    If you pick up a GeForce 8800 GTS/GTX/Ultra then please remember this: each new AA mode can be enabled from the NVIDIA driver control panel and requires the user to select an option called “Enhance the Application Setting”. Users must first turn on ANY antialiasing level within the game’s control panel for the new AA modes to work, since they need the game to properly allocate and enable anti-aliased rendering surfaces.
    If a game does not natively support antialiasing, a user can select an NVIDIA driver control panel option called “Override Any Application Setting”, which allows any control panel AA settings to be used with the game. Also, you need to know that in a number of cases (such as the edge of stencil shadow volumes) the new antialiasing modes cannot be enabled; those portions of the scene will fall back to 4x multisampled mode. So there definitely is a bit of a tradeoff going on, as it is a "sometimes it works but sometimes it doesn't" kind of feature.
    So I agree, a very confusing method. I simply would like to select in the driver which AA mode I prefer, something like "Force CSAA when applicable", yes something for NVIDIA to focus on.
    But 16x quality at almost 4x performance: really good edges at really good performance, and that is obviously always lovely.
    One of the most heated issues with the previous generation of products compared to the competition was the fact that NVIDIA graphics cards could not render AA+HDR at the same time. Well, that was not entirely true, though, as it was possible with the help of shaders, as exactly four games have demonstrated. But it was a far from efficient method; a very far cry (Ed: please no more puns!) you might say.
    So what if I were to tell you that you can now not only push 16xAA with a single G80 graphics card, but also do full 128-bit FP (floating point) HDR? To give you a clue: the previous architecture could not do HDR + AA, but it could technically do 64-bit HDR (just like the Radeons). So NVIDIA got a good wakeup call and noticed that a lot of people were buying ATI cards just so they could do HDR & AA the way it was intended. Now the G80 will do the same, but even better. Look at 128-bit-wide HDR as a palette of brightness/color range that is just amazing. Obviously we'll see this in games as soon as they adopt it, and believe me, they will. 128-bit precision (32-bit floating point values per component) permits almost real-life lighting and shadows: dark objects can appear extremely dark, and bright objects can be blindingly bright, with visible detail present at both extremes, in addition to rendering completely smooth gradients in between.
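    The difference between integer and floating-point render targets is easy to sketch (hypothetical Python, illustrative only; the GPU's actual formats are FP16/FP32 per component):

```python
# Sketch: why floating-point render targets matter for HDR
# (assumption: simplified, illustrative model only).

def store_8bit(luminance):
    # Classic LDR path: clamp to [0, 1] and quantize to 256 levels.
    return round(max(0.0, min(1.0, luminance)) * 255) / 255

def store_float(luminance):
    # FP render target: the value survives un-clamped, so a later
    # tone-mapping pass can still recover highlight detail.
    return luminance

sun = 80.0               # a very bright highlight, in arbitrary scene units
print(store_8bit(sun))   # clamps to 1.0: the highlight detail is gone
print(store_float(sun))  # 80.0 is preserved for tone mapping
```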
    As stated, HDR lighting effects can now be used together with multisampled antialiasing on GeForce 8 Series GPUs, with the addition of angle-independent anisotropic filtering. The antialiasing can be used in conjunction with both FP16 (64-bit color) and FP32 (128-bit color) render targets.
    Improved texture quality

    It's just something we must mention. We have all been complaining about shimmering effects and lesser filtering quality compared to the Radeon products; that's a thing of the past. NVIDIA listened and added raw horsepower for texture filtering, making it really darn good. Well... we can actually test that!
    Allow me to show you. See, I have this little tool called D3D AF Tester which helps me determine image quality in terms of anisotropic filtering. Basically, we knew that ATI has always been better than NVIDIA at IQ.
    GeForce 7900 GTX 16xAF (HQ)
    Radeon X1900 XTX 16xHQ AF
    GeForce 8800 16xAF Default
    Now have a look at the images above and let it sink in. It would go too far to explain everything you are looking at, but the more perfectly round the colored circle in the middle is, the better the image quality: a perfectly round circle is perfect IQ.
    Impressive to say the least. The AF patterns are just massively better compared to previous generation hardware. Look at that, that is default IQ; that's just really good...

    Demystifying HDCP

    We're going multimedia now, as not everything is about playing games these days. Your brand spanking new 8800 Ultra, 8800 GTX, 8800 GTS or 8600 GTS is HDCP compatible. But what the heck does that mean?
    An HD Ready television or monitor will have either DVI (Digital Video Interface) or HDMI (High Definition Multimedia Interface). Both connections provide exceptional quality; HDMI is often referred to as the digital SCART cable as it also carries audio. DVI supplies picture only; separate cables are needed for audio. Both HDMI and DVI support HDCP (High-bandwidth Digital Content Protection), which will be a requirement for protected content.
    Under Vista, when you want to play back HDCP-protected content (movies) on your monitor, the resolution could be scaled down, or even worse: your screen will go black during playback if you do not have an HDCP encoder chip working on the graphics card.
    PureVideo HD

    This is a copy & paste from the previous 8800 GTX article, as the video engine is 100% the same.
    Ever since the previous generation of graphics cards (Series 6), NVIDIA did something really smart: they made the GPU (the graphics chip) an important factor in encoding/decoding video streams. With a special software suite called PureVideo you can offload the video encoding/decoding process from the CPU to the GPU, and best of all, it can greatly enhance image quality.
    PureVideo HD is a video engine built into the GPU (dedicated core logic): dedicated GPU-based video processing hardware, software drivers and software-based players that accelerate decoding and enhance image quality of high definition video in the following formats: H.264, VC-1, WMV/WMV-HD, and MPEG-2 HD.
    So what are the key advantages of PureVideo? In my opinion, two factors stand out. The first is offloading the CPU by letting the GPU take over a large share of the workload. HDTV decoding of a TS (Transport Stream) file, for example, can be very demanding for a CPU; these media files can easily peak at 20 Mbit/sec, as HDTV streams offer high-resolution playback at 1280x720p or even 1920x1080p without frame drops or image quality loss.
    By offloading the bigger part of that task to the graphics core, you give the CPU much more headroom to do other things, which keeps your PC running normally. The combination of these factors offers you stutter-free, high-quality, high-resolution media playback. All standard HDTV resolutions are of course supported, among them the obvious 480p, 720p and 1080i modes and now also 1080p (p = progressive, i = interlaced).
    Ever since the Series 75 ForceWare driver, PureVideo has been doing something I've been waiting on for quite some time: 2:2 pulldown, which maps film content to 50-fields-per-second PAL. Along with this, the new G80 series (and this'll work on G70 as well) offers HD noise reduction, which is a great feature for older converted films. And this is where we land at image quality. PureVideo offers a large number of options that increase the IQ of playback. Obviously NVIDIA has some interesting filters available in the PureVideo suite, like advanced de-interlacing, which can greatly improve image quality while playing back a DVD, MPEG2 or TS file (just some examples). Aside from that, things like color correction should not be forgotten. All major media streams are supported by NVIDIA with PureVideo. And yes, High Definition H.264 acceleration, which will become a big, new and preferred standard, is also supported.
    Paradox: You do not need PureVideo for HDTV playback and connectivity, but it is recommended if you have that dedicated hardware in your system anyway.
    When connecting your PC to the HDTV screen, use the best (and thus most expensive) connection available. You can go with a component adapter and the 3-way RCA cable. However, and as weird as this might sound, image quality, while good, is simply not perfect; it's still analog, you know. It's a relatively cheap way to connect to an HDTV screen though. What you really want is to go digital on the connection. Obviously you spent a lot of money on the HDCP (HD Ready) HDTV screen and PC, so invest a little more in digital connectivity. You'll notice that your video card has a lovely DVI-I/D compatible output, so please use it!
    Guru3D uses this DVI-D <-> HDMI cable. Once you boot up into Windows you'll immediately notice the difference; rich colors and good quality.
    Once booted into Windows you'll likely notice some seriously bad overscan (the Pioneer screens are known and feared for this): the outer segments of the picture are not being displayed on your HDTV. NVIDIA now offers some really cool underscan and overscan options. In essence you shrink the resolution a little to make it fit perfectly on your HDTV; I believe this is done by adding black borders to the video signal. The new ForceWare 75 series actually allows independent X and Y underscan control in a very simple manner. After you've done that, the only thing you can say while playing back an HDTV file is "Oh my God." The image quality is simply beautiful and works extremely well with NVIDIA's recent graphics cards.
    To verify whether the PureVideo claims are true and the CPU is indeed offloaded by the GPU, we put this to the test: with an HD movie (.ts file) we logged CPU utilization during playback. Well, obviously it works.
    Support for .TS files is important to me personally: if your satellite box can record an MPEG stream it will do so in a .TS file, after which you can play back the content through MediaPlayer / Media Center easily and without quality loss.
    In our case we had a .TS episode recording of The Sopranos (our standard test file). This puppy pushes a 12 to 20 Mbit/sec datastream, which is one of the most difficult things for a PC to manage right now (if you do not have the proper decoders). Normally a mid-range AMD Athlon 64 4000+ would peak at 55-60% CPU utilization; with an H.264 file, it easily hits 100%.
    Once you offload it to the graphics processor things look much better with the help of PureVideo. Have a look at the graph below where you are monitoring the CPU at work at roughly 12-20% decoding a HDTV .TS file:
    Indeed, a huge improvement over standard decoding. We are now at a CPU utilization of 12-20%, really nice for a HD MPEG2 stream. The processor is almost doing nothing. Let me remind you again that this is a Transport Stream file with a HDTV resolution of 1920x1080i.
    In combination with the new drivers you can now also decode High Definition H.264 streams. H.264 is a compression algorithm used to transmit video efficiently between endpoints, and is seen as the replacement for its predecessor, H.263. What is different about H.264 is that it promises to deliver high quality video: it enables very high quality encoding, producing far better results than even MPEG2, at HDTV levels of course.
    These GeForce series 7 and 8 cards can also manage 3:2 and 2:2 pull down (inverse telecine) of SD and HD interlaced content.
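    The 3:2 pulldown mentioned above can be sketched in a few lines (hypothetical Python, a simplified field model): 24 film frames per second are spread over 60 interlaced fields per second by alternately emitting three and two fields per film frame, and inverse telecine is the act of undoing this to recover the original frames.

```python
# Sketch of 3:2 pulldown (assumption: simplified model; real hardware also
# tracks field parity and cadence breaks). Film frames A, B, C, D become
# ten interlaced fields, i.e. 24 frames/s -> 60 fields/s.

def telecine_32(frames):
    fields = []
    for i, frame in enumerate(frames):
        repeats = 3 if i % 2 == 0 else 2        # the 3,2,3,2,... cadence
        for _ in range(repeats):
            parity = "t" if len(fields) % 2 == 0 else "b"  # top/bottom field
            fields.append(frame + parity)
    return fields

print(telecine_32(["A", "B", "C", "D"]))  # 10 fields from 4 film frames
```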
    New with the introduction of GeForce 8800 we see a 10-Bit display processing pipeline and also new post-processing options (works for GeForce series 7 as well):

    • Adds VC-1 & H.264 HD Spatial-Temporal De-Interlacing
    • Adds VC-1 & H.264 HD Inverse Telecine
    • Adds HD Noise Reduction
    • Adds HD Edge enhancement

    Software like WinDVD, PowerDVD and Nero ShowTime will support PureVideo from within their software. You can also buy the PureVideo software for a few tenners from NVIDIA, after which MediaPlayer or Media Center will work with it flawlessly.
    To give you an idea how immensely big one frame of 1920x1080 is at a framerate of 24 frames per second, click on the two example images above. Load them up, and realize that your graphics card is displaying that kind of content 24 times per second, while enhancing it in real time.
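    A quick back-of-envelope sketch (hypothetical Python; assumes 24-bit RGB with no chroma subsampling) shows just how much raw data such a stream represents compared to the 12-20 Mbit/s transport stream being decoded:

```python
# Raw data rate of a 1920x1080 stream at 24 fps, versus a ~20 Mbit/s
# compressed transport stream (assumption: 3 bytes per pixel, no audio).

width, height, bytes_per_pixel, fps = 1920, 1080, 3, 24

frame_bytes = width * height * bytes_per_pixel       # one uncompressed frame
raw_mbit_per_s = frame_bytes * 8 * fps / 1_000_000   # uncompressed bitrate

print(frame_bytes)                  # ~6.2 MB per frame
print(round(raw_mbit_per_s))        # ~1194 Mbit/s raw
print(round(raw_mbit_per_s / 20))   # ~60:1 compression vs a 20 Mbit/s TS
```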

    The Photos

    On the next few pages we'll show you some photos. The images were taken at 2560x1920 pixels and then scaled down. The camera used was a Sony DSC-F707 at 5.1 megapixels.
    Here we go, the GeForce 8800 Ultra 768 MB
    Okay, there she is, slightly overexposed, sorry about that; the color is actually darker. This, my friends, is the GeForce 8800 Oeltra... ehm, Ultra. It looks different, yes, but it's the same PCB, length and, well, everything else as the 8800 GTX.
    And yes Einstein, that's the backside. The card is 10.5 inches long, 27 cm for those using the metric system.
    On top are the two SLI connectors that to date NVIDIA has not explained. The second SLI connector on the GeForce 8800 GTX is hardware support for "impending future enhancements" in SLI software functionality. Don't you love diplomacy? So that's either physics as an add-on or two-way SLI. With the current drivers, only one SLI connector is actually used; you can plug the SLI bridge into either the right or left connector, it really doesn't matter which one.
    DVI connectors, dual-link DVI of course. With the 7-pin HDTV-out mini-din, a user can plug an S-video cable directly into the connector, or use a dongle for YPrPb (component) or composite outputs. The prior 9-pin HDTV-out mini-din connector required a dongle to use S-video, YPrPb and composite outputs.

    Here we can see the two 6-pin power connectors. Since the PCB is rather long they have been placed logically on the upper side of the card. Very clever as that'll save lots of space.
    The card in the test system, an eVGA nForce 680i SLI; a rather sexy mainboard for sure. As for the cooler: don't get confused, it's still the old cooler, just with a new shell. The design allows better air draw and thus better airflow.
    Twilight, evening... a little lighting. Aah mon cheri, all we need is a nice Chardonnay now.
    One more, but that's really it. It's time to move on to the test session, i.e. benchmarks!

    Full review:,1.html

    New Ultra High End Price Point With GeForce 8800 Ultra


    NVIDIA owns the high end graphics market. For the past six months, there has been no challenge to the performance leadership of the GeForce 8800 GTX. Since the emergence of Windows Vista, NVIDIA hardware has been the only platform to support DX10. And now, before AMD has come to market with any competing solution whatsoever, NVIDIA is releasing a refresh of its top of the line part.

    The GeForce 8800 Ultra debuting today doesn't have any new features over the original 8800 GTX. The GPU is still manufactured using a 90nm process, and the transistor count hasn't changed. This is different silicon (A3 revision), but the GPU has only really been tweaked rather than redesigned.

    Not only will NVIDIA's new part offer higher performance than the current leader, but it will introduce a new price point in the consumer graphics market moving well beyond the current $600 - $650 set by the 8800 GTX, skipping over the $700 mark to a new high of $830. That's right, this new high end graphics card will be priced $230 higher than the current performance leader. With such a big leap in price, we had hoped to see a proportional leap in performance. Unfortunately, for the 38% increase in price, we only get a ~10% increase in core and shader clock speeds, and a 20% increase in memory clock.
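    Those percentages are easy to check against the quoted prices and clocks (a quick sketch; note the "~10%" clock figure sits between the separate core and shader increases):

```python
# Price/performance gaps between the 8800 GTX and the 8800 Ultra,
# derived from the figures quoted above.

def pct_increase(old, new):
    return (new - old) / old * 100

print(round(pct_increase(600, 830)))   # price: ~38%
print(round(pct_increase(576, 612)))   # core clock: ~6%
print(round(pct_increase(1.35, 1.5)))  # shader clock: ~11%
print(round(pct_increase(1.8, 2.16)))  # memory clock: ~20%
```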

    Here's a chart breaking down NVIDIA's current DX10 lineup:

    NVIDIA G8x Hardware

    Card            SPs  ROPs  Core Clock  Shader Clock  Memory Data Rate  Memory Bus Width  Memory Size  Price
    8800 Ultra      128  24    612MHz      1.5GHz        2.16GHz           384-bit           768MB        $830+
    8800 GTX        128  24    576MHz      1.35GHz       1.8GHz            384-bit           768MB        $600-$650
    8800 GTS        96   20    513MHz      1.19GHz       1.6GHz            320-bit           640MB        $400-$450
    8800 GTS 320MB  96   20    513MHz      1.19GHz       1.6GHz            320-bit           320MB        $300-$350
    8600 GTS        32   8     675MHz      1.45GHz       2GHz              128-bit           256MB        $200-$230
    8600 GT         32   8     540MHz      1.19GHz       1.4GHz            128-bit           256MB        $150-$160
    8500 GT         16   4     450MHz      900MHz        800MHz            128-bit           256MB/512MB  $89-$129
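    The memory bandwidth figures implied by that table follow directly from data rate times bus width (a quick sketch):

```python
# Memory bandwidth = effective data rate (GHz) * bus width (bits) / 8.

def bandwidth_gbs(data_rate_ghz, bus_bits):
    return data_rate_ghz * bus_bits / 8

print(round(bandwidth_gbs(2.16, 384), 2))  # 8800 Ultra: 103.68 GB/s
print(round(bandwidth_gbs(1.8, 384), 2))   # 8800 GTX: 86.4 GB/s
print(round(bandwidth_gbs(1.6, 320), 2))   # 8800 GTS: 64.0 GB/s
```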

    We do know NVIDIA has wanted to push up towards the $1000 graphics card segment for a while. Offering the top of the line for what almost amounts to a performance tax would give NVIDIA the ability to sell a card and treat it like a Ferrari. It would turn high end graphics into a status symbol rather than a commodity. That and having a huge margin part in the mix can easily generate additional profits.

    Price gaps larger than performance increases are not unprecedented. In the CPU world, we see prices rise much faster than performance, especially at the high end. It makes sense that NVIDIA would want to capitalize on this sort of model and charge an additional premium for their highest performing part. This way, they also get to introduce a new high end part without pushing down the price of the rest of their lineup.

    Unfortunately, the stats on the hardware look fairly similar to an overclocked 8800 GTX priced at $650: the EVGA e-GeForce 8800 GTX KO ACS3. With core/shader/memory clock speeds at 626/1450/1000, this EVGA overclocked part poses some stiff competition both in terms of performance and especially price. NVIDIA's G80 silicon revision might need to be sprinkled with magic fairy dust to offer any sort of competition to the EVGA card.

    We should also note that this part won't be available until around the 15th of May, and this marks the first launch to totally balk on the standard of a hard launch with product announcement. While we hate to see the hard launch die from a consumer standpoint, we know those in the graphics industry are thrilled to see some time reappear between announcement and launch. While hard launches may be difficult, going this direction leaves hardware designers with enough rope to hang themselves. We would love to believe AMD and NVIDIA would be more responsible now, but there is no real reason to think history won't repeat itself.

    But now, let's take a look at what we are working with today.

    The GeForce 8800 Ultra

    Physically, the layout of the board is no different, but NVIDIA has put quite a bit of work into their latest effort. The first and most noticeable change is the HSF.

    We have been very happy with NVIDIA's stock cooling solutions for the past few years. This HSF solution is no different, as it offers quiet and efficient cooling. Of course, this could be due to the fact that the only real changes are the position of the fan and the shape of the shroud.

    Beyond cooling, NVIDIA has altered the G80 silicon. Though they could not go into the specifics, NVIDIA indicated that layout has been changed to allow for higher clocks. They have also enhanced the 90nm process they are using to fab the chips. Adjustments targeted at improving clock speed and reducing power (which can sometimes work against each other) were made. We certainly wish NVIDIA could have gone into more detail on this topic, but we are left to wonder exactly what is different with the new revision of G80.

    As far as functionality is concerned, no features have changed between the 8800 GTX and the 8800 Ultra. What we have, for all intents and purposes, is an overclocked 8800 GTX. Here's a look at the card:

    While we don't normally look at overclocking with reference hardware, NVIDIA suggested that there is much more headroom available in the 8800 Ultra than on the GTX. We decided to put the card to the test, but we will have to wait until we get our hands on retail boards to see what end users can realistically expect.

    Using nTune, we were able to run completely stable at 684MHz. This is faster than any of our 8800 GTX hardware has been able to reach. Shader clock increases with core clock when set under nTune. The hardware is capable of independent clocks, but currently NVIDIA doesn't allow users to set the clocks independently without the use of a BIOS tweaking utility.

    We used RivaTuner to check out where our shader clock landed when setting core clock speed in nTune. With a core clock of 684MHz, we saw 1674MHz on the shader. Pushing nTune up to 690 still gave us a core clock of 684MHz but with a shader clock of 1728MHz. The next core clock speed available is 702MHz which also pairs with 1728MHz on the shader. We could run some tests at these higher speeds, but our reference board wasn't able to handle the heat and locked up without completing our stress test.

    It is possible we could see some hardware vendors release 8800 Ultra parts with over 100MHz higher core clocks than stock 8800 GTX parts, which could start to get interesting at the $700+ price range. It does seem that the revised G80 silicon may be able to hit 700+ MHz core clocks with 1.73GHz shader clocks with advanced (read: even more expensive) cooling solutions. That is, if our reference board is actually a good indication of retail parts. As we mentioned, we will have to wait and see.

    Full review:

    Nvidia's GeForce 8800 Ultra graphics card

    The G80 girds for battle

    WHAT HAPPENS WHEN YOU take the fastest video card on the planet and turn up its clock speeds a bit? You have a new fastest video card on the planet, of course, which is a little bit faster than the old fastest video card on the planet. That's what Nvidia has done with its former king-of-the-hill product, the GeForce 8800 GTX, in order to create the new hotness it's announcing today, the GeForce 8800 Ultra.
    There's more to it than that, of course. These are highly sophisticated graphics products we're talking about here. There's a new cooler involved. Oh, and a new silicon revision, for you propellerheads who must know these things. And most formidable of all may be the new price tag. But I'm getting ahead of myself.
    Perhaps the most salient point is that Nvidia has found a way to squeeze even more performance out of its G80 GPU, and in keeping with a time-honored tradition, the company has introduced a new top-end graphics card just as its rival, the former ATI now owned by AMD, prepares to launch its own DirectX 10-capable GPU lineup. Wonder what the new Radeon will have to contend with when it arrives? Let's have a look.
    It's G80, Jim, but not as we know it
    For us, the GeForce 8800 is familiar territory by now. We've reviewed it on its own, paired it up by twos in SLI for killer performance, and rounded up a host of examples to see how they compared. By and large, the GeForce 8800 Ultra is the same basic product as the GeForce 8800 GTX that's ruled the top end of the video card market since last November. It has the same 128 stream processors, the same 384-bit path to 768MB of GDDR3 memory, and rides on the same 10.5" board as the GTX. There are still two dual-link DVI ports, two SLI connectors up top, and two six-pin PCIe auxiliary power connectors onboard. The feature set is essentially identical, and no, none of the new HD video processing mojo introduced with the GeForce 8600 series has made its way into the Ultra.
    Yet the Ultra is distinct for several reasons. First and foremost, Nvidia says the Ultra packs a new revision of G80 silicon that allows for higher clock speeds in a similar form factor and power envelope. In fact, Nvidia says the 8800 Ultra has slightly lower peak power consumption than the GTX, despite having a core clock of 612MHz, a stream processor clock of 1.5GHz, and a memory clock of 1080MHz (effectively 2160MHz since it uses GDDR3 memory). That's up from a 575MHz core, 1.35GHz SPs, and 900MHz memory in the 8800 GTX.
    Riding shotgun on the Ultra is a brand-new cooler with a wicked hump-backed blower arrangement and a shroud that extends the full length of the board. Nvidia claims the raised fan allows the intake of more cool surrounding air. Whether it does or not, it's happily not much louder than the excellent cooler on the GTX. Unfortunately, though, the longer shroud will almost certainly block access to SATA ports on many of today's port-laden enthusiast-class motherboards.
    If you dig the looks of the Vader-esque cooling shroud and want the bragging rights that come with the Ultra's world-beating performance, you'll have to cough up something north of eight hundred bucks in order to get it. Nvidia expects Ultra prices to start at roughly $829, though they may go up from there depending on how much "factory overclocking" is involved. That's hundreds of dollars more than current GTX prices, and it's asking quite a lot for a graphics card, to say the least. I suppose one could argue it offers more for your money than a high-end quad-core processor that costs 1200 bucks, but who can measure the depths of insanity?
    The Ultra's tweaked clock speeds do deliver considerably more computing power than the GTX, at least in theory. Memory bandwidth is up from 86.4GB/s to a stunning 103.7GB/s. Peak shader power, if you just count programmable shader ops, is up from 518.4 to 576 GFLOPS, or from 345.6 to 384 GFLOPS if you don't count the MUL instruction that the G80's SPs can co-issue in certain circumstances. The trouble is that "overclocked in the box" versions of the 8800 GTX are available now with very similar specifications. Take the king of all X's, the XFX GeForce 8800 GTX XXX Edition. This card has a 630MHz core clock, 1.46GHz shader clock, and 1GHz memory. That's very close to the Ultra's specs, yet it's selling right now for about $630 at online vendors.
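    The shader-throughput numbers quoted here follow directly from the SP count and clocks (a quick sketch; the assumption is a MADD counting as 2 flops per SP per clock, with the co-issued MUL adding a third):

```python
# Peak programmable shader throughput in GFLOPS:
# SPs * shader clock (GHz) * flops issued per SP per clock.

def shader_gflops(sps, shader_ghz, flops_per_clock):
    return sps * shader_ghz * flops_per_clock

# GeForce 8800 Ultra: 128 SPs at 1.5 GHz
print(round(shader_gflops(128, 1.5, 3), 1))   # with co-issued MUL: 576.0
print(round(shader_gflops(128, 1.5, 2), 1))   # MADD only: 384.0
# GeForce 8800 GTX: 128 SPs at 1.35 GHz
print(round(shader_gflops(128, 1.35, 3), 1))  # 518.4
print(round(shader_gflops(128, 1.35, 2), 1))  # 345.6
```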
    So the Ultra is—and this is very technical—what we in the business like to call a lousy value. Flagship products like these rarely offer stellar value propositions, but those revved-up GTX cards are just too close for comfort.
    The saving grace for this product, if there is one, may come in the form of hot-clocked variants of the Ultra itself. Nvidia says the Ultra simply establishes a new product baseline, from which board vendors may improvise upward. In fact, XFX told us that they have plans for three versions of the 8800 Ultra, two of which will run at higher clock speeds. Unfortunately, we haven't yet been able to get likely clock speeds or prices from any of the board vendors we asked, so we don't yet know what sort of increases they'll be offering. We'll have to watch and see what they deliver.
    We do have a little bit of time yet on that front, by the way, because 8800 Ultra cards aren't expected to hit online store shelves until May 15 or so. I expect some board vendors haven't yet determined what clock speeds they will offer.
    In order to size up the Ultra, we've compared it against a trio of graphics solutions in roughly the same price neighborhood. There's the GeForce 8800 GTX, of course, and we've included one at stock clock speeds. For about the same price as an Ultra, you could also buy a pair of GeForce 8800 GTS 640MB graphics cards and run them in SLI, so we've included them. Finally, we have a Radeon X1950 XTX CrossFire pair, which is presently AMD's fastest graphics solution.
    Full review:

    On May 2nd, 2007, nVidia launched yet another top-end graphics card.
    After roughly six months of absolute, untouched dominance with the 8800 GTX, and after weeks of speculation that AMD would finally put the R600 on the market that very week, nVidia, in what may have been a move to pre-empt AMD, raised the bar even higher: what was already good became even better.
    Technically the 8800 Ultra is exactly the same as the 8800 GTX, just with memory and stream processor clocks roughly 10% higher. nVidia claims the chip received a small revision, but for practical purposes this card can be considered a vitamin-boosted rebrand of the 8800 GTX; here began the almost endless cycle of rebrands that the G80 went through.
    There is not much more to add about this chip, other than that it raised the bar a little further and made AMD's task with the R600 even harder.
    In the preceding months nVidia kept this chip and this card somewhat secret; there was even talk of a new GX2 version based on the G80, and speculation that the card would cost €1000 or more here in Portugal, but when it arrived the price was around €800. This card led nVidia to make a small price adjustment in the tiers immediately below.
    Another curious aspect of the card is the new cooling solution nVidia fitted; it is a bit odd to see the fan almost hanging off the board.
    As you would expect, this card sold and sold, and enjoyed a long reign as the fastest card on the market.

  2. #212
    O Administrador Avatar de LPC
    Mar 2013
    31 (100%)
    I was in my golden days back then...

    I was lucky enough to own several 8800 Ultras, including for SLI and the like...

    The price did indeed range between €800 and €900 per card.

    The offset cooler was there so it could draw air from below and above in SLI configurations, since this beast got seriously hot at full load...

    Honestly, it wasn't worth it compared to the 8800 GTX, which was far more affordable while performance was really very close...


    My Specs: .....
    CPU: AMD Ryzen 9 3900X - 4.3Ghz @ 1.30v - Board: MSI AMD B450M Mortar MAX - RAM: 16 GB DDR4 G.Skill RipJaws V 3200Mhz CL14 - GPU: MSI AMD Radeon RX 5700XT Gaming X 8GB - Monitor: BenQ EW3270U 4k HDR
    Case: Tt Core V21 - Cooling: 5x Arctic Cooling P14 PWM PST - CPU Cooler: CM Hyper 212 Black Edition + 2x Gentle Typhoon AP-15 - Storage: ADATA XPG SX8200 Pro NVMe 1TB + WD SN520 NVMe 512GB - PSU: Corsair HX1200i

  3. #213
    Master Business & GPU Man Avatar de Enzo
    Jan 2015
    País Campeão Euro 2016
    41 (100%)
    I still have two of them over there and I think they're brutal. Even today, almost a decade later.
    Ideias sem Nexo e Provas do Tráfico de Hardware
    "que personifica o destino, equilíbrio e vingança divina." Dejá vú. Que cena!

  4. #214
    Tech Ubër-Dominus Avatar de Jorge-Vieira
    Nov 2013
    City 17
    1 (100%)
    NVIDIA 8800 Ultra SLI review

    NVIDIA 3-way SLI review 3x XFX 8800 Ultra - Page 1 - Introduction

    Are you ready for a threesome?
    NVIDIA introduces 3-way SLI.
    Info: XFX Graphics
    Hey guys; as you know from the rumors around the web, NVIDIA is launching 3-way SLI. And darn, there's hardly a secret left in anything NVIDIA releases these days. Over six weeks ago we started planning our 3-way SLI review, yet due to some very dark circumstances we received our boards less than 20 hours prior to the launch. So before we even start this article, forgive me for the short technical explanations; there's nothing really in-depth or "guru style" today. We'll just focus on the basics of 3-way SLI and what matters the most: benchmarks.
    You guys know exactly what SLI & Crossfire are; that doesn't need any profound explanation. Basically, you take two graphics cards, place them on a supported mainboard, connect the cards together with an SLI or Crossfire bridge and, with any luck, you can double the 3D rendering performance of your games.
    For the last year, if you had a closer look at the 8800 GTX & Ultra, you'd have noticed there is a second SLI finger (connector), and to date there has been a lot of speculation about that extra connector; to name one theory, it could have been an extension for physics on a third card. The truth is often found in the most simple and logical solution: it was all about SLI from the get-go. If you have one of these rather expensive cards, you can now link up three cards and enable 3-way SLI.
    But now you go... "Oy Hilbert! Didn't we have Quad-SLI already?" Yes sir, we did, and it failed miserably due to a plethora of factors. In the end Quad SLI was killed off by a DX9 backbuffer (we called it backbugger) limitation which pretty much prevented games from utilizing more than 2 GPUs, and from there on, a lack of driver development. NVIDIA tried to evangelize it with sexy AA modes running on the 3rd and 4th GPU, yet it never took off.
    front buffer = the image currently being displayed
    back buffer = the image currently being computed
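    That front/back split can be sketched as a simple swap chain (hypothetical Python, an illustrative model rather than how any driver actually works):

```python
# Minimal double-buffering sketch. The front buffer is what the display
# scans out; the back buffer is where the next frame is rendered.
# Presenting a frame ("flip") swaps the two.

class SwapChain:
    def __init__(self):
        self.front = "frame 0"   # currently displayed
        self.back = None         # render target for the next frame

    def render(self, frame):
        self.back = frame        # draw off-screen, no tearing on screen

    def flip(self):
        self.front, self.back = self.back, self.front

chain = SwapChain()
chain.render("frame 1")
chain.flip()
print(chain.front)  # frame 1 is now displayed
```

    The Quad SLI problem described above was, per the article, a DX9 limit on how many of these back buffers could be queued ahead, which capped any scaling beyond two GPUs.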
    Pretty sour if you bought two of the GX2 cards at that time. But if you still have them, you might actually benefit from the driver development for 3-way SLI; in theory you can apply the new SLI modes to Quad SLI as well.
    What we'll do today is simple. We'll go right into a small preview and photo session, then fire off a dozen or two 3-way SLI results with a bunch of games, as hey... that's what it's all about. Unfortunately the one game lacking will be Crysis: at the time of this writing, Crytek still has not released the Crysis update needed to enjoy the benefits of 3-way SLI.
    So not only did NVIDIA plan to refresh its lineup of performance graphics accelerators this year, it also intended to introduce its 3-way SLI multi-GPU technology. Initially NVIDIA plans to enable triple SLI support for the top-of-the-range GeForce 8800 GTX and Ultra graphics cards, but eventually it may support 3-way configurations of other GPUs as well, and with rumors of these cards reaching EOL soon, they'd better.
    Systems with three graphics cores will be powered by NVIDIA nForce 680i as well as nForce 780i platforms, the former supporting PCI Express 1.1/1.0a and the latter featuring PCI Express 2.0.
    Next page please.

    Before we start, we need to have a chat about system requirements. First off, you'll need an nForce 680i SLI or 780i SLI mainboard. Luckily ... we have both of them. In general, you need a pretty beefy system to keep up with three-way SLI; it's pretty much for the rich and famous only.
    NVIDIA recommended system specifications:

    • Three GeForce 8800 GTX or Ultra graphics cards
    • Intel Core 2 Duo X6800 2.93 GHz or better processor
    • 2 GB 800 MHz DDR2 memory
    • nForce 680i or 780i SLI mainboard
    • Windows Vista
    • Two HDD in RAID0 (Stripe) configuration
    • DVD drive
    • Sound card

    Woohoo, our test platform matches almost completely, except for the RAID0 configuration. NVIDIA, here's a small hint: we really want hardware RAID, not a silly software RAID configured inside Windows. The RAID array needs to be set up prior to booting Windows. Anyway, we'll use a 150 GB WD Raptor drive at 10k RPM.
    Now here's where the system requirements get a little sour. NVIDIA advises a power supply that really makes you go "uh, what?":

    • Minimum 1100 Watt peak power
    • Six PCI-E power connectors

    There are not a lot of PSUs out there that can do that. Rest assured though that NVIDIA's requirements are a bit on the high side; a 900/1000 Watt unit would presumably suffice as well, unless you seriously start to overclock. We do have a couple of 1200 Watt PSUs in the lab and will obviously use one of them. And if I don't forget, I'll monitor the PC's peak wattage draw.
    The SLI connector
    Now this is something you'll need. Expect vendors like XFX to sell the new SLI connector separately if you already have a 680i board. If you purchase the soon-to-be-released 780i mainboard, both connectors (which we'll now call the two-way and three-way SLI connectors) will be included.

    The new 3-way SLI bridge connector, like the three musketeers. All for one, one for all.

    NVIDIA has been working really hard on the new 3-way SLI drivers, and from the look of it, this technology is here to stay. Installation is fairly easy: insert the cards, place the SLI bridge on the SLI fingers and boot into Windows. After the installation is completed, browse to the SLI tab in the NVIDIA control panel.
    Once there, simply flag 3-way SLI, and after a GPU re-init you should be good to go. I restarted the system to be sure, however. 3-way SLI works straight out of the box. You can configure games independently with a plethora of SLI settings (which we'll take a peek at later in another article); look for them in the game profiles.
    Also pretty cool, that's a lot of monitors supported eh ?
    Power consumption
    Freakish gawd ... that's a lot. Ehm, a high-end system plus three graphics cards in live action guarantees pretty high power consumption. It is imperative that you purchase a really fine PSU; have a peek.
    Lousy photo, sorry about that, but that rig is sucking 728 Watt from the wall socket when fully utilized. But we already knew that in advance, didn't we? Things like these should not worry you; if they do, then 3-way SLI is not for you, my man. This PC is not even overclocked, everything runs at default settings; 750-800 Watt is possible, you know.

    But let's move onwards to the photo shoot.

    The Photos

    On the next few pages we'll show you some photos. The images were taken at 2560x1920 pixels and then scaled down. The camera used was a Sony DSC-F707 5.1 MegaPixel.
    XFX Technology was kind enough to send in three 8800 Ultras for the test. Yeah ... that's 1800-2000 bucks right there. To your left you can see the new 3-way SLI connector.
    Well, I just like odd photography. Can I hear an "Ooh" please ?
    Once installed and alive in the test system. Three Ultra's plus the SLI connector clearly visible.
    Slightly different angle. Please don't mind the lousy wiring ...
    Okay one more, but that's it. Let's peek a little at performance.

    Hardware and Software Used

    Now we begin the benchmark portion of this article, but first let me show you our test system plus the software we used.
    nVIDIA nForce 680i SLI (eVGA)
    Core 2 Duo X6800 Extreme (Conroe)
    Graphics Cards
    XFX GeForce 8800 Ultra XXX edition (x3)
    2048 MB (2x1024MB) DDR2 CAS4 @ 1142 MHz Dominator Corsair
    Power Supply Unit
    Sirtec High-Power 1200 Watt
    Dell 3007WFP - up-to 2560x1600
    OS related Software
    Windows Vista Business Edition
    DirectX 9.0c/10.0 End User Runtime
    NVIDIA ForceWare 169.18 Beta
    NVIDIA nForce 680i platform driver 9.53
    Software benchmark suite
    Call of Duty 4
    Ghost Recon Advanced Warfighter 2
    Battlefield 2
    A word about "FPS"
    What are we looking for in gaming, performance-wise? First off, Guru3D obviously thinks that all games should be played at the best image quality (IQ) possible. There's a dilemma though: IQ often interferes with the performance of a graphics card. We measure this in FPS, the number of frames a graphics card can render per second; the higher it is, the smoother your game will play.
    A game's frames per second (FPS) figure is the measured average of a series of tests. The test is often a timedemo, a recorded part of the game which is a 1:1 representation of the actual game and its gameplay experience. After forcing the same image quality settings, this timedemo is used for all graphics cards so that the measurement is as objective as possible.
    Frames per second Gameplay
    <30 FPS very limited gameplay
    30-40 FPS average yet very playable
    40-60 FPS good gameplay
    >60 FPS best possible gameplay

    • So if a graphics card barely manages less than 30 FPS, the game is not very playable; we want to avoid that at all cost.
    • From 30 FPS up to roughly 40 FPS you'll be able to play the game with perhaps a tiny stutter in certain graphically intensive parts; overall a very enjoyable experience. Match the best possible resolution to this result and you'll get the best rendering quality versus resolution, as you want both to be as high as possible.
    • When a graphics card does 60 FPS on average or higher, you can rest assured that the game will play extremely smoothly at every point, with every possible in-game IQ setting turned on.
    • Over 100 FPS? You have either a MONSTER of a graphics card or a very old game.
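    The averaging and classification described above can be sketched in a few lines of Python. This is a hypothetical helper, not part of any benchmark tool; it simply derives the average FPS from per-frame render times and maps it onto the playability scale in the table:

    ```python
    def average_fps(frame_times_ms):
        """Average FPS over a timedemo, computed from per-frame render times in ms."""
        total_seconds = sum(frame_times_ms) / 1000.0
        return len(frame_times_ms) / total_seconds

    def playability(fps):
        """Map an average FPS figure onto the playability scale used above."""
        if fps < 30:
            return "very limited gameplay"
        if fps < 40:
            return "average yet very playable"
        if fps <= 60:
            return "good gameplay"
        return "best possible gameplay"
    ```

    For example, 100 frames that each took 20 ms to render average out to 50 FPS, which lands in the "good gameplay" bracket.
    
    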

    Full review: ,1.html

    XFX Triple SLI - 8800 Ultra's in 3 Way SLI


    SLI has been around for a few years, and Nvidia have pretty much been the driving force behind multi-GPU rendering since SLI's inception. Last year they tried to ramp it up a notch with Quad SLI, but it failed to impress enthusiasts, review websites and pretty much the whole industry.

    Now they've brought in another crazy solution: Triple SLI, or 3 Way SLI.

    What you need for 3-way SLI, according to Nvidia, is:

    3-way NVIDIA SLI-Ready GPUs:
    NVIDIA GeForce 8800 Ultra
    NVIDIA GeForce 8800 GTX

    3-way NVIDIA SLI-Ready MCPs:
    NVIDIA nForce 780i SLI for INTEL
    NVIDIA nForce 680i SLI for INTEL

    3-way NVIDIA SLI-Ready Power Supplies:
    Please visit the SLI Zone Certified SLI-ready Power Supply website and choose a power supply model from the section For “Three GeForce 8800 Ultra or GeForce 8800 GTX.”

    3-way NVIDIA SLI Cases:
    Please visit the SLI Zone Certified SLI-ready Cases website and choose a case from the section For “Three GeForce 8800 Ultra or GeForce 8800 GTX.”

    3-way NVIDIA SLI Connector:
    3-way SLI requires a unique SLI connector in order to operate properly. These connectors may not have been included with your previous purchase of SLI-ready components or PCs. PCs specifically sold as 3-way SLI PCs will have this connector included and preinstalled.

    SLI - what is it?

    SLI stands for Scalable Link Interface and is the marketing name for a way of using two or more graphics processors in parallel. Using both the PCI Express bus and the proprietary SLI connector made by Nvidia, the graphics cards communicate through dedicated scaling logic in each GPU. Load-balancing, pixel and display data are passed between the GPUs over the PCI-E and SLI connectors; basically, the cards share the workload.

    SLI isn't perfect, but it is improving as Nvidia revise and re-work their drivers and algorithms to get the best out of it. Most supported titles see a 1.5-1.9x increase in performance, although unsupported games see no gain at all, and some even lose performance.

    There are three different rendering or balancing modes for Tri SLI; below is an excerpt from Wikipedia with the details:

    * Split Frame Rendering (SFR), the first rendering method. This analyzes the rendered image in order to split the workload 50/50 between the two GPUs. To do this, the frame is split horizontally in varying ratios depending on geometry. For example, in a scene where the top half of the frame is mostly empty sky, the dividing line will lower, balancing geometry workload between the two GPUs. This method does not scale geometry or work as well as AFR, however.

    * Alternate Frame Rendering (AFR), the second rendering method. Here, each GPU renders entire frames in sequence – one GPU processes even frames, and the second processes odd frames, one after the other. When the slave card finishes work on a frame (or part of a frame) the results are sent via the SLI bridge to the master card, which then outputs the completed frames. Ideally, this would result in the rendering time being cut in half, and thus performance from the video cards would double. In their advertising, NVIDIA claims up to 1.9x the performance of one card with the dual-card setup.

    * SLI Antialiasing. This is a standalone rendering mode that offers up to double the antialiasing performance by splitting the antialiasing workload between the two graphics cards, offering superior image quality. One GPU performs an antialiasing pattern which is slightly offset to the usual pattern (for example, slightly up and to the right), and the second GPU uses a pattern offset by an equal amount in the opposite direction (down and to the left). Compositing both the results gives higher image quality than is normally possible. This mode is not intended for higher frame rates, and can actually lower performance, but is instead intended for games which are not GPU-bound, offering a clearer image in place of better performance. When enabled, SLI Antialiasing offers advanced antialiasing options: SLI 8X, SLI 16X, and SLI 32x (8800-series only). A Quad SLI system is capable of up to SLI 64X antialiasing.

    Note that Tri SLI generally tends to use 3-GPU AFR rendering, and this certainly gives the biggest performance benefit.
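    The AFR scheme described above is easy to picture as a round-robin schedule. Here's a minimal sketch (hypothetical function names, not driver code) showing how frames would be distributed across N GPUs, which is why throughput can in theory scale up to N times:

    ```python
    def afr_schedule(num_frames, num_gpus):
        """Alternate Frame Rendering: frame i is rendered by GPU (i mod num_gpus).

        Each GPU handles 1/N of the frames, so in the ideal case rendering
        throughput scales linearly with the number of GPUs.
        """
        return {frame: frame % num_gpus for frame in range(num_frames)}
    ```

    With three GPUs, frames 0, 3, 6, ... go to GPU 0, frames 1, 4, 7, ... to GPU 1, and so on; the completed frames are then collected over the SLI bridge for display in order.
    
    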

    Triple SLI

    Triple SLI works on the same principle as SLI, with three cards sharing the load. Unfortunately for the masses who went out and bought the excellent 8800 GT or 8800 GTS, Triple SLI supports only the 8800 GTX and the 8800 Ultra, meaning those without these expensive top-end GPUs will not see the benefit of Tri-SLI.

    What do we have here then?

    XFX have kindly sent us three of their top-end 8800 Ultras for this review, along with the 780i SLI motherboard we reviewed previously.

    Oh, and not forgetting the larger of the two connectors in this picture:

    We'll take a brief look at whats inside those rather large boxes, then get into the benchmarks!

    Pictures - Tri SLI

    Really, we know what an 8800 Ultra looks like and we've seen what XFX's 780i looks like, but hell ... let's get some gratuitous shots of Tri-SLI:




    OK, enough GPU-porn, let's get on with some serious stuff!

    Test Setup

    For this high-end Tri-SLI review we are going to use the normal graphics card hardware, but with a beefier PSU, DDR2 and a 780i motherboard. It's worth noting that a monster 1100w Tagan was required for the Tri-SLI system with an overclocked quad.

    Intel Q6600 @ 3.6GHz
    Noctua NH-U12P CPU cooler
    2GB Cellshock PC6400 @ 800MHz 4-4-4-12
    XFX 780i SLI Motherboard
    3 x XFX 8800 Ultra's in Tri SLI
    Tagan 1100w Turbojet PSU
    Hitachi 7K160 HDD
    Lite-on SATA DVD-RW

    Games and benchmarks used

    I tried to get as many games benchmarked in the time I had and hopefully I've covered a decent amount for all of you out there.

    Synthetic Benchmarks

    Please note all synthetic benchmarks were run at stock settings, just as the free versions would be, as well as at 1920 x 1200 with 4 x AA added. All benchmarks are repeated three times for consistency.

    FutureMark 3DMark03
    FutureMark 3DMark05
    FutureMark 3DMark06

    Gaming Benchmarks

    All gaming benchmarks are run through a demanding stage of the game, with no savepoints to affect FPS. These are manual run-throughs of approximately 3 minutes, and all gaming benchmarks are run three times through the same points for consistency. We hope this gives an accurate and interesting depiction of "real-life" gaming situations. Note the resolutions and AA each game was run at.

    All gaming tests were performed in Windows Vista Ultimate, under DX10 if available.

    Call of Duty 4 - 1920 x 1200, 4 x AA set in-game
    Oblivion - 1920 x 1200, 4 x AA set in drivers and HDR set on in-game. Settings on "Ultra"
    F.E.A.R. - 1920 x 1200, 4 x AA set in game, soft shadows enabled
    Bioshock - 1920 x 1200, all settings to maximum in-game
    Unreal Tournament 3 - all settings set to maximum in-game
    Company of Heroes - DirectX10 patch. 1920 x 1200 with in game settings as here.
    Crysis - 1680 x 1050, all in-game settings set to "High", and "Very High" for Tri SLI

    Again, all game run-throughs are repeated three times for consistency and accuracy.

    I've used the latest official Forceware drivers and enabled Tri SLI, leaving the drivers on the preset load-balancing configurations (which appeared to be triple-SLI AFR rendering).

    Full review:

    NVIDIA GeForce 8800 Ultra & SLI

    NVIDIA is bringing the name “Ultra” back with today’s launch of the GeForce 8800 Ultra. This new video card pushes the GeForce 8’s performance to new heights, but at $829 is it worth it? We cover single card and SLI performance in Oblivion and S.T.A.L.K.E.R.


    On November 8th, 2006 NVIDIA launched a new GPU generation known as the GeForce 8 series. Specifically, the chip was known internally to NVIDIA as G80 but to you and me we know it as the GeForce 8800 GTX. When the GeForce 8800 GTX was introduced it had a suggested retail price of $599. You can currently find GeForce 8800 GTX cards online from $529 all the way up to $939 for an overclocked and water-cooled BFGTech GeForce 8800 GTX.

    As a quick refresh the GeForce 8800 GTX utilizes 128 stream processors and 768 MB of GDDR3 memory on a 384-bit memory bus. The core, which is the ROPs and everything else, runs at 575 MHz on the 8800 GTX; the stream processors run at 1.35 GHz. The memory runs at 900 MHz (1.8 GHz) which provides 86.4 GB/sec of memory bandwidth on the 8800 GTX.

    The only performance difference with the new GeForce 8800 Ultra is higher core, stream processor and memory clock speeds, explained below.

    GeForce 8800 Ultra

    The “GeForce Ultra” branding is back, and we are happy to see it once again. We have fond memories of highly clocked GeForce-based Ultra cards over the years; the Ultra name has always been synonymous with flat-out fast performance and higher clock speeds. And that is exactly what the GeForce 8800 Ultra is: a faster GeForce 8800 GTX. The core architecture is exactly the same, 128 streaming processors and 768 MB of GDDR3 memory on a 384-bit bus.

    While the architecture is the same, there are a couple of things NVIDIA has tweaked with this new GPU. The chip itself is a newer, refined revision compared to the GeForce 8800 GTX GPU, though still built on a 90nm process. NVIDIA has done some internal tweaking, timing tuning and other minor things, to coax out a little more performance while keeping power utilization in check. In fact, according to the specifications, the maximum load power draw has been reduced by a few watts compared to the 8800 GTX, even at the faster clock speeds.

    The quoted power consumption for a GeForce 8800 GTX is 177 Watts. For the new GeForce 8800 Ultra, running at higher clock speeds, the quoted power consumption is 175 Watts. We will do our own testing to see how the Ultra compares to a GTX later in this evaluation. I would suggest ignoring the R600 power comparisons, as they are verified as not being correct, or at least not all the information about them is disclosed on this slide. We will have our own R600 power comparison numbers here in a couple of weeks.

    Here is the GeForce 8800 Ultra in all its glory. It runs with a core frequency of 612 MHz (versus 575 MHz on the GTX). The stream processors are clocked at 1.5 GHz (versus 1.35 GHz on the 8800 GTX). Finally, the memory is clocked at 1080 MHz (2.16 GHz DDR), compared to 900 MHz (1.8 GHz DDR) on the GTX.

    The core frequency isn't much higher, but the shader clock is a healthy 150 MHz faster, which should help in more shader-intensive games. Most impressive is the memory clock increase: at 2.16 GHz the available memory bandwidth is now 103.6 GB/sec versus 86.4 GB/sec on the 8800 GTX. This kind of memory bandwidth increase can help with antialiasing performance or with games that use a lot of alpha textures, like the grass in Oblivion.
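    The bandwidth figures quoted here follow directly from the memory clock and the bus width: effective transfer rate (clock times two for DDR) multiplied by the bus width in bytes. A quick sketch of the arithmetic (the function is ours, for illustration only):

    ```python
    def memory_bandwidth_gbs(mem_clock_mhz, bus_width_bits, ddr_multiplier=2):
        """Peak memory bandwidth in GB/s.

        DDR transfers data twice per clock, so effective rate is
        clock * 2 mega-transfers/s; each transfer moves bus_width/8 bytes.
        """
        effective_mts = mem_clock_mhz * ddr_multiplier      # mega-transfers per second
        return effective_mts * (bus_width_bits / 8) / 1000.0  # -> GB/s

    # GTX:   900 MHz DDR on a 384-bit bus -> 86.4 GB/s
    # Ultra: 1080 MHz DDR on a 384-bit bus -> 103.68 GB/s (quoted as 103.6)
    ```

    Plugging in the GTX's 900 MHz gives 86.4 GB/s and the Ultra's 1080 MHz gives 103.68 GB/s, matching the numbers in the text.
    
    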

    The heatsink and shroud also received an update with this video card. The shroud now covers the entire length of the video card, though the actual heatsink does not. The heatsink itself is very close in design to the 8800 GTX with some minor tweaks to allow the fan location to be moved. Overall it is a bit more elegant of a design and does a better job of protecting components on the card and also makes it more comfortable to handle.

    So how much is this going to cost? You would think not much more than a GeForce 8800 GTX, right? After all, it is only a clock speed bump. Well, prepare yourself, this is a shocker: the GeForce 8800 Ultra has an MSRP of $829 USD. Yes, that is correct, 829 big dollars. To us it seems tremendously overpriced, and NVIDIA is already warning of shortages, but more on that later. The only hope we see for this video card is if that price falls close to GTX levels at $600. Two GeForce 8800 Ultras for SLI are going to cost you a whopping $1,658 at MSRP, which is simply an insane amount of money for two video cards.

    Also, while the video card is being announced today, availability will be May 15th. So yes, we are back to NVIDIA paper-launch status with the 8800 Ultra. Cards will most likely show up before then, but they may be even more pricey for a while until we see more options and brands on the market. We will discuss this a bit later.

    In these photographs you can see the new black heatsink shroud which covers the entire length of the video card. The fan has been moved to the top and actually bulges out beyond the edge of the video card. This does provide some benefit as it is able to now grab air from the back of the card as well as the front. This could come in handy if you are running a three video card system in a 680i motherboard or running a motherboard where the second or third video card butts right up next to the primary video card blocking the fan. With the fan off center like this it is able to grab more air for cooling in such an enclosed system. We think the new shroud design is very pleasing aesthetically and adds a bit of function as well.

    Lengthwise this video card is exactly the same length as the GeForce 8800 GTX at 10.5 inches. Height and width are also the same (With the exception of the top bulge for the fan extension, which should cause little issue if any.); the only difference now is that the height is now carried over the entire length of the video card. Running in SLI, two of these video cards look quite menacing in a system.

    System Test Setup

    For evaluation we are using an EVGA NVIDIA nForce 680i SLI motherboard, an Intel Core 2 Duo X6800 2.93 GHz processor, and 2GB of Corsair XMS2 Dominator CM2X1024-8888C4D at 4-4-4-12 1T. We are using the latest chipset drivers available and the latest BIOS at time of evaluation.

    Video Card Comparison Setup

    There is no point in comparing this video card to any of the current ATI Radeon series. The GeForce 8800 GTX simply owns the battlefield in gaming performance compared to ATI video cards. Instead, we are going to take a non-OC BFGTech GeForce 8800 GTX (running at stock NVIDIA clock speeds) and compare it to the 8800 Ultra. We will see how much faster the Ultra is over the GTX and what benefits it allows in games. We will also test GTX SLI versus Ultra SLI for the same performance comparisons.

    We are only going to use two games in this evaluation because quite simply they are the most popular shader intensive games and nearly every other game plays at the highest settings possible on the 8800 GTX. So this evaluation mixes it up a bit from our norm. We will test on a 19” CRT at 1600x1200 doing apples-to-apples, as well as a Dell 24” at 1920x1200 and for SLI we will be using a Dell 30” at 2560x1600. If you are using a monitor with a resolution smaller than those listed, you are likely wasting your money purchasing a video card of this caliber. Make a new display your next upgrade.

    Evaluation Setup

    Please be aware we test our video cards a bit differently from what is the norm. We concentrate on examining the real-world gameplay that each video card provides. The Highest Playable section shows the best Image Quality delivered at a playable frame rate. We use a high performance system, with a very fast CPU in order to remove CPU bottlenecking.

    In our graphs we use some abbreviations to indicate the method of AA or AF being used.

    TR MSAA = Transparency Multisampling Antialiasing – Indicates the use of NVIDIA’s Transparency Multisampling quality setting on GeForce 7 and 8 series video cards.

    TR SSAA = Transparency Supersampling Antialiasing – Indicates the use of NVIDIA’s Transparency Supersampling quality setting on GeForce 7 and 8 series video cards.

    Full review:

    Here are a few reviews of the nVidia 8800 Ultra in multi-GPU configurations.

  5. #215
    Moderator Winjer
    Feb 2013
    Santo Tirso
    4 (100%)
    Here's a nice video summary of the history of the GeForce line.

    Ryzen R5 3600X / Noctua NH-D15 / ASUS Rog Strix X370 / Cooler Master H500 Mesh / 16Gb DDR4 @ 3800mhz CL16 / Gigabyte RTX 2070 Super / Seasonic Focus GX 750W / Sabrent Q Rocket 2 TB / Crucial MX300 500Gb + Samsung 250Evo 500Gb / Edifier R1700BT

  6. #216
    The Administrator LPC
    Mar 2013
    31 (100%)
    Two excellent videos showing the evolution of NVIDIA's graphics cards, and how "back in the day" each generation improved on the previous one by 100% or more, at good prices too...


    My Specs: .....
    CPU: AMD Ryzen 9 3900X - 4.3Ghz @ 1.30v - Board: MSI AMD B450M Mortar MAX - RAM: 16 GB DDR4 G.Skill RipJaws V 3200Mhz CL14 - GPU: MSI AMD Radeon RX 5700XT Gaming X 8GB - Monitor: BenQ EW3270U 4k HDR
    Case: Tt Core V21 - Cooling: 5x Arctic Cooling P14 PWM PST - CPU Cooler: CM Hyper 212 Black Edition + 2x Gentle Typhoon AP-15 - Storage: ADATA XPG SX8200 Pro NVMe 1TB + WD SN520 NVMe 512GB - PSU: Corsair HX1200i

  7. #217
    Tech Beginner Bushnell
    Nov 2018
    Portland; Oregon
    Thank you very much for all this information!
    It helped me a lot!

  8. #218
    Tech Beginner
    Nov 2018
    Aurora; Colorado
    AKA HD 2900 series.

  9. #219
    Moderator Winjer
    Feb 2013
    Santo Tirso
    4 (100%)
    It's already 20 years old....

  10. #220
    Tech Member
    Jul 2015
    1 (100%)
    And what a revolution it was at the time
    I should have kept one for a museum
    Last edited by APLinhares: 09-04-19 at 13:49

  11. #221
    Moderator Winjer
    Feb 2013
    Santo Tirso
    4 (100%)
    How about revisiting one of the most epic battles in the GPU world: nVidia FX 5800 Ultra VS ATI Radeon 9700 Pro

    I had a 9700, one of the best graphics cards I've ever owned.

  12. #222
    Master Business & GPU Man Enzo
    Jan 2015
    Euro 2016 Champion Country
    41 (100%)
    Quote Originally Posted by Jorge-Vieira View Post
    NVIDIA’s GF100: Architected for Gaming

    Full article:

    An article explaining nvidia's Fermi architecture in detail.
    A bonus from the past... discovered and reviewed recently
    Ideias sem Nexo e Provas do Tráfico de Hardware
    "who personifies destiny, balance and divine vengeance." Déjà vu. What a scene!

