Page 13 of 16 - Results 181 to 195 of 226
1. #181 - Jorge-Vieira (Tech Ubër-Dominus)
I didn't remember it any more and, from what I read in the search I did, it only referred to the OEMs (even including the review above), hence the reference in the text.

2. #182 - LPC (Administrator)
Quote Originally Posted by Jorge-Vieira View Post
I didn't remember it any more and, from what I read in the search I did, it only referred to the OEMs (even including the review above), hence the reference in the text.
Hi!
Check this out:

    - http://www.portugal-tech.pt/showthread.php?t=4035

Regards,

    LPC
    My Specs: .....
CPU: AMD Ryzen 7 5800X3D :-: Board: MSI B550M BAZOOKA :-: RAM: 64 GB DDR4 Kingston Fury Renegade 3600 MHz CL16 :-: Storage: Kingston NV2 NVMe 2 TB + Kingston NV2 NVMe 1 TB
CPU Cooling Solution: ThermalRight Frost Commander 140 Black + ThermalRight TL-C12B-S 12CM PWM + ThermalRight TL-C14C-S 14CM PWM :-: PSU: Corsair HX 1200 W
    Case: NZXT H6 FLOW :-: Internal Cooling: 4x ThermalRight TL-C12B-S 12CM PWM + 4x ThermalRight TL-C14C-S 14CM PWM
    GPU: ASUS TUF
    AMD RADEON RX 7900 XTX - 24 GB :-: Monitor: BenQ EW3270U 4K HDR


3. #183 - Jorge-Vieira (Tech Ubër-Dominus)
The history is still missing something important from ATI's reign before we get to that bomb.

As for the GTO, honestly, I really don't remember it...

4. #184 - LPC (Administrator)
Quote Originally Posted by Jorge-Vieira View Post
The history is still missing something important from ATI's reign before we get to that bomb.

As for the GTO, honestly, I really don't remember it...
Hi!
The one you're about to cover, I had it too; in fact, that's where I got my beloved Black Box (with Portal, Episode 2 and TF2)...
And overall I actually liked the HD 2900 XT(X). I played a ton of Battlefield 2142 on it!

Regards,

    LPC


5. #185 - Jorge-Vieira (Tech Ubër-Dominus)
No, it's much earlier than that; it comes just a few short weeks after nVidia's 7900 GTO.

The HD 2900 comes much later than the 8800 GTX.
Last edited by Jorge-Vieira: 12-02-16 at 18:42

6. #186 - Jorge-Vieira (Tech Ubër-Dominus)
    ATI Radeon X1950 XT

    Connect3D Radeon X1950 XT 256MB:

    Core Clock: 621MHz
    Memory Clock: 1800MHz
    Warranty: Two years (parts and labour)
    Price (as reviewed): £140.99 (inc VAT)

    Ever since the launch of Nvidia’s GeForce 8800-series graphics cards, AMD’s ATI-based products have been out of favour at the high end. With the launch of GeForce 8800 GTS 320MB, Nvidia started to threaten the mileage that AMD had left in the now ageing Radeon X1000-series product family.

If you add to this the fact that AMD recently announced that its eagerly anticipated next-generation R600 graphics processor was to be delayed until the second quarter of this year, AMD had very little left to do but lower its prices so that its partners could remain relatively competitive. Just a couple of weeks after the launch of 8800 GTS 320MB and its subsequent demolition of Radeon X1950 XT 256MB, AMD has cut the price quite considerably.

    With Connect3D’s X1950 XT 256MB now retailing for a hair above £140 (inc VAT), it’s been pushed down into a completely different price bracket. Today, we’re going to see if this quite drastic price drop makes the Radeon X1950 XT 256MB an attractive proposition for those that aren’t interested in DirectX 10 in the short term.


Connect3D sells its cards as ‘barebones’ products, meaning that there are no games included. The bundle is very familiar, with only the bare essentials included. There is one essential item missing from this card's bundle: a six-pin PCI-Express power connector. Despite this omission (which has started to become a trend these days), everything else that you’d expect to see is there.

    Because the card has VIVO capabilities, there is an S-Video In/Out and Composite In/Out combination cable, a Component cable, as well as both S-Video and Composite extension cables. To round the selection of cables and connectors off, Connect3D has included a couple of DVI-to-VGA converters in the box.

    This is quite different to the tactics that many board partners seem to employ – i.e. increasing the price with sometimes needless games and additional extras that will never be used. Many enthusiasts aren't interested in paying extra for bundled games and are just looking for the best price on the hardware that they're interested in. This is where Connect3D makes a lot of its sales, because it is able to beat the competition on price more often than not.



The card looks just like a typical Radeon X1800/X1900 and uses the same dual-slot cooler as the previous high-end Radeon X1800/X1900s. We’ve complained about this cooler every time we’ve seen it and the one on Connect3D’s Radeon X1950 XT 256MB is no different – it can get incredibly loud under heavy load, so it’s not one for the faint-hearted amongst you. It’s a shame that Connect3D didn’t move away from the noisy reference cooler, and we would have preferred to see something similar to the cooler on the Radeon X1950 XTX.

    Connect3D’s card is clocked right in line with the reference Radeon X1950 XT 256MB specifications, meaning a 625MHz core clock, and 256MB of GDDR3 memory running at 900MHz (1800MHz effective). The core clock is the same as that of ATI’s Radeon X1900 XT (which was launched back in January 2006) and, as memory speeds have improved over the last twelve months, the company has been able to deploy faster memory on cards that don’t cost an arm and a leg.
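As a quick sanity check on those memory numbers, here is a small Python sketch (my own arithmetic, not bit-tech's) that turns an effective data rate and bus width into peak bandwidth; the 1800MHz/256-bit figures are from the specs above and the 1450MHz figure for the old X1900 XT is quoted later in this thread:

```python
# Peak memory bandwidth = effective transfer rate x bus width.
# Card figures taken from the reviews quoted in this thread.

def mem_bandwidth_gbs(effective_mhz: float, bus_bits: int) -> float:
    """Peak bandwidth in GB/s for an effective clock (MHz) and bus width (bits)."""
    return effective_mhz * 1e6 * (bus_bits / 8) / 1e9

print(f"X1950 XT 256MB: {mem_bandwidth_gbs(1800, 256):.1f} GB/s")  # ~57.6 GB/s
print(f"X1900 XT:       {mem_bandwidth_gbs(1450, 256):.1f} GB/s")  # ~46.4 GB/s
```

So the twelve months of memory-speed progress the reviewer mentions buys roughly an extra 11 GB/s of peak bandwidth on the same 256-bit bus.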

    Because the X1950 XT 256MB is based on the same R580 graphics processing unit that has been at the forefront of ATI’s (and now AMD’s) product line-up for more than twelve months, you should know exactly what that means if you’ve kept your head above the sand over that period of time. For those that haven't managed that, we forgive you - here’s a quick refresher: there are 48 pixel shader processors, eight vertex shaders, sixteen texture units and sixteen ROPs (pixel output engines) that connect to a 512-bit internal ring-bus memory controller with an external 256-bit interface connecting the chip to the on-board graphics memory.

    Warranty:

    Connect3D offers a two-year warranty covering parts and labour on all of its cards. During the first year in the product’s life, your point of contact should be the retailer. Of course, if you’re having problems getting hold of the retailer, or the retailer goes out of business, Connect3D will pick up the slack and help you. During the second year of the warranty, you should talk directly to Connect3D if you’ve got problems with the product. While it’s nothing special, most of ATI’s partners offer the same warranty period on competing products.

    ATI System Setup:

    • Connect3D Radeon X1950 XT 256MB – operating at its default clock speeds of 621/1800MHz using Catalyst 7.1 WHQL;
    • ATI Radeon X1950 Pro 256MB – operating at its default clock speeds of 580/1400MHz using Catalyst 7.1 WHQL;
    • ATI Radeon X1950 XTX 512MB – operating at its default clock speeds of 650/2000MHz using Catalyst 7.1 WHQL.

    Intel Core 2 Duo E6600 (operating at 2.40GHz - 9x266MHz); Asus P5W DH Deluxe motherboard (975X Express); 2 x 1GB Corsair XMS2-8500C5 (operating in dual channel at DDR2-800 with 4-4-4-12 timings); Seagate Barracuda 7200.9 200GB SATA hard drive; OCZ GameXtreme 700W PSU; Windows XP Professional Service Pack 2; DirectX 9.0c; Intel inf version 7.22 WHQL.

    NVIDIA System Setup

    • BFGTech GeForce 8800 GTS OC 320MB – operating at its default clock speeds of 550/1300/1600MHz using Forceware 97.92 WHQL;
    • BFGTech GeForce 8800 GTS OC 640MB – operating at its default clock speeds of 550/1300/1600MHz using Forceware 97.92 WHQL;
    • Nvidia GeForce 7900 GTX 512MB – operating at its default clock speeds of 650/1600MHz using Forceware 93.71 WHQL;
    • Nvidia GeForce 7900 GT 256MB – operating at its default clock speeds of 450/1320MHz using Forceware 93.71 WHQL.

    Intel Core 2 Duo E6600 (operating at 2.40GHz - 9x266MHz); Asus Striker Extreme motherboard (nForce 680i SLI); 2 x 1GB Corsair XMS2-8500C5 (operating in dual channel at DDR2-800 with 4-4-4-12-1T timings); Seagate Barracuda 7200.9 200GB SATA hard drive; OCZ GameXtreme 700W PSU; Windows XP Professional Service Pack 2; DirectX 9.0c; NVIDIA nForce 680i SLI standalone drivers version 9.53 WHQL.

________________________________________________________________________

    Company of Heroes:

    Publisher: THQ

    We used the full retail version of Company of Heroes patched to version 1.3.0. It's touted as one of the best real-time strategy games of all time. Not only is the gameplay incredibly good and immersive, the graphics engine is simply stunning, making extensive use of post processing and advanced lighting techniques in the fully destructible environment. It's also scheduled to get a DirectX 10 update soon.

The graphics already look superb, but with the additional performance benefits and image quality enhancements that DirectX 10 will bring, we're expecting it to look even better than it does now. Relic tells us that it plans to make extensive use of the geometry shader, with the addition of things like point shadows and fuzzy grass support. By fuzzy grass, Relic means grass that will have micro displacements that break up the detail in the base terrain texturing.

Relic also plans to leverage some of the other benefits of DirectX 10 to improve performance with more graphical features turned on. The developer plans to add more detail to the world with more small object details. Of course, all of these will react with the world and will be fully destructible like every other element in the Company of Heroes world. For our testing, we used the in-built demo to gauge performance - in this rolling demo, there is heavy use of water, lighting, explosions and masses of vegetation, and it represents fairly typical performance throughout the game.

    We had some problems getting ATI's cards to run with anti-aliasing enabled, so we have limited comparisons between the cards to 0xAA 16xAF at 1280x1024, 1600x1200 and 1920x1200. All in-game details were set to their maximum values.







    At 1280x1024, Connect3D's Radeon X1950 XT 256MB is a faster graphics card than Nvidia's GeForce 7900 GTX 512MB, but it's not faster than the BFGTech GeForce 8800 GTS 320MB. Increasing the resolution to 1600x1200, and then to 1920x1200 sees Connect3D's card fall behind Nvidia's GeForce 7900 GTX. Having said that though, you will still get playable frame rates with the Connect3D X1950 XT 256MB at 1920x1200 if you don't mind a fairly low minimum frame rate.
Full review:
    http://www.bit-tech.net/hardware/gra...950_xt_256mb/1






    PowerColor Radeon X1950 XT 256MB

    Introduction

    In case you have not noticed, ATI's website has undergone some superficial changes in appearance to match its new owners - AMD. The red team has gone green, though its logos and brands seem to have remained the same. Its website URL has been changed to reflect the new order at the graphics chipmaker. For AMD, it's a rather familiar story as it takes over a graphics business facing stiff competition in the market from NVIDIA, analogous to what it encounters now with Intel in the CPU arena. Going by the numbers, the new combined entity is currently at a performance disadvantage for both processors and graphics, but the synergy potential was highlighted recently with AMD's launch of a dedicated stream processor for high performance computing.
    Meanwhile, the latest in ATI's recent rehashes of its graphics lineup has slipped quietly into retail channels with hardly any fuss. The Radeon X1950 XT looks to be the final update for the retiring Radeon X1900 XT though it has more in common with the older chip than other existing members in the Radeon X1950 series. For one, the R580 core is carried over from the Radeon X1900 XT, unlike the modified R580+ in the Radeon X1950 XTX or the 80nm RV570 in the Radeon X1950 PRO.
What ATI did essentially was to slap faster memory chips on the older Radeon X1900 XT and up its clock from 1450MHz DDR to 1800MHz DDR. Therefore, the 'new' Radeon X1950 XT has a core and memory clock of 625MHz and 1800MHz DDR, along with 256MB of GDDR3 memory. No doubt, we may find custom 512MB versions from vendors but 256MB seems like the more cost efficient and common variant.
    So is the Radeon X1950 XT 256MB worth your time? We recently got one from PowerColor and put it through our usual tests. Before revealing the details, here's a glance of the PowerColor retail package:

    The PowerColor Radeon X1950 XT 256MB.



    The PowerColor Radeon X1950 XT 256MB

Another mainstay of the older Radeon X1900 series, the two-slot cooler that became infamous for its noise, is retained for the PowerColor Radeon X1950 XT. Hence, it could be difficult to tell the new from the old, given the same cooler. At least there has been some improvement from that initial batch of Radeon X1900 XT cards, as the cooler on the PowerColor Radeon X1950 XT is slightly quieter. We could still hear the cooler of course, but it was not as annoying. However, if you are used to the excellent coolers on the Radeon X1950 XTX or the X1950 PRO, this is a step backwards.

    After the GeForce 8 series, the Radeon X1950 XT looks almost puny by contrast. It is actually the same size as the original Radeon X1900 XT and has the same cooler too.
    The main difference between the new Radeon X1950 XT and the Radeon X1900 XT lies in the memory. For the PowerColor, we found Hynix 1.0ns memory modules onboard and as we mentioned earlier, these are clocked at 1800MHz DDR, which is about 350MHz DDR higher than the Radeon X1900 XT. Unlike the Radeon X1950 XTX however, these are still GDDR3 memory modules and not the more energy efficient GDDR4 so while you could theoretically overclock the memory on the Radeon X1950 XT to approach that of the X1950 XTX, heat could be a limiting factor.

    The memory modules are rated at a very fast 1.0ns, more than sufficient for its clock speed of 1800MHz DDR.
    For those who have not given up on CrossFire despite its many changes and setbacks, the PowerColor Radeon X1950 XT supports software CrossFire, i.e. there's no hardware compositing engine onboard. Since ATI did not redesign this card extensively, the newfangled Internal CrossFire (through a SLI look-alike bridge and found only on the Radeon X1950 PRO and X1650 XT) is not possible while the original CrossFire method requiring the CrossFire dongle is also not implemented for the Radeon X1950 XT. Instead, software CrossFire requires the latest Catalyst 6.11 drivers, bringing us to yet another common ATI grouse - the drivers.

    It's a familiar sight, the passive heatsink over the power circuitry.
    Namely, we tried to install Catalyst 6.11 (8.31.5 drivers) on the PowerColor Radeon X1950 XT. However, the PowerColor was not recognized and we had to resort to the included drivers from the vendor, which turned out to be Catalyst 6.10 (8.30.2 drivers). ATI should really try to make sure that its latest drivers support their new products, especially since we feel that adding support for the Radeon X1950 XT is a trivial matter and particularly for a redesigned update of a card like this. In short, this looks like the driver fiasco with the Radeon X1950 PRO again (where one had to wait a couple of driver revisions to get official Catalyst support) and we hope that it can be resolved soon.

    There appears to be no HDCP support on the PowerColor, though this is likely to be vendor dependent.
    Like most high-end ATI graphics cards, the Radeon X1950 XT comes with VIVO thanks to a Rage Theater ASIC but if you are concerned about future-proofing, HDCP support appears to be missing. You'll find the standard dual-link DVI outputs of course and PowerColor has included a generous selection of cables and adaptors suitable for almost every purpose. There was only one software CD however and no games in the bundle, though that's the norm for PowerColor. At least the included software is a comprehensive suite of DVD utilities from CyberLink. Here are the items we found in the PowerColor package:

    • 1 x DVI-to-VGA adaptor
    • 6-pin Molex power connector
    • 9-pin mini-DIN to Component dongle
    • 9-pin mini-DIN to Composite/S-Video dongle
    • S-Video extension cable
    • Composite extension cable
    • Quick Installation Guide
    • Driver CD
    • CyberLink DVD Solution
On 17 October 2006, ATI launched yet another graphics card for the mid-range segment of the market: the ATI X1950 XT.
This card used the same chip that equipped the top-end cards, the R580, and with it came guaranteed support for every DirectX 9.0c feature.
ATI had never paid much attention to this market segment, one almost always dominated by nVidia, and with this card ATI tried to turn things a little in its favour; to some extent it succeeded, since fitting this card with the top-end GPU and placing it in the mid-range was a safe bet on performance.
Another aspect in which every ATI card up to this point remained superior to nVidia's was the image quality that reached the monitor; from the beginning right up to now ATI had always been ahead in that respect and, for anyone who owned cards from both manufacturers, those differences were plain to see and clearly favoured ATI.

7. #187 - Jorge-Vieira (Tech Ubër-Dominus)
    ATI's Radeon X1950 Pro


    ATI's Radeon X1950 Pro graphics card


    At last, ATI nails a mid-range GPU—and CrossFire goes native


    SINCE LATE AUGUST, we've been first-person witnesses to the fall parade of video cards. Personally, I've reviewed so many video cards that I'm having trouble separating a GeForce from a Radeon, let alone a GT from a GS or a Pro from, er, an amateur.
    Fortunately, the new product ATI is unveiling today brings with it good news in several forms: the Radeon X1950 Pro is a strong new contender at the value-oriented $199 price point, and it's based on a brand-new mid-range GPU. ATI hasn't had the best of luck with mid-range graphics processors, but it looks like that's about to change. What's more, this new graphics processor at long last incorporates CrossFire capability directly into the GPU. Gone are the external dongles and proprietary CrossFire Edition graphics cards, replaced by simple, SLI-like bridge connectors between the cards.
    Sounds nice, doesn't it? But is the Radeon X1950 Pro's formula sufficient to challenge the excellent GeForce 7900 GS at $199? Let's take a look.
    ATI's new middle manager: the RV570
    I'm going to be a huge geek and start off by talking about the new GPU. I could kick things off with a discussion of the product itself, but that would totally appeal to a broader audience and spoil the challenge for our ad sales guy. Can't have that.
    The RV570 GPU
    The GPU that powers the Radeon X1950 Pro is code-named RV570. The RV570 is a true mid-range part, with quite a bit more power than the RV530 GPU that drove the ill-fated Radeon X1600 XT. The RV570 shares its technological DNA with the rest of ATI's R500-series GPUs, but it has its own mix of on-chip resources, including eight vertex shader units, 36 pixel shader processors, and 12 texture address units. Given the design of the X1000-series GPUs, that means it also has 12 Z-compare units, 12 render back-ends, and can manage a maximum of 384 concurrent threads. There will be a quiz on these numbers after class.
    TSMC manufactures this chip using its 80nm fab process—a "half-node" process that's an incremental improvement over the more common 90nm node. This slight process shrink ought to help the RV570 to run slightly cooler and perhaps hit higher clock speeds than its siblings in the Radeon X1000 family, all of which are 90nm chips.
    ATI estimates the RV570's transistor count at 330 million. The chip's dimensions are roughly 16.7 mm by 13.8 mm, for a total die area of about 230 mm2. Compare that to the Nvidia G71 GPU on the GeForce 7900 GS. Although methods of counting transistors vary, Nvidia says the G71 has 278 million transistors, and at 90nm, the G71 is about 196 mm2. Of course, the G71 is Nvidia's high-end GPU, hobbled slightly for use in the 7900 GS, while the RV570 is brand-new silicon aimed at the middle of the market.
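Those die figures are easy to cross-check; here is a tiny Python sketch (my own arithmetic on the numbers quoted above, keeping in mind the article's caveat that transistor-counting methods differ):

```python
# Die area and transistor density from the figures quoted above.
rv570_area = 16.7 * 13.8                 # ~230 mm^2, matching the article
rv570_density = 330 / rv570_area         # ~1.43 M transistors per mm^2 (80nm)
g71_density = 278 / 196                  # ~1.42 M transistors per mm^2 (90nm)

print(f"RV570: {rv570_area:.0f} mm^2, {rv570_density:.2f} M/mm^2")
print(f"G71:   196 mm^2, {g71_density:.2f} M/mm^2")
```

The densities land almost on top of each other, which fits the description of 80nm as an incremental half-node rather than a full shrink.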
    In spite of its middle-class pedigree, the RV570 still gets membership in the Radeon X1950 club. That's a departure from the past, when the GPU powering the video card determined its model number. ATI says performance, not silicon, now determines its naming scheme. That means it's possible we might see a future Radeon card in, say, the X1650 series based on this exact same GPU.
    The Radeon X1950 Pro
Whatever you call it, the Radeon X1950 Pro looks pretty suave, with the same red transparent cooler motif present on the rest of the X1950 lineup, only this time in a sleek, quiet, single-slot cooler.
    A pair of Radeon X1950 Pro cards
    Onboard this card you'll find an RV570 GPU clocked at 575MHz and 256MB of GDDR3 memory running at 690MHz (1380MHz effective, thanks to DDR black magic.) Sticking out of the PCI slot cover is a TV-out port and a pair of dual-link DVI ports. Those DVI ports have full support for HDCP in order to protect you from the movie industry.
    Err, wait. Other way 'round.
    Anyhow, as you'd expect, the X1950 Pro drops into a PCI Express x16 slot and has a six-pin auxiliary power connector to keep the GPU cranking.
    ATI says these puppies should be on the virtual shelves at online stores starting today for $199, and unlike with some past mid-range Radeon products, ATI will not be relying solely on its partners to get the boards out there. There will be "built by ATI" versions of the X1950 Pro available, as well.
    For those of you tracking GTs versus XTXs versus Pros at home, the Radeon X1950 Pro is indeed a replacement for the Radeon X1900 GT. The X1900 GT is based on an R580 GPU with portions deactivated, but its basic capabilities work out to almost exactly the same as the X1950 Pro: 36 pixel shaders at 575MHz, albeit with slightly slower memory. One of the X1900 GT's weaknesses was the lack of a matching CrossFire Edition card for it. The GT could pass CrossFire data via a PCIe link, but at the risk of reduced performance. The X1950 Pro solves that problem, and the X1900 GT will be phased out as the X1950 Pro takes its place.


    CrossFire internalizes, goes native
    Nearly a year since the debut of its CrossFire multi-GPU scheme, the red team has finally integrated CrossFire's image transfer and compositing capabilities directly into a graphics processor. Before now, the high-performance implementations of CrossFire required the use of a specialized CrossFire Edition video card that came with an FPGA chip onboard to handle image compositing. Getting data from the other Radeon card to the CrossFire Edition required the use of an external dongle cable that hung out of the back of the PC like a hemorrhoid. Thanks to Preparation RV570, that's no longer necessary.
    Any Radeon X1950 Pro can talk to a peer via the two "golden fingers" connectors on the top of the card.
    Each Radeon X1950 Pro has two CrossFire connector, uh, connectors
    Look familiar? These are similar to Nvidia's SLI connections, but they're neither physically nor electrically compatible with an SLI bridge. (The golden fingers thingy is wider, for one.)
    CrossFire connector (left) and SLI bridge (right)
    Here's a look at SLI and CrossFire bridges side by side. Right now, SLI bridges come with SLI-ready motherboards, but ATI has a different plan. They will provide one CrossFire bridge with each Radeon X1950 Pro, so users with older motherboards won't be stuck without one.
Oddly enough, native CrossFire requires two connections between cards in order to work properly. Each of these connections is a 12-bit link, and native CrossFire will scale up to 2560x1600 at 60Hz via a dual-link arrangement. (I'm fairly certain SLI scales to that same resolution using just a single bridge connector.) ATI says CrossFire could work with just one of these two connectors between cards, but that its graphics drivers currently enforce a dual-link config. In fact, when first setting up our Radeon X1950 Pro CrossFire test config, I attached just one connector, and the system refused to go into CrossFire mode.
    Gotta have two links to make it go
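To get a feel for why two 12-bit links cover that resolution, here is a rough back-of-the-envelope estimate in Python. It is my own arithmetic, under the assumption (not spelled out in the article) that the paired links move one 24-bit pixel per transfer, ignoring blanking overhead:

```python
# Rough compositing traffic for 2560x1600 @ 60Hz over the paired CrossFire links.
width, height, refresh = 2560, 1600, 60
pixels_per_second = width * height * refresh        # ~245.8 million pixels/s

link_width_bits = 2 * 12                             # two 12-bit channels ganged together
bytes_per_second = pixels_per_second * link_width_bits / 8

print(f"{pixels_per_second / 1e6:.1f} Mpixels/s")
print(f"~{bytes_per_second / 1e6:.0f} MB/s across the paired links")
print(f"implied rate: ~{pixels_per_second / 1e6:.0f} MT/s per link")
```

Even allowing for blanking, that stays well under the 1GHz link speed AnandTech mentions for NVIDIA's SLI bridge further down this thread, which squares with ATI's comment that the new path is no faster than the old TMDS method.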
    The presence of two CrossFire bridges naturally raises some suspicions in the curious mind. This is an all-new thing for ATI, and surely they've put some thought into it. Why complicate a dual-card setup unnecessarily by requiring the installation of two physical bridges? Could the excess capacity be there for future three- and four-card configurations? The two, staggered connectors per card could work well in a sort of daisy-chain of graphics cards, if need be.
    Hmmmm.
Can't wait to see what they do with this one. Potentially, any model of Radeon with native CrossFire could be put into a serially connected team of three or four cards by interleaving the connectors used. With DirectX 9's three buffer limit (which I explain in my really cool Quad SLI article), a triple-card CrossFire rig might be the sweet spot for performance. Mobos with three PCIe x16 slots are already beginning to show up in various quarters.

    Toda a analise:
    http://techreport.com/review/11032/a...-graphics-card








    Radeon X1950 Pro 512MB Review - Page 1



    The yummy Radeon X1950 Pro 512MB
    Series: Radeon X1950 Pro
    More info:
    PowerColor
    MSRP: 229 USD

    A new review today as we recently received a fresh product from PowerColor, which coincidentally is launching officially today. It's now October and although slightly delayed ATI gives you the *drum roll* Radeon X1950 Pro!
    ATI worked hard to finish up its new 80 nanometer products and despite a delay of all the 80 nanometer chips, it is finally ready to be released. The Radeon X1950 Pro as tested today is such a product, and is actually the RV570 based card that you probably heard about a couple of times already.
    Tul, the actual company behind the PowerColor line of products, asked us if we would be interested in reviewing some of their products a while ago, and so we did.

Roughly 2 weeks ago it was clear that ATI was about to release yet another graphics card onto the market that comes from the successful X1900 series of graphics cards. It's targeted at the high-end segment yet in the lower price range. The product is being positioned against NVIDIA's GeForce 7900 GS and 7950 GT.

So today we'll be looking at the rather lovely Radeon X1950 Pro from this company, a review of ATI's latest 12-pipe mid-range product which obviously was based off the R580 silicon, and is quite frankly a very credible graphics card as you'll learn in this article. The card features 36 Pixel Shader units. For $199 you can pick up the 256MB version already; it sounds like a great deal as it should offer at least twice the performance of an X1600 Pro.

With that being said I'd like to invite you to hit the next page where we'll start off with a little technical information on the new Radeon X1950 Pro, discuss its pricing, followed by a little photo shoot, after which we'll start up a large benchmark session to see how well it behaved performance-wise.


    The Radeon X1950 512MB, and an elephant sitting on top of it

    Tech Bits
    Specifications of the RV570 core are cuddling up into 12 pipelines and 36 pixel shaders with a 580 MHz core clock. Memory will be clocked at 1.4 GHz and have a 256-bit interface.
The standard Radeon X1950 Pro cards will be equipped with either 256MB or 512MB of graphics memory and sport either a single- or dual-slot cooler (depending on the manufacturer's choice). The ATI Radeon X1950 Pro is also ATI’s first card with internal CrossFire compatibility for dongle-less CrossFire connectivity. ATI claims performance of the Radeon X1950 Pro will be faster than the GeForce 7900 GS.
In short, this card will work at 575-600MHz core and 1400MHz memory and will be pushed over a 256-bit memory interface. ATI plans to launch as we speak, here in October, and we know for a fact that chips have been sampling for quite some time. Another interesting fact is that the Radeon X1950 Pro and X1650 XT have both launched as 80nm products.
Radeon card         Pixel Shaders   Vertex Shaders   Texture Units   Max Threads   Core Freq (MHz)   Memory Freq   Memory
X1950 XTX           48              8                16              512           650               1000          512 MB GDDR4
X1900 XTX           48              8                16              512           650               1550          512 MB GDDR3
X1900 CrossFire     48              8                16              512           625               1450          512 MB GDDR3
X1900 XT            48              8                16              512           625               1450          512 MB GDDR3
X1900 AIW           48              8                16              512           500               1000          256 MB GDDR3
X1950 Pro           36              8                12              384           580               1380          512 MB GDDR3
X1800 XT            16              8                16              512           625               1.5 GHz       512 MB GDDR3
X1800 XL            16              8                16              512           500               1.0 GHz       256 MB GDDR3
X1600 XT            12              5                4               128           590               1.38 GHz      128 / 256 MB
X1600 Pro           12              5                4               128           500               780 MHz       128 / 256 MB
X1300 Pro           4               2                4               128           600               800 MHz       256 MB
X1300               4               2                4               128           450               500 MHz       128 / 256 MB
X1300 HyperMemory   4               2                4               128           450               1 GHz         32 / 128 MB
    High Dynamic Range (HDR)
ATI, ever since the X1000 family, has focused extremely hard on HDR, just like NVIDIA did. They put a lot of money into their technology to support HDR in the best possible way. And they should, as it is simply a fantastic effect that brings so much more to your gameplay experience. HDR is something you all know from games like Far Cry: extremely bright lighting that brings a really cool cinematic effect to gaming. This effect is becoming extraordinarily popular and the difference is obvious. HDR means High Dynamic Range. HDR facilitates the use of color values way beyond the normal range of the color palette in an effort to produce a more extreme form of lighting rendering. Typically this trick is used to contrast really dark scenery. Extreme sunlight, over-saturation or over-exposure is a good example of what exactly is possible. The simplest way to describe it would be controlling the amount of light present at a certain position in a 3D scene. HDR is already present in Splinter Cell: Chaos Theory, Far Cry, Oblivion, Half-Life 2: Lost Coast, Episode One, Serious Sam 2, 3DMark06 and will be available in Unreal 3, to name a few titles.
One last thing about HDR: ATI's HDR solution can manage anti-aliasing with HDR enabled, though some games need to be patched. And you might want to check our download section for an unofficial patch. As you know, HDR together with AA enabled has always been an issue; no longer.

You can enable HDR and up to 6xAA simultaneously. This is hurting NVIDIA big-time. NVIDIA's cards can do HDR through shaders, yet it's not well supported at all by software developers. It is likely too hard and thus too costly to implement.

    Obviously ATI has a far better AA+HDR solution at hand.
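To make the idea of 'color values way beyond the normal range' concrete, here is a tiny, generic tone-mapping sketch in Python. It uses the textbook Reinhard operator purely as an illustration; it is not how ATI's hardware (or any specific game) actually implements HDR:

```python
# Minimal HDR-to-display sketch using the Reinhard tone-mapping operator.
# HDR pixel values can go far above 1.0 (e.g. direct sunlight), while a
# display can only show [0, 1], so bright values get compressed.

def reinhard(value: float, exposure: float = 1.0) -> float:
    v = value * exposure
    return v / (1.0 + v)        # maps [0, infinity) into [0, 1)

for radiance in (0.1, 0.5, 1.0, 4.0, 16.0):   # arbitrary sample HDR values
    print(f"HDR {radiance:5.1f} -> display {reinhard(radiance):.3f}")
```

The interaction with anti-aliasing is broadly the sticking point described above: HDR rendering relies on high-precision render targets, and the card has to be able to multisample and resolve those targets for HDR+AA to work.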
    AVIVO (Advanced Video in and Out)
Ever since the release of last year's Catalyst 5.13 driver some stuff has changed for the better, media-wise. As we all know, and as I've been preaching for a while now, we see living room entertainment coming to the PC more and more in a very fast fashion. One of the most popular things we've noticed here in Europe has to be HDTV and everything related to it. The trend started last year and hey, even yours truly bought an HDTV recently, and I'm a technology trend setter! It's coming fast and quite frankly, thank God for that, as watching content in HD is simply fantastic. So how does that relate to graphics cards? In more ways than you think; just look at the latest trend of HTPCs (Home Theater PCs). Things like Media Center PCs here and there? Do you get where I'm going with this?
Yes, exactly this kind of thing is what I am talking about. This is the future of media playback and the PC is going to play a very important role in that. Since it's a PC you probably want a graphics card in there that can support all the cool and extensive features. So media playback and decoding is a process that can, is and will be moved towards the graphics card. Both NVIDIA and ATI already had excellent implementations of it. ATI just took it to the next level though. With exactly this kind of stuff in mind they introduced the new AVIVO feature.
    Avivo features according to the ATI website:

    • Supports hardware MPEG-2 compression, hardware assisted decode of MPEG-2, H.264 and VC-1 video codecs, and advanced display upscaling
• 64 times the number of colors currently available in current PCs; higher color fidelity with 10-bit processing throughout Avivo's display engine
    • Resolutions, such as 2560x1600 or higher, on the latest digital displays using dual-link DVI, as well as high color depth support over DVI
• Advanced up or down resolution scaling on any flat panel display using ATI's solutions
    • Video capture with features like 3D comb filtering, front-end video scaling, and hardware MPEG video compression
    • Hardware noise reduction and 12-bit analog-to-digital conversion
    • Supports standard TV, HDTV, video input and all PC displays via digital (DVI, HDMI) and analog (VGA, Component, S-Video, composite) ports

Avivo will be an integral component in all of ATI's upcoming desktop, mobile, chipset, workstation and software products. As stated, Media Center PCs are getting really popular. TV is going digital and HD/HD2(?) Blu-ray and HD-DVD are coming. Digital photography is everywhere. AVIVO is a video and display platform that achieves better video quality. AVIVO will be integral in all future ATI products. Smooth vivid playback. Flawless playback for both SD and HD television is what this stuff is intended for from a decoding point of view. With two dual-link DVI ports, which are supported on the entire X1000 range, two high-definition screens can be connected.
Suffice it to say that you can have HDTV output over the DVI, both analog and digital, but also YPrPb component, as well as S-Video and Composite (which of course can't do HD signals). The product series has full support for up to 1080p H.264 hardware-accelerated decoding, and mark my words, H.264 is the next standard that can and probably will replace MPEG-4. I've seen it, I've tested it and it is looking brilliant with far less bandwidth.
If you'd like to have a slight idea how big a 1080i/p HD image actually is, just click this example image. Did you load it? Make sure you enlarge it to full screen. This is just one frame; the Radeon X1000 series cards will have to (and can) decode images like these in real-time. The HQV benchmark - behind the scenes here in the Guru3D caves we are compiling data for a way of "measuring" the image quality of graphics cards in terms of decoding. HQV is a professional way of testing and awarding scores to different types of playback. In combination with the not-so-old Catalyst beta 6.5 drivers we had available, the score went sky-high as we measured 128 points out of a maximum score of 130, which makes AVIVO currently the best possible solution for playing back quality-rich high-definition and "standard"-definition media files.
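Since the article asks you to picture how big a 1080p frame really is, here is the quick arithmetic as a Python snippet (my own numbers, assuming an uncompressed 24-bit frame and a worst-case 60 frames per second):

```python
# Size of one uncompressed 1080p frame at 24 bits per pixel, and the raw
# data rate a decoder would face at 60 frames per second.
width, height, bytes_per_pixel = 1920, 1080, 3

frame_bytes = width * height * bytes_per_pixel      # ~6.2 MB per frame
raw_rate = frame_bytes * 60                          # ~373 MB/s uncompressed

print(f"one frame: {frame_bytes / 1e6:.1f} MB")
print(f"60 fps raw: {raw_rate / 1e6:.0f} MB/s")
```

An H.264 stream squeezes that down to a tiny fraction of the raw rate, but reconstructing those frames in real time is exactly the heavy lifting that hardware-accelerated decode takes off the CPU.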


Full review:
    http://www.guru3d.com/articles-pages...-review,1.html








    ATI Radeon X1950 Pro: CrossFire Done Right

    Introduction
    It seems that ATI has been releasing a constant stream of new or rebadged graphics cards lately, and it looks like this month won't be any different. Today is quite a special treat: ATI has integrated new CrossFire specific features onto the GPU itself. The release of another part at the $200 price point after ATI's recent price drops and re-badging would otherwise seem redundant, but the advantages of the changes ATI has made to CrossFire really bolster its ability to compete with NVIDIA's SLI.
The new Radeon X1950 Pro is a pretty heavy hitter at $200, bringing slightly faster performance than the current X1900 GT at a slightly lower price point. With the X1900 GT currently being phased out, we would expect nothing less. This will certainly help strengthen ATI's ability to compete with the 7900 GS at the $200 price point, and might even make the X1950 Pro a viable option over some more expensive overclocked 7900 GS parts.
In spite of the fact that ATI is using TSMC's 80nm process, we don't expect to see very many overclocked versions of the X1950 Pro, as the high transistor count, large die size and high speeds tend to get in the way of stable overclocking. We will certainly be testing out the overclocking capabilities of the X1950 Pro when we get our hands on some retail versions of the cards (overclocking with reference cards doesn't always give an accurate picture of the product's capabilities). For now, we'll just have to wait and see. In the meantime, we've got plenty of other things to explore.
    For this look at ATI's newest graphics card, we'll take a peek at the details of the RV570 hardware, what differences have been introduced into CrossFire with the new silicon, and performance of single and multi-GPU configurations from the midrange through the high end. We will find out if the X1950 Pro is really a viable replacement for the X1900 GT, and whether or not the enhancements to CrossFire are enough to bring ATI on to the same playing field as NVIDIA.


    RV570 and the Demise of the X1900 GT
    The silicon used in the X1950 Pro is based on an 8-vertex 36-pixel shader configuration. While the X1900 GT has the same pipeline configuration as the X1950 Pro, the X1900 GT is based on R580 cores with disabled or non-functional pipelines. The RV570 core is built with the X1950 Pro in mind. With the introduction of the X1950 Pro, the X1900 GT will be phased out. It is unclear whether or not ATI has a use planned for R580+ GPUs that don't make the cut on the high end, but it looks like they won't just fall neatly into the X1900 GT. We would like to say that the X1950 Pro has the same core clock speed as the X1900 GT, but the issue is a little more complicated.
    The X1900 GT will be going through a slight revision before its disappearance. Due to a shortage of original R580 cores that can clock to 575MHz, ATI is dropping the specs on the X1900 GT to 512MHz while attempting to make up for this by boosting memory speed to 1320MHz from 1200MHz. This is being done to keep the supply of X1900 GT parts steady until the X1950 Pro is able to take over. It is difficult to describe just how inappropriate it is to retard the specs on a long shipping product in this manner.
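To put that spec change in perspective, a two-line calculation on the clocks quoted above (my arithmetic, not AnandTech's):

```python
# Relative change in the revised X1900 GT specification quoted above.
core_change = (512 - 575) / 575 * 100      # about -11.0% core clock
mem_change  = (1320 - 1200) / 1200 * 100   # about +10.0% memory clock

print(f"core: {core_change:+.1f}%, memory: {mem_change:+.1f}%")
```

Given how shader-heavy R580 is, an 11% core-clock cut is unlikely to be fully offset by 10% more memory clock in most games.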
    It is hard enough for us to sort things out when parts hit the shelves at different speeds than originally promised, but to do something like this after a part has been on the market for months is quite astounding. Be very careful when looking at buying an X1900 GT over the next couple months. The safest route is to avoid the X1900 GT altogether and simply let the X1950 Pro act as an immediate replacement for the X1900 GT. Leaving 512MHz product sitting on shelves is the best way to send the message that this type of action is not to be taken again. For our part, we have to express our extreme disappointment in ATI for taking this route. We certainly understand that it is difficult to make decisions about what to do when faced with product shortages, but we would like to strongly urge everyone in the computing industry to avoid doing anything like this to stretch the life of a product.
    For now, let's get back to the X1950 Pro. Weighing in at about 330 million transistors and about 230 mm2, the RV570 is no small GPU. In addition to the features listed below, RV570 includes an integrated compositing engine for what ATI calls "native" CrossFire support which we'll explain shortly. The heatsink has a different look to match the rest of the X1950 family in a single slot solution. There are also the new CrossFire connectors in nearly the same position as the NVIDIA SLI bridge position. Here are some pictures and tables to help illustrate.

NVIDIA Graphics Card Specifications
Card              Vert Pipes   Pixel Pipes   Raster Pipes   Core Clock   Mem Clock   Mem Size (MB)   Mem Bus (bits)   Price
GeForce 7950 GX2  8x2          24x2          16x2           500x2        600x2       512x2           256x2            $600
GeForce 7900 GTX  8            24            16             650          800         512             256              $450
GeForce 7950 GT   8            24            16             550          700         512             256              $300-$350
GeForce 7900 GT   8            24            16             450          660         256             256              $280
GeForce 7900 GS   7            20            16             450          660         256             256              $200-$250
GeForce 7600 GT   5            12            8              560          700         256             128              $160
GeForce 7600 GS   5            12            8              400          400         256             128              $120
GeForce 7300 GT   4            8             2              350          667         128             128              $100
GeForce 7300 GS   3            4             2              550          400         128             64               $65

ATI Graphics Card Specifications
Card              Vert Pipes   Pixel Pipes   Raster Pipes   Core Clock   Mem Clock   Mem Size (MB)   Mem Bus (bits)   Price
Radeon X1950 XTX  8            48            16             650          1000        512             256              $450
Radeon X1900 XTX  8            48            16             650          775         512             256              $375
Radeon X1900 XT   8            48            16             625          725         256/512         256              $280/$350
Radeon X1950 Pro  8            36            12             575          690         256             256              $200
Radeon X1900 GT   8            36            12             575          600         256             256              $220
Radeon X1650 Pro  5            12            4              600          700         256             128              $99
Radeon X1600 XT   5            12            4              590          690         256             128              $150
Radeon X1600 Pro  5            12            4              500          400         256             128              $100
Radeon X1300 XT   5            12            4              500          400         256             128              $89
Radeon X1300 Pro  2            4             4              450          250         256             128              $79


    The New Face of CrossFire
    There haven't been any changes to the way CrossFire works from an internal technical standpoint, but a handful of changes have totally revolutionized the way end users see CrossFire. NVIDIA's SLI approach has always been fundamentally better from an end user standpoint. Internal connectors are cleaner and easier to use than ATI's external dongle, and the ability to use any X1950 Pro in combination with any other X1950 Pro is absolutely more desirable than the dedicated master card approach. ATI has finally done it right and followed in NVIDIA's footsteps.
At the heart of the changes to CrossFire is the movement of ATI's compositing engine from the card onto the GPU itself. This does add cost to every GPU and thus every graphics card, but the added benefits far outweigh any negatives. In early versions of CrossFire, digital pixel information was sent between cards using TMDS transmitters (the same transmitters used to send display information over DVI and HDMI). While this format is fine for displays, it isn't as well suited for chip to chip communication.
With the compositing engine built into every GPU, ATI is now able to send pixel data through an over-the-top NVIDIA style bridge directly to another GPU. This also eliminates the necessity of a TMDS link for use in transmitting pixel data. ATI hasn't talked about what type of communication protocol is used between the compositing engines on each chip, but we suspect that it is a little lower speed than NVIDIA's 1GHz connection. ATI is using a higher bit-width connection split into two 12-bit parallel channels. At full capacity, ATI states that these connections can support resolutions of up to 2560x2048, but that communication doesn't happen any faster than the old style TMDS method.
    ATI did make it clear that even though this incarnation of CrossFire supports a higher resolution than we are currently able to test, it won't necessarily run well. Of course, we'd much rather see a situation where we aren't limited by some technical aspect of the hardware. The first incarnation of CrossFire was quite disappointing due to its low maximum resolution of 1600x1200.
    One of the oddities of this multi-GPU implementation is the splitting up of the connector that links the GPUs. Both are required for the driver to enable CrossFire, but only one is technically necessary. As bridges will be bundled with graphics cards, everyone who purchases 2 X1950 Pro cards will have two bridges. This eliminates the need for end users to buy bridges separately or rely on them shipping with their multi-GPU motherboard. When pressed further about why two connectors were used, ATI asked us to envision a system with 3 or 4 graphics cards installed. With 2 channels, cards can be easily chained together. This does offer ATI a little more flexibility than NVIDIA in scaling multi-GPU configurations, but it is also a little more cumbersome and offers more small parts to lose. Overall, though, the 2 channel configuration is a good thing.
    Now that we have a chip built specifically for the $200 price point with a robust, full featured, CrossFire implementation, we are very interested in seeing what type of performance ATI is offering.
Full review:
    http://www.anandtech.com/show/2104









    ATI Radeon X1950 Pro – More Than Frequency Changes

    The ATI Radeon X1950 Pro, codenamed RV570, has been built on the 80nm process, contains 36 pixel shader engines, 12 parallel pipelines and eight vertex engines, utilizes digital PWM, supports dongle-less CrossFire, fits in a single PCIe slot and comes with an MSRP of $199. When the AMD/ATI merger was announced many critics and analysts thought that ATI would be too busy merging with AMD to keep producing quality products for the high end graphics market, but it’s safe to say that with today’s successful launch ATI has cast away those worries.

    ATI has clocked the core on the X1950 Pro at 575MHz and the 256MB of GDDR3 memory at 1.38GHz, which are aggressive for a card with a $199 price tag. The core on the X1950 PRO is manufactured on an 80nm fabrication process, and is completely separate in almost every way imaginable from the existing Radeon X1950 video cards as we will show you later in this article. ATI has been impressed by the cores made on the 80nm process and told Legit Reviews that the average overclock on the core has been 100MHz from what they have seen and been hearing back from those that are lucky enough to have a card already.

The 80nm RV570 core features 36 Pixel Shaders (12 fewer than the X1950 XTX) and 12 Texture Units (four fewer than the X1950 XTX) to get the job done. Video output options include dual-link DVI + S-Video, dual-link DVI + single-link DVI + S-Video or dual-link DVI + VGA + S-Video. The ATI Radeon X1950 Pro has internal dual-link TMDS transmitters for both DVI outputs. As usual, ATI doesn’t recommend any particular configuration and it is up to the add-in board partner to choose the appropriate one.


    Native CrossFire Technology

    For years ATI has been ridiculed for having a dongle and to be honest it’s been a pain for us too as we tend to install and remove CrossFire cards on the test bench quite often. Gone are the annoying small screws and external dongle, in are a pair of internal CrossFire bridges that connect both cards. This also means that there is no longer the need for a master and slave card, which will make purchasing a graphics card from ATI much easier than it has been in the past.

ATI requires the use of both CrossFire connectors and will include the cables with the purchase of any X1950 Pro series card. ATI informed Legit Reviews that each cable will supply 12-bit performance, meaning both have to be used to reach 24-bit gaming at up to 2560×2048 at 60Hz on cards fast enough to run graphics that high. It should also be noted that if ATI wanted to pair more than two graphics cards together, these connector locations could also be used to daisy chain cards together. We don't expect support for three daisy-chained graphics cards any time soon, but it's something to ponder.

    Here is a shot of our ATI Radeon X1950 Pro cards with the pair of CrossFire connectors connected to the graphics cards. The installation of the CrossFire connectors is identical to NVIDIA’s SLI solution as all that needs to be done is push them down for them to be correctly installed.

One interesting thing to note about the CrossFire connectors is that they were made by Molex Incorporated and dated August 22, 2006, so these have been around for at least a couple of months. The take-home message from this page is that ATI has done away with the dongle on the X1950 Pro and the upcoming X1650 XT, so the need for a master and slave card for CrossFire is a thing of the past on these series.
    Now that we have covered the most important new feature let’s look at the other noteworthy changes on the X1950 Pro.


    Goodbye Analog and Hello Digital PWM


When the ATI Radeon X1950 Pro was placed next to the Radeon X1950 XTX it was obvious that ATI changed a few things at the end of the card, as the layout is much cleaner. This picture also shows the size difference between the cores and answers the question of whether the X1950 Pro and X1950 XTX share the same core - they obviously don't.

The reason the PCB looks cleaner is because ATI was the first in the industry to use digital Pulse Width Modulation (PWM), a technology never seen before on high-end desktop graphics cards. Digital controllers eliminate the dangers of overheating and exploding capacitors, giving users a safer and better monitored control over their video cards. By moving over to a digital PWM, ATI was able to save space on the PCB, but didn't shrink the size of the card as they wanted to stick with a good low-noise cooling solution.

ATI and NVIDIA have been using analog signals up to this point and it hasn't been a problem, but it has been proven in the labs that digital signals are the way of the future. One of the disadvantages of an analog circuit is that it tends to drift with time and is difficult to tune. Analog circuits are also usually hot and are sensitive to noise. A digital signal is easier to implement, requires a smaller circuit, can be fine-tuned, is easily reproducible, dissipates less heat, is immune to noise and weighs less, meaning that it is the best way to dial a graphics card in. ATI uses an RoHS-compliant Multi-Phase SMD Coupled Inductor (part #59PR9852) by Vitec Electronics Corporation to make sure the digital signals are in check and hopefully enthusiasts will end up getting clean power.
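As a generic illustration of what pulse-width modulation does in a voltage regulator, here is a small Python sketch of the idealized buck-converter relationship. This is textbook behaviour, not ATI's actual controller, and the 1.3V core voltage and 8-bit resolution below are assumed example values, not figures from the article:

```python
# Idealized buck converter: average output voltage = input voltage x duty cycle.
# A digital PWM controller picks the duty cycle from a fixed number of steps.

v_in = 12.0                      # PCI Express auxiliary 12V rail
v_target = 1.3                   # assumed example GPU core voltage (not from the article)
duty_cycle = v_target / v_in     # ~10.8% on-time

pwm_bits = 8                     # assumed controller resolution (example only)
step_v = v_in / (2 ** pwm_bits)  # ~47 mV of output voltage per duty-cycle step

print(f"duty cycle: {duty_cycle * 100:.1f}%")
print(f"smallest output-voltage step: {step_v * 1000:.0f} mV")
```

The point of going digital, as the article describes it, is that the duty cycle becomes a number the controller can set and monitor precisely instead of an analog quantity that drifts with temperature and component tolerances.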

    Other than the move to digital PWM ATI has moved the heat sink fan power header to the top of the card and actually moved the additional +12V header down to the middle of the card. With that said the digital PWM is covered and we can move on to better things!


Full review:
    http://www.legitreviews.com/ati-rade...QRjmUUrxS7b.99



On 17 October 2006, ATI finally made a very strong bet on the entry-level segment of the market, with an innovative, completely new product loaded with new features, something unusual for a product that cost little more than 100€, and that product was the new ATI X1950 PRO.
For this card ATI fitted a completely new chip, named RV570, which of course brought with it all the features present in DirectX 9.0c and the famous image quality ATI was known for.
The big novelty of this chip, the RV570, is that it brought the first revision of CrossFire, a revision that would last more or less until 2012. This revision did away with the unsightly external cables and with the need for a "Master" card, moving everything inside the case and using a CF bridge to link the two graphics cards; in other words, ATI here matched what nVidia had offered from the start.
Another new feature: power delivery on the card moved to digital PWM control, setting the analogue approach aside and eliminating some of the overheating risks that came with it.
As for this card, the reference model, the same one seen in the images (except Guru3D's), looks excellent; the red/transparent acrylic with the cooler underneath makes it one of the prettiest cards ever to come from ATI.
The only downside these cards had, in my opinion, was the timing, given that nVidia was about to drop a bomb on the market; had these cards arrived two months earlier, ATI would have pulled off a masterstroke.
Even though it is an entry-level card, given what it marks in ATI's path and the innovations it brings, it deserves a place among hardware collectors' items.


Edit:
Correction to the price given at the start of the text: by mistake I said this card cost a little over 100€, but the right price at the time it came out should have been around 180 to 190€, and this is one of the first cards where ATI made a sound and very strong bet on the mid-range segment of the market.
Last edited by Jorge-Vieira: 17-02-16 at 09:57

8. #188 - Nirvana91 (Tech Member)
Quote Originally Posted by Jorge-Vieira: "...something unusual for a product costing just over €100, and that product was the new ATI X1950 PRO."
Are you sure it only cost that much?

9. #189 - Jorge-Vieira
Nope, that was a mistake of mine in the text... I have no idea where I got the €100 from; it was a bit more expensive, but not by much.

10. #190 - Jorge-Vieira
    ATI's New High End and Mid Range: Radeon X1950 XTX & X1900 XT 256MB

ATI has this nasty habit of introducing way too many GPUs into its lineup, and today is no exception, as ATI is introducing a total of five new video cards.
    We'll start at the bottom with the Radeon X1300 XT, a new $89 part from ATI. The X1300 XT is effectively a rebadged X1600 Pro, and thus should offer a significant performance boost over the rest of the X1300 family.
    Since the X1300 XT is the same thing as an X1600 Pro, the X1600 family gets a new member with the introduction of the X1650 Pro. The X1650 Pro is identical to the X1600 XT except for a 10MHz increase in core clock and memory clock frequency. Yes, an entirely new product was created out of a 10MHz bump in GPU/memory clocks. The X1650 Pro will be priced at $99.

    ATI's Radeon X1650 Pro
Last week we took a look at currently available mid-range GPU solutions in the $200 - $300 price range and found that for around $340 you could pick up a 512MB X1900 XT and generally get some very solid performance. Today ATI is introducing a 256MB version of the X1900 XT at the suggested retail price of $279, which has the potential to give ATI a firm grasp on the performance mainstream GPU market. The X1900 XT 256MB is no different than its 512MB brother other than memory size, so pipes and clocks are the same. If you're wondering why the X1900 XT (512MB) has seen such a sharp decline in price over the past couple of weeks, the impending release of the cheaper 256MB version is your answer.
    At the high end we've got the final two cards that round out today's launch: ATI's Radeon X1950 XTX and X1950 CrossFire. The X1950 XTX is identical to the X1900 XTX except that it uses faster GDDR4 memory, running at 1GHz compared to 775MHz on the X1900 XTX. With more memory bandwidth, the X1950 XTX could outperform its predecessor, but performance isn't what we're mostly excited about with this card - it's the price. ATI is hoping to sell the X1950 XTX for $449, a drop in price compared to the introductory price of the X1900 XTX, which is a trend we haven't seen too often among GPU makers.

    ATI's Radeon X1950 XTX
    To make things even better, the CrossFire version, which has identical clocks, is also priced at $449; in other words, there's no reason not to get the CrossFire version. ATI confirmed to us that you can run a pair of X1950 CrossFire cards in CrossFire mode, further reinforcing the fact that there's no reason to even buy the regular card. You get the same performance, same features and better flexibility with the CrossFire card so why not?

    ATI's Radeon X1950 CrossFire
NVIDIA Graphics Card Specifications
Vert Pipes Pixel Pipes Raster Pipes Core Clock Mem Clock Mem Size (MB) Mem Bus (bits) Price
GeForce 7950 GX2 8x2 24x2 16x2 500x2 600x2 512x2 256x2 $600
GeForce 7900 GTX 8 24 16 650 800 512 256 $450
GeForce 7900 GT 8 24 16 450 660 256 256 $280
GeForce 7600 GT 5 12 8 560 700 256 128 $160
GeForce 7600 GS 5 12 8 400 400 256 128 $120
GeForce 7300 GT 4 8 2 350 667 128 128 $100
GeForce 7300 GS 3 4 2 550 400 128 64 $65


ATI Graphics Card Specifications
Vert Pipes Pixel Pipes Raster Pipes Core Clock Mem Clock Mem Size (MB) Mem Bus (bits) Price
Radeon X1950 XTX 8 48 16 650 1000 512 256 $450
Radeon X1900 XTX 8 48 16 650 775 512 256 $375
Radeon X1900 XT 8 48 16 625 725 256/512 256 $280/$350
Radeon X1900 GT 8 36 12 525 600 256 256 $230
Radeon X1650 Pro 5 12 4 600 700 256 128 $99
Radeon X1600 XT 5 12 4 590 690 256 128 $150
Radeon X1600 Pro 5 12 4 500 400 256 128 $100
Radeon X1300 XT 5 12 4 500 400 256 128 $89
Radeon X1300 Pro 2 4 4 450 250 256 128 $79
    Today we're able to bring you a look at performance of the mid range and high end solutions, the X1950 cards and 256MB X1900 XT. We're still waiting for ATI to send us our X1300 XT and X1650 Pro samples, and we will follow up in the coming weeks with a look at the performance of those offerings as well. Note that although ATI is lifting the veil on its five new products today, you won't actually be able to buy any of them until September 4th (on the high end) with no real availability until the 14th. Given the pricing that ATI is promising however, these cards are worth waiting for.
    With five new cards being introduced, ATI is hoping to slowly phase out all of its other offerings to simplify its product lineup. Unfortunately, it will take some time for all inventory to dry up, but when it does ATI hopes to have the following cards in its lineup:
Class Card Price
Enthusiast ATI Radeon X1950 XTX $449
Enthusiast ATI Radeon X1900 XT 256MB $279
Performance ATI Radeon X1900 GT $249
Mainstream ATI Radeon X1650 Pro $99
Mainstream ATI Radeon X1300 XT $89
Value ATI Radeon X1300 Pro $79
Value ATI Radeon X1300 256 $59
Value ATI Radeon X1300 64-bit $49

    The performance difference between the X1900 XTX and XT was small enough that it didn't make sense to have two different products, which is why ATI left the X1950 XTX as the only high end GPU on its roster.
    As we don't have availability right now, we can't confirm real street prices, but we did speak with a few companies who manufacture ATI cards. HIS has stated that they should be able to meet ATI's pricing on all of these parts, which is promising. We also heard from PowerColor on pricing, and it looks like they will be able to meet the MSRP price on the X1950 XTX. With the X1900 XT and X1900 XT 256MB, PowerColor will be listing them for $400 and $300 respectively. Depending on how the rest of the manufacturers stack up, we could see some good prices next month or be sorely disappointed; at this point it's best to be cautious with a launch so far in advance of availability.



    What is GDDR4?

    The major advancement that makes the new X1950 series possible is the availability of GDDR4. This really is an incremental step forward from GDDR3 with an eye towards power saving. Of course, in the high end graphics world "power saving" is a euphemism for overclockability. Thus we have a technology designed to run efficiently and to be pushed beyond the limit of reason. Sometimes we can have our cake and eat it too. While the majority of the power savings come in at lower clock speeds, we will see in our tests that there are some power benefits at the high end as well.
    We have gotten our hands on some information about GDDR4, and will do our best to extract the most useful data. The major advances of GDDR4 include a lower voltage requirement of 1.5V (or up to 1.9V if overclocking). At the low end, this offers a 30% power savings over GDDR3 clock for clock. We also see a fixed burst length of 8 bits with GDDR4 as opposed to 4 with GDDR3. This allows the RAM to run at half the core frequency while offering the same memory bandwidth as GDDR3, which results in significant power savings (a 2GHz data rate GDDR3 chip would run with a core clock of 500MHz, while GDDR4 can run at 250MHz). Alternately, this can be used to provide higher memory speeds in high end systems.
    Data bus inversion (DBI) also makes its way into memory with GDDR4. This technique helps to lower the average power used by the bus by minimizing the number of zeros transmitted. At first glance, this might not make much sense, but it all has to do with how zeros are sent. These days, it's most common to see digital logic use active low signaling. This means that a digital 1 is actually represented by a low power state. This is ostensibly because it is easier to create a sink than a source (it's easier to pull voltage down from a high state than to raise it up from a ground state). This means that we are actually using more power when we are sending a zero because the signal for a zero is a high voltage state.
    The way DBI works is that all the data is inverted if the current byte to be transmitted contains more than 4 zeros. A separate control bit (aptly named the DBI flag) is used to indicate whether the data is inverted on the bus or not. Here are a couple examples of what would happen when transmitting data over a bus using DBI.
    data to send: 11100000
    data on bus: 00011111, DBI Flag = 1
    data to send: 11111000
    data on bus: 11111000, DBI Flag = 0
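The rule is simple enough to express in a few lines. The sketch below reproduces the two examples above; it is a toy model of the encoding rule only, not Samsung's actual signalling logic.

# GDDR4-style data bus inversion (DBI) for one byte: invert whenever the byte
# holds more than four zeros, so the bus never carries more than four
# high-voltage (zero) states at once.

def dbi_encode(byte):
    zeros = 8 - bin(byte & 0xFF).count("1")
    if zeros > 4:
        return (~byte) & 0xFF, 1   # data on bus, DBI flag set
    return byte & 0xFF, 0

def dbi_decode(bus_byte, flag):
    return (~bus_byte) & 0xFF if flag else bus_byte

for word in (0b11100000, 0b11111000):
    bus, flag = dbi_encode(word)
    assert dbi_decode(bus, flag) == word
    print(f"data to send: {word:08b} -> data on bus: {bus:08b}, DBI flag = {flag}")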
Addressing is also done differently with GDDR4. If we are considering the 16Mx32 (meaning 16 million addresses that hold 32 bits of data each) 512Mbit GDDR4 modules currently available from Samsung, we will have only 12 address pins. A full address is sent in two consecutive clock cycles (as 24 bits are needed to select between 16 million addresses). This frees pins up for other things, like power and ground, which could increase the capability of the DRAM to run at high speeds. Among the other optimizations, a multi-cycle preamble is used to make sure that timing is accurate when sending and receiving data (allowing for faster speeds), GDDR4 has a lower input capacitance than GDDR3, and memory manufacturers have more control over the properties of the transistors and resistors used in the driver and receiver in order to better tune products to specific needs.
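The two-cycle addressing is easy to picture as well. The sketch below splits a 24-bit address across 12 pins over two transfers; sending the high half first is an assumption made purely for illustration.

# 16Mx32 GDDR4: a 24-bit address sent over 12 address pins in two consecutive cycles.

def split_address(addr, pins=12):
    mask = (1 << pins) - 1
    return (addr >> pins) & mask, addr & mask   # cycle 1 (high half), cycle 2 (low half)

def join_address(hi, lo, pins=12):
    return (hi << pins) | lo

addr = 0xA5F3C7                                 # any 24-bit address
hi, lo = split_address(addr)
assert join_address(hi, lo) == addr
print(f"address {addr:06X} -> cycle 1: {hi:03X}, cycle 2: {lo:03X}")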
    Right now, ATI is using Samsung's 80nm 0.91ns K4U52324QE GDDR4 modules on its X1950 products. This is actually the slowest GDDR4 memory that Samsung sells, clocking in at a max of 1100MHz. Their 0.714ns RAM is capable of hitting 1400MHz which will be able to put future graphics cards beyond the 2.5GHz data rate and up near the 80GB/s range in memory bandwidth. Of course, the X1950 XTX memory bandwidth of 59.6GB/s is pretty impressive in itself. From a clock for clock perspective, GDDR4 can offer advantages, but we shouldn't expect anything revolutionary at this point. We ran a couple tests underclocking the X1950 XTX, and saw performance on par with or slightly faster than the X1900 XTX.

Full review:
    http://www.anandtech.com/show/2069






    ATI's Radeon X1950 XTX graphics cards

    ...and family

    PC GRAPHICS TECHNOLOGY HAS EARNED itself a reputation as a fast-moving locus of innovation, and that rep is certainly well deserved. Still, the much-ballyhooed talk of six-month product cycles and the breakneck pace of change is a little bit overheated. About 25% of everything that happens in PC graphics involves truly novel innovations, such as new GPU microarchitectures with features never seen before. The rest is mostly just dance remixes.
    Today is a day of dance remixes for ATI. You can hear the thump-thump-thump of the drum track throbbing in the background if you listen closely. Fresh off the announcement of its public engagement to AMD, the red team has cued up five new Radeon video cards, from the low end to the very high end, and they are all remakes of already familiar tunes.
    Fortunately, in the world of video cards, remixes actually bring improvements most of the time. They tend to offer more graphics horsepower at lower prices, not just a torrid, syncopated rhythm from a drum sequencer. The new flavors of Radeons range from the X1950 XTX at just under five hundred bucks to the X1300 XT at well under a hundred. In the middle of the pack is a potential gem for PC enthusiasts: a new $279 version of the Radeon X1900 XT that looks to redefine the price-performance equation. Keep reading for the info on ATI's revamped lineup, including our tests of the most appealing cards for enthusiasts.
    The Radeon X1000 remixes
    All told, ATI is unveiling five new video cards today. To cover them, we'll start at the high end and move down.
    The Radeon X1950 CrossFire Edition (left) and Radeon X1950 XTX (right)
    The two cards you see pictured above are the Radeon X1950 XTX and its CrossFire Edition. The X1950 XTX is based on a chip that ATI has dubbed "R580+" for its status as a tweaked version of the R580 GPU found in all Radeon X1900-series graphics cards. Like its forebear, the R580+ is still manufactured at TSMC on a 90nm fabrication process, and it still tops out at 650MHz on the Radeon X1950 XTX, just as the R580 does on the X1900 XTX. The plus, however, extends ATI's tradition of pioneering new types of graphics RAM by adding support for GDDR4 memory. ATI's PR types claim GDDR4 memory uses less power per clock cycle than the current GDDR3-standard memory chips.
So the big gain with the R580+ is memory clock speeds. They're up from 775MHz on the X1900 XTX to a cool 1GHz on the X1950 XTX (2GHz effective, once you take the double data rate memory thing into account). The faster RAM gives the Radeon X1950 XTX a grand total of 64GB/s of peak theoretical memory bandwidth, well above the 49.6GB/s possible on the X1900 XTX.
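Both bandwidth figures fall straight out of the bus width and the effective data rate. The quick check below uses decimal gigabytes; AnandTech's 59.6GB/s figure for the same card is the same number expressed in binary gigabytes.

# Peak theoretical bandwidth = memory clock x 2 (DDR) x bus width / 8.

def peak_gbps(mem_clock_mhz, bus_bits=256, ddr=2):
    return mem_clock_mhz * 1e6 * ddr * bus_bits / 8 / 1e9

print("X1950 XTX @ 1000 MHz:", peak_gbps(1000), "GB/s")   # 64.0
print("X1900 XTX @  775 MHz:", peak_gbps(775), "GB/s")    # 49.6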
    That, of course, raises an intriguing question: was the Radeon X1900 XTX really so limited by memory bandwidth that the switch to a new RAM type alone can yield real performance benefits? We'll soon find out.
    You may also have noticed the X1950's fancy new cooler. ATI says it switched providers in order to get this puppy, which looks to be an improvement on the double-wide cooler used in the X1800 and X1900-series cards. This fansink still exhausts hot air out the back of the PC case, but the blower is located further inside of the case, with the aim of reducing the noise that escapes the enclosure. The new cooler is also endowed with a heatpipe that pulls heat away from the GPU into a battalion of copper fins. You won't doubt that the thing is real copper when you pick it up; it carries more heft than a U.N. resolution.
    At the back of the Radeon X1950 XTX is a pair of DVI-out ports and a video-in/video-out connector. The "built by ATI" versions of the X1950 will have support for HDCP via the DVI ports, enabling playback of DRM-encrusted Blu-ray and HD-DVD content. If you're buying a version of the X1950 XTX from an ATI partner, you'll need to check the spec sheet to ensure HDCP support is present, should you want it.
    Oh, and of course, this is a PCI Express video card; we have no word on plans for an AGP version.
    The Radeon X1950 CrossFire Edition is essentially the same thing as the X1950 XTX, save for the fact that it adds a special compositing engine for use with multi-GPU setups. ATI hasn't yet incorporated this image compositing engine into the GPU, so a CrossFire Edition card is still required. This time around, though, the CrossFire card runs at the same clock speeds as the XTX.
    ATI says to expect both the Radeon X1950 XTX and the CrossFire Edition to sell for $449. That puts it directly opposite the current prices of the GeForce 7900 GTX at online vendors.
    The Radeon X1900 XT 256MB (left) and Radeon X1950 XTX (right)
    On the left above is the next stop in our tour through the new Radeons. This is a 256MB version of the already-familiar Radeon X1900 XT. This card is still based on an R580 GPU clocked at 625MHz and mated with 725MHz memory, so it packs nearly as much graphics processing power as the former top-of-the-line Radeon X1900 XTX. The only change here is half the memory of the original X1900 XT and a much nicer price—$279, to be exact, about the price of a GeForce 7900 GT. The X1900 XT 256MB offers a heckuva lot of graphics processing power for the money.
    One of the few potential drawbacks to the X1900 XT 256MB is the lack of a CrossFire Edition card that's well matched to it. The card will operate in CrossFire mode with either the Radeon X1900 CrossFire or the Radeon X1950 CrossFire, but both of those cards are more expensive and will have to disable half of their RAM in order to work with it. ATI claims to be evaluating the possibility of enabling dongle-free CrossFire that operates via PCI Express for the X1900 XT 256MB, but they haven't committed to a timetable for delivering it. That's probably just as well, since this beast is probably too fast to work well in a PCI-E-based scheme.
    With the introduction of these new cards, the current Radeon X1900 XT 512MB and X1900 XTX will eventually be phased out. Such things take time, though, so both products will probably linger in the market for some time to come.
We have the new X1900 and X1950 cards in our hot little hands for testing, but we haven't yet gotten our mitts on the other two cards ATI is cooking up. The first of those is the Radeon X1650 Pro, which is a dead ringer for the current Radeon X1600 XT. Both are based on the RV530 GPU. While the X1600 XT runs at 590MHz with 690MHz memory, the X1650 Pro runs at 600MHz with a 700MHz RAM frequency. Accompanying this fine-tuning of clock speeds is a price cut to $99 for the X1650 Pro, between 10 and 40 bucks less than current X1600 XT prices. The X1650 Pro is the first installment in the plan for a new Radeon X1650 family to supplant the X1600 line with faster, cheaper parts.
If a 99-dollar video card is beyond your means, there's now an option at $89 in the form of the Radeon X1300 XT. This is the first and only member of the X1300 lineup to be based on the same RV530 chip found in the X1600/X1650 lines. For this application, the RV530 will be clocked at 500MHz and paired with 400MHz memory, so performance should be quite a bit lower than the X1650 Pro. (Think twice about saving that ten bucks, folks.) However, the XT should easily be the fastest Radeon X1300 thanks to its 12 pixel shader processors, five vertex shader processors, and eight Z-compare units, versus the RV515's four of each. The remaining RV515-based members of the Radeon X1300 family will stick around, but come down in price to slot below the X1300 XT.
    The scoop on pricing and availability
    All of these new cards are scheduled to become available at online retailers on September 14. We've seen claims about pricing and availability fail to work out as planned in the past, though, so we talked to ATI board partners to see what they had to say. Fortunately, both Diamond Multimedia and Connect3D confirmed that they're on track to hit that date.
    Diamond is planning a full lineup of new Radeons from top to bottom, and they were willing to divulge suggested retail pricing for those products. Both their Radeon X1950 XTX and its CrossFire Edition are slated to list at $499, and their X1900 XT 256MB should list at $399. Diamond will also be building both PCI-E and AGP versions of the Radeon X1650 Pro with 512MB of memory onboard, and both will list at $229, while their Radeon X1300 XT with a PCI Express interface and 256MB of RAM will list at $149. That may sound pricey, but those are the price tags you can expect to see at big-box retail stores. Online vendors will discount substantially off of list price, as always seems to be the case. Diamond's Radeon X1900 XT 512MB, for example, is currently selling for under $449 online, well below the $499 suggested retail price.
    The folks at Connect3D are even more oriented toward selling through online stores, and they were willing to give us a sense of likely pricing at e-tailers. Their versions of the Radeon X1950 XTX and CrossFire will come in at "under $450" at places like Newegg right out of the gate, and their X1900 XT 256MB should arrive at "well under $300," likely in the neighborhood of $275—all of which is right in line with ATI's projections. They also let slip word of another new card coming a few weeks behind the others: the Radeon X1950 Pro, which won't have all 48 pixel shaders enabled on it. They're expecting 256MB versions of the X1950 Pro to sell for under $199, with the 512MB version arriving at about $240. They were especially excited about the X1950 Pro and X1900 XT 256MB, which they acknowledged will fill some gaps in ATI's enthusiast-class product lineup.
    Both Diamond and Connect3D told us they were considering producing liquid-cooled versions of the Radeon X1950 XTX, as well, although those products aren't likely to arrive in the first wave of X1950 cards.
Full review:
    http://techreport.com/review/10615/a...graphics-cards



Here are two articles focusing on the GPUs, the cards themselves and the changes/new features present in the last two ATI cards posted here earlier.

11. #191 - Jorge-Vieira
    ATI Radeon X1650 XT



    ATI's Radeon X1650 XT graphics card


    DON'T LET THE Radeon X1650 XT's name fool you. Although the amalgamation of letters and numbers behind "Radeon" might lead you to believe this card is a direct heir of the notoriously poky Radeon X1600 XT, this puppy is much more potent than its predecessor. In fact, its GPU is more like two X1600 XTs fused together, with roughly twice the graphics processing power in nearly every meaningful sense. The X1650 XT has 24 pixel shader processors instead of 12; it has eight texturing units rather than four; and it can draw a healthy ocho pixels per clock, not just an anemic cuatro like the X1600 XT before it.
    Those numbers may be the recipe for success for the Radeon X1650 XT, making it a worthy rival of the GeForce 7600 GT at around $149. If so, this product arrives not a second too soon. It seems like ATI hasn't had a credible offering in this segment of the market since hooded flannel shirts were all the rage. Can the Radeon X1650 XT break the red team's mid-range curse? Let's have a look.
    The Radeon X1650 XT
    Meet the wild child
The Radeon X1650 XT's unassuming appearance conceals its true personality. Under that pedestrian single-slot cooler lies a wildly transgressive graphics card, driven by a GPU that refuses to honor the boundaries of class or convention. The X1650 XT is part of the Radeon X1600 series, yet its graphics processor is not the RV530 silicon that has traditionally powered cards in that product line. The intrigue gets even deeper when you examine this mysterious GPU, code-named RV560. Truth be told, this is actually the same graphics chip behind the Radeon X1950 Pro that we reviewed a couple of weeks ago, the RV570. For the X1650 XT, though, ATI has disabled portions of the chip and assigned it a new code name. If I recall correctly, this is the first time ATI has fabricated two code names for the same piece of silicon. So basically, ATI has chucked the conventions for both video card names and GPU code names in recent weeks, and the Radeon X1650 XT is the result.
    Not that there's anything wrong with that.
    In fact, the X1650 XT benefits from its upper-middle-class heritage. Its RV570 GPU (sorry, but I'm not calling it RV560) has had a portion of its on-chip resources deactivated, either because of faults in some parts of the chip or simply for the sake of product segmentation. This hobbled GPU can still take on the GeForce 7600 GT with its one good arm, though, thanks to 24 working pixel shaders, eight vertex shaders, and eight texture units/render back-ends. These rendering bits and pieces run at a GPU core clock of 575MHz. To keep card costs down, the Radeon X1650 XT has only a 128-bit path to memory (like the GeForce 7600 GT) and not 256 bits (like its big brother, the X1950 Pro.) The X1650 XT's 256MB of GDDR3 RAM runs at 675MHz.
    Dual dual-link DVI ports flank the X1650 XT's TV-out port
    The X1650 XT's cluster of ports befits a brand-new graphics card. The two dual-link DVI ports come with full support for HDCP, so they can participate in the copy-protection schemes used by the latest high-def displays.
    If you're driving a big display at high res with an X1650 XT, you may want to give it some help in the form of additional X1650 XT cards that run alongside it. That's a distinct possibility thanks to the pair of internal CrossFire connectors on the top edge of the card. We've tested the X1650 XT in a dual-card CrossFire config, and ATI has confirmed for us that they plan to enable support for more than two cards in CrossFire using staggered connectors at some point in the future, although we don't know much more than that.
    That's pretty much it for the Radeon X1650 XT's basic specifications. Of course, it's based on very familiar Radeon X1000-series GPU technology, with features and image quality that match everything up to the Radeon X1950 XTX. ATI says to expect cards at online retailers the week of November 13. The big remaining question is performance.

Full review:
    http://techreport.com/review/11131/a...-graphics-card








    Introducing the Radeon X1650 XT: A New Mainstream GPU from ATI

    Introduction

A few weeks ago brought the release of ATI's new Radeon X1950 Pro, a GPU designed to replace the X1900 GT and to improve CrossFire operation over its ATI predecessors. It would seem ATI is releasing lots of new products as the holidays come nearer, which isn't that surprising considering the heated competition between card makers that we always see during this time of year. The recent merger between ATI and AMD will make things even more interesting, and we are already seeing some changes at what used to be ATI (particularly with the new ATI/AMD website). We're curious to see what this merger will mean for ATI in the coming months.

    We recently looked at the X1950 Pro, and we found it to be a good competitor to the NVIDIA GeForce 7900 GS, assuming the price is right. From what we've seen so far though, the price for the X1950 Pro isn't where it should be, and this looks like a bit of a problem for ATI and their potential buyers right now. We aren't seeing many of these cards for sale right now, but those that are available are selling for much higher than the $200 target price ATI mentioned at the card's release. This makes us wonder what we will be seeing in the near future, price wise, for the next card from ATI: the newly launched Radeon X1650 XT.

    Yes, ATI has just launched the newest member of the X1650 family, and it looks to offer good performance competition for NVIDIA's GeForce 7600 GT. However, with the X1650 XT, ATI has one strike against it right off the bat. ATI has said that these parts will not be available for purchase until sometime in mid-November, which means we have on our hands another frustrating paper launch. We were glad to see the X1950 Pro launched with parts immediately available (even if they were $100 more expensive than expected), and considering the X1650 Pro and X1300 XT paper launched a while back, we were hoping ATI might have turned a new leaf in this regard; but it seems this isn't the case.

    The second strike ATI might have against it is something we mentioned earlier: the price. The X1650 XT is priced by ATI at $150, and although ATI enabled vendors to offer the card at this price, we aren't sure if this is what we will be seeing. In the past, prices for launched GPUs have been fairly close to ATI's suggested mark, but what we have been seeing with the X1950 Pro lately has us a little worried about the X1650 XT. We can speculate that with AMD buying out ATI, on top of the fact that the holiday season is coming up fast, things are a little hectic for ATI. This could account for some of these price and availability issues for their parts.

    That said, the point here is to take a look at a different card, the ATI Radeon X1650 XT, and give you our first impressions. We plan to look at the actual card and its features, as well as how it performs as a single GPU, then with two cards in CrossFire mode. We are also going to look at power consumption, and perhaps most importantly, how does the card compare with others available now and what's it worth to the average buyer. Our initial impressions of the X1650 XT and its performance aren't bad at all (provided manufacturers can hit the $150 target), but we'll delve into this later. For now, let's look at the card.

    The Card

    The X1650 XT is based on the older X1600 cards, but it's really a completely new spin on the silicon. Although the X1650 has lower clock speeds than the older X1600 XT, we will see better performance out of the X1650 XT because it has twice as many pixel pipelines (24 vs. 12) and internal bridges for CrossFire, which we will touch on later in the review. This new ATI card is fabbed on TSMC's 80 nanometer process. That means the chip is smaller, and it should run cooler as well. This also means it's cheaper for ATI to produce, which could be part of why ATI is offering a part with more pipelines for a (potentially) lower price. The 24 pipeline configuration also fills the gap between the 12 and 36 pipeline parts ATI has been offering. This should give them a little more clock speed flexibility in the mainstream arena.

In performance, the X1650 XT is poised to nudge a few of the current cards on the market out of the way, including the X1650 Pro. Despite the similar product names, the X1650 Pro (essentially an X1600 XT) and the X1650 XT are very different cards. In the past, the X1600 XT was meant by ATI to be competition for the NVIDIA 7600 GT, but as such it was a miserable failure. Its current replacement, the X1650 Pro, isn't able to do any better with only a 10MHz boost in core and memory clock speed over the X1600 XT. With the X1650 XT, however, ATI seems to have finally come up with some competition for this mainstream NVIDIA card. Of course price will be a factor when trying to determine actual competition and card value, but we will see in our performance section how they compete strictly in the gaming arena.

    Speaking of price, as we said in the introduction, the card comes with an ATI MSRP of $150. Whether it will actually be available at this price is anyone's guess, but we feel that given the performance (which we will see next), at this price the X1650 XT would be a good deal. As we've talked about before, price plays a vital role in the success of a graphics card, and no amount of power will make a card worth buying if the price isn't competitive. The value is important when shopping for a new card, and because the GPU market can be very fickle sometimes from week to week, pinning this down can be difficult.



Looking at the X1650 XT, we are initially struck by how it seems nearly identical to the X1600 XT which launched what seems like ages ago. Only by holding the two cards next to each other can you see the subtle differences. Both have dual DVI connections and a matte black heatsink with a small fan in them. Component selection and placement were probably tweaked to accommodate the internal CrossFire connectors. As looks go, the X1650 XT isn't nearly as impressive as the X1950 Pro; in fact, it's just the opposite. The original reference X1600 XT looked a bit crude in our opinion, and the X1650 XT is no different. But we realize these are only the reference designs, so we'll wait to see what different vendors do before passing aesthetic judgments. Besides, a card's looks are of no consequence when compared to its performance and value. So moving on, let's take a look at how the X1650 XT's specifications stack up against the rest of the cards out there. Then we'll see how well this ugly duckling from ATI performs.

    NVIDIA Graphics Card Specifications
    Vert Pipes Pixel Pipes Raster Pipes Core Clock Mem Clock Mem Size (MB) Mem Bus (bits) Price
    GeForce 7950 GX2 8x2 24x2 16x2 500x2 600x2 512x2 256x2 $600
    GeForce 7900 GTX 8 24 16 650 800 512 256 $450
    GeForce 7950 GT 8 24 16 550 700 512 256 $300-$350
    GeForce 7900 GT 8 24 16 450 660 256 256 $280
    GeForce 7900 GS 7 20 16 450 660 256 256 $200-$250
    GeForce 7600 GT 5 12 8 560 700 256 128 $160
    GeForce 7600 GS 5 12 8 400 400 256 128 $120
    GeForce 7300 GT 4 8 2 350 667 128 128 $100
    GeForce 7300 GS 3 4 2 550 400 128 64 $65

    ATI Graphics Card Specifications
    Vert Pipes Pixel Pipes Raster Pipes Core Clock Mem Clock Mem Size (MB) Mem Bus (bits) Price
    Radeon X1950 XTX 8 48 16 650 1000 512 256 $450
    Radeon X1900 XTX 8 48 16 650 775 512 256 $375
    Radeon X1900 XT 8 48 16 625 725 256/512 256 $280/$350
    Radeon X1950 Pro 8 36 12 575 690 256 256 $200-300
    Radeon X1900 GT 8 36 12 575 600 256 256 $220
    Radeon X1650 XT 8 24 8 575 675 256 128 $150-250
    Radeon X1650 Pro 5 12 4 600 700 256 128 $99
    Radeon X1600 XT 5 12 4 590 690 256 128 $150
    Radeon X1600 Pro 5 12 4 500 400 256 128 $100
    Radeon X1300 XT 5 12 4 500 400 256 128 $89
    Radeon X1300 Pro 2 4 4 450 250 256 128 $79


With more vertex power and a higher potential fill rate, we can expect the RV560 (the chip behind the X1650 XT) to perform in a better balanced manner than the RV530 (the heart of the X1600 XT). Double the raster pipes not only means better frame rates at higher resolutions, but better antialiasing and Z/stencil performance as well. More stencil and Z power should contribute to higher performance in advanced shadow rendering techniques, and the benefit of higher performance AA on a mainstream part speaks for itself.
Full review:
    http://www.anandtech.com/show/2110









    AMD ATI Radeon X1650 XT GPU Review


    Introduction and X1650 XT Specs

    Introduction

    The second in a pair of new mid-range graphics chip releases from ATI, the new Radeon X1650 XT GPU is aimed at the more casual gamer with a tighter budget on their PC components. A couple weeks ago AMD/ATI released the Radeon X1950 Pro that impressed us with its performance and pricing. Will the X1650 XT be able to do the same? And will AMD be able to keep their pricing and release schedule with the card this time?

    Let's find out...

    AMD's ATI Radeon X1650 XT

    As with all of the Radeon 1k-series of graphics cards, the X1650 XT is a simple derivative of ATI's R580 GPU technology that we saw with the X1900 and X1950 graphics cards.

    The X1650 XT is set at some fairly high clock speeds, though with fewer active pixel processing pipes, performance is scaled to the $150 segment fairly well. The X1650 XT will run at 575 MHz core clock and 1.35 GHz memory clock by default and will come with 256MB of GDDR3 memory. There are still two dual-link DVI connections and a standard video output connection as well. HDCP support is optional on all third-party cards but AMD claims that all of their Built-by-ATI cards will have HDCP support built into them, making this one of the least expensive HDCP-ready graphics cards on the market.
    The X1950 XTX, the top flagship card from ATI using the same GPU technology, has 48 total pixel pipes for graphics processing. The X1950 Pro released earlier this month had 36 pipes and the new X1650 XT has 24 pixel pipes. Here ATI provided a table that compared their X1650 XT to the NVIDIA 7600 GT card, showing that the ATI card has more pipelines than NVIDIA's option. Sure, that is indeed the case, but it is not necessarily an indicator of any gaming performance.

    Looking at the Card

    The AMD/ATI Radeon X1650 XT looks very much like the other mid-range and budget cards that ATI has brought onto the scene in the last year.
The card's heatsink and fan take up a single slot only and can be noisy when the fan is in full-speed spin mode. Luckily, once you get the card drivers installed, that doesn't seem to happen very often and the card runs relatively quiet. You'll also notice that these cards do not require an external power connection, indicating that total power usage is probably around 75 watts or below.
    The rear of the X1650 XT is empty as well with all of the 256MB of memory on the front of the card under the heatsink.
    On the connection side of the card you'll find two dual link DVI connections so you can plug in monitors like the 30" Dell with resolutions of 2560x1600 without a problem.
    Internal CrossFire (from X1950 Pro review)
    Since the very beginning of ATI's CrossFire technology introduction, we have been critical of ATI's implementation of dual-GPU technology. Using an external DVI-like dongle and an external programmable logic chip, with often debilitating limitations, it seemed very hacked together, an obvious last minute addition that went uncorrected for way too long. But with the introduction of the X1950 Pro card (and now the X1650 XT), ATI has implemented internal and integrated CrossFire technology.

As this slide tells us, the new compositing engine has been integrated into the GPU ASIC. No longer are you going to need to worry about buying a specific "master" CrossFire card to go with your standard "slave" cards! The data transfer between the compositing engines is now done internally as well, with a connection very similar to what NVIDIA's SLI connection looks like, but with two connectors instead of one. This internal connection is capable of transferring enough data for a 2560x2048 resolution @ 60 Hz, running on two independent 12-bit data paths.
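A back-of-the-envelope check of what that link spec implies, assuming one 24-bit colour value per pixel moves across the two 12-bit paths each link clock (the actual framing is not documented in the excerpt above):

width, height, hz = 2560, 2048, 60
bits_per_pixel = 24                  # assumption: one 24-bit pixel per transfer
link_width_bits = 2 * 12             # two independent 12-bit data paths
pixels_per_s = width * height * hz
link_clock_mhz = pixels_per_s * bits_per_pixel / link_width_bits / 1e6
print(f"{pixels_per_s / 1e6:.1f} Mpixels/s to composite")
print(f"~{link_clock_mhz:.0f} MHz link clock needed at {bits_per_pixel} bpp")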
What you might be surprised to learn though is that the technical functioning of CrossFire remains unchanged; the entire process is the same to the software as previous CrossFire versions. The functions of the custom Xilinx programmable logic have simply been moved into the GPU. So functionally, CrossFire remains the same -- any issues or limitations that existed before will be carried over here.
    If there is one place where ATI's CrossFire has had the advantage over NVIDIA's SLI multi-GPU technology, it is in platform support. CrossFire will run on any of the CrossFire-ready ATI chipsets for AMD or Intel platforms, as well as the Intel 975X chipset (and recently the P965 chipset). From what I have been told as well, support on NVIDIA motherboards is probably closer than we think too, as it is only a software switch holding it from customers.
    At the top of the X1650 XT card you can see the two sets of gold connectors for the new internal CrossFire connections.
    You'll need two of these cables for CrossFire to get the best performance. ATI told us that since their multi-GPU implementation is platform independent, they will be including one dongle cable with each X1650 XT graphics card. Motherboard vendors don't have to worry about that item then, and when a user buys two X1650 XT GPUs, they'll have all the hardware they need for CrossFire to run correctly.
    Here you can see our ATI X1950 Pro CrossFire setup running on the Intel 975XBX motherboard; no external dongles and a clean internal data connection! Software setup remains the same, you need only to check a single box in the Catalyst Control Center to enable CrossFire and you are off and running.
Full review:
    http://www.pcper.com/reviews/Graphic...-XT-GPU-Review


On 30 October 2006, ATI decided once again to bet strongly on the lower mid-range of the market, a segment ATI had almost always neglected and NVIDIA had dominated. To win that segment, dominated at the time by the NVIDIA 7600 GS, ATI launched the ATI X1650 XT.
This card used the new chip already present in the ATI X1950 PRO, with some shader and vertex units cut, and with that ATI renamed the chip RV 560... which in my view doesn't make much sense when the silicon is the same.
Since that chip had introduced a series of new features with the ATI X1950 PRO, the most important of which is internal CrossFire, this card already made use of them. Being an ATI card, image quality remained superb.
Support for DirectX 9.0c and the remaining features was also assured.
The price of this card was around €140 to €150, and in benchmarks it ran sometimes ahead of and sometimes behind NVIDIA's cards in the same market segment; this was one of the most powerful chips ATI had presented up to that point in this price range, practically doubling the specifications of previous chips in the same segment, so a little more performance was expected.
This card was worth it for its new features, low price and the possibility of building a low-cost multi-GPU system.
It is the second card in ATI's history to carry the new internal CF system.

12. #192 - Jorge-Vieira
    NVIDIA GeForce 8800 GTX


    Introduction

    DirectX 10 is sitting just around the corner, hand in hand with Microsoft Vista. It requires a new unified architecture in the GPU department that neither hardware vendor has implemented yet and is not compatible with DX9 hardware. The NVIDIA G80 architecture, now known as the GeForce 8800 GTX and 8800 GTS, has been the known DX10 candidate for some time, but much of the rumors and information about the chip were just plain wrong, as we can now officially tell you today.

    Come find out why the GeForce 8800 GTX should be your next GPU purchase.

    What is a Unified Architecture?
    The requirement of a unified architecture is one of the key changes to the upcoming release of DirectX 10 on Windows Vista. The benefits and pitfalls of a unified graphics architecture have been hotly debated since DX10 specs first became known several years ago. With Vista just months away now, both NVIDIA and ATI no longer get to debate on the logic of the move; now they have to execute on it.
A unified graphics architecture, in its most basic explanation, is one that does away with separate pixel pipelines and vertex pipelines in favor of a single "type" of pipeline that can be used for both.

    Traditional GPU Architecture and Flow
This diagram shows what has become the common GPU architecture flow, starting with vertex processing and ending with memory access and placement. In G70, and all recent NVIDIA and ATI architectures, there was a pattern that was closely followed to allow data to become graphics on your monitor. First, the vertex engine, which started out as pure transform and lighting hardware, processes the vertex data into cohesive units and passes it on to the triangle setup engine. Pixel pipes would then take the data and apply shading and texturing and pass the results on to the ROPs, which are responsible for culling the data, anti-aliasing it (in recent years) and passing it to the frame buffer for drawing on to your screen.
    This scheme worked fine, and was still going strong with DX9 but as game programming became more complex, the hardware was becoming more inefficient and chip designers basically had to "guess" what was going to be more important in future games, pixel or vertex processing, and design their hardware accordingly.
A unified architecture simplifies the pipeline significantly by allowing a single floating point processor (known as a pixel pipe or vertex pipe before) to work on both pixel and vertex data, as well as new types of data such as geometry, physics and more. These floating point processors then pass the data on to a traditional ROP system and memory frame buffer for output that we have become familiar with.
I mentioned above that because of the inefficiencies of the two-pipeline style, hardware vendors had to "guess" which type was going to be more important. This example showcases the point very well: in the top scenario, the scene is very vertex shader heavy while the pixel shaders are being underutilized, leaving idle hardware. In the bottom scenario, the reverse is happening, as the scene is very pixel shader intensive, leaving the vertex shaders sitting idle.
    Any hardware designer will tell you that having idle hardware when there is still work to be done is perhaps the single most important issue to address. Idle hardware costs money, it costs power and it costs efficiency -- all in the negative direction. Unified shaders help to prevent this curse on computing hardware.
In the first example, notice that the sample "GPU" had 4 sets of vertex shaders and 8 sets of pixel shaders; a total of 12 processing units that were used inefficiently. Here we have another GPU running with 12 unified processing shaders that can be dynamically allocated to work on vertex or pixel data as the scene demands. In this case, the top scene, which was geometry heavy, uses 11 of the 12 shaders for vertex work and 1 for pixel shading, using all 12 shaders to their maximum potential.
    This is of course the perfect, theoretical idea behind unified architectures, and in the real world the problem is much more complex.
    In the real world, there are more than 12 processor pipelines and the ability to break down a scene into "weights" like we did above is incredibly complex. NVIDIA's new G80 hardware has the ability to dynamically load balance in order to get as high of an efficiency out of the unified shaders as possible. As an example from Company of Heroes, on the left is a scene with little geometry and one on the right with much more geometry to process. The graph at the bottom here shows a total percentage usage of the GPU, yellow representing pixel shading work and red representing vertex shading work. When the scene shifts from the left to the right view, you can see that the amount of vertex work increases dramatically, but the total GPU power being used remains the same; the GPU has load balanced the available processing queue accordingly.
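A toy model makes the efficiency argument easy to see. The workload numbers below are arbitrary; the point is only that a fixed 4+8 split stalls on whichever pool is overloaded, while a unified pool simply works through the total.

# Fixed vertex/pixel split versus 12 unified shaders (illustrative numbers only).

def fixed_split_time(vertex_work, pixel_work, v_units=4, p_units=8):
    # Each pool finishes independently; the frame is done when the slower one is.
    return max(vertex_work / v_units, pixel_work / p_units)

def unified_time(vertex_work, pixel_work, units=12):
    # A unified pool just chews through the total amount of work.
    return (vertex_work + pixel_work) / units

for name, v, p in [("geometry-heavy", 110, 10), ("pixel-heavy", 10, 110)]:
    print(name, "fixed:", round(fixed_split_time(v, p), 1),
          "unified:", round(unified_time(v, p), 1))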

    DirectX 10

    So the move to unified shaders is a requirement of DX10 as I mentioned before, but that's not all that has changed in the move to double-digits. The goal from Microsoft's DX10 designers was mainly to improve the ease of programming and allow the designers to more easily implement improved graphics, effects and more. Shader Model 4.0 is being introduced with some big enhancements and geometry shaders make their debut as well. Another important note about DX10 is that it is much more strict on the hardware specifications -- with no more "cap bits" hardware vendors can't simply disable some DX10 features to qualify as DX10 hardware, as Intel did in many cases with DX9.
    The updated DX10 pipeline looks a little something like this; new features are listed off to the right there. Geometry shaders and stream output are two very important additions to the specification. Geometry shaders will allow dynamic modifications of objects in the GPU (rather than the CPU) and will give game creators a boost in creativity. Stream output allows the processing engine (GPU) that DX10 runs on to communicate within the GPU, effectively allowing the pipelines to communicate with each other by outputting to shared memory.
The geometry shader is placed right after the vertex shader in the DX10 pipeline, allowing the vertex data to set up the geometry that may be modified by the new processing unit. The input and output from the geometry shaders must follow the same basic types as points, lines or triangle strips, but the data has the possibility of being modified inside the shader to produce various effects.
Here are some examples of what designers might be able to do with geometry shaders; automatic shadow volume generation and physical simulations are among the most powerful. Real-time environment creation is a fantastic idea that could allow a game's replay value to be potentially limitless; imagine games where the world really DOESN'T end, with terrain dynamically generated on the fly by the gamer's GPU so that it differs from any other gamer's.
    Stream output is the other new addition to the DX10 pipeline and its importance will be seen in great detail through out this review and in the future months as more about NVIDIA's processing potential is revealed. Strictly for gaming, stream output allows for a mid-phase memory write by the processing engine to store geometry or vertex shader data. By enabling multi-pass operations, programmers can now do recursion (the bane of my undergraduate years...) as well as numerous other "tricks" to improve their software.
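The multi-pass idea is worth a small illustration. The sketch below is a plain-CPU stand-in for the concept, not DX10 code: pass one streams expanded geometry out to a buffer instead of rasterizing it, and pass two consumes that buffer as ordinary input; the point-subdivision rule is an arbitrary stand-in for a geometry shader.

def pass1_expand(points):
    # "Geometry shader": emit each point plus the midpoint to its neighbour,
    # then stream the result out to a buffer rather than drawing it.
    out = []
    for a, b in zip(points, points[1:]):
        out += [a, (a + b) / 2.0]
    out.append(points[-1])
    return out

def pass2_shade(buffered):
    # Second pass re-ingests the buffered geometry as vertex input.
    return [round(p, 3) for p in buffered]

line = [0.0, 1.0, 2.0]
buf = pass1_expand(line)     # pass 1: stream output
print(pass2_shade(buf))      # pass 2: [0.0, 0.5, 1.0, 1.5, 2.0]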
    Maybe you remember a thing called physics? It's all the rage in gaming recently, and with the ability to output data to memory mid-phase in the processing pipeline, NVIDIA is going to be able to compete with AGEIA's hardware more directly as that ability to communicate between pipes is what gave AGEIA's PhysX engine a performance advantage.
Another example of stream output at work is improved instancing -- where before you were limited to instancing items like grass that would all look the same and follow the same paths, with stream output you can have instanced items that carry individual "states" and attributes, allowing programmers to use them for unique characters and items.
    For data junkies, this table summarizes up the changes moving from DX8 and Shader Model 1.0 to DX10 and the new SM4.0 specs.
    All of this new DX10 ability will allow programmers to do more than ever on the GPU, removing the CPU as a gaming bottleneck in many cases. Here are a couple of examples: above we see an algorithm for human hair simulation. Before, the majority of the physics and setup work was done on the CPU but now with the new options DX10 provides it can all be moved to the GPU.
    Another example is using a stencil shadow algorithm (a popular method in current games) where most of the work previously relegated to the CPU can be moved to the GPU using DX10.
    As an example of DX10 at work, this NVIDIA demo shows a landscape being created on the fly using geometry shaders with a particle system running only on the GPU to simulate the water running down the rock. Oh, and the graphics are rendered on it too.


    The G80 Architecture

    Well, we've talked about what a unified architecture is and how Microsoft is using it in DX10 with all the new features and options available to game designers. But just what does NVIDIA's unified G80 architecture look like??

    Click to Enlarge
    All hail G80!! Well, um, okay. That's a lot of pretty colors and boxes and lines and what not, but what does it all mean, and what has changed from the past? First, compared to the architecture of the G71 (GeForce 7900), which you can reference a block diagram of here, you'll notice that there is one less "layer" of units to see and understand. Since we are moving from a dual-pipe architecture to a unified one, this makes sense. Those eight blocks of processing units there with the green and blue squares represent the unified architecture and work on pixel, vertex and geometry shading.
    Even the setup events at the top of the design are completely new, from the Host and below. The top layer of the architecture that includes the "Vtx Thread issue, Geom Thread Issue and Pixel Thread Issue" units is part of the new thread processor and is responsible for maintaining the states of the numerous processing threads that are active at one time and assigning them to (or issuing) processing units as they are needed. With this many processing units and this many threads, this unit is going to keep quite busy...
Okay, so how many are there already?? There are 128 streaming processors that run at 1.35 GHz accepting dual issue MAD+MUL operations. These SPs (streaming processors) are fully decoupled from the rest of the GPU design, are fully unified and offer exceptional branching performance (hmm...). The 1.35 GHz clock rate is independent of the rest of the GPU, though all 128 of the SPs are based off of the same 1.35 GHz clock generator; in fact you can even modify the clock rate on the SPs separately from that of the GPU in the overclocking control panel! The new scalar architecture on the SPs allows longer shader applications to run more efficiently when compared to the vector architecture of the G70 and all previous NVIDIA designs.
    The L1 cache shown in the diagram is shared between 16 SPs in each block, essentially allowing these 16 units to communicate with each other in the stream output manner we discussed in the DX10 section.
    Looking at the raw numbers, you can see that the GeForce 8800 SP architecture creates some impressive processing power, resulting in more than double the "multiplies" that the G71 or R580 could muster. Also, we found out that the new G80 SPs are 1 to 2 orders of magnitude faster than G71 was on branching -- this should scare ATI as their branching power was one of the reasons their R580 architecture was able to hold off the 7900 series for as long as it did.
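Those raw numbers are easy to reproduce. Counting the dual-issue MAD (2 flops) plus MUL (1 flop) per clock for each scalar processor:

sps, clock_ghz, flops_per_clock = 128, 1.35, 3   # 128 SPs, 1.35 GHz, MAD+MUL
print(round(sps * clock_ghz * flops_per_clock, 1), "GFLOPS peak")   # 518.4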
In previous generations of NVIDIA's hardware, the texture units actually used a small portion of the pixel shader in order to keep from doubling up on some hardware. This has the potential to create "bubbles" in the GPU processing that looked like the GeForce 7-series diagram above. Math operations often had to wait for texture units to complete their work before continuing. That is no longer the case with G80; there are literally thousands of threads in flight at any given time, allowing memory access to completely decouple from processing work. This keeps those "bubbles" from occurring in this new design, allowing for seemingly faster memory access times.
This threading process has been dubbed "GigaThreading" by NVIDIA and refers to the extremely high number of active threads at any given time. In a CPU, when a cache miss occurs, the CPU usually has to wait for that data to be retrieved and the thread stalls as it waits. On the G80, if there is a data cache miss, the problem isn't so severe as there are many threads ready to be placed into one of the 128 SPs while the data is retrieved for the cache miss. And in case you were wondering what this constant thread swapping might add to computing overhead, NVIDIA told us that it technically takes 0 clocks for threads to swap!
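A rough model shows why so many threads are needed. If a miss costs a few hundred cycles and a thread only issues a handful of cycles of work before its next miss, each SP needs dozens of threads parked and ready; the cycle counts below are made up purely for illustration.

def threads_to_hide(latency_cycles, busy_cycles):
    # Ceiling of latency/busy waiting threads, plus the one currently running.
    return -(-latency_cycles // busy_cycles) + 1

for latency in (200, 400):
    print(f"{latency}-cycle miss, 4 busy cycles per thread ->",
          threads_to_hide(latency, 4), "threads in flight per SP")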
    Moving on to the texture units on the G80, there are two texture fetch (TF) units for every texture address (TA) unit; this allows for a total of 32 pixels per clock of texture addressing and 64 pixels per clock of texture filter ops. These units have been optimized for HDR processing and work in full FP32-bit specifications, but can support FP16 as well. Because of all this power, the G80 can essentially get 2x anisotropic filtering for free, as well as FP16 HDR for free.
This small table compares the 7900 GTX, X1950 XTX and the 8800 GTX in terms of texture fill rates; on 32-bit dual texturing, the X1950 XTX and the 7900 GTX get approximately 50% performance with 2x AF, while the 8800 GTX keeps 100% performance and delivers 3.6x faster 16x AF rates than the X1950 XTX.
The ROPs on G80 are changed a bit from the G71 architecture as well, starting with support for up to 16 samples for AA; however these are not programmable samples like those on the ATI X1950 architecture; NVIDIA's are still using static, rotated grid sample patterns. As if we would allow NVIDIA to do otherwise, antialiasing is supported with HDR! The ROPs can support up to 16 samples and 16 Z samples per partition with up to 32 pixels per clock Z-only per partition. The color and Z compression designs have been improved by a multiple of 2 and the ROPs now support 8 render targets.
    With six ROP and Z partitions available, that gives the G80 a total of 96 AA samples and 96 Z samples per clock, as well as 192 pixels per clock of Z-only work. Also, each ROP partition has a 64-bit interface with the frame buffer; if you do your math you'll come up with an odd-sounding 384-bit total memory interface between the GPU (and its ROPs) and the memory on the system. Each 64-bit interface is attached to 128MB of memory, totalling 768MB of frame buffer. Yes, the numbers are odd; they aren't the nice round numbers we are used to. But there is no trick to it as many had thought; NVIDIA is not segregating some portion for vertex work and some for pixel work, or anything like that.
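    The odd-sounding totals fall straight out of the partition count. A quick plain-C sketch of the arithmetic, using only the per-partition figures quoted above (my own illustration):

        #include <stdio.h>

        int main(void)
        {
            const int partitions    = 6;    /* ROP / memory partitions on G80 */
            const int bits_per_part = 64;   /* bus width per partition        */
            const int mb_per_part   = 128;  /* memory attached per partition  */

            printf("Bus width   : %d-bit\n", partitions * bits_per_part);    /* 384-bit */
            printf("Frame buffer: %d MB\n",  partitions * mb_per_part);      /* 768 MB  */

            /* AA / Z throughput per clock, from the per-partition figures above */
            printf("AA samples  : %d per clock\n", partitions * 16);         /* 96      */
            printf("Z-only      : %d pixels per clock\n", partitions * 32);  /* 192     */
            return 0;
        }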
    The Z culling process on the G80 has been improved drastically as well, including the ability to remove pixels that are not visible during the rendering BEFORE they are processed. NVIDIA was pretty tight-lipped about how exactly they are doing this, but keeping in mind that the NVIDIA driver uses a just-in-time compiler before passing instructions on to the GPU, it's possible that the compiler is doing some work to help the GPU out in this case. Either way, the more capable the Z-culling is on the core, the less work the GPU has to do per frame, improving performance and game play.

    NVIDIA CUDA

    The idea of GPGPU (general purpose graphics processing unit) isn't new, but momentum has been building behind GPGPU work since ATI and NVIDIA started pushing it over a year ago. ATI recently made headlines by working with Stanford to produce a GPU-based Folding @ Home client while NVIDIA was quiet on the subject. I think now we all know why -- NVIDIA didn't want to talk up standard GPGPU when they had something much better lined up.
    If you paid attention on the previous pages, you have surely noticed that with the changes DX10 brings in stream output and unified pipelines, and with NVIDIA's work in threading and branching, the G80 architecture is looking more and more like a processor than the GPU we have come to love. But worry not, it's all for the best! In NVIDIA's view, the modern CPU is aimed towards multi-tasking; using large cores that are instruction focused (able to do MANY different things) but are not granular in the way a GPU is. Current GPGPU configurations are difficult to use and require programmers to learn graphics APIs and streaming memory in ways they are not used to at all.
    NVIDIA CUDA (Compute Unified Device Architecture) attempts to remedy those GPGPU and CPU issues; by adding some dedicated hardware to G80 for computing purposes, NVIDIA is able to design a complete development solution for thread computing. Probably the most exciting point is the fact that NVIDIA is making available a C compiler for the GPU that handles threading programs across parallel data and that scales with new GPUs as they are released; super computing developers aren't interested in re-tooling their apps every 6 months! NVIDIA is working to create a complete development environment for programming on their GPUs.
    The first example of how this might be used was physics; a likely starting point given what we know about GPGPU work. The work is about finding the new positions of the flying boxes by updating each old position using the velocity and elapsed time.
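    The per-box math itself is trivial; the interesting part is that every box's update is independent of every other box, which is exactly what maps onto the SPs. A minimal plain-C sketch of that update (my own illustration, not NVIDIA's code):

        typedef struct { float x, y, z; } vec3;

        /* Advance one box: new position = old position + velocity * elapsed time. */
        vec3 step_box(vec3 pos, vec3 vel, float dt)
        {
            pos.x += vel.x * dt;
            pos.y += vel.y * dt;
            pos.z += vel.z * dt;
            return pos;
        }

        /* On a CPU this loop runs (mostly) one box at a time; on G80 each
         * iteration would become one thread on one of the 128 SPs.         */
        void step_all_boxes(vec3 *pos, const vec3 *vel, int n, float dt)
        {
            for (int i = 0; i < n; ++i)
                pos[i] = step_box(pos[i], vel[i], dt);
        }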
    Looking at how the CPU would solve this problem, based on solving one equation at a time (maybe 2 or 4 with multiple core processors now), we can see that the design is inefficient for solving many of these identical equations in a row. Operating out of the CPU cache, the large, complex control logic is used to keep the CPU busy, but even it can't do anything about the lack of simultaneous processing that current CPUs offer.
    The current generation of GPGPU options would solve this problem faster due to the parallel nature of GPUs. The shaders would solve the equations and could share information using the video memory.
    NVIDIA's CUDA model would thread the equations and the GPUs shaders would be able to share data much faster using the shared data cache as opposed to the video memory.
    What this example doesn't take into consideration is the need for the threads to communicate during execution; something that is ONLY possible on DX10 capable hardware using stream output. Take an example of calculating air pressure: the equation involves calculating the influences of all neighboring air molecules. Only through the stream output option could the equations be truly run in parallel, using the shared cache to talk to each other much faster than the current generation of GPUs could in GPGPU architectures.
    The G80 has a dedicated operating mode specifically for computing purposes outside of graphics work. It essentially cuts out the unnecessary functions and unifies the caches into one cohesive data cache for ease of programming. To super computing fanatics, the idea of having 128 1.35 GHz processors working in parallel processing modes for under $600 is probably more than they can handle -- and NVIDIA hopes they buy into it and is doing all they can to get the infrastructure in place to support them.
    Some quick performance numbers that NVIDIA gave us comparing the GeForce 8800 to a dual core Conroe running at 2.67 GHz show significant leaps. Ranging from 10x speed up on rigid body physics to 197x on financial calculations (think stock companies), if these applications can come to fruition, it would surely bring a boom in the super computing era.

    New and Improved AA

    With the new G80 architecture, NVIDIA is introducing a new antialiasing method known as coverage sampled AA. Because of the large memory storage that is required for multisampled AA (the most commonly used AA), moving beyond 4xAA was not efficient, and NVIDIA is hoping that CSAA can solve the issue by offering higher quality images with smaller storage requirements.
    This new method of AA always computes and stores the boolean coverage of 16 samples and then compresses the color and depth information into the memory footprint of either 4x multisamples or 8x multisamples. The static nature of the memory footprint allows NVIDIA to know how much space is needed for CSAA to run at any resolution and allows the hardware and software to plan ahead better.
    This table shows how the new CSAA methods compare in memory storage and samples. You can see that there is still only a single texture sample required for CSAA to work (like multisampling) but that with CSAA you can get 16x quality levels with only 4 or 8 color and Z samples, resulting in 16 coverage samples total.
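    To get a feel for why the footprint matters, here is a rough estimate of AA buffer sizes at 1600x1200, assuming an uncompressed 4-byte color and 4-byte depth value per stored sample and about 2 bytes of coverage mask per pixel; real hardware compresses, so treat these strictly as my own ballpark figures, not NVIDIA's:

        #include <stdio.h>

        int main(void)
        {
            const double pixels   = 1600.0 * 1200.0;
            const double bytes_cz = 4 + 4;            /* color + depth per stored sample */
            const double mb       = 1024.0 * 1024.0;

            double msaa4  = pixels *  4 * bytes_cz / mb;   /* ~58.6 MB  */
            double msaa16 = pixels * 16 * bytes_cz / mb;   /* ~234.4 MB */
            double csaa16 = msaa4 + pixels * 2 / mb;       /* ~62.3 MB  */

            printf("4x MSAA : %6.1f MB\n", msaa4);
            printf("16x MSAA: %6.1f MB\n", msaa16);
            printf("16x CSAA: %6.1f MB (4 color/Z samples + 16 coverage bits)\n", csaa16);
            return 0;
        }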
    You can see from the above slide that the advantages look pretty good for CSAA, though I am not sure how well it will be accepted by gamers. You can see with the fallback mentioned that if there is a problem that won't allow CSAA to work (such as inter-penetrating triangles and stencil shadows) then it can revert to standard AA methods easily and transparently.
    Coverage Sampled AA Example
    Here I have taken a screen grab from 3DMark06 to show the differences that CSAA can make in image quality. We have 0xAA, 2xAA and 4xAA shown in addition to the 2xAA with 16xCSAA and 4xAA with 16xCSAA screenshots.
    To enable CSAA, the game must have AA enabled in the game AND you must set the NVIDIA control panel driver to "enhanced" mode, seen below:



    1600x1200 - 0xAA - CSAA Off - 16xAF - Click to Enlarge

    1600x1200 - 2xAA - CSAA Off - 16xAF - Click to Enlarge

    1600x1200 - 2xAA - CSAA 16x - 16xAF - Click to Enlarge

    1600x1200 - 4xAA - CSAA Off - 16xAF - Click to Enlarge

    1600x1200 - 4xAA - CSAA 16x - 16xAF - Click to Enlarge
    In my evaluation of these screen shots, the CSAA mode at the 16x setting looks the same whether the in-game AA option is set to 2x or 4x; this is the normal behavior. Comparing the image quality of the CSAA enabled screen grabs to the 2xAA setting, there is an obvious improvement. That improvement is less noticeable when compared to the 4xAA image quality, but I'd still have to side with CSAA bringing a slightly better picture to the game.
    How does this affect performance though?

    1600x1200 - 3DMark06 - GT1

              0xAA      2xAA      2xAA + 16xCSAA    4xAA      4xAA + 16xCSAA
    FPS       34.017    31.022    26.257            28.023    26.251

    Overall, the performance with CSAA enabled is about on par with standard, in-game 4xAA. And with slightly better image quality to boot, this could make CSAA a well utilized feature in GeForce 8800 cards.

    New and Improved Texture Filtering

    I mentioned in the discussion on the new G80 architecture that the texture filtering units are much improved and offer us better IQ options than ever before. While we haven't looked at it in depth on PC Perspective recently, there has been a growing concern over the filtering options that both ATI and NVIDIA were setting in their drivers, and the quality they produced. If you have ever been playing a game like Half Life 2 or Guild Wars (probably one of the worst) and noticed "shimmering" along the ground, where textures seem to "sparkle" before they come into focus, then you have seen filtering quality issues.

    ATI X1950 XTX - 16x AF - Default Settings - Click to Enlarge
    This first diagram shows us what 16x AF at default settings on the ATI X1950 XTX produces in terms of image quality. In short, anywhere you see the colors banding outwards towards the edge of the screen, filtering will be lower in that area. At 45 and 135 degree angles (and their inverses) in the above filtering algorithm, filtering is not done as precisely because of the larger amount of processing required at these angles. This is what produces the shimmering and sparkles you see.

    ATI X1950 XTX - 16x AF - High Quality AF Settings - Click to Enlarge
    ATI started offering a new "high quality AF" mode in their Catalyst driver with the X1900 release, and this greatly enhanced the AF quality levels we saw. It also reduced the shimmering effect I had come to despise.

    NV 7900 GTX - 16x AF - Default Settings - Click to Enlarge
    On the 7900 GTX, NVIDIA's default settings look much like the ATI default settings did with the same angle issues seen on the ATI cards.

    NV 7900 GTX - 16x AF - High Quality AF Settings - Click to Enlarge
    However, even when setting the filtering level to "high quality" NVIDIA's driver does not allow for much improvement in the filtering quality on the 7900.

    NV 8800 GTX - 16x AF - Default Settings - Click to Enlarge
    Image quality fanatics rejoice; this is the default setting on the new GeForce 8800 GTX. At 16x AF, the 8800 GTX shows much better filtering quality than even the high quality setting on the ATI X1950 XTX card.

    NV 8800 GTX - 16x AF - High Performance AF Settings - Click to Enlarge
    Even when set at "high performance" (which we've never told anyone to do) the 8800 GTX is impressive.

    NV 8800 GTX - 16x AF - High Quality AF Settings - Click to Enlarge
    Finally, when setting the 8800 GTX to "high quality" mode in the driver, we see an improvement from the default settings; when you see how much power this card has you might find yourself pretty likely to turn features like this on!

    The 8800 GTX and 8800 GTS

    NVIDIA's Specifications
    After talking about architecture and technology for what seems like forever, let's get down to the actual product you can plug into your system. NVIDIA is releasing two separate graphics cards today: the GeForce 8800 GTX and the 8800 GTS.
    The new flagship is the 8800 GTX card, coming in at an expected MSRP of $599 with a hard launch; you should be able to find these cards for sale today. The clock speed on the card is 575 MHz, but remember that the 128 stream processors run at 1.35 GHz, and they are labeled as the "shader" clock rate here. The GDDR3 memory is clocked at 900 MHz, and you'll be getting 768MB of it, thanks to the memory configuration issue we talked about before. There are dual dual-link DVI ports and an HDTV output as well.
    The runner up for today is the 8800 GTS, though it is still a damn fast card. Expected to sell for $449 and should be ready for launch today, the 8800 GTS runs at a core clock speed of 500 MHz and has 96 SPs that run at 1.2 GHz, compared to the 1.35 GHz on the GTX model. The 640MB of GDDR3 memory runs at 800 MHz and the same dual dual-link DVI ports and HDTV output connections are included.
    Those of you looking for HDCP support will be glad to know that it is built into the chip, but an external CryptoROM is still necessary (and needs to be included by the board partner) for HDCP to actually function. At first glance, it does look like most of the first run of 8800 GTX cards are going to support HDCP though; GTS cards as well.
    The Reference Sample
    The NVIDIA GeForce 8800 GTX card is big; no getting around that. It's about an inch and a half longer than the 7900 GTX cards, though it's not really noticeably heavier; the X1950 XTX definitely has it beat there.

    Click to Enlarge
    The heatsink on the card is big as well, leaving most of the PCB hidden behind it, save for the far right edge of it. The fan on the dual-slot cooler is pretty quiet, and I didn't have any issues with fan noise like I did with the X1900-series of cards last year.
    All 768MB of GDDR3 memory is located on the front of the card, and the rear of the PCB is pretty empty.
    Here you can see the gun-metal color of the case bracket on the card, though I don't know if this is going to be carried over to the retail cards. There are two dual-link DVI connections here, so you can support two of the Dell 30" monitors if you are so inclined! The TV output port there supports HDTV as well with a dongle that most retailers will include.
    Yeah, you'll notice that there are TWO PCIe power connectors on the GeForce 8800 GTX card (though the 8800 GTS will only require one). NVIDIA says that since the card can pull more than the 150 watts that a single PCIe power connection would technically allow (75 watts from the PCIe port and 75 watts from the single 6-pin connector), they erred on the side of caution by including two. They did allude to another graphics card that pulled more than 150 watts but did NOT have two connections for power...I think that card starts with "X19" and ends with "50 XTX". You are required to connect two power cables though, as leaving one empty will cause the system to beep incessantly.
    The 8800 GTX is not without an SLI connection, though NVIDIA wasn't ready for SLI at the time of this launch; they are working on final drivers. But, we did notice that there are two SLI connections here, much like we saw on the ATI X1950 Pro cards last month. This is probably going to be used for chaining more than two cards together in a system.

    Click to Enlarge
    Removing the heatsink from the card shows this mammoth chip under the hood...simply...huge. The G80 die is surrounded by a heatsink leveling plate for safety reasons and then by the 12 64MB GDDR3 memory chips.
    Using the quarter test, I can officially say the G80 is the biggest chip EVAR!!! Or maybe just the biggest I have tested, but you get the point. How else did you expect 681 million transistors to look? For comparison, below is the ATI X1950 XTX chip next to the SAME quarter.
    No camera tricks here!!!
    Finally, this little chip off to the side is the NVIDIA TMDS display logic put into a custom ASIC. NVIDIA claims this was done to "simplify package and board routing as well as for manufacturing efficiencies"; sounds like 681 million was the limit to me!
    Full analysis:
    http://www.pcper.com/reviews/Graphic...d-Architecture

    GeForce 8800: Here Comes the DX10 Boom


    GeForce 8800GTX is head and shoulders above the competition.

    We have been hearing about DirectX 10 hardware and the miraculous advantages it has over DX9. We have even seen the screenshots of games in development. However, until today, the hardware has been lacking. Earlier today we wanted to whet your appetites for Direct3D (D3D) hardware and Nvidia delivers the goods first. We are pleased to announce the arrival of DirectX 10 compliant hardware in the form of Nvidia's GeForce 8800GTX and 8800GTS.
    As our DX10 preview article concluded, a unified architecture will get more gains out of the shader units as they can be utilized more efficiently than a fixed function layout. Ushering in the new era of computer graphics is the GeForce 8800GTX with 128 unified shader units and the GeForce 8800GTS with 96 shader units. Long gone are the days of pipelines, finally. Without further ado, let's look inside the mouth of the beast and see how this thing works.
    slide show: GeForce 8800GTX/GTS
    More Information on DirectX 10

    What Direct3D 10 Is All About
    Today's launch of the Nvidia GeForce 8 series marks the advent of next-generation graphics. What can we expect from graphics makers with respect to DirectX 10 hardware?
    The New Graphics
    A tale of Direct X 10, and rumors of the hardware to drive it. While the demand for Direct X 9 hardware is not slipping, and more graphics cards are constantly being launched, there is much interest in this new standard and the hardware that will support it.
    The Graphics State Of The Union
    Tom's Hardware graphics editor Polkowski is concerned about the 3D arms race. Power and heat dissipation are skyrocketing, but external graphics boxes could eliminate the imminent need for 1,000 W power supplies.



    Worth twice its weight in gold, a wafer with 80 graphics processing cores can deliver about twice the performance of the GeForce 7900GTX (G71). A 681-million transistor count makes for a large silicon footprint, but when asked about its size, CEO Jen-Hsun Huang replied: "If my engineers said that they could double performance by doubling the amount of silicon used, I would have told them 'go for it!'"
    As time has shown, doubling does not mean double performance but Nvidia seems to have struck the right balance of technology advances with silicon engineering and implementation.
    Staying close to DX10 specifications, GeForce 8800GTX and 8800GTS fully comply with the DX10 standard with Shader Model 4.0, various data storage and transmission specifications, Geometry Shaders and Stream Out. While you have already seen how DX10/D3D10 compliant hardware should operate, let's look at how Nvidia gets the job done.
    To start, Nvidia deviated from the fixed function design that the industry had been using for the past 20 years in favor of a unified shader core.

    We have shown similar images in the past demonstrating the trend towards more pixel shading. Nvidia as well understands this trend and moved to balance the utilization needs by implementing unified shaders running threaded data streams to maximize efficiency and performance.

    Nvidia said: "The GeForce 8800 design team realized that extreme amounts of hardware-based shading horsepower would be necessary for high-end DirectX 10 3D games. While DirectX 10 specifies a unified instruction set, it does not demand a unified GPU shader design, but Nvidia GeForce 8800 engineers believed a unified GPU shader architecture made most sense to allow effective DirectX 10 shader program load-balancing, efficient GPU power utilization and significantly improved GPU architectural efficiency." This logically makes the most sense as pointed out in our Direct3D 10 Preview.



    Click to see a larger version of this image.

    The processor core itself operates at a frequency of 575 MHz for the GeForce 8800GTX and 500 MHz on GeForce 8800GTS. While the rest of the core runs at 575 MHz (or 500 MHz), the shader core has its own independent clock generator. GeForce 8800GTX runs at 1,350 MHz and the 8800GTS' clock speed is 1,200 MHz.
    "Streaming Processor" is the term given to each shader core unit. The GeForce 8800GTX has 16 sets of eight streaming processors in a block. A total of 16 blocks makes up the entire number of 128 SPs. Like ATI's design in R580 and R580+ with its Pixel Shader units, Nvidia stated that more units can be added to future designs and some can be taken away. This can be seen in the implementation of 96 streaming processors in GeForce 8800GTS.

    Click to see a larger version of this image.


    The previous issues with Nvidia not being able to do AA with HDR are history. Each ROP supports frame buffer blending. This means that both FP16 and FP32 render targets can be used with multi-sample antialiasing. Under D3D10, eight Multiple Render Targets can be utilized in conjunction with new compression technologies to accelerate color and Z processing in the ROPs.
    The GeForce 8800GTX can fill 64 textures per clock cycle, and at 575 MHz, it can serve up a maximum of 36.8 billion textures per second (GeForce 8800GTS = 32 billion/sec). The GeForce 8800GTX has 24 ROPs, and when running at a core frequency of 575 MHz, the card has a peak pixel throughput of 13.8 Gpixels/sec. Similarly, the GeForce 8800GTS version has 20 ROPs and therefore has a peak fill rate of 10 Gpixels/sec at 500 MHz.
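    Those throughput figures are simply unit count times clock; a quick plain-C check of the GTX and GTS numbers quoted above (my own arithmetic, not from the review):

        #include <stdio.h>

        int main(void)
        {
            /* GeForce 8800 GTX */
            double gtx_tex = 64 * 575e6 / 1e9;   /* texture fills per clock * core clock = 36.8 G/s */
            double gtx_pix = 24 * 575e6 / 1e9;   /* ROPs * core clock                    = 13.8 G/s */

            /* GeForce 8800 GTS */
            double gts_pix = 20 * 500e6 / 1e9;   /* ROPs * core clock                    = 10.0 G/s */

            printf("8800 GTX texture fill: %.1f Gtexels/s\n", gtx_tex);
            printf("8800 GTX pixel fill  : %.1f Gpixels/s\n", gtx_pix);
            printf("8800 GTS pixel fill  : %.1f Gpixels/s\n", gts_pix);
            return 0;
        }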
    Nvidia GeForce Reference Specifications
                                                8800GTX       8800GTS       7950GX2       7900GTX       7800GTX 512   7800GTX
    Process Technology (nm)                     90            90            90            90            110           110
    Processor Core                              G80           G80           G71           G71           G70           G70
    Number of Processors                        1             1             2             1             1             1
    Transistors per Processor (Millions)        681           681           278           278           302           302
    Vertex Frequency (MHz)                      1350          1200          500           700           550           470
    Core Frequency (MHz)                        575           500           500           650           550           430
    Memory Clock (MHz)                          900           800           600           800           850           600
    DDR Rate (MHz)                              1800          1600          1200          1600          1700          1200
    Vertex Shaders (#)                          128           96            16            8             8             8
    Pixel Shaders (#)                           128           96            48            24            24            24
    ROPs (#)                                    24            20            32            16            16            16
    Memory Interface (bit)                      384           320           256           256           256           256
    Frame Buffer Size per Processor (MB)        768           640           512           512           512           256
    Memory Bandwidth (GB/sec) per Processor     86.4          64            38.4          51.2          54.4          38.4
    Vertices/Second (Millions)                  10800         7200          2000          1400          1100          940
    Pixel Fill Rate (# ROPs x clk, Bil/sec)     13.8          10            16            10.4          8.8           6.88
    Texture Fill Rate (# pipes x clk, Bil/sec)  36.8          32            24            15.6          13.2          10.32
    RAMDACs (MHz)                               400           400           400           400           400           400
    Bus Technology                              PCI Express   PCI Express   PCI Express   PCI Express   PCI Express   PCI Express
    Check out the GeForce 8800GTX/GTS slide show.
    If you were scratching your head at the memory bus width, here is your answer. From the core logic image on the previous page, you can see that six memory partitions exist on a GeForce 8800 GTX GPU. Each of these provides a 64-bit interface to memory, yielding a 384-bit combined interface width. The 768 MB of GDDR3 frame buffer memory is attached to a memory subsystem that utilizes a high-speed crossbar design, similar to GeForce 7x GPUs. This crossbar supports DDR1, DDR2, DDR3, GDDR3 and GDDR4 memory.
    The GeForce 8800GTX uses GDDR3 memory, with a default clock speed of 900 MHz (GTS version is clocked at 800 MHz). With a 384-bit (48 byte-wide) memory interface running at 900 MHz (1800 MHz DDR data rate), frame buffer memory bandwidth is very high at 86.4 GB/sec. With 768 MB of frame buffer memory, far more complex models and textures can be supported at high resolutions and image quality settings.
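    The 86.4 GB/sec figure is easy to reproduce: bus width in bytes times the effective data rate. A quick plain-C sketch of the arithmetic (my own, using the clocks quoted above):

        #include <stdio.h>

        int main(void)
        {
            double gtx = (384 / 8) * 1800e6 / 1e9;  /* 48 bytes * 1800 MT/s = 86.4 GB/s */
            double gts = (320 / 8) * 1600e6 / 1e9;  /* 40 bytes * 1600 MT/s = 64.0 GB/s */

            printf("8800 GTX: %.1f GB/s\n", gtx);
            printf("8800 GTS: %.1f GB/s\n", gts);
            return 0;
        }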
    Full analysis:
    http://www.tomshardware.com/reviews/...8800,1357.html

    BFG GeForce 8800 GTX review - Page 1




    BFG GeForce 8800 GTX; Pure Gaming Pleasure
    Info:
    BFGtech

    Price: 599 USD/EUR.
    Three weeks ago NV launched the GeForce 8800 onto the market. Availability is not really that bad and prices are finally settling a little. Time for another review !
    Hey I have to say that I knew upfront the 8800 products would be a great success as quite frankly they're really something else aren't they ? But I am surprised when I browse the Guru3D.com forums to see how many of you almost immediately after our review either ordered or bought a GeForce 8800. I mean, this is an expensive product for sure. It also confirms what I have been spotting for a longer while now. Percentage wise I think that more and more people tend to invest in a high-end product compared to a couple of years ago. It makes sense yet it doesn't. I mean a lot of people tend to see it as an investment for the next 2 years while others simply have more to spend. I think there's also a third ideology behind buying a high-end graphics card. Pretty much any modern PC can utilize the card really well thus it keeps you from spending 400-700 EUR on an XBOX 360 or Playstation 3 because one thing is a certainty; and that's that the latest graphics cards will always have a leading edge over consoles. So a lot of people these days tend to spend that money on an expensive graphics card that is upgradeable rather than console technology.
    That's also a new trend, as two years ago people were literally thinking that consoles would grow bigger and become a huge issue for PC gaming. PC gaming was dying and blah blah blah. Now think again and look where we are. The power of playing games on a PC is huge and definitely something that is here to stay.
    With that in mind I'd like to guide you to today's review. We'll be looking at BFG's hottest product that money can buy, and that is of course the GeForce 8800 GTX with 768 MB of gDDR3 memory. A product that offers tremendous amounts of performance and image quality, and offers full DirectX compatibility whether you play your "old" DirectX 8 and 9 titles or even the upcoming DirectX 10.
    Gentlemen... fire up your power supplies and make that PC roar ! Next page please.


    GeForce 8800 GTX
    As you should know by now NVIDIA developed the GeForce 8800 series under the graphics core codename G80. Expect a LOT of new products in 2007 as this core is the new base architecture for things to come. I expect the first mid and low range products already in February, close to CeBIT and obviously somewhat merged together with the public release of Microsoft's ridiculously priced Windows Vista.
    Windows Vista requires a DirectX9 compatible graphics card, so do not worry as a DX10 card is not a requirement ! It however is desired, and exactly that is creating a gap in the market that NVIDIA would love to fill. And trust me when I say they will.
    Until February however we'll have two Series 8 products available followed by a good number of series 7 cards. The two cards are as speculated the 640MB GeForce 8800 GTS and the big pappa graphics card called the GeForce 8800 GTX that comes with no less than 768MB memory. So how does the GeForce product line shape up until February ? Have a look:

    • GeForce 8800 GTX - $599
    • GeForce 8800 GTS - $449
    • GeForce 7950 GT - $299
    • GeForce 7900 GS - $199
    • GeForce 7600 GT - $159
    • GeForce 7600 GS - $129
    • GeForce 7300 GPUs - <$99

    Be afraid, be very afraid! Next to being a 681 million transistor counting MONSTER (G70 had ~300 Million transistors), this is going to be the top of the line product. The big kahuna, the mack daddy, the quarter pounder, the beast that pimps your rig... the USS Enterprise on a PCB! It's also rather exclusive as you can expect a sales price of 599 USD. Now before you point that middle finger at me hear me out okay ?
    It took NVIDIA four years to build and it took 400 Million dollars to develop. Obviously in the coming year we'll see a stackload of products based on this new architecture. But hey I mean this is it, this is the graphics card you want in your uber powered PC. It has the (90nm fabricated) G80 core and ALL features as discussed below. It has the 128 streaming cores (Unified Shader processors), it comes with 768 MB of gDDR3 memory that theoretically can push 86 GB/second of memory bandwidth. Again, think about that for a second. 86 GIGABYTES per second of memory bandwidth being utilized by 128 Shader cores within a 681 Million transistor counting micro-architecture. Frack... I just realized that this thing is gonna consume serious power, or does it ?
    We'll check that out later also
    So this is the card with a total of 768 MB of gDDR3 memory at 384-bit (actually 12 pieces of 16Mx32 memory) with that memory clocked at 2x 900 MHz, and a "core" clock at 575 MHz with the 128 Unified Shaders running at 1350 MHz. No water-cooling solution, just a big dual-slot cooler (which actually is quite silent) on a large long black PCB with two 6-pin power connectors.
    Size then, and I noticed the concern in our forums once specs started to leak out, indeed the card is very long. The GeForce 8800 GTX graphics card is 27 CM long. But note that the power connectors are now routed off the top edge of the graphics card instead of the end of the card, so there is no extra space required at the end of the graphics card for power cabling. You might want to measure before buying though.
    Okay, I seriously can't cram more info in such a small piece of text, we need to speed this article up a little.
    BFG GeForce 8800 GTX
    In today's review we'll actually include a good number of results from other G80 brands as well. Fact is, all boards are supplied by NVIDIA to the manufacturers and that's a bit of a shame. This means that all 8800 cards are the same. What do you need to remember when you buy the product ? Well you should not focus on the actual hardware itself. As stated, look at other things such as additional software, special coolers, extended warranty and so on. Look at the little extras if you are going for that reference cooled model.
    In the BFG box we'll find such a 100% reference NVIDIA GeForce 8800 GTX 768MB. The only thing different is a sticker saying it's a BFG product. Now BFG is also offering a model of this graphics card with a water-cooling block on it. That product was not ready to be submitted for this review just yet, but we do expect it for a review soon.
    Bundle wise BFG did a mediocre job, they include:

    • GeForce 8800 GTX 768 MB
    • Driver CD
    • HDTV block (3-way RCA component)
    • 6-pin to Molex power cable
    • manual
    • VGA->DVI dongle
    • BFG t-Shirt

    There's also some mouse sticker crap in the box, which quite honestly could have been left out rather than included. It looks so cheap. BFG:
    BFG Teflon® Slick Pads fit over existing mouse feet and instantly reduce the friction between your mouse and mousing surface. These super slick Teflon® pads extend the life of your mouse and improve mouse action during gaming sessions.
    I see people running to the stores already. Get outta here, it's 20 cents' worth of plastic stickers. Anyway, we do not see any included games but there's one thing that makes up for it big time. BFG offers a lifetime warranty on their products. Now you can't beat that.
    Overall this is a complete kit to get you started. On a closing note, since the card is reference based, it follows all the specs mentioned earlier. The BFG GeForce 8800 GTX can be bought for roughly 599 USD/EUR.
    Some generic GeForce 8800 facts:

    • All NVIDIA GeForce 8800 GTX- and GeForce 8800 GTS-based graphics cards are HDCP capable.
    • The GeForce 8 Series GPUs are not only the first shipping DirectX 10 GPUs, but they are also the reference GPUs for DirectX 10 API development and certification and are 100% DirectX 9 compatible.
    • GeForce 8800 GPUs deliver full support for Shader Model 4.0.
    • All graphics cards are being built by NVIDIA’s contract manufacturer.
    • All GeForce 8800 GPUs support NVIDIA SLI technology.
    • At this time there are no 256MB models of the cards
    • The NVIDIA GeForce 8800 GTX has a 24 pixel per clock ROP. The GeForce 8800 GTS has a 20 pixel per clock ROP.
    • GeForce 8800 GTX requires a minimum 450W or greater system power supply (with 12V current rating of 30A). GeForce 8800 GTS requires a minimum 400W or greater system power supply (with 12V current rating of 26A).

                                  GeForce 8800 GTX    GeForce 8800 GTS
    Stream Processors             128                 96
    Core Clock (MHz)              575                 500
    Shader Clock (MHz)            1350                1200
    Memory Clock (MHz) x2         900                 800
    Memory Amount                 768 MB              640 MB
    Memory Interface              384-bit             320-bit
    Memory BW (GB/sec)            86.4                64
    Texture Fill Rate (bil/sec)   36.8                24

    The Unified state of DirectX 10. What you need to understand is that while the new microarchitecture of the DX10 GPU (Graphics Processing Unit) has changed significantly, the generic elements are all still there.
    Despite the fact that graphics cards are all about programmability and thus shaders these days you'll notice in today's product that we'll not be talking about pixel and vertex shaders much anymore. With the move to DirectX 10 we now have a new technology called Unified shader technology and graphics hardware will adapt to that model, it's very promising. DirectX 10 is scheduled to ship at the beginning of next year with the first public release version of Windows Vista. It will definitely change the way software developers make games for Windows and very likely benefit us gamers in terms of better gaming visuals and better overall performance.
    The thing is, with DirectX 10 Microsoft has removed what we call the fixed function pipeline completely (what you guys know as 16 pixel pipelines, for example), making everything programmable. How does that relate to the new architecture? Have a look.
    The new architecture is all about programmability and thus shaders. But the term Shader might be a little Shady for you.
    What is a shader ?
    To understand what is so important today, first allow me to explain what a shader is and how it relates to rendering all that gaming goodness on your screen (the short version).
    What do we need to render a three dimensional object in 2D on your monitor? We start off by building some sort of structure that has a surface, and that surface is built from triangles. And why triangles? They are quick and easy to compute. How is each triangle processed? Each triangle has to be transformed according to its relative position and orientation to the viewer. Each of the three vertices the triangle is made up of is transformed to its proper view space position. The next step is to light the triangle by taking the transformed vertices and applying a lighting calculation for every light defined in the scene. Finally the triangle needs to be projected to the screen in order to rasterize it. During rasterization the triangle will be shaded and textured.
    Graphics processors like the GeForce series are able to perform a large share of these tasks. Actually the first generation was able to draw shaded and textured triangles in hardware, which was a revolution. The CPU still had the burden to feed the graphics processor with transformed and lit vertices, triangle gradients for shading and texturing, etc. Integrating the triangle setup into the chip logic was the next step and finally even transformation and lighting (TnL) was possible in hardware, reducing the CPU load considerably (surely everyone remembers the GeForce 256). The big disadvantage was that a game programmer had no direct (i.e. program driven) control over transformation, lighting and pixel rendering because all the calculation models were fixed on the chip, and that is the point where shaders come in. Now we finally get to the stage where we can explain shaders. Vertex and Pixel shaders allow software and game developers to program tailored transformation and lighting calculations as well as pixel coloring functionality.
    And here's the answer to the initial question, which also brings us to the new G80 micro-architecture; each shader is basically nothing more than a relatively small program (programming code) executed on the graphics processor to control either vertex or pixel processing. So a Pixel or Vertex unit in fact is/was a small Pixel or Vertex shader processor within your last generation GPU, and such a processor is basically a floating point processor.
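    To make that concrete, here is a toy vertex-shader-style routine written as ordinary C (my own illustration of the kind of small per-vertex program being described, not actual GPU shader code): it transforms a vertex by a matrix and computes a simple diffuse lighting term.

        #include <stdio.h>

        typedef struct { float x, y, z; } vec3;

        static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

        /* Transform a position by a 4x4 row-major matrix (w assumed to be 1). */
        static vec3 transform(const float m[16], vec3 p)
        {
            vec3 r = {
                m[0]*p.x + m[1]*p.y + m[2]*p.z  + m[3],
                m[4]*p.x + m[5]*p.y + m[6]*p.z  + m[7],
                m[8]*p.x + m[9]*p.y + m[10]*p.z + m[11],
            };
            return r;
        }

        /* A "vertex shader" in miniature: transform the vertex, then compute a
         * simple N.L diffuse term for a single directional light.             */
        static float shade_vertex(const float mvp[16], vec3 pos, vec3 normal,
                                  vec3 light_dir, vec3 *out_pos)
        {
            *out_pos = transform(mvp, pos);
            float ndotl = dot3(normal, light_dir);
            return ndotl > 0.0f ? ndotl : 0.0f;      /* clamp negative lighting */
        }

        int main(void)
        {
            const float identity[16] = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 };
            vec3 pos = { 1.0f, 2.0f, 3.0f }, normal = { 0, 1, 0 }, light = { 0, 1, 0 };
            vec3 out;
            float diffuse = shade_vertex(identity, pos, normal, light, &out);
            printf("pos (%.1f, %.1f, %.1f)  diffuse %.2f\n", out.x, out.y, out.z, diffuse);
            return 0;
        }

    In the old fixed function world that little routine was baked into the chip; with shaders, the developer writes it.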
    What is a shader core then? In the past graphics processors have had dedicated units for diverse types of operations in the rendering pipeline, such as vertex processing and pixel shading as we just explained. A good idea to understand that is to have a look at the image below.
    Each of these units above the L1 cache (memory) is a shader core. For the unified architecture, NVIDIA engineered a single floating point shader core with multiple independent processors. Each of these independent processors is capable of handling any type of shading operation, including pixel shading, vertex shading, geometry shading, and yes, physics shading.
    Pixel Shaders .. Vertex Shaders and now .. Geometry Shaders
    With the introduction of DirectX 8 & 9, in the traditional way when you executed a shader instruction you had to send it to either the pixel or the vertex processor. And when you think about it a little more, that seems somewhat inefficient, as you could have the pixel shader units 100% utilized while the vertex units were only 60% utilized. And that's a waste of resources, efficiency and power, thus power consumption, as you are not using a lot of transistors. The new approach in DirectX 10 (DX10) and thus the new generation of graphics processors is simple. Any shader, pixel or vertex, is sent to a unified shader processor and executed. This way you can 100% utilize the architecture and have as little performance loss as possible as you can use ALL shader processors on the GPU. That's pretty cool from an efficiency point of view as maximum utilization means more computing power, which means either more eye candy on your screen or a better rendering frame rate. So this entire story has one word written all over it: efficiency.
    You will notice that NVIDIA will call the unified shader processors the stream processors. And the stream processors will manage pixel, vertex and Geometry shaders. That's right, geometry shaders! We have a new third shader. DX10 and thus Shader Model 4 is exciting. Like the three musketeers, all for one and one for all.
    Geometry Shaders
    Again we need to get a little techie here I'm afraid. You might want to skip this part unless you are a true geek.
    We already discussed Pixel and Vertex Shaders, but with DX10 comes a new shader: Geometry Shaders (GS). Geometry Shaders do some quite specific things that make no common sense for a PS/VS program and that is why this new shader was introduced.
    A Geometry shader is an innovative new type of shader present in next generation graphics hardware like the GeForce 8 Series and Radeon R600. Geometry shaders do per-primitive operations on vertices grouped into primitives like triangles, lines, strips and points outputted by vertex shaders. Geometry shaders can make copies of the inputted primitives; so unlike a vertex shader, they can actually create new vertices. Examples of use include shadow volumes on the GPU, render-to-cubemap and procedural generation. A geometry shader works at a larger level of granularity than vertices (which are at a larger granularity than pixels): triangles, objects, lines, strips, points. Primitives.
    So after the vertices are processed by the vertex shader, the geometry shader can be utilized to push further work on them. And that's exactly where the money shot is to be found as a limitation of the traditional vertex shader is that it really can’t create new vertices. This is where the geometry shader surfaces, as it can be used to work on the edges of a triangle to create a different figure.

    The surface (rock vertices) of this demo is created at random and in real time with a Geometry shader. When the camera moves up the GPU will calculate new surface constantly and endlessly, randomly if you wish. This is a very good example of the usage of a geometry shader. Small interesting side-note; the water follows the path of the surface and thus reacts with it, which is a physics model.
    Now let me try to explain this in more simple wording as most of you likely did not understand a word of what I just tried to explain. Example: imagine a landscape, usually precomputed static data. By firing off geometry shader code to the Unified shader processors you could very well have all the landscape generated randomly, or simply changed and recalculated in real time on the graphics processor. And that is something very cool. There are of course thousands of applications in which you can use a Geometry shader. Think of stuff like automatic stencil shadow polygon generation, skinning operations, physical simulations (hair for example), environment map creation, better and more complex point sprites, adaptive subdivision, all that while offloading work from the CPU, and Guru3D believes that the impact of Geometry shaders will be bigger than expected.
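    As a purely illustrative sketch (ordinary C, not real geometry shader code), this is the primitive-in, primitives-out amplification being described: one triangle goes in, four come out, and three brand new vertices get created along the way.

        #include <stdio.h>

        typedef struct { float x, y, z; } vec3;
        typedef struct { vec3 v[3];     } triangle;

        static vec3 midpoint(vec3 a, vec3 b)
        {
            vec3 m = { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f };
            return m;
        }

        /* "Geometry shader" in miniature: one triangle in, four triangles out,
         * creating three brand new vertices (the edge midpoints) on the way.  */
        static void subdivide(const triangle *in, triangle out[4])
        {
            vec3 a = in->v[0], b = in->v[1], c = in->v[2];
            vec3 ab = midpoint(a, b), bc = midpoint(b, c), ca = midpoint(c, a);

            out[0] = (triangle){ { a,  ab, ca } };
            out[1] = (triangle){ { ab, b,  bc } };
            out[2] = (triangle){ { ca, bc, c  } };
            out[3] = (triangle){ { ab, bc, ca } };
        }

        int main(void)
        {
            triangle t = { { { 0, 0, 0 }, { 1, 0, 0 }, { 0, 1, 0 } } };
            triangle out[4];
            subdivide(&t, out);
            printf("new vertex: (%.2f, %.2f, %.2f)\n",
                   out[0].v[1].x, out[0].v[1].y, out[0].v[1].z);  /* midpoint of a-b */
            return 0;
        }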
    Dynamic creation of geometry on the GPU, that's what you need to remember. Now at this stage in the DX10 pipe-line the GPU also can do something I already mentioned: Physical simulations creation. A couple of examples that NVIDIA gave us:

    • Litter and Debris
    • Smoke and Fog that moves
    • Cloth and fluid flows with object and characters
    • large amounts of rubble (collapsing buildings, explosions, avalanches)
    • Rampaging tornados full of debris
    • swarms of insects
    • So many more possibilities!

    As you can see this is a very hot topic in game rendering as in the end we can push gaming to a new dimension. Also at this stage in the DX10 pipeline we can do a lot of other cool stuff. A function called stream out for example.
    Stream output is a very important and useful new DirectX 10 feature as it enables data generated from geometry shaders (or vertex shaders if geometry shaders are not used) to be sent to memory buffers and subsequently forwarded back into the top of the GPU pipeline to be processed again. Basically what I'm saying here is that such dataflow permits even more complex geometry processing, advanced lighting calculations and GPU-based physical simulations with little CPU involvement. You simply keep the data to be altered in the pipeline.
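    Conceptually it is just a ping-pong between two on-device buffers. A hedged plain-C sketch of the control flow (the pass function and buffer names are made up for illustration; on real hardware the pass would be a shader writing via stream out):

        #include <stdio.h>

        /* Toy "pass": stands in for a geometry/vertex pass whose results are
         * written straight into a buffer instead of going back to the CPU.   */
        static void decay_pass(const float *in, float *out, int n)
        {
            for (int i = 0; i < n; ++i)
                out[i] = in[i] * 0.5f;
        }

        /* Ping-pong two buffers so each pass feeds the next without leaving
         * the device; only the final result needs to come back to the CPU.  */
        static void simulate(float *buf_a, float *buf_b, int n, int passes,
                             void (*pass)(const float *, float *, int))
        {
            float *in = buf_a, *out = buf_b;
            for (int p = 0; p < passes; ++p) {
                pass(in, out, n);
                float *tmp = in; in = out; out = tmp;  /* output becomes next input */
            }
        }

        int main(void)
        {
            float a[4] = { 8, 8, 8, 8 }, b[4];
            simulate(a, b, 4, 3, decay_pass);
            /* after 3 passes the live data sits in b (a -> b -> a -> b) */
            printf("%.1f\n", b[0]);   /* 1.0 */
            return 0;
        }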
    A good example of the new stream architecture is instancing: dozens if not hundreds of copies of the same object, each with slightly different movement, can fill your screen without a huge impact on performance as it hardly requires CPU calculations.
    So DirectX 10 and its related new hardware products offer a good number of improvements. So much actually that it would require an article on its own. And since we are here to focus on NVIDIA's two new products we'll take a shortcut at this stage in the article. Discussed in our Guru3D forums I often have seen the presumption that DX10 is only a small improvement over DX9 Shader Model 3.0. Ehm yes and no. I say it's a huge step as a lot of constraints are removed for the software programmers. The new model is more simple, easy to adapt and allows heaps of programmability, which in the end means a stack of new features and eye candy in your games.
    Whilst I will not go into detail about the big differences I simply would like to ask you to look at the chart below and draw your own conclusions. DX10 definitely is a large improvement, yet look at it as a good step up.
    Here you can see how DirectX's Shader Models have evolved ever since DX8 Shader Model 1.
    So I think what you need to understand is that DirectX 10 doesn't commence a colossal fundamental change in new capabilities; yet it brings expanded and new features into DirectX that will enable game developers to optimize games more thoroughly and thus deliver incrementally better visuals and better frame rates, which obviously is great. How fast will it be adopted? Well, Microsoft is highlighting the DX10 API as God's gift to the gaming universe, yet what they forget to mention is that all developers who support DX10 will have to continue supporting DirectX 9 as well and thus maintain two versions of the rendering code in their engine, as D3D10 is only available on Windows Vista and not XP, which is such a drama.
    You heard the rumors and it's false, DirectX 9.0L will NOT make Windows XP DX10 compatible as it is the other way around, if you have DirectX 9 hardware, you will be using DirectX 9.0L as your API in Windows Vista. With that statement you also need to realize that a DX10 card like the G80 is fully DX9 compatible!
    However, you can understand that from a game developer point of view it brings a considerable amount of additional workload and cost to PC game development until Vista finally becomes mainstream.
    Regardless of the immense marketing hype, DirectX 10 just is not extraordinarily different from DirectX 9, you'll mainly see good performance benefits on DirectX 10 rather than vastly prominent visual differences with obviously a good number of exceptions here and there; but DX is evolving into something better and faster.

    Stretchy skin, geometry shaders as you alter surface vertices on the fly. Poor Froggy.
    With the introduction of Unified Shader technology the industry will also make you believe that GPU's no longer have a pixel pipeline. That's true but not entirely, we obviously are still dealing with a pixel pipeline yet the dynamics simply have changed.
    Stating that this product has 24 pixel pipelines does not apply anymore and that by itself will force a shift on how we need to look at new GPU microarchitecture. So I'm afraid that from now on, we can't say ooh this product has 24 pixel pipelines. The new method of making you guys understand what we are talking about and relate that to performance will simply be the cumulative number of shader processors.
    Just remember this: we have moved from a fixed pipeline based architecture to a processor based architecture.
    With that in mind that "number" of processors will be our new and more easy to understand and comprehend method of relating how fast a product "can" be. I know this is shady to explain.
    Prepare for the impact now, the GeForce 8800 GTS has 96 shader processors / stream processors and the GeForce 8800 GTX has 128 of these unified processor units. Think for a moment about the GeForce 7900 GTX and relate that to its 8 vertex and 24 pixel processors. See the parallel already?
    The internal GPU clocks have changed quite a bit also. A year or two ago our own Alexey (Rivatuner programmer) made a discovery. NVIDIA's architecture all of a sudden was showing registers for multiple clocks coming from the graphics processor. So at that time it became clear that, for example, a GeForce 7900 GTX has three (and likely more) different internal clocks. This is something you need to get used to as the G80 series has many clock domains within the graphics processor and everything seems to be asynchronous, which is quite interesting as everything in the history of graphics cards has been developed in a very parallel manner.
    So as contradicting as it may sound the GeForce 8800 GTX has a "generic" 575 MHz core clock, yet its memory is running at 900 MHz (x2) and get this, the Shader processors are clocked at 1350 MHz. And I'm pretty confident that we can find a few other clocks in there as well. Memory is a tad weird also. The GTX for example has no less than 768 MB of memory and it's 384-bits wide; now this is where things can get a little tricky to understand, but there is no 384-bit wide memory bus on the GTS, which sits at 320-bit.



    The Lumenex Engine
    Okay which marketing bozo came up with that word? One of the things you'll notice in the new Series 8 products is that a number of pre-existing features have become much better, and I'm not only talking about the overall performance improvements and new DX10 features. Nope, NVIDIA also had a good look at Image Quality. Image quality is significantly improved on GeForce 8800 GPUs over the prior generation with what NVIDIA seems to call the Lumenex engine.
    You will now have the option of 16x full screen multisampled antialiasing quality at near 4x multisampled antialiasing performance using a single GPU with the help of a new AA mode called Coverage Sampled Antialiasing. We'll get into this later, though pretty much this is a math based approach, as the new CS mode computes and stores boolean coverage at 16 subsamples... and yes, this is the point where we lost you, right? We'll drop it.
    So what you need to remember is that CSAA enhances application antialiasing modes with higher quality antialiasing. The new modes are called 8x, 8xQ, 16x, and 16xQ. The 8xQ and 16xQ modes provide first class antialiasing quality TBH.
    If you pick up a GeForce 8800 GTS/GTX then please remember this; each new AA mode can be enabled from the NVIDIA driver control panel and requires the user to select an option called “Enhance the Application Setting”. Users must first turn on ANY antialiasing level within the game’s control panel for the new AA modes to work, since they need the game to properly allocate and enable anti-aliased rendering surfaces.
    If a game does not natively support antialiasing, a user can select an NVIDIA driver control panel option called “Override Any Applications Setting”, which allows any control panel AA settings to be used with the game. Also you need to know that in a number of cases (such as the edge of stencil shadow volumes) the new antialiasing modes cannot be enabled; those portions of the scene will fall back to 4x multisampled mode. So there definitely is a bit of a tradeoff going on as it is a "sometimes it works but sometimes it doesn't" kind of feature.
    So I agree, a very confusing method. I simply would like to select in the driver which AA mode I prefer, something like "Force CSAA when applicable", yes something for NVIDIA to focus on. We'll test CSAA with a couple of games in our benchmarks.
    But 16x quality at almost 4x performance, really good edges, really good performance, that obviously is always lovely.
    One of the most heated issues with the previous generation products, as opposed to the competition, was the fact that the NVIDIA graphics cards could not render AA+HDR at the same time. Well, that was not entirely true though, as it was possible with the help of shaders, as exactly four games have demonstrated. But it was a far from efficient method, a very far cry (Ed: please no more puns!) you might say.
    So what if I were to say that now not only can you push 16xAA with a single G80 graphics card, but you can also do full 128-bit FP (Floating point) HDR! To give you a clue, the previous architecture could not do HDR + AA but it could technically do 64-bit HDR (just like the Radeons). So NVIDIA got a good wakeup call and noticed that a lot of people were buying ATI cards just so they could do HDR & AA the way it was intended. Now the G80 will do the same but it's even better. Look at 128-bit wide HDR as a palette of brightness/color range that is just amazing. Obviously we'll see this in games as soon as they adopt it, and believe me they will. 128-bit precision (32-bit floating point values per component) permits almost real-life lighting and shadows. Dark objects can appear extremely dark, and bright objects can be exhaustingly bright, with visible details present at both extremes, in addition to rendering completely smooth gradients in between.
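    The bit counts are easy to picture as per-pixel storage; a small plain-C illustration of the format sizes (my own sketch of the arithmetic, not the actual internal layout):

        #include <stdint.h>
        #include <stdio.h>

        /* FP16 ("64-bit HDR"): four 16-bit floating point components per pixel
         * (raw bits held in uint16_t here purely to show the size).           */
        typedef struct { uint16_t r, g, b, a; } rgba_fp16;   /* 4 x 16 = 64 bits  */

        /* FP32 ("128-bit HDR"): four 32-bit floating point components per pixel. */
        typedef struct { float    r, g, b, a; } rgba_fp32;   /* 4 x 32 = 128 bits */

        int main(void)
        {
            printf("FP16 pixel: %zu bytes\n", sizeof(rgba_fp16));  /* 8  */
            printf("FP32 pixel: %zu bytes\n", sizeof(rgba_fp32));  /* 16 */
            return 0;
        }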
    As stated; HDR lighting effects can be used together with multisampled antialiasing now on GeForce 8 Series GPUs and the addition of angle-independent anisotropic filtering. The antialiasing can be used in conjunction with both FP16 (64-bit color) and FP32 (128-bit color) render targets.
    Improved texture quality is something I MUST mention. We all have been complaining about shimmering effects and lesser filtering quality than the Radeons; it's a thing of the past. NVIDIA added raw horsepower for texture filtering making it really darn good and in fact claims it's even better than the currently most expensive team red product (X1950 XTX). Well .. we can test that !
    Allow me to show you. See, I have this little tool called D3D AF Tester which helps me determine how image quality is in terms of Anisotropic filtering. So basically we knew that ATI always has been better at IQ compared to NVIDIA.
    GeForce 7900 GTX 16xAF (HQ)
    Radeon X1900 XTX 16xHQ AF
    GeForce 8800 GTX 16xAF Default
    Now have a look at the images above and let it sink in. It would go too far to explain everything you are looking at, but the more perfect the round colored circle in the middle is, the better image quality will be. A perfect round circle is perfect IQ.
    Impressive huh ? The AF patterns are just massively better compared to previous generation hardware. Look at that .. that is default IQ; that's just really good ...
    What about Physics ?
    You can use the graphics card as a physics card with the help of what NVIDIA would like to market as the Quantum physics engine.
    Yes, the GeForce 8800 GPUs are capable of handling complete physics computations that free the CPU from physics calculations and allow the CPU to focus on AI and game management. The GPU is now a complete solution for both graphics and physics. We all know that Physics acceleration really is the next hip thing in gaming, now with the help of shaders, you can already push a lot of debris around but you can also take a card to the next level.
    As soon as titles start supporting it we'll be shocked whether one of our quad-cores of that new Intel CPU is doing Physics, whether you add-in another dedicated graphics card as demonstrated by NVIDIA and ATI, whether you do it with an Ageia card or in the case of the G80 in the GPU. I most definitely prefer this latter alternative as I think that Physics processing belongs in the GPU, as close as possible to the stuff that renders your games. Basically what happens is this: The Quantum Effect engine can utilize some of the Unified shader processors to calculate the physics data with the help of a new API that NVIDIA has created, CUDA. CUDA is short for Compute Unified Device Architecture. Is anyone thinking GPGPU here? No? Well I am. The thing is, NVIDIA is making it accessible to developers as you can code physics in C (common programmers language).
    A good example of GPGPU is ATI's recent development of calculating data over the GPU in the form of distributed computing. You calculate stuff on the GPU that is not intended for such a dedicated processor, but it can do it. NVIDIA is now utilizing a similar method with CUDA, and kudos to them for developing it.
    NVIDIA is making this API completely available to programmers through an SDK. Since it's coded in C it's widely supported by programmers as it's a much preferred way of programming; basically any programmer can code in C. So to put it simply, game developers, or for that matter any developer, can code some pretty nifty functions in C. NVIDIA has a very nice demo on it called fluid in a box, which should be released pretty soon.
    Here's the catch, I'm not really sure how successful it'll be compared to a dedicated physics processing unit, as I believe that Physics processing on a GPU that has to render the game as well might be a bit too much. But hey, time will tell, as yes we first need developer support to get this going.
    It's doable, yet I'm holding out on casting an opinion until after we have seen some actual implementation in games.


    The Photos. On the next few pages we'll show you some photos. The images were taken at 2560x1920 pixels and then scaled down. The camera used was a Sony DSC-F707 5.1 MegaPixel.
    The BFG GeForce 8800 GTX 768 MB.
    There she is, the BFG GeForce 8800 GTX - equipped with 768MB GDDR3 memory and a length of 10.5 inches.
    Okay, okay, so on its back it's like a turtle that can't get back on its feet eh? There's a reason I'm showing you this though, as when you focus a little on the middle you'll notice the memory locations. Count with me, what did you get? Exactly 12 spots means 12 memory chips x (16M) 32 bits = 384-bit. This is how NVIDIA adds it up to a 384-bit product.
    To the "upper" middle you'll notice there are two SLI connectors on the GeForce 8800 GTX. The second SLI connector on the GeForce 8800 GTX is hardware support for "impending future enhancements" in SLI software functionality. Don't you love diplomacy. So that's either Physics as add-on or two-way SLI. With the current drivers, only one SLI connector is actually used. You can plug the SLI connector into either the right or left set of SLI connector.
    DVI connectors, dual-link DVI of course. With the 7-pin HDTV-out mini-din, a user can plug an S-video cable directly into the connector, or use a dongle for YPrPb (component) or composite outputs. The prior 9-pin HDTV-out mini-din connector required a dongle to use S-video, YPrPb and composite outputs.
    The cooler, despite its huge size, has a rather stealthy design, huh? Granted, I like it. Next to that, it's quite silent and does not make a lot of noise at all.


    Here we can see the two 6-pin power connectors. Since the PCB is rather long they have been placed logically on the upper side of the card. Very clever as that'll save lots of space.
    That, my friends, is a buzzer speaker (the little black knob with the hole in the middle). So if you forget to hook up a 6-pin connector, a little alarm will beep.
    Meh, just a photo shot I figured.
    The card in the test-system, eVGA nForce 680i SLI. A rather sexy mainboard for sure.


    Full review:
    http://www.guru3d.com/articles_pages..._review,1.html




    On 8 November 2006 nVidia launched its best graphics card ever, built around what may well be considered the best GPU ever made. That GPU went by the code name G80 and it was simply a beast, both in transistor count and in performance. In transistors it doubled the previous flagship: the G70 had just over 300 million, while the G80 had almost 700 million. In performance, the gains ranged from about 40% to 90% over the previous generations, something completely astonishing for the time.
    This GPU broke with previous architectures by unifying the whole process into a single pipeline. Until then the vertex and pixel shader units were separate, and the more of those units a card had, the faster it was. With the architecture unified into SPs (stream processors), the entire GPU worked as a whole, so there were no longer idle blocks during certain pixel shader and vertex shader workloads.
    Another novelty of this GPU was the inclusion, for the first time, of DirectX 10 support. DirectX 10 was still a few months away, arriving with Windows Vista, which, as is common knowledge, was not a great OS; that contributed to DirectX 10's failure and left this card being used "only" as a DirectX 9 part.
    This GPU, the G80, was the first chip capable of running Crysis the way Crytek designed it, since until then there was no card able to handle Crysis in a decent or satisfactory way.

    To appreciate the importance of this GPU, here are a few tech demos:








    For ATI this GPU was a heavy knockout, since everything it had built until then became almost obsolete, and it left ATI searching for an answer for many long months; when that answer finally appeared it was an outright disaster, leaving the G80 (and its rebrands) with a long reign.
    This GPU was also exploited extensively by nVidia, such was its excellence at the time it came out, with several rebrands released over the following years.
    For the history books, this GPU marks the beginning of the second half of the modern era, the one we are still in today. It is also the foundation of every architecture nVidia has developed since, right up to the present day (Maxwell).
    Given this card's place in the history and development of GPUs, it (or one of its derivatives) deserves a spot on every hardware collector's list of parts to own.
    Última edição de Jorge-Vieira : 24-02-16 às 14:22
    http://www.portugal-tech.pt/image.php?type=sigpic&userid=566&dateline=1384876765

  13. #193
    Tech Membro Avatar de Nirvana91
    Registo
    Jun 2013
    Local
    Lisboa
    Posts
    3,915
    Likes (Dados)
    1
    Likes (Recebidos)
    19
    Avaliação
    0
    Mentioned
    1 Post(s)
    Tagged
    0 Thread(s)
    Citação Post Original de Jorge-Vieira Ver Post
    For ATI this GPU was a heavy knockout, since everything it had built until then became almost obsolete, and it left ATI searching for an answer for many long months; when that answer finally appeared it was an outright disaster, leaving the G80 (and
    AKA HD 2900 series.

  14. #194
    O Administrador Avatar de LPC
    Registo
    Mar 2013
    Local
    Multiverso
    Posts
    17,815
    Likes (Dados)
    74
    Likes (Recebidos)
    156
    Avaliação
    31 (100%)
    Mentioned
    31 Post(s)
    Tagged
    0 Thread(s)
    Hi!
    Without a doubt, the G80 was one of the most exciting launches in modern 3D...

    I remember well the madness it was... and then, with the launch of Vista and Crysis (and later Crysis: Warhead), those were brutal times...

    The economy was booming and there was plenty of money to spend... I ended up owning 4 of these... and they cost 600€ each!

    Cumprimentos,

    LPC
    My Specs: .....
    CPU: AMD Ryzen 7 5800X3D :-: Board: MSI B550M BAZOOKA :-: RAM: 64 GB DDR4 Kingston Fury Renegade 3600 Mhz CL16 :-: Storage: Kingston NV2 NVMe 2 TB + Kingston NV2 NVMe 1 TB
    CPU Cooling Solution: ThermalRight Frost Commander 140 Black + ThermalRight TL-C12B-S 12CM PWM + ThermalRight TL-C14C-S 14CM PWM :-: PSU: Corsair HX 1200 WATTS
    Case: NZXT H6 FLOW :-: Internal Cooling: 4x ThermalRight TL-C12B-S 12CM PWM + 4x ThermalRight TL-C14C-S 14CM PWM
    GPU: ASUS TUF
    AMD RADEON RX 7900 XTX - 24 GB :-: Monitor: BenQ EW3270U 4K HDR


  15. #195
    Tech Ubër-Dominus Avatar de Jorge-Vieira
    Registo
    Nov 2013
    Local
    City 17
    Posts
    30,121
    Likes (Dados)
    0
    Likes (Recebidos)
    2
    Avaliação
    1 (100%)
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    NVIDIA GeForce 8800 GTX SLI


    Introduction

    My initial review of the NVIDIA GeForce 8800 GTX GPU showed it to be a stellar product. With increased DX9 gaming performance, support for DX10 and a better performance-per-watt ratio than ATI's X1950 XTX card, the 8800 GTX was a winner on nearly all fronts. The one exception was that SLI configurations weren't ready during the initial release, so reviewers and gamers had to wait a week or so for the driver support to be implemented.

    The wait for that is over and I have had some time to sit down and play with an SLI-configured gaming system here for a couple weeks. Before we get into the system setup and of course, the benchmark results, here is a quick summary of the 8800 GTX G80 architecture from my previous review.

    The GeForce 8800 GTX

    A unified graphics architecture, in its most basic explanation, is one that does away with separate pixel pipelines and texture pipelines in favor of a single "type" of pipeline that can be used for both.

    Traditional GPU Architecture and Flow
    This diagram shows what has become the common GPU architecture flow, starting with vertex processing and ending with memory access and placement. In G70, and all recent NVIDIA and ATI architectures, there was a pattern that was closely followed to allow data to become graphics on your monitor. First, the vertex engine, which started out as pure transform-and-lighting hardware, processes the vertex data into cohesive units and passes it on to the triangle setup engine. Pixel pipes then take the data, apply shading and texturing, and pass the results on to the ROPs, which are responsible for culling the data, anti-aliasing it (in recent years) and placing it in the frame buffer for drawing onto your screen.
    This scheme worked fine and was still going strong with DX9, but as game programming became more complex the hardware was becoming more inefficient, and chip designers basically had to "guess" what was going to be more important in future games, pixel or vertex processing, and design their hardware accordingly.
    A unified architecture simplifies the pipeline significantly by allowing a single floating point processor (known as a pixel pipe or texture pipe before) to work on both pixel and vertex data, as well as new types of data such as geometry, physics and more. These floating point processors then pass the data on to the traditional ROP system and memory frame buffer for output that we have become familiar with.

    All hail G80!! Well, um, okay. That's a lot of pretty colors and boxes and lines and what not, but what does it all mean, and what has changed from the past? First, compared to the architecture of the G71 (GeForce 7900), which you can reference a block diagram of here, you'll notice that there is one less "layer" of units to see and understand. Since we are moving from a dual-pipe architecture to a unified one, this makes sense. Those eight blocks of processing units there with the green and blue squares represent the unified architecture and work on pixel, vertex and geometry shading.
    There are 128 streaming processors that run at 1.35 GHz and accept dual-issue MAD+MUL operations. These SPs (streaming processors) are fully decoupled from the rest of the GPU design, are fully unified and offer exceptional branching performance (hmm...). The 1.35 GHz clock rate is independent of the rest of the GPU, though all 128 of the SPs are based off of the same 1.35 GHz clock generator; in fact you can even modify the clock rate of the SPs separately from that of the GPU in the overclocking control panel! The new scalar architecture of the SPs allows longer shader applications to run more efficiently when compared to the vector architecture of the G70 and all previous NVIDIA designs.
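    As a rough worked example of what those numbers imply, assuming the best case where every SP retires a MAD (2 floating point operations) plus a MUL (1 operation) each cycle:

        128 SPs x 1.35 GHz x 3 FLOPs per clock ≈ 518 GFLOPS of theoretical shader throughput

    Real shaders never sustain that figure, but it gives a sense of the jump over the previous generation.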
    With the new G80 architecture, NVIDIA is introducing a new antialiasing method known as coverage sampled AA (CSAA). Because of the large memory storage required by multisampled AA (the most commonly used AA), moving beyond 4xAA was not efficient, and NVIDIA is hoping that CSAA can solve the issue by offering higher quality images with smaller storage requirements. For much more detail and examples of this new AA method, look here in our previous architecture article.
    I mentioned in the discussion on the new G80 architecture that the texture filtering units are much improved and offer us better IQ options than ever before. While we haven't looked at it in depth on PC Perspective recently, there has been a growing concern over the filtering options that both ATI and NVIDIA were setting in their drivers, and the quality they produced. If you have ever been playing a game like Half Life 2 or Guild Wars (probably one of the worst) and noticed "shimmering" along the ground, where textures seem to "sparkle" before they come into focus, then you have seen filtering quality issues. And for more information on the improved filter, again, look here in our previous article.

    Requirements for 8800 GTX SLI

    Obviously, seeing as the 8800 GTX card used a lot of power in our first tests (though not as much as originally predicted before launch), I knew going into this that an SLI configuration was going to require a hefty power supply. Since I just so happened to have one of the PC Power & Cooling 1 kW (1000 watts) power supplies sitting around, I decided that would do the trick!

    Retailing for about $599, this is definitely not a cheap power supply. But if you are looking for a unit that will keep your system running stable no matter what components you put in it, this is it. The main feature of the PC P&C 1kW that makes it ready for 8800 GTX SLI is that it has four 6-pin PCIe power connectors.
    Since each 8800 GTX card uses two PCIe power connectors, for running two of them you'll need four connectors. There aren't many power supply options that have four PCIe connectors, but another option is the Silverstone Zeus ST85ZF 850 watt power supply that our very own Lee Garbutt reviewed recently. Both should work fine, and with the Silverstone unit priced under $300, it definitely has a price advantage.
    Here is a picture of the two 8800 GTX SLI cards running on the EVGA 680i motherboard that was released the same day as the 8800 GTX video card. A Sound Blaster Audigy 2 sits between the two graphics cards, all being controlled by an Intel Core 2 Extreme X6800 processor at 2.93 GHz.
    Testing Methodology
    Graphics card testing has become the most hotly debated issue in the hardware enthusiast community recently. Because of that, testing graphics cards has become a much more complicated process than it once was. Where before you might have been able to rely on the output of a few synthetic, automatic benchmarks to make your video card purchase, that is just no longer the case. Video cards now cost up to $500 and we want to make sure that we are giving the reader as much information as we can to aid you in your purchasing decision. We know we can't run every game or find every bug and error, but we try to do what we can to aid you, our reader, and the community as a whole.
    With that in mind, all the benchmarks that you will see in this review are from games that we bought off the shelves just like you. Of these games, there are two different styles of benchmarks that need to be described.
    The first is the "timedemo-style" of benchmark. Many of you may be familiar with this style from games like Quake III; a "demo" is recorded in the game and a set number of frames are saved in a file for playback. When playing back the demo, the game engine renders the frames as quickly as possible, which is why you will often see "timedemo-style" benchmarks playing back the game much more quickly than you would ever play it. In our benchmarks, the FarCry tests were done in this manner: we recorded four custom demos and then played them back on each card at each different resolution and quality setting. Why does this matter? Because in these tests where timedemos are used, the line graphs that show the frame rate at each second may not end at the same time for every card, precisely because one card is able to play the demo back faster than another -- less time passes and thus the FRAPS application gets slightly fewer frame-rate samples to plot. However, the peaks and valleys and the overall performance of each card are still maintained, and we can make a judged comparison of the frame rates and performance.
    The second type of benchmark you'll see in this article are manual run throughs of a portion of a game. This is where we sit at the game with a mouse in one hand, a keyboard under the other, and play the game to get a benchmark score. This benchmark method makes the graphs and data easy to read, but adds another level of difficulty to the reviewer -- making the manual run throughs repeatable and accurate. I think we've accomplished this by choosing a section of each game that provides us with a clear cut path. We take three readings of each card and setting, average the scores, and present those to you. While this means the benchmarks are not exact to the most minute detail, they are damn close and practicing with this method for many days has made it clear to me that while this method is time consuming, it is definitely a viable option for games without timedemo support.
    The second graph is a bar graph that tells you the average framerate, the maximum framerate, and the minimum framerate. The minimum and average are important numbers here, as we want the minimum to be high enough to not affect our gaming experience. While it will be the decision of each individual gamer what is the lowest they will allow, comparing the min FPS to the line graph and seeing how often this minimum occurs should give you a good idea of what your gaming experience will be like with this game and that video card at that resolution.
    Our tests are completely based around the second type of benchmark method mentioned above -- the manual run through.
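    For readers who want to crunch their own numbers the same way, here is a small stand-alone sketch (not the tooling used in the review) that reads per-second FPS samples from a plain text log, one value per line, and prints the minimum, average and maximum. The file format is a hypothetical one; a FRAPS log would need to be exported to that shape first.

        // Hypothetical helper: min / average / max FPS from a one-value-per-line log.
        #include <cstdio>
        #include <cfloat>

        int main(int argc, char** argv)
        {
            if (argc < 2) { std::printf("usage: fpsstats <logfile>\n"); return 1; }

            std::FILE* f = std::fopen(argv[1], "r");
            if (!f) { std::printf("could not open %s\n", argv[1]); return 1; }

            double v, sum = 0.0, mn = DBL_MAX, mx = 0.0;
            long n = 0;
            while (std::fscanf(f, "%lf", &v) == 1) {   // one FPS sample per line
                sum += v;
                if (v < mn) mn = v;
                if (v > mx) mx = v;
                ++n;
            }
            std::fclose(f);

            if (n > 0)
                std::printf("min %.1f  avg %.1f  max %.1f  (%ld samples)\n", mn, sum / n, mx, n);
            return 0;
        }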
    System Setup and Benchmarks
    Continuing on with our other recent GPU reviews, the Intel Core 2 Extreme X6800 is our processor of choice. We used the Intel 975XBX motherboard for our testing this time.

    GeForce 8800 GTX SLI Test System Setup
    CPU: Intel Core 2 Extreme X6800
    Motherboards: EVGA nForce 680i SLI / Intel 975XBX
    Memory: Corsair TWIN2X2048-8500C4
    Hard Drive:
    Sound Card: Sound Blaster Audigy 2 Value
    Video Card:
    Video Drivers: 97.02 - NVIDIA / 8.291 Beta - ATI
    DirectX Version: DX 9.0c
    Operating System: Windows XP Professional SP1

    Benchmarks


    • 3DMark06

    • Battlefield 2

    • Call of Duty 2

    • FEAR

    • HL2: Lost Coast

    • Prey


    I tested these games at 1600x1200, 2048x1536 and 2560x1600, all running at 4xAA and 8xAF in-game settings.

    Battlefield 2 (Direct X)


    Battlefield 2 is one of the first games to come along in quite a while that turned out to really push the current and even following generation gaming hardware. Having the privilege of being the first game that might need 2 GB of memory is either a positive or negative, depending on your viewpoint. Here are our IQ settings used:

    Our map was Strike at Karkand, which turns out to be one of the most demanding in the retail package in terms of land layout, smoke and other shader effects.





    At 1600x1200, the 8800 GTX SLI setup doesn't get us any gains in Battlefield 2, but at 2048x1536, the minimum frame rate is increased by 40% though the average is about the same because of the 100 frames per second cap in the game engine. Looking at the 2560x1600 resolution on our Dell 30" monitor, the average FPS is only 10 FPS higher, but the minimum is 91% higher on the SLI configuration!
    Full review:
    http://www.pcper.com/reviews/Graphic...ormance-Review









    XFX GeForce 8800 GTX SLI Video Card Review

    Introduction

    image: http://www.legitreviews.com/images/r.../8800sli_1.jpg

    The launch of the latest flagship, the 8800 GTX, has many drooling at the incredible frame rates the card provides in today's games and the future DirectX 10 prowess the card promises. When the GeForce 8800 GTX was released in November we had one card available to benchmark and found that it was the fastest card around. The very next day LR published an editorial that showed a few benchmarks on a $4,000+ dream machine with a pair of XFX 8800 GTX graphics cards running SLI. It was in this review that we showed a pair of 8800 GTX graphics cards would destroy ATI's X1950 XTX CrossFire setup in games like F.E.A.R. and Quake 4, but we were limited by BIOS issues, as quad-core processors wouldn't work on the NVIDIA 680i SLI motherboard. A few days later an article on the 8800 GTX resistor change was published, which found that many enthusiasts had video cards that needed to be RMA'd. Last month we were benchmarking a system with BIOS issues, using cards that had been fixed by hand, on older drivers. Today we are taking a look at what kind of performance you can expect to get from two of these bad boys linked together on an older AMD Socket 939 platform, which is more along the lines of what people are actually running.
    While most will only be able to dream of a pair of 8800 GTX cards in SLI, a lucky few will be able to afford them. Being able to afford the graphics cards is only part of the battle, as NVIDIA strongly suggests using a power supply with an output of at least 750W to power dual GeForce 8800 GTX cards. A new power supply is needed because it must be capable of delivering at least 30 Amps on the +12V rail and have 4 PCI-E connectors, since two of these beasts eat up gobs of power. We strongly suggest looking at the 13 certified SLI-Ready power supplies that have passed testing by NVIDIA and are known to power a system like this. For those of you that just recently upgraded your power supply, don't worry, as it might still work. When we last talked to Corsair's product manager about their modular HX620W power supply, he informed us that those who already bought one can get additional 6-pin PCIe power headers and use this PSU on a system with 8800 GTXs. This is because the power supply is modular and has a 50 Amp rating on the +12V rail. The take-home message here is to check whether you have to run out and buy a new $300-$500 PSU or can just pick up a set of modular cables. Unfortunately those aren't the only expenses for getting the most out of this kind of setup, but we'll get to that later.
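    To put those requirements into perspective, a rough worked calculation using the PCI Express power limits (these are specification ceilings, not the cards' measured draw):

        30 A on the +12 V rail          ->  30 A x 12 V = 360 W available for the graphics cards alone
        Per card ceiling                ->  75 W from the x16 slot + 2 x 75 W from its 6-pin plugs = up to 225 W

    Two cards can therefore pull a few hundred watts by themselves, which is why a 750 W class unit with four 6-pin connectors is the sensible floor.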
    The cards being used today were provided by the good guys at XFX; they have the updated resistor and came in the retail box. All standard cards come at the same 575MHz core and 1.8 GHz memory clock speeds, and almost all have identical heat sink/fan combos. In our initial review of the 8800 GTX we praised the heat sink, but after running them in SLI you quickly find out that any other add-in cards become heat magnets because of the three vent holes in the top of the 8800 series heat sink. That is something you X-Fi and PhysX owners should take note of. Also of note is just how much heat is being exhausted. An idle temperature of 50C and load temperatures in the low 70s C mean that you need exceptional cooling not just in your case, but also in the room your computer is in!
    When it comes to price you're looking at roughly $620 a piece for these monsters. Fortunately XFX provides a double lifetime warranty for those wanting a little peace of mind for their ~$1200 investment. XFX has recognized just how many early adopters are out there, and the 8800 GTX is a perfect example of a card that someone living on the bleeding edge has to have. With the purchase of an XFX card you can be sure that you, and the person you eventually sell it to, will be well covered.
    Enough with all of the reality talk, let’s get to the benchmarks!


    Test Systems

    image: http://www.legitreviews.com/images/r...est_system.jpg

    Our test system has quickly become dated. A faulty Intel Bad Axe motherboard has put our test system upgrade on a delayed timeline.
    Video Card Test Platform (NVIDIA SLI)
    Processor: AMD Athlon 64 4800+
    Motherboard: DFI NF4 SLI-DR
    Memory: 2GB Crucial PC-4000
    Hard Drive: Western Digital Raptor 74GB
    Cooling: Retail AMD Heatsink
    Power Supply: Silverstone Zeus 850W
    Operating System: Windows XP Professional SP2

    Video Card Test Platform (ATI)
    Processor: AMD Athlon 64 4800+
    Motherboard: ASUS A8R32-MVP
    Memory: 2GB Crucial PC-4000
    Hard Drive: Western Digital Raptor 74GB
    Cooling: Retail AMD Heatsink
    Power Supply: Silverstone Zeus 850W
    Operating System: Windows XP Professional SP2

    • Drivers
    • NVIDIA ForceWare 97.44 8800 GTX SLI
    • NVIDIA ForceWare 93.71
    • ATI Catalyst 6.11



    Quake 4 & X3: Reunion

    image: http://www.legitreviews.com/images/r...5/quake4_t.jpg

    ID Software: Quake 4 v1.2

    ID Software's QUAKE 4, developed by Raven Software, takes players into an epic invasion of a barbaric alien planet in one of the most anticipated first person shooters of 2005. Even today in 2006, Quake 4 is played by professional gamers around the world in the famed World Series of Video Games (WSVG) and is still one of the most played first person shooters on the market.
    image: http://www.legitreviews.com/images/r...421/quake4.gif

    In our first test, Quad SLI jumps out to a slight lead over the rest of the field. The extra power of the 4 GPUs seems to agree with Quake 4.
    X3

    image: http://www.legitreviews.com/images/reviews/365/x3_t.jpg

    Egosoft: X³ Reunion

    The sequel to the award-winning X²: The Threat will introduce a new 3D engine as well as a new story, new ships and new gameplay to greatly increase the variety in the X-universe. The economy of X³: Reunion will be more complex than anything seen in the X-universe before. Factories are being built by NPCs, wars can affect the global economy, NPCs can trade freely and pirates will behave far more realistically.
    Extensive development has gone into the X³ engine, making full use of DirectX 9 technology to create dramatic visual effects and stunningly realistic starships. Coupled with the massively enhanced A.L. (Artificial Life) system, X³: Reunion will present players with an ever-changing, evolving universe, where a player's actions really can shape the future of the universe.
    image: http://www.legitreviews.com/images/reviews/421/x3.gif

    X³ strangely enough still favors ATI’s video cards. The 8800 GTX is not really any faster than the 7900 GTX SLI. We ran the test several times and the outcome was the same on each.



    Full review:
    http://www.legitreviews.com/xfx-gefo...GJVChxSQRps.99


    Here are some tests of the nVidia 8800GTX in SLI.
    This card had two SLI connectors, so it allowed something similar to what the nVidia 7950GX2 did, but in a less complex and more efficient way.
    If a single card was already fabulous, two were something far ahead of their time, and at a minimum they required an investment of 1200€... four cards were only for a select few
    I didn't mention it above, but the price of this card here in Portugal was around 600€ to 700€, depending on the brand and where you bought it.
    Another important aspect of the chip in the 8800GTX that I also didn't mention in the initial text: the G80 is the chip with which nVidia finally matched the image quality that had always been present in ATI's cards. It was one of nVidia's biggest shortcomings, and with this chip it was overcome, leaving nVidia with the same image quality ATI had.
    Speaking of ATI and AMD, this card arrived at a time when AMD was beginning its decline in CPUs, since Intel had introduced its "Core" processors a few months earlier, I believe in June or July of 2006, and from then until today we know how it went: AMD never again gave the consumer desktop market a CPU capable of matching or surpassing Intel's CPUs.
    The image quality I mentioned above was most noticeable on CRT monitors and, once again, only those who had that kind of monitor and used both nVidia and ATI cards will understand what is being discussed here.
    Última edição de Jorge-Vieira : 25-02-16 às 09:28
    http://www.portugal-tech.pt/image.php?type=sigpic&userid=566&dateline=1384876765

 

 