  1. #61 · Jorge-Vieira (Tech Ubër-Dominus)
     Registered: Nov 2013 · Location: City 17 · Posts: 30,005 · Rating: 1 (100%)
    The RV870 Story: AMD Showing up to the Fight

    The Call

    My love/hate relationship with AMD PR continued last year. But lately, it’s been far less hate. Let’s rewind back to the Summer of 2009. I’d been waiting for AMD to call for weeks.
    We all knew that the RV870 was going to launch sometime before the end of the year, and we’re normally briefed on new GPUs around a month or so before we get hardware. The rumors said that the launch had been pushed back, but just like clockwork I got a call in June or July of last year. It was my old friend, Chris Hook of AMD PR.
    This time he wanted me to come to a press event on a carrier off the coast of California. Sigh.
    It’s not that I have anything against carriers. It’s just that all I cared about at that time was the long awaited successor to the RV770. The RV770 was the GPU that unequivocally restored my faith in ATI graphics, an impact shared by others last June. But that’s not how the game is played I’m afraid. AMD promises its management and its partners that they can fill a room (or carrier) up with important press. We get promised access to engineers, useful information and free drinks.

    The USS Hornet. GPUs are in there.
    I’m not easily swayed by free drinks, but Chris Hook knows me well enough by now to know what I’d appreciate even more.
    The Dinner - September 2009

    I had to leave dinner earlier than I wanted to. ASUS’ Chairman Jonney Shih was in town and only had one opportunity to meet me before I left Oakland. Whenever either of us happens to be in the same town, we always make our best effort to meet, and I wasn’t going to let him down. In the same vein that Steve Jobs is successful because he is a product guy at heart, running a company best known for its products, Jonney Shih is an engineer at heart, running a company that has always been known for its excellence in engineering. This wasn’t just another meeting with an executive; this was a meeting with someone who has a passion for the same things I do. His focus isn’t on making money, it’s on engineering. It’s a rare treat.
    My ride was waiting outside. I closed the lid on my laptop, making sure to save the 13 pages of notes I just took while at dinner. And I shook this man’s hand:
    Before I left he asked me to do one thing. He said “Try not to make the story about me. There are tons of hardworking engineers that really made this chip happen”. Like Jonney, Carrell Killebrew has his own combination of traits that make him completely unique in this industry. All of the greats are like that. They’ve all got their own history that brought them to the companies that they work for today, and they have their own sets of personality traits that when combined make them so unique. For Carrell Killebrew it's a mixture of intelligence, pragmatism, passion and humility that's very rare to see. He's also a genuinely good guy. One of his tenets is that you should always expect the best from others. If you expect any less than the best, that’s all you’ll ever get from them. It’s a positive take on people, one that surprisingly enough only burned Carrell once. Perhaps he’s more fortunate than most.
    Mr. Killebrew didn’t make the RV870, but he was beyond instrumental in making sure it was a success. What follows is a small portion of the story of the RV870, the GPU behind the Radeon HD 5800 series. I call it a small portion of the story because despite this article using more than eight thousand words to tell it, the actual story took place over years and in the minds and work of hundreds of engineers. This GPU, like all others (even Fermi) is the lifework of some of the best engineers in the industry. They are the heroes of our industry, and I hope I can do their story justice.
    As is usually the case with these GPU backstories, to understand why things unfolded the way they did we have to look back a few years. Introducing a brand new GPU can take 2 - 4 years from start to finish. Thus to understand the origins of the Radeon HD 5800 series (RV870) we have to look back to 2005.
    Sidebar on Naming
    AMD PR really doesn’t like it when I use the name RV870. With this last generation of GPUs, AMD wanted to move away from its traditional naming. According to AMD, there is no GPU called the RV870, despite the fact that Carrell Killebrew, Eric Demers and numerous others referred to it as such over the past couple of years. As with most drastic changes, it usually takes a while for these things to sink in. I’ve also heard reference to an RV870 jar - think of it as a swear jar but for each time someone calls Cypress an RV870.
    Why the change? Well, giving each member of a GPU family a name helps confuse the competition. It’s easy to know that RV870 is the successor to the RV770. It’s harder to tell exactly what a Cypress is.
    AMD PR would rather me refer to RV870 and the subject of today’s story as Cypress. The chart below shows AMD’s full listing of codenames for the 40nm DX11 GPU lineup:
    GPU Codename
    ATI Radeon HD 5900 Series Hemlock
    ATI Radeon HD 5800 Series Cypress
    ATI Radeon HD 5700 Series Juniper
    ATI Radeon HD 5600/5500 Series Redwood
    ATI Radeon HD 5400 Series Cedar
    Given that we still haven’t purged the RVxxx naming from our vocabulary, I’m going to stick with RV870 for this story. But for those of you who have embraced the new nomenclature - RV870 = Cypress and at points I will use the two names interchangeably. The entire chip stack is called Evergreen. The replacement stack is called the Northern Islands.
    Full article:
    http://www.anandtech.com/show/2937


    Here is the story of the HD 5000 series.
    These cards followed up on what had been done with the previous HD 4000s.
    The chip powering these cards, the RV870, was the first on the market to support DX11; this time around it was AMD/ATI that was first to launch a new graphics card for a new DirectX version.
    At the time these cards left nVidia K.O. for a good few months. nVidia's answer, the GTX 400 series, was disastrous, and the mistake was only corrected with the GTX 500 series, which gave AMD a lead of many months.
    This article and the HD 4000 one show AMD's strength in graphics well; the AMD of today is a shadow of those times.

  2. #62 · LPC (Administrator)
     Registered: Mar 2013 · Location: Multiverso · Posts: 14,945 · Rating: 31 (100%)
    Hi!
    Without a doubt... the G80 chip... I had several 8800 GTX cards, and it was madness when Crysis came out alongside them...

    At the time I even arranged to meet DIMA in Lisbon, I think it was at FIL or something like that, to pick up a BFG card from him for a review for ITU and PCDIGA.
    After that I had several more, even for SLI... It was without a doubt one of the Nvidia graphics cards I enjoyed the most...

    Regards,

    LPC

    My Specs:
    Case: Phanteks Eclipse P400S - CPU: AMD Ryzen 5 - 1600 @ 3.9 Ghz - Board: MSI B350 Tomahawk - RAM: 16GB DDR4 G.Skill RipJaws V 3200Mhz Cas 14-14-14-34 (2x8GB) - GPU: ZOTAC Nvidia GTX 1060 AMP! 6GB
    Cooling: Arctic Cooling 3x F14 Silent - CPU Cooler: Arctic Cooling: Liquid Freezer 360 (6xF12 Fans) - Storage: Samsung SSD 840 EVO 1 TB - PSU: EVGA G3 750W - Monitor: ACER XB270HU 1440p @ 144hz G-Sync

  3. #63 · Enzo (Master Business & GPU Man)
     Registered: Jan 2015 · Location: Euro 2016 champion country · Posts: 6,223 · Rating: 41 (100%)
    I still mean to get those two up and running at full blast.
    BFG also disappeared off the map...

  4. #64 · Jorge-Vieira (Tech Ubër-Dominus)
     Registered: Nov 2013 · Location: City 17 · Posts: 30,005 · Rating: 1 (100%)
    AMD’s Radeon HD 5850: The Other Shoe Drops


    “For those of you looking for the above and a repeat of the RV770/GT200 launch where prices will go into a free fall, you’re going to come away disappointed. That task will fall upon the 5850, and we’re looking forward to reviewing it as soon as we can.”
    -From our Radeon HD 5870 Review
    Today the other shoe drops, with AMD launching the 5870’s companion card: the slightly pared down 5850. It’s the same Cypress core that we saw on the 5870 with the same features: DX11, Eyefinity, angle-independent anisotropic filtering, HDMI bitstreaming, and supersample anti-aliasing. The only difference between the two is performance and power: the 5850 is a bit slower, and a bit less power hungry. If by any chance you’ve missed our Radeon HD 5870 review, please check it out; it goes into full detail on what AMD is bringing to the table with Cypress and the HD 5800 series.

                               HD 5870            HD 5850          HD 4890           HD 4870
    Stream Processors          1600               1440             800               800
    Texture Units              80                 72               40                40
    ROPs                       32                 32               16                16
    Core Clock                 850MHz             725MHz           850MHz            750MHz
    Memory Clock (GDDR5)       1.2GHz (4.8GHz     1GHz (4GHz       975MHz (3900MHz   900MHz (3600MHz
                               data rate)         data rate)       data rate)        data rate)
    Memory Bus Width           256-bit            256-bit          256-bit           256-bit
    Frame Buffer               1GB                1GB              1GB               1GB
    Transistor Count           2.15B              2.15B            959M              956M
    TDP                        188W               151W             190W              150W
    Manufacturing Process      TSMC 40nm          TSMC 40nm        TSMC 55nm         TSMC 55nm
    Price Point                $379               $259             ~$180             ~$160
    AMD updated the specs on the 5850 at the last moment when it comes to power. Idle power usage hasn’t changed, but the final parts are now specified for 151W load power, versus the 160W originally given to us, and 188W on the 5870. So for the power-conscious out there, the 5850 offers a load power reduction in lockstep with its performance reduction.
    As compared to the 5870, AMD has disabled two of the SIMDs and reduced the core clock from 850MHz to 725MHz. This is roughly a 15% drop in clock speed and a 10% reduction in SIMD capacity, for a combined theoretical performance difference of 23%. Meanwhile the memory clock has been dropped from 1.2GHz to 1GHz, for a 17% overall reduction. Notably the ROP count has not been reduced, so the 5850 doesn’t lose as much rasterizing power as it does everything else, the deficit once again being 15% due to the drop in clock speed.
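A quick back-of-the-envelope check of those percentages (a minimal sketch using the clocks and SIMD counts quoted above; the 20-SIMD figure is Cypress's full-chip count):

```python
# Theoretical 5850-vs-5870 gap, from the figures quoted above.
clock_5870, clock_5850 = 850.0, 725.0     # core clocks, MHz
simds_5870, simds_5850 = 20, 18           # 2 of Cypress's 20 SIMDs disabled

clock_drop = 1 - clock_5850 / clock_5870                              # ~15%
simd_drop = 1 - simds_5850 / simds_5870                               # 10%
combined = 1 - (clock_5850 / clock_5870) * (simds_5850 / simds_5870)  # ~23%
mem_drop = 1 - 1.0 / 1.2                  # 1GHz vs 1.2GHz GDDR5 -> ~17%

print(f"clock -{clock_drop:.0%}, SIMDs -{simd_drop:.0%}, "
      f"combined -{combined:.0%}, memory -{mem_drop:.0%}")
# prints: clock -15%, SIMDs -10%, combined -23%, memory -17%
```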
    With the reduction in power usage, AMD was able to squeeze Cypress into a slightly smaller package for the 5850. The 5850 lops off an inch in length compared to the 5870, which will make it easier to fit into cramped cases. However the power connectors have also been moved to the rear of the card, so in practice the space savings won’t be as great. Otherwise the 5850 is a slightly smaller 5870, using the same sheathed cooler design as the 5870, sans the backplate.
    Port-side, the card is also unchanged from the 5870. 2 DVI ports, 1 HDMI port, and 1 DisplayPort adorn the card, giving the card the ability to drive 2 TMDS displays (HDMI/DVI), and a DisplayPort. As a reminder, the DisplayPort can be used to drive a 3rd TMDS display, but only with an active (powered) adapter, which right now still run at over $100.
    AMD tells us that this is going to be a hard launch just like the 5870, with the 5850 showing up for $260. Given that the 5870 did in fact show up on-time and on-price, we expect the same for the 5850. However we don’t have any reason to believe 5850 supplies will be any more plentiful than 5870 supplies – never mind the fact that it’s in AMD’s interests to ship as many 5870s as they can right now given their higher price. So unless AMD has a lot of Cypress dice to harvest, we’re expecting the 5850 to be even harder to find.

    Update: As of Wednesday afternoon we have seen some 5850s come in to stock, only to sell out again even sooner than the 5870s did. It looks like 5850s really are going to be harder to find.

    Full review:
    http://www.anandtech.com/show/2848


    Here is a bit more of the AMD HD 5000 story.

  5. #65 · Jorge-Vieira (Tech Ubër-Dominus)
     Registered: Nov 2013 · Location: City 17 · Posts: 30,005 · Rating: 1 (100%)
    A Quick Analysis of the NVIDIA Fermi Architecture

    NVIDIA GT200 and GT300
    GT200 was based upon G80, though not without significant improvements. The total number of shader pipelines was increased from 128 to 240, gathered into 10 clusters with 3 subclusters each. Like G92, it featured 8 texture load/store units per cluster and supported PCI Express v2.0. Like G80, it relied upon NVIO for output interfaces. Every register unit of GT200 operated with a double-size register file (64KB), which allowed improved performance with many threads and long shaders. With 3 subclusters per cluster, the 1st level texture cache got a size increase from 16KB to 24KB per cluster (240KB in total). Although the total number of texture filtering units didn't change, their performance was also improved significantly. Last but not least, there were 8 memory channels of 64 bits each (512 bits in total), which implied 8 raster partitions with 4 ROPs each (32 ROPs in total). Such a wide memory interface had been introduced in the AMD/ATI R600, though the subsequent top-performance graphics processors from AMD/ATI featured less complicated 256-bit memory designs.
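The unit counts in that paragraph multiply out consistently; a minimal sketch of the bookkeeping:

```python
# GT200 unit bookkeeping, from the figures above.
clusters, subclusters, pipes_per_subcluster = 10, 3, 8
shader_pipelines = clusters * subclusters * pipes_per_subcluster  # 240

l1_tex_cache_kb = clusters * 24          # 24KB per cluster -> 240KB total
channels, channel_bits = 8, 64
bus_width = channels * channel_bits      # 512-bit memory interface
rops = channels * 4                      # 4 ROPs per raster partition -> 32

print(shader_pipelines, l1_tex_cache_kb, bus_width, rops)  # 240 240 512 32
```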

    GT200 was supposed to enter the market in November of 2007, but it actually appeared only in June of 2008. Nevertheless, it was a very impressive design, counting 1.4 billion transistors on an enormous 576mm² die built on TSMC's 65nm process. Needless to say, it was very expensive to manufacture, so a 55nm die shrink called GT200b or GT206 appeared in January of 2009 with a die size of 470mm². It was also a bit late; the original schedule mentioned something about August of 2008. A 40nm version called GT200c or GT212 was never produced. Neither GT200 nor any of its family members supported the complete DirectX 10.1 feature set (Shader Model 4.1). The company also had issues with other 40nm designs which were not nearly as complicated as GT200. In particular, GT214 was sent back for another development cycle and saw release as GT215, while GT216 and GT218 were delayed several times. It seems obvious that NVIDIA has real problems with 40nm, but instead of solving them as soon as possible they have placed a large bet on GT300, another monstrous design. A big mistake? Time will tell. In general, GT200-based cards turned out to be expensive, power-hungry devices (the GeForce GTX 280 was advertised with a 650USD initial target price and 236W TDP), and they arrived on the market about half a year late. So far, GT300 seems to be following GT200's bad luck.

    There isn't much to say about GT300 (also advertised as GF100) when it comes to architecture and technology, as these things are kept pretty much confidential, but some information has surfaced. There are 512 pipelines gathered into 4 clusters, now called graphics processing clusters, and each such cluster is subdivided into 4 subclusters, still called streaming multiprocessors. So there are 128 pipelines per cluster and 32 pipelines per subcluster. Every subcluster contains 2 warp schedulers, 2 dispatch units, 4 special function units, 16 texture load/store units, a register unit with a large 128KB register file, and so on. There are 64KB of local memory per cluster which may be user-configured as 16KB of 1st level cache (hardware managed) and 48KB of shared memory (software managed), or vice versa. As mentioned before, G80 and GT200 have 16KB of shared memory per cluster and no true 1st level cache. In general, a single cluster of GT300 is more advanced than one of either G80 or GT200. It has been announced that GT300 will have 768KB of 2nd level cache; to be precise, there will be 128KB of such cache per memory channel. G80 and GT200 also have 2nd level cache, 32KB or 64KB per memory channel respectively, but keep in mind that their shader pipelines cannot make any use of it. GT300 can grant access to the 2nd level cache to both texture units and shader pipelines in read/write mode. As for the memory interface, early rumours said it was going to be 512 bits wide, but now we can be sure that there will be 6 memory channels of 64 bits each (384 bits in total), like in G80. The primary memory type for GT300 will be GDDR5 SDRAM, as opposed to G80 and GT200, which relied upon GDDR3 SDRAM.
While ECC implementation for the register file and caches is going to be regular SEC/DED, NVIDIA have developed a proprietary ECC algorithm for memory protection: there will be no additional data lines or memory chips installed for this purpose, as checksums will be stored in reserved portions of regular video memory. GT300 will support the IEEE 754-2008 standard for floating point calculations instead of the older IEEE 754-1985, though there isn't much difference. GT300 also seems to have hardware tessellation logic as part of the PolyMorph engine. What's even more interesting, there are expected to be as many tessellation units as clusters. Finally, GT300 will make use of NVIO, just like G80 and GT200.
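The same kind of bookkeeping works for the GT300 figures above (a sketch using only numbers from this article):

```python
# GT300/GF100 unit bookkeeping, from the figures above.
gpcs, sms_per_gpc, pipes_per_sm = 4, 4, 32
shader_pipelines = gpcs * sms_per_gpc * pipes_per_sm   # 512
pipes_per_gpc = sms_per_gpc * pipes_per_sm             # 128

channels, channel_bits = 6, 64
bus_width = channels * channel_bits                    # 384-bit, like G80
l2_cache_kb = channels * 128                           # 128KB per channel -> 768KB

print(shader_pipelines, pipes_per_gpc, bus_width, l2_cache_kb)  # 512 128 384 768
```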

    GT300 is expected to consist of 3 billion transistors with a die size of 530mm² on TSMC's 40nm process. Considering the price of a single 300mm wafer at TSMC for this process, 5000USD to 6000USD, and the die size above, GT300 is going to be much more expensive than the 55nm GT200b. If one also considers the amount of resources spent on the development of GT300 and the Fermi architecture, as well as low manufacturing yields and delays to market, NVIDIA will have a tough job generating any reasonable profit out of this project. As for the release schedule, GT300 was originally planned to be supplied in quantity to OEMs in Q3 2009. That slipped to Q4 2009 after some serious design and manufacturing issues, kept strictly confidential. In any case, GT300 failed entirely to hit the Christmas and New Year sales, and the release was postponed once again to Q1 2010. The latest rumours say it's going to happen in March of 2010, so let's see.


    Conclusions

    It's difficult to make any statements about the performance of GT300. In fact, it depends mostly upon the strengths and weaknesses of the Fermi architecture, as well as the real clock speed of the GT300 shader domain. The latter is expected to be between 1.5GHz and 2.0GHz, though probably closer to the lower limit than the upper. Anyway, let's suppose that actual performance will be between 1600 and 3000 gigaFLOPS in single precision floating point, or from 800 to 1500 gigaFLOPS in double precision. That's very impressive, because the GT200-based Tesla C1060 with its shader domain at 1.3GHz can do 933 gigaFLOPS single precision or 78 gigaFLOPS double precision. The primary competitor, the 40nm RV870 (Cypress) based Radeon HD5870 by AMD/ATI, delivers 2720 gigaFLOPS single precision or 544 gigaFLOPS double precision at 850MHz. The current AMD/ATI top product for scientific calculations, the 55nm RV790-based FireStream 9270, can do 1200 gigaFLOPS single precision or 240 gigaFLOPS double precision at 750MHz. It seems apparent that the next RV870-based FireStream will be at least two times faster than model 9270. It is also apparent that future GT300-based products won't gain any serious advantage over RV870-based products in single precision performance, but will prevail significantly in double precision. The primary conclusion is that when it comes to computer gaming, GT300 and RV870 will be pretty much equal in terms of performance, but GT300 will be preferred for scientific calculations. Actual prices, power consumption, support quality and so on may adjust the decision here and there, as usual.
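The peak numbers quoted above follow from shaders × FLOPs per clock × clock speed. A sketch, assuming the usual 2 FLOPs/clock (MAD) and a 1/5 DP rate for RV870, 3 FLOPs/clock (MAD+MUL) for GT200, and taking the C1060 shader domain as 1.296GHz:

```python
def peak_gflops(shaders: int, flops_per_clock: int, clock_ghz: float) -> float:
    """Theoretical peak throughput in gigaFLOPS."""
    return shaders * flops_per_clock * clock_ghz

# Radeon HD5870 (RV870): 1600 shaders, MAD = 2 FLOPs/clock, 850MHz
hd5870_sp = peak_gflops(1600, 2, 0.85)   # 2720 GFLOPS
hd5870_dp = hd5870_sp / 5                # DP at 1/5 the SP rate -> 544 GFLOPS

# Tesla C1060 (GT200): 240 shaders, MAD+MUL = 3 FLOPs/clock, ~1.296GHz
c1060_sp = peak_gflops(240, 3, 1.296)    # ~933 GFLOPS

print(hd5870_sp, hd5870_dp, round(c1060_sp))  # 2720.0 544.0 933
```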

    (continued; 28-Mar-2010)

    So, GT300 (or GF100) hit the market officially on the 26th of March. There are two cards released by NVIDIA through their partners, the GeForce GTX470 and GeForce GTX480. The most interesting thing is that both of them are based upon GT300 with some units disabled. It seems NVIDIA has faced really poor manufacturing yields, but it was hardly possible for them to delay their Fermi-based products any further, so they made the decision. Well, it's the first time NVIDIA has released a top performance product with masked units. A very unpopular move which may cost NVIDIA some reputation. It seems they simply had no other choice: something is better than nothing at all. In terms of competition, the GeForce GTX470 is supposed to be an alternative to the Radeon HD5850, and the GeForce GTX480 is going to hurt Radeon HD5870 sales. See the table below for the cards' specifications. Note that the execution units of NVIDIA and AMD/ATI graphics processors cannot be compared by raw numbers due to very different architectures, so both real and approximate effective numbers are shown for AMD/ATI products.
                                   NVIDIA           NVIDIA           AMD/ATI            AMD/ATI
                                   GeForce GTX470   GeForce GTX480   Radeon HD5850      Radeon HD5870
    Graphics processor             GT300 (GF100)    GT300 (GF100)    RV870              RV870
    Clock speed (core logic)       607MHz           700MHz           725MHz             850MHz
    Clock speed (shader pipelines) 1215MHz          1400MHz          725MHz             850MHz
    TMUs                           56               60               18 (72 effective)  20 (80 effective)
    ROPs                           40               48               8 (32 effective)   8 (32 effective)
    Shader pipelines               448              480              288 (1440 eff.)    320 (1600 eff.)
    Clock speed (memory) (1)       3350MHz          3700MHz          4000MHz            4800MHz
    Memory bus width               320-bit          384-bit          256-bit            256-bit
    Memory size                    1280MB           1536MB           1024MB / 2048MB    1024MB / 2048MB
    Memory type                    GDDR5 SDRAM      GDDR5 SDRAM      GDDR5 SDRAM        GDDR5 SDRAM
    TDP (2)                        215W             250W             151W               188W
    Idle power consumption         ~50W             ~50W             ~30W               ~30W
    MSRP                           350USD           500USD           300USD             400USD

    (1) effective data transfer speed of GDDR5 SDRAM;
    (2) real world peak power consumption may be higher.

    As you may have guessed already, the GeForce GTX470 is powered by GT300 with 2 subclusters disabled, so minus 64 shader pipelines and 8 TMUs. The memory bus width is only 320-bit, hence one 64-bit memory controller is disabled, together with 8 ROPs and 128KB of the 2nd level cache. Considering the low clock speeds, especially the 1.2GHz shader domain, this video card isn't going to fly sky high. On the other hand, NVIDIA will be able to satisfy market demand for GeForce GTX470 cards even with poor manufacturing yields. GT300 chips for the GTX480 come with 1 subcluster disabled. That's not good, but there are other things to worry about. First of all, the shader domain clocked at 1.4GHz isn't what most people, myself included, expected from this top performance product. Although I wasn't too optimistic, I expected it to cross the 1.5GHz boundary at least. Another important issue is power consumption. I'm not sure what to make of those 250W of TDP reported by NVIDIA. There is some information that real world peak power consumption of the GTX480 is 300W to 320W, and the card gets very hot even when running at default clock speeds: its core temperatures are consistently well over 90°C. Finally, here comes the money question. The Radeon HD5870 is priced at 400USD for a 1GB version, it has been available on the market for 6 months, and it's less power hungry and about as fast as the GeForce GTX480 (plus or minus 10% here and there doesn't make much difference), while the latter is priced at 500USD. Frankly speaking, it doesn't make any sense.
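The GTX470 salvage arithmetic in that paragraph checks out against the full-chip figures given earlier in the article (a sketch; the 48-ROP and 768KB full-chip values come from the GT300 description above):

```python
# GeForce GTX470: GT300 with 2 subclusters and 1 memory channel disabled.
full_pipes, pipes_per_sm = 512, 32
gtx470_pipes = full_pipes - 2 * pipes_per_sm   # 448 shader pipelines

gtx470_bus = (6 - 1) * 64                      # 320-bit bus
gtx470_rops = 48 - 8                           # minus one raster partition -> 40
gtx470_l2_kb = (6 - 1) * 128                   # 128KB of L2 lost with the channel -> 640

print(gtx470_pipes, gtx470_bus, gtx470_rops, gtx470_l2_kb)  # 448 320 40 640
```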

    And one more thing. The only serious advantage GT300 could have over RV870 is outstanding double precision floating point performance. However, GT300-based GeForce cards would then be highly competitive against GT300-based Tesla cards in some markets. Keep in mind that superior double precision performance is of very little to zero importance when playing computer games, encoding or decoding video streams, etc. So NVIDIA have reduced the double precision floating point performance of GT300-based GeForce cards by a factor of four, probably through a software lock (136 gigaFLOPS for the GTX470 and 168 gigaFLOPS for the GTX480). It's unclear whether this is a temporary solution or not, but currently the GeForce GTX470 and GTX480 are much slower in double precision than the Radeon HD5850 and HD5870 respectively. Those who need superior double precision performance are kindly advised by NVIDIA to purchase Tesla C2050 or C2070 cards. The first comes with 3GB of 384-bit 3600MHz GDDR5 SDRAM (2.625GB available with ECC enabled), the second with 6GB of 384-bit 4000MHz GDDR5 SDRAM (5.25GB available with ECC enabled). Both of them are powered by GT300 with 2 subclusters disabled (minus 64 shader pipelines). These cards can do double precision floating point calculations at 560 and 628 gigaFLOPS respectively, and single precision at 1120 and 1256 gigaFLOPS respectively. NVIDIA wants 2500USD for the C2050 and 4000USD for the C2070.
    Article:
    http://alasir.com/articles/nvidia_fe...itecture.shtml


    I already posted this article a few pages back: http://www.anandtech.com/show/2918 about Fermi.
    Here is another one, with different details, continuing the story that has been unfolding in these last posts.
    This architecture came as the answer to the AMD HD 5000 series, a few months after AMD had put its cards on the market. Fermi is nVidia's first DX11 chip, and when it arrived it was a disaster, perhaps one of nVidia's worst chips of recent times, matched only by the huge failure of the R600 chip (ATI HD 2000); for that very reason the chip was the subject of countless GIFs showing food being cooked on top of it.
    Around this time AMD's market share, counting only DX11 graphics cards, reached about 90%.

  6. #66 · Jorge-Vieira (Tech Ubër-Dominus)
     Registered: Nov 2013 · Location: City 17 · Posts: 30,005 · Rating: 1 (100%)
    NVIDIA's GeForce GTX 580: Fermi Refined

    The GTX 480… it’s hotter, it’s noisier, and it’s more power hungry, all for 10-15% more performance. If you need the fastest thing you can get then the choice is clear, otherwise you’ll have some thinking to do to decide what you want and what you’re willing to live with in return.
    Us on the GTX 480
    The GeForce GTX 480 and the associated GF100 GPU have presented us with an interesting situation over the last year. On the one hand NVIDIA reclaimed their crown for the fastest single GPU card on the market, and in time used the same GPU to give rise to a new breed of HPC cards that have significantly expanded the capabilities of GPU computing. On the other hand, like a worn and weary athlete finally crossing the finish line, this didn’t come easy for NVIDIA. GF100 was late, and the GTX 480 while fast was still hot & loud for what it was.
    Furthermore GTX 480 and GF100 were clearly not the products that NVIDIA first envisioned. We never saw a product using GF100 ship with all of its SMs enabled – the consumer space topped out at 15 of 16 SMs, and in the HPC space Tesla was only available with 14 of 16 SMs. Meanwhile GF100’s younger, punchier siblings put up quite a fight in the consumer space, and while they never were a threat to GF100, it ended up being quite the surprise for how close they came.
    Ultimately the Fermi architecture at the heart of this generation is solid – NVIDIA had to make some tradeoffs to get a good gaming GPU and a good compute GPU in a single product, but it worked out. The same can’t be said for GF100, as its large size coupled with TSMC’s still-maturing 40nm process led to an unwieldy combination that produced flaky yields and leaky transistors. Regardless of who’s ultimately to blame, GF100 was not the chip it was meant to be.
    But time heals all wounds. With GF100 out the door NVIDIA has had a chance to examine their design, and TSMC the chance to work the last kinks out of their 40nm process. GF100 was the first Fermi chip, and it would not be the last. With a lesson in hand and a plan in mind, NVIDIA went back to the drawing board to fix and enhance GF100. The end result: GF110, the next iteration of Fermi. Hot out of the oven, it is launching first in the consumer space and is forming the backbone of the first card in NVIDIA’s next GeForce series: GeForce 500. Launching today is the first such card, the GF110-powered GeForce GTX 580.
                                GTX 580             GTX 480             GTX 460 1GB         GTX 285
    Stream Processors           512                 480                 336                 240
    Texture Address / Filtering 64/64               60/60               56/56               80/80
    ROPs                        48                  48                  32                  32
    Core Clock                  772MHz              700MHz              675MHz              648MHz
    Shader Clock                1544MHz             1401MHz             1350MHz             1476MHz
    Memory Clock                1002MHz (4008MHz    924MHz (3696MHz     900MHz (3.6GHz      1242MHz (2484MHz
                                data rate) GDDR5    data rate) GDDR5    data rate) GDDR5    data rate) GDDR3
    Memory Bus Width            384-bit             384-bit             256-bit             512-bit
    Frame Buffer                1.5GB               1.5GB               1GB                 1GB
    FP64                        1/8 FP32            1/8 FP32            1/12 FP32           1/12 FP32
    Transistor Count            3B                  3B                  1.95B               1.4B
    Manufacturing Process       TSMC 40nm           TSMC 40nm           TSMC 40nm           TSMC 55nm
    Price Point                 $499                ~$420               ~$190               N/A
    GF110 is a mix of old and new. To call it a brand-new design would be disingenuous, but to call it a fixed GF100 would be equally shortsighted. GF110 does have a lot in common with GF100, but as we’ll see when we get into the design of GF110, it is its own GPU. In terms of physical attributes it’s very close to GF100; the transistor count remains at 3 billion (with NVIDIA undoubtedly taking advantage of the low precision of that number), while the die size is 520mm². NVIDIA never did give us the die size for GF100, but commonly accepted values put it at around 530mm², meaning GF110 is a hair smaller.
    But before we get too deep into GF110, let’s start with today’s launch card, the GeForce GTX 580. GTX 580 is the first member of the GeForce 500 series, giving it the distinction of setting precedent for the rest of the family that NVIDIA claims will soon follow. Much like AMD last month, NVIDIA is on their second trip with the 40nm process, meaning they’ve had the chance to refine their techniques but not the opportunity to significantly overhaul their designs. As a result the 500 series is going to be very familiar to the 400 series – there really aren’t any surprises or miracle features to talk about. So in many senses, what we’re looking at today is a faster version of the GTX 480.
    So what makes GTX 580 faster? We’ll start with the obvious: it’s a complete chip. All the L2 cache, all the ROPs, all the SMs, it’s all enabled. When it comes to gaming this is as fast as GF110 can be, and it’s only through NVIDIA’s artificial FP64 limitations that double-precision computing isn’t equally unrestricted. We have wondered for quite some time what a full GF100 chip would perform like – given that GTX 480 was short on texture units, shaders, and polymorph engines, but not ROPs – and now the answer is at hand. From all of this GTX 580 has 6.6% more shading, texturing, and geometric performance than the GTX 480 at the same clockspeeds. Meanwhile the ROP count and L2 cache remain unchanged; 48 ROPs are attached to 768KB of L2 cache, which in turn is attached to six 64-bit memory controllers.

    GeForce GTX 580
    The second change of course is clockspeeds. The reference GTX 480 design ran at 700MHz for the core and 924MHz (3696MHz data rate) for the GDDR5. Meanwhile GTX 580 brings that up to 772MHz for the core and 1002MHz (4008MHz data rate), marking a 72MHz (10%) increase in core clockspeed and a slightly more modest 78MHz (8%) increase in memory bandwidth. This is a near-equal increase in the amount of work that GTX 580 can process and the amount of work its memory can feed it, which should offer a relatively straightforward increase in performance.
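Those increments are easy to verify from the spec table above (a quick sketch; the 16-vs-15 SM count follows from the 512 and 480 shader totals, and the unit figure is what the article rounds to 6.6%):

```python
# GTX 580 vs GTX 480 uplift, from the figures above.
unit_gain = 512 / 480 - 1        # full 16 SMs vs 15 -> ~6.7% more shading/texturing
core_gain = 772 / 700 - 1        # ~10% core clock increase
mem_gain = 1002 / 924 - 1        # ~8% memory clock / bandwidth increase

print(f"units +{unit_gain:.1%}, core +{core_gain:.1%}, memory +{mem_gain:.1%}")
# prints: units +6.7%, core +10.3%, memory +8.4%
```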
    The last, but certainly not least, change coming from GTX 480 is in GF110 itself. NVIDIA has ported over GF104’s faster FP16 (half-precision) texture filtering capabilities, giving GF110/GTX 580 the ability to filter 4 FP16 pixels per clock, versus 2 on GF100/GTX 480. The other change ties in well with the company’s heavy focus on tessellation: a revised Z-culling/rejection engine that does a better job of throwing out pixels early, giving GF110/GTX 580 more time to spend rendering the pixels that will actually be seen. This is harder to quantify (and impossible for us to test), but NVIDIA puts it at another 8% performance improvement.
    Meanwhile NVIDIA hasn’t ignored GTX 480’s hot and loud history, and has spent some time working on things from that angle. We’ll dive in to NVIDIA’s specific changes later, but the end result is that through some optimization work they’ve managed to reduce their official TDP from 250W on the GTX 480 to 244W on the GTX 580, and in practice the difference is greater than that. NVIDIA’s cooling system of choice has also been updated, working in tandem with GTX 580’s lower power consumption to bring down temperatures and noise. The end result is a card that should be, and is, cooler and quieter while at the same time being faster than GTX 480.

    GF110
    The downside to this is that if it sounds like a fairy tale, it almost is. As you’ll see we have a rather high opinion of GTX 580, but we’re not convinced you’re going to be able to get one quite yet. NVIDIA is technically hard-launching GTX 580 today at $499 (GTX 480’s old price point), but they aren’t being very talkative about the launch quantity. They claim it’s for competitive reasons (to keep AMD from finding out) and we can certainly respect that, but at the same time it’s rare in this industry for someone to withhold information because it’s a good thing. We really hope to be surprised today and see GTX 580s available for everyone that wants one, but we strongly suspect that it’s going to be available in low quantities and will sell out very quickly. After that it’s anyone’s guess on what the refresh supply will be like; our impression of matters is that yields are reasonable for such a large chip, but that NVIDIA didn’t spend a lot of time stockpiling for today’s launch.
    In any case, with GTX 580 taking the $500 spot and GF110 ultimately destined to replace GF100, GF100 based cards are going to be on their way out. NVIDIA doesn’t have an official timeline, but we can’t imagine they’ll continue producing GF100 GPUs any longer than necessary. As a result the GTX 480 and GTX 470 are priced to go, falling between the GTX 580 and the GTX 460 in NVIDIA’s lineup for now until they’re ultimately replaced with other 500 series parts. For the time being this puts the GTX 480 at around $400-$420, and the GTX 470 – still doing battle with the Radeon HD 6870 – is at $239-$259.
    Meanwhile AMD does not have a direct competitor for the GTX 580 at the moment, so their closest competition is going to be multi-GPU configurations. In the single card space there’s the Radeon HD 5970, which is destined for replacement soon and as a result AMD is doing what they can to sell off Cypress GPUs by the end of the year. The last reference 5970 you can find on Newegg is a Sapphire card, which is quite blatantly priced against the GTX 580 at $499 with a $30 rebate. Given that it’s the last 5970, we’d be surprised if it was in stock for much longer than the initial GTX 580 shipments.
    For cards you do stand a good chance of getting, a pair of 6870s will set you back between $480 and $500, making it a straightforward competitor to the GTX 580 in terms of price. A pair of cards isn’t the best competitor, but CrossFire support is widely available on motherboards so it’s a practical solution at that price.
    Fall 2010 Video Card MSRPs
    NVIDIA             Price   AMD
    GeForce GTX 580    $500    Radeon HD 5970
    GeForce GTX 480    $420
                       $300    Radeon HD 5870
    GeForce GTX 470    $240    Radeon HD 6870
                       $180    Radeon HD 6850


    Full review:
    http://www.anandtech.com/show/4008/n...eforce-gtx-580

    After a disastrous stumble with the first generation of Fermi, and with AMD comfortably in the lead as far as DX11 graphics cards were concerned, NVIDIA fixed its mistake and, a few months after the launch of the GTX 400 series, released Fermi in a new revision and a new series, the GTX 500.
    This "new" Fermi corrected the initial problems and put NVIDIA back on course to fight AMD for sales; this is what the original Fermi should have been.
    This Fermi chip is at the root of the architectures that followed, Kepler and Maxwell, since those more recent chips were evolutions of Fermi; quite likely the Pascal chip due in early 2016 will still carry some influence from Fermi as well.
    Last edited by Jorge-Vieira : 25-12-15 at 10:42
    http://www.portugal-tech.pt/image.php?type=sigpic&userid=566&dateline=1384876765

  7. #67
    Master Business & GPU Man Avatar de Enzo
    Registo
    Jan 2015
    Local
    País Campeão Euro 2016
    Posts
    6,223
    Avaliação
    41 (100%)
    Great times. I jumped from the 8800 straight to the GTX 7xx, so I missed these great wars in between. A lot of good things happened for AMD in those days.
    I even had a 480 that I never got around to using, and two 590s. Damn, they are hooooott!
    Last edited by Enzo : 25-12-15 at 13:07
    Due to lack of space in the signature, I decided to put my projects under "About me"
    http://www.portugal-tech.pt/member.php?u=801

  8. #68
    Tech Ubër-Dominus Avatar de Jorge-Vieira
    Registo
    Nov 2013
    Local
    City 17
    Posts
    30,005
    Avaliação
    1 (100%)
    Yes, those were great times, far livelier and more interesting than what we have seen today and in recent years, when things only ever move on one side.
    Throughout this history, both sides have had high points, low points, excellent products and rather poor ones.
    As things come back to me, I will try to post them here.

  9. #69
    Tech Ubër-Dominus Avatar de Jorge-Vieira
    Registo
    Nov 2013
    Local
    City 17
    Posts
    30,005
    Avaliação
    1 (100%)
    Barts Architecture Refresh


    A new architecture we are all very familiar with

    Introduction


    It has been a good year for the AMD GPU team. Since the release of the Radeon HD 5000 series of cards back in September of 2009 there has been a noticeable resurgence in the number of AMD/ATI GPU users, and we ourselves, among countless other hardware enthusiasts, touted the Evergreen architecture and the corresponding graphics cards as outstanding. For a good six months AMD enjoyed an exclusive on DX11-class hardware and used it, as well as killer features like Eyefinity, to woo gamers back to the red team.
    This lead netted AMD a 90% share of the DX11 market and allowed AMD to output 25 million GPUs across their spectrum of cards.


    And a spectrum, it is. The Radeon HD 5000 series consists of 11 cards in total ranging from the HD 5970 dual-GPU offering to the tiny, low powered, HTPC-ready HD 5450.
    But now it is time to move past the 5000 series and discuss what is next, what is available starting today: the Radeon HD 6800 series.
    The "Barts" Architecture - Radeon HD 6800 series



    The Barts chip, officially known as the HD 6800 series of cards, is based almost completely on the Evergreen architecture that we detailed more than a year ago in our HD 5000 series launch article. But while this launch might be viewed as a rebranding by some, there were notable changes in the architecture and design of the chip that allow AMD to offer improved performance and better performance per watt and per unit of die area.

    Barts will also be bringing the Evergreen architecture seen in the HD 5800 series of cards down into the sub-$200 market - no doubt a direct reaction to NVIDIA's GeForce GTX 460 cards, which have been very successful in their short life on shelves. Later in the year we will see the release of future architectures that differ much more, in the Cayman and Antilles product lines. We'll have to leave you with that tease for now and touch on both of those items later.

    So if Barts is based on the currently available HD 5000 Evergreen architecture, what was AMD able to do to improve performance and compel us to accept the new HD 6800 cards? AMD was definitely pushing the fact that the Barts design has improved performance per die area - and while this is true their claim of a "35% improvement" is a bit of a stretch. In that metric they were comparing the new HD 6870 GPU (which is a fully enabled chip) to a Radeon HD 5850 (which is an HD 5870 with part of the chip disabled) which is not a completely legitimate comparison.
    The new cards will offer improved tessellation performance (which we'll touch on below) as well as a new anti-aliasing feature, an updated UVD block and even support for new display configurations.

    Here you can see a full Barts GPU that should look pretty familiar to those of you who studied the HD 5000 series of cards and the Cypress GPU. The architecture is basically the same but has been reconfigured to improve performance per square mm of die space. You'll notice that there are now 14 SIMD engines totaling 1120 stream processors. Yes, that is fewer stream processors than the 1600 found in the HD 5870 and even the 1440 found in the Radeon HD 5850, but fear not - thanks to increased clock speeds and other changes the Barts GPU will be quite competitive.
    The memory interface is still a 256-bit connection but it has been slightly degraded compared to the Cypress controller, again in order to save die space. The new memory controller has about 20% less memory bandwidth but takes up less than 50% of the die area; and while that might not be important for the $400 market of GPUs it can mean a lot for power consumption and profit margin on the $200 cards. Thus, AMD decided to use this iteration for the HD 6800 cards that will be found in that exact market segment.
    What did NOT change is the number of render back ends (aka ROPs) on the GPU; the 32 found on the HD 5870 remain on the HD 6870 and HD 6850. This is basically a "rebalancing" of the architecture for AMD like NVIDIA did moving from the GF100 to the GF104 chip. By increasing the ratio of ROPs to SIMD units AMD was able to pull more efficiency out of the existing architecture as well as push the clock frequencies higher. This is a game of Jenga like you have never played. Thanks to the space saved by the different memory controller, AMD was able to keep the ROP count high.
    The Barts GPU consists of 1.7 billion transistors, down quite a bit from the 2.15 billion used on the HD 5870 card (and the HD 5850, though with SIMDs disabled), and thus has a smaller die area (255 mm^2 versus 334 mm^2).
    A quick note: while the diagram above appears to indicate that we have dual dispatch processors on the HD 6800 cards that did not exist on the HD 5800 cards, that is actually not the case. The application of the ultra-threaded dispatch processor is identical from the Cypress parts to Barts; only the way the diagram is built has changed.

    Tessellation is one area where NVIDIA's Fermi architecture really excelled, and in certain tests, like the Unigine Heaven benchmark, that is incredibly apparent. NVIDIA is quick to tout this advantage and to push any game or test that uses heavy tessellation on the media and consumers. While this is AMD's 7th generation of tessellation engine in its GPUs, it definitely needs some help keeping up with the work NVIDIA did on its products. AMD has tweaked the engine to the point where it is seeing improved performance at lower tessellation factors (1-11, as seen above), where games using adaptive tessellation engines will be running the majority of the time.
    You can see in the graph though that as the tessellation factor increases the difference between the HD 5800 series and the HD 6800 series will minimize quickly so we'll have to see how much better the real-world gaming performance gets between these two generations.
    With all of these architectural tweaks AMD was able to improve the clock frequencies on the cards as well.

    Radeon HD 6870 Specs
    While the Radeon HD 5870 ran at 850 MHz and the HD 5850 ran at 725 MHz, the new Radeon HD 6870 will come with the full Barts GPU clocked at 900 MHz out of the gate. It requires a pair of 6-pin power connections, though the board power is rated at 151 watts - roughly 30 watts lower than the HD 5870 card.

    Radeon HD 6850 Specs
    The new HD 6850 will come with a 775 MHz clock rate, 960 stream processors (12 SIMD units), the full 32 ROPs and a 127 watt board power and a single 6-pin connector.

    Looking at this quick comparison table, the new Radeon HD 6870 has about the same performance as the HD 5850 even though it has quite a few "lower" specifications. Lower memory bandwidth, fewer SIMD units, fewer texture units; but thanks to the clock speed increase (from 725 MHz to 900 MHz) the new HD 6870 card will be putting up a fight.
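One way to see why the fight is even (our own rough sketch, using the usual VLIW5 peak-throughput rule of thumb of 2 ops per stream processor per clock; not a figure from the article):

```python
# Theoretical single-precision throughput: stream processors x 2 ops (FMA) x clock.
# Shows how the HD 6870's higher clock offsets its smaller shader count.
def gflops(stream_processors, clock_mhz):
    return stream_processors * 2 * clock_mhz / 1000

hd6870 = gflops(1120, 900)   # 1120 SPs at 900 MHz
hd5850 = gflops(1440, 725)   # 1440 SPs at 725 MHz
print(f"HD 6870: {hd6870:.0f} GFLOPS, HD 5850: {hd5850:.0f} GFLOPS")
# The two land within ~4% of each other on paper.
```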
    Article:
    http://www.pcper.com/reviews/Graphic...ecture-Refresh


    Roughly a year after the launch of the HD 5000 series, AMD debuted a new generation of graphics cards, the HD 6000.
    These cards were only an improvement on the excellent architecture already present in the HD 5000, so they brought no big novelties, and for that reason this series did not have the success of the earlier HD 4000 and HD 5000, though they were still excellent cards.
    One number here that shows AMD's strength is the nearly 90% market share; as we know today, AMD's only competitor in discrete graphics is now getting close to that same number, and we are still in the DX11 era, although 2016 marks the start of a new era with DX12.
    Another shortcoming one can now point out is that AMD ended driver support for these cards a few weeks ago, while the competition still provides driver support for the cards that competed with the HD 6000; since these are DX11 cards and games based on that API are still coming out, AMD could have extended support a little longer.
    The HD 6000 marks the end of an architecture present across several generations of graphics cards: VLIW.

  10. #70
    Master Business & GPU Man Avatar de Enzo
    Registo
    Jan 2015
    Local
    País Campeão Euro 2016
    Posts
    6,223
    Avaliação
    41 (100%)
    Very good. NVIDIA took quite a beating back in those days :o
    Let's see how 2016 turns out. I'm very disappointed with them, but I still haven't lost hope that they'll fly high again.

  11. #71
    Tech Ubër-Dominus Avatar de Jorge-Vieira
    Registo
    Nov 2013
    Local
    City 17
    Posts
    30,005
    Avaliação
    1 (100%)
    Indeed; the problem is how things turned around, and now it is AMD that has been taking a monumental beating.
    AMD will surely fly high again; as this history shows, these things are cyclical. The problem is that I don't think one year will be enough for AMD to recover. Just look at how long NVIDIA took to recover from the blow it took back when that chart showed 90% market share. We should count on a few years, and more than one or two generations of graphics cards, to put AMD back on track, or at least back to around 40-45% market share.
    They have the knowledge and the experience; they just need to do something next year similar to what they did with the HD 4000 and HD 5000. Even if they arrive late, if they come with that same spirit and quality, things will start to balance out again.

  12. #72
    Tech Ubër-Dominus Avatar de Jorge-Vieira
    Registo
    Nov 2013
    Local
    City 17
    Posts
    30,005
    Avaliação
    1 (100%)
    The Kepler Architecture


    NVIDIA fans have been eagerly waiting for the new Kepler architecture ever since CEO Jen-Hsun Huang first mentioned it in September 2010. In the interim, we have seen the birth of a complete lineup of AMD graphics cards based on its Southern Islands architecture including the Radeon HD 7970, HD 7950, HD 7800s and HD 7700s. To the gamer looking for an upgrade it would appear that NVIDIA had fallen behind; but the company is hoping that today's release of the GeForce GTX 680 will put them back in the driver's seat.
    This new $499 graphics card will directly compete against the Radeon HD 7970, and it brings quite a few "firsts" to NVIDIA's lineup. This NVIDIA card is the first desktop 28nm GPU, the first to offer a clock speed over 1 GHz, the first to support triple-panel gaming on a single card, and the first to offer "boost" clocks that vary from game to game. Interested yet? Let's get to the good stuff.
    The Kepler Architecture
    In many ways, the new 28nm Kepler architecture is just an update to the Fermi design that was first introduced in the GF100 chip. NVIDIA's Jonah Alben summed things up pretty nicely for us in a discussion stating that "there are lots of tiny things changing (in Kepler) rather than a few large things which makes it difficult to tell a story."

    GTX 680 Block Diagram
    The chip the GeForce GTX 680 is built on, GK104, is seen in its block diagram form above. Already, you can see a big difference between this and the GTX 580 flagship card before it. There are 1536 stream processors / CUDA cores on GTX 680 compared to the 512 cores found in GTX 580 cards. The divisions of the GPU still exist in NVIDIA's design: the GPC is a combination of SMs, though these have changed as well. A GPC now includes two SMX units (seen below), where the GTX 580 GPC included four SMs each.

    With the SM growing from 32 cores to 192 cores each, NVIDIA is claiming a 2x improvement in performance per watt, which is becoming a crucial factor as designers run up against the thermal limits and power consumption of GPUs.

    Kepler SMX Block Diagram
    The SMX unit consists of 192 CUDA cores, an updated PolyMorph Engine, 16 texture units, thread scheduling hardware, and more. Further, the cores are arranged differently than in Fermi, with six cores per special function unit (SFU) instead of four. The warp (thread) count has gone from 48 to 64 in Kepler.
    With the 128 total texture units on the GTX 680 (twice what we had on the GTX 580) and an increase in cores of nearly 3x, you might be wondering how it all balances out. You may also be curious whether Kepler is really 3x as fast as Fermi.

    Gone away is the "hot clock" of NVIDIA GPUs where the cores would operate at twice the clock rate of the base GPU. Instead Kepler now runs the entire chip at the same clock rate. The reasoning is a trade off in terms of die space and power consumption. Engineers were able to reduce the clock power by half and logic power by 10% at the expense of some die area, but with a focus on power efficiency on this design it was a change they were obviously willing to make.

    Another change in Kepler is found in the scheduling component, where much of the process has been moved from hardware to software run in the NVIDIA driver. Because the software is already handling so much of the decoding process from DirectX, CUDA, OpenCL, and more, NVIDIA found it more power efficient to continue to increase the workload in software rather than on the chip itself. Some items remain on die, though, because of latency concerns, such as texture operations.

    Because of the reduction in the number of SMX units per chip, NVIDIA had to double up on the performance of the individual PolyMorph engines. But because Kepler has half as many SMX units as Fermi, total chip geometry performance hasn't changed much.

    Compared to AMD's Radeon HD 7970 the GTX 680 is actually a bit slower at lower expansion factors and it's not until we hit 11x that we start to see the advantages NVIDIA once claimed to have throughout the scale. Both companies debate which factors are most important though to game developers with AMD claiming that the lower factors are much more often used.

    For the new memory design NVIDIA has gone with a 256-bit controller (compared to the 384-bit found on Fermi) though the clock speeds are running at 6 Gbps (1500 MHz)! The total memory bandwidth provided by this design is 192 GB/s, which is basically identical to that of the GTX 580. ROP count has decreased from 48 on the GTX 580 to 32 on Kepler/GTX 680, however.
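The bandwidth claim is easy to verify (our own sketch; GDDR5 bandwidth is simply the bus width in bytes times the per-pin data rate):

```python
# Memory bandwidth = (bus width in bits / 8) bytes per transfer x data rate (Gbps).
def bandwidth_gbs(bus_bits, data_rate_gbps):
    return bus_bits / 8 * data_rate_gbps

gtx680 = bandwidth_gbs(256, 6.0)     # 256-bit bus at 6 Gbps
gtx580 = bandwidth_gbs(384, 4.008)   # 384-bit bus at 4.008 Gbps
print(f"GTX 680: {gtx680:.1f} GB/s, GTX 580: {gtx580:.1f} GB/s")
# The narrower but faster Kepler bus lands essentially on top of the GTX 580.
```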
    Today's GTX 680 will ship with a 2GB frame buffer and some users may lament of expectation for NVIDIA to match AMD's 3GB memory configuration on the HD 7900 cards. While we are never one to say we don't want MORE memory on our GPUs, in our testing we have not seen detrimental effects of 2GB versus 3GB of memory even on multi-display gaming.
    The GTX 680 is indeed a PCI Express 3.0 compatible card and the GPU does support DX11.1 features as well, but it isn't really anything to get excited about just yet.

    One interesting change is the addition of NVENC, a dedicated video encoding engine built (essentially) to rival the QuickSync technology found in Intel's Sandy Bridge processors. The logic is completely fixed function; it no longer uses the CUDA cores to encode video, and NVIDIA claims that it is even more power efficient than Intel's implementation. In fact, I was told by the designers that the NVENC feature could actually be used while the GPU was powered off.

    Another important change is found in the display support on Kepler as NVIDIA has finally moved away from the two display limit on single GPU cards. You can now run up to four displays on a single card, and run three of them in an NVIDIA Surround or 3DVision Surround configuration for multi-display gaming. This is obviously a feature that NVIDIA has needed for quite some time, and we are glad to see it in Kepler. DisplayPort 1.2 support is included as well.

    And here she is, the Kepler die in all her glory. The 28nm GPU is built with 3.54 billion transistors and is 294mm^2.
    There is quite a bit more to Kepler and the GeForce GTX 680 though.
    Full review:
    http://www.pcper.com/reviews/Graphic...-Kepler-Motion


    This is NVIDIA's last architecture before the current Maxwell architecture found in the GTX 900 series graphics cards.
    The Kepler chip spans two generations of graphics cards, the GTX 600 and GTX 700; in a way it is a bit of what AMD did with the HD 5000 and 6000, where there were few improvements from one generation to the next because they were based on the same chip/architecture.
    This chip was NVIDIA's answer to AMD's GCN architecture, which is still current and still used by AMD's most recent graphics cards.
    One novelty this chip brought is its performance per watt, greatly improved over previous generations.
    The Kepler chip would also deserve a spot in any ranking of the best chips NVIDIA has made.

    The recent history of graphics cards between NVIDIA and AMD is summarized in these last few posts.
    Last edited by Jorge-Vieira : 26-12-15 at 20:48

  13. #73
    Tech Ubër-Dominus Avatar de Jorge-Vieira
    Registo
    Nov 2013
    Local
    City 17
    Posts
    30,005
    Avaliação
    1 (100%)
    ATI - the first 3D graphics card



    Taken from:
    http://vintage3d.org/rage.php#sthash....TVh0UvI0.dpuf

    The year was 1985 and ATI was founded, beginning its history of developing all kinds of chips, from graphics chips to chipsets.
    From 1985 to 1995, all graphics cards were 2D only and ran over ISA and PCI connections or slots.
    At the end of 1995, ATI launched its first dedicated 3D graphics card for gaming.
    This card ran over the PCI bus.
    The 3D Rage brand would stay associated with ATI's cards for several years.
    At this point ATI did not yet face competition from NVIDIA, which had been founded two years earlier, in 1993, and was still taking its first steps in 3D acceleration with the NV1 chip; the competition came from S3, Matrox and, a few months later, 3DFX and PowerVR. NVIDIA would still take a few years to become a heavyweight competitor and narrow the field down to just two players, ATI and NVIDIA.
    Last edited by Jorge-Vieira : 28-12-15 at 06:48

  14. #74
    Tech Ubër-Dominus Avatar de Jorge-Vieira
    Registo
    Nov 2013
    Local
    City 17
    Posts
    30,005
    Avaliação
    1 (100%)
    nVidia - the first graphics card

    Taken from:
    http://vintage3d.org/nv1.php#sthash.AwDPd5o9.dpbs



    In 1993, NVIDIA was founded; one of its three founders is the current CEO, Jen-Hsun Huang.
    Compared to ATI, the company that would become its only real rival, NVIDIA started nearly a decade behind in graphics card development.
    After the founding, the NV1 chip was developed, giving rise to a card named STG-2000 that already included some 3D graphics acceleration capabilities.
    Honestly, I have no memory of ever seeing this card on sale, or of whether it even made it here to Portugal.

  15. #75
    Tech Ubër-Dominus Avatar de Jorge-Vieira
    Registo
    Nov 2013
    Local
    City 17
    Posts
    30,005
    Avaliação
    1 (100%)
    S3 - graphics cards



    Taken from:
    http://vintage3d.org/virge.php#sthash.od1c4a54.dpbs


    S3 was founded in the late 1980s and was the big graphics card manufacturer of its day.
    The cards were extremely cheap; practically every PC of the era had an S3.
    The text here covers one of the first S3 cards, but there were many variants equipped with Virge and Trio chips.
    My first PC came with one of these gems; if memory serves, an S3 Virge DX 4MB from Black Flag.
    There isn't much to say about these cards: they were meant only for 2D, their 3D was very poor, and image quality was nothing extraordinary either.
    Most people who bought the 3DFX Voodoo 2 kept one of these S3s as the primary card in the PC to pass the Voodoo's signal through.
    As far as I recall, in S3's long history there was only one truly competent 3D graphics card: the S3 Savage, which at the time was quite well received for the capabilities it showed.

 

 