Topic: Nvidia Pascal

  1. #31
    Jorge-Vieira (Tech Ubër-Dominus) · Registered: Nov 2013 · Location: City 17 · Posts: 30,121
    NVIDIA Pascal GPU’s Double Precision Performance Rated at Over 4 TFLOPs, 16nm FinFET Architecture Confirmed – Volta GPU Peaks at Over 7 TFLOPs, 1.2 TB/s HBM2

    At this year’s SC15, NVIDIA revealed and confirmed two major details about their next generation Pascal GPUs. The information covers process design and peak compute performance, and NVIDIA even shared the same numbers for their Volta GPUs, which are expected to hit the market in 2018 (2017 for HPC). The details confirm rumors we have been hearing for a few months now that Pascal might come to market early next year.

    NVIDIA’s Pascal and Volta GPUs Peak Compute Performance Revealed – Volta To Push Memory Bandwidth To 1.2 TB/s

    For some time now, we have been hearing that NVIDIA’s next generation Pascal GPUs will be based on a 16nm process. At its SC15 conference, NVIDIA revealed, or should we say finally confirmed, that the chip is based on a 16nm FinFET process. NVIDIA didn’t name the semiconductor foundry on stage, but TSMC has been confirmed as the supplier of the new GPUs. This might not be a significant revelation, as it has been known for months that NVIDIA’s Pascal GP100 chip has already been taped out on TSMC’s 16nm FinFET process. This means we could see a launch of these chips as early as the first half of 2016. A doubling of transistor density would put Pascal somewhere around 16-17 billion transistors, since the flagship Maxwell GM200 GPU core already features 8 billion transistors.

    TSMC’s 16FF+ (FinFET Plus) technology can provide above 65 percent higher speed, around 2 times the density, or 70 percent less power than its 28HPM technology. Comparing with 20SoC technology, 16FF+ provides extra 40% higher speed and 60% power saving. By leveraging the experience of 20SoC technology, TSMC 16FF+ shares the same metal backend process in order to quickly improve yield and demonstrate process maturity for time-to-market value.
    Nvidia decided to let TSMC mass produce the Pascal GPU, which is scheduled to be released next year, using its 16nm FinFET production process. Some in the industry predicted that both Samsung and TSMC would mass produce the Pascal GPU, but the U.S. firm chose only the Taiwanese firm in the end. Since the two foundries have different 16nm FinFET manufacturing processes, the U.S. tech company selected the world’s largest foundry (TSMC) for product consistency. (This quote was originally posted at BusinessKorea; however, the article has since been removed for confidentiality reasons.)
    What we know so far about the GP100 chip:


    • Pascal microarchitecture.
    • DirectX 12 feature level 12_1 or higher.
    • Successor to the GM200 GPU found in the GTX Titan X and GTX 980 Ti.
    • Built on the 16FF+ manufacturing process from TSMC.
    • Allegedly has a total of 17 billion transistors, more than twice that of GM200.
    • Taped out in June 2015.
    • Will feature four 4-Hi HBM2 stacks, for a total of 16GB of VRAM for the consumer variant and 32GB for the professional variant.
    • Features a 4096-bit memory interface.
    • Features NVLink, support for mixed precision FP16 compute at twice the rate of FP32, and full FP64 support.
    • 2016 release.


    Back at GTC 2015, NVIDIA’s CEO Jen-Hsun Huang talked about mixed precision, which lets users get twice the compute performance in FP16 workloads compared to FP32 by computing at 16-bit precision at twice the rate. Pascal allows more than just that: it is capable of FP16, FP32 and FP64 compute, and we have just learned the peak compute performance of Pascal in double precision workloads. With the Pascal GPU, NVIDIA will return to the HPC market with new Tesla products. Maxwell, although great in most regards, was deprived of the necessary FP64 hardware and focused only on FP32 performance. This meant the chip stayed away from HPC markets, while NVIDIA offered their year-old Kepler based cards as the only Tesla options. AMD, NVIDIA’s only competitor in the HPC GPU department, took a similar approach: their Fiji GPU is an FP32-focused gaming part while the Hawaii GPU serves the HPC space, offering double precision compute.
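    The precision side of that trade-off is easy to demonstrate with plain Python: the standard `struct` module can round-trip a value through IEEE 754 half precision (FP16), showing how few significant digits survive compared to FP32. This is an illustrative sketch of the format, not NVIDIA code.

```python
import struct

def round_to_fp16(x):
    """Round-trip a Python float through IEEE 754 half precision (FP16)."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

def round_to_fp32(x):
    """Round-trip a Python float through IEEE 754 single precision (FP32)."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

pi = 3.14159265358979
print(round_to_fp16(pi))  # 3.140625 -- only ~3 significant decimal digits
print(round_to_fp32(pi))  # ~3.1415927 -- about 7 significant digits
```

    If a workload tolerates that loss, Pascal’s FP16 path doubles arithmetic throughput, which is the whole point of mixed precision.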

    Dedicating a lot of energy to double precision arithmetic is great when you need it, but when you don’t, a lot is left on the table: an unnecessary power envelope goes underutilized, reducing the efficiency of the overall system. If you can survive with single precision or even half precision, you can gain significant improvements in energy efficiency, and that is why mixed precision matters most, according to Stephen W. Keckler, Senior Director of Architecture at NVIDIA.
    Pascal is designed to be NVIDIA’s greatest HPC offering, incorporating the latest NVLINK standard and offering UVM (Unified Virtual Memory) addressing inside a heterogeneous node. The Pascal GPU will be the first to introduce NVLINK, the next generation interconnect with Gen 2.0 cache coherency features and 5-12 times the bandwidth of a regular PCIe connection. This will solve many of the bandwidth issues that high performance GPUs currently face.
    First technology we’ll announce today is an important invention called NVLink. It’s a chip-to-chip communication channel. The programming model is PCI Express but enables unified memory and moves 5-12 times faster than PCIe. “This is a big leap in solving this bottleneck,” Jen-Hsun says. NVIDIA
    According to official NVIDIA slides, we are looking at a peak double precision compute performance of over 4 TFLOPs, along with 1 TB/s of HBM2 memory bandwidth and 32 GB of VRAM in HPC parts. NVIDIA’s current flagship, the Tesla K80 accelerator, which features two GK210 GPUs, has a peak performance rated at 2.91 TFLOPs when running at boost clocks and just a little over 2 TFLOPs at standard clock speeds. The single-chip GK180 based Tesla K40 has a double precision compute performance rated at 1.43 TFLOPs, and AMD’s best single-chip FirePro card, the FirePro S9170 with 32 GB VRAM, has a peak double precision (FP64) performance rated at 2.62 TFLOPs.

    Built for double precision general matrix multiplication workloads, both the Kepler and Hawaii chips were designed for compute, and while their successors (Maxwell and Fiji) kept things pretty quiet on the FP64 end, they did come with better FP32 performance. On the compute side, Pascal is going to take the next incremental step, with double precision performance rated over 4 TFLOPs, double what was offered on the last generation of FP64-enabled GPUs. As for single precision performance, we will see the Pascal GPUs breaking past the 10 TFLOPs barrier with ease.
    NVIDIA also shared numbers for their Volta GPUs, which will be rated at 7 TFLOPs of FP64 compute performance. This will be an incremental step in building multi-PFLOPs systems that will be integrated into supercomputers at Oak Ridge National Laboratory (the Summit supercomputer) and Lawrence Livermore National Laboratory (the Sierra supercomputer). Both machines are rated at over 100 PFLOPs of peak performance and will integrate several thousand nodes with over 40 TFLOPs of performance per node. While talking about exascale computing, NVIDIA’s Chief Scientist and SVP of Research, Bill Dally, gave a detailed explanation of why energy efficiency is the main focus in HPC:

    So let me talk about the first gap, the energy efficiency gap. Now lots of people say, don’t you need more efficient floating point units? That’s completely wrong, it’s not about the FLOPs. If I wanted to build an exascale machine today, I could take the same process technology we are using to build our Pascal chip, the 16nm foundry process, and a die 10mm on a side, which is about a third the linear size and about a ninth the area of Pascal. So the Pascal chip is way bigger than this 1cm-on-a-side chip. If I pack it with floating point units (I drew one to scale, you wouldn’t see it; that little red dot is actually a bit bigger than scale), a double precision fused multiply-add (DFMA) unit costs about 10 pJ/op and can run at 2 GFLOPs. So if I fill this chip with floating point units and it consumed 200W, I get 20 TFLOPs on this one chip (100mm² die). I put 50,000 of these inside racks, I have an exascale machine.
    Of course, it’s an exascale machine and it’s completely worthless, because much like children or pets, floating point units are easy to get and hard to take care of. What’s hard about floating point units is feeding them and taking care of what they produce, you know, the results. It’s moving the data back and forth that’s hard, not building the arithmetic unit. via NVIDIA@SC15 Conference
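    Dally’s back-of-the-envelope numbers check out. A quick sketch using only the figures he quotes (10 pJ per double-precision op, a 200W chip, 50,000 chips):

```python
ENERGY_PER_OP = 10e-12   # 10 pJ per double-precision fused multiply-add
CHIP_POWER = 200.0       # watts for the hypothetical 100 mm^2 chip
NUM_CHIPS = 50_000       # chips packed into racks

# Power budget divided by energy per operation gives operations per second.
flops_per_chip = CHIP_POWER / ENERGY_PER_OP      # 20e12 -> 20 TFLOPs
machine_flops = flops_per_chip * NUM_CHIPS       # 1e18  -> 1 exaflop

print(f"per chip: {flops_per_chip / 1e12:.0f} TFLOPs")
print(f"machine:  {machine_flops / 1e18:.0f} EFLOPs")
```

    Which is exactly his point: the raw arithmetic is cheap; the data movement around it is not.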
    The talks detailed that an exascale system, to be implemented in machines around 2023, will consist of several heterogeneous nodes made up of throughput-optimized cores (TOCs, i.e. GPUs) and latency-optimized cores (LOCs, i.e. CPUs), with tight communication between them, the memory and the caches to enable good programming models. The GPUs will do the bulk of the heavy lifting while the CPUs focus on sequential processing. The reason given is that CPUs have great vector core performance, but when those vector cores aren’t utilized, scalar mode turns out to be fairly useless for HPC workloads. The entire system will consist of large DRAM banks connected in a heterogeneous DRAM environment, which will help solve two crucial problems of current generation systems: first, exploiting all available bandwidth on the system/node, and second, maximizing locality for frequently accessed data.

    CPUs waste a lot of their energy deciding what order to execute instructions in, which usually involves scheduling, reordering and renaming registers; only a small fraction of the energy is used to perform the actual execution.
    GPUs don’t care about the latency of an individual instruction; they push instructions through the pipelines as quickly as possible. They don’t have out-of-order execution or branch prediction, and spend much more of the power budget on actual execution. Some systems today spend half of their energy on actual execution, as opposed to the very small fraction of past generations. The next generation of GPUs will be able to devote even more of that energy to executing instructions.

    Further explaining next generation GPU architectures and efficiency, Stephen pointed out that HBM is a great memory architecture which will be implemented across the Pascal and Volta chips, but those chips max out at 1.2 TB/s of bandwidth (on the Volta GPU). Moving forward, there is a looming memory power crisis. HBM2 at 1.2 TB/s is great, but it adds 60W to the power envelope of a standard GPU; the current implementation of HBM1 on Fiji chips adds around 25W. Chips with bandwidth in excess of 2 TB/s will push the overall power limit from bad to breaking point: a chip with 2.5 TB/s of second generation HBM would need 120W for the memory architecture alone, and even a 1.5x more efficient HBM2 successor delivering over 3 TB/s of bandwidth would need 160W just to feed the memory.
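    To put those memory-power figures in perspective, here is a rough model (our own, not NVIDIA’s) that treats interface power as a fixed energy cost per byte moved, calibrated on the quoted 60W at 1.2 TB/s for HBM2. It lands close to the quoted 120W at 2.5 TB/s; the 160W figure for a future 3 TB/s part suggests overheads beyond this simple linear scaling.

```python
BASELINE_BANDWIDTH = 1.2e12   # bytes/s for HBM2
BASELINE_POWER = 60.0         # watts quoted for that bandwidth

# Implied energy cost of moving one byte across the memory interface.
energy_per_byte = BASELINE_POWER / BASELINE_BANDWIDTH  # ~50 pJ/byte

def memory_power(bandwidth, efficiency=1.0):
    """Projected memory interface power in watts; efficiency > 1 models
    a hypothetical more efficient future HBM generation."""
    return bandwidth * energy_per_byte / efficiency

print(f"{energy_per_byte * 1e12:.0f} pJ/byte")
print(f"2.5 TB/s: {memory_power(2.5e12):.0f} W")
print(f"3.0 TB/s at 1.5x efficiency: {memory_power(3.0e12, 1.5):.0f} W")
```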
    These figures are not for the whole chip but for the memory subsystem alone. Typically such chips would be considered inefficient for the consumer and HPC sectors, but NVIDIA is trying to change that and is exploring new means to solve the memory power crisis that lies ahead with HBM and higher bandwidths. In the near term, Pascal and Volta won’t see a major consumption increase from HBM, but moving onward to 2020, when NVIDIA’s next-gen architecture is expected to arrive, we will probably see a new memory architecture introduced to address the increased power needs.

    We will see more of these technical talks on upcoming GPU architectures as their launch approaches in 2016. To finish this post: NVIDIA confirmed that Pascal will be available in 2016 (as originally stated) on a choice of CPU platforms ranging across x86, ARM64 and Power (IBM). On the HPC front, NVIDIA will introduce NVLINK, while the consumer and server side will rely on PCI-E (16 GB/s) for chip-to-chip communication.
    NVIDIA Pascal GPU Slides (GTC Taiwan 2015):

    Next Generation FinFET Based GPUs Comparison (AMD/NVIDIA):


  2. #32
    Jorge-Vieira (Tech Ubër-Dominus)
    Exploring Nvidia’s Pascal and Volta Architectural Lineup – GP100 Could Be The First, Truly 4K/60 FPS Capable GPU

    The era of FinFET based GPUs is finally drawing closer and, as far as Nvidia is concerned, the relevant architecture is Pascal (and later, Volta). Today, we will be looking at some basics of the chip lineup of Nvidia’s upcoming 16nm FinFET+ graphics cards. Before we begin the editorial, I would like to point out that while most of this is technically educated guesswork, it is based on the historical nomenclature used by Nvidia and is as such reasonably well grounded. That said, I have marked the dubious areas with tags to help speculation-averse readers digest this piece.

    Mulling over Nvidia’s chip lineup for the next generation: GP100 through GP108

    As every graphics card enthusiast knows, there are two kinds of nomenclature: the internal chip nomenclature and the commercial nomenclature. While the commercial nomenclature of the upcoming lineup can be anything (e.g. GeForce GTX 10xx), the chip nomenclature remains the same as ever, and is therefore predictable. Although most of our readers will be well aware of how Nvidia’s internal nomenclature works, here is a breakdown for those not in the fold:
    Nomenclature example: GM204
    Gx xxx: The first letter is a constant which stands for Graphics (Technology).
    xM xxx: The second letter is a variable which stands for the architecture of the chip: K for Kepler, M for Maxwell, P for Pascal, V for Volta, and so on.
    xx 2xx: This numeral stands for the generation of the architecture. First generation Maxwell, for example, had a 1 here.
    xx x0x: This numeral has a variable meaning and can stand for anything from a better binned chip to a refresh. This digit is usually ignored when interpreting chips.
    xx xx4: The last digit is the actual performance indicator of the chip and follows an inverse scale: lower means higher performance, so a 0 here means the flagship die and an 8 the weakest die.
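    The decoding rules above are mechanical enough to capture in a few lines of Python. This is our own hypothetical helper (the field names are ours, not Nvidia’s), just to make the scheme concrete:

```python
import re

ARCHITECTURES = {"K": "Kepler", "M": "Maxwell", "P": "Pascal", "V": "Volta"}

def decode_chip(code):
    """Split a chip code like 'GM204' into the fields described above."""
    m = re.fullmatch(r"G([A-Z])(\d)(\d)(\d)", code)
    if not m:
        raise ValueError(f"unrecognized chip code: {code!r}")
    arch, generation, variant, tier = m.groups()
    return {
        "architecture": ARCHITECTURES.get(arch, "unknown"),
        "generation": int(generation),
        "variant": int(variant),          # usually ignored (binning/refresh)
        "performance_tier": int(tier),    # inverse scale: 0 = flagship die
    }

print(decode_chip("GM204"))
print(decode_chip("GP100"))
```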
    Now that that’s out of the way, let’s talk about the actual lineup. Maxwell currently has a lineup of five dies: GM200, GM204, GM206, GM107 and GM108. Traditionally, the flagship die lands first; however, Nvidia has been mixing up this formula from time to time. In Maxwell’s case, for example, the GM107 and GM108 came first. One can argue that they were, technically, the flagship chips of their own generation (1st generation Maxwell), but then Nvidia also introduced the GM204 before the GM200.
    The lineup speculation was originally done by 3DCenter; I will just be building upon it. Now, since the 1st and 2nd generations are both active at the moment, [caution: speculation] we can safely assume that they will be replaced by 16FF+ parts sometime in the future. By 2017, we can assume that the entire lineup should be present and that Volta should be visible on the horizon as well. The first lineup slated to appear is based entirely on Pascal, and the chip nomenclature should be as follows:

    • GP100: First generation Pascal flagship, which will replace the GM200 die.
    • GP104: First generation high-end Pascal GPU, which will replace the GM204.
    • GP106: First generation mid-range Pascal GPU, which will replace the GM206.
    • GP107/108: First generation low-end Pascal GPUs, which will replace the GM107/GM108 chips respectively.

    Previous reports have stated that the GP100 ‘big’ Pascal chip will hit the professional market first, giving the green team time to roll out the GP104 chip to the mainstream consumer segment (followed by GP100 at a later date). The GM200 taped out in June and hit the shelves in March, a total time of approximately nine months. The GM204, on the other hand, took only five months from tape-out to shelves. On this schedule we can tentatively expect the Pascal architecture GPUs in the late second quarter of next year. Pascal flagships will also have HBM2 memory which, unlike the HBM1 specification, can extend up to 16 GB of total vRAM.
    Nvidia’s Volta series of GPUs will eventually replace Pascal; however, at this point there is no clear indication of what process that chip will use. If the GP100 die is at or over the ~550mm² limit, then it is very likely that the GV100 chip will be on the 10nm FinFET process based on the 14nm backbone (the 16nm FF+ process is based on the 20nm backbone). However, if the GP100 is around the 500mm² mark, then there will be at least one more generation on the 16nm FinFET+ process, since the usual limit at TSMC is 600mm², be that a Pascal GPU (GP200) or a Volta GPU (GV100).
    Keep in mind that reports have alleged that the GP100 will have approximately 17 billion transistors. Take a look at the official statement about the 16FF+ node from TSMC:
    TSMC’s 16FF+ (FinFET Plus) technology can provide above 65 percent higher speed, around 2 times the density, or 70 percent less power than its 28HPM technology. Comparing with 20SoC technology, 16FF+ provides extra 40% higher speed and 60% power saving.
    As you can see, there is an obvious conflict in the available information. Two times 8 billion (the transistor count of the GM200) is still around a billion shy of the alleged transistor count, and that is at the huge die size of 601mm², something which is highly improbable if not impossible for the first batch. It’s possible that generational differences between the architectures make it possible to fit 17 billion transistors on a ~520-550mm² die, but it is more probable that the transistor count is an exaggeration.
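    The conflict is easy to quantify. Taking GM200’s 8 billion transistors on a 601mm² die and TSMC’s “around 2 times the density” claim at face value, a quick sketch (our arithmetic, not an official figure) shows what die size 17 billion transistors would actually need:

```python
GM200_TRANSISTORS = 8e9     # GM200 transistor count (28nm)
GM200_DIE_MM2 = 601.0       # GM200 die size in mm^2
DENSITY_SCALING = 2.0       # "around 2 times the density" quoted for 16FF+

# Transistors per mm^2 on 16FF+, assuming a straight density doubling.
density = GM200_TRANSISTORS / GM200_DIE_MM2 * DENSITY_SCALING

die_for_17b = 17e9 / density            # ~639 mm^2: improbably large
at_550mm2 = density * 550.0 / 1e9       # ~14.6B transistors fit in 550 mm^2

print(f"die needed for 17B transistors: {die_for_17b:.0f} mm^2")
print(f"transistors at 550 mm^2: {at_550mm2:.1f}B")
```

    Either the density gain is better than a straight doubling, or the 17 billion figure is inflated, which is exactly the tension described above.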
    Assuming [caution: speculation] a die near the 550mm² mark initially, you are looking at around 5000-6000 CUDA cores. With the architecture improvement and the process upgrade, an improvement of around 60%-80% is on the cards, depending on how well Nvidia handles it. There isn’t a single-GPU card in existence that can handle 4K@60fps on its own. First or second generation Pascal, however, should most definitely be able to hit that mark with the technological upgrades heading its way.



  3. #33
    Jorge-Vieira (Tech Ubër-Dominus)
    GDDR5, GDDR6 rule 2016 GPUs



    Exclusive: Highest end are HBM 2.0, GDDR6
    AMD over-hyped the new High Bandwidth Memory standard, and now the second generation, HBM 2.0, is coming in 2016. However, it looks like most GPUs shipped this year will still rely on the older GDDR5.

    Most of the entry level, mainstream and even performance graphics cards from both Nvidia and AMD will rely on GDDR5. This memory has been with us since 2007 but has dramatically increased in speed. The memory chips have shrunk from 60nm in 2007 to 20nm in 2015, making higher clocks and lower voltages possible.
    Some of the big players, including Samsung and Micron, have started producing 8 Gb GDDR5 chips that will enable cards with 1GB of memory per chip. The GTX 980 Ti has 12 chips at 4 Gb (512MB per chip), while the Radeon Fury X comes with four HBM 1.0 stacks of 1GB each at much higher bandwidth. The GeForce Titan X has 24 chips of 512MB each, bringing the total amount of memory to 12GB.
    The next generation cards will get 12GB memory with 12 GDDR5 memory chips or 24GB with 24 chips. Most of the mainstream and performance cards will come with much less memory.
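    The capacity math behind those configurations is simply chip count times per-chip density (8 gigabits to the gigabyte). A quick sketch reproducing the figures above:

```python
def total_vram_gb(num_chips, chip_density_gbit):
    """Total VRAM in GB from chip count and per-chip density in gigabits."""
    return num_chips * chip_density_gbit / 8  # 8 Gb = 1 GB

print(total_vram_gb(12, 4))  # GTX 980 Ti: 12 x 4 Gb chips -> 6.0 GB
print(total_vram_gb(24, 4))  # Titan X:    24 x 4 Gb chips -> 12.0 GB
print(total_vram_gb(12, 8))  # next gen:   12 x 8 Gb chips -> 12.0 GB
print(total_vram_gb(24, 8))  # next gen:   24 x 8 Gb chips -> 24.0 GB
```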
    Only a few high end cards, such as AMD’s high-end FinFET Greenland solution and a GeForce version of Pascal, will come with the more expensive and much faster HBM 2.0 memory.
    GDDR6 is arriving in 2016 at least at Micron and the company promises a much higher bandwidth compared to the GDDR5. So there will be a few choices.
    News:
    http://www.fudzilla.com/news/graphic...re-gddr5-gddr6


    There had already been talk of a new memory format, GDDR5X... in the end it seems GDDR5 will still be used, which in my view makes more sense, since it allows for lower prices.
    Unfortunately for us, HBM 2.0 will be reserved for the top-of-the-range cards, which makes it practically a niche market.

  4. #34
    reiszink (Tech Bencher) · Registered: Feb 2013 · Posts: 5,769

    Nvidia Pascal

    It was to be expected; HBM will remain expensive to produce for a long time, so it makes sense to reserve it for high-end graphics cards.
    Intel i7 5820K - ASRock X99M Killer - 16GB G.Skill DDR4 - Gigabyte GTX 980Ti G1 - Plextor M6e 256GB + Samsung 850 EVO 500GB - Corsair H110 - EVGA G3 750W - Acer 27" 144Hz IPS - Zowie EC2-A - Filco Majestouch 2 TKL - HyperX Cloud II Pro

  5. #35
    Jorge-Vieira (Tech Ubër-Dominus)
    NVIDIA Announces Pascal GPU Powered Drive PX 2 – 16nm FinFET Based, Liquid Cooled AI Supercomputer With 8 TFLOPs Performance


    NVIDIA has officially announced their latest Drive PX 2 AI supercomputer for automobiles, powered by their 16nm FinFET based Pascal GPU. The Drive PX 2 is a glimpse of the power packed by Pascal, NVIDIA’s next iteration of its CUDA compute architecture, which is going to power fast and efficient GPUs in 2016.

    NVIDIA Surprises With Pascal GPU Powered Drive PX 2 – 16nm FinFET GPU With 8 TFLOPs of Performance

    NVIDIA’s next generation graphics architecture is finally coming to market and, to our surprise, we aren’t seeing it first on graphics cards or professional products; instead, the first product to feature it is Drive PX 2, the latest AI supercomputer that ushers in a new era of self-driving cars. You might wonder why Pascal is being aimed at automobiles first. The reason is quite simple: automotive has become a huge deal for NVIDIA, a major revenue driver, as seen in the financial results they posted for Q3 2015.
    “Our record revenue highlights NVIDIA’s position at the center of forces that are reshaping our industry,” said Jen-Hsun Huang, co-founder and chief executive officer, NVIDIA. “Virtual reality, deep learning, cloud computing and autonomous driving are developing with incredible speed, and we are playing an important role in all of them. We continue to make great headway in our strategy of creating specialized visual computing platforms targeted at important growth markets. The opportunities ahead of us have never been more promising.” via NVIDIA

    Just months after NVIDIA published their financials, we can see how important deep learning and deep neural networks for automobiles have become for them. The result is Pascal, NVIDIA’s brand new GPU architecture, announced in their Drive PX 2 AI supercomputer. The Drive PX 2 is the successor to last year’s Drive PX and, instead of being powered entirely by Tegra SoCs, it relies on two next-gen Tegra SoCs and two discrete Pascal GPUs. It features 12 CPU cores (probably ARM64 based) and four chips that pack Pascal GPUs, adding up to 8 TFLOPs of performance. The Drive PX 2 module comes with a TDP of 250W due to the four individual chips: two ARM-based Tegra chips featuring the Pascal architecture, along with two discrete GPUs on the back. The whole module is packed inside a liquid cooled package (also a first for automobile supercomputers). Last of all, we know for sure that the GPU is 16nm FinFET based and comes in a single-board package with multiple modules and chips.

    Now there’s good reason to believe that even the Drive PX 2 doesn’t have the full Pascal GPU but rather a cut-down model with disabled cores, as the bigger version is aimed at the HPC market. We won’t see that version in consumer products for a while, as 16nm FF+ is still an infant node and yields need to be near perfect for so many full Pascal GP100s to ship to the consumer, server and professional markets. 250W is the baseline NVIDIA chooses for such products these days, and within it they can pack four chips with Pascal GPUs inside the Drive PX 2 module: two Tegra chips paired with Pascal GPUs that have GDDR5 memory, and two Tegra SoCs that come with the ARM cores and significantly cut-down, GPGPU-focused Pascal GPUs.

    According to NVIDIA, these versions of the Pascal GPU combine to give 8 TFLOPs of compute performance; it’s clear we are talking about FP32 operations here. While this increase is good compared to the 6.1 TFLOPs of Maxwell, there is still room for improvement, and we are actually looking at around 10 TFLOPs of performance once these flagship GPUs hit graphics boards and the full-fat version lands in consumer and HPC (High-Performance Computing) platforms. As for the CPU cores, we are looking at 8 A57 cores and 4 of the custom Denver ARM cores that NVIDIA has been building for a while. The Drive PX 2 will be available in fall 2016.
    NVIDIA Shows off Pascal Chips in Both Tegra and Board Flavors




    NVIDIA showed off both Tegra and board flavors of their Pascal GPU. The one showcased on the MXM board was a high-end chip, although judging from its size and dimensions we presume it wasn’t the full-fat GP100 core but the GP104. It may further be that this chip was a placeholder, as it even seems a bit big for GP104, which, being built on a FinFET process, should have a significantly smaller die. Being packed in an MXM-type solution means this GPU will be housed in a range of desktop and mobility solutions, and confirms that NVIDIA will have both HBM2 and GDDR5X GPUs when Pascal hits the market. The Tegra SoCs featured the Denver/ARM config along with a Tegra-focused Pascal geared towards GPGPU computation.

    JHH now compares DRIVE PX 2, built on a 16nm process, to TITAN X, built on a 28nm process. DRIVE PX 2 is roughly six times more powerful. DRIVE PX 2 has 12 CPU cores, capable of 8 teraflops of processing power and 24 trillion deep learning operations per second. It’s equivalent to 150 MacBook Pros in the trunk of your car.
    JHH holds DRIVE PX 2, not much bigger than a tablet. It has two next-generation Tegra processors, and two next-generation Pascal-based discrete GPUs.
    NVIDIA Drive PX 2 – NVIDIA DIGITS Deep Neural Network Platform


    Through Drive PX 2, NVIDIA is boosting the object detection ability of these AI supercomputers through a data set known as DriveNet. To further drive this ecosystem, NVIDIA will provide DIGITS, a deep neural network platform that offers 9 inception layers, 3 convolutional layers and 37 million neurons, can process 40B operations, and offers single and multi-class object detection. NVIDIA wants to enable an end-to-end deep learning platform for self-driving cars.
    JHH recaps his main points from last year, noting that deep learning is what’s going to be needed to bring accuracy. But that’s going to take huge computational power.
    Thousands of engineering hours have gone into the NVIDIA DRIVE PX 2, the world’s first artificial-intelligence supercomputer for self-driving cars.
    It’s got some chops. 12 CPUs. NVIDIA’s next-gen Pascal-based GPU. All producing 8 teraflops of power. That’s equivalent to 150 Macbook Pros. And it’s in a case the size of a school lunchbox. via NVIDIA Blogs
    This will enable a car to learn about the world and convey it back to the cloud-based network, which then updates all cars. Every car company will own its own deep neural network. We want to create a platform for these to be deployed. So, to recap, NVIDIA has three strategies:

    1. Ensure NVIDIA GPUs accelerate all frameworks for GPUs;
    2. Create platforms for deploying deep learning;
    3. Develop an end-to-end development system to train and deploy the network.


    Both NVIDIA and AMD are now in the same league, on the path to offering high-performance graphics chips based on the latest FinFET nodes in 2016. AMD has already shown off their Polaris GPU architecture, while the green team is back with a tremendous amount of power inside their flagship GPU architecture, Pascal. Those who were expecting a GeForce-side announcement may be a bit disappointed with the auto-focused news, but they should be ready, as NVIDIA might be gearing up for a full-blown Pascal GeForce introduction at either GDC 2016 or their GTC conference in April 2016.
    NVIDIA and AMD FinFET GPUs Comparison:

    Last edited by Jorge-Vieira: 05-01-16 at 09:22

  6. #36
    Jorge-Vieira (Tech Ubër-Dominus)
    NVIDIA teases its next-gen Pascal GPU, puts it inside of a car first

    CES 2016 - Within 24 hours of AMD lifting the NDA on its next-gen Polaris architecture, NVIDIA announces that its Pascal architecture is being used in its new Drive PX 2 system for cars.


    NVIDIA's upcoming Pascal GPU will be built on the 16nm FinFET process, but outside of that we don't know too much. The automotive market will see a liquid-cooled, 250W beast inside of cars that is capable of taking in a crazy amount of information - up to 2,500 images per second - which will drive the autonomous car market going into the future.

    When it comes to video cards, we should expect NVIDIA to unveil its Pascal-based video cards at GTC 2016 in early April.

    News:
    http://www.tweaktown.com/news/49398/...rst/index.html


    According to this news, the new graphics cards will be presented in April.

  7. #37
    Jorge-Vieira (Tech Ubër-Dominus)
    NVIDIA used GTX 980 MXM modules during Pascal tease at CES 2016

    CES 2016 - During NVIDIA's CES 2016 press conference, CEO and founder Jen-Hsun Huang took the stage to talk about where NVIDIA is in its journey on automotive technology.


    Huang announced that NVIDIA's next-generation Pascal architecture would be powering their automotive efforts this year, with it being as fast as 150 MacBook Pros. But during my downtime in my hotel, reading up on some of my favorite tech sites, I stumbled across AnandTech's piece on Drive PX 2.

    One of AT's readers noticed that NVIDIA didn't use Pascal GPUs when Huang held up a prototype PCB that supposedly carried two of them. AT reports: "Kudos to our readers on this one. The MXM modules in the picture are almost component-for-component identical to the GTX 980 MXM photo we have on file. So it is likely that these are not Pascal GPUs, and that they're merely placeholders".

    Pascal will be using HBM2, which would result in no visible VRAM chips surrounding the GPU, but the board Huang held up during the press conference has visible VRAM chips. I think Pascal will be arriving in two ways: enthusiast-class parts with HBM2, and mainstream parts using GDDR5X - but, the GPUs themselves according to AT and their readers, are GTX 980 MXM modules. Interesting...

    News:
    http://www.tweaktown.com/news/49531/...016/index.html

  8. #38 - Enzo
    We'll be here to see. Bring them on!

  9. #39 - Jorge-Vieira
    Nvidia GP104 Pascal GPU Spotted on Zauba – Roughly ~350mm^2 Die Size, Will Match or Exceed the GM200 in Performance

    The Pascal GPU from Nvidia has finally been spotted on Zauba, which has historically been one of the most rock-solid sources of information. Recently, however, companies have started employing stealth tactics to hide their shipments from our prying eyes. Not only have they started scrambling the codenames of the GPUs they ship, but if this recent information is to be believed, they are now using different ports to ship prototypes of the graphics cards. And as far as their efforts go, they have been pretty successful. The good folks over at Beyond3D, however, have spotted something which is suspected to be a GP104 chip.

    Nvidia’s Pascal GPU ‘GP104’ finally spotted in shipping manifest

    The first thing you will note is that the HS code and the shipping area are not the ones Nvidia usually ships from. This means that either this is the latest in their attempts to protect information, or this particular chip might not actually be related to Nvidia. The source, however, appears to have inside information and is very sure of itself. Still, because of this uncertainty, I would urge you to take this one with a grain of salt – a first for a Zauba-sourced article. The post at Beyond3D was spotted by Videocardz and mentions a 37.5mm x 37.5mm package with around 2,152 pins, which we assume is the GP104 chip.

    Now, the package dimensions of the GM204 were around 40mm x 40mm; not only that, but the GM204 had fewer pins – around 1,745. This means that not only is the GP104 Pascal chip's package smaller than the GM204's, but it will be far denser. The GM204 has a die size of 398mm2, and according to the ratio that we can derive from the difference in package dimensions, the GP104 should be around 300 to 350mm2. We know that Nvidia is employing TSMC for the fabrication of its GPUs, and we also know what their estimates are for the performance increase with the jump to 16nm FinFET. 16FF+ will be approximately twice as dense and will be able to achieve higher speeds than before. Nvidia itself has stated a performance jump of 2x.
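The package-ratio arithmetic above is simple enough to check; here is a minimal sketch, assuming (as the article does) that die area shrinks roughly in proportion to package area between generations:

```python
# Back-of-the-envelope GP104 die-size estimate from package scaling.
# Assumption: die area scales with package area (the ratio used in the article).

gm204_die_mm2 = 398.0      # known GM204 die size
gm204_package_mm = 40.0    # GM204 package edge (40mm x 40mm)
gp104_package_mm = 37.5    # GP104 package edge from the shipping manifest

area_ratio = (gp104_package_mm / gm204_package_mm) ** 2
gp104_die_estimate = gm204_die_mm2 * area_ratio
print(f"GP104 die estimate: ~{gp104_die_estimate:.0f} mm^2")  # ~350 mm^2
```

The result lands at the top of the article's 300-350mm2 range; the true figure depends on how much of the package shrink actually comes from the die itself.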


    This of course means that Nvidia's next generation Pascal graphics card featuring the GP104 die should be able to run circles around the GM204-based GTX 980 and even outperform the GM200-based 980 Ti. Of course, we aren't even accounting for architectural gains here – which are usually significant as well – and will only increase the margin between the two generations. Nvidia has already stated that HBM2 memory will be used in Pascal graphics cards, and with the recent advent of GDDR5X, that becomes a possibility for mid-tier offerings as well. Another interesting thing we can notice on the shipping manifests is the use of liquid cooling – something that might become a stock feature of future Nvidia GPUs as well, following AMD's lead.
    Pascal GPUs are slated to be pretty damn amazing. They will feature 4X the mixed precision performance, 2X the performance per watt, 2.7X the memory capacity and 3X the bandwidth of Maxwell. We will be seeing the GP104 flagship this year, and the GP100 as well near the end (if we are very lucky). It is worth noting that the GM200 taped out in June and hit the shelves in March – a total time of approximately 9 months. The GM204, on the other hand, took only five months to hit the shelves from its tape-out date. Since we are on a younger node this time around, however, the schedule could be different, making it harder for large dies to be produced economically.



  10. #40 - Enzo
    A quick response from Nvidia to AMD's recently detected chips.

  11. #41 - Jorge-Vieira
    Nvidia’s Pascal Is MIA, Could Be In Trouble Reports Allege – Drive PX2 Demoed With GTX 980M Instead Of Pascal

    Several reports have surfaced alleging that Nvidia's Pascal GPUs could be in deep trouble after Pascal was notably absent from the Drive PX2. Nvidia announced that this 250W automotive compute box is powered by two Tegra chips and two Pascal GPUs. Yet the Drive PX2 board Nvidia's CEO Jen-Hsun Huang held on stage carried two Maxwell-based GTX 980 MXM modules instead, as was evident from the date inscribed on the chips, the size of the chips and the configuration of the modules.[1][2][3]
    Nvidia Pascal Drive PX2 photo by Anandtech.com

    Nvidia’s Pascal Is MIA, Could Be In Trouble Reports Allege

    Many journalists found this quite the eyebrow-raising affair, some of whom found it reminiscent of the company's Fermi troubles in 2010. Fermi was Nvidia's first GPU architecture to use 40nm technology, and due to the various challenges the company faced during that important node transition, a similar incident took place.


    Extremetech.com
    These issues with Pascal and the Drive PX 2 echo the Fermi “wood screw” event of 2009. Back then, Jen-Hsun held up a Fermi board that was nothing but a mock-up, proclaimed the chip was in full production, and would launch before the end of the year. In reality, NV was having major problems with GF100 and the GPU only launched in late March, 2010.
    Use of mockups and prototypes in stage-craft is not something that's new to the industry or Nvidia, but their use in place of the real thing has always been limited to situations where the real thing isn't ready – in this case, Pascal. This isn't the first time that we've seen a mockup being used in a Pascal announcement either. Last year, Jen-Hsun Huang held up a Pascal board which did not actually have a Pascal GPU, but rather a placeholder chip made to represent Pascal. That prototype was made to showcase what a mezzanine form-factor board would look like, so a functioning Pascal GPU wasn't pertinent to its goal.

    The same argument can be made for the Drive PX2 demonstration. And perhaps this is merely a marketing decision on Nvidia's part to withhold showing any actual Pascal silicon until GTC. But the question remains: why weren't actual Pascal GPUs used in any demo to date? This may look particularly troubling for Nvidia when we have AMD on the other side demoing functioning Polaris chips to the public and to the press, Charlie Demerjian notes. Does Nvidia have stable Pascal silicon? Is the roadmap still on track? Those are questions that can only be answered with time.



  12. #42 - Jorge-Vieira
    NVIDIA's next-gen Pascal GP104 GPU spotted, should feature GDDR5X

    It looks like NVIDIA is already playing around with its next-gen Pascal GPUs, with a new listing spotted on a shipping manifest from Zauba.


    NVIDIA's upcoming GP104 will be the mid-range part, just like the GM204 which resulted in the GeForce GTX 980. The new GP104 GPU arrives in a 37.5 x 37.5mm BGA package, which is smaller than the GM204's, which arrived at 40 x 40mm. It has more pins than the GM204, with 2152 vs 1745, which will be thanks to the 16nm FinFET process.


    The report from 3DCenter says that the GP104-based card will use GDDR5X, where I was the first to ponder that the mid-range (GP104 and under) will be powered by GDDR5X while the higher-end offerings will be powered by HBM2. This will make the GP104 and cards under it much cheaper, versus the more expensive HBM2 technology on the enthusiast products.

    News:
    http://www.tweaktown.com/news/49578/...r5x/index.html


    I think it was already expected that NVIDIA, and AMD as well, would equip the lower-end graphics cards with GDDR5X and perhaps GDDR5, to place the products at a competitive price point.
    HBM and HBM 2.0, I think we will only see on the high-end models.






    NVIDIA's GP100 spotted, the Pascal successor to GTX 980 Ti & Titan X

    We only just reported about the GP104, the Pascal-based successor to the GTX 980, which should be rocking the much-faster GDDR5X - but not HBM2.


    Now, let's talk about the GP100 - aka, Big Pascal. GP100 will arrive in a huge 55 x 55mm BGA package, 10mm more than GM200 - as it will need more physical room for the HBM2 modules. This is the one I'm excited for, as it should result in the successor of the GeForce GTX 980 Ti and the GeForce GTX Titan X.

    News:
    http://www.tweaktown.com/news/49580/...tan/index.html



    Supposedly this will be the successor to NVIDIA's current flagships and, from this information, it is a little monster; we await the first tests to confirm whether it really is one.

  13. #43 - Jorge-Vieira
    NVIDIA Pascal GPU Analysis – An In-Depth Look at NVIDIA’s Next-Gen Graphics Cards Powering GeForce, Tesla and Quadro

    At CES 2016, NVIDIA’s CEO, Jen-Hsun Huang presented the latest Drive PX 2 board that will be powered by the next generation Pascal GPU architecture. The Pascal GPU architecture is one which will be powering the next iteration of professional and consumer graphics cards, succeeding Maxwell and besting it in every possible way as is anticipated by enthusiasts and PC builders.

    NVIDIA’s Pascal GPU Analysis – What To Expect From NVIDIA’s Next-Gen GPU Powerhouse

    NVIDIA’s Pascal GPUs are not launching any time soon, but we know quite a lot about them from previous reports. NVIDIA provided us with a few more details at their conference, so let’s take a look at what Pascal is all about. In 2014, NVIDIA introduced Maxwell, their last architecture to use the 28nm process node. We had seen 28nm on the GPU market since 2012, when AMD and NVIDIA launched their first products based on the (then latest) process tech, codenamed Kepler and GCN (1.0).
    The Race To FinFET – What It Means For The GPU Industry

    Over the years, this process was refined and we got to see some beefy designs such as the GK110 and GM200 from NVIDIA, and Hawaii and Fiji from AMD. Measuring up to 601mm2 (GM200) and integrating an insane number of transistors (8.9 billion on Fiji), the 28nm process proved to be a real deal for the graphics market, serving it for a good four-year time frame. But hardware and technology grow at a fast pace, and a new node has long been demanded by GPU makers to build their next graphics chips.
    As every generation of graphics cards passes, we anticipate the successor to offer a great performance increase over the previous generation. When the industry shifted from 40nm to 28nm, we saw GPUs that were supposed to be aimed at mid-range offerings beating the big cores from the previous generation. The GTX 680, NVIDIA’s first 28nm graphics card, obliterated the flagship GF110 core, featuring better performance and better power efficiency. The performance improvement was around 25% on a process that had just seen the light of day.
    More than a year later, NVIDIA showed off just what kind of performance they had in their hands with the 28nm Kepler GPU. When the GTX 780 Ti launched, it featured a more than 50% performance lead over the GTX 580. This was the moment when the flagship Kepler core got compared to the flagship Fermi core. It was known that NVIDIA had given priority to HPC for their compute-oriented Kepler cores, which was the sole reason why we got to see GK104 as a flagship offering in 2012 in the first place. By this time, however, the 28nm node had been fully learned and mastered by GPU companies.
    When Maxwell and Fiji graphics cards came to the market, we saw a shift to gaming-only products rather than professional/HPC-focused parts. The main reason for this shift was that both NVIDIA and AMD knew they had reached a certain bottleneck with the 28nm process, where they could either go for better performance in a single department (gaming) or split it between two departments (gaming/compute), which would have resulted in worse efficiency and outrageously huge dies that they would have been selling at a fraction of their real cost to make them competitive against their own offerings. The result was GM200 and Fiji.

    Both GPUs are great, but they have something in common: they aren’t armed with the strong compute hardware which their older-generation predecessors had (Hawaii/GK110). While they were efficient, their performance increases weren’t as big given the hardware updates they had received by that time. The Titan X was 30% faster than the GTX 780 Ti, and the same could be said for the Fury X over the R9 290X. While we once saw the mid-range GTX 680 delivering a nice 25% lead over the GTX 580, the GTX 980 could only manage to deliver a 5-10% lead over the GTX 780 Ti. By that time, it was clear that the 28nm process had become a bottleneck and a new node was required by GPU manufacturers to experiment with and make next generation graphics processors.
    GPU Architecture                              | NVIDIA Fermi               | NVIDIA Kepler               | NVIDIA Maxwell                  | NVIDIA Pascal
    GPU Process                                   | 40nm                       | 28nm                        | 28nm                            | 16nm (TSMC FinFET)
    Flagship Chip                                 | GF110                      | GK210                       | GM200                           | GP100
    GPU Design                                    | SM (Streaming Multiprocessor) | SMX (Streaming Multiprocessor) | SMM (Streaming Multiprocessor Maxwell) | TBA
    Maximum Transistors                           | 3.00 Billion               | 7.08 Billion                | 8.00 Billion                    | Up to 17 Billion
    Maximum Die Size                              | 520mm2                     | 561mm2                      | 601mm2                          | TBA
    Stream Processors Per Compute Unit            | 32 SPs                     | 192 SPs                     | 128 SPs                         | TBA
    Maximum CUDA Cores                            | 512 CCs (16 CUs)           | 2880 CCs (15 CUs)           | 3072 CCs (24 CUs)               | TBA
    Compute Performance                           | 1.6 TFLOPs                 | 5.1 TFLOPs                  | 6.1 TFLOPs                      | 10+ TFLOPs
    Maximum VRAM                                  | 1.5 GB GDDR5               | 6 GB GDDR5                  | 12 GB GDDR5                     | 32 GB HBM2
    Maximum Bandwidth                             | 192 GB/s                   | 336 GB/s                    | 336 GB/s                        | 1 TB/s
    Maximum TDP                                   | 244W                       | 250W                        | 250W                            | 250W
    Average Performance Increase over Predecessor | +45% (GTX 580 vs GTX 285)  | +55% (GTX Titan Black vs GTX 580) | +30% (GTX Titan X vs GTX Titan Black) | TBA
    Flagship GPU Price (Consumer Only)            | $499 US (GTX 580)          | $999 US (GTX Titan Black)   | $999 US (GTX Titan X)           | TBA
    Launch Year                                   | 2010 (GTX 580)             | 2014 (GTX Titan Black)      | 2015 (GTX Titan X)              | 2016
    We have entered 2016 and now look upon the FinFET process as the enabling technology that will help build fast and efficient GPUs. FinFET process nodes are under development by TSMC, Samsung and GloFo (GlobalFoundries). GPU makers have the choice to select from these companies to build their new GPUs, and NVIDIA has sided with TSMC, using their 16FF+ process node to make the Pascal GPU a reality. The new node will deliver up to 65 percent higher speed, around 2 times the density, or 70 percent less power than its 28HPM tech. Compared with 20SoC technology, 16FF+ provides an extra 40% higher speed and 60% power saving. With FinFET, we may once again see the glory days of GPUs back in action, as graphics cards trounce their predecessors with a 50% performance lead and feature power efficiency that’s better in all departments.
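The density claim translates directly into the transistor-count projections quoted earlier in the thread; a rough sketch using TSMC's "around 2 times the density" figure (an estimate derived from the article's numbers, not an official NVIDIA spec):

```python
# Projecting a Pascal flagship transistor budget from TSMC's 16FF+ density claim.
# These are back-of-the-envelope estimates, not official figures.

gm200_transistors = 8.0e9   # 28nm Maxwell flagship (GM200)
density_gain = 2.0          # "around 2 times the density" vs 28HPM

same_area_budget = gm200_transistors * density_gain
print(f"Same die area on 16FF+: ~{same_area_budget / 1e9:.0f} billion transistors")
# A slightly larger die gets to the rumored "up to 17 billion" figure.
```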

    The race to FinFET has already begun, and from the beginning we know that the process node will last us at least two GPU generations, as was the case with 28nm. It is believed that NVIDIA’s Volta GPU will also use a much more refined version of the FinFET process, but for now, we have eyes and ears locked on everything related to Pascal.
    NVIDIA GP100 – The Flagship GPU, Powering Titans, Teslas and Quadros



    The heart of next generation supercomputers and High-Performance Computing platforms is without a doubt the Pascal GP100 graphics chip. The NVIDIA GP100 chip will be the flagship GPU of the lineup and the one which will determine the performance and efficiency of the new architecture. The Pascal GP100 has long been in the rumor mill and we still don’t have conclusive details on this monolithic chip. Being the successor to the GM200, the GP100 Pascal GPU will be built on the 16nm TSMC FinFET process node and feature a total of up to 17 billion transistors inside the package.
    The GPU is going to pack a lot of performance for gamers and FP64 users since this chip will be powering some serious compute-oriented machines that demand double precision compute. Being the flagship of the lineup, NVIDIA will make their GP100 GPU their first graphics chip to support HBM2 memory with up to 1 TB/s of bandwidth and 32 GB VRAM. We know Pascal has a peak double precision performance rated at over 4 TFLOPs while the single precision compute performance is rated at over 10 TFLOPs. This will be by far the biggest leap in total available compute performance we have seen on any graphics card.



    As for when it arrives, there’s a strong possibility that consumers won’t get the full GP100 first, nor even a cut-down variant. The reason is the high demand from the HPC market, which has to update the older Kepler-based cards that are still being used as FP64 options since the Maxwell chips drove FP64 support away. Powering the Tesla cards first, followed by GeForce and Quadro solutions, the chip will be getting a range of products – without a doubt a new Titan offering, and for $999 US, which has been a consistent price point for Titan graphics cards. The dual-chip cards are a totally different thing though (Titan Z). The GP100 chips will be available in a range of new packages, such as the regular (add-in) graphics boards and the new mezzanine cards which were shown back at GTC 2015. Along with NVLINK support – a new interconnect that NVIDIA is establishing with IBM and other partners such as CRAY, HP, DELL, TYAN, QCT and Bull – the connection would offer 80 – 200 GB/s access speeds between the several nodes integrated in HPC platforms.
    NVIDIA GP100 Features:

    • Based on Pascal GPU Architecture.
    • Will support DirectX 12 feature level 12_1 and higher.
    • Successor to the GM200 GPU found in the GTX Titan X and GTX 980 Ti.
    • Built on the 16nm FinFET manufacturing process from TSMC.
    • Allegedly has a total of 17 billion transistors, more than twice that of GM200.
    • Taped out in June 2015.
    • Will feature four 4-Hi HBM2 stacks, for a total of 16GB of VRAM for the consumer variant and 32GB for the professional variant.
    • Features a 4096-bit memory bus interface.
    • Features NVLink and support for Mixed Precision FP16 compute tasks at twice the rate of FP32 and full FP64 support.
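The 4096-bit and 1 TB/s figures in the list above follow directly from HBM2's stack layout; a sketch assuming four stacks, each with HBM's 1024-bit interface running at HBM2's full 2 Gbps per pin:

```python
# Deriving GP100's rumored memory figures from the HBM2 stack configuration.

stacks = 4                   # four HBM2 stacks on the package
bus_bits_per_stack = 1024    # HBM's per-stack interface width
pin_rate_gbps = 2.0          # HBM2 top speed per pin

total_bus_bits = stacks * bus_bits_per_stack
bandwidth_gbs = total_bus_bits * pin_rate_gbps / 8   # bits -> bytes

print(f"Memory bus: {total_bus_bits}-bit")            # 4096-bit
print(f"Peak bandwidth: {bandwidth_gbs:.0f} GB/s")    # 1024 GB/s, i.e. ~1 TB/s
```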

    NVIDIA GP104 – The Gamer Focused GPU For Mainstream Graphics Cards


    For NVIDIA to make sure that their Pascal architecture is a success in the gaming field, they need to get two chips right: the GP104 and the GP106. The G**04 and G**06 chips are primarily targeted at consumers, as they offer decent value and can be mass-produced in larger numbers owing to their smaller dies and the total number of yields from each 16nm wafer. The G**04 chips have become the main attraction within NVIDIA’s line of graphics cards since the GeForce GTX 680, as high-end offerings. These chips, which once served the mid-range market as the GeForce GTX 460 and the GeForce GTX 560 Ti with prices of $199 US – $249 US, are now serving the gaming market in the high-end space with prices exceeding the $300 US mark.
    There’s a reason why NVIDIA has increased prices on such chips: they are highly competitive. The GTX 460 was a masterpiece of a graphics card which crushed even the GTX 465 (a flopped card to begin with) and had better performance and efficiency than the HD 5850. The GTX 680 (GK104) was the first mid-range chip that not only beat the previous flagship GPU (GF110) by a 25% lead but also managed to keep up in performance and efficiency against the HD 7970. The chips from NVIDIA which once existed in the mid-range market were then capable enough to tackle AMD’s flagship cores. That did increase the pricing from sub-$300 to $499 US (the GTX 680’s launch price). When NVIDIA launched Maxwell, they once again had nothing from the competition that matched their cards until 10 months later. This resulted in NVIDIA bagging some good sales from their second generation Maxwell cards.

    NVIDIA has learned over the years that timing, pricing and features are three essential things for their gaming-focused cards to be a success. The GP104 will be delivered in cards at prices ranging from $300 up to $549 US. We don’t know what NVIDIA plans to call their next generation of cards, but there’s good reason to say that we are looking at the same performance improvement we once saw from the GTX 680 over the GTX 580. Pascal brings with it not only a new process node, but also a new architecture and a range of gaming-focused features. NVIDIA has a strong influence on the PC gaming market, their recent GameWorks initiatives can be found in almost every modern AAA title, and they have great driver support for their graphics cards. NVIDIA can have a great showcase of performance just with their GP104 cards.
    While Maxwell had a 5-10% improvement over the GTX 780 Ti, I can very easily tell that Pascal GP104 will bring a greater performance increase over GM200, along with hardware that’s better built to support DX12, the Vulkan API and VR/AR. Along with the added support for game technologies, Pascal GP104 chips will run with GDDR5X memory, the new and fastest memory standard based on the GDDR5 architecture, delivering better bandwidth and fast clock speeds on VRAM chips. There’s a slight possibility that we may see special versions of the GP104 chips that come with HBM2 VRAM. Talking specifically about how many flops this chip will be able to get out of its belly, I should say between 6-7 TFLOPs sounds like a nice estimate, if not an accurate one, since the current GM204 chip has 4.6 TFLOPs of performance, which was up from 3 TFLOPs on the GTX 680, while the GTX 580 was around 2.0 TFLOPs in compute.
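The 6-7 TFLOPs estimate can be sanity-checked with the usual peak-throughput formula (cores x 2 FLOPs per fused multiply-add x clock); the GP104 configuration below is purely hypothetical, chosen only to show how the estimate might come about:

```python
# Peak FP32 throughput: cuda_cores * 2 (one fused multiply-add = 2 FLOPs) * clock.

def peak_tflops(cuda_cores: int, clock_mhz: float) -> float:
    """Peak single-precision TFLOPs for a given core count and clock."""
    return cuda_cores * 2 * clock_mhz * 1e6 / 1e12

# Known part, matching the figure quoted in the paragraph above:
print(f"GTX 980 (GM204): {peak_tflops(2048, 1126):.1f} TFLOPs")   # ~4.6

# Hypothetical GP104 (core count and clock are assumptions, not leaks):
print(f"GP104 guess:     {peak_tflops(2048, 1600):.1f} TFLOPs")   # ~6.6
```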
    The GeForce GTX 980 was the first full high-end, discrete-class graphics chip to come to mobility (MXM). The GP104 will be doing the same, with TDPs expected to be close to the 150W range, providing better performance on both the mobility and desktop fronts. To sum it up, the GP104-based cards will be the most critical of all products in the graphics lineup, as they will be aimed at the market that amounts to the most revenue for NVIDIA. Since GP104 is such a critical product for NVIDIA, they will be showcasing more details regarding cards based on the graphics chip at GTC 2016, which is just four months away.
    NVIDIA GP106 – The Budget-Minded GPU For Sweet-Spot Graphics Cards



    The NVIDIA GP106 is another important chip that NVIDIA needs to keep in mind when talking about gamers. The GP106 will be seen in action on several sweet-spot graphics cards that will retail in the sub-$250 price bracket, which has so far seen little attention from NVIDIA with their Maxwell generation of graphics cards. The GP106 chips will feature TDPs below 120W, as the current GTX 960 already has a TDP of 120W and is based on the GM206 core architecture. NVIDIA might want to equip the card with a wider memory bus and a higher VRAM solution, since current GM206 cards have been starved of bandwidth due to 128-bit memory buses, even with the color compression technologies on board Maxwell.
    It is probable that the Drive PX 2 we saw from NVIDIA at CES 2016 was powered by either the GP104 or the GP106 solution, since these chips in the MXM package could offer TDPs of just 100W. A figure of over 3 TFLOPs would be a good increase over the 2.30 TFLOPs of GM206. Since GM206 was half the specs of the GM204 core, it is highly likely that the same could be seen on the GP106, with it being half the core specifications of the bigger GP104 GPU core.
    The GP106 will be a main competitor against the highly efficient Polaris GPU that AMD demonstrated back at CES 2016. The Polaris chip pitted against the GTX 950 was an entry-level offering which delivered the same performance as the GM206-based GPU but with significantly lower power requirements. The GP106 will need to gear up in both the performance and efficiency departments. The GTX 950 is already a 90W solution, so its successor might do away with power connector requirements and run only on PCI-e power.
    NVIDIA GP107 – The Entry Level GPU Aimed At Power Efficient, Low-TDP Graphics Cards



    The entry-level solutions will be powered by the GP107 and GP108 chips. Back in early 2014, NVIDIA introduced their first generation Maxwell architecture, which was a significant leap in efficiency. Two years later, AMD is trying to tackle NVIDIA with the same patterns the green team has mastered since Kepler back in 2012, and they have already demonstrated their new Polaris architecture, which does confirm what they have been telling audiences so far. The GM107 was already a sub-60W chip which didn’t use any power connector and ran on PCI-e power. Its successor might be the first sub-50W chip aimed at efficient computing. There’s a big market for these cards, as they retail for sub-$150 US prices and offer performance that can drive games at 1080p resolution (moderate settings) with ease. The GTX 750 Ti is still seen as a better option compared to the GTX 950 in the APAC region; a card that is better than the GTX 750 Ti will make all those using GM107 want to upgrade their PCs.
    NVIDIA holds a dominant position in the discrete graphics market, accounting for more than 80% of all discrete graphics shipped around the globe. In the mobility sector, NVIDIA has the fastest solutions, which so far remain unmatched by the competition. In terms of gaming technologies, NVIDIA was the first to introduce lag-free and tear-free gaming through their G-Sync technology and the first to announce a range of new graphical features under the GameWorks program. Even with all the progress made by NVIDIA, they still consider AMD a strong competitor offering some great graphics cards at decent value. The road ahead, however, is a fierce battle between the two long-time rivals as they launch their next generation 14/16nm FinFET-based solutions with a historic leap in efficiency and performance.
    NVIDIA Pascal and AMD Polaris – The FinFET GPUs:









    News:
    http://wccftech.com/nvidia-pascal-gpu-analysis/#ixzz3x3FqMmf0



    If nothing abnormal happens in the transition to this new manufacturing process, taking into account the development of NVIDIA's last three architectures, these new Pascal chips may continue to tread a winning path for the green side once again.
    So far, what has been shown to us, both from NVIDIA and from AMD, for the next generation suggests that these will very probably be the best graphics cards to reach consumers in recent years.


  14. #44 - Enzo
    Supposedly the best, and by a large margin.

  15. #45 - Jorge-Vieira
    NVIDIA Rumored To Mass Produce Flagship Pascal GPUs With HBM2 In 1H 2016 – Availability in 2H 2016

    NVIDIA is rumored to launch their flagship GPUs based on the Pascal architecture in the second half of 2016. In a report published by the Korean site Digital Times (via HWbattle), it is reported that NVIDIA will be making use of the next generation HBM2 standard on the flagship Pascal GPUs, which feature the 16nm FinFET process, aiming at the consumer and enterprise markets.

    NVIDIA Rumored To Bring Flagship Pascal GPUs in Market By 2H 2016

    The rumor comes just a day after Samsung announced that the company has initiated mass production of its first 4 GB HBM2 DRAM package. The company will be competing against SK Hynix, who developed the memory standard in the first place in collaboration with AMD. SK Hynix deployed their first HBM package on AMD’s Radeon R9 Fury series cards back in July, which adopted a revolutionary package design.
    Moving on, SK Hynix has yet to begin mass production of HBM2 DRAMs; they are currently eyeing a production timeframe as early as August 2016. On the other hand, Samsung, while having 4 GB HBM2 chips in mass production, is going to further strengthen their HBM2 development by producing 8 GB HBM2 packages. These high-capacity packages will be specifically aimed at the flagship Pascal GPUs, which will feature up to 32 GB of VRAM.

    It is rumored that NVIDIA currently has a range of GPU samples based on its 16nm Pascal architecture in internal production and testing. The lineup will range from GDDR5X-based solutions to HBM2-based packages, but the flagship will have to wait until 2H 2016 to see the light of day, since that is when the higher-capacity HBM2 packages go into production. The current 4 GB HBM2 packages allow up to 16 GB of VRAM with 1 TB/s of bandwidth; the 8 GB HBM2 packages allow up to 32 GB of VRAM at the same 1 TB/s of total bandwidth. NVIDIA has already revealed that its flagship GPUs aimed at HPC will use 32 GB of VRAM, which is only possible through the higher-capacity DRAM packages.
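The capacity and bandwidth figures above follow directly from the HBM2 stack arithmetic. A minimal sketch, assuming four stacks per GPU with HBM2's 1024-bit per-stack interface at an effective 2 Gbps per pin (the per-stack capacities are the only variables):

```python
# HBM2 back-of-the-envelope arithmetic, assuming four stacks per GPU.
STACKS = 4
PINS_PER_STACK = 1024   # each HBM2 stack exposes a 1024-bit interface
GBPS_PER_PIN = 2.0      # effective HBM2 data rate per pin

def vram_gb(gb_per_stack: int) -> int:
    """Total VRAM for a given per-stack package capacity."""
    return STACKS * gb_per_stack

def bandwidth_gbs() -> float:
    """Aggregate bandwidth in GB/s across all stacks (bits -> bytes)."""
    return STACKS * PINS_PER_STACK * GBPS_PER_PIN / 8

print(vram_gb(4))       # 4 GB packages -> 16 GB total
print(vram_gb(8))       # 8 GB packages -> 32 GB total
print(bandwidth_gbs())  # -> 1024.0 GB/s, i.e. ~1 TB/s
```

Note that total bandwidth depends only on the stack count and data rate, not on package capacity, which is why both the 16 GB and 32 GB configurations land at roughly 1 TB/s.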
    NVIDIA GP100/GP200 – The Flagship GPU, Powering Titans, Teslas and Quadros



    The heart of next-generation supercomputers and high-performance computing platforms is, without a doubt, the Pascal GP100/GP200 graphics chip. The GP100/GP200 will be the flagship GPU of the lineup and the one that determines the performance and efficiency of the new architecture. The Pascal GP100/GP200 has long been in the rumor mill, and we still don't have conclusive details on this monolithic chip. Being the successor to the GM200, the GP100/GP200 Pascal GPU will be built on TSMC's 16nm FinFET process node and feature a total of up to 17 billion transistors.
    The GPU is going to pack a lot of performance for gamers and FP64 users alike, since this chip will power serious compute-oriented machines that demand double-precision performance. Being the flagship of the lineup, the GP100/GP200 will be NVIDIA's first graphics chip to support HBM2 memory, with up to 1 TB/s of bandwidth and 32 GB of VRAM. Pascal's peak double-precision performance is rated at over 4 TFLOPs, while its single-precision compute performance is rated at over 10 TFLOPs. This will be by far the biggest leap in total available compute performance we have seen on any graphics card.
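Those TFLOPs ratings come from the standard theoretical-peak formula: shader cores × 2 FLOPs per cycle (fused multiply-add) × clock. A sketch with a hypothetical configuration, since GP100's actual core count and clock were not public at the time of writing:

```python
def peak_tflops(cores: int, clock_ghz: float, flops_per_cycle: int = 2) -> float:
    """Theoretical peak compute: cores * FMA (2 FLOPs/cycle) * clock."""
    return cores * flops_per_cycle * clock_ghz / 1000.0

# Hypothetical example: 4096 shaders at 1.3 GHz would land in the
# "over 10 TFLOPs" single-precision range quoted above.
fp32 = peak_tflops(4096, 1.3)   # ~10.65 TFLOPs FP32
fp64 = fp32 / 2                 # ~5.32 TFLOPs FP64 at a full 1/2 rate
print(fp32, fp64)
```

With full FP64 support at the conventional 1/2 rate of a compute-focused chip, any configuration clearing 10 TFLOPs single precision would also clear the quoted 4 TFLOPs double precision.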
    As for when it arrives, there's a strong possibility that consumers won't be the first to get the full GP100/GP200, nor even a cut-down variant. The reason is high demand from the HPC market, which has to replace older Kepler-based cards that are still in service for FP64 workloads, since the Maxwell generation dropped strong FP64 support.
    Powering Tesla cards first, followed by GeForce and Quadro solutions, the chip will appear in a range of products, without a doubt including a new Titan offering at $999 US, which has been the consistent price point for Titan graphics cards (dual-chip cards such as the Titan Z are a different matter). The GP100/GP200 chips will be available in a range of new packages, such as regular add-in boards and the new Mezzanine cards shown back at GTC 2015. They will also support NVLink, a new interconnect NVIDIA is establishing with IBM and partners such as Cray, HP, Dell, Tyan, QCT and Bull; the connection would offer 80 – 200 GB/s access speeds between the several nodes integrated in HPC platforms.
    NVIDIA GP100/GP200 Features:

    • Based on Pascal GPU Architecture.
    • 2x Performance per watt of Maxwell.
    • Launch rumored for 2H 2016.
    • Will support DirectX 12 feature level 12_1 and higher.
    • Successor to the GM200 GPU found in the GTX Titan X and GTX 980 Ti.
    • Built on the 16nm FinFET manufacturing process from TSMC.
    • Allegedly has a total of 17 billion transistors, more than twice that of GM200.
    • Taped out in June 2015.
    • Will feature four 4-Hi HBM2 stacks, for a total of 16GB of VRAM for the consumer variant and 32GB for the professional variant.
    • Features a 4096-bit memory bus interface.
    • Features NVLink and support for Mixed Precision FP16 compute tasks at twice the rate of FP32 and full FP64 support.
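The last feature bullet claims FP16 at twice the FP32 rate plus full FP64 support (conventionally a 1/2 rate on compute chips). A minimal sketch of those relative rates, taking a hypothetical 10 TFLOPs FP32 baseline:

```python
# Relative throughput per precision, as claimed in the feature list:
# FP16 at 2x FP32, FP64 at the conventional 1/2 rate for full support.
RATE_VS_FP32 = {"fp16": 2.0, "fp32": 1.0, "fp64": 0.5}

def throughput_tflops(fp32_tflops: float) -> dict:
    """Scale a hypothetical FP32 baseline to each supported precision."""
    return {p: fp32_tflops * r for p, r in RATE_VS_FP32.items()}

print(throughput_tflops(10.0))
# -> {'fp16': 20.0, 'fp32': 10.0, 'fp64': 5.0}
```

Mixed-precision FP16 matters here because deep-learning workloads tolerate reduced precision, effectively doubling usable throughput on the same silicon.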

    Both NVIDIA and AMD are currently testing their next-generation GPU architectures, codenamed Pascal and Polaris. As they give the final polish to these GPUs, both companies expect huge performance and efficiency gains from the latest FinFET processes. NVIDIA will most likely share more detailed information on its Pascal progress at GTC 2016 in April, where it is likely to showcase a range of GPUs for the consumer GeForce GTX market. AMD, on the other hand, has already stated that it will launch Polaris on desktop and mobility platforms in mid-2016, the summer of this year. Both vendors will see launches just months or even weeks apart.
    NVIDIA Pascal and AMD Polaris – The FinFET GPUs:
