
Topic: Nvidia Pascal

  1. #16
    Tech Ubër-Dominus Avatar of Jorge-Vieira
    Joined
    Nov 2013
    Location
    City 17
    Posts
    30,005
    Rating
    1 (100%)
    NVIDIA Pascal GPU To Feature 17 Billion Transistors and 32 GB HBM2 VRAM – Full CUDA Compute Architecture Arrives in 2016

    NVIDIA will be introducing its next generation Pascal GPU in 2016, bringing several new and key technologies to the green team. The Pascal GPU will be the successor to the current generation Maxwell GPU and, from the looks of it, is going to be a beast of a chip. Featuring the latest HBM2 and a 16nm FinFET-based design, Pascal GPUs will extend NVIDIA's dominance in both the consumer and corporate worlds.

    NVIDIA Pascal GPU Might Feature 17 Billion Transistors, Almost Twice The Transistors of Fiji

    In an exclusive report published by Fudzilla, the site reveals that NVIDIA's next generation Pascal GPU will feature 17 billion transistors crammed inside its core. Currently, the flagship GM200 core found on the GeForce GTX Titan X carries 8.0 billion transistors, while the competing Radeon R9 Fury X has a total of 8.9 billion transistors inside its Fiji GPU. The 17 billion transistors on the Pascal GPU are roughly twice those found on the GM200 Maxwell and Fiji XT cores, which is frankly insane. Pascal is meant to be NVIDIA's next high-performance, compute-focused graphics architecture and will be found across all market segments, including GeForce, Quadro and even Tesla. Based on TSMC's 16nm process node, NVIDIA's Pascal GPU will not only feature leading graphics performance but also the most power-efficient architecture ever made by a GPU manufacturer.
    It was revealed a few days ago that NVIDIA's Pascal GP100 chip was taped out on TSMC's 16nm FinFET process last month. This means we could see a launch of these chips as early as Q2 2016. Provided the transistor count is correct, we can expect a substantial performance increase from Pascal across the range of graphics cards that will be introduced.
    TSMC's 16FF+ (FinFET Plus) technology can provide up to 65 percent higher speed, around 2 times the density, or 70 percent less power than its 28HPM technology. Compared with its 20SoC technology, 16FF+ provides an extra 40% speed and 60% power saving. By leveraging the experience of 20SoC, TSMC's 16FF+ shares the same metal back-end process in order to quickly improve yield and demonstrate process maturity for time-to-market value.




    The 17 billion transistors are an insane amount, but what's more striking is the amount of VRAM that is going to be featured on the new cards. With HBM2, NVIDIA gets the leverage to offer far more memory than what's currently possible on HBM1 cards (4 GB on the Fury X, Fury, Nano and Fury X2). HBM2 gives NVIDIA access to denser chips, which will result in cards with 16 GB and up to 32 GB of HBM memory across a massive 4096-bit memory interface, well suited to the coming high-resolution 4K and 8K gaming panels. NVIDIA may have to wait a little longer, though, thanks to AMD's priority access to HBM2 with SK Hynix, the makers of HBM. With 8 Gb per DRAM die and 2 Gbps speed per pin, we get approximately 256 GB/s of bandwidth per HBM2 stack. With four stacks in total, we will get 1 TB/s of bandwidth on NVIDIA's GP100 flagship Pascal, which is twice the 512 GB/s on AMD's Fiji cards and roughly three times the 980 Ti's 336 GB/s.
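The per-stack arithmetic quoted above is easy to verify with a short sketch. The 2 Gbps pin speed and the 1024-bit-per-stack interface are the figures given in the article, not confirmed GP100 specifications:

```python
# Reproduce the article's HBM2 bandwidth arithmetic.
# Assumed figures (from the article): 2 Gbps per pin, a 1024-bit
# interface per HBM2 stack, and four stacks on the flagship GP100.

def stack_bandwidth_gbs(pin_speed_gbps: float = 2.0, pins_per_stack: int = 1024) -> float:
    """Bandwidth of one HBM2 stack in GB/s: pins * Gbps / 8 bits per byte."""
    return pin_speed_gbps * pins_per_stack / 8

per_stack = stack_bandwidth_gbs()   # 256.0 GB/s per stack
total = 4 * per_stack               # 1024.0 GB/s, i.e. ~1 TB/s over 4096 bits
print(per_stack, total)
```

Four such stacks side by side are also exactly where the 4096-bit total interface width comes from.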
    The Pascal GPU will also introduce NVLINK, the next generation Unified Virtual Memory link with Gen 2.0 cache-coherency features and 5-12 times the bandwidth of a regular PCIe connection. This will solve many of the bandwidth issues that high-performance GPUs currently face. One of the latest things we learned about NVLINK is that it will allow several GPUs to be connected in parallel, whether in SLI for gaming or for professional usage. Jen-Hsun specifically mentioned that instead of 4 cards, users will be able to use 8 GPUs in their PCs for gaming and professional purposes.
    With the Pascal GPU, NVIDIA will return to the HPC market with new Tesla products. Maxwell, although great in all other regards, was deprived of the necessary FP64 hardware and focused only on FP32 performance. This meant the chip had to stay away from HPC markets while NVIDIA offered its year-old Kepler-based cards as the only Tesla options. Pascal will not only improve FP64 performance but also feature mixed precision, allowing NVIDIA cards to compute at 16-bit at double the rate of FP32. This means the cards will enable three tiers of compute: FP16, FP32 and FP64. NVIDIA's further-out Volta GPU will leverage the compute architecture even more, as it is already planned to be part of the Summit and Sierra supercomputers, which feature over 150 petaflops of compute performance and launch in 2017, indicating the arrival of Volta just a year after Pascal for the HPC market.
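As an aside, the three precision tiers mentioned above can be illustrated with NumPy dtypes. This is purely a host-side demonstration of the number formats, not NVIDIA code:

```python
import numpy as np

# FP16 / FP32 / FP64: bit width and machine epsilon of each format.
# Mixed-precision hardware trades FP16's lower accuracy for higher throughput.
for dtype in (np.float16, np.float32, np.float64):
    info = np.finfo(dtype)
    print(f"{dtype.__name__}: {info.bits} bits, eps={info.eps}")
```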
    NVIDIA Pascal GPU Module: [IMG]http://cdn.wccftech.com/wp-content/uploads/2015/03/NVIDIA-Pascal-GPU-Chip-Module-635x635.jpg[/IMG]




  2. #17
    Master Business & GPU Man Avatar of Enzo
    Joined
    Jan 2015
    Location
    Euro 2016 Champion Country
    Posts
    6,223
    Rating
    41 (100%)
    Ahahahaha.
    If anyone thought Nvidia was going to rest on the laurels it earned this year............here's the proof. That GPU is already earmarked to beat the Fury X2.

  3. #18
    Tech Ubër-Dominus Avatar of Jorge-Vieira
    Joined
    Nov 2013
    Location
    City 17
    Posts
    30,005
    Rating
    1 (100%)
    NVIDIA's Pascal GPU to feature over 100% more transistors than Titan X

    We knew that NVIDIA's Pascal architecture was going to deliver a massive update over the current Maxwell-based offerings in the GeForce GTX 980 Ti and Titan X, but the transistor count is going to be insane. Titan X features 8 billion transistors, while Pascal will reportedly contain 17 billion.


    NVIDIA will be tapping TSMC's 16nm process for its Pascal architecture, as well as using HBM2, which should see a massive increase in horsepower. But even with 17 billion transistors, the Pascal-based GPU will be "significantly smaller" than the 28nm-based Maxwell GPUs, reports Fudzilla. NVIDIA will be making use of HBM2 on the next-gen video cards, offering up to 32GB of the next-gen VRAM technology on its highest end card.

    Expect around 50% or more performance over the already-fast GTX 980 Ti, which will see NVIDIA easily dominate AMD's Radeon R9 Fury X. But where does this leave AMD? Right now, AMD is in dire need of a huge architectural change, as Fiji didn't really bring anything new to the table. All AMD has done is use HBM1, and its benefits weren't really shown on the Fury X, apart from the card being smaller than usual. NVIDIA is really going to leapfrog AMD next year with its triple punch of 16nm + Pascal + HBM2.

    Source:
    http://www.tweaktown.com/news/46620/...tan/index.html

  4. #19
    Tech Ubër-Dominus Avatar of Jorge-Vieira
    Joined
    Nov 2013
    Location
    City 17
    Posts
    30,005
    Rating
    1 (100%)
    TSMC Begins Volume Production of 16nm FinFET Process – Nvidia Pascal GP100 GPU Among the Products in Production

    TSMC's 16nm FinFET node is probably the most notable process of interest to PC enthusiasts. This is the node that will house Nvidia's next generation lineup of graphics cards (specifically the "16FF+" variant) and is one of the most reliable indicators of their time-frame. Taipei Times, in line with everything we have heard in the past, has confirmed that TSMC has (finally) started mass production of 16nm FinFET products. However, it is expected that the initial run will be dedicated to Apple SoCs.

    TSMC starts mass production of products on the 16nm FinFET node

    We reported some months ago that TSMC would enter volume production in Q3 2015, specifically July, so this news doesn't really come as much of a surprise. TSMC and Nvidia have also confirmed on more than one occasion that the next generation (Pascal) GPUs will be produced on the 16nm FinFET+ node, with the initial confirmation dating back approximately 9 months. AMD's next generation Radeon graphics processor, on the other hand, codenamed Arctic Islands, was not on the official list of products released by TSMC, so while its CEO has confirmed the use of a FinFET node (14/16), the exact specifics remain to be seen.
    16nm FinFET tech entered risk production and approached mature yields a while back, and now full-fledged production has begun full steam ahead. More than 60 projects are underway, with known products in development from Avago, Freescale, LG, MediaTek, NVIDIA, Renesas and Xilinx. The list is obviously not exhaustive, but as I mentioned above, the initial short list did not include AMD. I must admit, however, that this could have changed in the meantime, since the press release is fairly dated.




    As I mentioned a long time back, GM200 is an intermediary product, with the Pascal GP100 graphics processor finally ushering in the sub-28nm era. With the initial production more or less fully utilized by Apple, we should see the Pascal GP100 GPU in Q1 2016 at the earliest, with Q2 being the more conservative estimate. 2016 is also the year for which AMD has confirmed products involving its eagerly anticipated Zen micro-architecture and the Arctic Islands GPU on a FinFET node. Needless to say, it is gearing up to be a pretty interesting year.
    TSMC's 16FF+ (FinFET Plus) technology can provide up to 65 percent higher speed, around 2 times the density, or 70 percent less power than its 28HPM technology. Compared with its 20SoC technology, 16FF+ provides an extra 40% speed and 60% power saving. By leveraging the experience of 20SoC, TSMC's 16FF+ shares the same metal back-end process in order to quickly improve yield and demonstrate process maturity for time-to-market value.
    The Pascal GP100 GPU allegedly has 17 billion transistors and around 32 GB of HBM2-based VRAM. If these reports are correct, then GP100 will be an absolute beast in terms of graphics processing power. In comparison, the current Nvidia flagship, the Titan X, has 8 billion transistors, so we are looking at an alleged jump of more than double. There are, however, two major problems with the theory. We have about 1 billion transistors unaccounted for, and we are assuming a die size equal to that of the Titan X's GM200 (601 mm^2), which, frankly speaking, is just not going to happen on a brand new node. Not to mention that Nvidia will want to keep legroom for future Pascal variants on the same node.
    No, a die size of 601 mm^2 is highly implausible. Assuming the report is true, I believe (caution: speculation) we might be looking at a combined transistor count. In the meantime, however, there is finally cause to rejoice. The era of the 28nm GPU is over, and the age of sub-20nm FinFETs has begun.
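For what it's worth, the die-size sanity check being made here can be sketched numerically. The GM200 figures are public; the 2x density factor is TSMC's own 16FF+ marketing claim, so treat the result as a rough estimate only:

```python
# Rough die-size estimate for a hypothetical 17B-transistor GP100,
# scaling GM200's known density by TSMC's claimed 2x gain for 16FF+.

GM200_TRANSISTORS = 8.0e9   # Titan X flagship, 28nm
GM200_AREA_MM2 = 601.0      # published GM200 die size
DENSITY_GAIN_16FF = 2.0     # TSMC's "around 2 times the density" claim

density_28nm = GM200_TRANSISTORS / GM200_AREA_MM2   # transistors per mm^2
density_16ff = density_28nm * DENSITY_GAIN_16FF

gp100_area_mm2 = 17.0e9 / density_16ff
print(round(gp100_area_mm2))  # ~639 mm^2 -- even bigger than GM200's die
```

Even granting the full 2x density gain, 17 billion transistors would need a die larger than GM200's, which is why a 601 mm^2-class chip on a brand-new node looks implausible.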



  5. #20
    Tech Ubër-Dominus Avatar of Jorge-Vieira
    Joined
    Nov 2013
    Location
    City 17
    Posts
    30,005
    Rating
    1 (100%)
    New 14nm GPU rumours



    Both Nvidia and AMD getting ready

    AMD and Nvidia both appear certain to get their 14nm parts out next year.

    According to TweakTown, Nvidia is apparently dotting the "i"s and working out where to put the semi-colons for its Pascal GPU using TSMC's 14nm FinFET node. AMD, rumour has it, has been wining and dining its old chums at GlobalFoundries to use their 14nm process for its Greenland GPU.
    The dark satanic rumour mill suggests that the Greenland GPU, which uses the new Arctic Islands family micro-architecture, will have HBM2 memory. There will be up to 32GB of memory available for enthusiast and professional users. Consumer-oriented cards will have eight to 16GB of HBM2 memory. It will also have a new ISA (instruction set architecture).
    It makes sense: AMD moved to HBM with its Fury line this year. Nvidia is expected to follow suit in 2016, with cards offering up to 32GB of HBM2 as well.
    Both Nvidia and AMD are drawn to FinFET, which offers 90 percent more density than 28nm. Both will boost the transistor counts of their next-generation GPUs, with 17 to 18 billion transistors currently being rumoured.
    Source:
    http://www.fudzilla.com/news/graphic...nm-gpu-rumours


    It seems increasingly clear that the next generation of graphics cards will finally leave 28nm behind.

  6. #21
    Tech Veteran Avatar of Viriat0
    Joined
    May 2014
    Location
    LPPT
    Posts
    4,702
    Rating
    7 (100%)
    [Rumor] Nvidia Pascal will be manufactured by TSMC at 16nm and will carry up to 16GB of HBM memory



    Rumors have been circulating about who would manufacture Nvidia's next generation of chips, with TSMC and Samsung named as potential manufacturers. Information published by the site Business Korea claims that the South Korean company lost the contest, meaning Nvidia's next cards should be made on TSMC's 16-nanometer FinFET process.


    Current rumors indicate that the first graphics cards will arrive in the second quarter of 2016, equipped with the G100 chip, already with 16nm FinFET transistors but still with traditional GDDR5 memory. Models based on the G104, aimed at the mainstream market, would arrive later, as would cards with HBM memory.


    Nvidia debuts high-bandwidth memory starting with its second generation, HBM2. The new generation eases the restrictions on available memory capacity, and rumors indicate that Nvidia should ship its top-of-the-line model with 16GB of HBM memory. Another highlight of the Pascal cards is NVLink technology, which promises far higher bandwidth for communication between CPU and GPU, and even between GPUs. Instead of traditional PCIe and its 16 GB/s, NVLink promises up to 80 GB/s.



    The effects of the lithography change are eagerly awaited, since graphics cards have been settled at 28 nanometers for quite some time. TSMC's 16nm FinFET+ promises a 65% performance gain, up to 2x the density and 70% less power, compared with the 28nm manufacturing process.


    Samsung has much to lament in losing this potential customer, as Nvidia is the leading player in the graphics chip industry.

  7. #22
    Tech Ubër-Dominus Avatar of Jorge-Vieira
    Joined
    Nov 2013
    Location
    City 17
    Posts
    30,005
    Rating
    1 (100%)
    Micron’s GDDR5X Memory Analysis – Will Nvidia’s Next Generation Pascal Graphic Cards Utilize the Standard?

    A few days back, a rumor about Nvidia utilizing GDDR5X memory in some of its upcoming Pascal offerings made the rounds. While we do not know whether or not the report was true, I thought it would be a good idea to get into a bit more detail about the GDDR5X memory standard by Micron as well as clear up a few misconceptions. We will look at its proposed advantages as well as some of the disadvantages.
    An AMD slide comparing the specifications of GDDR5 memory and HBM. @AMD Public Domain
    Micron’s GDDR5X memory explored – could this be what Nvidia’s mid-range will look like in 2016?

    Let's start with the preliminaries: what exactly is GDDR5X? Ironically, this question has not been answered very clearly by Micron itself, and details are very vague. This is what we do know: GDDR5X is based on the GDDR5 standard and primarily doubles its prefetch while preserving "most of the command protocols of GDDR5". That means that while the bandwidth has been doubled, it is not, strictly speaking, an improvement of the GDDR5 standard; rather, it is a new branch of the same and arguably a completely new technology (contrary to what the 'GDDR'X name might suggest). One of the examples given is DDR3 to DDR4, which also happens to be a good approximate analogy for the GDDR5 to GDDR5X jump.
    Also, contrary to what some sources have reported in the past, the Micron GDDR5X standard is not proprietary; in fact, Micron has approached JEDEC to make it a universal standard. Given below is the only 'technical slide' released by Micron so far:

    We can immediately see that, as opposed to 32-bit wide memory access, GDDR5X supports 64-bit wide (double the prefetch) memory access, theoretically doubling the memory bandwidth. Keep in mind, however, that voltages will remain exactly the same, although the footprint taken by the memory on the card itself (one of the problems associated with GDDR5) will halve in size, thanks to the fact that Micron has managed to double the density of GDDR5. The company is expected to make a formal announcement in 2016, with availability of the standard in the same year. So the question then becomes: will Nvidia use GDDR5X in its upcoming line of GPUs (Pascal and beyond)?
    Before we answer that, let's look at the numbers for GDDR5 and GDDR5X.
    The bandwidth of GDDR5 can be computed via the following method:
    [DDR Clock Speed]*[2]*[Bus Width/8]
    *This is the same clock speed shown in popular OC tools such as MSI Afterburner.
    **All calculations given below assume the same number of GDDR5 or GDDR5X chips.
    This means that the GTX 980 Ti, which reads 3505 MHz (7010 MHz effective), has a theoretical bandwidth of [7010*384/8=>] ~336 GB/s.
    An extract from Micron’s nomenclature documentation. @Micron
    Now, while we don't know any other details about GDDR5X, I was able to find this PDF on Micron's website, which sheds some interesting details; details that we can use to estimate the speed and performance of this particular piece of technology. Thanks to the PDF, we know that the real clock rates of the memory will be the same. And if Micron's claim is true, then all we need to do is add a x2 multiplier. The equation for GDDR5X would therefore become:
    [DDR Clock Speed]*[4]*[Bus Width/8]
    Please note that the document lists the "real" clock speed and not the DDR clock speed. To change that into the DDR clock speed, we initially multiply the value by 2. So for a chip with a DDR clock rate of 3505 MHz, we will get the following bandwidth:




    [3505*4*384/8=>] ~673 GB/s
    Now, if you remember the original leak, the numbers it stated for GDDR5X were a '256-bit bus width with a 7000 MHz (DDR) clock rate and an actual achieved bandwidth of 448 GB/s'. Consequently, we now have a metric to ascertain whether or not the rumor has even the slightest grain of authenticity. The folks over at 3DCenter have included the x2 multiple in the DDR clock rate, which might (or might not) be a technical inaccuracy, since the real clock rates remain the same (only the 'effective' clock rate would change). To get the 448 GB/s number, we actually assume a 256-bit bus width and a DDR clock rate of 3500 MHz:
    [3500*4*256/8=>] ~448 GB/s
    This is pretty close to the 512 GB/s that HBM1 currently gives. Of course, HBM2 is a whole other ball game and runs circles around the performance advantages offered by GDDR5X. So is this memory standard dead on arrival? Unfortunately, once again, we do not have enough information to categorically answer that question, since we are missing several key points. More information will be shared in 2016, according to the press release.
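The two formulas above can be turned into a small sketch. The 3505 MHz / 384-bit figures are the GTX 980 Ti numbers used in the text; the 3500 MHz / 256-bit combination is the rumoured GDDR5X configuration:

```python
# GDDR5 vs GDDR5X theoretical bandwidth, per the formulas in the text.
# "ddr_clock_mhz" is the DDR clock (e.g. 3505 for the GTX 980 Ti).

def gddr5_bandwidth_gbs(ddr_clock_mhz: float, bus_width_bits: int) -> float:
    # Effective rate is 2x the DDR clock; /8 for bytes, /1000 for GB/s.
    return ddr_clock_mhz * 2 * (bus_width_bits / 8) / 1000

def gddr5x_bandwidth_gbs(ddr_clock_mhz: float, bus_width_bits: int) -> float:
    # GDDR5X doubles the prefetch, adding another 2x over GDDR5.
    return ddr_clock_mhz * 4 * (bus_width_bits / 8) / 1000

print(gddr5_bandwidth_gbs(3505, 384))   # ~336 GB/s (GTX 980 Ti)
print(gddr5x_bandwidth_gbs(3505, 384))  # ~673 GB/s (same config on GDDR5X)
print(gddr5x_bandwidth_gbs(3500, 256))  # 448 GB/s (the rumoured configuration)
```

The 448 GB/s result matching the leak is what lends the rumour some plausibility.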
    Now, Micron has stated and implied that it avoided creating a brand new standard from the ground up and instead worked from the GDDR5 standard. It also claims that most of the command protocols have been preserved. However, the most critical question is whether the interface itself is the same as GDDR5's. To put that into perspective, here are some things we feel very confident saying:

    • High Bandwidth Memory (HBM) is the memory standard of the future, and its low clocks plus low power consumption make it ideal for every form of compute.
    • Micron's GDDR5X is not going to offer a lower-power solution than HBM.
    • Nvidia is most definitely going to use HBM in its higher-end offerings.
    • Nvidia might decide to swap GDDR5 for GDDR5X in its mid-range offerings if (and only if) the switching costs (in terms of yield, sampling and development) are not significant. If they are significant, or if initial sampling isn't high, then Nvidia will simply not bother with GDDR5X and will switch directly to HBM2 once it achieves economies of scale.
    • Micron's GDDR5X will offer double the bandwidth and twice the memory density, with availability in 2016.

    This is why it matters a lot whether GDDR5X is as similar to GDDR5 as its name suggests. Nvidia will undoubtedly have next-generation offerings that run on GDDR5, so it would make sense to swap the GDDR5 on them for GDDR5X, but only if the switching cost is marginal. If GDDR5X is a standard that requires significant development costs before adoption, then it is very unlikely (more like impossible) that Nvidia shifts to it.
    Another point to note is that graphics processors (like almost every other processor) are years in the making, and if Micron has truly only just developed this memory (its press release seems to support this), then Nvidia will simply not have the option to switch, since its GPUs would already be in the fabrication stage at TSMC. The only way this could happen is if Micron approached Nvidia a long time ago and news of the standard is only now becoming public.
    And since we are now entering the domain of sheer speculation, I will end on this note: Nvidia's use of GDDR5X is a possibility, but not a probability, as far as I can see. Micron wasn't very explicit about it, but I have a feeling that the switching costs (or the sampling) will not be feasible for use in 2016. And considering that HBM will soon achieve economies of scale, GDDR5X seems a pointless thing to chase, unless, of course, it costs Nvidia nothing to adopt the standard.



  8. #23
    The Administrator Avatar of LPC
    Joined
    Mar 2013
    Location
    Multiverse
    Posts
    14,945
    Rating
    31 (100%)
    Hi!
    I think GDDR5X, doubling the performance of current GDDR5, could be a good bet for this interim step.
    It should be cheaper to implement up front and will allow 8 or 12GB of memory at controlled cost.

    Postponing HBM2 could be a smart idea as long as it doesn't compromise the new GPU's performance.
    Essentially, what I want to know is the performance vs. cost of the new product...

    Regards,

    LPC


  9. #24
    Tech Ubër-Dominus Avatar of Jorge-Vieira
    Joined
    Nov 2013
    Location
    City 17
    Posts
    30,005
    Rating
    1 (100%)
    I think the top-end Pascal cards will come with second-generation HBM; the card(s) sitting just below them should perhaps use first-generation HBM, and the mid-range should use this new GDDR5X standard, which allows a wider bus. That way Nvidia once again distances itself from AMD's offerings.
    Entry-level cards should continue to use GDDR5.

    In terms of cost, I don't think there will be much difference between GDDR5 and GDDR5X, but I think the extra bandwidth will give a good performance boost. If so, it will be another smart move by nVidia, as long as it doesn't abuse its prices or the dominant market position it currently holds.
    What remains to be seen is the cost that HBM2 will add to Nvidia's top cards, and whether this new memory standard will suffer from the production problems that affected AMD.
    Last edited by Jorge-Vieira: 18-10-15 at 17:53

  10. #25
    Tech Ubër-Dominus Avatar of Jorge-Vieira
    Joined
    Nov 2013
    Location
    City 17
    Posts
    30,005
    Rating
    1 (100%)
    Nvidia Pascal GPUs to be used to help model future hurricanes

    As safe as we are here in the UK from major storms and tectonic activity, the rest of the world isn't quite so lucky. Fortunately, though, we live in a world where not only can PS3s be connected together to form modular supercomputers, but commercial graphics cards can be leveraged for storm tracking too. That's what looks set to happen with Nvidia's upcoming Pascal designs, 760 of which the US NOAA agency is set to use to model future hurricanes.
    Although it might seem like a strange use for a technology originally designed with game rendering in mind, anyone who has attempted Bitcoin mining or large-scale protein folding will know that the parallel-processing capability of graphics hardware often far exceeds that of a traditional central processor.

    Source: Wikimedia
    This news comes from the inventor, developer, and general manager of Nvidia's CUDA technology, Ian Buck, who said that while a lot of software would need to be rewritten to run on the GPUs, the potential for increased processing power was huge. It should allow for tracking on a grid spanning the world with 3 km spacing. In comparison, today's models operate at best at a 12 km scale, with others in excess of 28 km (via The Register).
    This added resolution should make it much easier for meteorologists to track where storms are headed and how they might behave when they reach changes in geography.
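To get a feel for why the jump from 12 km to 3 km demands so much more compute, here is a crude back-of-envelope sketch. It assumes, purely for illustration, a uniform square grid over the Earth's surface, which real weather models do not use:

```python
# How many grid cells does a given horizontal spacing imply?
# Purely illustrative: assumes uniform square cells over the whole surface.

EARTH_SURFACE_KM2 = 510_000_000  # approximate surface area of the Earth

def grid_cells(spacing_km: float) -> float:
    return EARTH_SURFACE_KM2 / spacing_km ** 2

ratio = grid_cells(3) / grid_cells(12)
print(round(ratio))  # 16 -- a 3 km grid needs ~16x the cells of a 12 km grid
```

And that is before accounting for the finer time steps a higher-resolution simulation also requires, which multiplies the cost further.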
    Source:
    http://www.kitguru.net/components/gr...re-hurricanes/

  11. #26
    Tech Bencher Avatar of reiszink
    Joined
    Feb 2013
    Posts
    4,949
    Rating
    5 (100%)

  12. #27
    Master Business & GPU Man Avatar of Enzo
    Joined
    Jan 2015
    Location
    Euro 2016 Champion Country
    Posts
    6,223
    Rating
    41 (100%)
    Impressive. AMD, your turn.

  13. #28
    Tech Veteran Avatar of MTPS
    Joined
    Oct 2013
    Posts
    5,367
    Rating
    6 (100%)
    Quote Originally Posted by Enzo7231 View Post
    Impressive. AMD, your turn.
    Timed out.

  14. #29
    Master Business & GPU Man Avatar of Enzo
    Joined
    Jan 2015
    Location
    Euro 2016 Champion Country
    Posts
    6,223
    Rating
    41 (100%)
    Ahahahah. Nice move.

  15. #30
    Tech Ubër-Dominus Avatar of Jorge-Vieira
    Joined
    Nov 2013
    Location
    City 17
    Posts
    30,005
    Rating
    1 (100%)
    Nvidia Confirms: Pascal Flagship Graphic Card will Have 16GB HBM2 Memory and Boast 1TB/s Bandwidth at Launch

    The Japanese edition of the Nvidia GTC event was recently held (via VRWorld), and the company confirmed some previously leaked details about the upcoming Pascal GPUs. The next generation GP100 flagship will indeed boast HBM2 technology but will be limited to 16GB of HBM2 with around 1TB/s of bandwidth at launch. The Tesla segment of Nvidia GPUs will also receive NVLink, which bypasses the bottleneck of the PCIe bus with a custom interconnect that is roughly 5 times faster.

    Nvidia Pascal GPU will boast 16GB HBM with 1TB/s bandwidth – will only get 32GB HBM2 once the tech matures

    It was previously believed that the professional variants could have up to 32 GB of HBM2 memory; however, Nvidia has clarified that this amount can be increased to 32GB only depending on the state of the memory technology in 2016 (and on how much progress SK Hynix makes). This means that all Pascal products at the time of launch will be equipped with a maximum of 16GB of HBM2 memory. Of course, on the professional side of things, once ECC (Error Correcting Code) is added, the actual usable capacity should decrease.
    The flagship chip of the Pascal architecture is of course the GP100. It will feature DirectX 12 feature level 12_1 or above and is built on TSMC's 16nm FinFET+ process. The chip allegedly has 17 billion transistors – more than twice that of the GM200. Zauba records indicate that it taped out in June 2015, so we can expect it in the first half of 2016. The Pascal test vehicle actually had 4 GB of HBM1 memory, but as we now know, the production graphics cards will contain 16GB of HBM2. The card will have full support for FP16, FP32 and FP64 tasks and will come in two form factors: the PCIe form and the mezzanine high-bandwidth form to be used with NVLink.


    Nvidia has also stated that internally the GPU will surpass speeds of 2TB/s – something which means the next generation product will be truly distinguished from its predecessors. It is also worth noting that Nvidia has actually skipped Maxwell on the dual-GPU professional side and will jump directly to Pascal. The Pascal architecture is thought to fix all the faults of the Maxwell 2.0 architecture – and finally perfect full DirectX 12 compatibility on the 16nm FinFET process.

    Unfortunately for us, Nvidia has started scrambling the codenames of GPUs shipped for testing (for obvious reasons) – so we can no longer pinpoint the exact graphics card models. However, the pricing and timing of the recent Zauba shipping entries indicate that the Pascal GPU is nearing the end of its testing and will soon shift to CS sampling – the finished, final product. The exact launch date remains unknown – and we have strong reason to believe that any delay will depend entirely on the sampling and yield of HBM2. Since 16nm FinFET+ is actually based on the 20nm backbone, and considering TSMC has had ample time to mature the process, the node itself should have reasonable yield by early 2016. HBM2 sampling, on the other hand, is one front on which we currently have no conclusive reports.



 

 