It looks like we're well on track to having the blues on the front line. AMD had better watch out. R(aja) for Vendetta
Ideias sem Nexo e Provas do Tráfico de Hardware
"que personifica o destino, equilíbrio e vingança divina." Dejá vú. Que cena!
I have no great illusions; it will be hard for Intel to look NVIDIA in the eye, at least for the next few years! But if they manage it, all the better for us.
In any case, they don't need to have the absolute best products; it's enough to shake up the GPU market the way AMD shook up the CPU market with Ryzen. That alone would already be a huge improvement over what we have today.
Intel i7 5820K - ASRock X99M Killer - 16GB G.Skill DDR4 - Gigabyte GTX 980Ti G1 - Plextor M6e 256GB + Samsung 850 EVO 500GB - Corsair H110 - EVGA G3 750W - Acer 27" 144Hz IPS - Zowie EC2-A - Filco Majestouch 2 TKL - HyperX Cloud II Pro
NVIDIA's Tom 'TAP' Petersen Shockingly Departs For Intel
This time Intel didn't poach one of AMD's people, but one of NVIDIA's instead.
NVIDIA is losing its longtime Director of Technical Marketing, Tom "TAP" Petersen, who announced on Friday that he had wrapped up his final day with the firm. It has not yet been publicly announced where Tom will end up or what he has planned next, though we have it on good authority he will go where other stalwarts in the graphics industry have recently gone. More on that in a moment.
Tom is a veteran in the industry and an all around good dude. Before landing at NVIDIA in 2005, he spent the bulk of his career as a CPU designer, having worked with IBM and Motorola on the PowerPC team. He also spent some time with Broadcom after it acquired SiByte, where he was the Engineering Director for the BCM1400, an embedded quad-core multiprocessor.
They really are betting big on these hires...
Intel looks like Real Madrid in the Galácticos era.
I'm waiting to see what comes out of that pool of people. What gets me is what on earth they did with Raja. Was that a good hire? The NVIDIA one must have been. The HardOCP guy is also going to help out with reviews.
It really does look like football.
We want beastly graphics cards
And cheap ones
"Intel" and "cheap" are words that rarely go together.
Intel ‘Xe Unleashed’ GPU Lineup Leaked – Xe Power 2 Flagship Graphics Card, Roadmap And More
Intel Xe Unleashed: e denotes GPUs, Xe 2 flagship GPU will be a ‘seamless’ dual GPU, landing on 6/31 next year
The Intel Xe philosophy holds that innovation needs to happen on three main fronts: process, microarchitecture and “e”. We are already familiar with the first two, but ‘e’ is something that has not been successfully implemented so far. Sure, there have been dual GPUs, but they all had to trade off some part of the functionality and never scaled linearly. Intel’s graphics team believes it has solved just that. With a brand new architectural approach (Xe) and a software layer (OneAPI) that can scale indiscriminately between any number of GPUs, it’s ready to remedy the years of neglect that ‘e’ has faced in the industry.
We managed to get our hands on 3 slides from the presentation that Raja Koduri gave:
This slide is the cornerstone of the Xe philosophy and the big reveal about what e actually denotes. This also reveals the existence of the X4 class of GPUs by the way, which as you will see is just one step in Intel’s plan to dominate the GPU market.
They have designed OneAPI to act as an intermediary between the Direct3D layer and the GPU(s) (I am told they have a Linux solution in the works as well) and allow the user to scale across multiple GPUs seamlessly. Seamless is the keyword here, as a multi-GPU card that performs cohesively as a single GPU has never been made. According to the presentation shown at the Xe Unleashed event, the card will register essentially as one large GPU. This will allow it to work with applications that may not have multi-GPU capability and retain almost all backwards compatibility.
Developers won’t need to worry about optimizing their code for multi-GPU; OneAPI will take care of all that. This will also allow the company to beat the foundry’s usual lithographic die-size limit, which is currently in the range of ~800mm². Why have one 800mm² die when you can have two 600mm² dies (the smaller the die, the higher the yield) or four 400mm² ones? Armed with OneAPI and the Xe macroarchitecture, Intel plans to ramp all the way up to octa GPUs by 2024. From this roadmap, it seems like the first Xe class of GPUs will be X2.
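The "smaller dies yield better" claim in the article can be sketched with a simple Poisson defect model; the defect density used below is purely an illustrative assumption, not a figure from Intel or any foundry:

```python
import math

def die_yield(area_mm2, defects_per_cm2=0.2):
    """Poisson yield model: probability a die has zero defects.

    The 0.2 defects/cm^2 density is an illustrative assumption.
    """
    return math.exp(-defects_per_cm2 * area_mm2 / 100.0)

# Smaller dies yield disproportionately better:
for area in (800, 600, 400):
    print(f"{area} mm^2 die: {die_yield(area):.1%} good dies")
```

Under this toy model the yield gap widens as dies grow, which is the whole rationale for going MCM: several smaller chiplets ship more working silicon per wafer than one monolithic die of comparable total area.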
The tentative timeline for the first X2 class of GPUs was also revealed: June 31st, 2020. This will be followed by the X4 class sometime in 2021. It looks like Intel plans to add two more cores every year, so we should have the X8 class by 2024. Assuming Intel has the scaling solution down pat, it should actually be very easy to scale these up. The only concern here would be packaging yield – which Intel should be more than capable of handling, and binning should take care of any wastage issues quite easily. Neither NVIDIA nor AMD has yet gone down the MCM path, and if Intel can truly deliver on this design then the sky’s the limit.
Without any further ado, here’s the footage of the teaser our spy managed to take:
There you go folks, here’s your very first look at the official Intel Xe GPU, or more accurately the Intel X2 GPU. This short but rather spicy trailer gives a lot away. The design rocks a carbon fibre aesthetic with blue accents (from what I have been told, the blue stripes will be glow in the dark!) and the first reference design will be made in partnership with ASUS. You can also quite clearly see two intake pipes for what appears to be an internal water loop.
My source has told me that the card will actually have two modes. A standard mode, which will allow the dual GPU to function at moderate clock speeds for most users, and a turbo boost mode, which when connected to the AIO upgrade, will allow the user to reach clock speeds exceeding 2.7 GHz (well 2.71828 to be exact) on both GPUs! This is an absolutely astonishing feat that allows Intel to reduce the upfront cost of their GPU. You can either buy the card with the AIO as a package or pay less and upgrade later.
I am told that Intel is planning to be very competitive in pricing and when asked hinted that their flagship would be more affordable than any counterpart on the market. This means we are looking at a maximum MSRP of $699 for the X2 flagship. The X2 GPU will be based on the new 4D XPoint memory and feature the Direct3D 14_2 feature level as far as hardware goes. Here are the complete specs that were discussed during the event:
Wccftech Intel Xe 2 GPU specs:
Process: 10/7nm (Intel)
Architecture: Xe
Stream Processors: 12288 (6144 x2)
Core Clock (Boost): 1600 MHz (2718 MHz)
Memory: 32 GB 4D XPoint
Bandwidth: 8 TB/s
Peak Performance: 66.8 TFLOPs
Die Size: 600mm² (x2)
GPU Type: MCM
Interconnect: Unknown (TSV based)
Feature Level: Direct3D 14_2
TDP: 350W
Power Connectors: 3x 8-pin
Expected MSRP: $699 (TBC)
Launch: 6/31 2020
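A quick sanity check on the quoted peak figure: counting one fused multiply-add as 2 FP32 ops per shader per clock (the usual convention, assumed here), the table is internally consistent:

```python
shaders = 12288        # 6144 per die, two dies
boost_hz = 2.718e9     # the listed 2718 MHz boost clock
ops_per_clock = 2      # one FMA = 2 FP32 ops, standard counting

tflops = shaders * boost_hz * ops_per_clock / 1e12
print(f"Peak: {tflops:.1f} TFLOPs")  # ≈ 66.8, matching the table
```

So the 66.8 TFLOPs figure is plain FP32 arithmetic at the (suspiciously Euler-flavoured) 2.718 GHz boost clock.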
Intel came "out of nowhere" to resurrect dual-GPUs from the dead? Nice!
Yes, but it seems to be done in a different way. In this case, it looks like they'll use a chiplet system, connected by a mesh.
And with a proprietary API linking the hardware to the software.
If this works, they can scale GPU performance without being so constrained by manufacturing processes.
66 TFLOPs at 360W, for 700 dollars? Yeah, right...
I'll only believe it when it's in stores. And then we still need games and software optimized to take advantage of that much power.
I hadn't even noticed the 66 TFLOPs. That's either an error, or they're counting as if it were FP8.
And then they claim support for D3D 14_2?!
This must be an April Fools' joke that went out early. It has to be.