Thread: DirectX 12

  1. #196 – Jorge-Vieira (Tech Ubër-Dominus)
    Exclusive: The Nvidia and AMD DirectX 12 Editorial – Complete DX12 Graphic Card List with Specifications, Asynchronous Shaders and Hardware Features Explained

    Addressing the AotS controversy

    This editorial serves as my first (and hopefully only) foray into the AotS controversy that has been plaguing the interwebs recently. As I mentioned at the beginning of this editorial, rivalry between the two giants has been part and parcel of their history; with the advent of DirectX 12, it was bound to increase tenfold.
    So when something as technical as DX12 was hyped up to gigantic proportions for the laymen, I thought it was only fair that we put the ‘technicality’ back into the hype.

    We have explained some of the setups in which an Nvidia card appeared to gain less of an advantage with DX12, and we have established a probable explanation for the only anomalous scenario in which it didn't.
    I think I would like to quote Robert Hallock from AMD here:
    “I think gamers are learning an important lesson: there’s no such thing as “full support” for DX12 on the market today. There have been many attempts to distract people from this truth through campaigns that deliberately conflate feature levels, individual untiered features and the definition of “support.””
    So summarizing, all hardware vendors fully and completely support the DirectX 12 API.

    No hardware vendor can claim 100% support of all hardware features, and the differences are usually negligible in nature. If one is deciding by features observable by the end user and the gaming experience, the vote might fall in favour of Nvidia with its Feature Level 12_1 support, which will allow advanced illumination effects in next-generation games. That said, there are ways to simulate those effects without much of a performance hit on Radeons as well. If we are talking about performance increase (in terms of untapped potential, not maximum potential), then an argument can be made for AMD with its Async advantage.
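    To make the feature-level point above concrete, here is a minimal C++ sketch (my own illustration, assuming the standard d3d12.h headers and a DX12-capable driver; error handling omitted) of how an application queries the maximum feature level and the two optional hardware features just mentioned, conservative rasterization and rasterizer ordered views:

    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    void ReportDx12Support()
    {
        // Create a device at the lowest feature level DX12 accepts (11_0).
        ComPtr<ID3D12Device> device;
        if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                     IID_PPV_ARGS(&device))))
            return; // no DX12-capable GPU/driver present

        // Ask the driver for the highest feature level it supports.
        const D3D_FEATURE_LEVEL levels[] = {
            D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
            D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1 };
        D3D12_FEATURE_DATA_FEATURE_LEVELS fl = {};
        fl.NumFeatureLevels = sizeof(levels) / sizeof(levels[0]);
        fl.pFeatureLevelsRequested = levels;
        device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &fl, sizeof(fl));

        // Query the optional hardware features behind Feature Level 12_1.
        D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
        device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));

        // fl.MaxSupportedFeatureLevel, opts.ConservativeRasterizationTier and
        // opts.ROVsSupported now describe what this particular GPU exposes;
        // the DX12 API itself runs on all of these feature levels.
    }

    Whatever values come back, the editorial's point stands: every vendor runs the API, but the optional hardware bits differ per GPU.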




    Also remember that developers usually code for the lowest common denominator, which means that both AMD's and Nvidia's edge depends entirely on how many developers actually use these features; the expected mean result is a win-win for owners of cards from both vendors.
    All that said and done, we will be looking out for more DX12 titles (AotS is, after all, a single DX12 title, and there is far too much bias involved in drawing a conclusion from a single data source) and seeing how they fare in terms of untapped potential that was unlocked and maximum potential (which is what we should actually be looking at).
    If you take away one thing from this editorial, let it be that there is very little black and white advantage in terms of DX12 compatibility for either vendor.



  2. #197 – Banned member
    Good evening,

    I think both AMD and NVIDIA have done a good job here with DX12 support for previous-generation graphics cards. Honestly, I think both deserve congratulations.

    Regards.

  3. #198 – Enzo (Master Business & GPU Man)
    I can hardly believe my eyes. AMD ahead of Nvidia for the first time since April 2013?? Honestly, I'd like this not to be just more hype, like the one we saw about the Fury's performance, based on a benchmark made... by AMD...

  4. #199 – reiszink (Tech Bencher)
    The producer of Ashes of the Singularity speaking about this Async Shaders controversy.

    It seems the story is only halfway through, after all.

    Regarding Async compute, a couple of points on this. First, though we are the first D3D12 title, I wouldn't hold us up as the prime example of this feature. There are probably better demonstrations of it. This is a pretty complex topic and to fully understand it will require significant understanding of the particular GPU in question that only an IHV can provide. I certainly wouldn't hold Ashes up as the premier example of this feature.

    We actually just chatted with Nvidia about Async Compute, indeed the driver hasn't fully implemented it yet, but it appeared like it was. We are working closely with them as they fully implement Async Compute. We'll keep everyone posted as we learn more.
    http://www.overclock.net/t/1569897/v...#post_24379702

  5. #200 – Enzo (Master Business & GPU Man)
    Or until then, it's all business as usual, right?

  6. #201 – reiszink (Tech Bencher)
    Yes, normal; it's all still early days, and I can even believe AMD may have some advantage at this initial stage, the fruit of its work on Mantle.

    But a lot of people practically held Nvidia's funeral over this DX12 business, based on a single benchmark in alpha state, of a game that hasn't even been released yet.

    Just hop over to the neighbouring forum and go back a few weeks of pages to see the party a handful of people threw.

    To me, any conclusion drawn now is pure haste. Let the games come out, let both companies deliver competent drivers, and then we'll see.

    For the good of the market, I'm hoping DX12 really is AMD's turning point; the company needs it and so do consumers.

  7. #202 – MAXLD (Tech Membro)
    Quote: Originally posted by reiszink
    But a lot of people practically held Nvidia's funeral over this DX12 business, based on a single benchmark in alpha state, of a game that hasn't even been released yet.
    More likely:
    It's precisely the opposite. Deep down, this is a great show for nVidia.

    First of all, the Maxwell cards are good for DX11... because that is what existed, and they are good at it in terms of specs, with a lot of work at the driver level too. The AMD cards have other characteristics and future-proof specs that are now indeed proving much better suited to the actual future of the new APIs... but until now AMD hasn't benefited from that, because those features weren't being used...

    In other words: now that the new APIs and VR will really demand different specs (which the AMD cards already had, but which aren't yet taken advantage of), nVidia will bring out exactly what's needed with the new Pascal cards.

    Summing up: nVidia is going to make a fortune, because eventually everyone will trade their old GTX cards for new ones. Once again the previous generation is made to look like obsolete junk that has to be replaced to get the full power, in this case for the new stuff (DX12, 4K, VR...). Just wait for Jen-Hsun's fanfare at the Pascal launch event... everyone will be wide-eyed at the presentation of the supposed "novelties and firsts" from "Steve Jobs 2.0".

  8. #203 – Jorge-Vieira (Tech Ubër-Dominus)
    Nvidia Actively Working To Implement DirectX 12 Async Compute With Oxide Games In Ashes Of The Singularity

    Whether Nvidia hardware supports DirectX 12 Async Compute or not has been a hot topic of debate, exacerbated by the company's silence. Despite the best efforts of the media to get Nvidia to comment and elaborate on the matter, the company has yet to come out with any statement.

    Fortunately, however, the vocal nature of Oxide Games' engagement in this debate may help answer some of the lingering questions about Nvidia's support for Async Compute. Over the past week we've seen several comments from a developer at Oxide Games about the DirectX 12 benchmark for Ashes Of The Singularity, in which the developer elaborated on several aspects of the benchmark and why Nvidia GPUs seemingly struggled with it, or at least did not perform as gracefully as their AMD Radeon counterparts. The developer concluded that it came down to an advantage that AMD GPUs possess over their Nvidia counterparts via a feature dubbed Asynchronous Shading/Shaders/Compute.
    Nvidia Is Actively Working With Oxide Games To Implement DirectX 12 Async Compute Support For GeForce 900 Series GPUs In Ashes Of The Singularity

    Yesterday the same developer issued a status update on the very same topic of DX12 Async Compute via a comment in that same vibrant Ashes Of The Singularity overclock.net thread.
    Regarding Async compute, a couple of points on this. First, though we are the first D3D12 title, I wouldn’t hold us up as the prime example of this feature. There are probably better demonstrations of it. This is a pretty complex topic and to fully understand it will require significant understanding of the particular GPU in question that only an IHV can provide. I certainly wouldn’t hold Ashes up as the premier example of this feature.
    We actually just chatted with Nvidia about Async Compute, indeed the driver hasn’t fully implemented it yet, but it appeared like it was. We are working closely with them as they fully implement Async Compute. We’ll keep everyone posted as we learn more.
    As we detailed in an in-depth editorial two days ago, Nvidia GTX 900 series GPUs do have the hardware capability to support asynchronous shading/computing. A question arises, however, as to whether that can only be achieved through heavy use of pre-emption and context switching, which in turn adds substantial latency and defeats the purpose of the feature, which is to reduce latency and improve performance. AMD claims that this is indeed the case. Nvidia has not yet provided us with an answer to this question, although they have promised to do so, and as soon as that happens we will make sure to bring you an update.
    In the meantime the Oxide Games developer has stated that Nvidia is actively working with them to implement support for the feature, as Async Compute has not yet been fully implemented in Nvidia's latest DirectX 12-ready drivers. We will make sure to revisit the benchmark once some form of Async Compute support is achieved on GTX 900 series GPUs, to see what effect it may have on performance. We should point out that older GeForce generations, prior to the 900 series, do not support asynchronous shading, so the feature should have no bearing on the performance of those cards.
    DirectX 12 Asynchronous Compute : What It Is And Why It’s Beneficial

    AMD has clearly been a far more vocal proponent of Async Compute than its rival. The company put this hardware feature under the limelight for the first time last year, and attention has been directed towards it even more this year as the launch of the DirectX 12 API loomed ever closer. Prior to that, the technology remained, for the most part, out of sight.
    Asynchronous Shaders/Compute, otherwise known as Asynchronous Shading, is one of the more exciting hardware features that DirectX 12 and Vulkan – as well as Mantle before them – expose. This feature allows tasks to be submitted to and processed by the shader units inside GPUs (what Nvidia calls CUDA cores and AMD dubs stream processors) simultaneously and asynchronously, in a multi-threaded fashion. In layman's terms, it's similar to CPU multi-threading, which Intel dubs Hyper-Threading. It works to fill the gaps in the machine, making sure that as much of the hardware inside the chip as possible is utilized to drive performance up and that nothing is left idling.
    One would have thought that, with multiple thousands of shader units inside modern GPUs, proper multi-threading support would already have existed in DX11. In fact, one would argue that comprehensive multi-threading is crucial to maximize performance and minimize latency. But the truth is that DX11 only supports basic multi-threading methods that can't fully take advantage of the thousands of shader units inside modern GPUs. This meant that GPUs could never reach their full potential, as many of their resources would be left untapped.

    Multithreaded graphics in DX11 does not allow for multiple tasks to be scheduled simultaneously without adding considerable complexity to the design. This meant that a great number of GPU resources would spend their time idling with no task to process because the command stream simply can’t keep up. This in turn meant that GPUs could never be fully utilized, leaving a deep well of untapped performance and potential that programmers could not reach.



    Other, complementary technologies attempted to improve the situation by enabling prioritization of important tasks over others. Graphics pre-emption allowed tasks to be prioritized, but just like multi-threaded graphics in DX11 it did not solve the fundamental problem, as it could not enable multiple tasks to be handled and submitted simultaneously and independently of one another. A crude analogy would be that graphics pre-emption merely adds a traffic light to the road rather than adding an additional lane.

    Out of this problem a solution was born, one that's very effective and readily available to programmers with DX12, Vulkan and Mantle. It's called Asynchronous Shaders and, just as explained above, it enables a genuine multi-threaded approach to graphics: tasks can be processed simultaneously and independently of one another, so that each of the multiple thousands of shader units inside a modern GPU can be put to as much use as possible to drive performance.
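    Purely for illustration, here is a minimal, hedged D3D12 sketch of what this looks like at the API level (hypothetical pre-recorded command lists, error handling omitted): the application creates a graphics queue and a separate compute queue, and a fence expresses the one point where the graphics work actually depends on the compute results. Whether the two streams then overlap on the GPU is precisely the hardware question discussed above.

    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    // gfxList / computeList are hypothetical, pre-recorded command lists.
    void SubmitAsync(ID3D12Device* device,
                     ID3D12CommandList* gfxList,
                     ID3D12CommandList* computeList)
    {
        D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
        gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
        D3D12_COMMAND_QUEUE_DESC cmpDesc = {};
        cmpDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only

        ComPtr<ID3D12CommandQueue> gfxQueue, computeQueue;
        device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));
        device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&computeQueue));

        ComPtr<ID3D12Fence> fence;
        device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

        // Kick off the compute stream; nothing forces it to wait on graphics.
        computeQueue->ExecuteCommandLists(1, &computeList);
        computeQueue->Signal(fence.Get(), 1);            // mark compute as finished

        // The graphics queue waits only at the point where it needs the results.
        gfxQueue->Wait(fence.Get(), 1);
        gfxQueue->ExecuteCommandLists(1, &gfxList);
    }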

    However, to enable this feature the GPU must be built from the ground up to support it. In AMD's Graphics Core Next based GPUs it is enabled through the Asynchronous Compute Engines integrated into each GPU. These are structures built into the chip itself, and they serve as the multi-lane highway by which tasks are delivered to the stream processors.






    Each ACE is capable of handling eight queues, and every GCN 1.2 based GPU has a minimum of eight ACEs, for a total of 64 queues. ACEs debuted with AMD's first GCN (GCN 1.0) GPU, code-named Tahiti, in late 2011, which had two Asynchronous Compute Engines. They were originally added to GPUs mainly to handle compute tasks, because they could not be leveraged through the graphics APIs of the time. Today, however, ACEs can take on a more prominent role in gaming through modern APIs such as DirectX 12, Vulkan and Mantle.

    Earlier this year AMD released a demo of this hardware feature that showcased a performance improvement of 46%. So far, however, Nvidia has not talked much, if at all, about the feature, nor has it showcased a beneficial use case for it on its hardware like its rival, which is understandably where most of the questions and the controversy stem from. The company has not been shy, however, about promoting other DX12 hardware features such as conservative rasterization and raster ordered views, which is where we believe its DX12 focus has been directed.
    Speaking of GPUs in general: while the modern GPU architectures of the day – like GCN, which powers all of the new gaming consoles and AMD's roster of graphics cards, or Maxwell, which powers Nvidia's latest Tegra mobile processors and its roster of graphics cards – have grown to accumulate far more similarities than differences, different hardware will always exhibit different architectural traits. There will always be something that one specific architecture does better than another. This diversity is dictated by the needs of the market and by the diversity of the minds through which the technology was conceived. The semantics will always be there, and while it can be fun to discuss them, looking at the whole picture is the only way to make substantial progress.



  9. #204 – Jorge-Vieira (Tech Ubër-Dominus)
    Preemption Context Switching Allegedly Best on AMD, Pretty Good on Intel & Potentially Catastrophic on NVIDIA

    Recently, NVIDIA has been under a lot of pressure after the results of the first DirectX 12 benchmark clearly favored AMD. There have been reports that NVIDIA’s existing GPU architecture has problems with Async Compute, though our own Usman cleared up that it is indeed possible to use it.

    Now, another report seems to condemn NVIDIA’s choices for Maxwell. Tech Report’s David Kanter said in a video podcast that people who work at Oculus have mentioned how preemption context switching, a very important feature especially in VR scenarios, can be really bad on NVIDIA cards.
    I’ve been told by folks at Oculus that the preemption context switching is, and this is prior to the Skylake gen 8 architecture which has better preemption, but the best preemption context switching was with AMD by far. Intel was pretty good, and NVIDIA was possibly catastrophic.
    The real issue is, if you have a shader running, a graphics shader, you need to let it finish. And it could take you a long time, it could take you over 16ms.
    NVIDIA is very, to their credit they're open and honest about this and how you tune for Oculus Rift. You have to be super careful because you can miss a frame boundary, because the preemption latency is not particularly good. And again, it's not like this is a bad decision on the part of NVIDIA, it's, you know, just what made sense. Preemption wasn't something that was super important when the chip was designed, and with the API support of the time there wasn't much bang for your buck. Now, I'm sure they will improve it for Pascal; NVIDIA is full of good, sharp architects and they'll probably fix it in Pascal.
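    To put that 16 ms figure in perspective, here is a small back-of-the-envelope sketch (my own arithmetic, not from the podcast): the per-frame budget is simply 1000 ms divided by the refresh rate, so a non-preemptible 16 ms shader consumes almost the entire 16.7 ms budget of a 60 Hz display and comfortably overshoots the roughly 11.1 ms budget of a 90 Hz VR headset.

    #include <cstdio>

    int main()
    {
        const double shaderMs = 16.0;                    // long-running shader from the quote
        const double refreshHz[] = { 60.0, 75.0, 90.0 }; // desktop, DK2-class, CV1-class VR

        for (double hz : refreshHz) {
            double budgetMs = 1000.0 / hz;               // time available per frame
            std::printf("%5.1f Hz -> %5.2f ms budget, 16 ms shader %s\n",
                        hz, budgetMs,
                        shaderMs > budgetMs ? "misses the frame" : "fits");
        }
        return 0;
    }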



    AMD cards exploit their great Async Compute support here, once again. It’s not all doom & gloom for NVIDIA, though: first of all, as reported by Khalid, NVIDIA is already working to implement Async Compute in Ashes of the Singularity. If they manage to do it in a beneficial way for performance, it seems likely that they’ll use it for other applications too, such as countering the latency of preemption context switching.
    Moreover, earlier in that same podcast David Kanter replied as follows on the larger Async Compute debate:
    So obviously AMD is going to get a much bigger benefit from DX12, in part because their drivers were just not as good as NVIDIA’s, but look, for every person on the Internet who is really unhappy on the performance of your Titan or 980, shoot me a line and I’ll take it off your hands. You can go out and buy a Radeon, but if you think that 980 isn’t gonna cut it because it doesn’t have Async Compute, send it to me, I’ll take care of it and put it in a good home.
    His point, clearly conveyed in a sarcastic way, seems to be that while Async Compute is going to be important in the future, it really isn’t right now. Currently, NVIDIA cards are faster than AMD’s and overall Maxwell is a great architecture that perhaps just isn’t as future-proof as Fiji.
    Even so, we’ll have to run many more tests before this theory becomes a fact, and surely NVIDIA will try to improve this aspect from their end as much as possible.



  10. #205 – Jorge-Vieira (Tech Ubër-Dominus)
    AMD’s Robert Hallock: “The vast majority of DX12 titles in 2015/2016 are partnering with AMD”

    While the ongoing debate over DX12 performance favoring one architecture over the other is still going strong, with claims from both AMD and Oxide Games, the Head of Global Technical Marketing at AMD, Robert Hallock, says that the vast majority of DX12 titles in the next two years will be partnering with AMD.

    The majority of upcoming DX12 titles are partnering with AMD

    Hallock wrote on a Reddit thread:
    “You will find that the vast majority of DX12 titles in 2015/2016 are partnering with AMD. Mantle taught the development world how to work with a low-level API, the consoles use AMD and low-level APIs, and now those seeds are bearing fruit.”

    So far there haven't been many DX12 titles announced that are partnering with AMD, apart from Deus Ex: Mankind Divided and Ashes of the Singularity. Maybe Hallock is referring to some as yet unannounced titles, or to games that will be getting DX12 support in the future. Whether this holds true, and whether early benchmarks are indicative of real DX12 gaming performance, remains to be seen. With Nvidia claiming 82% of the discrete GPU market share, developers would do well to support both vendors' architectures as best they can. With AMD gaining ground we can only benefit, as healthy competition between the two giants is a good thing.


    There is one minor detail that AMD neglected

    Personally though, and as minor as it may be, there is one detail that I feel AMD neglected. Given the popularity of HTPCs and the affordability of Ultra HD TV sets, HDMI 2.0 is a requirement to enjoy 4K 60Hz content. Nvidia has been offering support for it for more than a year with the Maxwell architecture, with even the latest budget GeForce GTX 950 supporting it; AMD's Fury and Fury X, while much more expensive GPUs, do not. Yes, you can argue that any true gamer uses a DisplayPort connection and a monitor with a fast refresh rate and excellent response time, but we are talking HTPC here: a setup that provides a PC experience for more than just gaming, and the gaming it does provide is more living-room oriented, for which those factors are not as important. Recent technologies such as High Dynamic Range and Wide Color Gamut offer incredible potential, and some TV manufacturers are already using 10-bit 4K panels with brand new chipsets that support full chroma 4:4:4 at 4K@60, a must for any serious PC user.
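    As a rough sanity check on why HDMI 2.0 specifically matters here (my own back-of-the-envelope sketch, using the commonly cited CEA total timing of 4400 x 2250 pixels for 4K@60 and the usual usable-bandwidth figures of about 8.16 Gbps for HDMI 1.4 versus about 14.4 Gbps for HDMI 2.0):

    #include <cstdio>

    int main()
    {
        // 3840x2160@60 with full 4:4:4 chroma uses the CEA-861 total timing below.
        const double totalH = 4400, totalV = 2250, hz = 60;  // includes blanking intervals
        const double bitsPerPixel = 24;                      // 8-bit RGB / YCbCr 4:4:4

        double pixelClock = totalH * totalV * hz;            // ~594 MHz
        double videoGbps  = pixelClock * bitsPerPixel / 1e9; // ~14.3 Gbps of video data

        std::printf("Pixel clock: %.0f MHz, video data rate: %.2f Gbps\n",
                    pixelClock / 1e6, videoGbps);
        std::printf("HDMI 1.4 usable ~8.16 Gbps -> not enough; "
                    "HDMI 2.0 usable ~14.4 Gbps -> fits\n");
        return 0;
    }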

    OLED displays are another thing that gamers should really be excited about. The instant pixel response of OLED technology creates an experience that is truly incredible, reminiscent of the CRT monitors of old. Add an infinite contrast ratio with true blacks and you have a gaming panel that makes any game come to life, with realistic colors that pop out of the screen and create a 3D effect that really has to be seen to be believed. OLED televisions are coming down in price and offer an excellent alternative to LED 4K TVs, which themselves are becoming more effective for gaming with some really low input lag.





    AMD has marketed the Fury X towards the small form factor crowd, and that is even more the case with the Nano. A small form factor PC is exceptionally well suited as the HTPC that will sit beautifully next to your home cinema. AMD has promised to release an adapter that will support HDMI 2.0, but there has been no news so far.



  11. #206 – Enzo (Master Business & GPU Man)
    The show goes on. Very well. There will be plenty of people swayed by all this publicity. Until more cards, more drivers and, most importantly, the games come out, it's all pretty speculation.
    It would do AMD a lot of good to gain ground. And if that happens, it's not just graphics cards that will make it money: there are still the consoles. Until then, Nvidia stays ahead.

  12. #207 – Jorge-Vieira (Tech Ubër-Dominus)
    Frostbite Already Supports DX12, Says DICE Technical Director

    Many enthusiast gamers have been wondering whether Frostbite (DICE’s own engine that is now used by most studios inside Electronic Arts) would get support for DirectX 12 soon, and now an official confirmation has been provided by Technical Director Stefan Boberg via Twitter.

    The Frostbite engine already supports DX12, apparently.



    Lars (@CentroXer): "@bionicbeagle when is DX12 going to be a part of frostbite?"

    Stefan Boberg (@bionicbeagle): "@CentroXer it already is, no word on which game will be first though"



    This isn’t a huge surprise. In fact, DICE worked closely with AMD in order to promote Mantle, the forerunner low level API that paved the way for Microsoft’s DirectX 12 and Khronos’ Vulkan.
    Since Battlefield 4, all game releases that used Frostbite also featured support for Mantle, though the biggest performance benefits were found in Battlefield 4 itself.
    Now the big question is: which Frostbite game will be the first one to support DirectX 12? Battlefield 5 (or whatever it will be called) seems an obvious candidate, of course. We know that a new Battlefield will be released in Q3/Q4 2016 and the chances that it will be DX12 ready are extremely high.
    Holiday 2016 games made with Frostbite might have DX12 as minimum spec

    However, there are other candidates as well. Mass Effect 4 is also planned to launch in Q4 2016, and depending on release schedule this could be earlier or later than the next Battlefield.
    But there are Frostbite games being released very soon. Need for Speed, for instance, launching on November 3; Star Wars: Battlefront, scheduled for November 17, exactly two weeks later; finally, Mirror’s Edge Catalyst which debuts on February 23.
    We know that Johan Andersson, another Technical Director at DICE, was already pushing for Windows 10 and DX12 as the minimum spec for games coming in late 2016. Perhaps the impressive figures that put Windows 10 installations at over 81 million (with an average of 400K installations per day since launch) will push forward internal plans at EA, and even this year's games will support DirectX 12.



  13. #208 – SleepyFilipy (Tech Mestre)
    For what it's worth, for anyone who wants to follow the conversation.

    https://www.reddit.com/r/nvidia/comm...vidia_cant_do/

  14. #209 – Jorge-Vieira (Tech Ubër-Dominus)
    Aquanox Dev: We’d Do Async Compute Only With An Implementation That Doesn’t Limit NVIDIA Users

    The debate around Async Compute and its viability on NVIDIA hardware is still raging within the PC enthusiast community.

    AMD’s Technical Marketing Lead Robert Hallock went as far as saying that Maxwell cards were utterly incapable of performing Async Compute without heavy reliance on slow context switching.
    Our own Usman wrote a detailed editorial on the specifics of AMD and NVIDIA hardware in relation to Async Compute, stating that it is indeed supported by the Maxwell 2.0 architecture, though it is unclear whether it will yield performance benefits. Shortly after that, Oxide Games announced that they are working with NVIDIA to fully implement Async Compute in Ashes of the Singularity, probably with a combined hardware/software solution. It seems unlikely that this will yield performance gains similar to AMD's fully hardware-based implementation, anyway.
    However, we have to consider the problem from the perspective of game creators. In this context, while interviewing Digital Arrow about Aquanox Deep Descent, we asked them whether they intended to use Async Compute in their game. Here's their reply:




    We aim to develop a game that is enjoyable to everyone who wishes to join the world of Aqua. Implementing and/or focusing on technologies that would limit certain people from accessing the game is entirely against our philosophy of being a community focused developer. If at any point, there will be an implementation possible that will not limit NVIDIA card users, then we will certainly explore this option as well.
    This isn’t really surprising. According to the latest Jon Peddie Research report, NVIDIA has reached 81% of the discrete GPU market share; it wouldn’t be prudent at all for developers to focus on a technology that may not translate very well for the majority of their potential user base.
    This is why it is unlikely that many games in the near future will make Async Compute central to their development. Besides, there are other DirectX 12 features to exploit, some of which (such as Conservative Rasterization and Raster Order Views) are only available on NVIDIA hardware right now; Digital Arrow is currently evaluating DirectX 12 options for Aquanox Deep Descent.
    DirectX 12 is certainly something we are having in mind. Testing will reveal how much of it we can use.
    There is more to DirectX 12 than Async Compute related benefits, and it will be interesting for developers to explore these opportunities as well.
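    Purely as an illustration of what an implementation "that will not limit NVIDIA card users" could look like (a hypothetical sketch, not Digital Arrow's code): the engine records the same compute dispatches either way, and only the submission path changes, driven by a per-GPU toggle.

    #include <d3d12.h>

    // Hypothetical engine-side toggle: set where the driver/GPU is known to
    // overlap compute with graphics, cleared everywhere else.
    struct GpuCaps { bool useAsyncCompute; };

    void SubmitParticleSimulation(const GpuCaps& caps,
                                  ID3D12CommandQueue* directQueue,
                                  ID3D12CommandQueue* computeQueue,
                                  ID3D12CommandList*  computeTypeList, // recorded as COMPUTE
                                  ID3D12CommandList*  directTypeList)  // same dispatches, recorded as DIRECT
    {
        if (caps.useAsyncCompute) {
            // Path A: dedicated compute queue; the work may overlap with rendering.
            ID3D12CommandList* lists[] = { computeTypeList };
            computeQueue->ExecuteCommandLists(1, lists);
        } else {
            // Path B: identical dispatches on the direct queue; they run in order
            // with the graphics work, so every DX12 GPU behaves the same and no
            // user is locked out of the feature set.
            ID3D12CommandList* lists[] = { directTypeList };
            directQueue->ExecuteCommandLists(1, lists);
        }
    }

    The fallback path costs nothing on hardware that does not benefit from a separate queue, which is the kind of vendor-neutral approach the quote above describes.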



  15. #210 – Jorge-Vieira (Tech Ubër-Dominus)
    Exclusive: Xbox One – Potential Impacts of DirectX 12, Asynchronous Compute and Hardware Specifications Explored; Compared with Sony’s PS4

    Foreword: A few weeks back, I delved into the DirectX 12 question as it applies to the PC world. This time around, I will attempt to do something similar for the current-generation consoles, specifically the Xbox One. The specifications, capabilities and features of both consoles are scattered and are usually not found in one coherent place. While I would expect a vested party like Microsoft to deliberately refrain from clearing up the confusion, the extent to which information about the consoles and DirectX 12 (in the case of the Xbox One) is exaggerated remains one of the root causes of the flame war. This editorial isn't, by any means, a thorough documentation of either console, but it does aspire to be a resource for a good number of the technical queries of the average console gamer.
    Xbox One: A technical summary, potential impacts of DirectX 12, Asynchronous Compute capabilities and comparison with the PS4

    The Xbox One's launch has been marred by allegations that it is underpowered and an inferior alternative to the PS4. With the advent of DirectX 12, there has been an even split between camps debating whether the new API will bring any real performance gains to the console. In this editorial, I will explore the possible and the probable in terms of the DirectX 12 API (and "DirectX 12" hardware-based features where applicable). It will also contain a complete overview of the hardware, OS, API and other features that are of interest to console gamers.
    If you are reading this and are interested in learning the basics (such as differentiating between API and hardware features, as well as getting a complete overview of the technicalities), I would recommend giving the original DX12 editorial a read first. Unlike graphics cards, which are meant for the PC industry, the territory of console hardware is treacherous at best, so some of the blanks that I have filled could pass as educated speculation (clearly marked as such with the [TBC] tag), although all of it is based on real documentation.
    In this article you will find:

    1. A basic hardware specification comparison of both consoles
    2. A sufficiently thorough comparison of the operating system and known APIs of both consoles
    3. An analysis of the µArchitecture and an investigation of Asynchronous Compute support for the Xbox One and PS4
    4. Answering the DirectX 12 Question and Something to Ponder

    Disclaimer: Every attempt has been made to ensure the accuracy of the data present in this piece. However, we accept the possibility of a mistake or accidental omission due to human error. If any such hiccup is spotted, please let me know and I will make sure to update accordingly at the earliest.



 

 