Topic: DirectX 12

  1. #241
Enzo (Master Business & GPU Man)
And judging by that photo, he's very well integrated

  2. #242
Jorge-Vieira (Tech Ubër-Dominus)
    Codemasters’ EGO Engine Is DX12 Enabled, Features Raster Ordered Views & Conservative Rasterization


    The GDC 2016 schedule delivers once again. It looks like Codemasters’ EGO Engine 4.0 (the version used in F1 2015) has already received the DirectX 12 treatment and Principal Programmer Tom Hammersley will talk about it in a session alongside Leigh Davies, Graphics Software Engineer at Intel.
    The description actually goes into a fair amount of detail, mentioning Raster Ordered Views (AVSM, Decal Blending) and Conservative Rasterization (voxel based ray tracing) as DX12 features added to the EGO Engine in order to enable new graphics effects.
Codemasters present a post-mortem on their new rendering engine used for F1 2015, detailing how they balanced the apparently opposing goals of optimizing for mainstream processor graphics, high-end multi-core and DX12. The F1 2015 engine is Codemasters’ first to target the eighth generation of consoles and PCs, with a new engine architecture designed from scratch to distribute the game’s workload across many cores, making it a great candidate for DX12 and for utilising the processing power of high-end PCs. This session will show the enhanced visuals created using a threaded CPU-based particle system without increasing GPU demands, and also cover the changes made to the engine while moving from DX11 to DX12. We will also discuss the graphics effects added using the new DX12 features Raster Ordered Views (AVSM and Decal Blending) and Conservative Rasterization (voxel-based ray tracing), adding even greater realism to the F1 world.


    Takeaway
    An insight into the main architectural changes needed to move successfully to DX12 and realise a performance benefit, together with an understanding of some of the new effects possible with feature level 12-capable hardware.
    An understanding of how to balance CPU and GPU workloads to get the best out of modern PC hardware, offering improved visuals and more interactive environments.
    What’s interesting is that like Just Cause 3, F1 2015 does not currently feature DX12 support. Will these games receive DX12 patches, or is it just a way to get the engines ready for the next titles? Let’s hope to gain some insight on this during the respective GDC 2016 sessions.
    Still, it’s nice to see that more developers are finally getting ready to support DirectX 12.
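    For context, both DX12 features named in this session are optional hardware capabilities that an engine has to query at startup before it can rely on them. Below is a minimal C++ sketch (ours, not Codemasters' code) of that check, using the standard ID3D12Device::CheckFeatureSupport call:

```cpp
#include <d3d12.h>
#include <cstdio>
// Link against d3d12.lib.

int main()
{
    // Create a device on the default adapter at the baseline feature level.
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // D3D12_FEATURE_DATA_D3D12_OPTIONS reports the optional capabilities,
    // including the two the EGO Engine session is about.
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &opts, sizeof(opts))))
    {
        std::printf("Raster Ordered Views:       %s\n",
                    opts.ROVsSupported ? "yes" : "no");
        std::printf("Conservative Rasterization: tier %d\n",
                    static_cast<int>(opts.ConservativeRasterizationTier));
    }
    device->Release();
    return 0;
}
```

    Hardware at feature level 12_1 is required to support both, which matches the session's mention of "feature level 12 capable hardware".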



  3. #243
Jorge-Vieira (Tech Ubër-Dominus)
    Ultimate Ninja Storm Generations Running on RPCS3 DX12 Powered Emulator!


    Emulation of the PlayStation 3 is quickly becoming a reality; it seems not a week goes by without hearing that yet another commercially available game is up and running on the latest emulator, and that’s great news for gamers, especially those with high-end gaming PCs capable of playing them.
    RPCS3 has certainly been making impressive progress and is now in its prime as far as development goes, with constant updates and improvements that are making it a workable and fully playable emulator for quite a few games. Sure, it’s not quite playing the more demanding games such as The Last of Us just yet, but it really is only a matter of time before that nut is cracked. The latest demonstration, which anyone can download and try for themselves, assuming you have the retail disc of the game (or the ISO), is of Naruto Shippuden: Ultimate Ninja Storm Generations.
    YouTube user ‘Zangetsu’ recently shared a video which shows the Naruto game running on the emulator with fully functioning in-game visuals and matching audio at 30fps, triple the playable framerate of previous attempts with this game. That’s right, the emulator is playing a full commercial game at 30fps, which is groundbreaking to say the least. 60fps would be even better, but let’s not get too picky; it’s early days.
    The emulator is using the DX12 engine to churn out the graphics, and as you’ll see in the video below, it’s running perfectly well.

    • RPCS3 v0.0.0.6
    • RPCS3 Git build: 21/01/16
    • PS3 game: Naruto Shippuden: Ultimate Ninja Storm Generations
    • Status: playable in DX12
    • Graphics render: very good in DX12
    • Sound: perfect
    • Save: works

    As you can see in the video, the user is running the game on an AMD FX-8350 CPU, paired with an Nvidia GeForce GTX 980 and 16GB of RAM, meaning those with similar hardware should be able to recreate similar results.
    For more information on RPCS3, check out their official emulation website here.
    Source:
    http://www.eteknix.com/ultimate-ninj...ered-emulator/

  4. #244
Jorge-Vieira (Tech Ubër-Dominus)
    AMD Says DX12 And Vulkan “Serve a Need And Add Value”


    The advent of low-level APIs such as DirectX 12 and Vulkan has the potential to revolutionize the way modern games scale across various hardware setups. Clearly the gains compared to DirectX 11 remain unknown until a game’s engine offers a direct comparison between the two APIs on identical hardware. Theoretically, it could be the most significant change to PC gaming in years and allow for enhanced optimization. There’s a huge debate regarding Microsoft’s DirectX 12 system and the open source Vulkan API. In a recent interview with Tom’s Hardware, AMD’s VR director, Daryl Sartain, described the current state of modern APIs and how Mantle contributed to the development of DirectX 12:
    “I view Mantle as something – because we did a lot of contribution to the features into DX12 – that has been spun into DX12 in so many ways. But to your question on Vulkan versus DX12, without getting into all the religious aspect, what I said yesterday [on the VR Fest panel] is that I think that both serve a need and add value. Can you make an argument that one is better than the other? You can make an argument about anything. Just bring a lawyer into the room.”
    “But I do believe that, and what I most am concerned about is our ISVs, the ISV community, where they gain the greatest benefit. You know, there are some people developing on Linux, all different flavors of life – so it’s a difficult question as to which [API] should we be focused on, which one is better”.
    “My opinion is that Windows as a platform, as an OS, is far better and far more evolved today than some of the previous generations, and that’s to be expected. DX12 and its integration into Windows is a great experience, is a great development environment, and has great compatibility. Does that mean that Vulkan doesn’t have a place? No. I think that answer really has to come from the development community, not from us.”
    This is a fairly non-committal response, but it’s too early to see a clear advantage from either API. At least there’s a clear alternative to DirectX 12 if you want to go down the open source route. Given the success of Windows as a gaming operating system, I cannot see DirectX 12 being overtaken unless there are some very clear performance or feature benefits.
    Source:
    http://www.eteknix.com/amd-says-dx12...and-add-value/


    Right, but DX will probably always take priority over Vulkan...

  5. #245
Jorge-Vieira (Tech Ubër-Dominus)
    AMD CodeXL & NVIDIA GeForce Tools To Help Devs With DX12 Optimization


    AMD and NVIDIA will host several presentations during the Game Developers Conference (GDC) 2016, scheduled for March 14-18 at San Francisco’s Moscone Center. We’ve told you about many of them, but there are still two very interesting ones we haven’t reported on until now.
    Both NVIDIA and AMD will be showcasing their developer tools in dedicated sessions, with the overarching goal to help studios in their transition to the low level APIs (Microsoft’s DirectX 12 and Khronos Group’s Vulkan).
    NVIDIA’s is titled Raise your Game with NVIDIA GeForce Tools, and it will be hosted by Jeffrey Kiel (Sr. SW Engineering Manager, Graphics Developer Tools).
    A new era is beginning in PC graphics; low-level graphics APIs and VR headsets for the masses introduce a new set of challenges for programming graphics in next-gen games. This talk will cover the latest offering of developer tools for DirectX 12 development and VR. Developers will learn how to recognize the new API concepts through demos and walkthroughs, as well as how to profile DirectX 12 with the new Range Profiler. The audience will also learn how to take advantage of NVIDIA Nsight Visual Studio Edition to develop VR applications.
    AMD’s has a similarly bold title (Let your game shine – optimizing DirectX 12 and Vulkan performance with AMD CodeXL) and will be hosted by Senior Manager Doron Ofek.
    AMD CodeXL is a suite of tools for software developers that demand extreme performance from their software and hardware. The new CodeXL 2.0 release introduces tools for game developers who use Microsoft DirectX® 12 and Vulkan programming. This presentation will review these capabilities, including: capturing and visualizing the timeline of a frame, analyzing multi-threaded host and GPU interaction, pinpointing hotspot API calls, exposing inefficient GPU utilization and much more. In addition we’ll review how to develop and build Vulkan programs containing multiple shaders, and analyze the resource demands of their generated ISA on multiple target platforms without executing them.
    The presentation will also review CodeXL CPU Profiling and Power Profiling capabilities.
    These tools should definitely help game developers in their transition to DX12 (and Vulkan), which is bound to gradually happen between 2016 and 2017. Overall, there are lots of interesting presentations scheduled for GDC 2016; while most of them won’t be live streamed, slides are usually shared afterwards by the developers themselves.
    We’ll be on the lookout to bring you all the latest information coming out of GDC 2016.



  6. #246
Jorge-Vieira (Tech Ubër-Dominus)
    Dolphin Emulator Now Has DirectX 12 Support


    DirectX 12 is a low-level API which has the potential to allow for console-like optimization across a wide range of PC hardware. While it’s still early days, there’s a great deal of excitement surrounding games with plans to use Microsoft’s revolutionary API. For example, Quantum Break is a DirectX 12 exclusive, so it will be fascinating to see the performance numbers on various setups. Additionally, there are rumours circulating which suggest that Rise of the Tomb Raider might receive a DirectX 12 patch. On another note, the Vulkan API is an open source alternative supporting Windows 7, 8.1, 10, Linux, Android and more! Competition is vital to push technology forward, and it’s not beyond the realm of possibility to see emulators adopt both APIs.
    Dolphin is one of the most promising emulators and allows users to play GameCube and Wii games! This is a fantastic project because it makes it possible to experience iconic Nintendo games at high resolutions. On the original hardware, the output resolution is quite limiting and has a really murky look on modern televisions. As always, it’s incredibly difficult to create a working emulator with low hardware demands. Currently, Dolphin works very well using DirectX 11, but there’s some room for improvement.
    The user “hdcmeta” on the Dolphin forums has created a DirectX 12 backend which exhibits performance improvements of up to 50%:
    “Generally, graphics-intensive games get a nice win, while (Gamecube CPU)-bound games (Zelda OOT from the ‘bonus disk’ is a good example) are the same – graphics wasn’t on the critical path there. At higher resolutions, graphics becomes more important, so the relative improvement can increase there. In general, CPU usage is now much lower for the same workload relative to DX11/OpenGL.”
    Here we can see the percentage difference between DirectX 11, DirectX 12, and OpenGL:



    This is astonishing and showcases the kind of optimization possible on low-to-mid-range hardware. I’m interested to see if the performance increases scale in a similar fashion on higher-end GPUs. Whatever the case, it seems DirectX 12 offers a major benefit in emulators, and this is going to be great news for anyone wanting to play older Nintendo games in glorious detail.


    Source:
    http://www.eteknix.com/dolphin-emula...tx-12-support/

  7. #247
Enzo (Master Business & GPU Man)
    Whoa... nice improvement.

  8. #248
Jorge-Vieira (Tech Ubër-Dominus)
    AMD Teases DX12, VR Ready PCs – 7x Faster Than XBOX One And PS4 Yet Just As Small

    AMD is teasing Radeon powered, console-sized, DirectX 12 and VR ready PCs that are up to 9 times more powerful than the Xbox One and 7 times more powerful than the PS4. A photo of a bunch of these systems, appropriately finished in red, was posted on Twitter by AMD’s Roy Taylor, one of the biggest VR advocates inside the company.

    Very few details were given initially besides the picture you see above and the following tweet by Roy:
    “Developers, we have something coming for you… @AMDDevCentral @FalconNW @AMDRadeon #VR #DX12”
    It was later confirmed that the systems you see above are “Tiki” models from Falcon Northwest, a system builder that has collaborated with AMD to put the world’s fastest graphics card, AMD’s dual Fiji board, inside a compact, console-sized, DirectX 12 and VR ready powerhouse.
    AMD’s DirectX 12 And VR Ready Dual Fiji Powered Tiki Systems Are 7 To 9 Times Faster Than The Xbox One And PS4

    Interestingly we had heard of this system before, during Roy Taylor’s keynote at VRLA.
    “Last time I was here I also promised you that we would make the world’s most powerful small computer for developers. We promised you we would take two of our highest end GPUs and put it inside that tiny box and if you go downstairs we actually have a demonstration of a dual GPU, 12 TeraFlops, fastest GPU solution in the world, inside of Tiki. It’s a feat of engineering we are delighted with.” – Roy Taylor, AMD Corporate Vice President of Alliances at VRLA Winter Expo

    While not all specs for the system in question have been revealed just yet, Roy did reveal that it packs a dual Fiji graphics card with 12 teraflops of compute, nearly 9 times that of the Xbox One, 7 times that of the PlayStation 4 and double that of Nvidia’s GeForce GTX Titan X.
    A single Fiji XT GPU, as found in AMD’s Radeon R9 Fury X flagship graphics card, is rated at over 8 teraflops, so two would easily amount to over 16 teraflops. However, it seems that the version employed in the Tiki is a custom-designed affair with modest air cooling so it can fit inside this very compact form factor.
    | WCCFTech               | Dual Fiji Board          | R9 Fury X      | R9 Nano            | R9 Fury       | R9 290X           |
    | GPU                    | Fiji XT x 2              | Fiji XT        | Fiji XT            | Fiji Pro      | Hawaii XT         |
    | Stream Processors      | 8192                     | 4096           | 4096               | 3584          | 2816              |
    | GCN Compute Units      | 128                      | 64             | 64                 | 56            | 44                |
    | Render Output Units    | 128                      | 64             | 64                 | 64            | 64                |
    | Texture Mapping Units  | 512                      | 256            | 256                | 224           | 176               |
    | Memory                 | 8GB HBM (4GB per GPU)    | 4GB HBM        | 4GB HBM            | 4GB HBM       | 4GB GDDR5         |
    | Memory Interface       | 8192-bit (4096 per GPU)  | 4096-bit       | 4096-bit           | 4096-bit      | 512-bit           |
    | Memory Frequency       | 500MHz                   | 500MHz         | 500MHz             | 500MHz        | 1250MHz           |
    | Effective Memory Speed | 1Gbps                    | 1Gbps          | 1Gbps              | 1Gbps         | 5Gbps             |
    | Memory Bandwidth       | 1TB/s                    | 512GB/s        | 512GB/s            | 512GB/s       | 320GB/s           |
    | TDP                    | TBA                      | 275W           | 175W               | 275W          | 250W              |
    | GFLOPS/Watt            | -                        | 31.3           | 47.1               | 26.2          | 19.3              |
    | Launch Price           | TBA                      | $649           | $649               | $549          | $549              |
    | Launch Date            | 2016                     | 24th June 2015 | 7th September 2015 | 10th July 2015| 24th October 2013 |
    How AMD Managed To Cram All Of Those Teraflops Into Such A Small Box

    Fiji is AMD’s largest ever graphics processing unit and the very first in the world to feature 3D-structured, 2.5D-stacked High Bandwidth Memory, or HBM for short, a standard that AMD and SK Hynix, one of the world’s largest memory makers, co-invented. Because vertically stacking dies enables much greater densities, and because HBM chips are smaller than GDDR5 chips to begin with, there are immense area savings on the printed circuit board of the graphics card, allowing for the creation of far more compact form factors.
    Also unlike GDDR5, HBM is packaged alongside the host processor, in this case the GPU, on a single interposer. The closer proximity to the GPU enables significantly wider memory interfaces and reduces latency. The smaller, shorter connections also enable greater power efficiency, and HBM requires less voltage to operate, allowing for even more power savings.
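    The bandwidth figures in the table above follow directly from this: peak bandwidth is just bus width times per-pin data rate. A quick sketch of the arithmetic (our illustration, using the table's numbers):

```cpp
#include <cstdio>

// Peak memory bandwidth in GB/s = bus width (bits) x data rate (Gbps/pin) / 8.
static double bandwidth_gb_s(int bus_bits, double gbps_per_pin)
{
    return bus_bits * gbps_per_pin / 8.0;
}

int main()
{
    // R9 290X: 512-bit GDDR5 at 5 Gbps per pin -> 320 GB/s.
    std::printf("R9 290X (GDDR5): %4.0f GB/s\n", bandwidth_gb_s(512, 5.0));
    // Fury X: 4096-bit HBM at only 1 Gbps per pin -> 512 GB/s.
    std::printf("Fury X (HBM):    %4.0f GB/s\n", bandwidth_gb_s(4096, 1.0));
    // Dual Fiji: two 4096-bit HBM interfaces -> 1 TB/s aggregate.
    std::printf("Dual Fiji (HBM): %4.0f GB/s\n", bandwidth_gb_s(8192, 1.0));
    return 0;
}
```

    HBM wins on width, not clocks: an 8x wider bus more than makes up for one fifth the per-pin speed.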
    HBM allowed AMD to create significantly smaller and more power efficient graphics cards, starting with the popular R9 Nano and going all the way up to the dual Fiji board inside the Tiki. A consumer version of the dual GPU board was supposed to land sometime late last year, but it was delayed to late Q1 / early Q2 of this year to align it with major VR headset launches, according to the company.

    AMD’s Roy Taylor hints that these Tiki systems may be given away to developers working on virtual reality platforms as well as DirectX 12.



    As impressive as these systems are, perhaps more exciting for ordinary gamers is the advent of AMD’s Polaris – Radeon 400 series – graphics cards launching this summer, featuring cutting-edge 14nm technology, the 4th generation GCN graphics architecture and a whole new set of user and developer features.
    AMD has so far demoed the smallest member of the Polaris family. The GPU demonstrated performance and power efficiency results that were by far the most impressive we have seen from one generation of GPUs to the next. The company revealed that there will be multiple GPUs based on the Polaris architecture to address the entry-level, mid-range and high-end segments of the discrete graphics market.
    So far it has unveiled two 14nm FinFET Polaris GPUs, named Polaris 11 and Polaris 10. One of them is a very small GPU, estimated to be around the same size as AMD’s Cape Verde, which measures 123mm². The goal for this chip is to deliver console-class gaming performance in thin and light notebooks, and it is the GPU that AMD chose to demo at CES. The other member of the Polaris family is a large GPU, described as a successor to the Radeon R9 Fury X.
    Polaris – Radeon 400 series – graphics cards are set to launch this summer, before the back-to-school season, for both desktops and gaming notebooks.
    | WCCFTech                 | Year | Process | Flagship GPU     | Product   | Transistors (Billions) | Memory          | Bandwidth |
    | Southern Islands         | 2012 | 28nm    | Tahiti           | HD 7970   | 4.3                    | 3GB GDDR5       | 264GB/s   |
    | Volcanic Islands         | 2013 | 28nm    | Hawaii           | R9 290X   | 6.2                    | 4GB GDDR5       | 320GB/s   |
    | Pirate/Caribbean Islands | 2015 | 28nm    | Fiji             | R9 Fury X | 8.9                    | 4GB HBM1        | 512GB/s   |
    | Arctic Islands/Polaris   | 2016 | 14nm    | Greenland/Vega10 | ?         | Up to 18               | Up to 32GB HBM2 | 1TB/s     |


  9. #249
Enzo (Master Business & GPU Man)
    AMD surprise number 1: check

  10. #250
Jorge-Vieira (Tech Ubër-Dominus)
    Super Mario Galaxy 2 looks awesome at 4K 60FPS with DX12 support

    Nintendo Wii U emulators have come a very, very long way, as the latest video from YouTuber 'SHD' shows. SHD shared a video of Super Mario Galaxy 2, and it has never looked better.
    As you can see in the video above, if you've got the Internet connection to handle it, Super Mario Galaxy 2 is running at 4K 60FPS. SHD is using an Intel Core i7-5820K overclocked to 4.2GHz, backed up by an NVIDIA GeForce GTX 980 Ti. It looks absolutely beautiful and shows just what faster hardware can do for older games and older consoles.


    Source:
    http://www.tweaktown.com/news/50550/...ort/index.html

  11. #251
Jorge-Vieira (Tech Ubër-Dominus)
    AMD’s Secret DirectX 12 Weapon That Nvidia Traded Off – Async Compute Explained

    Ashes of the Singularity has been a major topic of interest for PC gamers for some time due to its native support for the low-level DirectX 12 API. The beta has just been released and we’ve spent some time testing an especially interesting DirectX 12 feature that the game supports, dubbed explicit multi-GPU. This feature allows any two DX12-compatible GPUs, including integrated GPUs, regardless of vendor or capability, to work together to drive the framerate of your game even higher, essentially pooling their resources and combining their efforts to deliver a greater level of performance.
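    At the API level, "explicit" means the application itself enumerates every adapter, integrated or discrete, from any vendor, and creates an independent device on each; the engine then divides the frame's work between them. A minimal C++ sketch (not Oxide's actual code) of that enumeration step:

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>
#include <cstdio>
// Link against d3d12.lib and dxgi.lib.

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the WARP software rasterizer

        // Any DX12-capable adapter gets its own device, regardless of
        // vendor or capability; the engine splits the frame between them.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            std::wprintf(L"Using adapter: %s\n", desc.Description);
            devices.push_back(device);
        }
    }
    return devices.empty() ? 1 : 0;
}
```

    Unlike DX11-era SLI/CrossFire, nothing here is hidden in the driver: the game decides how the devices cooperate.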

    The results are quite interesting and you should check them out if you haven’t already. Another major DirectX 12 feature that Ashes of the Singularity supports is asynchronous compute, otherwise known as async compute/computing or async shaders/shading. AMD has long touted the DirectX 12 capabilities of its GCN architecture, often citing async compute as an advantage for its Radeon graphics cards, specifically those based on the GCN graphics architecture, which includes the HD 7000 series and subsequent generations all the way up to the Fury series and 300 series.
    Just like us, many other tech publications have been busy benchmarking AoTS, and so far the results seem to be overwhelmingly in AMD’s favor. This is primarily the result of a joint effort between the company and the developers, who have worked together on a comprehensive DirectX 12 async compute implementation.

    Ashes of the Singularity – DirectX 12 Benchmark, Async Compute Enabled by Tomshardware.com

    Ashes of the Singularity – DirectX 12 Benchmark, Async Compute Enabled by Anandtech.com
    Anandtech.com
    These findings do go hand-in-hand with some of the basic performance goals of async shading, primarily that async shading can improve GPU utilization. At 4096 stream processors the Fury X has the most ALUs out of any card on these charts, and given its performance in other games, the numbers we see here lend credence to the theory that RTG isn’t always able to reach full utilization of those ALUs, particularly in Ashes, in which case async shading could be a big benefit going forward.
    Nvidia Confirms Async Compute Support Is Missing From GeForce Drivers For Ashes Of The Singularity

    Anandtech.com
    NVIDIA sent a note over this afternoon letting us know that asynchronous shading is not enabled in their current drivers, hence the performance we are seeing here. Unfortunately they are not providing an ETA for when this feature will be enabled.
    Sean Pelletier @PellyNV:
    “Fun FACT of the day: Async Compute is NOT enabled on the driver-side with public Game Ready Drivers. You need app-side + driver-side!”
    We were frankly surprised to find out that Nvidia has still not implemented async compute in its GeForce drivers for Ashes of the Singularity, despite news six months ago that it was actively working with the developers to get it done. The company hasn’t offered any sort of timetable yet either as to when we should expect to see this delivered.
    It should be noted that there’s a bit of history here with Nvidia and async compute. Fable Legends was another game that supported this feature, yet GeForce graphics cards did not support it in a typical fashion; we’ve already published our detailed investigative report into the bizarre behavior the feature exhibits in that game on Nvidia graphics cards. The feature was implemented in a much more limited fashion in Fable Legends, whereas Oxide Games makes liberal use of the technology in its real-time strategy title AoTS.
    This more extensive use may require a level of complexity which simply can’t be delivered through software alone, and could explain why the company hasn’t released a compatible driver yet. Unfortunately the company’s silence on the matter has fostered a culture of unhealthy speculation around the issue. It is very important to understand that there are distinct intrinsic architectural differences between Nvidia’s Maxwell architecture and AMD’s GCN that play a crucial role in all of this.
    Joel Hruska, Extremetech.com
    Every bit of independent research on this topic has confirmed that AMD and Nvidia have profoundly different asynchronous compute capabilities. Nvidia’s own slides illustrate this as well. Nvidia cards cannot handle asynchronous workloads the way that AMD’s can, and the differences between how the two cards function when presented with these tasks can’t be bridged with a few quick driver optimizations or code tweaks. Beyond3D forum member and GPU programmer Ext3h has written a guide to the differences between the two platforms — it’s a work-in-progress, but it contains a significant amount of useful information.


    ext3h.makegames.de (notes accompanying the comparison table):
    1. Additional queues are scheduled in software; only memory limits apply.
    2. One 3D engine plus up to 8 compute engines running concurrently.
    3. Since GCN 1.1, each compute engine can seamlessly interleave commands from 8 asynchronous queues.
    4. Compute and 3D engine cannot be active at the same time, as they utilize a single function unit. The Hyper-Q interface used for CUDA does in fact support concurrent execution, but it is not compatible with the DX12 API; if it were used, there would be a hardware limit of 32 asynchronous compute queues in addition to the 3D engine.
    5. Execution slots dynamically shared between all command types.
    6. Execution slots reserved for compute commands.
    7. Execution slots reserved for use by the graphics command processors. According to Nvidia, GM20x chips should be able to lift the reservation dynamically; this behaviour appears to be limited to CUDA and Hyper-Q.
    8. Execution slots dynamically shared between each of the 8 compute queues since GCN 1.1.
    9. SMX/SMM units can only execute one type of wavefront at a time; a full L1, local shared memory and scheduler flush is required to switch mode. This is most likely due to a single shared memory block providing both L1 and LSHM in compute mode.
    The developer went on to conclude that the current situation is a mess. Nvidia on one hand offers a solution that, while unintuitive and rudimentary – a trade-off made to reduce the power and area footprint of its GPUs – can still offer real-world benefits. AMD on the other hand offers a solution that’s more comprehensive and flexible and aids developers directly, at the cost of additional hardware inside the graphics processors, and with it higher power consumption and larger chip size.
    It becomes very clear that this was simply a matter of the two vendors favoring one trade-off over the other from the very beginning. In this particular case, Nvidia’s drive for power efficiency underlines one of the sacrifices it chose to make, and that sacrifice came at the cost of async compute.
    ext3h.makegames.de
    For the future, I hope that Nvidia will get on par with AMD regarding multi engine support. AMD is currently providing a far more intuitive approach which aids developers directly.
    This will come at an increased power consumption as the flexibility naturally requires more redundancy in hardware, but will most likely increase GPU utilization throughout the industry while accelerating development. The ultimate goal is still a common standard where you don’t have to care much about hardware implementation details, the same way as x86 CPUs have matured over the course of the past 25 years.



    Ashes of the Singularity – DirectX 12 Benchmark, Async Compute Enabled by Extremetech.com
    Extremetech’s Joel Hruska offers an interesting perspective in his article on the current state of DirectX 12 async compute support from Nvidia and AMD, as well as the hardware capabilities of their current graphics architectures. He also pointed to some excerpts from the reviewer’s guide that shed light on how the developers handle optimizing for either vendor.
    We have created a special branch where not only can vendors see our source code, but they can even submit proposed changes. That is, if they want to suggest a change our branch gives them permission to do so…
    This branch is synchronized directly from our main branch so it’s usually less than a week from our very latest internal main software development branch. IHVs are free to make their own builds, or test the intermediate drops that we give our QA.
    Oxide primarily optimizes at an algorithmic level, not for any specific hardware. We also take care to avoid the proverbial known “glass jaws” which every hardware has. However, we do not write our code or tune for any specific GPU in mind. We find this is simply too time consuming, and we must run on a wide variety of GPUs. We believe our code is very typical of a reasonably optimized PC game.
    When asked about the decision to turn on async compute by default, Dan Baker of Oxide Games had this to say:
    “Async compute is enabled by default for all GPUs. We do not want to influence testing results by having different default settings by IHV; we recommend testing both ways, with and without async compute enabled. Oxide will choose the fastest method to default to based on what is available to the public at ship time.”
    Our Take, DirectX 12 Asynchronous Compute : What It Is And Why It Matters

    AMD has clearly been a far more vocal proponent of async compute than its rival. The company first put this hardware feature under the limelight two years ago, and more attention was directed towards it last year as the launch of the DirectX 12 API loomed ever closer. Prior to that, the technology remained, for the most part, out of sight.
    Asynchronous Shaders/Compute, otherwise known as asynchronous shading, is one of the more exciting hardware features that DirectX 12 and Vulkan – as well as Mantle before them – exposed. This feature allows tasks to be submitted to and processed by the shader units inside GPUs (what Nvidia calls CUDA cores and AMD dubs stream processors) simultaneously and asynchronously, in a multi-threaded fashion. In layman’s terms it’s similar to CPU multi-threading, which Intel dubs Hyper-Threading: it fills gaps in the pipeline by making sure that as many of the hardware resources inside the chip as possible are kept busy, driving performance up and leaving nothing idle.
    One would’ve thought that, with multiple thousands of shader units inside modern GPUs, proper multi-threading support would have already existed in DX11. In fact, one would argue that comprehensive multi-threading is crucial to maximize performance and minimize latency. But the truth is that DX11 only supports very basic multi-threading techniques that can’t fully take advantage of the thousands of shader units inside modern GPUs. This meant that GPUs could never reach their full potential, as many of their resources would be left untapped.

    Multithreaded graphics in DX11 does not allow multiple tasks to be scheduled simultaneously without adding considerable complexity to the design. This meant that a great number of GPU resources would sit with no task to process because the command stream simply couldn’t keep up. This in turn meant that GPUs could never be fully utilized, leaving a deep well of untapped performance and potential that programmers could not reach.

    Other complementary technologies attempted to improve the situation by enabling prioritization of important tasks over others. Graphics pre-emption allowed for the prioritization of tasks, but it did not solve the fundamental problem, as it could not enable multiple tasks to be handled and submitted simultaneously and independently of one another. A crude analogy would be that graphics pre-emption merely adds a traffic light to the road rather than an additional lane.

    Out of this problem a solution was born, one that’s very effective and readily available to programmers with DX12, Vulkan and Vulkan’s spiritual predecessor, AMD’s Mantle. It’s called Asynchronous Shaders and, just as we’ve explained above, it enables a genuine multi-threaded approach to graphics. It allows tasks to be processed simultaneously and independently of one another, so that each of the multiple thousands of shader units inside a modern GPU can be put to as much use as possible, driving performance and power efficiency up.
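    In D3D12 this is exposed through separate command queues. A minimal sketch (assuming a device created elsewhere; not tied to any particular game) of a graphics queue plus a compute queue whose work the hardware may overlap, with a fence for the points where one must wait for the other:

```cpp
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Sketch only: `device` is assumed to have been created elsewhere.
void CreateAsyncComputeQueues(ID3D12Device* device)
{
    // The DIRECT queue accepts graphics, compute and copy work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // A separate COMPUTE queue: on hardware with spare compute engines
    // (AMD's ACEs), work submitted here can run alongside graphics.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // The queues run independently; a fence expresses the few points where
    // they must meet (e.g. graphics consuming the output of a compute pass).
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    // computeQueue->ExecuteCommandLists(...);  // compute work recorded elsewhere
    computeQueue->Signal(fence.Get(), 1);       // compute marks its completion
    gfxQueue->Wait(fence.Get(), 1);             // graphics waits, on the GPU
}
```

    Whether the two queues actually execute concurrently is up to the hardware and driver, which is exactly the difference between GCN and Maxwell discussed below.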

    However, to enable this feature the GPU must be built from the ground up to support it. In AMD’s Graphics Core Next based GPUs this feature is enabled through the Asynchronous Compute Engines, or ACE units, integrated into each GPU. These are structures built inside the chip itself, and they serve as the multi-lane highway by which tasks are delivered to the stream processors.
    AMD’s DirectX 12 Secret Weapon – Async Compute Engines


    Each ACE is capable of handling eight queues, and every GCN GPU features multiple Asynchronous Compute Engines. ACEs debuted in late 2011 with AMD’s first GCN (GCN 1.0) based GPU, code named Tahiti (the HD 7970), which featured two Asynchronous Compute Engines.
    They were originally used primarily for compute workloads rather than games, because no API existed at the time that could directly access them. Today, however, ACEs can take on a more prominent role in gaming through modern APIs such as DirectX 12 and Vulkan. So in every sense this has been AMD’s dormant secret DirectX 12 weapon from the very beginning.
    Last year AMD showcased a demo of this hardware feature which demonstrated a performance improvement of 46% in VR workloads. So far, however, Nvidia has not talked much, if at all, about the feature, other than saying six months ago that support was on the way.
    Speaking of GPUs in general: while modern GPU architectures of the day like GCN, which powers the current flock of gaming consoles and AMD’s roster of graphics cards, or Maxwell, which powers Nvidia’s latest Tegra mobile processors and its array of GTX 900 series graphics cards, have grown to accumulate far more similarities than differences, different hardware will always inherit different architectural traits.
    There will always be one thing that a specific architecture does better than another. This diversity is dictated by the needs of the market and by the diversity of the great minds through which this wonderful technology we enjoy today was conceived. The semantics will always be there, and while it can be fun to discuss and debate them, looking at the whole picture is the only way towards any substantial progress.


    Source:
    http://wccftech.com/directx-12-async...te-nvidia-amd/


    It looks like async shaders are going to give AMD and nVidia plenty to argue about, at least if nVidia doesn't implement them in its upcoming architectures.

  12. #252
Enzo (Master Business & GPU Man)
    Since nothing that really uses DX12 has come out yet, apart from this benchmark, it shouldn't be worrying for Nvidia not to have graphics cards with async compute at the moment.
    There's still time, I think, to implement it, and that's probably exactly what they're finishing up.

  13. #253
reiszink (Tech Bencher)
    If there were any doubts that Microsoft wants to destroy PC gaming, I think they're now starting to be dispelled.

    And the arrival of the Windows Store, with all these restrictions and impositions on games, only worsens the user experience.

    PC Gaming Shakeup: Ashes of the Singularity, DX12 and the Microsoft Store


  15. #255
Jorge-Vieira (Tech Ubër-Dominus)
    That's really bad...
