  1. #31
    Jorge-Vieira
    Exclusive: The LiquidVR Interview With AMD’s Guennadi Riguer – Chasing the Nirvana of 16k/144 Hz


    So we have something special for our readers today – an interview about LiquidVR. Or, more specifically, an interview with one of the lead software architects who worked on LiquidVR: Guennadi Riguer. The LiquidVR SDK was created from the ground up to cater to VR applications on modern-day GPUs. We asked Mr. Riguer to walk us through the philosophy behind LiquidVR as well as its future prospects – all while clearing up some areas we were unsure about.
    The philosophy behind LiquidVR – a software ecosystem to match the hardware

    The story begins with the initial advent of VR – Oculus appearing on the scene for the first time and the emergence of a brand-new market, one that would likely take the world by storm. While all the attention is currently focused on the hardware (Oculus Rift vs. HTC Vive, etc.), once the dust settles, the software ecosystem is the one that will come under scrutiny. Before LiquidVR there was nothing designed natively with VR in mind. Without any further ado, let’s hear it directly from the man himself:
    [GR] We at AMD are very passionate about graphics, VR with its high graphics demands is particularly exciting. However, VR is more than just pretty graphics. As a platform, it has the capacity to revolutionize how we interface with computers and provide a whole range of new experiences we haven’t even imagined. Because of that we see VR as more than ‘just another cool technology’ and we firmly stand behind the efforts to bring immersive, unique experiences to Radeon users through continued innovation of LiquidVR.
    We’ve been excited about VR since the very first announcements of HMD projects that are soon to hit the market. Once solutions started maturing, we looked at what pieces of the software ecosystem (OS features, APIs and etc.) were missing or were not designed natively for VR. Our goal was to create a set of solutions to unleash VR development and provide first class experience on existing systems. We identified four main features that became the foundation of our LiquidVR SDK.

    So what are these features Riguer talks about? Well, there are a total of four features currently listed on GPUOpen right now (with a plus one); a conceptual sketch of the multi-GPU idea follows the list:

    • Asynchronous Shaders: Provides, in Direct3D 11, a subset of functionality similar to the async compute functionality in Direct3D 12.
    • Multi-GPU Affinity: Provides the ability to send Direct3D 11 API calls to one or more GPUs selected via an affinity mask.
    • Late-Latch: Provides the ability to update constant data asynchronously from the CPU to reduce input and sensor latency.
    • GPU-to-GPU Resource Copies: Provides the ability to copy resources between GPUs with explicit control over synchronization.
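    To make the Multi-GPU Affinity and GPU-to-GPU copy bullets more concrete, here is a minimal conceptual sketch. It is not the real LiquidVR (ALVR) API – the GpuMask, CommandList and submit_with_affinity names are invented purely for illustration – but it shows the idea of pinning each eye’s rendering to its own GPU via an affinity mask:

```cpp
// Hypothetical illustration of the Multi-GPU Affinity idea: route each eye's
// command submission to a different GPU via an affinity mask. The types below
// (GpuMask, CommandList, submit_with_affinity) are invented for this sketch and
// are NOT the real LiquidVR (ALVR) interfaces.
#include <cstdint>
#include <cstdio>

using GpuMask = std::uint32_t;                 // bit i == 1 -> GPU i receives the work
constexpr GpuMask GPU0 = 1u << 0;              // left-eye GPU
constexpr GpuMask GPU1 = 1u << 1;              // right-eye GPU

struct CommandList { const char* label; };     // stand-in for a D3D11 command stream

// Pretend driver entry point: broadcast the command list to every GPU whose bit is set.
void submit_with_affinity(const CommandList& cl, GpuMask mask) {
    for (int gpu = 0; gpu < 2; ++gpu)
        if (mask & (1u << gpu))
            std::printf("GPU%d executes: %s\n", gpu, cl.label);
}

int main() {
    // View-independent work (e.g. shadow maps) can be broadcast to both GPUs...
    submit_with_affinity(CommandList{"shared shadow pass"}, GPU0 | GPU1);
    // ...while each eye's view-dependent pass is pinned to its own GPU,
    // roughly halving the per-GPU frame time for stereo rendering.
    submit_with_affinity(CommandList{"left eye color pass"},  GPU0);
    submit_with_affinity(CommandList{"right eye color pass"}, GPU1);
    // A GPU-to-GPU resource copy would then move the second GPU's eye buffer to
    // the GPU that owns the display before compositing and present.
}
```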

    Remember the plus one we talked about? Well, that happens to be the Direct-to-Display interface, which was originally listed as one of the four features of LiquidVR but was later replaced by GPU-to-GPU Resource Copies. The feature is still present but is only available to the hardware vendors that make the HMDs. What it does is allow rendering directly to the headset, bypassing the operating system. We also tried asking Riguer to name his favorite feature of LiquidVR, but were not particularly successful.
    Chasing VR ‘Nirvana’: 16K per eye at 144 Hz

    The VR market has been growing steadily since it emerged a few years back with the original Oculus Rift. Headset shipments are expected to double within a few years, and LiquidVR remains AMD’s primary VR focus. Because of this, the future of LiquidVR is of significant interest to tech enthusiasts. Mr. Riguer had the following to say about that:
    [GR] LiquidVR is a set of technologies that has a very clear mission: to take maximum advantage of GPUs to enable rich and seamless content.
    As VR technology evolves, so will LiquidVR – we have a robust roadmap that will tackle a wide range of issues the industry already talks up as the next set of challenges we’ll be facing. We believe that with LiquidVR, AMD has helped the industry to take a first big step in this wonderful new immersive world called virtual reality.
    LiquidVR will continue to drive hardware and software technologies that will ultimately lead to the nirvana of VR: 16K/eye, 144Hz and above refresh rate, and virtually no latency, all in a wireless, small form factor package.
    AMD has a roadmap in place for the evolution of LiquidVR that will grow as the industry grows. Keep in mind, however, that this roadmap is far from set in stone, as you will find out later in this article. AMD is chasing a perfect standard here, which Riguer dubbed “VR Nirvana”: 16K resolution for each eye and a refresh rate of 144 Hz. Although this won’t be possible for at least a few more generations of GPUs, we may see something close to it by 2020. Higher resolution is ideal for VR because the higher the resolution of the eyepiece, the weaker the “screen door” effect, and a 144 Hz refresh rate would go a long way toward eliminating the nausea associated with low refresh rates. For the time being, however, the Oculus standard of roughly 2K at 90 FPS (1080p per eye) will have to do.
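    As a rough sanity check on how far off that target is, here is a small back-of-the-envelope calculation (my own arithmetic, assuming “16K” means a 15360×8640 raster per eye and taking the article’s 1080p-per-eye, 90 Hz figure at face value):

```cpp
// Back-of-the-envelope pixel throughput for the "VR nirvana" target (16K per eye
// at 144 Hz) versus the 1080p-per-eye, 90 Hz figure quoted above. "16K" is assumed
// to mean a 15360x8640 raster per eye; the exact target resolution is not specified.
#include <cstdio>

int main() {
    const double eyes = 2.0;
    const double baseline = 1920.0 * 1080.0 * eyes * 90.0;     // ~0.37 gigapixels/s
    const double nirvana  = 15360.0 * 8640.0 * eyes * 144.0;   // ~38.2 gigapixels/s

    std::printf("baseline: %.2f Gpix/s\n", baseline / 1e9);
    std::printf("nirvana : %.2f Gpix/s\n", nirvana  / 1e9);
    std::printf("ratio   : ~%.0fx more raw pixel throughput\n", nirvana / baseline);
}
```

    It works out to roughly a hundred times the raw pixel throughput of a Rift-class setup, before even considering latency or the wireless, small-form-factor part of the goal.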

    AMD’s LiquidVR is designed for Direct3D11

    One of the rather interesting things that I noticed in the white paper for LiquidVR was that it is built for Direct3D11. So the first thing I asked Riguer was to confirm my understanding of LiquidVR as well as comment about its future in an ecosystem which will soon be populated with low level APIs like DirectX12 and Vulkan. I also wanted to know whether AMD is considering any Direct3D12 implementation of LiquidVR SDK.
    [GR] That’s correct – LiquidVR features are currently exposed in DirectX 11. While developers are really excited about a new generation of APIs – DirectX 12 and Vulkan, it takes time for such technologies to be adopted, become mature and available to the mainstream. A lot of development in VR is currently done with mature engines (Unreal and Unity) built with DirectX11 technology, and it would’ve been a shame if fast-paced development in VR had been stalled waiting for DirectX 12 and Vulkan to mature. This is why in LiquidVR we focused on enabling better VR support with DirectX 11, to solve challenges immediately. The DirectX 12 API is much more powerful and flexible than DirectX 11 and in some ways there is less need for delivering exactly the same LiquidVR features. We’re definitely looking at DirectX 12 integration and bringing further enhancements to it with LiquidVR.
    Basically, LiquidVR was designed to offer a solution to developers right now – in other words, on DirectX 11. DirectX 12 and the Vulkan API have not yet reached maturity, and LiquidVR aims to serve developers who do not want to wait for them to mature. The point is to target game engines and existing SDKs that work on DirectX 11 and give them the same relative advantage (or something very close to it) that DirectX 12 would go on to offer.
    Essentially, LiquidVR is a solution that is available to a very large install base and one that can hit the ground running as soon as it is implemented. LiquidVR ships with the latest Catalyst drivers and can be enabled by developers (assuming a supported GPU). Many of the features that LiquidVR provides to DirectX 11 users are available natively to DirectX 12 users – something that makes a Direct3D 12 implementation of LiquidVR largely redundant.
    The SDK was designed to be a stepping stone and the connecting link between two different eras of APIs. AMD’s Mantle API had very similar aims – aims which have now been met. All that said, AMD has not completely ruled out integration with DirectX 12 and is still looking to bring new features to the LiquidVR toolkit. With the Oculus Rift out right now and the HTC Vive launching soon, developers could take advantage of LiquidVR to gain many low-level features in existing game engines. The question, however, is whether any developer would choose to code for a transitional API – an approach that wasn’t particularly successful with the Mantle API, as we know.
    Aligning AMD’s LiquidVR strategy with DirectX12 and Vulkan – What is the future?


    Of course, the fact that LiquidVR is not integrated with other low level APIs right now brings us to another pertinent question. What exactly is AMD’s strategy for LiquidVR with DirectX12 entering the scene right now (and Vulkan following closely behind)? AMD has graciously published the LiquidVR source code on GPUOpen but how exactly will it align LiquidVR with the low level API movement?
    [GR] LiquidVR is about delivering features and technologies that address particular developer needs and gaps in a software ecosystem, but we never envisioned it to remain the only solution. DirectX 12 is a much better match for VR implementations than older APIs, and we’re very happy to see such enthusiasm for DirectX 12. The key is that we’re not standing still and are not looking back. We’ll continue to bring enhancements to DirectX 12 and other APIs through LiquidVR as well as through collaboration with our ecosystem partners. Here’s another way to look at it: we’re in no way competing with DirectX 12, we just want to see it as an API with great solutions for VR that unlock the ultimate potential of Radeon GPUs.
    The key to all our developer-focused efforts like LiquidVR, and many others that fall under the GPUOpen initiative is to enable better use of our GPUs, giving developers more control. All of the features in LiquidVR have been designed to perfectly match our hardware and software architectures, and extract the most performance and efficiency. In order to deliver certain functions, we needed to go above and beyond what other APIs offer. For example, LiquidVR asynchronous compute offers special quality of service (QoS) controls and better integration with Direct-to-Display that is not available in other APIs.
    As Riguer points out, LiquidVR was created to serve a very specific purpose: bridging the software gap that existed at the time it was created. Just as with the Mantle API, AMD was playing the white knight here, but now that other (and possibly better) alternatives are on the horizon, the future of LiquidVR becomes uncertain. LiquidVR does not compete with DirectX 12 – which is a much better match for the VR industry – that much is clear. But will it fade into the background completely like Mantle, or become something more subtle? While it is too soon to answer that question, the fact that there are still a few tricks LiquidVR can pull off that DX12 can’t suggests it might be the latter. In any case, only time (and AMD) will tell.




  2. #32
    Jorge-Vieira
    AMD Radeon Technologies Group Q&A With Robert Hallock – Live Blog

    In case you’re unaware, AMD’s technical lead at Radeon Technologies Group Robert Hallock is currently conducting an AMA – Ask Me Anything – at reddit. The AMA started at 10 AM central time and will continue until 5 PM today, March 3rd.

    For your convenience we’ll be live-blogging the event, relaying Robert’s answers to you and the questions to which they’ve been given. If you want to check out the AMA over at reddit to ask a question you can do so by visiting the /r/amd subreddit or following this link which will take you there directly.
    AMD Radeon Technologies Group AMA – Live Blog, Updated By The Minute

    Comments by Robert Hallock – technical marketing lead at Radeon Technologies Group, AMD – are highlighted in red.
    Did moving to the 14nm FinFET node after being stuck on the 28nm planar node present any significant challenges that were different from previous shrinks?
    Every node has its little foibles, but I am not quite in a position where I can disclose the level of information you’re probably looking for. I like keeping my job. However, we know that people are very interested in the process tech and we intend to publish a lot more information on the architecture and the process before Polaris parts arrive mid-2016. I know this isn’t quite as good as getting an answer today, but I hope you can see we’re thinking about questions like this already.
    Can you discuss the yields on the Polaris dies right now?
    I am not privy to yield information at AMD.
    Did the recent earthquake in Taiwan affect the production schedule in any major way?
    No.
    How did Mantle affect the development of DX12 beyond catalyzing progress? Was there a significant amount of Mantle going into the development of DX12, or was it just the philosophy of low CPU/driver overhead?
    I like to say that Mantle was influential. Microsoft was one of several parties that had access to the API specification, documentation and tools throughout Mantle’s 3-year development cycle. We’re glad Microsoft saw we were doing the right thing for PC graphics and decided to spin up the DX12 project; DX12 has been pretty great for performance and features.
    Having each eye handled by a separate D-GPU would be of greatest benefit in VR, so is there going to be an increased focus on CrossFire and LiquidVR support in preparation for big VR launches?
    This is something we’re intensely interested in. LiquidVR SDK has a feature called Affinity Multi-GPU, which can allocate one GPU to each eye as you suggest. Certainly a single high-end GPU and/or dynamic fidelity can make for a good VR experience, but there are gamers who want unequivocally the best experience, and LVR AMGPU accomplishes that. As a recent good sign of adoption, the SteamVR Performance Test uses our affinity multi-GPU tech to support mGPU VR for Radeon, whereas SLI configs are not supported.
    How are you guys going to cool the FuryX2? Is it going to be a blower like the dev kits Roy Taylor has been posting or is it going to be a liquid cooling loop like the 295×2? Also any idea of a price point?
    With all due respect to Tizaki, he was erroneous in placing dual Fury on the list of kosher topics. I am not in a position where I can discuss that product.
    In regards to Polaris can you discuss any of the GPUs we could be seeing sometime in the near future, like the Fury and FuryX successor, are we going to see a low profile card like the Nano again?
    We will discuss specific SKUs and form factors when Polaris launches mid-year.
    Is there anything you can disclose about GPUOpen tools being deployed in games that are in development, or even some of the interesting application people are trying to make using the compute tools?
    Watch for GDC.
    As far as FreeSync monitors, do you see an increased adoption of FreeSync technology from monitor and panel manufacturers because of HDMI support?
    HDMI support is directly responsible for expanding the list of FreeSync-enabled monitors from 30 to 40 overnight. HDMI is the world’s most common display interface, and there’s a huge industry economy of scale built around it, especially at the mainstream end of the market where users with modest GPUs are correspondingly most in need of adaptive refresh rates. Porting FreeSync to HDMI was highly requested by our display partners.

    When will we see Vulkan support in the Linux driver, and will that driver only be available on top of AMDGPU as part of the hybrid driver model?
    Also will Polaris be supported day 1 by the AMDGPU driver, or will it require changes to the current Power Play code that supports reclocking on the 380 series hardware?
    I’m basically just trying to figure out if and when I need to buy a new card, and if Polaris is an option.



    1. The Vulkan Linux driver will be released soon, and it will only be a part of amdgpu.
    2. I do not know the current roadmap for our Linux drivers, and I am generally unable to disclose anything that forward-looking if I want to keep my job. Thanks for understanding.


    Thanks for it guys!

    1. What happened with TrueAudio? This was the most exciting feature of the Radeon GPUs, but there isn’t any new game that supports it.
    2. Why don’t you build special support around TrueAudio, so the hardware could convert 5.1/7.1-channel audio to headphone stereo, much like CMSS-3D works in the Creative drivers?
    3. A lot of PCs are sold with an integrated GPU + dedicated GPU configuration. Why not use the integrated GPU for compute offloads? Windows 10 allows a lot of interesting techniques for sharing memory between the latency-optimized and the throughput-optimized cores.
    4. Will the Kaveri and Godavari APUs get an updated BIOS to support HSA, even if the chips aren’t designed for the 1.0 spec?
    5. I like the new APIs (DX12 and Vulkan), but there are some features that might come in handy, and the consoles support these. For example: ordered atomics, SV_Barycentric, SIMD lane swizzles. Is there a chance to support these with some extensions on PC?


    1) TrueAudio still finds regular use in the console space. However, on the desktop PC side there seems to be generally less interest in complicated soundscapes for PC gaming. Going forward, we’re interested in exploring TrueAudio’s rich positional capabilities to augment VR experiences–I think this is probably a better use for the technology.
    2) For the precise reason you pointed out: most audio hardware already supports this functionality, and a duplication of effort is not a worthy use of resources.
    3) DirectX 12, Vulkan, Mantle and HSA can do this.
    4) KV/GV already support HSA, however the hardware is compliant with the HSA 1.0 Provisional specification. The 1.0 Final specification increased hardware requirements that are met by Carrizo. More info.
    5) DX12 and Vulkan both support API extensions. Especially Vulkan.
    ——————————————————-
    [2016-03-03 – 10:58 AM Central Time]

    And I was doubted in my request for an AMA, yar!
    Can we put DX11 to rest – are we going to see DCLs added or not?
    Can anything be said in more detail about how RTG views its current drivers and what it’s striving for?
    How soon can Linux users expect to see some love, with some major gains in the Linux driver?
    Any chance of integrating features or advanced settings that are featured in programs like RadeonPro and RadeonMOD, and ditching Raptr outright?
    Are we going to see any rebrands this year, or will everything be a fresh baseline with 14nm on the GPU side?
    With the benchmarks we’re seeing that take advantage of async support, will this be a feature that requires additional work from the devs, or is it something easily implemented to the point where it becomes mainstream?
    Any tricks up AMD’s sleeve that we might see in Vulkan that may or may not be an easy addition to DX12?
    Is there a specific reason the drivers still have downclocking issues when a small program can be used to stop them?
    Are there any plans to increase resources (including software engineers) allocated to Radeon Software, including drivers, GPUOpen materials and other products?
    AMD DockPort – anything we can expect to see from this?
    1) Because DCLs are useless. They’ve been inappropriately positioned as a panacea for DX11’s modest multi-threading capabilities, but most journalists and users exploring the topic are not familiar with why DCLs are so broken or limited.
    Let’s say you have a bunch of command lists on each CPU core in DX11. You have no idea when each of these command lists will be submitted to the GPU (residency not yet known). But you need to patch each of these lists with GPU addresses before submitting them to the graphics card. So the one single CPU core in DX11 that’s performing all of your immediate work with the GPU must stop what it’s doing and spend time crawling through the DCLs on the other cores. It’s a huge hit to performance after more than a few minutes of runtime, though DCLs are very lovely at arbitrarily boosting benchmark scores on tests that run for ~30 seconds.
    The best way to do DX11 is from our GCN Performance Tip #31: a dedicated thread solely responsible for making D3D calls is usually the best way to drive the API. Notes: the best way to drive a high number of draw calls in DirectX 11 is to dedicate a thread to graphics API calls. This thread’s sole responsibility should be to make DirectX calls; any other types of work should be moved onto other threads (including processing memory buffer contents). This graphics “producer thread” approach allows the feeding of the driver’s “consumer thread” as fast as possible, enabling a high number of API calls to be processed. [A generic sketch of this producer-thread pattern follows this block of answers.]
    2+4) Sort of. We’re clearly interested in doing a major feature-rich driver every year, and that is on-track for 2016 as well. We constantly crawl the various PC gaming communities to keep our fingers on the pulse of what people want to see in terms of features. That’s why custom resolutions were added in Radeon Software, for example. But for us it is always a delicate balance of deciding what to leave in the 3rd-party tools vs. what should be incorporated directly. Not everything is safe or easy for the everyday user that’s not likely to be a GPU enthusiast participating in my nerd AMA.
    3) It would probably be better to ask Linux driver questions of our guru Graham Sellers. He’s a much better authority than I am on Linux and the Linux driver. I’ll be transparent: I’m a Windows gamer, and always have been, and probably always will be. I’m not very familiar with Linux.
    5) We will discuss SKUs when Polaris debuts mid-year.
    6) Async Compute does not require any changes to a developer’s shader code, so it is relatively straightforward to implement. Thus far every DX12 app and test has included support for it, so that’s encouraging.
    7) Comparable capabilities, though Vulkan more enthusiastically supports API extensions that could be useful to future hardware. NOTE TO JOURNALISTS: This does not imply there are super secret hidden hardware features that can only be exposed by Vulkan. I am only pointing out that this is one of the powerful aspects of Vulkan for HW vendors like us.
    8) I believe our release notes make it very clear we’re aware of the issue and are working to resolve it. The fix will be ready when it’s ready.
    9) I cannot comment on AMD staffing.
    10) Dockport is from our client (CPU/APU) side of the business, so I’m not familiar with its goings on.
    11) I hope I numbered these correctly.
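    The “producer thread” tip above can be sketched without any graphics code at all: worker threads only enqueue work, and a single dedicated thread drains the queue and makes all of the (here simulated) API calls. This is a generic C++ illustration of the pattern under that assumption, not AMD’s or any engine’s actual code:

```cpp
// Generic sketch of the "dedicated graphics thread" pattern from GCN tip #31:
// worker threads only enqueue work; one thread makes all the (simulated) API calls.
#include <condition_variable>
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class RenderQueue {
    std::queue<std::function<void()>> q_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
public:
    void push(std::function<void()> cmd) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(cmd)); }
        cv_.notify_one();
    }
    void close() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_one();
    }
    // Runs on the dedicated graphics thread: the only place "API calls" happen.
    void drain() {
        std::unique_lock<std::mutex> lk(m_);
        while (true) {
            cv_.wait(lk, [&] { return done_ || !q_.empty(); });
            if (q_.empty() && done_) return;
            auto cmd = std::move(q_.front()); q_.pop();
            lk.unlock();
            cmd();          // e.g. context->DrawIndexed(...) in a real D3D11 renderer
            lk.lock();
        }
    }
};

int main() {
    RenderQueue rq;
    std::thread gfx([&] { rq.drain(); });            // the single "consumer" API thread
    std::vector<std::thread> workers;
    for (int w = 0; w < 4; ++w)                      // game/engine "producer" threads
        workers.emplace_back([&rq, w] {
            for (int i = 0; i < 3; ++i)
                rq.push([w, i] { std::printf("draw from worker %d, item %d\n", w, i); });
        });
    for (auto& t : workers) t.join();
    rq.close();
    gfx.join();
}
```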
    ——————————————————-
    [2016-03-03 – 11:09 AM Central Time]

    • Will Polaris GPU’s use GDDR5X? [if it’s not already using HBM2]
    • How different is the Polaris architecture compared to the current generation GCN? Is it GCN 1.3 or GCN 2.0?
    • How is tessellation implemented in Polaris?
    • What is the tier of support for DX12 features like conservative rasterization, tiled resources, resource binding, rasterizer ordered views, order independent transparency, etc.?
    • What happened to TrueAudio? Now is the perfect time for 3D audio technologies given the commercial introduction of VR.
    • Can you guys work with Khronos and release a full featured OpenSL? Maybe even donate TrueAudio to Khronos to kickstart the process? Immersive VR requires not only low-latency visuals, but also low-latency 3D sound.
    • Why are improvements in cooling technology, like the Sandia spinning heatsink or TIMs with good lateral conductivity, not seen in commercial products yet?
    • Why the name GPUOpen? As much as it is a step in the right direction, the name is a bit…. unimaginative.
    • How independent is the RTG to make decisions?
    As a general comment, you’re asking many architecture-specific questions. I cannot answer them right now, but please know that we understand there’s a tremendous appetite for specific µarch and process details, and we intend to answer them before Polaris launches mid-year. I think this generally addresses questions: 1, 2, 3, 4. But on the point of #2: We consider this 4th gen GCN.
    For TrueAudio, see this. WRT OpenSL: decent point – I’ll bring it up internally.
    For TIM: I am not a material sciences expert so I probably cannot answer your question to the depth that you want. However I will say that advanced TIMs can get very expensive very fast, and that is probably the hurdle you’re seeing on commercial products.
    GPUOpen: It’s a straightforward name. I like it.
    Independence: There’s no objective metric by which I can measure an answer to this question. These discussions happen above my level, and the arrival of RTG has not substantially changed my day-to-day job function.


    OH YES! Me first! Ok, so when is gonna be the EXACT release date on those Bristol Ridge APUs and Polaris GPUs?


    June 2, 1997.

    Are the cards going to have any overclocking potential, or is this going to be another fury situation?
    Will the cards come with DP1.4?


    We will discuss specific SKUs and overclocking capabilities at product launch in mid-year.
    They will come with DP1.3. There is a 12-18 month lag time between the final ratification of a display spec and the design/manufacture/testing of silicon compliant with that spec. This is true at all levels of the display industry. For example: DP 1.3 was finished in September, 2014.
    ——————————————————-
    [2016-03-03 – 11:19 AM Central Time ]
    Hi, how did you all get your jobs at AMD? Where did you all go to university, what did you study, etc? What kind of person would you recommend your job for? What’s it like? What’s the best part of the job, and what’s the worst?
    And how would you recommend someone start teaching themselves about the more advanced parts of today’s processors? Block diagrams blow my mind and I’ve always wanted to learn more about how these things work.
    1) I used to be a hardware reviewer at a .com. You make contacts in the industry, and you make friends. One day you post on Facebook that you’re looking for a new apartment, and one of those contacts asks if you want to move to Toronto to work for AMD. Of course I said yes! Best decision of my life.
    2) I did not complete university. Instead I spent about 6-8 hours of every day for 3-5 years learning about PC hardware, reviewing that hardware, analyzing the industry, and writing furiously about that industry. I did over 500 feature-length articles during that time.
    3) I think it’s like any job. Sometimes it’s incredibly stressful, especially surrounding a product launch or a tradeshow, and other times it’s nice and laid back. I have really fantastic coworkers that keep me level, and it’s nice to be surrounded by cool technology all the time. The access to tech is probably the coolest part, but the frequent travel is probably the worst part–it’s just exhausting.
    4) Read every architecture deep dive you can get your hands on! For both CPUs and GPUs. That will help you start to understand what each component of the silicon does, and how it affects performance.

    ——————————————————-
    [2016-03-03 – 11:30 AM Central Time ]
    Will this finally be the year of AMD on Linux?
    Probably not. But I want to hear the answer as to why there is still no Vulkan driver from AMD for Linux. Everyone else has one


    Our Linux driver will be released quite soon.
    ——————————————————-
    [2016-03-03 – 11:47 AM Central Time ]
    Hello RTG. The thing I would like to know is whether the Linux AMDGPU driver will support GCN 2 (Hawaii) hardware. Thank you very much!


    amdgpu already supports Hawaii.
    Thanks for taking the time to answer our queries, Thracks! Here are mine:
    1) Following Anandtech’s look at AMD laptops, I’d like to know if there’s a realistic possibility of having high-end AMD graphics make a return in gaming notebooks. Having NVIDIA’s hardware as the only option is not healthy for the industry, and I’d hoped that you would have some way of shoehorning a Radeon R9 Nano into a high-end 17-inch laptop chassis by now.
    2) Why are the GPUs inside the Falcon Tiki cases that Roy Taylor teased on Twitter using blower-style coolers? Is it a hybrid water-cooling kit, or will the dual-GPU Fury use this design for compact chassis? https://twitter.com/Roy_techhwood/st...22817478569984
    3) Can you comment on the recent developments regarding Ashes of the Singularity and DirectX 12 in PC Perspective and Extremetech’s tests? Will changes in AMD’s driver to include FlipEx support fix the framerate issues and allow high-refresh monitor owners to enjoy their hardware fully?
    4) Can you comment on how FreeSync is affected by the way games sold through the Windows Store run in borderless windowed mode?
    5) How far along are the efforts to port over the changes in Radeon Crimson Software to Linux? When can I reasonably expect to be able to switch over, and not have issues in games with my Radeon R7 265, using AMD’s drivers?
    6) Daisy-chained Displayport 1.2a monitors with FreeSync… does it work?
    7) FreeSync on a Displayport monitor connected to a Thunderbolt dock… does that work? (see here [https://www.youtube.com/watch?v=NshXgisNly4] at 14:24)
    8) This post of yours, Thracks, on Facebook. Is this hinting at the work AMD’s been doing to drive the DockPort standard? I haven’t heard about that for at least two years, and I fully expected to be able to buy DockPort stuff by now.


    1) Polaris.
    2) This is a project that I have not personally worked on or been involved with at AMD, so I could not answer.
    3) We will add DirectFlip support shortly.
    4) This article discusses the issue thoroughly. Quote: “games sold through Steam, Origin and anywhere else will have the ability to behave with DX12 as they do today with DX11.”
    5) This is a question better posed to Graham Sellers, our Linux driver Guru.
    6) It should. I have not tried.
    7) That depends on how well the dock transports the DP information, and if it is fully compliant with the DP1.2a standard. Should be fine, though.
    8) No.
    ——————————————————-
    [2016-03-03 – 11:53 AM Central Time ]
    “1) TrueAudio still finds regular use in the console space.”
    Could you please expand a bit on this? More precisely:
    1 – Does the PS4 have the exact same audio DSP block as TrueAudio in discrete GPUs?
    2 – When is it used? Does the console use it automatically every time a compatible middleware for audio is used in development? E.g. every time we see “Wwise” and/or FMOD in a PS4 game can we assume TrueAudio is being used?
    3 – If so, why isn’t this simply ported to PC games in multi-platform titles?


    1) No.
    2) Same dependencies as the PC: when the developer chooses the TrueAudio-enabled versions of any Wwise/FMOD plugin.
    3) Lots of things don’t carry over from PC to console, and vice versa. I am not privy to the details as of why.

    How far off are HDR monitors? AMD hyped them up at CES, and I think they would be great for games like Elite Dangerous.


    We expect the industry to arrive on HDR displays in the second half of 2016.
    ——————————————————-
    [2016-03-03 – 12:05 PM Central Time ]

    With the lack of overlay and Fraps support in many DirectX 12 games, is there a way that gamers and reviewers can properly benchmark games in the future? Will Hitman have FCAT support? Will there be a way to benchmark DirectX 12 games, especially ones that use the Windows compositing engine (i.e. are from the Windows Store)?


    It will be up to developers to make performance analysis applications that are compliant with DX12, just like they did for DX9/10/11. App developers can also incorporate high-performance counters and logging suitable for analyzing performance.
    To your second question: “games sold through Steam, Origin and anywhere else will have the ability to behave with DX12 as they do today with DX11.” Source.
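    On the “high-performance counters and logging” suggestion, a developer-side frame-time logger can be as simple as the sketch below. It uses std::chrono only; in a real DX12 title the timestamp would be taken around the Present call, which is merely simulated here:

```cpp
// Minimal frame-time logger of the kind a developer could build into a DX12 title
// when external overlays cannot hook the swap chain. Timestamps are taken around
// the (simulated) present; logging them enables frametime/percentile analysis.
#include <chrono>
#include <cstdio>
#include <thread>
#include <vector>

int main() {
    using clock = std::chrono::steady_clock;
    std::vector<double> frame_ms;
    auto prev = clock::now();

    for (int frame = 0; frame < 10; ++frame) {
        // ... render + Present() would happen here in a real engine ...
        std::this_thread::sleep_for(std::chrono::milliseconds(16));  // fake ~60 FPS work
        auto now = clock::now();
        frame_ms.push_back(std::chrono::duration<double, std::milli>(now - prev).count());
        prev = now;
    }

    int i = 0;
    for (double ms : frame_ms)
        std::printf("frame %d: %.2f ms\n", i++, ms);
}
```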
    ——————————————————-
    [2016-03-03 – 12:12 PM Central Time ]
    Tell us more about HDR. Does this tech add cost to a monitor, and how much? When should we expect the first monitors with this tech? How many games will support it? How is this going to work with different panel types (TN/IPS)?


    1) Yes it does.
    2) Probably 2H16.
    3) Unknown at this time. Developers will need to adjust their tonemapping algorithms to be HDR-aware. This is not a trivial task, but also not incredibly complicated. Some notable devs have already expressed interest to us, so I would expect games to follow around the time monitors start appearing. [An illustrative tonemapping sketch follows this set of answers.]
    4) HDR is best articulated by bright and high-contrast panel technologies like OLED. But local-dimming LCD is also viable on a variety of panel types, especially the contrasty ones like VA.
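    To give a feel for what “adjusting the tonemapping algorithms to be HDR-aware” (point 3 above) can involve, here is an illustrative sketch: an SDR path squashes scene luminance into a 0–1 range, while an HDR path maps it onto the panel’s peak brightness in nits and only compresses the highlights. The Reinhard curves, the 100-nit SDR reference and the 1000-nit peak are assumptions for the example, not any display standard:

```cpp
// Illustrative only: one possible way an engine might branch its tonemapper for
// SDR vs. HDR output. The Reinhard curves, the 100-nit SDR reference and the
// 1000-nit HDR peak are assumptions for this example.
#include <cstdio>

double reinhard(double l)                    { return l / (1.0 + l); }                        // bounded in [0,1)
double reinhard_extended(double l, double w) { return l * (1.0 + l / (w * w)) / (1.0 + l); }  // reaches 1.0 at l == w

int main() {
    const double scene_nits[] = {0.5, 100.0, 400.0, 1500.0};  // scene-referred luminance
    const double sdr_white = 100.0;    // SDR: treat ~100 nits as "white", compress the rest hard
    const double hdr_peak  = 1000.0;   // HDR: assumed panel peak; keep mid-tones nearly linear

    for (double L : scene_nits) {
        double sdr = reinhard(L / sdr_white);                          // relative 0..1 output
        double hdr = hdr_peak * reinhard_extended(L / hdr_peak, 1.5);  // output in nits, capped near peak
        std::printf("scene %7.1f nits -> SDR %.2f (relative), HDR %6.1f nits\n", L, sdr, hdr);
    }
}
```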
    As a followup to #3, I know that HSA has been pushed for a while now and that Mantle, DX12 and Vulkan support using non-paired GPUs, leading me to think that the future should be the heyday of CPUs with an integrated graphics chip. (They also make sense from a usability standpoint; if my Radeon card dies I still would like to be able to use the computer, even if it isn’t to game.)
    Are there plans to integrate a GPU into Zen, and if so, how long will we have to wait for them to come out? Will the Zen CPUs always be the flagship chip design, or will the future see Zen APUs taking over?


    Zen is the name of an architecture. There will be APUs and CPUs based on Zen. Zen-based CPUs will debut first.

    ——————————————————-
    [2016-03-03 – 12:24 PM Central Time ]
    Posting as its own comment for visibility: I know there are lots of questions about Polaris’ architecture, memory configurations and market SKUs. Right now it is just too early for me to be disclosing this sort of information. I know that will be disappointing to many of you, and I apologize that the OP was not more clear about the boundaries.
    I promise to return for another AMA when I can answer all of your Polaris questions. I intend to do right by you guys; you deserve that. You’ll just need to give me a little more time to do that.
    not even if its all 14nm or some 16nm? Come onnnn. that doesn’t seem worth hiding


    14nm GlobalFoundries.

    ——————————————————-
    [2016-03-03 – 12:28 PM Central Time ]

    does AMD support using DDU to completely wipe drivers?


    Answer as a non-employee: DDU seems to work just fine, and I use it for every driver install because I’m picky about that sort of thing.
    Answer as an employee: We have our own tool.
    Previously, your next-generation GCN GPU architectures were using Arctic Islands code names, e.g. Ellesmere, Baffin, Greenland… As I understand it, the code names were changed to Polaris 10, 11 and Vega 10, 11. My question is, how do they relate to one another? Is Baffin Polaris 11, for example?


    I don’t know how to answer this. Polaris 10 is Polaris 10, and Polaris 11 is Polaris 11. There are no other names.
    Editorial note: except there are. In fact, AMD’s next-generation Polaris GPUs were listed under Arctic Islands names in the drivers, and their Arctic Islands code names are still being used in AMD’s own shipping descriptions.

    ——————————————————-
    [2016-03-03 – 01:02 PM Central Time ]
    Ok, safe to say I have more than a few questions – I understand a few of these may be very much “neither confirm nor deny”. Sorry in advance for the lack of structure, I’m just writing stuff as I think of it…

    1. Can you say whether the previously shown Fury X2 boards are final? If so, are they?
    2. What direction is dual graphics going in? Between DDR4 and improvements to GCN the memory bottleneck is certainly being eased – can we expect an XDMA engine in future APUs?
    3. Is AMD actively working with any game developers on taking advantage of HSA?
    4. Bit similar to the above, but will GPUOpen include functions that leverage the capabilities of processor graphics cores, either through HSA or OpenCL? If so, will they support Intel’s “APUs” at all, or will you be accepting patches from Intel that would add such support? Is this made more awkward to address by the fact that more expensive “enthusiast-grade” CPUs are less likely to include graphics cores?
    5. Any sign of Intel supporting freesync in the future? Would you even know if they were going to?
    6. Is it true that Kaveri has a GDDR5 controller, but no products using it were released because of poor prospects for market positioning or the total lack of hardware ecosystem support?
    7. Polaris is going to be in the next macbook, isn’t it?
    8. ISN’T IT?
    9. More serious question in that I actually think you might be able to answer unlike the above 2 or 3, how easy is it for monitor vendors to initially adopt freesync, and to add more freesync products once they’ve done their first? Do you see there being a point in the future where Freesync is a universal or near-universal feature of new monitors?
    10. Where do you think VR will be in 5 years time, and what do you think the impact on ‘conventional’ gaming will be?
    11. There’s a lot of buzz about “explicit multiadapter” in DirectX 12. Do you think DX12 and Vulkan are going to lead to more games supporting some sort of multiGPU technology (outside of one-chip-per-eye VR setups)?
    12. Is there a trend towards multigpu implementations that focus more on user experience than pure fps as in Civ:BE, or was that a bit of a one-off?
    13. What are you personally most looking forward to – VR, low-level APIs becoming commonplace, or the inevitable big jump in top-end “conventional” performance that will come sooner or later with the jump to 16/14nm?


    1) I don’t know.
    2) I don’t think there’s enough information being passed to a dual graphics configuration to benefit from XDMA.
    3) HSA is not for gaming. It is purely GPGPU.
    4) Half of the GPUOpen effort is for GPU compute. We are open to code submissions.
    5) Intel has previously commented that future CPUs would support DisplayPort Adaptive-Sync. FreeSync is based on DPAS as well.
    6) I’ve never heard this.
    7) I only work on the PC side of our business. I do not know.
    8) DON’T HURT ME.
    9) It’s actually pretty easy. Many scalers in the market at the time FreeSync was introduced were technically capable of adaptive refresh rates, but there was no specification or firmware to expose that aspect of the hardware or to use the hardware in that new way. The DisplayPort Adaptive-Sync specification changed that, and gave scaler vendors a blueprint suitable for developing newer compliant firmware that could be accommodated by many of their existing SKUs. Of course other SKUs may not be compliant, and replacements were developed in time.
    Once there’s a scaler in place, a monitor vendor is already on the hook for buying a scaler to make a display, but now they can simply choose one that offers DPAS support. We work closely with our partners to also make sure that the bill of materials they’re assembling is likely to meet our criteria for FreeSync validation and logo, which looks for specific qualities that are not addressed anywhere in the DPAS specification. Not all monitors make it.
    The larger hurdle is tuning the monitor firmware for the particular LCD panel chosen. That’s an extra step of validation that is less intensive on a static refresh screen, but it’s not a disproportionate burden or anything. And once MFGRs get a sense of this overall process, yes, it does become easier to make more displays.
    10) Gosh, I can’t even fathom. IT IS PURELY MY PERSONAL BELIEF/NOT REPRESENTING THE OPINIONS OF AMD that virtual reality will follow the model of all other consumer goods: higher refresh rates, higher resolutions, lower costs, more content. At AMD we want to see 16K pixels per eye at 240 FPS as an ultimate goal. We believe that this will be capable of simulating reality.
    11) Currently unclear, but I hope so. EMA is awesome.
    12) DX12/Vulkan make unconventional and superior mGPU configurations, like split-frame rendering, easier to implement. DX11 doesn’t even support SFR. We (the industry) are just now building broad infrastructure to explore the possibility, so it’s hard to calculate the trajectory.
    13) I am personally most excited by the expansion of adaptive refresh rate technologies. I can never go back to any gaming experience that doesn’t feel like that magical “60 FPS” moment all the time. And that’s what FreeSync feels like: 100% smooth, 100% of the time. It feels like the GPU is so much faster than it is, even when the game is running at like 45 FPS.

    Does Polaris have a dedicated h264/h265 encoder/decoder chip or is it the gpu that’s able to do that? I want a red team shadowplay otherwise I’ll have to upgrade to nvidia :/


    Yes, it has h.265 encode and decode up to 4K.

    Is GCN 1.0 support for the AMDGPU linux driver planned?


    No.
    Our Linux driver will be released quite soon.
    different companies have different definitions of soon™
    hours / days / weeks / months / years ?
    If I could be more specific, I would. We’re not talking months.
    I’m slightly surprised by the answer about HSA – of course it does nothing for drawing pretty pictures, but surely it’d be really beneficial for things like physics simulations? Am I missing something?
    HSA isn’t suitable for gaming physics sims because gaming graphics APIs already have their own well-tailored solutions for GPU physics. It’s a needless reinvention of the wheel to apply HSA to the GPU physics topic.

    1. Currently CrossFire does not work in borderless windowed mode for DX games. Yet it does work properly in OpenGL and Mantle, and I’d assume for Vulkan too. Since DX12 is in the spirit of Mantle, will we possibly see windowed CrossFire support in those titles?
    2. I’d assume DisplayPort 1.4 is off the table for Polaris, but is it possible we could see it with the next generation?
    3. With DX12 we’re seeing a dramatic decrease in processor load, but is there any more room for optimizations in DX11 on the driver side?
    4. Will we ever be able to use the outputs on the slave card in CrossFire mode?

    1) Yes, this is possible now.
    2) Way too early to say.
    3) We continue to look at it, yes.
    4) The memory copies required to drive SLS gaming on mGPU would annihilate performance.
    Hello AMD! First I want to say I appreciate you guys so much for what you have done for the games industry. Without you guys we would all be in a world of hurt, honestly, and your products have yet to disappoint on my personal end!
    I have 2 questions! Are my 7970s “fully” capable of the next-gen APIs, or are there going to be new hardware innovations with proprietary features that will lock me out? I could never find a clear answer for this!
    Also, can you talk a little bit about working with Mark Cerny on the PS4? It’s one of the fastest selling consoles of all time and Uncharted 4 looks absolutely amazing – knowing that it’s designed on the hardware you supplied just blows my mind!
    Thanks AMD you guys are seriously amazing!
    I want to be clear that there is no graphics architecture on the market today that is 100% compliant with everything DX12 or Vulkan have to offer. For example: we support Async Compute, NVIDIA does not. NVIDIA supports conservative raster, we do not. The most important thing you can do as a gamer is to own a piece of hardware that is compatible with the vast majority of the core specification, which you do. That’s where all the performance and image quality comes from, and you will be able to benefit.
    As for your PS4 questions: I work on the desktop PC side of our business, so I couldn’t really say anything useful about the PS4.
    Is the 490 going to be on 14nm as well? My 280X is still kicking strong, but I want to upgrade both monitor and GPU by the end of next summer. Speaking of which, what will happen to the 300 series? Are they going to be phased out like last year’s 200 series?
    Polaris is 14nm. Plain and simple. Any time any company flips over to a new lineup, the old products that got replaced are a “while supplies last” kind of deal.
    Hi, my questions are all about the current state of drivers and possible future updates to them:
    1) Are there any plans to incorporate some of RadeonPro’s features in future Crimson drivers, like texture LOD controls, dynamic vsync, mipmap quality, etc.?
    2) Will there be more antialiasing modes or options incorporated in future drivers?
    3) Could Radeon Settings have custom fan profiles and core voltage controls implemented in future drivers?
    4) How satisfied are you with the current state of the Crimson drivers?
    5) Will there be more tessellation improvements on the driver side for current 300 series cards?
    6) Could SSAO or HBAO methods be implemented in future drivers for older games that do not possess any of these occlusion methods?
    7) And finally, will there be any frame latency improvements?
    Hope it isn’t much to ask, and thanks
    1) We constantly monitor the community’s feature requests and evaluate whether or not the feature is worthy of bringing into the driver, or leaving in a 3rd-party tool (which uses AMD APIs and interfaces to expose these features). We take this feedback seriously, which is why, for example, we added CRU support directly in the driver in Crimson. Nothing is off the table.
    2) Forced AA modes often do not work in the driver because most games use deferred rendering engines that are basically incompatible with pipeline AA options.
    3) If there’s a “big enough” pool of requests for it, yes, it is possible. NOTE TO JOURNALISTS: I am explaining how we weight feature additions. Please do not misconstrue my explanation as a confirmation.
    4) I think it’s a big improvement over CCC. Huge. I love the UI. I am excited to see what gets added in the big feature driver for 2016.
    5) 8-16x tessellation factor is a practical value for detail vs. speed, and this is what our hardware and software is designed around. Higher tessellation factors produce triangles smaller than a pixel, and you’re turfing performance for no appreciable gain in visual fidelity.
    6) Shader injection is a risky business when you deploy a piece of software to tens of millions of people. Shader injection can easily lead to rendering errors, game crashes or BSODs. Few people think about the sheer scale of our userbase, or how a feature that might work for a small number of people tinkering with SweetFX might break down when exposed to millions of people.
    7) We profile games and, where able, adjust the pre-rendered frame limits to reduce frame and input latencies to their lowest possible values.
    Any plans for a new recording software like shadowplay inside the crimson software?
    The AMD Gaming Evolved client does this. It uses our VCE blocks inside the GPU for hardware-accelerated recording and streaming.
    What if someone is majoring in both Electrical & Computer Engineering and Media Communications? I’m just interested in seeing if there’s any position that would converge these two together.
    Probably a job just like mine. Don’t take my job, pls.
    I’ve got the XF270HU with a FS range of 40-144Hz, why wouldn’t that have LFC?
    XF270HU is the IPS variant of the XG270HU. LFC is supported.
    Do folks at AMD expect any significant lead over Nvidia, performance wise, as the new graphics APIs develop and take root?
    We’ve led in performance on every DX12 app/test so far.


    Is there anything new at all you can tell us about Polaris or is this AMA about your Guinea pigs and imagining you in your boxers playing Rocket League?
    Hey, I said they were gym shorts dude.
    AMD Gaming Evolved suggested installing Radeon Software Crimson Edition on Windows 10. Now I have two AMD applications installed -> confusing. Is there a plan to remove one?
    AMD Gaming Evolved is our optional game streaming, game recording, game optimization client. You can remove it any time you like. Radeon Software Crimson Edition is our graphics driver, I imagine you wouldn’t want to remove that.
    I’m sorry, but a lot of people don’t like that thing, and a lot of people are also having technical problems with it. I think it was a mistake buying it. You also killed RadeonPro with this move, which was the only tool with enough settings, and Radeon Settings still isn’t a good replacement for it – Radeon Settings isn’t even able to read the correct Windows language setting variable …
    25 million people actively use the Gaming Evolved application every day. Not just installs, but active users. I think a little perspective is warranted.
    As for RadeonPro, let me be clear: John Mautari decided to leave the project for a job at Raptr. That’s his prerogative, and it’s his life.
    [2016-03-03 – 1:58 PM Central Time ]
    What would you say would be the lowest overall boost we’ll see in fps performance moving to vulkan? I mean, presumably vulkan performance will be better than current performance, what’s a very conservative, rough estimate for the kind of performance boost to expect? 5%? 10%?
    There’s been a lot of talk about AMD drivers hampering performance. Would you say that drivers aren’t quite doing AMD hardware justice? Presumably mature Vulkan drivers for AMD hardware won’t be hampered in the same way, if the driver issue is a real problem. What kind of increase in performance would you estimate we could see?
    Will vulkan games and applications be more sensitive to CPU core count? Will 4 core, and 8 core systems see better scaling with vulkan than with current apis? What rough numbers can you throw out there?
    What are you most excited about about AMD chips that are on the shelves now? Process shrinks haven’t been getting easier, but it seems like you’ve been able to rise to the challenge. The media seems to be focusing on your hbm advantage. Sadly I haven’t been able to follow things too closely. A quick google points to some hits on nvidia being tied to samsung’s 14nm process. Is there any good news you’d like to share about any early predictions you can make? Presumably the samsung process will be targeted at fairly low power, low voltages. Will that be a disadvantage for them going up against chips like the furies, and the r9 380s?
    I don’t think I’ve noticed a lot of press on your ARM chips. Wikipedia mentions they were released 2H ’15-ish? Do you have any highlights on them? I’m guessing they’re targeting VM/cloud computing? I believe the industry’s been dipping its toes in that pond. Your chips are roughly in line with the competition?
    iirc AMD had a flash memory line. Was that the sort of commodity flash you’d find in an SSD? I’m asking because it struck me the other day. I actually own two sticks of AMD RAM, putting that together with your flash production I wondered why there was AMD ram, but no AMD ssds. Did the flash production go over to global foundries? I guess they’re sort of a white label now? Their customers probably don’t want to compete with AMD ssds?
    Nvidia seems to have seen some success with their tressfx tessellation. That success wouldn’t have worked if nvidia hadn’t invested a lot in the hardware, but it also required a big investment in middleware, and on top of those two things, they needed games and game engines to use them. Can any lessons be learned from that?
    Do you mind saying, in rough terms, how similar the windows and linux drivers code is? Is there any shared code?
    1) This is just me wildly ballparking based on other low-overhead APIs. And it could be total BS when all is said and done: but I think 7-15% in GPU-bound scenarios, and up to 25% in scenarios where the game is binding the CPU.
    2) One of the points of low-overhead APIs is to move the driver and the run time out of the way, and minimize their impact to the overall rendering latency from start to finish. Vulkan and DX12 both behave like this. And I want to be clear that they are designed like this because all graphics drivers need to get out of the way to achieve peak performance from an app.
    3) Generally APIs like Vulkan are designed to eliminate CPU binding on modest CPUs, and then expand performance on powerful CPUs. This has the effect of raising both the floor and the ceiling of CPUs vs. higher level APIs.
    4) I am always most excited about FreeSync. Any gamer who knows that “magic moment” where the game FPS matches the display refresh rate, and how smooth that can be, is crazy not to want that all the time. But that’s what FreeSync gives you: that perfectly liquid smoothness at damn near any frame rate.
    5) We are using the more advanced 14nm node. NVIDIA is 16nm.
    6) You ask about ARM, but that line of business is furthest from my role. I cannot answer these questions.
    7) We used to own a company called “Spansion”. Spansion was sold off. Now we contract with third-party DRAM and NAND ODMs to build products to our specifications.
    8) NVIDIA had success with TressFX because we designed the effect to run well on any GPU. It’s really that simple. They were successful because we let them be. We believe that’s how it should be done for gamers: improve performance for yourself, don’t cripple performance for the other guy. The lesson we learned is that actual customers see value in that approach.
    9) I am not a developer, and not privy to the codebases.
    2) Forced AA modes often do not work in the driver because most games use deferred rendering engines that are basically incompatible with pipeline AA options.
    Which leaves VSR as the only driver-enforced choice.
    Could you please explain why the range of VSR resolutions for GCN1 and GCN2 is more limited than for GCN3 GPUs (the former basically can’t output 4K on 1080p/1200p monitors)?
    Could this be overcome in future driver updates?
    We have hardware that performs real-time frame resizing with 0% performance impact above and beyond the change in resolution selected by the user. This hardware became more advanced as we iterated GCN.
    It is possible to explore shader-based methods, but that can introduce specific performance penalties of their own.
    We will continue to look at adding additional resolution options to VSR.


  3. #33
    Enzo
    They are really committed to this. It remains to be seen what results they can deliver for non-VR gamers afterwards.

  4. #34
    Jorge-Vieira
    [UPDATED] AMD Radeon Technologies Group Q&A With Robert Hallock – All Questions And Answers

    In case you’re unaware, AMD’s technical lead at Radeon Technologies Group Robert Hallock is currently conducting an AMA – Ask Me Anything – at reddit. The AMA started at 10 AM central time and will continue until 5 PM today, March 3rd.

    For your convenience we’ll be live-blogging the event, relaying Robert’s answers to you and the questions to which they’ve been given. If you want to check out the AMA over at reddit to ask a question you can do so by visiting the /r/amd subreddit or following this link which will take you there directly.
    [2016-03-03 – 5:20 PM Central Time ]
    The AMA/Q&A session just came to a close. Below you’ll find all the questions that Robert has answered and their respective answers in their entirety. Get ready, there are over 9000 words below to dig into.
    AMD Radeon Technologies Group AMA – Live Blog, Updated By The Minute

    Comments by Robert Hallock – technical marketing lead at Radeon Technologies Group, AMD – are highlighted in red.
    Did moving to the 14nm FinFET node after being stuck on the 28nm planar transistor node present any significant challenges that were different from previous shrinks?
    Every node has its little foibles, but I am not quite in a position where I can disclose the level of information you’re probably looking for. I like keeping my job. However, we know that people are very interested in the process tech and we intend to publish a lot more information on the architecture and the process before Polaris parts arrive mid-2016. I know this isn’t quite as good as getting an answer today, but I hope you can see we’re thinking about questions like this already.
    Can you discuss the yields on the Polaris dies right now?
    I am not privy to yield information at AMD.
    Did the recent earthquake in Taiwan affect the production schedule in any major way?
    No.
    How did Mantle affect the development of DX12 besides catalyzing progress? Was there a significant amount of Mantle going into the development of DX12, or was it just the philosophy of low CPU/driver overhead?
    I like to say that Mantle was influential. Microsoft was one of several parties that had access to the API specification, documentation and tools throughout Mantle’s 3-year development cycle. We’re glad Microsoft saw we were doing the right thing for PC graphics and decided to spin up the DX12 project; DX12 has been pretty great for performance and features.
    Having each eye handled by a separate D-GPU would be of greatest benefit in VR, so is there going to be an increased focus on CrossFire and LiquidVR support in preparation for big VR launches?
    This is something we’re intensely interested in. LiquidVR SDK has a feature called Affinity Multi-GPU, which can allocate one GPU to each eye as you suggest. Certainly a single high-end GPU and/or dynamic fidelity can make for a good VR experience, but there are gamers who want unequivocally the best experience, and LVR AMGPU accomplishes that. As a recent good sign of adoption, the SteamVR Performance Test uses our affinity multi-GPU tech to support mGPU VR for Radeon, whereas SLI configs are not supported.
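    Editorial note: conceptually, affinity multi-GPU means the engine tags each batch of rendering work with a mask of the GPUs that should execute it, so the left and right eye can be rendered in parallel on separate cards. The C++ sketch below is only a hedged illustration of that flow; setGpuAffinityMask() and renderEye() are hypothetical placeholders, not the actual LiquidVR SDK interface.

```cpp
// Hedged, conceptual sketch of one-GPU-per-eye affinity rendering.
// setGpuAffinityMask() and renderEye() are hypothetical helpers standing in
// for the real LiquidVR / driver calls; only the structure is illustrative.
#include <cstdint>
#include <iostream>

enum Eye { LeftEye = 0, RightEye = 1 };

constexpr uint32_t GPU0 = 1u << 0;  // bit 0 -> first GPU
constexpr uint32_t GPU1 = 1u << 1;  // bit 1 -> second GPU

void setGpuAffinityMask(uint32_t mask) {
    // Hypothetical: subsequent API calls would be routed only to the GPUs
    // whose bits are set in 'mask'.
    std::cout << "affinity mask set to 0x" << std::hex << mask << std::dec << "\n";
}

void renderEye(Eye eye) {
    // Hypothetical: record and submit the draw calls for one eye's view.
    std::cout << "rendering " << (eye == LeftEye ? "left" : "right") << " eye\n";
}

void renderStereoFrame() {
    setGpuAffinityMask(GPU0);   // left-eye work goes to GPU 0
    renderEye(LeftEye);

    setGpuAffinityMask(GPU1);   // right-eye work goes to GPU 1 in parallel
    renderEye(RightEye);

    // After both eyes are submitted, the HMD-facing GPU would composite and
    // present; cross-GPU transfers are handled by the driver or explicit copies.
}

int main() { renderStereoFrame(); }
```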
    How are you guys going to cool the FuryX2? Is it going to be a blower like the dev kits Roy Taylor has been posting or is it going to be a liquid cooling loop like the 295×2? Also any idea of a price point?
    With all due respect to Tizaki, he was erroneous in placing dual Fury on the list of kosher topics. I am not in a position where I can discuss that product.
    In regards to Polaris can you discuss any of the GPUs we could be seeing sometime in the near future, like the Fury and FuryX successor, are we going to see a low profile card like the Nano again?
    We will discuss specific SKUs and form factors when Polaris launches mid-year.
    Is there anything you can disclose about GPUOpen tools being deployed in games that are in development, or even some of the interesting application people are trying to make using the compute tools?
    Watch for GDC.
    As far as FreeSync monitors, do you see an increased adoption of FreeSync technology from monitor and panel manufacturers because of HDMI support?
    HDMI support is directly responsible for expanding the list of FreeSync-enabled monitors from 30 to 40 overnight. HDMI is the world’s most common display interface, and there’s a huge industry economy of scale built around it, especially at the mainstream end of the market where users with modest GPUs are correspondingly most in need of adaptive refresh rates. Porting FreeSync to HDMI was highly requested by our display partners.

    When will we see Vulkan support in the Linux driver, and will that driver only be available on top of AMDGPU as part of the hybrid driver model?
    Also will Polaris be supported day 1 by the AMDGPU driver, or will it require changes to the current Power Play code that supports reclocking on the 380 series hardware?
    I’m basically just trying to figure out if and when I need to buy a new card, and if Polaris is an option.



    1. The Vulkan Linux driver will be released soon, and it will only be a part of amdgpu.
    2. I do not know the current roadmap for our Linux drivers, and I am generally unable to disclose anything that forward-looking if I want to keep my job. Thanks for understanding.


    Thanks for it guys!

    1. What happened with TrueAudio? It was the most exciting feature of the Radeon GPUs, but there isn’t any new game that supports it.
    2. Why don’t you build special support around TrueAudio, so the hardware can convert 5.1/7.1-channel audio to headphone stereo, much like how CMSS-3D works in the Creative drivers?
    3. A lot of PCs are sold with an integrated GPU + dedicated GPU configuration. Why not use the integrated GPU for compute offloads? Windows 10 allows a lot of interesting techniques to share memory between the latency-optimized and throughput-optimized cores.
    4. Will the Kaveri and Godavari APUs get an updated BIOS to support HSA, even if the chips weren’t designed for the 1.0 spec?
    5. I like the new APIs (DX12 and Vulkan), but there are some features that might come in handy, and the consoles support them. For example: ordered atomics, SV_Barycentric, SIMD lane swizzles. Is there a chance to support these with some extensions on PC?


    1) TrueAudio still finds regular use in the console space. However, on the desktop PC side there seems to be generally less interest in complicated soundscapes for PC gaming. Going forward, we’re interested in exploring TrueAudio’s rich positional capabilities to augment VR experiences–I think this is probably a better use for the technology.
    2) For the precise reason you pointed out: most audio hardware already supports this functionality, and a duplication of effort is not a worthy use of resources.
    3) DirectX 12, Vulkan, Mantle and HSA can do this.
    4) KV/GV already support HSA, however the hardware is compliant with the HSA 1.0 Provisional specification. The 1.0 Final specification increased hardware requirements that are met by Carrizo. More info.
    5) DX12 and Vulkan both support API extensions. Especially Vulkan.
    ——————————————————-
    [03/03/2016 – 10:58 AM Central Time]







    And I was doubted in my request for an AMA, yar!
    Can we put DX11 to rest: are we going to see DCLs added or not?
    Can anything be said in more detail about how RTG views its current drivers and what it’s striving for?
    How soon can Linux users expect to see some love, with some major gains in the Linux driver?
    Any chance of integrating features or advanced settings that are featured in programs like RadeonPro and RadeonMOD, and ditching Raptr outright?
    Are we going to see any rebrands this year, or will everything be a fresh baseline with 14nm on the GPU side?
    With the benchmarks we’re seeing that take advantage of async compute, will this be a feature that requires additional work from the devs, or is it something easily implemented, to the point where it becomes mainstream?
    Any tricks up AMD’s sleeve that we might see in Vulkan that may or may not be an easy addition to DX12?
    Is there a specific reason the drivers still have downclocking issues when a small program can be used to stop them?
    Are there any plans to increase resources (including software engineers) allocated to Radeon Software including Drivers, GPUOpen materials and other products?
    AMD Dockport, anything we can expect to see from this?
    1) Because DCLs are useless. They’ve been inappropriately positioned as a panacea for DX11’s modest multi-threading capabilities, but most journalists and users exploring the topic are not familiar with why DCLs are so broken or limited.
    Let’s say you have a bunch of command lists on each CPU core in DX11. You have no idea when each of these command lists will be submitted to the GPU (residency not yet known). But you need to patch each of these lists with GPU addresses before submitting them to the graphics card. So the one single CPU core in DX11 that’s performing all of your immediate work with the GPU must stop what it’s doing and spend time crawling through the DCLs on the other cores. It’s a huge hit to performance after more than a few minutes of runtime, though DCLs are very lovely at arbitrarily boosting benchmark scores on tests that run for ~30 seconds.
    The best way to do DX11 is from our GCN Performance tip #31: “A dedicated thread solely responsible for making D3D calls is usually the best way to drive the API.” Notes: the best way to drive a high number of draw calls in DirectX 11 is to dedicate a thread to graphics API calls. This thread’s sole responsibility should be to make DirectX calls; any other types of work should be moved onto other threads (including processing memory buffer contents). This graphics “producer thread” approach allows the driver’s “consumer thread” to be fed as fast as possible, enabling a high number of API calls to be processed. (A minimal threading sketch of this pattern follows this answer block.)
    2+4) Sort of. We’re clearly interested in doing a major feature-rich driver every year, and that is on-track for 2016 as well. We constantly crawl the various PC gaming communities to keep our fingers on the pulse of what people want to see in terms of features. That’s why custom resolutions were added in Radeon Software, for example. But for us it is always a delicate balance of deciding what to leave in the 3rd-party tools vs. what should be incorporated directly. Not everything is safe or easy for the everyday user that’s not likely to be a GPU enthusiast participating in my nerd AMA.
    3) It would probably be better to ask Linux driver questions of our guru Graham Sellers. He’s a much better authority than I am on Linux and the Linux driver. I’ll be transparent: I’m a Windows gamer, and always have been, and probably always will be. I’m not very familiar with Linux.
    5) We will discuss SKUs when Polaris debuts mid-year.
    6) Async Compute does not require any changes to a developer’s shader code, so it is relatively straightforward to implement. Thus far every DX12 app and test has included support for it, so that’s encouraging.
    7) Comparable capabilities, though Vulkan more enthusiastically supports API extensions that could be useful to future hardware. NOTE TO JOURNALISTS: This does not imply there are super secret hidden hardware features that can only be exposed by Vulkan. I am only pointing out that this is one of the powerful aspects of Vulkan for HW vendors like us.
    8) I believe our release notes make it very clear we’re aware of the issue and are working to resolve it. The fix will be ready when it’s ready.
    9) I cannot comment on AMD staffing.
    10) Dockport is from our client (CPU/APU) side of the business, so I’m not familiar with its goings on.
    11) I hope I numbered these correctly.
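    Editorial note: to make the “producer thread” advice in answer 1 concrete (as referenced above), here is a minimal, hedged C++11 sketch of that pattern: worker threads prepare work and enqueue it, and one dedicated thread is the only one that would ever talk to the graphics API. submitToGraphicsAPI() is a hypothetical stand-in for real ID3D11DeviceContext calls.

```cpp
// Minimal producer/consumer sketch of a dedicated graphics-API thread.
// The queue holds opaque "render commands"; only renderThread would ever
// call into D3D11 in a real engine. submitToGraphicsAPI() is a hypothetical
// stand-in for those immediate-context calls.
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

using RenderCommand = std::function<void()>;

class RenderQueue {
public:
    void push(RenderCommand cmd) {
        { std::lock_guard<std::mutex> lock(m_); q_.push(std::move(cmd)); }
        cv_.notify_one();
    }
    // Blocks until a command is available; returns false once drained and shut down.
    bool pop(RenderCommand& out) {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [&] { return !q_.empty() || done_; });
        if (q_.empty()) return false;
        out = std::move(q_.front());
        q_.pop();
        return true;
    }
    void shutdown() {
        { std::lock_guard<std::mutex> lock(m_); done_ = true; }
        cv_.notify_all();
    }
private:
    std::queue<RenderCommand> q_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
};

void submitToGraphicsAPI(int drawCallId) {
    // Hypothetical placeholder: in a real engine this is where the single
    // "consumer" thread would issue ID3D11DeviceContext draw/state calls.
    std::cout << "draw call " << drawCallId << " submitted\n";
}

int main() {
    RenderQueue queue;

    // The one dedicated "consumer" thread that drives the graphics API.
    std::thread renderThread([&] {
        RenderCommand cmd;
        while (queue.pop(cmd)) cmd();
    });

    // Game-side "producer" threads do everything except API calls,
    // then hand finished work items to the render thread.
    std::vector<std::thread> producers;
    for (int t = 0; t < 3; ++t) {
        producers.emplace_back([&, t] {
            for (int i = 0; i < 4; ++i)
                queue.push([=] { submitToGraphicsAPI(t * 100 + i); });
        });
    }
    for (auto& p : producers) p.join();

    queue.shutdown();
    renderThread.join();
}
```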
    ——————————————————-
    [2016-03-03 – 11:09 AM Central Time]

    • Will Polaris GPU’s use GDDR5X? [if it’s not already using HBM2]
    • How different is the Polaris architecture compared to the current generation GCN? Is it GCN 1.3 or GCN 2.0?
    • How is tessellation implemented in Polaris?
    • What is the tier of support for DX12 features like conservative rasterization, tiled resources, resource binding, rasterizer ordered views, order-independent transparency, etc.?
    • What happened to TrueAudio? Now is the perfect time for 3D audio technologies given the commercial introduction of VR.
    • Can you guys work with Khronos and release a full featured OpenSL? Maybe even donate TrueAudio to Khronos to kickstart the process? Immersive VR requires not only low-latency visuals, but also low-latency 3D sound.
    • Why are improvements in cooling technology, like the Sandia spinning heatsink or TIMs with good lateral conductivity, not seen in commercial products yet?
    • Why the name GPUOpen? As much as it is a step in the right direction, the name is a bit…. unimaginative.
    • How independent is the RTG to make decisions?
    As a general comment, you’re asking many architecture-specific questions. I cannot answer them right now, but please know that we understand there’s a tremendous appetite for specific µarch and process details, and we intend to answer them before Polaris launches mid-year. I think this generally addresses questions: 1, 2, 3, 4. But on the point of #2: We consider this 4th gen GCN.
    For TrueAudio, see this. WRT OpenSL, decent point; I’ll bring it up internally.
    For TIM: I am not a material sciences expert so I probably cannot answer your question to the depth that you want. However I will say that advanced TIMs can get very expensive very fast, and that is probably the hurdle you’re seeing on commercial products.
    GPUOpen: It’s a straightforward name. I like it.
    Independence: There’s no objective metric by which I can measure an answer to this question. These discussions happen above my level, and the arrival of RTG has not substantially changed my day-to-day job function.


    OH YES! Me first! Ok, so when is gonna be the EXACT release date on those Bristol Ridge APUs and Polaris GPUs?


    June 2, 1997.






    Are the cards going to have any overclocking potential, or is this going to be another fury situation?
    Will the cards come with DP1.4?


    We will discuss specific SKUs and overclocking capabilities at product launch in mid-year.
    They will come with DP1.3. There is a 12-18 month lag time between the final ratification of a display spec and the design/manufacture/testing of silicon compliant with that spec. This is true at all levels of the display industry. For example: DP 1.3 was finished in September, 2014.
    ——————————————————-
    [2016-03-03 – 11:19 AM Central Time ]
    Hi, how did you all get your jobs at AMD? Where did you all go to university, what did you study, etc? What kind of person would you recommend your job for? What’s it like? What’s the best part of the job, and what’s the worst?
    And how would you recommend someone start teaching themselves about the more advanced parts of today’s processors? Block diagrams blow my mind and I’ve always wanted to learn more about how these things work.
    1) I used to be a hardware reviewer at a .com. You make contacts in the industry, and you make friends. One day you post on Facebook that you’re looking for a new apartment, and one of those contacts asks if you want to move to Toronto to work for AMD. Of course I said yes! Best decision of my life.
    2) I did not complete university. Instead I spent about 6-8 hours of every day for 3-5 years learning about PC hardware, reviewing that hardware, analyzing the industry, and writing furiously about that industry. I did over 500 feature-length articles during that time.
    3) I think it’s like any job. Sometimes it’s incredibly stressful, especially surrounding a product launch or a tradeshow, and other times it’s nice and laid back. I have really fantastic coworkers that keep me level, and it’s nice to be surrounded by cool technology all the time. The access to tech is probably the coolest part, but the frequent travel is probably the worst part–it’s just exhausting.
    4) Read every architecture deep dive you can get your hands on! For both CPUs and GPUs. That will help you start to understand what each component of the silicon does, and how it affects performance.

    ——————————————————-
    [2016-03-03 – 11:30 AM Central Time ]
    Will this finally be the year of AMD on Linux?
    Probably not. But I want to hear the answer as to why there is still no Vulkan driver from AMD for Linux. Everyone else has one


    Our Linux driver will be released quite soon.
    ——————————————————-
    [2016-03-03 – 11:47 AM Central Time ]
    Hello RTG. The thing that I would like to know is whether the Linux AMDGPU driver will support GCN 2 (Hawaii) hardware. Thank you very much!


    amdgpu already supports Hawaii.
    Thanks for taking the time to answer our queries, Thracks! Here are mine:
    1) Following Anandtech’s look at AMD laptops, I’d like to know if there’s a realistic possibility of having high-end AMD graphics make a return in gaming notebooks. Having NVIDIA’s hardware as the only option is not healthy for the industry, and I’d hoped that you would have some way of shoehorning a Radeon R9 Nano into a high-end 17-inch laptop chassis by now.
    2) Why are the GPUs inside the Falcon Tiki cases that Roy Taylor teased on Twitter using blower-style coolers? Is it a hybrid water-cooling kit, or will the dual-GPU Fury use this design for compact chassis?https://twitter.com/Roy_techhwood/st...22817478569984
    3) Can you comment on the recent developments regarding Ashes of the Singularity and DirectX 12 in PC Perspective and Extremetech’s tests? Will changes in AMD’s driver to include FlipEx support fix the framerate issues and allow high-refresh monitor owners to enjoy their hardware fully?
    4) Can you comment on how FreeSync is affected by the way games sold through the Windows Store run in borderless windowed mode?
    5) How far along are the efforts to port over the changes in Radeon Crimson Software to Linux? When can I reasonably expect to be able to switch over, and not have issues in games with my Radeon R7 265, using AMD’s drivers?
    6) Daisy-chained DisplayPort 1.2a monitors with FreeSync… does it work?
    7) FreeSync on a DisplayPort monitor connected to a Thunderbolt dock… does that work? (see here [https://www.youtube.com/watch?v=NshXgisNly4] at 14:24)
    8) This post of yours, Thracks, on Facebook. Is this hinting at the work AMD’s been doing to drive the DockPort standard? I haven’t heard about that for at least two years, and I fully expected to be able to buy DockPort stuff by now.


    1) Polaris.
    2) This is a project that I have not personally worked on or been involved with at AMD, so I could not answer.
    3) We will add DirectFlip support shortly.
    4) This article discusses the issue thoroughly. Quote: “games sold through Steam, Origin and anywhere else will have the ability to behave with DX12 as they do today with DX11.”
    5) This is a question better posed to Graham Sellers, our Linux driver Guru.
    6) It should. I have not tried.
    7) That depends on how well the dock transports the DP information, and if it is fully compliant with the DP1.2a standard. Should be fine, though.
    8) No.
    ——————————————————-
    [2016-03-03 – 11:53 AM Central Time ]
    “1) TrueAudio still finds regular use in the console space.”
    Could you please expand a bit on this? More precisely:
    1 – Does the PS4 have the exact same audio DSP block as TrueAudio in discrete GPUs?
    2 – When is it used? Does the console use it automatically every time a compatible middleware for audio is used in development? E.g. every time we see “Wwise” and/or FMOD in a PS4 game can we assume TrueAudio is being used?
    3 – If so, why isn’t this simply ported to PC games in multi-platform titles?


    1) No.
    2) Same dependencies as PC: when the developer chooses the TrueAudio-enabled versions of any Wwise/FMOD plugin.
    3) Lots of things don’t carry over from PC to console, and vice versa. I am not privy to the details as of why.

    How far off are HDR monitors? AMD hyped them up at CES, and I think they would be great for games like Elite Dangerous.


    We expect the industry to arrive on HDR displays in the second half of 2016.
    ——————————————————-
    [2016-03-03 – 12:05 PM Central Time ]

    With the lack of overlay and FRAPS support in many DirectX 12 games, is there a way that gamers and reviewers can properly benchmark games in the future? Will Hitman have FCAT support? Will there be a way to benchmark DirectX 12 games, especially ones that use the Windows compositing engine (i.e. are from the Windows Store)?


    It will be up to developers to make performance analysis applications that are compliant with DX12, just like they did for DX9/10/11. App developers can also incorporate high-performance counters and logging suitable for analyzing performance.
    To your second question: “games sold through Steam, Origin and anywhere else will have the ability to behave with DX12 as they do today with DX11.” Source.
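    Editorial note: the “high-performance counters and logging” suggestion above can be as simple as the application timestamping its own presented frames. Below is a minimal, hedged C++ sketch using std::chrono as a stand-in for QueryPerformanceCounter-style timers; renderAndPresentFrame() is a hypothetical placeholder for the engine’s real render-and-present step.

```cpp
// Minimal in-app frame-time logger: the kind of built-in instrumentation a
// DX12 title can expose when external overlays/FRAPS-style hooks don't work.
// renderAndPresentFrame() is a hypothetical placeholder for the real work.
#include <chrono>
#include <iostream>
#include <thread>
#include <vector>

void renderAndPresentFrame(int frameIndex) {
    // Placeholder workload standing in for the engine's render + present.
    std::this_thread::sleep_for(std::chrono::milliseconds(16));
    (void)frameIndex;
}

int main() {
    using clock = std::chrono::steady_clock;
    std::vector<double> frameTimesMs;

    auto previous = clock::now();
    for (int frame = 0; frame < 10; ++frame) {
        renderAndPresentFrame(frame);
        auto now = clock::now();
        double ms = std::chrono::duration<double, std::milli>(now - previous).count();
        previous = now;
        frameTimesMs.push_back(ms);
        std::cout << "frame " << frame << ": " << ms << " ms\n";
    }

    double total = 0.0;
    for (double ms : frameTimesMs) total += ms;
    std::cout << "average: " << total / frameTimesMs.size() << " ms ("
              << 1000.0 * frameTimesMs.size() / total << " FPS)\n";
}
```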
    ——————————————————-
    [2016-03-03 – 12:12 PM Central Time ]
    Tell us more about HDR. Does this tech add cost to a monitor, and how much? When should we expect the first monitors with this tech? How many games will support it? How is this going to work with different panel types (TN/IPS)?


    1) Yes it does.
    2) Probably 2H16.
    3) Unknown at this time. Developers will need to adjust their tonemapping algorithms to be HDR-aware. This is not a trivial task, but also not incredibly complicated. Some notable devs have already expressed interest to us, so I would expect games to follow around the time monitors start appearing.
    4) HDR is best articulated by bright and high-contrast panel technologies like OLED. But local-dimming LCD is also viable on a variety of panel types, especially the contrasty ones like VA.
    As a followup to #3, I know that HSA has been pushed for a while now and that Mantle, DX12 and Vulkan support using non-paired GPUs, leading me to think that the future should be the heyday of CPUs with an integrated graphics chip. (They also make sense from a usability standpoint; if my Radeon card dies I still would like to be able to use the computer, even if it isn’t to game.)
    Are there plans to integrate a GPU into Zen, and if so, how long will we have to wait for them to come out? Will the Zen CPUs always be the flagship chip design, or will the future see Zen APUs taking over?


    Zen is the name of an architecture. There will be APUs and CPUs based on Zen. Zen-based CPUs will debut first.

    ——————————————————-
    [2016-03-03 – 12:24 PM Central Time ]
    Posting as its own comment for visibility: I know there are lots of questions about Polaris’ architecture, memory configurations and market SKUs. Right now it is just too early for me to be disclosing this sort of information. I know that will be disappointing to many of you, and I apologize that the OP was not more clear about the boundaries.
    I promise to return for another AMA when I can answer all of your Polaris questions. I intend to do right by you guys; you deserve that. You’ll just need to give me a little more time to do that.
    Not even whether it’s all 14nm or some 16nm? Come onnnn, that doesn’t seem worth hiding.


    14nm GlobalFoundries.






    ——————————————————-
    [2016-03-03 – 12:28 PM Central Time ]

    does AMD support using DDU to completely wipe drivers?


    Answer as a non-employee: DDU seems to work just fine, and I use it for every driver install because I’m picky about that sort of thing.
    Answer as an employee: We have our own tool.
    Previously, your next-generation GCN GPU architectures were using Arctic Islands code names, e.g. Ellesmere, Baffin, Greenland… As I understand it, the code names were changed to Polaris 10, 11 and Vega 10, 11. My question is, how do they relate to one another? Is Baffin Polaris 11, for example?


    I don’t know how to answer this. Polaris 10 is Polaris 10, and Polaris 11 is Polaris 11. There are no other names.
    Editorial note:
    Except there are. In fact AMD’s next generation Polaris GPUs were listed under Arctic Islands names in the drivers and their Arctic Islands code names are still being used in AMD’s own shipping descriptions.

    ——————————————————-
    [2016-03-03 – 01:02 PM Central Time ]
    Ok, safe to say I have more than a few questions – I understand a few of these may be very much “neither confirm nor deny”. Sorry in advance for the lack of structure, I’m just writing stuff as I think of it…

    1. Can you say if the previously shown Fury X2 boards are final? If so, are they?
    2. What direction is dual graphics going in? Between DDR4 and improvements to GCN the memory bottleneck is certainly being eased – can we expect an XDMA engine in future APUs?
    3. Is AMD actively working with any game developers on taking advantage of HSA?
    4. Bit similar to the above, but will GPUOpen include functions that leverage the capabilities of processor graphics cores, either through HSA or OpenCL? If so, will they support Intel’s “APUs” at all, or will you be accepting patches from Intel that would add such support? Is this made more awkward to address by the fact that more expensive “enthusiast-grade” CPUs are less likely to include graphics cores?
    5. Any sign of Intel supporting freesync in the future? Would you even know if they were going to?
    6. Is it true that Kaveri has a GDDR5 controller, but no products using it were released because of poor prospects for market positioning or the total lack of hardware ecosystem support?
    7. Polaris is going to be in the next macbook, isn’t it?
    8. ISN’T IT?
    9. More serious question in that I actually think you might be able to answer unlike the above 2 or 3, how easy is it for monitor vendors to initially adopt freesync, and to add more freesync products once they’ve done their first? Do you see there being a point in the future where Freesync is a universal or near-universal feature of new monitors?
    10. Where do you think VR will be in 5 years time, and what do you think the impact on ‘conventional’ gaming will be?
    11. There’s a lot of buzz about “explicit multiadapter” in DirectX 12. Do you think DX12 and Vulkan are going to lead to more games supporting some sort of multiGPU technology (outside of one-chip-per-eye VR setups)?
    12. Is there a trend towards multigpu implementations that focus more on user experience than pure fps as in Civ:BE, or was that a bit of a one-off?
    13. What are you personally most looking forward to – VR, low-level APIs becoming commonplace, or the inevitable big jump in top-end “conventional” performance that will come sooner or later with the jump to 16/14nm?


    1) I don’t know.
    2) I don’t think there’s enough information being passed to a dual graphics configuration to benefit from XDMA.
    3) HSA is not for gaming. It is purely GPGPU.
    4) Half of the GPUOpen effort is for GPU compute. We are open to code submissions.
    5) Intel has previously commented that future CPUs would support DisplayPort Adaptive-Sync. FreeSync is based on DPAS as well.
    6) I’ve never heard this.
    7) I only work on the PC side of our business. I do not know.
    8) DON’T HURT ME.
    9) It’s actually pretty easy. Many scalers in the market at the time FreeSync was introduced were technically capable of adaptive refresh rates, but there was no specification or firmware to expose that aspect of the hardware or to use the hardware in that new way. The DisplayPort Adaptive-Sync specification changed that, and gave scaler vendors a blueprint suitable for developing newer compliant firmware that could be accommodated by many of their existing SKUs. Of course other SKUs may not be compliant, and replacements were developed in time.
    Once there’s a scaler in place, a monitor vendor is already on the hook for buying a scaler to make a display, but now they can simply choose one that offers DPAS support. We work closely with our partners to also make sure that the bill of materials they’re assembling is likely to meet our criteria for FreeSync validation and logo, which looks for specific qualities that are not addressed anywhere in the DPAS specification. Not all monitors make it.
    The larger hurdle is tuning the monitor firmware for the particular LCD panel chosen. That’s an extra step of validation that is less intensive on a static refresh screen, but it’s not a disproportionate burden or anything. And once MFGRs get a sense of this overall process, yes, it does become easier to make more displays.
    10) Gosh, I can’t even fathom. IT IS PURELY MY PERSONAL BELIEF/NOT REPRESENTING THE OPINIONS OF AMD that virtual reality will follow the model of all other consumer goods: higher refresh rates, higher resolutions, lower costs, more content. At AMD we want to see 16K pixels per eye at 240 FPS as an ultimate goal. We believe that this will be capable of simulating reality.
    11) Currently unclear, but I hope so. EMA is awesome.
    12) DX12/Vulkan make unconventional and superior mGPU configurations, like split-frame rendering, easier to implement. DX11 doesn’t even support SFR. We (the industry) are just now building broad infrastructure to explore the possibility, so it’s hard to calculate the trajectory.
    13) I am personally most excited by the expansion of adaptive refresh rate technologies. I can never go back to any gaming experience that doesn’t feel like that magical “60 FPS” moment all the time. And that’s what FreeSync feels like: 100% smooth, 100% of the time. It feels like the GPU is so much faster than it is, even when the game is running at like 45 FPS.

    Does Polaris have a dedicated H.264/H.265 encoder/decoder chip, or is it the GPU that’s able to do that? I want a red team ShadowPlay, otherwise I’ll have to upgrade to NVIDIA :/


    Yes, it has H.265 encode and decode up to 4K.


    Is GCN 1.0 support for the AMDGPU linux driver planned?


    No.
    Our Linux driver will be released quite soon.
    different companies have different definitions of soon™
    hours / days / weeks / months / years ?
    If I could be more specific, I would. We’re not talking months.
    I’m slightly surprised by the answer about HSA – of course it does nothing for drawing pretty pictures, but surely it’d be really beneficial for things like physics simulations? Am I missing something?
    HSA isn’t suitable for gaming physics sims because gaming graphics APIs already have their own well-tailored solutions for GPU physics. It’s a needless reinvention of the wheel to apply HSA to the GPU physics topic.

    1. Currently CrossFire does not work in borderless windowed mode for DX games. Yet it does work properly in OpenGL, Mantle, and I’d assume for Vulkan too. Since DX12 is in the spirit of Mantle, will we possibly see windowed CrossFire support in those titles?
    2. I’d assume DisplayPort 1.4 is off the table for Polaris, but is it possible we could see it with the next generation?
    3. With DX12 we’re seeing a dramatic decrease in processor load, but is there any more room for optimizations in DX11 on the driver side?
    4. Will we ever be able to use the outputs on the slave card in CrossFire mode?

    1) Yes, this is possible now.
    2) Way too early to say.
    3) We continue to look at it, yes.
    4) The memory copies required to drive SLS gaming on mGPU would annihilate performance.
    Hello AMD! First I want to say I appreciate you guys so much for what you have done for the games industry. Without you guys we would all be in a world of hurt, honestly, and your products have yet to disappoint on my personal end!
    I have 2 questions! Are my 7970s “fully” capable of next-gen APIs, or are there going to be new hardware innovations with proprietary features that will lock me out? I could never find a clear answer for this!
    Also, can you talk a little bit about working with Mark Cerny on the PS4? It’s one of the fastest-selling consoles of all time and Uncharted 4 looks absolutely amazing; knowing that it’s designed on the hardware you supplied just blows my mind!
    Thanks AMD you guys are seriously amazing!
    I want to be clear that there is no graphics architecture on the market today that is 100% compliant with everything DX12 or Vulkan have to offer. For example: we support Async Compute, NVIDIA does not. NVIDIA supports conservative raster, we do not. The most important thing you can do as a gamer is to own a piece of hardware that is compatible with the vast majority of the core specification, which you do. That’s where all the performance and image quality comes from, and you will be able to benefit.
    As for your PS4 questions: I work on the desktop PC side of our business, so I couldn’t really say anything useful about the PS4.
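    Editorial note: the partial-compliance point above is something an engine can probe at runtime rather than assume. The hedged, Windows-only sketch below queries a few of the optional DX12 tiers mentioned in this thread (conservative rasterization, tiled resources, resource binding, ROVs) via the public ID3D12Device::CheckFeatureSupport call on whatever adapter is present.

```cpp
// Windows-only sketch: query optional DX12 feature tiers on the default
// adapter. Compile with the Windows SDK and link against d3d12.lib.
#include <d3d12.h>
#include <wrl/client.h>
#include <iostream>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::cout << "No DX12-capable device found\n";
        return 1;
    }

    // Optional-feature tiers reported by the driver for this GPU.
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &options, sizeof(options)))) {
        std::cout << "Conservative rasterization tier: "
                  << options.ConservativeRasterizationTier << "\n";
        std::cout << "Tiled resources tier:            "
                  << options.TiledResourcesTier << "\n";
        std::cout << "Resource binding tier:           "
                  << options.ResourceBindingTier << "\n";
        std::cout << "Rasterizer ordered views:        "
                  << (options.ROVsSupported ? "yes" : "no") << "\n";
    }
    return 0;
}
```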
    Is the 490 going to be on 14nm as well? My 280X is still going strong but I want to upgrade both monitor and GPU by the end of next summer. Speaking of which, what will happen to the 300 series? Is it going to be phased out like last year’s 200 series?
    Polaris is 14nm. Plain and simple. Any time any company flips over to a new lineup, the old products that got replaced are a “while supplies last” kind of deal.
    Hi, my questions are all about the current state of the drivers and possible future updates to them:
    1) Are there any plans to incorporate some of RadeonPro’s features in future Crimson drivers, like texture LOD controls, dynamic vsync, mipmap quality, etc.?
    2) Will there be more anti-aliasing modes or options incorporated in future drivers?
    3) Could Radeon Settings have custom fan profiles and core voltage controls implemented in future drivers?
    4) How satisfied are you with the current state of the Crimson drivers?
    5) Will there be more tessellation improvements on the driver side for current 300 series cards?
    6) Could SSAO or HBAO methods be implemented in future drivers for older games that do not possess any of these occlusion methods?
    7) And finally, will there be any frame latency improvements?
    Hope it isn’t much to ask, and thanks.
    1) We constantly monitor the community’s feature requests and evaluate whether or not the feature is worthy of bringing into the driver, or leaving in a 3rd-party tool (which uses AMD APIs and interfaces to expose these features). We take this feedback seriously, which is why, for example, we added CRU support directly in the driver in Crimson. Nothing is off the table.
    2) Forced AA modes often do not work in the driver because most games use deferred rendering engines that are basically incompatible with pipeline AA options.
    3) If there’s a “big enough” pool of requests for it, yes, it is possible. NOTE TO JOURNALISTS: I am explaining how we weight feature additions. Please do not misconstrue my explanation as a confirmation.
    4) I think it’s a big improvement over CCC. Huge. I love the UI. I am excited to see what gets added in the big feature driver for 2016.
    5) 8-16x tessellation factor is a practical value for detail vs. speed, and this is what our hardware and software is designed around. Higher tessellation factors produce triangles smaller than a pixel, and you’re turfing performance for no appreciable gain in visual fidelity. (A back-of-the-envelope sketch follows this answer block.)
    6) Shader injection is a risky business when you deploy a piece of software to tens of millions of people. Shader injection can easily lead to rendering errors, game crashes or BSODs. Few people think about the sheer scale of our userbase, or how a feature that might work for a small number of people tinkering with SweetFX might break down when exposed to millions of people.
    7) We profile games and, where able, adjust the pre-rendered frame limits to reduce frame and input latencies to their lowest possible values.
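    Editorial note: a back-of-the-envelope for answer 5, assuming a square patch that covers a fixed number of pixels on screen and is split edge-wise by the tessellation factor. The numbers are illustrative only, but they show why factors much beyond 8-16x push triangle edges toward sub-pixel sizes.

```cpp
// Back-of-the-envelope: approximate edge length (in pixels) of the triangles
// produced when a square patch covering 'patchPixels' on screen is uniformly
// tessellated with a given edge factor. Purely illustrative geometry.
#include <initializer_list>
#include <iostream>

int main() {
    const double patchPixels = 48.0;  // assumed on-screen edge length of one patch
    for (int factor : {4, 8, 16, 32, 64}) {
        double triangleEdgePixels = patchPixels / factor;
        std::cout << "tess factor " << factor << ": ~" << triangleEdgePixels
                  << " px per triangle edge\n";
    }
    // At factors of 8-16 the triangle edges are still a few pixels wide; at 64
    // the same patch is carved into sub-pixel triangles with no visible benefit.
}
```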
    Any plans for a new recording software like shadowplay inside the crimson software?
    The AMD Gaming Evolved client does this. It uses our VCE blocks inside the GPU for hardware-accelerated recording and streaming.
    What if someone is majoring in both Electrical & Computer Engineering and Media Communications? I’m just interested in seeing if there’s any position that would converge these two together.
    Probably a job just like mine. Don’t take my job, pls.
    I’ve got the XF270HU with a FS range of 40-144Hz, why wouldn’t that have LFC?
    XF270HU is the IPS variant of the XG270HU. LFC is supported.
    Do folks at AMD expect any significant lead over Nvidia, performance wise, as the new graphics APIs develop and take root?
    We’ve led in performance on every DX12 app/test so far.


    Is there anything new at all you can tell us about Polaris or is this AMA about your Guinea pigs and imagining you in your boxers playing Rocket League?
    Hey, I said they were gym shorts dude.
    AMD Gaming Evolved suggested installing Radeon Software Crimson Edition on Windows 10. Now I have two AMD applications installed -> confusing. Is there a plan to remove one?
    AMD Gaming Evolved is our optional game streaming, game recording, game optimization client. You can remove it any time you like. Radeon Software Crimson Edition is our graphics driver, I imagine you wouldn’t want to remove that.
    I’m sorry, but a lot of people don’t like that thing, and a lot of people are also having technical problems with it. I think it was a mistake buying it. You also killed RadeonPro with this move, which was the only tool with enough settings, and Radeon Settings still isn’t a good replacement for it – Radeon Settings isn’t even able to read the correct Windows language setting variable …
    25 million people actively use the Gaming Evolved application every day. Not just installs, but active users. I think a little perspective is warranted.
    As for RadeonPro, let me be clear: John Mautari decided to leave the project for a job at Raptr. That’s his prerogative, and it’s his life.
    [2016-03-03 – 1:58 PM Central Time ]
    What would you say would be the lowest overall boost we’ll see in FPS performance moving to Vulkan? I mean, presumably Vulkan performance will be better than current performance; what’s a very conservative, rough estimate for the kind of performance boost to expect? 5%? 10%?
    There’s been a lot of talk about AMD drivers hampering performance. Would you say that drivers aren’t quite doing AMD hardware justice? Presumably mature Vulkan drivers for AMD hardware won’t be hampered in the same way, if the driver issue is a real problem. What kind of increase in performance would you estimate we could see?
    Will Vulkan games and applications be more sensitive to CPU core count? Will 4-core and 8-core systems see better scaling with Vulkan than with current APIs? What rough numbers can you throw out there?
    What are you most excited about regarding AMD chips that are on the shelves now? Process shrinks haven’t been getting easier, but it seems like you’ve been able to rise to the challenge. The media seems to be focusing on your HBM advantage. Sadly I haven’t been able to follow things too closely. A quick Google points to some hits on NVIDIA being tied to Samsung’s 14nm process. Is there any good news you’d like to share, or any early predictions you can make? Presumably the Samsung process will be targeted at fairly low power and low voltages. Will that be a disadvantage for them going up against chips like the Furys and the R9 380s?
    I don’t think I’ve noticed a lot of press on your ARM chips. Wikipedia mentions they were released around 2H ’15? Do you have any highlights on them? I’m guessing they’re targeting VM/cloud computing? I believe the industry’s been dipping its toes in that pond. Are your chips roughly in line with the competition?
    IIRC AMD had a flash memory line. Was that the sort of commodity flash you’d find in an SSD? I’m asking because it struck me the other day: I actually own two sticks of AMD RAM, and putting that together with your flash production I wondered why there is AMD RAM, but no AMD SSDs. Did the flash production go over to GlobalFoundries? I guess they’re sort of a white label now? Their customers probably don’t want to compete with AMD SSDs?
    NVIDIA seems to have seen some success with their TressFX tessellation. That success wouldn’t have worked if NVIDIA hadn’t invested a lot in the hardware, but it also required a big investment in middleware, and on top of those two things, they needed games and game engines to use them. Can any lessons be learned from that?
    Do you mind saying, in rough terms, how similar the Windows and Linux driver code is? Is there any shared code?
    1) This is just me wildly ballparking based on other low-overhead APIs. And it could be total BS when all is said and done: but I think 7-15% in GPU-bound scenarios, and up to 25% in scenarios where the game is binding the CPU.
    2) One of the points of low-overhead APIs is to move the driver and the run time out of the way, and minimize their impact to the overall rendering latency from start to finish. Vulkan and DX12 both behave like this. And I want to be clear that they are designed like this because all graphics drivers need to get out of the way to achieve peak performance from an app.
    3) Generally APIs like Vulkan are designed to eliminate CPU binding on modest CPUs, and then expand performance on powerful CPUs. This has the effect of raising both the floor and the ceiling of CPUs vs. higher level APIs.
    4) I am always most excited about FreeSync. Any gamer who knows that “magic moment” where the game FPS matches the display refresh rate, and how smooth that can be, is crazy not to want that all the time. But that’s what FreeSync gives you: that perfectly liquid smoothness at damn near any frame rate.
    5) We are using the more advanced 14nm node. NVIDIA is 16nm.
    6) You ask about ARM, but that line of business is furthest from my role. I cannot answer these questions.
    7) We used to own a company called “Spansion”; Spansion was sold off. Now we contract with third-party DRAM and NAND ODMs to build products to our specifications.
    8) NVIDIA had success with TressFX because we designed the effect to run well on any GPU. It’s really that simple. They were successful because we let them be. We believe that’s how it should be done for gamers: improve performance for yourself, don’t cripple performance for the other guy. The lesson we learned is that actual customers see value in that approach.
    9) I am not a developer, and not privy to the codebases.
    2) Forced AA modes often do not work in the driver because most games use deferred rendering engines that are basically incompatible with pipeline AA options.
    Which leaves VSR as the only driver-enforced choice.
    Could you please explain why the range of VSR resolutions for GCN1 and GCN2 is more limited than for GCN3 GPUs (the formers basically can’t output 4K in 1080p/1200p monitors)?
    Could this be overcome in future driver updates?
    We have hardware that performs real-time frame resizing with 0% performance impact above and beyond the change in resolution selected by the user. This hardware became more advanced as we iterated GCN.
    It is possible to explore shader-based methods, but that can introduce specific performance penalties of their own.
    We will continue to look at adding additional resolution options to VSR.

    [2016-03-03 – 4:23 PM Central Time ]
    1) Is there a serious focus within AMD on exploiting machine learning applications through Radeon hardware? We have seen the compute capabilities of Radeon GPUs being (literally) years ahead of the competition, yet AMD does not seem too invested in supporting the (hugely) growing machine learning landscape.
    2) Are there any plans of AMD entering the autonomous driving market? Or at least putting out dev platforms similar to Jetson TX1 and Drive PX2?
    3) Will there be any further GPUOpen surprises this year in scientific computing and machine learning/AI?
    Thank you for your thorough answers elsewhere in the thread!


    1) Machine learning is more of a FirePro question. We just released some deep learning code on GPUOpen: http://gpuopen.com/compute-product/hccaffe/
    2) Not that I am aware of.
    3) Much more GPUOpen news coming at GDC.
    1 – Is there any place where we can see the VCE encoding capabilities for each GCN GPU so far?
    2 – In terms of encoding quality, Steam In-Home Streaming using AMD’s hardware acceleration isn’t great, especially when compared to Steam’s own software encoding. Are there any plans for improving the encoding quality on current AMD GPUs for IHS?


    1) 1080p on GCN1.0. Up to 4K60 on Hawaii, Tonga, Fiji.
    2) Encoding quality is governed by the encoder software, be it a GPU or CPU. If the quality isn’t up to expectation, then the implementation of the encoder needs work.
    Any plans to work with watercooling vendors like EK, XSPC, and Bitspower to make waterblocks for the Polaris cards?
    Or is that left up to the board partners?


    When you see waterblocks very close to the release of our GPUs, that’s because we worked with the block vendors and gave them PCB layouts.
    Is AMD ever going to fix the flickering lines in less demanding games when using freesync?
    https://community.amd.com/thread/194556


    This is related to this item in our known issues list: “Core clocks may not maintain sustained clock speeds resulting in choppy performance and or screen corruption”
    We’re working on it.
    Will the FuryX2 have HDMI 2.0 or will we have to get an adapter from Club3D or Accell?


    All Fury-based GPUs have HDMI 1.4.
    So this might be better directed at either the CPU side of AMD or OEMs, but I’ll ask anyway. Your lower-power APUs seem really well suited to an HTPC build, but Carrizo and Carrizo-L seem nowhere to be found when it comes to Mini-ITX boards or mini PCs (with the exception of the 65W desktop Carrizo parts, which lack a GPU). Any chance this will change? Is this the result of some conscious decision on AMD’s part or is it just a matter of OEM interest?


    It is a conscious decision for us to design a mobile-specific APU, and sell it only into mobile.

    I’ve been a user of multi-GPU cards for a long time. 5970, 6990, 7990… The problem is that we’re in such a minority that support on the developer side is lacking. There are often problems with flickering, and some features, such as SSAO, which requires the whole scene to be rendered first, I’ve been told will “never” work on multi-GPU. Often the “fix” recommended by developers for multi-GPU issues is to “just run it in fullscreen windowed mode”. Of course some users will do that and seem satisfied, but most of us who bothered to buy the multi-GPU card in the first place are infuriated by this answer, because we understand that they’re effectively telling us, in a roundabout way, to turn off one of the cores we paid good money for.
    The latest Unreal engine doesn’t even support multi-GPUs.
    The situation is looking dire. My concern is that multi-GPU is dying, and I’m not sure how to save it.
    My question is, essentially, for you to address this topic as best you can. I apologize for the waning of specificity in the question.


    It is my hope that DX12/Vulkan will emphasize mGPU by making mGPU solutions more flexible and obvious to the developer. We do what we can in the AMD Gaming Evolved program to push developers towards making engine decisions that benefit mGPU, too.

    I’ve already asked a whole bunch of questions and had really good answers. One more has sprung to mind – I gather the interposer on Fiji cards is rather delicate and general advice is not to change coolers around as it’s easy to break it when cleaning thermal paste.
    If a hypothetical future card is released which uses an interposer could/would anything be done to reduce the risk of accidental damage and make it so installation of aftermarket coolers is easier and relatively risk-free? For example is a heatspreader an option, or would that have an unacceptable effect on thermal performance?


    Silicon interposers are indeed delicate, but even a heatspreader or shim would compromise thermals. Best just to be careful.
    //EDIT: And not change your coolers because that would violate your warranty and my attorneys would want me to remind you of that.


    Thank you for doing this AMA, lots of interesting answers so far!

    1. What plans are there for future improvements to Eyefinity?
    2. Has AMD considered making a “display output” GPU? As in a very weak GPU (like the R5 240) intended only to drive a lot of high-res displays showing flat images, video and other “non gaming” loads? A lot of those low-end GPUs currently have very few outputs and have VGA out but not DP at all. I’d much rather buy a dedicated “display” card than having to buy two DP MST hubs. In short, I want an easy way to get more DP outputs that doesn’t cost an arm and a leg.


    1) I think we’re pretty happy with how Eyefinity has shaken out. The last “big” thing I wanted our team to tackle was PLP support, which there’s hardware for in some of our newer GPUs. Beyond that, I can’t see much else to add, but I’m open to suggestions.
    2) An interesting idea and I see your point, but the vast majority of requests for this type of card are for digital signage, which falls into the FirePro bucket. It’s also possible for AIBs to design these sorts of boards for Radeon, which is our preference. Example.
    Just wanted to say great job with the consistent GPU driver updates! Kudos to the team! Any plans to completely integrate all the GPU driver options into Crimson away from the original Catalyst GUI?


    Thanks. I’m pretty proud of the direction Radeon Software has gone. It is our intention to ultimately replace all of CCC with Radeon Software, but that will come in phases.


    Also, have you tried any VR games yet? And, how large of an impact do you think Polaris will have on VR? And… are you able to give any information on when the development of Polaris started? (A more specific answer would be nice, but a general time frame would suffice if at all possible)


    I’ve tried a few VR demos, but I only tend to encounter VR when I’m at tradeshows and busy beyond belief. I enjoy the experience a lot, but don’t think I’m personally ready to wear a headset at home.
    As for what Polaris will do to VR, you’ll have to wait and see!
    Fury X2, Whatever happened to it? Any word on release date?


    The product schedule for Fiji Gemini had initially been aligned with consumer HMD availability, which had been scheduled for Q415 back in June. Due to some delays in overall VR ecosystem readiness, HMDs are now expected to be available to consumers by early Q216. To ensure the optimal VR experience, we’re adjusting the Fiji Gemini launch schedule to better align with the market.
    Working samples of Fiji Gemini have shipped to a variety of B2B customers in Q415, and initial customer reaction has been very positive.

    Have you had any chance to use or play with Polaris personally? If so, how was it and are you excited for it? (The answer can be vague or specific)


    [2016-03-03 – 4:49 PM Central Time ]
    Yes, but only the demos that were also shown to media. I am really freakin’ excited about it. It’s been 4-5 years since the last big node jump, and I’m thrilled that we’ve taken advantage of that to update every functional IP block in the ASIC.
    I also wanted to let you know I buy AMD/ATi for ethical reasons, such as FreeSync and GPUOpen, and essentially all these various reasons. I love that Richard Huddy wasn’t afraid to lay out the facts in all their gory detail regarding the BS nVidia and Intel have been pulling over the years. People need to know who and what they’re supporting with their voting dollars, and what kind of ethical or unethical business practices they’re advocating or condemning, and factor that into their purchase criteria.
    As long as you keep being the Good Guys, you’ve got a life long customer. Thanks for not being evil.


    I personally and professionally believe very strongly in open standards and transparent source code. At the end of the day, as a gamer myself, I want to sit down and play a game that just bloody works. When the industry screws around with black boxes and other janky efforts, my games don’t run well, and I’m not some special snowflake that doesn’t get pissed off when my games don’t run well.
    I hope I’m not late, but will Hawaii GPUs on Linux get Vulkan support? I’ve heard that Hawaii has an experimental build in amdgpu atm.



    Anything that’s in amdgpu will be covered by the Vulkan user mode driver, as far as I am aware.
    [2016-03-03 – 5:02 PM Central Time ]

    1. Will you ever do another promotion with Cloud Imperium Games for Star Citizen? This got me back into buying AMD cards.
    2. What performance gains can we see with the Polaris line of GPUs over the current-gen Radeon cards?
    3. Are there any plans with future CPUs and APUs to do a tick/tock release cycle similar to how Intel does their release cycles?


    1) Game bundles come and go. I can’t promise we’ll do another Star Citizen bundle, but we are always looking at bundle opportunities like we recently started with Hitman.
    2) Currently we have projected a 2x performance per watt jump over existing hardware.
    3) I am not in the CPU business and do not know.

    Do you expect interposers to experience a Moore’s-law-like improvement trend?


    This is one of my favorite questions on the thread. In fact, interposers are a great way to advance Moore’s law. High-performance silicon interposers permit the integration of different process nodes, different process optimizations, different materials (optics vs. metals), or even very different IC types (logic vs. storage), all on a common fabric that transports data at the speed of a single integrated chip. As we (“the industry”) continue to collapse more and more performance and functionality into a common chip, like we did with Fiji and the GPU+RAM, the interposer is a great option to improve socket density.
    [2016-03-03 – 5:13 PM Central Time ]
    I’ve seen speculation of a possible far-future arrangement where the inherent difficulty of making big chips on super-small processes is combated by having one or two different designs of very small chip combined in great numbers on an interposer – could that actually become a reality? I vaguely remember reading a white paper that touched on the matter. Is there much you can say that isn’t sensitive?


    Yes, it is absolutely possible that one future for the chip design industry is breaking out very large chips into multiple small and discrete packages mounted to an interposer. We’re a long ways off from that as an industry, but it’s definitely an interesting way to approach the problem of complexity and the expenses of monolithic chips.
    My question was more about whether the interposers themselves would experience an exponential increase of a feature. I did read or watch something about stacking allowing more specialized processes for each part of a circuit.


    Ooooh, I see. Well, interposers right now are “dumb” in the sense that they’re basically just silicon motherboards.
    I totally get designing a mobile-specific APU given the realities of the current market. I don’t really understand only selling it into mobile though. With their low TDP, mobile oriented parts seem great for very small form factor and HTPC applications. Could you elaborate on the decision not to sell these outside of mobile applications or is that getting into sensitive material?


    Because we already have desktop BGA or socketed chips like Kabini, Temash, Beema, Mullins that would suit an HTPC just fine. And small form factor, too.
    Where would performance be if we got the 20nm node?
    Are there any features besides DP 1.3 that have been cut because of the node collapse?
    Anything on the horizon that changes the way games are played, like Eyefinity did, from a feature standpoint?


    I don’t know. 20nm node was never designed for large, high-performance chips like a GPU. It’s hard to model something that could never have been.
    HDMI2 got cut because of the node collapse.
    [2016-03-03 – 5:20 PM Central Time ]
    Okay, everybody. As it is 5:20 PM, here, it’s time for me to sign off and work on some other tasks that I needed to wrap up today.
    First, I really appreciate the opportunity Tizaki and the /r/amd mod team + community arranged to do an AMA. I’ve always wanted to do an official one, and I can finally check that off my bucket list. SUPER EXCITING FOR ME.
    Secondly, I know that I could not get to all of the Polaris architecture/SKU/pricing questions that people had. Right now those things are protected by strict NDAs, and I rather like keeping my job. Even so, you guys deserve answers to your good and hard-thought questions. Rest assured that I will be back with another official AMA to answer those questions as we get closer to the Polaris release mid-year. That’s the right thing to do!
    Gengar is best Pokemon. Shower daily. Be yourself. Brush your teeth. Play Rocket League. And remember to have fun. <3

    http://www.portugal-tech.pt/image.php?type=sigpic&userid=566&dateline=1384876765

  5. #35
    Tech Ubër-Dominus Avatar de Jorge-Vieira
    Registo
    Nov 2013
    Local
    City 17
    Posts
    30,121
    Likes (Dados)
    0
    Likes (Recebidos)
    2
    Avaliação
    1 (100%)
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    AMD XConnect External GPU technology launched

AMD has officially introduced its XConnect Technology to enable easy plug'n'play usage of external GPUs. As was briefly covered in the news earlier today alongside the Radeon Software Crimson Edition 16.3 (Hotfix) release, AMD XConnect Technology provides initial support for external GPU enclosures configured with modern Radeon GPUs connected via Thunderbolt 3. A new Radeon Software 16.2.2 graphics driver featuring XConnect Technology will be made available very shortly (this news release NDA expired at 2pm).

AMD XConnect Technology has been developed by AMD, Razer and Intel's Thunderbolt group. It eschews multi-cable proprietary setups in favour of a single reversible Thunderbolt 3 (USB-C) cable and connector. The eGFX spec requires that Thunderbolt 3 provide four PCI Express lanes, which gives the external solution up to 40Gbps of bandwidth (equivalent to a PCI Express 3.0 x4 slot), says AMD.
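For a rough sense of those numbers, here is a quick back-of-the-envelope sketch; the PCIe 3.0 line rate and 128b/130b encoding figures below are the commonly published ones rather than AMD's own, and protocol overhead is ignored:

Code:
# Back-of-the-envelope comparison of the Thunderbolt 3 link rate quoted by AMD
# with the payload bandwidth of the PCIe 3.0 x4 connection it tunnels.
# Figures are the usual published line rates, not AMD's numbers.

TB3_LINK_GBPS = 40.0              # total Thunderbolt 3 link rate

PCIE3_GT_PER_LANE = 8.0           # PCIe 3.0: 8 GT/s per lane
PCIE3_ENCODING = 128.0 / 130.0    # 128b/130b encoding efficiency
LANES = 4                         # eGFX tunnels a x4 link

pcie_payload_gbps = PCIE3_GT_PER_LANE * PCIE3_ENCODING * LANES
pcie_payload_gbs = pcie_payload_gbps / 8

print(f"Thunderbolt 3 link rate : {TB3_LINK_GBPS:.1f} Gbps")
print(f"PCIe 3.0 x4 payload     : {pcie_payload_gbps:.1f} Gbps (~{pcie_payload_gbs:.2f} GB/s)")
# The tunnelled x4 link tops out around 31.5 Gbps (~3.9 GB/s); the 40 Gbps
# figure describes the whole Thunderbolt pipe, which also carries DisplayPort
# and USB traffic alongside PCIe.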

    In the new AMD Radeon Software 16.2.2 graphics driver the following graphics cards are supported for eGPU deployment:

    • AMD Radeon R9 Fury
    • AMD Radeon R9 Nano
    • AMD Radeon R9 300 Series
    • AMD Radeon R9 290X
    • AMD Radeon R9 290
    • AMD Radeon R9 280
    • Mobile derivatives of these ASICs
    • PLANNED: Radeon dGPU products based on the Polaris architecture.

    Further system requirements are as follows:

    • Radeon Software 16.2.2 (or later)
    • 1x Thunderbolt 3 port
    • 40Gbps Thunderbolt 3 cable
    • Windows 10 build 10586 (or later)
    • System BIOS ACPI extensions for Thunderbolt eGFX (check with vendor)
    • Thunderbolt firmware (NVM) v.16 (or higher)
    • Pass Thunderbolt certification


    Discussing how these eGFX enclosures will function, AMD tells us that users with the above software/hardware will "not have to reboot their systems or give up hibernation capabilities" to utilise eGFX. Furthermore AMD XConnect "ensures that validated Thunderbolt 3 eGFX solutions can seamlessly switch between discrete and integrated graphics with no interruption to the host system". AMD's driver can identify which applications are running on external graphics at any given time and will have no problem handling a 'surprise removal', preventing inadvertent data loss or application hangs. Even so, it's best to 'safely remove external AMD Radeon graphics' via a system tray click.
At the time of writing AMD's XConnect Technology seems to be limited to the Razer Blade Stealth Ultrabook with Thunderbolt 3 and its accompanying external graphics chassis, the Razer Core. In the months ahead AMD "expects that validated Thunderbolt 3 eGFX solutions will ship in a diverse variety of form factors," including compact integrated devices and larger enclosures suitable for user-upgradeable discrete graphics. External chassis makers will have their products validated for Thunderbolt 3 eGFX to ensure compatibility.
    Noticia:
    http://hexus.net/tech/news/graphics/...logy-launched/


This strikes me as a very interesting idea from the Radeon Group/AMD with this product; now it remains to be seen whether it will be successful.
    http://www.portugal-tech.pt/image.php?type=sigpic&userid=566&dateline=1384876765

  6. #36
    Tech Ubër-Dominus Avatar de Jorge-Vieira
    Registo
    Nov 2013
    Local
    City 17
    Posts
    30,121
    Likes (Dados)
    0
    Likes (Recebidos)
    2
    Avaliação
    1 (100%)
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    AMD XConnect External GPU Technology For Thunderbolt 3

    For those of you that aren't up to speed with AMD's XConnect technology, this video gives you the rundown on how Radeon Software 16.2.2 drivers coupled with an external Thunderbolt 3 GPU enclosure can turn any compatible laptop into a gaming machine.



    Noticia:
    http://www.hardocp.com/news/2016/03/...3#.VuHhu-Zv4vc
    http://www.portugal-tech.pt/image.php?type=sigpic&userid=566&dateline=1384876765

  7. #37
    Tech Ubër-Dominus Avatar de Jorge-Vieira
    Registo
    Nov 2013
    Local
    City 17
    Posts
    30,121
    Likes (Dados)
    0
    Likes (Recebidos)
    2
    Avaliação
    1 (100%)
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    XConnect Puts AMD GPUs Into External Graphics Enclosures (Updated)

    AMD announced a Radeon Software update that includes support for a new technology called XConnect, which allows AMD graphics cards to work in Thunderbolt 3-powered external graphics enclosures, such as the Razer Core.
    We first saw the Razer Blade Stealth Ultrabook and its companion external graphics enclosure, the Razer Core, at CES. At the time, Razer was demoing the new product with an Nvidia GPU, but we were told that any GPU with driver support would work. AMD GPUs now have support for Thunderbolt 3 external graphics enclosures as well, albeit with a few caveats.
    AMD revealed that R9 300-series, R9 290X, R9 290, R9 285, R9 Fury and Nano GPUs are compatible with the new XConnect technology, leaving out the entirety of the company’s older and mid-tier GPU offerings (R9 280 and below, R7 300-series and below) and setting a strict performance entry level in order to enjoy the benefits of external graphics enclosures.
    However, this is acceptable considering that most consumers purchasing these docks are likely looking for some serious GPU horsepower anyway. Fury X is not specifically listed as compatible, due to its 120 mm radiator.
    AMD’s XConnect is available now as part of the Radeon Software Crimson Edition update 16.2.2 and higher.
    Updated 3/10/16 4:40 PM CT: A change was made to reflect the graphics card compatibility.
    Noticia:
    http://www.tomshardware.com/news/amd...ate,31385.html
    http://www.portugal-tech.pt/image.php?type=sigpic&userid=566&dateline=1384876765

  8. #38
    Tech Ubër-Dominus Avatar de Jorge-Vieira
    Registo
    Nov 2013
    Local
    City 17
    Posts
    30,121
    Likes (Dados)
    0
    Likes (Recebidos)
    2
    Avaliação
    1 (100%)
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    AMD Unveils Vega And Navi, 2017 And 2018 GPU Architectures – To Bring HBM2 & Nextgen Memory To Market

Hot off AMD’s Capsaicin GDC 2016 event, the company has just unveiled Vega and Navi, its next-generation post-Polaris Radeon graphics architectures, coming in 2017 and 2018 respectively. They feature HBM2 vertically stacked memory technology, next-gen post-HBM2 memory and very significant power efficiency jumps every year.

AMD’s Raja Koduri, head of the newly founded Radeon Technologies Group, just revealed the company’s future GPU architectures, all of which will be named after stars, star systems and galaxies. He explained that the team is heading in a new direction with a fresh look at all things graphics, and there was a need to reflect this change of winds.
    AMD Radeon Technologies Group Bids A Somber Farewell To Island Code Names And Begins Its “Journey Into Space”

One of the most exciting things about AMD’s next generation Polaris graphics architecture is the highly anticipated move to the revolutionary FinFET process technology, accompanied by a considerable engineering focus on pushing architectural efficiency.

AMD has consistently said that the Polaris graphics architecture will deliver a “historic” leap in performance per watt. This unprecedented improvement in performance per watt is where the naming convention stems from: stars are the most efficient photon generators in the universe, which is why folks at AMD found it only fitting to name their most power efficient graphics architecture to date “Polaris”, after the North Star.

    Excerpt from AMD’s Official Polaris Press Release :
    AMD’s Polaris architecture-based 14nm FinFET GPUs deliver a remarkable generational jump in power efficiency. Polaris-based GPUs are designed for fluid frame rates in graphics, gaming, VR and multimedia applications running on compelling small form-factor thin and light computer designs.
    “Our new Polaris architecture showcases significant advances in performance, power efficiency and features,” said Lisa Su, president and CEO, AMD. “2016 will be a very exciting year for Radeon fans driven by our Polaris architecture, Radeon Software Crimson Edition and a host of other innovations in the pipeline from our Radeon Technologies Group.”
Raja Koduri, vice president at AMD and chief architect of the Radeon Technologies Group, has previously explained that he holds a very ambitious goal of powering 90% of the world’s pixels. Efficiency is key to achieving that goal, and just as stars are the universe’s most efficient photon generators, Koduri wants AMD graphics technology to be the world’s most efficient pixel generator.





One of the more notable changes the Radeon Technologies Group is making is giving its next-gen graphics architectures unifying code names, something the company hasn’t done consistently in the past, not in any official capacity anyway. Raja explained earlier in the year that this change will make things easier for everyone involved, whether internally at AMD, for journalists or even for consumers. Raja confirmed that all of AMD’s future graphics architectures will be given astronomical code names, Polaris being the first to begin Radeon Technologies Group’s re-invigorated journey into space.

    Raja Koduri, Senior Vice President and Chief Architect, Radeon Technologies Group, speaking with Venturebeat.com :
    We have some exciting hardware announcements as well. This is designed for FinFET. Our guiding principle for the Polaris architecture was power efficiency. We have the new naming scheme for our architectures. It’ll be based on galaxies, star systems, and stars. You’ll see more of this coming in the future. Polaris is the beginning of our journey through space.
    VB: Does the Polaris brand supplant the Radeon brand?

    Koduri: It’s an architecture codename. It’ll still be Radeon something something on the box. But we didn’t have a consistent architecture name like our competitors do. It was hard, because for people, including yourselves and some of the press and enthusiasts—This family of chips has this architecture and a similar class of features. You can group them easily together.

    [Exclusive] AMD’s Greenland Has Been Known As “Vega 10” Internally For Some Time

Two months ago we published an exclusive story explaining what Vega 10 was and what had happened to AMD’s flagship Greenland GPU. Today at Capsaicin – GDC 2016 – AMD confirmed exactly what we had conveyed to you in that report.
Vega is the brightest star in the Lyra constellation and the second brightest star in the northern celestial hemisphere. Vega, along with Arcturus and Sirius, is one of the most luminous stars in the Sun’s neighborhood. Vega has been extensively studied by astronomers and as such has been described as “arguably the next most important star in the sky after the Sun”. Furthermore, it was the northern pole star around 12,000 BCE.
    Vega was the first star other than the Sun to be photographed and the first to have its spectrum recorded. But perhaps the most significant fact of all about Vega is that it served as the baseline for calibrating the photometric brightness scale, and was one of the stars used to define the mean values for the UBV photometric system.
This preface is important in underlining why the Vega code name bears a special meaning for AMD and why AMD has chosen to dedicate it to its most powerful 14nm GPU to date. Greenland / Vega 10 will be the first GPU from the company to feature HBM2 – High Bandwidth Memory 2 – technology. All of this is part of AMD’s strategy to deploy this extremely power efficient and innovative memory standard in its high-end graphics products first and eventually in mainstream GPUs and even APUs.
    AMD To Introduce Vega With HBM2 in 2017

The Fiji GPU powering the R9 Fury series is AMD’s largest ever graphics processing unit. It’s also the world’s first, and so far only, GPU to feature 3D-structured, 2.5D-stacked High Bandwidth Memory. This standard was co-invented by AMD and SK Hynix, one of the world’s largest memory makers. The advantages of vertically stacking dies are many, one of which is the enablement of significantly greater densities versus GDDR5. And because HBM cubes occupy a lot less space than GDDR5, engineers can achieve tremendous area savings on the printed circuit board of any HBM graphics card. This allowed for the creation of the world’s fastest small-form-factor six inch graphics card, the 8 teraflops R9 Nano.
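As a quick sanity check on that 8-teraflop figure, the sketch below multiplies Fiji's published shader count by the Nano's peak clock; the numbers are the public specs, nothing new from AMD:

Code:
# Rough check of the "8 teraflops" figure quoted for the R9 Nano.
# Peak FP32 rate = shader count x 2 FLOPs per clock (fused multiply-add) x clock.

shaders = 4096          # GCN stream processors on the Fiji die
flops_per_clock = 2     # one FMA counts as two floating-point operations
clock_ghz = 1.0         # R9 Nano's peak ("up to") engine clock

peak_tflops = shaders * flops_per_clock * clock_ghz / 1000
print(f"Peak FP32 throughput: {peak_tflops:.2f} TFLOPS")   # ~8.19 TFLOPS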

There are also other innovations that HBM leverages, one of which is packaging. Because it sits next to the GPU on an interposer, a very short distance away, it allows engineers to create immensely wider memory buses with reduced latency and very low power requirements. The end result is a memory standard that is power efficient, area efficient, scalable and very fast.
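To make the wide-but-slow trade-off concrete, here is a small worked example comparing Fiji's published HBM1 configuration with the R9 290X's GDDR5 setup; the figures are the public specs, used purely for illustration:

Code:
# Why a wide-but-slow HBM bus wins on bandwidth:
# peak bytes/s = bus width x per-pin data rate / 8.

def peak_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and pin speed."""
    return bus_bits * gbps_per_pin / 8

fiji_hbm1 = peak_gbs(bus_bits=4 * 1024, gbps_per_pin=1.0)   # 4 stacks x 1024-bit @ 1 Gbps
r9_290x   = peak_gbs(bus_bits=512,      gbps_per_pin=5.0)   # 512-bit GDDR5 @ 5 Gbps

print(f"Fiji, 4x HBM1 stacks : {fiji_hbm1:.0f} GB/s")   # 512 GB/s
print(f"R9 290X, GDDR5       : {r9_290x:.0f} GB/s")     # 320 GB/s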
    Earlier this year AMD confirmed to us that while it is still committed to the High Bandwidth Memory technology it co-invented with SK Hynix and brought to the market last year, Polaris was designed to be compatible with both HBM/HBM2 and GDDR5 memory standards. Technical marketing lead at AMD Robert Hallock explained that the company has the flexibility to use either technology as needs arise. This means that either memory technology can be employed where it makes sense.
    AMD helped lead the development of HBM, was the first to bring HBM to market in GPUs, and plans to implement HBM/HBM2 in future graphics solutions.
At this time we have only publicly demonstrated a GDDR5 configuration of the Polaris architecture. It’s important to understand that HBM isn’t (currently) suitable for all GPU segments due to the current HBM cost structure. In the mainstream GPU segment, GDDR5 remains an extremely cost-effective, efficient and viable memory technology.
It’s now clear that AMD’s plan is to introduce HBM2 technology with its upcoming Vega graphics architecture, which will be launching next year. HBM2 is the second generation of the vertically stacked High Bandwidth Memory standard that AMD introduced last year with its flagship Radeon R9 Fury X graphics card, the R9 Fury and the R9 Nano.

Second generation High Bandwidth Memory is not only faster than the first generation, it also scales to capacities eight times larger than HBM1. HBM2 features 1GB per die and up to 8-Hi stacks for 8GB per cube, which is eight times more than the highest capacity currently available per cube on HBM1. Additionally, HBM2 operates at twice the speed of HBM1 for double the memory bandwidth.
This means that AMD can equip its flagship Vega 10 GPU with up to 32GB of super high speed HBM2 memory for a total of one terabyte per second of memory bandwidth. However, we’ll have to wait until 2017 for the 14nm process technology and HBM2 to mature enough for such a monstrous chip. Polaris graphics cards, on the other hand, are coming out this summer, before the back-to-school season, on both desktops and gaming notebooks.
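Those headline figures fall straight out of the per-stack HBM2 numbers quoted above; the sketch below assumes a four-stack configuration like Fiji's, which is our assumption rather than anything AMD has confirmed for Vega 10:

Code:
# How 32 GB and ~1 TB/s fall out of the HBM spec:
# per-stack capacity = dies per stack x capacity per die,
# per-stack bandwidth = 1024-bit bus x per-pin rate / 8.

def stack_capacity_gb(dies: int, gb_per_die: float) -> float:
    return dies * gb_per_die

def stack_bandwidth_gbs(gbps_per_pin: float, bus_bits: int = 1024) -> float:
    return bus_bits * gbps_per_pin / 8

stacks = 4  # assumed Fiji-like four-stack layout

hbm1_cap = stacks * stack_capacity_gb(dies=4, gb_per_die=0.25)   # 4-Hi, 2 Gb dies
hbm1_bw  = stacks * stack_bandwidth_gbs(gbps_per_pin=1.0)

hbm2_cap = stacks * stack_capacity_gb(dies=8, gb_per_die=1.0)    # 8-Hi, 1 GB dies
hbm2_bw  = stacks * stack_bandwidth_gbs(gbps_per_pin=2.0)

print(f"HBM1, 4 stacks: {hbm1_cap:.0f} GB, {hbm1_bw:.0f} GB/s")   # 4 GB, 512 GB/s
print(f"HBM2, 4 stacks: {hbm2_cap:.0f} GB, {hbm2_bw:.0f} GB/s")   # 32 GB, 1024 GB/s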
WCCFTech                  | Year | Process | Flagship GPU | Product   | Transistors In Billions | Memory         | Bandwidth
Southern Islands          | 2012 | 28nm    | Tahiti       | HD 7970   | 4.3                     | 3GB GDDR5      | 264GB/s
Volcanic Islands          | 2013 | 28nm    | Hawaii       | R9 290X   | 6.2                     | 4GB GDDR5      | 320GB/s
Pirate/Caribbean Islands  | 2015 | 28nm    | Fiji         | R9 Fury X | 8.9                     | 4GB HBM1       | 512GB/s
Arctic Islands/Polaris    | 2016 | 14nm    | Polaris 10   | TBA       | Up to 18                | GDDR5/HBM1     | 512GB/s
Vega                      | 2017 | 14nm    | Vega 10      | TBA       | TBA                     | HBM2           | 1TB/s
Navi                      | 2018 | TBA     | Navi 10?     | TBA       | TBA                     | Nextgen Memory | TBA


    http://www.portugal-tech.pt/image.php?type=sigpic&userid=566&dateline=1384876765

  9. #39
    Tech Ubër-Dominus Avatar de Jorge-Vieira
    Registo
    Nov 2013
    Local
    City 17
    Posts
    30,121
    Likes (Dados)
    0
    Likes (Recebidos)
    2
    Avaliação
    1 (100%)
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    AMD's 'Radeon VR Ready' GPU Certification Program Designed To Simplify Choice For VR Systems

    AMD revealed the "Radeon VR Ready" GPU certification program designed to make selecting a VR ready PC easier. AMD has classifications that it said will help consumers and creators easily make informed decisions.
    AMD said its “Radeon VR Ready” GPU certification program is an easy way for OEM PC manufacturers, boutique system builders and AMD add-in board partners to show that their products meet the necessary requirements for VR gaming and development.
The “Radeon VR Ready Premium” designation certifies that the product in question is capable of handling VR. AMD said that all systems with this seal will have an R9 290-class GPU or better and will generally be ready to deliver a good experience on the Oculus Rift and the HTC Vive. HP revealed that it is embracing the Radeon VR Ready program and is planning to sell VR-ready Envy gaming systems. (This answers the question we had a few weeks ago.)
    "HP is working with AMD to deliver the VR-ready HP ENVY Phoenix tower for a seamless out-of-the-box experience. With the Radeon VR Ready Premium program, it will help customers to take the guess work out of selecting the right VR capable graphics,” said Kevin Frost, vice president and general manager, consumer personal systems, HP Inc.
    AMD also revealed the “Radeon VR Ready Creator” designation, which is aimed at content creation professionals. AMD said the creator seal “signifies unprecedented performance and industry-leading innovation” for VR content creation. The company said that systems and GPUs with this seal include the AMD Liquid VR SDK.
    The first board with this certification is the newly-announced Radeon Pro Duo.
    Noticia:
    http://www.tomshardware.com/news/amd...ion,31409.html
    http://www.portugal-tech.pt/image.php?type=sigpic&userid=566&dateline=1384876765

  10. #40
    Tech Ubër-Dominus Avatar de Jorge-Vieira
    Registo
    Nov 2013
    Local
    City 17
    Posts
    30,121
    Likes (Dados)
    0
    Likes (Recebidos)
    2
    Avaliação
    1 (100%)
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    AMD's Raja Koduri talks moving past CrossFire, smaller GPU dies, HBM2 and more.

    After hosting the AMD Capsaicin event at GDC tonight, the SVP and Chief Architect of the Radeon Technologies Group Raja Koduri sat down with me to talk about the event and offered up some additional details on the Radeon Pro Duo, upcoming Polaris GPUs and more. The video below has the full interview but there are several highlights that stand out as noteworthy.






• Raja claimed that one of the reasons to launch the dual-Fiji card as the Radeon Pro Duo for developers, rather than as a pure Radeon aimed at gamers, was to “get past CrossFire.” He believes we are at an inflection point with APIs: where previously you would abstract two GPUs to appear as a single GPU to the game engine, with DX12 and Vulkan the problem is more complex than that, as we have seen in testing with early titles like Ashes of the Singularity. But with the dual-Fiji product mostly developed and prepared, AMD was able to find a market between the enthusiast and the creator to target, and thus the Radeon Pro branding was born.


      Raja further expands on it, telling me that in order to make multi-GPU useful and productive for the next generation of APIs, getting multi-GPU hardware solutions in the hands of developers is crucial. He admitted that CrossFire in the past has had performance scaling concerns and compatibility issues, and that getting multi-GPU correct from the ground floor here is crucial.

    • With changes in Moore’s Law and the realities of process technology and processor construction, multi-GPU is going to be more important for the entire product stack, not just the extreme enthusiast crowd. Why? Because realities are dictating that GPU vendors build smaller, more power efficient GPUs, and to scale performance overall, multi-GPU solutions need to be efficient and plentiful. The “economics of the smaller die” are much better for AMD (and we assume NVIDIA) and by 2017-2019, this is the reality and will be how graphics performance will scale. Getting the software ecosystem going now is going to be crucial to ease into that standard.

• The naming scheme of Polaris (10, 11…) follows no formula, it’s just “a sequence of numbers” and we should only expect it to increase going forward. The next Polaris chip will be bigger than 11, that’s the secret he gave us. There have been concerns that AMD was only going to go for the mainstream gaming market with Polaris, but Raja promised me and our readers that we “would be really really pleased.” We expect to see Polaris-based GPUs across the entire performance stack.

    • AMD’s primary goal here is to get many millions of gamers VR-ready, though getting the enthusiasts “that last millisecond” is still a goal and it will happen from Radeon.
• No solid date on Polaris parts at all – I tried! (Other than the launches start in June.) Though Raja did promise that, after tonight, he will not have another alcoholic beverage until the launch of Polaris. Serious commitment!
    • Curious about the HBM2 inclusion in Vega on the roadmap and what that means for Polaris? Though he didn’t say it outright, it appears that Polaris will be using HBM1, leaving me to wonder about the memory capacity limitations inherent in that. Has AMD found a way to get past the 4GB barrier? We are trying to figure that out for sure.

Why is Polaris going to use HBM1? Raja pointed to the extreme cost and expense of building the HBM ecosystem and prepping the pipeline for the new memory technology as the culprit, and AMD obviously wants to recoup some of that cost with another generation of GPU usage.

Speaking with Raja is always interesting, and the confidence and knowledge he showcases is what gives me assurance that the Radeon Technologies Group is headed in the right direction. This is going to be a very interesting year for graphics, PC gaming and GPU technologies, as showcased throughout the Capsaicin event, and I think everyone should be looking forward to it.
    Noticia:
    http://www.pcper.com/news/Graphics-C...-HBM2-and-more
    http://www.portugal-tech.pt/image.php?type=sigpic&userid=566&dateline=1384876765

  11. #41
    Tech Ubër-Dominus Avatar de Jorge-Vieira
    Registo
    Nov 2013
    Local
    City 17
    Posts
    30,121
    Likes (Dados)
    0
    Likes (Recebidos)
    2
    Avaliação
    1 (100%)
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    AMD Vega 10 Flagship HBM2 GPU Launching In 2017 – Greenland Reincarnated


AMD’s highly anticipated Vega 10 flagship 14nm FinFET GPU, previously known as Greenland, is launching in 2017 with up to 32 GB of HBM2. AMD unveiled its long-term GPU roadmap at Capsaicin yesterday for the first time, debuting its 2017 and 2018 architectures, Vega and Navi. This is the very first time the company has ever discussed its graphics roadmap beyond Polaris, its extremely power efficient 14nm graphics architecture launching this summer. Raja Koduri, AMD’s Chief Architect of the Radeon Technologies Group, revealed that the company is working on two GPU architectures to succeed Polaris.

The first is Vega, which will be arriving next year and is the first architecture from AMD to support second generation High Bandwidth Memory. In 2018 AMD plans to introduce yet another graphics architecture, code named “Navi”, with which Raja and the Radeon Technologies Group plan to utilize vertically stacked HBM memory throughout the graphics product lineup. This is what’s being referred to with “scalability”.
By 2018 we’re potentially looking at third generation HBM, which hasn’t been announced nor named yet. Whether it’s HBM3 or something slightly or completely different is too early to tell. However, what we know for sure is that whatever memory standard comes after HBM2 will still employ vertical stacking in some form or fashion.
    AMD Vega 10 / Greenland Flagship HBM2 GPU Launching In 2017

Graphics generation code names aside, perhaps the most important piece of information that we got out of this is as follows: AMD’s roadmap confirms that Vega will be the company’s first architecture to feature second generation HBM. In other words, it confirms that Polaris will in all likelihood make use of GDDR5/X memory instead.
If you’ve been paying close attention to the industry you will quickly point out that this makes sense in many ways. Firstly, there’s the cost advantage of using GDDR5/X over HBM. GDDR5/X chips are cheaper to make because they don’t have to go through a meticulous process of being stacked on top of each other. As a GPU maker you also don’t need an interposer with GDDR5/X like you do with HBM. Both of these factors give GDDR5/X a very real cost advantage. Secondly, there’s the question of HBM2’s readiness. HBM is still ramping and it will take some time for HBM2 to catch up.
Thirdly, there are Raja’s revelations last year that AMD has two 14nm GPUs coming out this year, not three, just two.
    Patrick Moorhead, Semiconductor Financial Analyst – Forbes
    Raja also talked about how Advanced Micro Devices’ RTG will need to execute on their architectural designs and create brand new GPUs, something that Advanced Micro Devices has struggled with lately. He promised two brand new GPUs in 2016
Those are Polaris 10 and Polaris 11, both of which AMD has actually shown to journalists, and we’re talking about the actual physical dies. Those who have seen them – we’ve only seen Polaris 11, but AMD has shown a Polaris 10 die to visitors of its suite at CES – reported that neither of the dies sported an interposer or HBM like Fiji.
    We later confirmed with AMD that indeed the Polaris 10 and Polaris 11 GPUs showcased were configured for GDDR5.
    AMD helped lead the development of HBM, was the first to bring HBM to market in GPUs, and plans to implement HBM/HBM2 in future graphics solutions.
At this time we have only publicly demonstrated a GDDR5 configuration of the Polaris architecture. It’s important to understand that HBM isn’t (currently) suitable for all GPU segments due to the current HBM cost structure. In the mainstream GPU segment, GDDR5 remains an extremely cost-effective, efficient and viable memory technology.
This doesn’t necessarily exclude the possibility that either of these chips could end up with HBM. After all, AMD might have been trying to be really coy and just showed off GDDR5-configured dies. But our money is on them not being coy, and on both of these chips ending up with GDDR5/X, for the simple fact that any GPU has to be designed from the very beginning with one memory standard in mind. As a designer you either choose to equip your GPU with an HBM memory controller or a GDDR5/X memory controller. Not both; that would be utterly wasteful of the extremely limited chip area you have.
Designing a GPU with both types of memory technologies is theoretically possible. But as the GPU will only end up with one memory standard or the other, either HBM or GDDR5/X, one of the two memory controllers will effectively end up as useless white space on the die. So it just doesn’t make any sense to design such a product.


With all of that said, this wasn’t actually unexpected. AMD had actually hinted to us back in December at Sonoma, when we were first introduced to Polaris, that there are indeed plans for HBM2 in the future. Yesterday those plans were made public. HBM2, just as first generation HBM did, will once again debut at the high end. Nvidia likely has very similar plans to AMD for 2016 and 2017: launch two Pascal GDDR5/X GPUs this year and introduce GP100 with HBM2 next year.

    AMD’s 2.5X Polaris Power Efficiency Jump Is More Impressive Than You Think

This is perhaps one of the less expected realizations to come out of yesterday’s Capsaicin event. A 2.5x power efficiency jump is impressive in its own right, but it’s even more so when you consider the fact that Polaris achieves this without HBM.





The power savings from HBM were the primary driver behind Fiji’s considerable power efficiency gains over Hawaii and the R9 290X. In fact, HBM played a pivotal role in allowing AMD to design a GPU that not only had 45% more GCN cores than the R9 290X – 4096 vs 2816 – but also burned less power on average. According to AMD, Polaris GPUs will deliver 2.5x the perf/watt of AMD’s 2015 28nm GPUs, which include the Radeon 300 series and the R9 Fury series.

It’s quite difficult to pinpoint exactly how far ahead each architecture is of the one before it on AMD’s roadmap. But Navi looks to be sitting at double Polaris’s power efficiency, making it 5x as power efficient as the current lineup of 28nm GPUs. Vega sits roughly in the middle between Polaris and Navi, slightly closer to Polaris, so we might be looking at a 3x improvement over 28nm GPUs, which puts it 20% ahead of Polaris in perf/watt. That’s about what you’d expect from HBM2.
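Spelled out as arithmetic, with the Vega and Navi multipliers being our own reading of the roadmap slide rather than official figures:

Code:
# Redoing the perf/watt arithmetic. All figures are relative to AMD's 2015
# 28nm GPUs; only the 2.5x Polaris figure is AMD's stated number, the Vega
# and Navi multipliers are eyeballed from the roadmap slide.

baseline_28nm = 1.0
polaris = 2.5 * baseline_28nm          # AMD's stated 2.5x perf/watt jump
vega    = 3.0 * baseline_28nm          # roadmap eyeball: mid-way, closer to Polaris
navi    = 5.0 * baseline_28nm          # roadmap eyeball: about double Polaris

print(f"Vega vs Polaris: {vega / polaris - 1:.0%} better perf/watt")   # ~20%
print(f"Navi vs Polaris: {navi / polaris - 1:.0%} better perf/watt")   # ~100%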
But does this mean that AMD’s fabled Vega 10 / Greenland flagship GPU will be three times more power efficient than the Fury X? I’d say it’s possible, but more likely we’re looking at a 3x efficiency jump over the R9 390X. In case you missed our exclusive report a couple of months back, Vega 10 is Greenland, the ~18 billion transistor, 32GB HBM2 behemoth that’s been the subject of many leaks over the past year.





    No HBM2 This Year? Who Cares!?

It’s true that we will have to wait until next year for HBM2 GPUs from either Nvidia or AMD. Nvidia’s GP104-powered GTX 1080 and GTX 1070 cards – not official names – are reportedly configured for GDDR5X. And there’s virtually no chance of either Nvidia or AMD risking production of large GPUs like GP100 or Vega 10 this early in the nodes’ lifecycles; neither 16nm nor 14nm will be mature enough for large GPU launches this year. And that’s all assuming HBM2 production has already sufficiently ramped up, which it hasn’t yet.
    Despite that, what we’re getting this year is still going to deliver a very significant jump in performance, performance per watt and performance per dollar. This is thanks to the much needed arrival of FinFET manufacturing technology as well as GDDR5X.
This update to the GDDR5 memory spec is simple enough: double the capacity and double the data rate of GDDR5 with minimal alterations to the protocol. That in turn means existing GDDR5 memory controllers will only need minor design updates to be compatible with GDDR5X, which saves Nvidia and AMD a lot of engineering effort and cost. Double the bandwidth also means GDDR5X will do more than an adequate job of keeping faster, bigger, next generation FinFET GPUs happy and well fed until HBM2 arrives in 2017.
    At double the speed of GDDR5, a GPU configured with a 256bit memory interface can have access to up to 448GB/s of bandwidth and 8GB of memory. That’s double the bandwidth available to the GTX 980 and twice the capacity. It’s also 33% more bandwidth than what’s available to the GTX Titan X. With that in perspective GDDR5X is clearly going to be more than enough for Nvidia’s GP104 GPU and AMD’s Polaris 10 GPU.
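The 448GB/s claim is easy to reconstruct from the standard bandwidth formula; the sketch below uses the published GTX 980 and Titan X memory specs as reference points:

Code:
# Bandwidth = bus width x per-pin data rate / 8. GDDR5X at launch was
# specified at roughly double GDDR5's effective rate.

def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    return bus_bits * gbps_per_pin / 8

gtx_980    = bandwidth_gbs(256, 7.0)     # 224 GB/s
titan_x    = bandwidth_gbs(384, 7.0)     # 336 GB/s
gddr5x_256 = bandwidth_gbs(256, 14.0)    # 448 GB/s

print(f"GTX 980 (GDDR5)     : {gtx_980:.0f} GB/s")
print(f"GTX Titan X (GDDR5) : {titan_x:.0f} GB/s")
print(f"256-bit GDDR5X      : {gddr5x_256:.0f} GB/s "
      f"({gddr5x_256 / titan_x - 1:.0%} more than Titan X)")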
    It’s not every day that we get GPUs with 2.5x better perf/watt, in fact not ever. This is the biggest jump from one generation to the next that we have ever seen. But it’s not all about efficiency, Raja is also promising the “most revolutionary jump in performance so far”.
    AMD announced that Polaris graphics cards are going to be released mid 2016, so this summer. AMD’s CEO Lisa Su then elaborated further, saying that Polaris graphics cards are to be expected on shelves and in notebooks before the back to school season this year. There’s also been a lot of buzz around Nvidia potentially demoing Pascal at GTC in April, with GTX 1080 and GTX 1070 – not official names – graphics cards launching in the summer. AMD has already demoed Polaris 11 at CES and Polaris 10 yesterday at GDC, so we can’t wait to see Pascal in action. Hopefully next month.



    Noticia:
    http://wccftech.com/amds-greenland-vega-10-flagship-gpu-hbm2-launching-2017/#ixzz42zODcAb0








    AMD reveals Navi GPU architecture



Comes after Vega and features next-gen memory
    During its Capsaicin event at GDC 2016 in San Francisco, AMD has unveiled its new roadmap showing a bit more details about upcoming GPU architectures, including Polaris, Vega and the new Navi architecture.

While details are quite scarce in the new "projected roadmap" from AMD, it did raise a couple of questions as well as reveal a bit more about the future, including the new Navi GPU architecture that will be coming in 2018, focusing on scalability and featuring new "next-gen memory".
    The Vega GPU architecture, which should succeed the upcoming 14nm Polaris GPU architecture, will be coming in 2017 and use 2nd-generation High Bandwidth Memory (HBM2).
Most interesting is the fact that the Polaris entry does not say anything about memory, but rather only that we will see a 2.5x performance per watt improvement. This suggests that Polaris will most likely stick to 1st-generation High Bandwidth Memory (HBM) for the high-end and probably GDDR5(X) for the mid-range lineup.
Hopefully, AMD will unveil a few more details about Polaris during the GDC 2016 show as, so far, the company has only shown a short demo.
    Noticia:
    http://www.fudzilla.com/news/graphic...u-architecture


    Última edição de Jorge-Vieira : 15-03-16 às 16:25
    http://www.portugal-tech.pt/image.php?type=sigpic&userid=566&dateline=1384876765

  12. #42
    Tech Ubër-Dominus Avatar de Jorge-Vieira
    Registo
    Nov 2013
    Local
    City 17
    Posts
    30,121
    Likes (Dados)
    0
    Likes (Recebidos)
    2
    Avaliação
    1 (100%)
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    AMD's stonking VR market boost depends on PlayStation VR



    Not that AMD is telling anyone

AMD told the Game Developers Conference press event that it had an 83 per cent market share in VR. The claim appeared to be legit as it quoted Jon Peddie Research, however we had some severe doubts.

    Radeon Technology Group announced that "According to Jon Peddie Research as of March 11, 2016, AMD is estimated as powering 83 per cent of the total addressable market for dedicated VR HMDs."
JPR (Jon Peddie Research) is a bullet-proof and quite reliable source of market share data, and the figure surprised us. So we started to ask around to find out where this 83 percent number came from. It turns out that the phrase "addressable market" gave the game away.
In 2015 Nvidia and AMD shipped close to 70 million high end add-in-board discrete graphics cards. This gave us a clue of how many PC systems are VR ready. These are the numbers from JPR. You need a high end GPU to run virtual reality, as Oculus and Vive recommend Radeon cards from the 280, 290, 300 or Fury generations, or a GeForce GTX 970 or faster if you want Nvidia.



According to the same report, in Q4 2015 AMD had 21.1 percent of the high end market while Nvidia had 78.8 percent. Compared to Q3 2015, AMD gained 2.3 percent market share. This is what raised our suspicion: how could a company that has 21.1 percent of the total high end get to 83 percent of the total addressable market for dedicated VR head mounted displays?


The answer is rather simple; AMD includes PlayStation VR in this "total addressable market" for dedicated VR head mounted displays. This was confirmed by several graphics technology insiders. The GPU inside the PlayStation 4 is a semi-custom AMD GCN Radeon, a customized version of AMD's 7870 GPU with two compute units disabled. This is a Pitcairn-class GCN 1.0 28nm core that comes with a maximum of 2.8 billion transistors in its discrete variation. Since the PlayStation GPU is integrated into an APU alongside a semi-custom 8-core AMD x86-64 Jaguar 1.6 GHz CPU, it has to be significantly slower than a modern high end GPU.
To cut a long story short, the GPU in the PlayStation 4 that powers PlayStation VR is seriously underpowered compared to a high end PC graphics card. AMD's Greenland should end up with 18 billion transistors and Pascal with 17 billion, making them a few times faster than the PlayStation VR's GPU.
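As a purely hypothetical illustration of how folding a console install base into the "addressable market" moves the needle, here is a sketch; the install-base numbers below are made-up placeholders, not JPR's or AMD's:

Code:
# Illustrative arithmetic only: the unit counts are invented placeholders.
# It shows how counting console hardware can push a GPU vendor's share of the
# "addressable market" far above its PC-only share.

pc_vr_ready_total = 10_000_000     # hypothetical VR-ready PCs
amd_pc_share = 0.21                # roughly AMD's Q4 2015 high-end share

console_vr_capable = 35_000_000    # hypothetical PS4 install base (AMD GPU inside)

amd_units = amd_pc_share * pc_vr_ready_total + console_vr_capable
total_units = pc_vr_ready_total + console_vr_capable

print(f"AMD share of PCs only          : {amd_pc_share:.0%}")
print(f"AMD share including the console: {amd_units / total_units:.0%}")   # ~82% with these inputs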


We would not be surprised if Qualcomm stepped in and claimed VR leadership. Qualcomm is the only company that has shipped serious quantities of its SoCs in actually available systems.
The Samsung Gear VR is powered by Adreno graphics in Snapdragon SoCs. These have been shipped in the hundreds of thousands, if not a million-plus pieces. We could not gather any realistic market share numbers at press time.
So the reality is that Qualcomm is the market leader at this time, and that customers buying the HTC Vive and Oculus have a four-to-one chance of having an Nvidia card, at least according to Q4 2015 high end market share numbers. AMD will probably increase its GPU market share this year, but it will be hard to catch up with Nvidia.

The $1499 dual-Fiji Radeon Pro Duo announcement, combined with a roadmap reaching into early 2018 and the big virtual reality push at the press conference, has resulted in a 7.94 percent jump in stock value. The jump is probably realistic, as the company was undervalued to begin with and things should start significantly improving in GPU and CPU market share during the second half of 2016.
    Noticia:
    http://www.fudzilla.com/news/graphic...playstation-vr
    http://www.portugal-tech.pt/image.php?type=sigpic&userid=566&dateline=1384876765

  13. #43
    Tech Ubër-Dominus Avatar de Jorge-Vieira
    Registo
    Nov 2013
    Local
    City 17
    Posts
    30,121
    Likes (Dados)
    0
    Likes (Recebidos)
    2
    Avaliação
    1 (100%)
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    There's a possibility that AMD will get into mobile graphics again


    AMD was once a very prolific force in the mobile, handheld GPU world. And it seems that Raja Koduri, the head honcho in charge of the new Radeon Technologies Group, isn't necessarily ruling out the idea of returning to that field either.


AMD is already well positioned to create custom chips with the deals they've brokered with Sony and Microsoft, not to mention Nintendo. And Raja himself is open to the idea of licensing their IP to be used in mobile products. They don't, however, want to actively build their own mobile devices based on their technology, but if someone else approached them with the idea to integrate either an APU outright or Polaris into their own design, it wouldn't at all be out of the question.

The idea is a natural one, given the potential power savings that the new Polaris architecture could introduce on all fronts. Already Polaris 11 (the smallest chip thus far) can play Star Wars Battlefront at 1080p with a steady 60FPS while only consuming around 35 watts for the GPU itself. So it isn't a stretch to have the new architecture appear in lower-power APUs for, say, tablets, micro-consoles or even phones.


They've been down this route before, with the Imageon line of mobile SoCs. From 2003 to 2006 they shipped an estimated 250 million units, which is a large number for that time period. But as a result of restructuring after the acquisition of ATI by AMD, that GPU IP was sold to Qualcomm, which has continued to evolve it as the Adreno line ever since.

The mobile landscape was once a bit more bleak than it is today, but that's changing considerably with the Internet of Things and the massive adoption of mobile devices in our lives. NVIDIA also hasn't really been able to break into the burgeoning mobile phone market, as their own SoCs tend to be a bit more power-hungry and not quite what customers are looking for in phones; tablets and other devices, however, have been a perfect match. Thus a licensing deal could very well be a fantastic idea if there are any takers. AMD already has design wins for casino gaming machines and other embedded applications, but would it really be possible for them to enter the mobile phone market again? It's a distinct possibility.



    Noticia:
    http://www.tweaktown.com/news/51088/...ain/index.html


Given the growth of this area, if AMD enters again with a good GPU, even facing strong competition, I think AMD could see some return on this investment.









    AMD behind yet another VR headset



    Unannounced headset to bring 4K per-eye display
During the announcement of the Radeon Pro Duo at the Capsaicin event at GDC 2016, Roy Taylor from AMD actually teased yet another VR headset, one which will offer a 4K-per-eye screen.

While we certainly missed it, Mark Walton from Ars Technica UK did not: according to Roy Taylor, AMD Corporate Vice President of Alliances, the company is working with a headset manufacturer on a yet-to-be-announced VR headset with 4K-per-eye resolution.
The headset apparently exists and it's "quite, quite beautiful", according to Roy Taylor. He also added that the roadmap to higher resolutions will progress more quickly than is probably expected.
During the Capsaicin event at GDC 2016, AMD unveiled the Sulon Q headset, a tether-free VR/AR headset based on AMD's quad-core FX-8800P APU with Radeon R7 graphics. The Sulon Q is based on a single 2560x1440 OLED screen, which works out to 1280x1440 per eye, so obviously that is not the headset Roy has been talking about.
Currently, the Oculus Rift leads the pack with a combined 2160x1200 across two OLED screens (1080x1200 per eye), while HTC's Vive likewise uses two 1080x1200 screens.
Unfortunately, both the Oculus Rift and HTC Vive already have pretty high system requirements, so pushing a 3,840x2,160 resolution per eye will require very powerful hardware, and we are not sure that even the Radeon Pro Duo would be enough.
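A rough pixel-throughput comparison shows why; the sketch below assumes the 90Hz refresh rate targeted by current headsets and the published panel resolutions:

Code:
# Pixels per second = per-eye resolution x 2 eyes x refresh rate.

def pixels_per_second(width: int, height: int, hz: int, eyes: int = 2) -> float:
    return width * height * eyes * hz

current_gen = pixels_per_second(1080, 1200, 90)    # Rift / Vive class
four_k_eye  = pixels_per_second(3840, 2160, 90)    # the teased 4K-per-eye headset

print(f"Rift/Vive class : {current_gen / 1e6:.0f} Mpixels/s")
print(f"4K per eye      : {four_k_eye / 1e6:.0f} Mpixels/s "
      f"({four_k_eye / current_gen:.1f}x the pixel rate, before supersampling)")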
    Hopefully we will hear more soon as it certainly piqued our interest.
    Noticia:
    http://www.fudzilla.com/news/graphic...her-vr-headset
    Última edição de Jorge-Vieira : 16-03-16 às 16:15
    http://www.portugal-tech.pt/image.php?type=sigpic&userid=566&dateline=1384876765

  14. #44
    Tech Ubër-Dominus Avatar de Jorge-Vieira
    Registo
    Nov 2013
    Local
    City 17
    Posts
    30,121
    Likes (Dados)
    0
    Likes (Recebidos)
    2
    Avaliação
    1 (100%)
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    AMD’s Raja Koduri Talks Future Developments – Capsaicin


    Even though a lot of information was shared from the Capsaicin live stream, some details weren’t made known till the after party. In an interview, Radeon Technologies Group head Raja Koduri spoke in more detail about the plans AMD has for the future and the direction they see gaming and hardware heading towards.
    First up of course, was the topic of the Radeon Pro Duo, AMD’s latest flagship device. Despite the hefty $1499 price tag, AMD considers the card a good value, something like a FirePro Lite, with enough power to both game and develop on it, a card for creators who game and gamers who create. If AMD does tune the drivers more to enhance the professional software support, the Pro Duo will be well worth the cash considering how much real FirePro cards cost.

Koduri also sees the future of gaming in dual-GPU cards. With CrossFire and SLI, dual GPU cards were abstracted away as one at the driver level. Because of this, performance varies widely from game to game and support requires more work on the driver side. With DX12 and Vulkan, the developer can now choose to implement multi-GPU support themselves and build it into the game for much greater performance. While the transition won't fully take place until 2017-2019, AMD wants developers to start getting used to the idea and getting ready.

This holds true for VR as well, as each GPU can render for one eye independently, achieving a near-2x performance benefit. The benefits, though, are highly dependent on the game engine and how well it works with LiquidVR; Koduri notes that some engines need as little as a few hours of work while others may take months. Roy Taylor, VP at AMD, was also excited about the prospect of the upcoming APIs and AMD's forward-looking hardware finally getting more use and boosting performance. In some ways, the use of multi-GPU is similar to multi-core processors and the use of simultaneous multi-threading (SMT) to maximize performance.

Finally, we come to Polaris 10 and 11. AMD's naming scheme is expected to change, with the numbers being chronologically based, so the next Polaris will be bigger than 11 but not necessarily a higher performance chip. AMD is planning to use Polaris 10 and 11 to hit as many price/performance and performance/watt levels as possible, so we can expect multiple cards to be based on each chip, probably three. This should help AMD harvest imperfect dies and help their bottom line. Last of all, Polaris may not feature HBM2, as AMD is planning to hold back until the economics make sense. That about wraps it up for Capsaicin!
    Noticia:
    http://www.eteknix.com/amds-raja-kod...nts-capsaicin/
    http://www.portugal-tech.pt/image.php?type=sigpic&userid=566&dateline=1384876765

  15. #45
    Tech Ubër-Dominus Avatar de Jorge-Vieira
    Registo
    Nov 2013
    Local
    City 17
    Posts
    30,121
    Likes (Dados)
    0
    Likes (Recebidos)
    2
    Avaliação
    1 (100%)
    Mentioned
    0 Post(s)
    Tagged
    0 Thread(s)
    DICE, Frostbite Engine team get behind GPUOpen

    DICE and the Frostbite Engine team are putting their weight behind AMD's GPUOpen initiative, as you can see from the tweet and video below.






    In the video, DICE rendering engineer Arne Schober touts the level of hardware access GPUOpen offers programmers like himself, while also praising the friendly, powerful ecosystem it encourages.


    "It is really nice to have physical access to the source code and be able to handcraft the tools to our own needs," he says, later adding, "By opening up the source, we can help each other across the entire industry, solving problems together instead of individually. This would improve our effectiveness and lower development costs at the same time by sharing our knowledge."



    The Frostbite Engine is by far one of the most respected and advanced in the industry, and the team behind it is perhaps the largest in gaming, so to have DICE put their weight behind GPUOpen means great things are to come for developers and gamers everywhere.



    Noticia:
    http://www.tweaktown.com/news/51098/...pen/index.html
    http://www.portugal-tech.pt/image.php?type=sigpic&userid=566&dateline=1384876765

 

 