PDA

View Full Version : nVidia, general topics



Jorge-Vieira
12-02-15, 07:10
Nvidia rakes in record quarterly, yearly revenue




Nvidia's latest financial results are in, and they're full of good news. The company
set revenue records (http://nvidianews.nvidia.com/news/nvidia-announces-financial-results-for-fourth-quarter-and-fiscal-2015) for not only the three months ending January 25, but also the past year as a whole.

Time for some tables! Let's start with the quarterly numbers.

              Q4'15          Q4'14          Change
Revenue       $1.25 billion  $1.14 billion  up 9%
Gross margin  55.9%          54.1%          up 1.8 pts
Net income    $193 million   $147 million   up 31%
Fourth-quarter revenue hit $1.25 billion, up 9% from the same period last year. Net income rose an even more impressive 31%, while gross margin was up slightly. And the full-year figures are even brighter:

              FY'15          FY'14          Change
Revenue       $4.68 billion  $4.13 billion  up 13%
Gross margin  55.5%          54.9%          up 0.6 pts
Net income    $631 million   $440 million   up 43%
Nvidia raked in $4.68 billion over the past 12 months, a 13% increase over the previous year. Gross margin went up by less than a point, but profits surged 43% to $631 million. The green team is certainly living up to its trademark color.
Here's a revenue breakdown along crude divisional lines. The GPU segment accounts for all of the graphics processors: GeForce, Quadro, Tesla, and Grid. Tegra covers the SoC and related products, while the other category "includes licensing revenue from [Nvidia's] patent cross-license agreement with Intel." The quarterly totals come first, followed by the yearly ones.

        Q4'15          Q4'14          Change
GPU     $1.07 billion  $947 million   up 13%
Tegra   $112 million   $131 million   down 15%
Other   $66 million    $66 million    --

        FY'15          FY'14          Change
GPU     $3.84 billion  $3.47 billion  up 11%
Tegra   $579 million   $398 million   up 45%
Other   $264 million   $264 million   --
GPUs continue to make up the vast majority of Nvidia's business. Revenue for that overarching category rose by double digits both yearly and quarterly. Although we don't have more granular data for that division, Nvidia said Q4 revenue from desktop and notebook GPUs went up 38% versus the same period last year. High-end Maxwell cards probably deserve a lot of the credit for that increase.
The Tegra numbers are more mixed. Although yearly revenue increased 45%, the total for Q4 fell 15%. Nvidia blames the drop on the "product life cycle of several smartphone and tablet designs." Revenue from "auto infotainment systems" doubled, according to the CFO commentary (http://files.shareholder.com/downloads/AMDA-1XAJD4/3965746882x0x808803/42729242-C72A-4212-A4B8-97528DAD4B9E/Q4_15_CFO_Commentary.pdf) (PDF), but there are no specifics on sales of Shield devices.
Looking forward, Nvidia expects Q1 revenue of $1.16 billion with gross margin in the 56.2-56.5% range.


News:
http://techreport.com/news/27807/nvidia-rakes-in-record-quarterly-yearly-revenue


And yet another record; it all keeps adding up for the graphics card giant.

Jorge-Vieira
12-02-15, 13:25
Nvidia schedules Made to Game event for March 3

http://www.fudzilla.com/media/k2/items/cache/e70e807c99b41be262ccfeb7a9221caa_L.jpg (http://www.fudzilla.com/media/k2/items/cache/e70e807c99b41be262ccfeb7a9221caa_XL.jpg)

In San Francisco, at GDC 2015

Nvidia is planning a new product announcement on the second day of MWC 2015 and has sent invitations to press outlets for an event scheduled for March 3, 2015.

The only catch is that the Nvidia event is actually on the other side of the globe, in San Francisco, and is part of another event, the Game Developers Conference 2015, not MWC 2015.
Nvidia teases that the Made to Game event is about something that was "5 years in the making" and that it will redefine the future of gaming. We have heard bold announcements like this before, but since we don't know what this might be, we cannot offer any insight. We also heard that G-Sync would kill AMD, and that didn't happen either, not even close.
We do know that Nvidia is actively working on virtual reality technology, and we would not be surprised to see some form of Tegra-powered product on the stage (since the Tegra fits the "5 years in the making" statement).
The fact that Nvidia wants Android-focused press outlets to attend the event suggests we are looking at an Android device, or a Shield device to be more precise. It's not the new Titan based on a GM200-series GPU; Titan II was not five years in the making, so it has to be something different. [Tegra X1 Shield Tablet, Shield Portable, or maybe Shield VR, take your pick and head down to the comment section. Ed]

The 5 years in the making part (http://www.androidauthority.com/nvidia-press-event-march-3rd-586673/) makes us wonder.



News:
http://www.fudzilla.com/news/graphics/36996-nvidia-has-made-to-game-event-on-march-3

Jorge-Vieira
10-03-15, 16:35
Nvidia Spending over a Third of Revenue in Semiconductor R&D for Future Graphics Technologies

The Russian publication overclockers.ru (http://www.overclockers.ru/hardnews/67477/nvidia-tratit-na-razrabotki-i-issledovaniya-tret-vyruchki.html) published a report on the research and development spending of semiconductor companies all over the world. Wafer fabrication is all about big bucks, and R&D budgets are a leading indicator of that fact, so this piece offers a basic rundown.

http://cdn2.wccftech.com/wp-content/uploads/2014/08/tsmc_semiconductor_fab14_production-635x423.jpg (http://cdn2.wccftech.com/wp-content/uploads/2014/08/tsmc_semiconductor_fab14_production.jpg)
A stock photo of a fabrication plant. @TSMC Public Domain

Green makes the top 10 spenders on semiconductor R&D; Intel tops the list. R&D is absolutely critical to any semiconductor firm, and spending on it is, in most cases, very indicative of success. In a report released by the agency IC Insights we learn of the top 10 highest spenders in this department. The list, as you may guess, offers quite a lot of insight into the current standing of these companies. Intel, for example, the undisputed giant of the tech world, currently spends approximately three times as much as the company in second place. Its R&D budget completely eclipses that of everyone else on the list, which once again indicates just how much of a lead Intel has over TSMC.
http://cdn4.wccftech.com/wp-content/uploads/2015/03/insights_01.png (http://cdn4.wccftech.com/wp-content/uploads/2015/03/insights_01.png)


Interestingly, Nvidia has also managed to claim a spot in the top 10 list, albeit in 10th place. The bottom few semiconductor companies all have approximately the same budget, while Intel's budget exceeds that of nearly all of them combined. What is perhaps of more interest to our readers is that Nvidia spends approximately a third of its total revenue on semiconductor R&D, which is, needless to say, a huge proportion. Comparatively, TSMC, Nvidia's foundry, spends only 22% more than green itself, and even that after raising its R&D budget. That should give you an idea of how high Nvidia's research budget is. Of the companies present in the report, five are headquartered in the United States, three in Asia and the Pacific, and the rest in Europe and Japan.
Intel Corporation remains the leader in development spending for the year, having increased its funding by 9%. The processor giant spent approximately 22.4% of its annual revenue on R&D, and 36% of the Top 10's total R&D expenditure belongs to Intel. TSMC, meanwhile, has managed to climb from seventh to fifth with a 15 percent increase in R&D expenditure, equal to a 7.5 percent share of its revenue.





News:
http://wccftech.com/nvidia-spends-revenue/#ixzz3U09GHWl4

Jorge-Vieira
02-04-15, 13:29
Nvidia’s new HQ back on schedule

http://www.fudzilla.com/media/k2/items/cache/b7f6274f3ff1da16ed34bbc0d5ee9dc9_L.jpg (http://www.fudzilla.com/media/k2/items/cache/b7f6274f3ff1da16ed34bbc0d5ee9dc9_XL.jpg)

Futuristic apparently
A year after mothballing its "futuristic new office project in Santa Clara," Nvidia is again ready to get to work and start building.

According to Biz Journals (http://www.bizjournals.com/sanjose/news/2015/03/31/nvidias-futuristicsanta-clara-campus-is-back-on.html) the graphics chipmaker confirmed on Tuesday that it would soon start demolition to make way for the 500,000-square-foot, triangular building at Walsh Avenue and San Tomas Expressway.
Nvidia launched the project in 2013, and it was pushed as one of those innovative-design buildings that was supposed to make Silicon Valley look a little more interesting.
Designers said one inspiration for the project's look was the polygon, the building block of computer graphics, although we thought it looked a little like a cancerous melanoma a mate had cut out.
The enormous, 250,000-square-foot floor plates, united by massive stairs, were meant to facilitate employee interactions. Nothing happened, and it was thought that the project had suffered a bad case of cutbacks.
Nvidia said that it was just trying to get the design right, although cynics suggested it was just waiting to see what AMD did first.
Now it seems that the way is clear and building has started.



News:
http://www.fudzilla.com/news/graphics/37420-nvidia-s-new-hq-back-on-schedule


nVidia's new home :)

Jorge-Vieira
27-04-15, 14:42
NVIDIA Wins Two Edison Awards For Innovation (http://www.hardocp.com/news/2015/04/27/nvidia_wins_two_edison_awards_for_innovation/)

The awards – named for fabled U.S. inventor Thomas Edison (http://blogs.nvidia.com/blog/2015/04/24/edison/) – recognize innovation, creativity and ingenuity in the global economy. Prizes were handed out in a wide range of categories by an independent team of judges from industry and academia. Other winners this year include GE, LG, Dow Chemicals, Logitech, Lenovo, Hyundai and 3M. Our VCM won a Gold award in the Automotive Computing category. The VCM is a modular computer that gives automakers a fast, easy way to update their systems with the latest mobile technology. This has helped Audi reduce its infotainment development cycle to two years, from the industry standard five to seven years. And Tesla Motors uses two VCMs to drive the screens in the Model S.

News:
http://www.hardocp.com/news/2015/04/27/nvidia_wins_two_edison_awards_for_innovation#.VT5K0ZP0Mxk

Jorge-Vieira
06-05-15, 09:00
Nvidia to cease development of software modems, may sell Icera (http://www.kitguru.net/laptops/mobile/anton-shilov/nvidia-to-cease-development-of-software-modems-may-sell-icera/)


Nvidia Corp. on Tuesday said that it will cease development of its Icera modems in the second quarter of its fiscal year 2016. At present, the company is considering selling its soft-modem technologies, or the Icera business as a whole.
Nvidia acquired Icera in 2011 in order to develop competitive system-on-chips for smartphones and tablets. Since then, the company has refocused its Tegra business on gaming, automotive and cloud computing applications. While the company's latest SoCs (such as the Tegra X1 or Tegra K1) can be used inside mobile devices, they are not designed specifically for them. As a result, Nvidia believes it does not need its own modem technologies.
http://www.kitguru.net/wp-content/uploads/2015/05/nvidia_tegra_x1_cut.jpg (http://www.kitguru.net/wp-content/uploads/2015/05/nvidia_tegra_x1_cut.jpg)
The Icera 4G LTE modem meets Nvidia’s needs for the next year or more. Going forward, the company plans to partner with third-party modem suppliers and will no longer develop its own. Essentially, this means that Nvidia will also not develop highly-integrated applications processors for mainstream smartphones.
The Icera modem operation has approximately 500 employees, based primarily in the U.K. and France, with smaller operations in Asia and the United States.



News:
http://www.kitguru.net/laptops/mobile/anton-shilov/nvidia-to-cease-development-of-software-modems-may-sell-icera/

Jorge-Vieira
25-06-15, 13:31
Nvidia supplies open saucy reference headers

http://www.fudzilla.com/media/k2/items/cache/5fde75b34f30d7a9fe10bef302fc7b7e_L.jpg (http://www.fudzilla.com/media/k2/items/cache/5fde75b34f30d7a9fe10bef302fc7b7e_XL.jpg)

Catching up with Intel and AMD
Nvidia will begin supplying hardware reference headers for the Nouveau DRM driver, a move that will see it catch up with AMD and Intel in serving the steadily growing Linux gamer market.


Nvidia is popular with Linux gamers who use proprietary hardware drivers, but those who have a nearly vegan religious approach to open source are not really catered for. Nvidia's open-source support has lagged behind Intel's and AMD's on Linux, and Nvidia does not officially support the community-based, mostly reverse-engineered Nouveau driver.
Nvidia has been working on Tegra K1 and newer graphics support for the open-source driver, which is seen as a sea change.
The outfit has been neutral toward Nouveau but has been supplying some recent hardware samples to Nouveau developers, a little public documentation, and answers to some of their questions. Pushing hardware reference headers into the Nouveau driver is therefore a big step.
Phoronix (http://www.phoronix.com/scan.php?page=news_item&px=NVIDIA-Hardware-Headers) said that Nvidia's system software team has begun aligning its new-chip development efforts with Nouveau.
Nvidia's Ken Adams said that the outfit would like to arrive at a place where it uses the Nouveau kernel driver code base as its primary development environment.
To make Nouveau Nvidia's primary development environment for Tegra, it is looking at adding "official" hardware reference headers to Nouveau.

"The headers are derived from the information we use internally. I have arranged the definitions such that the similarities and differences between GPUs is made explicit. I am happy to explain the rationale for any design choices and since I wrote the generator I am able to tweak them in almost any way the community prefers."
So far he has been cleared to provide the programming headers for the GK20A and GM20B. For those concerned that this is just an item for pushing future Tegra sales, Ken said that in "the long-term I'm confident any information we need to fill-in functionality greater than or equal to NV50/G80 will be made public eventually. We just need to go through the internal steps necessary to make that happen." It's just like Intel and AMD, with legal/IP review being time consuming.




News:
http://www.fudzilla.com/news/38075-nvidia-supplies-open-saucy-reference-headers

Jorge-Vieira
02-07-15, 14:40
IBM, NVIDIA and Mellanox Launch Design Center for Big Data and HPC (http://www.techpowerup.com/214007/ibm-nvidia-and-mellanox-launch-design-center-for-big-data-and-hpc.html)

IBM, in collaboration with NVIDIA and Mellanox, today announced the establishment of a POWER Acceleration and Design Center in Montpellier, France to advance the development of data-intensive research, industrial, and commercial applications. Born out of the collaborative spirit fostered by the OpenPOWER Foundation - a community co-founded in part by IBM, NVIDIA and Mellanox supporting open development on top of the POWER architecture - the new Center provides commercial and open-source software developers with technical assistance to enable them to develop high performance computing (HPC) applications.

Technical experts from IBM, NVIDIA and Mellanox will help developers take advantage of OpenPOWER systems leveraging IBM's open and licensable POWER architecture with the NVIDIA Tesla Accelerated Computing Platform and Mellanox InfiniBand networking solutions. These are the class of systems developed collaboratively with the U.S. Department of Energy for the next generation Sierra and Summit supercomputers and to be used by the United Kingdom's Science and Technology Facilities Council's Hartree Centre for big data research.
http://tpucdn.com/img/15-07-02/15a_thm.jpg (http://www.techpowerup.com/img/15-07-02/15a.jpg)

In addition to expanding the software and solution ecosystem around OpenPOWER, this new collaboration between the three companies will create opportunities for software designers to acquire advanced HPC skills and drive the development of new technologies to bring value to customers globally.

The Center will be led by a team of experts from NVIDIA and Mellanox along with leading scientists from IBM Client Center Montpellier (France) and IBM Research Zurich (Switzerland). The Montpellier-based center is the second of its kind, complementing the previously announced center in Germany established in concert with IBM, NVIDIA and the Jülich Supercomputing Center in November.

"Our launch of this new Center reinforces IBM's commitment to open-source collaboration and is a next step in expanding the software and solution ecosystem around OpenPOWER," said Dave Turek, IBM's Vice President of HPC Market Engagement. "Teaming with NVIDIA and Mellanox, the Center will allow us to leverage the strengths of each of our companies to extend innovation and bring higher value to our customers around the world."

"Increasing computational performance while minimizing energy consumption is a challenge the industry must overcome in the race to exascale computing," said Stefan Kraemer, director of HPC Business Development, EMEA, at NVIDIA. "By providing systems combining IBM Power CPUs with GPU accelerators and the NVIDIA NVLink high-speed GPU interconnect technology, we can help the new Center address both objectives, enabling scientists to achieve new breakthroughs in their research."

"The new POWER Acceleration and Design Center will help scientists and engineers address the grand challenges facing society in the fields of energy and environment, information and health care using the most advanced HPC architectures and technologies," said Gilad Shainer, vice president of marketing, Mellanox Technologies. "Mellanox InfiniBand networking solutions offer more than a decade of experience building the world's highest performing networks, and are uniquely based on an offload-architecture. Only Mellanox offloads data movement, management and even data manipulations (for example Message Passing - MPI collective communications) which are performed at the network level, enabling more valuable CPU cycles to be dedicated to the research applications."

As founding members of the OpenPOWER Foundation, IBM, NVIDIA, and Mellanox share a common vision to bring a new class of systems to market faster to tackle today's big data challenges.

News:
http://www.techpowerup.com/214007/ibm-nvidia-and-mellanox-launch-design-center-for-big-data-and-hpc.html

Jorge-Vieira
07-07-15, 16:57
Nvidia's Deep Learning Updates Build Bigger Neural Nets Faster: Digits 2, cuDNN 3, CUDA 7.5

At a machine learning convention in France, Nvidia announced updates to its contributions to Deep Learning.
If there's one company that puts a heap of effort into Deep Learning, it's Nvidia, and today in Lille, France, at ICML (International Conference on Machine Learning), the GPU maker announced three updates: the new Nvidia DIGITS 2 system, Nvidia CuDNN 3, and CUDA 7.5.
Deep Learning is a concept where computers can build deep neural networks based on given information, which then can be used to accomplish various complicated tasks such as image recognition, object detection and classification, speech recognition, translation, and various medical applications.
You may not realize it, but Deep Learning really is all around us already. Google uses it quite widely, Nvidia applied it in its Drive PX auto-pilot car computer (http://www.tomshardware.com/news/nvidia-drive-px-drive-cx-adas,28325.html), and medical institutions are starting to use it to detect tumors with much higher accuracy than doctors can.
http://media.bestofmicro.com/T/Z/510407/gallery/3_w_600.png (http://www.tomshardware.com/gallery/3,0101-510407-0-2-12-1-png-.html)
The reason Deep Learning has been able to explode the way it has been is because of the huge amounts of compute power available to us with GPUs. Building DNNs (Deep Neural Networks) is a massively parallel task that takes lots of power. For example, building a simple neural net to recognize everyday images (such as the ImageNet challenge (http://image-net.org/challenges/LSVRC/)) can take days, if not weeks, depending on the hardware used. It is therefore essential that the software be highly optimized to use the resources most effectively, because not all neural nets end up working, and rebuilding another with slightly different parameters to increase its accuracy is a lengthy process.
DIGITS 2
For that reason, DIGITS 2's biggest update is that it can now build a single neural net using up to four GPUs; in the past, you could only use one per neural net. When using four GPUs, the training process for a neural net is up to 2x faster than on a single GPU. Of course, you may then say: build four different neural nets and see which one is the best. But it's not quite that simple.
A researcher may begin by building four different neural nets with different parameters, but based on the same learning data, and figure out which one is best, and then from there on out improve the parameters until the ideal setup is found, at which point only a single neural net needs to be trained.
http://media.bestofmicro.com/T/X/510405/gallery/1_w_600.png (http://www.tomshardware.com/gallery/1,0101-510405-0-2-12-1-png-.html)
Nvidia's DIGITS is a software package with a web-based interface for Deep Learning, where scientists and researchers can start, monitor, and train DNNs. It comes with various Deep Learning libraries, making it a complete package. It allows researchers to focus on the results, rather than have to spend heaps of time figuring out how to install various libraries and how to use them.
CuDNN 3
CuDNN 3 is Nvidia's Deep Learning library, and as you may have guessed, it is CUDA based. Compared to the previous version, it can train DNNs up to 2x faster on identical hardware platforms with Maxwell GPUs. Nvidia achieved this improvement by optimizing the 2D convolution and FFT convolution processes.
http://media.bestofmicro.com/T/W/510404/gallery/2_w_600.png (http://www.tomshardware.com/gallery/2,0101-510404-0-2-12-1-png-.html)
CuDNN 3 also has support for 16-bit floating point data storage in the GPU memory, which enables larger neural networks. In the past, all data points were 32 bits in size, but not a lot of vector data needs the full accuracy of 32-bit data. Of course, some accuracy is lost in the process for each vector point, but the result of that tradeoff is that the GPU's memory has room for more vectors, which in turn can increase the accuracy of the entire model.
CUDA 7.5
Both of the above pieces of software are based on the new CUDA 7.5 toolkit. CuDNN 3 supports 16-bit floating point data because CUDA 7.5 now does. Most notably, it offers support for mixed floating-point data, meaning that 16-bit vector data can be used where accuracy is less essential, while 32-bit data points are used when higher accuracy is required.
Additional changes include new cuSPARSE GEMVI routines, along with instruction-level profiling, which can help you figure out which part of your code is limiting GPU performance.
The Preview Release version of the DIGITS 2 software is available for free to registered CUDA developers, with final versions coming soon. More information is available here (https://developer.nvidia.com/digits).



News:
http://www.tomshardware.com/news/nvidia-deep-learning-digits-update,29523.html

Jorge-Vieira
08-07-15, 08:45
Nvidia adds AI improvements to CUDA

http://www.fudzilla.com/media/k2/items/cache/946015e04c231b2b2f9e0bcfef19556d_L.jpg (http://www.fudzilla.com/media/k2/items/cache/946015e04c231b2b2f9e0bcfef19556d_XL.jpg)

Updated to 16-bit floating point arithmetic
Nvidia has sexed up its CUDA (Compute Unified Device Architecture) parallel programming platform and application programming interface.


The company has now added support for 16-bit floating point arithmetic; in the past it could only do 32-bit floating point operations.
The reason for the change is part of Nvidia's improvements in its AI software. Support for the smaller floating point size helps developers cram more data into the system for modelling. The company updated its CUDA Deep Neural Network library of common routines to support 16 bit floating point operations as well.
Nvidia has also upgraded its Digits software for designing neural networks. Digits version 2, released yesterday, comes with a graphical user interface, potentially making it accessible to programmers beyond the typical user base of academics and developers who specialize in AI.
Ian Buck, Nvidia vice president of accelerated computing, said the previous version could be controlled only through the command line, which required knowledge of specific text commands and forced the user to jump to another window to view the results.
Digits can now use up to four processors when building a learning model. Because the models can run on multiple processors, Digits can build them up to four times as quickly as the first version could.
Nvidia is a big fan of AI because it requires the kind of heavy computational power its GPUs provide.
Nvidia first released Digits as a way to cut out a lot of the menial work it takes to set up a deep learning system.
One early user of Digits' multi-processor capabilities has been Yahoo, which found this new approach cut the time required to build a neural network for automatically tagging photos on its Flickr service from 16 days to 5 days.



News:
http://www.fudzilla.com/news/38174-nvidia-adds-ai-improvements-to-cuda

Jorge-Vieira
13-07-15, 14:19
Nvidia Opens Up OpenACC Toolkit, Free To Academia

OpenACC has been adopted by over 8,000 developers already, but now Nvidia is opening up a toolkit for free for academic and research purposes.
http://media.bestofmicro.com/R/M/511618/gallery/3_w_600.png
If you’re good at creating parallel computing devices (read: graphics cards), you’ll certainly want people to be able to leverage the best of that power. Therefore, in 2011 Nvidia created OpenACC in collaboration with Cray, CAPS and PGI, and now the GPU maker is introducing the free OpenACC Toolkit, which comes with the PGI compiler, an NVProf Profiler, code samples and documentation, for academia.
http://media.bestofmicro.com/R/N/511619/gallery/OpenACC_slide_white2-fix_w_300.png
The idea behind OpenACC is simple: programmers can take their existing Fortran, C and C++ codes, and with minimal adjustments, alter them to leverage the GPU's parallel processing power for certain bits of the code. Using hints in the code, you can tell the compiler which tasks should run over the GPU and which should remain on the CPU. Nvidia described OpenACC as simple, powerful and portable.
http://media.bestofmicro.com/R/L/511617/gallery/1_w_600.png
Among the examples provided, Nvidia quoted that at the University of Illinois, the MRI reconstruction in PowerGrid was sped up 70-fold, with just two days of effort in adjusting the previously CPU-based code. Climate modeling for RIKEN Japan was sped up by 7-8x, having modified just five percent of the original source code. Currently, there are about 8,000 developers using OpenACC.
http://media.bestofmicro.com/R/K/511616/gallery/2_w_600.png
In addition, Nvidia announced x86 CPU portability. If your system doesn’t have a GPU to run the parallel code on, the OpenACC compiler will build the application such that it uses the multiple x86 cores to achieve at least somewhat better performance. Currently, this feature is in the beta phase, but Nvidia will be making it available to a wider audience in Q4 this year.
In the past, Nvidia’s OpenACC was only open to paying customers. Now, the OpenACC toolkit is available for free to academia. Commercial customers still have to pay, although they will be able to sign up for 90-day trials to figure out what they’d actually be paying for. More information and OpenACC downloads are available here (https://developer.nvidia.com/openacc#utm_source=shorturl&utm_medium=referrer&utm_campaign=openacc).



News:
http://www.tomsitpro.com/articles/nvidia-openacc-academia-update,1-2738.html

Jorge-Vieira
20-07-15, 20:22
Nvidia Will Host Free Deep Learning Course Online, Starting This Week

http://media.bestofmicro.com/Z/F/513195/gallery/brainbrain_w_600.jpg (http://www.tomshardware.com/gallery/brainbrain,0101-513195-0-2-12-1-jpg-.html)
Starting on Wednesday, July 22, Nvidia will be hosting a free bi-weekly instructor-led course on deep learning. The introductory course will include a combination of interactive lectures and hands-on exercises, as well as a one-hour Q&A the week following each class.
Deep learning (http://blogs.nvidia.com/blog/2015/02/19/deep-learning/) is a form of artificial intelligence that is rapidly being adopted by many different industries. It can be used to classify images, (http://www.tomshardware.com/news/flickr-nvidia-deeplearning-magicview,29621.html) recognize voices, or analyze sentiments, among other things, with human-like accuracy. It can be applied to facial recognition software, used for scene detection, and used in advanced medical and pharmaceutical research. The technology is even being used in the development of self-driving cars.
During the training sessions, you’ll learn all the necessities of designing and training neural network-powered AI (http://www.tomshardware.com/news/nvidia-deep-learning-digits-update,29523.html) and integrating it into your own applications. The course will cover Nvidia's DIGITS interactive training system for image classification, and the Caffe, Theano and Torch frameworks. You'll have to take the introduction to deep learning as the first class. Nvidia will be supplying free hands-on lab exercises during the course through Nvidia Qwiklab (https://nvidia.qwiklab.com/).
http://media.bestofmicro.com/Z/E/513194/gallery/deeplearning-schedule_w_600.png (http://www.tomshardware.com/gallery/deeplearning-schedule,0101-513194-0-2-12-1-png-.html)
Each class is scheduled for 9 a.m. PT and will be recorded for later viewing. The course kicks off on Wednesday, July 22, and Nvidia is accepting registration (http://info.nvidianews.com/Deep_Learning_Courses_15.html) right now. As this is an introductory course, previous experience with deep learning and GPU programming is not required.



News:
http://www.tomshardware.com/news/free-nvidia-deep-learning-course,29630.html

Jorge-Vieira
24-07-15, 17:30
NVIDIA Sets Conference Call for Second-Quarter Financial Results (http://www.hardocp.com/news/2015/07/24/nvidia_sets_conference_call_for_secondquarter_financial_results/)

NVIDIA will host a conference call (http://nvidianews.nvidia.com/news/nvidia-sets-conference-call-for-second-quarter-financial-results-2983780) on Thursday, Aug. 6, at 2 p.m. PT (5 p.m. ET) to discuss its financial results for the second quarter of fiscal year 2016, ending July 26, 2015. The call will be webcast live (in listen-only mode) at the following websites: nvidia.com and streetevents.com. The company's prepared remarks will be followed by a question and answer session, which will be limited to questions from financial analysts and institutional investors. Ahead of the call, NVIDIA will provide written commentary on its second-quarter results from its CFO.

News:
http://www.hardocp.com/news/2015/07/24/nvidia_sets_conference_call_for_secondquarter_financial_results#.VbJ2Efn0OTQ

Jorge-Vieira
06-08-15, 21:43
NVIDIA Announces Financial Results for Second Quarter Fiscal 2016 (http://www.techpowerup.com/215026/nvidia-announces-financial-results-for-second-quarter-fiscal-2016.html)

NVIDIA today reported revenue for the second quarter ended July 26, 2015, of $1.153 billion, up 5 percent from $1.103 billion a year earlier, and up marginally from $1.151 billion the previous quarter.

GAAP earnings per diluted share for the quarter were $0.05. This includes a charge of $0.19 per diluted share in connection with the company's decision to wind down its Icera modem operations, after a viable buyer failed to emerge. It also includes a charge of $0.02 per diluted share related to the NVIDIA SHIELD tablet recall. Non-GAAP earnings per diluted share were $0.34, up 13 percent from $0.30 a year earlier, and up 3 percent from $0.33 in the previous quarter.
http://tpucdn.com/img/15-08-06/NVIDIA_Q2_FY2016_Results_01_thm.jpg (http://www.techpowerup.com/img/15-08-06/NVIDIA_Q2_FY2016_Results_01.jpg)

"Our strong performance in a challenging environment reflects NVIDIA's success in creating specialized visual computing platforms targeted at important growth markets," said Jen-Hsun Huang, president and chief executive officer of NVIDIA.

"Our gaming platforms continue to be fueled by growth in multiple vectors -- new technologies like 4K and VR, blockbuster games with amazing production values, and increasing worldwide fan engagement in e-sports. We're working with more than 50 companies that are exploring NVIDIA DRIVE to enable self-driving cars. And our GPU-accelerated data center platform continues to make great strides in some of today's most important computing initiatives -- cloud-based virtualization and high performance computing applications like deep learning."

"Visual computing continues to grow in importance, making our growth opportunities more exciting than ever," he said.

Capital Return
During the second quarter, NVIDIA paid $52 million in cash dividends and $400 million in share repurchases -- returning an aggregate of $452 million to shareholders. In the year's first half, the company returned an aggregate of $551 million to shareholders.

NVIDIA will pay its next quarterly cash dividend of $0.0975 per share on September 11, 2015, to all shareholders of record on August 20, 2015.

NVIDIA's outlook for the third quarter of fiscal 2016 is as follows:

Revenue is expected to be $1.18 billion, plus or minus two percent.
GAAP and non-GAAP gross margins are expected to be 56.2 percent and 56.5 percent, respectively, plus or minus 50 basis points.
GAAP operating expenses are expected to be approximately $484 million. Non-GAAP operating expenses are expected to be approximately $435 million.
GAAP and non-GAAP tax rates for the third quarter of fiscal 2016 are expected to be 22 percent and 20 percent, respectively, plus or minus one percent.
The above GAAP outlook amounts exclude additional restructuring charges, which are expected to be in the range of $15 million to $25 million, in the second half of fiscal 2016.
Capital expenditures are expected to be approximately $25 million to $35 million.

Second Quarter Fiscal 2016 Highlights
During the second quarter, NVIDIA achieved progress in each of its platforms.
Gaming:

Continued strong demand for GeForce GTX GPUs, driven by advanced new games and growth in competitive e-sports, which now have an estimated 130 million viewers.
Unveiled the flagship GeForce GTX 980 Ti GPU, with the power to drive 4K and VR gaming.
Increased users of the GeForce Experience PC gaming platform to 65 million, from 38 million a year earlier.
Launched the NVIDIA SHIELD Android TV device, the most advanced smart TV platform, which connects TVs to a world of entertainment apps and services.

Enterprise Graphics & Virtualization:

Continued strong momentum for NVIDIA GRID graphics virtualization, which more than tripled its customer base to over 300 enterprises from a year earlier.

HPC & Cloud:

Engaged with more than 3,300 companies exploring the use of deep learning in areas such as speech recognition, image analysis and translation capabilities.
Shipped cuDNN 3.0, which doubles the performance of deep learning training on GPUs and enables the training of more sophisticated neural networks. cuDNN has been downloaded by more than 9,000 researchers worldwide.

Auto:

Working with more than 50 companies to use the NVIDIA DRIVE PX platform in their autonomous driving efforts.



Source:
http://www.techpowerup.com/215026/nvidia-announces-financial-results-for-second-quarter-fiscal-2016.html



And Nvidia keeps racking up the numbers and growing its profit percentages, another 5% increase.

Jorge-Vieira
07-08-15, 13:43
Nvidia hit by Icera, Shield recall costs


http://images.bit-tech.net/news_images/2015/08/nvidia-icera-shield-costs/article_img.jpg
Nvidia's Jen-Hsun Huang has spoken of growth at the company, but faulty devices and the death of its Icera modem business have led to an 80 per cent drop in net income in the last quarter.





Nvidia has published its latest financial results, and despite taking a serious hit on the recall of faulty Shield Tablet devices the company has beaten expectations with a five per cent boost in revenue year-on-year - a fact which has failed to rescue the company from an 80 per cent drop in net income.

In the company's latest financial announcement (https://nvidianews.nvidia.com/news/nvidia-announces-financial-results-for-second-quarter-fiscal-2016), for the second quarter of its 2016 financial year, the company boasted of $1.153 billion in revenue, up five per cent on the same period last year. While it comes with a slight drop in gross margin, from 56.1 per cent last year to 55 per cent this year, the bigger news was an 80 per cent drop in net income from $128 million to $26 million - thanks in part to the recall of Shield Tablet devices (http://www.bit-tech.net/news/hardware/2015/08/03/nvidia-shield-tablet-recall/1) due to a fault which puts users at risk of fire.

A bigger impact on the bottom line came from the company's decision to wind down its Bristol-based Icera mobile modem business unit, which it announced back in May (http://www.bit-tech.net/news/hardware/2015/05/06/nvidia-shutters-icera/1) a mere four years after buying the 500-strong company for $367 million. While the company had looked to sell off the business unit, it claims no 'viable buyer' could be found, so the closure is booked as a total loss.

'Our strong performance in a challenging environment reflects Nvidia's success in creating specialised visual computing platforms targeted at important growth markets,' boasted Jen-Hsun Huang, president and chief executive officer, of Nvidia's results. 'Our gaming platforms continue to be fuelled by growth in multiple vectors - new technologies like 4K and VR, blockbuster games with amazing production values, and increasing worldwide fan engagement in e-sports. We're working with more than 50 companies that are exploring Nvidia Drive to enable self-driving cars, and our GPU-accelerated data centre platform continues to make great strides in some of today's most important computing initiatives - cloud-based virtualisation and high performance computing applications like deep learning.'

Nvidia is predicting further growth to $1.18 billion in revenue for the next quarter, with gross margins of 56.2 to 56.5 per cent, but warns that a further $15 million to $25 million restructuring charge will again have an impact on the company's bottom line as it enters the second half of its financial year.

Source:
http://www.bit-tech.net/news/hardware/2015/08/07/nvidia-icera-shield-costs/1

Jorge-Vieira
07-08-15, 14:04
Nvidia: 50 car makers use Drive PX platform to develop self-driving cars (http://www.kitguru.net/laptops/mobile/anton-shilov/nvidia-50-car-makers-are-using-drive-px-platform-to-develop-self-driving-cars/)


Earlier this year Nvidia refocused (http://www.kitguru.net/components/anton-shilov/nvidia-weve-learnt-a-lot-from-the-automotive-industry/) its Tegra business from consumer electronics to the automotive industry. Sales of Nvidia’s system-on-chips for vehicles are growing rapidly, but still represent only a small fraction of the company’s revenue. But going forward a lot may change. Nvidia said this week that as many as 50 automakers are using the Nvidia Drive PX platform for their autonomous driving efforts. Moreover, some of them are already developing self-driving cars.
“In addition to our infotainment cockpit business, we are working with more than 50 companies interested in using Nvidia Drive PX in their autonomous driving efforts,” said Colette Kress, chief financial officer of Nvidia, on the company’s earnings conference call with investors and financial analysts.
http://www.kitguru.net/wp-content/uploads/2015/08/nvidia_drive_px.jpg (http://www.kitguru.net/wp-content/uploads/2015/08/nvidia_drive_px.jpg)
The Nvidia Drive PX is a development platform for automakers interested in producing self-driving cars. The Drive PX features two Tegra X1 system-on-chips that deliver around 2.3TFLOPS of FP16 compute performance, has inputs for up to 12 high-resolution cameras, and can process up to 1.3 gigapixels of visual data per second.
One of the key advantages of the Drive PX is its sensor fusion technology, a combination of hardware and software that enables cameras, lidar, radar and sonar sensors to work together. This enables advanced driver assistance features (collision avoidance, pedestrian detection, mirror-less operation, cross-traffic monitoring and driver-state monitoring) to run simultaneously. Another key feature of the Drive PX is its deep learning capability, which allows the software to be trained and retrained for any possible eventuality.
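As a quick sanity check of the stated camera and pixel-throughput figures, the arithmetic below assumes 1080p feeds at 30 fps per camera; the article only gives the 12-camera and 1.3 gigapixel/s numbers, so the per-camera resolution and frame rate are illustrative assumptions.

```python
# Back-of-the-envelope check of Drive PX's stated 1.3 gigapixel/s budget.
cameras = 12                          # from the article
width, height, fps = 1920, 1080, 30   # assumed per-camera feed (1080p @ 30)

pixels_per_second = cameras * width * height * fps   # ~746 megapixels/s
budget = 1.3e9                                       # gigapixels/s, per the article

utilization = pixels_per_second / budget
# Twelve assumed 1080p/30 feeds use a bit over half the stated budget,
# leaving headroom for higher-resolution or higher-framerate cameras.
```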
http://www.kitguru.net/wp-content/uploads/2015/08/audi_a3_1.jpg (http://www.kitguru.net/wp-content/uploads/2015/08/audi_a3_1.jpg)
While the Drive PX can potentially be used in commercial vehicles, it is primarily a development platform. Many deep-learning-related capabilities will require considerably more compute horsepower than two Tegra X1 chips can offer. However, automakers are already developing self-driving cars due four or five years from now, which is why they need appropriate development platforms today.
“We are developing autonomous driving vehicles with many of them at the moment,” said Jen-Hsun Huang, chief executive officer of Nvidia. “I expect the car business to continue to grow.”
http://www.kitguru.net/wp-content/uploads/2015/08/tesla_model_s.jpg (http://www.kitguru.net/wp-content/uploads/2015/08/tesla_model_s.jpg)
At present it is impossible to say exactly when Nvidia-based self-driving cars will hit the market. Moreover, even Nvidia does not know how many of the 50 automakers currently using the Drive PX platform for research will actually use Tegra inside their commercial cars. Nonetheless, with 50 manufacturers using the Drive PX, the company stands a better chance of winning actual designs.
Sales of Nvidia Tegra for vehicles rose to $71 million, or by 76 per cent year-over-year, in the second quarter of the company’s fiscal 2016.



Source:
http://www.kitguru.net/laptops/mobile/anton-shilov/nvidia-50-car-makers-are-using-drive-px-platform-to-develop-self-driving-cars/

Jorge-Vieira
08-08-15, 13:21
Nvidia: China is shifting to high-end GPUs (http://www.kitguru.net/components/graphic-cards/anton-shilov/nvidia-china-is-shifting-to-high-end-gpus/)


The chief executive officer of Nvidia Corp. explained this week why a number of graphics card makers refocused their marketing efforts (http://www.kitguru.net/components/graphic-cards/anton-shilov/makers-of-graphics-adapters-refocus-to-high-end-graphics-cards/) this year on higher-end graphics adapters. Apparently, China, one of the world’s largest markets, is shifting to higher-end GPUs, which is generally good news for both Nvidia and its rival Advanced Micro Devices.
“We are also seeing a shift to higher end GPUs in China,” said Jen-Hsun Huang, chief executive officer of Nvidia, on the company’s earnings conference call with investors and financial analysts. “Some of that probably has to do with the dramatic change in improvement in production value of games in China.”
http://www.kitguru.net/wp-content/uploads/2015/08/NVIDIA_GeForce_GTX_900series_KeyVisual_HD_003.jpg
Historically, China consumed cheap personal computers with poor configurations and low-end or even integrated graphics adapters. In recent years the situation began to change, and today plenty of high-end systems are sold in the world’s most populated country. Another reason Chinese gamers now buy higher-end hardware is that locally-made games demand more performance and better GPUs.
“There was a time when the Chinese games were enjoyed and fun, [but] the production values were not very good,” explained Mr. Huang. “But now if you take a look at the Tencent games, the production values are absolutely phenomenal. They are beautiful. They are artistic and in those cases require a lot more GPU capability.”
http://www.kitguru.net/wp-content/uploads/2015/08/TitanX_Stylized_02.png (http://www.kitguru.net/wp-content/uploads/2015/08/TitanX_Stylized_02.png)
Yet another reason why Chinese gamers are shifting to higher-end GPUs could be the shrinking difference between low-end GPUs and integrated graphics processors. Today, one can have a decent gaming experience on a system powered by a moderate central processing unit with a good graphics core inside. To get a radically better experience, an advanced graphics card is needed.



Source:
http://www.kitguru.net/components/graphic-cards/anton-shilov/nvidia-china-is-shifting-to-high-end-gpus/

Jorge-Vieira
11-08-15, 13:59
NVIDIA DesignWorks Unleashes the Power of Interactive Photorealistic Rendering (http://www.hardocp.com/news/2015/08/11/nvidia_designworks_unleashes_power_interactive_photorealistic_rendering)

To bring the power of interactive photorealism to mainstream designers, we’re announcing NVIDIA DesignWorks (http://blogs.nvidia.com/blog/2015/08/10/designworks/). DesignWorks is a new set of software tools, libraries and technologies for the developers behind the software that designers use to create the products we use, the buildings we live in, and the planes, trains and automobiles that keep us on the move. The big idea of DesignWorks is to give application developers a way to take advantage of our work in both physically based rendering (PBR) and physically based materials — cornerstones of visualizing a design interactively with photo-real results.

Source:
http://www.hardocp.com/news/2015/08/11/nvidia_designworks_unleashes_power_interactive_photorealistic_rendering#.Vcn_rfn0OTQ

Jorge-Vieira
12-08-15, 20:24
More On Nvidia's DesignWorks From SIGGRAPH

In an effort to bring the power of interactive photorealism to mainstream designers, Nvidia is introducing DesignWorks, a set of software tools, libraries and technologies to give developers easy access to both physically-based rendering (PBR) and physically-based materials.
Physically-based rendering, often abbreviated PBR, is a rendering method that more accurately models the properties of light than the previous model used in both games and photorealistic rendering. Previously, the lighting and shading model used took many shortcuts to avoid properties that were too computationally intensive, and then these additional properties were slowly integrated back in as it became possible to compute them in a useful amount of time.
This resulted in conditions occurring that could not physically exist, often looked a little odd, and had to be corrected by further manipulating the shading and materials settings. PBR began to see use in visual effects around 2006, and in games shortly thereafter.
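The core constraint that makes rendering "physically based" can be shown in a few lines: a diffuse (Lambertian) BRDF is normalized by pi so that a surface never reflects more energy than it receives, which is exactly the kind of physically impossible condition the older ad-hoc models allowed. This is a generic PBR sketch, not Nvidia Iray or MDL code.

```python
import math

def hemisphere_reflectance(albedo, steps=100_000):
    """Numerically integrate a Lambertian BRDF over the hemisphere.

    The BRDF is albedo/pi; integrating brdf * cos(theta) over all outgoing
    directions (solid angle sin(theta) dtheta dphi) should return exactly
    `albedo`, so albedo <= 1 guarantees energy conservation.
    """
    brdf = albedo / math.pi
    dtheta = (math.pi / 2) / steps
    total = 0.0
    for i in range(steps):
        theta = (i + 0.5) * dtheta          # midpoint rule over [0, pi/2]
        total += brdf * math.cos(theta) * math.sin(theta) * dtheta
    return total * 2 * math.pi              # phi integrates to 2*pi

reflectance = hemisphere_reflectance(0.8)   # comes back as ~0.8: no energy created
```

An unnormalized diffuse term (dropping the 1/pi) would integrate to pi times the albedo, reflecting more light than arrived, which is why artists had to hand-tune such materials scene by scene.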
http://media.bestofmicro.com/I/9/517761/gallery/clowncolors_w_600.png (http://www.tomshardware.com/gallery/clowncolors,0101-517761-0-2-12-1-png-.html)
Previously, designers would work with false color versions of their objects within their design software and have to export them from the software in order to attempt realistic surfacing. Nvidia would like designers to be able to make choices of realistic materials within their design application instead of having to spend hours porting their CAD data over into an animation application with more detailed, realistic surfacing capabilities.
The ability to test the surfaces and different real-world materials interactively without having to take the time-consuming step of moving the design files between applications means designers can iterate more quickly and less expensively.
http://media.bestofmicro.com/I/8/517760/gallery/MDL_w_600.png (http://www.tomshardware.com/gallery/MDL,0101-517760-0-2-12-1-png-.html)
DesignWorks brings together the following:

- Nvidia Iray SDK: a physically based rendering and light simulation SDK which includes new optimizations and algorithms to reduce the time needed to iterate on designs.
- Nvidia Material Definition Language (MDL): a standard for describing PBR materials, allowing a library of materials to be shared between applications even when they use dissimilar renderers.
- Nvidia vMaterials: a collection of calibrated and verified materials described using MDL.
- Nvidia OptiX: a ray-tracing SDK built for GPU acceleration that supports the new Nvidia VCA for scalability. Both Iray and V-Ray use OptiX.
- Nvidia DesignWorks VR (http://www.tomshardware.com/news/designworks-vr-for-professionals-revealed,29815.html): a set of tools for incorporating VR directly into design software.
DesignWorks is already found throughout the design industry, Iray being available in several applications including Dassault Systèmes CATIA, while Autodesk's VRED virtual reality visualization tool makes use of the VR technologies that are part of DesignWorks VR. Nvidia Iray, mental ray, Chaos Group's V-Ray, and Allegorithmic's Substance Designer all currently support MDL.



Source:
http://www.tomshardware.com/news/nvidia-designworks-siggraph,29823.html

Jorge-Vieira
23-08-15, 13:41
NVIDIA partners with ESL, trying to win over new customers


The Electronic Sports League (ESL) is teaming up with NVIDIA, which is contributing GeForce GTX GPUs for all the PCs used in the ESL One event. The massive Counter-Strike: Global Offensive tournament features some of the best teams in the world competing for more than $250,000 in prizes.


http://imagescdn.tweaktown.com/news/4/7/47206_01_nvidia-partners-esl-trying-win-over-new-customers.jpg (http://www.tweaktown.com/image.php?image=imagescdn.tweaktown.com/news/4/7/47206_01_nvidia-partners-esl-trying-win-over-new-customers_full.jpg)

"GeForce GTX is our de facto standard for professional eSports events," said Ulrich Schulze, VP of pro gaming at ESL, in a statement. "CS:GO players require extremely high frame rates and as low latency as possible. NVIDIA GeForce GTX and G-sync technologies deliver on all fronts."

As eSports grows, with even more people viewing live and recorded streams over the Internet, it's a great opportunity for the NVIDIA logo to be prominently displayed. There were almost half a million people watching the tournament yesterday, and there should be large numbers of viewers throughout the rest of the weekend.

Watching the current stream between Team Fnatic CS and Luminosity Gaming, it's easy to spot Intel, ASUS, and other ESL sponsors. In addition, many of the teams rely on sponsors to help keep them supplied with cutting edge technology.




Source:
http://www.tweaktown.com/news/47206/nvidia-partners-esl-trying-win-over-new-customers/index.html

Jorge-Vieira
27-08-15, 16:22
NVIDIA Foundation Awards $200,000 In Cancer Care Grants (http://www.hardocp.com/news/2015/08/27/nvidia_foundation_awards_200000_in_cancer_care_grants/)


Four nonprofits providing cancer care and support services in West Africa, India and the U.S. have won $50,000 grants from our employee-led corporate foundation (http://blogs.nvidia.com/blog/2015/08/26/cancer-care-grants/), the NVIDIA Foundation. The awards are part of Compute the Cure — our companywide effort to fight cancer and offer support to individuals and their families who are dealing with it. Through grants and employee fundraising, the initiative has directed more than $1.6 million to cancer-fighting causes since 2011.

Launched this spring, our Compute the Cure Cancer Care grant program aims to help provide support to those affected by cancer in the communities in which we operate, and others around the world. More than 50 organizations submitted proposals. Three dozen employees reviewed the applications and selected a group of finalists. Then, our employees voted for their favorite recipient, the Employee Choice winner. The Foundation’s board of directors (composed of non-execs) awarded the other three grants.



Source:
http://www.hardocp.com/news/2015/08/27/nvidia_foundation_awards_200000_in_cancer_care_grants#.Vd85Epf0OTQ

Jorge-Vieira
31-08-15, 13:45
NVIDIA GRID 2.0 Launches With Broad Industry Support

NVIDIA today launched NVIDIA GRID 2.0 with broad industry support for its ability to deliver even the most graphics-intensive applications to any connected device virtually. Nearly a dozen Fortune 500 companies are completing trials of the NVIDIA GRID 2.0 beta. Major server vendors, including Cisco, Dell, HP and Lenovo, have qualified the GRID solution to run on 125 server models, including new blade servers. NVIDIA has worked closely with Citrix and VMware to bring a rich graphics experience to end-users on the industry’s leading virtualization platforms.
NVIDIA GRID 2.0 delivers unprecedented performance, efficiency and flexibility improvements for virtualized graphics in enterprise workflows. Employees can work from almost anywhere without delays in downloading files, increasing their productivity. IT departments can equip workers with instant access to powerful applications, improving resource allocation. And data can be stored more securely by residing in a central server rather than individual systems.
http://www.hardwareheaven.com/wp-content/uploads/2015/08/130a.jpg (http://www.hardwareheaven.com/wp-content/uploads/2015/08/130a.jpg)
“Industry leaders around the world are embracing NVIDIA GRID to provide their employees access to even the most graphics-intensive workflows on any device, right from the data center,” said Jen-Hsun Huang, co-founder and CEO of NVIDIA. “NVIDIA GRID technology enables employees to do their best work regardless of the device they use or where they are located. This is the future of enterprise computing.”
The ability to virtualize enterprise workflows from the data center has not been possible until now due to low performance, poor user experience and limited server and application support.
NVIDIA GRID 2.0 integrates the GPU into the data center and clears away these barriers by offering:
•Doubled user density: NVIDIA GRID 2.0 doubles user density over the previous version, introduced last year, allowing up to 128 users per server. This enables enterprises to scale more cost effectively, expanding service to more employees at a lower cost per user.
•Doubled application performance: Using the latest version of NVIDIA’s award-winning Maxwell GPU architecture, NVIDIA GRID 2.0 delivers twice the application performance as before — exceeding the performance of many native clients.
•Blade server support: Enterprises can now run GRID-enabled virtual desktops on blade servers — not simply rack servers — from leading blade server providers.
•Linux support: No longer limited to the Windows operating system, NVIDIA GRID 2.0 now enables enterprises in industries that depend on Linux applications and workflows to take advantage of graphics-accelerated virtualization.
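The density claim translates directly into per-user cost. A rough sketch of the arithmetic, with an entirely made-up server price used only for illustration:

```python
# Doubling user density halves the per-user hardware cost.
# The $10,000 server price is a hypothetical figure, not from the article;
# the 64-user baseline is inferred as half of GRID 2.0's stated 128.
server_cost = 10_000.0
users_grid_1 = 64      # previous generation (inferred)
users_grid_2 = 128     # GRID 2.0, per the announcement

cost_per_user_1 = server_cost / users_grid_1   # $156.25 per user
cost_per_user_2 = server_cost / users_grid_2   # $78.125 per user
```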
Positive Feedback on NVIDIA GRID 2.0
More than a dozen enterprises in a wide range of industries have been piloting NVIDIA GRID 2.0 and are reporting direct business benefits in terms of user productivity, IT efficiency and security improvements.
“With NVIDIA GRID, our engineers are able to run a wide range of engineering design and analysis applications. It’s led to increased productivity by streamlining our use of data and eliminating the need to replicate data to our remote production facilities,” says Fred Devoir, senior architect and IT infrastructure manager, Textron. “With the latest 2.0 release, we’ve been able to double the number of concurrent users per GPU or increase the maximum amount of video memory which allows a greater array of applications to be used without a compromise in performance. I am excited about the potential of enabling these capabilities for even more design and manufacturing engineers.”
“NVIDIA GRID 2.0 with VMware Horizon marks the next phase of innovation in enterprise-wide virtual desktop deployments,” said Sanjay Poonen, executive vice president and general manager, End-User Computing, VMware. “VMware End-User Computing solutions have transformed the way organizations empower their workforces, with technologies that are simple to use, and are secure. Our close alignment with NVIDIA continues to bring forth powerful capabilities to customers, and is one of the key reasons for our gaining market-share in the desktop virtualization market.”
“In 2013, Citrix and NVIDIA released the first joint vGPU solution to enable multiple virtual desktops to share a single GPU and deliver an uncompromised experience that scales easily,” said Calvin Hsu, vice president, product marketing, Windows App Delivery, Citrix. “NVIDIA GRID 2.0 with Citrix XenApp and XenDesktop app and desktop delivery now allows more users to take advantage of rich applications on any device.”
“The ability of our newest desktop product, ArcGIS Pro, to deliver a great user experience in virtual environments with GRID is extremely important to Esri,” said John Meza, performance engineering lead, Esri. “It allows our users to continue their great work in whatever environment, physical or virtualized, they choose.”
“With GRID 2.0 we can provide our customers a powerful, secure and reliable blade server configuration, giving them more options to virtualize all their graphics-accelerated workflows,” said Neil MacDonald, vice president and general manager, HP BladeSystem. “GRID technology allows HP to provide the highest density virtualized graphics offering on the market today so that our customers easily scale to accommodate the highest possible number of users.”
“Dell has a long history of being first to market with innovative solutions that help customers address their IT challenges,” said Brian Payne, executive director, Server Solutions, Dell. “We’ve worked closely with NVIDIA to be the first to enable our server ecosystem to improve productivity, security and efficiency through enterprise GPU solutions.”
Experience NVIDIA GRID
Users are encouraged to experience NVIDIA GRID for themselves through the NVIDIA GRID Test Drive. This experience gives users instant access to hours of NVIDIA GRID vGPU acceleration on a Windows desktop with 2D and 3D industry-leading applications such as:
•Autodesk AutoCAD
•Dassault Systèmes SOLIDWORKS
•Esri ArcGIS Pro
•Siemens NX
NVIDIA GRID 2.0 is available worldwide starting Sept. 15, 2015. Read more here (http://www.nvidia.com/object/nvidia-grid.html) or sign up for a 90-day evaluation.



Source:
http://www.hardwareheaven.com/2015/08/nvidia-grid-2-0-launches-with-broad-industry-support/

Jorge-Vieira
01-09-15, 17:03
Nvidia Gained Market Share in Q2 2015, Now Holds 81% of the Market – Jon Peddie Research Report

The not-so-friendly rivalry between our two giants, namely AMD and Nvidia, is at an all-time high with the incoming DX12 era. A report released by JPR (http://jonpeddie.com/press-releases/details/add-in-board-market-decreased-in-q215) shows that Nvidia gained market share last quarter and now towers at a massive 81% share of the graphics card AIB segment. AMD went down from 22% to 18% this quarter, and down from 38% year over year.

http://cdn.wccftech.com/wp-content/uploads/2015/06/AMD-Nvidia-Feature-635x357.jpg (http://cdn.wccftech.com/wp-content/uploads/2015/06/AMD-Nvidia-Feature.jpg)
Not an official logo (naturally). @WCCFTech
AMD loses market share to Nvidia – Green now holds 81% of the GPU AIB market (JPR)
The report covers the changes in the market over the last quarter, namely the second quarter of 2015. Keep in mind that all of this was before the AotS controversy, so if the recent allegations against Nvidia turn out to be true and its DX12 performance suffers, this could turn around just as easily. Before we go any further, the quick contents of the report are given below:







JPR found that AIB shipments during the quarter behaved according to past years with regard to seasonality, but the decrease was larger than the 10-year average. AIB shipments decreased -16.81% from the last quarter (the 10-year average is -8.7%).
Total AIB shipments decreased this quarter to 9.4 million units from last quarter.
AMD’s quarter-to-quarter total desktop AIB unit shipments decreased -33.3%.
Nvidia’s quarter-to-quarter unit shipments decreased -12.0%. Nvidia continues to hold a dominant market share position at 81.9%.
Figures for the other suppliers were flat to declining.
The change from year to year was a decrease of -18.8% compared to last year. The AIB market has benefited from enthusiast-segment PC growth, which has been partially fueled by recent introductions of exciting new powerful boards.
The demand for high-end PCs and associated hardware from the enthusiast and overclocking segments has bucked the downward trend and given AIB vendors a needed prospect to offset declining sales in the mainstream consumer space.
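For a rough sense of scale, the quoted percentages can be turned back into approximate unit counts; the report gives shares and a 9.4 million total rather than per-vendor units, so these are derived estimates:

```python
# Approximate per-vendor AIB shipments for Q2 2015, derived from the
# JPR figures quoted above (9.4 million total, 81.9% Nvidia, ~18% AMD).
total_units = 9.4e6
nvidia_share = 0.819
amd_share = 0.18

nvidia_units = total_units * nvidia_share   # roughly 7.7 million boards
amd_units = total_units * amd_share         # roughly 1.7 million boards
```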

In posts such as this, it's usually advisable to keep commentary to a minimum and let the data speak for itself. But I would also like to point out some other facts. Nvidia has kept up a very healthy launch schedule, with new cards releasing left and right, and the data reflects this as well. AMD recently released its Fury lineup, but not enough time has passed for market sentiment to reflect the lineup update, so the third-quarter report should be the one to really look out for. If the share doesn't improve, or at least hold constant, AMD could be in serious trouble.
There is also one more thing that this data doesn't portray accurately: as mentioned, IHVs like AMD and Nvidia are now shifting focus towards their high-end lineups because the PC (GPU AIB) market is shrinking. So a smaller quantity of high-end products is better than a high quantity of low-end products, or at least that's the general consensus so far. I think (caution: opinion) this is exactly one of the reasons behind AMD's recent pricing strategy with the Nano: an attempt to finally change tactics, take a leaf out of the green book, and become profitable.







Source:
http://wccftech.com/nvidia-gained-market-share-q2-2015-holds-81-market-jpr/#ixzz3kVW8Zo8O

Jorge-Vieira
01-09-15, 17:04
NVIDIA Tesla M60 and Tesla M6 Accelerators To Power Grid 2.0 – M60 Featuring Dual-GM204 GPUs

Yesterday, NVIDIA announced its latest Virtual Desktop Infrastructure (VDI) platform, Grid 2.0, which will be powering gaming and professional 3D workloads through cloud computing. The Grid, which houses NVIDIA GPUs for virtualization, was launched back at CES 2013 and was based on the Kepler generation of GPUs. NVIDIA has now brought its latest Maxwell GPU architecture to the Grid 2.0 platform to deliver double the user density and performance of Grid 1.0.
http://cdn.wccftech.com/wp-content/uploads/2015/09/NVIDIA-Grid-2.0-635x357.png (http://cdn.wccftech.com/wp-content/uploads/2015/09/NVIDIA-Grid-2.0.png)
NVIDIA Grid 2.0 Powered By Maxwell-Based Tesla M60 and Tesla M6
While Grid 2.0 is the main announcement by NVIDIA at the VMworld conference hosted by VMware, the company also announced two new graphics cards powering its Grid 2.0 platform: the Maxwell-based Tesla M60 and Tesla M6. Replacing the NVIDIA Grid K2 is the Tesla M60; both cards are based on dual-GPU designs. The former offers two Kepler GK104 cores with 1536 CUDA cores each, while the latter offers two Maxwell GM204 chips with 2048 CUDA cores each. The NVIDIA Tesla M60 has 16 GB of GDDR5 memory (8 GB per core), a 256-bit bus interface and a TDP of 225-300W. The Tesla M60 comes in a dual-slot PCI-Express card form factor and sticks with moderate clock speeds optimized for 24/7 workloads. The card can handle up to 32 users simultaneously and supports 36 H.264 (1080p @ 30 FPS) streams.
http://cdn.wccftech.com/wp-content/uploads/2015/09/NVIDIA-Tesla-M60-and-Tesla-M6-Specifications-635x297.png (http://cdn.wccftech.com/wp-content/uploads/2015/09/NVIDIA-Tesla-M60-and-Tesla-M6-Specifications.png)
The interesting thing about this card is that, while it's aimed at a completely different market, it is the first dual-GPU design based on the Maxwell core architecture. We can expect a consumer version of this card soon; however, I would personally prefer a dual-GM200 based card, since AMD has already showcased its R9 Fury X2 and GM204 is no match for the high-performance Fiji GPU core.


The second card in the lineup is the Tesla M6, which is based on a single, cut-down GM204 GPU, hence we are only looking at 1536 CUDA cores. More surprisingly, this card comes in the MXM form factor, which indicates it is aimed at denser blade servers that can accommodate several of these cards. The card has a TDP of 75-100W and 8 GB of GDDR5 memory running along a 256-bit bus interface. It is similar in specifications to the GeForce GTX 980M, though we might see different clock speeds on the Tesla M6. The card can handle up to 18 users and up to 16 H.264 (1080p @ 30 FPS) streams. Pricing for both cards is not yet available, but they will ship on September 15 through NVIDIA partners including HP, Dell and Cisco.
http://cdn.wccftech.com/wp-content/uploads/2015/09/NVIDIA-Grid-2.0_1-635x357.png (http://cdn.wccftech.com/wp-content/uploads/2015/09/NVIDIA-Grid-2.0_1.png)
The benefits of GRID 2.0:


Doubled user density: NVIDIA GRID 2.0 allows the newly introduced cards to support up to 128 users per server.
Doubled application performance: Using the new Maxwell GPU architecture, GRID 2.0 delivers twice the performance of Grid 1.0.
Blade server support: Enterprises can now run GRID-enabled virtual desktops on blade servers.
Linux support: Both Citrix and VMware now support Linux, so it makes sense for GRID 2.0 to add Linux support as well, enabling GPU-powered desktops in Linux based VDI environments.

Overall, the main advantage of Grid 2.0 is that it offers twice the performance along with new features, lower power consumption and denser platforms. Blade servers now offer two platforms compared to just one on the traditional Grid 1.0 based products. User density has also doubled over Grid 1.0, which is a welcome increase for users looking forward to virtualization with NVIDIA’s latest tech. Windows users were already fully supported; now Linux users can also take advantage of Grid 2.0, and there’s newly added support for 4K monitors.
NVIDIA Tesla M60 and M6 Specifications:
<thead>
Grid 2.0 Board Name
NVIDIA Tesla M60
NVIDIA Tesla M6
NVIDIA Grid K2
NVIDIA Grid K1

</thead><tbody class="row-hover">
GPU
GM204
GM204
GK104
GK104


GPU Cores
2048 x 2 (Dual Config) 4096 CUDA Cores
1536 CUDA Cores
1536 x 2 (Dual Config) 3072 CUDA Cores
192 x 4 (Quad Config) 768 CUDA Cores


Memory
16 GB GDDR5 (8 GB x 2)
8 GB GDDR5
8 GB GDDR5 (4 GB x 2)
16 GB DDR3 (4 GB x 4)


Memory Bus
256-bit x 2
256-bit
256-bit x 2
64-bit


Max Users
36
18
32
16


H.264 (1080P @ 30 FPS) Streams
2-32
1-16
2-12
1-8


Form Factor
Dual-Slot PCI-Express
MXM Card
Dual-Slot PCI-Express
Dual-Slot PCI-Express


TDP
300W
100W
225W
130W

</tbody>







Noticia:
http://wccftech.com/nvidia-tesla-m60-tesla-m6-accelerators-power-grid-20-m60-featuring-dualgm204-gpus/#ixzz3kVWTEqNI

LPC
01-09-15, 17:10
Hi!
Well... 81% of the market!

It's impressive how badly AMD's market share has degraded over these two years...

A lack of launches, innovation and competition can only lead to this...

Just as happened with CPUs, even those who backed the UNDERDOG moved to Intel...

It's hard to support a company that keeps shooting itself in the foot...

Regards,

LPC

Jorge-Vieira
06-11-15, 13:55
Nvidia Corporation (NASDAQ: NVDA) Posts Record Revenue of $1.305 Billion – Third Quarter Fiscal Year 2016 Earnings Out

Nvidia (NASDAQ: NVDA (http://www.nasdaq.com/symbol/nvda)) recently posted its quarterly results for the third quarter of Fiscal Year 2016. The graphics giant posted record revenue of $1.305 billion, 13% more than the same quarter last year. With GAAP diluted net income of $246 million, the company posted earnings per share of $0.44 for the quarter. Things have been looking really good for green lately, and its strong financials continue to reflect the bullish trend.

http://cdn.wccftech.com/wp-content/uploads/2015/11/NVIDIA-Third-Quarter-2016-Results_Gaming_16-635x357.jpg (http://cdn.wccftech.com/wp-content/uploads/2015/11/NVIDIA-Third-Quarter-2016-Results_Gaming_16.jpg)
A slide from the Investor Day press deck of March 17th, 2015. @Nvidia Public Domain
Nvidia (NASDAQ: NVDA) posts third quarter results for FY2016 – record revenue, GeForce stronger than ever and significant automotive growth
Mr. Jen-Hsun Huang had the following to say on the earnings call:

“Our record revenue highlights NVIDIA’s position at the center of forces that are reshaping our industry,” said Jen-Hsun Huang, co-founder and chief executive officer, NVIDIA. “Virtual reality, deep learning, cloud computing and autonomous driving are developing with incredible speed, and we are playing an important role in all of them. We continue to make great headway in our strategy of creating specialized visual computing platforms targeted at important growth markets. The opportunities ahead of us have never been more promising.”
The company has all but finished its rollout of this generation's high-end graphics cards (minus one major exception), and this quarter's revenue comes mostly from consistent sales and a strong hold on its market share. With such a majority market share, Nvidia can meet expectations without breaking much of a sweat. In fact, the company made a point of noting that it is not becoming complacent just because it holds a majority share and that it takes its competitor very seriously – all the while pointing out that AMD is a completely different company from Nvidia and not really comparable to green.
This year has seen Nvidia venture further into the automotive side of things, yielding very promising results. In fact, it recently introduced the Drive PX platform as well – powered by the Tegra X1 – which is expected to become a major earner once it gets the requisite design wins and production ramp.
At the time of writing, the company’s stock is trading at $27.71 with a market capitalization of $15.11 billion. The share price has been up 20% this quarter, starting from $23. This past week has seen the bullish trend falter somewhat, but the overall trend remains positive, and this quarter's strong earnings should put some of the bull back into the share price. Without further ado, here are the results and the highlights:
Financial Highlights
http://cdn.wccftech.com/wp-content/uploads/2015/11/Nvidia-Quarterly-Trend-by-Markets-635x317.png




As can be expected from Nvidia (NASDAQ: NVDA (http://www.nasdaq.com/symbol/nvda)), the GPU side of the business was the primary driver behind the results. This includes the GeForce division as well as the Quadro and Tesla divisions. Over $1.1 billion in revenue is accounted for by this business unit, which saw healthy growth of 16% quarter over quarter and 12% year over year – both very good indicators.


The ‘Gaming’ or GeForce side of things saw healthy growth of 40% year over year and 15% sequentially, bringing in revenue of $761 million.
The Professional Visualization (Quadro) business was up 8% sequentially but down 8% year over year at $190 million.
The Datacenter business (Tesla and Grid) was also up 13% sequentially but down 8% year over year at $82 million.
The Tegra SBU accounted for $129 million in revenue; while it was up 1% quarter over quarter, it declined by a significant 23% year over year. The reason stated in the financials is that the decline reflects the end of life of OEM smartphones and tablets featuring older variants of Tegra.
The Automotive segment, which includes the infotainment system for Tesla Motors, went up by a resounding 50% year over year and over 11% sequentially – very promising.
Licensing revenue was flat at $66 Million.

Nvidia (NASDAQ: NVDA (http://www.nasdaq.com/symbol/nvda))’s quarterly cash dividend saw an increase of 18%. The company has already returned an aggregate of $604 million to shareholders in the first three quarters of Fiscal Year 2016 and aims to take that amount up to $800 million by year end. Nvidia has mentioned that it aims to return a cool $1 billion to shareholders in FY17. The quarterly cash dividend will be paid on December 14, 2015 to all shareholders of record as of November 20, 2015.
Nvidia Q3 FY16 GAAP Financial Results ($ millions, except per-share figures)
<thead>
WCCFTech
Q3 FY16
Q2 FY16
Q3 FY15
Q/Q
Y/Y

</thead><tbody>
Revenue
$1,305
$1,153
$1,225
up 13%
up 7%


Gross margin
56.3%
55.0%
55.2%
up 1.3 pts
up 1.1 pts


Operating expenses
$489
$558
$463
down 12%
up 6%


Operating income
$245
$76
$213
up 222%
up 15%


Net income
$246
$26
$173
up 846%
up 42%


Diluted earnings per share
$0.44
$0.05
$0.31
up 780%
up 42%

</tbody>
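The Q/Q and Y/Y columns in the GAAP table above follow directly from the raw figures. As a sketch (the `pct_change` helper is illustrative, not from the article; figures in $ millions):

```python
def pct_change(current, prior):
    """Rounded percentage change between two reporting periods."""
    return round((current - prior) / prior * 100)

# GAAP figures from the table above
assert pct_change(1305, 1153) == 13   # revenue, Q/Q ("up 13%")
assert pct_change(1305, 1225) == 7    # revenue, Y/Y ("up 7%")
assert pct_change(245, 76) == 222     # operating income, Q/Q ("up 222%")
assert pct_change(246, 26) == 846     # net income, Q/Q ("up 846%")
print("table deltas check out")
```

The eye-catching 846% jump in net income is simply the arithmetic of a very weak prior quarter ($26 million), not an 8x improvement in the underlying business.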
Nvidia Q3 FY16 Non-GAAP Financial Results ($ millions, except per-share figures)
<thead>
WCCFTech
Q3 FY16
Q2 FY16
Q3 FY15
Q/Q
Y/Y

</thead><tbody>
Revenue
$1,305
$1,153
$1,225
up 13%
up 7%


Gross margin
56.5%
56.6%
55.5%
down 0.1 pts
up 1.0 pts


Operating expenses
$430
$421
$415
up 2%
up 4%


Operating income
$308
$231
$264
up 33%
up 17%


Net income
$255
$190
$220
up 34%
up 16%


Diluted earnings per share
$0.46
$0.34
$0.39
up 35%
up 18%

</tbody>
Quarterly Overview
This quarter saw the launch of the GM206-powered GTX 950 – Nvidia (NASDAQ: NVDA (http://www.nasdaq.com/symbol/nvda))'s entry-level Maxwell GPU, fully DX12 capable. Retailing at a $159 MSRP, it was targeted at MOBA gamers, particularly in the APAC market including China. Green also launched the GeForce Now game streaming service (dubbed the ‘Netflix of gaming’) and brought the SHIELD Android TV to certain European markets.
The company also ventured further into the territory of virtual reality and introduced two software development kits this year, namely GameWorks VR and DesignWorks VR. The first is a dev kit for creating gaming and other visual experiences for virtual reality devices, whereas the second is aimed at professionals wanting to integrate VR into their own apps.
Nvidia (NASDAQ: NVDA (http://www.nasdaq.com/symbol/nvda)) also unveiled the GRID 2.0 platform, which streams graphics-intensive workloads from the data center to remote client devices.
As far as the automotive industry goes, Nvidia (NASDAQ: NVDA (http://www.nasdaq.com/symbol/nvda))’s Tegra chips landed in more production vehicles, including models from Mercedes-Benz, Audi, Porsche, Bentley and Honda shown at the International Auto Show in Frankfurt. The partnership with Tesla Motors was also highlighted, since the chip is an integral part of the Tesla, powering the digital instrument cluster and infotainment system.
Outlook for Nvidia Corporation for Fourth Quarter Fiscal Year 2016

Revenue is expected to be $1.30 billion, plus or minus two percent.
GAAP and non-GAAP gross margins are expected to be 56.7 percent and 57.0 percent, respectively, plus or minus 50 basis points.
GAAP operating expenses are expected to be approximately $503 million. Non-GAAP operating expenses are expected to be approximately $445 million.
GAAP and non-GAAP tax rates for the fourth quarter of fiscal 2016 are expected to be 20 percent, plus or minus one percent.
The above GAAP outlook amounts exclude restructuring charges, which are expected to be in the range of $25 million to $35 million, in the fourth quarter of fiscal 2016.
Capital expenditures are expected to be approximately $20 million to $30 million.








Noticia:
http://wccftech.com/nvidia-nasdaq-nvda-q3-fy16-financial-results/#ixzz3qifbNgSf


Another excellent result for Nvidia.

Jorge-Vieira
10-11-15, 13:39
Nvidia: Our Laptop Gaming Is More Powerful Than Current Generation Consoles And Fully VR Capable

Nvidia is one of the two major makers of graphics processing chips – and the financially successful one, I might add – but it was still unable to score a design win on the console side of things (due to an under-powered CPU portion), even though its graphics chips are among the most powerful. At the recent quarterly earnings call, company CEO Mr. Jen-Hsun Huang noted (factually correctly) that the latest mobile chips from Nvidia are better than the chips in current generation consoles in nearly all aspects – especially considering the flagship mobility chip performs on par with a desktop GTX 980.

http://cdn.wccftech.com/wp-content/uploads/2015/09/NVIDIA-GeForce-GTX-980_Laptop_GPU-635x357.png (http://cdn.wccftech.com/wp-content/uploads/2015/09/NVIDIA-GeForce-GTX-980_Laptop_GPU.png)
An official slide showing off the GTX 980 (Notebook). @Nvidia Public Domain
Nvidia: our notebook GPUs are much more powerful than consoles and capable of immersive VR gaming
As most of our readers will know, there is a huge power gap between high-end PC gaming hardware and the hardware inside a console. Unlike an enthusiast PC rig, the budget for a single console unit is very limited – as is the thermal and electrical envelope within which its chips must operate. This means that high-end desktop graphics cards – which are often more expensive than an entire PS4 – end up with far more power than the tiny chips inside current generation consoles. Before we go any further, here is a quote from the earnings transcript:

I appreciate you bringing up the notebook work that we did. Maxwell, the GPU architecture that we created and the craftsmanship of the GPUs we made are so incredible that it’s finally possible for a notebook to be able to deliver the same level of performance as a desktop and our timing was timed so that people who want to enjoy VR with a notebook can finally do it. And so the latest generation of notebooks with GTX 980 are just amazing. I mean, they are so many times more powerful than a game console. It fits in a space smaller than a game console and it can drive VR. Everything you want to do, every game you want to play is possible on that thin laptop. And so the enthusiasm behind our launch with GTX 980 has been really, really fantastic. I appreciate you recognizing it. – CEO Nvidia, Jen-Hsun Huang
Now, the chip in question is the GTX 980. The mobile GTX 980 is a graphics chip like no other: it is equivalent in power to the desktop GTX 980 and has hefty cooling and wattage requirements. But what you lose in battery life and heat management, you gain back in raw performance. It is this chip that the CEO of Nvidia Corporation is referring to.




The comments he makes about the mobility chip being better than consoles might look like a case of sour grapes (and they might be just that!), but they are supported by very real facts. VR is clearly the new hot ticket of this industry – but unlike previous technologies, it demands sheer graphical horsepower. The current standard (taken from the Oculus Rift) is 2160×1200 at 90 fps. This is a graphical load that current generation consoles absolutely cannot handle right now – not unless you massively reduce the polygon count, graphical settings and textures, and probably not even then. Take a look at the following numbers:



The PS4 is rated at 1.84 TFLOPS
The Xbox One is rated at 1.31 TFLOPS
The GTX 980 (Notebook) is rated at 4.6 TFLOPS
The PS4 can handle (in most cases) a load of 1080p at 30-60 fps
The Oculus VR standard is 2160×1200 at 90 fps

The graphical load of the Oculus Rift VR standard is 3.75 times higher than 1080p at 30 fps and 1.875 times higher than 1080p at 60 fps. This gives an idea of why current generation consoles aren’t really in a position to power VR (without inducing severe motion sickness and nausea), whereas enthusiast rigs and notebooks, by sheer horsepower, very much are. Of course, they are also proportionally more expensive – something the CEO failed to point out.
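The 3.75× and 1.875× figures come straight from pixel throughput (width × height × frame rate). A quick sketch (function name is illustrative):

```python
def pixel_throughput(width, height, fps):
    """Pixels per second a display target must render."""
    return width * height * fps

# Display targets mentioned above
oculus = pixel_throughput(2160, 1200, 90)   # Oculus Rift VR standard
fhd_30 = pixel_throughput(1920, 1080, 30)   # 1080p @ 30 fps
fhd_60 = pixel_throughput(1920, 1080, 60)   # 1080p @ 60 fps

print(oculus / fhd_30)  # 3.75
print(oculus / fhd_60)  # 1.875
```

Note this counts raw pixels only; VR additionally renders two slightly different viewpoints and needs rock-solid frame times, so the real-world gap is even wider than the ratio suggests.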

A copy of the full transcript can be found over at SeekingAlpha (http://seekingalpha.com/article/3655446-nvidia-nvda-jen-hsun-huang-on-q3-2016-results-earnings-call-transcript).







Noticia:
http://wccftech.com/nvidia-laptop-gaming-more-powerful-consoles-vr-capable-ceo/#ixzz3r5zjMAnl

Jorge-Vieira
13-11-15, 14:25
NVIDIA Announces Upcoming Events for Financial Community (http://www.hardocp.com/news/2015/11/13/nvidia_announces_upcoming_events_for_financial_community/)


NVIDIA will present (http://nvidianews.nvidia.com/news/nvidia-announces-upcoming-events-for-financial-community-3102681) at the following conferences for the financial community:

Credit Suisse 19th Annual Technology Conference



Wednesday, Dec. 2, 10:30 a.m. Mountain time

The Phoenician, Scottsdale, Ariz.

Barclays Tech Conference



Wednesday, Dec. 9, 11:20 a.m. Pacific time

Palace Hotel, 2 New Montgomery St., San Francisco

Interested parties can listen to live audio webcasts of NVIDIA's presentations at these events, available on the NVIDIA website at nvidia.com/investor. A replay of the webcasts will be available for seven days afterward.



Noticia:
http://www.hardocp.com/news/2015/11/13/nvidia_announces_upcoming_events_for_financial_community#.VkXyxL9v708

Jorge-Vieira
16-11-15, 15:06
NVIDIA To Describe Path Forward For Accelerated Computing At SC15 Show (http://www.hardocp.com/news/2015/11/16/nvidia_to_describe_path_forward_for_accelerated_computing_at_sc15_show/)


As the world of supercomputing gathers for its biggest event of the year this week in Austin, Texas, NVIDIA will be at the center of it (http://blogs.nvidia.com/blog/2015/11/15/sc15/) – with a talk by our CEO, presentations by international researchers and demonstrations of the latest technology. Starting tomorrow, we’ll describe our vision for just what’s possible today in the world of computation, machine learning and visualization to 10,000 of the world’s leading researchers and technologists attending SC15.

Kicking off the show will be a presentation on the path forward for accelerated computing by NVIDIA CEO Jen-Hsun Huang and Ian Buck, who runs our Accelerated Computing business. They’ll also describe our latest work that makes it possible for web-services companies to accelerate their huge machine learning workloads, using the new hyperscale datacenter line for our Tesla Accelerated Computing Platform.



Noticia:
http://www.hardocp.com/news/2015/11/16/nvidia_to_describe_path_forward_for_accelerated_computing_at_sc15_show#.Vknw1L9v708

Jorge-Vieira
19-11-15, 14:18
NVIDIA Receives Perfect Score on Key Workplace Equality Index (http://www.hardocp.com/news/2015/11/19/nvidia_receives_perfect_score_on_key_workplace_equality_index/)

NVIDIA is committed to creating a workplace that leads our industry in equality, diversity and inclusiveness. So, it’s gratifying to see our work recognized by an important U.S. benchmark survey on LGBT workplace practices. Our recent perfect score of 100 on the 2016 Corporate Equality Index (http://blogs.nvidia.com/blog/2015/11/18/corporate-equality-index-2016/) results from a series of efforts that raised our rating from 75 just a year ago. Our work in the area of fairness around LGBT issues is part of our larger focus on making NVIDIA an even greater place to work. Earlier this month, for example, we expanded our parental leave benefits in the United States up to 30 weeks for birth mothers and 20 weeks for fathers and those adopting a child or serving as foster parents. These new benefits are among the absolute best in the technology industry.

Noticia:
http://www.hardocp.com/news/2015/11/19/nvidia_receives_perfect_score_on_key_workplace_equality_index#.Vk3aML9v708

Jorge-Vieira
27-11-15, 14:23
University of Toronto Receives $200K NVIDIA Foundation Award for Cancer Research (http://www.hardocp.com/news/2015/11/26/university_toronto_receives_200k_nvidia_foundation_award_for_cancer_research/)


A team at the University of Toronto, led by Dr. Brendan Frey, is advancing computational cancer research by developing a "genetic interpretation engine" – a GPU-powered, deep learning method (http://blogs.nvidia.com/blog/2015/11/25/compute-the-cure-3/) for identifying cancer-causing mutations. Today the NVIDIA Foundation, our employee-driven philanthropy arm, awarded Frey and his team a US$200,000 grant to further that work — and help them usher in an era of personal and effective cancer care.

Cancer kills almost 600,000 people each year in the U.S. alone. It can be caused by any one of an endless variety of mutations, across many different genes. This can make it hard to identify quickly and treat in a highly targeted way. As computers grow more powerful, scientists are delving into giant datasets and deploying computer simulations to research how cancer develops. Part of our "Compute the Cure" initiative, the NVIDIA Foundation’s grant will help Frey’s team scale up their GPU-powered methods so they can be applied to a large number of personal genomes in clinical settings, ultimately involving hundreds of thousands of genomes.



Noticia:
http://www.hardocp.com/news/2015/11/26/university_toronto_receives_200k_nvidia_foundation_award_for_cancer_research#.VlhnR79v708

Jorge-Vieira
02-12-15, 14:17
Win tickets to an exclusive Nvidia VR event in London

Want to try your hand at virtual reality? Then pay attention as we have a pair of tickets up for grabs to an exclusive NVIDIA VR event taking place at a secret location in London on December 8!
This unique experience will give you and a friend the opportunity to go hands-on with the very latest in VR technology.
NVIDIA's invitation is embedded below, but in a nutshell, all you need to do to be in with a chance of winning is leave a comment with an answer to the following question: outside of gaming, where can you see VR being used?
We'll pick our lucky winner this Friday, December 4, so get those entries in quick!
http://hexus.net/media/uploaded/2015/12/f51b4735-a372-4a12-90fb-3824d47a5e2f.png

Terms and Conditions


UK residents only
Closing date: 4/12/15
Winners contacted/announced: 4/12/15
Winners must be available to travel to London 8/12/15
Travel costs not included



Noticia:
http://hexus.net/tech/news/industry/88559-win-tickets-exclusive-nvidia-vr-event-london/

Jorge-Vieira
07-12-15, 14:05
Nvidia backtracks on Kodi pirate support

http://www.fudzilla.com/media/k2/items/cache/e3ba98f1214569ae1fa6ebe2f2752de4_L.jpg (http://www.fudzilla.com/media/k2/items/cache/e3ba98f1214569ae1fa6ebe2f2752de4_XL.jpg)

Shielded pirating activity

Red-faced staff at Nvidia published an article directing customers to a site known for 'pirate' add-ons in order to boost the capabilities of the Kodi media player.

TorrentFreak (https://torrentfreak.com/nvidia-inadvertently-endorses-site-offering-pirate-kodi-addons-151206/) pointed out that Nvidia sponsors Kodi, so sending its users to a site that is ripping it off is pretty silly. The article has since been taken down. However, it does highlight some of the problems that Nvidia has with Kodi.
Kodi is one of the best tools for viewing pirate content online, but you have to get your paws on some less-than-kosher add-ons first. Kodi is very friendly to third-party add-ons that can turn it into a piracy powerhouse, providing free access to movies, music and TV shows.
The XBMC Foundation, which runs Kodi, finds this pirate association a major headache and has been taking action against infringers, filing complaints with eBay over members who sold Kodi-loaded hardware for the purpose of viewing pirate content.
Then, in the middle of all this, Nvidia wrote an article explaining how to ‘Install Kodi Add-ons on NVIDIA SHIELD.’ It did not exactly advise users to install illegal software, but it pointed readers to TVaddons, a site that has become known not only for the promotion and development of totally legitimate software, but also for a range of ‘pirate’ add-ons for Kodi.
Nvidia explained how to install Fusion, which contains add-ons that allow the viewing of anything from user-uploaded content to the latest movies.
Soon after the article was published, complaints appeared on Kodi’s official forums.



Noticia:
http://www.fudzilla.com/news/39404-nvidia-backtracks-on-kodi-pirate-support

Jorge-Vieira
08-12-15, 15:03
How NVIDIA GRID Is Bringing GIS to Any Device, Anywhere (http://www.hardocp.com/news/2015/12/08/how_nvidia_grid_bringing_gis_to_any_device_anywhere/)


Life just got easier if your company is one of the 300,000 relying on Esri solutions to visually understand and analyze location data. That’s because the Esri ArcGIS Desktop Virtualization Appliance with NVIDIA GRID graphics virtualization technology (http://blogs.nvidia.com/blog/2015/12/07/nvidia-grid-esri-arcgis/) is here. Now, geographic information systems (GIS) applications, like Esri ArcGIS Pro, can be delivered to users in the field, on any connected device.

Previously, using ArcGIS Pro had been confined to high-end workstations. With NVIDIA GRID, users anywhere get the same high-end experience, but the application stays hosted in the data center. Whether for city planning, military operations, facilities management or natural resource conservation, all sorts of organizations use ArcGIS Pro. Schools, governments and businesses use it to analyze data, create maps, visualize scenarios and share information, in both 2D and 3D environments.



Noticia:
http://www.hardocp.com/news/2015/12/08/how_nvidia_grid_bringing_gis_to_any_device_anywhere#.VmbxH79v708

Jorge-Vieira
15-12-15, 14:04
Nvidia loses patent case against Samsung

http://www.fudzilla.com/media/k2/items/cache/22541e8388aa984576e44f2284e78879_L.jpg (http://www.fudzilla.com/media/k2/items/cache/22541e8388aa984576e44f2284e78879_XL.jpg)

ITC rules in Samsung's favour
The US International Trade Commission has cleared Samsung and Qualcomm of the claim that they used Nvidia patents without permission.

The case targeted all Android devices from Samsung, including some based on Qualcomm chips. This ruling is a big deal for Samsung, as the company sells a lot of Android phones and tablets in the US. Nvidia originally announced the case in September 2014, and we covered it in quite a bit of detail (http://www.fudzilla.com/35687-nvidia-launches-patent-lawsuit-against-samsung-and-qualcomm).
Today's announcement is just the final confirmation of the ruling from October when Samsung was given a get out of jail free card over two out of seven patents (http://www.fudzilla.com/news/graphics/38985-samsung-and-qualcomm-didn-t-infringe-two-nvidia-gpu-patents).

Samsung issued the following statement:

“We are very pleased that the ITC came to the right and just conclusion that we did nothing wrong.”

Nvidia's David M. Shannon, who serves as executive vice president, chief administrative officer and secretary, issued the following statement on the company's blog:

The U.S. International Trade Commission today declined our request to review a recent decision involving NVIDIA by one of its administrative law judges. He had ruled in October that Samsung and Qualcomm infringed one of our graphics patents, which wasn’t valid, and did not infringe two of our other patents.
While today’s ruling ends this particular case in the ITC, we remain firm in our belief that our patents are valid and have been infringed and will look to appeal the ITC’s decision.

Nvidia was hoping for better news from the ITC but, as David mentions in his blog post, the fight is not over: the company will continue to pursue Samsung and look into the possibility of appealing the ruling.



Noticia:
http://www.fudzilla.com/news/39463-nvidia-loses-patent-case-against-samsung

Jorge-Vieira
15-12-15, 15:04
NVIDIA Backs Data Science Bowl To Fight Heart Disease (http://www.hardocp.com/news/2015/12/15/nvidia_backs_data_science_bowl_to_fight_heart_disease/)


Cardiovascular disease is the leading cause of death for U.S. men and women. Doctors diagnose someone with the condition every 43 seconds. Finding new and better ways to speed the diagnosis of heart disease couldn’t be more important. So, we’re pleased to take part in a program aimed at applying technology to help — potentially improving the care of heart disease patients, and helping them live longer, healthier lives.

We’re supporting the second annual Data Science Bowl competition (http://blogs.nvidia.com/blog/2015/12/14/heart-disease-deep-learning/), collaborating with Booz Allen Hamilton, Kaggle’s data scientist community and the National Institutes of Health. In the competition, data scientists from around the world will work to help transform the diagnosis of heart disease. During the 90-day competition, teams will develop machine learning algorithms to analyze a thousand MRI images of existing patients. The goal: systems that automatically identify early indicators of heart disease. The team with the most accurate algorithm will win $125,000. The second and third-place teams get $50,000 and $25,000, respectively.



Noticia:
http://www.hardocp.com/news/2015/12/15/nvidia_backs_data_science_bowl_to_fight_heart_disease#.VnAr2r9v4vc

Jorge-Vieira
17-12-15, 15:15
ITC declines Nvidia's appeal in Samsung patent-infringement case

The United States International Trade Commission has denied Nvidia's request for a decision review regarding the company's ongoing patent spat with Samsung. Nvidia was hoping to get the ITC's initial findings (http://techreport.com/news/29180/itc-says-samsung-and-qualcomm-didnt-infringe-some-nvidia-patents) in Samsung's favor overturned. In the statement issued by the ITC (http://www.usitc.gov/secretary/fed_reg_notices/337/337_ta_932.pdf?source=govdelivery&utm_medium=email&utm_source=govdelivery), the agency says it again found no instances where Samsung was violating Nvidia's patents.
This decision upholds judge Thomas Pender's earlier findings, published on October 9. While the judge concluded that Samsung and Qualcomm did indeed violate one of Nvidia's patents, he also deemed that patent invalid. Furthermore, Pender limited the investigation's scope so it wouldn't include Qualcomm's graphics processing units.
Nvidia acknowledges that this decision marks the end of the road for its case against Samsung and Qualcomm. Despite that setback, the graphics company said in a recent blog post (http://blogs.nvidia.com/blog/2015/12/14/nvidia-itc/) that it remains firm in its belief that its patents are still valid, and that those patents were infringed. The company says it plans to appeal the ITC's decision.



Noticia:
http://techreport.com/news/29452/itc-declines-nvidia-appeal-in-samsung-patent-infringement-case

Jorge-Vieira
21-12-15, 14:37
NVIDIA Iray Comes to Autodesk Maya (http://www.hardocp.com/news/2015/12/21/nvidia_iray_comes_to_autodesk_maya/)

Autodesk Maya users can create their designs faster and more easily than ever, thanks to our new Iray plug-in (http://blogs.nvidia.com/blog/2015/12/18/nvidia-iray-plug-in-autodesk-maya/) for the popular software. Maya is used by tens of thousands of designers, as well as commercial production companies that service the needs of the design industry. These creatives can experience Iray for Maya with unlimited use during our 90-day trial, and they can purchase it through our online store for $295 a year as a node-locked or floating license with no processor restrictions. Iray is a great tool for Maya users because it excels in both rendering and post-production. It accurately predicts the final results of a design, reducing the number of prototypes designers need to produce. With Iray, designers can work interactively, adjusting physically accurate lighting or materials on the fly and getting rapid feedback.

Noticia:
http://www.hardocp.com/news/2015/12/21/nvidia_iray_comes_to_autodesk_maya#.VngOjVJv4vc

Jorge-Vieira
23-12-15, 15:25
Nvidia Used three Samsung patents in tablets

A judge at the US International Trade Commission (USITC) ruled this week that chipmaker Nvidia used three Samsung patents in its tablets, infringing Samsung's patent claims.
The USITC will now investigate whether measures need to be taken; the commission can actually prohibit the sale and import of Nvidia products in the United States. The patented technologies were all used in Nvidia's Shield tablets, and one of the patents expires next year (2016). Nvidia argued that Samsung’s patents date back to the 1990s, covering older technology that’s no longer used in modern chip designs. Its lawyers argued that Samsung had “chosen three patents that have been sitting on the shelf for years collecting nothing but dust.”
The lawsuit was filed by Korea-based Samsung as a direct response to an earlier lawsuit that Nvidia filed against Samsung. Nvidia claimed at the time that Samsung and chipmaker Qualcomm had embedded three of its patents in their GPU technology. On the 15th of December the USITC ruled that Samsung did not breach two of the three patent claims. Samsung did make use of the third, but that patent was ruled invalid. The trade agency staff, which acts as a third party in the case on behalf of the public, recommended that the judge find that Nvidia had infringed two of the three patents. One of the two is the patent that expires next year.
Patents 6,173,349 (http://www.google.com/patents/US6173349), 6,147,385 (http://www.google.com/patents/US6147385) and 7,804,734 (https://www.google.com/patents/US7804734) are at issue here; they cover a type of SRAM cell and two methods for reducing latency in the bus system, the data-signal buffers, and system memory.
The case also involves some of Nvidia’s customers, including Biostar Microtech International Corp., Jaton Corp., and EliteGroup Computer Systems Co.



Noticia:
http://www.guru3d.com/news-story/nvidia-used-three-samsung-patents-in-tablets.html

Jorge-Vieira
24-12-15, 14:49
Nvidia loses out in court cases

http://www.fudzilla.com/media/k2/items/cache/01278a2df25f22ace8adfaa37e6fed94_L.jpg (http://www.fudzilla.com/media/k2/items/cache/01278a2df25f22ace8adfaa37e6fed94_XL.jpg)

Should not have really tried it
Nvidia is probably regretting its move to turn patent troll last year, as it appears to have backfired terribly.

The GPU maker played itself as the underdog in a case against Qualcomm and Samsung – particularly as it had not really played the patent troll card before. Cynics thought that Nvidia was trying to squeeze a bit of extra cash from the mobile market, but people don’t usually like patent trolls.
However, when Nvidia sued Samsung, Samsung sued it back and asked the ITC to block NVIDIA’s products from being sold due to alleged patent infringement. It was then that it all went pear-shaped.
The ITC first ruled that Nvidia's patents were invalid, and has now found that Nvidia infringed Samsung's. This means that rather than cracking open the champers this crimbo, Nvidia is having to check its bank account to see if it can pay off the people it sued.
If it does not reach an agreement with Samsung, it could find its products banned from the US, something that is not good for one’s bottom line.
Nvidia might be hoping that the ITC will change its mind when the case is reviewed several months from now. However, that sort of thing is rather rare. Still, one of the patents that it infringed expires in 2016, meaning that if the review doesn’t go in its favour, products that use said patent would only be banned for several months.
The main question about the whole incident is “what was Nvidia thinking?” The whole Apple versus Samsung affair proved to the tech industry that engaging in patent wars is a complete waste of time. Technology companies are a little like the superpowers in the Cold War. They all have dossiers of patents which they never take out unless someone sues them first. Nvidia has patents, but nowhere near as many as the likes of Qualcomm and Samsung, and it could not survive any first wave of writs.
Apple also discovered that even with an ITC ruling, it was never going to take a tech product off the shelves. By the time the court case was done and dusted, the product being sued over was out of date and no longer sold.
Now Nvidia will have to appeal, and the case will end up costing much more than it could have hoped to squeeze out of Samsung. To make matters worse, the ITC will have taken away some of the weapons in its own portfolio from which it could have collected licence fees.
If only AMD had got Zen out, Nvidia might have had some real big headaches about now.



Noticia:
http://www.fudzilla.com/news/processors/39529-nvidia-loses-out-in-court-cases

Jorge-Vieira
24-12-15, 15:07
NVIDIA Announces Upcoming Event for Financial Community (http://www.hardocp.com/news/2015/12/24/nvidia_announces_upcoming_event_for_financial_comm unity/)


NVIDIA will present (http://nvidianews.nvidia.com/news/nvidia-announces-upcoming-event-for-financial-community-3116166) at the following conference for the financial community:




J.P. Morgan 14th Annual Tech Forum at the 2016 International CES

Wednesday, Jan. 6, 7:50 a.m. Pacific time

Bellagio Hotel, Las Vegas

Interested parties can listen to a live audio webcast of NVIDIA's presentation at this event, available on the NVIDIA website at nvidia.com/investor. A replay of the webcast will be available for seven days afterward.



Noticia:
http://www.hardocp.com/news/2015/12/24/nvidia_announces_upcoming_event_for_financial_comm unity#.VnwJ_lJv4vc

Jorge-Vieira
05-01-16, 09:05
Nvidia faces "favorable" GPU trends going into 2016

http://www.fudzilla.com/media/k2/items/cache/33a4b4b18178dbb560328236c9ca0132_L.jpg (http://www.fudzilla.com/media/k2/items/cache/33a4b4b18178dbb560328236c9ca0132_XL.jpg)

Strong performance in 2015 due to gaming segment
According to a recent report from MKM Partners, Nvidia's target stock price should rise due to an increase in gaming segment revenue.

MKM Partners has increased Nvidia’s 12-month target stock price (http://www.businessfinancenews.com/27095-nvidia-to-reach-greater-heights-in-2016-mkm-partners/) from $36 to $39 and expects the company to have "favorable" GPU trends going into 2016 due to its gaming segment generating 58 percent of revenues in 2015.
The firm notes that GPU card price trends and availability are good indicators of Nvidia’s performance going forward. For the third quarter of FY16 (July 27, 2015 – October 25, 2015), the company reported a net income of $245 million, a 42 percent increase year-over-year. For the fourth quarter of FY16 (October 26, 2015 – January 25, 2016), the company anticipates gross margins of 56.7 percent.


http://www.fudzilla.com/images/stories/2016/January/sk-hynix-hbm2-specification.jpg

Image credit: Extremetech.com
In March, the company announced Pascal, its next-generation GPU architecture for 2016. The GPUs will make use of High Bandwidth Memory (HBM), similar to what AMD launched with its 28nm R9 Fury series GPU lineup back in June. The difference is that, unlike AMD’s first-generation high-bandwidth memory, Pascal GPUs will offer up to 16GB of VRAM thanks to their use of second-generation high-bandwidth memory (HBM2).
During Nvidia’s GTC conference, company co-founder and CEO Jen-Hsun Huang stated that HBM2 "will give bandwidth in excess of a terabyte per second, more than double what AMD's Fury X cards currently offer.”
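The arithmetic behind that claim can be sketched with the commonly cited per-stack figures. Note the pin rates and bus width below are assumptions drawn from the HBM specifications, not numbers given in the article: HBM1 runs at roughly 1 Gb/s per pin over a 1024-bit interface per stack, while HBM2 doubles the pin rate.

```python
# Rough bandwidth check for the HBM1 vs. HBM2 claim.
# Pin rates and bus widths are assumed, commonly cited figures.

def stack_bandwidth_gb_s(pin_rate_gbit: float, bus_width_bits: int = 1024) -> float:
    """Peak bandwidth of a single HBM stack in GB/s."""
    return pin_rate_gbit * bus_width_bits / 8  # bits -> bytes

hbm1_total = 4 * stack_bandwidth_gb_s(1.0)  # Fury X: four HBM1 stacks
hbm2_total = 4 * stack_bandwidth_gb_s(2.0)  # four HBM2 stacks at 2 Gb/s per pin

print(hbm1_total, hbm2_total)  # 512.0 1024.0
```

At 2 Gb/s per pin, four stacks reach 1 TB/s, exactly double the Fury X's 512 GB/s; the "more than double" wording presumably anticipates faster HBM2 speed bins.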
Nevertheless, we look forward to seeing hardware in late Q1 or early Q2 (http://www.3dcenter.org/news/reihenweise-pascal-und-volta-codenamen-aufgetaucht-gp100-gp102-gp104-gp106-gp107-gp10b-gv100) with more details later today. Nvidia will hold a CES 2016 press conference (http://www.nvidia.com/object/ces2016.html) in Las Vegas later this afternoon at 6:00pm PST.



Noticia:
http://www.fudzilla.com/news/graphics/39584-nvidia-faces-favorable-gpu-trends-going-into-2016

Jorge-Vieira
18-01-16, 14:50
NVIDIA-Powered Cars Unveiled At Detroit Motor Show (http://www.hardocp.com/news/2016/01/18/nvidiapowered_cars_unveiled_at_detroit_motor_show/)


It may be 10 degrees outside of the Cobo Convention Center in downtown Detroit, but things are heating up inside as the 2016 North American International Auto Show (aka the Detroit Auto Show) opens its doors to the public tomorrow. The Big 3 and most other major automakers are showcasing what the world will soon see on the streets. Here’s a taste of some of the NVIDIA-powered cars on display (http://blogs.nvidia.com/blog/2016/01/16/detroit-motor-show/).

Doubling down on its Motor Trend "SUV of the Year Award," the Volvo XC90 just bagged the "North American Truck of the Year" at the show. Volvo will soon add NVIDIA DRIVE PX 2 as the artificial intelligence brain of this luxury SUV when its Drive Me program kicks off next year. This first-of-its-kind program will let Volvo customers operate leased XC90s in a fully autonomous mode around Volvo’s hometown of Gothenburg, Sweden.

Presenting another highlight of its product portfolio, Porsche is introducing the 911 Turbo and Turbo S, with neck-snapping acceleration and less fuel consumption compared to previous generations. But the car itself isn’t the only thing that’s quick. The 911’s new infotainment system, powered by NVIDIA, can access the latest traffic information and even visualize routes with 360-degree images and satellite images. It’ll make sure you not only arrive in style, but exactly where you need to be.



Noticia:
http://www.hardocp.com/news/2016/01/18/nvidiapowered_cars_unveiled_at_detroit_motor_show# .Vpz7olJv4vc

Jorge-Vieira
28-01-16, 17:18
NVIDIA Sets Conference Call for Fourth-Quarter Financial Results (http://www.hardocp.com/news/2016/01/28/nvidia_sets_conference_call_for_fourthquarter_fina ncial_results/)

NVIDIA will host a conference call (http://nvidianews.nvidia.com/news/nvidia-sets-conference-call-for-fourth-quarter-financial-results-3136856) on Wednesday, Feb. 17, at 2 p.m. PT (5 p.m. ET) to discuss its financial results for the fourth quarter and fiscal year 2016, ending Jan. 31, 2016. The call will be webcast live (in listen-only mode) at the following websites: nvidia.com and streetevents.com. The company's prepared remarks will be followed by a question and answer session, which will be limited to questions from financial analysts and institutional investors. Ahead of the call, NVIDIA will provide written commentary on its fourth-quarter results from its CFO. This material will be posted to nvidia.com/ir immediately after the company's results are publicly announced at approximately 1:20 p.m. PT. To listen to the conference call, dial (212) 231-2927; no password is required. A replay of the conference call will be available until Feb. 24, 2016, at (402) 977-9140, conference ID 21794034. The webcast will be recorded and available for replay until the company's conference call to discuss financial results for its first quarter of fiscal year 2017.

Noticia:
http://www.hardocp.com/news/2016/01/28/nvidia_sets_conference_call_for_fourthquarter_fina ncial_results#.VqpNIlJv4vc

Jorge-Vieira
05-02-16, 18:21
NVIDIA Looking HOT to nVestors (http://www.hardocp.com/news/2016/02/05/nvidia_looking_hot_to_nvestors/)

While many have questioned NVIDIA's business moves over the last few years outside of its main consumer GPU realm, it seems as though some folks are making some big bets on NVIDIA (http://www.intercooleronline.com/stocks/dorsey-wright-associates-takes-position-in-nvidia-co-nvda/379494/) in the coming months. Dorsey Wright & Associates purchased over $3.5M worth of NVIDIA stock in Q4. However, they are not alone, as ClariVest Asset Management LLC purchased over $20.5M worth of stock.



A number of other institutional investors have also recently bought and sold shares of the company. ClariVest Asset Management LLC purchased a new position in NVIDIA during the fourth quarter worth approximately $20,651,000. Acadian Asset Management increased its position in shares of NVIDIA by 3,321.5% in the fourth quarter. Acadian Asset Management now owns 330,960 shares of the computer hardware maker’s stock worth $10,908,000 after buying an additional 321,287 shares during the last quarter.
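The eye-catching 3,321.5% figure does follow from the share counts quoted in the article; a quick sketch of the arithmetic:

```python
# Deriving Acadian Asset Management's reported position increase
# from the share counts quoted above.

shares_now = 330_960       # shares held at the end of the quarter
shares_bought = 321_287    # additional shares bought during the quarter

shares_before = shares_now - shares_bought   # implied prior holding
increase_pct = shares_bought / shares_before * 100

print(shares_before, round(increase_pct, 1))  # 9673 3321.5
```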



Noticia:
http://www.hardocp.com/news/2016/02/05/nvidia_looking_hot_to_nvestors#.VrToCFJv4vc

Jorge-Vieira
07-02-16, 15:30
Nvidia Wins Samsung Memory Patent-Infringement Trial – Given the All Clear to Continue Using Memory Chips for its GPUs

In what is turning into an episodic soap opera, this iteration of the Nvidia v. Samsung case has finally seen some wins for the green camp. Nvidia is one of the two largest manufacturers of graphics cards on the planet and entered into a legal battle with Samsung some time ago. While (as of yet) Nvidia lost the original case against Samsung, the company has won the patent-infringement case (regarding computer memory-chip technology) brought by Samsung in retaliation – which consisted of a total of four claims, three of which were tossed. (via Bloomberg (http://www.bloomberg.com/news/articles/2016-02-05/nvidia-wins-trial-brought-by-samsung-over-memory-chip-patent))
http://cdn.wccftech.com/wp-content/uploads/2016/02/nvidia-samsung-trial.jpg (http://cdn.wccftech.com/wp-content/uploads/2016/02/nvidia-samsung-trial.jpg)
Jury rules that Nvidia’s use of Samsung’s memory chips does not constitute infringement
The tale originally started when Nvidia filed two lawsuits against Samsung and Qualcomm. The first was filed at the International Trade Commission in September 2014, while the second was filed in federal court in Wilmington, Delaware. The first lawsuit was meant to stop the import of Samsung products, while the second sought damages. Back in the day, Nvidia had won the race beyond 2D graphics cards and brought the first modern (3D) graphics processing chip with close-to-modern capabilities to market in 1999. Nvidia claimed that this meant it had effectively invented the GPU. The rest, as they say, is history.
Almost a month ago, Samsung fired back with its own lawsuits and it looked like the scales would tip in its favor with Nvidia stating the following:
“A judge at the U.S. International Trade Commission this week issued his initial determination that we had infringed three of Samsung’s patents. He was ruling on Samsung’s retaliatory lawsuit against our own suit against them in the ITC. Since we don’t import any significant amount of products directly, they had filed the suit to enjoin the imports of several small companies that use our products. We are disappointed by this initial decision. We will seek a review by the full ITC, which will take several months to issue its ruling. We will continue to keep you informed of significant developments.”

Now, after the original ruling, the case that had started with four patent-infringement claims from Samsung was reduced to just one claim. Samsung had dropped one patent before trial, and of the remaining three, two were deemed mistrials and therefore negated. For the remaining patent-infringement claim, Nvidia scored a win last Friday when the court declared that it had not infringed any of Samsung's patents. An Nvidia spokesperson had the following to say:
“We are pleased with the outcome of this case, which reflects the jury’s careful attention to the facts and the law that applied,” said Hector Marinez, an Nvidia spokesman.

The case is Samsung Electronics Co. v. Nvidia Corp., 14cv757, U.S. District Court for the Eastern District of Virginia (Richmond).







Noticia:
http://wccftech.com/nvidia-wins-samsung-memory-patent-trial/#ixzz3zUqPGw7c

Enzo
07-02-16, 16:22
Now, in retaliation, Samsung is going to turn... red ;)

Jorge-Vieira
10-02-16, 16:55
NVIDIA GRID Boosts Blast Extreme in VMware Horizon (http://www.hardocp.com/news/2016/02/10/nvidia_grid_boosts_blast_extreme_in_vmware_horizon )

NVIDIA GRID acceleration of Blast Extreme — a new protocol for optimizing the mobile cloud — is now supported in VMware Horizon 7 (http://blogs.nvidia.com/blog/2016/02/09/nvidia-grid-blast-extreme-vmware-horizon/). What’s that mean for IT managers? Reduced latency. Improved performance. And up to 18 percent more users. NVIDIA and VMware have been working together for years to improve the virtualized computing user experience and enable a whole new class of virtual use cases. We were the first to enable hardware-accelerated graphics rendering in VMware Horizon View. Then we enabled the first virtualized graphics acceleration in Horizon with GRID vGPU. Now, using the new Blast Extreme protocol, NVIDIA GRID offloads encoding from the CPU to the GPU. This frees up resources and lowers the demand on network infrastructure, which lets organizations reach more remote users. In tests of key applications like ESRI ArcGIS Pro, scalability increased by up to 18 percent. That’s without investing in new hardware.

Noticia:
http://www.hardocp.com/news/2016/02/10/nvidia_grid_boosts_blast_extreme_in_vmware_horizon #.VrtrY1Jv4vc

Jorge-Vieira
16-02-16, 16:21
GeForce GTX 900 open-source firmware images and support code released

http://www.fudzilla.com/media/k2/items/cache/6faf8e49e7cd73c8b0e6d91489be5a51_L.jpg (http://www.fudzilla.com/media/k2/items/cache/6faf8e49e7cd73c8b0e6d91489be5a51_XL.jpg)

Steps towards Open source
Nvidia has released the signed firmware images and support code which will enable the GeForce GTX 900 "Maxwell" series to run under its open-source driver.

An Nvidia spokesman said that it has taken a long time for this to happen and he was sorry it took so long.
According to Phoronix (http://www.phoronix.com/scan.php?page=news_item&px=NVIDIA-Releases-Signed-Blobs)the whole thing has been a fiasco. The lack of driver signing held up the open-source Linux driver from supporting hardware acceleration on Nvidia's latest generation GPUs.

With Pascal only a few months away, Nvidia still had not got Maxwell working with open-source 3D. Nvidia's only saving grace is that its official, proprietary Linux driver continues to work quite well.
The time taken to get the firmware signed led Nouveau developers to call the Maxwell GPUs "VERY Open-Source Unfriendly".
Firmware blobs have been signed for the GM200 and GM204; they are in a separate Git repository but will be merged into linux-firmware once all the DRM driver code is ready.
There is also the "secboot" code for Nouveau, which provides the pieces the open-source kernel driver needs to load the signed firmware.
Of course, this does not mean that there is working open-source 3D for the GTX 900 series yet, but the necessary enablement should mean that it can be working soon:
"the changes for basic support are rather modest, and hopefully this pre-release will be enough to enable patches to land in Mesa."
Nouveau developers should get this code prepped and landed in the Linux 4.6 kernel cycle.



Noticia:
http://www.fudzilla.com/news/processors/39967-geforce-gtx-900-open-source-firmware-images-and-support-code-released


Good news, especially for Linux.

Jorge-Vieira
18-02-16, 13:42
NVIDIA Announces Q4 FY16 Earnings – Reports Record Quarterly Revenue of $1.40 Billion and Record Full-Year Revenue of $5.01 Billion

NVIDIA has announced earnings for Q4 FY16 of $1.40 billion, an increase of 7% from the previous quarter's $1.30 billion. NVIDIA has also posted earnings for fiscal year 2016 at a record $5.01 billion, up 7% from $4.68 billion in fiscal year 2015. During FY2016, NVIDIA saw growth in all market sectors, including Gaming, Professional Visualization, Datacenter and Automotive.
http://cdn.wccftech.com/wp-content/uploads/2016/02/NVIDIA-Q4-FY16-Revenue-Report-635x345.jpg (http://cdn.wccftech.com/wp-content/uploads/2016/02/NVIDIA-Q4-FY16-Revenue-Report.jpg)
NVIDIA Reports Record Revenue in Q4 2015 and Fiscal Year 2016
For NVIDIA, 2015 was a great year that saw the launch of several products aimed at the gaming, datacenter and visualization markets. The Maxwell architecture remained the highlight of the year, with unprecedented sales in the consumer market leading NVIDIA to over 80% discrete graphics market share. Looking at the overall numbers, Q4 2015 revenue is reported at $1.40 billion, up 7% from the previous quarter and up 12% year over year. Looking at the yearly results, NVIDIA posted record revenue of $5.01 billion, up 7% from fiscal year 2015 ($4.68 billion).
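As a quick sanity check, the quoted growth percentages follow from the revenue figures themselves (a sketch; figures in billions of dollars as reported above):

```python
# Cross-checking the growth percentages from the reported revenue figures.

q4_fy16, q4_fy15 = 1.40, 1.25   # fourth-quarter revenue, $B
fy16, fy15 = 5.01, 4.68         # full-year revenue, $B

q4_yoy = (q4_fy16 - q4_fy15) / q4_fy15 * 100  # year-over-year quarterly growth
fy_yoy = (fy16 - fy15) / fy15 * 100           # full-year growth

print(round(q4_yoy), round(fy_yoy))  # 12 7
```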
NVIDIA Quarter to Quarter Earnings: http://cdn.wccftech.com/wp-content/uploads/2016/02/NVIDIA-Earnings-Q4-FY16-635x345.jpg (http://cdn.wccftech.com/wp-content/uploads/2016/02/NVIDIA-Earnings-Q4-FY16.jpg)
NVIDIA Year to Year Earnings: http://cdn.wccftech.com/wp-content/uploads/2016/02/NVIDIA-Earnings-FY16-Summary-635x198.jpg (http://cdn.wccftech.com/wp-content/uploads/2016/02/NVIDIA-Earnings-FY16-Summary.jpg)
“We had another record quarter, capping a record year,” said Jen-Hsun Huang, co-founder and chief executive officer, NVIDIA. “Our strategy is to create specialized accelerated computing platforms for large growth markets that demand the 10x boost in performance we offer. Each platform leverages our focused investment in building the world’s most advanced GPU technology.
“NVIDIA is at the center of four exciting growth opportunities — PC gaming, VR, deep learning, and self-driving cars. We are especially excited about deep learning, a breakthrough in artificial intelligence algorithms that takes advantage of our GPU’s ability to process data simultaneously. via NVIDIA (http://nvidianews.nvidia.com/news/nvidia-announces-financial-results-for-the-fourth-quarter-and-fiscal-2016)
NVIDIA GPU Segments Performance and Revenue Results:
Dissecting NVIDIA's market performance, the GPU business reported revenue of $1.17 billion, up 6% from the previous quarter and up 10% from Q4 FY2015. The overall performance of NVIDIA GPUs in FY16 was up 9% compared to FY15. The Tegra business reported revenue of $157.0 million in Q4 FY2016, a 22% increase from the previous quarter and a 40% increase from Q4 FY15. Compared to FY15, NVIDIA’s Tegra department saw a decline of 3% in FY16.
NVIDIA Business Performance Quarter to Quarter: http://cdn.wccftech.com/wp-content/uploads/2016/02/NVIDIA-Earnings-Q4-FY16_Buisness-635x120.jpg (http://cdn.wccftech.com/wp-content/uploads/2016/02/NVIDIA-Earnings-Q4-FY16_Buisness.jpg)
NVIDIA Business Performance Year to Year: http://cdn.wccftech.com/wp-content/uploads/2016/02/NVIDIA-Earnings-FY16_Buisness-635x120.jpg (http://cdn.wccftech.com/wp-content/uploads/2016/02/NVIDIA-Earnings-FY16_Buisness.jpg)

NVIDIA GeForce GTX Gaming GPUs: NVIDIA has reported that gaming, datacenter and automotive were the key drivers for them in Fiscal Year 2016 that led to a 7% increase in revenue. The GPU business is divided into a range of segments which include the GeForce Gaming GPUs, Quadro Professional GPUs and the Tesla/Grid for Datacenter markets. The GeForce GTX gaming GPU revenue grew 21% from the last year which was due to the widely successful GeForce GTX 980 Ti (http://wccftech.com/nvidia-geforce-gtx-980-ti-features-full-directx-12-support-arrives-geforce-lineup-competitive-649-pricing-titanx-performance/) graphics card and also the sub-$200 US cards such as the GeForce GTX 960 (http://wccftech.com/nvidia-officially-launches-geforce-gtx-960-graphics-card-features-gm206-gpu-price-199/) and GeForce GTX 950 (http://wccftech.com/nvidia-maxwell-based-geforce-gtx-950-launched-149-priced-insane-graphics-card/). The GeForce GTX 970 graphics cards witnessed an increased usage by gamers which led it to become the most popular GPU (http://wccftech.com/nvidia-geforce-gtx-970-steam-survey/) on Steam.

Gaming:


Announced the GeForce GTX VR Ready program (http://wccftech.com/nvidia-geforce-gtx-vr-ready-program/) — in conjunction with PC companies, notebook makers and add-in card providers – to help users discover systems that will provide great virtual reality experiences.
Released NVIDIA GameWorks VR (http://wccftech.com/nvidia-gameworks-vr-large-scale-vr-adoption/), a software development kit for developers of VR software and headsets for gaming.

NVIDIA Quadro Professional GPUs: NVIDIA’s Quadro GPUs for the professional market were the second most profitable for NVIDIA, reporting a revenue of $204 million which is up 7% both sequentially and year over year. The lineup was fueled by new Maxwell powered cards.
Professional Visualization:


Rolled out NVIDIA Iray plugins for Autodesk Maya and Autodesk 3ds Max, which enable users of these applications to create designs incorporating real-world lights and materials faster and more easily than before.
Released NVIDIA DesignWorks VR, a software development kit for developers of VR software and headsets for enterprise.

NVIDIA Tesla / Grid Datacenter GPUs: The datacenter market, which includes the Tesla and Grid GPUs, also saw revenue increase 18% sequentially and 10% year over year. The segment reported revenue of $97 million, driven by the latest Maxwell-based Tesla M40 and Tesla M4 cards aimed at visualization and Grid servers. The market didn’t perform that well due to Maxwell having a non-compute-heavy architecture, but it is expected to be a major driver for the company in fiscal year 2017.

“Deep learning is a new computing model that teaches computers to find patterns and make predictions, extracting powerful insights from massive quantities of data. We are working with thousands of companies that are applying the power of deep learning in fields ranging from life sciences and financial services to the Internet of Things,” he said. via NVIDIA (http://nvidianews.nvidia.com/news/nvidia-announces-financial-results-for-the-fourth-quarter-and-fiscal-2016)
http://cdn.wccftech.com/wp-content/uploads/2016/02/NVIDIA-Q4-2016-Earnings-635x357.jpg (http://cdn.wccftech.com/wp-content/uploads/2016/02/NVIDIA-Q4-2016-Earnings.jpg)
Datacenter:


Introduced an end-to-end hyperscale datacenter deep learning platform — consisting of two accelerators, the NVIDIA Tesla M40 and NVIDIA Tesla M4 (http://wccftech.com/nvidia-launches-tesla-m40-tesla-m4-gpus-data-centers-tegra-x1-powered-jetson-tx1-module-announced/) — that lets web-services companies accelerate deep learning workloads.
Revealed new breakthroughs from leading web-services groups using NVIDIA GPUs:

Facebook is using the NVIDIA Tesla accelerated computing platform to power Big Sur, its next-generation computing system for machine learning applications.
Alibaba’s AliCloud cloud computing business is working with NVIDIA to promote China’s first GPU-accelerated, cloud-based, high performance computing platform.
IBM is adding support for NVIDIA GPU accelerators to its Watson cognitive computing platform.
Google is open-sourcing its TensorFlow deep-learning framework, which can be accelerated on GPUs.
Microsoft’s Computational Network Toolkit was integrated with Azure GPU Lab, enabling neural nets for speech recognition that are up to 10x faster than their predecessors.



Tegra processor revenue for the fourth quarter of $157 million was up 22 percent sequentially and up 40 percent year on year, reflecting growth in Tegra development services and automotive. Automotive revenue of $93 million from infotainment modules and product-development contracts increased 18 percent sequentially and 68 percent from a year earlier. License revenue from NVIDIA’s patent license agreement with Intel was flat at $66 million for the fourth quarter.
NVIDIA’s Outlook For Q1 FY17:
Moving into fiscal year 2017, NVIDIA expects revenue to reach $1.26 billion (+/- 2%). NVIDIA’s outlook for the first quarter of fiscal 2017 is as follows:


Revenue is expected to be $1.26 billion, plus or minus two percent.
GAAP and non-GAAP gross margins are expected to be 57.2 percent and 57.5 percent, respectively, plus or minus 50 basis points.
GAAP operating expenses are expected to be approximately $500 million. Non-GAAP operating expenses are expected to be approximately $445 million.
GAAP and non-GAAP tax rates for the first quarter of fiscal 2017 are both expected to be 19 percent, plus or minus one percent.
Capital expenditures are expected to be approximately $35 million to $45 million.
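Taken literally, the "plus or minus two percent" revenue guidance implies the following band (a sketch of the arithmetic, not additional guidance):

```python
# Revenue band implied by guidance of $1.26 billion, plus or minus 2%.

midpoint = 1.26e9  # dollars
low = round(midpoint * 0.98)
high = round(midpoint * 1.02)

print(low, high)  # 1234800000 1285200000, i.e. roughly $1.23B to $1.29B
```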








Noticia:
http://wccftech.com/nvidia-q4-fy16-earnings-financials/#ixzz40WieWVwW


Yet another (http://wccftech.com/nvidia-q4-fy16-earnings-financials/#ixzz40WieWVwW) record for nVidia!!!!
nVidia's ability to make money is impressive.

Jorge-Vieira
18-02-16, 14:03
NVIDIA has 100 million GeForce gamers, has high hopes for VR gaming


NVIDIA has released its financial results for fiscal 2016 today, as well as its Q4 results ending January 31. NVIDIA has had a stellar year, with record revenue for the final quarter and across the entire year.

http://imagescdn.tweaktown.com/news/5/0/50485_08_nvidia-100-million-geforce-gamers-high-hopes-vr-gaming.jpg

NVIDIA's co-founder and CEO Jen-Hsun Huang said in a statement alongside the financial results that the excitement around the VR market will boost the demand for its GeForce products. But, it has also noticed that other parts of its businesses are booming, too. Huang said: "PC gaming, VR, deep learning, and self-driving cars" are all big focuses for NVIDIA, but VR is a soft spot for the company - and that has me excited, too.

Huang said in the company's investors call: "We can grow by introducing new game platforms. The installed base of 100 million GeForce gamers in the world has a chance to upgrade when that happens". With NVIDIA's GPU Technology Conference right around the corner, we are foaming at the mouth over rumors that the successor to the Titan X will be revealed, as it'll be a Pascal-powered beast.






Noticia:
http://www.tweaktown.com/news/50485/nvidia-100-million-geforce-gamers-high-hopes-vr-gaming/index.html


That's a lot of gamers with GeForce!!!!!












Nvidia does well by losing its PC addiction

http://www.fudzilla.com/media/k2/items/cache/329361ec29217e61ef603be70ff672fd_L.jpg (http://www.fudzilla.com/media/k2/items/cache/329361ec29217e61ef603be70ff672fd_XL.jpg)

Beat Wall Street's expectations
Graphics chip maker Nvidia reported earnings for its fourth fiscal quarter that surprised the cocaine nose jobs of Wall Street.

The results show that while the PC industry is still suffering, people are still buying rather a lot of Nvidia chips. Nvidia's King Jen-Hsun Huang seems to have moved his kingdom out of trouble.
The Santa Clara, California-based company reported non-GAAP earnings of 52 cents a share on revenue of $1.4 billion.
Analysts had expected non-GAAP earnings of 32 cents a share on revenue of $1.31 billion. The results appeared to surprise even Nvidia itself, which had predicted revenue of $1.3 billion, plus or minus 2 percent.
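The size of the beat is easy to quantify from the figures above (a sketch; non-GAAP EPS in dollars, revenue in billions):

```python
# How far results came in above analyst expectations.

eps_actual, eps_expected = 0.52, 0.32   # non-GAAP earnings per share, $
rev_actual, rev_expected = 1.40, 1.31   # revenue, $B

eps_beat_pct = (eps_actual - eps_expected) / eps_expected * 100
rev_beat_pct = (rev_actual - rev_expected) / rev_expected * 100

print(round(eps_beat_pct, 1), round(rev_beat_pct, 1))  # 62.5 6.9
```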
Nvidia’s graphics processing unit revenue was up 10 percent. Gaming revenue was up 25 percent from a year ago, thanks in part to interest in virtual reality and strong sales of holiday games.

King Huang proclaimed: “We had another record quarter, capping a record year. Our strategy is to create specialized accelerated computing platforms for large growth markets that demand the 10x boost in performance we offer. Each platform leverages our focused investment in building the world’s most advanced GPU technology. Nvidia is at the center of four exciting growth opportunities — PC gaming, VR, deep learning, and self-driving cars. We are especially excited about deep learning, a breakthrough in artificial intelligence algorithms that takes advantage of our GPU’s ability to process data simultaneously,” he added.

And he said, “Deep learning is a new computing model that teaches computers to find patterns and make predictions, extracting powerful insights from massive quantities of data. We are working with thousands of companies that are applying the power of deep learning in fields ranging from life sciences and financial services to the Internet of Things.” Huang has dedicated most of the time in Nvidia’s recent press conferences to Nvidia’s attempts to create supercomputers for cars (http://www.fudzilla.com/news/graphics/39994-volvo-dual-pascal-based-self-driving-car-comes-next-year), which could fuel innovations such as dashboard electronics, infotainment systems, and self-driving cars.
Nvidia is engaged with 3,500 companies in the deep learning market, said Colette Kress, chief financial officer, in a conference call with analysts today. It’s a good thing that the emerging markets are taking off, as Nvidia made a big move into mobile chips and then decided to exit that market.



Noticia:
http://www.fudzilla.com/news/39996-nvidia-does-well-by-losing-its-pc-addiction



Jorge-Vieira
19-02-16, 15:53
Nvidia has 20 to 25 million powered cars in the pipeline

http://www.fudzilla.com/media/k2/items/cache/7f317dfde161d0dc9f9becca49bf712e_L.jpg (http://www.fudzilla.com/media/k2/items/cache/7f317dfde161d0dc9f9becca49bf712e_XL.jpg)

Nvidia's automotive future is bright
After 23 and a half years of trying, Nvidia finally became a 5 billion dollar revenue company. This is a great result considering that all PC-oriented companies are suffering from the PC market decline. Now the company has 20 to 25 million cars that will ship with Nvidia technology inside.

Nvidia tried its luck with music and video players, and later with mobile phones, but abandoned both markets as they dried up much faster than most expected. A few years back Nvidia started putting Tegra SoCs in cars, and that strategy has worked out really well.
Jen-Hsun Huang, President and Chief Executive Officer at Nvidia, confirmed to analysts that Nvidia shipped in 5 to 6 million cars (probably in the last quarter). In August 2015, Nvidia confirmed that the company had 70 percent year-over-year growth and that it had sold SoCs shipping in 30 million cars. The growth number was slightly adjusted two quarters later, as Nvidia now claims 68 percent growth YoY.
The company also claims that it has an additional 20 to 25 million units to ship in its pipeline. These are design wins that took quite a few years of engineering to ramp into production. This gives Nvidia good visibility of the pipeline and the opportunities ahead. In other words, the future is bright. The computerized car has become a highly desirable end-user feature, and Audi and Tesla owners have proven that. Honda, for example, offers a 7-inch Android-powered navigation system in Europe for an additional €700, a small price to pay for a rather functional infotainment system with navigation. These things used to cost between €2,000 and €3,000 just a few years back, and were neither touch-enabled nor as powerful as the infotainment systems of today.
The second part of the Tegra car revenue story is yet to happen. Nvidia has a great chance to earn a lot of money from self-driving, the next step in the development of its ADAS (Advanced Driver Assistance Systems) strategy. Nvidia's solutions here are called DRIVE PX and DRIVE PX 2. This is a rather complex subject, and Nvidia claims that it now works with quite a large number of customers: car companies, start-ups, and largely cloud-based companies with enormous amounts of data that they could transform into an automotive service, transportation as a service.
Nvidia benefits a lot now that self-driving vehicles require much more computational power than basic ADAS, and each level of autonomy demands a different amount of computation. Nvidia has created a scalable architecture that allows car companies to develop cars ranging from partially assisted all the way to fully autonomous.
When self-driving finally reaches consumer-affordable vehicles, Nvidia should be able to sell three different products inside the same car: one SoC for the dashboard computer, one SoC for the infotainment and navigation system, and one SoC-based DRIVE PX for the self-driving feature.
Of course, Qualcomm will try to take some of this market and is launching its first infotainment and dashboard solution with a few customers, but it won't be easy for Qualcomm to attack the self-driving market. We see a bright future for Nvidia Automotive; this part of the company really has a lot of room to grow and could easily multiply what the company is making in this market right now.



Noticia:
http://www.fudzilla.com/news/40005-nvidia-has-20-to-25-million-powered-cars-in-the-pipeline

Jorge-Vieira
24-02-16, 16:39
Nvidia lifts the lid on Iray Server

http://www.fudzilla.com/media/k2/items/cache/015d39e53ae0f3e440309c1608f890de_L.jpg (http://www.fudzilla.com/media/k2/items/cache/015d39e53ae0f3e440309c1608f890de_XL.jpg)

Distributed networks provide high performance renders
Nvidia has just spilt the beans on its Iray Server, a clever bit of kit that allows machines on a distributed network to pool their resources to create high-performance renders.

The software manages machines into WAN clusters, combining their resources to speed up the creation of images during development. According to Nvidia the Iray Server can also stream interactive rendering from the server to another machine and submit jobs to a queue for easy management. This will allow the queue to be managed from any browser connected to the network.
Phil Miller, director of product management at Nvidia, said that the Iray Server is a great tool for designers and the production companies employing them because it speeds up the creative process.

"Machines running Iray Server coordinate with each other to reduce the time needed to render an image. This allows you to process images in a fraction of the time it would take a single machine." The setup runs on Nvidia GPUs, and there are cloud plug-ins for industry-standard applications including Autodesk 3ds Max and Maya. Iray Server is available as a 90-day free trial starting today.
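Iray Server's own job API isn't documented here, but the core idea of pooling machines to cut render times can be sketched generically. The snippet below is a hypothetical illustration, not Iray code: a frame is split into tiles, workers pull tiles from a pool, and the results are collected. A render farm spreads the same work units across machines instead of local cores; all names in the sketch are invented for the example.

```python
# Sketch of distributed rendering: split a frame into tiles, farm the
# tiles out to a pool of workers, then collect the results. This is an
# illustration of the work-pooling idea only, NOT the Iray Server API.
from concurrent.futures import ProcessPoolExecutor

def split_into_tiles(width, height, tile_size):
    # Divide the frame into tile_size x tile_size work units.
    return [(x, y, min(x + tile_size, width), min(y + tile_size, height))
            for y in range(0, height, tile_size)
            for x in range(0, width, tile_size)]

def render_tile(tile):
    # Stand-in for an expensive per-tile render; returns (tile, pixels).
    x0, y0, x1, y1 = tile
    pixels = [[(x * y) % 256 for x in range(x0, x1)] for y in range(y0, y1)]
    return tile, pixels

if __name__ == "__main__":
    tiles = split_into_tiles(256, 256, 64)
    # One worker per local core here; a render farm distributes the same
    # queue of tiles over many machines.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(render_tile, tiles))
    print(f"rendered {len(results)} tiles")  # -> rendered 16 tiles
```

A browser-facing queue manager like the one described above essentially tracks which of these work units are pending, running, or done.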



Noticia:
http://www.fudzilla.com/news/graphics/40051-nvidia-lifts-the-lid-on-iray-server

Jorge-Vieira
25-02-16, 16:43
NVIDIA Update On Samsung Case At U.S. International Trade Commission (http://www.hardocp.com/news/2016/02/25/nvidia_update_on_samsung_case_at_us_international_trade_commission/)


We promised to keep you updated on major developments in our intellectual-property lawsuits with Samsung (http://blogs.nvidia.com/blog/2016/02/24/review/). Today, the U.S. International Trade Commission agreed to review parts of a recent decision by an administrative law judge relating to the ‘734 and ‘349 patents. The decision was that NVIDIA had infringed three graphics-related patents belonging to Samsung. It also let stand another part of that decision relating to the ‘385 patent. This means that both sides will submit more detail to enable the ITC to make a final determination in late April.

We are disappointed any part of this decision will stand. We welcome this opportunity to provide further information to the Commission and remain firm in our beliefs that we did not infringe the Samsung patents and that they are invalid. We note that the U.S. Patent Trial and Appeals Board has already instituted reviews of two of the three patents, with the third patent expiring in October of this year. We will continue to provide updates on our blog following material developments.



Noticia:
http://www.hardocp.com/news/2016/02/25/nvidia_update_on_samsung_case_at_us_international_trade_commission#.Vs8vJOZv4vc

Jorge-Vieira
28-02-16, 13:57
GPU Shipments Saw 2.4% Increase in Q4 2015 – NVIDIA GPU Shipments Grew 8.4%, AMD’s 5.1% and Intel’s 0.7%

The results are in for total GPU shipments in Q4 2015 from Jon Peddie Research (https://www.jonpeddie.com/press-releases/details/for-the-4th-quarter-of-2015-gpu-shipments-increased-2.4-from-last-quarter), according to which the graphics industry saw a 2.4% increase in overall GPU shipments compared to the previous quarter. The small quarter-over-quarter increase has a few causes: one is the currently sluggish PC market, and the other is that next-generation GPUs are on the verge of arrival, so many users are holding off purchases in anticipation.
http://cdn.wccftech.com/wp-content/uploads/2016/02/GPU-Shipments-Q4-2015-NVIDIA-Intel-AMD_1.png (http://cdn.wccftech.com/wp-content/uploads/2016/02/GPU-Shipments-Q4-2015-NVIDIA-Intel-AMD_1.png)

Image Credits: Jon Peddie Research (https://www.jonpeddie.com/press-releases/details/for-the-4th-quarter-of-2015-gpu-shipments-increased-2.4-from-last-quarter)

GPU Industry Witnesses Increase of 2.4% in Shipments During Q4 2015

The fourth quarter marked the end of 2015, a year that saw the introduction of several graphics chips from NVIDIA, AMD and Intel. Starting off, NVIDIA launched the GeForce GTX 980 Ti (http://wccftech.com/nvidia-geforce-gtx-980-ti-features-full-directx-12-support-arrives-geforce-lineup-competitive-649-pricing-titanx-performance/), GeForce GTX 980 (Mobility) (http://wccftech.com/nvidia-geforce-gtx-980-arrives-gaming-laptops-35-faster-gtx-980m-full-overclocking-support/), GeForce GTX 960 and GeForce GTX 950, along with a couple of other mobility chips. AMD launched its Radeon 300 series lineup (http://wccftech.com/wipamd-radeon-300-series-officially-launches-r9-390x-r9-390-r9-380-r7-370-r7-360-performance-specifications-detailed/), which included several graphics cards ranging from the Radeon R7 360 up to the R9 Fury X with HBM memory.
During the same year, Intel shipped its Skylake CPUs (http://wccftech.com/intel-skylake-s-mainstream-desktop-processor-lineup-launching-september-1/) with Iris and Iris Pro graphics, some of the fastest integrated graphics available on a mainstream processor. In the same department, AMD released its Carrizo (http://wccftech.com/amds-6th-generation-carrizo-apus-officially-launched-detailed-upto-15-ipc-3rd-gen-gcn-cores-directx-12-hsa-10-support/) line of laptop SoCs, whose integrated graphics are built on the same GCN architecture as its discrete GPUs. There was a ton of action in 2015, and the numbers are in to tell how well the three GPU vendors fared in Q4 compared to the previous quarter and to last year.
Highlights for the Fourth Quarter of 2015:


AMD’s overall unit shipments increased 5.16% quarter-to-quarter, Intel’s total shipments increased 0.73% from last quarter, and Nvidia’s increased 8.41%.
The attach rate of GPUs (integrated and discrete) to PCs for the quarter was 139%, up 0.59% from last quarter.
Discrete GPUs were in 31.28% of PCs, up 1.34%.
The overall PC market increased 2.01% quarter-to-quarter and decreased 10.27% year-to-year.
Desktop graphics add-in boards (AIBs) that use discrete GPUs decreased 4.87% from last quarter.
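A note on the 139% attach rate above: it can exceed 100% because a single PC can ship with both an integrated and a discrete GPU, so GPUs shipped outnumber PCs shipped. A quick sketch of the arithmetic, using illustrative unit counts rather than JPR's raw data:

```python
# Attach rate = GPUs shipped (integrated + discrete) / PCs shipped, as a
# percentage. One PC can carry an iGPU and a dGPU at once, which is how
# the rate tops 100%.
def attach_rate(gpus_shipped, pcs_shipped):
    return 100.0 * gpus_shipped / pcs_shipped

# 139 GPUs spread across 100 PCs reproduces the quarter's 139% figure.
print(attach_rate(139, 100))  # -> 139.0
```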

14% Decrease in GPU Shipments and 9% Decrease in Discrete GPU Shipments Compared to 2014

So while the industry saw a decent increase in shipments in Q4 2015, the market didn’t perform nearly as well compared to the year before. Against Q4 2014, total GPU shipments were down 14% and discrete GPU (dGPU) shipments were down 9%. The situation was the same in the notebook market, which saw a 17% decrease in shipments. Regardless, the GPU market seems to have found a steady footing in the industry and is not affected much by a slow PC market.
http://cdn.wccftech.com/wp-content/uploads/2016/02/GPU-Shipments-Q4-2015-NVIDIA-Intel-AMD.png (http://cdn.wccftech.com/wp-content/uploads/2016/02/GPU-Shipments-Q4-2015-NVIDIA-Intel-AMD.png)
The credit for this goes to the booming PC gaming industry, thanks to which the high-end discrete graphics card market is growing at a steady pace. Due to an influx of great AAA titles on the PC, along with good optimization for the platform, PC gamers are now purchasing high-end GPUs more than ever, whether for their gaming PC or their gaming notebook.
Neither NVIDIA nor AMD introduced a major graphics card during Q4 2015 that made much impact on market share or shipments; the reason they still saw increased GPU shipments was primarily the new games launched in the market, along with the game bundles both vendors offered as part of promotions. Graphics cards also got price cuts and great deals during the holiday season, which drove many people to buy them off store shelves.

The GPU market and the PC market in general, seems to have found its new normal. The Gaming PC segment, where higher-end GPUs are used, was once again the bright spot in the overall PC market for the quarter. The GPU market is rebounding and outperforming the PC market, which is stabilizing.


Combined with the introduction of a half dozen really terrific, and processor-intensive games in the second half of 2014 fed a buying frenzy that drove the GPU sales in desktop add-in boards (AIBs) and the new gaming notebook sales in Q4. via Jon Peddie Research (https://www.jonpeddie.com/press-releases/details/for-the-4th-quarter-of-2015-gpu-shipments-increased-2.4-from-last-quarter)
NVIDIA Bags 16.6% of the GPU Market – Desktop Discrete GPU Shipments Fall But Notebook GPU Shipments See Major Increase

NVIDIA saw a great 2015 with its Maxwell generation of GPUs. These graphics cards were launched in late 2014 and early 2015 but remained the most popular graphics cards in the market. Q4 2015 got NVIDIA an 8.4% increase in GPU shipments. A closer look reveals that most of the gains came not from the desktop but from the notebook market.
http://cdn.wccftech.com/wp-content/uploads/2015/11/NVIDIA-Third-Quarter-2016-Results_Gaming_10-635x357.jpg (http://cdn.wccftech.com/wp-content/uploads/2015/11/NVIDIA-Third-Quarter-2016-Results_Gaming_10.jpg)
According to the report, NVIDIA’s desktop discrete GPU shipments fell 7.56% from the previous quarter. NVIDIA had captured a good chunk of the desktop discrete GPU market with its existing GPUs, so shipments are expected to fall as demand cools. On the notebook side, NVIDIA launched a couple of new chips along with the industry’s fastest mobility GPU, the GeForce GTX 980. These new Maxwell-powered chips led total notebook discrete GPU shipments to rise 34.2%.
Notebooks are getting faster than ever. With improved GPUs that have lower power demands on FinFET processes, we can already see the whole notebook market transforming into ultra-fast platforms that not only consume less power but also run cooler, closing the performance and feature gap between desktops and notebooks. Overall, NVIDIA’s share of the entire GPU market is 16.6%, up from 15.7% during the last quarter. Discrete GPU market share numbers are not mentioned in the report.
AMD’s GPU Market Share Rises to 11.8% – Radeon Discrete GPU Shipments Rise But Notebook Discrete Shipments Fall, APUs See Growth on Notebooks

AMD’s Q4 ended with good results as far as its Radeon graphics cards are concerned. Desktop discrete Radeon graphics cards saw a 6.69% increase in shipments from the previous quarter, thanks to highly competitive holiday-season price cuts along with improved driver support in the form of the Radeon Software Crimson drivers. AMD’s notebook discrete GPU shipments took another blow in Q4, as the company didn’t introduce any new mobility Radeon graphics card for the platform, leading to a 1.3% decline in shipments.
http://cdn.wccftech.com/wp-content/uploads/2016/01/AMD-Computing-Platforms_3-635x357.jpg (http://cdn.wccftech.com/wp-content/uploads/2016/01/AMD-Computing-Platforms_3.jpg)
AMD’s APU business seems to be performing really well, as it saw a 30.3% climb in notebook processor shipments. The desktop APU side saw a 4.3% decline in shipments, but that might change as AMD has since launched a few new products in its APU lineup in Q1 2016. Overall, AMD’s GPU shipments saw a 5.2% increase from last quarter, giving it an 11.8% share of the GPU market, up from 11.5% the previous quarter.
Intel Leads With the Highest GPU Market Share But Sees a Small Shipments Increase

Just like in previous quarters, Intel remained the company with the highest GPU market share, at 71.6%, down from 72.8% the previous quarter. Intel’s iGPUs, which are featured in its mainstream CPUs, saw a shipment increase of 6.1%, while notebook processor shipments increased 0.7%. Intel leads in GPU market share because its processors dominate the PC industry: almost 9 out of 10 PCs house an Intel processor, and each one of them has an integrated graphics processor. In the discrete GPU market, Intel has essentially 0% share, since it only develops integrated/embedded GPUs while NVIDIA and AMD focus primarily on discrete GPUs.
http://cdn.wccftech.com/wp-content/uploads/2016/01/Intel-Iris-Pro-Graphics-635x357.jpg (http://cdn.wccftech.com/wp-content/uploads/2016/01/Intel-Iris-Pro-Graphics.jpg)
Shipments for the overall PC market increased 2.01% from the previous quarter but fell 10.27% compared to last year. According to analysts, the PC market is going to rebound in 2016 with vastly improved hardware in the form of FinFET products from NVIDIA and AMD. Both companies are preparing their Polaris (http://wccftech.com/amd-unveils-polaris-11-10-gpu/) and Pascal (http://wccftech.com/nvidia-pascal-debut-gtc-2016-launch-june/) chips, which are expected to deliver much better efficiency and performance than their 28nm predecessors. Intel and AMD will also introduce their Kaby Lake (http://wccftech.com/intel-2016-roadmap-leaked-confirms-kaby-lakes-10-core-broadwelle-apollo-lake-processors/) and Summit Ridge (http://wccftech.com/amd-zen-summit-ridge-launch-q4-2016/) CPUs, which will feature compatibility with the hottest storage and memory solutions, such as 3D XPoint (http://wccftech.com/intels-3d-xpoint-memory-featured-optane-ssds-optane-dimms-8x-performance-increase-conventional-ssds/) based Optane SSDs and faster DDR4 memory at prices comparable to DDR3. So expect 2016 to be a great year for the GPU market and the PC industry as a whole.







Noticia:
http://wccftech.com/gpu-market-share-q4-2015-nvidia-intel-amd/#ixzz41TFXyy3o


Another bit of growth for nVidia...

Jorge-Vieira
29-02-16, 13:49
Nvidia Corporation To Lead In The Autonomous Car Sector – Poised To Capture Significant Market Share

Self-driving cars are one of the big things expected to rock the automobile industry in the future, and Nvidia (NASDAQ: NVDA (http://www.nasdaq.com/symbol/nvda)) has been betting big on its automobile sector. In fact, the Auto department and Tegra Development Services are currently looking at an incredibly large growth opportunity going forward. This article is dedicated to one of the rather under-rated departments over at the IHV. At CES 2016 this year, Nvidia Corporation revealed its Drive PX2 chip, one of the most powerful chips in the world for autonomous driving. If Nvidia can get a sturdy foothold in the autonomous driving industry, its fundamentals will become quite attractive.
http://cdn.wccftech.com/wp-content/uploads/2016/02/Nvidia-drive-px-2-feature-635x357.jpg (http://cdn.wccftech.com/wp-content/uploads/2016/02/Nvidia-drive-px-2-feature.jpg)
Nvidia Corporation (NASDAQ: NVDA) to lead the way in the self-driving automobile sector

Nvidia’s auto sector grew from $56 million in Q4 FY15 to $93 million in Q4 FY16, absolutely huge double-digit growth (~66%). And what’s more, this is just the tip of the iceberg, since the company’s GPU architectural progression is trickling down to the Tegra level and could prove disruptive over the course of the next few years. The company is reportedly supplying Alphabet with the chips to power its self-driving cars and will introduce the insanely powerful Drive PX2 to car manufacturers in 2017. Considering that nearly all other autonomous chip makers have very long product cycles, Nvidia could catch them all unawares with its killer advantage: a product cycle of just a couple of years.
http://cdn.wccftech.com/wp-content/uploads/2016/02/NVIDIA-Q4-FY16-Revenue-Report-635x345.jpg (http://cdn.wccftech.com/wp-content/uploads/2016/02/NVIDIA-Q4-FY16-Revenue-Report.jpg)
Continuous and sustained development of very powerful chips will be a disruptive edge

As I have mentioned in an exclusive editorial (http://wccftech.com/tesla-autopilot-story-in-depth-technology/), Nvidia’s chips currently power the infotainment system and digital instrument cluster aboard Tesla cars, with Mobileye's EyeQ3 processor handling the actual autonomous driving. This was because the chip available at the time (the Tegra 3) was not really up to the level of performance that custom-designed silicon like the EyeQ3 can provide. If there is one thing that Nvidia (NASDAQ: NVDA (http://www.nasdaq.com/symbol/nvda)) does best, however, it's R&D. Jump forward a few generations and we have the Pascal architecture, which will outperform the measly Tegra 3 by leaps and bounds. Mobileye is expected to debut its EyeQ4 chip as well, which will be significantly stronger than its predecessor; but unlike Nvidia, it cannot sustain the same level of R&D and expects the EyeQ4 to have a long shelf life, whereas the green team can spit out new chips every other year.
http://cdn.wccftech.com/wp-content/uploads/2016/01/NVIDIA-CES-2016_Drive-PX-2-Specifications-635x357.jpg (http://cdn.wccftech.com/wp-content/uploads/2016/01/NVIDIA-CES-2016_Drive-PX-2-Specifications.jpg)
The Drive PX2, for example, is slated to have 10 times the performance of the original Drive PX, with a grand total of 24 trillion operations per second. That is pretty darn amazing. Add the fact that the company is venturing into deep neural networks and optimizing its automotive offerings accordingly, and it is a very impressive feat. The design wins Nvidia has managed to bag with Volvo, Ford, Daimler and Audi only add to the bullishness shown by the Automobile and TDS department of Nvidia.

More power than high end desktop graphics cards available to car manufacturers
“Drivers deal with an infinitely complex world,” said Nvidia co-founder and CEO Jen-Hsun Huang. “Modern artificial intelligence and GPU breakthroughs enable us to finally tackle the daunting challenges of self-driving cars.”… “NVIDIA’s GPU is central to advances in deep learning and supercomputing,” added Huang. “We are leveraging these to create the brain of future autonomous vehicles that will be continuously alert, and eventually achieve superhuman levels of situational awareness. Autonomous cars will bring increased safety, new convenient mobility services and even beautiful urban designs – providing a powerful force for a better future.”
“Due to our close collaboration with NVIDIA, Audi has the ability to integrate technology quickly and move at the same innovation cycle as the consumer electronics industry,” said Audi Head of Electrics/Electronics Ricky Hudi. “We will push the technology of artificial intelligence, of machine learning, to get the same recognition rate – or even better than a human being. For me this is the most disruptive technology – machine learning – in what you can discover now, and not just in the automotive industry.”
The critical question here is whether Nvidia can translate its massive experience and success in the desktop GPU sector into the automobile sector. If the answer turns out to be yes, then the company will become a force to be reckoned with and will make it very difficult for any other autonomous driving silicon provider to compete with the green giant. NVDA stock is currently trading at $31.68 and has gained 40% over a year (up from $22.60).
http://cdn.wccftech.com/wp-content/uploads/2016/01/NVIDIA-Drive-PX-2-Pascal-GPU_1-635x358.jpg

http://cdn.wccftech.com/wp-content/uploads/2016/01/NVIDIA-Drive-PX-2-Pascal-GPU_2-635x357.jpg

http://cdn.wccftech.com/wp-content/uploads/2016/01/NVIDIA-Drive-PX-2-Pascal-GPU_3-635x358.jpg

http://cdn.wccftech.com/wp-content/uploads/2016/01/NVIDIA-Drive-PX-2-Pascal-GPU_4-635x357.jpg

http://cdn.wccftech.com/wp-content/uploads/2016/01/NVIDIA-Drive-PX-2-Pascal-GPU_5-635x357.jpg

http://cdn.wccftech.com/wp-content/uploads/2016/01/NVIDIA-Drive-PX-2-Pascal-GPU_6-635x357.jpg

http://cdn.wccftech.com/wp-content/uploads/2016/01/NVIDIA-Drive-PX-2-Pascal-GPU_7-635x357.jpg

http://cdn.wccftech.com/wp-content/uploads/2016/01/NVIDIA-Drive-PX-2-Pascal-GPU_8-635x357.jpg









Noticia:
http://wccftech.com/nvidia-lead-autonomous-car-sector-market-share/#ixzz41Z40Ct8R


It already dominates almost the entire GPU market and is starting to prepare to do the same in the automotive sector... how far will this nVidia go?
A company with this much power and market dominance is never good for consumers...

Jorge-Vieira
29-02-16, 14:17
NVIDIA sees 34.2% increase in notebook discrete GPU shipments


Discrete GPU shipments increased in Q4 2015 but were down 9% from 2014 - so how did notebook discrete GPU shipments fare? NVIDIA saw a considerable 34.2% increase in notebook discrete GPU shipments.

http://imagescdn.tweaktown.com/news/5/0/50743_02_nvidia-sees-34-2-increase-notebook-discrete-gpu-shipments.jpg

Why the increase? Thanks to the industry's fastest mobility GPU - the GeForce GTX 980. NVIDIA's notebook-based GeForce GTX 980 was launched in September 2015 and has been powering the fastest gaming notebooks since. The GTX 980 is capable of 1080p 60FPS without a problem, and even 1440p and 4K gaming. The Maxwell-powered GPU is the same technology found on the desktop GTX 980, but NVIDIA did considerable work to shrink the PCB and VRMs down to fit it into a notebook - and obviously, it has been met with great success.





Noticia:
http://www.tweaktown.com/news/50743/nvidia-sees-34-2-increase-notebook-discrete-gpu-shipments/index.html

Jorge-Vieira
10-03-16, 14:29
NVIDIA Awarded VMware’s Global Technical Partner of the Year (http://www.hardocp.com/news/2016/03/10/nvidia_awarded_vmwarersquos_global_technical_partner_year/)


NVIDIA has been named (http://blogs.nvidia.com/blog/2016/03/09/vmware-desktop-virtualization/) VMware’s "Global Technical Partner of the Year," as well as its "European Regional Technical Partner of the Year." NVIDIA and VMware have been close partners in delivering the next generation in end-user computing. NVIDIA GRID and VMware Horizon dramatically transform how companies can deliver amazing graphics from the cloud or data center in order to support collaboration among global design and engineering teams and their suppliers.

At VMworld last fall we launched GRID 2.0. We continued to deepen our partnership in February, announcing NVIDIA GRID acceleration of Blast Extreme, a new protocol for optimizing the mobile cloud, now supported in VMware Horizon 7. Our partnership represents the most reliable, end-to-end solution for rich, high-performance graphics on the market today. It enables companies to collaborate on any device, anywhere, with reduced latency and increased user experience, without sacrificing corporate data and intellectual property. Our joint customers, including the University of Massachusetts, Populous, Legacy Reserves and Meridian Technology Center, are already seeing the increased value that VMware Horizon and NVIDIA GRID can deliver.


<iframe width="640" height="360" src="https://www.youtube.com/embed/7ioawbYKlrc" frameborder="0" allowfullscreen></iframe>



Noticia:
http://www.hardocp.com/news/2016/03/10/nvidia_awarded_vmwarersquos_global_technical_partner_year#.VuGEruZv4vc

Enzo
13-03-16, 12:39
It already dominates almost the entire GPU market and is starting to prepare to do the same in the automotive sector... how far will this nVidia go? (http://wccftech.com/nvidia-lead-autonomous-car-sector-market-share/#ixzz41Z40Ct8R)
A company with this much power and market dominance is never good for consumers... (http://wccftech.com/nvidia-lead-autonomous-car-sector-market-share/#ixzz41Z40Ct8R)



For a company without scruples, yes, it can be dangerous.
I'm looking at the positive side of things. Instead of going to OLX to buy graphics cards, I'll start going to the scrapyard ;)

Jorge-Vieira
13-03-16, 12:56
At the scrapyard you'll be buying an nVidia-brand car :D

Enzo
13-03-16, 13:06
And then I build the PC inside it? :)

Jorge-Vieira
13-03-16, 13:08
Yep, auto bench table powered by nVidia :D

Jorge-Vieira
28-03-16, 13:20
How GPUs Are Helping Map Worldwide Poverty (http://www.hardocp.com/news/2016/03/28/how_gpus_are_helping_map_worldwide_poverty)

Eradicating worldwide poverty by 2030 is the top goal on the United Nations’ sustainable development agenda, published late last year. But a lack of data has frustrated efforts to measure progress toward the goal. Most of those living in extreme poverty are in sub-Saharan Africa and Southern Asia, where accurate poverty data is scarce. A small team at Stanford University is changing that, one satellite image at a time.

Machine learning expert Stefano Ermon partnered with food security specialists David Lobell and Marshall Burke, plus a couple of Stanford engineering students, to turn Google Earth images into statistical poverty models. "We want to end extreme poverty, but we need a way to be able to measure whether we’re making progress or not," said Ermon, an assistant professor of computer science at Stanford.

Using NVIDIA GPUs, the team trained a neural network (https://blogs.nvidia.com/blog/2016/03/25/mapping-poverty-data-gpus/) to accurately predict poverty levels in sub-Saharan Africa from image features like roads, farmlands and homes. This work has placed Stanford among five finalists for NVIDIA’s 2016 Global Impact Award. Each year, we award a $150,000 grant to researchers using NVIDIA technology for groundbreaking work that addresses social, humanitarian and environmental problems.
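The Stanford pipeline itself trains a neural network over raw satellite imagery; as a much simpler stand-in for the underlying idea, the sketch below fits an ordinary least-squares model from made-up image-derived features (road density, farmland, rooftops, nightlights) to a synthetic poverty index. Everything here, features, weights and data alike, is fabricated for illustration and is not the team's model or data.

```python
# Toy version of "image features -> poverty measure": generate synthetic
# per-region features, fit OLS with an intercept, then score a new region.
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Columns: road density, farmland fraction, rooftop count, nightlight intensity.
X = rng.uniform(0, 1, size=(n, 4))
true_w = np.array([-0.8, 0.2, -0.5, -1.0])     # richer signals -> lower poverty score
y = X @ true_w + 2.0 + rng.normal(0, 0.05, n)  # synthetic "poverty index"

# Ordinary least squares with an appended intercept column.
A = np.hstack([X, np.ones((n, 1))])
w_hat, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict the index for a new region's features.
new_features = np.array([0.1, 0.7, 0.05, 0.02])
pred = np.append(new_features, 1.0) @ w_hat
print(round(float(pred), 2))
```

The real work lies in extracting such features (or learning them end-to-end with a convolutional network) from imagery at scale, which is where the GPUs come in.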



Noticia:
http://www.hardocp.com/news/2016/03/28/how_gpus_are_helping_map_worldwide_poverty#.Vvkvknr0Pug

Jorge-Vieira
01-04-16, 13:18
Tesla could squeeze Nvidia

http://www.fudzilla.com/media/k2/items/cache/17724e162e4cc2220b66cecaeac59951_L.jpg (http://www.fudzilla.com/media/k2/items/cache/17724e162e4cc2220b66cecaeac59951_XL.jpg)

Recruitment wars

Tesla Motors, which has been poaching engineers from Apple and AMD, could be causing a few headaches for Nvidia.

MKM analyst Ian Ing pointed out that Nvidia and Tesla have partnered on machine learning, which is key to autonomous driving. Nvidia’s own automotive segment grew 80 per cent to $320 million in revenue.
It was already known that Tesla is swiping Apple and AMD engineers, but the difficulty is that it also needs staff from its old chum Nvidia. Ing said that Apple and AMD staff are not as steeped in graphics processing units and machine learning as Nvidia’s staff.
“Although there are widely reportedly headlines that Tesla has been hiring chip architects from Apple and AMD, we note that expertise has been focused more on multi-purpose application processors vs. the GPU accelerators necessary for machine learning,” Ing wrote.
This could either pressure Nvidia to work more closely with Tesla, or it too might lose staff to the carmaker. However, that might be a small headache for Nvidia, which is doing obscenely well, according to Ing. He is suggesting everyone should buy Nvidia shares.



Noticia:
http://www.fudzilla.com/news/40360-tesla-could-squeeze-nvidia

Jorge-Vieira
06-04-16, 20:06
Nvidia Wants Qualcomm To Compensate Its $352 Million Icera Failure Court Hearings Reveal

When it comes to mobile hardware and processors, US manufacturer Qualcomm has been comfortably dominating the industry for quite a while. Not only does the company manufacture the Snapdragon lineup of chipsets, which are present in almost every mainstream Android smartphone, flagship or otherwise, but its other offerings, including wireless modules, patents and more, ensure that it maintains a strong market position.
GPU manufacturer Nvidia, on the other hand, has been making its presence known in the PC hardware sphere, particularly after yesterday’s GTC 2016. We saw the company showcase its Pascal architecture on board the Tesla P100, alongside a couple of other launches that should keep things nice and interesting for some time. Nvidia has also been at odds with Qualcomm for quite a while, claiming that the latter uses its market position unfairly, and today we’ve got some more news for you on this front.
http://cdn.wccftech.com/wp-content/uploads/2016/03/Screen-Shot-2016-03-17-at-4.21.48-am-635x625.png
Nvidia Accuses Qualcomm Of Mobile Chip Monopoly; Demands Compensation For Unfair Practices

When it comes to allegations of unfair practices with respect to its market share, Qualcomm has been in the news for quite a while now. The US chip giant has faced accusations from a variety of quarters claiming that, due to its aggressive industry strategies, competitors and smaller manufacturers have been suffering. Nvidia has made similar claims as well, and today we get to learn some more details on the matter.
As the GTC was underway yesterday, details about Nvidia’s court hearings against Qualcomm in London were also starting to make the rounds. The chip maker claims that Qualcomm’s abuse of its dominant position in the mobile processor market forced it to abandon its own plans for similar technology, making it incur substantial losses in the process.
To recap, in 2011 Nvidia spent $352 million to acquire soft modem tech firm Icera. It was, however, forced to wind the business down just four years later, in 2015, alleging that Qualcomm’s aggressive pricing strategies forced it out of competition and made it incur significant losses.

http://cdn.wccftech.com/wp-content/uploads/2016/04/thumb__Icera_pantone_368-635x302.jpg
According to Nvidia, Qualcomm’s aggressive and unfair market tactics led to “unexplained delays in customer orders, reductions in demand volumes and contracts never being entered into, even after a customer or mobile network cooperating with a prospective customer has agreed or expressed a strong intention to purchase”.
The US chipmaking giant has already faced similar accusations of selling its modem chips below their development and production costs in order to drive smaller competitors out of the market. While Qualcomm is able to absorb the losses it incurs from such moves, owing to the massive revenues it earns from patents and other products, the resulting impact leaves other, smaller companies in more difficult situations.
Whether Nvidia will prevail over Qualcomm in this alleged ”unlawful abuse of dominance”, we’ll find out soon enough. Things aren’t looking that good for the US chipmaker, however, as similar allegations pile up. Thoughts? Let us know what you think in the comments section below and stay tuned for the latest.
Via (http://www.bloomberg.com/news/articles/2016-04-06/nvidia-demands-qualcomm-pay-up-after-demise-of-352-million-unit)







Noticia:
http://wccftech.com/nvidia-accuses-qualcomm-mobile-chip-monopoly/#ixzz454wcxj63



Looks like we're going to have a little war to liven things up in the near future; I just hope Nvidia doesn't decide to raise graphics card prices even further to compensate...

Jorge-Vieira
12-04-16, 17:30
How AI Can Predict Cardiac Failure Before It’s Diagnosed (http://www.hardocp.com/news/2016/04/12/how_ai_predict_cardiac_failure_before_itrsquos_diagnosed)

AI and NVIDIA are playing a growing role (https://blogs.nvidia.com/blog/2016/04/11/predict-heart-failure/) in advancing healthcare around the country. NVIDIA announced last week that it’s working with Massachusetts General Hospital to apply the latest AI techniques to improve disease detection, diagnosis, treatment and management. The team analyzed electronic health records from more than 265,000 Sutter Health patients. From these, it studied 3,884 patients with heart failure and about 28,900 patients as a control group. Researchers analyzed the records using deep learning, a type of artificial intelligence that can solve complex technical problems like face or speech recognition — sometimes even topping human performance.
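
The study design described above is a classic case/control setup: patients with heart failure form the cases (3,884 here) and a much larger group without it forms the controls (about 28,900, roughly a 7:1 ratio). The sketch below illustrates that cohort-building step only; the record fields, the diagnosis code and the 7:1 ratio parameter are invented for the example and are not taken from the actual study pipeline.

```python
import random

# Hypothetical sketch: split patient records into heart-failure cases and
# a sampled control group, echoing the roughly 7-to-1 control-to-case ratio
# mentioned in the article. Field names and the "I50" code are placeholders.

def build_cohorts(records, code="I50", control_ratio=7, seed=42):
    """Return (cases, controls): records with the code, and a random
    sample of records without it, capped at control_ratio * len(cases)."""
    cases = [r for r in records if code in r["diagnoses"]]
    pool = [r for r in records if code not in r["diagnoses"]]
    rng = random.Random(seed)  # fixed seed keeps the split reproducible
    controls = rng.sample(pool, min(len(pool), control_ratio * len(cases)))
    return cases, controls

# Toy data: 4 heart-failure patients, 96 others (e.g. diabetes code "E11").
records = (
    [{"id": i, "diagnoses": {"I50"}} for i in range(4)]
    + [{"id": i, "diagnoses": {"E11"}} for i in range(4, 100)]
)
cases, controls = build_cohorts(records)
```

After a split like this, each record's history would be turned into features and fed to the deep learning model the article mentions; the deterministic seed is just good practice so the cohorts can be reconstructed.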

Noticia:
http://www.hardocp.com/news/2016/04/12/how_ai_predict_cardiac_failure_before_itrsquos_diagnosed#.Vw0wtHr0Pug

Jorge-Vieira
13-04-16, 16:14
Oxford Researchers Win GPU Center of Excellence Achievement Award (http://www.hardocp.com/news/2016/04/13/oxford_researchers_win_gpu_center_excellence_achievement_award/)

Researchers from the University of Oxford won the fifth annual Achievement Award for NVIDIA GPU Centers of Excellence (https://blogs.nvidia.com/blog/2016/04/12/oxford-researchers-win-gpu-center-of-excellence-achievement-award-for-speeding-brain-analysis/) in recognition of their work using GPUs to speed the analysis of the human brain’s underlying anatomical and structural organization. The team, led by Mike Giles, received the top award, along with a $15,000 cash prize, for their work applying GPU acceleration to tractography, a 3D modeling technique for investigating human brain connections. With GPUs handling the immense computing load, the team was able to double the speed of analysis of diffusion MRI (dMRI), a widely used method to reveal the wiring diagram of the brain and estimate tissue microstructures. The Oxford team’s work is aimed at improving the investigation of neurological and psychiatric diseases — such as multiple sclerosis, schizophrenia and Alzheimer’s disease — and has the potential to improve the clinical applicability of dMRI. A panel of experts selected three other teams from top universities as finalists from among our 23 GPU Centers of Excellence.
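
At its core, the tractography the article describes traces "streamlines" by repeatedly stepping along the local fibre direction estimated from the dMRI data; the GPU win comes from running many thousands of such independent seeds in parallel. The sketch below shows only that stepping loop with a made-up constant direction field: the function names, step size and field are invented for illustration, and real probabilistic tractography samples fibre orientations per voxel rather than using a fixed field.

```python
# Hypothetical sketch of a single tractography streamline: starting at a
# seed point, repeatedly step along the local fibre direction. Each seed
# is independent, which is why the real workload parallelizes so well on
# GPUs. The toy field below points every fibre along +x.

def track(seed, direction_at, step=0.5, n_steps=10):
    """Euler integration of a streamline through a direction field."""
    path = [seed]
    x, y, z = seed
    for _ in range(n_steps):
        dx, dy, dz = direction_at(x, y, z)  # unit fibre direction here
        x, y, z = x + step * dx, y + step * dy, z + step * dz
        path.append((x, y, z))
    return path

def field(x, y, z):
    # Constant toy field: fibres run along the +x axis everywhere.
    return (1.0, 0.0, 0.0)

streamline = track((0.0, 0.0, 0.0), field)
```

With 10 steps of length 0.5 along +x, the streamline ends at (5.0, 0.0, 0.0); a production tracker would also add stopping criteria (leaving the brain mask, low anisotropy, sharp turns) that this sketch omits.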

Noticia:
http://www.hardocp.com/news/2016/04/13/oxford_researchers_win_gpu_center_excellence_achievement_award#.Vw5wVHr0Pug