  1. #1 Jorge-Vieira

    nVidia, general topics

    Nvidia rakes in record quarterly, yearly revenue



    Nvidia's latest financial results are in, and they're full of good news. The company
    set revenue records for not only the three months ending January 25, but also the past year as a whole.
    Time for some tables! Let's start with the quarterly numbers.
                   Q4'15          Q4'14          Change
    Revenue        $1.25 billion  $1.14 billion  up 9%
    Gross margin   55.9%          54.1%          up 1.8 pts
    Net income     $193 million   $147 million   up 31%
    Fourth-quarter revenue hit $1.25 billion, up 9% from the same period last year. Net income rose an even more impressive 31%, while gross margin was up slightly. And the full-year figures are even brighter:
                   FY'15          FY'14          Change
    Revenue        $4.68 billion  $4.13 billion  up 13%
    Gross margin   55.5%          54.9%          up 0.6 pts
    Net income     $631 million   $440 million   up 43%
    Nvidia raked in $4.68 billion over the past 12 months, a 13% increase over the previous year. Gross margin went up by less than a point, but profits surged 43% to $631 million. The green team is certainly living up to its trademark color.
    Here's a revenue breakdown along crude divisional lines. The GPU segment accounts for all of the graphics processors: GeForce, Quadro, Tesla, and Grid. Tegra covers the SoC and related products, while the other category "includes licensing revenue from [Nvidia's] patent cross-license agreement with Intel." The quarterly totals come first, followed by the yearly ones.
             Q4'15          Q4'14          Change
    GPU      $1.07 billion  $947 million   up 13%
    Tegra    $112 million   $131 million   down 15%
    Other    $66 million    $66 million    --

             FY'15          FY'14          Change
    GPU      $3.84 billion  $3.47 billion  up 11%
    Tegra    $579 million   $398 million   up 45%
    Other    $264 million   $264 million   --
    GPUs continue to make up the vast majority of Nvidia's business. Revenue for that overarching category rose by double digits both yearly and quarterly. Although we don't have more granular data for that division, Nvidia said Q4 revenue from desktop and notebook GPUs went up 38% versus the same period last year. High-end Maxwell cards probably deserve a lot of the credit for that increase.
    The Tegra numbers are more mixed. Although yearly revenue increased 45%, the total for Q4 fell 15%. Nvidia blames the drop on the "product life cycle of several smartphone and tablet designs." Revenue from "auto infotainment systems" doubled, according to the CFO commentary (PDF), but there are no specifics on sales of Shield devices.
    Looking forward, Nvidia expects Q1 revenue of $1.16 billion with gross margin in the 56.2-56.5% range.



    News:
    http://techreport.com/news/27807/nvidia-rakes-in-record-quarterly-yearly-revenue


    And yet another record; it just keeps adding up for the graphics card giant.

  2. #2 Jorge-Vieira
    Nvidia schedules Made to Game event for March 3



    In San Francisco, at GDC 2015

    Nvidia plans a new product announcement on the second day of MWC 2015 and it has sent invitations to press outlets for an event scheduled for the 3rd of March 2015.

    The only catch is that the Nvidia event is actually on the other side of the globe, in San Francisco, and is part of another event, the Game Developers Conference 2015, not MWC 2015.
    Nvidia teases that the Made to Game event is about something that was "5 years in the making" and that will redefine the future of gaming. We have heard such bold announcements before, but since we don't know what this might be, we cannot offer any insight. We also heard that G-Sync would kill AMD, and that didn't happen either, not even close.
    We do know that Nvidia is actively working on virtual reality technology, and we would not be surprised to see some form of Tegra-powered product on the stage (since the Tegra fits the "5 years in the making" statement).
    The fact that Nvidia wants Android-focused press outlets to attend the event suggests we are looking at an Android device, or a Shield device to be more precise. It's not the new Titan based on a GM200-series GPU; Titan II was not five years in the making, so it has to be something different. [Tegra X1 Shield Tablet, Shield Portable, or maybe Shield VR, take your pick and head down to the comment section. Ed]

    The 5 years in the making part makes us wonder.
    News:
    http://www.fudzilla.com/news/graphic...ent-on-march-3

  3. #3 Jorge-Vieira
    Nvidia Spending over a Third of Revenue in Semiconductor R&D for Future Graphics Technologies

    The Russian publication overclockers.ru published a report on the research and development spending of semiconductor companies all over the world. Wafer fabrication is all about the big bucks, and R&D budgets are a leading indicator of that, so this piece gives the basic rundown.
    [Image: a stock photo of a fabrication plant. © TSMC, public domain]
    Green makes the top 10 spenders on semiconductor R&D, Intel tops the list

    R&D is absolutely critical to any semiconductor firm, and spending there is, in most cases, very indicative of success. In a report released by the agency IC Insights we learn of the top 10 highest spenders in this department. The list, as you may guess, offers quite a lot of insight into the current standing of these companies. Intel, for example, the undisputed giant of the tech world, currently spends approximately three times as much as the company in second place. Its R&D budget completely eclipses that of everyone else on the list; once again, something that indicates just how much of a lead Intel has over TSMC.


    Interestingly, Nvidia has also managed to claim a spot in the top-10 list, albeit at 10th place. The bottom few semiconductor companies all have approximately the same budget, while Intel has a bigger budget than nearly all of them combined. What is perhaps of more interest to our readers is that Nvidia spends approximately a third of its total revenue on semiconductor R&D; needless to say, that is a huge amount. Comparatively, TSMC, which is Nvidia's foundry, spends only 22% more than green itself, and even that only after raising its R&D budget. That should give you an idea of how high Nvidia's research budget is. Of the companies in the report, five are headquartered in the United States, three in Asia and the Pacific, and the rest in Europe and Japan.
    Intel Corporation remains the leader in development spending for the year, having increased its funding by 9%. The processor giant spent approximately 22.4% of its annual revenue on R&D, and 36% of the Top 10's total R&D expenditure belongs to Intel, while TSMC has managed to climb from seventh to fifth place with a 15-percent increase in R&D expenditure and a 7.5-percent share of the proceeds.

  4. #4 Jorge-Vieira
    Nvidia’s new HQ back on schedule



    Futuristic apparently

    A year after mothballing its “futuristic new office project in Santa Clara”, Nvidia is again ready to get to work and start building.

    According to Biz Journals the graphics chipmaker confirmed on Tuesday that it would soon start demolition to make way for the 500,000-square-foot, triangular building at Walsh Avenue and San Tomas Expressway.
    Nvidia launched the project in 2013, and it was pushed as one of those innovative-design buildings that was supposed to make Silicon Valley look a little more interesting.
    Designers said one inspiration for the project's look was the polygon, the building block of computer graphics, although we thought it looked a little like a cancerous melanoma a mate had cut out.
    The enormous, 250,000-square-foot floor plates, united by a massive staircase, were to facilitate employee interactions. Nothing happened, and it was thought that the project had suffered a bad case of cutbacks.
    Nvidia said that it was just trying to get the design right, although cynics suggested that it was just waiting to see what AMD did first.
    Now it seems that the way is clear and building has started.
    News:
    http://www.fudzilla.com/news/graphic...ck-on-schedule


    nVidia's new home

  5. #5 Jorge-Vieira
    NVIDIA Wins Two Edison Awards For Innovation

    The awards – named for fabled U.S. inventor Thomas Edison – recognize innovation, creativity and ingenuity in the global economy. Prizes were handed out in a wide range of categories by an independent team of judges from industry and academia. Other winners this year include GE, LG, Dow Chemicals, Logitech, Lenovo, Hyundai and 3M. Our VCM won a Gold award in the Automotive Computing category. The VCM is a modular computer that gives automakers a fast, easy way to update their systems with the latest mobile technology. This has helped Audi reduce its infotainment development cycle to two years, from the industry standard five to seven years. And Tesla Motors uses two VCMs to drive the screens in the Model S.
    News:
    http://www.hardocp.com/news/2015/04/...n#.VT5K0ZP0Mxk

  6. #6 Jorge-Vieira
    Nvidia to cease development of software modems, may sell Icera

    Nvidia Corp. on Tuesday said that it will cease development of its Icera modems in the second quarter of its fiscal year 2016. At present, the company is considering selling its soft-modem technology, or the Icera business as a whole.
    Nvidia acquired Icera in 2011 in order to develop competitive system-on-chips for smartphones and tablets. Since then, the company has refocused its Tegra business on gaming, automotive and cloud computing applications. While the company's latest SoCs (such as the Tegra X1 or Tegra K1) can be used inside mobile devices, they are not designed specifically for them. As a result, Nvidia believes it does not need its own modem technologies.

    The Icera 4G LTE modem meets Nvidia’s needs for the next year or more. Going forward, the company plans to partner with third-party modem suppliers and will no longer develop its own. Essentially, this means that Nvidia will also not develop highly-integrated applications processors for mainstream smartphones.
    The Icera modem operation has approximately 500 employees, based primarily in the U.K. and France, with smaller operations in Asia and the United States.
    News:
    http://www.kitguru.net/laptops/mobil...ay-sell-icera/

  7. #7 Jorge-Vieira
    Nvidia supplies open saucy reference headers



    Catching up with Intel and AMD

    Nvidia will begin supplying hardware reference headers for the Nouveau DRM driver, a move that will see it catch up with AMD and Intel in serving the steadily growing Linux gamer market.


    Nvidia is popular with Linux gamers who use its proprietary drivers, but those who take a nearly religious approach to open source are not really catered for: Nvidia's open-source support has lagged behind Intel's and AMD's on Linux, and Nvidia has not officially supported the community-based, mostly reverse-engineered Nouveau driver.
    Nvidia has been working on Tegra K1 and newer graphics support for the open-source driver, which is seen as a sea change.
    The outfit has been neutral towards Nouveau but has been supplying some recent hardware samples to Nouveau developers, a little public documentation, and answers to some of their questions. Pushing hardware reference headers into the Nouveau driver is therefore a big step.
    Phoronix said that Nvidia's system software team has begun aligning its new-chip development efforts with Nouveau.
    Nvidia's Ken Adams said that the outfit would like to arrive at a place where the Nouveau kernel driver code base serves as its primary development environment.
    To make Nouveau Nvidia's primary development environment for Tegra, it is looking at adding "official" hardware reference headers to Nouveau.
    "The headers are derived from the information we use internally. I have arranged the definitions such that the similarities and differences between GPUs is made explicit. I am happy to explain the rationale for any design choices and since I wrote the generator I am able to tweak them in almost any way the community prefers."
    So far he has been cleared to provide the programming headers for the GK20A and GM20B. For those concerned that this is just a way of pushing future Tegra sales, Ken said that in "the long-term I'm confident any information we need to fill-in functionality greater than or equal to NV50/G80 will be made public eventually. We just need to go through the internal steps necessary to make that happen. It's just like Intel and AMD with legal/IP review being time consuming."
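    For a sense of what such reference headers might look like, here is a purely hypothetical C sketch in the spirit Adams describes, with register definitions arranged so that similarities and differences between chips are explicit. Every name, offset and value below is invented for illustration; these are not Nvidia's actual headers.

        /* Hypothetical GPU reference header, illustrative only.
         * All names, offsets and values are invented; they are not
         * Nvidia's actual definitions. */
        #ifndef HW_GR_EXAMPLE_H
        #define HW_GR_EXAMPLE_H

        /* Registers shared by both example chips */
        #define GR_STATUS                 0x00000100
        #define GR_STATUS_BUSY            (1u << 0)
        #define GR_INTR                   0x00000104

        /* Fields that differ between generations carry a per-chip
         * suffix, making the differences explicit at a glance. */
        #define GR_ENGINE_CFG_GK20A       0x00000200
        #define GR_ENGINE_CFG_GM20B       0x00000240

        #endif /* HW_GR_EXAMPLE_H */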

    News:
    http://www.fudzilla.com/news/38075-n...erence-headers

  8. #8 Jorge-Vieira
    IBM, NVIDIA and Mellanox Launch Design Center for Big Data and HPC

    IBM, in collaboration with NVIDIA and Mellanox, today announced the establishment of a POWER Acceleration and Design Center in Montpellier, France to advance the development of data-intensive research, industrial, and commercial applications. Born out of the collaborative spirit fostered by the OpenPOWER Foundation - a community co-founded in part by IBM, NVIDIA and Mellanox supporting open development on top of the POWER architecture - the new Center provides commercial and open-source software developers with technical assistance to enable them to develop high performance computing (HPC) applications.

    Technical experts from IBM, NVIDIA and Mellanox will help developers take advantage of OpenPOWER systems leveraging IBM's open and licensable POWER architecture with the NVIDIA Tesla Accelerated Computing Platform and Mellanox InfiniBand networking solutions. These are the class of systems developed collaboratively with the U.S. Department of Energy for the next generation Sierra and Summit supercomputers and to be used by the United Kingdom's Science and Technology Facilities Council's Hartree Centre for big data research.

    In addition to expanding the software and solution ecosystem around OpenPOWER, this new collaboration between the three companies will create opportunities for software designers to acquire advanced HPC skills and drive the development of new technologies to bring value to customers globally.

    The Center will be led by a team of experts from NVIDIA and Mellanox along with leading scientists from IBM Client Center Montpellier (France) and IBM Research Zurich (Switzerland). The Montpellier-based center is the second of its kind, complementing the previously announced center in Germany established in concert with IBM, NVIDIA and the Jülich Supercomputing Center in November.

    "Our launch of this new Center reinforces IBM's commitment to open-source collaboration and is a next step in expanding the software and solution ecosystem around OpenPOWER," said Dave Turek, IBM's Vice President of HPC Market Engagement. "Teaming with NVIDIA and Mellanox, the Center will allow us to leverage the strengths of each of our companies to extend innovation and bring higher value to our customers around the world."

    "Increasing computational performance while minimizing energy consumption is a challenge the industry must overcome in the race to exascale computing," said Stefan Kraemer, director of HPC Business Development, EMEA, at NVIDIA. "By providing systems combining IBM Power CPUs with GPU accelerators and the NVIDIA NVLink high-speed GPU interconnect technology, we can help the new Center address both objectives, enabling scientists to achieve new breakthroughs in their research."

    "The new POWER Acceleration and Design Center will help scientists and engineers address the grand challenges facing society in the fields of energy and environment, information and health care using the most advanced HPC architectures and technologies," said Gilad Shainer, vice president of marketing, Mellanox Technologies. "Mellanox InfiniBand networking solutions offer more than a decade of experience building the world's highest performing networks, and are uniquely based on an offload-architecture. Only Mellanox offloads data movement, management and even data manipulations (for example Message Passing - MPI collective communications) which are performed at the network level, enabling more valuable CPU cycles to be dedicated to the research applications."

    As founding members of the OpenPOWER Foundation, IBM, NVIDIA, and Mellanox share a common vision to bring a new class of systems to market faster to tackle today's big data challenges.
    News:
    http://www.techpowerup.com/214007/ib...a-and-hpc.html

  9. #9 Jorge-Vieira
    Nvidia's Deep Learning Updates Build Bigger Neural Nets Faster: Digits 2, cuDNN 3, CUDA 7.5

    At a machine learning convention in France, Nvidia announced updates to its contributions to Deep Learning.
    If there's one company that puts a heap of effort into Deep Learning, it's Nvidia, and today in Lille, France, at ICML (the International Conference on Machine Learning), the GPU maker announced three updates: the new Nvidia DIGITS 2 system, Nvidia cuDNN 3, and CUDA 7.5.
    Deep Learning is a concept where computers can build deep neural networks based on given information, which then can be used to accomplish various complicated tasks such as image recognition, object detection and classification, speech recognition, translation, and various medical applications.
    You may not realize it, but Deep Learning really is all around us already. Google uses it quite widely, Nvidia applied it in its Drive PX auto-pilot car computer, and medical institutions are starting to use it to detect tumors with much higher accuracy than doctors can.
    The reason Deep Learning has been able to explode the way it has is the huge amount of compute power GPUs make available. Building DNNs (Deep Neural Networks) is a massively parallel task that takes lots of power. For example, building a simple neural net to recognize everyday images (such as the ImageNet challenge) can take days, if not weeks, depending on the hardware used. It is therefore essential that the software be highly optimized to use the resources effectively, because not all neural nets end up working, and rebuilding one with slightly different parameters to increase its accuracy is a lengthy process.
    DIGITS 2

    For that reason, DIGITS 2's biggest update is that it can now build a single neural net using up to four GPUs; in the past, you could only use one per neural net. When using four GPUs, the training process for a neural net is up to 2x faster than on a single GPU. Of course, you may then say: build four different neural nets and see which one is the best. But it's not quite that simple.
    A researcher may begin by building four different neural nets with different parameters, but based on the same learning data, and figure out which one is best, and then from there on out improve the parameters until the ideal setup is found, at which point only a single neural net needs to be trained.
    Nvidia's DIGITS is a software package with a web-based interface for Deep Learning, where scientists and researchers can start, monitor, and train DNNs. It comes with various Deep Learning libraries, making it a complete package. It allows researchers to focus on the results, rather than have to spend heaps of time figuring out how to install various libraries and how to use them.
    cuDNN 3

    cuDNN 3 is Nvidia's Deep Learning library, and as you may have guessed, it is CUDA-based. Compared to the previous version, it can train DNNs up to 2x faster on identical hardware platforms with Maxwell GPUs. Nvidia achieved this improvement by optimizing its 2D-convolution and FFT-convolution routines.

    cuDNN 3 also supports 16-bit floating point data storage in GPU memory, which enables larger neural networks. In the past, all data points were 32 bits in size, but not a lot of vector data needs the full accuracy of 32-bit storage. Some accuracy is lost for each vector point, but the tradeoff means the GPU's memory has room for more vectors, which in turn can increase the accuracy of the entire model.
    CUDA 7.5

    Both of the above pieces of software are based on the new CUDA 7.5 toolkit. cuDNN 3 supports 16-bit floating point data because CUDA 7.5 now supports it. Most notably, the toolkit offers support for mixed floating point data, meaning that 16-bit vector data can be used where accuracy is less essential, and 32-bit data points where higher accuracy is required.
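    To make the storage tradeoff concrete, here is a minimal C sketch of packing a float into an IEEE-754 half-precision bit pattern. It is a simplified illustration only (the conversion truncates the mantissa and ignores NaN, infinity and denormals) and is not Nvidia's CUDA implementation; real toolkits such as CUDA's cuda_fp16.h handle the full format.

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        /* Simplified float -> IEEE-754 half conversion: truncates the
         * mantissa and ignores NaN/infinity/denormals. Shows only the
         * storage idea: each value shrinks from 4 bytes to 2 bytes. */
        static uint16_t float_to_half(float f) {
            uint32_t bits;
            memcpy(&bits, &f, sizeof bits);           /* reinterpret float bits */
            uint16_t sign = (bits >> 16) & 0x8000u;   /* sign bit */
            int32_t  exp  = (int32_t)((bits >> 23) & 0xFF) - 127 + 15; /* rebias */
            uint16_t mant = (bits >> 13) & 0x03FFu;   /* keep top 10 mantissa bits */
            if (exp <= 0)  return sign;               /* underflow -> signed zero */
            if (exp >= 31) return sign | 0x7C00u;     /* overflow  -> infinity */
            return sign | (uint16_t)(exp << 10) | mant;
        }

        int main(void) {
            /* Storing a million values as halves takes 2 MB instead of 4 MB. */
            printf("0.5f as half: 0x%04X\n", (unsigned)float_to_half(0.5f)); /* 0x3800 */
            return 0;
        }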
    Additional changes include new cuSPARSE GEMVI routines, along with instruction-level profiling, which can help you figure out which part of your code is limiting GPU performance.
    The preview release of the DIGITS 2 software is available free to registered CUDA developers, with final versions coming soon. More information is available here.
    News:
    http://www.tomshardware.com/news/nvi...ate,29523.html

  10. #10 Jorge-Vieira
    Nvidia adds AI improvements to CUDA



    Updated to 16-bit floating point arithmetic

    Nvidia has sexed up its CUDA (Compute Unified Device Architecture) parallel programming platform and application programming interface.


    The company has now made sure that CUDA supports 16-bit floating point arithmetic, where in the past it could only do 32-bit floating point operations.
    The change is part of Nvidia's improvements to its AI software. Support for the smaller floating point size helps developers cram more data into the system for modelling. The company updated its CUDA Deep Neural Network library of common routines to support 16-bit floating point operations as well.
    Nvidia has also upgraded its Digits software for designing neural networks. Digits version 2, released yesterday, comes with a graphical user interface, potentially making it accessible to programmers beyond the typical user base of academics and developers who specialise in AI.
    Ian Buck, Nvidia vice president of accelerated computing said the previous version could be controlled only through the command line, which required knowledge of specific text commands and forced the user to jump to another window to view the results.
    Digits can now use up to four processors when building a learning model. Because the models can run on multiple processors, Digits can build them up to four times as quickly as the first version.
    Nvidia is a big fan of AI because it demands the kind of heavy computational power its GPUs provide.
    Nvidia first released Digits as a way to cut out a lot of the menial work it takes to set up a deep learning system.
    One early user of Digits' multi-processor capabilities has been Yahoo, which found this new approach cut the time required to build a neural network for automatically tagging photos on its Flickr service from 16 days to 5 days.
    News:
    http://www.fudzilla.com/news/38174-n...ements-to-cuda

  11. #11 Jorge-Vieira
    Nvidia Opens Up OpenACC Toolkit, Free To Academia

    OpenACC has been adopted by over 8,000 developers already, but now Nvidia is opening up a toolkit for free for academic and research purposes.

    If you’re good at creating parallel computing devices (read: graphics cards), you’ll certainly want people to be able to leverage the best of that power. Therefore, in 2011 Nvidia created OpenACC in collaboration with Cray, CAPS and PGI, and now the GPU maker is introducing the free OpenACC Toolkit, which comes with the PGI compiler, an NVProf Profiler, code samples and documentation, for academia.
    The idea behind OpenACC is simple: programmers can take their existing Fortran, C and C++ code and, with minimal adjustments, alter it to leverage the GPU's parallel processing power for certain bits of the code. Using hints in the code, you can tell the compiler which tasks should run on the GPU and which should remain on the CPU. Nvidia described OpenACC as simple, powerful and portable.
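    As a flavour of those "hints", here is a minimal sketch of a C SAXPY loop annotated with a standard OpenACC directive; the data clauses shown are one reasonable choice, not taken from Nvidia's toolkit samples.

        #include <stdio.h>

        #define N 1000000

        /* y = a*x + y. An OpenACC compiler offloads the marked loop to
         * the accelerator; the data clauses copy x in and y in/out. */
        void saxpy(int n, float a, const float *x, float *y) {
            #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
            for (int i = 0; i < n; ++i)
                y[i] = a * x[i] + y[i];
        }

        int main(void) {
            static float x[N], y[N];
            for (int i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
            saxpy(N, 3.0f, x, y);
            printf("y[0] = %f\n", y[0]);  /* expect 5.0 */
            return 0;
        }

    An OpenACC compiler such as PGI's pgcc (built with -acc) offloads the loop; an ordinary C compiler simply ignores the pragma and runs it on the CPU, which is what makes the approach portable.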

    Among the examples provided, Nvidia noted that at the University of Illinois, the MRI reconstruction in PowerGrid was sped up 70-fold, with just two days of effort spent adjusting the previously CPU-based code. Climate modeling for RIKEN Japan was sped up by 7-8x after modifying just five percent of the original source code. Currently, there are about 8,000 developers using OpenACC.

    In addition, Nvidia announced x86 CPU portability. If your system doesn’t have a GPU to run the parallel code on, the OpenACC compiler will build the application such that it uses the multiple x86 cores to achieve at least somewhat better performance. Currently, this feature is in the beta phase, but Nvidia will be making it available to a wider audience in Q4 this year.
    In the past, Nvidia’s OpenACC was only open to paying customers. Now, the OpenACC toolkit is available for free to academia. Commercial customers still have to pay, although they will be able to sign up for 90-day trials to figure out what they’d actually be paying for. More information and OpenACC downloads are available here.
    News:
    http://www.tomsitpro.com/articles/nv...te,1-2738.html

  12. #12 Jorge-Vieira
    Nvidia Will Host Free Deep Learning Course Online, Starting This Week

    Starting on Wednesday, July 22, Nvidia will be hosting a free bi-weekly instructor-led course on deep learning. The introductory course will include a combination of interactive lectures and hands-on exercises, as well as a one-hour Q&A the week following each class.
    Deep learning is a form of artificial intelligence that is rapidly being adopted by many different industries. It can be used to classify images, recognize voices, or analyze sentiments, among other things, with human-like accuracy. It can be applied to facial recognition software, used for scene detection, and used in advanced medical and pharmaceutical research. The technology is even being used in the development of self-driving cars.
    During the training sessions, you'll learn the essentials of designing and training neural-network-powered AI and integrating it into your own applications. The course will cover Nvidia's DIGITS interactive training system for image classification, as well as the Caffe, Theano and Torch frameworks. You'll have to take the introduction to deep learning as the first class. Nvidia will supply free hands-on lab exercises during the course through Nvidia Qwiklab.
    Each class is scheduled for 9 a.m. PT and will be recorded for later viewing. The course kicks off on Wednesday, July 22, and Nvidia is accepting registrations right now. As this is an introductory course, previous experience with deep learning and GPU programming is not required.
    News:
    http://www.tomshardware.com/news/fre...rse,29630.html

  13. #13 Jorge-Vieira
    NVIDIA Sets Conference Call for Second-Quarter Financial Results

    NVIDIA will host a conference call on Thursday, Aug. 6, at 2 p.m. PT (5 p.m. ET) to discuss its financial results for the second quarter of fiscal year 2016, ending July 26, 2015. The call will be webcast live (in listen-only mode) at the following websites: nvidia.com and streetevents.com. The company's prepared remarks will be followed by a question and answer session, which will be limited to questions from financial analysts and institutional investors. Ahead of the call, NVIDIA will provide written commentary on its second-quarter results from its CFO.
    News:
    http://www.hardocp.com/news/2015/07/...s#.VbJ2Efn0OTQ

  14. #14 Jorge-Vieira
    NVIDIA Announces Financial Results for Second Quarter Fiscal 2016

    NVIDIA today reported revenue for the second quarter ended July 26, 2015, of $1.153 billion, up 5 percent from $1.103 billion a year earlier, and up marginally from $1.151 billion the previous quarter.

    GAAP earnings per diluted share for the quarter were $0.05. This includes a charge of $0.19 per diluted share in connection with the company's decision to wind down its Icera modem operations, after a viable buyer failed to emerge. It also includes a charge of $0.02 per diluted share related to the NVIDIA SHIELD tablet recall. Non-GAAP earnings per diluted share were $0.34, up 13 percent from $0.30 a year earlier, and up 3 percent from $0.33 in the previous quarter.

    "Our strong performance in a challenging environment reflects NVIDIA's success in creating specialized visual computing platforms targeted at important growth markets," said Jen-Hsun Huang, president and chief executive officer of NVIDIA.

    "Our gaming platforms continue to be fueled by growth in multiple vectors -- new technologies like 4K and VR, blockbuster games with amazing production values, and increasing worldwide fan engagement in e-sports. We're working with more than 50 companies that are exploring NVIDIA DRIVE to enable self-driving cars. And our GPU-accelerated data center platform continues to make great strides in some of today's most important computing initiatives -- cloud-based virtualization and high performance computing applications like deep learning."

    "Visual computing continues to grow in importance, making our growth opportunities more exciting than ever," he said.

    Capital Return
    During the second quarter, NVIDIA paid $52 million in cash dividends and $400 million in share repurchases -- returning an aggregate of $452 million to shareholders. In the year's first half, the company returned an aggregate of $551 million to shareholders.

    NVIDIA will pay its next quarterly cash dividend of $0.0975 per share on September 11, 2015, to all shareholders of record on August 20, 2015.

    NVIDIA's outlook for the third quarter of fiscal 2016 is as follows:
    • Revenue is expected to be $1.18 billion, plus or minus two percent.
    • GAAP and non-GAAP gross margins are expected to be 56.2 percent and 56.5 percent, respectively, plus or minus 50 basis points.
    • GAAP operating expenses are expected to be approximately $484 million. Non-GAAP operating expenses are expected to be approximately $435 million.
    • GAAP and non-GAAP tax rates for the third quarter of fiscal 2016 are expected to be 22 percent and 20 percent, respectively, plus or minus one percent.
    • The above GAAP outlook amounts exclude additional restructuring charges, which are expected to be in the range of $15 million to $25 million, in the second half of fiscal 2016.
    • Capital expenditures are expected to be approximately $25 million to $35 million.

    Second Quarter Fiscal 2016 Highlights
    During the second quarter, NVIDIA achieved progress in each of its platforms.
    Gaming:
    • Continued strong demand for GeForce GTX GPUs, driven by advanced new games and growth in competitive e-sports, which now have an estimated 130 million viewers.
    • Unveiled the flagship GeForce GTX 980 Ti GPU, with the power to drive 4K and VR gaming.
    • Increased users of the GeForce Experience PC gaming platform to 65 million, from 38 million a year earlier.
    • Launched the NVIDIA SHIELD Android TV device, the most advanced smart TV platform, which connects TVs to a world of entertainment apps and services.

    Enterprise Graphics & Virtualization:
    • Continued strong momentum for NVIDIA GRID graphics virtualization, which more than tripled its customer base to over 300 enterprises from a year earlier.

    HPC & Cloud:
    • Engaged with more than 3,300 companies exploring the use of deep learning in areas such as speech recognition, image analysis and translation capabilities.
    • Shipped cuDNN 3.0, which doubles the performance of deep learning training on GPUs and enables the training of more sophisticated neural networks. cuDNN has been downloaded by more than 9,000 researchers worldwide.

    Auto:
    • Working with more than 50 companies to use the NVIDIA DRIVE PX platform in their autonomous driving efforts.
    News:
    http://www.techpowerup.com/215026/nv...scal-2016.html



    And nVidia keeps adding up the numbers and raising its profit percentages, another 5% increase.

  15. #15 Jorge-Vieira
    Nvidia hit by Icera, Shield recall costs

    Nvidia's Jen-Hsun Huang has spoken of growth at the company, but faulty devices and the death of its Icera modem business have led to an 80 per cent drop in net income in the last quarter.





    Nvidia has published its latest financial results, and despite taking a serious hit on the recall of faulty Shield Tablet devices the company has beaten expectations with a five per cent boost in revenue year-on-year - a fact which has failed to rescue the company from an 80 per cent drop in net income.

    In the company's latest financial announcement, for the second quarter of its 2016 financial year, the company boasted of $1.153 billion in revenue, up five per cent on the same period last year. While it comes with a slight drop in gross margin, from 56.1 per cent last year to 55 per cent this year, the bigger news was an 80 per cent drop in net income from $128 million to $26 million - thanks in part to the recall of Shield Tablet devices due to a fault which puts users at risk of fire.

    A bigger impact on the bottom line came from the company's decision to wind down its Bristol-based Icera mobile modem business unit, which it announced back in May, a mere four years after buying the 500-strong company for $367 million. While the company had looked to sell off the business unit, it claims no 'viable buyer' could be found, and so its closure comes as a total loss.

    'Our strong performance in a challenging environment reflects Nvidia's success in creating specialised visual computing platforms targeted at important growth markets,' boasted Jen-Hsun Huang, president and chief executive officer, of Nvidia's results. 'Our gaming platforms continue to be fuelled by growth in multiple vectors - new technologies like 4K and VR, blockbuster games with amazing production values, and increasing worldwide fan engagement in e-sports. We're working with more than 50 companies that are exploring Nvidia Drive to enable self-driving cars, and our GPU-accelerated data centre platform continues to make great strides in some of today's most important computing initiatives - cloud-based virtualisation and high performance computing applications like deep learning.'

    Nvidia is predicting further growth to $1.18 billion for the next quarter, with a gross margin of 56.2 to 56.5 per cent, but warns that a further $15 million to $25 million restructuring charge will again have an impact on the company's bottom line as it enters the second half of its financial year.
    News:
    http://www.bit-tech.net/news/hardwar...shield-costs/1

 

 