  1. #16 - Jorge-Vieira
    Semiconductors from idea to product

    The story of how chips are made

    Rys Sommefeldt works for Imagination Technologies and runs Beyond3D. He took us inside Nvidia's Fermi architecture a couple of years ago, and now he's back with a breakdown of how modern semiconductors like CPUs, GPUs, and SoCs are made.
    Disclaimer: what you're about to read is not an exact description of how my employer, Imagination Technologies, and its customers take semiconductor IP from idea to end user product. It draws on how they do it, but that's it.
    This essay is designed to be a guide to understanding how any semiconductor device is made, regardless of whether it's purely an in-house design, licensed IP, or something in between. I'll touch on chips for consumer devices, since that's what I work on most, but the process applies almost universally to any chip in any device.
    I've never read a really great top-to-bottom description of the process, and it's something I'd have loved to have read years before I joined a semiconductor IP company. I hope this helps others in the same position. If you're at all interested in how chips are made and selected for consumer products, this should hopefully be a great read.
    Chips on a 45-nm wafer. Source: AMD
    The idea
    It all starts with an idea, you see. Not quite at the level of "I want to build a smartphone," although understanding that the smartphone might be a target application for the idea will be great to help the idea take shape. No, we're going to talk about things a little bit further down, at the level of the silicon (but not for long!) chips that do all the computing in modern devices, be they smartphones or otherwise.
    All of the chips I can think of, even the tiniest and most specialized chips that perform just a few functions, are made up of much smaller building blocks underneath. If you want to perform any non-trivial amount of computation, even just on a single input, you're going to need a design that builds on top of foundational blocks.
    So whether the idea is "let's build the high-performance GPU that'll go in the chips that go into smartphones," or something that's much simpler, the idea (almost) never gets built in its entirety as one monolithic piece of technology. It usually must be built from smaller building blocks. The primary reason, especially these days, is that it's incredibly rare that one single person can hold the entire design for a chip in her or his head, in order to build it from start to finish and make sure it works. Modern chips are complex, usually consisting of at least a couple hundred million transistors in most consumer products, and often much, much more. The main processor in a modern desktop or laptop is well over a billion transistors. There's maybe over a billion transistors in your pocket, in the main chip in your phone.
    So you overwhelmingly can't build the idea as a monolithic thing, because humans just don't work that way. Instead, the idea must be broken down into blocks. Maybe a single person can design, build, assemble, and test all of the blocks themselves, but blocks are a must. I'll talk a lot about blocks, so apologies if the word offends somehow, or if it means "I hate your cat" in your native language. I definitely love your cat.
    Timescales
    For simplicity's sake, I'm going to talk about most common processors these days, which all take at least a year to make. Nothing in the semiconductor business happens really quickly. It really does normally take years to go from an idea about a chip all the way through the design, build, validation, integration, testing, sampling, possible rework, and mass production. All that happens before the product can be sold and you hold it in your hands, put it under your TV, drive it, fly it, use it to read books, or whatever else the chip finds itself in these days.
    The lifetime of a new chip is therefore never short. There are some macro views of the semiconductor industry that might make you think chips come together quickly. For example, a modern smartphone system-on-chip (SoC) vendor might be able to go from project start to chip mass production in a matter of months, but that's because all they're doing is integrating the already designed, built, validated, and tested building blocks that other people have made. Tens of thousands of man-years went into all of the constituent building blocks before the chip vendor got hold of them and turned them into the full SoC.
    It takes years—not months, weeks, days, or anything silly like that, at least for the main chips performing complex computation in modern consumer electronics and related industries.
    Knowing what you need
    Chip development taking years means there's a certain amount of hopefully accurate prediction to be done. Smart chip designers are data-driven folks who don't trust instinct or read tea leaves. They don't make decisions based on whether the headline in today's paper started with the third letter of the name of the second dog they had in their first house as a kid. Knowing what to design is almost pure data analysis.
    Data inputs come in to the chip designer from everywhere: marketing teams, sales people, existing customers, potential new customers, product planners, project managers, and competitive and performance analysis folks like me. Then there's the data they get from experience, because they built something similar last time and they know how well it worked (or not).
    The chip designer's first job is to filter all of that data and use it as the foundation of the model of what they're going to build. They need to know as much as possible about the contextual life of the chip when it finally comes into existence. What kind of products is it going to go into eventually? What does the customer expect as a jump over the last thing someone sold them? Is there a minimal bar for new performance or a requirement for some new features? Are trends in battery life, materials science, or the manufacturing of chips by the foundry changing?
    What about costs? Costs play an enormous role in things. There's no point designing something that costs $20 if your competitor can sell a functionally equivalent, similarly performing part for $10. Knowing your cost structure for any chip is probably the thing that shapes a chip designer's top-level bounds the most. Every choice you make has a cost, direct or indirect.
    Say your chip needs Widget A, which is 20 square millimeters in area on your foundry's process technology. Your total chip cost lets you design something that's 80 mm², because every square millimeter costs you 20 cents, your customer won't pay more than $20 for the full chip, and you really need that 25% gross margin on the manufacturing to pay for the next chip. Widgets B through Z only have 60 mm² left, and really a bit less than 60, because it's incredibly hard to lay out everything on the chip so there are no gaps. Sometimes you even want gaps, for power or heat reasons. I'll come back to that theme later.
    There's both a direct (your chip can't cost more than $16 to fab) and an indirect (choosing Widget A affects your further choices of Widgets B through Z) set of costs to model.
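    To make that budgeting arithmetic concrete, here is a minimal sketch, in Python, using only the numbers from the Widget A example above (a $20 selling price, 20 cents per square millimeter, and the $4 of margin that leaves). The variable names and the "25% on top of manufacturing cost" reading of the margin are my own framing for illustration, not anything from the article; real cost models are far more involved.

    Code:
    # Toy die-area budget, following the Widget A example (illustrative numbers only).
    selling_price = 20.00    # dollars the customer will pay for the full chip
    cost_per_mm2  = 0.20     # manufacturing cost per square millimeter on this process
    gross_margin  = 4.00     # margin needed to fund the next chip (~25% on top of cost)

    max_fab_cost    = selling_price - gross_margin      # $16 is the most the fab can cost
    area_budget_mm2 = max_fab_cost / cost_per_mm2       # 80 mm^2 of total die area

    widget_a_mm2  = 20.0
    remaining_mm2 = area_budget_mm2 - widget_a_mm2      # 60 mm^2 left for Widgets B..Z

    print(f"Die budget: {area_budget_mm2:.0f} mm^2, left after Widget A: {remaining_mm2:.0f} mm^2")
    # In practice a bit less is usable, since blocks never tile perfectly and you
    # sometimes want gaps for power and heat reasons.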
    So the chip designer takes all of those inputs and feeds them into her or his models (there's usually a lot of spreadsheet gymnastics here, more than you might think). The designer decides what Widgets they need for their chip, intercepting all of the top-level context about the chip, when it will be made, and when it will come into the world, to make sure they take advantage of everything known about its design and manufacturing.
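    As a toy stand-in for that spreadsheet work, the Python sketch below greedily picks blocks for a chip under a fixed area budget. Every block name, area, and "value" score here is invented purely for illustration; a real model would also weigh cost, power, schedule, licensing, and much more.

    Code:
    # Invented candidate blocks: name -> (area in mm^2, rough value score distilled
    # from the market/performance inputs described above). Purely illustrative.
    candidate_widgets = {
        "cpu_cluster":     (20.0, 9),
        "gpu":             (25.0, 8),
        "video_decoder":   ( 5.0, 6),
        "display_pipe":    ( 4.0, 5),
        "big_cache":       (15.0, 4),
        "extra_gpu_cores": (20.0, 3),
    }

    area_budget_mm2 = 80.0 * 0.9   # keep ~10% slack for layout gaps and thermal spacing

    chosen, used = [], 0.0
    # Greedy pass: take the most valuable blocks first, skip anything that no longer fits.
    for name, (area, value) in sorted(candidate_widgets.items(),
                                      key=lambda kv: kv[1][1], reverse=True):
        if used + area <= area_budget_mm2:
            chosen.append(name)
            used += area

    print(f"Chosen blocks: {chosen}")
    print(f"Area used: {used:.1f} of {area_budget_mm2:.1f} mm^2")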
    We now know that the designer needs some building blocks for their chip, and that they've made the hard decisions about what they believe they need. Where do those blocks come from these days?
    Buy it in or build it yourself
    If you're a semiconductor behemoth like Intel, where you literally have the ability not just to design the chip yourself, but also to manufacture it because you also own the chip fabrication machinery, you invariably build the blocks yourself. Say you're the lead designer for the next-generation Core i8-6789K xPro Extreme Edition Hyper Fighting. These days a product like that is not just the CPU like it used to be, where everything else in the system lies on the other end of a connected bus. Chips like the Core i7-4790K are a CPU, memory controller and internal fabric, GPU, big last level cache, video encoder, display controller, PCI Express root complex, and more. So let's assume the i8-6789K is probably at least all of those things.
    As lead designer of something like the i8-6789K, there's probably almost nothing on the i7-4790K chip that its designer bought from outside Intel, or that you'll now buy from a third party as a building block. I'd like to think there's at least one block that Intel didn't design, but I wouldn't be surprised if someone told me there were zero third-party pieces.
    Intel's Core i7-4790K (left) and i7-5960X (right)
    Intel do make chips where they get the blocks from outside of the company, but the vast majority of their revenue comes from sales of chips that are almost completely their own.
    So where are you going to get building blocks from? Intel obviously has design teams for each and every block of the chip. It's incredibly expensive, but the competitive advantages are enormous. Knowing that all of your block designs are coming from your own company, on timescales you (hopefully) control, where your competitors have no idea what you're building, and where you have full design-level control over every part that results in a flip-flop to be flip-flopped, is really compelling. That vertical integration is overwhelmingly an excellent idea if you can afford it, because it lets you put economies of scale to work amortizing the incredibly expensive capital expenditure required.
    You can see that build-it-yourself mentality elsewhere in the chip industry. Qualcomm do as much as they can. Nvidia are trying their very best. Apple are beating the rest of the consumer device world to death with their ability to vertically integrate as much as they can. Lots of that is built on Apple doing the work themselves, at the chip's block level.
    At the other end of the scale in consumer devices like phones and tablets, you have vendors that are master integrators but design none of the blocks themselves. They go shopping, get the blueprints for the blocks from other suppliers, connect them up, and ship the result, often very quickly. It's comparatively cheap and easy for them to take this approach. And, primarily because it's also cheap and easy for someone else to follow suit, they're in a horrible, slow, squeezing, cost-down race to the bottom that only a few will survive unless they can differentiate.
    Choosing between buying it in or building it yourself is largely a matter of capital expenditure, expertise, and supporting shipping volume. Those are the big factors, but there's still incredible extra nuance depending on the company making the chip. Some vendors will take a block design in-house where they previously bought it, not because doing so will make them any more money directly, but simply because it'll increase the size of the smile on the customer's face when they use the final product.
    Now we know where the blocks tend to come from. If you're rich and your customers love your stuff so much that your competition matters less, if anyone can even compete with you at all, and if you ship loads of whatever it is you make, you can go ahead and try to do as much of the block design as you can yourself. If your cost structure and competitive environment mean things are tighter, you need to go shopping. I've also written about how you should go shopping, if you want to nip off and read about that too.
    Regardless, someone needs to design the blocks.
    Full article:
    http://techreport.com/review/28126/s...dea-to-product

  2. #17 - Winjer
    IBM announces silicon photonics breakthrough, set to break 100Gb/s barrier

    IBM has announced a breakthrough in the field of silicon photonics: the first fully integrated wavelength multiplexed chip. This new device is designed to enable the manufacture of 100Gb/s optical transceivers and allow both the optical and electrical components to exist side-by-side on the same package. This type of on-die integration is critical to the long-term deployment of optical technology over short distances. But why deploy silicon photonics in the first place, and why has it taken decades of work from companies like IBM and Intel, with seemingly so little to show for it?
    Silicon photonics: the long-term copper replacement

    In theory, silicon photonics could solve some major problems associated with the continued use of copper interconnects. One of the biggest problems with copper wire is that it doesn’t scale nearly as well as other vital parts of a modern CPU. Past a certain point, it becomes physically impossible to make copper wires any smaller without compromising their performance and/or lifespan. In theory, optical interconnects could transmit data for far less power while simultaneously moving information much more quickly.
    Silicon, unfortunately, is a poor native medium for optical devices. Because silicon has only an indirect bandgap (making it a poor light emitter) and the scales of manufacturing are so different (optical waveguides and other components are far larger than the silicon CMOS devices they interact with), designing solutions that could scale effectively and affordably, integrate into existing CMOS manufacturing, and rely on silicon rather than costly alternative materials like gallium arsenide has proven extremely difficult.
    The reason so many companies have pushed to bring this technology to market, despite the slow pace of progress, is that silicon photonics is generally believed to be necessary for exascale-level computing. Right now, copper and fiber typically split the transmission market by distance. Short-run cables between servers or racks tend to use copper, while longer distances rely on fiber.

    The chart above is from an Intel presentation on silicon photonics, but it illustrates the cost and power consumption targets that manufacturers are trying to hit. The long-term roadmaps for silicon photonics enable bandwidths and energy-per-bit ratios that no copper signaling can match. Bringing energy per bit down from 75 picojoules to 250 femtojoules is roughly a 300x reduction, about two and a half orders of magnitude.
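    As a back-of-the-envelope check on those figures (my own arithmetic, not the article's), here is what those energy-per-bit numbers mean in watts at the 100Gb/s rate discussed above:

    Code:
    # Rough power cost of a single 100 Gb/s link at the two energy-per-bit figures quoted.
    bit_rate = 100e9            # bits per second
    pJ, fJ = 1e-12, 1e-15       # joules per picojoule / femtojoule

    power_at_75pJ  = 75 * pJ * bit_rate     # 7.5 W per link
    power_at_250fJ = 250 * fJ * bit_rate    # 0.025 W (25 mW) per link

    print(f"75 pJ/bit at 100 Gb/s:  {power_at_75pJ:.3f} W")
    print(f"250 fJ/bit at 100 Gb/s: {power_at_250fJ:.3f} W")
    print(f"Reduction: {power_at_75pJ / power_at_250fJ:.0f}x")   # ~300x, about 2.5 orders of magnitude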

    As part of that effort, IBM's research teams have worked to reduce the process node used for circuit design. A slide from 2012 shows how 90nm to 65nm represents the "sweet spot" for these kinds of circuits. While we're used to smaller nodes offering substantial benefits to traditional CPU transistors, other kinds of components don't see the same benefits from scaling to smaller process geometries. IBM's documentation refers to sub-100nm manufacturing, implying that the company standardized at 90nm or 65nm.
    IBM isn't giving timelines for when we might see more devices shipping with on-chip silicon photonics, but we can predict how the technology will roll out. Current cutting-edge designs put the optical components on the same physical package as the CPU, or at the edge of a motherboard. This makes the hardware useful for server-to-server links or possibly for peripheral connections. We expect to see silicon photonics roll out first in the HPC and scientific computing industries, where the sheer scale of many build-outs makes power conservation critical and government grants are available to ease the cost of initial deployment.
    After decades of work, silicon photonics might seem like just another pie-in-the-sky idea that sounds great on paper and never pans out, but from HP to Intel to IBM, progress is happening in this field. Hardware may not roll out today or next year, but optical signaling is going to play a part in computing's future: in the datacenter, even if nowhere else.
    With the limits of today's manufacturing processes, we could really use a breakthrough like this. May they keep up the work.


  3. #18 - Jorge-Vieira
    Intel looks to tunnelling transistors and spintronics



    Who needs speed?


    Chipzilla is looking at tunnelling transistors and spintronics and slowly rejecting the need for speed.

    According to Intel's William Holt, who leads the company's technology and manufacturing group, Intel will soon have to start using fundamentally new technologies.
    He named tunnelling transistors and spintronics as good candidates, but both would require changes in how chips are designed and manufactured, and would likely be used alongside silicon transistors.
    Holt said that the technology will not offer speed benefits over silicon transistors and chips may stop getting faster. Instead the tech would improve the energy efficiency of chips, something important for many leading uses of computing today, such as cloud computing, mobile devices, and robotics.
    Speaking at the International Solid-State Circuits Conference in San Francisco, he said: "We're going to see major transitions... The new technology will be fundamentally different."
    Holt said that the status quo can only continue for two more generations, just four or five years, by which time silicon transistors will be only seven nanometres in size.
    Tunnelling transistors are far from commercialisation. They take advantage of quantum mechanical properties of electrons that harm the performance of conventional transistors and that have become more problematic as transistors have got smaller.
    Spintronic devices could hit the market next year. They represent digital bits by switching between two different states encoded in spin, a quantum mechanical property of particles such as electrons.
    Spintronics will appear in some low-power memory chips in the next year or so, perhaps in high-powered graphics cards.
    “Particularly as we look at the Internet of things, the focus will move from speed improvements to dramatic reductions in power. Power is a problem across the computing spectrum. The carbon footprint of data centres operated by Google, Amazon, Facebook, and other companies is growing at an alarming rate. And the chips needed to connect many more household, commercial, and industrial objects from toasters to cars to the Internet will need to draw as little power as possible to be viable,” Holt said.
    Article:
    http://www.fudzilla.com/news/process...nd-spintronics

  4. #19 - osorio86
    An explanation of the difference between an i5 6400 and an i5 6600, in terms of the manufacturing process of a processor.


  5. #20 - LPC
    Hi!
    For anyone who enjoys a bit of semiconductor nostalgia... here's an interesting video about the history of Cyrix and how important it was for competition in the CPU market...



    Regards,

    LPC


  6. #21 - Winjer
    Today, December 23rd, marks one of the most important days in the history of humanity: the invention of the transistor.


  7. #22 - LPC
    Quote Originally Posted by Winjer:
    Today, December 23rd, marks one of the most important days in the history of humanity: the invention of the transistor.

    Hi!
    A day which, curiously, never happened in the Fallout universe...
    For them the world carried on without that invention, which is why we see that retro look, with coils and the like, in the game...
    In any case, I love that world, and here's a video I like to watch from time to time...



    Regards,

    LPC


  8. #23 - Winjer
    IBM has just shown off its 2nm process



 

 