Today, we are at an inflection point in computing where emerging Generative AI services are placing unprecedented demands on compute while the existing architectural patterns for improving efficiency have stalled. In this talk, we will discuss the likely needs of the next generation of computing infrastructure and use recent examples at Google, from networks to accelerators to servers, to illustrate the challenges and opportunities ahead. Taken together, we chart a course where computing must be increasingly specialized and co-optimized with algorithms and software, all while fundamentally focusing on security and sustainability.
Bio
Amin Vahdat is a Fellow and Vice President of Engineering at Google, where his team is responsible for delivering industry-leading Machine
Learning software and hardware that serve Alphabet, Google, and the world, and Artificial Intelligence technologies that solve customers’ most
pressing business challenges. In the past, he was General Manager for Google's compute, storage, and network hardware and software infrastructure.
Until 2019, he was the Technical Lead for the Networking organization at Google.
Before joining Google, Amin was the Science Applications International Corporation (SAIC) Professor of Computer Science and Engineering at UC San Diego
(UCSD). He received his doctorate in computer science from the University of California, Berkeley, and is a member of the National Academy of Engineering
(NAE) and an Association for Computing Machinery (ACM) Fellow. Amin has been recognized with a number of awards, including the National
Science Foundation (NSF) CAREER award, the UC Berkeley Distinguished EECS Alumni Award, the Alfred P. Sloan Fellowship, the Association for
Computing Machinery's SIGCOMM Networking Systems Award, and the Duke University David and Janet Vaughn Teaching Award. Most recently,
Amin was awarded the SIGCOMM Lifetime Achievement Award for his contributions to data center and wide-area networks.
Abstract
The basic principles of achieving high performance in computing have remained the same, even as they have evolved
and presented new and different challenges. This talk will touch on some computing history and lessons learned,
and make the case that although computing has achieved tremendous orders-of-magnitude breakthroughs, many of the
challenges facing us today are curiously the same. Today’s computing landscape is more exciting than ever.
Bio
Debbie Marr is the Chief Architect of the Advanced Architecture Development Group
(AADG) at Intel, where she leads the visioning and development of new CPU architectures and
microarchitectures for future computing needs such as AI, cloud computing, and
security.
Debbie’s 30+ years at Intel include roles such as Director of the Accelerator Architecture
Lab in Intel Labs, where she led research in machine learning and acceleration
techniques for CPUs, GPUs, FPGAs, and AI accelerators. Debbie played leading roles on
Intel CPU products from the 386SL to Intel’s current leading-edge products. Debbie was
the server architect of the Intel® Pentium™ Pro, Intel’s first Xeon processor. She brought Intel
Hyper-Threading Technology from concept to product on the Pentium 4 processor. She
was the chief architect of the 4th Generation Intel Core™ (Haswell) and led advanced
development for Intel’s 2017/2018 Core/Xeon CPUs. Debbie holds over 40 patents spanning
many aspects of CPU, AI accelerator, and FPGA architecture/microarchitecture.
Debbie has a PhD in electrical and computer engineering from the University of Michigan, an
MS in electrical engineering and computer science from Cornell University, and a BS in
electrical engineering and computer science from the University of California, Berkeley.
Moderator: Andreas Moshovos, University of Toronto
Panelists
Abstract
For over 50 years, information technology has relied upon Moore’s Law: providing, for the same cost, 2x the number of logic
transistors that were possible a few years prior. For much of that time, the smaller devices also provided dramatic energy
and performance improvement through Dennard Scaling, but that scaling ended over a decade ago. While technology scaling continues,
per-transistor cost is no longer scaling in the advanced nodes. In this post-Moore’s Law reality, further price/performance improvement
follows only from improving the efficiency of applications using innovative hardware and software techniques.
Unfortunately, this need for innovative system solutions runs smack into the enormous complexity of designing and debugging contemporary
VLSI-based hardware/software platforms, a task so large that it has caused the industry to consolidate, moving it away from innovation.
The result is a set of platforms aimed at different computing markets. To overcome this challenge, we need to develop a new design approach
and tools to enable small groups of application experts to selectively extend the performance of those successful platforms.
Like the ASIC revolution of the 1980s, the goal of this approach is to enable a new set of designers, then board-level logic designers,
now application experts, to leverage the power of customized silicon solutions. As then, these tools won’t initially be useful for
current chip designers, but over time they will underlie all designs. In the 1980s, the key technologies that gave logic designers this access
were logic synthesis, simulation, and placement/routing of their designs onto gate arrays and standard cells. Today, the key is to realize you are
creating an “app” for an existing platform, and not creating the system solution from scratch (which is both too expensive and error-prone),
and to leverage the fact that modern “chips” are made of many chiplets. The new approach must provide a design window familiar to application
developers, with similar descriptive, performance-tuning, and debug capabilities. These new tools will be tied to highly capable platforms that
are used as the foundation, much like the app-store model for mobile phones. This talk will try to convince you this might be possible, and encourage
you to help contribute to this effort.
Bio
Mark Horowitz is the Yahoo! Founders Professor at Stanford University and chair of the Electrical Engineering Department.
He co-founded Rambus, Inc. in 1990 and is a fellow of the IEEE and the ACM and a member of the National Academy of Engineering and the
American Academy of Arts and Sciences. Dr. Horowitz's research interests are quite broad, spanning from applying EE and CS analysis methods to problems
in molecular biology to creating new design methodologies for analog and digital VLSI circuits.