This lecture will be held in person and via WebEx.
The use of deep-learning (DL) based AI has grown dramatically over the last few years. The accuracy delivered by DL has enabled new levels of automation for many tasks, and DL has proven especially effective for processing unstructured data, such as natural language and images, that is difficult to handle otherwise. The computational demands of these AI methods are high and growing exponentially; satisfying them requires large improvements in compute latency, throughput, and energy efficiency. To address the challenges and opportunities of AI computing, the IBM Research AI Hardware Center has undertaken an end-to-end approach to enhancing compute efficiency and improving performance. This includes developing algorithmic innovations such as arithmetic precision scaling while maintaining iso-accuracy, new digital AI chip architectures informed by these new algorithms, novel analog in-memory hardware with networks mapped onto arrays of non-volatile memory elements, heterogeneous integration technologies to enable balanced system performance, and the software innovations needed to enable system integration. In this presentation I will explore the challenges and opportunities in this exciting space of AI hardware design.
Jeff Burns is the Director of the IBM Research AI Hardware Center. He manages the Center's activities across materials, advanced packaging, accelerator design, software, and applications. Upon joining IBM Research at the T.J. Watson Research Center, he initially worked on layout automation and processor design. He has since managed teams and projects in VLSI design, design automation, microprocessors, systems architecture, and AI. He received his B.S. in Engineering from UCLA, and his M.S. and Ph.D. in Electrical Engineering from U.C. Berkeley.