Keynote 2: Challenges and Opportunities in Memory Systems for AI Accelerators
Demand for processors with very high-bandwidth memory systems has exploded in concert with the rapid advances in deep learning and artificial intelligence. Within a decade, we can expect processors that require a memory system capable of delivering 100 terabytes per second from over 1 terabyte of capacity within a power budget of less than 1 kilowatt. This simultaneous need for very high bandwidth and very low per-access energy across a large pool of data creates many challenges. This talk will detail some of these difficulties and discuss approaches architects and memory designers might take to address them.
Mike O’Connor manages the Memory Architecture Research Group at NVIDIA. His group is responsible for future DRAM and memory system architecture research. In a prior role at NVIDIA, he was the memory system architecture lead for several generations of NVIDIA GPUs. Mike’s career has also included positions at AMD, Texas Instruments, Silicon Access Networks (a network-processor startup), Sun Microsystems, and IBM. At AMD, he drove much of the architectural definition for the High-Bandwidth Memory (HBM) specification. Mike has a BSEE from Rice University and an MSEE & PhD from the University of Texas at Austin.