Keynotes


Computing in a Persistent (Memory) World

Paolo Faraboschi, HP Labs

Monday, February 9th (8:50am-10:00am)

Abstract: By the end of the decade we expect over 30 billion intelligent devices connected to the internet, resulting in unprecedented amounts of data. At the same time, scaling of the memory technologies that are at the foundation of computing today will significantly slow down. We will need transformational changes to the way in which we collect, process, store, and analyze that data. Not everyone realizes that these changes will revolutionize the way in which we architect and program computing systems. This talk will discuss the technology trends, the implications for software and programming, and what we are doing at HP to address some of the challenges. Starting from emerging non-volatile memory devices, it will cover how they will enable flattening and re-architecting the memory hierarchy. It will then dive into the implications for software, discussing how file systems, databases, and applications can explicitly take advantage of large, flat, and persistent memory spaces.

Biography:

Paolo Faraboschi is an HP Fellow at HP Labs. His interests are at the intersection of system architecture and software. He is currently working on The Machine project, researching how we can build better systems around non-volatile memory. In the last five years, he worked on low-energy servers and HP’s project Moonshot. From 2004 to 2009, at HPL in Barcelona, he led a research activity on scalable system-level simulation and modeling. From 1995 to 2003, at HPL Cambridge, he was the principal architect of the Lx/ST200 family of VLIW cores, widely used in video SoCs and HP’s printers. Paolo is an IEEE Fellow and an active member of the computer architecture community: guest co-editor of IEEE Micro Top Picks 2012, and Program co-Chair for HiPEAC 2010, MICRO-41 (2008), and MICRO-34 (2001). He holds 25 patents and co-authored the book “Embedded Computing: A VLIW Approach to Architecture, Compilers and Tools”. Before joining HP in 1994, he received a Ph.D. in EECS from the University of Genoa, Italy.


A New Architecture for Brain-inspired Computing

Dr. Dharmendra S. Modha, IBM Fellow

Tuesday, February 10th (1:15pm-2:25pm)

Abstract: I will describe a decade-long, multi-disciplinary, multi-institutional effort spanning neuroscience, supercomputing, and nanotechnology to build and demonstrate a brain-inspired computer, and I will cover its architecture, programming model, and applications. For more information, see: modha.org.

Biography:

Dr. Dharmendra S. Modha is an IBM Fellow and IBM Chief Scientist for Brain-inspired Computing. He is a cognitive computing pioneer who envisioned and now leads a highly successful effort to develop brain-inspired computers. The groundbreaking project, SyNAPSE, funded by DARPA to the tune of $53.5M, is multi-disciplinary, multi-national, and multi-institutional, and has had worldwide scientific impact. Its resulting revolutionary computing architecture and ecosystem break from the prevailing von Neumann paradigm and constitute a foundation for new classes of ultra-low-power, compact, real-time, multi-modal sensorimotor information technology systems. Dr. Modha has also made significant contributions to IBM businesses via innovations in caching mechanisms for storage controllers, clustering algorithms for services, and coding theory for disk drives. His work has been featured in The Economist, Science, The New York Times, BBC, Discover, MIT Technology Review, Associated Press, Popular Mechanics, Communications of the ACM, Forbes, Fortune, and IEEE Spectrum, amongst thousands of media mentions. Author of over 60 papers and inventor of over 100 patents, he has won ACM’s Gordon Bell Prize, the USENIX/FAST Test of Time Award, Best Paper Awards at ASYNC and IDEMI, and First Place in the Science/NSF International Science & Engineering Visualization Challenge, and is a Fellow of the IEEE and the World Technology Network. In 2013 and 2014, he was named Best of IBM. On their 40th anniversary, EE Times named Dr. Modha among 10 Electronics Visionaries to Watch. Dr. Modha received his BTech from IIT Bombay in 1990 and his PhD from UCSD in 1995.


LIQUi|>: Simulation and Compilation of Quantum Algorithms

Dave Wecker, Architect, Microsoft Research

Wednesday, February 11th (8:15am-9:25am)

Abstract: Languages, compilers, and computer-aided design tools will be essential for scalable quantum computing, which promises an exponential leap in our ability to execute complex tasks. LIQUi|> is a modular software architecture designed to simulate and control quantum hardware. It enables easy programming, compilation, and simulation of quantum algorithms and circuits, and is independent of a specific quantum architecture. This talk will focus on the simulation of quantum algorithms in quantum chemistry and materials, as well as factoring, quantum error correction, and compilation for hardware implementations (http://arxiv.org/abs/1402.4467).

Biography:

Dave came to Microsoft in 1995 and helped create the “Blender” (a digital video post-production facility). He designed and worked on a Broadband MSN offering, and then became architect for the Handheld PC v1 & v2 as well as the AutoPC v1 and Pocket PC v1. He moved to Intelligent Interface Technology, where he resurrected SHRDLU for natural language research and built a state-of-the-art neural-network-based speech recognition system. For the Mobile Devices Division, he implemented secure DRM on e-books and Pocket PCs. He created and was director of ePeriodicals before taking on the role of Architect for Emerging Technologies. This led to starting the Machine Learning Incubation Team and then becoming architect for Parallel Computing Technology Strategy, working on Big Data and now Quantum Computing. He has over 20 patents for Microsoft and 9 Ship-It awards. He started coding professionally in 1973, worked in the AI labs at CMU while obtaining a BSEE and an MSIA, and was at DEC for 13 years (ask him about DIDDLY sometime ;).


Important Dates

  • Conference: Feb 9-11, 2015
  • W&T: Feb 7-8, 2015
  • Early registration: Jan 11, 2015
  • Travel grant: Jan 9, 2015
  • Camera ready: Dec 15, 2014
  • Paper Notification: Nov 10, 2014
  • Author response: Oct 28-30, 2014
  • W&T notification: Oct 1, 2014
  • W&T proposal: Sep 15, 2014
  • Paper submission: Sep 12, 2014
  • Paper registration: Sep 5, 2014