The Meeting
Keynote Presentations
-
Pete Beckman - Argonne National Laboratory and the University of Chicago
Facts and Speculations on Exascale: Revolution or Evolution?
For the last two years, scientists have been planning for the immense challenges of moving to the exascale era. Some profound changes will be required to reach exascale. Key design issues will include massive intra-node parallelism, power management, advanced run-time systems, and fault management. From the programming model to the system software, a shift is happening. This talk will explore the future needs of exascale software and how programmers will adapt to the new models.
-
Alessandro Curioni - IBM, Zurich Research Laboratory, Switzerland
New Scalability Frontiers in Ab-Initio Molecular Dynamics
First-principles-based molecular simulations are now key tools for scientific discovery, and their impact on innovation and technology is steadily increasing.
In this presentation, I will give an overview of the work we have done over the past decade to extend the applicability of ab-initio Molecular Dynamics simulations by re-engineering algorithms and mapping them properly onto massively parallel machines. I will also discuss successful applications, as well as the challenges and opportunities of exascale computing.
-
Toni Cortes - Computer Architecture Department (DAC) at the Universitat Politècnica de Catalunya and Barcelona Supercomputing Center, Spain
Why trouble humans? They do not care
Traditionally, high performance in HPC has come at a high cost to users. How many times have you heard, or read, HPC managers say that user training is the key to achieving high performance on an HPC system? But consider it from the user's perspective: did any of those recommendations make sense to them? How much pain did that training cause them? How many things did they have to worry about besides their real problem? They cared about a business or science problem that, unfortunately, involved large amounts of data and computation. They did not care about all the other things they had to learn to get that problem solved. Can HPC systems be designed and implemented differently, so that high performance is achieved by the system rather than by the user?
Just a couple of thoughts: i) if current systems are complex to use, can you imagine using an exaflop machine? and ii) the iPad 2 would have ranked in the Top500 until 1994; could we have imagined, back in 1994, that a machine in the Top500 could be as easy to use as an iPad 2?