ASHPC25 will take place at Rimske Terme in Slovenia, 19–22 May 2025. It will open with a welcome dinner on 19 May 2025, and the conference program will begin at 09:00 on 20 May 2025.
ASHPC25 continues the tradition of the annual Austrian HPC Meetings (2015–2020) and the Austrian-Slovenian HPC Meetings (2021–2024), bringing together users and providers of large-scale computing resources for both academic and non-academic research and development. Keynote speakers will share insights into the latest developments in distributed machine-learning algorithms, computational chemistry, quantum computing, and the challenges posed by climate and weather simulations.
ASHPC25 focuses on various aspects of High-Performance Computing (HPC) and provides a great opportunity to discuss HPC-related topics and present your latest results. It will also provide an overview and update on the rapidly improving HPC landscape available to European researchers.
Registration for ASHPC25, along with the co-located Central European NCCs Workgroup Meeting, is now open. Please register by 21 February 2025 to ensure room reservations at the hotel, as we cannot guarantee accommodation beyond this date.
On Monday, 19 May 2025, in the afternoon preceding ASHPC25, there will be a Central European NCCs Workgroup Meeting (for NCC / EuroCC 2 staff only) where the National Competence Centres (NCCs) address their most challenging topics and further regional collaboration.
This is a preliminary timetable and is subject to change.
Erwin Laure is the Director of the Max Planck Computing and Data Facility (MPCDF) of the Max Planck Society (MPG) in Garching, Germany, and Honorary Professor at the Technical University of Munich. Before joining the MPG he was Professor of High Performance Computing at KTH Stockholm and Director of the PDC Center for High Performance Computing there. He holds a PhD from the University of Vienna and has more than 25 years of experience in high-performance computing, was a member of the EuroHPC Infrastructure Advisory Group, and was involved in major European exascale projects (e.g. the BioExcel Centre of Excellence for Biomolecular Simulations). His research interests include programming environments, languages, compilers, and runtime systems for parallel and distributed computing, with a focus on exascale computing.
For many decades now, the European Centre for Medium-Range Weather Forecasts (ECMWF) has spearheaded developments in global numerical weather prediction. The continued growth in forecast skill over the past few decades is due in large part to increases in the grid resolution of ECMWF’s Earth-system model, the Integrated Forecasting System (IFS). This increase has gone hand-in-hand with developments in high-performance computing, with new generations of supercomputer permitting higher model resolutions and complexity. Weather forecast skill is especially sensitive to the resolution of the atmospheric component, for which resolutions are approaching the so-called “storm-resolving” level, which indicates a grid spacing of lower than 10 kilometres. A step change in the fidelity of global atmospheric simulations is expected as the model resolution approaches this “kilometre scale”, in particular for the representation of extreme weather events.
However, recent developments in supercomputing present a barrier as we push towards these kilometre-scale simulations. The impetus for this new class of forecast system comes from the Destination Earth project, whose goals are to develop a series of Earth-system digital twins to aid in the prediction and mitigation of extreme weather events under a changing climate. In order to run this new class of Earth-system simulation efficiently, one must make effective use of accelerators, namely GPUs, and large communication networks.
This talk will give an overview of activities at ECMWF towards the goal of running kilometre-scale Earth-system simulations on pre-exascale and exascale supercomputers. The talk will present lessons learned from earlier experiments on supercomputers such as Summit. I will concentrate in particular on the spectral transform library ecTrans which the IFS atmospheric component crucially depends on, and which neatly contains several key computation and communication paradigms. I will also explore the opportunities of the new breed of data-driven models, which are led by ECMWF’s AIFS machine learning model. These models rival traditional “physics-based” models such as the IFS, and are extremely cheap at inference time. The training of these models is a high-performance computing problem in its own right.
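To give a flavour of the computational pattern at stake, the sketch below is a deliberately simplified, single-process toy: plain FFTs on a doubly periodic grid stand in for the spherical harmonic transforms that ecTrans actually performs, and an in-memory reshuffle marks where an MPI all-to-all would occur in a distributed run. It is only meant to illustrate why spectral transforms couple heavy local computation with global communication; the grid sizes and rank count are arbitrary.

# Toy single-process sketch of the transpose-based spectral transform pattern.
# Hypothetical example: periodic FFTs stand in for the spherical harmonic
# transforms of ecTrans; this is not ecTrans's actual algorithm.
import numpy as np

nlat, nlon, nranks = 64, 128, 4          # toy grid and a pretend rank count
field = np.random.rand(nlat, nlon)       # grid-point field, rows = latitudes

# 1) Each "rank" owns a band of latitudes and FFTs along longitude locally.
bands = np.array_split(field, nranks, axis=0)
fourier_bands = [np.fft.rfft(b, axis=1) for b in bands]

# 2) Transpose step: in a real distributed run this is an MPI all-to-all so
#    that every rank ends up owning all latitudes for a subset of zonal
#    wavenumbers; here we simply reassemble and re-split along the other axis.
fourier = np.concatenate(fourier_bands, axis=0)
wave_bands = np.array_split(fourier, nranks, axis=1)

# 3) Second transform along latitude (a Legendre transform in the real model,
#    a plain FFT in this periodic toy), again purely local per rank.
spectral = [np.fft.fft(wb, axis=0) for wb in wave_bands]

print([s.shape for s in spectral])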
Sam Hatfield is a Computational Scientist at the European Centre for Medium-Range Weather Forecasts (ECMWF). He is closely involved in efforts to port the Integrated Forecasting System (IFS), a numerical weather prediction code with a long history, to modern GPU-equipped supercomputers. This will enable ECMWF to perform kilometre-scale global Earth-system simulations to better capture extreme weather events, a key goal of the Destination Earth initiative. Sam focuses in particular on the spectral transform kernel, provided by ECMWF’s numerical library ecTrans, which can dominate the overall wall time of the IFS.
Presented posters:
*The order of posters is random and subject to change.
• Performance Evaluation of Parallel Approaches for 1D PIC Simulations on GPUs
• AURELEO: Austrian Users at LEONARDO supercomputer
• Integrating Linux with HTTP: Secure and Automated Workflows
• FLEXWEB
• Empowering Women in High Performance Computing: Activities of the Central European Chapter of Women in HPC
• Understanding and Addressing Evolving Training Needs in High-Performance Computing: Insights from the 2024 Training Services Survey
• Photoionization
• Influence of membrane composition on the signaling of the NKG2A/CD94/HLA-E complex investigated by all-atom simulations
• Optimization of Small Language Models (SLMs)
• NCC Czechia: Success Stories
• ICON @ VSC: strong scaling tests with a global km-scale model of the atmosphere
• VSC's Software Stack Envisioned
• Heterogeneous Exascale Particle-in-Cell
• Exoplanet Climate
• Support Systems for National Advanced Computing Service
• Power Monitoring: IPMI Exporter
• Predicting rates of conformational change of proteins from projected molecular dynamics simulations
• VSC Service Infrastructure State & Future
• FFplus: Driving SME and Startup Innovation by unleashing the potential of HPC and Generative AI
• EXCELLERAT CoE: The European Centre of Excellence for Engineering Applications
A key barrier to the wide deployment of highly-accurate machine learning models, whether for language or vision, is their high computational and memory overhead. Although we possess the mathematical tools for highly-accurate compression of such models, these elegant techniques require second-order information about the model’s loss function, which is hard to even approximate efficiently at the scale of billion-parameter models.
In this talk, I will describe our work on bridging this computational divide, which enables the accurate second-order pruning and quantization of models at truly massive scale. Compressed using our techniques, models with billions and even trillions of parameters can be executed efficiently on GPUs or even CPUs, with significant speedups, and negligible accuracy loss.
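As a rough illustration of why second-order information matters for compression, the following toy applies Optimal-Brain-Damage-style saliency pruning to a small least-squares model. It is not the method presented in the talk, only the underlying idea that the diagonal of the Hessian estimates how much the loss grows when a weight is zeroed out; the data and sizes are made up for the example.

# Minimal sketch of second-order (saliency-based) pruning on a toy
# least-squares model; illustrative only, not the large-scale methods
# discussed in the talk.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
w_true = rng.normal(size=50) * (rng.random(50) < 0.3)   # sparse ground truth
y = X @ w_true + 0.01 * rng.normal(size=200)

w = np.linalg.lstsq(X, y, rcond=None)[0]    # dense "trained" weights
H_diag = np.einsum('ij,ij->j', X, X)        # diagonal of the Hessian X^T X

# Optimal-Brain-Damage-style saliency: estimated loss increase if w_i is
# zeroed out, 0.5 * H_ii * w_i^2.  Prune the lowest-saliency half.
saliency = 0.5 * H_diag * w**2
keep = saliency >= np.median(saliency)
w_pruned = np.where(keep, w, 0.0)

def loss(wv):
    return 0.5 * np.sum((X @ wv - y) ** 2)

print(f"dense loss  : {loss(w):.4f}")
print(f"pruned loss : {loss(w_pruned):.4f}  ({np.sum(~keep)} weights removed)")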
Dan Alistarh is a Professor at the Institute of Science and Technology Austria, in Vienna. Previously, he was a Visiting Professor at MIT, a Researcher at Microsoft, and received his PhD from the EPFL. His research is on algorithms for efficient machine learning and high-performance computing, with a focus on scalable DNN inference and training, for which he was awarded an ERC Starting Grant in 2018. In his spare time, he works with the ML research team at Neural Magic, a startup based in Boston, on making compression faster, more accurate and accessible to practitioners.
This talk highlights the importance of high-performance computing (HPC) in addressing some of the fundamental challenges in nonequilibrium quantum physics, particularly in understanding ergodicity-breaking transitions (EBTs) in isolated interacting quantum systems. These transitions delineate ergodic systems (also referred to as quantum chaotic), which equilibrate over time, from nonergodic ones, which retain memory of their initial conditions indefinitely. As such, nonergodic systems hold promise for applications in quantum computing and memory devices, thus making their identification and characterization an area of intense interest for theoreticians and experimentalists alike.
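A standard numerical diagnostic in this field, and one whose evaluation for large Hilbert spaces leans heavily on HPC, is the mean level-spacing ratio of the Hamiltonian spectrum: values near 0.53 signal random-matrix (ergodic, quantum-chaotic) statistics, while values near 0.39 indicate Poisson statistics typical of nonergodic systems. The sketch below computes it for a GOE random matrix standing in for an interacting many-body Hamiltonian; it illustrates the diagnostic only and is not taken from the speaker's work.

# Minimal sketch: mean level-spacing ratio, a spectral diagnostic used to
# distinguish ergodic (random-matrix-like) from nonergodic spectra.
# A GOE random matrix stands in for a many-body Hamiltonian.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
A = rng.normal(size=(n, n))
H = (A + A.T) / np.sqrt(2 * n)              # GOE ensemble member

E = np.linalg.eigvalsh(H)                   # sorted eigenvalues
s = np.diff(E)                              # level spacings
r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])

# ~0.53 indicates GOE (chaotic) statistics; ~0.39 would indicate Poisson
# statistics, typical of systems that fail to thermalize.
print(f"mean gap ratio: {r.mean():.3f}")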
Jan Šuntajs is a physicist specializing in quantum many-body systems, quantum chaos, and ergodicity-breaking transitions, with extensive experience in high-performance computing for numerical simulations. He received his PhD in physics from the University of Ljubljana, Faculty of Mathematics and Physics. His thesis, Nonequilibrium and statistical properties of isolated quantum many-body systems, earned him the Jožef Stefan Institute Golden Emblem Prize. Jan is currently employed at the Department of Theoretical Physics (F1) at the Jožef Stefan Institute and at the Laboratory for Internal Combustion Engines and Electromobility (LICeM) at the Faculty of Mechanical Engineering, University of Ljubljana. At LICeM, his research focuses on numerical simulations of batteries, particularly the coupling between chemical and elastomechanical properties, while his work at the Jožef Stefan Institute explores the boundaries of quantum chaos.
Real-world data typically contain a large number of features that are often heterogeneous in nature, relevance, and even units of measure. When assessing the similarity between data points, one can build various distance measures using subsets of these features. Finding a small set of features that still retains sufficient information about the dataset is important for the successful application of many machine learning approaches. We introduce an approach that can assess the relative information retained when using two different distance measures, and determine whether they are equivalent, independent, or one is more informative than the other. This test can be used to identify the most informative distance measure out of a pool of candidates. We will discuss applications of this approach to feature selection in molecular modeling, to the analysis of the representations of deep neural networks, and to inferring causality in high-dimensional dynamic processes and time series.
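The rank-based idea behind such a comparison can be sketched in a few lines: for each point, find its nearest neighbour according to distance A and record the rank that this neighbour has according to distance B; an average near zero means A predicts B, an average near one means it does not. The code below is a minimal illustration under that reading (the precise estimator and normalisation used in the speaker's work may differ), comparing a distance built from all features of a synthetic dataset against one built from a subset.

# Minimal sketch of a rank-based test of whether distance measure A predicts
# distance measure B, in the spirit of the approach described above
# (illustrative only; the published estimator may differ in detail).
import numpy as np

def imbalance(d_a, d_b):
    """Near 0: A carries the information in B.  Near 1: it does not."""
    n = d_a.shape[0]
    d_a = d_a.copy()
    d_b = d_b.copy()
    np.fill_diagonal(d_a, np.inf)           # exclude self-distances
    np.fill_diagonal(d_b, np.inf)
    nn_a = np.argmin(d_a, axis=1)           # nearest neighbour under A
    ranks_b = d_b.argsort(axis=1).argsort(axis=1) + 1   # ranks under B
    return 2.0 / n * ranks_b[np.arange(n), nn_a].mean()

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 5))
full = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)      # all 5 features
partial = np.linalg.norm(X[:, None, :2] - X[None, :, :2], axis=-1)  # only 2 features

print(f"full -> partial: {imbalance(full, partial):.2f}")
print(f"partial -> full: {imbalance(partial, full):.2f}")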
Alessandro Laio is a Full Professor in the Department of Physics at the International School for Advanced Studies (SISSA), Trieste, and a consultant for the Condensed Matter and Statistical Physics group at ICTP. His recent research has focused on algorithmic developments in unsupervised learning, data clustering, metric learning, and dimensionality reduction, while he has also made pioneering contributions to improving the ability of computer simulations to make predictions for complex systems. His name is usually associated with groundbreaking algorithmic solutions for extracting essential features from complex data.