ISC 2022 IXPUG Workshop

 


ISC 2022 IXPUG Workshop: "Communication, I/O, and Storage at Scale on Next-Generation Platforms"

 

Location: Hamburg, Germany (ISC 2022 Hall Y11)

Date & Time: Thursday, June 2, 2022 9:00 AM to 1:00 PM CEST

Registration: The workshop is held in conjunction with ISC 2022 in Hamburg. To attend the IXPUG workshop, you must register for workshops specifically through the ISC 2022 website.

 

Event Description: The workshop aims to attract system architects, code developers, research scientists, system providers, and industry luminaries who are interested in the interplay of next-generation hardware and software solutions for communication, I/O, and storage subsystems that together support HPC and data analytics at the system level, and in how to use them effectively. The workshop will provide the opportunity to assess technology roadmaps for AI and HPC at scale, to share users’ experiences with early-product releases, and to provide feedback to technology experts. The overall goal is to make the ISC community aware of the emerging complexity and heterogeneity of upcoming communication, I/O, and storage subsystems as part of next-generation system architectures, and to examine how these components contribute to scalability in both AI and HPC workloads.

The workshop will pursue several objectives: (1) Develop and provide a holistic overview of next-generation platforms with an emphasis on communication, I/O, and storage at scale, (2) Showcase application-driven performance analysis with various HPC fabrics, (3) Present early experiences with emerging storage concepts such as object stores using next-generation HPC fabrics, (4) Share experience with performance tuning on heterogeneous platforms from multiple vendors, and (5) Serve as a forum for sharing best practices for performance tuning of communication, I/O, and storage to improve application performance at scale, along with the associated challenges.

Workshop Format: The workshop will feature a keynote, full talks (30 min), and lightning talks (10-15 min). While in-person presentations are preferred, pre-recorded videos will be accepted as presentations in exceptional cases.

Workshop Agenda: All times are shown in CEST. Final presentations will be made available for download at https://www.ixpug.org/resources after the workshop.

09:00-09:05  Welcome (Thomas Steinke, Zuse Institute Berlin)

Session 1 (Chair: Thomas Steinke)

09:05-09:45  Keynote: Modular Supercomputing: A Heterogeneous Architecture for the Exascale Era (Estela Suarez, Juelich Supercomputing Centre)
09:45-10:15  DAOS Features for Next Generation Platforms (Mohamad Chaarawi, Intel Corporation)
10:15-10:45  Evaluating On-Demand Parallel File System Impacts on Compute-Bound Tasks (Matthew L. Curry, Sandia National Laboratories)
10:45-11:00  Building a Balanced Exascale System (David E. Martin, Argonne Leadership Computing Facility)
11:00-11:30  Break

Session 2 (Chair: Amit Ruhela)

11:30-12:00  An Early Scalability Study of Omni-Path Express (Douglas Fuller, Cornelis Networks, and Steffen Christgau, Zuse Institute Berlin)
12:00-12:30  Update on PSM3 Architecture and Performance (James P. Erwin, Intel Corporation)
12:30-13:00  Activities Towards the Upcoming Extension of the LRZ Flagship System (Josef Weidendorfer, Leibniz Supercomputing Centre)
13:00        Wrap-up

 

Call for Submissions (closed): The submission process closed on April 11, 2022 AoE. Submissions should be extended abstracts of up to 8 pages in LNCS format, submitted via the IXPUG EasyChair website. Notifications will be sent to submitters by April 18, 2022 AoE.

Topics of interest include (but are not limited to):

  • Holistic view on performance of next-generation platforms (with emphasis on communication, I/O, and storage at scale)
  • Application driven performance analysis on inter-node and intra-node HPC fabrics
  • Software-defined networks in HPC environments
  • Experiences with emerging scalable storage concepts, e.g., object stores using next-generation HPC fabrics
  • Performance tuning on heterogeneous platforms from multiple vendors including impact of I/O and storage
  • Performance and portability using network programmable devices (DPU, IPU)
  • Best-practice solutions for performance tuning of communication, I/O, and storage to improve application performance at scale, along with the associated challenges

Keywords: high-performance fabrics, data and infrastructure processing units, scalable object stores as HPC storage subsystems, heterogeneous data processing 

Review Process: All submissions within the scope of the workshop will be peer-reviewed and evaluated on the quality of their results, originality and new insights, technical strength, and correctness. We apply a standard single-blind review process, i.e., the authors will be known to the reviewers. Reviewers from the Program Committee will be assigned so as to avoid conflicts of interest.

 Important Dates:

  • Call for Papers/Contributions: Mar 21, 2022
  • Deadline for submissions: April 11, 2022
  • Final acceptance notification: April 18, 2022
  • Camera-ready presentations: May 31, 2022

 

 Organizers:

  • Maria Girone, CERN/openlab
  • David Martin, Argonne Leadership Computing Facility
  • Amit Ruhela, Texas Advanced Computing Center
  • Thomas Steinke, Zuse Institute Berlin

 Program Committee:

  • Aksel Alpay, Heidelberg University
  • R. Glenn Brook, Cornelis Networks
  • Melyssa Fratkin, Texas Advanced Computing Center
  • Maria Girone, CERN openlab
  • Toshihiro Hanawa, The University of Tokyo
  • Clayton Hughes, Sandia National Laboratories
  • Nalini Kumar, Intel Corporation
  • James Lin, Shanghai Jiao Tong University
  • Hatem Ltaief, King Abdullah University of Science & Technology
  • David Martin, Argonne National Laboratory
  • Chris Mauney, Los Alamos National Laboratory
  • Amit Ruhela, Texas Advanced Computing Center
  • Thomas Steinke, Zuse Institute Berlin

 

 General questions should be sent to the workshop organizers.