Workshop at ISC17

IXPUG Workshop
"Experiences on Intel Knights Landing at the One Year Mark"

Location: Frankfurt am Main, Germany
Date: Thursday, June 22, 2017, 9:00am-6:00pm
Venue: Marriott Frankfurt Hotel


One year on from the launch of the 2nd-generation Intel Xeon Phi platform, code-named Knights Landing (KNL), a significant amount of application experience has been gathered by the user community. This provides a timely opportunity to share insights on how best to exploit this new many-core processor, and in particular on how to achieve high performance on current and upcoming large-scale KNL-based systems.

This full-day IXPUG workshop at ISC 2017 is about sharing ideas, implementations, and experiences that will help users take advantage of new Intel Xeon Phi features, such as AVX-512 and high-bandwidth MCDRAM, as well as relevant high-performance system fabrics (e.g. Intel Omni-Path) on large-scale KNL-based systems. By sharing knowledge on how best to exploit the major advances in vectorization, memory, and communication featured on the 2nd-generation Intel Xeon Phi platform, the workshop also has the wider aim of boosting the adoption of many-core architectures in HPC and beyond. You will experience an open forum with fellow application programmers, software developers, Intel Xeon Phi architecture designers, and compiler and tool experts. Application performance and scalability challenges at all levels will be covered, with a focus on application tuning on large HPC systems with many KNL devices.

The workshop will consist of three parts: a keynote presentation, talks on the submitted papers (around 30 minutes each), and a final panel session. The keynote will introduce the main features of current-generation Intel Xeon Phi processors -- including the various memory configurations and modes of operation available -- and provide a refresher on what is public about future processor generations. The submitted talks will cover optimization in real-world HPC applications, e.g. data layouts and code restructuring for efficient SIMD operation, thread management, and the use of, and performance comparisons between, the different memory modes. Papers describing application results on multi-node configurations and addressing KNL-specific features (e.g. use of MCDRAM) will be prioritized. The usability of tools for development, debugging, and performance analysis will also be covered. The panel session provides an opportunity to discuss optimization strategies for Intel Xeon Phi and to give feedback to the toolchain developers.


Important Note: The workshop is held in conjunction with ISC 2017 in Frankfurt am Main. To attend the IXPUG workshop, you must register for ISC Workshops. More information is available on the ISC 2017 conference website.


** New ** -- Best Paper Awards

We are pleased to announce that two papers were selected to receive our first-ever "Best Paper" award at the upcoming ISC workshop.

They are:

  1. "Performance Evaluation of NWChem Ab-Initio Molecular Dynamics (AIMD) Simulations on the Intel Xeon Phi Processor"
      Eric Bylaska (PNNL), Mathias Jacquelin (LBL), Bert de Jong (LBL), Jeff Hammond (Intel) and Michael Klemm (Intel Deutschland GmbH)
  2. "KART - A Runtime Compilation Library for Improving HPC Application Performance"
      Matthias Noack (ZIB), Florian Wende (ZIB), Georg Zitzlsberger (Intel Deutschland GmbH), Michael Klemm (Intel Deutschland GmbH) and Thomas Steinke (ZIB)


Call for papers

**Submissions closed**

IXPUG welcomes paper submissions on innovative work from KNL users in academia, industry, and government labs, describing original discoveries and experiences that promote efficient use of many-core and multicore systems.

The authors of the best-scored papers will be selected to present at the workshop and will be invited by the Program Committee to publish in the ISC 2017 workshop volume of the Springer LNCS series. For the paper submission format and author instructions, see "Information for authors" below.


Topics of interest are (but not limited to):

•    Vectorization: data layout in cache for efficient SIMD operations, SIMD directives and operations, and working with the 2-core tiles connected by the 2D mesh interconnect
•    Memory: Data layout in memory for efficient access (data preconditioning), access latency concerns (prefetch, streams, costs for HBM), partitioning of DDR and HBM for applications (memory policies)
•    Communication, including early experiences with Intel Omni-Path
•    Thread and Process Management: process and thread affinity issues, SMT (simultaneous multithreading within a core), balancing processes and threads
•    Multi-node application experience, especially on large-scale KNL systems
•    Programming Models: OpenMP 4.x, hStreams, using MPI 3 on Xeon Phi, hybrid programming (MPI/OpenMP, others)
•    Algorithms and Methods: including scalable and vectorizable algorithms
•    Software Environments and Tools
•    Benchmarking & Profiling Tools
•    Visualization


Important Dates

Call for papers issued 1 February 2017
Abstract submission 14 April 2017 (AoE)
Full paper submission 27 April 2017 (AoE)
Reviews start 28 April 2017
Paper acceptance notification 17 May 2017
Agenda finalized (presenter notifications) 9 June 2017
Camera ready papers due 20 June 2017
Workshop day 22 June 2017


Information for authors

Paper Format

Papers will be published by Springer in the LNCS series, using the Springer template. General information for authors can be found on the Springer page "Information for Authors of Springer Computer Science Proceedings", which includes links to MS Word templates. For LaTeX, a simplified LNCS template and a quick-start guide can be found in the LNCS repository on GitHub.

The following rules apply to workshop paper submissions:

  • Allowed formats: PDF files only, generated from LaTeX or MS Word sources.
  • Page limits: minimum 6, maximum 12 pages, excluding the reference list and acknowledgements.
  • Submission URL: EasyChair IXPUG Workshop at ISC2017


Review Process

Reviewers are expected to base their judgment on what was available at the time reviews were assigned (April 21). Subsequent updates to the content may or may not be considered by the program committee as part of the selection decision. We nevertheless encourage authors to use the time up until the presentation and the camera-ready deadline to deliver the highest-quality paper.

All submitted papers will be reviewed. We apply a standard single-blind review process, i.e., the authors are known to the reviewers. All submissions within the scope of the workshop will be peer-reviewed and must demonstrate quality of results, originality and new insights, technical strength, and correctness. Submitted papers must not be published in, or be in preparation for, other conferences, workshops, or journals.


Program Committee


Damian Alvarez, Jülich Supercomputing Centre (JSC)
Carlo Cavazzoni, CINECA
Gilles Civario, Dell
Doug Doerfler, Lawrence Berkeley National Lab (LBL)
Richard Gerber, Lawrence Berkeley National Lab (LBL) / National Energy Research Scientific Computing Center (NERSC)
Clayton Hughes, Sandia National Laboratories
Balint Joo, Thomas Jefferson National Accelerator Facility (Jefferson Lab)
Rakesh Krishnaiyer, Intel
Michael Lysaght, Irish Centre for High-End Computing (ICHEC)
Simon McIntosh-Smith, University of Bristol
Andrew Mallinson, Intel
David E. Martin, Argonne National Laboratory
Hideki Saito, Intel
Thomas Steinke, Zuse Institute Berlin (ZIB)
Estela Suarez, Jülich Supercomputing Centre (JSC)
Zhengji Zhao, Lawrence Berkeley National Lab (LBL)


Organising Committee and Contacts

•    Dr. Estela Suarez, Forschungszentrum Jülich / Jülich Supercomputing Centre, Germany
•    Dr. Michael Lysaght, Irish Centre for High-End Computing (ICHEC), Ireland
•    Dr. Simon J. Pennycook, Intel Corporation, United States
•    Dr. Richard A. Gerber, National Energy Research Scientific Computing Center (NERSC), Lawrence Berkeley National Lab, United States