Parallel Programming in Computational Engineering and Science
Monday, March 10 - Friday, March 14, 2014 in Aachen, Germany
This event will continue the tradition of previous annual week-long events that have taken place in Aachen every spring since 2001.
Throughout the week we will cover a wide spectrum of topics, ranging from serial programming (Monday) to parallel programming using MPI (Tuesday) and OpenMP (Wednesday) in both Fortran and C/C++ as well as performance tuning. Furthermore, we will introduce the participants to modern features of the OpenMP standard like vectorisation and programming for accelerators and for the Many Integrated Core (MIC) Architecture (Thursday) as well as GPGPU programming with OpenACC (Friday). Hands-on exercises for each topic will be provided, which should not discourage you from working on your own code.
The topics are presented in a modular way, so that you can pick specific ones and register for particular days only, letting you invest your time as efficiently as possible.
Attendees should be comfortable either with C/C++ or Fortran programming and interested in learning more about the technical details of application tuning and parallelization. The presentations will be given in English.
There is no seminar fee. All other costs (e.g. travel, hotel, and meals) are at your own expense.
Allocation is on a first come, first served basis, since the seminar room is of limited capacity. Please register separately for each session you would like to attend.
The registration deadline is March 3, 2014.
The event is kindly sponsored by the Bull company.
In recent years, OpenMP has shifted from being solely focused on shared-memory systems to also include accelerators, embedded systems, multicore and real-time systems. Today the OpenMP Architecture Review Board (ARB) releases a new Mission Statement to formalize this change.
A technical report on directives for attached accelerators was first released in 2012, and subsequently, a full revision of the standard, OpenMP 4.0, was released in 2013, which included support for accelerators, SIMD constructs to vectorize both serial and parallelized loops, error handling, thread affinity, and tasking extensions.
Following these releases, the OpenMP ARB is ready for a new mission statement. The old mission statement was focused on shared-memory systems, but the new Mission Statement broadens this mandate to “Standardize directive-based multi-language high-level parallelism that is performant, productive and portable.”
“The new Mission Statement for OpenMP is the result of a collaborative consultation between members, industry, and academia”, said Michael Wong, OpenMP CEO. “It recognizes the changing landscape of parallelism by broadening our mandate to cover more types of architecture, be more robust, responsive, and dynamic while remaining firmly committed to our pedigree. With our firm footing now supporting accelerators and embedded systems, we are open to begin further exploration into more affinity, deeper task dependencies, full error-model, NUMA-access, FPGA, transactional memory, asynchronous and event-driven programming, inter-nodal and intra-nodal interoperability.”
February 13, 2014 - Champaign, Illinois - The OpenMP ARB, a group of leading hardware and software vendors and research organizations developing the OpenMP API specification for shared-memory parallelization, appointed Dieter an Mey to its Board of Directors. Dieter brings a wealth of experience as an OpenMP user to the Board.
Dieter an Mey leads the High Performance Computing team of the IT Center of RWTH Aachen University in Germany. He has a 30+ year track record in HPC with an emphasis on user support and services. From the days of vectorization and message passing, and ever since the release of the first OpenMP API specification in 1997, Dieter and his group have actively participated in the OpenMP community. He is co-author of numerous publications on OpenMP programming and productivity.
“I believe Dieter will bring a much needed non-commercial viewpoint to the Board, as well as a non-US viewpoint”, said Michael Wong, OpenMP CEO. “He has been a long-time, active OpenMP proponent who is widely respected within the OpenMP community.”
Dieter joins Josh Simons of VMware, Sanjiv Shah of Intel, Andy Fritsch of Texas Instruments and Partha Tirumalai of Oracle on the OpenMP Board of Directors.
An update to the OpenMP 4.0 Examples document is now available. It adds new examples that demonstrate use of the proc_bind clause to control thread binding for a team of threads in a parallel region, as well as new examples for the taskgroup construct.
As the Winter face-to-face meeting of the OpenMP Language Committee winds down at Intel in Santa Clara, CA, we can announce the date and location of the next meeting in the Spring:
Spring F2F Meeting
- At: Sidney Sussex College, Cambridge, UK
- Dates: April 12-16 (Sat-Wed)
- Hosts: Michael Wong and James Cownie
Slides and audio from the day-long tutorial on MPI and OpenMP programming presented at Supercomputing 13 in Denver in November 2013 are now available.
»Hybrid MPI and OpenMP Parallel Programming
MPI + OpenMP and other models on clusters of SMP nodes
- Rolf Rabenseifner, Georg Hager, Gabriele Jost
The OpenMP Language Committee will hold its first face-to-face meeting of 2014 on the Intel campus in Santa Clara, CA, January 27-31.
This meeting will include breakout sessions and presentations by the various subcommittees (Accelerator, Error Model, Interoperability, Fortran, Tasking, Examples) as work continues to develop the next version of the OpenMP API specifications.
This meeting is not open to the public without a prior arrangement. Contact the OpenMP ARB at firstname.lastname@example.org for more information.
Intel has posted a video tutorial on the use of the OpenMP 4.0 SIMD pragmas.
The series of seven videos covers performance essentials using OpenMP 4.0 vectorization with C/C++. It provides an overview of why explicit vector programming methods are crucial to getting performance on modern many-core platforms. The series explores code snippets and background information on topics such as OpenMP* 4.0 SIMD-enabled functions and explicit SIMD loops, as well as techniques to help determine whether targeted loops were vectorized and, if not, why not. Each video is typically less than 10 minutes in length, yet provides a good starting point for developers who wish to get started with these technologies.