OpenMP News

»PPCES 2013 : HPC Seminar and Workshop : March 11-15

PPCES 2013: Parallel Programming in Computational Engineering and Science HPC Seminar and Workshop

Monday, March 11 - Friday, March 15, 2013 at the Center for Computing and Communication RWTH Aachen University, Germany

This event continues the tradition of the week-long events held in Aachen every spring since 2001. Throughout the week we will cover serial programming (Monday) and parallel programming with MPI (Tuesday) and OpenMP (Wednesday) in Fortran and C/C++, as well as performance tuning on both Linux and Windows platforms. We will also introduce participants to GPGPU programming (Thursday) and to programming the brand-new Intel Xeon Phi coprocessor (Friday), and provide ample opportunity for hands-on exercises, including a “bring-your-own-code” session on Friday. The topics are presented in a modular way, so that you can pick and register for individual days and invest your time as efficiently as possible.

Attendees should be comfortable with C/C++ or Fortran programming and interested in learning more about the technical details of application tuning and parallelization. The presentations will be given in English.

The seminar is free. As capacity is limited, admission is on a first-come, first-served basis. Please register separately for each session you intend to attend; see the event page for more information. The event is sponsored by Bull and NVIDIA.

»IWOMP 2013 Announced

This year’s International Workshop on OpenMP will be held Sept 15-17, 2013 in Canberra, Australia, hosted by the Australian National University.

Details will be forthcoming. Watch this space.

»OpenMP Language Committee Meeting Jan 28-31

There will be a meeting of the OpenMP Language Committee in Houston, TX, at the University of Houston, Jan 28-31, 8:00 am-6:00 pm.

Our main aim is to close on the OpenMP 4.0 Draft.

Topics to be covered in this meeting include: Tasking, Fortran 2003 base language support, Affinity, Accelerators, Error Models, and SIMD.

These meetings are open to the public. However, if you wish to attend, please notify us and meeting details will be sent to you.

This meeting will be held at the University of Houston, 4800 Calhoun Rd, Houston Texas, USA 77004.

Future meetings of the OpenMP Language Committee are planned for May in Niagara Falls and September in Australia.

»Public Comment Ending

Please note that the period for accepting public comments on the OpenMP Technical Report #1 on Directives for Attached Accelerators (PDF) ends on January 27th. Please provide feedback to the Editor directly or in the »OpenMP Discussion Forum.


»OpenMP FAQ Available

We’ve added an FAQ (Frequently Asked Questions) about OpenMP to our website. Originally prepared for distribution at our booth at SC12, it is now available »here.

»SC12 Photos

Photos from Supercomputing 2012 Salt Lake City (November 2012) are viewable here.


»OpenMP 4.0 Release Candidate 1 Now Available

OpenMP, the de facto standard for parallel programming on shared memory systems, continues to extend its reach beyond pure HPC to include embedded systems, real time systems, and accelerators.

Release Candidate 1 of the OpenMP 4.0 API specifications, currently under development, is now available for public discussion. This update includes thread affinity, initial support for Fortran 2003, SIMD constructs to vectorize both serial and parallelized loops, TASKGROUP, user-defined reductions, and sequentially consistent atomics.

The OpenMP ARB plans to integrate the Technical Report on directives for attached accelerators, along with further new features, into a final Release Candidate 2, to appear sometime in the first quarter of 2013, followed soon thereafter by the finalized full 4.0 API specifications.

The 4.0 Release Candidate API specifications (4.0RC1) and the Technical Report (TR1) PDFs can be downloaded from the »OpenMP Specifications page.

A new public »discussion forum has also been created to discuss the 4.0RC1 and the TR1.

»The OpenMP Consortium Releases First Technical Report

Champaign, Illinois - Nov 5, 2012 - The OpenMP Consortium announces the release of a Technical Report detailing directives used for the execution of loops and regions of code on attached accelerators. The directives described in this Technical Report are a work in progress, and the goal of this release is to get early feedback on the proposed directives.

“We aim to provide what the marketplace has been looking for, a standard high-level way of programming accelerators across a broad base of languages and for all forms of accelerator devices”, said Michael Wong, OpenMP CEO.

This Technical Report describes a model for the offloading of code and data onto a target device. Any device may be a target device, including graphics accelerators, attached multiprocessors, co-processors and DSPs. The directives detailed in the Technical Report can be used in Fortran, C, and C++.

The directives are the result of a three-year effort by the OpenMP Consortium of HPC vendors, national labs, supercomputing centers, academic institutions, and users. User experience with members’ initiatives has provided important input to the effort.

The Technical Reports can be downloaded from the »OpenMP Specifications page.

Technical Report process
The OpenMP Consortium has created a process by which Technical Reports can be released to the public. With this process, the Consortium will be able to show intermediate versions of the standardization work, and describe possible future directions or extensions to the OpenMP specification. These Technical Reports are not yet part of the OpenMP Standard, in contrast with Specifications, which are normative.
Feedback can be posted on the OpenMP Forum, for which registration is required.

The OpenMP API supports multi-platform shared-memory parallel programming in C/C++ and Fortran. The OpenMP API defines a portable, scalable model with a simple and flexible interface for developing parallel applications on platforms from the desktop to the supercomputer.