OpenMP News

»SC14 Videos!

Videos from the in-the-booth and Birds of a Feather sessions at Supercomputing 2014 (New Orleans) are now available on the OpenMP website and on our YouTube channel.

»OpenMPCon and IWOMP 2015 Announced

The OpenMP® Architecture Review Board (ARB) announces a new-style OpenMP user conference called OpenMPCon. The first OpenMPCon will be held September 28-30, 2015, at RWTH Aachen University in Germany. It will be directly followed by the research-oriented IWOMP, September 30-October 2, 2015.


OpenMPCon will be the annual, face-to-face gathering organized by the OpenMP community, for the community. The aim is to learn about and showcase usage of OpenMP. Attendees will enjoy keynotes, inspirational talks, and a friendly atmosphere that helps them meet interesting people, learn more about OpenMP from each other, and have a stimulating experience. Multiple diverse technical tracks are being formulated that will appeal to everyone, from the OpenMP novice to the seasoned expert.

Call for Contributions

The Call for Contributions has been published and can be found on the OpenMPCon website. The deadline for submissions is May 15, 2015. Any topic related to OpenMP can be the subject of a contribution. There will be room for both new voices and seasoned presenters.


IWOMP is the premier forum to present and discuss issues, trends, recent research ideas and results related to parallel programming with OpenMP. Submissions of unpublished technical papers detailing innovative, original research and development related to OpenMP will be solicited. More information about IWOMP 2015 will appear early in the year.

»Code Challenge Announced At SC14

Today at SC14, the OpenMP Architecture Review Board (ARB) announced a Code Conversion Challenge. The Challenge invites users to bring in shared-memory or accelerator code (C++ AMP, CUDA, OpenACC®, OpenCL®, …) from real applications. The ARB will then demonstrate how to rewrite it in modern OpenMP.

The applications can be written in any parallel programming model, either shared-memory or accelerator based. The OpenMP ARB will convert the applications to OpenMP, with a focus on source to source conversion, especially of the parallel regions of interest.

OpenMP, one of the most capable high-level parallel languages

The ARB aims to show that reasonable programs written in another parallel programming model (e.g., OpenCL, Coarray Fortran, OpenACC, CUDA, C++, C++ AMP, or Cilk) can be rewritten in OpenMP, and thus that modern OpenMP is among the most capable high-level parallel languages.
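As an illustrative sketch of the kind of conversion involved (our own example, not one from the Challenge; the function and variable names are hypothetical), here is a simple saxpy loop as it might appear in OpenACC, rewritten with the OpenMP 4.0 target directives:

```c
/* Hypothetical conversion example: a saxpy loop, first as it might be
 * written in OpenACC, then rewritten in OpenMP 4.0.
 *
 * OpenACC original (shown for comparison):
 *   #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
 *   for (int i = 0; i < n; i++)
 *       y[i] = a * x[i] + y[i];
 */

/* OpenMP 4.0 rewrite: offload to a device when one is available.
   Compiled without OpenMP support, the pragma is simply ignored and
   the loop runs serially on the host, producing the same result. */
void saxpy(int n, float a, const float *x, float *y)
{
    #pragma omp target teams distribute parallel for \
        map(to: x[0:n]) map(tofrom: y[0:n])
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```

Note that the OpenMP version makes the data movement explicit with `map` clauses, much as the OpenACC version does with its data clauses; this is typical of the source-to-source conversions the Challenge has in mind.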

Submission of applications

Applicants are welcome to submit their applications on the site. If the submitter agrees, the application will be listed on the OpenMP web site. Discuss the code challenge on the OpenMP Code Challenge Forum.

“We have designed modern OpenMP to be the best parallel language for the three general-purpose languages C, C++, and Fortran,” said Michael Wong, OpenMP CEO. “It is a language that enables you to access all capabilities of your machine without dropping to another language. This Challenge will convince you of that, or show us where we need to improve. Either way, the industry benefits.”

»SC14: Two Technical Reports Released

The OpenMP ARB has released two technical reports at the start of Supercomputing 14 in New Orleans. You can find the Technical Reports on the Specifications page.

»OpenMP at SC14 New Orleans

Join us in New Orleans November 16-21 for Supercomputing 2014.

We’ll be in booth #2824, and we have lots going on!


Tuesday, Nov 18th, 5:30 - 7:00pm
OpenMP 4.0 Implementations and OpenMPCon [Room 291]
Geared toward both users and implementers, this session will present the current implementations of OpenMP 4.0 and discuss plans for future standard extensions. We will also announce OpenMPCon, the new event for OpenMP that will launch in September 2015.


Wednesday, Nov 19th, 10:30am - 12:00pm
OpenMP Version 4.0 & Announcing OpenMPCon - Michael Wong, IBM [Room #292]

Bring your laptop to these all-day tutorials and learn the latest about OpenMP:

  • Sunday, Nov 16th, 8:30am - 5:00pm
    Hands-on Introduction to OpenMP [Room 395]
    Bring your laptop with installed OpenMP compiler for a hands-on introduction.
  • Monday, Nov 17th, 8:30am - 5:00pm
    Advanced OpenMP Tutorial: Performance and 4.0 Features [Room 397]
    OpenMP performance, parallelization strategies, advanced features, and more.
  • Monday, Nov 17th, 8:30am - 5:00pm
    Debugging and Performance Tools for MPI and OpenMP 4.0 [Room 398-399]
    Parallel debugging and optimization focused on techniques used with accelerators and coprocessors.

Join us in the booth [Booth #2824] on Tuesday and Wednesday at 4:00pm for snacks and a cold bottled microbrew beer. Come for the munchies and beer, then hang around for great conversation! Food and beer are served while supplies last.

Attend a short talk in our booth [Booth# 2824], meet some of the pros behind the API, and participate in drawings after each talk for an OpenMP book.

  • Tuesday, Nov 18th, 11:15 - 11:40am
    Explicit Vector Programming with OpenMP 4.0 SIMD Extensions – Xinmin Tian, Intel
  • Tuesday, Nov 18th, 1:15 - 1:40pm
    OpenMP for Embedded Systems – Sunita Chandrasekaran and Barbara Chapman, University of Houston
  • Tuesday, Nov 18th, 2:15 - 2:40pm
    Integrating OpenMP into Clang on the IBM BG/Q – Hal Finkel, ANL
  • Wednesday, Nov 19th, 11:15 - 11:40am
    Let’s Stay Close: Open cc-NUMA Support – Ruud van der Pas, Oracle
  • Wednesday, Nov 19th, 1:15 - 1:40pm
    OpenMP Support in Clang / LLVM Compiler – Andrey Bokhanko, Intel
  • Wednesday, Nov 19th, 2:15 - 2:40pm
    OpenMP 4.0 Complete Overview – Michael Klemm, Intel and Christian Terboven, RWTH-Aachen
  • Thursday, Nov 20th, 11:15 - 11:30am
    LLVM Support for OpenMP 4.0 Target Regions on GPUs – Samuel F. Antao, IBM

OpenMP Member Booths:
Many OpenMP Members are also exhibiting at the show:

AMD: Booth 839
Barcelona Supercomputing Center: Booth 3427
Cray: Booth 2339
EPCC: Booth 3445
Fujitsu: Booth 3131
HP: Booth 1715
IBM: Booth 931
Intel: Booths 1215, 1315
NASA: Booth 2739
NEC: Booth 1131
Nvidia: Booth 1727
Oracle: Booth 1615
RedHat: Booth 2832
TACC: Booth 2915
Texas Instruments: Booth 3745

»OpenMP in Clang/LLVM

The first talk at the LLVM developers’ meeting, October 28-29, is “OpenMP Support in Clang/LLVM: Status Update and Future Directions”.

  • What: The eighth meeting of LLVM Developers and Users.
  • When: October 28-29, 2014
  • Where: DoubleTree by Hilton - San Jose, CA

OpenMP Support in Clang/LLVM: Status Update and Future Directions
Alexey Bataev (Speaker) - Intel, Zinovy Nis (Speaker) - Intel
OpenMP is a well-known and widely used API for shared-memory parallelism. Support for OpenMP in the Clang/LLVM compiler is currently under development. In this talk, we will present the current status of OpenMP support, what is done and what remains to be done, and the technical details behind the implementation. We will also elaborate on support for accelerators and pragma-assisted SIMD vectorization, introduced in the latest 4.0 edition of the OpenMP specification.
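For a flavor of the pragma-assisted SIMD vectorization the abstract mentions, here is a small sketch (the function and names are ours, not from the talk): the `omp simd` directive asks the compiler to vectorize the loop, and the `reduction` clause tells it how to combine the per-lane partial sums.

```c
#include <stddef.h>

/* Illustrative sketch of OpenMP 4.0 SIMD vectorization: the pragma
   requests vector code for the loop body. Compiled without OpenMP
   support, the pragma is ignored and the loop runs scalar, with the
   same result. */
float dot(const float *a, const float *b, size_t n)
{
    float sum = 0.0f;
    #pragma omp simd reduction(+:sum)
    for (size_t i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}
```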

»OpenMP Timeline

The latest issue of Intel’s Parallel Universe magazine has a wonderful graphic that depicts the history of the OpenMP specifications (on page 41).

»Latest News Items

Here are a few news items relating to OpenMP:

Multicore Software Development Kit for HPC now available for Keystone II devices, including OpenMP, OpenCL, and MPI.

Article on The New World of Embedded Multicore Processing with Open Programming Models

Article on programming heterogeneous multicore embedded SoCs

Article on parallelization of the matrix product using OpenMP.

Slides on efficient scheduling of OpenMP and OpenCL workloads on an APU.

Talk on C++ SIMD parallelism with Intel Cilk Plus and OpenMP 4.0.

Benchmarking LLVM’s Clang OpenMP Support Against GCC

Release of GCC 4.9 RC1, with support for OpenMP 4.0 and Intel Cilk Plus.

The OpenMP API supports multi-platform parallel programming in C/C++ and Fortran. The OpenMP API defines a portable, scalable model with a simple and flexible interface for developing parallel applications on platforms from the desktop to the supercomputer.
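As a minimal sketch of that model (our own example, not taken from any announcement above): the same C source compiles and runs correctly whether or not OpenMP is enabled, and with it enabled the loop iterations are divided among threads.

```c
/* Minimal sketch of the OpenMP model: one pragma parallelizes the loop
   when compiled with OpenMP support (e.g. -fopenmp); without it the
   pragma is ignored and the code runs sequentially, with an identical
   result. */
long sum_to(long n)
{
    long sum = 0;
    #pragma omp parallel for reduction(+:sum)
    for (long i = 1; i <= n; i++)
        sum += i;
    return sum;
}
```

This portability across serial and parallel builds of one source file is the heart of the “simple and flexible interface” the paragraph above describes.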