An update to the OpenMP 4.0 Examples document is now available. It adds new examples that demonstrate use of the proc_bind clause to control thread binding for a team of threads in a parallel region, as well as new examples for the taskgroup construct.
As the Winter face-to-face meeting of the OpenMP Language Committee winds down at Intel in Santa Clara, CA, we can announce the date and location of the next meeting in the Spring:
Spring F2F Meeting
- At: Sidney Sussex College, Cambridge, UK
- Dates: April 12-16 (Sat-Wed)
- Hosts: Michael Wong and James Cownie
Slides and audio from the day-long tutorial on MPI and OpenMP programming presented at Supercomputing 13 in Denver in November 2013 are now available.
»Hybrid MPI and OpenMP Parallel Programming
MPI + OpenMP and other models on clusters of SMP nodes
- Rolf Rabenseifner, Georg Hager, Gabriele Jost
The OpenMP Language Committee will hold its first face-to-face meeting of 2014 on the Intel campus in Santa Clara, CA, January 27-31.
This meeting will include breakout sessions and presentations by the various subcommittees (Accelerator, Error Model, Interoperability, Fortran, Tasking, Examples) as work continues to develop the next version of the OpenMP API specifications.
This meeting is not open to the public without prior arrangement. Contact the OpenMP ARB at email@example.com for more information.
Intel has posted a video tutorial on the use of the OpenMP 4.0 SIMD pragmas.
This series of seven videos covers performance essentials of OpenMP 4.0 vectorization with C/C++. It explains why explicit vector programming methods are crucial to getting performance on modern many-core platforms. The series explores code snippets and background information on topics such as OpenMP* 4.0 SIMD-enabled functions and explicit SIMD loops, as well as techniques to determine whether targeted loops were vectorized and, if not, why not. Each video is typically less than 10 minutes long, yet provides a good starting point for developers who wish to get started with these technologies.
Michael Wong, CEO of OpenMP ARB, reflects on Supercomputing 13 and recent OpenMP advances:
I attended Supercomputing in my third year as OpenMP CEO to represent both IBM and OpenMP. This was a big year for us, as we closed with many milestones in what I call a significant paradigm shift in parallelism. The most significant milestone was the OpenMP Consortium's release of OpenMP 4.0 in 2013, with new parallelism features that are productive, portable, and performant across C, C++, and Fortran. OpenMP 4.0 contains significant additions for accelerators, standardized for a broad set of architectures, and industry-first support for SIMD vectorization. It was showcased at SC13.
The OpenMP ARB Consortium now has 26 members and is still growing, adding three new members in the last year:
- Red Hat/GCC
- Barcelona SuperComputing Centre
- University of Houston
Coming implementations of OpenMP 4.0 include GNU and the Intel 13.1 compiler with support for accelerators. Clang has started with support for OpenMP 3.1. (more…)
A new article, “Full Throttle: OpenMP 4.0” by Michael Klemm, Senior Application Engineer, Intel and Christian Terboven, Deputy Head of HPC Group, RWTH Aachen University, appears in the current issue of Intel’s Parallel Universe magazine.
“Multicore is here to stay.” This single sentence accurately describes the situation of application developers and the hardware evolution they are facing. Since the introduction of the first dual-core CPUs, the number of cores has kept increasing. The advent of the Intel® Xeon Phi™ coprocessor has pushed us into the world of manycore, where up to 61 cores with 4 threads each impose new requirements on the parallelism of applications to exploit the capabilities of the hardware.
It is not only the ever-increasing number of cores that requires more parallelism in an application. Over the past years, the width of SIMD (Single Instruction Multiple Data) registers has been growing. While the early SIMD instructions of Intel® MMX™ technology used 64-bit registers, our newest family member, Intel® Advanced Vector Extensions 512 (Intel® AVX-512), runs with 512-bit registers. That’s an awesome 16 floating-point numbers in single precision, or eight double-precision numbers, that can be computed in one go. If your application does not exploit these SIMD capabilities, you can easily lose a factor of 16x or 8x compared to the peak performance of the CPU.
To read the entire article, download the magazine in PDF. The article starts on page 6.
The Introduction to OpenMP video tutorial by Intel’s Tim Mattson is now available.
Thanks go to the University Program Office at Intel for making this tutorial available.