OpenMP News


»Video: Performance Essentials with OpenMP 4.0 Vectorization

Intel has posted a video tutorial on the use of the OpenMP 4.0 SIMD pragmas.

http://software.intel.com/en-us/articles/performance-essentials-with-openmp-40-vectorization

This series of seven videos covers performance essentials using OpenMP 4.0 vectorization with C/C++. It provides an overview of why explicit vector programming methods are crucial to getting performance on modern many-core platforms. The series explores code snippets and background information on topics such as OpenMP* 4.0 SIMD-enabled functions and explicit SIMD loops, as well as techniques to help determine whether targeted loops were vectorized and, if not, why not. Each video is typically less than 10 minutes in length, yet provides a good starting point for developers who wish to get started with these technologies.
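For a preview of the constructs the series covers, here is a minimal sketch (the function names are illustrative, not taken from the videos) of an OpenMP 4.0 SIMD-enabled function and an explicit SIMD loop in C:

    #include <stddef.h>

    /* SIMD-enabled function: the compiler also generates a vector
       variant that can be called from vectorized loops. */
    #pragma omp declare simd
    float axpy_elem(float a, float x, float y)
    {
        return a * x + y;
    }

    /* Explicit SIMD loop: the simd construct tells the compiler to
       vectorize this loop rather than rely on its own heuristics. */
    void axpy(float a, const float *x, float *y, size_t n)
    {
        #pragma omp simd
        for (size_t i = 0; i < n; i++)
            y[i] = axpy_elem(a, x[i], y[i]);
    }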

»Reflections on SC13 and OpenMP

Michael Wong, CEO of OpenMP ARB, reflects on Supercomputing 13 and recent OpenMP advances:

I attended Supercomputing in my third year as OpenMP CEO to represent both IBM and OpenMP. This was a big year for us, as we closed with many milestones in what I call a significant paradigm shift in parallelism. The most significant milestone was the OpenMP Consortium's release of OpenMP 4.0 in 2013, with new parallelism features that are productive, portable, and performant across C, C++, and Fortran. OpenMP 4.0 contains significant additions for accelerators, standardized for a broad set of architectures, and industry-first support for SIMD vectorization. It was showcased at SC13.
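As a rough sketch of the kind of accelerator support described above (this example is illustrative, not taken from the talk, and device availability depends on the compiler), an OpenMP 4.0 target region in C looks like this:

    /* Offload a vector addition to an available accelerator.
       The map clauses describe data movement between host and device. */
    void vadd(int n, const float *a, const float *b, float *c)
    {
        #pragma omp target map(to: a[0:n], b[0:n]) map(from: c[0:n])
        #pragma omp teams
        #pragma omp distribute parallel for
        for (int i = 0; i < n; i++)
            c[i] = a[i] + b[i];
    }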

The OpenMP ARB Consortium now has 26 members and is still growing, adding three new members in the last year:

  • Red Hat/GCC
  • Barcelona SuperComputing Centre
  • University of Houston

Coming implementations of OpenMP 4.0 include GNU and the Intel 13.1 compiler with support for accelerators. Clang has started with support for OpenMP 3.1. (more…)

»Article: OpenMP 4.0

A new article, “Full Throttle: OpenMP 4.0” by Michael Klemm, Senior Application Engineer, Intel and Christian Terboven, Deputy Head of HPC Group, RWTH Aachen University, appears in the current issue of Intel’s Parallel Universe magazine.

“Multicore is here to stay.” This single sentence accurately describes the situation of application developers and the hardware evolution they are facing. Since the introduction of the first dual-core CPUs, the number of cores has kept increasing. The advent of the Intel® Xeon Phi™ coprocessor has pushed us into the world of manycore, where up to 61 cores with 4 threads each impose new requirements on the parallelism of applications to exploit the capabilities of the hardware.

It is not only the ever-increasing number of cores that requires more parallelism in an application. Over the past years, the width of SIMD (Single Instruction Multiple Data) registers has also been growing. While the early SIMD instructions of Intel® MMX™ technology used 64-bit registers, our newest family member, Intel® Advanced Vector Extensions 512 (Intel® AVX-512), runs with 512-bit registers. That’s an awesome 16 floating-point numbers in single precision, or eight double-precision numbers, that can be computed in one go. If your application does not exploit these SIMD capabilities, you can easily lose a factor of 16x or 8x compared to the peak performance of the CPU.

To read the entire article, download the magazine as a PDF. The article starts on page 6.
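As a back-of-the-envelope illustration of the register-width arithmetic in the excerpt (this sketch is not from the article, and the function is hypothetical), an OpenMP 4.0 simd loop lets the compiler fill whatever vector width the target offers:

    /* 512-bit registers hold 512 / 32 = 16 floats or 512 / 64 = 8 doubles,
       so on AVX-512 hardware each vector operation below can process
       up to 16 single-precision elements at once. */
    void scale(float *a, float s, int n)
    {
        #pragma omp simd
        for (int i = 0; i < n; i++)
            a[i] *= s;
    }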

»Tutorial: Introduction to OpenMP

The Introduction to OpenMP video tutorial by Intel’s Tim Mattson is now available.

»Introduction to OpenMP Tutorial

Thanks go to the University Program Office at Intel for making this tutorial available.

»Accelerated Programming

Michael Wolfe of PGI writes about programming standards for the next generation of HPC systems.

Having just returned from SC13, I find one burning issue is the choice of a standard approach for programming the next generation of HPC systems. While not guaranteed, these systems are likely to be large clusters of nodes with multicore CPUs and some sort of attached accelerators. A standard programming approach is necessary to convince developers, and particularly ISVs, to start adopting it now in preparation for this coming generation of systems. John Barr raised the same question in a recent article at Scientific Computing World from a more philosophical point of view. Here I address this question from a deeper technical perspective.

Read the complete article at »HPCWire.

»SC13 Video Talks Online

Videos of the five in-booth talks and the Birds of a Feather session at Supercomputing 2013 (November 2013, Denver CO) are now »online.

»4.0 Examples Document Released

The first release of the OpenMP 4.0 API Examples document is now available and can be downloaded from the Specifications page. This is a work in progress; additional examples are under development and will be released in later editions.

Also, a discussion forum for the 4.0 Examples document is now open.

»OpenMP ARB at SC’13

The OpenMP API supports multi-platform shared-memory parallel programming in C/C++ and Fortran. The OpenMP API defines a portable, scalable model with a simple and flexible interface for developing parallel applications on platforms from the desktop to the supercomputer.
»Read about OpenMP.org
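For readers new to the API, here is a minimal illustrative example of this shared-memory model in C (not part of the SC13 materials): a single pragma distributes the loop iterations across the available cores.

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        double sum = 0.0;

        /* Distribute the loop iterations across threads and
           combine the partial sums with a reduction. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < 1000000; i++)
            sum += 1.0 / (i + 1);

        printf("harmonic sum = %f (threads available: %d)\n",
               sum, omp_get_max_threads());
        return 0;
    }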