C++17 Parallel STL (7 new algorithms)

Parallel algorithms are a new feature of the C++17 standard which allows ordinary STL functions to run on multiple threads.

At the time of writing (17.07.2018) only one compiler supports the parallel algorithms: Visual Studio 2017 (versions 15.3 and higher).

To build C++ projects with the parallel STL you need to compile with the flag:

/std:c++17

In C++17 we have a new term – execution policy.

It is the first parameter of the parallel STL functions and lets you choose how the function may be executed. To use it you need to include:

#include <execution>

Types of execution policy:

std::execution::seq – sequenced execution, without threads or vectorization; this is the same behaviour you get when calling the algorithm without a policy.

std::execution::par – parallel execution using threads. When using the parallel execution policy, it is the programmer’s responsibility to avoid data races and deadlocks.

std::execution::par_unseq – parallel execution using threads and vectorization, or migration across threads (such as by a parent-stealing scheduler). Function calls under this policy are permitted to execute in an unordered fashion in unspecified threads and to be unsequenced with respect to each other within a thread.

If an exception escapes from a user-provided function, std::terminate() is called under all three policies.

For example, we have a vector of values, and here we want to execute sort in parallel:
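A minimal sketch, assuming a placeholder std::vector<int> (the names and values are illustrative):

#include <algorithm>
#include <execution>
#include <vector>

int main()
{
    std::vector<int> numbers = { 5, 2, 9, 1, 7 };

    // sequential sort – no execution policy
    std::sort(numbers.begin(), numbers.end());

    // parallel sort – just add the policy as the first argument
    std::sort(std::execution::par, numbers.begin(), numbers.end());
}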

 

In the case of the std::execution::par policy, the code needs to be checked for parallel execution issues such as data races:
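A minimal sketch, assuming we sum the elements of a placeholder vector inside the parallel lambda; the shared counter must be protected, here with std::atomic:

#include <algorithm>
#include <atomic>
#include <execution>
#include <vector>

int main()
{
    std::vector<int> values(1000, 1);
    std::atomic<int> sum{ 0 };

    // the lambda may run on several threads at once, so the shared
    // counter must be atomic (or guarded by a mutex) to avoid a data race
    std::for_each(std::execution::par, values.begin(), values.end(),
                  [&sum](int x) { sum += x; });
}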

In the case of std::execution::par_unseq you can’t even add a lock there, because the calls may be interleaved within a single thread. Under this policy users are not allowed to allocate or deallocate memory, acquire mutexes, use non-lock-free std::atomic specializations, or perform any other vectorization-unsafe operations.

 

What is the difference between parallel execution and vectorized parallel execution?

Vectorization means that instructions from different iterations may be reordered and interleaved (for example into SIMD instructions) to get a boost in execution.

Example (parallel execution without vectorization):

Thread 1:

  • int x = 10;
  • x += 10;
  • call <some method>

Thread 2:

  • int x = 10;
  • x += 10;
  • call <some method>

Thread 3:

  • int x = 10;
  • x += 10;
  • call <some method>

Example (execution with vectorization):

  • int x = 10;
  • int x = 10;
  • int x = 10;
  • x+=10;
  • x+=10;
  • x+=10;
  • call <some method>
  • call <some method>
  • call <some method>

 

There are 7 new functions in the C++17 algorithms library:

Called without an execution policy, all of these algorithms behave sequentially (as with std::execution::seq). To use std::execution::par the iterators must be at least ForwardIterators.

reduce() – an unordered std::accumulate() that can be run in parallel. It takes the same parameters plus an execution policy parameter.
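A minimal sketch with placeholder data:

#include <execution>
#include <numeric>
#include <vector>

int main()
{
    std::vector<int> v = { 1, 2, 3, 4, 5 };

    // sequential accumulate: 15
    int s1 = std::accumulate(v.begin(), v.end(), 0);

    // parallel, unordered reduce: also 15, because + is associative and commutative
    int s2 = std::reduce(std::execution::par, v.begin(), v.end(), 0);
}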

 

inclusive_scan() – an unordered std::partial_sum() that can be run in parallel. It takes the same parameters plus an execution policy parameter.

By default inclusive_scan() uses std::plus<> operator.

The algorithm uses the following formula, where the i-th output element includes the i-th input element:

out[i] = v[0] + v[1] + … + v[i]

For example (if you need another operator you can pass it as a parameter, or use a lambda):
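A minimal sketch with placeholder data:

#include <execution>
#include <functional>
#include <numeric>
#include <vector>

int main()
{
    std::vector<int> v = { 1, 2, 3, 4 };
    std::vector<int> out(v.size());

    // default std::plus<>: out = { 1, 3, 6, 10 }
    std::inclusive_scan(std::execution::par, v.begin(), v.end(), out.begin());

    // another operator passed as a parameter: out = { 1, 2, 6, 24 }
    std::inclusive_scan(std::execution::par, v.begin(), v.end(), out.begin(),
                        std::multiplies<>());

    // the same thing written as a lambda
    std::inclusive_scan(std::execution::par, v.begin(), v.end(), out.begin(),
                        [](int a, int b) { return a * b; });
}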

 

exclusive_scan() – behaviour is almost the same as inclusive_scan(), except that the i-th output element excludes the i-th input element and you have to supply an initial value:

out[i] = init + v[0] + v[1] + … + v[i – 1]

As with inclusive_scan(), you can pass another operator or a lambda.
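A minimal sketch with placeholder data (note the extra initial value):

#include <execution>
#include <numeric>
#include <vector>

int main()
{
    std::vector<int> v = { 1, 2, 3, 4 };
    std::vector<int> out(v.size());

    // initial value 0, default std::plus<>: out = { 0, 1, 3, 6 }
    std::exclusive_scan(std::execution::par, v.begin(), v.end(), out.begin(), 0);
}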

 

transform_reduce() – behaves almost like reduce(), except that it applies one more operation to the elements before accumulating them. In the two-range form the transform defaults to the std::multiplies<> operator (and the reduction to std::plus<>), which gives an inner product.

Example (if you need other operations you can pass them as function objects or lambdas):
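A minimal sketch with placeholder data:

#include <execution>
#include <functional>
#include <numeric>
#include <vector>

int main()
{
    std::vector<int> a = { 1, 2, 3 };
    std::vector<int> b = { 4, 5, 6 };

    // two-range form, defaults std::plus<> / std::multiplies<>:
    // 1*4 + 2*5 + 3*6 = 32 (a dot product)
    int dot = std::transform_reduce(std::execution::par,
                                    a.begin(), a.end(), b.begin(), 0);

    // single-range form with a lambda: square each element, then sum: 14
    int squares = std::transform_reduce(std::execution::par, a.begin(), a.end(), 0,
                                        std::plus<>(),
                                        [](int x) { return x * x; });
}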

 

transform_inclusive_scan() – same as inclusive_scan(), but with one more operation (for example a lambda) that is applied to each element before the scan.

For example:
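A minimal sketch with placeholder data:

#include <execution>
#include <functional>
#include <numeric>
#include <vector>

int main()
{
    std::vector<int> v = { 1, 2, 3 };
    std::vector<int> out(v.size());

    // square each element first, then do an inclusive scan with +:
    // out = { 1, 1 + 4, 1 + 4 + 9 } = { 1, 5, 14 }
    std::transform_inclusive_scan(std::execution::par,
                                  v.begin(), v.end(), out.begin(),
                                  std::plus<>(),
                                  [](int x) { return x * x; });
}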

 

transform_exclusive_scan() – same as exclusive_scan(), but with one more operation (for example a lambda) that is applied to each element before the scan.
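A minimal sketch with placeholder data (again with an initial value):

#include <execution>
#include <functional>
#include <numeric>
#include <vector>

int main()
{
    std::vector<int> v = { 1, 2, 3 };
    std::vector<int> out(v.size());

    // square each element first, then do an exclusive scan with + and init 0:
    // out = { 0, 0 + 1, 0 + 1 + 4 } = { 0, 1, 5 }
    std::transform_exclusive_scan(std::execution::par,
                                  v.begin(), v.end(), out.begin(),
                                  0, std::plus<>(),
                                  [](int x) { return x * x; });
}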

 

for_each_n() – applies a function to the first n elements of a range, optionally in parallel.

For example:
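A minimal sketch with placeholder data:

#include <algorithm>
#include <execution>
#include <vector>

int main()
{
    std::vector<int> v = { 1, 2, 3, 4, 5 };

    // double only the first 3 elements, possibly in parallel: { 2, 4, 6, 4, 5 }
    std::for_each_n(std::execution::par, v.begin(), 3,
                    [](int& x) { x *= 2; });
}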

 

 

 
