I started using OpenMP a couple of days ago and I'm

facing something strange. The goal is to calculate the variance of a 256x256 array.

With 1 core the elapsed time for the calculation is 0.39 ms, but with 4 cores it is 1.63 ms!

I must be doing something wrong, but I cannot figure out what. I'm using Visual Studio 2008 Pro

on a Windows XP machine.

Here is what my code looks like:

Code:

```cpp
#include <cstdlib>
#include <cstdio>
#include <ctime>
#include <omp.h>
#include <iostream>
#include <cmath>

int main()
{
    // Initialize the random number generator
    srand(static_cast<unsigned>(time(NULL)));

    int n = 256, x, y;
    double XMax = 57.7, XMin = -10.3, Sum1 = 0.0, Sum2 = 0.0, Variance;
    double range = XMax - XMin;
    double U_0[256][256];

    // Populating array U_0
    ....

    // Measuring the time elapsed to calculate the variance
    double t = omp_get_wtime();
    #pragma omp parallel private(x,y)
    {
        #pragma omp for schedule(dynamic) reduction(+:Sum1,Sum2)
        for (x = 0; x < n; x++)
        {
            for (y = 0; y < n; y++)
            {
                #pragma omp atomic
                Sum1 += U_0[x][y] * U_0[x][y];
                #pragma omp atomic
                Sum2 += U_0[x][y];
            }
        }
    }
    Variance = (Sum1 / (n * n)) - ((Sum2 / (n * n)) * (Sum2 / (n * n)));

    // Time needed for the calculation of the variance
    printf_s("Time elapsed in milliseconds: %.2lf ms\n", 1000 * (omp_get_wtime() - t));
    return 0;
}
```

Could you please give me a hint as to what I might be doing wrong?

Thanks

Caroline