Row-wise vs. column-wise image processing

Enrique Cruiz

Hello all,

I am currently implementing a fairly simple algorithm. It scans a
grayscale image and computes a pixel's new value as a function of its
original value. Two passes are made, first horizontally and second
vertically. The problem I have is that the vertical pass is 3 to 4
times slower than the horizontal, although the code is _exactly_ the
same in both cases?!

The code itself is very simple. There are two loops to scan through
rows and columns (horizontal pass), or columns and rows (vertical
pass), depending on the pass. The processing part is simple and is
contained in a single line. 'pixel' is a pointer to the current pixel.
The current pixel's value is updated first, and the pointer is then
incremented to the next pixel, which is either in the next column or
in the next row depending on the pass. I should add that the image is
stored as one large 1-D vector, i.e. the values of each row are
contiguous in memory. Here is a simplified version of the code:

####################
// HORIZONTAL
// loop for every row
....
// then for every column
for (col = firstCol + 1; col <= lastCol - 1; ++col)
{
    *pixel = (*pixel) * 2 / 3;
    pixel++;
}

// VERTICAL
// loop for every column
....
// then for every row
for (row = firstRow + 1; row <= lastRow - 1; ++row)
{
    *pixel = (*pixel) * 2 / 3;
    pixel += imgWidth;
}
##################

For this small amount of code, the timings are as follows:
- horizontal = 0.035 sec.
- vertical =   0.135 sec.

Now if we simply remove the line updating the pixel pointer in each
case (i.e. "pixel++;" and "pixel+=imgWidth;"), the timings become
equal, at 0.081 sec. This single instruction is responsible for the
massive loss of time, and I have no idea why.

My only guess relates to memory management issues. Since the image is
stored row-wise, the current and next values are physically adjacent
in memory during the horizontal pass. For the vertical pass, on the
other hand, the next value is stored in the next row, so the distance
between them becomes imgWidth. My guess is that the next pixel value
in such a case is not close enough to be kept in the processor cache
or a register. The processor has to fetch it from memory, hence the
massive loss in speed. This is, however, just a guess.

I would really appreciate it if anybody could enlighten me on this topic.

Thanks in advance,


Enrique
 
santosh

Enrique said:
> The problem I have is that the vertical pass is 3 to 4 times slower
> than the horizontal, although the code is _exactly_ the same in both
> cases?!
[snip code and timings]
> My guess is that the next pixel value in such a case is not close
> enough to be kept in the processor cache or a register. The processor
> has to fetch it from memory, hence the massive loss in speed. This
> is, however, just a guess.

Without a complete, minimal, compilable example, our guess is as good
as yours, which is probably a good guess. Use the time program to
observe the time spent in your code versus in system code. That will
give an indication of whether your guess is right.

In any case, the C standard says nothing about the efficiency of the
generated code. If the slowdown is too much for your application,
you'll have to profile your code to find out where it is stalling and
look at what to do about it.
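
To get the ball rolling, here's a minimal sketch of such a harness.
The 3650x2730 size and the unsigned char pixel type are guesses on my
part, since you didn't post them:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Guessed dimensions and pixel type; adjust to the real ones. */
#define IMG_WIDTH  3650
#define IMG_HEIGHT 2730

int main(void)
{
    unsigned char *img;
    unsigned char *pixel;
    clock_t t0, t1;
    long row, col;

    img = malloc((size_t)IMG_WIDTH * IMG_HEIGHT);
    if (img == NULL)
        return EXIT_FAILURE;
    memset(img, 100, (size_t)IMG_WIDTH * IMG_HEIGHT);

    /* horizontal pass: unit stride through memory */
    t0 = clock();
    for (row = 0; row < IMG_HEIGHT; row++) {
        pixel = img + row * IMG_WIDTH;
        for (col = 0; col < IMG_WIDTH; col++) {
            *pixel = (unsigned char)(*pixel * 2 / 3);
            pixel++;
        }
    }
    t1 = clock();
    printf("horizontal: %.3f sec.\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    /* vertical pass: identical work, but a stride of IMG_WIDTH */
    t0 = clock();
    for (col = 0; col < IMG_WIDTH; col++) {
        pixel = img + col;
        for (row = 0; row < IMG_HEIGHT; row++) {
            *pixel = (unsigned char)(*pixel * 2 / 3);
            pixel += IMG_WIDTH;
        }
    }
    t1 = clock();
    printf("vertical:   %.3f sec.\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    free(img);
    return 0;
}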
 
mark_bluemel

Enrique said:
> I am currently implementing a fairly simple algorithm. It scans a
> grayscale image and computes a pixel's new value as a function of its
> original value. Two passes are made, first horizontally and second
> vertically. The problem I have is that the vertical pass is 3 to 4
> times slower than the horizontal, although the code is _exactly_ the
> same in both cases?!
[Snip]

This is not really a question about the C language. A general forum
such as comp.programming may be more appropriate than a C language
forum.
> My only guess relates to memory management issues.

<Off-topic>
You are almost certainly right, and there's probably not a lot you can
do to change it with this processing style.
In your horizontal pass you process a cache line's worth of data at a
time before pulling in the next one. In the worst case, your vertical
pass could take a cache miss for every pixel accessed.
</Off-topic>
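
(Also off-topic: one possible workaround, sketched by way of example
rather than taken from the original post. Because the simplified
operation touches each pixel independently, the vertical pass can walk
a strip of adjacent columns down the image, so every cache line
fetched is used in full. The strip width of 64 assumes 64-byte cache
lines and 1-byte pixels.)

/* Sketch: vertical pass over strips of adjacent columns. Each
 * cache line fetched for a row is consumed completely before
 * moving down, instead of yielding a single pixel.
 * STRIP = 64 assumes 64-byte lines and 1-byte pixels. */
#define STRIP 64

void vertical_pass_strips(unsigned char *img, long imgWidth,
                          long imgHeight)
{
    long row, col, c0;

    for (c0 = 0; c0 < imgWidth; c0 += STRIP) {
        long cEnd = (c0 + STRIP < imgWidth) ? c0 + STRIP : imgWidth;

        for (row = 0; row < imgHeight; row++) {
            unsigned char *pixel = img + row * imgWidth + c0;

            for (col = c0; col < cEnd; col++) {
                *pixel = (unsigned char)(*pixel * 2 / 3);
                pixel++;
            }
        }
    }
}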
 
Steffen Buehler

Enrique said:
> Now if we simply remove the line updating the pixel pointer in each
> case (i.e. "pixel++;" and "pixel+=imgWidth;"), the timings become
> equal, at 0.081 sec. This single instruction is responsible for the
> massive loss of time, and I have no idea why.

I'm pretty sure it's because the processor for which you are compiling
has an INC instruction which increments a given number quite fast. Your
"pixel++" is compiled into such an INC, whereas "pixel+=imgWidth" can
only be translated into an ADD instruction, which takes much longer.

I don't think you can do much C-specific here. Maybe, if imgWidth is a
power of two, there is a chance of using some strange &s and |s, but I
wonder...

Regards
Steffen
 
santosh

Steffen said:
> I'm pretty sure it's because the processor for which you are compiling
> has an INC instruction which increments a given number quite fast. Your
> "pixel++" is compiled into such an INC, whereas "pixel+=imgWidth" can
> only be translated into an ADD instruction, which takes much longer.
<snip>

<OT>
With regard to modern x86s, as far as I can tell, the ADD instruction
is faster than the INC instruction. The OP didn't specify the size of
the image being processed or the cache sizes of the CPU, so one can't
tell if the slowdown is due to cache misses.
</OT>
 
Enrique Cruiz

santosh said:
> <OT>
> With regard to modern x86s, as far as I can tell, the ADD instruction
> is faster than the INC instruction. The OP didn't specify the size of
> the image being processed or the cache sizes of the CPU, so one can't
> tell if the slowdown is due to cache misses.
> </OT>

The image is 10 megapixels, 3650x2730.

I have no idea how to find out the sizes of the CPU cache?!

Enrique
 
Dik T. Winter

mark_bluemel said:
> You are almost certainly right, and there's probably not a lot you can
> do to change it with this processing style.
> In your horizontal pass you process a cache line's worth of data at a
> time before pulling in the next one. In the worst case, your vertical
> pass could take a cache miss for every pixel accessed.

Indeed. Cache misses can give a huge loss of cycles.
 
CBFalconer

Enrique said:
> The problem I have is that the vertical pass is 3 to 4 times slower
> than the horizontal, although the code is _exactly_ the same in both
> cases?!
[snip]
>     *pixel = (*pixel) * 2 / 3;
[snip timings]
> My guess is that the next pixel value in such a case is not close
> enough to be kept in the processor cache or a register. The processor
> has to fetch it from memory, hence the massive loss in speed. This
> is, however, just a guess.

Your guess is almost certainly correct. But, for the code you
show, there is no reason to even have the second scan. The
operation could simply be *pixel *= 4.0 / 9;

Assuming the operations are really more complex, and depend on
adjacent pixels, you could preserve cache locality by keeping
things local. No need for the two sweeps.

/* init pixel pointer */
for (row = firstRow + 1; row < lastRow; ++row) {
    /* adjust pixel pointer, maybe pixel += 2 */
    for (col = firstCol + 1; col < lastCol; ++col, ++pixel) {
        /* look through adjacencies */
    }
}

--
<http://www.cs.auckland.ac.nz/~pgut001/pubs/vista_cost.txt>

"A man who is right every time is not likely to do very much."
-- Francis Crick, co-discoverer of DNA
"There is nothing more amazing than stupidity in action."
-- Thomas Matthews
 
santosh

Enrique said:
> The image is 10 megapixels, 3650x2730.
>
> I have no idea how to find out the sizes of the CPU cache?!

Under Linux, 'dmesg | grep -i cpu' should do the trick. Under Windows,
msinfo32.exe should show the sizes.

Anyway, a 10 MB image will certainly cause a huge number of cache
misses when you read it non-sequentially. I don't think there's much
you can do about it. Generally the delay is acceptable.
 
William Hughes

Enrique said:
> The problem I have is that the vertical pass is 3 to 4 times slower
> than the horizontal, although the code is _exactly_ the same in both
> cases?!
[snip code and timings]
> My guess is that the next pixel value in such a case is not close
> enough to be kept in the processor cache or a register. The processor
> has to fetch it from memory, hence the massive loss in speed. This
> is, however, just a guess.

This is quite standard, and your guess is quite correct. The problem
is that when you process by rows the elements are close together in
memory, and when you process by columns they are far apart. This has
little to do with C (in Fortran, where arrays are stored column-major,
you will probably find that processing by columns is faster than
processing by rows).

You have several options:

Live with things as they are. (A threefold drop in speed
is not that severe. Suppose your matrix were so large that
a column operation caused a disc read for every element.
Can you say "several orders of magnitude"?)

Change to an algorithm that can be done with
only row processing (not always possible).

Do the row processing, transpose the matrix,
do the column processing, then transpose the matrix
again. The two transposes may take less time
than the processing time difference (a sketch of
this approach follows below).

Store the image in "tiles", so that nearby
pixels in any direction are usually close
in memory. This is the most general solution
and also the hardest. Note that the overhead
of the tiling scheme will probably make row
processing slower. Hopefully, you will more
than make up for this in column processing.

Process in a different order. Process a few rows
and then do the needed column processing. Or process
a pixel at a time, doing the needed row and column
operations. The change in memory access pattern may help.
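
To make the transpose option concrete, here is a rough sketch; the
unsigned char pixel type and the simplified per-pixel operation are
assumptions carried over from the code posted earlier in the thread:

#include <stdlib.h>

/* Sketch of the transpose option: copy into a scratch buffer so
 * the column pass becomes a unit-stride sweep, then copy back.
 * The transposes themselves stride through memory, but each is
 * done once rather than once per processing step. Returns 0 on
 * success, -1 if the scratch buffer cannot be allocated. */
int vertical_via_transpose(unsigned char *img, long w, long h)
{
    unsigned char *t = malloc((size_t)w * h);
    unsigned char *pixel;
    long row, col, i;

    if (t == NULL)
        return -1;

    /* transpose: t is laid out h-wide, w-tall */
    for (row = 0; row < h; row++)
        for (col = 0; col < w; col++)
            t[col * h + row] = img[row * w + col];

    /* the column pass is now a sequential sweep over t */
    pixel = t;
    for (i = 0; i < w * h; i++, pixel++)
        *pixel = (unsigned char)(*pixel * 2 / 3);

    /* transpose back */
    for (row = 0; row < h; row++)
        for (col = 0; col < w; col++)
            img[row * w + col] = t[col * h + row];

    free(t);
    return 0;
}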

-William Hughes
 
christian.bau

Enrique said:
> I am currently implementing a fairly simple algorithm. It scans a
> grayscale image and computes a pixel's new value as a function of its
> original value. Two passes are made, first horizontally and second
> vertically. The problem I have is that the vertical pass is 3 to 4
> times slower than the horizontal, although the code is _exactly_ the
> same in both cases?!

RAM is very slow; that is why your computer has cache memory. And
accessing RAM to read a single byte is very, very slow compared to
transferring a whole chunk of data; that is why your computer will
read a whole run of consecutive bytes from RAM into cache memory when
it has to, typically 32, 64 or 128 bytes at a time.

When you process data line by line, those 128 bytes are all used, so
this is very efficient. When you process data in columns, you access
one pixel, but the computer reads 128 bytes from RAM. Then you access
the next pixel, and again the computer reads 128 bytes from RAM. I
guess you can see how this is inefficient.

There is also a stranger effect: the speed depends on the distance
from one line to the next. Run your program with different values of
imgWidth and measure the time, and you will likely see that your
computer doesn't like certain values of imgWidth (it has to do with
cache associativity - Google or Wikipedia should help). Figure out
which values it doesn't like and avoid them. Nice powers of two are
usually a very bad idea for imgWidth.
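
By way of illustration, one hedged sketch of that avoidance: give each
row a little slack so the stride between vertically adjacent pixels is
not a power of two. The pad of 64 bytes is an arbitrary example value,
not something measured.

#include <stdlib.h>

/* Sketch: allocate the image with a padded row stride so that
 * vertically adjacent pixels do not map to the same cache set.
 * Pixel (row, col) then lives at img[row * stride + col], and
 * the vertical pass advances the pointer by 'stride' instead
 * of imgWidth. The +64 pad is an arbitrary example value. */
unsigned char *alloc_padded_image(long imgWidth, long imgHeight,
                                  long *stride)
{
    *stride = imgWidth + 64;
    return malloc((size_t)(*stride) * imgHeight);
}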

If you process the same data repeatedly, it is best to do it in chunks
that fit into cache. Usually your computer has two kinds of cache: L1
cache (very small, maybe 32 KB or less, and very, very fast) and L2
cache (maybe 1 MB or 2 MB if you're lucky, a bit slowish), and then
there is RAM (tons of it, slow as hell). To find out how much cache
your computer has, just measure how long the same operation takes
depending on the data size. You should see a considerable drop in
speed at two points; try to stay below those sizes.
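
A rough sketch of that measurement; the size range and the fixed
amount of total work per size are arbitrary choices, just enough to
make the drops visible:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Sketch: sweep buffers of increasing size, doing the same total
 * amount of work each time. Throughput should drop once the
 * buffer no longer fits in L1, and again once it exceeds L2. */
int main(void)
{
    long size;

    for (size = 4L * 1024; size <= 8L * 1024 * 1024; size *= 2) {
        unsigned char *buf = malloc((size_t)size);
        clock_t t0, t1;
        long done = 0;
        long i;

        if (buf == NULL)
            return EXIT_FAILURE;
        memset(buf, 100, (size_t)size);

        t0 = clock();
        while (done < 64L * 1024 * 1024) {   /* fixed total work */
            for (i = 0; i < size; i++)
                buf[i] = (unsigned char)(buf[i] * 2 / 3);
            done += size;
        }
        t1 = clock();

        printf("%8ld bytes: %.3f sec.\n",
               size, (double)(t1 - t0) / CLOCKS_PER_SEC);
        free(buf);
    }
    return 0;
}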
 
