Fastest Way To Loop Through Every Pixel

nikie

Chaos said:
As my first attempt to loop through every pixel of an image, I used

for thisY in range(0, thisHeight):
    for thisX in range(0, thisWidth):
        # Actions here for Pixel thisX, thisY

But it takes 450-1000 milliseconds

I want speeds less than 10 milliseconds

Milliseconds don't mean much unless we know how big your images are and
what hardware you're using.

Have you considered using NumPy? Assuming you can get the image into a
numpy array efficiently, the actual algorithm boils down to something
like this:

grey = r*0.3 + g*0.59 + b*0.11
index = grey.argmin()
x, y = index % step, index / step
v = grey[index]

where r, g, b and grey are numpy.ndarray objects; the arithmetic
operators and the argmin function are implemented in C, so you can
expect decent performance. (The 4 lines above take about 80 ms for a
1000x1000 image on my PC.)

If that's not enough, you might want to use some specially optimized C
library for this purpose. (I'd suggest Intel's IPP, but there are
others).
 
Chaos

nikie said:
Chaos said:
As my first attempt to loop through every pixel of an image, I used

for thisY in range(0, thisHeight):
    for thisX in range(0, thisWidth):
        # Actions here for Pixel thisX, thisY

But it takes 450-1000 milliseconds

I want speeds less than 10 milliseconds

Milliseconds don't mean much unless we know how big your images are and
what hardware you're using.

Have you considered using NumPy? Assuming you can get the image into a
numpy array efficiently, the actual algorithm boils down to something
like this:

grey = r*0.3 + g*0.59 + b*0.11
index = grey.argmin()
x, y = index % step, index / step
v = grey[index]

where r, g, b and grey are numpy.ndarray objects; the arithmetic
operators and the argmin function are implemented in C, so you can
expect decent performance. (The 4 lines above take about 80 ms for a
1000x1000 image on my PC.)

If that's not enough, you might want to use some specially optimized C
library for this purpose. (I'd suggest Intel's IPP, but there are
others).

Can you give me an example of getting an image into a numpy array?
 
Chaos

nikie said:
Milliseconds don't mean much unless we know how big your images are and
what hardware you're using.

Have you considered using NumPy? Assuming you can get the image into a
numpy array efficiently, the actual algorithm boils down to something
like this:

grey = r*0.3 + g*0.59 + b*0.11
index = grey.argmin()
x, y = index % step, index / step
v = grey[index]

where r, g, b and grey are numpy.ndarray objects; the arithmetic
operators and the argmin function are implemented in C, so you can
expect decent performance. (The 4 lines above take about 80 ms for a
1000x1000 image on my PC.)

If that's not enough, you might want to use some specially optimized C
library for this purpose. (I'd suggest Intel's IPP, but there are
others).

I really do not understand the code. Where did you get the variables r,
g, b and step, and what does v produce?
 
Ron Adam

Chaos said:
As my first attempt to loop through every pixel of an image, I used

for thisY in range(0, thisHeight):
    for thisX in range(0, thisWidth):
        # Actions here for Pixel thisX, thisY

But it takes 450-1000 milliseconds

I want speeds less than 10 milliseconds

I have tried using SWIG and PyPy, but they were both unsuccessful in
compiling my files.

This probably won't work for you, but it's worth suggesting as it may
give you other ideas to solve your problem.

If it is a list of lists of pixel objects, you can iterate through the
pixels directly and not use range or xrange at all. For this to work,
the pixel object needs to be mutable or have an attribute to store its
value. It can't be just an int; in that case you will need to use indexes.



pixel = [rgb_value]

or

pixel = [r,g,b]

or

class Pixel(object):
    def __init__(self, rgb_value):
        self.value = rgb_value

pixel = Pixel(rgb_value)

Or some other variation that is mutable.


These may not be suitable and may cause additional overhead elsewhere as
the image may need to be converted to some other form in order to
display or save it.



What Actions are you performing on the pixels?

You may be able to increase the speed by creating lookup tables in
dictionaries and then using the pixel value as the key.


Just a rough example...

action1 = dict()
# fill dict with precomputed pixel key/value pairs.
# ...

image = getimage()
for row in image:
    for pixel in row:
        # one of the following, or something similar,
        # depending on how a pixel is represented
        pixel[0] = action1[pixel[0]]
        pixel.value = action1[pixel.value]
        pixel[:] = action1[tuple(pixel)]


The pixels need to be objects so they are mutable. If they aren't, then
you will need to use indexes as you did above.

Precomputing the pixel value tables may use up too much memory or take a
very long time if your image has a large number of possible colors. If
precomputing the tables takes too long but you are not concerned about
memory usage, you may be able to store (pickle) the precomputed tables
and then unpickle them before they are used.

This works best if the number of colors (the depth) is limited.
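
A minimal sketch of that pickling idea, assuming 8-bit greyscale values so
the table only has 256 entries; the names precompute_table and
darkness.pkl, and the mapping itself, are made up for illustration:

import os
import pickle

TABLE_FILE = "darkness.pkl"  # hypothetical cache file

def precompute_table():
    # map every possible 8-bit value to some precomputed result (placeholder)
    return dict((v, v * 0.5) for v in range(256))

if os.path.exists(TABLE_FILE):
    table = pickle.load(open(TABLE_FILE, "rb"))
else:
    table = precompute_table()
    pickle.dump(table, open(TABLE_FILE, "wb"))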

If these suggestions aren't applicable, then you most likely need to
look at an image library that uses compiled C (or assembly) code to do
the brute-force work. It may also be possible to access your platform's
DirectX or OpenGL library routines directly to do it.

Cheers,
Ron
 
Paul McGuire

Paul McGuire said:
Psyco may be of some help to you, especially if you extract out your myCol
expression into its own function, something like:

def darkness(img, x, y):
    return (0.3 * img.GetRed(x, y)) + (0.59 * img.GetGreen(x, y)) + \
           (0.11 * img.GetBlue(x, y))
<snip>

Even better than my other suggestions might be to write this function, and
then wrap it in a memoizing decorator
(http://wiki.python.org/moin/PythonDecoratorLibrary#head-11870a08b0fa59a8622201abfac735ea47ffade5)
- surely there must be some repeated colors in your image.
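
For reference, a minimal memoize sketch in that spirit (the decorator on
the wiki page linked above is more elaborate; this one just caches results
keyed on the positional arguments, which therefore need to be hashable):

def memoize(func):
    cache = {}
    def wrapper(*args):
        # return the cached result if we have seen these arguments before
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrapper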

-- Paul
 
nikie

Chaos said:
nikie said:
Milliseconds don't mean much unless we know how big your images are and
what hardware you're using.

Have you considered using NumPy? Assuming you can get the image into a
numpy array efficiently, the actual algorithm boils down to something
like this:

grey = r*0.3 + g*0.59 + b*0.11
index = grey.argmin()
x, y = index % step, index / step
v = grey[index]

where r, g, b and grey are numpy.ndarray objects; the arithmetic
operators and the argmin function are implemented in C, so you can
expect decent performance. (The 4 lines above take about 80 ms for a
1000x1000 image on my PC.)

If that's not enough, you might want to use some specially optimized C
library for this purpose. (I'd suggest Intel's IPP, but there are
others).

I really do not understand the code. Where did you get the variables r,
g, b and step, and what does v produce?

Sorry, I should have commented it better. The idea is that r, g, b are
numpy arrays containing the r, g, b pixel values of the image:

import numpy, Image
img = Image.open("Image1.jpg")
r = numpy.array(img.getdata(0))
g = numpy.array(img.getdata(1))
b = numpy.array(img.getdata(2))
w,h = img.size

The "step" is the length of one line of pixels, that is, the offset to
a pixel (x,y) is x+y*step. (This is usually called "step" or "stride"
in literature)

step = w

Now, I can use numpy's overridden arithmetic operators to do the
per-pixel calculations (note that numpy also overrides sin, cos, max,
..., but you'll have to use "from numpy import *" to get these
overrides in your namespace):

grey = r*0.3 + g*0.59 + b*0.11

The multiplication operator is overridden, so that "r*0.3" multiplies
each value in the array "r" by 0.3 and returns the multiplied
array; same for "g*0.59" and "b*0.11". Adding the arrays adds them up
value by value and returns the sum array. This generally works quite
well and intuitively if you perform only per-pixel operations. If you
need a neighborhood, use slicing.
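
For example, a rough sketch of a neighborhood operation done with shifted
slices; "grey2d" here is just an assumed 2D reshape of the data (h rows,
w columns), not something from the code above:

grey2d = grey.reshape(h, w)
# average each interior pixel with its 4 direct neighbours, all in C-level loops
blur = (grey2d[1:-1, 1:-1] + grey2d[:-2, 1:-1] + grey2d[2:, 1:-1] +
        grey2d[1:-1, :-2] + grey2d[1:-1, 2:]) / 5.0
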
The function "argmin" searches for the index of the minimum value in
the array:

index = grey.argmin()

But since we want coordinates instead of indices, we'll have to
transform these back using the step value:

x,y = index % step, index / step

Result:

print x,y,r[index],g[index],b[index]

Works for my image.

Notes:
- I'm _not_ sure if this is the fastest way to get an image into a
numpy array. It's convenient, but if you need speed, digging into the
numpy docs/newsgroup archive and doing a few benchmarks for yourself
would be a good idea.
- numpy does have 2d arrays, but I'm not enough of a numpy expert to
tell you how to use them to get rid of the index/step calculations.
Again, the numpy docs/newsgroup might help; see the sketch after this
list for one possible approach.
- a general piece of advice: using integer multiplications instead of
floating-point may help with performance, too.
- you said something about 10 ms, so I'm guessing your image doesn't
come from a hard drive at all (because reading it alone will usually
take longer than 10 ms). Numpy arrays have a C interface you might want
to use to get data from a framegrabber/digital camera into a numpy
array.
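
For the 2d-array note above, a rough sketch of how the index/step
arithmetic could be avoided, assuming the same PIL loading code as
earlier; numpy.unravel_index does the index-to-coordinate conversion:

import numpy, Image

img = Image.open("Image1.jpg")
w, h = img.size
r = numpy.array(img.getdata(0)).reshape(h, w)
g = numpy.array(img.getdata(1)).reshape(h, w)
b = numpy.array(img.getdata(2)).reshape(h, w)

grey = r*0.3 + g*0.59 + b*0.11
# integer-weight variant of the formula (an approximation):
#     grey = (77*r + 150*g + 29*b) >> 8
y, x = numpy.unravel_index(grey.argmin(), grey.shape)
v = grey[y, x]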
 
Paul McGuire

Chaos said:
It's not only finding the darkest color, but also finding the X position
and Y position of the darkest color.

Sorry, you are correct. To take advantage of memoizing, the darkness method
would have to work with the r, g, and b values, not the x's and y's.

-- Paul


import psyco
psyco.full()

IMAGE_WIDTH = 200
IMAGE_HEIGHT = 200
imgColRange = range(IMAGE_WIDTH)
imgRowRange = range(IMAGE_HEIGHT)

@memoize  # copy memoize from http://wiki.python.org/moin/PythonDecoratorLibrary, or similar
def darkness(r, g, b):
    return 0.3*r + 0.59*g + 0.11*b

def getDarkestPixel(image):
    getR = image.GetRed
    getG = image.GetGreen
    getB = image.GetBlue

    darkest = darkness( getR(0,0), getG(0,0), getB(0,0) )
    # or with PIL, could do
    # darkest = darkness( *getpixel(0,0) )
    dkX = 0
    dkY = 0
    for r in imgRowRange:
        for c in imgColRange:
            dk = darkness( getR(c, r), getG(c, r), getB(c, r) )
            if dk < darkest:
                darkest = dk
                dkX = c
                dkY = r
    return dkX, dkY
 
nikie

Paul said:
Sorry, you are correct. To take advantage of memoizing, the darkness method
would have to work with the r, g, and b values, not the x's and y's.

-- Paul


import psyco
psyco.full()

IMAGE_WIDTH = 200
IMAGE_HEIGHT = 200
imgColRange = range(IMAGE_WIDTH)
imgRowRange = range(IMAGE_HEIGHT)

@memoize  # copy memoize from http://wiki.python.org/moin/PythonDecoratorLibrary, or similar
def darkness(r, g, b):
    return 0.3*r + 0.59*g + 0.11*b

def getDarkestPixel(image):
    getR = image.GetRed
    getG = image.GetGreen
    getB = image.GetBlue

    darkest = darkness( getR(0,0), getG(0,0), getB(0,0) )
    # or with PIL, could do
    # darkest = darkness( *getpixel(0,0) )
    dkX = 0
    dkY = 0
    for r in imgRowRange:
        for c in imgColRange:
            dk = darkness( getR(c, r), getG(c, r), getB(c, r) )
            if dk < darkest:
                darkest = dk
                dkX = c
                dkY = r
    return dkX, dkY

From my experience with Psyco/PIL, it's probably faster to move the
image data into lists first (using list(Image.getdata) or
list(image.getband)) and access the raw data in your loop(s). Don't use
Image.getpixel in a tight loop; it results in Python-to-C calls which
can't be optimized by Psyco. Even when you're not using Psyco, getpixel
is probably slower (at least the PIL docs say so: "Note that this
method is rather slow; if you need to process larger parts of an image
from Python, use the getdata method.")
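
A rough sketch of what that looks like with list(Image.getdata), assuming
a PIL RGB image; the x + y*w indexing is the same step/stride idea as
earlier in the thread:

import Image  # PIL

img = Image.open("Image1.jpg")
w, h = img.size
data = list(img.getdata())        # flat list of (r, g, b) tuples, row by row

darkest, dkX, dkY = 256.0, 0, 0   # start above the largest possible value
for y in xrange(h):
    row = y * w
    for x in xrange(w):
        r, g, b = data[row + x]
        dk = 0.3*r + 0.59*g + 0.11*b
        if dk < darkest:
            darkest, dkX, dkY = dk, x, y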
 
Fredrik Lundh

Chaos said:
I have tried PIL. Not only that, but the Image.eval function had no
success either. I did some tests and I found out that Image.eval only
called the function a certain number of times, either 250 or 255.
Unless I can find a working example for this function, it's impossible
to use.

you might have better success by asking questions about the problem
you're trying to solve, rather than about some artifact of your first
attempt to solve it...

the following PIL snippet locates the darkest pixel in an image in about
0.5 milliseconds for a 200x200 RGB image, on my machine:

im = im.convert("L") # convert to grayscale
lo, hi = im.getextrema() # find darkest pixel value
lo = im.point(lambda x: x == lo) # highlight darkest pixel value
x, y, _, _ = lo.getbbox() # locate uppermost/leftmost dark pixel
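
Wired up end to end, that might look like the following sketch (the file
name is just an example):

import Image  # PIL

im = Image.open("Image1.jpg")
im = im.convert("L")                  # convert to grayscale
lo, hi = im.getextrema()              # find darkest pixel value
mask = im.point(lambda x: x == lo)    # highlight darkest pixel value
x, y, _, _ = mask.getbbox()           # locate uppermost/leftmost dark pixel
print x, y, lo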

</F>
 
Chaos

Fredrik said:
you might have better success by asking questions about the problem
you're trying to solve, rather than about some artifact of your first
attempt to solve it...

the following PIL snippet locates the darkest pixel in an image in about
0.5 milliseconds for a 200x200 RGB image, on my machine:

im = im.convert("L") # convert to grayscale
lo, hi = im.getextrema() # find darkest pixel value
lo = im.point(lambda x: x == lo) # highlight darkest pixel value
x, y, _, _ = lo.getbbox() # locate uppermost/leftmost dark pixel

</F>

Thank you, that worked perfectly.
 
