Problem with glReadPixels in ruby-opengl


Randy Baden

I'm having a lot of difficulty understanding the behavior of some of the
ruby-opengl code. For those who've never used it, ruby-opengl provides
Ruby bindings for the C OpenGL functions. For the most part it works
fine, but there's something funny about glReadPixels that I think I'm
missing. I'm posting it here because I can't find a decent resource for
ruby-opengl information, and I think my question is really about how the
bindings work.

Here's the C function in question:

static VALUE
gl_ReadPixels(obj,arg1,arg2,arg3,arg4,arg5,arg6)
VALUE obj,arg1,arg2,arg3,arg4,arg5,arg6;
{
    GLint x;
    GLint y;
    GLsizei width;
    GLsizei height;
    int format;
    int type;
    VALUE pixels;
    x = (GLint)NUM2INT(arg1);
    y = (GLint)NUM2INT(arg2);
    width = (GLsizei)NUM2INT(arg3);
    height = (GLsizei)NUM2INT(arg4);
    format = NUM2INT(arg5);
    type = NUM2INT(arg6);
    if (format != -1 && type != -1) {
        int type_size;
        int format_size;
        type_size = gltype_size(type) / 8;    /* bits per element -> bytes */
        format_size = glformat_size(format);  /* components per pixel */
        /* allocate a Ruby String big enough for the pixel data, then
           let glReadPixels write directly into its buffer */
        pixels = allocate_buffer_with_string(width*height*format_size*type_size);
        glReadPixels(x,y,width,height,format,type,(GLvoid*)RSTRING(pixels)->ptr);
        return pixels;
    }
    return Qnil;
}

I feel like I understand everything going on in this function; my only
problem is with the "return pixels" line. The allocate_buffer_with_string
function just allocates a string of the specified size, so it seems like
pixels is just a string whose characters' values are modified in place
by glReadPixels.
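
If I understand it right, the allocation amounts to something like this
in Ruby terms (just a sketch of the idea with illustrative sizes, not
the actual C helper):

width, height = 1, 1  # a single pixel
format_size = 1       # one component per pixel
type_size = 4         # gltype_size returns bits, and GL_FLOAT is 32 bits, so 32 / 8 = 4 bytes
pixels = "\0" * (width * height * format_size * type_size)
pixels.length         # => 4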

I guess my problem is that, when this function is called from the Ruby
side, the result is a string that looks like one of the following:

"\000\000\200?"
"\376n~?"

I don't really know what format the string has. Also, as you can see,
even though I'm passing constant parameters for the width, height,
format, and type, the strings appear to have different lengths. What
exactly is this function supposed to be returning? The glReadPixels
function in C should be modifying the last parameter as if it were a
3-dimensional array of size [width][height][components], and it should
be putting float values into the elements of the array. In my case it's
very simple: width and height are both 1 and the format has a single
component, so it's just an array with a single element. I would think
that this would then be just a float value, but I don't have any idea
how to convert those strings into a float value.
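
For what it's worth, checking those strings in irb suggests they're
actually the same length; the octal escapes just make them print
differently:

"\000\000\200?".length # => 4
"\376n~?".length       # => 4
"\000\000\200?".each_byte { |b| print b, " " } # prints: 0 0 128 63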

Does anyone have any ideas as to how this is working?
 

Randy Baden

Jason said:
Interesting way of doing it, that's for sure, but all you'll need to do
is treat the string like an array. e.g.:

pixels = GL.ReadPixels(...)

pixels[0] # r of first pixel
pixels[1] # g of first pixel
pixels[2] # b of first pixel

The crazy \000 stuff you see is simply a nice way of displaying
characters that aren't in the a-zA-Z0-9 range. As RGB values are 0-255,
each one fits in a single character.

So while it's actually a string, just treat the result as an array of
numbers.
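
To make that concrete, here's a quick sketch with a made-up byte string
standing in for a 1x1 GL_RGB / GL_UNSIGNED_BYTE read (note that on Ruby
1.9 and later, String#[] returns a one-character string instead of a
byte value, so use getbyte or unpack there):

pixels = "\xFF\x80\x00"  # stand-in for GL.ReadPixels(...) returning one RGB pixel
pixels[0]                # => 255 (red) under Ruby 1.8's String#[] semantics
pixels.unpack("C*")      # => [255, 128, 0] -- works on any Ruby version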

HTH

Jason

Thanks so much! Your reply wasn't exactly what I needed, but it did
help me realize how stupid I was being. Here's what I really needed:

winZ = glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT).unpack("f")[0]

Apparently unpack was the function I needed to convert the string
representation into the float value I wanted. I should have mentioned
that I was trying to read the depth component, but your example with
the RGB components still helped me find my answer.
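
In hindsight the round trip makes it obvious: the first string from my
original post is just 1.0 packed as a native single-precision float
(the byte order below assumes a little-endian machine like mine):

[1.0].pack("f")              # => "\000\000\200?"
"\000\000\200?".unpack("f")  # => [1.0]
"\376n~?".unpack("f")        # => [0.99388...] -- a depth value just under 1.0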

Thanks!
 
