Hamiral said:
I'd add that most OpenGL functions are commands for which speed is
particularly critical, so adding an OOP overlay on top of specialized
functions like glVertex3f or glTexCoord2f (1) and the like would be a
significant overhead, as these functions can get called millions of times
each frame (so more than 70 million times per second). Since they must
execute as fast as possible, even a 0.001 ms (2) overhead for each call
would be a very noticeable performance hit.
(1) Since Rune is concerned with the suffixes, I assume he uses immediate
mode, so no vertex arrays (which would, by the way, add a bit of performance
and of abstraction, which would be interesting in his particular case).
(2) I don't know how much time it would actually take, but OpenGL
programmers usually deal with lots and lots of vertices and polygons, and
thus function calls, so any tiny amount of time per call would rapidly
become noticeable.
presumably, yes...
however, this assumes that the graphics card is a little faster than it
often is IME, and so, although the GL implementation "could" potentially get
bogged down by function-call overhead, more often, IME, it seems to get
bogged down by the fill rate...
it is much like how triangle strips could presumably add a notable amount of
speed over simply using GL_TRIANGLES and sending out a bunch of triangles
this way...
at least in my experience, on newer HW, it doesn't seem to make as much of a
difference.
I guess I will note that for some of my uses, I had essentially wrapped many
of the GL calls with calls of my own which would accumulate geometry (one
goes through the motions, and the calls build up a collection of geometry).
I used this, mostly to good effect, for doing more advanced texturing and
shader effects (to allow my shader code to be fairly generic, only caring
about generic geometry, rather than about the details of what was being
drawn...).
a more advanced strategy could potentially be used to add a whole real-time
lighting and stencil-shadowing pass to the graphics pipeline (vs. managing
the lighting and shadowing more manually), but I have not personally done so
(actually, it is probably both easier and more efficient to use a mesh/model
based strategy at this point, even if that doesn't abstract things as well).
....
the overhead was "not all that bad", FWIW...
it is much like how even building a half-assed BSP tree in real time can
lead to better performance than trying to have a very low-overhead drawing
pass (and doing everything linearly), thus leading to the amusing
side-effect of the fully lit-and-shadowed world getting a better framerate
than the fullbright version (even though the lights and shadows involve a
lot of lighting passes and shadow geometry, while the fullbright pass simply
draws textured polygons...)
I guess it helps if one can build an approximate BSP in O(n log n) time,
even if a better one would take O(n^2) or more...
but, then again, performance is relative; for example, static BSP and
lightmapping is still a whole lot better performance-wise than real-time
BSP and dynamic lighting...
or such...