Explicitly "Matrix" class, or general "Transform" class?


Mike Oliver

I'm working on a whole graphics library, partly as an excuse to get
better at C++, that works with (the latest) OpenGL. The details of
that are not important, as I rarely have trouble finding the nitty-
gritty details of how to implement things and the algorithms involved.

My question is one of interface: given that the programmable pipeline
of OpenGL doesn't even support matrices directly, and a developer is
free to use whatever method they wish of transforming, amplifying, and
shading (though sadly not rasterizing) geometry, would it be better/
more versatile to use an abstraction of orientation/scale/translation/
etc. called Transform instead of Matrix?
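
For concreteness, here is roughly the shape of the two interfaces I'm
weighing. This is only a sketch: the names are placeholders and the
member functions are declared but not defined.

```cpp
#include <array>

struct Vec3 { float x, y, z; };

// Option A: an explicit Matrix class, stored column-major as GLSL expects.
struct Matrix4 {
    std::array<float, 16> m{};
    Matrix4 operator*(const Matrix4& rhs) const;   // composition = multiplication
    Vec3 transformPoint(const Vec3& p) const;
    static Matrix4 fromBasis(const Vec3& x, const Vec3& y, const Vec3& z);
};

// Option B: a higher-level Transform built from named components.
struct Quat { float w, x, y, z; };
struct Transform {
    Quat rotation{1, 0, 0, 0};   // identity orientation
    Vec3 translation{0, 0, 0};
    Vec3 scale{1, 1, 1};         // could later grow a Skew component
    Transform then(const Transform& next) const;   // composition; order is an open question
    Vec3 transformPoint(const Vec3& p) const;
};
```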

I can see advantages to both methods:

Matrix:
Pros:
Direct interoperability with vectors
Direct construction of bases from vectors
"Familiar" method for other potential users of the code
Many necessities of graphics math are best done with a matrix (the
matrix for transforming normals under arbitrary transformations, i.e.
the inverse-transpose, does not have a simple parametrized
counterpart, though under some conditions it may; see the sketch
after this list)
Cons:
Chaining transforms is more expensive with matrices (transformation
hierarchies would be slower than with Transforms by a significant
factor)
Orthogonality can rapidly go out the window since the easiest way to
provide a matrix interface is to use one internally; renormalizing and
reorthogonalizing is very expensive
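
To make the normal-matrix point concrete: transforming normals
correctly under an arbitrary model matrix means multiplying by the
inverse-transpose of its upper-left 3x3, which is inherently a matrix
operation. A minimal sketch (names are placeholders; no handling of a
singular matrix):

```cpp
#include <array>

// 3x3 matrix, row-major here just for readability.
using Mat3 = std::array<std::array<float, 3>, 3>;

// Normal matrix: inverse-transpose of m. The cofactor matrix divided by
// det(m) is exactly the inverse-transpose, so no explicit transpose is needed.
Mat3 normalMatrix(const Mat3& m) {
    Mat3 c;
    c[0][0] = m[1][1]*m[2][2] - m[1][2]*m[2][1];
    c[0][1] = m[1][2]*m[2][0] - m[1][0]*m[2][2];
    c[0][2] = m[1][0]*m[2][1] - m[1][1]*m[2][0];
    c[1][0] = m[0][2]*m[2][1] - m[0][1]*m[2][2];
    c[1][1] = m[0][0]*m[2][2] - m[0][2]*m[2][0];
    c[1][2] = m[0][1]*m[2][0] - m[0][0]*m[2][1];
    c[2][0] = m[0][1]*m[1][2] - m[0][2]*m[1][1];
    c[2][1] = m[0][2]*m[1][0] - m[0][0]*m[1][2];
    c[2][2] = m[0][0]*m[1][1] - m[0][1]*m[1][0];
    const float det = m[0][0]*c[0][0] + m[0][1]*c[0][1] + m[0][2]*c[0][2];
    Mat3 n;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            n[i][j] = c[i][j] / det;
    return n;
}
```

For a pure rotation the result is the rotation itself; it is
non-uniform scale (and shear) that makes the inverse-transpose
necessary.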

Transform:
Pros:
Internal implementation can choose the best algorithm given floating-
point arithmetic constraints and computing power to deal with a given
transformation type (quaternions compose very cheaply)
More extensible (I could easily add "Skew" for instance)
Forces Projections to be handled as a different beast
Agnostic interface (by definition the interface can be whatever is
convenient, not bound by matrix notation)
Transform hierarchy makes more intuitive sense
Cons:
As I found out on my senior project, the obvious internal
representation of rotations, quaternions, does not translate well
from more human-readable representations (going from a rotation
matrix defined by a location, lookAt point, and up vector to a
quaternion can be off by more than 5 degrees!)
Most likely other weird lurking floating-point creep conditions
The order of composition is not as implicitly defined for a Transform
as it is for a Matrix: how do you combine two arbitrary Transforms
such that the result is what one would expect from the same
composition of transformations under matrix math, while retaining the
possible performance gains of other representations? (One possible
answer is sketched after this list.)
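
On that last point, here is the kind of answer I have in mind,
restricted to a uniform (scalar) scale, since a full per-axis scale
does not stay in rotation/scale/translation form once it is rotated.
All names are placeholders; the convention is p' = R*(s*p) + t, and
compose(b, a) is meant to behave like Matrix_b * Matrix_a acting on
column vectors:

```cpp
struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };   // assumed unit length

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static Vec3 add(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(const Vec3& a, float s)       { return {a.x*s, a.y*s, a.z*s}; }

// Rotate v by unit quaternion q:  v' = v + 2*u x (u x v + w*v), u = (x, y, z).
static Vec3 rotate(const Quat& q, const Vec3& v) {
    const Vec3 u{q.x, q.y, q.z};
    const Vec3 t = cross(u, add(cross(u, v), mul(v, q.w)));
    return add(v, mul(t, 2.0f));
}

// Hamilton product b*a: applying rotation a first and then b.
static Quat mul(const Quat& b, const Quat& a) {
    return { b.w*a.w - b.x*a.x - b.y*a.y - b.z*a.z,
             b.w*a.x + b.x*a.w + b.y*a.z - b.z*a.y,
             b.w*a.y - b.x*a.z + b.y*a.w + b.z*a.x,
             b.w*a.z + b.x*a.y - b.y*a.x + b.z*a.w };
}

// Uniform-scale transform: p' = rotation * (scale * p) + translation.
struct Transform {
    Quat  rotation{1, 0, 0, 0};
    Vec3  translation{0, 0, 0};
    float scale = 1.0f;
};

// Apply `first`, then `second` -- same result as Matrix_second * Matrix_first.
Transform compose(const Transform& second, const Transform& first) {
    Transform out;
    out.rotation    = mul(second.rotation, first.rotation);
    out.scale       = second.scale * first.scale;
    out.translation = add(rotate(second.rotation,
                                 mul(first.translation, second.scale)),
                          second.translation);
    return out;
}
```

Whether the public interface should read right-to-left like that, or
left-to-right (a.then(b)), is exactly the kind of interface decision
I'm unsure about.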

It may also make sense to provide both: arbitrary 3d objects could
have Transforms, and when you get ready to render you obtain and use
Matrix versions of the Transforms. That is desirable if only because
using matrix transforms in a shader would save a dozen or so floating-
point ops on each vertex, and in a modern scene, that translates to
hundreds of thousands of operations the GPU doesn't have to do, all
for doing a handful of expensive operations on the CPU and only losing
~12 uniform slots per shader (my card supports up to 4096 uniform
inputs, so I can spare a few).
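
In code the hybrid would look roughly like this, reusing the
uniform-scale Transform/Quat/Vec3 from the sketch above (the "uModel"
uniform name and the surrounding GL state are assumptions; this also
assumes a current GL 2.0+ context):

```cpp
#include <array>
// ... plus the Vec3, Quat, and Transform definitions from the sketch above.

// Column-major 4x4, which is what glUniformMatrix4fv expects with
// transpose = GL_FALSE: upper 3x3 is scale * rotation, last column is translation.
std::array<float, 16> toMatrix(const Transform& t) {
    const float w = t.rotation.w, x = t.rotation.x, y = t.rotation.y, z = t.rotation.z;
    const float s = t.scale;
    return {
        s*(1 - 2*(y*y + z*z)), s*(2*(x*y + w*z)),     s*(2*(x*z - w*y)),     0.0f,  // column 0
        s*(2*(x*y - w*z)),     s*(1 - 2*(x*x + z*z)), s*(2*(y*z + w*x)),     0.0f,  // column 1
        s*(2*(x*z + w*y)),     s*(2*(y*z - w*x)),     s*(1 - 2*(x*x + y*y)), 0.0f,  // column 2
        t.translation.x,       t.translation.y,       t.translation.z,       1.0f   // column 3
    };
}

// At draw time, with `program` bound and `xf` the object's composed Transform:
//   glUniformMatrix4fv(glGetUniformLocation(program, "uModel"), 1, GL_FALSE,
//                      toMatrix(xf).data());
```

The shader then pays a single mat4 multiply per vertex, which is where
the savings above would come from.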

Thoughts?
 
