Jos A. Horsmeier
Greetings and salutations,
I'm struggling with a bit of a mess (solely created by me and myself).
Suppose a bunch of types (classes) exist, say, T_1, T_2, ... T_n, implementing
a single interface or inheriting from some (abstract) base class T. Also assume
that some 'applicators' exist, say, A_1, A_2 ... A_m, all implementing an interface
(or abstract base class) equivalent to --
interface Applicator {
    Object apply(T ta, T tb);
}
If both sets T_i and A_j are 'fixed', i.e. no new classes T_p nor A_q enter
my messy arena, I could implement all A_i as visitors to the T_j classes
in order to get their job done. OTOH, all T_j classes could double dispatch
the 'A_i.apply' methods to their T_j counterparts to get the job done.
I realise that some form of a Cartesian product of 'apply' methods must
reside somewhere, somehow ...
For the sake of the example, think of the T_j as being subclasses of classes
Short, Int, Long, Double, and let the Appliers be something like 'Add', 'Sub'
etc., i.e. the following code snippet should make sense --
Add myAdd = new Add();
Dbl myDbl = new Dbl(41);
Int myInt = new Int(1);
// assume Num is a superclass of Dbl and Int
Num total = myAdd.apply(myDbl, myInt);
Variable 'total' should refer to a 'Dbl' in this example. As I wrote above,
there would be no problem if at least the set T_j were fixed; the visitor
pattern would do fine. If the set A_i were fixed, multiple dispatch
(over the set T_j) would be a fine fit. But what to do if both sets
may be expanded?
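To make the fixed-T_j, double-dispatch variant concrete, here is a minimal sketch of what I mean by the T_j classes dispatching an apply to their counterparts. The addTo methods, and the way an Int promotes to a Dbl, are my own guesses for illustration, not working code of mine:

```java
// Double-dispatch sketch: T.add dispatches on the dynamic type of
// its partner via a second virtual call. Int and Dbl are stand-ins
// for the T_j classes from the example above.
interface T {
    T add(T other);       // first dispatch, on 'this'
    T addTo(Int other);   // second dispatch: computes other + this
    T addTo(Dbl other);
}

final class Int implements T {
    final int value;
    Int(int value) { this.value = value; }
    public T add(T other)     { return other.addTo(this); }
    public T addTo(Int other) { return new Int(other.value + value); }
    public T addTo(Dbl other) { return new Dbl(other.value + value); }
}

final class Dbl implements T {
    final double value;
    Dbl(double value) { this.value = value; }
    public T add(T other)     { return other.addTo(this); }
    public T addTo(Int other) { return new Dbl(other.value + value); }
    public T addTo(Dbl other) { return new Dbl(other.value + value); }
}
```

The drawback is exactly the one stated above: every new T_p must add an addTo overload to the T interface and to every existing class, so this only works while the set T_j is closed.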
I've fiddle-diddled with introspection, synthesizing class names given a T_i
and a T_j, caching them in hashmaps, but to no avail -- my code is a mess and
my design is something I'm ashamed of. I would really appreciate it if
some kind soul here could enlighten me.
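For what it's worth, the hashmap idea can be made to go somewhere if the table is keyed on the pair of operand classes rather than on synthesized class names, so that new T_p types and new A_q applicators each just register entries. A hedged sketch (Registry and all names in it are hypothetical, not a known pattern implementation of yours):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BinaryOperator;

// Open multimethod sketch: one table per applicator, keyed on the
// (Class, Class) pair of the two operands. New operand types and new
// applicators both extend the system by registering entries only.
final class Registry {
    private final Map<String, BinaryOperator<Object>> table = new HashMap<>();

    private static String key(Class<?> a, Class<?> b) {
        return a.getName() + "|" + b.getName();
    }

    void register(Class<?> a, Class<?> b, BinaryOperator<Object> op) {
        table.put(key(a, b), op);
    }

    Object apply(Object a, Object b) {
        BinaryOperator<Object> op = table.get(key(a.getClass(), b.getClass()));
        if (op == null)
            throw new IllegalArgumentException(
                "no applicator registered for " + key(a.getClass(), b.getClass()));
        return op.apply(a, b);
    }
}
```

This looks up the exact runtime classes only; if a pair should also match via superclasses, the lookup would have to walk the class hierarchies on a miss, which is where the caching earns its keep.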
TIA, and kind regards,
Jos