I think the most immediate response from p5p would be 'that's what the
new ~~ operator is for, which is already overloadable', and in general I
would agree that having two 'stringifications' would be seriously confusing.
However, since this is (just) for back-compat hacks, it's possible a
case could be made. Maybe I should just do up a patch...
Myself, I would not be so quick. As I see it, the problem with
designing a reasonable string-overloading framework is that I do not
have many SIGNIFICANTLY different models to serve as example
applications of this framework.
I agree with Larry's estimate that "to implement a feature, I must
want it 3 times first". Usually, three "orthogonal" applications
provide enough insight to design a pilot semantics. However, I have
only 2.3 examples in mind:
a) potentially infinite streams: consider
$Pi = infinite_precision_Pi;
print OK if $Pi =~ /123456789/;
or consider
$/ = qr/[1-9][0-9]{50}/; # or something more vicious
$in = <STDIN>; # assume a pipe
system $external_program;
(In the second example, we may want to gobble as few characters as
possible while achieving the match for $/.)
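To show why (a) needs more than the current overloading: today's `use overload '""'` forces a complete stringification before the match even starts, so a lazily computed value can only fake laziness with a finite cutoff. A minimal sketch (the `LazyPi` class and its fixed-precision cutoff are hypothetical stand-ins, not a proposal):

```perl
#!/usr/bin/perl
use strict;
use warnings;

package LazyPi;
use overload '""' => \&stringify, fallback => 1;

sub new { my ($class, $digits) = @_; bless { digits => $digits }, $class }

# A real infinite-precision Pi would need the REx engine to pull
# digits on demand; with '""' overloading we must pick a cutoff
# up front, and the whole string is built before matching begins.
sub stringify {
    my ($self) = @_;
    my $pi = "3.14159265358979323846264338327950288419716939937510";
    return substr($pi, 0, 2 + $self->{digits});
}

package main;
my $Pi = LazyPi->new(30);
print "OK\n" if $Pi =~ /2643383279/;  # matches only because 30 digits suffice
```

With a true stream, no cutoff is safe: the pattern might need one more digit than we produced.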
b1) Strings with out-of-band markup. E.g., colored output to a TTY;
or *parsed* HTML streams (the string value is what you get by
cut&paste, but all the formatting info is there, just out-of-band).
You want to look for certain "features" (e.g., match RExes on
the in-band content + some restrictions on the out-of-band part - as in:
find "foo|bar" at the start of a "subdivision" [table cell, or div,
or whatsit]).
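A sketch of the read-only case in (b1), keeping the markup as a per-character attribute array parallel to the text (the `MarkedString` class and its `match_in` method are made-up names, one naive model rather than the intended interface):

```perl
#!/usr/bin/perl
use strict;
use warnings;

package MarkedString;
use overload '""' => sub { $_[0]{text} }, fallback => 1;

# $attr holds one out-of-band tag per character, so the plain string
# value is exactly what cut&paste would give.
sub new {
    my ($class, $text, $attr) = @_;
    bless { text => $text, attr => $attr }, $class;
}

# Find $re, but accept only matches lying entirely inside a region
# whose characters all carry the out-of-band tag $tag; return the
# match position, or -1.
sub match_in {
    my ($self, $re, $tag) = @_;
    pos($self->{text}) = 0;          # always scan from the beginning
    while ($self->{text} =~ /$re/g) {
        my ($beg, $end) = ($-[0], $+[0]);
        return $beg
            unless grep { $_ ne $tag } @{ $self->{attr} }[$beg .. $end - 1];
    }
    return -1;
}

package main;
# "foo" sits inside a table cell; "bar" is in running text.
my $s = MarkedString->new(
    "barfooqux",
    [ ('text') x 3, ('cell') x 3, ('text') x 3 ],
);
print $s->match_in(qr/foo|bar/, 'cell'), "\n";   # 3: the "bar" hit is rejected
```

The real design question is what a *native* REx syntax for "this part of the match must lie in a cell" would look like; the method above just bolts the check on after the fact.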
b2) Same, only not for read-only access, but for modification (as in
my interview with The Perl Journal). E.g., suppose you want to
translate a chunk of data from HTML to LaTeX *in place* (i.e., as
s/// does); the translation rules are very non-local; one
must either
re-gather all the non-local information again and again at
each point, or
gather it once, put it in markup, and use the "local structure of
markup at every point" instead of re-gathering.
After this, to do the actual translation, one wants to perform the
needed s/// without ruining the gathered non-local information.
(This is essentially what I do in cperl-mode to fontify RExes;
the only difference is that in CPerl, I only touch the out-of-band
part of the content. Consider the case when I need to use these
markups to convert RExes from Perl syntax to Emacs syntax...)
(As I said in the interview, Emacs has much better facilities
for string processing than Perl. One of the purposes of string
overloading should be to narrow the gap.)
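And a sketch of the modification side (b2): an s///-like operation that splices a per-character attribute array in step with the edited text, so the gathered non-local information survives the substitution. Again, `Marked` and `subst` are hypothetical names, and the inheritance policy for the replacement's markup is just one possible choice:

```perl
#!/usr/bin/perl
use strict;
use warnings;

package Marked;

sub new {
    my ($class, $text, $attr) = @_;
    bless { text => $text, attr => $attr }, $class;
}

# s///-like, but keep the out-of-band attribute array aligned with
# the text; the replacement inherits the tag of the first replaced
# character (one possible policy among several).
sub subst {
    my ($self, $re, $repl) = @_;
    return 0 unless $self->{text} =~ /$re/;
    my ($beg, $len) = ($-[0], $+[0] - $-[0]);
    my $tag = $self->{attr}[$beg];
    substr($self->{text}, $beg, $len) = $repl;
    splice @{ $self->{attr} }, $beg, $len, ($tag) x length $repl;
    return 1;
}

package main;
my $s = Marked->new("a<b>c", [qw(txt tag tag tag txt)]);
$s->subst(qr/<b>/, '\textbf{');       # HTML -> LaTeX, in place
print "$s->{text}\n";                 # a\textbf{c, attrs still aligned
```

Even this toy shows where it gets hard: captured groups, multi-region matches, and replacements that should *split* a markup region all need decisions that plain s/// never had to make.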
It would be wonderful if one could use "2-headed 2-language strings"
as the third application of string overloading. The problem is that I
have no idea how one would like to EDIT such strings.
For example: on English strings, one could do something like
s/each/every/g; would one want to do s/$each/$every/ (with suitably
constructed $each and $every) on 2-language strings? So far this
looks too silly to be a help in semantic design...
Hope this helps,
Ilya