Uri Guttman said:
> so in general, don't use [$&], use explicit capturing
> parens which will only cause the s/// with them to copy the original
> string.
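The rewrite Uri is suggesting is mechanical: capture the piece you want explicitly instead of reading it back out of $&. A minimal sketch (the sample line is made up, vaguely lspci-shaped):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $line = "00:19.0 Ethernet controller: Intel Corporation 82579LM";

# Instead of:
#     $line =~ /Ethernet \w+/ and print "$&\n";
# which, as I understand it, makes *every* match in the program copy
# its target string in the perls of this era, capture explicitly so
# only this one match pays for the copy:
if ( $line =~ /(Ethernet \w+)/ ) {
    print "$1\n";    # prints "Ethernet controller"
}
```

Note that the $& penalty is per-program, not per-match: mentioning $& anywhere, even once, taints all the other matches too, so the capturing-parens version only wins if you purge $& from the whole file.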
I don't have such an old perl to hand, but perlre points out that:
As of 5.005, $& is not so costly as the other two.
(meaning $' and $`)
How much less costly is it?
As a side note: thanks mostly to Abigail, one alteration I've made to
my personal programming practices lately is that I've started using
things like $&, shelling out, etc., more often in cases where the code
isn't time-critical (which is, frankly, most of the time). I've found
that it often saves me mental effort and time, and in many cases makes
the code clearer than a more conventional approach would.
Recently, for instance, I replaced a shell script that examined a
Linux system and printed out what cards it thought were in which
slots. The Perl replacement does all sorts of conventionally 'bad'
things, like using $&, lots of `find -name ... | grep | sort -u`, and
the like, because I was trying, as much as possible, to stick with the
logic of the shell script. I figured, "Heck, I'll optimize it later:
pass around arrayrefs instead of calling `lspci` everywhere, use
File::Find, and stop with the $&."
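The File::Find version of those pipelines is the part I keep putting off, so here's roughly the shape it would take. The directory and pattern below are placeholders, not what the real script searches for:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;

# Roughly the pure-Perl shape of `find DIR -name '*pci*' | sort -u`:
# walk the tree, collect matching names, de-duplicate with a hash.
sub find_matching {
    my ( $dir, $re ) = @_;
    my %seen;    # the hash plays the role of sort -u
    find( sub { $seen{$File::Find::name}++ if /$re/ }, $dir );
    return sort keys %seen;
}

print "$_\n" for find_matching( '.', qr/pci/ );
```

No extra processes get forked, and you get the results back as a list instead of having to re-split somebody's stdout.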
Before I even got around to it, I ran some benchmarks, and I had
still cut the average run time from 10 seconds to 3, so I give myself
a free pass for using those constructs in that context. I realize this
isn't really disagreeing with you; my point is just that sometimes the
performance hit of using $&, or of shelling out even when there's a
perfectly good module available, isn't significant.
My advice would be to use them wherever you like, but be aware that
they can indeed cause performance problems. Even so, I'd still
profile your program before rushing to those as the first cure for
poor performance; you may well find, as I have, that poor algorithms
or inefficient data structures are far more detrimental to your
program's run time than $& could ever be.
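For what it's worth, the Benchmark module makes the "algorithms first" point easy to see for yourself. A toy sketch (the data is entirely made up) comparing a linear scan against a hash lookup for the same membership test:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# The kind of thing a profile turns up: scanning an array for one
# element versus keeping a hash keyed on the same values.
my @slots = map { "slot$_" } 1 .. 1000;
my %slots = map { $_ => 1 } @slots;

cmpthese( 10_000, {
    linear_scan => sub { my @hit = grep { $_ eq 'slot999' } @slots },
    hash_lookup => sub { my $hit = exists $slots{'slot999'} },
} );
```

On any perl I've tried, the hash lookup wins by orders of magnitude, which dwarfs whatever $& is costing you.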
-=Eric