Chris said:
This is, of course, going to depend on the test cases. If you're testing
extremely long strings with the @ close to the end and no dots at all,
then clearly the constant cost of the substring operation will be dwarfed
by the linear cost of the extra search distance the non-substring version
has to cover. In cases where the dot lies after the @ (the best case for
not doing the substring), the substring version will be slower by a small
constant additive increase (not even a constant factor) in execution time.
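To make that concrete, the two versions we're talking about look roughly
like this -- I'm sketching from memory, so the class and method names are
mine, not the actual code from earlier in the thread:

    class EmailCheck {
        // Substring version: peel off everything after the '@' and
        // search only that tail for a '.'.  The substring call itself
        // is constant-time (String.substring shares the backing
        // char[]), so the work is bounded by the length of the tail.
        static boolean hasDotAfterAtSubstring(String s) {
            int at = s.indexOf('@');
            if (at < 0) return false;
            return s.substring(at + 1).indexOf('.') >= 0;
        }

        // Non-substring version: find the last '.' in the whole string
        // and compare positions.  No substring overhead, but when there
        // is no dot at all, lastIndexOf walks the entire string rather
        // than just the tail after the '@'.
        static boolean hasDotAfterAtScan(String s) {
            int at = s.indexOf('@');
            if (at < 0) return false;
            return s.lastIndexOf('.') > at;
        }
    }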
This also raises questions about the importance of efficiency: the
advantage for the substring version lies in the failure cases, which tend
to be comparatively unimportant, efficiency-wise, in many applications.
The advantage for the non-substring code, while far smaller, occurs in
the mainline case rather than the failure case.
In other words, there is no "best" answer without more description of
the problem.
Quite right! And, on top of that, what struck me as I thought about this
last night (yes, I spend time pondering Java-related subjects and
programming in general when I should be doing more productive things, like
finishing up _Why People Believe Weird Things_ by Michael Shermer <g>) was
that, in a test of 10M addresses, neither algorithm differed from the
other by more than a fraction of a second (about half a second). With that
small a gap in performance between the two, is there any *real* difference
between them? Having to go to the extreme of running 10M iterations just
to expose a minuscule difference is, IMHO, argumentum ad absurdum...
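For what it's worth, the harness amounted to little more than this -- a
sketch of the shape of it, not the exact code I ran, using the
hypothetical EmailCheck methods from the sketch above:

    class Bench {
        public static void main(String[] args) {
            String addr = "someone@somewhere.com";
            int n = 10000000; // 10M iterations

            boolean sink = false; // keeps the JIT from discarding the loops

            long t0 = System.currentTimeMillis();
            for (int i = 0; i < n; i++)
                sink ^= EmailCheck.hasDotAfterAtSubstring(addr);
            long t1 = System.currentTimeMillis();

            for (int i = 0; i < n; i++)
                sink ^= EmailCheck.hasDotAfterAtScan(addr);
            long t2 = System.currentTimeMillis();

            System.out.println("substring: " + (t1 - t0) + " ms");
            System.out.println("scan:      " + (t2 - t1) + " ms");
            System.out.println(sink);
        }
    }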