Optimisation for tight loops...

  • Thread starter Krishna Chaitanya

Krishna Chaitanya

Hi,

Does the Perl compiler/interpreter do any loop unrolling, or any loop
optimisations at all? Or do we have to code them by hand? I came
across opinions that Perl's "bad" for tight loops....I'd like to see
what the gurus say.

-Chaitanya
 

Uri Guttman

KC> Does the Perl compiler/interpreter do any loop unrolling, or any
KC> loop optimisations at all? Or do we have to code them by hand? I
KC> came across opinions that Perl's "bad" for tight loops....I'd like
KC> to see what the gurus say.

no dynamic lang (and i assume it is others who are badmouthing
perl) can do loop unrolling and such. this is because things can change
at runtime with tied variables, symbolic refs, and string eval. no compiler
can handle such things and optimize them out of a loop or unroll a loop.

now you can write faster and slower loops in perl. c-style loops and
indexing into arrays are slow; perl foreach-style loops are fast and
simple. knowing perl better is the way to write better loops, not
listening to anti-perl opinions. ask those naysayers about their 'perl
compatible regex' stuff and watch them stammer about that.
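
a quick sketch of the two styles (same work either way; the array and
names are just illustrative):

```perl
my @items = (1 .. 1000);

# c-style loop: index bookkeeping and an array lookup on every pass
my $total_c = 0;
for (my $i = 0; $i <= $#items; $i++) {
    $total_c += $items[$i];
}

# perl foreach loop: iterates the elements directly, no indexing
my $total_foreach = 0;
$total_foreach += $_ for @items;

print "$total_c $total_foreach\n";    # both compute 500500
```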

uri
 

Krishna Chaitanya

now you can write faster and slower loops in perl. c-style loops and
indexing into arrays are slow; perl foreach-style loops are fast and
simple. knowing perl better is the way to write better loops, not
listening to anti-perl opinions. ask those naysayers about their 'perl
compatible regex' stuff and watch them stammer about that.

uri

Thanks, Uri. This really helped.

On this note, I referred to perlperf and got this example going:


#!/usr/bin/perl

use warnings;
use strict;
use Benchmark qw(:hireswallclock timethese cmpthese);

my $code_C_Style = sub {
    my @a = (1..100000);
    my $searchfor = 99999;
    for (my $i = 0; $i <= $#a; $i++) {
        last if $a[$i] == $searchfor;
    }
};

my $code_Perl_grep = sub {
    my @a = (1..100000);
    my $searchfor = 99999;
    return if grep /$searchfor/, @a;
};

my $code_Perl_foreach = sub {
    my @a = (1..100000);
    my $searchfor = 99999;
    foreach (@a) {
        last if /^$searchfor$/;
    }
};

my $code_Perl_hashkeys = sub {
    my @a = (1..100000);
    my $searchfor = 99999;
    my %ah;
    @ah{@a} = ();
    return if exists $ah{$searchfor};
};

my $results = timethese(100, {
    C_Style       => $code_C_Style,
    Perl_grep     => $code_Perl_grep,
    Perl_foreach  => $code_Perl_foreach,
    Perl_hashkeys => $code_Perl_hashkeys,
});

cmpthese($results);

The results were as follows:

Benchmark: timing 100 iterations of C_Style, Perl_foreach, Perl_grep, Perl_hashkeys...
      C_Style: 6.45279 wallclock secs ( 6.36 usr +  0.01 sys =  6.37 CPU) @ 15.70/s (n=100)
 Perl_foreach: 9.16967 wallclock secs ( 8.55 usr +  0.02 sys =  8.57 CPU) @ 11.67/s (n=100)
    Perl_grep: 7.02614 wallclock secs ( 6.80 usr +  0.01 sys =  6.81 CPU) @ 14.68/s (n=100)
Perl_hashkeys: 23.5906 wallclock secs (22.61 usr +  0.04 sys = 22.65 CPU) @  4.42/s (n=100)
                 Rate Perl_hashkeys Perl_foreach Perl_grep C_Style
Perl_hashkeys  4.42/s            --         -62%      -70%    -72%
Perl_foreach   11.7/s          164%           --      -21%    -26%
Perl_grep      14.7/s          233%          26%        --     -6%
C_Style        15.7/s          256%          35%        7%      --

Looks like the results contradict this:

" If you are searching for an element in a list, it can be more
efficient to store the data in a hash structure, and then simply look
to see whether the key is defined, rather than to loop through the
entire array using grep() for instance. " (from perlperf)

Or...did I interpret the results incorrectly, or worse, create the
wrong test case? Pardon me, this is the 1st time I'm venturing into
benchmarking.

Thanks a lot,
Chaitanya
 

Keith Thompson

Uri Guttman said:
KC> Does the Perl compiler/interpreter do any loop unrolling, or any
KC> loop optimisations at all? Or do we have to code them by hand? I
KC> came across opinions that Perl's "bad" for tight loops....I'd like
KC> to see what the gurus say.

no dynamic lang (and i assume it is others who are badmouthing
perl) can do loop unrolling and such. this is because things can change
at runtime with tied variables, symbolic refs, and string eval. no compiler
can handle such things and optimize them out of a loop or unroll a loop.

Why not? Perl is compiled before it's executed, after all, and
the compiler *could* determine that the body of a loop doesn't do
anything disruptive and use that information to perform optimizations
(loop unrolling, common subexpression elimination, etc.).

It's not at all clear that it makes sense to do so, given that
you're going to pay the overhead each time the program is executed.
And I have no idea what optimizations the perl implementation
actually performs. But I suggest that your "no dynamic lang"
claim is overstated.

[...]
 

Uri Guttman

KC> Looks like the results are against this:

KC> " If you are searching for an element in a list, it can be more
KC> efficient to store the data in a hash structure, and then simply look
KC> to see whether the key is defined, rather than to loop through the
KC> entire array using grep() for instance. " (from perlperf)

anyone here could have told you that. and it is only true if you need to
do this search multiple times. and that has nothing to do with the speed
of perl loops. it is all about using the best data structure for a task.
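
to make that concrete (a sketch; the array and lookup keys are made up):
you pay to build the hash once, then every membership test is a single
probe, while grep rescans the whole array on each call:

```perl
my @a = (1 .. 100_000);

# one-time cost: build a lookup hash with undef values
my %seen;
@seen{@a} = ();

# each later test is a single hash probe
for my $want (50, 99_999) {
    print "$want: ", (exists $seen{$want} ? "yes" : "no"), "\n";
}

# grep walks all 100_000 elements every single time
for my $want (50, 99_999) {
    print "$want: ", ((grep { $_ == $want } @a) ? "yes" : "no"), "\n";
}
```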

KC> Or...did I interpret the results incorrectly, or worse, create the
KC> wrong test case? Pardon me, this is the 1st time I'm venturing into
KC> benchmarking.

you don't need to do benchmarking for such an obvious choice.

uri
 

Uri Guttman

TM> my @a = (1..100000);
TM> my $searchfor = 99999;
TM> my %ah;
TM> @ah{@a} = (1);

that will only set one element to 1. assign () to make them all undef
and use exists or assign a full list with (1) x @a.

TM> my $code_Perl_hashkeys = sub {
TM> return if exists $ah{$searchfor};
TM> };

you do use exists (as i suspected you would). so () would have been
fine.

what about trying first() from List::Util?
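
a sketch of how that would look (List::Util ships with perl; first()
returns the first matching element and stops scanning right there):

```perl
use List::Util qw(first);

my @a = (1 .. 100_000);
my $searchfor = 99_999;

# first() short-circuits like the manual foreach/last loop,
# but without the regex overhead of /^$searchfor$/
my $found = first { $_ == $searchfor } @a;
print defined $found ? "found $found\n" : "not found\n";
```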

uri
 

Uri Guttman

KT> Why not? Perl is compiled before it's executed, after all, and
KT> the compiler *could* determine that the body of a loop doesn't do
KT> anything disruptive and use that information to perform optimizations
KT> (loop unrolling, common subexpression elimination, etc.).

KT> It's not at all clear that it makes sense to do so, given that
KT> you're going to pay the overhead each time the program is executed.
KT> And I have no idea what optimizations the perl implementation
KT> actually performs. But I suggest that your "no dynamic lang"
KT> claim is overstated.

then why haven't you ever seen any dynamic lang do any of the classic
loop optimizations you see in classic compiled langs like fortran? you
can't. loop unrolling and moving loop-invariant subexpressions can't
be done if the values can change underneath as they can in a language
like perl. a sub call which can be chosen at runtime in the loop can
change the WHOLE loop by calling eval. and that can't be detected. any
variable in the loop could be tied, which is a runtime thing, and that
could call anything which could change the loop. no compiler can see or
predict those sorts of things. perl does constant folding, conversion
of constant-style subs (what the constant pragma creates) to real
constants, and not much else. those things can't have variables or
anything other than pure constants at compile time.
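
for example (a small sketch of what does get folded):

```perl
use constant SECS_PER_DAY => 24 * 60 * 60;   # folded to 86400 at compile time

# both operands below are compile-time constants, so the whole
# expression should be folded too; perl -MO=Deparse should show the
# literal rather than the multiplication
my $secs_per_week = 7 * SECS_PER_DAY;
print "$secs_per_week\n";    # prints 604800
```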

uri
 

Krishna Chaitanya

Hi all, thanks for the criticism of my poor choice of operators as
well as test cases. I appreciate your time and wisdom.

One last question - in the ocean of Perl literature and documentation,
is there any one single place where points similar to what you've
raised in your replies are mentioned (rather explicitly)? Like, for
example, the choice of operators and the consequences in terms of
speed, etc? My situation is: I don't have a lot of time to go through
a large body of knowledge as other concerns are pressing...and can
hardly afford a Perl expert's time. I know this might invite a lot of
flak from those who love to do things the right way...but, if you can
help me, I'd be grateful.
 

Uri Guttman

KC> One last question - in the ocean of Perl literature and
KC> documentation, is there any one single place where points similar
KC> to what you've raised in your replies are mentioned (rather
KC> explicitly)? Like, for example, the choice of operators and the
KC> consequences in terms of speed, etc? My situation is: I don't have
KC> a lot of time to go through a large body of knowledge as other
KC> concerns are pressing...and can hardly afford a Perl expert's
KC> time. I know this might invite a lot of flak from those who love
KC> to do things the right way...but, if you can help me, I'd be
KC> grateful.

there is a reason it is called expertise and that there are
experts. experts are the ones who write the docs that you don't have the
time to read. there are no shortcuts to becoming an expert. google for
'10000 hour expert'

uri
 

Peter J. Holzer

KC> " If you are searching for an element in a list, it can be more
KC> efficient to store the data in a hash structure, and then simply look
KC> to see whether the key is defined, rather than to loop through the
KC> entire array using grep() for instance. " (from perlperf)

anyone here could have told you that.

Why should he ask here if it's already in the docs? Aren't we constantly
telling people to read the docs? Now somebody does and you're chewing
him out for it.

hp
 

Keith Thompson

Uri Guttman said:
KT> Why not? Perl is compiled before it's executed, after all, and
KT> the compiler *could* determine that the body of a loop doesn't do
KT> anything disruptive and use that information to perform optimizations
KT> (loop unrolling, common subexpression elimination, etc.).

KT> It's not at all clear that it makes sense to do so, given that
KT> you're going to pay the overhead each time the program is executed.
KT> And I have no idea what optimizations the perl implementation
KT> actually performs. But I suggest that your "no dynamic lang"
KT> claim is overstated.

then why haven't you ever seen any dynamic lang do any of the classic
loop optimizations you see in classic compiled langs like fortran?

For one thing, because I haven't looked. Aside from that, I don't
know for sure, but it's likely because such optimizations are too
difficult, are too seldom applicable, and have too little payoff,
for them to be worth the effort.

you can't. loop unrolling and moving loop-invariant subexpressions can't
be done if the values can change underneath as they can in a language
like perl. a sub call which can be chosen at runtime in the loop can
change the WHOLE loop by calling eval. and that can't be detected. any
variable in the loop could be tied, which is a runtime thing, and that
could call anything which could change the loop. no compiler can see or
predict those sorts of things. perl does constant folding, conversion
of constant-style subs (what the constant pragma creates) to real
constants, and not much else. those things can't have variables or
anything other than pure constants at compile time.

Consider this complete program:

#!/usr/bin/perl

use strict;
use warnings;

my $i = 0;
for (1 .. 10) {
    $i ++;
}
print "$i\n";

The Perl compiler/interpreter can see the entire program before it
produces any output. At least in principle, it *could* analyze it
and determine that $i isn't tied, that there are no sub calls in the
loop, that there are no calls to eval(), and so forth. Given that
information, it *could* compile the whole thing to the equivalent of

print "10\n";

In this particular case, the time spent doing the analysis and code
transformation would certainly exceed the time saved by simplifying the
code. And in general, I'm not arguing that perl *should* do this kind
of thing, just that it's not impossible.

Even in a static compiled language like C, certain constructs can
inhibit certain optimizations. An optimizing C compiler could likely
reduce something like the above to puts("10"), but would not be able to
do so if the loop body took the address of i or called an external
function. Perl, as you point out, has more constructs that can
interfere with this kind of analysis. That doesn't make the analysis
impossible in every case.
 

Uri Guttman

KT> For one thing, because I haven't looked. Aside from that, I don't
KT> know for sure, but it's likely because such optimizations are too
KT> difficult, are too seldom applicable, and have too little payoff,
KT> for them to be worth the effort.

loop optimizations are an ancient technique that probably got invented
in the 60's if not earlier. it is well known by all compiler people. it
isn't even that complicated to do. perl COULD do it if it made sense as
it just involves scanning the opcode tree for invariant sections in
loops and moving them before the loop. loop unrolling is also a common
thing. perl can't do them not because of the lack of skill but because
the language won't allow it. you can't compile and know what will happen
at runtime which can screw up the optimizations. so it isn't even
discussed let alone attempted.

KT> Consider this complete program:

KT> #!/usr/bin/perl

KT> use strict;
KT> use warnings;

KT> my $i = 0;
KT> for (1 .. 10) {
KT>     $i ++;
KT> }
KT> print "$i\n";

KT> The Perl compiler/interpreter can see the entire program before it
KT> produces any output. At least in principle, it *could* analyze it
KT> and determine that $i isn't tied, that there are no sub calls in the
KT> loop, that there are no calls to eval(), and so forth. Given that
KT> information, it *could* compile the whole thing to the equivalent of

and what if warnings was replaced by a trojan module that made declaring
$i a tied variable?

KT> Even in a static compiled language like C, certain constructs can
KT> inhibit certain optimizations. An optimizing C compiler could likely
KT> reduce something like the above to puts("10"), but would not be able to
KT> do so if the loop body took the address of i or called an external
KT> function. Perl, as you point out, has more constructs that can
KT> interfere with this kind of analysis. That doesn't make the analysis
KT> impossible in every case.

it is impossible. that is the win and loss of a dynamic language. if you
could analyze it completely in advance, you could also solve the halting
problem. and even if it were possible, it isn't worth the effort as that
deep of an analysis is equivalent to running the program (see halting
problem).

uri
 

Marc Girod

and what if warnings was replaced by a trojan module that made declaring
$i a tied variable?

Do you mean that perl doesn't compile warnings from the sources at
this stage?
Because if it does, it knows at compile time.

Marc
 

Peter J. Holzer

KT> Why not? Perl is compiled before it's executed, after all, and
KT> the compiler *could* determine that the body of a loop doesn't do
KT> anything disruptive and use that information to perform optimizations
KT> (loop unrolling, common subexpression elimination, etc.).

KT> It's not at all clear that it makes sense to do so, given that
KT> you're going to pay the overhead each time the program is executed.
KT> And I have no idea what optimizations the perl implementation
KT> actually performs. But I suggest that your "no dynamic lang"
KT> claim is overstated.

then why haven't you ever seen any dynamic lang do any of the classic
loop optimizations you see in classic compiled langs like fortran?

Mostly because it's not worth it. Loop unrolling is a technique that
works well for real CPUs because jumps are expensive (or rather, they
were expensive - I'm not sure if loop unrolling is still worthwhile on
modern CPUs), but that doesn't apply to typical bytecode interpreters.

Here's a simple example. A loop which fills an array with a constant
value:


#!/usr/bin/perl

use warnings;
use strict;

use Benchmark qw(:hireswallclock timethese cmpthese);

my @a;

my $results = timethese(-3, {
    normal => sub {
        for (0 .. 999_999) {
            $a[$_] = 0;
        }
    },
    unroll4 => sub {
        for (0 .. 249_999) {
            $a[$_ * 4 + 0] = 0;
            $a[$_ * 4 + 1] = 0;
            $a[$_ * 4 + 2] = 0;
            $a[$_ * 4 + 3] = 0;
        }
    },
    unroll4c => sub {
        my $c = 0;
        for (0 .. 249_999) {
            $a[$c++] = 0;
            $a[$c++] = 0;
            $a[$c++] = 0;
            $a[$c++] = 0;
        }
    },
});

cmpthese($results);
__END__
__END__

           Rate  unroll4 unroll4c   normal
unroll4  4.04/s       --     -33%     -37%
unroll4c 6.00/s      49%       --      -6%
normal   6.37/s      58%       6%       --

The result is not surprising: The unrolled versions are slower.
The extra computations more than make up for the lower loop overhead.
In machine code, special addressing modes could be used in "unroll4"
instead of the explicit multiplication and addition, or the computations
could be performed in parallel to the memory accesses. Similarly in
"unroll4c".

you can't. loop unrolling and moving loop-invariant subexpressions
can't be done if the values can change underneath as they can in a
language like perl.

Most of the time the values don't change underneath, and
in many cases this could be detected by the compiler. But such analysis
is expensive. When the program is compiled on every run, this is hard to
amortize, even if there is a noticeable speedup, which doesn't seem to be
the case for loop unrolling.

There are other techniques which are cheaper and gain more, for example JIT.

hp
 

Keith Thompson

Uri Guttman said:
[...]
KT> Consider this complete program:

KT> #!/usr/bin/perl

KT> use strict;
KT> use warnings;

KT> my $i = 0;
KT> for (1 .. 10) {
KT>     $i ++;
KT> }
KT> print "$i\n";

KT> The Perl compiler/interpreter can see the entire program before it
KT> produces any output. At least in principle, it *could* analyze it
KT> and determine that $i isn't tied, that there are no sub calls in the
KT> loop, that there are no calls to eval(), and so forth. Given that
KT> information, it *could* compile the whole thing to the equivalent of

and what if warnings was replaced by a trojan module that made declaring
$i a tied variable?

Then the hypothetical loop-optimizing Perl implementation would
analyze the contents of warnings.pm, fail to prove that it doesn't
do anything disruptive, and refrain from performing the optimization.

But if warnings *isn't* replaced by a Trojan module, the
implementation can, in principle, detect that fact.

Or, if you prefer, consider a program like the above but without
the "use strict; use warnings;" (and invoked without using perl's
-m or -M option or equivalent).

KT> Even in a static compiled language like C, certain constructs can
KT> inhibit certain optimizations. An optimizing C compiler could likely
KT> reduce something like the above to puts("10"), but would not be able to
KT> do so if the loop body took the address of i or called an external
KT> function. Perl, as you point out, has more constructs that can
KT> interfere with this kind of analysis. That doesn't make the analysis
KT> impossible in every case.

it is impossible. that is the win and loss of a dynamic language. if you
could analyze it completely in advance, you could also solve the halting
problem.

That doesn't follow. The halting problem is about the impossibility
of doing certain kinds of analysis on *all* programs. You are
claiming that certain kinds of analysis are impossible on *any*
programs, a much stronger claim.

The halting theorem doesn't imply that you can't prove the "hello,
world" program terminates.

and even if it were possible, it isn't worth the effort as that
deep of an analysis is equivalent to running the program (see halting
problem).

No, it isn't, at least not in all cases. *I* can look at the above
program and conclude that it will print "10" followed by a new-line.
I don't have to run the program to reach that conclusion, and it
wouldn't take me 100 times as long if the 10 were changed to 1000.
There's no reason a compiler can't perform the same analysis *for
at least some programs*.

(Note that I'm not arguing that it *is* worth the effort, merely that
your argument that it isn't is incorrect.)

And who knows, *maybe* there are some loop optimizations that would
be worth doing in Perl, at least optionally.
 

Uri Guttman

KT> [...]
KT> Consider this complete program:
KT>
KT> use strict;
KT> use warnings;
KT>
KT> my $i = 0;
KT> for (1 .. 10) {
KT>     $i ++;
KT> }
KT> print "$i\n";
KT>
KT> The Perl compiler/interpreter can see the entire program before it
KT> produces any output. At least in principle, it *could* analyze it
KT> and determine that $i isn't tied, that there are no sub calls in the
KT> loop, that there are no calls to eval(), and so forth. Given that
KT> information, it *could* compile the whole thing to the equivalent of
KT>
KT> Then the hypothetical loop-optimizing Perl implementation would
KT> analyze the contents of warnings.pm, fail to prove that it doesn't
KT> do anything disruptive, and refrain from performing the optimization.

and if that module loads another module? ad infinitum. welcome to the
halting problem. do you even know what that is?

KT> That doesn't follow. The halting problem is about the impossibility
KT> of doing certain kinds of analysis on *all* programs. You are
KT> claiming that certain kinds of analysis are impossible on *any*
KT> programs, a much stronger claim.

no, it is the same problem. you have to keep analyzing deeper and
deeper. the issue isn't about small things being optimized, it is how
much work you need to do to gain a worthy optimization. you have already
exceeded that limit.

KT> (Note that I'm not arguing that it *is* worth the effort, merely that
KT> your argument that it isn't is incorrect.)

no, it is the same. we can easily write a solution for the halting
problem for most programs. same as for analyzing code for
optimization. that doesn't mean you solved the problem nor have you
found a proper limit on when to stop. someone (like you! :) will always
bitch that the analysis doesn't go deeply enough for your
program. better to not even offer to do it.

uri
 

Marc Girod

and if that module loads another module? ad infinitum. welcome to the
halting problem. do you even know what that is?

Looking on from the sidelines, I believe Keith does, and I believe your
comparison to the halting problem is incorrect.

The halting problem is a case of self-reference: to prove that any
program will halt, you must prove first that your own program will
halt.

There is no such problem with the optimization question (or there is
one in practice, but it has been repeatedly excluded).
Unless *this* is what you meant, but it wasn't clear.

Let *me* try to be clear: the optimization ought to be itself
optimized, so that it would benefit any other program.

Marc
 

Peter J. Holzer

KT> [...]
KT> Consider this complete program:
KT>
KT> use strict;
KT> use warnings;
KT>
KT> my $i = 0;
KT> for (1 .. 10) {
KT>     $i ++;
KT> }
KT> print "$i\n";
KT>
KT> The Perl compiler/interpreter can see the entire program before it
KT> produces any output. At least in principle, it *could* analyze it
KT> and determine that $i isn't tied, that there are no sub calls in the
KT> loop, that there are no calls to eval(), and so forth. Given that
KT> information, it *could* compile the whole thing to the equivalent of
KT>
KT> Then the hypothetical loop-optimizing Perl implementation would
KT> analyze the contents of warnings.pm, fail to prove that it doesn't
KT> do anything disruptive, and refrain from performing the optimization.

and if that module loads another module? ad infinitum.

If the module loads other modules ad infinitum, the compiler will spend
an infinite time compiling the program, so we don't need to worry about
what happens after it finishes ;-).

If the number of loaded modules is finite, nothing changes. The
compiler knows all the code it has compiled so far - it makes no
difference whether all the code is in a single file or in multiple
modules. (This is, BTW, a major difference between Perl and languages
which are usually compiled to object files: in the latter the compiler
only sees the current compilation unit - only at link time is the whole
code merged together.)

welcome to the halting problem. do you even know what that is?

I think he does. Do you?

KT> That doesn't follow. The halting problem is about the impossibility
KT> of doing certain kinds of analysis on *all* programs. You are
KT> claiming that certain kinds of analysis are impossible on *any*
KT> programs, a much stronger claim.

no, it is the same problem. you have to keep analyzing deeper and
deeper. the issue isn't about small things being optimized, it is how
much work you need to do to gain a worthy optimization. you have already
exceeded that limit.

That's an engineering tradeoff. If the gain (shorter run time) is
greater than the cost (longer compile time, implementation effort, code
complexity (harder to maintain, potential bugs) - note that these costs
are not directly comparable), it's worthwhile.

It may well be that loop unrolling and strength reduction are not worthy
optimizations for perl (in fact, for loop unrolling I'm quite sure it
isn't worth the effort - see my last posting in this thread), but you
can't argue with the halting problem here because these optimizations
are not equivalent to the halting problem. The halting problem is to
write a program which for *any* program and input can determine (in
finite time) whether that program will halt for that input. An optimizer
doesn't need to recognize all optimizable programs: It can err on the
safe side and leave some optimizable programs unoptimized.


KT> (Note that I'm not arguing that it *is* worth the effort, merely that
KT> your argument that it isn't is incorrect.)

no, it is the same. we can easily write a solution for the halting
problem for most programs. same as for analyzing code for
optimization. that doesn't mean you solved the problem nor have you
found a proper limit on when to stop. someone (like you! :) will
always bitch that the analysis doesn't go deeply enough for your
program. better to not even offer to do it.

"Somebody will always bitch that Perl doesn't offer a good enough
solution to their problems. Better to not even offer it."

If Larry had thought that way we wouldn't have Perl. In fact we would
have very little software at all since almost all software can be
improved and the authors know it.

hp
 

Keith Thompson

Uri Guttman said:
KT> [...]
KT> Consider this complete program:
KT>
KT> use strict;
KT> use warnings;
KT>
KT> my $i = 0;
KT> for (1 .. 10) {
KT>     $i ++;
KT> }
KT> print "$i\n";
KT>
KT> The Perl compiler/interpreter can see the entire program before it
KT> produces any output. At least in principle, it *could* analyze it
KT> and determine that $i isn't tied, that there are no sub calls in the
KT> loop, that there are no calls to eval(), and so forth. Given that
KT> information, it *could* compile the whole thing to the equivalent of
KT>
KT> Then the hypothetical loop-optimizing Perl implementation would
KT> analyze the contents of warnings.pm, fail to prove that it doesn't
KT> do anything disruptive, and refrain from performing the optimization.

and if that module loads another module? ad infinitum. welcome to the
halting problem. do you even know what that is?

You're missing the point. What if that module *doesn't* load
another module?

For *some* programs, the analysis is possible. For *some* programs,
it isn't. For *some* programs, determining whether the analysis
is possible or not requires more resources than is reasonable.

KT> That doesn't follow. The halting problem is about the impossibility
KT> of doing certain kinds of analysis on *all* programs. You are
KT> claiming that certain kinds of analysis are impossible on *any*
KT> programs, a much stronger claim.

no, it is the same problem. you have to keep analyzing deeper and
deeper. the issue isn't about small things being optimized, it is how
much work you need to do to gain a worthy optimization. you have already
exceeded that limit.

Yes, it would require a lot of work, probably more than is worthwhile.
I do not dispute that. All I'm saying is that it's theoretically
possible in some cases.

KT> (Note that I'm not arguing that it *is* worth the effort, merely that
KT> your argument that it isn't is incorrect.)

no, it is the same. we can easily write a solution for the halting
problem for most programs. same as for analyzing code for
optimization. that doesn't mean you solved the problem nor have you
found a proper limit on when to stop. someone (like you! :) will always
bitch that the analysis doesn't go deeply enough for your
program. better to not even offer to do it.

"better" is irrevelant. I've repeatedly acknowledged that it's very
likely not worth the effort.

As for "when to stop", set any arbitrary limit you like. Some programs
will be simple enough not to exceed that limit.
 
