Chris Smith
You wouldn't restrict your development to some extreme lowest common
denominator, i.e. telnet, would you?
Depends. There are really two things going on here. There are a lot of
very nice features of development environments like Eclipse that do NOT
force me to use them, because they help with the PROCESS of writing
code. On the other hand, mis-uses of AOP try to change the GOAL of
writing code, causing people to accept code that would have been
rejected otherwise.
Put as an analogy: an index is quite helpful in using a book. However,
the index should not be used as an excuse for failing to organize the
book in a logical flow with related information grouped logically
together. Similarly, AOP programming aids should not be used as an
excuse for failing to organize code in such a way that control flow is
indicated in the source.
I guess I'm not following. You acknowledge that
Eclipse IDE with AspectJ enabled is sufficient if you do all your
development in Eclipse, which you do. Yet you still don't want to reap
the benefits of AOP.
I think part of the misunderstanding is that I write "only if" and you
seem to have read that as "if and only if". I don't believe that the
forward direction is true.
Uses of AOP, except in the limited contexts I've mentioned, are wrong on
two levels: a symptom, and a cause. The symptom is that they lead to
certain enumerable maintenance hassles, which can admittedly be
mitigated by tools. The cause, though, is that they abuse the language.
That abuse of the language doesn't just cause the immediately apparent
maintenance hassles. I tried to explain that in the earlier article,
but perhaps I failed to be clear.
Let's say I throw an exception in some code, and Eclipse pops up a
little icon in the margin informing me that there's some AOP advice that
applies to this code. I click on the icon, and discover that the
exception I am throwing is going to be emailed to the development team
with its full stack trace in an automated bug report. I don't want that
to happen, so I have to go modify the pointcut and avoid it. This is
the "ideal" situation, in which I had all the tools to tell me exactly
what was going on, and they did their job. Yet AOP was still extremely
far from harmless. Rather, it did the following:
1. Something that should have taken 15 seconds has now taken 100 times
as long.
2. Some single unit of code (the pointcut) is now accumulating a list of
places where non-logged exceptions are thrown. This is not useful
information, and its maintenance complicates management of the project.
3. The logging code therefore becomes very unstable... as unstable as
the most unstable part of the project to which it applies.
4. I become reluctant to throw exceptions in places where they might be
a good idea, because I don't want to deal with accidentally tripping
someone's checkpoint again.
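To make the scenario concrete, here is a plain-Java sketch of what the woven result amounts to (the classes and names such as BugReporter and Inventory are invented for illustration, not taken from any real project): the throw site the programmer wrote is pure control flow, but the code that actually runs also mails off a report.

```java
import java.util.ArrayList;
import java.util.List;

class BugReporter {
    static final List<String> sent = new ArrayList<>();
    static void email(Throwable t) { sent.add(t.getMessage()); }
}

class Inventory {
    // What the programmer wrote: the throw is nothing but control flow.
    static int lookup(String sku) {
        if (sku.isEmpty()) {
            throw new IllegalArgumentException("empty sku");
        }
        return 42;
    }

    // What effectively runs once the aspect is woven in: the same control
    // flow, plus a side effect the throw site never mentions.
    static int lookupAsWoven(String sku) {
        try {
            return lookup(sku);
        } catch (RuntimeException e) {
            BugReporter.email(e); // the "automated bug report" to the team
            throw e;
        }
    }
}
```

Nothing in `lookup` hints that throwing now means two things; the second meaning lives entirely in the pointcut.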
We haven't just lost encapsulation; we've invented poor abstractions,
and then enforced them on everyone. Or, as the article I originally
posted put it, we have switched to a new language that is sort of like
Java, but not really. In this new language, which is used only in this
project, throwing an exception means two different things: control flow,
and logging. As an experienced Java developer, I feel confident writing
Java to work in someone's project... but I certainly don't feel
comfortable writing in this pseudo-Java. If I don't read through and
understand all the AOP stuff before writing ANY code for the project, I
am reduced to the level of a programming novice who determines if code
is correct by compiling it... except I look for little AOP icons in the
margins of Eclipse instead.
(I keep using the exceptions example because I see it done a lot, and
it's horrid. Matching patterns of method names -- as in, add this code
every time I call a method starting with "test" -- is even worse, but I
think most people recognize the silliness of this and avoid it... most
of the time. Occasionally, though, I'm wrong about that last bit.)
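For illustration, here is a minimal plain-Java stand-in for that kind of name-pattern pointcut (the class and method names are invented): the extra behavior attaches purely because a method's name happens to start with "test", so renaming a method silently changes what it does.

```java
import java.lang.reflect.Method;

class PatternAdvised {
    static int woven = 0;

    public void testParser() {}   // name matches "test*": advice fires
    public void checkParser() {}  // same job, different name: no advice

    // Crude stand-in for a name-pattern pointcut: whether the extra code
    // runs depends on nothing but what the method is called.
    static void invoke(PatternAdvised target, String name) throws Exception {
        Method m = PatternAdvised.class.getMethod(name);
        if (m.getName().startsWith("test")) {
            woven++;  // the code the pattern weaves in
        }
        m.invoke(target);
    }
}
```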
Let's say you did code
a cross-cutting concern into, say, 35 different classes, pretty much all
the same code woven into those classes where appropriate. [...] And
another developer comes in and has to code a new requirement into the
cross-cutting concern.
[...]
Do you feel that the above error prone process is the lesser of the
evils here?
No, I feel that it's a straw man that is ridiculous to even consider.
It would be hard to come up with a programming technique that's NOT
better than what your hypothetical programmer is doing.
Let's get this straight. So he's got 35 copies of exactly the same
code, and yet he hasn't even bothered to factor that code out into a
utility method somewhere? What's wrong with him?!? I've certainly put
off work on code cleanliness to meet scheduling deadlines... but 35
copies?!? And he realizes that they are all copies, and he worries
about making sure they all stay in sync, and yet it's never occurred to
this guy that he ought to put this code in a function somewhere? The
solution to this problem is for the hypothetical programmer to leave his
job and enter the lucrative field of real estate management. They seem
to like his sort there.
I don't mean to say that I never think AOP is useful. Occasionally, the
existing forms of abstraction available in modern programming languages
become awkward for certain problems, and AOP provides a convenient
solution. That said, the advantages are routinely and blatantly
overstated, and a large number of people need to come
back down to Earth and get back to doing their jobs instead of hyping
AOP.
Even meeting all of your requirements it's still bad?
I'm unsure what you mean. I think I've made it consistently clear that I
have no problem with AOP attached to things that have no other defined
purpose in the language; in other words, annotations -- which are sorta
defined by having no other defined purpose. It's the overloading of
significant language features that's problematic. Identifier names are
meant to be unique and communicate purpose to human beings, not to
define transaction semantics. Exceptions are used to quickly transfer
control from one bit of code to another, not to invoke logging. I just
can't imagine what could make someone think it's a good idea to be able
to make the behavior of a method change because its name starts with
some magic word... unless it's (a) an overwhelming desire to be on the
bandwagon for the next New Thing, (b) a lack of understanding of code
maintenance issues, or (c) a kind of twisted pleasure in setting land
mines for fellow programmers.
I asked in so many words in a prior post: Why are
these trade-offs not worth the benefits?
The main reason is that they are not necessary to achieve the benefits.
Why do you need to wait for others to know what you know? If the tools
already (seem to) support what you would like them to do, what's stopping
you from using them? Are you talking about your peers in your
organization? If so, are they using Eclipse as well, can you bring
them up to speed and catch unapproved techniques in peer reviews?
I certainly don't need to wait. However, a form of AOP that relies on
annotations instead of the existing hodgepodge of pointcut types and
patterns would be considerably more elegant in syntax and harder to get
screwed over with than the existing alternatives.
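As a sketch of that annotation-based style (the annotation, classes, and dispatcher here are all invented for illustration, not a real AOP framework): the attachment point is a marker with no other meaning in the language, so no existing feature is overloaded and the opt-in is visible at the site it affects.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Reported {}  // a marker whose only purpose is this opt-in

class Service {
    static int reports = 0;

    @Reported
    public void risky() { throw new IllegalStateException("boom"); }

    public void quiet() { throw new IllegalStateException("boom"); }

    // The "advice" keys off the explicit annotation, not off a method
    // name pattern or an overloaded use of exceptions.
    static void call(Service s, String name) throws Exception {
        Method m = Service.class.getMethod(name);
        try {
            m.invoke(s);
        } catch (Exception e) {
            if (m.isAnnotationPresent(Reported.class)) {
                reports++;  // only annotated methods trigger a report
            }
        }
    }
}
```

A reader of `risky` sees the `@Reported` marker right at the declaration, instead of having to hunt through pointcut definitions to learn what throwing means here.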
--
www.designacourse.com
The Easiest Way To Train Anyone... Anywhere.
Chris Smith - Lead Software Developer/Technical Trainer
MindIQ Corporation