Interface naming conventions

Daniel Dyer

Increasingly I am seeing people prefixing interface names with an 'I', in
open source apps and in example code on the web. So instead of this:

public interface MyInterface
{
}

we have this:

public interface IMyInterface
{
}

Personally, I really dislike this naming convention. Now, some of the
people who are doing this aren't stupid, so I was wondering if anybody has
a good explanation for why this is a good idea? Is it symptomatic of an
addiction to Hungarian notation, or is there some more sensible
rationale?

Dan.
 
Stefan Ram

Daniel Dyer said:
public interface IMyInterface

I often declare interfaces with one method, and then it
is the most natural convention for me to name the interface
like the method, e.g.,

interface Accept<T>{ void accept( T t ); }

Everything else makes you guess more often
(»Closeable« or »Closable«?)
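
For what it's worth, here is a minimal sketch of how such a
single-method interface might read at the point of use (the names
AcceptDemo and forEach are invented, not from any real API):

// Naming the interface after its one method keeps the call site
// reading the same way as the declaration.
public class AcceptDemo {
    static <T> void forEach( T[] items, Accept<T> accept ) {
        for( T item : items )
            accept.accept( item );
    }

    public static void main( String[] args ) {
        forEach( new String[]{ "a", "b" }, new Accept<String>() {
            public void accept( String s ) {
                System.out.println( s );
            }
        });
    }
}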

Sun does not seem to prefer »-able« anymore:

»Interfaces

Interface names should be capitalized like class names.

interface RasterDelegate;

interface Storing;«

http://java.sun.com/docs/codeconv/html/CodeConventions.doc8.html
 
parkerc

Daniel said:
Increasingly I am seeing people prefixing interface names with an 'I', in
open source apps and in example code on the web. So instead of this:

public interface MyInterface
{
}

we have this:

public interface IMyInterface
{
}

Personally, I really dislike this naming convention. Now, some of the
people who are doing this aren't stupid, so I was wondering if anybody has
a good explanation for why this is a good idea? Is it symptomatic of an
addiction to Hungarian notation, or is there some more sensible
rationale?

Dan.

I think that it comes from C#. Personally, I really don't see a
problem with a little Hungarian Notation in regard to interfaces.
 
Daniel Dyer

parkerc said:
I think that it comes from C#. Personally, I really don't see a
problem with a little Hungarian Notation in regard to interfaces.

OK, but what advantages does it have over not prefixing the interface name
with an 'I'? I'm assuming that there must be some perceived advantage
otherwise it would be completely pointless.

Dan.
 
AndrewMcDonagh

Daniel said:
OK, but what advantages does it have over not prefixing the interface
name with an 'I'? I'm assuming that there must be some perceived
advantage otherwise it would be completely pointless.

Dan.

--Daniel Dyer
http://www.dandyer.co.uk


It's a hangover from the days when Microsoft introduced COM in C++, and
then languages like Delphi took it up also.

As there is no such thing as an interface in C++, it was deemed 'a good
idea' at the time as it differentiated or at least highlighted that
there was a complete set of Pure Abstract Classes in the API, rather
than the traditional Abstract Base classes.

Aside from C++, where I can just about see how it might help some people
see that they should not put implementation inside the (interface)
classes, for languages like Delphi that support interfaces - it's a
complete waste of time.

Andrew
 
Martin Gregorie

parkerc said:
I think that it comes from C#. Personally, I really don't see a
problem with a little Hungarian Notation in regard to interfaces.

IMO it's one of the silliest ideas ever to infect programming.

The problem is that it completely destroys information hiding by forcing
implementation details to be included as part of all variable and
function names. Change the internal representation of a variable or the
return value of a function or method and you have to change its name and
then churn through all your source, changing all references to the name.
Stupid. It can just about be made to work with ANSI C, but it is
completely at odds with any language that supports overloading.
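
To make that concrete with a made-up Java fragment (the class and
field names are hypothetical): suppose a field's name encodes its type.

public class ConnectionSettings {
    // Originally declared as:   private int intTimeout;
    // When the representation widens to long, either the name now lies
    // about the type, or every reference to intTimeout throughout the
    // source has to be renamed.
    private long intTimeout;

    public long getTimeout() {
        return intTimeout;
    }
}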

The best explanation for its existence is that M$ came up with the idea
to paper over deficiencies in its early compilers' type checking. If you
use "Hungarian notation' in its full rigor *you* are the type checker,
not the compiler. This seems like a perverse role reversal to me.

For sensible ideas about naming, source layout, and programming in
general you can't do much better than get a copy of "The Practice of
Programming" by Brian Kernighan and Rob Pike. Its ideas are applicable
to just about any programming language and it contains reasonably
substantial examples in C, C++ and Java to prove it.
 
Dale King

parkerc said:
I think that it comes from C#. Personally, I really don't see a
problem with a little Hungarian Notation in regard to interfaces.

A point of clarification, this is *NOT* Hungarian notation. This is an
example of type-based naming, which Micro$oft incorrectly labelled as
Hungarian.

True Hungarian naming defines prefixes for names based on what the item
represents. Type-based naming defines prefixes based on the data type in the
programming language.

So for example if we had a variable that contained the height of a screen
whose data type in C were unsigned long, then possible names in the two
conventions would be:

Hungarian - hgtScreen
Type naming - ulScreenHeight

The problem is that there is little value in type-based naming. Imagine if
you also had an unsigned long screen color. If you tried to assign this to
the above variable in type-based naming, it might look like:

ulScreenHeight = ulScreenColor;

The statement makes no logical sense, but the type-based naming prefix
didn't tell you anything. Compare this with Hungarian:

hgtScreen = clrScreen;

One of the main problems with type-based naming is that types can change.
For example in Windoze there is wParam which ceased being a word ages ago,
but they couldn't change the name without breaking users.

Hungarian is not nearly as useful in an OO language, but it is not as bad
as you were led to believe from M$'s misuse of the term.

The I convention is relatively harmless, but offers little, if any, benefit.
In addition to the origins in COM as others have mentioned, Eclipse also
uses this convention. According to their convention guidelines, "This
convention aids code readability by making interface names more readily
recognizable." I don't find it necessary myself.

One reason it is used is to simplify naming of interfaces and concrete
implementations of that interface: you can have an IFoo interface and the
concrete implementation can be called just Foo. I'll leave it up to you to
decide if this is a good thing or not.
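
For illustration, a small sketch of the two pairings (all of the names
below are placeholders, not taken from any real API):

// Style 1: prefix the interface so the implementation can take the plain name.
interface IGreeter {
    String greet(String name);
}
class Greeter implements IGreeter {
    public String greet(String name) { return "Hello, " + name; }
}

// Style 2: keep the plain name for the interface and suffix the
// implementation instead (FooImpl, DefaultFoo, and so on).
interface Farewell {
    String farewell(String name);
}
class DefaultFarewell implements Farewell {
    public String farewell(String name) { return "Goodbye, " + name; }
}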
 
Guest

AndrewMcDonagh said:
It's a hangover from the days when Microsoft introduced COM in C++, and
then languages like Delphi took it up also.

As there is no such thing as an interface in C++, it was deemed 'a good
idea' at the time as it differentiated or at least highlighted that
there was a complete set of Pure Abstract Classes in the API, rather
than the traditional Abstract Base classes.

Aside from C++, where I can just about see how it might help some people
see that they should not put implementation inside the (interface)
classes, for languages like Delphi that support interfaces - it's a
complete waste of time.

.NET uses it too.

I can actually see one use for it in C#: because the syntax
for extends and implements is the same in C#, it can make
a class declaration easier to read.
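
A rough Java-side contrast (the class and interface names below are
invented): Java's keywords already say which name is the superclass and
which are interfaces, whereas C# folds them into a single ':' list.

abstract class AbstractJob { }
interface Startable { void start(); }
interface Stoppable { void stop(); }

// The keywords make the roles explicit; no prefix needed.
class ScheduledJob extends AbstractJob implements Startable, Stoppable {
    public void start() { }
    public void stop() { }
}

// The rough C# equivalent would be something like
//     class ScheduledJob : AbstractJob, IStartable, IStoppable { ... }
// The base class, if there is one, has to come first in the list, but
// nothing else in the declaration distinguishes it from the interfaces;
// that is where the 'I' prefix earns its keep.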

Arne
 
Arne Vajhøj

Dale said:
A point of clarification, this is *NOT* Hungarian notation. This is an
example of type-based naming, which Micro$oft incorrectly labelled as
Hungarian.

True Hungarian naming defines prefixes for names based on what the item
represents. Type-based naming defines prefixes based on the data type in the
programming language.

So for example if we had a variable that contained the height of a screen
whose data type in C were unsigned long, then possible names in the two
conventions would be:

Hungarian - hgtScreen
Type naming - ulScreenHeight

Wikipedia actually mentions both as being Hungarian
(Systems and Apps Hungarian, respectively).

http://en.wikipedia.org/wiki/Hungarian_notation

I would say that the MS way of using the word is so
common that it is the meaning of the word today.

Arne
 
parkerc

Martin said:
IMO it's one of the silliest ideas ever to infect programming.

The problem is that it completely destroys information hiding by forcing
implementation details to be included as part of all variable and
function names. Change the internal representation of a variable or the
return value of a function or method and you have to change its name and
then churn through all your source, changing all references to the name.
Stupid. It can just about be made to work with ANSI C, but it is
completely at odds with any language that supports overloading.

The best explanation for its existence is that M$ came up with the idea
to paper over deficiencies in its early compilers' type checking. If you
use "Hungarian notation' in its full rigor *you* are the type checker,
not the compiler. This seems like a perverse role reversal to me.

For sensible ideas about naming, source layout, and programming in
general you can't do much better than get a copy of "The Practice of
Programming" by Brian Kernighan and Rob Pike. Its ideas are applicable
to just about any programming language and it contains reasonably
substantial examples in C, C++ and Java to prove it.

Great book.

Hungarian notation makes for ugly code - anyone who did any Win32 C++
programming should remember this (and a good deal of people still doing
WinForms C# coding). I still think that adding an "I" to an interface
is not all that bad if it helps the person coding the app to produce
code that he or she can more easily read.

http://web.umr.edu/~cpp/common/hungarian.html - this is evil.

typedefs can also get pretty nasty in C. Especially typedefs of
structs that have typedefs of structs that have typedefs...
 
Dale King

Arne Vajhøj said:
Wikipedia actually mentions both as being Hungarian
(Systems and Apps Hungarian, respectively).

http://en.wikipedia.org/wiki/Hungarian_notation

I don't look to Wikipedia as an authority on the subject. The important
thing I was trying to point out (which the Wikipedia article concurs with)
is that there is a difference between the Hungarian naming originally
described by Simonyi and the later misuse of the term by Micro$oft.

Arne Vajhøj said:
I would say that the MS way of using the word is so
common that it is the meaning of the word today.

I don't let M$ redefine terms for me. It is important to make the
distinction. On the same note there is also what is called scope-based
naming which is using prefixes to indicate local vs. member vs. global
variables.
 
Chris Brat

Hi,

I'm not a fan of the "I" prefix and Hungarian notation simply because
it disrupts intellisense in IDEs (Eclipse specifically, which lists
options in alphabetical order).

This is going to add a whole extra keystroke!! ;-)

Chris
 
Chris Uppal

Daniel said:
Now, some of the
people who are doing this aren't stupid, so I was wondering if anybody has
a good explanation for why this is a good idea?

I don't believe there is one; I don't believe there can be one. And just
because someone isn't stupid doesn't mean they can't have stupid habits.

The only, and partial, exception I'd make is for example code where the names
don't mean anything in themselves -- "Foo" and the like. There the names need
as much help as they can get if they are not to obscure the example. But even
in that case, I'd use something longer like "FooInterface".

There are btw quite a lot of programming practices which are accepted as
normal, but which don't make any sense when you think about them.

-- chris
 
Chris Smith

Chris Brat said:
Hi,

I'm not a fan of the "I" prefix and Hungarian notation simply because
it disrupts intellisense in IDEs (Eclipse specifically, which lists
options in alphabetical order).

This is going to add a whole extra keystroke!! ;-)

I'd say the more important effect is that it prevents code from reading
nicely. A piece of code is written once; read long after you have
retired.
 
Daniel Dyer

Chris Uppal said:
There are btw quite a lot of programming practices which are accepted as
normal, but which don't make any sense when you think about them.

Agreed, there are too many examples of people doing things, like this,
because somebody told them it was a good idea. The justification usually
sounds something like "because it's considered good practice...". By
who? And more importantly, why? If a developer can't explain why
something is a good idea, perhaps they shouldn't be doing it in the first
place? This is kind of the point I was trying to get to in the "Design
Question" thread the other day.

Dan.
 
olle.sundblad

I am quite sure that the Java Code Style Guidelines tell you not to
put it there. Also, Java syntax differs between classes and interfaces,
and all decent IDEs make it even easier to tell them apart.

So no, do not put an I prefix (unless you have a stupid CTO who forces
you to; even then it might be a good idea to begin looking for a better
place to work).

Just my 2 cents
 
