Fn Defn Style

mindswitness

Ever since I moved back over to *nix, I've noticed a difference in the way
people are defining functions in a lot of the source code I'm downloading.
Instead of...

int funcname (int arg1, ...)
{
    ...
}

This is often used...

int
funcname (int arg1, ...)
{
    ...
}


Haven't seen this on the DOS/Windows side of things. I'm guessing it's for
search purposes, perhaps in vi... ??

/^funcname

... or is there another reason?


Thanks in advance!



-MW-
 
Nick

mindswitness said:
Ever since I moved back over to *nix, I've noticed a difference in the way
people are defining functions in a lot of the source code I'm downloading.
Instead of...

int funcname (int arg1, ...)
{
...
}

This is often used...

int
funcname (int arg1, ...)
{
...
}


Haven't seen this on the DOS/Windows side of things. I'm guessing it's for
search purposes, perhaps in vi... ??

/^funcname

... or is there another reason?

I'm pretty sure that's the reason - I'm sure I remember seeing it
recommended for just that reason. Not one I use, or like, myself.
 
Ian Collins

mindswitness said:
Ever since I moved back over to *nix, I've noticed a difference in the way
people are defining functions in a lot of the source code I'm downloading.
Instead of...

int funcname (int arg1, ...)
{
...
}

This is often used...

int
funcname (int arg1, ...)
{
...
}


Haven't seen this on the DOS/Windows side of things. I'm guessing it's for
search purposes, perhaps in vi... ??

Some of us prefer the function name to start in column zero, especially
if the return type has a long name.
 
Thad Smith

mindswitness said:
Ever since I moved back over to *nix, I've noticed a difference in the way
people are defining functions in a lot of the source code I'm downloading.
Instead of...

int funcname (int arg1, ...)
{
...
}

This is often used...

int
funcname (int arg1, ...)
{
...
}


Haven't seen this on the DOS/Windows side of things. I'm guessing it's for
search purposes, perhaps in vi... ??

/^funcname

.... or is there another reason?

I place the return type on a separate line because that is where I
document the return value:

int                         /* number of newts found */
newtcount (
    const t_newt *newtlist  /* list of newts to count, terminated by
                            ** nextptr=NULL */
    );
 
mindswitness


Thanks all. It's a style I've adopted, as I tend toward vi.


OT: I've seen C go through a lot of changes, some that weren't even style
issues, but "lack of standards" issues. When I first learned C, one
typically wrote a fn like this...

/* FtoC - accepts a Fahrenheit value and
   returns the Celsius equivalent */

double FtoC (fah)
double fah;
{
    return (fah - 32.0) * 5.0 / 9.0;
}

... and I still have a DOS compiler that'll accept it (Borland's
Turbo C v1.0 --- c. 1987, IIRC).


And for int and void functions, types were usually omitted altogether.



-MW-
 
Eric Sosman

OT: I've seen C go through a lot of changes, some that weren't even style
issues, but "lack of standards" issues. When I first learned C, one
typically wrote a fn like this...

/* FtoC - accepts a Fahrenheit value and
returns the Celsius equivalent */

double FtoC (fah)
double fah;
{
return (fah - 32.0) * 5.0 / 9.0;
}

... and I still have a DOS compiler that'll accept it (Borland's
Turbo C v1.0 --- c. 1987, IIRC).

Contemporary compilers still accept it, because it's still
valid C. The style has been marked as "obsolescent," though,
and I for one can see no advantage to using it.
And for int and void functions, types were usually omitted altogether.

That became non-standard ten years ago, but was permitted
under the rules of the first standard, now twenty years old.
Quite a lot of compilers still follow the twenty-year-old rules,
or have not yet made a complete transition to the more recent
standard. Again, I can see no advantage to omitting the return
type.
 
mindswitness

Contemporary compilers still accept it, because it's still
valid C. The style has been marked as "obsolescent," though, and I for
one can see no advantage to using it.
... Again, I can see no advantage to omitting the return type.

Agreed on both counts. Interesting to see the subtle changes in style
that have taken place over the decades, though.



-MW-
 
Tom St Denis

I'm pretty sure that's the reason - I'm sure I remember seeing it
recommended for just that reason.  Not one I use, or like, myself.

Why not just re-factor your code so there are fewer functions per
file? Limiting yourself to one exported function per file is handy to
help there. Of course there are static functions, but in reality,
most source files should be under 500 lines anyway
[common exception being machine-generated code].

I find the

return_type
function_name(...)
{
    ... code ...
}

style annoying, not because it's harder to read [it's not] but
just because it uses more lines and is not what I'm used to [I know I
know ... I'm not the centre of the universe hehehe]. :)

Tom
 
bartc

Tom said:
On Dec 12, 3:31 pm, Nick <[email protected]> wrote:
Why not just re-factor your code so there are fewer functions per
file. Limiting yourself to one exported function per file is handy to
help there. Of course there are static functions, but in reality,
most source files should be less than 500 lines at most anyways

Why? Isn't having up to ten times as many source files lying around (even
forgetting the problems with private functions, variables and namespaces),
going to be more hassle than longer source files (and half the time you're
hardly aware of the size of the file).
 
Tom St Denis

Why? Isn't having up to ten times as many source files lying around (even
forgetting the problems with private functions, variables and namespaces),
going to be more hassle than longer source files (and half the time you're
hardly aware of the size of the file).

Generally you want fewer functions per file, for numerous reasons:

1. Makes it easier to work with others in a version control system,
as you lock a smaller percentage of the code at any given time.
2. It speeds up build/rebuild times while testing new code.
3. It makes it easier to "smart" link code, as not all linkers can do
per-function linking; most work per object file.
4. I find it generally easier to work on smaller files, especially
when what I'm looking for isn't hidden in the middle of a 3,000-line
file... but that's just MHO.

Usually, the smart thing to do is sort your source tree with these
things called directories. So finding files should be easy.

Tom
 
Tom St Denis

Tom appears to be stuck in the ark with regard to his toolsets. The size
of a file should not be a consideration within normal limits related to
today's HW unless it severely impacts compilation, for example. When
navigating around code I rarely bother noticing which file it's in, for
example.

Spoken like someone who either works alone or without a content/
version control system. Suppose you have 20 people on a team and all
of your source is locked in 2 files. What do the 18 other people
do?

Also, I work on a quad-core AMD box with 4 GB of RAM. I still
appreciate faster turn-around on build/rebuild cycles. You'd be an
idiot not to.

Tom
 
Tom St Denis

Yet again your assumptions are totally wrong.

And who said anything about ALL functions locked up in 2 files? Also,
did you never bother to investigate more modern RCS which can handle
hunks from within a file? A file is nothing more than a user view of
data anyway in more advanced set ups....

I've worked with git, svn, cvs, and even clearcase. Collisions happen
all the time and they're nasty. That's why file locks exist. The
fewer resources you lock the better.
Yes, but it's another one of your straw men. My point is that your
arbitrary 500-line limit is bullshit.

First, let me do my impression of you. "ZOMG TOM SAID SOMETHING THAT
I CAN DISAGREE WITH, *drool*, *wipe face*, I SIMPLY HAVE TO POST A
REPLY!!!!"

Then, I never said it was a hard, written-in-stone limit. I have
hand-written files that span into the 6-7-8 hundred line range. As a
general rule, though, if you're writing something [by hand] that gets
over 500 lines, there is very likely [but not always] a chance to
re-factor the code to make it easier to work with [and/or a chance
for code re-use].

That's the difference between people like me [with experience] and
people like you [who think they know everything]. We can say things
like "most files shouldn't be longer than 500 lines" and understand
that it means "most files shouldn't be super long because you'll
probably be able to factor the code better and achieve code reuse."
Whereas you, with little experience, didn't know about that sort of
development strategy and just assumed that I meant "all files must be
less than 500 lines because compilers can't handle 501 lines."

tl;dr, sometimes you just have to know when to shut up.

Tom
 
Seebs

Spoken like someone who either works alone or without a content/
version control system. Suppose you have 20 people on a team and all
of your source is locked in 2 files. What do the 18 other people
do?

Advocate switching to git.

-s
 
Tom St Denis

Advocate switching to git.

No matter the tool, if two people are working on the same bit of code
nothing good will result. If you have 50 functions in one file and
you're not all working on the same functions, sure collisions might
not happen...

Tom
 
Squeamizh

Yet again your assumptions are totally wrong.
And who said anything about ALL functions locked up in 2 files? Also,
did you never bother to investigate more modern RCS which can handle
hunks from within a file? A file is nothing more than a user view of
data anyway in more advanced set ups....

I've worked with git, svn, cvs, and even clearcase.  Collisions happen
all the time and they're nasty.  That's why file locks exist.  The
fewer resources you lock the better.
Yes, but it's another one of your straw men. My point is that your
arbitrary 500-line limit is bullshit.

First, let me do my impression of you.  "ZOMG TOM SAID SOMETHING THAT
I CAN DISAGREE WITH, *drool*, *wipe face*, I SIMPLY HAVE TO POST A
REPLY!!!!"

Then, I never said it was a hard, written-in-stone limit. I have
hand-written files that span into the 6-7-8 hundred line range. As a
general rule, though, if you're writing something [by hand] that gets
over 500 lines, there is very likely [but not always] a chance to
re-factor the code to make it easier to work with [and/or a chance
for code re-use].

That's the difference between people like me [with experience] and
people like you [who think they know everything]. We can say things
like "most files shouldn't be longer than 500 lines" and understand
that it means "most files shouldn't be super long because you'll
probably be able to factor the code better and achieve code reuse."
Whereas you, with little experience, didn't know about that sort of
development strategy and just assumed that I meant "all files must be
less than 500 lines because compilers can't handle 501 lines."

tl;dr, sometimes you just have to know when to shut up.

There is a cost associated with what you're advocating. Shorter files
imply more files, which adds complexity. At the very least, you'll
need more "glue" in the form of extern declarations and prototypes,
which makes code harder for your coworkers to follow and maintain.
Maybe my experience is unique, but I have found that "judicious
refactoring" of code that a large team works on causes more problems
than it solves, as it forces everyone to become reacquainted with the
design du jour. Obviously, code reuse is good, but it is only
achieved if the other members of your team know where the code is and
what the functions are called. Getting (re)acquainted with code takes
time and effort.

When I'm writing something alone, I do find that the benefits of heavy
refactoring outweigh the added complexity.

What does "tl;dr" mean?
 
Tom St Denis

I've worked with git, svn, cvs, and even clearcase.  Collisions happen
all the time and they're nasty.  That's why file locks exist.  The
fewer resources you lock the better.
First, let me do my impression of you.  "ZOMG TOM SAID SOMETHING THAT
I CAN DISAGREE WITH, *drool*, *wipe face*, I SIMPLY HAVE TO POST A
REPLY!!!!"
Then, I never said it was a hard, written-in-stone limit. I have
hand-written files that span into the 6-7-8 hundred line range. As a
general rule, though, if you're writing something [by hand] that gets
over 500 lines, there is very likely [but not always] a chance to
re-factor the code to make it easier to work with [and/or a chance
for code re-use].
That's the difference between people like me [with experience] and
people like you [who think they know everything]. We can say things
like "most files shouldn't be longer than 500 lines" and understand
that it means "most files shouldn't be super long because you'll
probably be able to factor the code better and achieve code reuse."
Whereas you, with little experience, didn't know about that sort of
development strategy and just assumed that I meant "all files must be
less than 500 lines because compilers can't handle 501 lines."
tl;dr, sometimes you just have to know when to shut up.

There is a cost associated with what you're advocating. Shorter files
imply more files, which adds complexity. At the very least, you'll
need more "glue" in the form of extern declarations and prototypes,
which makes code harder for your coworkers to follow and maintain.
Maybe my experience is unique, but I have found that "judicious
refactoring" of code that a large team works on causes more problems
than it solves, as it forces everyone to become reacquainted with the
design du jour. Obviously, code reuse is good, but it is only
achieved if the other members of your team know where the code is and
what the functions are called. Getting (re)acquainted with code takes
time and effort.

When I'm writing something alone, I do find that the benefits of heavy
refactoring outweigh the added complexity.

Well, presumably your exported functions will have corresponding
entries in header files anyway. So yeah, there are more entries in
your makefile, but that's more than made up for by the quicker builds
and easier content management.

I actually find it easier to have separate functions in separate files
because I tend to think of files as ideas or algorithms. One file may
perform elliptic curve point addition, while another does doubling.
That way if I'm in the mood to review the point addition code I just
look at that. As opposed to opening a 3000 line file and finding the
function I need and having to see the other code in the process...

Even when writing applications [I tend to work on libraries more] I
separate out the re-usable code into libraries inside the source tree
for the application. So in essence the application is merely a user
interface or driver for the functionality provided by the libraries.
Like we recently wrote an app with crypto and DB functionality. So we
had a libcert.a for crypto and libdb.a for our DB retrieve/store
functions. The actual app customers used was a relatively short piece
of code that parses command line options and makes use of the two
libraries. In this case we re-used the crypto lib for another part of
the project.

The trick is to have a solid design to start with, then document as
you go. In our group, we're all expected to contribute and READ the
SDK user guides. So "keeping up" with development is just part of the
job. Of course if you don't document anything I can see how your
coworkers can get lost...
What does "tl;dr" mean?

"too long, didn't read." It's used for when you ramble on and want to
sum it up for the impatient.

Tom
 
bartc

Tom said:
Spoken like someone who either works alone or without a content/
version control system. Suppose you have 20 people on a team and all
of your source is locked in 2 files. What do the 18 other people
do?

Also, I work on a quad-core AMD box with 4GB of ram. I still
appreciate faster turn-around on build/rebuild cycles. You'd be an
idiot not to.

Does it really take that much longer to compile 5000 lines instead of 500?
(My files are typically 5000 lines and take a fraction of a second to
compile.)

And how does it affect building other than making it take longer because of
having to deal with hundreds of files instead of dozens?

(I don't know how these things work in teams; maybe only one member has
build/run privileges? Or can anyone build and test a project that includes
half-finished modules from other team members? Or does each person just
test, independently, a small portion of the project in a test setup. That
still doesn't explain this arbitrary file line-limit.)
 
Keith Thompson

Tom St Denis said:
Generally you want fewer functions per file for numerous reasons

1. Makes it easier to work with others in a version control system,
as you lock a smaller percentage of the code at any given time.

Most modern version control systems don't lock files; they allow
two different people to work on the same file at the same time.
But typically the last person to check in the file has to merge
the changes. If the changes are isolated from each other, this is
straightforward; if not, you'd have the same merging problem with
one function per file.
2. It speeds up build/rebuild times while testing new code
3. It makes it easier to "smart" link code, as not all linkers can do
per-function linking; most work per object file.
4. I find it generally easier to work on smaller files, especially
when what I'm looking for isn't hidden in the middle of a 3,000-line
file... but that's just MHO.

Some text editors (Emacs in particular) have a mode in which you
can temporarily narrow the visible portion of a file, working on
a subset as if it were an entire file. (And I just learned
that there's a vim plugin that does the same thing.)
Usually, the smart thing to do is sort your source tree with these
things called directories. So finding files should be easy.

Unless you've got thousands of them.
 
Ben Pfaff

Tom St Denis said:
Spoken like someone who either works alone or without a content/
version control system. Suppose you have 20 people on a team and all
of your source is locked in 2 files. What do the 18 other people
do?

The other 18 people have plenty of time to learn how to break up
a program into modules and to use and maintain a version control
system.
 
Tom St Denis

Yet you don't seem to know how they work. It is quite usual for many
people to work on the same files. It's why these systems exist.

Provided they don't touch overlapping sections of code, yeah, I agree
there won't be problems. But if people are doing a code review and
touching up things here and there, it's easy to collide. That's why
locking a file is easier: it prevents this problem. Now if you lock a
file with huge segments of your project, you're boned.

Not to mention the build-time speedups, which are usually
fairly invaluable.
But you still came out with it.

It's HOW you reacted to it that is important. You threw away any
possible reasonable interpretation and went directly for "he must mean
the compiler can't handle large files." And you didn't do that
because you're being difficult, you did that because you don't know
better.
Extra files can also introduce extra complexity.

Not really. It takes me all of 5 seconds to add a source file to a
makefile. Takes another 2 seconds to import it to CVS. Adding files
to a well maintained source tree is really easy.
It wasn't me making all sorts of rules: it was you. I've been a
programmer for a long time and have worked on a lot of systems. But
it's you showing off about your experience and making the rules, from
what I see. First we have you condemning inline functions, and now
large files. Yeah, I am playing devil's advocate a bit - but primarily
because I find your arguments weak, and similar to those of
small-minded people who insist only their coding style will suffice.

Well I'm not a "programmer." I'm a developer. So it's my job to not
only write software but produce maintainable and manageable source
trees that stand the test of time. That includes proper tree layout,
documentation, API design rules, etc. I don't just sit and write for
loops all day long like your typical code monkey.
I never said compilers couldn't handle 501 lines. You produce yet
another straw man.

Files can be split based on functional partitioning. To split BECAUSE
it's more than 500 lines is ridiculous with modern RCS and navigation
tools.

It's a rule of thumb. Stop being so obtuse. I said that if you're
writing a SINGLE function that approaches 500 lines, chances are good
you can factor functionality out of it. That doesn't mean there
aren't exceptions. But it's a very common rookie mistake to put all
your code in one basket. 500 was just a number I pulled out of thin
air, too. You can obviously factor smaller functions.

But you're being obtuse for argumentative sake...

Tom
 
