This is a little function I wrote, inspired by the thread
"Urgent HELP! required for Caesar Cipher PLEASE"
$ cat /home/keisar/bin/c/ymse/rot13.h
Something nobody else has pointed out yet: Executable code
like the function below should *not* be in a header file (ending
with ".h", as you have above). It should be in a separate
translation unit, in a source file ending with ".c", and you
should learn how to use your compiler to compile projects
consisting of multiple ".c" source files.
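For example, the split might look something like this (a bare-bones
sketch; the file names just mirror yours, and the function body is
the one we're about to discuss):

    /* rot13.h -- declarations only, no executable code */
    #ifndef ROT13_H
    #define ROT13_H
    char rot13(char character);
    #endif

    /* rot13.c -- the definition gets its own translation unit */
    #include "rot13.h"

    char rot13(char character)
    {
        return character - 'a' + 'n';   /* your original arithmetic; see below */
    }

    /* mainfile.c -- callers just include the header */
    #include <stdio.h>
    #include "rot13.h"

    int main(void)
    {
        printf("%c\n", rot13('a'));
        return 0;
    }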
<OT> Using gcc, it's easy:
% gcc -W -Wall -ansi -pedantic -O2 mainfile.c rot13.c
(and any other source files in the project). All the options
are only there to catch mistakes in your code; if you write
perfect code, you don't need them. ;-) [In other words, you
*do* need them. Always.]
char rot13(char character)
{
    int changed;
    changed = character - 'a' + 'n';
    return changed;
}
I find two things strange about this code:
1) I don't have to specify that b should be replaced by n,
c by o and so on. How come?
Because your system, like most systems in the world today,
uses ASCII to represent characters. Part of the ASCII character
table looks like this:
"...]^_`abcdefghijklmnopqrstuvwxyz{|}..."
See how all the lowercase letters are packed together, in order?
That's why your code works (do the math yourself as to why it
converts 'b' to 'o').
As Joona notes, all your code is doing is adding 'n'-'a', or 13
(assuming ASCII), to the character it receives. Which is why it
fails miserably to convert 'o' to 'b', or 'n' to 'a'.
Your code won't work on some other real-life systems out there,
and it might even cause demons to fly out of your nose on the
Death Station 9000. (Google for it.) So it's not really the
best way to do it if you're writing code that's supposed to work
everywhere.
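If you're happy to keep assuming ASCII (or any character set where
the lowercase letters sit together like that), the usual repair is
to wrap around with the '%' operator. A rough sketch; the name
'rot13_ascii' is just mine:

    char rot13_ascii(char c)
    {
        if (c >= 'a' && c <= 'z')
            return (c - 'a' + 13) % 26 + 'a';   /* wraps 'n' around to 'a' */
        if (c >= 'A' && c <= 'Z')
            return (c - 'A' + 13) % 26 + 'A';
        return c;   /* leave everything else alone */
    }

A version that doesn't assume anything about the character set
follows further down.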
2) The function returns a char (char rot13), but changed
is an integer. How is that possible?
You wrote 'int' instead of 'char', that's how.

In C,
characters are treated just like little integers, so you can
do arithmetic on them, as you've already figured out. And of
course you can assign 'char' to 'int' and vice versa, because
that just involves widening or narrowing the integer value.
When your function returns 'changed', the 'int' value is quietly
converted back to 'char'.
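Here's a tiny demonstration of that back-and-forth (the printed
values assume ASCII):

    #include <stdio.h>

    int main(void)
    {
        char c = 'a';
        int n = c;        /* widening: the char's value fits easily in an int */
        char d = n + 1;   /* narrowing: the int result goes back into a char */

        printf("%d %c\n", n, d);   /* prints "97 b" on an ASCII system */
        return 0;
    }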
Here's how you could write that code more portably, so it
wouldn't depend on the organization of the letters in your
character set:
#include <ctype.h>
#include <limits.h>     /* for UCHAR_MAX */

int rot13(int c)
{
    static char lookup[UCHAR_MAX + 1] = {0};
    static char Alpha[] = "abcdefghijklmnopqrstuvwxyz";
    static int not_initialized_yet = 1;

    if (not_initialized_yet) {
        unsigned int i;
        /* Start with every character mapping to itself... */
        for (i = 0; i < sizeof lookup; ++i) {
            lookup[i] = i;
        }
        /* ...then point each letter at its rot13 partner.
           ('sizeof Alpha - 1' skips the terminating '\0'.) */
        for (i = 0; i < sizeof Alpha - 1; ++i) {
            lookup[Alpha[i]] = Alpha[(i + 13) % 26];
            lookup[toupper(Alpha[i])] = toupper(Alpha[(i + 13) % 26]);
        }
        not_initialized_yet = 0;
    }
    /* Like 'toupper', this expects an argument representable
       as an unsigned char. */
    return lookup[c];
}
Doesn't that look complicated, now? But note that most of the
time -- *all* the time after the first time you call the function --
it doesn't even need to do any arithmetic! It's just a simple
table lookup, plus some complicated stuff to initialize the table.
I changed 'char' to 'int' to bring 'rot13' in line with similar
standard functions like 'toupper', which take and return 'int'.
As you've found out, 'char' is *almost* always replaceable by 'int'
(one big exception being text strings, obviously).
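To rot13 a whole string you just feed it one character at a time.
A rough sketch of a caller (the cast is the same one you'd use when
handing a possibly-signed char to 'toupper'):

    #include <stdio.h>

    int rot13(int c);   /* the lookup-table function above */

    int main(void)
    {
        char msg[] = "Hello, world!";
        size_t i;

        for (i = 0; msg[i] != '\0'; ++i)
            msg[i] = rot13((unsigned char)msg[i]);

        puts(msg);   /* prints "Uryyb, jbeyq!" */
        return 0;
    }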
HTH,
-Arthur