'nobody' using sudo -- scary!

Johnny

Hi,

My perl script runs as 'nobody' but it needs to execute some commands
with more privilege (rm /home/username/.forward). I see a lot of
talk about sudo for this type of circumstance ... is that really the
best choice? I've gotten the username/password of the account that
has permission to do what I'd like to do - maybe that's somehow
useful? Making the users' home directories world writable seems to
break sendmail, so I don't want to fuss with that. Running perl
scripts as root must be the worst possible choice. Are there any
other methods worth considering? Is allowing 'nobody' to execute
commands as root an accepted practice?

Thanks in advance,
SuchaNewb
 
Hans Bulvai

Johnny said:
Hi,

My perl script runs as 'nobody' but it needs to execute some commands
with more privilege (rm /home/username/.forward). I see a lot of talk
about sudo for this type of circumstance ... is that really the best
choice? I've gotten the username/password of the account that has
permission to do what I'd like to do - maybe that's somehow useful?
Making the users' home directories world writable seems to break sendmail,
so I don't want to fuss with that. Running perl scripts as root must
be the worst possible choice. Are there any other methods worth
considering? Is allowing 'nobody' to execute commands as root an
accepted practice?

Thanks in advance,
SuchaNewb

DO NOT:
1) give 'nobody' any rights, especially not sudo rights.
2) make users homedirs world writeable.
3) run it as root.

Create a new user, give it the necessary perms (whether sudo or
otherwise) and run it as that user. Above, (1) and (2) are the worst
choices you could possibly make. Even (3) is less dangerous than they are.
 
Jens Thoms Toerring

Johnny said:
My perl script runs as 'nobody' but it needs to execute some commands
with more privilege (rm /home/username/.forward). I see a lot of
talk about sudo for this type of circumstance ... is that really the
best choice? I've gotten the username/password of the account that
has permission to do what I'd like to do - maybe that's somehow
useful? Making the users' home directories world writable seems to
break sendmail, so I don't want to fuss with that. Running perl
scripts as root must be the worst possible choice. Are there any
other methods worth considering? Is allowing 'nobody' to execute
commands as root an accepted practice?

I guess this would be better suited for e.g. comp.unix.questions
or maybe comp.os.linux.misc. I guess the worst "solution" would
be to make the users' directories world writable. That's simply
stupid. What I don't see is why a Perl script running as
root when doing root tasks would be bad (especially since Perl
is regarded as the "Swiss army knife" of system admins). It just
might be a bit too much effort when a simple shell script line like

for i in `ls -a /home/*/.forward`; do rm $i; done

would do nicely. But then I also don't see why you would want to
delete users' .forward files - if you have a really good reason to
do so, at least rename them to something else instead of deleting
them completely.

Regards, Jens
 
Johnny

Jens Thoms Toerring said:
I guess this would be better suited for e.g. comp.unix.questions
or maybe comp.os.linux.misc. I guess the worst "solution" would
be to make the users' directories world writable. That's simply
stupid. What I don't see is why a Perl script running as
root when doing root tasks would be bad (especially since Perl
is regarded as the "Swiss army knife" of system admins). It just
might be a bit too much effort when a simple shell script line like

for i in `ls -a /home/*/.forward`; do rm $i; done

would do nicely. But then I also don't see why you would want to
delete users' .forward files - if you have a really good reason to
do so, at least rename them to something else instead of deleting
them completely.

Regards, Jens


Thanks for the comments. My post wasn't as clear as it should have
been. I was trying to avoid irrelevant details (but failed). The more
complete story is that I've taken over for a consultant who built a
perl based website. All users supply a username and password.
There's a page that allows users to edit their vacation message and
toggle their away/back status. That part is broken because of the
permissions issue. Currently the code attempts to set the away
message by:

system "/usr/bin/vacation -i";
system "cp -p /home/$remoteuser/vacation.forward /home/$remoteuser/\.forward";

or to turn off the vacation message:

system "/usr/bin/vacation -i";
system "rm /home/$remoteuser/\.forward";

I haven't done web development before and made the assumption that I'd
have many more cases where 'nobody' wouldn't be sufficient. Based on
that assumption I looked for a method I could use to solve this
problem now and again in the future. I confused matters by listing
alternate solutions to this particular problem. I found a lot of
talk about the sudo solution and that left me thinking, "... really?
That can't be the best idea." So then I posted, in an unclear
manner. Here's a second attempt at my question if you still feel
like playing.

Given a perl based web application, running as 'nobody' with a need to
execute some privileged command, what approach is recommended?
 
Ben Morrow

Quoth Johnny:
Given a perl based web application, running as 'nobody' with a need to
execute some privileged command, what approach is recommended?

Stick the details of what to do in a file somewhere, and run a program
out of root's crontab to check the list and perform the commands.
*Obviously* you will need extremely careful checking of the contents of
that list; you will want to write the root command in Perl, and use
taint mode.
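
For concreteness, a minimal sketch of that idea (the queue location and
the "action:username" line format are assumptions for illustration, not
anything from this thread):

#!/usr/bin/perl -T
# Sketch of the root cron job: read queued requests, act on them.
use strict;
use warnings;

$ENV{PATH} = '/bin:/usr/bin';            # required under taint mode

my $queue = '/var/spool/vacation-queue'; # hypothetical path

open my $fh, '<', $queue or exit;        # nothing queued, nothing to do
while (my $line = <$fh>) {
    # Untaint: accept only a known action and a plain username.
    next unless $line =~ /^(on|off):([a-z][a-z0-9_-]{0,31})$/;
    my ($action, $user) = ($1, $2);
    next unless defined getpwnam($user); # must be a real account
    my $forward = "/home/$user/.forward";
    if ($action eq 'off') {
        unlink $forward;
    }
    else {
        # copy the prepared vacation.forward into place
        system '/bin/cp', '-p', "/home/$user/vacation.forward", $forward;
    }
}
close $fh;
truncate $queue, 0;  # crude; a real version would rename-then-process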

Ben
 
RedGrittyBrick

Ben said:
Stick the details of what to do in a file somewhere, and run a program
out of root's crontab to check the list and perform the commands.
*Obviously* you will need extremely careful checking of the contents of
that list; you will want to write the root command in Perl, and use
taint mode.

That is a nice solution.

A further refinement might be to create a FIFO instead of a file, and
have a root daemon reading the FIFO. That way there'd be no lag between
requesting the change and the change being performed.

man mkfifo

The daemon could be a Perl script started in the usual way at boot-time
(rc files etc).
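
A rough sketch of such a daemon, assuming a hypothetical FIFO path and
the same illustrative "action:username" request format as the cron
variant above (handle_request is a stand-in for the actual privileged
work):

#!/usr/bin/perl -T
use strict;
use warnings;

$ENV{PATH} = '/bin:/usr/bin';

my $fifo = '/var/run/vacation.fifo';  # hypothetical; mkfifo'd at boot

sub handle_request {
    my ($action, $user) = @_;
    # hypothetical: do the privileged .forward work here
}

while (1) {
    # open() blocks until the web application opens the FIFO for
    # writing; when the writer closes it we get EOF, reopen, and wait.
    open my $fh, '<', $fifo or die "Can't open $fifo: $!";
    while (my $req = <$fh>) {
        # Sanitise exactly as in the cron variant.
        next unless $req =~ /^(on|off):([a-z][a-z0-9_-]{0,31})$/;
        handle_request($1, $2);
    }
    close $fh;
    sleep 1;  # crude throttle, per the DoS concern below
}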

Ben is right about the need to very, very carefully check and sanitise
the input. I'd consider some sort of throttling to ameliorate any DoS
attacks.
 
Ted Zlatanov

R> Isn't that the same as
R> rm /home/*/.forward

They are both bad solutions when there are enough users to run over the
command line limits. Perl would actually be a decent choice here,
unless you're sure you trust `find' to do the right thing. I would
never remove files from a user's directory with any kind of automated
script, personally.

cfengine has specific facilities to do this, and would be my first
recommendation if it's an option. One of the big benefits in this case
is that the policy can be set by the administrator:

'remove $(home)/.forward' (in the cfengine syntax this looks slightly different)

but a cfengine run can actually be triggered by less-privileged users,
even remotely. See http://cfengine.org for further details.

Ted
 
nntpman68

This raises an interesting point (for me at least).

I'm not that used to perl globs:

Let's assume I work in a setup where /home/*/.forward expands to
more than 15000 files.

What would happen if I use the following statement in perl?

foreach my $file (</home/*/.forward>) {
    do_something($file);
}

would perl
- iterate through the files
- or would perl first create a list of all the files and then
iterate through them.
- or would it hit a limit and not provide all hits.
- or does it depend on the system perl is running on?

Just being curious.
 
Ben Morrow

[please quote properly]

Quoth (e-mail address removed):
What would happen if I use the following statement in perl?

foreach my $file (</home/*/.forward>) {
    do_something($file);
}

would perl
- iterate through the files
- or would perl first create a list of all the files and then
iterate through them.

'foreach' always creates a list and then iterates over it.

- or would it hit a limit and not provide all hits.
- or does it depend on the system perl is running on?

You will eventually hit the memory limit on your system, and the limit
on the size of the pointer used to index the perl stack; you won't hit
any limits before that.

You can avoid pre-creating the list by using 'while' instead:

while (my $file = </home/*/.forward>) {
    ...
}

Ben
 
Peter J. Holzer

Johnny said:
Thanks for the comments. My post wasn't as clear as it should have
been. I was trying to avoid irrelevant details (but failed). The more
complete story is that I've taken over for a consultant who built a
perl based website. All users supply a username and password.
There's a page that allows users to edit their vacation message and
toggle their away/back status. That part is broken because of the
permissions issue. Currently the code attempts to set the away
message by:

system "/usr/bin/vacation -i";
system "cp -p /home/$remoteuser/vacation.forward /home/$remoteuser/\.forward";

To get something perl-specific into that thread: Don't construct command
lines from untrusted user input. Even if you are sure that $remoteuser
can only be an existing user name that cannot contain any funny
characters (like " ", "/" or "."), get into the habit of using the list
form of system:

system "/usr/bin/vacation", "-i";
system "cp", "-p", "/home/$remoteuser/vacation.forward", "/home/$remoteuser/.forward";

(what was the \ for, BTW?)


or to turn off the vacation message:
system "/usr/bin/vacation -i";
system "rm /home/$remoteuser/\.forward";

I haven't done web development before and made the assumption that I'd
have many more cases where 'nobody' wouldn't be sufficient.

First, don't run your webserver as "nobody". Create a specific user and
run it as that user. You may think that it doesn't make any difference
whether the server runs as "nobody" or as "foo". But if your webserver
runs as "nobody" out of the box, chances are that there is some other
stuff on the box also running as nobody, and you don't want to open a
path to privileged commands to that other stuff.

If this web server is tightly controlled and only used for controlling
user accounts, you can now give the user "foo" permission to remove
.forward files, for example using sudo. But don't just give it
permission to run "rm". Instead create a script "vacation-off", and give
it permission to run that script. So even if your server is cracked,
the attacker cannot delete any file. He can only turn off (and on)
vacation messages. (And I don't know if that is possible with sudo, but
you should strongly consider restricting these commands to run as some
"real" user, but not as root).

If your web server is also used for other stuff which is less security
sensitive (and where the web authors are probably less careful), it's a
good idea to put in another layer. Create yet another user and run only
those scripts which need special privileges as that user. You can do
this for example with suexec (with apache) or fastcgi (just about any
webserver). FastCGI is especially nice because it communicates with the
webserver over a socket - the script can even run on a different
host than the webserver.
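
For the FastCGI route, a bare-bones responder might look like this
sketch, assuming the FCGI module from CPAN (the webserver-side
configuration that runs it as the dedicated user is not shown):

#!/usr/bin/perl
use strict;
use warnings;
use FCGI;

my $request = FCGI::Request();
while ($request->Accept() >= 0) {
    # Each iteration handles one request, inside a persistent
    # process running as the dedicated, suitably privileged user.
    print "Content-Type: text/plain\r\n\r\n";
    print "running as uid $<\n";
}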

hp
 
xhoster

Ben Morrow said:
[please quote properly]

Quoth (e-mail address removed):
What would happen if I use the following statement in perl?

foreach my $file (</home/*/.forward>) {
    do_something($file);
}

would perl
- iterate through the files
- or would perl first create a list of all the files and then
iterate through them.

'foreach' always creates a list and then iterates over it.

Not always. For example, in the case of foreach (1..1e6).

You will eventually hit the memory limit on your system, and the limit
on the size of the pointer used to index the perl stack; you won't hit
any limits before that.

You can avoid pre-creating the list by using 'while' instead:

while (my $file = </home/*/.forward>) {

On my system, and I suspect on all systems, this still pre-creates the
result set in its entirety.

For example, if I put a "last" in the while loop, the code still performed
49418 "lstat" calls before it did a single loop iteration and broke out.

Perhaps the result set is stored in a special packed structure that is more
compact than it would be in a foreach loop. But a test shows that this
effect seems small. It took 12 meg to do

while (<blah/*>) {last}

and 14.3 meg to do

foreach (<blah/*>) {last}

Where blah has 49418 files in it.

Xho

 
Ben Morrow

Quoth (e-mail address removed):
On my system, and I suspect on all systems, this still pre-creates the
result set in its entirety.

You're right. From File::Glob::csh_glob:

| # if we're just beginning, do it all first
| if ($iter{$cxix} == 0) {
|     if (@pat) {
|         $entries{$cxix} = [ map { doglob($_, $DEFAULT_FLAGS) } @pat ];
|     }
|     else {
|         $entries{$cxix} = [ doglob($pat, $DEFAULT_FLAGS) ];
|     }
| }


so it builds the whole list on the first call, regardless. I guess this
is because doglob sorts the list before returning it.

Ben
 
Hans Mulder

Ben said:
Quoth (e-mail address removed):
On my system, and I suspect on all systems, this still pre-creates
the result set in its entirety.

You're right. From File::Glob::csh_glob:

| # if we're just beginning, do it all first
| if ($iter{$cxix} == 0) {
|     if (@pat) {
|         $entries{$cxix} = [ map { doglob($_, $DEFAULT_FLAGS) } @pat ];
|     }
|     else {
|         $entries{$cxix} = [ doglob($pat, $DEFAULT_FLAGS) ];
|     }
| }



so it builds the whole list on the first call, regardless. I guess
this is because doglob sorts the list before returning it.

If you really don't want to have the whole list in memory, you'll
have to roll your own glob using readdir. Something like:

opendir HOMES, "/home" or die "Can't read /home: $!";

# defined() guards against a directory entry named "0" ending the loop
while (defined(my $entry = readdir(HOMES))) {
    my $candidate = "/home/$entry/.forward";
    if (-f $candidate) {
        unlink $candidate or die "Can't remove $candidate: $!";
    }
}

closedir HOMES;


Hope this helps,

-- HansM
 
John W. Krahn

Joe said:
The first call to readdir() in scalar context will read the entire list
into memory

No it won't.

and return just the first one. Succeeding calls to
readdir() in scalar context will return the next one from the buffer.

Either way, you'll still end up having the whole list in memory.

That is not how it works.



John
 
xhoster

Ben Morrow said:
No it won't.

(deja vu anyone?) Yes it will.

We have now switched from glob to readdir. Different functions,
different behaviors. (I missed that transition myself at first.)

Xho

 
Ben Morrow

Quoth (e-mail address removed):
No it won't.

We have now switched from glob to readdir. Different functions,
different behaviors. (I missed that transition myself at first.)

D'oh! Yes, of course; sorry, John.

Ben
 
John W. Krahn

Ben said:
Quoth (e-mail address removed):

D'oh! Yes, of course; sorry, John.

That's OK, I wanted an argument, not just the automatic gainsaying of
anything the other person says. :)



John
 
