Need your recommendations for TCP Server/Client design

Discussion in 'Ruby' started by Victor Reyes, May 23, 2008.

  1. Victor Reyes

    Victor Reyes Guest


    Team,

    Let me begin by stating that I am still a Ruby novice, although I've written
    some simple applications (sudoku, TCP and UDP servers, and other mundane
    programs) with the input of the team.

    I work with AIX and support more than a hundred servers in a complex and
    supposedly "secure" environment.
    Although some vendors have packages to perform "distributed" remote support,
    it is not allowed in my environment.
    At first I tried to design my own poor man's distributed package using what
    is allowed: ssh (port 22).
    But this did not provide the flexibility to manage all the servers from one
    centralized location.

    So, I went ahead and designed in Ruby a TCP Client/Server that works as
    follows:

    On every host I have a server process listening on a predefined port.
    The server is started from cron, and every 10 minutes cron checks
    to ensure that it is still running.

    Let's say the client wants to execute a remote command, like creating a
    userid on all servers or just checking paging or memory consumption, etc.
    It sends a request to the server; the server executes the command and
    returns the output to the client.

    So the client can:

    dshc -s hostname cmd
    dshc -p full_path_of_a_file_with_list_of_servers
    dshc -a cmd (This version uses a file */etc/servers* with the list of all
    servers)

    I also have another client named *dshp*, with the same flags as above,
    which talks to the same TCP server listening on the same port.
    The *dshp* program is used to *push* files to one, several, or all
    servers.

    All the UNIX admins actually love the application. BTW, *dshc* and *dshp*
    are only executable by *root*.
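
    In case it helps, the client side boils down to roughly this (a simplified
    sketch, not the real code; the port number is made up):

      require 'socket'

      # roughly what "dshc -s hostname cmd" does
      host = ARGV.shift                 # first argument: the target server
      sock = TCPSocket.new(host, 4000)  # hypothetical predefined port
      sock.puts ARGV.join(' ')          # send the command as a single line
      print sock.read                   # read until the server closes the socket
      sock.close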

    However, although we are behind multiple firewalls (at least 6), a scanning
    tool detected the listener (the TCP server) on one particular server and
    flagged it as a security risk.
    I was asked to shut down the server on that host, and of course I complied.
    I was also asked to redesign the tool to add a bit more security, and then
    they would allow it. They suggested "handshaking" between client and
    server, and that the initial communication, or perhaps all communication,
    should be encrypted. I was also asked whether Ruby supports encryption.
    So here is where I am looking for some recommendations.

    Reading a new book I just acquired, I came across a package called
    *GServer*. I was wondering whether it would be suitable for what I need.
    Also, what type of encryption should I use?

    They were suggesting something like:

    Client sends a connection request.
    Server replies with the client's *hostname* and the current *time*.
    Client sends back the *time* received from the server together with the
    command it wants to execute on the remote server.
    Server executes the command if it is "happy" with the reply from the client.

    Of course all communication must be ciphered.
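
    In code, the suggested flow would look something like this on the server
    side (a rough, unencrypted sketch just to illustrate the exchange; the port
    is made up):

      require 'socket'

      server = TCPServer.new(4000)            # hypothetical predefined port
      loop do
        Thread.start(server.accept) do |client|
          peer  = client.peeraddr[2]          # client's hostname (or address) as seen here
          token = Time.now.to_i.to_s
          client.puts "#{peer} #{token}"      # challenge: hostname + time
          echoed, cmd = client.gets.to_s.chomp.split(' ', 2)
          if echoed == token && cmd           # only run the command if "happy"
            client.puts `#{cmd} 2>&1`
          else
            client.puts 'handshake failed'
          end
          client.close
        end
      end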

    Any suggestions will be greatly appreciated.

    Thank you

    Victor
     
    Victor Reyes, May 23, 2008
    #1

  2. On 23 May 2008, at 22:07, Victor Reyes wrote:
    > Reading a new book I just acquired I came across a package called
    > *GServer*.
    > I was wondering if this will be suitable for what I need.
    > Also, what type of encryption should I use?
    >
    > They were talking something like:
    >
    > Client sends connection request
    > Server replies with client's *hostname* and *time*
    > Client sends back the *time *received from server together with the
    > command
    > which the client wants to execute at the remote server.
    > Server executes command if it is "happy" with the reply from the
    > client.
    >
    > Of course all communication must be ciphered.
    >
    > Any suggestions will be greatly appreciated.


    Go to the link in my sig and study the Semantic Networking
    presentation. There's an example in there of doing hybrid key crypto,
    which is where you use a public key exchange to wrap a symmetric key
    for establishing a connection and then use the symmetric key (with
    much less processing overhead) to do the actual communication. On the
    surface it's probably overkill for your app, but the included source
    code shows how GServer can be used for this kind of tool and as the
    whole thing is based on OpenSSL it should be possible to conform with
    any security policy you're working under.
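
    The gist of the hybrid approach, stripped of the GServer plumbing, is
    roughly this (a condensed sketch rather than the code from the slides; key
    size, cipher and the sample command are picked arbitrarily):

      require 'openssl'

      # server's long-lived key pair (generated once and stored in reality)
      server_key = OpenSSL::PKey::RSA.new(2048)

      # client: make a random session key and wrap it with the server's
      # public key (expensive, but only done once per connection)
      cipher = OpenSSL::Cipher.new('aes-256-cbc')
      cipher.encrypt
      session_key = cipher.random_key
      iv          = cipher.random_iv
      wrapped_key = server_key.public_key.public_encrypt(session_key)

      # client: encrypt the actual traffic with the cheap symmetric cipher
      ciphertext = cipher.update('lsps -a') + cipher.final

      # server: unwrap the session key, then decrypt the traffic
      # (wrapped_key, iv and ciphertext are what travels over the wire)
      decipher     = OpenSSL::Cipher.new('aes-256-cbc')
      decipher.decrypt
      decipher.key = server_key.private_decrypt(wrapped_key)
      decipher.iv  = iv
      command      = decipher.update(ciphertext) + decipher.final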


    Ellie

    Eleanor McHugh
    Games With Brains
    http://slides.games-with-brains.net
    ----
    raise ArgumentError unless @reality.responds_to? :reason
     
    Eleanor McHugh, May 24, 2008
    #2

  3. Aaron Turner

    Aaron Turner Guest

    On Fri, May 23, 2008 at 2:07 PM, Victor Reyes <> wrote:
    [snip]
    > Reading a new book I just acquired I came across a package called *GServer*.
    > I was wondering if this will be suitable for what I need.
    > Also, what type of encryption should I use?
    >
    > They were talking something like:
    >
    > Client sends connection request
    > Server replies with client's *hostname* and *time*
    > Client sends back the *time *received from server together with the command
    > which the client wants to execute at the remote server.
    > Server executes command if it is "happy" with the reply from the client.
    >
    > Of course all communication must be ciphered.
    >
    > Any suggestions will be greatly appreciated.


    GServer is great. I'd use SSL for encryption. Require the client
    app to authenticate to each server via a password. Probably easiest
    to check against the root password.

    The easiest way to add SSL to any application is to run stunnel on
    each of your servers and have it proxy to your server listening on a
    port on the loopback interface. That way your server doesn't even
    have to know SSL, and it's easy to debug. Whatever you do, DO NOT
    design your own crypto solution; notice the Debian guys couldn't even
    make a small "fix" without breaking ssh horribly.
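
    On the Ruby side that can stay dead simple, something like this (a sketch;
    the port is made up, and stunnel listens on the public SSL port and
    forwards the decrypted traffic to 127.0.0.1):

      require 'gserver'

      # command server bound to loopback only; stunnel owns the public port
      class CommandServer < GServer
        def initialize(port = 4000)
          super(port, '127.0.0.1')      # never exposed directly on the network
        end

        def serve(io)
          cmd = io.gets.to_s.chomp
          io.puts `#{cmd} 2>&1`         # run the command, return combined output
        end
      end

      server = CommandServer.new
      server.start
      server.join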

    On a side note, there are already free solutions for this sort of
    thing... just search freshmeat.net.


    --
    Aaron Turner
    http://synfin.net/
    http://tcpreplay.synfin.net/ - Pcap editing & replay tools for Unix
    They that can give up essential liberty to obtain a little temporary
    safety deserve neither liberty nor safety. -- Benjamin Franklin
     
    Aaron Turner, May 24, 2008
    #3
  4. 2008/5/24 Aaron Turner <>:
    > The easiest way to add SSL to any application is to run stunnel on
    > each of your servers and have it proxy to your server listing on a
    > port on the loopback interface. That way your server doesn't even
    > have to know SSL and it's easy to debug. Whatever you do DO NOT
    > design your own crypto solution- notice the Debian guys couldn't even
    > make a small "fix" without breaking ssh horribly.


    Definitely do not cook your own!

    > On a side note, there are already free solutions for this sort of
    > thing... just search freshmeat.net.


    Yet another alternative might be to just use ssh, i.e. replace your
    daemon with sshd and execute commands directly via ssh. This also
    allows for secure file transfers (scp). dshc and dshp then become
    wrappers around an ssh call. Note that with ssh-agent you don't even have
    to enter passwords for all the servers.
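
    dshc then shrinks to something like this (a rough sketch; it assumes the
    same /etc/servers list and working key- or agent-based authentication):

      # dshc as a thin wrapper around ssh
      servers = File.readlines('/etc/servers').map { |line| line.strip }
      cmd     = ARGV.join(' ')

      servers.each do |host|
        next if host.empty?
        puts "== #{host} =="
        puts `ssh root@#{host} '#{cmd}' 2>&1`   # quoting is naive; fine for a sketch
      end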

    Kind regards

    robert

    --
    use.inject do |as, often| as.you_can - without end
     
    Robert Klemme, May 26, 2008
    #4
  5. Victor Reyes

    Victor Reyes Guest


    Actually, ssh was my first choice, and we used it for a short period until
    it became impractical.
    Here is what I would like to do, so you have a better understanding.
    BTW, I've been playing with *gserver* this weekend, but I still don't get
    what I need. I have problems on the receiving side: I don't get all the
    data sent by the server (see the framing sketch after the list of facts
    below).
    That being said, here is what we do and the trouble we ran into.

    Some facts:


    1. I am a Ruby neophyte, but I don't give up until I get what I need. My
    solutions are not always elegant, but they do the job!
    2. ssh *IS permitted*.
    3. We are fewer than 10 UNIX admins.
    4. We have over 100 AIX servers split among VLANs, each behind
    different firewalls.
    5. My second solution, the TCP server/client, worked very well. That is,
    until the security people discovered the listening port and the fact that
    my server, which was listening on EVERY host, would execute any command.
    True, the client only runs as *root*, providing just a bit more security,
    as you first have to log in with your own ID and then *su* to *root*.
    6. *root* can only be used via *su*.
    7. The solution I am looking for is to be used only by the sys admins.
    8. My *first solution* used *ssh*, as it is fully allowed by the security
    group. Since authenticating would be impractical when executing a command
    on over 100 servers, we created public/private keys, which were a pain
    below the waist to distribute for everyone. Also, since in many instances
    we needed to run *root* commands, that was a real problem: we would have to
    either set up keys for root or implement *sudo*. That's why I decided to
    create my own poor man's distributed remote command processor.
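
    Regarding the *gserver* trouble above: I suspect I am treating a single
    read as the whole reply, so length-prefixing each message (or reading until
    the peer closes the socket) should fix it. A rough sketch of the helpers I
    have in mind (names made up):

      # send a 4-byte big-endian length header, then the payload
      def send_message(io, data)
        io.write([data.bytesize].pack('N'))
        io.write(data)
      end

      # read the header, then exactly that many bytes of payload
      def recv_message(io)
        header = io.read(4) or return nil
        io.read(header.unpack('N').first)
      end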

    So, this is what I need to do.

    Create an environment where a sys admin:


    1. Logs in with her userid, as we do daily, and su's to root.
    2. Executes a root command remotely on one or multiple servers and
    receives the reply on the local server. We use one server as the main
    server, kind of a control workstation.
    3. The communication between the main (local) server and the remote
    server(s) must be "secured" (ssh, SSL, encryption, whatever).

    That's in a nutshell!

    All suggestions are greatly appreciated.

    Thank you

    Victor





    On Mon, May 26, 2008 at 8:49 AM, Robert Klemme <>
    wrote:

    > 2008/5/24 Aaron Turner <>:
    > > The easiest way to add SSL to any application is to run stunnel on
    > > each of your servers and have it proxy to your server listing on a
    > > port on the loopback interface. That way your server doesn't even
    > > have to know SSL and it's easy to debug. Whatever you do DO NOT
    > > design your own crypto solution- notice the Debian guys couldn't even
    > > make a small "fix" without breaking ssh horribly.

    >
    > Definitively not cook your own!
    >
    > > On a side note, there are already free solutions for this sort of
    > > thing... just search freshmeat.net.

    >
    > Yet another alternative might be to just use ssh, i.e. replace your
    > demon with sshd and execute commands directly via ssh. This also
    > allows for secure file transfers (scp). dshc and dshp then become
    > wrapper for a ssh call. Note that with ssh-agent you don't even have
    > to enter passwords for all the servers.
    >
    > Kind regards
    >
    > robert
    >
    > --
    > use.inject do |as, often| as.you_can - without end
    >
    >
     
    Victor Reyes, May 26, 2008
    #5
  6. 2008/5/26 Victor Reyes <>:
    > Actually, ssh was my first choice and we use it for a short period until it
    > became impractical.


    From your posting it is not fully clear to me why it was "impractical".

    > 6. *root* can only be use via *su*.


    If ssh is allowed and several people should be allowed to become
    "root" on all the machines, then you might as well allow root access
    via ssh (probably with password auth disabled for improved security).

    > 7. The solution I am looking for is to be used only by the sys admins.
    > 8. My *first solution* was using *ssh* as it is fully allowed by the sec
    > group. Since authenticating would be impractical when executing a cmd on
    > over 100 servers, we created public/private keys, which was a pain below the
    > waist to distribute for everyone.


    Hm... Normally I would have expected home directories to be shared
    via nfs in such a setup. Even if not, you could have automated this.

    > Also, since in many instances we needed to
    > run *root* commands, that was a real problem since we would have to
    > either setup keys for root or implement* sudo*. That's why I decided to
    > create my own poor-man distributed remote command processor.


    ... which was identified as a security threat. :) As always there is
    the tradeoff between security and convenience.

    > So, this is what I need to do.
    >
    > Create an environment where a sys admin:
    >
    > 1. log-in with her userid as we do daily and su to root.
    > 2. Execute a root cmd remotely on a server or multiple servers and
    > receive the reply on the local server. We use one server as a the main
    > server. Kind of a control work station.
    > 3. The communication between the main (local) server and the remote
    > server(s) must be "secured" (ssh, ssl, encryption, whatever)


    I'd still use ssh or stunnel. You could even use ssh's port
    forwarding feature to connect to your remote command processor.
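
    For example, with the net-ssh gem (a sketch; host, user and ports are
    invented):

      require 'net/ssh'

      # tunnel local port 4000 to the command processor listening on the
      # remote host's loopback interface, so everything rides inside ssh
      Net::SSH.start('aix01.example.com', 'admin') do |ssh|
        ssh.forward.local(4000, '127.0.0.1', 4000)
        ssh.loop { true }   # keep the session and the forward open
      end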

    Kind regards

    robert

    --
    use.inject do |as, often| as.you_can - without end
     
    Robert Klemme, May 26, 2008
    #6
  7. On Mon, 26 May 2008, Victor Reyes wrote:

    > 2. ssh *IS permitted.*
    > 3. We are less than than 10 UNIX admins.
    > 4. We have over 100 AIX servers behind splitted among vlans and each
    > behind different firewalls.


    10 admins for only 100 servers? You've got it easy!

    > 8. My *first solution* was using *ssh* as it is fully allowed by the sec
    > group. Since authenticating would be impractical when executing a cmd on
    > over 100 servers, we created public/private keys, which was a pain below the
    > waist to distribute for everyone. Also, since in many instances we needed to
    > run *root* commands, that was a real problem since we would have to
    > either setup keys for root or implement* sudo*. That's why I decided to
    > create my own poor-man distributed remote command processor.


    You only need to distribute keys the hard way once. After that, you can
    use the existing account to distribute more keys. In my last job, this
    was known as the "abuse matt" option since I was the first person to have
    keys everywhere. Using sudo is a very good idea, I highly recommend you
    install and configure it.

    > 1. log-in with her userid as we do daily and su to root.
    > 2. Execute a root cmd remotely on a server or multiple servers and
    > receive the reply on the local server. We use one server as a the main
    > server. Kind of a control work station.
    > 3. The communication between the main (local) server and the remote
    > server(s) must be "secured" (ssh, ssl, encryption, whatever)


    Take a look at gsh/ghosts. Written in perl, but it works very well.

    -- Matt
    It's not what I know that counts.
    It's what I can remember in time to use.
     
    Matt Lawrence, May 27, 2008
    #7
  8. Victor Reyes

    Victor Reyes Guest

    [Note: parts of this message were removed to make it a legal post.]

    Thanks for the advice and info.

    On Mon, May 26, 2008 at 9:03 PM, Matt Lawrence <> wrote:

    > On Mon, 26 May 2008, Victor Reyes wrote:
    >
    > 2. ssh *IS permitted.*
    >> 3. We are less than than 10 UNIX admins.
    >> 4. We have over 100 AIX servers behind splitted among vlans and each
    >> behind different firewalls.
    >>

    >
    > 10 admins for only 100 servers? You've got it easy!
    >
    > 8. My *first solution* was using *ssh* as it is fully allowed by the sec
    >> group. Since authenticating would be impractical when executing a cmd on
    >> over 100 servers, we created public/private keys, which was a pain below
    >> the
    >> waist to distribute for everyone. Also, since in many instances we needed
    >> to
    >> run *root* commands, that was a real problem since we would have to
    >> either setup keys for root or implement* sudo*. That's why I decided to
    >> create my own poor-man distributed remote command processor.
    >>

    >
    > You only need to distribute keys the hard way once. After that, you can
    > use the existing account to distribute more keys. In my last job, this was
    > known as the "abuse matt" option since I was the first person to have keys
    > everywhere. Using sudo is a very good idea, I highly recommend you install
    > and configure it.
    >
    > 1. log-in with her userid as we do daily and su to root.
    >> 2. Execute a root cmd remotely on a server or multiple servers and
    >> receive the reply on the local server. We use one server as a the main
    >> server. Kind of a control work station.
    >> 3. The communication between the main (local) server and the remote
    >> server(s) must be "secured" (ssh, ssl, encryption, whatever)
    >>

    >
    > Take a look at gsh/ghosts. Written in perl, but it works very well.
    >
    > -- Matt
    > It's not what I know that counts.
    > It's what I can remember in time to use.
    >
    >
     
    Victor Reyes, May 27, 2008
    #8
