java newbie, error in java or in squid proxy setup?

Discussion in 'Java' started by pantagruel, Mar 10, 2008.

  1. pantagruel

    pantagruel Guest

    Hi,

    I am wondering whether this is a problem with the Squid setup on my
    network or a problem with Java. I have the following code:


    import java.net.*;
    import java.io.*;

    public class WebSiteReader {
        public static void main(String[] args) {
            try {
                // Route HTTP traffic through the proxy ("the proxy server"
                // and the port are placeholders for the real values)
                System.setProperty("http.proxyHost", "the proxy server");
                System.setProperty("http.proxyPort", "80");

                // Create the URL object that points at the default
                // file index.html
                URL url = new URL("some URL to read in");
                System.out.println("opening connection");
                URLConnection urlConn = url.openConnection();
                System.out.println("connection opened");

                BufferedReader buff = new BufferedReader(
                        new InputStreamReader(urlConn.getInputStream()));
                System.out.println("before while");

                // Read and print the lines from index.html
                String nextLine;
                while ((nextLine = buff.readLine()) != null) {
                    System.out.println(nextLine);
                }
                buff.close();
            } catch (MalformedURLException e) {
                System.out.println("Please check the URL: " + e);
            } catch (IOException e1) {
                System.out.println("Can't read from the Internet: " + e1);
            }
        }
    }


    If I put in a URL that is just a top-level domain, for example http://www.google.com,
    it returns the Apache directory-listing page, the one that says
    "Index of /", / being the path. (This is on a primarily Windows
    network, but there are probably some Linux setups on it that I don't
    know of; at any rate the Apache server is running on Ubuntu, probably
    in a VM somewhere.)

    If I try any URL that is not a top-level one, for example
    http://www.google.com/ig?hl=en&esrch=BetaShortcuts&btnG=Search,
    I get a java.io.FileNotFoundException for the URL.
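    One way to see what is actually coming back (just a diagnostic
    sketch; the URL and "the proxy server" are the same placeholders as
    in my code above) is to ask HttpURLConnection for the status code and
    read the error stream instead of letting getInputStream() throw:

    import java.net.*;
    import java.io.*;

    public class ProxyProbe {
        public static void main(String[] args) throws IOException {
            System.setProperty("http.proxyHost", "the proxy server");
            System.setProperty("http.proxyPort", "80");

            URL url = new URL("http://www.google.com/ig");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();

            // A FileNotFoundException from getInputStream() normally means
            // the server (or the proxy) answered 404; this makes it visible.
            System.out.println("HTTP status: " + conn.getResponseCode());

            // If there is an error body (e.g. a Squid error page), print it.
            InputStream err = conn.getErrorStream();
            if (err != null) {
                BufferedReader r = new BufferedReader(new InputStreamReader(err));
                String line;
                while ((line = r.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }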

    Now, any URL in a browser goes straight through, of course. If I try
    any URL in curl without a proxy (not just top-level domains), it
    tells me it can't connect to the host; if I run curl with the same
    proxy configuration I set in my Java code above, it fetches all URLs
    correctly.

    So is there a setting I should change in my Java code, or is it
    something that should be fixed in the proxy server's configuration?
    Can anyone point to something that would be causing this?
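    For example, is there a way to hand the proxy to the connection
    explicitly? A sketch of what I mean, using java.net.Proxy instead of
    the two system properties (same placeholder host and port as above;
    this would replace the setProperty lines and the plain
    openConnection() call):

    Proxy proxy = new Proxy(Proxy.Type.HTTP,
            new InetSocketAddress("the proxy server", 80));
    URLConnection urlConn = url.openConnection(proxy);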

    Can anyone think of a way to track down the problem? I don't want to
    go complain to the admin unless absolutely necessary.
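    One rough check (the header names depend on the Squid configuration,
    so this is only a sketch): dump the proxy-related response headers
    right after openConnection() and see whether Squid actually handled
    the request.

    // Squid usually stamps responses it relayed; a missing Via header
    // would suggest the request never went through the proxy at all.
    System.out.println("Via: " + urlConn.getHeaderField("Via"));
    System.out.println("X-Cache: " + urlConn.getHeaderField("X-Cache"));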

    thanks.
    pantagruel, Mar 10, 2008
    #1

  2. pantagruel

    pantagruel Guest

    Hi,

    Ok, I solved the problem.

    thanks

    pantagruel, Mar 10, 2008
    #2
