going back from catch to try - webspider

A

ameerov

I'm writing a webspider. When a bad URL is encountered, a
MalformedURLException is thrown. What I want to do is ignore this bad
URL's exception and continue reading the other URLs.

I'm not very familiar with try/catch blocks, but as I understand it there
is no way to go from the catch BACK TO the try (after encountering an
exception).

So how is it possible to ignore this exception and thus SKIP
reading the bad URL so I can continue reading the good ones?

Thanks.
 
A

Andrew Thompson

I'm writing a webspider. When a bad URL is encountered, a
MalformedURLException is thrown. What I want to do is ignore this bad
URL's exception and continue reading the other URLs.

I'm not very familiar with try/catch blocks, but as I understand it there
is no way to go from the catch BACK TO the try (after encountering an
exception).

It depends: if the try/catch is entirely inside the loop, the
loop itself can continue with the next iteration.
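
A minimal sketch of that arrangement (the class name and sample URLs are
only illustrative, not from the original posts): because the try/catch sits
inside the loop body, a MalformedURLException only abandons the current
iteration, and the loop moves on to the next URL.

import java.net.MalformedURLException;
import java.net.URL;
import java.util.Arrays;
import java.util.List;

public class SpiderLoop
{
    public static void main(String[] args)
    {
        // Illustrative input; a real spider would collect these from parsed pages.
        List<String> urls = Arrays.asList(
            "http://www.example.com/", "not a url", "http://www.physci.org/");

        for (String s : urls)
        {
            try
            {
                URL url = new URL(s);   // throws MalformedURLException for "not a url"
                System.out.println("Reading " + url);
                // ... open the connection and read the page here ...
            }
            catch (MalformedURLException mue)
            {
                // Only this iteration is abandoned; the loop continues.
                System.err.println("Skipping bad URL: " + s);
            }
        }
    }
}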

A better group for those new to Java is
<http://www.physci.org/codes/javafaq.jsp#cljh>
 
B

Bjorn Abelli

I'm writing a webspider. When a bad URL is encountered,
a MalformedURLException is thrown. What I want to
do is ignore this bad URL's exception and continue reading
the other URLs.

I'm not very familiar with try/catch blocks, but as I
understand it there is no way to go from the catch BACK TO
the try (after encountering an exception).

So how is it possible to ignore this exception and
thus SKIP reading the bad URL so I can continue reading
the good ones?

That depends on your specific implementation.

Let's say that you have all the URLs as strings in a list of some sort.

Then you iterate through them somehow, e.g.:

for (int i = 0; i < numberOfUrls; i++)
{
    String url = (String) listOfUrls.get(i);

    try
    {
        // Some parsing or whatever that can
        // throw a MalformedURLException.

        // Here you proceed with the URLs that haven't
        // thrown a MalformedURLException.
    }
    catch (MalformedURLException mux)
    {
        // Here you do whatever you want, e.g.
        // write the String url to a logfile, to
        // check at a later time.
    }
}

// Bjorn A
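
For completeness, a filled-in version of that skeleton might look like the
following. It's only a sketch: the sample URLs, the bad-urls.log file name,
and new URL(url) as the step that throws are assumptions added for
illustration.

import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

public class UrlChecker
{
    public static void main(String[] args) throws IOException
    {
        List<String> listOfUrls = new ArrayList<String>();
        listOfUrls.add("http://www.example.com/");
        listOfUrls.add("htp:/broken");   // deliberately malformed

        PrintWriter badLog = new PrintWriter(new FileWriter("bad-urls.log"));
        int numberOfUrls = listOfUrls.size();

        for (int i = 0; i < numberOfUrls; i++)
        {
            String url = listOfUrls.get(i);

            try
            {
                URL u = new URL(url);        // may throw MalformedURLException
                System.out.println("OK: " + u);
                // ... fetch and parse the page here ...
            }
            catch (MalformedURLException mux)
            {
                badLog.println(url);         // note it for later and move on
            }
        }

        badLog.close();
    }
}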
 
R

Roedy Green

So how is it possible to ignore this exception and thus SKIP
reading the bad URL so I can continue reading the good ones?

You put a try block around the whole thing, so if anything fails it
jumps to the end, where you can do a finally and close things down.
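
That is the opposite trade-off from putting the try/catch inside the loop:
with a single try around the whole thing, the first bad URL ends the run,
and the finally block is where cleanup (closing streams, flushing logs)
happens either way. A rough sketch, with illustrative names and URLs:

import java.net.MalformedURLException;
import java.net.URL;
import java.util.Arrays;
import java.util.List;

public class SpiderAllOrNothing
{
    public static void main(String[] args)
    {
        List<String> urls = Arrays.asList(
            "http://www.example.com/", "oops", "http://www.physci.org/");

        try
        {
            for (String s : urls)
            {
                URL url = new URL(s);   // the first bad URL jumps out of the loop
                System.out.println("Reading " + url);
            }
        }
        catch (MalformedURLException mue)
        {
            System.err.println("Stopped at a bad URL: " + mue.getMessage());
        }
        finally
        {
            // Close connections, flush logs, etc.; this runs whether or not
            // an exception occurred.
            System.out.println("Done.");
        }
    }
}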
 
