Logging problem

Novice

I've suddenly started having an odd problem with respect to my logging and
would appreciate a hint or tip to help me sort it out.

For some reason I can't figure out, the last line of the log, which simply
says:

</log>

is not being written. When I try to open the log file, I see a companion
file with it that appears to be a lock. (If the log file is Foo.log.xml,
the companion file is Foo.log.xml.lck and the companion file's size is 0.)
If I delete the lock file, I can open the log file and it is missing the
final </log> tag.

Now, I know that the final </log> tag is written within XMLFormatter in a
simple method called getTail(). What I can't figure out is why getTail()
isn't being invoked any more. I don't know how it was being invoked before
- I certainly never call it in my code - but it was being invoked because
that last tag was in the file. Now, suddenly, it's not.

I'm trying to figure out what I need to do to make getTail() execute again
"automagically" the way it used to. Unfortunately, I don't know how to get
past this on my own and am not sure where to look.
 
Arne Vajhøj


Don't log in XML format ...

Arne
 
Novice

> Don't log in XML format ...

Obviously, that is one option but it wouldn't be my first choice ;-)

I strongly prefer the XML logs because I've got an XSL that formats the
XML log for my purposes and makes the log very easy to read. It's worked
fine for weeks now - and still does when the last </log> tag is in the
file - so I just want to figure out what foolish thing I've done to keep
the getTail() method from working.
 
Arne Vajhøj


The concept of writing per log message and then having to write a
termination line after the last log message is very problematic.

Arne
 
Martin Gregorie

> The concept of writing per log message and then having to write a
> termination line after the last log message is very problematic.

He might possibly get away with it if he wraps everything in main() in a

try {
    ...
} catch (Exception e) {
    ...
} finally {
    ...
}

structure but I, for one, would prefer to design the log so it doesn't
need a terminal line.

Don't forget that almost the only time the log will be looked at is after
something has gone wrong. This is precisely when an incomplete log is
most likely to be produced.
 
markspace

> I've suddenly started having an odd problem with respect to my logging and
> would appreciate a hint or tip to help me sort it out.
>
> For some reason I can't figure out, the last line of the log, which simply
> says:
>
> </log>
>
> is not being written.


Did you close the Handlers in use?


Can you produce an SSCCE that duplicates the problem? Try to use a
MemoryHandler if possible, so your SSCCE doesn't write files to a
tester's hard drive.
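To illustrate the first question, here is a minimal SSCCE sketch. It substitutes a StreamHandler over an in-memory buffer for the FileHandler (in the spirit of the MemoryHandler suggestion, so nothing touches a tester's disk); all class and logger names here are illustrative. It shows that XMLFormatter's tail only appears once the handler is closed:

```java
import java.io.ByteArrayOutputStream;
import java.util.logging.Logger;
import java.util.logging.StreamHandler;
import java.util.logging.XMLFormatter;

public class TailDemo {
    // Logs one record and returns the XML produced so far.
    // If 'close' is true the handler is closed first, which writes the tail.
    static String log(boolean close) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        Logger logger = Logger.getLogger("tail.demo." + close);
        logger.setUseParentHandlers(false);
        StreamHandler handler = new StreamHandler(buf, new XMLFormatter());
        logger.addHandler(handler);
        logger.info("hello");
        if (close) {
            handler.close();   // flushes and writes getTail() -> </log>
        } else {
            handler.flush();   // records reach the buffer, but no tail yet
        }
        return buf.toString();
    }

    public static void main(String[] args) {
        System.out.println(log(false).contains("</log>")); // false
        System.out.println(log(true).contains("</log>"));  // true
    }
}
```

If the program exits without that close() call, the file stops exactly where the OP's log stops: records present, tail missing.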
 
markspace

> Novice wrote:
> ...
>
> If you stay with this line, I recommend writing a recovery program that
> can build a valid XML file from an arbitrarily truncated log, not just
> the missing end.


This is a good idea and I was going to suggest it in my post as well,
then I decided to hear back from the OP first.

I wonder if a relatively primitive parser like SAX could be used for
this? It might be immune enough to parsing errors to allow arbitrary
unclosed tags to be detected and then added to a file.

<http://docs.oracle.com/javase/7/docs/api/javax/xml/parsers/SAXParser.html>
 
Arne Vajhøj


I would not rely on an assumption about exactly when a SAX parser
would barf on invalid XML.

Arne
 
Novice

markspace said:
> Did you close the Handlers in use?
>
> Can you produce an SSCCE that duplicates the problem? Try to use a
> MemoryHandler if possible, so your SSCCE doesn't write files to a
> tester's hard drive.

I could try to create an SSCCE but before I do, I think I need to take a
step back and sort out some conceptual stuff with your help.

As I started composing a reply to one of the responders in the
thread, it occurred to me that I really wasn't clear on what was SUPPOSED
to happen with a normal log file. I had gotten as far as determining that
getTail() in XMLFormatter would write the missing </log> tag but I hadn't
really thought through what SHOULD be happening. It occurred to me that
maybe the aberrant behavior was what I _had_ been getting and maybe what
I'm seeing now actually makes sense.

Now that I've read and googled a bit more, I'm thinking I was right about
that theory, although I'm definitely not certain. But in case I'm going
down the wrong path, let me think out loud for a minute and you folks can
tell me if I'm out in left field or if I'm making sense.

After some poking around, I found close() and flush() methods in Handler,
which is abstract, and in its children and grandchildren. The one that
concerns me is FileHandler, which is the one that writes physical logs.
Also, java.util.logging.Formatter, which is abstract, and its descendants
have a getTail() method, which is what writes that missing </log> record.

It seems to me that a real log, one you'd see in a production system,
would want to execute flush() and close() and getTail() when it was about
to be closed. But what events would cause that? I went back to the
logging.properties file and rediscovered
java.util.logging.FileHandler.limit, which has a default value of 50,000.
I'll assume that's lines but it may be the number of log records too; it
doesn't much matter. I'm assuming that's how big a physical log file can
get before it needs to be closed and a new file created. In the meantime,
Java keeps writing log records to that log. The mere fact that I start or
stop a given program shouldn't have any effect on whether the log is
locked or not; the log will be available when it's needed and will get
written until it is full.

Now, what happens when it gets full? Well, if it is a continuous log, we
just keep creating new physical log files, typically with a sequential
number to distinguish it from earlier logs, e.g. log0, log1, log2, etc.
The integer just keeps going up. If it is a circular log, we keep
incrementing the integer until we hit a threshold like 5 or 10, then
start back at 0, overwriting the old log0. If I remember correctly, Java
can do both continuous and circular logs.
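For reference, java.util.logging's FileHandler implements the circular variant: limit is an approximate maximum file size in bytes (not lines or records), count is how many rotated files to keep, and %g in the pattern is the generation number, with generation 0 always the newest. A logging.properties sketch (file names and values here are illustrative):

```properties
java.util.logging.FileHandler.pattern = %h/app%g.log.xml
java.util.logging.FileHandler.limit = 50000
java.util.logging.FileHandler.count = 5
java.util.logging.FileHandler.formatter = java.util.logging.XMLFormatter
```

With count = 5 the handler cycles through app0.log.xml through app4.log.xml, shifting older files up a generation as each one fills, so the oldest is eventually overwritten.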

Have I got this more-or-less right so far? Because that would make sense
to me based on my (somewhat vague) memories of logging in relational
databases.

If I do have that right, then the aberrant behavior was when the log was
nicely closed as soon as my program ended. (Which raises the question why
did THAT happen? But if I'm right about the log not normally getting the
</log> tag added until the log is closed because it's full, then I'm not
going to worry about the aberrant behavior. I just want to know how to
work with a log that is functioning normally.)

So, if a log normally only gets that </log> tag written when it is ready
to be closed because it's full, what happens in the real world if you
want to read a log BEFORE it's full? If there is some kind of system
crash long before the log is full, you're not going to want to wait until
the log is full; you want to read it NOW. So what do you do? That has
to be a routine situation and there must be a routine answer. And that's
as far as I've gotten with my reasoning.

If I'm right, nothing is really broken after all and I was just finding
those log files closed and finished by sheer good luck (or having done
some stupid things that worked in my favor). In that case, I just need to
know how the rest of you arrange to read an incomplete log. Maybe there's
a standard utility that you use to read an incomplete log??


I'll stop there and wait for your feedback before I go off the rails
completely ;-)
 
Arne Vajhøj

> That is the sort of reason why I feel the truncated log issue should be
> examined now, rather than waiting until it happens. Maybe the log is
> sufficiently readable and searchable even in the XML form that Novice
> could use it anyway?

I think he said that he wanted to process it with XSLT.

XSLT will not work with an invalid XML file.

But it should be trivial to add that end tag to the file.

It should even be doable with command shell.
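A shell sketch of that idea, assuming the file name used earlier in the thread, and assuming the only damage is the missing final tag:

```shell
# Append </log> only if the last line doesn't already contain it.
# Foo.log.xml is the hypothetical file name from this thread.
if ! tail -n 1 Foo.log.xml | grep -q '</log>'; then
  echo '</log>' >> Foo.log.xml
fi
```

Note this only repairs a file truncated exactly at the end tag; a record cut off mid-element would still leave invalid XML.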

Arne
 
Arne Vajhøj


I do not like the use of XSL either.

If/when the log file becomes huge, XSLT is going
to be very heavy.

Arne
 
Arne Vajhøj


I still believe that your best option will be to use a non
XML format.

The second best option must be to have a little FixEndTag
utility ready to be used.
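A sketch of such a utility, under the same assumption made so far in the thread, namely that the only damage is the missing tail. The name FixEndTag and everything else here is illustrative, not an existing tool:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class FixEndTag {
    // Appends the closing </log> tag unless the file already ends with it.
    // Returns true if the file was modified.
    static boolean fix(Path log) throws IOException {
        String text = new String(Files.readAllBytes(log), StandardCharsets.UTF_8);
        if (text.trim().endsWith("</log>")) {
            return false; // already terminated cleanly
        }
        Files.write(log, "</log>\n".getBytes(StandardCharsets.UTF_8),
                StandardOpenOption.APPEND);
        return true;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(fix(Paths.get(args[0])) ? "end tag added" : "no change");
    }
}
```

As with the shell variant, this only handles the truncation observed here; a log cut off in the middle of a record element would need real recovery logic.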

Arne
 
markspace

> I think he said that he wanted to process it with XSLT.
>
> XSLT will not work with an invalid XML file.
>
> But it should be trivial to add that end tag to the file.


I think you missed Patricia's key point when she said "arbitrarily
truncated". What if the logger dies at some other place besides the
</log> tag? Your "use the command shell" option won't work.
 
Arne Vajhøj

> I think you missed Patricia's key point when she said "arbitrarily
> truncated". What if the logger dies at some other place besides the
> </log> tag? Your "use the command shell" option won't work.

It won't.

That solution is just handling the observed problem.

Arne
 
Novice



Just to follow up on my own post, I've found a simple way to cause that
closing tag to be written to the log file. My program is launched from a
very simple JFrame that has two buttons, one for each of the two
principal classes. I've added code so that when I click on the close
button on the JFrame, I simply use the Logger object to determine all the
handlers for that Logger. Then, I do a for each loop to go through the
handlers and simply close each one. Upon completion, my physical log file
has no lock file associated with it and I can open it without difficulty.
When I edit the log file, I see that the </log> tag is right where it
should be.
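The shutdown step described above boils down to a few lines. A sketch, with an in-memory handler standing in for the FileHandler and all names illustrative:

```java
import java.io.ByteArrayOutputStream;
import java.util.logging.Handler;
import java.util.logging.Logger;
import java.util.logging.StreamHandler;
import java.util.logging.XMLFormatter;

public class ShutdownDemo {
    static String run() {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        Logger logger = Logger.getLogger("shutdown.demo");
        logger.setUseParentHandlers(false);
        logger.addHandler(new StreamHandler(buf, new XMLFormatter()));
        logger.info("closing down");

        // The "for each" loop from the post: close every handler on this
        // logger, which flushes buffers, writes the </log> tail, and (for
        // a FileHandler) releases the .lck file.
        for (Handler handler : logger.getHandlers()) {
            handler.close();
        }
        return buf.toString();
    }

    public static void main(String[] args) {
        System.out.println(run().trim().endsWith("</log>")); // prints "true"
    }
}
```

In the real program the same loop would run from the JFrame's window-closing handler.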

It's a bit of a sledgehammer approach to the problem but I can fine-tune
it to close only the FileHandler. But it doesn't answer the question of
how the situation would be handled in the real world. Are real production
programs closing physical log files when they stop running? It seems
unlikely that a 24x7 (or nearly 24x7) application would stop running at
all except for a maintenance period yet if it starts writing error
messages to its log, I can't see people waiting until the log is full
before they get their hands on it and find out what's going on.

It would be very helpful if someone could explain what would happen in a
real-world production situation where a long-running batch program or a
constantly-running online program started writing error messages to the
log. What will operators do to be able to read the log? I'm guessing they
force it to close early and then start a new file or they have a tool
that will read the log file correctly even without the final </log> tag.
 
Arne Vajhøj


You should not have to do that kind of stuff.

In a 24x7 environment I would expect:
* the appender to be a rolling file appender so the file gets closed
and a new one opened every day
* log output to be so big that buffers get flushed to disk within
minutes so the current file can be used

Arne
 
Novice

> You should not have to do that kind of stuff.

I agree. It does seem excessive to do that within the program.

> In a 24x7 environment I would expect:
> * the appender to be a rolling file appender so the file gets closed
> and a new one opened every day
> * log output to be so big that buffers get flushed to disk within
> minutes so the current file can be used

But, as you've said all along and Patricia has now supported, you'd be
writing text files, not XML, so there wouldn't be the same issue of
having to write the closing </log> tag (or an HTML equivalent in an HTML
log). Then again, you'd still have a lock even on a text file log, at
least if I'm reading the API correctly, so you'd still have to force an
early rollover of the log to be able to read the one you want to see,
right?

I like the fact that XML gives you lots of options about formatting the
output the way you want it but if it causes problems like this incomplete
file issue I may have to abandon XML logs and go back to simple text
logs, assuming that the simple text logs will be viewable when I want
them, even if that means having to run a simple utility to force an early
rollover.

That seems preferable to having to close handlers within my program.
That's not such a big problem in a programmer test environment but I
don't think a production program would be closing handlers every time the
program ends, especially if it was some kind of online program that was
running all the time.
 
Arne Vajhøj

> I agree. It does seem excessive to do that within the program.
>
> But, as you've said all along and Patricia has now supported, you'd be
> writing text files, not XML, so there wouldn't be the same issue of
> having to write the closing </log> tag (or an HTML equivalent in an HTML
> log).

I would use a flat format with one line per log message and no headers
and trailers.

> Then again, you'd still have a lock even on a text file log, at
> least if I'm reading the API correctly, so you'd still have to force an
> early rollover of the log to be able to read the one you want to see,
> right?

No. I would expect it to be possible to read the file even though
it is open for write by the app.

> I like the fact that XML gives you lots of options about formatting the
> output the way you want it but if it causes problems like this incomplete
> file issue I may have to abandon XML logs and go back to simple text
> logs, assuming that the simple text logs will be viewable when I want
> them, even if that means having to run a simple utility to force an early
> rollover.
>
> That seems preferable to having to close handlers within my program.
> That's not such a big problem in a programmer test environment but I
> don't think a production program would be closing handlers every time the
> program ends, especially if it was some kind of online program that was
> running all the time.

Yup.

Arne
 
Arved Sandstrom

On 12-03-09 01:36 AM, Novice wrote:
[ SNIP ]
> It would be very helpful if someone could explain what would happen in a
> real-world production situation where a long-running batch program or a
> constantly-running online program started writing error messages to the
> log. What will operators do to be able to read the log? I'm guessing they
> force it to close early and then start a new file or they have a tool
> that will read the log file correctly even without the final </log> tag.

In a real-world production situation odds are you're getting errors
logged all the time. There are a whole bunch of errors that will happen,
sometimes frequently, sometimes a few times a day, sometimes once a
month, but you know they _will_ happen. So routine errors get logged all
the time.

As for how do ops support folks read ongoing stuff? Well, surely you
must have used 'tail', or perhaps opened up a decent programming text
editor on a file that's being currently written to, or even just looked
at console output. Apart from some buffering going on (none, line or
block), which really only determines if you see your stuff immediately,
very quickly or just quickly, fact is that what you're logging shows up
in your log file rapidly, and then you can read it - there's no need to
close the file.

As Arne mentioned, a file will get closed when the log file is rolled
over for size and/or time, or because the entire application stopped.
But that's a different matter.

I've worked with several app servers that routinely log some files as
XML; I can't say it's ever occurred to me to take a peek at the XML as
text and see what state it's in during ongoing writing. These are server
logs, not application logs, so I don't usually have reason to.

I'm with Arne on not using XML logs yourself. There's no need for it.
Out in the wide world users of logs will thank you for not using XML.

AHS
 
Novice

> On 12-03-09 01:36 AM, Novice wrote:
> [ SNIP ]
>> It would be very helpful if someone could explain what would happen
>> in a real-world production situation where a long-running batch
>> program or a constantly-running online program started writing error
>> messages to the log. What will operators do to be able to read the
>> log? I'm guessing they force it to close early and then start a new
>> file or they have a tool that will read the log file correctly even
>> without the final </log> tag.
>
> In a real-world production situation odds are you're getting errors
> logged all the time. There are a whole bunch of errors that will
> happen, sometimes frequently, sometimes a few times a day, sometimes
> once a month, but you know they _will_ happen. So routine errors get
> logged all the time.

I'm a little surprised by that. I would have thought actual errors would
get acted on fairly quickly, particularly anything categorized as SEVERE.
(I'll use the java.util.logging levels for these remarks.) I could see
that something that is categorized as WARNING may take longer to fix,
especially if WARNING is used for messages that indicate the program is
able to continue using some kind of alternate strategy; if the alternate
strategy is perfectly reasonable and doesn't skew the result, there's no
great harm in leaving the problem unsolved. But if we assume that a
SEVERE error is a show-stopper, wouldn't that get dealt with right away?

> As for how do ops support folks read ongoing stuff? Well, surely you
> must have used 'tail',

I haven't worked on Unix/Linux in a while but I have used 'tail' ;-)
Frankly, I don't recall if it works on open files but from what you've
said, apparently it does.

> or perhaps opened up a decent programming text
> editor on a file that's being currently written to, or even just
> looked at console output. Apart from some buffering going on (none,
> line or block), which really only determines if you see your stuff
> immediately, very quickly or just quickly, fact is that what you're
> logging shows up in your log file rapidly, and then you can read it -
> there's no need to close the file.

Perfect. Existing tools that will read the file while it's still being
written are what I was hoping for ;-)

> As Arne mentioned, a file will get closed when the log file is rolled
> over for size and/or time, or because the entire application stopped.
> But that's a different matter.

Indeed.

> I've worked with several app servers that routinely log some files as
> XML; I can't say it's ever occurred to me to take a peek at the XML as
> text and see what state it's in during ongoing writing. These are
> server logs, not application logs, so I don't usually have reason to.
>
> I'm with Arne on not using XML logs yourself. There's no need for it.
> Out in the wide world users of logs will thank you for not using XML.

You and Arne and Patricia have convinced me. XML isn't the way to go
after all, especially if it necessitates changing the way things get done
- such as when logs get closed - or writing new tools to be able to read
incomplete files just to be able to read XML logs. That means I can
remove the code that closes the handler from my program, which strikes me
as a good thing ;-)

Well, this little mishap with the XML logs has been a good thing overall
in that it's steered me away from a problematic approach and back towards
something that follows established practice.

Thanks to everyone who contributed to the thread.
 
