Trimming a large log file?

I am looking for a simple way to keep my log files from growing too large.
Basically I want something that truncates off the first 25 KB of a 100 KB log
file. I could do something with reading the file twice: first to determine the
number of lines, then a second time to write a copy of the reduced log file.
Any simpler workarounds?

Re: Trimming a large log file?

On Tue, 28 Nov 2006 20:11:54 -0800, veg_all wrote:


If you're using a *nix os, then logrotate's probably your best bet.

Re: Trimming a large log file?

Another poster mentioned logrotate, which is a very easy way to do it.
You can also do it with something like log4php, if you're willing to use an
external lib. Here's an example configuration:

- Create a properties file (e.g. log4php.properties) in your app's dir. Set
the appender's file attribute to your log file path; a fuller sketch of the
properties follows these steps.

    log4php.rootLogger=DEBUG, LOG

- Initialize log4php in an include or your script:

    define('LOG4PHP_CONFIGURATION', dirname(__FILE__) . '/log4php.properties');  // example file name
    require_once(dirname(__FILE__) . '/log4php/src/log4php/LoggerManager.php');  // adjust to where you unpacked log4php
    register_shutdown_function(array('LoggerManager', 'shutdown'));

- Log away in your scripts:

    $log =& LoggerManager::getLogger('MyApp');
    $log->debug("Debug test.");
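
The LOG appender referenced in the first step also has to be defined in the
same properties file. Something along these lines should work (the appender
name, file path, and size limits are placeholders, and exact property names
can vary between log4php releases, so check the examples that ship with the
lib):

    log4php.appender.LOG=LoggerAppenderRollingFile
    log4php.appender.LOG.File=app.log
    log4php.appender.LOG.MaxFileSize=100KB
    log4php.appender.LOG.MaxBackupIndex=1
    log4php.appender.LOG.layout=LoggerLayoutSimple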

If not using log4php, basically you would check the size of the log
file before appending to it (use ftell or filesize), and if it exceeds
your maximum, close the file, move it to a backup or delete it, then
re-open the file, and proceed to write. The LoggerAppenderRollingFile
class in log4php/src/log4php/appenders/LoggerAppenderRollingFile.php is
a good example of this.
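
A minimal PHP sketch of that check-then-rotate idea, with made-up function
and file names, might look like:

    <?php
    // Minimal sketch: rotate the log when it gets too big, then append.
    // log_append(), $logfile and $max_bytes are invented names, not log4php API.
    function log_append($logfile, $line, $max_bytes = 102400)
    {
        clearstatcache();
        if (file_exists($logfile) && filesize($logfile) >= $max_bytes) {
            // Move the full log aside; the fopen() below creates a fresh one.
            rename($logfile, $logfile . '.old');
        }
        $fh = fopen($logfile, 'a');
        if ($fh) {
            fwrite($fh, $line . "\n");
            fclose($fh);
        }
    }
    ?>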

Not precisely the rotating buffer that you wanted, but it might do for most
apps... If you do need a rotating buffer, you could do it in the way you
described.

Re: Trimming a large log file?


You can also do it using the standard Unix "head", "tail", and optionally  
"wc" commands.  (You can use "man" to look those up.)

For example, if you want to keep the last 20,000 lines of the log:

tail -n 20000 infile >outfile

If you're a clever shell programmer, you may find a way to shift the files
around so that no appends are lost, but I'm not that clever.  You need to
understand Unix file semantics and all that.

Rotating the logs (without actually trying to modify or shorten the current  
log file) is actually much simpler (a simple "mv").  Unix file semantics  
guarantee that if a file is open and you rename it, any appends to it will  
get done.  The next open-for-append call will create a new file.  
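
If you want to drive that rotation from PHP rather than a shell script, the
rename is just rename(); a rough sketch, with an invented path and threshold:

    <?php
    // External rotation (e.g. from a cron'd maintenance script): just rename.
    // Writers that already hold the old file keep appending to it until they
    // next fopen(..., 'a'), which creates a fresh log under the original name.
    $log = '/var/log/myapp.log';                 // example path
    if (file_exists($log) && filesize($log) > 100 * 1024) {
        rename($log, $log . '.' . date('Ymd-His'));
    }
    ?>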

Re: Trimming a large log file?

Thanks for the replies. Here is what I came up with in Perl; I'll transfer it
to PHP since I need it there as well.

sub check_log_size {

    my $size = ( -s 'log' ) / 1024;
    $size = int $size;

    if ( $size > 25 ) {
        open (FILEHANDLE1, "log") || &die_sub("Can't open log");
        open (FILEHANDLE2, ">log_old") || &die_sub("Can't open log_old");
        while (<FILEHANDLE1>) { print FILEHANDLE2 $_; }
        close (FILEHANDLE1);
        close (FILEHANDLE2);
        unlink ('log');
    } # end if
} # end sub
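
In PHP I figure the rough equivalent (same file names, same copy-then-delete
approach) would be something like:

    <?php
    // Rough PHP port of the Perl sub above; same 'log' / 'log_old' names.
    function check_log_size()
    {
        $size = (int) (filesize('log') / 1024);   // size in KB

        if ($size > 25) {
            if (!copy('log', 'log_old')) {
                die("Can't copy log");
            }
            unlink('log');
        }
    }
    ?>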

Re: Trimming a large log file?


The solution you proposed above probably has race conditions.  Specifically,
if a log entry is made (by another process) between the time you start copying
the log and the time you unlink it, you will lose some lines from the log file.

The more traditional solution is just to rename the log file to a new name  
(i.e. something like "mv log log_old").  In other words, no copy, just "mv".

Each process that writes the log file has code like this (and I could be  
wrong on the exact form, too lazy to look up the 'C' library calls).

FILE *handle = fopen("log", "a");  /* Note the "a" for append */
fprintf(handle, "This is my log file entry.\n");
fclose(handle);

The rationale--and somebody please whack me if I'm wrong--is that Unix file
semantics guarantee that when one process has a file open for append and
another process renames it, the two operations simply happen in some order
and nothing is lost.  If the "mv"
happens after the "fopen" above but before the "fprintf", then the log entry  
will be appended to the renamed file.  The next call to fopen in append mode  
will create the new current log file (to replace the one that was renamed).

So, when you "mv" a file in order to rotate logs, there will in practice be  
a fraction of a second after the rename where some pending log entries will  
get written to the renamed file, but none will be lost.

This traces to Unix file semantics, inodes, and all that.

I'm sure Perl has an "mv" function that maps to the operating system's mv.

Re: Trimming a large log file?


Also, if you're seeing this problem for the first time, this might help:

The operating system assigns CPU time to processes in unpredictable patterns,
so your script can be paused at exactly the wrong moment (between the copy and
the unlink) while another process appends to the log.

The "mv" solution is robust because Unix was designed that way ... the  
solution you proposed probably has a race conditions that may cause log file  
lines to be lost.  

Re: Trimming a large log file?


   If filesize > 2 MB, rename log.txt to log.bak
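
In PHP, that rule is roughly:

    <?php
    // The one-liner above, spelled out: rotate once log.txt passes 2 MB.
    if (file_exists('log.txt') && filesize('log.txt') > 2 * 1024 * 1024) {
        rename('log.txt', 'log.bak');
    }
    ?>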
