processing raw logs faster

I am trying to process raw IIS log files and insert them into a MySQL
database. I have no problem accomplishing this, but the php code runs
very slow.

Right now, it is processing 10,000 lines of a log file every ~300
seconds. The daily log files I am dealing with have over 500,000 records
each, so at that rate a single day's file takes over four hours to process.


$server = "localhost:3306"; // db host
$user = ""; // db username
$pswd = ""; // db password
$db = ""; // db name

$DB_link = mysql_connect($server, $user, $pswd) or die(mysql_error());

mysql_select_db($db, $DB_link) or die(mysql_error());

$start_time = time();

$handle = fopen("/logs/ex050830.log", "r");
$linecnt = 0;
$totalcnt = 0;
while (!feof($handle)) {

    $buffer = fgets($handle, 20000);
    if ($buffer === false) break;

    // skip the W3C header lines, which start with #
    if (!preg_match("/^\s*?#/", $buffer)) {
        $line = split(" ", $buffer);

        $stmt = "INSERT INTO logs ( `hit_date` , `hit_time` , `s-sitename` , `s-computername` , " .
                "`s-ip` , `cs-method` , `cs-uri-stem` , `cs-uri-query` , `s-port` , `cs-username` , `c-ip` , " .
                "`cs-version` , `User-Agent` , `Cookie` , `Referer` , `cs-host` , `sc-status` , `sc-substatus` , " .
                "`sc-win32-status` , `sc-bytes` , `cs-bytes` , `time_taken` )" .
                " VALUES ( '".$line[0]."', '".$line[1]."', '".$line[2]."', '".$line[3]."', '".$line[4]."', " .
                "'".$line[5]."', '".$line[6]."', '".$line[7]."', '".$line[8]."', '".$line[9]."', " .
                "'".$line[10]."', '".$line[11]."', '".$line[12]."', '".$line[13]."', '".$line[14]."', " .
                "'".$line[15]."', '".$line[16]."', '".$line[17]."', '".$line[18]."', '".$line[19]."', " .
                "'".$line[20]."', '".$line[21]."' )";

        mysql_query($stmt, $DB_link) or die(mysql_error());
        $linecnt++;
    }

    if ($linecnt >= 10000) {
        $totalcnt += $linecnt;
        echo "[ ".$totalcnt." ( ".(time() - $start_time)." ) ]\t";
        $linecnt = 0;
    }
}
fclose($handle);

Re: processing raw logs faster

On 31 Aug 2005 12:43:15 -0700, wrote:


 Is it the PHP that's slow, or the database updates? Put some timing code in to
get an idea of which bit takes longest, so you can optimise the right part.
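
For instance, something along these lines (a rough sketch reusing $handle and
the loop from the original post; build_insert() is a hypothetical helper
standing in for the long INSERT string concatenation, and microtime(true)
needs PHP 5):

$parse_time  = 0.0;
$insert_time = 0.0;

while (!feof($handle)) {
    $buffer = fgets($handle, 20000);
    if ($buffer === false || preg_match("/^\s*?#/", $buffer)) continue;

    $t0 = microtime(true);
    $line = split(" ", $buffer);
    $stmt = build_insert($line);   // hypothetical helper wrapping the string-building
    $t1 = microtime(true);

    mysql_query($stmt) or die(mysql_error());
    $t2 = microtime(true);

    $parse_time  += $t1 - $t0;     // time spent in PHP, parsing and building
    $insert_time += $t2 - $t1;     // time spent waiting on MySQL
}
echo "parse: " . $parse_time . "s, insert: " . $insert_time . "s\n";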

Andy Hassall

Re: processing raw logs faster


I removed the SQL INSERT and the code ran faster, but not by much.


What would you suggest?

Re: processing raw logs faster

On Wed, 31 Aug 2005 17:27:53 -0700, wrote:


You can get a decent database in PostgreSQL. It's kinda better suited for
heavy OLTP processing than MySQL. Not as good as Oracle RDBMS, but
definitely getting there.


Re: processing raw logs faster

Mladen Gogala wrote:


No - he wanted it to go faster.


Here's your first problem. Regexes are slow. My Perl REs are a bit
rusty - but that looks a bit suspect anyway. Try coding it without REs.


Join the strings together - OK, it doesn't help readability - but you
will get some performance benefit. Actually it would be a lot better to
move the invariant parts outside the loop:

$stub = "INSERT INTO logs....VALUES(";
while (!feof($handle)) {
        $stmt = $stub . "'".$line[0]."', '".$line[1]."', '".$line[2]."', ...";
You could try a more functional approach to generating the VALUES clause -
something like:

        $stmt = $stub . "'" . implode("','",$line) . "')";

This could be more efficient:

if (!($linecnt % 10000)) {
        echo "[ ".$linecnt." ( ".(time() - $start_time)." ) ]\t";
}

You should also get a boost by using INSERT DELAYED (assuming your MySQL
version and table type support it).
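
Reusing the $stub idea from above, that's a one-line change (a sketch;
DELAYED is a MySQL extension that queues the row server-side and returns
immediately, so insert errors are no longer reported back to the client):

// swap INSERT for INSERT DELAYED in the pre-built statement stub
$stub = str_replace("INSERT INTO", "INSERT DELAYED INTO", $stub);
$stmt = $stub . "'" . implode("','", $line) . "')";
mysql_query($stmt) or die(mysql_error());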




Re: processing raw logs faster

Colin McKinnon wrote:

The PCRE extension caches compiled regexes, so the overhead isn't that
high when you keep testing the same expression. It's highly unlikely
that you can write something in plain PHP that is more efficient than
the expression /^\s*#/.

The call to split() definitely should be replaced by explode(),
however. The POSIX regex functions do recompile the expression each
time they are called.
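
For a fixed single-character delimiter the change is a drop-in replacement:

// explode() is a plain string split, with no regex machinery at all
$line = explode(" ", $buffer);   // was: $line = split(" ", $buffer);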

Re: processing raw logs faster

On Thu, 01 Sep 2005 09:49:27 +0100, Colin McKinnon wrote:


Well, that is precisely what he will get, unless he's using MySQL in
single-user mode. Having been an Oracle DBA for 15+ years, working for large
companies (my largest database was > 1TB, with 3000 concurrent users), I
forgot that there are such databases. MySQL is serious junk: it has
problems with transactions (if "dirty reads" are disabled and standard
ANSI-level transaction isolation is enabled, database speed drops to 25% of
the original), it is prone to lock escalation (many row locks get converted
to a table lock; row locking was only introduced in MySQL v4, so it is very
new), and it is notorious for index corruption and for indexes having to be
rebuilt. Postgres has no lock escalation, normally operates at the
ANSI-standard serialization level, and its indexes have to be rebuilt only
on rare occasions. For large loads, which is precisely what he's describing,
nothing in the world beats Oracle RDBMS with parallel/direct insert;
Postgres is the best free alternative. The ease with which you dismissed
Postgres tells me that you probably don't know much about databases.


I love unexplained declarations/qualifications like this! In the form of a
verdict, such absolute statements need no explanation; unfortunately, this
verdict is bound to be struck down on appeal. Regex is a standard
programming mechanism and is not "slow" unless used carelessly. Loops can
also be slow if used carelessly - avoid loops, too? How about switch
statements? I will not even mention PHP5 and the try-catch combination with
exceptions. Creating monstrosities without regex will save you nothing,
because the few instructions saved by refraining from regular expressions
will be more than compensated for by the monstrosity of the program without
them. Your logic leads directly to assembly language programming - without
a doubt much faster than PHP, Perl, Java or anything else, and you will not
have to suffer from regex. Just buy a few mathematics books describing the
theory of algorithms, including finite automata, and implement that in
assembly language. The speed will be unbeatable, once you finish the
program in the year 2525. Obey the principles of good programming and you
will find that the programs work fine, even with REs. Write stupid programs
that are algorithmically incorrect, and not even coding in binary will make
them less stupid.


Some advice is not meant to be followed, and this is one such example. The
first thing to do is to profile your program, using Zend Studio if
possible, and find out where the time is spent. That is the most logical
thing to do: if your complaint is that "the program is slow", find out
exactly what is slow and where the time goes. My experience tells me that
the main cause of slowness is usually inadequate use of the database, and
with MySQL that's not hard to do. Unless you know the database very well,
you will likely have performance problems, and avoiding REs will then do
nothing but mutilate, scar and disembowel the program.


Re: processing raw logs faster

You should really rethink your approach. What information do you want
to extract from the log? Simply sticking every page hit into the
database will cause MySQL to blow up in no time.
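
For example, if all you actually want is hits per URI, the tallying can
happen in a PHP array and half a million log lines shrink to a few thousand
rows (a sketch; the daily_hits table and its columns are made up for
illustration):

$hits = array();
while (!feof($handle)) {
    $buffer = fgets($handle, 20000);
    if ($buffer === false || preg_match("/^\s*?#/", $buffer)) continue;
    $line = explode(" ", $buffer);
    $uri = $line[6];   // cs-uri-stem, per the column order in the original post
    $hits[$uri] = isset($hits[$uri]) ? $hits[$uri] + 1 : 1;
}
foreach ($hits as $uri => $count) {
    mysql_query("INSERT INTO daily_hits (uri, hits) VALUES ('" .
                mysql_real_escape_string($uri) . "', " . $count . ")")
        or die(mysql_error());
}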

Re: processing raw logs faster

Why do you need PHP code at all?  If I remember correctly, IIS log
files are delimited text files that can be imported directly
into MySQL using the LOAD DATA INFILE command...
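
A sketch of what that might look like with the file and column names from
the original post (the code above splits on spaces, i.e. the W3C extended
format, so the field terminator here is a space; IGNORE 4 LINES assumes the
usual four-line #Software/#Version/#Date/#Fields header, and the server
needs read access to the file plus the FILE privilege):

// one statement instead of 500,000 INSERTs; the server parses the file itself
$sql = "LOAD DATA INFILE '/logs/ex050830.log' INTO TABLE logs " .
       "FIELDS TERMINATED BY ' ' LINES TERMINATED BY '\\n' " .
       "IGNORE 4 LINES " .
       "(`hit_date`, `hit_time`, `s-sitename`, `s-computername`, `s-ip`, " .
       "`cs-method`, `cs-uri-stem`, `cs-uri-query`, `s-port`, `cs-username`, " .
       "`c-ip`, `cs-version`, `User-Agent`, `Cookie`, `Referer`, `cs-host`, " .
       "`sc-status`, `sc-substatus`, `sc-win32-status`, `sc-bytes`, " .
       "`cs-bytes`, `time_taken`)";
mysql_query($sql) or die(mysql_error());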

