MySQL with large table.
Posted by Shashikant Kore on June 8, 2005, 2:21 am
I am using MySQL for a table that will have 100M+ records, with an
average record length of 130 bytes.
When the number of records reaches approx. 25M (and the file size gets
close to 4GB), the insert rate falls drastically from 800 per second to
30-40 per second.
* MySQL 3.23.58 on Fedora Core 3
* Table has 4 indices.
* I got rid of the 4GB file size limit with MAX_ROWS=1000000000.
* File system: ext3 on a single disk. ext3 could create a 10GB file
without much trouble, so I am convinced that ext3 is not the
bottleneck.
* Tried using InnoDB engine but it also doesn't meet the requirements.
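The MAX_ROWS workaround mentioned above is applied with ALTER TABLE; a
rough sketch (the table name here is hypothetical):

```sql
-- MAX_ROWS and AVG_ROW_LENGTH let MyISAM size its internal row
-- pointers so the data file can grow past the default 4GB limit.
ALTER TABLE records MAX_ROWS = 1000000000 AVG_ROW_LENGTH = 130;
```

Note that this ALTER TABLE rebuilds the table, which can take a long
time on a table this size.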
Requirements of database:
* A single table in the database with 100M+ rows, each of size 130
bytes.
* 500-600 inserts per second.
* 200 selects and 200 updates per second. (These statements will
affect only one row)
* 3-5 select statements per minute which can return 10k to 500k
rows.
* No foreign keys/ACID transaction requirements.
* Fast recovery in case of crash.
* Does MySQL performance drop when the table grows beyond 4GB?
* Which are the most important server variables that need to be
fine-tuned? Currently I am using only key_buffer = 512M.
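On the tuning question, a minimal my.cnf sketch for a MyISAM-heavy
workload like this one; the values are illustrative starting points,
not recommendations, and the last two variables require MySQL 4.x:

```
[mysqld]
# Index block cache for MyISAM (already set to 512M here)
key_buffer = 512M
# Buffer used to speed up bulk inserts into MyISAM tables
bulk_insert_buffer_size = 64M
# Open table handles; MERGE tables open every underlying table,
# so this needs headroom if MERGE is used
table_cache = 512
```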
Answers/advice/opinions/pointers much appreciated.
Re: MySQL with large table.
You might try using MERGE tables to spread the data over multiple
physical files that are each less than 4G. See:
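A MERGE setup might look roughly like this (a sketch only; the table
and column names are made up, every underlying MyISAM table must have
an identical structure, and on 3.23 you would write TYPE=MERGE instead
of ENGINE=MERGE):

```sql
-- Two identically-structured MyISAM tables holding the actual rows
CREATE TABLE records_1 (id INT NOT NULL, payload CHAR(126), KEY (id))
  ENGINE=MyISAM;
CREATE TABLE records_2 (id INT NOT NULL, payload CHAR(126), KEY (id))
  ENGINE=MyISAM;

-- A MERGE table presenting both as one table; INSERT_METHOD=LAST
-- routes new rows to the last table in the UNION list
CREATE TABLE records (id INT NOT NULL, payload CHAR(126), KEY (id))
  ENGINE=MERGE UNION=(records_1, records_2) INSERT_METHOD=LAST;
```

When a member table fills up, you can add a fresh one and update the
UNION list with ALTER TABLE, which keeps each physical file under 4GB.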
Why such an old version of MySQL? Have you tried using a more recent
version to see if it makes a difference?
Here's a posting from a person who says that switching to xfs or
reiserfs instead of the ext3 filesystem helped get better performance
from MySQL:
Also, have you read the following web page:
There are several good suggestions for speeding up inserts, such as
using LOAD DATA INFILE.
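For reference, LOAD DATA INFILE bulk-loads rows from a flat file,
bypassing much of the per-statement parsing and network overhead of
individual INSERTs. The file path, table name, and delimiters below
are hypothetical:

```sql
-- Load tab-separated rows from a server-local file
LOAD DATA INFILE '/tmp/records.tsv'
  INTO TABLE records
  FIELDS TERMINATED BY '\t'
  LINES TERMINATED BY '\n';
```

This only helps if you can accumulate rows into a file first, so it
suits batched rather than one-at-a-time inserts.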
Re: MySQL with large table.
Bill Karwin wrote:
MERGE tables proved to be of great help. The inserts are now faster. I
am not really looking into the option of LOAD DATA INFILE, as the
inserts are not batched. But if inserts get slower in the future, I
will definitely look into it.
I cannot say confidently that performance has improved a lot. I need to
go through the change log to figure out the new tunable parameters in
4.1 which will improve the performance.
Will look into the option of using XFS/JFS.
Yeah, I had scanned through this. Since I run very basic queries, most
of the suggestions were not applicable.
Meanwhile, I also explored the option of using the BDB engine, but it
didn't meet the requirements either.
If anyone has worked with MERGE tables, please give me some insights
into improving its performance. The manual doesn't speak much on this.