January 19, 2010, 8:20 pm
(If anybody knows of a better place to ask this, let me know, but I
couldn't find any memcached-related newsgroups.)
Memcached stores data in pre-allocated slabs of memory and has a 1 MB
limit per stored item. I've been working on a PHP wrapper that handles
large items by splitting them into 1 MB blocks and managing the
storage and retrieval of each part.
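To give an idea of what I mean, the wrapper does something along these lines (a simplified sketch using the old Memcache extension; the key scheme, chunk size, and function names are just illustrative, not my actual code):

```php
<?php
// Simplified sketch of the chunked wrapper. Chunks are kept a little
// under 1 MB to leave some room for the key and per-item overhead.
define('CHUNK_SIZE', 1000000);

// Split a value into numbered chunks keyed off the original key.
function make_chunks($key, $data) {
    $chunks = array();
    foreach (str_split($data, CHUNK_SIZE) as $i => $part) {
        $chunks[$key . ':' . $i] = $part;
    }
    return $chunks;
}

// Storing: write a small "manifest" holding the part count, then each chunk.
function chunked_set(Memcache $mc, $key, $data, $expire = 0) {
    $chunks = make_chunks($key, $data);
    if (!$mc->set($key . ':parts', count($chunks), 0, $expire)) {
        return false;
    }
    foreach ($chunks as $ckey => $part) {
        if (!$mc->set($ckey, $part, 0, $expire)) {
            return false; // partial write; real code should clean up
        }
    }
    return true;
}

// Retrieval: read the manifest, fetch each chunk, and reassemble.
function chunked_get(Memcache $mc, $key) {
    $parts = $mc->get($key . ':parts');
    if ($parts === false) {
        return false;
    }
    $data = '';
    for ($i = 0; $i < $parts; $i++) {
        $chunk = $mc->get($key . ':' . $i);
        if ($chunk === false) {
            return false; // a missing chunk invalidates the whole item
        }
        $data .= $chunk;
    }
    return $data;
}
```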
While testing I found that I can actually store items larger than 1 MB
anyway (without splitting), and I believe this is because of
compression. When my code splits the data into 1 MB chunks, those
chunks are themselves compressed before being stored, so presumably
with my code memcached will never get to use some of the larger slabs
it has available (i.e. data is always stored no matter its size, but
memory is wasted).
If I don't use my code then the data may fill the larger slabs, but a
set will also fail if the item is too big (i.e. more complete use of
memcached's memory, but an upper limit on the size of data that can be
stored). I am wondering about the trade-off between these two
situations.
I get the impression that the compression occurs client-side. It seems
unlikely, but is there a way of finding out how big the compressed
data will be before setting it?
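One idea I had, assuming the client compresses with zlib (which I believe the Memcache extension does when the MEMCACHE_COMPRESSED flag is set), is to compress the value myself and measure it. The gzcompress() level here may not match what the client actually uses, so this can only be an estimate:

```php
<?php
// Estimate the stored size of a value before setting it, assuming the
// client compresses with zlib (as I believe the Memcache extension does
// when MEMCACHE_COMPRESSED is set). The compression level may not match
// the client's exactly, so treat the result as an estimate.
define('ITEM_LIMIT', 1048576); // memcached's default 1 MB item limit

function fits_after_compression($data, $limit = ITEM_LIMIT) {
    $compressed = gzcompress($data);
    // Leave a little headroom for the key and per-item overhead.
    return strlen($compressed) < $limit - 1024;
}
```

If this says a value fits, a plain compressed set is likely to succeed; otherwise the splitting code could kick in, at the cost of compressing everything twice.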
Should I look at the error code returned when a set fails, and only
then split the value up?
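That is, something like this (a sketch only; as far as I can tell Memcache::set just returns a boolean, so there's no way to distinguish a too-big failure from, say, a dead server):

```php
<?php
// So the sketch parses even without the extension loaded.
if (!defined('MEMCACHE_COMPRESSED')) define('MEMCACHE_COMPRESSED', 2);

define('CHUNK_SIZE', 1000000); // a bit under 1 MB for key/flag overhead

// Fallback strategy: try a plain compressed set first; if it fails,
// assume the item was too big and split it. This conflates size
// failures with every other kind of failure -- one drawback here.
function set_with_fallback($mc, $key, $data, $expire = 0) {
    if ($mc->set($key, $data, MEMCACHE_COMPRESSED, $expire)) {
        return true; // fit in one item (possibly thanks to compression)
    }
    // Split into numbered chunks plus a part-count "manifest".
    $chunks = str_split($data, CHUNK_SIZE);
    if (!$mc->set($key . ':parts', count($chunks), 0, $expire)) {
        return false;
    }
    foreach ($chunks as $i => $part) {
        if (!$mc->set($key . ':' . $i, $part, 0, $expire)) {
            return false;
        }
    }
    return true;
}
```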
Does anybody have any other ideas?