Controlling the cache size when using the database as a cache backend in Magento

If you have been using the database as a cache storage mechanism (e.g. <slow_backend>) and you have a large number of SKUs, you will see the database grow considerably in size.

A customer of ours with around 160k SKUs had a "core_cache" table that had grown to just under 10GB and around 960k rows, causing difficulties during some operations due to table locks. Whilst it is safe to empty the table, truncating a 10GB table takes quite a long time, and the table is locked for the duration, causing connections to queue up and eventually max out the connections available to your MySQL user (or the server in total) - not good.
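
To gauge how large your own cache tables have grown, information_schema gives a quick answer (bear in mind table_rows is only an estimate for InnoDB):

SELECT table_name,
       table_rows,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
FROM information_schema.TABLES
WHERE table_schema = DATABASE()
  AND table_name IN ('core_cache', 'core_cache_tag');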

To make it a little more manageable, we quickly added a little intelligence to the table so that it can "self-rotate". This could be implemented in the PHP itself, but for now it was quicker to address the issue via the DB directly.

Moving the old tables out of the way was the essential first step:

CREATE TABLE `new_core_cache` LIKE `core_cache`;
RENAME TABLE `core_cache` TO `old_core_cache`, `new_core_cache` TO `core_cache`;

CREATE TABLE `new_core_cache_tag` LIKE `core_cache_tag`;
RENAME TABLE `core_cache_tag` TO `old_core_cache_tag`, `new_core_cache_tag` TO `core_cache_tag`;

ALTER TABLE `core_cache_tag`
  ADD FOREIGN KEY (`cache_id`) REFERENCES `core_cache` (`id`)
  ON DELETE CASCADE ON UPDATE CASCADE;
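
At this point a quick sanity check does no harm - the live core_cache should be empty (or near enough, as Magento starts writing to it), and the new foreign key should show on core_cache_tag:

SELECT COUNT(*) FROM core_cache;   -- should be at or near zero on the fresh table
SHOW CREATE TABLE core_cache_tag;  -- the new foreign key should be listed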

This then left us with the two old tables, which can be safely dropped in an instant:

DROP TABLE `old_core_cache_tag`;
DROP TABLE `old_core_cache`;

Looking at the number of rows and the volume of data, a safe average was that every 100,000 records stored consumed around 1GB of space. To keep the database manageable, we wanted to put a 100,000-row ceiling on the table (we could also try to cap by size, but InnoDB cannot report an accurate size). The cap was put in place using a MySQL trigger: whenever the table exceeds 100,000 records, it rotates off the 1,000 rows that expire soonest to make room for new entries.

DELIMITER //
CREATE TRIGGER `after_insert_core_cache_tag` AFTER INSERT ON `core_cache_tag`
FOR EACH ROW BEGIN
  DECLARE num_rows INTEGER;
  -- How many cache entries are currently held?
  SELECT COUNT(id) INTO num_rows FROM core_cache;
  IF num_rows > 100000 THEN
    -- Rotate off the 1,000 entries due to expire soonest,
    -- leaving entries with no expiry (expire_time = 0) alone
    DELETE FROM core_cache
    WHERE expire_time > 0
    ORDER BY expire_time ASC
    LIMIT 1000;
  END IF;
END
//
DELIMITER ;
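
You can confirm the trigger is registered, and drop it again later once a more suitable cache backend is in place:

SHOW TRIGGERS LIKE 'core_cache_tag';
DROP TRIGGER IF EXISTS `after_insert_core_cache_tag`;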

There are probably more efficient methods than this, but as a temporary fix, it should keep things in check.
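
One such alternative - a sketch we have not battle-tested, rather than what we ran above - would be to prune expired entries on a schedule with a MySQL event, instead of checking on every tag insert. This assumes expire_time holds a Unix timestamp (which is what Magento's database cache backend writes) and that the event scheduler is enabled:

SET GLOBAL event_scheduler = ON;

CREATE EVENT `prune_core_cache`
ON SCHEDULE EVERY 1 HOUR
DO
  DELETE FROM core_cache
  WHERE expire_time > 0
    AND expire_time < UNIX_TIMESTAMP()
  LIMIT 10000;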

It may also be worth commenting out or increasing any thread_stack definitions in your MySQL configuration if you run into errors when inserting rows.
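
You can check the current value with:

SHOW VARIABLES LIKE 'thread_stack';

Note that thread_stack is not a dynamic variable, so any change made in my.cnf only takes effect after a MySQL restart.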