
MySQL Archive Engine

+2 votes

I have a customer who wants to use the Archive engine. I have no experience with this engine other than what I have been reading. Why would I want to use Archive over InnoDB? They are only going to be placing audit information in the table.

posted Sep 17, 2013 by Kumar Mitrasen


3 Answers

+1 vote

From the MySQL manual:

The ARCHIVE engine supports INSERT and SELECT, but not DELETE, REPLACE, or UPDATE. It does support ORDER BY operations, BLOB columns, and basically all but spatial data types (see "MySQL Spatial Data Types" in the manual). The ARCHIVE engine uses row-level locking.

Storage: Rows are compressed as they are inserted. The ARCHIVE engine uses zlib lossless data compression. You can use OPTIMIZE TABLE to analyze the table and pack it into a smaller format (for a reason to use OPTIMIZE TABLE, see later in that section of the manual). Beginning with MySQL 5.0.15, the engine also supports CHECK TABLE. There are several types of insertions that are used: […]
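For an insert-only audit table, a minimal sketch of what this looks like in practice (the table and column names here are hypothetical):

```sql
-- ARCHIVE tables are append-only: INSERT and SELECT work, UPDATE/DELETE do not.
CREATE TABLE audit_log (
    logged_at DATETIME     NOT NULL,
    actor     VARCHAR(64)  NOT NULL,
    action    VARCHAR(255) NOT NULL
) ENGINE=ARCHIVE;

INSERT INTO audit_log VALUES (NOW(), 'app_user', 'login');  -- OK, compressed on insert
SELECT * FROM audit_log ORDER BY logged_at;                 -- OK

-- UPDATE audit_log SET action = 'x';  -- fails: ARCHIVE does not support UPDATE
-- DELETE FROM audit_log;              -- fails: ARCHIVE does not support DELETE

-- After bulk loads, repack the table into a smaller format:
OPTIMIZE TABLE audit_log;
```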

Can you tell me why a customer who wants to place AUDIT information in a table might want that?

answer Sep 17, 2013 by Mandeep Sehgal
Yeah, the more I think about it, the more it makes sense. Are there any special my.cnf settings we should be looking at? We currently have our systems tuned for InnoDB as our primary engine.

Thanks for the feedback.
+1 vote

We use Archive for archive clusters (obviously), and it is a good option to save a lot of disk space if you can accept slightly worse performance. As someone pointed out, make sure you know which statements you use, because ARCHIVE doesn't support all of them.

If Archive isn't an option but you still want to save some disk space, you can use InnoDB compression.

In all the tests we did, we saw some performance degradation, but nothing too serious and nothing we couldn't afford. If you decide to try this, make sure you do a PoC so you know how it could impact your scenario.
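As a rough sketch, enabling InnoDB compression on a table looks like this (on MySQL 5.5/5.6 it requires file-per-table tablespaces and the Barracuda file format; the table name is hypothetical):

```sql
-- Prerequisites, typically set in my.cnf:
--   innodb_file_per_table = 1
--   innodb_file_format    = Barracuda

-- Compress an existing table; KEY_BLOCK_SIZE trades CPU for disk (try 8, then 4):
ALTER TABLE audit_log ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
```

Smaller KEY_BLOCK_SIZE values compress harder but cost more CPU on reads and writes, which is why a PoC on your own data is worth the effort.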

answer Sep 18, 2013 by Seema Siddique
+1 vote

If you are looking for great compression, another option is TokuDB. It supports quicklz, zlib, and lzma compression.
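A hedged sketch of choosing the compressor per table in TokuDB, using its ROW_FORMAT names (the table and columns are hypothetical):

```sql
-- TokuDB selects the compression algorithm via ROW_FORMAT:
--   TOKUDB_QUICKLZ (fastest), TOKUDB_ZLIB, TOKUDB_LZMA (smallest)
CREATE TABLE audit_log (
    logged_at DATETIME     NOT NULL,
    action    VARCHAR(255) NOT NULL
) ENGINE=TokuDB ROW_FORMAT=TOKUDB_LZMA;

-- Or switch an existing TokuDB table to a faster compressor:
ALTER TABLE audit_log ROW_FORMAT=TOKUDB_QUICKLZ;
```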

answer Sep 20, 2013 by Luv Kumar
Similar Questions
+1 vote

I was trying to optimize my InnoDB tables. I executed the following query to detect which tables are fragmented:

SELECT TABLE_NAME FROM information_schema.TABLES WHERE TABLE_SCHEMA NOT IN ("information_schema","mysql") AND DATA_FREE > 0;

After that, I saw that there are 49 fragmented tables. On one table, I executed "OPTIMIZE TABLE table_name;" and "ANALYZE TABLE table_name;". The result is the same: the table remains fragmented.

Any suggestions? I have followed the mysqltuner recommendations...

+1 vote

I've restored a MySQL backup from our MySQL server onto another server. The backup includes InnoDB tables. After the import, MySQL recognized the InnoDB tables fine, but when I try to do a CHECK TABLE it returns that the table doesn't exist.

Permissions and ownership of the table files (.frm files) are OK, since it recognizes MyISAM tables (they have the same permissions). The InnoDB engine is enabled.

What can cause the tables to appear as "non-existent" when they really do exist?

+1 vote

I have some tables in my server database that are too big and produce slow queries, even with the correct indexes created.

For my application, it's necessary to keep all the data, because we run an authentication process with RADIUS users (AAA protocol) to determine whether a user can navigate the Internet (depending on the time of all his sessions).

So, with 8GB of data in one table, what would you advise? Fragmentation and sharding are ruled out because we are working with disk arrays, so they don't apply. Another option is to delete rows, but in this case I can't. On the other hand, maybe the only possible solution is to increase resources (RAM).

Any suggestions?

+1 vote

$ python -m pip install MySQL-python

Collecting MySQL-python
 Downloading (108kB)
 100% |################################| 112kB 260kB/s
 930 [main] python2.7 12948 child_info_fork::abort: address space needed by 'datetime.dll' (0x870000) is already occupied
 Error [Errno 11] Resource temporarily unavailable while executing command python egg_info
Traceback (most recent call last):
 File "/usr/lib/python2.7/site-packages/pip/", line 215, in main
 status =, args)
 File "/usr/lib/python2.7/site-packages/pip/commands/", line 324, in run
 File "/usr/lib/python2.7/site-packages/pip/req/", line 380, in prepare_files
 File "/usr/lib/python2.7/site-packages/pip/req/", line 634, in _prepare_file
 File "/usr/lib/python2.7/site-packages/pip/req/", line 129, in prep_for_dist
 File "/usr/lib/python2.7/site-packages/pip/req/", line 439, in run_egg_info
 command_desc='python egg_info')
 File "/usr/lib/python2.7/site-packages/pip/utils/", line 667, in call_subprocess
 cwd=cwd, env=env)
 File "/usr/lib/python2.7/", line 390, in __init__
 errread, errwrite)
 File "/usr/lib/python2.7/", line 917, in _execute_child
 self.pid = os.fork()
OSError: [Errno 11] Resource temporarily unavailable

Has anyone hit a similar issue?