68

I noticed that my database server supports the MEMORY storage engine. I want to take an existing database that currently runs on InnoDB and run it completely in memory for performance.

How do I do that? I explored PHPMyAdmin, and I can't find a "change engine" functionality.

7
  • 4
    You should understand first what you are doing there. Using the memory engine has certain implications. Commented May 21, 2012 at 20:51
  • 3
    running in memory means that the data is wiped as soon as you restart the MySQL daemon (or it crashes), you'd be better off playing with query_cache for MySQL or memcached for caching results Commented May 21, 2012 at 20:54
  • 1
    I'm not sure that this is what you want. What is your performance problem exactly? Commented May 21, 2012 at 21:01
  • 7
    Very unlikely. Add/adjust your indexes. Consolidate your queries for efficiency. Stuff like that is generally more effective and should be your initial approach. Commented May 21, 2012 at 21:10
  • 1
    This is handy for running many tests against POS software with a database. When a test is over, the database is no longer needed. For a full run, 500 databases are loaded and verified later... in RAM this would probably save a lot of time. Commented Jun 16, 2021 at 9:19

7 Answers

31

Assuming you understand the consequences of using the MEMORY engine, as mentioned in the comments here and in others you'll find by searching around (no transaction safety, locking issues, etc.), you can proceed as follows:

MEMORY tables are stored differently than InnoDB tables, so you'll need an export/import strategy:

  1. Dump each table separately to a file using SELECT * FROM tablename INTO OUTFILE 'table_filename'.
  2. Create the MEMORY database and recreate each table you'll be using with this syntax: CREATE TABLE tablename (...) ENGINE = MEMORY;.
  3. Import your data using LOAD DATA INFILE 'table_filename' INTO TABLE tablename for each table.
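A minimal sketch of that export/import round-trip (the table name, column, and file path are placeholders; note that `secure_file_priv` may restrict where OUTFILE can write, and MEMORY tables don't support BLOB/TEXT columns):

```sql
-- 1. Dump the existing InnoDB table to a flat file
SELECT * FROM mytable INTO OUTFILE '/tmp/mytable.txt';

-- 2. Recreate the structure and switch it to the MEMORY engine
CREATE TABLE mytable_mem LIKE mytable;
ALTER TABLE mytable_mem ENGINE = MEMORY;

-- 3. Bulk-load the data back in
LOAD DATA INFILE '/tmp/mytable.txt' INTO TABLE mytable_mem;
```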


4 Comments

Nowhere does this actually say how to create an in memory database or table.
To the point made by @INTPnerd, when recreating the tables, use the syntax: CREATE TABLE someTable (...) ENGINE = MEMORY;
Is there a reason you don't suggest mysqldump? I just tried it and didn't see any issues (just a very quick test though).
@BetLamed restoring using the output of mysqldump will work, but much slower than a bulk import since each INSERT will need to run separately (with the incurred overhead of an INSERT transaction). Bulk import does away with that, and can be expected to be much faster for large amounts of data.
31

It is also possible to place the MySQL data directory on a tmpfs, thus speeding up database reads and writes. It might not be the most efficient approach, but sometimes you can't just change the storage engine.

Here is my fstab entry for my MySQL data directory

none            /opt/mysql/server-5.6/data  tmpfs   defaults,size=1000M,uid=999,gid=1000,mode=0700          0       0

You may also want to take a look at the innodb_flush_log_at_trx_commit=2 setting. Maybe this will speed up your MySQL sufficiently.

innodb_flush_log_at_trx_commit changes MySQL's disk-flush behaviour. When set to 2, the log buffer is flushed to disk only about once per second. With the default value of 1, every transaction commit forces a flush, which causes more I/O load.
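For reference, the relevant my.cnf lines might look like this (the value shown trades some durability for speed):

```ini
[mysqld]
# 0 = write and flush the log ~once per second, nothing at commit
# 1 = flush at every commit (default, fully durable)
# 2 = write at commit, flush to disk ~once per second
innodb_flush_log_at_trx_commit = 2
```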

3 Comments

You may also want to take a look at libeatmydata. It intercepts fsync and related sync calls and thus speeds up the application. flamingspork.com/projects/libeatmydata (apt-get install eatmydata ; eatmydata /etc/init.d/mysql restart)
I'm thinking about having the entire MySQL instance in memory and another MySQL instance to replicate and persist the data. Of course, assuming I can deal with the risk of losing data because of replication delay.
@Jotschi That link 404s for me
16

Memory Engine is not the solution you're looking for. You lose everything that you went to a database for in the first place (i.e. ACID).

Here are some better alternatives:

  1. Don't use joins - very few large apps do this (e.g. Google, Flickr, Netflix), because joins perform poorly on large data sets.

A LEFT [OUTER] JOIN can be faster than an equivalent subquery because the server might be able to optimize it better—a fact that is not specific to MySQL Server alone.

-The MySQL Manual

  2. Make sure the columns you're querying against have indexes. Use EXPLAIN to confirm they are being used.
  3. Use and increase your query cache and the memory allotted to your indexes, to keep them in memory and serve frequent lookups.
  4. Denormalize your schema, especially for simple joins (i.e. get fooId from barMap).
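To check the index point in practice, EXPLAIN shows whether an index is actually being used (the table and column here are made up for illustration):

```sql
-- Assuming an index exists on orders(customer_id)
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
-- In the output, the `key` column names the index being used;
-- if it is NULL, the query is scanning the whole table.
```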

The last point is key. I used to love joins, but then had to run joins on a few tables with 100M+ rows. No good. You're better off inserting the data you're joining against into the target table (if it's not too much) and querying against indexed columns; then you'll get your result in a few ms.
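A sketch of the denormalization described above, using the fooId/barMap example from the list (all names are hypothetical):

```sql
-- Normalized: every read pays for a join
SELECT f.* FROM foo f JOIN barMap b ON b.fooId = f.id WHERE b.barId = 7;

-- Denormalized: barId is copied into foo once and indexed,
-- so reads hit a single table through the index
ALTER TABLE foo ADD COLUMN barId INT, ADD INDEX idx_barId (barId);
UPDATE foo f JOIN barMap b ON b.fooId = f.id SET f.barId = b.barId;
SELECT * FROM foo WHERE barId = 7;
```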

I hope those help.

9 Comments

That last point may be great for performance, but it can be a killer for data consistency and maintenance if not used properly. Trigger updates or similar cache replication strategies can help there, but the manner of caching the denormalized data shouldn't be taken for granted...
@PinnyM good point. This is where one must decide what is most important. RDBMS are great for atomicity and consistency, but not for scaling and performance, esp on joins. 6NF (Star Schema) is very normal, but join happy. You can still keep the data in some tables in a normalized manner and use triggers and scheduled tasks to update the denormalized (read only) tables from the normalized tables (read/write).
1 and 4 are the worst advice ever.
@Darwin thanks for the downvotes. Pick consistency or scale. With billions of rows, consistency often isn't worth the overhead, depending on your use case (e.g. like all the Col stores I use at work).
@JosephLust you are right, "pick consistency or scale". And about 99 percent of databases are best off with consistency. Most developers never get to work with large databases. By large I mean too big to fit in memory for a reasonable price. I believe you can today get servers with 0.5 to 1 TB of memory at a reasonable price.
7

If your database is small enough (or if you add enough memory), it will effectively run in memory anyway, since your data will be cached after the first request.

Changing the database table definitions to use the memory engine is probably more complicated than you need.

If you have enough memory to load the tables into memory with the MEMORY engine, you have enough to tune the innodb settings to cache everything anyway.
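A hedged example of the InnoDB tuning this answer alludes to (the size is illustrative; it depends entirely on your data set and available RAM):

```ini
[mysqld]
# Size the buffer pool so the working set fits in memory;
# a common starting point is 50-75% of RAM on a dedicated server
innodb_buffer_pool_size = 4G
```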

Comments

6

"How do I do that? I explored PHPMyAdmin, and I can't find a "change engine" functionality."

In direct response to this part of your question: you can issue ALTER TABLE tbl ENGINE = MEMORY; and MySQL will recreate the table in that engine (the same syntax, e.g. ENGINE = InnoDB, converts it back).
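For completeness, you can first check which engine each table currently uses, then alter it (the database name is a placeholder):

```sql
-- See the current engine of every table in a schema
SELECT TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb';

-- Change one table's engine (remember: MEMORY tables
-- lose their contents on server restart)
ALTER TABLE mydb.tbl ENGINE = MEMORY;
```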

Comments

4

Additional thoughts:

Ramdisk - set the temp directory MySQL uses to a RAM disk; very easy to set up.

memcache - a memcached server is easy to set up; use it to store the results of your queries for X amount of time.
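If you go the ramdisk route, pointing MySQL's temp directory at a tmpfs mount might look like this (the path is illustrative and must be writable by the mysql user):

```ini
[mysqld]
# /mnt/ramdisk should be a tmpfs mount, e.g. from /etc/fstab:
#   none /mnt/ramdisk tmpfs defaults,size=512M,uid=mysql 0 0
tmpdir = /mnt/ramdisk
```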

2 Comments

It has a Linux release as well. Good recommendation. Up-voted !!!
Thanks Thanasis I needed a win today... appreciated!
2

In place of the MEMORY storage engine, one can consider MySQL Cluster. It is said to give similar performance while supporting disk-backed operation for durability. I haven't tried it, but it looks promising (and has been in development for a number of years).

You can find the official MySQL Cluster documentation here.

Comments
