In a data-driven world, every second a database lags can cost you money and customers. It's therefore vital to keep an eye on your MySQL database performance on Linux, and to consider every option available for improving it.
Much of the time, the biggest way to boost performance is to throw hardware at the problem. However, most people don't have unlimited budgets. Fortunately, there are a number of configuration changes you can make to a MySQL installation that drastically improve performance, depending on your particular use cases.
Be sure to monitor performance before you make any changes, and follow up afterward to get a real idea of how much things have changed. You may be surprised by the gains you've made, or more surprised still that things didn't change at all, or even got worse. A popular benchmarking tool for MySQL servers is sysbench.
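As a sketch of what a before-and-after benchmark might look like, the commands below use sysbench 1.0's bundled OLTP script. The host, user, password and database names are placeholders for your own test environment, and the table size and thread count are only illustrative starting points:

```shell
# Create a test table with 100,000 rows (assumes a database and user
# named "sbtest" already exist and the password is "secret")
sysbench oltp_read_write \
  --mysql-host=localhost --mysql-user=sbtest --mysql-password=secret \
  --mysql-db=sbtest --tables=1 --table-size=100000 prepare

# Run a 60-second mixed read/write benchmark with 8 client threads
sysbench oltp_read_write \
  --mysql-host=localhost --mysql-user=sbtest --mysql-password=secret \
  --mysql-db=sbtest --tables=1 --table-size=100000 \
  --threads=8 --time=60 run

# Drop the test table when finished
sysbench oltp_read_write \
  --mysql-host=localhost --mysql-user=sbtest --mysql-password=secret \
  --mysql-db=sbtest --tables=1 cleanup
```

Run the same `run` command before and after each configuration change and compare the reported transactions per second; changing one setting at a time makes it clear which change produced which result.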
A simple fix: Move to 64-bit
It is surprisingly common for people to run the 32-bit version of software like MySQL (and maybe even a 32-bit operating system) on a 64-bit machine. Given the computational nature of databases, however, a move from 32-bit to 64-bit can give you significant performance gains. If you have 64-bit hardware, don't miss out on its potential.
Choosing your default storage engine
One of the biggest areas where you can improve performance is using storage engines to your advantage. In MySQL versions before 5.5, all tables use the MyISAM storage engine by default (from 5.5 onward, the default is InnoDB). The MyISAM default is perfectly acceptable when only one or a few people or processes access a table. These days, however, a database, and even a single table, may be accessed or changed many times per second.
Each time a record is changed in a table handled by MyISAM, the entire table is locked until the change is complete. If, for example, a web application makes many concurrent changes to the same table, the change queue grows and grows as the table is locked, each change is made, and the lock is released.
MySQL lets you set which storage engine manages each table within each database. For any table where you have this problem, or anticipate having it, set the storage engine to InnoDB. The InnoDB engine locks row by row, rather than by table, allowing many more changes to be made at once. For tables that are mostly read rather than written to, leave them managed by MyISAM to gain other efficiencies. If a table is entirely read-only and small enough for the available RAM, you might even consider the MEMORY storage engine, which loads the table into RAM.
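As a sketch, assuming a hypothetical database `mydb` with a write-heavy `orders` table and a small, read-only `country_codes` lookup table:

```sql
-- See which storage engine each table currently uses
SHOW TABLE STATUS FROM mydb;

-- Move a write-heavy table to InnoDB for row-level locking
ALTER TABLE orders ENGINE = InnoDB;

-- Load a small, read-only lookup table entirely into RAM
ALTER TABLE country_codes ENGINE = MEMORY;
```

Keep in mind that MEMORY tables are emptied whenever the server restarts, so they are only suitable for data you can reload from elsewhere.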
When converting a table's storage engine, keep in mind that the process can put a heavy load on your server, and the table has to be locked while it is converted. Be sure to make these kinds of changes during off-peak hours.
Optimizing InnoDB settings
For those tables you've configured to use the InnoDB storage engine, one of the smartest changes you can make is to adjust the innodb_buffer_pool_size setting. If MySQL is running on a dedicated database server (itself another way to boost performance), you can allocate as much as 50% to 80% of the machine's memory to this setting. If MySQL shares the server with other functions, limit it to 20% to 30% of memory.
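A minimal configuration-file sketch, assuming a dedicated database server with 8 GB of RAM (the value itself is illustrative; scale it to your own hardware):

```ini
# /etc/my.cnf -- illustrative buffer pool sizing
[mysqld]
# 50-80% of RAM on a dedicated database server; 20-30% on a shared machine
innodb_buffer_pool_size = 5G
```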
Another change that may make a big difference for those using InnoDB is tuning the innodb_thread_concurrency parameter. Each new request launches a thread, which continues running until it has returned the result it was asked for. While a multi-threaded approach allows MySQL to handle many requests at once, you do hit a point of diminishing returns where adding more threads doesn't give you any performance gains. This is followed by a later point when adding more threads makes everything slower.
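The parameter can be adjusted at runtime, which makes it easy to benchmark different values. A sketch, where 16 is only an assumed starting point rather than a recommendation:

```sql
-- Cap the number of threads running inside InnoDB at once
-- (16 is a hypothetical starting value; benchmark before and after)
SET GLOBAL innodb_thread_concurrency = 16;

-- Confirm the current value
SHOW VARIABLES LIKE 'innodb_thread_concurrency';
```

A value of 0 removes the cap entirely; measure throughput at several settings to find where your diminishing-returns point lies.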
InnoDB normally stores all of its table contents in one big file. If that file gets corrupted, the single-file layout can get you into a world of hurt. If you're using InnoDB, it's highly recommended to configure the storage engine to save each table's contents into a separate file using the innodb_file_per_table option. Not only does this practice prevent a single corrupted file from destroying your entire database, it also helps ensure that you're using disk space more efficiently.
To understand how you get this efficiency, imagine that you have everything in one massive data file. A process gets out of control and adds so much data that it practically fills your hard drive. You erase gigabytes' worth of bad data from the offending table. However, as far as your operating system is concerned, that space is still in use by MySQL. When you add new data, MySQL knows that huge chunk of empty space is available, but there's no getting it back for the rest of the system without completely tearing down and restoring all of your tables. With one file per table, you can tear down and restore just the tables that were affected. Another advantage of this approach is that you can spread the database across different partitions if necessary.
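Enabling the option is a one-line configuration change:

```ini
# /etc/my.cnf
[mysqld]
innodb_file_per_table = 1
```

Note that the setting only affects tables created after it is enabled; existing tables stay in the shared file until they are rebuilt, for example with `ALTER TABLE tablename ENGINE = InnoDB;` or `OPTIMIZE TABLE tablename;` (subject to the same locking caveats as any engine conversion).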
To get an overall idea of your InnoDB engine's behavior, use the command SHOW ENGINE INNODB STATUS (older MySQL versions also accept the deprecated SHOW INNODB STATUS).
Investigate your options before making changes
There are many more angles you can take when improving MySQL performance, with multiple books, websites and blogs happy to offer suggestions. As with any change to a mission-critical application like a production database, be sure to thoroughly investigate each option before moving forward, or you may do more harm than good.
Finally, if you experiment by making your changes on the MySQL command line, remember to go back and add them to your startup script or your configuration files. Otherwise, your configuration changes will be lost when you restart the database server.
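As a sketch of the two halves of that workflow (using max_connections purely as an illustrative variable):

```sql
-- Takes effect immediately, but is lost when mysqld restarts
SET GLOBAL max_connections = 300;
```

```ini
# /etc/my.cnf -- persists the same setting across restarts
[mysqld]
max_connections = 300
```

Keeping the runtime change and the configuration-file entry in sync means the server behaves the same before and after its next restart.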
ABOUT THE AUTHOR: Dee-Ann LeBlanc is a technology author and editor specializing in Linux, CMS, and open source. Her work has appeared everywhere from "Linux for Dummies" to "The Lab with Leo."