Wednesday, October 12, 2011

TIL: Look Out For DEFINER

The Issue


I haven't blogged in a while and I have a long TODO list of things to publish: the repository for the SNMP Agent, video and slides of my OSCON talk and a quick overview of MHA master-master support. In the meantime, here's a little fact from the MySQL CREATE VIEW documentation that I didn't know:

Although it is possible to create a view with a nonexistent DEFINER account, an error occurs when the view is referenced if the SQL SECURITY value is DEFINER but the definer account does not exist.
How can this be possible?

The Problem

For a number of reasons we don't have the same user accounts on the master as we have on the slaves (ie: developers shouldn't be querying the master). Our configuration files include the following line:
replicate-ignore-table=mysql.user
So if we create a user on the master, the user definition doesn't go through the replication chain.

So a VIEW can be created on the master, but unless we run all the proper GRANT statements on the slave as well, the VIEWs won't be usable on the slaves. Example from our slave (output formatted for clarity):

show create view view3\G
*************************** 1. row ***************************
                View: view3
         Create View: CREATE ALGORITHM=UNDEFINED 
               DEFINER=`app`@`192.168.0.1` 
               SQL SECURITY DEFINER VIEW `view3` AS select 
[...]

show grants for `app`@`192.168.0.1`;
ERROR 1141 (42000): There is no such grant defined 
for user 'app' on host '192.168.0.1'
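
For reference, this is the kind of error the application hits on the slave when it touches the affected view (a sketch; the exact message can vary between versions):

select * from view3 limit 1;
ERROR 1449 (HY000): The user specified as a definer 
('app'@'192.168.0.1') does not exist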

The Solution

Once again, Maatkit to the rescue with mk-show-grants on the master:
mk-show-grants | grep 192.168.0.1
-- Grants for 'app'@'192.168.0.1'
GRANT USAGE ON *.* TO 'app'@'192.168.0.1' 
IDENTIFIED BY PASSWORD '*password_hash';
GRANT DELETE, EXECUTE, INDEX, INSERT, SELECT, 
SHOW VIEW, UPDATE ON `pay`.* TO 'app'@'192.168.0.1';
A simple copy from the master and paste onto the slave fixed it.

Conclusion

Every now and then developers come to me with unusual questions. In this case it was: how come I can access only 2 out of 3 views? In cases like these, it usually pays off to not overthink the issue and look into the details. A SHOW CREATE VIEW on the 3 views quickly showed that one had a different host for the DEFINER. A quick read through the documentation and an easy test confirmed the mistake. That's why I have 3 mantras that I keep repeating to whoever wants to listen:
  • Keep it simple
  • Pay attention to details
  • RTFM (F is for fine)
They constantly keep me from grabbing some shears and going into yak-shaving mode.

Wednesday, August 17, 2011

MySQL HA Agent Mini HowTo

Why This Post


While testing Yoshinori Matsunobu's MHA agent I found that although the wiki has very complete documentation, it was missing some details. This article intends to close that gap and bring up some issues to keep in mind when you do your own installation. At the end of the article I added a Conclusions section; if you're not interested in the implementation details but want to read my take on the project, feel free to jump straight to the end.

My Test Case


Most of our production environments can be simplified to match the MHA agent's simplest use case: 1 master with 2 or more slaves and at least one more slave in an additional tier:

Master A --> Slave B
         +-> Slave C --> Slave D

As noted in the documentation, in this case the MHA agent will be monitoring A, B & C only. I found that unless you have a dedicated manager node, a slave on the 3rd tier (Slave D above) is suitable for this role. All 4 servers were set up as VMs for my evaluation / tests, which makes it easier to simulate hard failure scenarios in a controlled environment. Once this is in place the fun begins.

1st Step: User Accounts


All the examples in the documentation use root to log into MySQL and the OS. I prefer to create specific users for each application, so I created a specific MySQL user for the MHA agent and used the Linux mysql user (UID/GID = 27/27 in RedHat / CentOS).

MySQL Credentials

Reviewing the code, I was able to determine that the agent needs to run some privileged commands (SET GLOBAL, CHANGE MASTER TO ..., FLUSH LOGS ..., SHOW SLAVE STATUS, etc.) and creates internal working tables to be used during the master failover. The easiest way to set it up was using:
GRANT ALL PRIVILEGES ON *.* TO 'mha_user'@'ip address' 
IDENTIFIED BY 'password';
This should be repeated on all 4 servers using the IP addresses for all the potential manager nodes. Yes, it would be possible to use wildcards, but I consider restricting access from specific nodes a safer practice.

The MySQL replication user needs to be set up to connect from any other server in the cluster, since any of the slaves in the group could be promoted to master and have the rest of them connect to it.
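
A hedged sketch of what that grant could look like, assuming a repl account and a 192.168.0.0/24 network (the user, network and password are placeholders, adjust them to your environment); it has to exist on all 4 servers:

GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.0.%' 
IDENTIFIED BY 'repl_password';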

Linux User

As I mentioned before, I use the default RedHat / CentOS definition for the mysql user. Keep in mind that if you installed from the official Oracle packages (ie: RPMs), they may not follow this convention, which could result in mismatching UIDs/GIDs between servers. The UIDs/GIDs for the mysql user and group have to be identical on all 4 servers. If this is not the case, you may use the following bash sequence/script as root to correct the situation:

#!/bin/bash 
# stop mysql
/etc/init.d/mysql stop
 
# Change ownership for all files / directories
find / -user mysql -exec chown -v 27 {} \;
find / -group mysql -exec chgrp -v 27 {} \;
 
# remove the old user / group (userdel first, otherwise
# groupdel may refuse to remove the user's primary group)
userdel mysql
groupdel mysql

# Add the new user / group
groupadd -g 27 mysql
useradd -c "MySQL User" -g 27 -u 27 -r -d /var/lib/mysql mysql
 
# restart MySQL
/etc/init.d/mysql start

Once the mysql user is properly set up, you'll have to create passwordless shared SSH keys and authorize them on all the servers. The easiest way to do it is to create the key pair on one of them, append the public key to the authorized_keys file under the /var/lib/mysql/.ssh directory and then copy the whole directory to the other servers.
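
A minimal sketch of that sequence; serverB, serverC and serverD are hypothetical hostnames:

# as the mysql user (sudo su - mysql): generate and authorize a passwordless key pair
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys

# as root: copy the whole directory to the other servers and fix ownership
for host in serverB serverC serverD; do
    scp -r /var/lib/mysql/.ssh ${host}:/var/lib/mysql/
    ssh ${host} chown -R mysql:mysql /var/lib/mysql/.ssh
done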

I use the mysql user to run the scripts since in most distributions it can't be used to log in directly and there is no need to worry about file permissions, which makes it a safe and convenient choice.

2nd Step: Follow The Documentation to Install and Configure


Once all the users have been properly set up, this step is straightforward. Check the Installation and Configuration sections of the wiki for more details.

For the placement of the configuration files I deviated a little bit from the documentation, but not much:

  1. Used a defaults file, /etc/masterha_default.cnf, with access only for user mysql since it includes the MHA agent password:
    -rw------- 1 mysql mysql 145 Aug 11 16:36 masterha_default.cnf
  2. The application settings were placed under /etc/masterha.d/; this way they're easy to locate and won't clutter the /etc directory.
For simplicity, I didn't include any of the optional scripts and checks (ie: secondary check) in the configuration; a sketch of both files is below. You may want to check the documentation and source code of these scripts. Some of them are not even code complete (ie: master_ip_failover). Unless you are implementing some of the more complicated use cases, you won't even need them. If you do, you'll need to write your own following the examples provided with the source code.
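
For reference, here's a hedged sketch of what the two files could contain; the parameter names follow the MHA wiki, but all the values and hostnames are placeholders, so check the documentation for your version:

/etc/masterha_default.cnf:
[server default]
user=mha_user
password=mha_password
ssh_user=mysql
repl_user=repl
repl_password=repl_password
manager_workdir=/var/log/masterha
remote_workdir=/var/log/masterha

/etc/masterha.d/test.cnf:
[server1]
hostname=serverA
[server2]
hostname=serverB
[server3]
hostname=serverC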

Once you have everything in place, run the following checks as the mysql user (ie: sudo su - mysql):
  1. masterha_check_ssh: Using my configuration files the command line looks like:
    masterha_check_ssh --conf=/etc/masterha_default.cnf --conf=/etc/masterha.d/test.cnf
  2. masterha_check_repl: This test will determine whether the agent can identify all the servers in the group and the replication topology. The command line parameters are identical to the previous step.

Both should show an OK status at the end. All utilities have verbose output, so if something goes wrong it's easy to identify the issue and correct it.

3rd Step: Run the Manager Script


If everything is OK, on the MHA manager node (Slave D in my tests) run the following command as user mysql (ie: sudo su - mysql):

masterha_manager --conf=/etc/masterha_default.cnf --conf=/etc/masterha.d/test.cnf

You have to keep in mind that should the master fail, the agent will fail over to one of the slaves and stop running; this way it avoids split-brain situations. You will either have to build the intelligence into the application to connect to the right master after a failover or use a virtual IP. In both cases you might need to use customized IP failover scripts. The documentation provides more details.

Read the section about running the script in the background to choose the method that best fits your practice.

You will have to configure the notification script to get notified of the master failure. The failed server will have to be removed from the configuration file before re-launching the manager script, otherwise it will fail to start.

You can restart the failed server, set it up as a slave connected to the new master and reincorporate it into the replication group using masterha_conf_host.
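
For example, something along these lines should add the rebuilt server back to the application configuration file (a sketch with the hostnames used above; check the masterha_conf_host documentation for the exact options):

masterha_conf_host --command=add --conf=/etc/masterha.d/test.cnf --hostname=serverA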

Conclusion


This tool solves a very specific (and painful) problem: making sure all the slaves are in sync, promoting one of them and changing the configuration of all remaining slaves to replicate off the new master, and it does it fairly quickly. The tool is simple and reliable and adds very little overhead. It's easy to see it is production ready.

The log files are pretty verbose, which makes it really easy to follow in great detail all the actions the agent took when failing over to a slave.

I recommend that any potential users start with a simple configuration and add the additional elements gradually until it fits their infrastructure needs.

Although the documentation is complete and detailed, it takes some time to navigate and to put all the pieces of the puzzle together.

I would like the agent to support master-master configurations; this would minimize the work needed to reincorporate the failed server into the pool. Yoshinori, if you're reading this, know that I'll volunteer to test master-master if you decide to implement it.

Thursday, July 21, 2011

My MySQL SNMP Agent

Back in February I wrote an article titled A Small Fix For mysql-agent. Since then we did a few more fixes to the agent and included a Bytes Behind Master (or BBM) chart. For those who can't wait to get their hands on the code, here's the current version: MySQL SNMP agent RPM. For those who'd like to learn about its capabilities and issues, keep reading.

What to Expect From this Version


The article I quoted above pretty much describes the main differences from the original project, but we went further with the changes while still relying on Masterzen's code for the data collection piece.

The first big change is that we transformed Masterzen's code into a Perl module; this way we can easily plug in a new version without having to do massive editing to ours.

The 2nd change is that we added the code to calculate how many bytes behind the master a slave is, which should always be cross-checked with seconds behind master to get the full replication picture. When a slave is just a few bytes behind, the script calculates the difference straight out of the SHOW SLAVE STATUS information. If the SQL thread is executing statements that are in a binary log file older than the one being updated by the I/O thread, then the script logs into the master to collect the sizes of the previous binary logs and make an accurate calculation of the delta.
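
To illustrate the idea, here's a rough sketch of the arithmetic (this is not the agent's actual code), assuming the slave's credentials live in ~/.my.cnf:

mysql -e 'SHOW SLAVE STATUS\G' | awk '
    $1 == "Master_Log_File:"       { io_file  = $2 }  # binlog the I/O thread is reading
    $1 == "Relay_Master_Log_File:" { sql_file = $2 }  # binlog the SQL thread is executing
    $1 == "Read_Master_Log_Pos:"   { read_pos = $2 }
    $1 == "Exec_Master_Log_Pos:"   { exec_pos = $2 }
    END {
        if (io_file == sql_file) {
            # both threads are in the same binlog: a straight difference is enough
            print "bytes behind master:", read_pos - exec_pos
        } else {
            # SQL thread is on an older binlog: the sizes of the intermediate
            # logs (SHOW BINARY LOGS on the master) have to be added to the delta
            print "SQL thread on an older binlog, query the master for binlog sizes"
        }
    }'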

For this change we hit another bug in the CentOS 5 SNMP agent, by which 64-bit counters were being truncated. The solution is either to upgrade to CentOS 6 (not anytime soon, but that's another story) or to work around it. We decided on the latter and display a variable flagging the value rollover. As far as we know, this is not needed on platforms other than CentOS 5.

By now I expect that many of you have a question in mind:

Why Not Branch / Fork?

Why provide an RPM instead of creating a branch/fork in the original project? There are many reasons, but I'll limit myself to a couple. I trust that before you write an enraged comment you'll keep in mind that this is a personal perception, which might be in disagreement with yours.

This code is different enough from the original that a branch of the original project would be too complicated to maintain. For example: we are using a completely different SNMP protocol and created a module out of the original code. We don't have the resources to keep up with all of Masterzen's potential patches, and I wouldn't expect him to adopt my changes.

Had we created a fork (a new project derived from the original), I believe that at this point it would divert attention from the original project or from others like PalominoDB's Nagios plugin.

What's Next

We plan to continue maintaining this RPM driven by our specific needs and keep sharing the results this way. If at some point we see fit to drive the merge into another project or create a new fork of an existing one, we'll do it.

I will be presenting the project at OSCON next week. If you're going to be around, please come to my talk, Monitoring MySQL through SNMP, and we can discuss questions like: why use pass_persist? why not use INFORMATION_SCHEMA instead of the current method? why not include your personal MySQL instrumentation pet peeve? I'd be glad to sit down with you and chat about it.

In the meantime, enjoy, provide feedback and I hope to get to know you at OSCON next Thursday.

Thursday, May 5, 2011

Some More Replication Stuff

Listening to the OurSQL podcast episode Repli-cans and Repli-can'ts got me thinking about the issues with MySQL replication that Sarah and Sheeri didn't have time to include in their episode. Here's my list:

Replication Capacity Index

This is a concept introduced by Percona in last year's post Estimating Replication Capacity, which I revisited briefly during my presentation at this year's MySQL Users Conference. Why is this important? Very simple: if you use your slaves to take backups, they might be outdated and will fall further behind during the backups. If you use them for reporting, your reports may not show the latest data. If you use them for HA, you may not be able to start writing to the promoted slave until it has caught up.
Having said that, measuring replication capacity as you set up slaves is a good way to make sure that the slave servers will be able to keep up with the traffic on the master.

More On Mixed Replication

The podcast also discussed how mixed replication works and pointed to the general criteria the server applies to switch to STATEMENT or ROW based logging. However, there is one parameter that wasn't mentioned and it might come back and haunt you: the transaction isolation level. You can read all about it in the MySQL documentation, 12.3.6. SET TRANSACTION Syntax, and in particular the InnoDB setting innodb_locks_unsafe_for_binlog.

Keep Binary Logs Handy

Today I found this article from SkySQL on Planet MySQL about Replication Binlog Backup, which is a really clever idea to keep your binary logs safe with the latest information coming out of the master. It offers a method of copying them without the MySQL server overhead. This way, even if you purge binary logs automatically to free space using the expire_logs_days variable, you will still have the logs available when you need them, for longer than the disk capacity on the master might allow.
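
For reference, the automatic purge itself is just a variable you set on the master; the retention below is only an example value:
expire_logs_days=7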

Seconds Behind Master (SBM)

Again, another topic very well explained in the podcast, but here's another case where this number will have goofy values. Let's say you have a master A that replicates master-master with server B, and server C is a regular slave replicating off A. The application writes to A and B serves as a hot stand-by master.
When we have a deployment that requires DDL and/or DML statements, we break replication going from B to A (A to B keeps running to catch any live transactions) and apply the modifications to B. Once we verify that everything is working OK on B, we switch the application to write to B and restore replication going back to A. This offers a good avenue for rolling back in case the deployment breaks the database in any way (ie: rebuild B using the data in A). What we frequently see is that if the DDL/DML statements take about 30 minutes (1800 seconds) on B, once we restore replication as explained, slave C will show outrageous numbers for SBM (ie: more than 12 hours behind; I really don't know how the SBM arithmetic works to explain this). So it's a good idea to complement slave drift monitoring with mk-heartbeat, which uses a timestamp to measure replication drift; a minimal sketch follows.
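
A minimal sketch of that setup with Maatkit, assuming a maatkit database for the heartbeat table (check mk-heartbeat's documentation for the exact options and for how to create the heartbeat table in your version):

# on master A: keep the heartbeat row updated
mk-heartbeat -D maatkit --update &

# on slave C: report the actual replication delay based on the timestamp
mk-heartbeat -D maatkit --monitor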

Conclusion

This episode of the OurSQL podcast is a great introduction to replication and its quirks. I also believe that MySQL replication is one of the features that made the product so successful and widespread. However, you need to understand its limitations if your business depends on it.

These are my $.02 on this topic, hoping to complement the podcast. I wanted to tweet my feedback to @oursqlcast, but it ended up being way more than 140 characters.

Tuesday, February 1, 2011

A Small Fix For mysql-agent

If you're already using an SNMP monitoring tool like OpenNMS, mysql-agent is a great way to add a number of graphs using Net-SNMP. However, mysql-agent has a small bug that drove me crazy. I will try to describe the process of how I discovered it (and hence fixed it), since it involved learning about SNMP, how to diagnose it and eventually, once all the pieces came together, how simple it is to write your own agents.

Although versions are not that important, just for the sake of completeness we were using CentOS 5.5, MySQL 5.5.8 Community RPMs, Net SNMP version 5.3.22 and OpenNMS Web Console 1.8.7.

The Problem

I followed the directions on the mysql-agent blog only to find that I was facing the only open issue listed on mysql-agent's Github repository (spoiler alert: the solution is at the bottom). The setup has several components, which makes it difficult to diagnose:
  • mysql-agent
  • snmpd +  agentx
  • OpenNMS server
Running snmpwalk on the MySQL host, as suggested in the mysql-agent article, worked fine (as far as we could tell). However, OpenNMS wasn't getting the data and the graphs weren't showing up.

It turns out that, once you completed the OpenNMS configuration as described in the article, it's a good idea to run snmpwalk remotely, from the server running OpenNMS, as well. You need to specify your MySQL hostname instead of localhost:
snmpwalk -m MYSQL-SERVER-MIB -v 2c -c public mysql-host enterprises.20267

In our case, it failed. Unfortunately the logs didn't offer much information, and whatever was failing was inside agentx.

The Alternative

Since the NetSNMP Perl class hides a lot of the details of the Net-SNMP API, we decided to use an alternative method to write the agent: pass_persist. The beauty of this method is that you only need to write a filter script: SNMP requests come in through standard input (stdin) and the output needs to be printed to standard output (stdout). As a consequence, the agent can be tested straight from the command line before deploying it. A nice article about pass_persist can be found here. The pass_persist protocol is fully documented in the snmpd.conf man page.

To follow this route we had to tweak the script a little. The tweaks included:
  • No daemonize: Since the script uses stdin/stdout, it needs to run interactively.
  • All values need to be returned as strings. It was the only workaround we found to deal with 64-bit values that otherwise weren't interpreted correctly.
  • stderr needs to be redirected to a file to avoid breaking the script's returned values (add 2>/tmp/agent.log to the end of the command line) while you run it interactively.
  • Use SNMP::Persist Perl module to handle the SNMP protocol.
Once the changes were implemented (I promise to publish the alternative mysql-agent script after some clean up) these are the steps I followed to test it (for now I'll leave the -v option out, along with the stderr redirection).
  1. Invoke the agent as you would've done originally, keeping in mind that now it'll run interactively. On your MySQL server:
    mysql-agent-pp -c /path/to/.my.cnf -h localhost -i -r 30
  2. Test if the agent is working properly (you type PING, the script should answer PONG):
    PING
    PONG
  3. Does it actually provide the proper values? (you type the get / getnext line and the OID; the remaining lines are the script's output):
    get
    .1.3.6.1.4.1.20267.200.1.1.0
    .1.3.6.1.4.1.20267.200.1.1.0
    Counter32
    21
    getnext
    .1.3.6.1.4.1.20267.200.1.1.0
    .1.3.6.1.4.1.20267.200.1.2.0
    Counter32
    16
Note that case is important: PING needs to be capitalized, while get and getnext need to be in lowercase. Once you know it works, you'll need to add the pass_persist line to the snmpd.conf file and restart snmpd:
# Line to use the pass_persist method
pass_persist .1.3.6.1.4.1.20267.200.1 /usr/bin/perl /path/to/mysql-agent -c /path/to/.my.cnf -h localhost -i -r 30
Now execute snmpwalk remotely and if everything looks OK, you're good to go.

On our first runs, snmpwalk failed after the 31st value. We re-tried those specific values, and a few others after them, with get and getnext, and it became obvious that for some of them the responses weren't the expected ones.

The Bug and The Fix

So now, having identified the failing values, it was time to dig into the source code.

First, the data gathering portion, which fortunately is well documented inside the source code. I found ibuf_inserts and ibuf_merged as the 31st and 32nd values (note that with get you can check other values further down the list, which I did to confirm that the issue was specific to some variables and not a generic problem). A little grepping revealed that these values were populated from the SHOW INNODB STATUS output, which in 5.5 didn't include the line expected in the program logic; hence, the corresponding values stayed undefined. A patch to line 794 of the original script fixed this particular issue by setting the value to 0 for undefined values.

794c794
<             $global_status{$key}{'value'} = $status->{$key};
---
>             $global_status{$key}{'value'} = (defined($status->{$key}) and $status->{$key} ne '' ? $status->{$key} : 0);
This fix can be used for the original script and the new pass_persist one. I already reported it upstream on GitHub.

The original script still failed: OpenNMS issues getbulk requests (explained in the Net-SNMP documentation) that agentx fails to convert into getnext. This can be reproduced using snmpbulkwalk instead of snmpwalk (note: it took some tcpdump + wireshark tricks to catch the getbulk requests). The current beta of the pass_persist version of mysql-agent has been in place for a while without issues.

Conclusion

I'm not describing the whole process since it was long and complicated, but I learned a few concepts during this time that I'd like to point out.

Look Around Before Looking for New Toys

If you're using OSS, you may already have in house most of what you need. This project started when we decided to use OpenNMS (already in place to monitor our infrastructure) and wanted to add to it the MySQL data we wanted to monitor closely. A simple Google search pointed us to mysql-agent right away.

Embrace OSS

All the tools that we used in this case are open source, which made it extremely easy to diagnose the source code when pertinent, try alternatives, benefit from the collective knowledge, make corrections and contribute them back to the community. A full evaluation of commercial software, plus the interaction with tech support to get to the point where we needed a patch, would've been as involved as this one, and the outcome wouldn't have been guaranteed either. I'm not against commercial software, but you need to evaluate whether it will add any real value over the open source alternatives.

SNMP is Your Friend

Learning about the SNMP protocol, in particular the pass_persist method, was very useful. It took the mystery out of it, and writing agents in any language (even bash) is far from difficult. I'm looking forward to going deeper into MySQL monitoring using this technology.

I'm hoping this long post encourages you to explore the use of SNMP monitoring for MySQL on your own.

Credit: I need to give credit to Marc Martinez who did most of the thinking and kept pointing me in the right direction every time I got lost.

NOTE: I'm not entirely satisfied with the current pass_persist version of mysql-agent I have in place, although it gets the job done. Once I have the reviewed version, I plan ... actually promise to publish it either as a branch of the existing one or separately.

Friday, January 14, 2011

About InnoDB Index Size Limitations

This is mostly a reflection on a limitation in InnoDB that, in my opinion, has persisted for too long. I found it while reviewing the Amarok media player. The player uses MySQL in the backend, embedded or regular server, so it makes for a great source of real-life data.

The Issue

By default, Amarok uses MyISAM tables. This means that if it crashes or stops unexpectedly (a logout while playing music may cause this), the latest updates to the DB are all lost. So I've been looking into using InnoDB instead to avoid losing my playlists or player statistics.

The Problem

The limitation that bothers me is this one: "Index key prefixes can be up to 767 bytes", which has been in place for several years.
Take this Amarok table for example:
CREATE TABLE urls (
    id int(11) NOT NULL AUTO_INCREMENT,
    deviceid int(11) DEFAULT NULL,
    rpath varchar(324) COLLATE utf8_bin NOT NULL,
    directory int(11) DEFAULT NULL,
    uniqueid varchar(128) COLLATE utf8_bin DEFAULT NULL,

    PRIMARY KEY (id),
    UNIQUE KEY uniqueid (uniqueid),
    UNIQUE KEY urls_id_rpath (deviceid, rpath),
    KEY urls_uniqueid (uniqueid)
) ENGINE=MyISAM AUTO_INCREMENT=314
DEFAULT CHARSET=utf8 COLLATE=utf8_bin
The result of an ALTER TABLE to convert it to InnoDB:
alter table urls engine=InnoDB;
ERROR 1071 (42000): Specified key was too long; max key length is 767 bytes

The Rant

Note that the maximum key length is in bytes, not characters. So let's review the rpath column. This column stores the path to the media file in the catalog, which could easily be something like /very/long/path/to/find/a/file/with/music.mp3. If it only uses English alphabet characters it's not very long, but as soon as you start using multi-byte characters (ie: ñ, ç, ü, etc.) the length of the string in bytes starts to grow beyond the number of characters (in MySQL's utf8 each character can take up to 3 bytes). A simple query shows the difference:

select id, rpath, bit_length(rpath) / 8 as bytes, 
char_length(rpath) as chars 
from urls limit 1;
+----+-----------------------------------------+---------+-------+
| id | rpath                                   | bytes   | chars |
+----+-----------------------------------------+---------+-------+
|  1 | ./home/gnarvaja/Music/Dodododódodo.mp3 | 41.0000 |    39 |
+----+-----------------------------------------+---------+-------+

So how long can the index actually be?

I let MySQL answer this question for me. I created a similar test table with only a PRIMARY KEY and then recreated the index on rpath. Here's the command sequence and its output:
CREATE TABLE urls_test (
  id int(11) NOT NULL AUTO_INCREMENT,
  rpath varchar(324) COLLATE utf8_bin NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;

CREATE INDEX rpath_idx ON urls_test (rpath);
Query OK, 0 rows affected, 2 warnings (0.32 sec)

show warnings;
+---------+------+---------------------------------------------------------+
| Level   | Code | Message                                                 |
+---------+------+---------------------------------------------------------+
| Warning | 1071 | Specified key was too long; max key length is 767 bytes |
| Warning | 1071 | Specified key was too long; max key length is 767 bytes |
+---------+------+---------------------------------------------------------+
2 rows in set (0.00 sec)

SHOW CREATE TABLE urls_test\G
*************************** 1. row ***************************
       Table: urls_test
Create Table: CREATE TABLE `urls_test` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `rpath` varchar(324) COLLATE utf8_bin NOT NULL,
  PRIMARY KEY (`id`),
  KEY `rpath_idx` (`rpath`(255))
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin
1 row in set (0.00 sec)

So we're down to 255 characters from the original 324. In this case (a music player) it may not be a significant loss, but in a world where applications should support an increasing number of international character sets, in particular Asian ones, this limitation could potentially become serious.

I'm not a source code expert, so I'm not sure what it would take to remove, or at least expand, the maximum key size. InnoDB's internal maximum key length (see the URL quoted at the beginning of the article) seems like a good number: 3500 bytes ... or I may be overestimating the need for keys bigger than 255 characters ... your call.
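
In the meantime, if you just need the ALTER TABLE to go through, one workaround is to declare the prefix length explicitly, keeping in mind that a UNIQUE key on a prefix only enforces uniqueness over those first characters (a sketch against the urls table above):

ALTER TABLE urls
    DROP KEY urls_id_rpath,
    ADD UNIQUE KEY urls_id_rpath (deviceid, rpath(255)),
    ENGINE=InnoDB;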

Thursday, December 16, 2010

MySQL 5.5 - Upgrading From Previous Releases

MySQL 5.5 GA has finally arrived and, if you haven't done so already, it's time to test it, starting with the upgrade procedure. MySQL upgrades are usually easy and uneventful; not in this case, so I decided to share my experience with the RHEL 5 RPMs.

Step 0 - Is the upgrade for you?

Do you use InnoDB? This is where most of the improvements have been made, and it could be the main reason why you may want to be an early adopter. For a complete reference of the improvements check the New Features of InnoDB 1.1 section of the official MySQL documentation, as well as Sheeri Cabral's and Ronald Bradford's blogs.

Step 1 - Read the manual

OK, I'm the first one to admit that I usually skip this step, but after a couple of false starts I decided that in this case it might be a good idea to go through the documentation, in particular Upgrading from MySQL 5.1 to 5.5. The Incompatible change entries are quite important in this release and, as you will find out, make the upgrade process a little more complex than usual. Your mileage may vary, but make sure you review them carefully.

Step 2 - Take a backup

I usually trust the overnight backups (since we test them), but in this case I'm a little more paranoid: you may not be able to do a straight upgrade and will need to remove the old version before you install the new one (see next step). Taking a new backup or double-checking your last backup is a very good idea. Make sure you include your current configuration file(s) with the backup.

Step 3 - The upgrade process

If you have been using the InnoDB plugin, you need to disable the plugin options in your configuration file before you start the upgrade. The details are in the Upgrading from MySQL page I quoted in Step 1, it's the first Incompatible change.

Although it wasn't listed in the documentation, when I tried to run an RPM upgrade (rpm -Uvh MySQL-server-5.5.7_rc-1.rhel5.x86_64.rpm) it failed with a message telling me that I needed to remove the previous version and run a regular install (rpm -i) instead. I usually get a little anxious with this kind of recommendation, but fortunately both steps ran smoothly.


Keep in mind that the first time you start the new version you'll have to start the service with the --skip-grant-tables option in order to run mysql_upgrade. If you're upgrading a production server, it's highly recommended that you also add --skip-networking, otherwise you may get connections from your application(s), which is not a good idea at this point. This is the 3rd Incompatible change. Once you're done, shut down the MySQL service (ie: using mysqladmin shutdown, no need for username/password in this case) and restart it as you normally would (ie: sudo /etc/init.d/mysql start).
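
Putting it all together, the sequence looked roughly like this in my case (a sketch: the old package name is an example, use whatever rpm -qa reports, and adjust the new RPM file name to the release you downloaded):

# find and remove the old server package, then install the new one
rpm -qa | grep -i mysql-server
sudo rpm -e MySQL-server-community          # example name, use the output from above
sudo rpm -i MySQL-server-5.5.7_rc-1.rhel5.x86_64.rpm

# first start: no grant tables, no networking, then run mysql_upgrade
sudo mysqld_safe --skip-grant-tables --skip-networking &
mysql_upgrade
mysqladmin shutdown

# restart normally
sudo /etc/init.d/mysql start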

Done

At this point your installation should be ready for testing. As I mentioned before, double-check the Incompatible change notes to make sure your database configuration and/or application doesn't need any further adjustments. I would also recommend following the posts on Planet MySQL closely for reviews and articles from the community.