Thursday, December 16, 2010

MySQL 5.5 - Upgrading From Previous Releases

MySQL 5.5 GA has finally arrived and, if you haven't already, it's time to test it, starting with the upgrade procedure. MySQL upgrades are usually easy and uneventful, but not in this case, so I decided to share my experience with the RH 5 RPMs.

Step 0 - Is the upgrade for you?

Do you use InnoDB? This is where most of the improvements have been made, and it could be the main reason why you may want to be an early adopter. For a complete reference of the improvements, check the New Features of InnoDB 1.1 section of the official MySQL Documentation, as well as Sheeri Cabral's and Ronald Bradford's blogs.

Step 1 - Read the manual

Ok, I'm the first one to admit that I usually skip this step, but after a couple of false starts I decided that in this case it might be a good idea to go through the documentation, in particular Upgrading from MySQL 5.1 to 5.5. The Incompatible change entries are quite important in this release and, as you will find out, make the upgrade process a little more complex than usual. Your mileage may vary, but make sure you review them carefully.

Step 2 - Take a backup

I usually trust the overnight backups (since we test them), but in this case I'm a little more paranoid: you may not be able to do a straight upgrade and will need to remove the old version before you install the new one (see next step). Taking a new backup or double-checking your last backup is a very good idea. Make sure you include your current configuration file(s) with the backup.
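For example, a one-off safety backup could look like this (a sketch; credentials, options and destination paths are assumptions you should adapt to your environment):

```shell
# Dump everything, including stored routines, and keep a copy of the config
mysqldump -u root -p --all-databases --routines | gzip > /backup/pre-5.5-dump.sql.gz
cp /etc/my.cnf /backup/my.cnf.pre-5.5
```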

Step 3 - The upgrade process

If you have been using the InnoDB plugin, you need to disable the plugin options in your configuration file before you start the upgrade. The details are in the Upgrading from MySQL page I quoted in Step 1, it's the first Incompatible change.
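If your 5.1 configuration loaded the plugin explicitly, the entries to comment out would look something like this (a sketch; option names follow the 5.1 InnoDB plugin documentation, your file may differ):

```
[mysqld]
# Remove or comment these before upgrading to 5.5,
# where InnoDB 1.1 is built in:
#ignore-builtin-innodb
#plugin-load=innodb=ha_innodb_plugin.so
```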

Although it wasn't listed in the documentation, when I tried to run an RPM upgrade (rpm -Uvh MySQL-server-5.5.7_rc-1.rhel5.x86_64.rpm) it failed with a message telling me that I needed to remove the previous version and run a regular install (rpm -i) instead. I usually get a little anxious with this kind of recommendation, but fortunately both steps ran smoothly.


Keep in mind that the first time you'll have to start the service with the --skip-grant option to run mysql_upgrade. If you're upgrading a production server, it is highly recommended that you also add --skip-networking; otherwise you may get connections from your application(s), which is not a good idea at this point. This is the 3rd Incompatible change. Once you're done, shut down the MySQL service (ie: using mysqladmin shutdown, no need for username/password in this case) and restart it as you normally would (ie: sudo /etc/init.d/mysql start).
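Putting it together, the post-install steps would look something like this (a sketch; paths and the init script name follow the RPM layout described above, adjust as needed):

```shell
# Start mysqld without grant tables or networking so only local
# maintenance connections are possible
mysqld_safe --skip-grant-tables --skip-networking &

# Upgrade the system tables and check all databases
mysql_upgrade

# Shut down (no credentials needed with --skip-grant-tables)
# and restart normally
mysqladmin shutdown
sudo /etc/init.d/mysql start
```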

Done

At this point your installation should be ready for testing. As I mentioned before, double-check the Incompatible change notes to make sure your database configuration and/or application doesn't need any further adjustments. I would also recommend following closely the posts on Planet MySQL for reviews and articles from the community.

Friday, October 22, 2010

MySQL Enterprise Backup and The Meaning of Included

During the MySQL Users Conference, Edward Screven did a keynote presentation that made many of us feel warm and fuzzy about Oracle's future plans for MySQL. If you advance 16m 25s into the presentation, he even gives MySQL Enterprise customers something to rejoice about: "Backup is now included". He didn't say much more after that. Asking around at the conference in the days following this announcement, I couldn't get a straight answer about when and how it would be available for existing customers.

Now, 6 months later (give or take a couple of weeks), the MySQL Enterprise Features page shows no sign of the now included MySQL Enterprise Backup (the utility previously known as InnoDB Hot Backup), and there has been no other news supporting Edward's announcement anywhere else (if I'm wrong, please point to it with a comment).

Does anybody have any insight into what the definition of included is according to Oracle's dictionary? Maybe it's not included for existing customers and it will be when Oracle comes out with the new price list? That would surely make existing customers pretty unhappy.

Maybe there was no reason to feel warm and fuzzy after all. What is your take on this particular issue?

Wednesday, September 22, 2010

A Replication Surprise

While working on a deployment we came across a nasty surprise. In hindsight it was avoidable, but it never crossed our minds it could happen. I'll share the experience so when you face a similar situation, you'll know what to expect.

Scenario

To deploy the changes, we used a pair of servers configured to replicate with each other (master-master replication). There are many articles that describe how to perform an ALTER TABLE with minimum or no downtime using MySQL replication. The simple explanation is:
  1. Set up a passive master of the database whose schema you want to modify.
  2. Run the schema updates on the passive master.
  3. Let replication catch up once the schema modifications are done.
  4. Promote the passive master to be the new active master.
The details to make this work will depend on each individual situation and are too extensive for the purpose of this article. A simple Google search will point you in the right direction.
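As a rough sketch, the steps above boil down to something like this (hostnames and the ALTER statement are placeholders; the exact promotion mechanics depend on your setup):

```shell
# 1. Stop the active master from applying events coming from the passive one
mysql -h active -e "STOP SLAVE"

# 2. Run the schema changes on the passive master; they queue up in its
#    binlog while production DML keeps replicating active -> passive
mysql -h passive mydb -e "ALTER TABLE tt ADD COLUMN cx INT"

# 3. Wait until the passive master has caught up with production
mysql -h passive -e "SHOW SLAVE STATUS\G" | grep Seconds_Behind_Master

# 4. Point the application at the passive (now active) master, then let
#    the old active apply the queued ALTERs
mysql -h active -e "START SLAVE"
```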

The Plan

The binlog_format variable was set to MIXED. While production was still running on the active master, we stopped replication from the passive to the active master so we would still get all the DML statements on the passive master while running the alter tables. Once the schema modifications were over, we could switch the active and passive masters in production and let the new passive catch up with the table modifications once the replication thread was running again.

The ALTER TABLE statement we applied was similar to this one:
ALTER TABLE tt ADD COLUMN cx INT AFTER c1;
There were more columns after cx, and c1 was one of the first columns. Going through all the ALTER TABLE statements took almost 2 hours, so it was important to get the sequence of events right.

Reality Kicks In

It turns out that using AFTER / BEFORE or changing column types broke replication when it was writing to the binlog files in row-based format, which meant that we couldn't switch masters as planned until we had replication going again. As a result we had to issue another ALTER TABLE to revert the changes and then repeat them without the AFTER / BEFORE.
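The corrective statements were equivalent to the following (a sketch; the column type is a placeholder):

```sql
-- Undo the change that broke row-based replication ...
ALTER TABLE tt DROP COLUMN cx;
-- ... and repeat it without AFTER/BEFORE, so the column is appended last
ALTER TABLE tt ADD COLUMN cx INT;
```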

The column type change was trickier and could've been a disaster; fortunately it happened on a small table (~400 rows, which meant the ALTER TABLE took less than 0.3sec). In this case we reverted the modification on the passive master and ran the proper ALTER TABLE on the active master. Had this happened with a bigger table, there would have been no alternative but to either roll back the deployment or deal with the locked table while the modification ran.

Once this was done we were able to restart the slave threads, let them catch up, and everything was running as planned ... but with a 2hr delay.

Unfortunately, using STATEMENT replication wouldn't work in this case for reasons that would need another blog article to explain.

Happy Ending

After the fact, I went back to the manual and found this article: Replication with Differing Table Definitions on Master and Slave. I guess we should review the documentation more often; the behavior changed after 5.1.22. I shared this article with the development team, so next time we won't have surprises.

Friday, July 30, 2010

Simple Backup Server

I have not written an article in a while, I partially blame it on the World Cup and my day job. The time has come to share some of my recent experiences with a neat project to provide several teams internally with current MySQL backups.

When faced with these types of challenges, my first step is to look into OSS packages and how they can be combined into an actual solution. It helps me understand the underlying technologies and challenges.

ZRM Backup

I reviewed Zmanda's Recovery Manager for MySQL Community Edition in the Fall 2008 issue of MySQL Magazine. It remains one of my favorite backup tools for MySQL since it greatly simplifies the task and configuration of MySQL backups, taking care of most of the details. Its flexible reporting capabilities came in handy for this project, as I'll explain later. Some of the key settings:


  • We included the hostname in the ZRM backup-set to make it easier to locate. Linux shell example:

    export BKUP_SET=`hostname -s`-logical


  • Following ZRM conventions, generate an HTML report in the main backup directory.

    mysql-zrm-reporter --where backup-set=$BKUP_SET --type html \
       --show backup-status-info >/dir/to/backup/$BKUP_SET/report.html


  • The actual backup files live under the subdirectories:

    /dir/to/backup/$BKUP_SET/<timestamp>/
    where /dir/to/backup could be mounted on a NFS server

    Please check the ZRM for MySQL documentation for details on its configuration and operation. Use the report format that best suits you; ZRM provides plenty of options, and if none fits your needs exactly, you can still generate your own.
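The pieces above can be tied together in a small nightly wrapper (a sketch; the mysql-zrm action syntax follows the ZRM manual, and the paths are assumptions):

```shell
#!/bin/bash
# Nightly backup + report refresh for this host's backup set
BKUP_SET=$(hostname -s)-logical
mysql-zrm --action backup --backup-set "$BKUP_SET"
mysql-zrm-reporter --where backup-set="$BKUP_SET" --type html \
   --show backup-status-info > "/dir/to/backup/$BKUP_SET/report.html"
```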

    lighttpd HTTP Server

    As a web server, lighty adds very little overhead so it can run on the same MySQL server and the backups can be served directly from it. If this is not acceptable and the backups are stored in an NFS volume, lighty can be installed on the NFS server, the configuration will remain very similar to the one I describe here.

    For this example I’ll assume that the MySQL server host is named dbhost, in which case the /etc/lighttpd/lighttpd.conf file should include:

    server.modules  = ( 
                                    "mod_alias",
                                    "mod_access",
                                    "mod_accesslog" )
    
    ## Default root  directory set to the main backup set
    server.document-root = "/dir/to/backup/dbhost-logical/"
    
    ## Set one alias per  backup type
    alias.url            = ( "/logical/" => "/dir/to/backup/dbhost-logical/")
    alias.url           += ( "/incremental/" => "/dir/to/backup/dbhost-incremental/")
    
    ## Enable logs for  diagnosis
    server.errorlog      = "/var/log/lighttpd/error.log"
    accesslog.filename   = "/var/log/lighttpd/access.log"
    
    server.port          = 8088
    
    ## virtual directory listings enabled
    dir-listing.activate = "enable"
    
    ##  enable debugging to facilitate diagnosis
    debug.log-request-header    = "enable"
    debug.log-response-header   = "enable"
    debug.log-request-handling  = "enable"
    debug.log-file-not-found    = "enable"

    The server.document-root and alias.url settings should match the main directories for the ZRM backup sets.

    Make sure that the user and/or group defined for the lighttpd process have proper permissions to access the backup set directory tree. The backups will be available as a directory listing at the following URLs: http://dbhost:8088/logical/ or http://dbhost:8088/incremental/. By clicking on the report.html file in those directories (see the previous section of the article), users can access the backup reports and verify whether any of them had errors. The files are also accessible using wget.

    If you need tighter security, lighty supports HTTPS and LDAP authentication; the details are in its documentation and it takes less than 10 minutes to set up.
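A minimal sketch of what that could look like (directive names from the lighttpd 1.4 documentation; the certificate path and LDAP details are placeholders):

```
## Serve the listings over TLS
ssl.engine  = "enable"
ssl.pemfile = "/etc/lighttpd/server.pem"

## Require authentication against LDAP
server.modules += ( "mod_auth" )
auth.backend               = "ldap"
auth.backend.ldap.hostname = "ldap.example.com"
auth.backend.ldap.base-dn  = "ou=people,dc=example,dc=com"
auth.require = ( "/" => ( "method"  => "basic",
                          "realm"   => "MySQL backups",
                          "require" => "valid-user" ) )
```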

    monit Monitoring

    When you need a service to be running 24/7, monit is a great tool in the Linux arsenal to achieve it. It monitors a number of system and service variables and will start, stop and/or restart any service under a number of conditions. For this POC, the goal is to keep lighty running, restarting it after an eventual crash. We are using the following configuration:

    check process lighttpd
       with pidfile  "/var/run/lighttpd.pid"
       alert some@yourorg.com NOT { instance,  pid, ppid, action }
       start program = "/etc/init.d/lighttpd start" with timeout  60 seconds
       stop program = "/etc/init.d/lighttpd stop" with timeout 30 seconds
       if 2 restarts  within 3 cycles then timeout
       if failed port 8088 protocol http with  timeout 15 seconds within 3 cycles then restart

    If the process crashes or port 8088 becomes unresponsive, monit will (re)start lighty automatically.

    Conclusion (sort of)

    Implementing all these tools on a new server takes less than 1 hour. During the first couple of implementations I learned a lot about how the packages interacted, security issues, and users' needs and expectations. As the users started looking into the solution, they also came up with new use cases.

    While we look for a more suitable solution, these free (as in freedom and beer) packages provided us with the opportunity to learn while still achieving our goals and delivering results to our users. Now we know what to look for if we decide to evaluate an open core or commercial solution.
    Tuesday, July 20, 2010

    Changes to the Blog

    Hi all, it's really unfortunate that this is my first post in a while (the World Cup kept me busy/distracted for most of last month), but I need to make some changes to the blog. Lately I have had to reject plenty of spam coming through the comments. I will now require commenters to be registered users, and see if I stop having to deal with what I consider one of the worst outcomes of modern technology. I hope this won't stop the feedback I get on my articles. See you soon.

    Friday, May 14, 2010

    MySQL 5.1.46 With InnoDB Plugin Kicks Butt

    We were discussing the recommendations we issue each quarter around MySQL, and the question of using the InnoDB plugin came up. We usually follow Planet MySQL closely, so we read what the blogs had to say, and it was all good, but we decided to provide our users with some data of our own. We ran our own sysbench tests to get the information we needed.

    A Word About Benchmarks

    I don't trust most of the benchmarks that are published online because they really apply to the use case of whoever is writing the article. There are usually many factors that can influence them and I find it difficult to apply them as-is to our environment.

    I do trust the benchmarks published online as a reference on how to create and run our own benchmarks, so this article is based on that premise. I recommend you do your own homework to verify the results for your own use cases.

    The Test

    Having said that, we ran sysbench against the official MySQL RPM with no special adjustments to the configuration file. We ran it once with the built-in InnoDB engine and re-ran it with the InnoDB plugin engine. This is the bash shell wrapper we used:
    #!/bin/bash
    # Sysbench MySQL benchmark wrapper
    OPTS="--db-driver=mysql --max-time=900 --max-requests=500000 \
          --mysql-user=user --mysql-password=password --test=oltp \
          --oltp-test-mode=complex --oltp-table-size=10000000"
    for nthr in 1 8 16; do
       echo "($(date +%H:%M:%S)) -- Preparing test for $nthr threads"
       sysbench $OPTS --num-threads=$nthr prepare
       echo "($(date +%H:%M:%S)) -- Running test for $nthr threads"
       sysbench $OPTS --num-threads=$nthr run | tee $(hostname -s)_$nthr.log
       echo "($(date +%H:%M:%S)) -- Cleaning up $nthr threads"
       sysbench $OPTS --num-threads=$nthr cleanup
       echo "($(date +%H:%M:%S)) -- done ($nthr)"
    done
    I like to run a 1-thread test since it gives us an idea of the underlying raw performance. Based on other tests we have done, our systems' performance peaks somewhere between 8 and 16 concurrent threads, so for this test there was no point in running other configurations. You may replace "1 8 16" with the numbers you think will best represent your systems in production. All the tests were run locally; when testing across the network the numbers will vary based on your network performance.
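To tabulate the results, the transactions-per-second figure can be pulled out of the logs the wrapper leaves behind; a toy example (the sample line mimics the totals section of sysbench 0.4 output):

```shell
# Create a miniature log like the ones produced by "tee" in the wrapper,
# then extract the per-second throughput from the totals line
printf 'transactions: 500000 (332.82 per sec.)\n' > sample.log
grep -h "transactions:" sample.log | sed 's/.*(\([0-9.]*\) per sec.*/\1/'
```

Run it over the $(hostname -s)_*.log files to collect the numbers.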

    The Actual Results

    So, without further ado, here are the results as reported by sysbench:

    Number of threads   No Plugin Trx/sec   Plugin Trx/sec
                    1              176.32           325.75
                    8              332.82           742.80
                   16              334.47           736.40

    The results for the No Plugin column are in line with what we got in tests for older 5.1.x releases.

    Conclusion

    MySQL v5.1.46 using InnoDB plugin kicks ass! I apologize for the language, but the numbers are pretty amazing. I hope you find this post useful.

    Monday, May 3, 2010

    Simple Backup Restore Trick

    I don't usually post these simple tricks, but this one came to my attention today; it's very simple, and I have seen people run into issues trying to get around the problem it solves. The question is: How do I restore my production backup to a different schema? The answer looks obvious, but I haven't seen many people think about it.

    Most of the time backups using mysqldump will include the following line:

    USE `schema`;

    This is OK when you're trying to either (re)build a slave or restore a production database. But what about restoring it to a test server in a different schema?

    The actual trick


    Using vi (or a similar editor) to edit the line will most likely result in the editor trying to load the whole backup file into memory, which might cause paging or even crash the server if the backup is big enough (I've seen it happen). Using sed (or similar) might take some time with a big file. The quick and dirty trick I like is:

    grep -v "USE \`schema\`" backup.sql | mysql -u user -p new_schema

    Adapt the mysql command options to your needs. It's necessary to escape the backticks (`); otherwise the shell might interpret them as an attempt to execute schema as a command and use its output as the actual schema name. Also, make sure that new_schema already exists in the server.
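A miniature end-to-end check of the filter, with no MySQL server involved (file name and contents are made up for the demonstration):

```shell
# Build a tiny fake dump containing the USE line ...
printf 'USE `schema`;\nCREATE TABLE t (id INT);\n' > /tmp/backup.sql
# ... and verify the filter drops it while keeping everything else
grep -v "USE \`schema\`" /tmp/backup.sql
```

The output is the dump minus the USE line, ready to pipe into mysql -u user -p new_schema.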

    This method is quick and dirty and leaves the original backup intact. I hope you find it useful.

    Friday, March 26, 2010

    My Impressions About MONyog

    At work we have been looking for tools to monitor MySQL and at the same time provide as much diagnostic information as possible upfront when an alarm is triggered. After looking around at different options, I decided to test MONyog from Webyog, the makers of the better-known SQLyog. Before we go on, the customary disclaimer: this review reflects my own opinion and in no way represents any decision that my current employer may or may not make in regard to this product.

    First Impression

    You know what they say about the first impression, and this is where MONyog started off on the right foot. Since it is an agent-less system, getting started only requires installing the RPM (or untarring the tarball) on the server where you're going to run the monitor and launching the daemon. How much faster or simpler can it be? But in order to start monitoring a server you need to do some preparation on it: create a MONyog user for both the OS and the database. I used the following commands:

    For the OS user run the following command as root (thank you Tom):
    groupadd -g 250 monyog && useradd -c 'MONyog User' -g 250 -G mysql -u 250 monyog && echo 'your_os_password' | passwd --stdin monyog
    For the MySQL user run:
    GRANT SELECT, RELOAD, PROCESS, SUPER on *.* to 'adm_monyog'@'10.%' IDENTIFIED BY 'your_db_password';
    Keep in mind that passwords are stored in the clear in the MONyog configuration database, so defining a dedicated MONyog user helps minimize security breaches. Although for testing purposes I decided to go with a username/password combination to SSH into the servers, it is possible to use a key, which would be my preferred setting in production.

    The User Interface

    The system UI is web driven using Ajax and Flash, which makes it really thin and portable. I was able to test it without any issues using IE 8 and Firefox on Windows and Linux. Chrome presented some minor challenges, but I didn't dig any deeper since I don't consider it stable enough and didn't want to get distracted by what could've been browser-specific issues.

    In order to access MONyog you just point your browser to the server where it was installed, with a URL equivalent to:
    http://monyog-test.domain.com:5555 or http://localhost:5555
    You will always land in the List of Servers tab. At the bottom of this page there is a Register a New Server link that you follow to start adding servers at will. The process is straightforward and at any point you can trace your steps back to make corrections as needed (see screenshot). Once you enter the server information with the credentials defined in the previous section, you are set. Once I went through the motions, the first limitation became obvious: you have to repeat the process for every server. Although there is an option to copy from previously defined servers, it can become a very tedious process.

    Once you have the servers defined, to navigate into the actual system you check which servers you want to review, select the proper screen from a drop-down box at the bottom of the screen, and hit Go. This method seems straightforward, but at the beginning it is a little confusing and it takes some time to get used to.

    Features

    MONyog has plenty of features that make it worth trying if you're looking for a monitoring software for MySQL. Hopefully by now you have it installed and ready to go, so I'll comment from a big picture point of view and let you reach your own conclusions.

    The first feature that jumps right out at me is its architecture, in particular the scripting support. All the variables it picks up from the servers it monitors are abstracted into JavaScript-like objects, and all the monitors, graphics and screens are based on these scripts. On the plus side, this adds a lot of flexibility to how you can customize the alerts, monitors, rules and Dashboard display. On the other hand, this flexibility presents some management challenges: customizing thresholds, alerts and rules by server or group of servers, and backing up customized rules. None of these challenges are showstoppers and I'm sure Webyog will come up with solutions in future releases. Since everything is stored in SQLite databases and the repositories are documented, any SQLite client and some simple scripting are enough to take backups and work around the limitations.

    The agent-less architecture requires the definition of users to log into the database and the OS in order to gather the information it needs. The weak point here is that the credentials, including passwords, are stored in the clear in the SQLite databases. A way to mitigate this is to properly limit the GRANTs for the MySQL users and to SSH using a DSA key instead of a password. Again, no showstopper for most installations, but it needs some work on Webyog's side to increase the overall system security.

    During our tests we ran into a bug in the SSH library used by MONyog. I engaged their Technical Support, looking forward to evaluating their overall responsiveness. I have to say it was flawless: at no point did they treat me in a condescending manner; they made the most of the information I provided upfront and never wasted my time with scripted, useless diagnostic routines. They had to provide me with a couple of binary builds, which they did in a very reasonable time frame. All in all, a great experience.

    My Conclusion

    MONyog doesn't provide any silver bullet or obscure best-practice advice. It gathers all the environment variables effectively and presents them in an attractive and easy-to-read format. It's closed-source commercial software, but the architecture is quite open through scripting, and the well-documented repositories provide a lot of flexibility for customizations and expansions to fit any installation's needs. For installations with over 100 servers it might be more challenging to manage the server configurations, and the clear-text credentials may not be viable for some organizations. If these 2 issues are not an impediment, I definitely recommend any MySQL DBA download the binaries and take MONyog for a spin. It might be the solution you were looking for to keep an eye on your set of servers while freeing some time for other tasks.

    Let me know what you think, and if you plan to be at the MySQL UC, look me up to chat. Maybe we can invite Rohit Nadhani from Webyog to join us.

    Monday, March 8, 2010

    Speaking At The MySQL Users Conference

    My proposal has been accepted, yay!

    I'll be speaking on a topic that I feel passionate about: MySQL Server Diagnostics Beyond Monitoring. MySQL has limitations when it comes to monitoring and diagnostics, as has been widely documented in several blogs.

    My goal is to share my experience from the last few years and, hopefully, learn from what others have done. If you have a pressing issue, feel free to comment on this blog and I'll do my best to include the case in my talk and/or post a reply if the time allows.

    I will also be discussing my future plans for sarsql. I've been silent about this utility mostly because I've been implementing it actively at work. I'll post a road map shortly based on my latest experience.

    I'm excited about meeting many old friends (and most now fellow MySQL alumni) and making new ones. I hope to see you there!

    Friday, February 12, 2010

    Log Buffer #178, a Carnival of the Vanities for DBAs

    Dave Edwards offered me the chance to write this week's Log Buffer, and I couldn't help but jump at the opportunity. I'll dive straight into it.

    Oracle

    I'll start with Oracle. The dust of the Sun acquisition has settled, so maybe it's time to return our attention to the regular issues.

    Let's start with Hemant Chitale's Common Error series and his Some Common Errors - 2 - NOLOGGING as a Hint, explaining what to expect from NOLOGGING. Kamran Agayev offers us an insight into Hemant's personality with his Exclusive Interview with Hemant K Chitale. My favorite quote is:

    Do you refer to the documentation? And how often does it happen?

    Very frequently. Most often the SQL Reference (because I don’t — and do not intend to – memorise syntax. Syntax has to be understood rather than memorized). Also, the RMAN Reference (known as the “Backup and Recovery Reference”) and the Database Reference.
    At least I'm not the only one forgetting the exact syntax of every command.

    Chen Shapira offers us her thoughts on diagnostics in Automated Root Cause Analysis, and I have to agree with her that sometimes it is best to be offered good visualization tools rather than cut and dry solutions and recommendations.
     
    Miladin Modrakovic explains how to avoid an Oracle 11g vulnerability: Oracle Exploit Published 11g R2. Gary Myers makes his own contribution about security issues with 10g and 11g in Exploits and revoking the risks of revoking PUBLIC.

    As a MySQL DBA I've heard many times the question, "What is the right pronunciation?" and purists would say 'es-que-el' as the ANSI standard specifies. But before there were any standards, there was SEQUEL. I heard the real story many times. Iggy Fernandez's article does a pretty good job summarizing it in Not the SQL of My Kindergarten Days quoting some references for those who would like to dig into the details.

    During the weeks leading to the final approval of Sun's acquisition by the EU, there was a lot of speculation about MySQL's destiny under Oracle. I'm sure that many of the MySQL Community members who kept their cool did so because they knew that Ken Jacobs would most likely have a say in it. So when the news of his resignation was published, I'm sure those same people (myself among them) started scratching their heads and wondering about MySQL's future as well. Matt Asay's news article on CNet, Oracle loses some MySQL mojo, offers great insight on the issue, including quotes from Pythian's own Sheeri Cabral. There are plenty of other articles on the issue in Planet MySQL's feed.

    MySQL

    Continuing in the context of Oracle's acquisition and Ken's resignation, Bridget Botelho's article MySQL users hope for the best, prep for the worst speculates about what is on the minds of those who run MySQL in production. If you are interested in the different releases and branches, you'll find plenty of blogs this week, starting with Jay Janssen's question When should we expect the next stable MySQL release beyond 5.1? and Ronald Bradford's FOSDEM 2010 presentation Beyond MySQL GA: patches, storage engines, forks, and pre-releases – FOSDEM 2010.

    Life goes on and there is still plenty of action in the MySQL community. As Colin Charles reminds us in his MySQL Conference Update: Grid is up, go promote and register!, this should be an interesting year. On the storage engine and tools front, it's worth checking InfiniDB's impressive performance numbers in InfiniDB load 60 Billion SSB rows trended for storage engine developments, and RE: HeidiSQL 5.0 Beta available in the tools segment.

    Finally, to end the MySQL section with some more mundane issues, here is a collection of articles with mysqldump-related scripts and tools: Restore from mysqldump --all-databases in parallel and MyDumpSplitter-Extract tables from Mysql dump-shell script. No list of articles on backups would be complete without asking Can you trust your backup?.

    Today we were talking at work about Perl vs Python for scripting. Me, I'm a Perl 'gangsta' (see the PostgreSQL section). Traditionally MySQL has had a pretty bad Python driver, but Geert Vanderkelen is working on correcting that. If you're a Python fan, check his FOSDEM 2010 presentation at FOSDEM: 'Connecting MySQL and Python', handout & wrap-up.

    SQL Server

    Aaron Bertrand published 2 really interesting articles that could apply to other databases as well: When bad error messages happen to good people and Injection is not always about SQL, with a funny (if it weren't for the Prius parked in my driveway) example at the end.

    The 2010 MVP Summit is coming up and Thomas LaRock offers his 2010 MVP Summit Preview. His insight applies to other similar events (are you reading, MySQL UC attendees?).

    In my experience, date and time representation and manipulation are tricky in databases; these 2 articles offer some tips: Convert FILETIME to SYSTEM time using T-SQL and Dan Guzman's Ad-Hoc Rollup by date/time Interval.

    I'm really bad at judging the value of SQL Server articles, so I'm going to choose the easy way and trust Adam Machanic's T-SQL Tuesday #002: The Roundup to provide a few technical references.

    PostgreSQL

    Apparently the PostgreSQL community needs its own "Geert" (see reference in the MySQL section), based on what I've read in Damn it feels good to be a (perl) gangsta and Josh Berkus' Postgres needs a new Python driver. Are you up to the challenge? In that case, step up to the plate; that's what Open Source is all about.

    The PostGIS group had an important announcement, PostGIS 1.5.0 out and PLR working on Windows 8.3-8.4 installs, which the author calls "Perhaps the best release ever", so make space on your disk and schedule, and take it for a spin.

    Baron Schwartz offers an interesting view on How PostgreSQL protects against partial page writes and data corruption. It offers great insight from a well-known MySQL guru.

    Last but not least, End Point's people have determined the PostgreSQL 9.0 release date with mathematical precision in PostgreSQL version 9.0 release date prediction; make sure you read the article and get ready for it.

    I hope I kept you reading up to this point and see you around in the blogosphere.

    Wednesday, February 3, 2010

    Using MariaDB with MySQL Sandbox

    A few days back MariaDB announced their first GA release (see Released: MariaDB 5.1.42), so it is time to start testing it, and there is no better way to test any MySQL version in a controlled environment than MySQL Sandbox. However, Sandbox relies on the tarball and the tarball target directory being prefixed with mysql, which is not true of MariaDB. So here are the 2 tricks I had to use to make it work out of the box.

    These steps create a single sandbox; the tips can be extrapolated to any other configuration. Also, I am trying to avoid renaming any files and/or directories, so as to leave the packages as close to the original as possible.

    Step 1: Use A Symlink For The Tarball

    The make_sandbox script will then think it's manipulating a MySQL tarball. Assuming you are in the directory that contains the tarball:
    ln -sv mariadb-5.1.42-Linux-x86_64.tar.gz mysql-5.1.42-Linux-x86_64.tar.gz
    make_sandbox /home/gnarvaja/Downloads/mysql-5.1.42-Linux-x86_64.tar.gz --sandbox_directory=maria_5.1.42

    Make the adjustments needed to your own platform and version.

    The make_sandbox run is going to fail since it expects a subdirectory named ./mysql-5.1.42-Linux-x86_64, which doesn't exist since we used a MariaDB tarball.

    Step 2: Use A Symlink For The MariaDB Binaries Directory

    For the same reason as above, now create a symlink for the directory to where the tarball was extracted and re-run make_sandbox:

    ln -sv mariadb-5.1.42-Linux-x86_64 mysql-5.1.42-Linux-x86_64
    make_sandbox /home/gnarvaja/Downloads/mysql-5.1.42-Linux-x86_64.tar.gz --sandbox_directory=maria_5.1.42

    Remember to always include the --sandbox_directory option to avoid name conflicts in case you want to compare MariaDB with the corresponding MySQL release.

    This time the installation will succeed and you'll be ready to start your testing.
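    For repeated testing, the two symlink tricks can be wrapped in a small shell function. This is just a sketch; the function name and arguments are my own invention, not part of MySQL Sandbox:

```shell
# Hypothetical helper: create the two mysql- prefixed symlinks that
# make_sandbox expects, pointing at the MariaDB tarball and at the
# directory the tarball extracts to.
mariadb_sandbox_links() {
    dir=$1 version=$2 arch=$3
    cd "$dir" || return 1
    # Step 1: alias the tarball under the mysql- prefix
    ln -sfv "mariadb-$version-$arch.tar.gz" "mysql-$version-$arch.tar.gz"
    # Step 2: alias the extracted binaries directory as well
    ln -sfv "mariadb-$version-$arch" "mysql-$version-$arch"
}
```

    Then something like mariadb_sandbox_links "$HOME/Downloads" 5.1.42 Linux-x86_64 sets up both links before you run the make_sandbox command from Step 1.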

    Conclusion

    I tried to install using the original tarball name with different options, and the process failed with different error messages. I looked into the make_sandbox code and saw some dependencies that would've taken me some time to figure out and fix. This method can be considered a hack, but it gets you up and running in no time.

    Giuseppe, if you happen to see this blog, I'll be glad to test a patch when you have it.

    Wednesday, January 13, 2010

    Ubuntu Karmic's Network Manager Issues

    Since Ubuntu 8.04 aka Hardy Heron, I've had issues with every new release. As Ubuntu evolves into a viable desktop OS alternative, its complexity has been growing, and with the new and improved looks come new challenges. This bug in particular has been very difficult to diagnose, and I can't imagine anyone without solid Linux experience overcoming it on their own, so I decided to summarize the steps I took to fix it ... and vent my frustration at the end.

    The Symptom

    I came across the issue for the first time while trying Ubuntu's Karmic Netbook Remix. After overcoming the typical Broadcom wifi driver issue, Network Manager would connect, but Firefox would fail to load web pages 90% of the time. Using ping from the command line worked just fine. Maybe I needed to update the software packages to get the latest patches? Surprise: apt-get was having similar problems and timing out. So the problem was deep in the OS layer.

    After a lot of fiddling and some googling I found bug #417757:
    [...] In Karmic, DNS lookups take a very long time with some routers, because glibc's DNS resolver tries to do IPv6 (AAAA) lookups even if there are no (non-loopback) IPv6 interfaces configured. Routers which do not respond to this cause the lookup to take 20 seconds (until the IPv6 query times out). [...]
    These routers are commonplace in many households, and most users are completely unaware that they have their own DNS servers, what IPv6 means, or even how to update the router's firmware if needed.
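    A rough way to see whether your router is affected is to time an IPv6 lookup next to an IPv4 one. The commands below use localhost as a safe placeholder; substitute a hostname that your router's DNS actually has to resolve:

```shell
# On an affected router the AAAA (IPv6) query stalls for ~20 seconds
# while the IPv4 query returns immediately. Replace localhost with a
# real hostname to test against your router's DNS.
time getent ahostsv6 localhost
time getent ahostsv4 localhost
```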

    The Solution(s)

    Going through the comments in the bug I found several recommendations; some made more sense than others, but these are the 2 I used. Most regular users will feel comfortable with these steps. I haven't tried, but it might not be necessary to apply both.

    Disable IPv6

    You should apply this one especially if the networks to which you connect are not using IPv6 (most home and public networks). Otherwise, skip it. The solution is explained here. To edit the /etc/sysctl.conf file use:
    sudo vi /etc/sysctl.conf
    Replace vi with your editor of choice. Reboot before retrying the connection.
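    For reference, the lines to append to /etc/sysctl.conf look like the following. This is a sketch: I only verified the all.disable_ipv6 key; the other two are commonly recommended companions and may not be needed on every kernel:

```
# Disable IPv6 to work around the AAAA lookup timeouts (bug #417757)
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
```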

    You can try the setting without changing your system configuration or restarting the machine using the following command:
    sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1

    Use OpenDNS or Google DNS Servers

    If the previous solution isn't enough, and/or you want to try these DNS servers instead of relying on your router's or ISP's DNS servers (in many cases it'll improve DNS lookup performance), edit your /etc/dhcp3/dhclient.conf file using the following command:
    sudo vi /etc/dhcp3/dhclient.conf
    Add the following lines after the line starting with #prepend:
    # OpenDNS servers
    prepend domain-name-servers 208.67.222.222, 208.67.220.220;
    # Google DNS servers
    prepend domain-name-servers 8.8.8.8, 8.8.4.4;
    Or, if you want to use the GUI, you can follow the instructions in How to setup Google Public DNS in Ubuntu 9.10 Karmic Koala; the instructions work with any of the IP addresses above. Once you apply these changes, restart your box and retry.
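    Either way, you can confirm the prepended servers took effect by looking at the resolver configuration after the network comes back up (standard commands, nothing Ubuntu-specific assumed):

```shell
# The prepended servers should appear as the first nameserver lines
# once dhclient has rewritten the file.
cat /etc/resolv.conf
```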

    The Editorial

    If you just read the article for the technical content, this is a good spot to stop. If you're interested in my rant, keep going.

    It is well known that long term Linux users have been frustrated by the complications that have been popping up with video (in particular dual-head setups), sound and networking in the releases after Ubuntu 8.04 (Hardy). The last 3 releases have improved the overall GUI usability a lot, but they have introduced a number of bugs and issues that make those improvements irrelevant. It's easy to find articles on the web about these issues, and I've been hearing and reading about them in multiple Linux and MySQL forums.

    Then there are comments like this one, which miss the point completely:
    [...] > You can't tell your grandmother to edit some config files because her internet is slow
    Does your grandmother use Ubuntu then? If so, then just help her out in fixing the issue :) [...]
    This goes against what bug #1 is trying to address: Wider Linux adoption.

    My requests to the Ubuntu community are:
    1. Stop fiddling with the UI and start solving real usability problems. Without easy display, sound and network integration supporting widespread installed hardware, only the übergeeks are going to use Linux and bug #1 will still remain unsolved long after Ubuntu's Zippy Zebra release (I made up the name).
    2. This is another example of how the Open Source community can be as bad as regular companies at addressing real-world needs. The OSS advantage is in the community members who, instead of spreading FUD and useless comments and articles, come up with proper suggestions and contributions. Unfortunately, sometimes it takes time to find them, and until a given software package is forked, no real progress is made.
    I promise that on my next article I'll write about a MySQL topic.

    Thursday, January 7, 2010

    sar-sql Has A Wiki

    I finally settled on a wiki for sar-sql, using Ubuntu's own wiki. Right now it only has my regular installation procedure. I will enhance it and keep adding items as my time allows. I hope it will help shape the future of the script.

    Enjoy it with responsibility.

    PS: I use the Kubuntu format because it is my desktop of choice.