Thursday, April 21, 2011

Embedded MySql Server for jUnit Testing With Maven

IMPORTANT
Be sure to see the two UPDATES at the bottom of this post.

Those of you who use an in-memory HSQLDB instance for jUnit testing know the frustration when faced with code that uses syntax or features specific to MySQL.  Your options in this situation are basically to skip the tests completely, or to have a dedicated jUnit database server somewhere on your network, neither of which is a particularly great idea.

Then I came across this gem, which seems to be the most linked-to article on embedded MySQL servers in Java.  In fact it seems to be the only real article with any information!
http://jroller.com/mmatthews/entry/yes_it_really_is_this

He perhaps makes it sound a little easier than it is, but all in all he nails it.  There are a few tricks, though.
  1. The Connector/MXJ library (in Maven, "com.mysql:management:5-0-2-beta") requires two libraries from Sun's JMX reference implementation: jmxri and jmxtools.  Unfortunately, these are not available in the standard Maven repository due to binary distribution constraints.  This is the same constraint as on the javax.servlet libraries, and it has the same solution.
    1. Download the binary package from Sun.  I needed v1.2
    2. Rename the jars to "jmxri-1.2.jar" and "jmxtools-1.2.jar"
    3. Upload them to your local Maven repository, using "com.sun.jmx" as the groupId and "jmxri" and "jmxtools" as the artifactIds, respectively (the install-file commands are shown after this list).
  2. If you already have a MySQL instance installed on the build machine (or your local box) then there will be a conflict over the port and socket file.  We'll need to change these at runtime.
    1. Create a new class "EmbeddedMysqlDataSource extends MysqlDataSource"
    2. Provide a new constructor to add additional configuration to the instance setup
    3. Override the "getConnection( Properties props )" method
    4. Set a new value for the MysqldResourceI.SOCKET and MysqldResourceI.PORT properties
    5. Call the super-class implementation
  3. To simplify things, I provide static factory methods that hand back a DataSource pointing at the embedded database instance and track the information needed to kill the instance when you're done with it.
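Back to the first trick: installing the renamed jars into your local repository can be done with something like the following (a rough sketch; the "path/to" bits are placeholders for wherever you unpacked Sun's download):

mvn install:install-file -DgroupId=com.sun.jmx -DartifactId=jmxri -Dversion=1.2 -Dpackaging=jar -Dfile=path/to/jmxri-1.2.jar
mvn install:install-file -DgroupId=com.sun.jmx -DartifactId=jmxtools -Dversion=1.2 -Dpackaging=jar -Dfile=path/to/jmxtools-1.2.jar

If your team shares a repository manager, deploy:deploy-file with a -Durl is the equivalent, but the idea is the same.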
For the second and third tricks, here's what my class looks like:

package com.literatitech.example;

import java.io.File;
import java.io.IOException;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.Properties;

import com.mysql.jdbc.jdbc2.optional.MysqlDataSource;
import com.mysql.management.MysqldResourceI;
import com.mysql.management.driverlaunched.ServerLauncherSocketFactory;

// We use the excellent Log5j variant
import com.spinn3r.log5j.Logger;

public class EmbeddedMysqlDataSource extends MysqlDataSource
{
  private int port;
  private String sock;
  private String url;
  private File basedir;
  private File datadir;
  private Connection connection;

  private static Logger logger = Logger.getLogger();

  public static EmbeddedMysqlDataSource getInstance()
  {
    EmbeddedMysqlDataSource dataSource = null;
    try {
      dataSource = new EmbeddedMysqlDataSource( 4000 );
      dataSource.setUrl( dataSource.getEmbeddedUrl() );
      dataSource.setUser( "root" );
      dataSource.setPassword( "" );
    } catch( Exception e2 ) {
      dataSource = null;
      logger.info( "Could not create embedded server.  Skipping tests. (%s)", e2.getMessage() );
      e2.printStackTrace();
    }
    return dataSource;
  }

  public static void shutdown( EmbeddedMysqlDataSource ds )
  {
    try {
      ds.shutdown();
    } catch( IOException e ) {
      logger.info( "Could not shutdown embedded server. (%s)", e.getMessage() );
      e.printStackTrace();
    }
  }

  public EmbeddedMysqlDataSource( int port ) throws IOException
  {
    super();
    this.port = port;
    sock = "sock" + System.currentTimeMillis();

    // We need to set our own base/data dirs as we must
    // pass those values to the shutdown() method later
    basedir = File.createTempFile( "mysqld-base", null );
    datadir = File.createTempFile( "mysqld-data", null );

    // Wish there was a better way to make temp folders!
    basedir.delete();
    datadir.delete();
    basedir.mkdir();
    datadir.mkdir();
    basedir.deleteOnExit();
    datadir.deleteOnExit();

    StringBuilder sb = new StringBuilder();
    sb.append( String.format( "jdbc:mysql:mxj://localhost:%d/test", port ));
    sb.append( "?createDatabaseIfNotExist=true" );
    sb.append( "&server.basedir=" ).append( basedir.getPath() );
    sb.append( "&server.datadir=" ).append( datadir.getPath() );
    url = sb.toString();
  }

  public String getEmbeddedUrl()
  {
    return url;
  }

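  // Inject the embedded server's port, socket and directories before handing off to MysqlDataSource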
  @Override
  protected java.sql.Connection getConnection( Properties props ) throws SQLException
  {
    if( connection == null ) {
      props.put( MysqldResourceI.PORT, String.valueOf( port ));
      props.put( MysqldResourceI.SOCKET, sock );
      props.put( MysqldResourceI.BASEDIR, basedir.getPath() );
      props.put( MysqldResourceI.DATADIR, datadir.getPath() );
      connection = super.getConnection( props );
    }
    return connection;
  }

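  // Stops the mysqld process that was launched for this basedir/datadir pair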
  public void shutdown() throws IOException
  {
    ServerLauncherSocketFactory.shutdown( basedir, datadir );
  }
}


Once done you can spool up a new MySQL instance within a unit test with:

dataSource = EmbeddedMysqlDataSource.getInstance();
...
EmbeddedMysqlDataSource.shutdown( dataSource );

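Here's a hedged sketch of how that might look in a JUnit 4 test. The test class, table and statements are made up for illustration; note that getInstance() returns null when the server can't start, so the test skips rather than fails:

package com.literatitech.example;

import java.sql.Connection;
import java.sql.Statement;

import org.junit.AfterClass;
import org.junit.Assume;
import org.junit.BeforeClass;
import org.junit.Test;

public class EmbeddedMysqlExampleTest
{
  private static EmbeddedMysqlDataSource dataSource;

  @BeforeClass
  public static void startEmbeddedMysql()
  {
    dataSource = EmbeddedMysqlDataSource.getInstance();
  }

  @AfterClass
  public static void stopEmbeddedMysql()
  {
    if( dataSource != null ) {
      EmbeddedMysqlDataSource.shutdown( dataSource );
    }
  }

  @Test
  public void canUseMysqlSpecificSyntax() throws Exception
  {
    // getInstance() returns null if the embedded server could not start, so skip rather than fail
    Assume.assumeNotNull( dataSource );

    Connection conn = dataSource.getConnection();
    Statement stmt = conn.createStatement();
    // A made-up table; ENGINE=InnoDB and REPLACE INTO are MySQL-specific
    stmt.execute( "CREATE TABLE IF NOT EXISTS example (id INT PRIMARY KEY, name VARCHAR(32)) ENGINE=InnoDB" );
    stmt.execute( "REPLACE INTO example (id, name) VALUES (1, 'foo')" );
    stmt.close();
    // Not closing conn here; see UPDATE 1 below about the single cached connection
  }
}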

Notes:
  • The embedded server can run concurrently with a normal instance on the same machine
  • The server process doesn't shut down instantly... you might want to pause in the shutdown method (see the sketch after these notes)
  • Consult the Connector/MXJ Documentation for more configuration options
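
Regarding the second note: a crude way to add that pause (the two-second figure is a guess; tune it for your hardware) is to sleep after the shutdown call:

  public void shutdown() throws IOException
  {
    ServerLauncherSocketFactory.shutdown( basedir, datadir );
    try {
      // Give the mysqld process a moment to exit before the next test spins up
      Thread.sleep( 2000 );
    } catch( InterruptedException e ) {
      Thread.currentThread().interrupt();
    }
  }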

UPDATE 1:  Turns out there is a known issue which prevents more than one connection from being obtained on a port other than 3306.  So, your choices are to get one connection on an alternate port and use it for all your testing, or to give up on running an embedded server on a machine which has another MySQL install.  Perhaps we could make a Connection singleton which ignores normal close() calls...
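
If you want to try that singleton idea, here's a minimal sketch (my own, untested against Connector/MXJ) that hands out the one real connection wrapped in a dynamic proxy whose close() is a no-op:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.sql.Connection;

public class NonClosingConnection
{
  // Wraps a Connection in a proxy that ignores close(), so the single
  // embedded-server connection survives test code that tidily closes it
  public static Connection wrap( final Connection target )
  {
    return (Connection) Proxy.newProxyInstance(
        Connection.class.getClassLoader(),
        new Class[] { Connection.class },
        new InvocationHandler() {
          public Object invoke( Object proxy, Method method, Object[] args ) throws Throwable
          {
            if( "close".equals( method.getName() )) {
              return null;  // swallow close(); call shutdown() on the data source when truly done
            }
            return method.invoke( target, args );
          }
        });
  }
}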

UPDATE 2:  Another problem -- if you use the com.mysql:management:5-0-2-beta artifact from Maven, it has a dependency on aspectj:aspectjtools which causes conflicts within the javax.xml.parsers package during Spring startup, specifically a "Provider org.apache.xerces.jaxp.DocumentBuilderFactoryImpl not found" exception.  You can exclude it in your POM, but I'm not sure what effect, if any, that will have on the server operation.

        <dependency>
            <groupId>com.mysql</groupId>
            <artifactId>management</artifactId>
            <version>5-0-2-beta</version>
            <type>jar</type>
            <scope>test</scope>
            <exclusions>
                <exclusion>
                    <artifactId>aspectjtools</artifactId>
                    <groupId>aspectj</groupId>
                </exclusion>
            </exclusions>
        </dependency>

Sunday, April 10, 2011

Synology SSH 'root' Account, OpenSSH & /etc/shadow

I discovered something interesting about Synology's SSH implementation the other day. The built-in SSH application does some weird account trickery without informing you. In essence, when you ssh into your NAS as user "root" it covertly does a credential look-up against the user "admin" instead, which has its credentials set by the normal means via the User app in the browser-based Control Panel. The actual root user has a completely different password, which is presumably in place to allow remote support.

In other words Synology has !!! ROOT ACCESS TO YOUR NAS !!!

This would normally not even be an issue, especially if your NAS is not accessible outside your LAN. However, if you install the OpenSSH suite of tools then you will switch to using the OpenSSH version of ssh which knows nothing of this root/admin tomfoolery. In this case, attempting to log into the root account using the normal admin password will fail, as it will want the real root password.

The solution is pretty easy, and it kills two birds with one stone: it removes Synology's remote access and isolates your root password from your admin password.
  1. Log in as root
  2. Edit your /etc/shadow file
  3. Delete the line which starts with "root:"
  4. Make a copy of the line which starts with "admin:"
  5. Change the "admin" to "root" in the copied line from step 4 (see the example below)
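For illustration only -- the hash and date fields below are invented -- the end result is two lines that are identical apart from the account name:

admin:$1$AbCdEfGh$0123456789abcdefghijklm:14700:0:99999:7:::
root:$1$AbCdEfGh$0123456789abcdefghijklm:14700:0:99999:7:::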
You should now be able to log into the root account using your admin credentials. Note that changing your admin password in the browser GUI will have no effect on your root password.

The other alternative is to track down why OpenSSH is used in lieu of Synology's version and disable it, but I'd rather stick with OpenSSH personally. Please post a comment if you find any information regarding this tack.

Thursday, April 7, 2011

Relocating Your iTunes Library

If you're like me, then you find the way iTunes attempts to abstract away your music collection extremely annoying. In particular, I do not care to let iTunes manage my music folder since I already have it quite well managed, thank you very much. I also find "music/Arcade Fire/Neon Bible/Black Mirror.mp3" to be a tad more 'intuitive' than whatever series of hash values, whistles and clicks iTunes uses. Of course, this can be a problem if you ever decide to *move* your music collection, as the iTunes library metadata is, unsurprisingly, not human readable and has hard-coded paths to your music files.

A few folks did some research on how to get around this, and came up with an excellent plan, as detailed here and here. I gave this a whirl on my wife's library and it worked a charm. I was moving files from "D:\music" to "H:\MEDIA\music" so I used the command:
sed -i -e "s|D:/music|H:/MEDIA/music|g" "iTunes Music Library.xml"

What happened next is why I despise computers. After using this method to move my wife's music, and having it work perfectly, I did the EXACT SAME PROCESS on my library and it failed miserably! On startup, instead of being apologetic "Oh, your library is corrupt... so sorry, please let me rebuild it..." iTunes just curtly goes "Hmm... Your library looks wonky. Let me ERASE IT."

Luckily, the folks at Apple put more of their "intuitiveness" into the product, and there is an easier way. If you just load iTunes up with the old library data, you'll likely get an iTunes display with lots of (!) icons indicating your file is missing. If you double-click on one of these, it will inform you it's missing and give you the choice to locate it on disk. When you do locate it on disk, you will *then* be given the option to find other files based upon this first one.

The process takes a ludicrously long time, since iTunes figures merely rewriting its library a few thousand times isn't enough work, so it decides to download album art and analyse gapless playback info all at the same time, bringing my i7 laptop w/ 4GB of RAM to its knees. Eventually, though, your library will be relocated in the mind of iTunes and you can continue cursing it as per usual.

Synology DS210j HDD Temperature

I mentioned in a previous post that I installed a new drive in my NAS to correct a degraded RAID volume. I was pondering why the drive failed so soon after installation in the NAS. Coincidence is certainly a possibility, but perhaps there was more to it.

As I was pondering this, I was keeping an eye on the RAID volume rebuild. I noticed that the temperature for Disk 2 was climbing steadily from 39C through to 49C with no signs of stopping. Normal HDD operating temperatures usually top out around 55C, and even then you don't want to operate in that range as it degrades the media more quickly. Could this be what happened to my poor disk from before?

A bit of thinking and searching revealed that the Fan Speed Mode in the Power settings app is, in effect, the profile which dictates how fast the fan spins for a given temperature. Mine was set to "Quiet" mode, which perhaps should be labeled "bake your disks" mode, as some more digging indicates that mode is only useful when using 2.5" disks or operating your NAS outdoors in Qikkigiaq.

I switched to "Cooling Mode" and my drive temperatures stabilized, then dropped down to 35C and 45C respectively. Disk 2 runs hotter, I noticed, but that's likely because it is sandwiched between Disk 1 and the case.

Synology DS210j RAID Failure

A mere week after migrating my two 500GB Seagate Barracuda drives to my new Synology DS210j, I came home to an angrily beeping NAS. Checking in with the Storage Manager app showed that my RAID volume was DEGRADED and horribly bad things were IMMINENT. A bit of investigation (ie. clicking on the SMART Info tab) showed that my Reallocated_sector_ct had dropped below the threshold (or risen above it... hard to say really) and the NAS was in no way going to let me use that disk.

Although I was fairly confident the disk was likely still good, there's no arguing with an angry NAS, so I shut the thing down and ordered a new drive. Given the current pricing, a 1TB was practically the same price as a 500GB, so I picked up a 1TB Western Digital Caviar Blue drive.

The first trick was to determine which was Disk 1 (the bad) and which was Disk 2 (the good). The information is squirreled away in the installation manual so I shall present it here instead. It's not exactly a page from their CAD design but you get the picture that Disk 1 is the one furthest away from the little nubbly bits at the bottom.

With that mystery solved, I pulled out the bad (Disk 1), moved the good (Disk 2) into the driver's seat (ie. put it in slot 1) and put the ugly (the new drive) into slot 2. The WD really is pretty ugly when you look at it, especially compared to the svelte Seagate drives. But I digress.

Rebuilding the volume was easy peasy though. In the Storage Manager app I went to Volume, selected the unhappy volume, then chose Manage and was given a "Repair" option. It showed me the disks available for the repair, which in this case was just the WD drive, and after hitting Apply the beeping stopped and the NAS happily went about repairing the volume.

So, all in all, I'll give it +2 for letting me know so quickly that my drive had issues, but -1 for not letting me decide my own level of risk for the volume. I will do some tests on the "bad" drive to see how bad it actually is.

UPDATE:  I downloaded and ran the excellent SeaTools disk test suite and it, too, was unequivocally warning me not to use this disk.  I then discovered that the drive was still under warranty from Seagate (you can find out by entering your s/n on their site) so now I have a factory refurbished 500GB drive sitting on my desk.  The NAS is chugging away happily with the 1TB so I guess I'll keep it around in case of another failure.