Sunday, October 26, 2014

Dahua IPC-HFW4300S Setup

I recently acquired an IPC-HFW4300S camera to play around with.  It's very feature-rich, but has almost no documentation available online, which led to many fruitless hours of guesswork while trying to configure it.

Here are a few tips that will hopefully help if you're stuck in a similar situation:

The Hard Reset
Every tinkerer's friend, the hard-reset gets you out of almost any jam you can imagine.  To do a hard-reset:

  1. Remove the screw that holds the lens visor in place, and remove the visor.
  2. Carefully unscrew the front lens cover from the camera.
  3. The reset button is tiny and located at the 12 o'clock position on the face.
  4. Press and hold it for at least 5 seconds.  Be very careful not to touch the camera lens!
  5. Carefully screw the lens cover back on and reattach the visor.
What IP Address?
It seems to have a few preferred addresses, and some claim they are static values, but I do see the camera registered with my router's DHCP server, so it's anyone's guess.  I've seen it on 192.168.1.108 and 192.168.1.231.  My advice is to watch your router's connection list and look for a MAC address with the prefix 90:02:09.  I switched to a static IP immediately (but not before creating a new admin account!)
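
If watching the router is inconvenient, a quick scan from another machine on the same subnet can also turn the camera up.  A minimal sketch, assuming the common 192.168.1.0/24 subnet and that you have nmap installed:

# Ping-sweep the subnet to populate the ARP table, then look for Dahua's MAC prefix
nmap -sn 192.168.1.0/24 > /dev/null
arp -a | grep -i "90:02:09"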

Browser Choice
I could never get the admin interface to work in Chrome.  I switched to Safari and have had no trouble since.

Admin Account
The default username/password is admin/admin.  After your first login, this account seems to be automatically disabled, so you must immediately create a new account in the admin group or you'll need to start over.  There's also a mysterious 888888/888888 account in the admin group that may save you if you forget; it seems you should change that password too.  For some reason, the system won't let you delete it.

Account is Locked Out
If you're trying to log in as the admin user, you likely didn't read the above paragraph.  If you're stuck, try 888888/888888, then immediately create a new admin user.  If that fails, you've got no choice but to hard-reset and start from scratch.

Sunday, January 13, 2013

Synology, OpenSSH & BASH (the proper way)

Synology OpenSSH

I wrote in a past article about how to set up your Synology NAS to use OpenSSH, closing down a weird backdoor in the process.  In that article, I warned you to be careful, since a mistake could cost you access to your NAS.  Well, I should have taken my own advice: I did just that and locked myself out of my root account.

So, having redone everything with a more experienced eye, I present a better (and safer) method for accessing your NAS via OpenSSH.  Throughout this, I strongly suggest keeping a terminal open with the root user logged in so you can verify and fix things along the way.

If you get locked out of your root account, the only solution is to reinstall the system partition.  This should leave your data entirely safe, but as always it's advisable to make a backup first.

Boss User

Doing daily routines while logged in as root is considered dangerous since you can inadvertently do real and irrevocable damage with an errant "rm -r *" command or the like.  The preferred method is to do your daily business with a less privileged account and only act as root when you need it.

For this, we'll create a new user via the web interface control panel.  For our example, we'll use the name "boss".  Be sure to give this new user all permissions, especially membership in the "administrators" group.

Now to prep the boss account, log in as root and execute:

mkdir -p /var/services/homes/boss
chown admin:administrators /var/services/homes/boss
ln -s /var/services/homes /homes

To enable ssh access, edit the "/etc/passwd" file.  Find the line that starts with "boss" and change "/sbin/nologin" to "/bin/sh":

boss:x:1026:100::/var/services/homes/boss:/bin/sh 

Verify you can ssh into the NAS with the boss account.
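
For example, from another machine (the address is whatever you assigned your NAS; 192.168.1.10 here is purely illustrative):

ssh boss@192.168.1.10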

SU Command

To run a command as root, you use the "su" command, which effectively lets you become root.
However, the stock install seems to have incorrect permissions that prevent you from using the command.  The "su" command is actually just a symlink to "busybox", and we can set the suid bit (while logged in as root) with:

chmod 4755 /bin/busybox

You can test this by logging in as "boss" and trying:

su - -c whoami

If it returns "root" (after entering your password) then it is working.

Use OpenSSH & BASH

Assuming you've bootstrapped your system, install the relevant packages:

ipkg install bash
ipkg install zlib
ipkg install openssh

We can now configure our accounts to use this shell.  We'll do so in a careful manner, though, to protect against future DSM updates that might make the BASH binary unreachable.  Edit the ~/.profile file for your user and add this at the end:

if [ -x /opt/bin/bash ] ; then
  exec /opt/bin/bash
fi
This will run the normal "sh" shell during login, then switch to "bash" only if it exists.  You can then put your normal bash configuration in ".bashrc" as usual.
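
A minimal ~/.bashrc to start from might look like this (the contents are purely illustrative):

# ~/.bashrc -- read by interactive bash shells
export PATH=/opt/bin:/opt/sbin:$PATH
alias ll='ls -l'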


Disable Root Access

If your NAS is visible to the internet, then you'll want to disable root ssh logins entirely.  You can do this by adding the line "PermitRootLogin no" to the /etc/ssh/sshd_config file.
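
The relevant portion of the file should end up looking like this (note that sshd only reads its configuration at startup, so restart the SSH service afterwards; toggling it off and on in the DSM control panel is the safest way):

# /etc/ssh/sshd_config
PermitRootLogin no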

You should also disable the "admin" and "guest" users from accessing the web interface; you'll use the "boss" account for all your operations.  You do this from the "Users" section of the control panel:
  1. Log into the web interface as the "boss" user from above.
  2. Edit the "admin" and "guest" accounts and check the "Disable this account" box.

SUDO Convenience

If you want to really get fancy, you can install the "sudo" package.  This will let you run individual commands as root (e.g. "sudo mkdir /homes/foo").

su -
ipkg install sudo

While still root, edit the file "/opt/etc/sudoers".  Find this line:

# %wheel ALL=(ALL) ALL

Remove the leading "#" character, and change "wheel" to "administrators" (matching the group we added "boss" to earlier):

%administrators ALL=(ALL) ALL

This gives all users in the "administrators" group sudo access.
You can use this variant to skip the password prompt on use... be careful though!

%administrators ALL=(ALL) NOPASSWD: ALL
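
To verify, log in as "boss" and run something harmless through sudo:

sudo whoami    # prints "root" (after a password prompt, unless NOPASSWD is set)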

Saturday, January 12, 2013

rsync Mystery

After a long hiatus, I ventured onto my Synology NAS and discovered there were a few DSM updates pending.  The updates themselves add some pretty fantastic functionality, like AirPlay support directly from the NAS and your own personal cloud.  However, after the updates, my automated rsync backup (Cygwin -> NAS) was failing with "rsync: command not found", indicating the rsync binary could not be found on the remote server.

Investigation

I could ssh to the NAS and execute "which rsync" to see that it was at "/usr/syno/bin/rsync", which was in my PATH variable.  However, when I ran "ssh root@foo 'which rsync'" I got nothing.  So I tried "ssh root@foo 'echo $PATH'" and got a different path from the one I see when logged in directly!

Solution

To be honest, I've no idea why the paths are different.  My guess is that the ~/.profile file is not sourced for non-interactive connections like the one rsync makes.  Anyway, the easy out was to create a symlink from somewhere on the standard path to the actual rsync binary: "ln -s /usr/syno/bin/rsync /opt/bin/rsync", and all worked well once more.
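
An alternative that avoids modifying the NAS at all is to tell the rsync client where the remote binary lives, using rsync's standard --rsync-path option (the paths here match my setup; adjust to yours):

rsync -av --rsync-path=/usr/syno/bin/rsync ./backup/ root@foo:/volume1/backup/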

Friday, July 6, 2012

Element-Level Caching of Collection Mapping Methods

Note:  This article applies to Spring 3.0 and EhCache 2.5
Update:  Allow non 1:1 mapping
Update:  Uses Spring 3.1's Caching abstractions
Update:  Refactored for clarity

Annotation Based Collection Caching

If you have ever used the ehcache-spring-annotations package or Spring's caching abstraction, then you know what an awesome thing method-level caching is.  If you haven't used it, go check it out now!  To summarise (poorly): you annotate a method with "@Cacheable" and the package uses proxying to wrap the method invocation in a cache template.

For example:
@Cacheable("foo")
public SomeObject getObjectFromServer( String parameter )
{
  return someLengthyRestCall( parameter );
}

During execution, you would get a program flow similar to this:
key = generateKey( methodSignature, parameter );
if( cache.contains( key )) return cache.get( key );
value = getObjectFromServer( parameter );
cache.put( key, value );
return value;

It's more involved than this in reality, but the benefits and ease of this pattern are obvious.  In addition, you can leverage all the power and flexibility of ehcache to do your actual caching.  I apply this to all my MVC service implementations for instant web-service caching.

Limitations

One issue I had, though, was when working with collection-to-collection mapping methods.  This is a common pattern (for me, anyway) where a list of type B is converted to a list of type A in an idempotent, stateless manner.

List<A> getAfromB( List<B> list )
{
  List<A> result= new ArrayList<A>( list.size() );
  for( B b: list ) result.add( getAfromB( b ));
  return result;
}

Another common pattern is the unordered mapping method:
Collection<A> getAfromB( Collection<B> coll )
{
  Collection<A> result= new HashSet<A>( coll.size() );
  for( B b: coll ) result.add( getAfromB( b ));
  return result;
}
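
Both snippets assume an element-level overload, A getAfromB( B b ), that performs the actual (expensive) per-element conversion.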

If you annotate such a method with @Cacheable, it will only cache complete result mappings, which can still be useful if you need to map {A,B,C} -> {X,Y,Z} on a regular basis.  What would really be neat, though, is if caching were applied to each element individually, with only the unknown values being passed on for resolution.

Enter the Aspect

This is a perfect application of AOP (aspect-oriented programming).  Although I'm no AOP expert, I was able to get my feet wet and enable just such a solution in about two hours, thanks to the excellent documentation provided with Spring.  This is a pretty bare-bones implementation, but it illustrates the important AOP bits.

Annotation

First we must declare a new custom annotation with which we will mark any method that meets our requirements.  By using explicit annotation-based configuration, we give responsibility for the proper use of this aspect to the programmer.  In this configuration, we allow unordered Collection:Collection mapping by specifying which field in the result objects contains the request key.  An ordered List:List mapping is also possible by specifying "IMPLICIT" as the keyField.

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface CollectionCache
{
  public static final String IMPLICIT = "##_implicit_##";
  String cacheName();                            // EhCache to use
  String keyPrefix();                            // This plus ID is unique key
  String keyField();                             // ID field in result object
  Class<?> implClass() default ArrayList.class;  // Collection type passed to target
}

Advice Class

Next we create the actual advice class.  We use the @Aspect annotation to make it an aspect, and add a setter to allow injection of a Spring CacheManager object.  We also have the key generator and convenience methods for casting.
@Aspect
public class CollectionCacheAspect
{
  // Object used as placeholder when weaving new and cached results
  private static final Object HOLDER = new Object();
  // Object used as part of the key when caching the 'null' object.
  private static final Serializable NULL_KEY = new Long( Long.MIN_VALUE );

  // CacheManager, configured elsewhere
  private CacheManager cacheManager;

  @Required
  public void setCacheManager( CacheManager cacheManager )
  {
    this.cacheManager = cacheManager;
  }

  @SuppressWarnings( "unchecked" )
  public static <T> List<T> cast(List<?> p)
  {
    return (List<T>) p;
  }

  @SuppressWarnings( "unchecked" )
  public static <T> Class<T> cast(Class<?> p)
  {
    return (Class<T>) p;
  }
 
  public Serializable generateKey( String keyPrefix, Object input )
  {
    return 31 * (long)keyPrefix.hashCode() + input.hashCode();
  }

Next we add the @Around advice, which will decide whether to treat the proxied call as an individual, ordered List:List, or unordered Collection:Collection operation.
@Around("@annotation(config) && args(arg) ")
public Object doCollectionCache( ProceedingJoinPoint pjp,
                                 CollectionCache config,
                                 Object arg ) throws Throwable
{
  // Get annotation configuration
  Class<?> implClass = config.implClass();
  String cacheName = config.cacheName();
  String keyPrefix = config.keyPrefix();
  String keyField = config.keyField();
  // Get Cache
  Cache cache = cacheManager.getCache( cacheName );
  if( cache == null ) {
    throw new AopInvocationException( "CollectionCache:  Cache '" + cacheName +
                                      "' does not seem to exist" );
  }

  // Call appropriate implementation based on run-time scenario
  Object result;
  if( CollectionCache.IMPLICIT.equals( keyField )) {
    if( List.class.isInstance( arg ) &&
        List.class.isAssignableFrom( implClass )) {
      // IMPLICIT mode (special handling for List->List)
      Class<List<Object>> listClass = cast( implClass );
      result = cacheOrdered( pjp, cache, keyPrefix, listClass, (List<?>) arg );
    } else {
      // Normal single-item cache where arg is the key
      result = cacheSingle( pjp, cache, keyPrefix, keyField, arg );
    }
  } else if( Collection.class.isInstance( arg )) {
    // UNORDERED mode (uses explicit field from result objects)
    Class<Collection<Object>> collClass = cast( implClass );
    result = cacheUnordered( pjp, cache, keyPrefix, keyField,
                             collClass, (Collection<?>)arg );
  } else {
    // SINGLE mode
    result = cacheSingle( pjp, cache, keyPrefix, keyField, arg );
  }
  return result;
}


Single Element Operation 

Since we want non-Collection requests to share the same cache as the Collection calls, we must provide the ability to operate on a single element.  This also handles the special "null" case.

private Object cacheSingle( ProceedingJoinPoint pjp, Cache cache,
                            String keyPrefix, String keyField, Object input )
  throws Throwable
{
  // Determine key
  Object value;
  Object suffix = ( input == null ) ? NULL_KEY : input;
  Serializable key = generateKey( keyPrefix, suffix );
  // Check cache
  ValueWrapper wrapper = cache.get( key );
  // Return cached, or fetch actual value
  if( wrapper != null ) {
    value = wrapper.get();
  } else {
    value = pjp.proceed( new Object[] { input } );
    // Cache fetched value if not null
    if( value != null ) {
      cache.put( key, value );
    }
  }
  return value;
}


Unordered Operation

Unordered mapping is the simpler of the two multi-value modes of operation:  we need not worry about maintaining the order of the request, because the cache key is explicitly found in the result values.

private Collection<?> cacheUnordered( ProceedingJoinPoint pjp, Cache cache,
                                      String keyPrefix, String keyField,
                                      Class<Collection<Object>> implClass,
                                      Collection<?> input ) throws Throwable
{
  // Holder for intermediary results
  Collection<Object> hits = new ArrayList<Object>( input.size() );
  // Holder for our misses, which we'll pass on to the original target
  Collection<Object> misses = implClass.newInstance();
  for( Object in: input ) {
    // Search cache for each element; nulls always miss
    ValueWrapper wrapper = null;
    // Put found value in "hits", else put missed key in "misses"
    if( in != null ) {
      Serializable key = generateKey( keyPrefix, in );
      wrapper = cache.get( key );
      if( wrapper == null ) {
        misses.add( in );
      } else {
        hits.add( wrapper.get() );
      }
    } else {
      misses.add( in );
    }
  }

  // Pass our cache misses to the original target (the result may be any
  // Collection type, not necessarily a List)
  Collection<Object> results = Collections.<Object>emptyList();
  if( misses.size() > 0 ) {
    @SuppressWarnings( "unchecked" )
    Collection<Object> fetched =
        (Collection<Object>)pjp.proceed( new Object[] { misses } );
    results = fetched;
  }

  // Cache results
  for( Object value: results ) {
    // Pull key from explicit field
    Object suffix = PropertyAccessorFactory.forBeanPropertyAccess( value )
                    .getPropertyValue( keyField );
    if( suffix != null ) {
      Serializable key=generateKey( keyPrefix, suffix );
      cache.put( key, value );
    }
    // Merge new values into result collection
    hits.add( value );
  }
  return hits;
}


Ordered Operation

Ordered mapping is more difficult. We use the previously defined HOLDER object to mark placeholders in the output List where we will put the results of cache misses from the target method.

private List<?> cacheOrdered( ProceedingJoinPoint pjp, Cache cache, 
                              String keyPrefix, Class<List<Object>> implClass,
                              List<?> input ) throws Throwable
{
  // Holder for intermediary results
  List<Object> hits = new ArrayList<Object>( input.size() );
  // Holder for our misses, which we'll pass on to the original target method
  List<Object> misses = implClass.newInstance();
  for( int i=0; i<input.size(); i++ ) {
    // Search cache for each element; nulls always miss
    ValueWrapper wrapper = null;
    Object in = input.get( i );
    if( in != null ) {
      // Check cache for this object
      Serializable key = generateKey( keyPrefix, in );
      wrapper = cache.get( key );
    }
    if( wrapper == null ) {
      // If element is not found, put HOLDER Object and load the 'misses' list
      hits.add( HOLDER );
      misses.add( in );
    } else {
      // If element is found, then add cached value to intermediary results
      hits.add( wrapper.get() );
    }
  }

  // Pass our cache misses to original target
  List<Object> results = Collections.<Object>emptyList();
  if( misses.size() > 0 ) {
    results = cast( (List<?>)pjp.proceed( new Object[] { misses } ));
  }

  if( results.size() != misses.size() ) {
    // If the result size does not match the number of misses, we cannot cache
    // the new values, as we do not know the associated keys.  Just merge the
    // lists and return (copying first, since the target's returned list may
    // be unmodifiable).
    List<Object> merged = new ArrayList<Object>( results );
    for( Object h: hits ) {
      if( h != HOLDER ) {
        merged.add( h );
      }
    }
    return merged;

  } else {
    // We'll reuse this list for our output
    misses.clear();
    // Iterate intermediary results
    Iterator<?> iter = results.iterator();
    for( int i=0; i<hits.size(); i++ ) {
      Object h = hits.get( i );
      if( h == HOLDER ) {
        if( iter.hasNext() ) {
          // Each place-holder will have its actual value in the results list
          // at the same position (ie. the Nth HOLDER's value is results[N])
          Object value = iter.next();
          misses.add( value );
          // Cache new non-null values
          if( input.get( i ) != null ) {
            Serializable key=generateKey( keyPrefix, input.get( i ));
            cache.put( key, value );
          }
        }
      } else {
        // This was a cache hit earlier so just use it
        misses.add( h );
      }
    }
  }
  return misses;
}

Cache Evictions

Eviction is just a simpler application of the above concepts.

The annotation:

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface CollectionEvict
{
  public static final String IMPLICIT = CollectionCache.IMPLICIT;
  String cacheName();
  String keyPrefix() default "";
  String keyField();
  boolean removeAll() default false;
}

Advice to handle the special "no-args-remove-all" scenario:
@Before("@annotation(config)" )
public void doCollectionEvict( CollectionEvict config ) throws Throwable
{
  if( !config.removeAll() ) {
      // No keys and (removeAll == false)?  Nothing to do here.
      return;
  }
  doCollectionEvict( config, null );
}

Advice to choose the mode of operation (ordered, unordered, implicit, etc.):
@Before("@annotation(config) && args(arg) ")
public void doCollectionEvict( CollectionEvict config,
                               Object arg ) throws Throwable
{
  // Get annotation configuration
  String cacheName = config.cacheName();
  String keyPrefix = config.keyPrefix();
  String keyField  = config.keyField();
  boolean removeAll = config.removeAll();

  // Get Cache
  Cache cache = cacheManager.getCache( cacheName );
  if( cache == null ) {
    throw new AopInvocationException( "CollectionCacheEvict:  Cache '"+ cacheName +"' does not seem to exist?" );
  }

  if( removeAll ) {
    // Evict all items
    cache.clear();

  } else if( Collection.class.isInstance( arg )) {
    // Evict each element of the collection
    for( Object in: (Collection<?>)arg ) {
      evict( cache, keyPrefix, keyField, in );
    }

  } else {
    // Evict as object
    if( arg == null ) {
      evict( cache, keyPrefix, keyField, NULL_KEY );
    } else {
      evict( cache, keyPrefix, keyField, arg );
    }
  }
}
And the actual eviction logic:
private void evict( Cache cache, String keyPrefix,
                    String keyField, Object input )
{
  // The key strategy depends on the keyField parameter:  if it is IMPLICIT,
  // the input itself is the key suffix; otherwise the key is read from the
  // named field of the input object.
  final boolean implicitKey = CollectionCache.IMPLICIT.equals( keyField );
  if( input != null ) {
    Object suffix = implicitKey ? input :
                    PropertyAccessorFactory.forBeanPropertyAccess( input )
                    .getPropertyValue( keyField );
    Serializable key = generateKey( keyPrefix, suffix );
    cache.evict( key );
  }
}


Spring Configuration

Wiring it all together with Spring looks like this:

<!-- Enable AOP -->
<aop:aspectj-autoproxy/>
<!-- The EhCacheManager is usually created within Hibernate startup, so we must
indicate we want the shared singleton instance. -->
<bean id="mvcEhCache" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean">
    <property name="shared" value="true"/>
</bean>
<!-- Spring's abstract CacheManager -->
<bean id="cacheManager" class="org.springframework.cache.ehcache.EhCacheCacheManager">
  <property name="cacheManager" ref="mvcEhCache"/>
</bean>
<!-- Our Aspect -->
<bean id="collectionCacheAspect" class="CollectionCacheAspect">
  <property name="cacheManager" ref="cacheManager"/>
</bean>
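
The named caches must also exist in your ehcache.xml.  A minimal definition for the "listCache" used below might look like this (the values are illustrative):

<cache name="listCache"
       maxElementsInMemory="10000"
       eternal="false"
       timeToLiveSeconds="3600"
       overflowToDisk="false"/>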

Putting it All Together

Now we can annotate any appropriate class and get element-level caching:

@CollectionCache( cacheName="listCache", keyPrefix="AtoB", keyField=CollectionCache.IMPLICIT )
List<A> getAfromB( List<B> list )
{
  List<A> result= new ArrayList<A>( list.size() );
  for( B b: list ) result.add( getAfromB( b ));
  return result;
}

@CollectionEvict( cacheName="listCache", keyPrefix="AtoB", keyField=CollectionEvict.IMPLICIT )
void evictB( B item ) {}
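
Note that evictB's empty body is intentional: the @Before advice performs the eviction, so the method exists only as a join point on which to hang the annotation.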

Conclusions

This, of course, is just part of a full solution.  It should be easy to add annotations and functionality for adding individual items to the same cache, and for triggering removal of elements either via a list or individually.  It's a long way from being as feature-filled or rigorous as the original ehcache-spring-annotations package, but it solves a specific problem and is a good introduction to AOP in Spring.

Thursday, April 26, 2012

Dealing With Corrupt Maven Artifacts

Yesterday I was happily coding away, when Eclipse gave me a disturbing error:
Archive for required library: '/foo/bar.jar' in project 'Baz' cannot be read or is not a valid ZIP file
Now, since I use Maven 3 for all my dependency handling, this seemed a rather strange error.  I went to the location of the artifact JAR in my Maven repository, and sure enough, the JAR file was corrupt!  'Odd', I thought, so I deleted the version folder and built again, expecting it to fetch a good copy this time.  Same error!  And the corrupt JAR file had returned!

I checked the canonical copy on the actual Maven repository site and it was fine.  I trawled Google looking for similar situations and found Eclipse Bug 375249, which shows it to be a known bug in 3.7 (Indigo), fixed in 4.2 (Juno), with a suggested workaround.  The workaround didn't work, however.

I finally realised that the rebuild step above wasn't fetching the artifact JAR from the canonical Maven repository, but from our local Archiva repository, which acts as a cache for artifacts.  On a hunch, I deleted my local copy of the JAR, then the cached copy on our Archiva repository, and rebuilt.  Success!
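
In concrete terms, the recovery looked roughly like this (the artifact path is illustrative, and Archiva's cached copy was deleted through its web interface):

# Remove the corrupt artifact from the local repository...
rm -rf ~/.m2/repository/com/example/bar/1.0
# ...purge the cached copy from the Archiva proxy, then re-resolve:
mvn dependency:resolve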

I have no idea why this one set of archives got corrupted when being pulled down by Archiva, but at least I can now fix the issue and carry on with my development.

Monday, January 9, 2012

iTunes 10 and the NAS

Recently I decided to give up on the MediaServ / ROKU method for listening to music from my NAS and try out the Apple way with a MacBook and Airport Express.  The ROKU method worked fine, but the user interface was pretty clunky.  I got a new MacBook Air from work so decided to give it a whirl.

Getting things working was pretty easy.
  1. Re-enable your iTunes service, if previously disabled
  2. Enable the iTunes service via the UI at "Control Panel->iTunes".  Set the share name and password if desired
  3. This will create a new folder at /volume1/music, which is where your shared music must reside. 

    Since I already have my music elsewhere, I'll just create a symlink to my existing collection:
    cd /volume1
    rm -rf music
    ln -s /volume1/data/MEDIA/music music
  4. Return to the UI and click "Re-index" to scan your collection.  This might take a while.
  5. In iTunes, verify your server is appearing under "Shared Libraries" and click it
  6. You should see all your music available to be played.
  7. If you add more files to your folder, you might have to re-index again.
Note that when I first attempted this, iTunes exhibited some odd behaviour.  After selecting the shared library, it would load the listing and display it briefly before returning to the main music library.  A bit of poking around revealed this to be a known issue caused by the latest iTunes 10 distribution.  Luckily, Synology already had a patched DSM binary available (v3.2), so if you are using iTunes 10 and witnessing any weird incompatibilities, try updating your DSM via "Control Panel->DSM Update".

Saturday, June 18, 2011

Those Annoying @eaDir Files (Synology DS210j NAS)

If you've installed MyMedia Server to your Synology NAS, the first thing you'll likely notice is that there are folders called "@eaDir" everywhere on your system.  These are "hidden" folders where the server stores thumbnail files associated with iTunes support.  Since we are not using iTunes, we do not need these folders!

To stop the folders from being created:
cd /usr/syno/etc.defaults/rc.d
./S66synoindexd.sh stop
./S77synomkthumbd.sh stop
./S88synomkflvd.sh stop
./S99iTunes.sh stop
chmod 000 S66synoindexd.sh S77synomkthumbd.sh S88synomkflvd.sh S99iTunes.sh

To re-enable the folders being created:
cd /usr/syno/etc.defaults/rc.d
chmod 755 S66synoindexd.sh S77synomkthumbd.sh S88synomkflvd.sh S99iTunes.sh
./S66synoindexd.sh start
./S77synomkthumbd.sh start
./S88synomkflvd.sh start
./S99iTunes.sh start


To delete *all* @eaDir folders on your system:
CAUTION:  This will delete files without confirmation, so be sure you have it right!!
cd /volume1/music (or wherever your doc root is)
find . -name "@eaDir" -type d -print | while read n ; do echo "$n" ; rm -rf "$n" ; done


If you are copying files to the NAS from a Mac, then sometimes @eaDir folders will reappear.  When this happens, just re-run the above delete script to get rid of them.