Friday, April 20, 2018

Budgeting a Split Check in YNAB

One topic that seems to cause a lot of consternation in the YNAB community is how to deal with this common situation: you go to a restaurant with a friend, put the whole bill on your credit card, and get paid cash by your friend for their share.

After much fiddling about, here's how I deal with it:
  1. Make the transaction a split.
  2. Add the whole amount of the bill as an outflow against your Restaurant category.
  3. Add the cash received amount as an inflow against your Restaurant category.
  4. Add the cash received amount as an outflow against the "Petty Cash" category.
This results in a net transaction that matches your credit-card bill, and keeps all the bookkeeping magic in a single transaction.
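For concreteness, here's what the split looks like with hypothetical numbers (a $50 bill on your card, $20 cash from your friend):

```text
Split transaction (credit card account):
  Restaurant   -$50.00   (whole bill)
  Restaurant   +$20.00   (friend's cash)
  Petty Cash   -$20.00   (cash into your pocket)
  Net:         -$50.00   (matches the card charge)
```

Your Restaurant category nets -$30.00 (your actual share), and the $20 in your pocket stays off the books, per the Petty Cash approach.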

Check my previous article for info on the "Petty Cash" category.

Handling Cash in YNAB

The Curse of Cash

One aspect of budgeting that I have always found so frustrating is the tedious nature of tracking my cash expenditures.  This article explains how I was able to break past my OCD and make peace with the paper & coins in my pocket.

I had two major problems with my cash spending, which have caused me to walk away from budgeting more times than I can remember:
  1. The vast majority of my cash purchases do not result in a receipt, making it almost impossible to accurately record my transactions unless I do it immediately after purchase.
  2. My borderline OCD tendencies make it impossible to walk away from a two-cent error in my calculations.

Pettiness as a Virtue

Normally I'd create a "Wallet" account and track how much money was coming out of the ATM, split bills, and whatever I found behind the couch cushions when cleaning.  Every transaction in cash would be dutifully tracked, down to the nickel I threw in the fountain to make a wish.  Inevitably I'd miss a day (or a week) and spend an hour racking my brain on where the 37 cents missing from my Wallet account could have gone.  Nine times out of ten I'd just give up and mark it as "Misc".

Finally I said "Enough is enough" and made a hard compromise.  I realized that alongside the budget categories that represent the major trappings of Western civilization (mortgage, auto loan, petrol, student debt, credit cards, etc) the amount I was spending in actual cold, hard cash was irrelevant.  In effect, it was petty in the truest sense.

So, instead of tracking cash as an account, I created a "Petty Cash" category and toss all my cash in there.  The key is to treat it like any other category -- an outflow to Petty Cash (eg. an ATM withdrawal) is the end of the tracking.  As far as your budget is concerned, that money is gone, already spent, frittered away on nothing worth tracking.  Likewise, small income amounts (found money, deposit returns, etc) can just go in your pocket without any paperwork.

But What About...

I see you are already thinking of the hundred special cases where this doesn't work.  So there is a little trickery required to keep things straight.  The biggest obstacle here is when you use your petty cash to fund a tracked expense (eg. eating at a cash-only restaurant).   Here's my solution:
  1. Create a "Bookkeeping" on-budget account.
  2. Record the amount in question as an inflow from Petty Cash.
  3. Record the amount in question as an outflow to the relevant category.
What you've done here is create a net-zero transaction that brings money back into the budget (from the cash reserves in your pocket) and then immediately out of the budget to the proper category.  This works for any cash transaction, so long as you recognize that Petty Cash does not represent the amount of money in your wallet, but is a category like any other where you can magic budget money out of the ether when necessary.
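As a hypothetical example, paying $15 at a cash-only restaurant would be recorded in the Bookkeeping account as:

```text
Split transaction (Bookkeeping account):
  Petty Cash   +$15.00   (cash "returns" to the budget)
  Restaurant   -$15.00   (the tracked expense)
  Net:           $0.00   (account balance unchanged)
```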

The Fine Print

There are a few considerations here:
  1. This only works if your cash expenditures are truly petty.
    1. This can be accelerated by using credit cards whenever possible.
    2. Obviously that only works if you are a pay-off-your-balance-every-month sort of person.
  2. A hybrid approach is to have both a Wallet account and a Petty Cash category.
    1. Budget some small amount to Petty Cash every month ($20).
    2. This gives lots of wiggle room for small purchases, found money, etc.
    3. If you are consistently overspending this amount, reconsider your categories.

Monday, April 16, 2018

Synology and Dropbox Oddity

One thing I love about my Synology NAS is that it can stand double-duty as an always-on server within my house for long-running lightweight operations.  With the ipkg package manager giving access to nearly every command-line tool imaginable, it's often easier to fire up a screen session and have the NAS chug away at something.  Sure, my laptop could get things done in a fraction of the time, but the Synology's CPUs are idle 90% of the time, so I might as well put them to some use.  For example, for the past week I've had it concatenating MP4 files from my dash-cam and re-encoding them for YouTube with ffmpeg.

Recently, I was downloading some raw JSON data (tracking data for a friend's cross-country bicycle ride) via a cron script that just made an authenticated curl call to the relevant service.  I was saving the JSON data into the Dropbox folder on the Synology with an eye to giving my friend access to that folder.  I verified the files were getting downloaded properly, but after a few days I noticed they weren't showing up in any of my other Dropbox clients.

To cut a long story short, the files written within the cron job seem to avoid detection by the Cloud Sync service, and thus just sit there without being synced.  The easy solution is to go back later and touch the files, which causes them to sync immediately, but this of course is a manual step in an automated process, so it's not ideal.  One could automate this hastily with a bash script that periodically touches every file in the folder, kept running via screen.  You could even get fancy and force identical timestamps if required.
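Here's a minimal sketch of that workaround; the folder path is hypothetical, and on the NAS you'd leave the loop running inside a screen session (or drive it from cron):

```shell
# Nudge Cloud Sync by refreshing the mtime of every JSON file in the folder.
# $1 is the watched Dropbox folder, e.g. /volume1/Dropbox/tracking
nudge_sync() {
  find "$1" -type f -name '*.json' -exec touch {} +
}

# e.g. run every 5 minutes inside screen:
#   while true; do nudge_sync /volume1/Dropbox/tracking; sleep 300; done
```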

That's good enough for this short-term project, but I'm curious if there is a better way to force sync these files.  Feel free to add your ideas in the comments.

Sunday, October 26, 2014

Dahua IPC-HFW4300S Setup

I recently acquired an IPC-HFW4300S camera to play around with.  It's very feature rich, but has almost no documentation available online, leading to many fruitless hours of guesswork trying to configure the camera.

Here's a few tips that hopefully help you if you're stuck in a similar situation:

The Hard Reset
Every tinkerer's friend, the hard-reset gets you out of almost any jam you can imagine.  To do a hard-reset:

  1. Remove the screw that holds the lens visor in place, and remove the visor.
  2. Carefully unscrew the front lens cover from the camera.
  3. The reset button is tiny and located at 12 o'clock on the face.
  4. Press this for 5+ seconds.  Be very careful not to touch the camera lens!
  5. Carefully screw the lens cover back on and reattach the visor.
What IP Address?
It seems to have a few preferred addresses, and some claim they are static values, but I do see the camera as registered under my router's DHCP server, so it's anyone's guess.  I've seen it on 192.168.1.108 and 192.168.1.231.  My advice here is to watch your router's connection list, and look for a MAC address prefix of 90:02:09.  I switched to a static IP immediately (but not before creating a new admin account!)
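A quick way to spot the camera from a shell, assuming your machine can see the ARP table (the grep pattern is just the Dahua MAC prefix mentioned above):

```shell
# Filter ARP entries for Dahua's 90:02:09 MAC prefix (case-insensitive)
find_dahua() {
  grep -i '90:02:09'
}

# Usage (output format varies by OS):
#   arp -a | find_dahua
```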

Browser Choice
I could never get the admin interface to work in Chrome.  I switched to Safari and have no troubles.

Admin Account
The default username/password is admin/admin.  Once you've logged in once, this account seems to be automatically disabled, so you must immediately create a new account in the admin group, or you'll need to start over.  There's also a mysterious account 888888/888888 in the admin group that may save you if you forget, but it seems you should change that password too.  The system won't let you delete it for some reason.

Account is Locked Out
If you're trying to log in as the admin user, you likely didn't read the above paragraph.  If you're stuck, try 888888/888888 then immediately make a new admin user.  If that fails, you've got no choice but to hard-reset and start from scratch.

Sunday, January 13, 2013

Synology, OpenSSH & BASH (the proper way)

Synology OpenSSH

I wrote in a past article about how to set up your Synology NAS to use OpenSSH, closing down a weird backdoor access in the process.  In that article, I warned you to be careful, as a mistake could cost you access to your NAS.  Well, I should have taken my own advice: I did just that and locked myself out of my root account.

So, having to redo everything with a more experienced eye, I present a better (and safer) methodology for accessing your NAS via OpenSSH.  Throughout this, I strongly suggest keeping a terminal open with the root user logged in so you can verify and fix things along the way.

If you get locked out of your root account, the only solution is to reinstall the system partition.  This should leave your data entirely safe, but as always it's advisable to make a backup first.

Boss User

Doing daily routines while logged in as root is considered dangerous since you can inadvertently do real and irrevocable damage with an errant "rm -r *" command or the like.  The preferred method is to do your daily business with a less privileged account and only act as root when you need it.

For this, we'll create a new user via the web interface control panel.  For our example, we'll use the name "boss".  Be sure to give all permissions to this new user, especially adding it to the "administrators" group.

Now to prep the boss account, log in as root and execute:

mkdir -p /var/services/homes/boss
chown admin:administrators /var/services/homes/boss
ln -s /var/services/homes /homes

To enable ssh access, edit the "/etc/passwd" file.  Find the line that starts with "boss" and change "/sbin/nologin" to "/bin/sh"

boss:x:1026:100::/var/services/homes/boss:/bin/sh 
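If you'd rather script that edit, here's a sketch using sed (shown here against an arbitrary passwd file; on the NAS the real file is /etc/passwd):

```shell
# Switch a user's login shell from /sbin/nologin to /bin/sh in a passwd file.
# $1 = username, $2 = path to the passwd file
enable_shell() {
  sed -i "s#^\($1:.*:\)/sbin/nologin\$#\1/bin/sh#" "$2"
}
```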

Verify you can ssh into the NAS with the boss account.

SU Command

To run a command as root, you use the "su" command, which effectively lets you become root.
However, the stock install seems to have incorrect permissions that prevent you from using the command.  The "su" command is actually just a symlink to "busybox", and we can set the suid bit (while logged in as root) with:

chmod 4755 /bin/busybox

You can test this by logging in as "boss" and trying:

su - -c whoami

If it returns "root" (after entering your password) then it is working.

Use OpenSSH & BASH

Assuming you've bootstrapped your system, install the relevant packages:

ipkg install bash
ipkg install zlib
ipkg install openssh

We can now configure our accounts to use this shell.  We'll do so in a more careful manner, though, to protect against future DSM updates which might make the BASH binary unreachable.  Edit the ~/.profile file for your user and add this at the end:

if [ -x /opt/bin/bash ] ; then
  exec bash
fi

This will run the normal "sh" shell during login then go to "bash" only if it exists.  You can then put your normal bash configuration in ".bashrc" as per normal.


Disable Root Access

If your NAS is visible to the internet then you'll want to disable the root ssh account entirely.  You can do this by adding the line "PermitRootLogin no" to the /etc/ssh/sshd_config file.
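A sketch of scripting that change, run against a copy here; on the NAS the real file is /etc/ssh/sshd_config, and you'd restart sshd afterwards:

```shell
# Set PermitRootLogin to "no", replacing any existing (possibly commented-out)
# directive, or appending one if the file has none.  $1 = path to sshd_config
disable_root_ssh() {
  if grep -q '^[#[:space:]]*PermitRootLogin' "$1"; then
    sed -i 's/^[#[:space:]]*PermitRootLogin.*/PermitRootLogin no/' "$1"
  else
    echo 'PermitRootLogin no' >> "$1"
  fi
}
```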

You should also disable the "admin" and "guest" users from accessing the web-interface.  You will use the "boss" account for all your operations.  You do this from the "Users" part of the control panel.
  1. Log into the web interface as the "boss" user from above.
  2. Edit the "admin" and "guest" accounts and check the "Disable this account" box.

SUDO Convenience

If you want to really get fancy, you can install the "sudo" package.  This will let you run individual commands as root (eg. "sudo mkdir /homes/foo").

su -
ipkg install sudo

While still root, edit the file "/opt/etc/sudoers".  Find this line:

# %wheel ALL=(ALL) ALL

Remove the leading "#" character, and change "wheel" to "Administrators".

%Administrators ALL=(ALL) ALL

This gives all users in the "Administrators" group sudo access.
You can use this variant to skip the password prompt on use... be careful though!

%Administrators ALL=(ALL) NOPASSWD: ALL

Saturday, January 12, 2013

rsync Mystery

rsync Mystery

After a long hiatus, I ventured onto my Synology NAS and discovered there were a few DSM updates pending.  The updates themselves add some pretty fantastic functionality, like Airplay support directly from the NAS and your own personal Cloud.  However, after the updates my automated rsync backup (Cygwin -> NAS) was failing with "rsync: command not found" indicating the rsync binary could not be found on the remote server.

Investigation

I could ssh to the NAS and execute "which rsync" to see that it was in "/usr/syno/bin/rsync" which was in my PATH variable.  However, when I ran "ssh root@foo 'which rsync'" I got nothing.  So I tried "ssh root@foo 'echo $PATH'" and I got a different path from when logged in directly!

Solution

To be honest, I've no idea why the paths are different.  My guess is that the ~/.profile file is not sourced for the non-interactive shell rsync uses when it connects.  Anyway, the easy out was to create a symlink from somewhere on the standard path to the actual rsync binary: "ln -s /usr/syno/bin/rsync /opt/bin/rsync" and all worked well once more.
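Another option, instead of a symlink: rsync itself has a --rsync-path flag that tells the client where to find the binary on the remote side.  A hedged sketch of the backup invocation (host and paths are placeholders):

```shell
# Point the client at the remote rsync binary explicitly, bypassing PATH issues.
REMOTE_RSYNC=/usr/syno/bin/rsync
rsync_to_nas() {
  # $1 = local source, $2 = remote destination (user@host:path)
  rsync -av --rsync-path="$REMOTE_RSYNC" "$1" "$2"
}

# Usage (hypothetical host):
#   rsync_to_nas ./backup/ root@nas:/volume1/backup/
```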

Friday, July 6, 2012

Element-Level Caching of Collection Mapping Methods

Note:  This article applies to Spring 3.0 and EhCache 2.5
Update:  Allow non 1:1 mapping
Update:  Uses Spring 3.1's Caching abstractions
Update:  Refactored for clarity

Annotation Based Collection Caching

If you have ever used the ehcache-spring-annotations package or Spring's caching abstraction then you know what an awesome thing method-level caching is.  If you haven't used it, go check it out now!  To summarise (poorly), you annotate a method with "@Cacheable" and the package uses proxying to wrap the method invocation in a cache template.

For example:
@Cacheable("foo")
public SomeObject getObjectFromServer( String parameter )
{
  return someLengthyRestCall( parameter );
}

During execution, you would get a program flow similar to this:
key = generateKey( methodSignature, parameter );
if( cache.contains( key )) return cache.get( key );
value = getObjectFromServer( parameter );
cache.put( key, value );
return value;

It's more complicated than that, of course, but the benefits and ease of this pattern are clear.  In addition, you can leverage all the power and flexibility of ehcache to do your actual caching.  I apply this to all my MVC service implementations for instant web-service caching.
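Stripped of the annotations and proxying, the template above is plain cache-aside logic.  Here is a minimal hand-rolled sketch (class and method names are my own, not part of any library, and it uses Java 8's Function for brevity):

```java
import java.util.Map;
import java.util.function.Function;

// Minimal cache-aside helper: check the map first; compute and store on a miss.
public class CacheAside
{
  public static <K, V> V getOrCompute( Map<K, V> cache, K key, Function<K, V> loader )
  {
    V value = cache.get( key );
    if( value == null ) {
      value = loader.apply( key );   // stand-in for the lengthy REST call
      cache.put( key, value );
    }
    return value;
  }
}
```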

Limitations

One issue I had, though, was when working with collection-to-collection mapping methods.  This is a common pattern (for me, anyway) where a list of type A is converted to a list of type B in an idempotent stateless manner.

List<A> getAfromB( List<B> list )
{
  List<A> result= new ArrayList<A>( list.size() );
  for( B b: list ) result.add( getAfromB( b ));
  return result;
}

Another common pattern is an unordered mapping method:
Collection<A> getAfromB( Collection<B> coll )
{
  Collection<A> result= new HashSet<A>( coll.size() );
  for( B b: coll ) result.add( getAfromB( b ));
  return result;
}

If you annotate such a method as @Cacheable it will only cache complete result mappings, which can still be useful if you need to map {A,B,C} -> {X,Y,Z} on a regular basis.  What would really be neat, though, is if caching were applied to each element individually, with only the unknown values being passed on for resolution.

Enter the Aspect

This is a perfect application of AOP (aspect-oriented programming).  Although I'm no AOP expert, I was able to get my feet wet and enable just such a solution in only about two hours, thanks to the excellent documentation provided with Spring.  This is a pretty bare-bones implementation, but it illustrates the important AOP bits.

Annotation

First we must declare a new custom annotation with which we will mark any method that meets our requirements.  By using explicit annotation-based configuration we give responsibility over the proper use of this aspect to the programmer.  In this configuration, we allow unordered Collection:Collection mapping by specifying which field in the result objects contains the request key.  An ordered List:List mapping is also possible by specifying "IMPLICIT" as the keyField.

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface CollectionCache
{
  public static final String IMPLICIT = "##_implicit_##";
  String cacheName();      // EhCache to use
  String keyPrefix();      // This plus ID is unique key
  String keyField();       // ID field in result object
  Class<?> implClass() default ArrayList.class;
}

Advice Class

Next we create the actual advice class.  We use the @Aspect annotation to make it an aspect and add a setter to allow injection of a Spring CacheManager object.  We also add the key generator and convenience methods for casting.
@Aspect
public class CollectionCacheAspect
{
  // Object used as placeholder when weaving new and cached results
  private static final Object HOLDER = new Object();
  // Object used as part of the key when caching the 'null' object.
  private static final Serializable NULL_KEY = new Long( Long.MIN_VALUE );

  // CacheManager, configured elsewhere
  private CacheManager cacheManager;

  @Required
  public void setCacheManager( CacheManager cacheManager )
  {
    this.cacheManager = cacheManager;
  }

  @SuppressWarnings( "unchecked" )
  public static <T> List<T> cast(List<?> p)
  {
    return (List<T>) p;
  }

  @SuppressWarnings( "unchecked" )
  public static <T> Class<T> cast(Class<?> p)
  {
    return (Class<T>) p;
  }
 
  public Serializable generateKey( String keyPrefix, Object input )
  {
    return 31 * (long)keyPrefix.hashCode() + input.hashCode();
  }

Next we add the @Around advice, which will decide whether to treat the proxied call as an individual, ordered List:List, or unordered Collection:Collection operation.
@Around("@annotation(config) && args(arg) ")
public Object doCollectionCache( ProceedingJoinPoint pjp,
                                 CollectionCache config,
                                 Object arg ) throws Throwable
{
  // Get annotation configuration
  @SuppressWarnings( "unchecked" )
  Class<?> implClass = (Class<Collection<Object>>)config.implClass();
  String cacheName = config.cacheName();
  String keyPrefix = config.keyPrefix();
  String keyField = config.keyField();
  // Get Cache
  Cache cache = cacheManager.getCache( cacheName );
  if( cache == null ) {
    throw new AopInvocationException( "CollectionCache:  Cache '" + cacheName +
                                      "' does not seem to exist?" );
  }

  // Call appropriate implementation based on run-time scenario
  Object result;
  if( CollectionCache.IMPLICIT.equals( keyField )) {
    if( List.class.isInstance( arg ) &&
        List.class.isAssignableFrom( implClass )) {
      // IMPLICIT mode (special handling for List->List)
      Class<List<Object>> listClass = cast( implClass );
      result = cacheOrdered( pjp, cache, keyPrefix, listClass, (List<?>) arg );
    } else {
      // Normal single-item cache where arg is the key
      result = cacheSingle( pjp, cache, keyPrefix, keyField, arg );
    }
  } else if( Collection.class.isInstance( arg )) {
    // UNORDERED mode (uses explicit field from result objects)
    Class<Collection<Object>> collClass = cast( implClass );
    result = cacheUnordered( pjp, cache, keyPrefix, keyField,
                             collClass, (Collection<?>)arg );
  } else {
    // SINGLE mode
    result = cacheSingle( pjp, cache, keyPrefix, keyField, arg );
  }
  return result;
}


Single Element Operation 

Since we want non-Collection requests to share the same cache as the Collection calls, we must provide the ability to operate on a single element.  This also handles the special "null" case.

private Object cacheSingle( ProceedingJoinPoint pjp, Cache cache,
                            String keyPrefix, String keyField, Object input )
  throws Throwable
{
  // Determine key
  Object value;
  Object suffix = ( input == null ) ? NULL_KEY : input;
  Serializable key = generateKey( keyPrefix, suffix );
  // Check cache
  ValueWrapper wrapper = cache.get( key );
  // Return cached, or fetch actual value
  if( wrapper != null ) {
    value = wrapper.get();
  } else {
    value = pjp.proceed( new Object[] { input } );
    // Cache fetched value if not null
    if( value != null ) {
      cache.put( key, value );
    }
  }
  return value;
}


Unordered Operation

Unordered mapping is the simpler of the two multi-value modes of operation: since the cache key is found explicitly in the result values, we need not worry about maintaining the order of the request.

private Collection<?> cacheUnordered( ProceedingJoinPoint pjp, Cache cache,
                                      String keyPrefix, String keyField,
                                      Class<Collection<Object>> implClass,
                                      Collection<?> input ) throws Throwable
{
  // Holder for intermediary results
  Collection<Object> hits = new ArrayList<Object>( input.size() );
  // Holder for our misses, which we'll pass on to the original target
  Collection<Object> misses = implClass.newInstance();
  for( Object in: input ) {
    // Search cache for each element; nulls always miss
    ValueWrapper wrapper = null;
    // Put found value in "hits", else put missed key in "misses"
    if( in != null ) {
      Serializable key = generateKey( keyPrefix, in );
      wrapper = cache.get( key );
      if( wrapper == null ) {
        misses.add( in );
      } else {
        hits.add( wrapper.get() );
      }
    } else {
      misses.add( in );
    }
  }

  // Pass our cache misses to original target
  Collection<Object> results = Collections.<Object>emptyList();
  if( misses.size() > 0 ) {
    // Note: the target may return any Collection (eg. a HashSet), so we
    // must not cast the result to List here
    @SuppressWarnings( "unchecked" )
    Collection<Object> fetched = (Collection<Object>)pjp.proceed( new Object[] { misses } );
    results = fetched;
  }

  // Cache results
  for( Object value: results ) {
    // Pull key from explicit field
    Object suffix = PropertyAccessorFactory.forBeanPropertyAccess( value )
                    .getPropertyValue( keyField );
    if( suffix != null ) {
      Serializable key=generateKey( keyPrefix, suffix );
      cache.put( key, value );
    }
    // Merge new values into result collection
    hits.add( value );
  }
  return hits;
}


Ordered Operation

Ordered mapping is more difficult. We use the previously defined HOLDER object to mark placeholders in the output List where we will put the results of cache misses from the target method.

private List<?> cacheOrdered( ProceedingJoinPoint pjp, Cache cache, 
                              String keyPrefix, Class<List<Object>> implClass,
                              List<?> input ) throws Throwable
{
  // Holder for intermediary results
  List<Object> hits = new ArrayList<Object>( input.size() );
  // Holder for our misses, which we'll pass on to the original target method
  List<Object> misses = implClass.newInstance();
  for( int i=0; i<input.size(); i++ ) {
    // Search cache for each element; nulls always miss
    ValueWrapper wrapper = null;
    Object in = input.get( i );
    if( in != null ) {
      // Check cache for this object
      Serializable key = generateKey( keyPrefix, in );
      wrapper = cache.get( key );
    }
    if( wrapper == null ) {
      // If element is not found, put HOLDER Object and load the 'misses' list
      hits.add( HOLDER );
      misses.add( in );
    } else {
      // If element is found, then add cached value to intermediary results
      hits.add( wrapper.get() );
    }
  }

  // Pass our cache misses to original target
  List<Object> results = Collections.<Object>emptyList();
  if( misses.size() > 0 ) {
    results = cast( (List<?>)pjp.proceed( new Object[] { misses } ));
  }

  if( results.size() != misses.size() ) {
    // If our result size does not match input size, we cannot cache new values
    // as we do not know the associated key.  Just merge the lists and return.
    for( Object h: hits ) {
      if( h != HOLDER ) {
        results.add( h );
      }
    }
    return results;

  } else {
    // We'll reuse this list for our output
    misses.clear();
    // Iterate intermediary results
    Iterator<?> iter = results.iterator();
    for( int i=0; i<hits.size(); i++ ) {
      Object h = hits.get( i );
      if( h == HOLDER ) {
        if( iter.hasNext() ) {
          // Each place-holder will have its actual value in the results list
          // at the same location (ie. the Nth HOLDER's value is results[N])
          Object value = iter.next();
          misses.add( value );
          // Cache new non-null values
          if( input.get( i ) != null ) {
            Serializable key=generateKey( keyPrefix, input.get( i ));
            cache.put( key, value );
          }
        }
      } else {
        // This was a cache hit earlier so just use it
        misses.add( h );
      }
    }
  }
  return misses;
}

Cache Evictions

Eviction is just a simpler application of the concepts above.

The annotation:

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface CollectionEvict
{
  public static final String IMPLICIT = CollectionCache.IMPLICIT;
  String cacheName();
  String keyPrefix() default "";
  String keyField();
  boolean removeAll() default false;
}

Advice to handle the special "no-args-remove-all" scenario:
@Before("@annotation(config)" )
public void doCollectionEvict( CollectionEvict config ) throws Throwable
{
  if( !config.removeAll() ) {
      // No keys and (removeAll == false)?  Nothing to do here.
      return;
  }
  doCollectionEvict( config, null );
}

Advice to choose the mode of operation (ordered, unordered, implicit, etc):
@Before("@annotation(config) && args(arg) ")
public void doCollectionEvict( CollectionEvict config,
                               Object arg ) throws Throwable
{
  // Get annotation configuration
  String cacheName = config.cacheName();
  String keyPrefix = config.keyPrefix();
  String keyField  = config.keyField();
  boolean removeAll = config.removeAll();

  // Get Cache
  Cache cache = cacheManager.getCache( cacheName );
  if( cache == null ) {
    throw new AopInvocationException( "CollectionCacheEvict:  Cache '"+ cacheName +"' does not seem to exist?" );
  }

  if( removeAll ) {
    // Evict all items
    cache.clear();

  } else if( List.class.isInstance( arg )) {
    // Evict as list
    for( Object in: (List<?>)arg ) {
      evict( cache, keyPrefix, keyField, in );
    }

  } else {
    // Evict as object
    if( arg == null ) {
      evict( cache, keyPrefix, keyField, NULL_KEY );
    } else {
      evict( cache, keyPrefix, keyField, arg );
    }
  }
}
And the actual eviction logic:
private void evict( Cache cache, String keyPrefix,
                    String keyField, Object input )
{
  // Key is based upon strategy marked by presence of keyField parameter
  // If parameter is present (ie. is not "" ) then cache by explicit field
  final boolean implicitKey = CollectionCache.IMPLICIT.equals( keyField );
  if( input != null ) {
    Object suffix = implicitKey ? input :
                    PropertyAccessorFactory.forBeanPropertyAccess( input )
                    .getPropertyValue( keyField );
    Serializable key = generateKey( keyPrefix, suffix );
    cache.evict( key );
  }
}


Spring Configuration

Wiring it all together with Spring:

<!-- Enable AOP -->
<aop:aspectj-autoproxy/>
<!-- The EhCacheManager is usually created within Hibernate startup, so we must
indicate we want the shared singleton instance. -->
<bean id="mvcEhCache" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean">
    <property name="shared" value="true"/>
</bean>
<!-- Spring's abstract CacheManager -->
<bean id="cacheManager" class="org.springframework.cache.ehcache.EhCacheCacheManager">
  <property name="cacheManager" ref="mvcEhCache"/>
</bean>
<!-- Our Aspect -->
<bean id="collectionCacheAspect" class="CollectionCacheAspect">
  <property name="cacheManager" ref="cacheManager"/>
</bean>

Putting it All Together

Now we can annotate any appropriate class and get element-level caching:

@CollectionCache( cacheName="listCache", keyPrefix="AtoB", keyField=CollectionCache.IMPLICIT )
List<A> getAfromB( List<B> list )
{
  List<A> result= new ArrayList<A>( list.size() );
  for( B b: list ) result.add( getAfromB( b ));
  return result;
}

@CollectionEvict( cacheName="listCache", keyPrefix="AtoB", keyField=CollectionEvict.IMPLICIT )
void evictB( B item ) {}

Conclusions

This, of course, is just part of a full solution.  It should be easy to add additional annotations and functionality to allow adding individual items to the same cache, and to trigger removal of elements either via a list or individually.  It's a long way from being as feature-filled or rigorous as the original ehcache-spring-annotations package, but it solves a specific problem and is a good introduction to AOP in Spring.