Friday, 21 October 2011

The Spring Batch framework's simplest tutorials (for 2.1.8)

Getting going with the Spring Batch framework is no mean feat. The official documentation is 136 pages long, and whilst incredibly useful, doesn't quite provide that first step on the ladder in the form of functioning tutorials/examples.

The SpringSource website also provides a great number of workable samples and downloads, which are fantastic once you are up and running, but are of limited use until you have that foot on the ladder.

After some searching I did find three brilliant tutorials, which provided a great deal of information. The tutorials are on 0xCAFEBABE's blog here:

Spring Batch Hello World 1
Spring Batch Hello World 2
Spring Batch Hello World 3

The only drawback is that they are a bit "2008" and therefore do not work out of the box with Spring 3 and Spring Batch 2.x.

At this point, I would say: if you are keen to learn about Spring Batch from the starting line, do what I did and simply follow the tutorials. You'll have to fight your way through the stack traces, exceptions and compiler errors, but this will really educate you, and force you to examine your XML and the API docs, rather than just copying and pasting the examples into Eclipse and being done with it.

However, to give this post some beef, I will outline the updates I made to the tutorials for deployment on Spring 3 and Spring Batch 2.1.8. This might not be the best, or only way to modify these tutorials, and I would welcome any alternatives or feedback in the comments section.

Spring Batch Hello World 1 (update)

Once you have been through the tutorial, you'll first be faced with some compiler errors in your PrintTasklet class. This is because the Spring Batch 2.x Tasklet interface has changed: it now takes a StepContribution object and a ChunkContext as parameters to its execute method, and its return type is RepeatStatus. This is made clear in the updated SpringSource API docs. Ultimately, your class will now look like this:

import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;

public class PrintTasklet implements Tasklet {

    private String message;

    public void setMessage(String message) {
        this.message = message;
    }

    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
        System.out.print(message);
        return RepeatStatus.FINISHED;
    }
}
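To get a feel for how the step drives that contract, here is a toy sketch in plain Java. These are simplified stand-ins, not the real Spring Batch types: the point is just that the step keeps calling execute until the tasklet reports FINISHED.

```java
// Simplified stand-ins for the Spring Batch types, for illustration only.
enum RepeatStatus { CONTINUABLE, FINISHED }

interface Tasklet {
    RepeatStatus execute() throws Exception;
}

public class TaskletStepSketch {
    // The step keeps invoking the tasklet until it signals FINISHED.
    static int runStep(Tasklet tasklet) throws Exception {
        int invocations = 1;
        while (tasklet.execute() == RepeatStatus.CONTINUABLE) {
            invocations++;
        }
        return invocations;
    }

    public static void main(String[] args) throws Exception {
        // A tasklet that does two chunks of "work" before finishing.
        Tasklet twoShot = new Tasklet() {
            private int calls = 0;
            public RepeatStatus execute() {
                calls++;
                return calls < 3 ? RepeatStatus.CONTINUABLE : RepeatStatus.FINISHED;
            }
        };
        System.out.println(runStep(twoShot)); // prints 3
    }
}
```

A tasklet that simply returns RepeatStatus.FINISHED, like PrintTasklet above, is therefore executed exactly once.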

There are also a couple of configuration changes you will need to make. In your applicationContext.xml, the jobRepository bean will need a fourth constructor-arg (again, as you can see in the API), bringing its definition to:

<bean id="jobRepository" class="org.springframework.batch.core.repository.support.SimpleJobRepository">
    <constructor-arg>
        <bean class="org.springframework.batch.core.repository.dao.MapJobInstanceDao"/>
    </constructor-arg>
    <constructor-arg>
        <bean class="org.springframework.batch.core.repository.dao.MapJobExecutionDao"/>
    </constructor-arg>
    <constructor-arg>
        <bean class="org.springframework.batch.core.repository.dao.MapStepExecutionDao"/>
    </constructor-arg>
    <constructor-arg>
        <bean class="org.springframework.batch.core.repository.dao.MapExecutionContextDao"/>
    </constructor-arg>
</bean>

You will also need to add a definition for a transactionManager bean to your applicationContext.xml:

<bean id="transactionManager" class="org.springframework.batch.support.transaction.ResourcelessTransactionManager"/>

In the simpleJob.xml, you must pass this transactionManager into the taskletStep bean, making its definition:

<bean id="taskletStep" abstract="true" class="org.springframework.batch.core.step.tasklet.TaskletStep">
    <property name="jobRepository" ref="jobRepository"/>
    <property name="transactionManager" ref="transactionManager"/>
</bean>

And that should get you working. Use the batch file from the tutorial to launch your job from the command line.

Spring Batch Hello World 2 (update)

Getting the second tutorial up and running in your 2.1.8 environment is no more difficult. In fact, it's a similar set of changes. The ParameterPrintTasklet class needs the same update, to become:

public class ParameterPrintTasklet extends StepExecutionListenerSupport
        implements Tasklet {

    private String message;

    public void beforeStep(StepExecution stepExecution) {
        JobParameters jobParameters = stepExecution.getJobParameters();
        message = jobParameters.getString("message");
    }

    public RepeatStatus execute(StepContribution stepContribution,
            ChunkContext chunkContext) throws Exception {
        System.out.println(message);
        return RepeatStatus.FINISHED;
    }
}

and so long as you have kept your applicationContext.xml up to date from the above tutorial, all you need to change in the simpleJob.xml is to pass the transactionManager bean into your taskletStep (note this bean is now defined within the parameterJob bean):

<property name="steps">
    <list>
        <bean class="org.springframework.batch.core.step.tasklet.TaskletStep">
            <property name="tasklet" ref="print"/>
            <property name="jobRepository" ref="jobRepository"/>
            <property name="transactionManager" ref="transactionManager"/>
        </bean>
    </list>
</property>

Spring Batch Hello World 3 (update)

Despite being the most (functionally) complicated example, there is not a massive amount to change in tutorial 3 to make it work in the modern world. Basically, the definitions of the beans need updating to reflect the new constructors and properties of the v2.1.8 Spring Batch API.

To keep it really short and sweet, here is the simpleJob.xml once I had finished with the update:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd">

    <import resource="applicationContext.xml"/>

    <!-- Set up our reader and its properties -->
    <bean id="itemReader" class="org.springframework.batch.item.file.FlatFileItemReader">
        <property name="resource" value="file:hello.txt" />
        <property name="recordSeparatorPolicy" ref="recordSeparatorPolicy" />
        <property name="lineMapper" ref="lineMapper" />
    </bean>

    <bean id="recordSeparatorPolicy" class="org.springframework.batch.item.file.separator.SimpleRecordSeparatorPolicy"/>

    <bean id="lineMapper" class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
        <property name="lineTokenizer" ref="lineTokenizer" />
        <property name="fieldSetMapper" ref="fieldSetMapper" />
    </bean>

    <bean id="fieldSetMapper" class="org.springframework.batch.item.file.mapping.PassThroughFieldSetMapper" />

    <bean id="lineTokenizer" class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
        <constructor-arg value="," />
    </bean>

    <!-- Set up our writer and its properties -->
    <bean id="itemWriter" class="org.springframework.batch.item.file.FlatFileItemWriter">
        <property name="resource" value="file:hello2.txt" />
        <property name="lineAggregator" ref="lineAggregator"/>
    </bean>

    <bean id="lineAggregator" class="org.springframework.batch.item.file.transform.DelimitedLineAggregator">
        <property name="delimiter" value=" "/>
    </bean>

    <!-- Set up our transformation step with these beans in -->
    <bean id="step" class="org.springframework.batch.core.step.item.SimpleStepFactoryBean">
        <property name="transactionManager" ref="transactionManager" />
        <property name="jobRepository" ref="jobRepository" />
        <property name="itemReader" ref="itemReader" />
        <property name="itemWriter" ref="itemWriter" />
    </bean>

    <!-- Set up our job to run said step -->
    <bean id="readwriteJob" class="org.springframework.batch.core.job.SimpleJob">
        <property name="name" value="readwriteJob" />
        <property name="steps">
            <list>
                <ref local="step"/>
            </list>
        </property>
        <property name="jobRepository" ref="jobRepository"/>
    </bean>

</beans>

The big differences are in the definitions of FlatFileItemReader and FlatFileItemWriter.

The DelimitedLineTokenizer from 1.1.4 is no longer a direct property of the FlatFileItemReader, and instead needs to be passed into a lineMapper bean, which in turn is then injected into the FlatFileItemReader.
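That composition (tokenizer feeding a field-set mapper through a line mapper) can be sketched in plain Java. This is a simplified model of the shape of the pipeline, not the real Spring Batch classes:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

// A toy model of the lineMapper composition: a tokenizer splits the raw
// line, and a field-set mapper turns the tokens into a domain object.
// DefaultLineMapper in Spring Batch chains the two in just this way.
public class LineMapperSketch {
    static List<String> delimitedTokenize(String line, String delimiter) {
        return Arrays.asList(line.split(delimiter));
    }

    // Compose tokenizer and mapper: tokenize first, then map the tokens.
    static <T> T mapLine(String line, String delimiter, Function<List<String>, T> fieldSetMapper) {
        return fieldSetMapper.apply(delimitedTokenize(line, delimiter));
    }

    public static void main(String[] args) {
        // A pass-through style mapper, in the spirit of PassThroughFieldSetMapper.
        String result = mapLine("Hello,World", ",", tokens -> String.join(" ", tokens));
        System.out.println(result); // prints "Hello World"
    }
}
```

The XML above wires up exactly this chain: lineTokenizer and fieldSetMapper are injected into lineMapper, which is injected into the itemReader.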

The FlatFileItemWriter no longer needs a fieldSetCreator bean, but leave the fieldSetMapper bean definition in place, as the itemReader still needs it.

Don't forget to give your SimpleStepFactoryBean the transactionManager, and you should be good to go. The paths of the resource files are relative to the root of the Java project.

Friday, 23 September 2011

Fixing the TabBar to the bottom of the mobile screen

The dojox.mobile API provides some great features, functions and themes to quickly code up web services for mobile platforms, and have them appear in a familiar, native style to the device, with intuitive and easy-to-use menus, navigation, and form features.

There is, however, one quirky issue (with dojo 1.6 at least) which took me a few minutes to figure out, and so I thought I'd write up a quick post to help anyone else who is having trouble getting the TabBar to "stick" to the bottom of the screen, whilst the rest of the content happily scrolls.

Using the API, you will most likely have built up your panes with a collection of View and ScrollableView elements, and these have a different effect on where the TabBar appears.

If you have used ScrollableView, the TabBar floats over the top of an otherwise scrolling main content. You need to add a style of "margin-top: -49px;" to the TabBar node to raise it into the viewport, but it will then always be visible.

To navigate, on mobile devices you use a "stroking" finger gesture, and on computers/compatible devices an equivalent click-and-drag movement. This has shortfalls in some environments, such as BlackBerrys, where there is no touch screen or click-and-drag functionality.

If you opted for a plain View, then the TabBar will appear at the bottom of the page, and the user will have to scroll down to the bottom to access it. A scrollbar is displayed on the right-hand side to facilitate this movement.

This is the most universal and reliable format for cross-platform services, because you can still invoke a scroll motion using a touch-screen swipe, but pointing and scrolling devices can still also be used for easy navigation.

You can mix and match your view types, even within the same navigational flow, and each pane will act accordingly as above. Of course, if you do switch between them, it is going to give an inconsistent user experience.

Monday, 12 September 2011

Making dojo datagrid columns non-resizable and unsortable

When dealing with grids of data in dojo I often find myself wanting to manage the control of the individual columns quite specifically, disabling the user resize ability, or click-to-sort on headers.

Here is how both of these things can be achieved on a dojox.grid.EnhancedGrid in a _Templated widget.

Disabling resize is quite easy to manage, when you know how. The column headers of a datagrid can be passed a noresize flag, which when set to true, inhibits this ability.

Simply define your column header as so:

<th field="id" width="200" noresize="true"> ID </th>

Setting the width is optional, but any column without it will default to be 6em wide.

Preventing sorting is a little trickier. To do this I added the following code to the postCreate function of my widget to disable sorting on the fifth and sixth columns of my table (indexing starts at 1):

postCreate: function() {
    this.myGrid.canSort = function(col) {
        if (col === 5 || col === 6) {
            return false;
        } else {
            return true;
        }
    };
}

Thursday, 1 September 2011

Creating a custom filter in Spring

Adding a filter in Spring is something that seems like it should be easy, but can be tricky if you get your configurations in a twist.

So here is how I added a really simple filter to my Spring application.

In my web.xml I added the following filter definition, alongside the other standard security and encoding ones (the filter-name is yours to choose):

<filter>
    <filter-name>expiredPasswordFilter</filter-name>
    <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
    <init-param>
        <param-name>targetBeanName</param-name>
        <param-value>expiredPasswordFilterBean</param-value>
    </init-param>
</filter>

The org.springframework.web.filter.DelegatingFilterProxy is the relevant filter type that allows you to declare a targetBeanName value... which corresponds to a declaration we add in our applicationContext.xml, as so:

<bean name="expiredPasswordFilterBean" class="com.companyname.webapp.filter.ExpiredPasswordFilter"/>

With that connection set up, we now head back to the web.xml to define the mapping for our filter... separately from the above filter definition. In my case, this was enough:

<filter-mapping>
    <filter-name>expiredPasswordFilter</filter-name>
    <url-pattern>*.html</url-pattern>
</filter-mapping>

The filter-name connects to the previous web.xml definition, and the url-pattern in my case meant the filter was run on every request for an HTML path.

With that all in place, all I needed to do was declare the actual filter control class. This is defined in the applicationContext above as being at com.companyname.webapp.filter.ExpiredPasswordFilter, and the class looks a little like this:



package com.companyname.webapp.filter;

import java.io.IOException;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.web.filter.OncePerRequestFilter;

public class ExpiredPasswordFilter extends OncePerRequestFilter {

    protected void doFilterInternal(HttpServletRequest req, HttpServletResponse res, FilterChain chain)
            throws ServletException, IOException {
        // This is where I did my user flag checking
        chain.doFilter(req, res);
    }
}
The actual logic of the filter can be almost anything you want! But just make sure you call chain.doFilter(req, res); to invoke the next filter in the chain. This can be at any point throughout your code, and control is passed straight back to your filter once the chain has been executed, which can be useful for ensuring other security/authentication filters have been run before you do anything additional yourself.
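To see why that "way back out" behaviour works, here is a toy model of a filter chain in plain Java. It uses no servlet API and is purely illustrative; the filter names are made up for the example:

```java
import java.util.Iterator;
import java.util.List;

// A toy model of the servlet filter chain, showing why code placed after
// chain.doFilter() runs only once the rest of the chain has completed.
public class FilterChainSketch {
    interface Filter { void doFilter(StringBuilder log, Chain chain); }

    static class Chain {
        private final Iterator<Filter> remaining;
        Chain(List<Filter> filters) { this.remaining = filters.iterator(); }
        void doFilter(StringBuilder log) {
            if (remaining.hasNext()) remaining.next().doFilter(log, this);
        }
    }

    public static void main(String[] args) {
        Filter security = (log, chain) -> {
            log.append("security-in ");
            chain.doFilter(log);           // hand off to the next filter
            log.append("security-out");    // resumes after the chain unwinds
        };
        Filter passwordCheck = (log, chain) -> {
            log.append("password-check ");
            chain.doFilter(log);
        };
        StringBuilder log = new StringBuilder();
        new Chain(List.of(security, passwordCheck)).doFilter(log);
        System.out.println(log); // prints "security-in password-check security-out"
    }
}
```

Calling chain.doFilter() early and doing your own work afterwards is exactly how you can be sure the security/authentication filters ahead of you have already run.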

Thursday, 25 August 2011

Hibernate Exception: "a different object with the same identifier value was already associated with the session"

I recently encountered this Hibernate exception whilst writing some routine user management functionality, and trying to save a user object back to the database.

org.hibernate.NonUniqueObjectException: A different object with the same
identifier value was already associated with the session [objectId]

I then found two potential causes of this, and by explaining them here, you should be able to check for and solve them if you're having this trouble.

The first cause is from duplicating a Hibernate object in memory, and trying to save either copy. Such as:

public void updateLastLogin(int userId) {

    User original = userDao.get(userId);

    User updated = new User();
    updated.setId(userId);
    updated.setLastLogin(new Date());
    userDao.save(updated); // throws NonUniqueObjectException
}
The user "original" is loaded into the Hibernate cache by the get() method, but then we create a new user, assign it the same primary key (ID) and try to save it. Hibernate throws a NonUniqueObjectException because "original" is still in the cache and could cause concurrency errors down the line.
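To make the mechanism concrete, here is a toy model of the session's first-level cache. It is purely illustrative, not Hibernate's actual implementation, but it shows why a second instance with the same identifier is rejected:

```java
import java.util.HashMap;
import java.util.Map;

// A toy model of the Hibernate session (first-level) cache, enough to show
// why saving a different instance under an already-cached id blows up.
public class SessionCacheSketch {
    static class NonUniqueObjectException extends RuntimeException {
        NonUniqueObjectException(Object id) {
            super("a different object with the same identifier value was "
                + "already associated with the session [" + id + "]");
        }
    }

    private final Map<Object, Object> cache = new HashMap<>();

    Object get(Object id) {
        // The real thing would hit the database on a cache miss;
        // here we just fabricate an entity and cache it.
        return cache.computeIfAbsent(id, k -> new Object());
    }

    void save(Object id, Object entity) {
        Object cached = cache.get(id);
        if (cached != null && cached != entity) {
            throw new NonUniqueObjectException(id);
        }
        cache.put(id, entity);
    }

    public static void main(String[] args) {
        SessionCacheSketch session = new SessionCacheSketch();
        Object original = session.get(42);  // "original" is now in the cache
        Object duplicate = new Object();    // a second instance, same id
        try {
            session.save(42, duplicate);
        } catch (NonUniqueObjectException e) {
            System.out.println("caught: " + e.getMessage());
        }
        session.save(42, original);         // saving the cached instance is fine
        System.out.println("ok");
    }
}
```

Saving the instance the session already knows about succeeds; saving a different instance under the same identifier is what triggers the exception.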

The second cause of this error I have discovered comes from the Appfuse framework. Calling the genericDao.exists() method to check the presence of a key in the database actually loads it into the Hibernate cache as well. This means if you have another instance of the object due to be saved, you will invalidate it through this NonUniqueObjectException.

An example of this case could be as follows:

public void newEmail(User user) {
    boolean toSave = false;
    if (!userDao.exists(user.getId())) {
        toSave = true;
    } else {
        String currentEmail = userDao.getUserEmail(user.getId());
        if (!currentEmail.equals(user.getEmail())) {
            toSave = true;
        }
    }
    if (toSave) {
        userDao.save(user); // may throw NonUniqueObjectException
    }
}

So if you are getting NonUniqueObjectExceptions, look through your code for either of these cases, and consider carefully whether you are duplicating Hibernate-controlled objects, and whether you really need to.

Thursday, 21 July 2011


I am a Java developer, specialising in web services, from Bristol, in the UK. I work as an application modernisation specialist for Desynit. This blog is for me to share and develop ideas and solutions discovered along the way.

My current projects are focused on the AppFuse 2 Java platform, with Spring and Hibernate, and the Dojo JavaScript toolkit, including work with the groundbreaking dojox.mobile API for developing web services for mobile phones.

If any of this sounds like it might be of interest to you, bookmark or follow me now; and check back for future posts!