
Running Liquibase Changesets — Part 4: Spring

The integration with Spring is not a big deal: you only have to configure your data source and the Liquibase bean, and that's it. OK, this sounds simple, but there are some real pitfalls if you've never done it before.

Spring integration

Note: If you are new to Spring, then I recommend reading "Spring in Action" by Craig Walls, or doing some of the tutorials and guides hosted and provided by Spring. (Am I the only one who writes standalone Java applications with Spring these days?)

First of all let’s add Spring to the pom.xml.

This step is very easy: search for "spring-core maven" and the first result should be the Maven repository, where you can select the release to use (I try to keep my configuration as up to date as possible). Then add the dependency tag to your pom.xml. For example:
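A minimal sketch of such a dependency entry (the version number below is only an example from around the time of writing; pick the latest release from the Maven repository). The spring-context module is listed as well, because the ClassPathXmlApplicationContext used later in this article lives there:

```xml
<!-- Spring core; the version is an example, use the latest release -->
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-core</artifactId>
    <version>4.1.6.RELEASE</version>
</dependency>
<!-- spring-context provides ClassPathXmlApplicationContext -->
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>4.1.6.RELEASE</version>
</dependency>
```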


The core of Spring is the base; you need it to get all the other Spring modules you want up and running.

Configuring the Liquibase bean

After Spring is set up, we configure the bean for Liquibase and run the application. As I mentioned at the end of the previous article, I tend to keep the database configuration in a single file for future re-use by both Liquibase and the database connector (Hibernate or just Spring; I'll decide later, but I tend to introduce Hibernate first and then switch to Spring).

But first of all, let me show you how to configure Spring and Liquibase from the application. The only thing you need is to declare the Liquibase bean and add the two required properties: the dataSource and the changeLog file. This is done in a Spring configuration file. The entry point is "applicationContext.xml", because Spring checks whether this file exists and loads it. I'll write more about this in the next section.

<bean id="liquibase" class="liquibase.integration.spring.SpringLiquibase">
    <property name="dataSource" ref="dataSource" />
    <property name="changeLog" value="classpath:dbchange/master.xml" />
</bean>

The changeLog file is self-explanatory; the dataSource is another bean which has to be configured, in my example as follows:

<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource">
    <property name="driverClass" value="org.h2.Driver" />
    <property name="jdbcUrl" value="jdbc:h2:file:../db/testdb" />
    <property name="user" value="sa" />
    <property name="password" value="" />
</bean>

The most interesting part is the class of the data source: I use the c3p0 DataSource implementation because Hibernate and Spring have a little problem with session management that will only be resolved with Hibernate 5, so there is no hope for now. We use c3p0 in production too, and I have to say it's a reliable connection pooling library.

The other parts of the dataSource configuration are the same as before: the driver stays H2, the username and password are the defaults for an H2 database, and the jdbcUrl is the same too. Note: some dataSource classes use different names for these properties. For example, "org.apache.commons.dbcp.BasicDataSource" uses url and username, to mention a few differences.
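For illustration, the same H2 connection configured with DBCP's BasicDataSource would look like this (a sketch only; note the different property names compared to c3p0):

```xml
<!-- Same H2 connection via Apache DBCP:
     "driverClassName", "url" and "username" instead of
     c3p0's "driverClass", "jdbcUrl" and "user" -->
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
    <property name="driverClassName" value="org.h2.Driver" />
    <property name="url" value="jdbc:h2:file:../db/testdb" />
    <property name="username" value="sa" />
    <property name="password" value="" />
</bean>
```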

I admit that this configuration solution is not the best, because the values are stored in the Spring context; it would be better to separate them into a properties file (and eventually re-use the liquibase.properties already available?). But that is a topic for the Spring article; for now, this fits my "Just Barely Good Enough" needs 🙂
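If you do want to externalize the values, a property placeholder is the usual Spring way. A sketch, assuming the context XML namespace is declared in the file header, and with a made-up file name and made-up property keys:

```xml
<!-- Loads key=value pairs from the classpath;
     the file name and the db.* keys are examples, not fixed names -->
<context:property-placeholder location="classpath:liquibase.properties" />

<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource">
    <property name="driverClass" value="${db.driver}" />
    <property name="jdbcUrl" value="${db.url}" />
    <property name="user" value="${db.user}" />
    <property name="password" value="${db.password}" />
</bean>
```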

Adapting the application to Spring

So, how can we call Liquibase from the application at startup? This is also simple. After we have configured the applicationContext.xml file, we can delete all the code in the main method of App.java, add the following 2 (yes, two) lines, and start the application:

ClassPathXmlApplicationContext ctx = new ClassPathXmlApplicationContext("applicationContext.xml");
ctx.close();

That was it. Liquibase runs at application start, without cluttering the Java code with configuration data or calling Liquibase manually. An explanation of the code follows in the next article about Spring.

A little problem

If you have never started the application before, you can skip this section. If you have followed along with the whole Variations series, you have run into the same problem as I did.

If you started your application after Part 4 of the Liquibase series, then you will have a problem: the changesets won't run cleanly, because Liquibase tries to execute the whole changelog again, although the tables already exist. But why?

This is because Liquibase matches the changesets recorded in the database (in the DATABASECHANGELOG table) against the files to execute by

  • ID
  • author
  • file path / file name

The ID and author are the same for the changesets, but the file path is not. Previously it was something like "target/classes/dbchange/r1.0/ChangeLog-initial.xml"; now it is "classpath:dbchange/r1.0/ChangeLog-initial.xml". The two do not match, so Liquibase tries to execute the DDL again and fails because the table "Vehicle" already exists.
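The mismatch is easy to see if you model the identity key Liquibase uses as a small value class (a simplified illustration only, not Liquibase's actual classes; the id and author values are made up):

```java
import java.util.Objects;

// Simplified model of the (id, author, file path) triple Liquibase matches on
public class ChangeSetKey {
    final String id;
    final String author;
    final String filePath;

    ChangeSetKey(String id, String author, String filePath) {
        this.id = id;
        this.author = author;
        this.filePath = filePath;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof ChangeSetKey)) return false;
        ChangeSetKey k = (ChangeSetKey) o;
        // All three parts must match, including the path
        return id.equals(k.id) && author.equals(k.author) && filePath.equals(k.filePath);
    }

    @Override
    public int hashCode() {
        return Objects.hash(id, author, filePath);
    }

    public static void main(String[] args) {
        ChangeSetKey recorded = new ChangeSetKey("1", "me",
                "target/classes/dbchange/r1.0/ChangeLog-initial.xml");
        ChangeSetKey current = new ChangeSetKey("1", "me",
                "classpath:dbchange/r1.0/ChangeLog-initial.xml");
        // Same id and author, different path: Liquibase sees a "new" changeset
        System.out.println(recorded.equals(current)); // prints "false"
    }
}
```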

The workaround? Drop (delete) your H2 database. This is the most effortless solution to the problem, because H2 creates a new database file if it does not find one in the specified folder. For other DBMSs? Update the table manually. If you currently use Java in a production environment and want to switch to Spring... it'll be a challenge, but not an impossible one.
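Updating the table manually boils down to rewriting the stored file paths. A sketch only: the two prefixes below are the ones from this article, and the table name may be upper- or lower-case depending on your DBMS, so adjust both to your own setup:

```sql
-- Rewrite the recorded paths to match the new classpath-based ones.
-- The prefixes are examples from this article; adjust them to your setup.
UPDATE DATABASECHANGELOG
SET FILENAME = REPLACE(FILENAME, 'target/classes/dbchange/', 'classpath:dbchange/');
```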


Spring is a good choice for running Liquibase. The only problem is migrating from any other solution to Spring, because of the changeset paths.

And this is The End of the Liquibase series in the Variations topic. I'll stay with the Liquibase-Spring configuration in the future. This means I'll use Spring as my DI container in the meantime, because Spring is a feature-rich framework. More thoughts on why I've chosen Spring will come in the next post about Spring.


As mentioned previously, I've written a test to ensure that the same DDL is used across contexts in the database. And now, integrated with Travis CI, the build fails if this unit test goes wrong. Altogether: it is good practice not to let the build succeed if unit tests fail. The code for this test can be found here.

And because I changed the test engine from JUnit to TestNG, do not forget to update the Maven dependencies with TestNG and Mockito. Eventually I'll write an article about unit testing with these tools.


