Lazy Immutables

23 September 2015 ~ blog, groovy, vanilla

A co-worker and I were recently discussing the Groovy @Immutable annotation. I thought it would be useful if it let you work with the object as a mutable object until you were ready to make it permanent, at which point you could "seal" it and make it immutable. This would give you a bit more freedom in how the object is configured; sometimes the standard immutable approach can be overly restrictive.

Consider the case of an immutable Person object:

@Immutable
class Person {
    String firstName
    String middleName
    String lastName
    int age
}

With @Immutable you have to create the object all at once:

def person = new Person('Chris','J','Stehno',42)

and then you're stuck with it. You can create a copy of it with one or more different properties using the copyWith method, but you need to specify copyWith=true in the annotation itself; then you can do something like:

Person otherPerson = person.copyWith(firstName:'Bob', age:50)
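For reference, enabling copyWith means the annotation itself must carry the attribute; a minimal sketch with the same properties:

```groovy
import groovy.transform.Immutable

// copyWith=true tells the transform to generate the copyWith(Map) method.
@Immutable(copyWith = true)
class Person {
    String firstName
    String middleName
    String lastName
    int age
}
```

Without the attribute, the generated class has no copyWith method at all.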

I'm not sure who "Bob J Stehno" is, though. With more complicated immutables, this all-at-once requirement can be annoying. This is where the @LazyImmutable annotation comes in (part of my Vanilla library). With a similar Person class:

@LazyImmutable @Canonical
class Person {
    String firstName
    String middleName
    String lastName
    int age
}

using the new annotation, you can create and populate the instance over time:

def person = new Person('Chris')
person.middleName = 'J'
person.lastName = 'Stehno'
person.age = 42

Notice that the @LazyImmutable annotation does not apply any other transforms (as the standard @Immutable does), which is why @Canonical is added explicitly. The result is a standard Groovy object with one added method: asImmutable(), injected via an AST transformation. This method takes the current state of the object and creates an immutable version of it. This does imply that the properties of lazy immutable objects should follow the same rules as those of the standard immutable so that the conversion is deterministic. For our example case:

Person immutablePerson = person.asImmutable()

The created object is the same immutable object as would have been created by the @Immutable annotation, and it is generated as an extension of the class you created so that its type is still valid. The immutable version of the object also has a useful added method: asMutable() creates a copy of the original mutable object.

Person otherMutable = immutablePerson.asMutable()

It's a fairly simple helper annotation, but it just fills one of those little functional gaps that you run into every now and then. Maybe someone else will find it useful.

Baking Your Blog with JBake, Groovy and GitHub

02 September 2015 ~ blog, groovy

As a developer, it has always bugged me to have my blog or web site content stored on a server managed by someone else, outside of my control. Granted, WordPress and the like are very stable and generally have means of pulling out your data if you need it, but I really just like to have my content under my own control. Likewise, I have other projects I want to work on, so building content management software is not really on my radar at this point; that's where JBake comes in.

JBake is a simple JVM-based static site generation tool that makes casual blogging quite simple once you get everything set up. It's a bit of a raw project at this point, so there are a few rough edges to work with, but I will help file them down in the discussion below.

Getting started with JBake, you have a couple options. You can install JBake locally and use it as a command line tool, or you can use the JBake Gradle Plugin. The Gradle plugin is currently lacking the local server feature provided by the command line tools; however, it does provide a more portable development environment along with the universe of other Gradle plugins. We will use the Gradle plugin approach here and I will provide some workarounds for the missing features to bring the functionality back on even ground with the command line tool.

The first thing we need is our base project and for that I am going to use a Lazybones template that I have created (which may be found in my lazybones-templates repository). You can use the Gradle plugin and do all the setup yourself, but it was fairly simple and having a template for it allowed me to add in the missing features we need.

If you are unfamiliar with Lazybones, it's a Groovy-based project template framework along the lines of Yeoman and the old Maven Archetype plugin. Details for adding my template repo to your configuration can be found on the README page for my templates.

Create the empty project with the following:

lazybones create jbake-groovy cookies

where "cookies" is the name of our project and the name of the project directory to be created. You will be asked a few questions related to template generation. You should have something similar to the following:

$ lazybones create jbake-groovy cookies
Creating project from template jbake-groovy (latest) in 'cookies'
Define value for 'JBake Plugin Version' [0.2]:
Define value for 'JBake Version' [2.3.2]:
Define value for 'Gradle version' [2.3]:
GitHub project: [username/projectname.git]: cjstehno/cookies.git

The "username" should reflect the username of your GitHub account; we'll see what this is used for later. If you look at the generated "cookies" directory now, you will see a standard-looking Gradle project structure. The JBake source files reside in the src/jbake directory with the following sub-directories:

* assets - static files (images, CSS, JavaScript) copied as-is into the rendered site
* content - the site content itself, as HTML, AsciiDoc, or Markdown files
* templates - the GSP templates used to render the pages

You will see that, by default, a simple Bootstrap-based blog site is provided with sample blog posts in HTML, AsciiDoc, and Markdown formats. This is the same sample content as provided by the command-line version of the project setup tool. At this point we can build the sample content using:

./gradlew jbake

The Gradle plugin does not provide a means of serving up the "baked" content yet. There is work in progress so hopefully this will be merged in soon. One of the goodies my template provides is a simple Groovy web server script. This allows you to serve up the content with:

groovy serve.groovy

which will start a Jetty instance pointed at the content in build/jbake on the configured port (8080 by default, which can be changed by adding a port number to the command line). Now when you hit http://localhost:8080/ you should see the sample content. You can leave this server running in a separate console while you develop, running the jbake task as needed to rebuild the content.

First, let's update the general site information. Our site's title is not "JBake", so let's change it to "JCookies" by updating it in the src/jbake/templates/header.gsp and src/jbake/templates/menu.gsp files. While we're in there we can also update the site meta information as well:

<title><%if (content.title) {%>${content.title}<% } else { %>JCookies<% }%></title>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="A site about cookies.">
<meta name="author" content="Chris Stehno">
<meta name="keywords" content="cookies,baking">
<meta name="generator" content="JBake">

Then to apply the changes, run ./gradlew jbake and refresh the browser. Now we see our correct site name.

Note that JBake makes no requirements about the templates or content to be used. It provides special support for blog-style sites; however, you can remove all the content and make a standard simple static site if you wish.

Let's add a new blog entry. The blog entries are stored in the src/jbake/content/blog directory by year, so we need to create a new directory for 2015. Content may be written in HTML, AsciiDoc, or Markdown, based on the file extension. I am a fan of Markdown, so we'll use that for our new blog entry file.

JBake uses a custom header block at the top of content files to store meta information. For our entry we will use the following:

title=Chocolate Chip Cookies
date=2015-09-02
type=post
tags=cookies,baking
status=published
~~~~~~

The title and date are self-explanatory. The type can be post or page to denote a blog post or a standard page. The tags are used to provide extra tag information to categorize the content. The status field may be draft or published to denote whether or not the content should be included in the rendered site. Everything below the line of tildes is your standard markdown content.

For the content of our entry we are going to use the Nestle Chocolate Chip Cookie recipe - it gives us a nice overview of the content capabilities, and they are yummy!

The content, in Markdown format, is as follows:

## Ingredients

* 2 1/4 cups all-purpose flour
* 1 teaspoon baking soda
* 1 teaspoon salt
* 1 cup (2 sticks) butter, softened
* 3/4 cup granulated sugar
* 3/4 cup packed brown sugar
* 1 teaspoon vanilla extract
* 2 large eggs
* 2 cups (12-oz. pkg.) NESTLÉ® TOLL HOUSE® Semi-Sweet Chocolate Morsels
* 1 cup chopped nuts

## Instructions

1. Preheat oven to 375° F.
1. Combine flour, baking soda and salt in small bowl. Beat butter, granulated sugar, brown sugar and vanilla extract in large mixer bowl until creamy. Add eggs, one at a time, beating well after each addition. Gradually beat in flour mixture. Stir in morsels and nuts. Drop by rounded tablespoon onto ungreased baking sheets. 
1. BAKE for 9 to 11 minutes or until golden brown. Cool on baking sheets for 2 minutes; remove to wire racks to cool completely. 

May be stored in refrigerator for up to 1 week or in freezer for up to 8 weeks.

Rebuild/refresh and you will see that we have a new blog post. Now, since we borrowed this recipe from another site, we should provide an attribution link back to the original source. The content header fields are dynamic; you can create your own and use them in your pages. Let's add an attribution field and put our link in it.


Then we will want to add it to our rendered page, so we need to open up the blog entry template, the src/jbake/templates/post.gsp file and add the following line after the page header:

<p>Borrowed from: <a href="${content.attribution}">${content.attribution}</a></p>

Notice that the templates are just GSP files, which may have Groovy code embedded in them to perform rendering logic. The header data is accessible via the content object in the page.
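For example, a hypothetical fragment (not part of the default templates) could render an entry's tags the same way:

```
<% if (content.tags) { %>
    <p>Tags: ${content.tags.join(', ')}</p>
<% } %>
```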

This post is kind of boring at this point. Yes, it's a recipe for chocolate chip cookies, and that's hard to beat, but the page full of text is not selling it to me. Let's add a photo to really make your mouth water. Grab an image of your favorite chocolate chip cookies and save it in src/jbake/assets/images as cookies.jpg. Static content like images live in the assets folder. The contents of the assets folder will be copied into the root of the rendered site directory.

Now, we need to add the photo to the page. Markdown allows simple HTML tags to be used so we can add:

<img src="/images/cookies.jpg" style="width:300px;float:right;"/>

to the top of our blog post content, which will add the image at the top of the page, floated to the right of the main content text. Now that looks tasty!

You can also create standard pages in a similar manner to blog posts; however, they are based on the page.gsp template. This allows for different contextual formatting for each content type.

You can customize any of the templates to get the desired content and functionality for your static site, but what about the overall visual theme? As I mentioned earlier, the default templates use the Twitter Bootstrap library, and there are quite a few resources available for changing the theme to fit your needs, ranging from free to somewhat expensive. We just want a free one for demonstration purposes, so let's download the bootstrap.min.css file for the Bootswatch Cerulean theme. Overwrite the existing theme in the src/jbake/assets/css directory with this new file, then rebuild the site and refresh your browser. Now you can see that we have a nice blue banner along with other style changes.

The end result at this point will look something like this:

All-in-all not too bad for a few minutes of coding work!

Another nice feature of JBake is delayed publishing. The status field in the content header has three accepted values:

* draft - the content is not included in the rendered site
* published - the content is rendered and available right away
* published-date - the content is rendered only once the date in its header has passed

We used the published option since we wanted our content to be available right away. You could easily create a bunch of blog entries ahead of time, specifying the date values for when they should be published but setting the status values to published-date so that they are released only after the appropriate date. The downside of this is that, since JBake is a static generator, you would have to be sure to build the site often enough to pick up the newly available content - maybe with a nightly scheduled build and deployment job.
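A scheduled entry's header would look something like this (the values are illustrative):

```
title=Oatmeal Raisin Cookies
date=2015-12-01
type=post
tags=cookies,baking
status=published-date
~~~~~~
```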

When you are ready to release your site out into the greater internet wilderness, you will need a way to publish it; this is another place where my lazybones template comes in handy. If you are hosting your site as github-pages, the template comes with a publishing task built-in, based on the gradle-git plugin. This is where the GitHub username and repository information from the initial project creation comes into play. For this to work, you need a repository named "cookies" associated with your GitHub account. You will also want to double check that the repo clone URL is correct in the publish.gradle file. Then, to publish your site you simply run:

./gradlew publish

and then go check your project site for the updated content (sometimes it takes a minute or two, though it's usually instantaneous).

At this point we have an easily managed static web site; what's left to be done? Well, you could associate it with your own custom domain name rather than the one GitHub provides. I will not go into that here, since I really don't want to purchase a domain name just for this demo; however, I do have a blog post (Custom GitHub Hosting) that goes into how it's done (at least on GoDaddy).

JBake and GitHub with a dash of Groovy provide a nice environment for quick custom blogs and web sites, with little fuss. Everything I have shown here is what I use to create and manage this blog, so, I'd say it works pretty well.

Portions of this discussion are based on a blog post by Cédric Champeau, "Authoring your blog on GitHub with JBake and Gradle", who is also a contributor to JBake (among other things).

Vanilla Test Fixtures

15 May 2015 ~ blog, groovy, testing, vanilla

Unit testing with data fixtures is a good practice to get into, and having a simple means of creating and managing reusable fixture data makes it much more likely to happen. I have added a FixtureBuilder and Fixture class to my Vanilla-Testing library.

Unit testing with domain objects, entities, and DTOs can become tedious, and you can end up with a lot of duplication around creating the test fixtures for each test. Say you have an object, Person, defined as:

class Person {
    Name name
    LocalDate birthDate
    int score
}

You are writing your unit tests for services and controllers that may need to create and compare various instances of Person and you end up with some constants somewhere or duplication of code with custom instances all over the test code.

Using com.stehno.vanilla.test.FixtureBuilder you can create reusable fixtures with a simple DSL. I tend to create a main class to contain my fixtures and to also provide the set of supported fixture keys, something like:

class PersonFixtures {

    static final String BOB = 'Bob'
    static final String LARRY = 'Larry'

    static final Fixture FIXTURES = define {
        fix BOB, [ name:new Name('Bob','Q','Public'), birthDate:LocalDate.of(1952,5,14), score:120 ]
        fix LARRY, [ name:new Name('Larry','G','Larson'), birthDate:LocalDate.of(1970,2,8), score:100 ]
    }
}
Notice that the define method is where you create the data contained by the fixtures, each mapped with an object key. The key can be any object which may be used as a Map key (i.e. one with proper equals and hashCode implementations).

The reasoning behind using Maps is that Groovy allows them to be used as constructor arguments for creating objects; therefore, the maps give you a reusable and detached dataset for use in creating your test fixture instances. Two object instances created from the same fixture data will be equivalent at the level of the properties defined by the fixture; however, each can be manipulated without affecting the other.
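The mechanics here are plain Groovy, independent of the fixture library; a quick illustration (the Point class is purely hypothetical):

```groovy
// A simple bean standing in for a domain object.
class Point {
    int x, y
}

// A detached, immutable map of "fixture" data.
Map fixtureData = [x: 1, y: 2].asImmutable()

// Groovy expands the map into named constructor/property arguments.
def a = new Point(fixtureData)
def b = new Point(fixtureData)
assert a.x == b.x && a.y == b.y

// The instances are independent; changing one leaves the other untouched.
a.x = 99
assert b.x == 1
```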

Once your fixtures are defined, you can use them in various ways. You can request the immutable data map for a fixture:

Map data =

You can create an instance of the target object using the data mapped to a specified fixture:

Person person = PersonFixtures.FIXTURES.object(Person, PersonFixtures.LARRY)

Or, you can request the data or an instance for a fixture while applying additional (or overridden) properties to the fixture data:

Map data =, score:53)
Person person = PersonFixtures.FIXTURES.object(Person, PersonFixtures.LARRY, score:200)

You can easily retrieve field property values for each fixture for use in your tests:

assert 100 == PersonFixtures.FIXTURES.field('score', PersonFixtures.LARRY)

This allows field-by-field comparisons for testing and the ability to use the field values as parameters as needed.

Lastly, you can verify that an object instance contains the expected data that is associated with a fixture:

assert PersonFixtures.FIXTURES.verify(person, PersonFixtures.LARRY)

which will compare the given object to the specified fixture and return true if all of the properties defined in the fixture match the same properties of the given object. There is also a second version of the method which allows property customizations before comparison.

One step further... you can combine fixtures with property randomization to make fixture creation even simpler for those cases where you don't care about what the properties are, just that you can get at them reliably.

static final Fixture FIXTURES = define {
    fix FIX_A, [ name:randomize(Name).one(), birthDate:LocalDate.of(1952,5,14), score:120 ]
    fix FIX_B, randomize(Person){
        typeRandomizers(
            (Name): randomize(Name),
            (LocalDate): { LocalDate.now() }
        )
    }
}
The fixture definitions accept PropertyRandomizer instances and will use them to generate the random content once, when the fixture is created; the data will then remain unchanged throughout the testing.

One thing to note about the fixtures is that the fixture container and the maps that are passed in as individual fixture data are all made immutable via the asImmutable() method; however, if the data inside the fixture is itself mutable, it still has the potential for being changed. Be aware of this and take proper precautions when you create and interact with such data types.
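The caveat is easy to demonstrate with plain Groovy, outside of the fixture classes:

```groovy
// asImmutable() protects the map wrapper itself...
def inner = new StringBuilder('abc')
def data = [text: inner].asImmutable()

try {
    data.put('other', 'x')      // mutation of the map is rejected
    assert false, 'expected the immutable map to reject mutation'
} catch (UnsupportedOperationException expected) {
    // this is the guarantee that asImmutable() provides
}

// ...but a mutable value held inside the map can still change.
inner.append('d')
assert data.text.toString() == 'abcd'
```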

Reusable test fixtures can really help to clean up your test code base, and they are a good habit to get into.

Property Randomization for Testing

06 May 2015 ~ blog, groovy, vanilla

Unit tests are great, but sometimes you end up creating a lot of test objects requiring data, such as DTOs and domain objects. Generally, I have always come up with movie quotes or other interesting content for test data. Recently, while working on a Groovy project, I thought it would be interesting to have a way to randomly generate and populate the data for these objects. The randomization would provide a simpler approach to test data as well as providing the potential for stumbling on test data that would break your code in interesting ways.

My Vanilla project now has a PropertyRandomizer class, which provides this property randomization functionality in two ways. You can use it as a builder or as a DSL.

Say you have a Person domain class, defined as:

class Person {
    String name
    Date birthDate
}

You could generate a random instance of it using:

def rando = randomize(Person).typeRandomizers( (Date):{ new Date() } )
def instance =

Note that there is no default randomizer for Date, so we had to provide one. The other field, name in this case, would be randomized by the default randomizer.

The DSL usage style for the use case above would be:

def rando = randomize(Person){
    typeRandomizers (Date):{ new Date() }
}
def instance =

Not really much difference, but sometimes a DSL style construct is cleaner to work with.

What if you need three random instances for the same class, all different? You just ask for them:

def instances = rando.times(3)

// or 

instances = rando * 3

The multiplication operator is overridden to provide a nice shortcut for requesting multiple random instances.
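That shortcut is standard Groovy operator overloading: the * operator delegates to a multiply method on the left-hand object. A minimal sketch of the idea (the Repeater class is illustrative, not the library's actual implementation):

```groovy
class Repeater {
    // Invoked when the * operator is applied to a Repeater instance.
    List multiply(int count) {
        (1..count).collect { "instance-$it" as String }
    }
}

def repeater = new Repeater()

// Both forms produce the same list of three generated items.
assert repeater * 3 == ['instance-1', 'instance-2', 'instance-3']
assert repeater.multiply(3) == repeater * 3
```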

You can customize the randomizers at either the type or property level or you can configure certain properties to be ignored by the randomization. This allows for nested randomized objects. Say your Person has a new pet property.

class Person {
    String name
    Date birthDate
    Pet pet
}

class Pet {
    String name
}

You can easily provide randomized pets for your randomized people:

def rando = randomize(Person){
    typeRandomizers(
        (Date):{ new Date() },
        (Pet): { randomize(Pet).one() }
    )
}
def instance =

I have started using this in some of my testing, and it comes in pretty handy. My Vanilla library is not yet available via any public repositories; however, it will be soon, and if there is expressed interest, I can speed this up.

Secure REST in Spring

04 May 2015 ~ blog, groovy

Getting HTTPS to play nice with REST and non-browser web clients in development (with a self-signed certificate) can be a frustrating effort. I struggled for a while down the path of using the Spring RestTemplate, thinking that since I was using Spring MVC as my REST provider, it would make things easier; in this case, Spring did not come to the rescue, but Groovy did, or rather the Groovy HTTPBuilder did.

To keep this discussion simple, we need a simple REST project using HTTPS. I found the Spring REST Service Guide project useful for this (with a few modifications to follow).

Go ahead and clone the project:

git clone

Since this is a tutorial project, it has a few versions of the code in it. We are going to work with the "complete" version, which is a Gradle project. Let's go ahead and do a build and run just to ensure everything works out of the box:

cd gs-rest-service/complete
./gradlew bootRun

After a bunch of downloading and startup logging you should see that the application has started. You can give it a test by opening http://localhost:8080/greeting?name=Chris in your browser, which should respond with:

{
    "id": 2,
    "content": "Hello, Chris!"
}

Now that we have that running, we want a RESTful client to call it rather than hitting it with the browser. Let's get it working with the simple HTTP case first to ensure that we have everything working before we move on to the HTTPS configuration. Create a Groovy script, rest-client.groovy, with the following content:

@Grab(group='org.codehaus.groovy.modules.http-builder', module='http-builder', version='0.7.1')


def http = new HTTPBuilder( 'http://localhost:8080/' )

http.get( path: 'greeting', query:[name:'Chris'] ) { resp, json ->
    println "Status: ${resp.status}"
    println "Content: $json"
}
Since this is not a discussion of HTTPBuilder itself, I will leave most of the details to your own research; however, it's pretty straightforward. We are making the same request we made in the browser, and after another initial batch of dependency downloads (Grapes) it should yield:

Status: 200
Content: [content:Hello, Chris!, id:6]

Ok, our control group is working. Now let's add in the HTTPS. For the Spring Boot project, it's pretty trivial. We need to add an file in src/main/resources with the following content:

server.port = 8443
server.ssl.key-store = /home/cjstehno/.keystore
server.ssl.key-store-password = tomcat
server.ssl.key-password = tomcat

Of course, update the key-store path to your home directory. For the server, we also need to install a certificate for our use.

I am not a security certificate expert, so from here on out I will state that this stuff works in development but I make no claims that this is suitable for production use. Proceed at your own risk!

Following the Tomcat 8 SSL How-To, run keytool -genkey -alias tomcat -keyalg RSA and work through the questions, answering everything with 'localhost' (there seems to be a reason for this).

At this point you should be able to restart the server and hit it via HTTPS (https://localhost:8443/greeting?name=Chris) to retrieve a successful response as before, though you will need to accept the self-signed certificate.

Now try the client. Update the URL to the new HTTPS version:

def http = new HTTPBuilder( 'https://localhost:8443/' )

and give it a run. You should see something like:

Caught: peer not authenticated peer not authenticated

I will start with the simplest method of resolving this problem. HTTPBuilder provides a configuration method that will simply ignore these types of SSL errors:

http.ignoreSSLIssues()

If you add this line before you make a request, it will succeed as normal. This should be used only as a development configuration, but there are times when you just want to get something working for testing. If that's all you want here, you're done. From here on out I will show how to get the SSL configuration working for a more formal use case.

Still with me? Alright, let's have fun with certificates! The HTTPBuilder wiki page for SSL gives us most of what we need. To summarize, we need to export our server certificate and then import it into a keystore that our client can use. To export the server certificate, run:

keytool -exportcert -alias "tomcat" -file mytomcat.crt -keystore ~/.keystore -storepass tomcat

which will export the "tomcat" certificate from the keystore at "~/.keystore" (the one we created earlier) and save it into "mytomcat.crt". Next, we need to import this certificate into the keystore that will be used by our client as follows:

keytool -importcert -alias "tomcat" -file mytomcat.crt -keystore clientstore.jks -storepass clientpass

You will be asked to trust this certificate, which you should answer "yes" to continue.

Now that we have our certificate ready, we can update the client script to use it. The client script becomes:

@Grab(group='org.codehaus.groovy.modules.http-builder', module='http-builder', version='0.7.1')

import org.apache.http.conn.scheme.Scheme
import org.apache.http.conn.ssl.SSLSocketFactory

def http = new HTTPBuilder( 'https://localhost:8443/' )

def keyStore = KeyStore.getInstance( KeyStore.defaultType )

new File( args[0] ).withInputStream {
    keyStore.load( it, args[1].toCharArray() )
}

http.client.connectionManager.schemeRegistry.register(
    new Scheme("https", new SSLSocketFactory(keyStore), 443) )

http.get( path: 'greeting', query:[name:'Chris'] ) { resp, json ->
    println "Status: ${resp.status}"
    println "Content: $json"
}

The main changes from the previous version are the loading and use of the keystore by the connection manager. When you run this version of the script, with:

groovy rest-client.groovy clientstore.jks clientpass

you get:

Status: 200
Content: [content:Hello, Chris!, id:1]

We are now using HTTPS on both the server and client for our REST service. It's not all that bad to set up once you figure out the steps, but in general the information seems to be tough to find.

Tour de Mock 6: Spock

09 April 2015 ~ blog, groovy, testing

My last entry in my "Tour de Mock" series was focused on basic Groovy mocking. In this post, I am going to take a look at the Spock Framework, which is an alternative testing framework with a lot of features, including its own mocking API.

Since it's been a while, let's refer back to the original posting as a refresher of what is being tested. We have a servlet, the EmailListServlet:

public class EmailListServlet extends HttpServlet {

    private EmailListService emailListService;

    public void init() throws ServletException {
        final ServletContext servletContext = getServletContext();
        this.emailListService = (EmailListService)servletContext.getAttribute(EmailListService.KEY);

        if(emailListService == null) throw new ServletException("No ListService available!");
    }

    protected void doGet(final HttpServletRequest req, final HttpServletResponse res) throws ServletException, IOException {
        final String listName = req.getParameter("listName");
        final List<String> list = emailListService.getListByName(listName);

        PrintWriter writer = null;
        try {
            writer = res.getWriter();

            for(final String email : list){
                writer.println(email);
            }
        } finally {
            if(writer != null) writer.close();
        }
    }
}
which uses an EmailListService

public interface EmailListService {

    public static final String KEY = "com.stehno.mockery.service.EmailListService";

    /**
     * Retrieves the list of email addresses with the specified name. If no list
     * exists with that name an IOException is thrown.
     */
    List<String> getListByName(String listName) throws IOException;
}
to retrieve lists of email addresses, because that's what you do, right? It's just an example. :-)

First, we need to add Spock to our build (recently converted to Gradle, but basically the same) by adding the following line to the build.gradle file:

testCompile "org.spockframework:spock-core:1.0-groovy-2.4"

Next, we need a test class. Spock uses the concept of a test "Specification" so we create a simple test class as:

class EmailListServlet_SpockSpec extends Specification {
    // test stuff here...
}
Not all that different from a JUnit test; conceptually they are very similar.

Just as in the other examples of testing this system, we need to setup our mock objects for the servlet environment and other collaborators:

def setup() {
    def emailListService = Mock(EmailListService) {
        _ * getListByName(null) >> { throw new IOException() }
        _ * getListByName('foolist') >> LIST
    }

    def servletContext = Mock(ServletContext) {
        1 * getAttribute(EmailListService.KEY) >> emailListService
    }

    def servletConfig = Mock(ServletConfig) {
        1 * getServletContext() >> servletContext
    }

    emailListServlet = new EmailListServlet()
    emailListServlet.init servletConfig

    request = Mock(HttpServletRequest)
    response = Mock(HttpServletResponse)
}
Spock provides a setup method that you can override to perform your test setup operations, such as mocking. In this example, we are mocking the service interface and the servlet API interfaces so that they behave in the desired manner.

The mocking provided by Spock took a little getting used to, coming from a primarily Mockito-based background, but once you grasp the overall syntax it's actually pretty expressive. In the code above for the EmailListService, I am mocking the getListByName(String) method such that it will accept any number of calls with a null parameter and throw an exception, as well as any number of calls with a 'foolist' parameter, which will return a reference to the email address list. Similarly, you can specify that you expect exactly N calls to a method, as was done in the other mocks. You can dig a little deeper into the mocking part of the framework in the Interaction-based Testing section of the Spock documentation.

Now that we have our basic mocks ready, we can test something. As in the earlier examples, we want to test the condition when no list name is specified and ensure that we get the expected Exception thrown:

def 'doGet: without list'() {
    setup:
    1 * request.getParameter('listName') >> null

    when:
    emailListServlet.doGet request, response

    then:
    thrown(IOException)
}
One thing you should notice right away is that Spock uses label blocks to denote different parts of a test method. Here, the setup block is where we do any additional mocking or setup specific to this test method. The when block is where the actual operations being tested are performed while the then block is where the results are verified and conditions examined.

In our case, we need to mock the request parameter to return null, and then we need to ensure that an IOException is thrown.

Our other test is the case when a valid list name is provided:

def 'doGet: with list'(){
    setup:
    1 * request.getParameter('listName') >> 'foolist'

    def writer = Mock(PrintWriter)

    1 * response.getWriter() >> writer

    when:
    emailListServlet.doGet request, response

    then:
    1 * writer.println(LIST[0])

    then:
    1 * writer.println(LIST[1])

    then:
    1 * writer.println(LIST[2])
}

In the then block here, we verify that the println(String) method of the mocked PrintWriter is called with the correct arguments in the correct order.

Overall, Spock is a pretty clean and expressive framework for testing and mocking. It actually has quite a few other interesting features that beg to be explored.

You can find the source code used in this posting in my TourDeMock project.

Testing AST Transformations

08 March 2015 ~ blog, groovy, testing, vanilla

While working on my Effigy project, I have gone deep into the world of Groovy AST Transformations and found that they are, in my opinion, the most interesting and useful feature of the Groovy language. However, developing them is a bit of a poorly-documented black art, especially around writing unit tests for your transformations. Since the code you are writing runs at compile time, you generally have little visibility into what is going on at that point, and it can be quite frustrating to figure out why something is failing.

After some Googling and experimentation, I have been able to piece together a good method for testing your transformation code, and it's actually not all that hard. Also, you can do your development and testing in a single project, rather than in a main project and a separate testing project (to account for the need to compile the code under test).

The key to making transforms testable is the GroovyClassLoader which gives you the ability to compile Groovy code on the fly:

def clazz = new GroovyClassLoader().parseClass(sourceCode)

During that parseClass method is when all the AST magic happens. This means you can not only easily test your code, but also debug into your transformations to get a better feel for what is going wrong when things break - and they often do.
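As a minimal self-contained sketch (the class and method here are arbitrary), you can compile and exercise a class in one step:

```groovy
// compile Groovy source at runtime and use the resulting class immediately
def clazz = new GroovyClassLoader().parseClass('''
    class Greeter {
        String greet(String name){ "Hello, $name" }
    }
''')

assert clazz.newInstance().greet('AST') == 'Hello, AST'
```

If a transformation annotation were applied inside that source string, its AST logic would run (and could be stepped through in a debugger) during the parseClass call.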

For my testing, I have started building a ClassBuilder code helper that is a shell for String-based source code. You provide a code template that acts as your class shell, and then you inject code for your specific test case. You end up with a reasonably clean means of building test code and instantiating it:

private final ClassBuilder code = forCode('''
    package testing

    import com.stehno.ast.annotation.Counted

    class CountingTester {
        $code
    }
''')

@Test void 'single method'(){
    def instance = code.inject('''
        @Counted
        String sayHello(String name){
            "Hello, $name"
        }
    ''').instantiate()

    assert instance.sayHello('AST') == 'Hello, AST'
    assert instance.getSayHelloCount() == 1

    assert instance.sayHello('Counting') == 'Hello, Counting'
    assert instance.getSayHelloCount() == 2
}

The forCode method creates the builder and prepares the code shell. This construct may be reused for each of your tests.

The inject method adds in the actual code you care about, meaning your transformation code being tested.

The instantiate method uses the GroovyClassLoader internally to load the class and then instantiate it for testing.

I am going to add a version of the ClassBuilder to my Vanilla project once it is more stable; however, I have a version of it and a simple AST testing demo project in the ast-testing CoffeaElectronica sub-repo. This sample code builds a simple AST Transformation for counting method invocations and writes normal unit tests for it (the code above is taken from one of the tests).

Note: I have recently discovered the class; I have not yet tried it out, but it seems to provide a similar base functionality set to what I have described here.

Custom Domain for GitHub Pages

15 February 2015 ~ blog

I have been working for a while now to get my blog fully cut over to being generated by JBake and hosted on GitHub; it's not all that difficult, just a format conversion and some domain fiddling, but I was procrastinating.

Pointing your GitHub Pages at a custom domain is not all that hard to do, and they provide decent documentation about how to do it; however, some streamlining is nice for DNS novices like myself. I may have done things a bit out of order, but it worked in the end...

First, I created A records for the GitHub-provided IP addresses. I use GoDaddy for my domain names, so your experience may be a bit different; but in the GoDaddy DNS Zone File editor, you end up adding something like:

A Record

Next, I added a CName record alias for www pointing to my GitHub account hostname, which ended up looking like this:

CName Record

Lastly, you need to make changes in your repository - this step seems to be missed by a lot of people. The gist of it is that you add a new file to your gh-pages branch, named CNAME (all caps, no extension). And in that file you add your domain name (without http://www.). Save the file and be sure you push it to your remote repository.
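From the command line, that step might look like this (example.com stands in for your actual domain):

```shell
# on the gh-pages branch: the CNAME file contains only the bare domain name
echo "example.com" > CNAME
cat CNAME   # -> example.com
```

Commit the CNAME file and push it to the gh-pages branch as you would any other change.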

At this point it worked for me, but the documentation said it could take up to 48 hours to propagate the changes.

Gradle and CodeNarc

07 November 2014 ~ blog, java, testing, gradle, groovy

The subject of "code quality tools" has led to many developer holy wars over the years, so I'm not going to touch on their value or level of importance here. Suffice it to say that they are tools in your toolbox for helping to maintain a base level of "tedious quality": style rules and general coding conventions enforced by your organization. They should never take the ultimate decision-making away from the developers.

That being said, let's talk about CodeNarc. CodeNarc is a rule-based code quality analysis tool for Groovy-based projects. Groovy does not always play nice with other code analysis tools, so it's nice that there is one specially designed for it and Gradle provides access to it out of the box.

Using the Gradle CodeNarc plugin is easy: apply the plugin to your build

apply plugin: 'codenarc'

and then do a bit of rule configuration based on the needs of your code base.

codenarcMain {
    ignoreFailures false
    configFile file('config/codenarc/codenarc-main.rules')

    maxPriority1Violations 0
    maxPriority2Violations 10
    maxPriority3Violations 20
}

codenarcTest {
    ignoreFailures true
    configFile file('config/codenarc/codenarc-test.rules')

    maxPriority1Violations 0
    maxPriority2Violations 10
    maxPriority3Violations 20
}

The plugin allows you to have different configurations for your main code and your test code, and I recommend using that functionality since generally you may care about slightly different things in your production code versus your test code. Also, there are JUnit-specific rules that you can ignore in your production code scan.

Notice that in my example, I have ignored failures in the test code. This is handy when you are doing a lot of active development and don't really want to fail your build every time your test code quality drops slightly. You can also set the thresholds for allowed violations of the three priority levels - when the counts exceed one of the given thresholds, the build will fail, unless it's ignored. You will always get a report for both main and test code in your build reports directory, even if there are no violations. The threshold numbers are something you will need to determine based on your code base, your team and your needs.

The .rules files are really Groovy DSL files, but the extension is unimportant, so I like to keep them out of the Groovy namespace. The CodeNarc web site has a sample "kitchen sink" rule set to get you started - though it has a few rules that cause errors; you can comment those out or remove them from the file. Basically, the file is a list of all the active rules, so removing one disables it. You can also configure some of them - LineLength is one I like to change:

LineLength { length = 150 }

This will keep the rule active, but will allow line lengths of 150 rather than the default 120 characters. You will need to check the JavaDocs for configurable rule properties; for the most part, they seem to be on or off.
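For reference, a trimmed-down rules file might look like the following sketch (the rule names come from the standard CodeNarc rule sets; pick the ones that fit your team):

```groovy
ruleset {
    // configured rule: raise the allowed line length
    LineLength { length = 150 }

    // simple on/off rules - listing them enables them
    DuplicateImport
    UnusedImport
    EmptyCatchBlock
}
```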

Running the analysis is simple: the check task may be run by itself, or it will be run as part of the build task.

gradle check

The reports (main and test) will be available in the build/reports/codenarc directory as two html files. They are not the prettiest reports, but they are functional.

If you are starting to use CodeNarc on an existing project, you may want to take a phased approach to applying and customizing rules so that you are not instantly bogged down with violations. Do a few passes with a trimmed-down rule set, fix what you can fix quickly, and configure or disable the others. Then set your thresholds to a sensible level and make a goal to drop the numbers with each sprint or release so that progress is made.

Hello Again Slick2D

11 October 2014 ~ blog, java, groovy

I am finally getting back around to working on my little game programming project and I realized that somewhere along the
way, my project stopped working. I am using the Slick2D library, which seems to have little
in the way of formal release or distribution so it didn't surprise me. I think I had something hacked together making it
work last time. I decided to try and put some more concrete and repeatable steps around basic setup, at least for how I use it - I'm no
game programmer.

I'm using Groovy as my development language and Gradle for building. In the interest of time and clarity, I am going to use a
dump-and-describe approach here; there are only two files, so it should not be a big deal.

The build.gradle file is as follows:

group = 'com.stehno.demo'
version = '0.1'

buildscript {
    repositories {
        maven {
            url ''
        }
    }
    dependencies {
        classpath 'com.stehno:gradle-natives:0.2'
    }
}

apply plugin:'groovy'
apply plugin:'application'
apply plugin:'com.stehno.natives'

compileJava {
    sourceCompatibility = 1.8
    targetCompatibility = 1.8
}

mainClassName = 'helloslick.HelloSlick'

repositories {
}

dependencies {
    compile 'org.codehaus.groovy:groovy-all:2.3.6'

    compile 'org.slick2d:slick2d-core:1.0.1'
}

test {
    systemProperty 'java.library.path', file('build/natives/windows')
}

run {
    systemProperty 'java.library.path', file('build/natives/windows')
}

natives {
    jars = [] // the lwjgl and jinput native library jars
    platforms = 'windows'
}

task wrapper(type: Wrapper) {
    gradleVersion = '2.1'
}
The first point of note is that I am using my Gradle Natives plugin - not as self-promotion, but because this project is the reason I wrote it. This plugin takes care of extracting all the little native
libraries and putting them in your build so that they are easily accessible by your code. The configuration is found near
the bottom of the file, in the natives block - we want to extract the native libraries from the lwjgl and jinput libraries
for this project, and in my case I only care about the Windows versions (leave off platforms to get all platforms).

There was one interesting development during my time away from this project: a third-party jar version of Slick2D has been pushed to Maven Central, which makes things a lot easier - I think I previously had to build it myself and fiddle with pushing it to my local maven repo. Now it's just another remote library (hopefully it works as expected - I have not played with it yet).

The last point of interest here is the use of the application plugin. This plugin provides an easy way to run your game
while specifying the java.library.path which is the painful part of running applications with native libraries. With the
application plugin and the run configuration in place, you can run the game from Gradle - admittedly not ideal, but this
is just development; I actually have a configuration set for the IzPack installer that I will write about later.

Now, we need some code to run, and the Slick2D wiki provides a simple Hello world sample that I have tweaked a bit for my
use - mostly just cosmetic changes:

package helloslick

import groovy.util.logging.Log
import org.newdawn.slick.*

import java.util.logging.Level

@Log
class HelloSlick extends BasicGame {

    HelloSlick(String gamename){
        super(gamename)
    }

    public void init(GameContainer gc) throws SlickException {}

    public void update(GameContainer gc, int i) throws SlickException {}

    public void render(GameContainer gc, Graphics g) throws SlickException {
        g.drawString 'Hello Slick!', 50, 50
    }

    public static void main(String[] args){
        try {
            AppGameContainer appgc = new AppGameContainer(new HelloSlick('Simple Slick Game'))
            appgc.setDisplayMode(640, 480, false)
            appgc.start()
        } catch (SlickException ex) {
            log.log(Level.SEVERE, null, ex)
        }
    }
}
This just opens a game window and writes "Hello Slick!" in it, but if you have that working, you should be ready for playtime
with Slick2D.

Once you have the project set up (build.gradle in the root, and HelloSlick.groovy in src/main/groovy/helloslick), you
are ready to go. Run the following to launch the project:

gradle unpackNatives run

And if all is well, you will see the game window and message.

Like I said, this is mostly just for getting my development environment up and running as a sanity check, but maybe it is useful to others.

Yes, the explicit unpackNatives call is annoying; it's something I am working on.

Spring Boot Embedded Server API

15 September 2014 ~ blog, spring, groovy, java, gradle

I have been investigating Spring-Boot for both work and personal projects, and while it seems very all-encompassing and useful, I have found that its "opinionated" approach to development was a bit too aggressive for the project conversion I was doing at work. However, I came to the realization that you don't have to use Spring-Boot as your project's core - you can use it and most of its features in your own project, just like any other Java library.

The project I was working on had a customized embedded Jetty solution with a lot of tightly-coupled Jetty-specific configuration code with configuration being pulled from a Spring Application context. I did a little digging around in the Spring-Boot documentation and found that their API provides direct access to the embedded server abstraction used by a Boot project. On top of that, it's actually a very sane and friendly API to use. During my exploration and experimentation I was able to build up a simple demo application, which seemed like good fodder for a blog post - we're not going to solve any problems here, just a little playtime with the Spring-Boot embedded server API.

To start off, we need a project to work with; I called mine "spring-shoe" (not big enough for the whole boot, right?). I used Java 8, Groovy 2.3.2 and Gradle 2.0, but slightly older versions should also work fine - the build file looks like:

apply plugin: 'groovy'

compileJava {
    sourceCompatibility = 1.8
    targetCompatibility = 1.8
}

compileGroovy {
    groovyOptions.optimizationOptions.indy = false
}

repositories {
}

dependencies {
    compile 'org.codehaus.groovy:groovy-all:2.3.2'

    compile 'javax.servlet:javax.servlet-api:3.0.1'
    compile 'org.eclipse.jetty:jetty-webapp:8.1.15.v20140411'

    compile 'org.springframework.boot:spring-boot:1.1.5.RELEASE'
    compile 'org.springframework:spring-web:4.0.6.RELEASE'
    compile 'org.springframework:spring-webmvc:4.0.6.RELEASE'
}
Notice that I am using the spring-boot library, not the Gradle plugin or "starter" dependencies - this also means that you have to bring in other libraries yourself (e.g. the web and webmvc libraries above).

Next, we need an application starter, which just instantiates a specialized Application context, the AnnotationConfigEmbeddedWebApplicationContext:

package shoe

import org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext
import org.springframework.boot.context.embedded.EmbeddedWebApplicationContext

class Shoe {
    static void main( args ){
        EmbeddedWebApplicationContext context = new AnnotationConfigEmbeddedWebApplicationContext('shoe.config')
        println "Started context on ${new Date(context.startupDate)}"
    }
}
Where the package shoe.config is where my configuration class lives - the package will be auto-scanned. When this class' main method is run, it instantiates the context and just prints out the context start date. Internally this context will search for the embedded server configuration beans as well as any servlets and filters to be loaded on the server - but I am jumping ahead; we need a configuration class:

package shoe.config

import org.springframework.boot.context.embedded.EmbeddedServletContainerFactory
import org.springframework.boot.context.embedded.jetty.JettyEmbeddedServletContainerFactory
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.web.servlet.config.annotation.EnableWebMvc

@Configuration
@EnableWebMvc
class ShoeConfig {

    @Bean EmbeddedServletContainerFactory embeddedServletContainerFactory(){
        new JettyEmbeddedServletContainerFactory( 10101 )
    }
}
As you can see, it's just a simple Java-based configuration class. The EmbeddedServletContainerFactory class is the crucial part here. The context loader searches for a configured bean of that type and then loads it to create the embedded servlet container - a Jetty container in this case, running on port 10101.

Now, if you run Shoe.main() you will see some logging similar to what is shown below:

INFO: Jetty started on port: 10101
Started context on Thu Sep 04 18:59:24 CDT 2014

You have a running server, though it's pretty boring since nothing useful is configured. Let's make it say hello using a simple servlet named HelloServlet:

package shoe.servlet

import javax.servlet.ServletException
import javax.servlet.http.HttpServlet
import javax.servlet.http.HttpServletRequest
import javax.servlet.http.HttpServletResponse

class HelloServlet extends HttpServlet {

    protected void doGet( HttpServletRequest req, HttpServletResponse resp ) throws ServletException, IOException{
        resp.writer.withPrintWriter { w->
            w.println "Hello, ${req.getParameter('name')}"
        }
    }
}
It's just a simple HttpServlet extension that says "hello" with the input value from the "name" parameter. Nothing really special here. We could have just as easily used an extension of Spring's HttpServletBean instead. Moving back to the ShoeConfig class, the modifications are minimal: you just create the servlet and register it as a bean.

@Bean HttpServlet helloServlet(){
    new HelloServlet()
}

Now fire the server up again, and browse to http://localhost:10101/helloServlet?name=Chris and you will get a response of:

Hello, Chris

Actually, any path will resolve to that servlet since it's the only one configured. I will come back to configuration of multiple servlets and how to specify the url-mappings in a little bit, but let's take the next step and setup a Filter implementation. Let's create a Filter that counts requests as they come in and then passes the current count along with the continuing request.

package shoe.servlet

import org.springframework.web.filter.GenericFilterBean

import javax.servlet.FilterChain
import javax.servlet.ServletException
import javax.servlet.ServletRequest
import javax.servlet.ServletResponse
import java.util.concurrent.atomic.AtomicInteger

class RequestCountFilter extends GenericFilterBean {

    private final AtomicInteger count = new AtomicInteger(0)

    void doFilter( ServletRequest request, ServletResponse response, FilterChain chain ) throws IOException, ServletException{
        request.setAttribute('request-count', count.incrementAndGet())

        chain.doFilter( request, response )
    }
}
In this case, I am using the Spring helper, GenericFilterBean, simply so I only have one method to implement rather than three; I could have used a plain Filter implementation instead.

In order to make use of this new count information, we can tweak the HelloServlet so that it prints out the current count with the response - just change the println statement to:

w.println "<${req.getAttribute('request-count')}> Hello, ${req.getParameter('name')}"

Lastly for this case, we need to register the filter as a bean in the ShoeConfig class:

@Bean Filter countingFilter(){
    new RequestCountFilter()
}

Now, run the application again and hit the hello servlet a few times and you will see something like:

<10> Hello, Chris

The default url-mapping for the filter is "/*" (all requests). While this may be useful for some quick demo cases, it would be much more useful to be able to define the servlet and filter configuration similar to what you would do in the web container configuration - that's where the RegistrationBeans come into play.

Revisiting the servlet and filter configuration in ShoeConfig we can now provide a more detailed configuration with the help of the ServletRegistrationBean and the FilterRegistrationBean classes, as follows:

@Bean ServletRegistrationBean helloServlet(){
    new ServletRegistrationBean(
        urlMappings:[ '/hello' ],
        servlet: new HelloServlet()
    )
}

@Bean FilterRegistrationBean countingFilter(){
    new FilterRegistrationBean(
        urlPatterns:[ '/*' ],
        filter: new RequestCountFilter()
    )
}

We still leave the filter mapped to all requests, but you now have access to any of the filter mapping configuration parameters. For instance, we can add a simple init-param to the RequestCountFilter, such as:

int startValue = 0

private AtomicInteger count

protected void initFilterBean() throws ServletException {
    count = new AtomicInteger(startValue)
}

This will allow the starting value of the count to be specified as a filter init-parameter, which can be easily configured in the filter configuration:

@Bean FilterRegistrationBean countingFilter(){
    new FilterRegistrationBean(
        urlPatterns:[ '/*' ],
        filter: new RequestCountFilter(),
        initParameters:[ 'startValue': '1000' ]
    )
}

Nice and simple. Now, when you run the application again and browse to http://localhost:10101/helloServlet?name=Chris you get a 404 error. Why? Well, now you have specified a url-mapping for the servlet, try http://localhost:10101/hello?name=Chris and you will see the expected result, something like:

<1004> Hello, Chris

You can also register ServletContextListeners in a similar manner. Let's create a simple one:

package shoe.servlet

import javax.servlet.ServletContextEvent
import javax.servlet.ServletContextListener

class LoggingListener implements ServletContextListener {

    void contextInitialized(ServletContextEvent sce) {
        println "Initialized: $sce"
    }

    void contextDestroyed(ServletContextEvent sce) {
        println "Destroyed: $sce"
    }
}

And then configure it in ShoeConfig:

@Bean ServletListenerRegistrationBean listener(){
    new ServletListenerRegistrationBean(
        listener: new LoggingListener()
    )
}

Then, when you run the application, you will get a message in the server output like:

Initialized: javax.servlet.ServletContextEvent[source=ServletContext@o.s.b.c.e.j.JettyEmbeddedWebAppContext{/,null}]

Now, let's do something a bit more interesting - let's setup a Spring-MVC configuration inside our embedded server.

The first thing you need for a minimal Spring-MVC configuration is a DispatcherServlet which, at its heart, is just an HttpServlet so we can just configure it as a bean in ShoeConfig:

@Bean HttpServlet dispatcherServlet(){
    new DispatcherServlet()
}

Then, we need a controller to make sure this configuration works - how about a simple controller that responds with the current time; we will also dump the request count to show that the filter is still in play. The controller looks like:

package shoe.controller

import org.springframework.web.bind.annotation.RequestMapping
import org.springframework.web.bind.annotation.RestController

import javax.servlet.http.HttpServletRequest

@RestController
class TimeController {

    @RequestMapping('/time')
    String time( HttpServletRequest request ){
        "<${request.getAttribute('request-count')}> Current-time: ${new Date()}"
    }
}

Lastly for this example, we need to load the controller into the configuration; just add a @ComponentScan annotation to the ShoeConfig class:

@ComponentScan('shoe.controller')
Fire up the server and hit the http://localhost:10101/time controller and you see something similar to:

<1002> Current-time: Fri Sep 05 07:02:36 CDT 2014

Now you have the ability to do any of your Spring-MVC work with this configuration, while the standard filter and servlet still work as before.

As a best-practice, I would suggest keeping this server configuration code separate from other configuration code for anything more than a trivial application (i.e. you wouldn't do your security and database config in this same file).
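A rough sketch of that separation (the SecurityConfig and DataConfig class names here are hypothetical) keeps the server wiring as one of several imported configuration classes:

```groovy
// hypothetical top-level configuration: ShoeConfig stays focused on server wiring,
// while security and persistence concerns live in their own config classes
@Configuration
@Import([ ShoeConfig, SecurityConfig, DataConfig ])
class AppConfig { }
```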

For my last discussion point, I want to point out that the embedded server configuration also allows you to do additional customization of the actual server instance during startup. To handle this additional configuration, Spring provides the JettyServerCustomizer interface. You simply implement this interface and add it to your server configuration factory bean. Let's do a little customization:

class ShoeCustomizer implements JettyServerCustomizer {

    void customize( Server server ){
        SelectChannelConnector myConn = server.getConnectors().find { Connector conn ->
            conn.port == 10101
        }

        myConn.maxIdleTime = 1000 * 60 * 60
        myConn.soLingerTime = -1

        server.setSendDateHeader(true)
    }
}

Basically just a tweak of the main connector and also telling the server to send an additional response header with the date value. This needs to be wired into the factory configuration, so that bean definition becomes:

@Bean EmbeddedServletContainerFactory embeddedServletContainerFactory(){
    def factory = new JettyEmbeddedServletContainerFactory( 10101 )
    factory.addServerCustomizers( new ShoeCustomizer() )
    return factory
}

Now when you start the server and hit the time controller you will see an additional header in the response:

Date:Fri, 05 Sep 2014 12:15:27 GMT

As you can see from this long discussion, the Spring-Boot embedded server API is quite useful all on its own. It's nice to see that Spring has exposed this functionality as part of its public API rather than hiding it under the covers somewhere.

The code I used for this article can be found in the main repository for this project, under the spring-shoe directory.

Older posts are available in the archive.