Spring Boot Remote Shell

07 November 2015 ~ blog, groovy, spring

Spring Boot comes with a ton of useful features that you can enable as needed, and in general the documentation is pretty good; however, sometimes it feels like they gloss over a feature that you eventually realize is much more useful than it originally seemed. The remote shell support is one of those features.

Let’s start off with a simple Spring Boot project based on the example provided with the Boot documentation. Our build.gradle file is:

buildscript {
    repositories {
        jcenter()
    }

    dependencies {
        classpath 'org.springframework.boot:spring-boot-gradle-plugin:1.2.7.RELEASE'
    }
}
version = "0.0.1"
group = "com.stehno"

apply plugin: 'groovy'
apply plugin: 'spring-boot'

sourceCompatibility = 8
targetCompatibility = 8

mainClassName = 'com.stehno.SampleController'

repositories {
    jcenter()
}

dependencies {
    compile "org.codehaus.groovy:groovy-all:2.4.5"

    compile 'org.springframework.boot:spring-boot-starter-web'
}

task wrapper(type: Wrapper) {
    gradleVersion = "2.8"
}

Then, our simple controller and starter class looks like:

@Controller
@EnableAutoConfiguration
public class SampleController {

    @RequestMapping('/') @ResponseBody
    String home() {
        'Hello World!'
    }

    static void main(args) throws Exception {
        SpringApplication.run(SampleController, args)
    }
}

Run it using:

./gradlew clean build bootRun

and you get your run of the mill "Hello world" application. For our demonstration purposes, we need something a bit more interesting. Let’s make the controller something like a "Message of the Day" server which will return a fixed configured message. Remove the hello controller action and add in the following:

String message = 'Message for you, sir!'

@RequestMapping('/') @ResponseBody
String message() {
    message
}
which will return the static message "Message for you, sir!" for every request. Running the application now will still be pretty uninteresting, but wait, it gets better.

Now, we would like to have the ability to change the message as needed without rebuilding or even restarting the server. There are a handful of ways to do this; however, I’m going to discuss one of the seemingly less used options…​ the CRaSH Shell integration provided in Spring Boot (43. Production Ready Remote Shell).

To add the remote shell support in Spring Boot, you add the following line to your dependencies block in your build.gradle file:

compile 'org.springframework.boot:spring-boot-starter-remote-shell'

Now, when you run the application, you will see an extra line in the server log:

Using default password for shell access: 44b3556b-ff9f-4f82-9f1b-54a16da471d5

Since no password was configured, Boot has provided a randomly generated one for you (obviously you would configure this in a real system). You now have an SSH connection available to your application. Using the ssh client of your choice you can login using:

ssh -p 2000 user@localhost

Which will ask you for the provided password. Once you have logged in you are connected to a secure shell running inside your application. You can run help at the prompt to get a list of available commands, which will look something like this:

> help
Try one of these commands with the -h or --help switch:

autoconfig Display auto configuration report from ApplicationContext
beans      Display beans in ApplicationContext
cron       manages the cron plugin
dashboard  a monitoring dashboard
egrep      search file(s) for lines that match a pattern
endpoint   Invoke actuator endpoints
env        display the term env
filter     a filter for a stream of map
java       various java language commands
jmx        Java Management Extensions
jul        java.util.logging commands
jvm        JVM informations
less       opposite of more
mail       interact with emails
man        format and display the on-line manual pages
metrics    Display metrics provided by Spring Boot
shell      shell related command
sleep      sleep for some time
sort       sort a map
system     vm system properties commands
thread     JVM thread commands
help       provides basic help
repl       list the repl or change the current repl

As you can see, you get quite a bit of functionality right out of the box. I will leave the discussion of each of the provided commands for another post. What we are interested in at this point is adding our own command to update the message displayed by our controller.

The really interesting part of the shell integration is the fact that you can extend it with your own commands.

Create a new directory, src/main/resources/commands, which is where your extended commands will live, and then add a simple starting point class for our command:

package commands

import org.crsh.cli.Usage
import org.crsh.cli.Command
import org.crsh.command.InvocationContext

@Usage('Interactions with the message of the day.')
class message {

    @Usage('View the current message of the day.')
    @Command
    def view(InvocationContext context) {
        return 'Hello'
    }
}

The @Usage annotations provide the help/usage documentation for the command, while the @Command annotation denotes that the view method is a command.

Now, when you run the application and list the shell commands, you will see our new command added to the list:

message    Interactions with the message of the day.

If you run the command as message view you will get the static "Hello" message returned to you on the shell console.

Okay, we need the ability to view our current message of the day. The InvocationContext has attributes which are populated by Spring, one of which is spring.beanfactory, a reference to the Spring BeanFactory for your application. We can access the current message of the day by replacing the content of the view method with the following:

BeanFactory beans = context.attributes['spring.beanfactory']
return beans.getBean(SampleController).message

where we find our controller bean and simply read the message property. Running the application and the shell command now yields:

Message for you, sir!

While that is pretty cool, we are actually here to modify the message, not just view it, and this is just as easy. Add a new command named update:

@Usage('Update the current message of the day.')
@Command
def update(
    InvocationContext context,
    @Usage('The new message') @Argument String message
) {
    BeanFactory beans = context.attributes['spring.beanfactory']
    beans.getBean(SampleController).message = message
    return "Message updated to: $message"
}

Now, rebuild/restart the server and start up the shell. If you execute:

message update "This is cool!"

You will update the configured message, which you can verify using the message view command, or better yet, you can hit your server and see that the returned message has been updated…​ no restart required. Indeed, this is cool.

You can find a lot more information about writing your own commands in the CRaSH documentation for Developing Commands. There is a lot of functionality that I am not covering here.

At this point, we are functionally complete. We can view and update the message of the day without requiring a restart of the server. But there are still some added goodies provided by the shell, especially around shell UI support - yes, it’s text, but it can still be pretty, and one of the ways CRaSH allows you to pretty things up is with colors and formatting via styles and the UIBuilder (which is sadly under-documented).

Let’s add another property to our controller to make things more interesting. Just add a Date lastUpdated = new Date() field. This will give us two properties to play with. Update the view action as follows:

SampleController controller = context.attributes['spring.beanfactory'].getBean(SampleController)

String message = controller.message
String date = controller.lastUpdated.format('MM/dd/yyyy HH:mm')

out.print new UIBuilder().table(separator: dashed, overflow: Overflow.HIDDEN, rightCellPadding: 1) {
    header(decoration: bold, foreground: black, background: white) {
        label('Date')
        label('Message')
    }

    row {
        label(date, foreground: green)
        label(message, foreground: yellow)
    }
}

We still retrieve the instance of the controller as before; however, now our output rendering is a bit more complicated, though still pretty understandable. We are creating a new UIBuilder for a table and then applying the header and row contents to it. It’s actually a very powerful construct, though I had to dig around in the project source code to figure out how to make it work.

You will also need to update the update command to set the new date field:

SampleController controller = context.attributes['spring.beanfactory'].getBean(SampleController)
controller.message = message
controller.lastUpdated = new Date()

return "Message updated to: $message"

Once you have that built and running you can run the message view command and get a much nicer multi-colored table output.

> message view
Date             Message
11/05/2015 10:37 And now for something completely different.

That wraps up what we set out to do here, and even puts a bow on it. You can find more information on the remote shell configuration options in the Spring Boot documentation in Appendix A: Common Application Properties. This is where you can configure the port, change the authentication settings, and even disable some of the default provided commands.
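For example, an application.properties along these lines would fix the port and credentials - the values here are made up, and you should double-check the property names against Appendix A for your Boot version:

```properties
# example values only - see Appendix A for the full list of shell.* properties
shell.ssh.port=2222
shell.auth=simple
shell.auth.simple.user.name=admin
shell.auth.simple.user.password=s3cret
```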

The remote shell support is one of the more interesting, but underused features in Spring Boot. Before Spring Boot was around, I was working on a project where we did a similar integration of CRaSH shell with a Spring-based server project and it provided a wealth of interesting and useful opportunities to dig into our running system and observe or make changes. Very powerful.

Multi-Collection Pagination

31 October 2015 ~ blog

A few years ago, I was working on a project where we had collections of data spread across multiple rows of data…​ and then we had to provide a paginated view of that data. This research was the result of those efforts. The discussion here is a bit more rigorous than I usually go into, so if you just want the implementation code jump to the bottom.


Consider that you have a data set representing a collection of collections:

[
    [ A0, A1, A2, A3, A4, A5 ],
    [ B0, B1, B2, B3, B4, B5 ],
    [ C0, C1, C2, C3, C4, C5 ]
]

We want to retrieve the data in a paginated fashion where the subset (page) with index P and subset size (page size) S is used to retrieve only the desired elements in the most efficient means possible.

Consider also that the data sets may be very large and that the internal collections may not be directly associated with the enclosing collection (e.g. two different databases).

Also consider that the subsets may cross collection boundaries or contain fewer than the desired number of elements.

Lastly, requests for data subsets will more likely be discrete events – one subset per request, rather than iterating over all results.

For a page size of four (S = 4) you would have the following five pages:

P0 : [ A0, A1, A2, A3 ]
P1 : [ A4, A5, B0, B1 ]
P2 : [ B2, B3, B4, B5 ]
P3 : [ C0, C1, C2, C3 ]
P4 : [ C4, C5 ]


The overall collection is traversed to determine how many elements are contained within each sub-collection; this may be pre-computed or done at runtime. Three counts are calculated or derived for each sub-collection:

  • Count (CI) - the number of elements in the sub-collection.

  • Count-before (CB) - the total count of all sub-collection elements counted before this collection, but not including this collection.

  • Count-with (CW) - the total count of all sub-collection elements counted before and including this collection.

For our example data set we would have:

    { CI:6, CB:0, CW:6 [ A0, A1, A2, A3, A4, A5 ] },
    { CI:6, CB:6, CW:12 [ B0, B1, B2, B3, B4, B5 ] },
    { CI:6, CB:12, CW:18 [ C0, C1, C2, C3, C4, C5 ] }

This allows for a simple means of selecting only the sub-collections we are interested in; those containing the desired elements based on the starting and ending indices for the subset (START and END respectively). These indices can easily be calculated as:


START = P × S
END = START + S – 1

The indices referenced here are for the overall collection, not the individual sub-collections.

The desired elements will reside in sub-collections whose inclusive count (CW) is greater than the starting index and whose preceding count (CB) is less than or equal to the ending index, or:

CW > START and CB ≤ END

For the case of selecting the second subset of data (P = 1) with a page size of four (S = 4) we would have:

START = 4
END = 7

This will select the first two of the three sub-collections as "interesting" sub-collections containing at least some of our desired elements, namely:

{ CI:6, CB:0, CW:6 [ A0, A1, A2, A3, A4, A5 ] },
{ CI:6, CB:6, CW:12 [ B0, B1, B2, B3, B4, B5 ] }

What remains is to gather from these sub-collections (call them SC[0], SC[1]) the desired number of elements (S).

To achieve this, a local starting and ending index must be calculated while iterating through the "interesting" sub-collections to gather the elements until either the desired amount is obtained (S) or there are no more elements available.

  1. Calculate the initial local starting index (LOCAL_START) by subtracting the non-inclusive preceding count value of the first selected collection (SC[0]) from the overall starting index.

  2. Iterate the selected collections (in order) until the desired amount has been gathered

This is more clearly represented in pseudo-code as:

LOCAL_START = START - SC[0].CB
REMAINING = S
G = []

for-each sc in SC while REMAINING > 0

    if( REMAINING < (sc.size() - LOCAL_START) )
        LOCAL_END = LOCAL_START + REMAINING - 1
    else
        LOCAL_END = sc.size() - 1

    FOUND = sc[LOCAL_START..LOCAL_END]
    G.addAll( FOUND )

    REMAINING = REMAINING - FOUND.size()
    LOCAL_START = 0


Where the gathered collection of elements (G) is your resulting data set containing the elements for the specified data page.

It must be stated that the ordering of the overall collection and the sub-collections must be consistent across multiple data requests for this procedure to work properly.


Ok now, enough discussion. Let’s see what this looks like with some real Groovy code. First, we need our collections of collections data to work with:

def data = [
    [ 'A0', 'A1', 'A2', 'A3', 'A4', 'A5' ],
    [ 'B0', 'B1', 'B2', 'B3', 'B4', 'B5' ],
    [ 'C0', 'C1', 'C2', 'C3', 'C4', 'C5' ]
]

Next, we need to implement the algorithm in Groovy:

int page = 1
int pageSize = 4

// pre-computation

int before = 0
def prepared = data.collect { d ->
    def result = [
        countIn: d.size(),
        countBefore: before,
        countWith: before + d.size(),
        values: d
    ]

    before += d.size()

    return result
}

// main computation

int start = page * pageSize
int end = start + pageSize - 1

def selected = prepared.findAll { sc ->
    sc.countWith > start && sc.countBefore <= end
}

def localStart = start - selected[0].countBefore
def remaining = pageSize

def gathered = []

selected.each { sc ->
    if( remaining ){
        def localEnd
        if( remaining < (sc.values.size() - localStart) ){
            localEnd = localStart + remaining - 1
        } else {
            localEnd = sc.values.size() - 1
        }

        def found = sc.values[localStart..localEnd]
        gathered.addAll(found)

        remaining -= found.size()
        localStart = 0
    }
}

println "P$page : $gathered"

which yields

P1 : [A4, A5, B0, B1]

and if you look all the way back up to the beginning of the article, you see that this is the expected data set for page 1 of the example data.
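To double-check the boundary cases (a page that spans two sub-collections and the short final page), the pre-computation and gathering steps can be consolidated into a single function. The paginate name and overall packaging here are my own; the logic follows the article's code:

```groovy
// Consolidation of the algorithm above into a reusable function (my packaging).
List paginate(List<List> data, int page, int pageSize) {
    // pre-computation: CB/CW counts for each sub-collection
    int before = 0
    def prepared = data.collect { d ->
        def entry = [countBefore: before, countWith: before + d.size(), values: d]
        before += d.size()
        entry
    }

    int start = page * pageSize
    int end = start + pageSize - 1

    // select only the "interesting" sub-collections: CW > START and CB <= END
    def selected = prepared.findAll { it.countWith > start && it.countBefore <= end }
    if (!selected) return []

    int localStart = start - selected[0].countBefore
    int remaining = pageSize
    def gathered = []

    selected.each { sc ->
        if (remaining) {
            int localEnd = remaining < (sc.values.size() - localStart) ?
                localStart + remaining - 1 : sc.values.size() - 1
            def found = sc.values[localStart..localEnd]
            gathered.addAll(found)
            remaining -= found.size()
            localStart = 0
        }
    }
    gathered
}

def data = [
    ['A0', 'A1', 'A2', 'A3', 'A4', 'A5'],
    ['B0', 'B1', 'B2', 'B3', 'B4', 'B5'],
    ['C0', 'C1', 'C2', 'C3', 'C4', 'C5']
]

(0..4).each { p ->
    println "P$p : ${paginate(data, p, 4)}"
}
```

Running this prints all five pages from the beginning of the article, including the partial P4.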

It’s not a scenario I have run into often, but it was a bit of a tricky one to unravel. The pre-computation steps ended up being the key to keeping it simple and stable.

Spring ViewResolver for "GSP"

26 October 2015 ~ blog, groovy, vanilla, spring

Recently, while working on a Spring MVC application, I was considering which template framework to use for my views and I was surprised to realize that there was no implementation using the Groovy GStringTemplateEngine. There is one for the Groovy Markup Templates; however, in my opinion, that format seems pretty terrible - they are interesting in themselves, but they seem like they would be a nightmare to maintain, and your designers would kill you if they ever had to work with them.

This obvious gap in functionality surprised me, and even a quick Google search did not turn up any implementations. There was some documentation around using the Grails GSP framework in a standard Spring Boot application, but that seemed like overkill for how simple these templates can be. Generally, implementing extensions to the Spring Framework is pretty simple, so I decided to give it a quick try…​ and I was right, it was not hard at all.

The ViewResolver implementation I came up with is an extension of the AbstractTemplateViewResolver with one main method of interest, the buildView(String) method which contains the following:

protected AbstractUrlBasedView buildView(final String viewName) throws Exception {
    GroovyTemplateView view = super.buildView(viewName) as GroovyTemplateView (1)

    URL templateUrl = applicationContext.getResource(view.url).getURL() (2)

    view.template = templateEngine.createTemplate(templateUrl) (3)

    view.encoding = defaultEncoding

    return view
}
  1. Call the super class to create a configured instance of the View

  2. Load the template from the ApplicationContext using the url property of the View

  3. Create the Template from the contents of the URL

This method basically just uses the view resolver framework to find the template file and load it with the GStringTemplateEngine - the framework takes care of the caching and model attribute management.

The View implementation is also quite simple; it is an extension of the AbstractTemplateView, with the only implemented method being the renderMergedTemplateModel() method:

protected void renderMergedTemplateModel(
    Map<String, Object> model, HttpServletRequest req, HttpServletResponse res
) throws Exception {
    res.contentType = contentType
    res.characterEncoding = encoding

    res.writer.withPrintWriter { PrintWriter out ->
        out.write(template.make(model) as String)
    }
}
The Template content is rendered using the configured model data and then written to the PrintWriter from the HttpServletResponse, which sends it to the client.

Lastly, you need to configure the resolver in your application:

@Bean ViewResolver viewResolver() {
    new GroovyTemplateViewResolver(
        contentType: 'text/html',
        cache: true,
        prefix: '/WEB-INF/gsp/',
        suffix: '.gsp'
    )
}
One thing to notice here is all the functionality you get by default from the Spring ViewResolver framework for very little added code on your part.

Another thing to note is that "GSP" file in this case is not really a true GSP; however, you have all the functionality provided by the GStringTemplateEngine, which is quite similar. An example template could be something like:

        [${new Date()}] Hello, ${name ?: 'stranger'}

        <% if(personService.seen(name)){ %>
            You have been here ${personService.visits(name)} times.
        <% } %>

It’s definitely a nice clean template language if you are already coding everything else in Groovy anyway.
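If you want to experiment with the template syntax outside of the Spring machinery, the underlying GStringTemplateEngine can be driven directly; a minimal sketch (the template text here is made up for illustration):

```groovy
import groovy.text.GStringTemplateEngine

def engine = new GStringTemplateEngine()

// single-quoted so the ${} placeholders are left for the engine, not Groovy
def template = engine.createTemplate('[${stamp}] Hello, ${name ?: "stranger"}')

// make(Map) binds the model and returns a Writable that renders on toString()
println template.make(stamp: 'today', name: 'Chris') as String
println template.make(stamp: 'today', name: null) as String
```

This is exactly the engine the view resolver uses, so anything that works here works in your "GSP" files.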

I will be adding a Spring helper library to my Vanilla project; the "vanilla-spring" project will have the final version of this code, though it should be similar to what is discussed here. The full source for the code discussed above is provided below for reference until the actual code is released.

package com.stehno.vanilla.spring.view

// imports removed...

class GroovyTemplateViewResolver extends AbstractTemplateViewResolver {

    /**
     * The default character encoding to be used by the template views. Defaults to UTF-8 if not specified.
     */
    String defaultEncoding = StandardCharsets.UTF_8.name()

    private final TemplateEngine templateEngine = new GStringTemplateEngine()

    GroovyTemplateViewResolver() {
        viewClass = requiredViewClass()
    }

    @Override
    protected Class<?> requiredViewClass() {
        GroovyTemplateView
    }

    @Override
    protected AbstractUrlBasedView buildView(final String viewName) throws Exception {
        GroovyTemplateView view = super.buildView(viewName) as GroovyTemplateView

        view.template = templateEngine.createTemplate(
            applicationContext.getResource(view.url).getURL()
        )

        view.encoding = defaultEncoding
        return view
    }
}
package com.stehno.vanilla.spring.view

// imports removed...

class GroovyTemplateView extends AbstractTemplateView {

    Template template
    String encoding

    @Override
    protected void renderMergedTemplateModel(
        Map<String, Object> model, HttpServletRequest req, HttpServletResponse res
    ) throws Exception {
        res.contentType = contentType
        res.characterEncoding = encoding

        res.writer.withPrintWriter { PrintWriter out ->
            out.write(template.make(model) as String)
        }
    }
}

Copying Data with ObjectMappers

10 October 2015 ~ blog, groovy, vanilla

When working with legacy codebases, I tend to run into a lot of scenarios where I am copying data objects from one format to another while an API is in transition or due to some data model mismatch.

Suppose we have an object in one system - I am using Groovy because it keeps things simple, but it could be Java as well:

class Person {
    long id
    String firstName
    String lastName
    LocalDate birthDate
}

and then you are working with a legacy (or external) API which provides similar data in the form of:

class Individual {
    long id
    String givenName
    String familyName
    String birthDate
}

and now you have to integrate the conversion from the old/external format (Individual) to your internal format (Person).

You can write the code in Java using the Transformer interface from Apache Commons Collections, which ends up with something like this:

public class IndividualToPerson implements Transformer<Individual, Person> {
    public Person transform(Individual indiv) {
        Person person = new Person();
        person.setId( indiv.getId() );
        person.setFirstName( indiv.getGivenName() );
        person.setLastName( indiv.getFamilyName() );
        person.setBirthDate( LocalDate.parse(indiv.getBirthDate()) );
        return person;
    }
}

I wrote a blog post about this many years ago (Commons Collections - Transformers); however, if you have more than a handful of these conversions, you can end up handwriting a lot of the same code over and over, which can be error prone and time consuming. Even switching the code above to full-on Groovy does not really save you much, though it is better:

class IndividualToPerson implements Transformer<Individual, Person> {
    Person transform(Individual indiv) {
        new Person(
            id: indiv.id,
            firstName: indiv.givenName,
            lastName: indiv.familyName,
            birthDate: LocalDate.parse(indiv.birthDate)
        )
    }
}

What I came up with was a simple mapping DSL which allows for straight-forward definitions of the property mappings in the simplest code possible:

ObjectMapper individualToPerson = mapper {
    map 'id'
    map 'givenName' into 'firstName'
    map 'familyName' into 'lastName'
    map 'birthDate' using { d -> LocalDate.parse(d) }
}

which builds an instance of RuntimeObjectMapper which is stateless and thread-safe. The ObjectMapper interface has a method copy(Object source, Object dest) which will copy the properties from the source object to the destination object. Your transformation code ends up something like this:

def people = individuals.collect { indiv ->
    Person person = new Person()
    individualToPerson.copy(indiv, person)
    person
}

or we can use the create(Object, Class) method as:

def people = individuals.collect { indiv ->
    individualToPerson.create(indiv, Person)
}

which is just a shortcut method for the same code, as long as you are able to create your destination object with a default constructor, which we are able to do.

There is also a third, slightly more useful option in this specific collector case:

def people = individuals.collect( individualToPerson.collector(Person) )

The collector(Class) method returns a Closure that is also a shortcut to the conversion code shown previously. It's mostly syntactic sugar, but it's nice and clean to work with.
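To demystify the runtime mapper a little: a mapper of this style just records each mapping as data and replays it in copy(). Here is a deliberately tiny, hypothetical sketch of the idea - the class names and details are mine, not the Vanilla implementation:

```groovy
// Hypothetical miniature mapper - NOT the Vanilla source, just the general shape.
class MiniMapper {
    final List<MappingSpec> mappings = []

    // records a mapping; defaults the destination property to the source name
    MappingSpec map(String from) {
        def spec = new MappingSpec(from: from, dest: from)
        mappings << spec
        spec
    }

    // replays the recorded mappings, applying any converter closures
    void copy(source, target) {
        mappings.each { m ->
            def value = source[m.from]
            target[m.dest] = m.converter ? m.converter.call(value) : value
        }
    }
}

class MappingSpec {
    String from
    String dest
    Closure converter

    MappingSpec into(String name) { dest = name; this }
    MappingSpec using(Closure c) { converter = c; this }
}

MiniMapper mapper(Closure config) {
    def m = new MiniMapper()
    config.delegate = m
    config.resolveStrategy = Closure.DELEGATE_FIRST
    config()
    m
}

def demo = mapper {
    map 'id'
    map 'givenName' into 'firstName'
    map 'age' using { it + 1 }
}

def dest = [:]
demo.copy([id: 42, givenName: 'Bob', age: 41], dest)
assert dest == [id: 42, firstName: 'Bob', age: 42]
```

The readable "map 'givenName' into 'firstName'" syntax is just Groovy's command-chain form of map('givenName').into('firstName').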

Notice the 'using' method - this allows for conversion of the source data before it is set into the destination object. This is one of the more powerful features of the DSL. Consider the case where your Person class has an embedded Name class:

class Person {
    long id
    Name name
    LocalDate birthDate
}

class Name {
    String first
    String last
}

Now we want to map the name properties into this new embedded object rather than into the main object. The mapper DSL can do this too:

ObjectMapper individualToPerson = mapper {
    map 'id'
    map 'givenName' into 'name' using { p, src ->
        new Name(src.givenName, src.familyName)
    }
    map 'birthDate' using { d -> LocalDate.parse(d) }
}

It's a bit odd since you are mapping two properties into one property, but it gets the job done. The conversion closure will accept up to three parameters (or none) - the first being the source property being converted, the second is the source object instance and the third is the destination object instance. The one thing to keep in mind when using the two and three parameter versions is that the order of your property mapping definitions may begin to matter, especially if you are working with the destination object.

So far, we have been talking about runtime-based mappers that take your configuration and resolve your property mappings at runtime. It's reasonably efficient since it doesn't do all that much, but consider the case where you have a million objects to transform; those extra property mapping operations start to add up - that's when you go back to hand-coding it unless there is a way to build the mappers at compile time rather than run time...

There is. This was something I have really wanted to get a hold of for this project and others; the ability to use a DSL to control the AST transformations used in code generation... or, using the mapper DSL in an annotation to create the mapper class at compile time so that it is closer to what you would have hand-coded yourself (and also becomes a bit more performant since there are fewer operations being executed at runtime).

Using the static approach is simple, you just write the DSL code in the @Mapper annotation on a method, property or field:

class Mappers {

    @Mapper({
        map 'id'
        map 'givenName' into 'firstName'
        map 'familyName' into 'lastName'
        map 'birthDate' using { d -> LocalDate.parse(d) }
    })
    static final ObjectMapper personMapper(){}
}

When the code compiles, a new implementation of ObjectMapper will be created and installed as the return value of the personMapper() method. The static version of the DSL has all of the same functionality as the dynamic version, except that it does not support using ObjectMappers directly in the using command; however, a workaround for this is to use a closure.

Object property mapping/copying is one of those things you don't run into all that often, but it is useful to have a simple alternative to hand-writing the code for it. Both the dynamic and static version of the object mappers discussed here are available in my Vanilla library.

Lazy Immutables

23 September 2015 ~ blog, groovy, vanilla

A co-worker and I were discussing the Groovy @Immutable annotation recently where I was thinking it would be useful if it allowed you to work on the object as a mutable object until you were ready to make it permanent, and then you could "seal" it and make it immutable. This would give you a bit more freedom in how the object is configured - sometimes the standard immutable approach can be overly restrictive.

Consider the case of an immutable Person object:

@Immutable
class Person {
    String firstName
    String middleName
    String lastName
    int age
}

With @Immutable you have to create the object all at once:

def person = new Person('Chris','J','Stehno',42)

and then you're stuck with it. You can create a copy of it with one or more different properties using the copyWith method, but you need to specify copyWith=true in the annotation itself; then you can do something like:

Person otherPerson = person.copyWith(firstName:'Bob', age:50)

I'm not sure who "Bob J Stehno" is though. With more complicated immutables, this all-at-once requirement can be annoying. This is where the @LazyImmutable annotation comes in (part of my Vanilla - Core library). With a similar Person class:

@LazyImmutable @Canonical
class Person {
    String firstName
    String middleName
    String lastName
    int age
}

using the new annotation, you can create and populate the instance over time:

def person = new Person('Chris')
person.middleName = 'J'
person.lastName = 'Stehno'
person.age = 42

Notice that the @LazyImmutable annotation does not apply any other transforms (as the standard @Immutable does). It's a standard Groovy object, but with an added method: the asImmutable() method is injected via AST Transformation. This method will take the current state of the object and create an immutable version of it - this does imply that the properties of lazy immutable objects should follow the same rules as those of the standard immutable so that the conversion is deterministic. For our example case:

Person immutablePerson = person.asImmutable()

The created object is the same immutable object as would have been created by using the @Immutable annotation, and it is generated as an extension of the class you created so that its type is still valid. The immutable version of the object also has a useful added method: asMutable(), which creates a copy of the original mutable object.

Person otherMutable = immutablePerson.asMutable()

It's a fairly simple helper annotation, but it just fills one of those little functional gaps that you run into every now and then. Maybe someone else will find it useful.

Baking Your Blog with JBake, Groovy and GitHub

02 September 2015 ~ blog, groovy

As a developer, it has always bugged me to have my blog or web site content stored on a server managed by someone else, outside of my control. Granted, WordPress and the like are very stable and generally have means of pulling out your data if you need it, but I really just like to have my content under my own control. Likewise, I have other projects I want to work on, so building content management software is not really on my radar at this point; that's where JBake comes in.

JBake is a simple JVM-based static site generation tool that makes casual blogging quite simple once you get everything set up. It's a bit of a raw project at this point, so there are a few rough edges to work with, but I will help to file them down in the discussions below.

Getting started with JBake, you have a couple options. You can install JBake locally and use it as a command line tool, or you can use the JBake Gradle Plugin. The Gradle plugin is currently lacking the local server feature provided by the command line tools; however, it does provide a more portable development environment along with the universe of other Gradle plugins. We will use the Gradle plugin approach here and I will provide some workarounds for the missing features to bring the functionality back on even ground with the command line tool.

The first thing we need is our base project and for that I am going to use a Lazybones template that I have created (which may be found in my lazybones-templates repository). You can use the Gradle plugin and do all the setup yourself, but it was fairly simple and having a template for it allowed me to add in the missing features we need.

If you are unfamiliar with Lazybones, it's a Groovy-based project template framework along the lines of Yeoman and the old Maven Archetype plugin. Details for adding my template repo to your configuration can be found on the README page for my templates.

Create the empty project with the following:

lazybones create jbake-groovy cookies

where "cookies" is the name of our project and the name of the project directory to be created. You will be asked a few questions related to template generation. You should have something similar to the following:

$ lazybones create jbake-groovy cookies
Creating project from template jbake-groovy (latest) in 'cookies'
Define value for 'JBake Plugin Version' [0.2]:
Define value for 'JBake Version' [2.3.2]:
Define value for 'Gradle version' [2.3]:
GitHub project: [username/projectname.git]: cjstehno/cookies.git

The "username" should reflect the username of your GitHub account; we'll see what this is used for later. If you look at the generated "cookies" directory now you will see a standard-looking Gradle project structure. The JBake source files reside in the src/jbake directory with the following sub-directories:

* assets - static resources (CSS, JavaScript, images) copied as-is into the rendered site
* content - the site content itself, as HTML, AsciiDoc, or Markdown files
* templates - the GSP templates used to render the pages

You will see that by default, a simple Bootstrap-based blog site is provided with sample blog posts in HTML, AsciiDoc, and Markdown formats. This is the same sample content as provided by the command line version of the project setup tool. At this point we can build the sample content using:

./gradlew jbake

The Gradle plugin does not provide a means of serving up the "baked" content yet. There is work in progress so hopefully this will be merged in soon. One of the goodies my template provides is a simple Groovy web server script. This allows you to serve up the content with:

groovy serve.groovy

which will start a Jetty instance pointed at the content in build/jbake on the configured port (8080 by default, which can be changed by adding a port number to the command line). Now when you hit http://localhost:8080/ you should see the sample content. Also, you can leave this server running in a separate console while you develop, running the jbake command as needed to rebuild the content.

First, let's update the general site information. Our site's title is not "JBake", so let's change it to "JCookies" by updating it in the src/jbake/templates/header.gsp and src/jbake/templates/menu.gsp files. While we're in there we can also update the site meta information as well:

<title><%if (content.title) {%>${content.title}<% } else { %>JCookies<% }%></title>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="A site about cookies.">
<meta name="author" content="Chris Stehno">
<meta name="keywords" content="cookies,baking">
<meta name="generator" content="JBake">

Then to apply the changes, run ./gradlew jbake and refresh the browser. Now we see our correct site name.

Note that JBake makes no requirements about the templates or content to be used. It provides special support for blog-style sites; however, you can remove all the content and make a standard simple static site if you wish.

Let's add a new blog entry. The blog entries are stored in the src/jbake/content/blog directory by year, so we need to create a new directory for 2015. Content may be written in HTML, AsciiDoc, or Markdown, based on the file extension. I am a fan of Markdown, so we'll use that for our new blog entry. Let's create an entry file named chocolate-chip.md.

JBake uses a custom header block at the top of content files to store meta information. For our entry we will use the following:

title=Chocolate Chip Cookies

The title and date are self-explanatory. The type can be post or page to denote a blog post or a standard page. The tags are used to provide extra tag information to categorize the content. The status field may be draft or published to denote whether or not the content should be included in the rendered site. Everything below the line of tildes is your standard markdown content.
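Putting these fields together, a complete header block for our entry looks something like this (the date and tag values here are illustrative):

```
title=Chocolate Chip Cookies
date=2015-03-15
type=post
tags=blog,cookies
status=published
~~~~~~
```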

For the content of our entry we are going to use the Nestle Chocolate Chip Cookie recipe - it gives us a nice overview of the content capabilities, and they are yummy!

The content, in Markdown format, is as follows:

## Ingredients

* 2 1/4 cups all-purpose flour
* 1 teaspoon baking soda
* 1 teaspoon salt
* 1 cup (2 sticks) butter, softened
* 3/4 cup granulated sugar
* 3/4 cup packed brown sugar
* 1 teaspoon vanilla extract
* 2 large eggs
* 2 cups (12-oz. pkg.) NESTLÉ® TOLL HOUSE® Semi-Sweet Chocolate Morsels
* 1 cup chopped nuts

## Instructions

1. Preheat oven to 375° F.
1. Combine flour, baking soda and salt in small bowl. Beat butter, granulated sugar, brown sugar and vanilla extract in large mixer bowl until creamy. Add eggs, one at a time, beating well after each addition. Gradually beat in flour mixture. Stir in morsels and nuts. Drop by rounded tablespoon onto ungreased baking sheets. 
1. BAKE for 9 to 11 minutes or until golden brown. Cool on baking sheets for 2 minutes; remove to wire racks to cool completely. 

May be stored in refrigerator for up to 1 week or in freezer for up to 8 weeks.

Rebuild/refresh and now you see we have a new blog post. Now, since we "borrowed" this recipe from another site, we should provide an attribution link back to the original source. The content header fields are dynamic; you can create your own and use them in your pages. Let's add an attribution field and put our link in it.
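The new header line looks like the following (the URL is a placeholder - use the address of the original recipe):

```
attribution=http://example.com/original-cookie-recipe
```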


Then we will want to add it to our rendered page, so we need to open up the blog entry template, the src/jbake/templates/post.gsp file and add the following line after the page header:

<p>Borrowed from: <a href="${content.attribution}">${content.attribution}</a></p>

Notice that the templates are just GSP files, which may have Groovy code embedded in them to perform rendering logic. The header data is accessible via the content object in the page.
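Since not every post will define an attribution field, the template line can be guarded with a bit of embedded Groovy - a sketch:

```
<% if (content.attribution) { %>
<p>Borrowed from: <a href="${content.attribution}">${content.attribution}</a></p>
<% } %>
```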

This post is kind of boring at this point. Yes, it's a recipe for chocolate chip cookies, and that's hard to beat, but a page full of text is not selling it to me. Let's add a photo to really make your mouth water. Grab an image of your favorite chocolate chip cookies and save it in src/jbake/assets/images as cookies.jpg. Static content like images lives in the assets folder. The contents of the assets folder will be copied into the root of the rendered site directory.

Now, we need to add the photo to the page. Markdown allows simple HTML tags to be used so we can add:

<img src="/images/cookies.jpg" style="width:300px;float:right;"/>

to the top of our blog post content, which will add the image at the top of the page, floated to the right of the main content text. Now that looks tasty!

You can also create standard pages in a similar manner to blog posts; however, they are based on the page.gsp template. This allows for different contextual formatting for each content type.

You can customize any of the templates to get the desired content and functionality for your static site, but what about the overall visual theme? As I mentioned earlier, the default templates use the Twitter Bootstrap library, and there are quite a few resources available for changing the theme to fit your needs, ranging from free to somewhat expensive. We just want a free one for demonstration purposes, so let's download the bootstrap.min.css file for the Bootswatch Cerulean theme. Overwrite the existing theme in the src/jbake/assets/css directory with this new file, then rebuild the site and refresh your browser. Now you can see that we have a nice blue banner along with other style changes.

The end result at this point will look something like this:

All-in-all not too bad for a few minutes of coding work!

Another nice feature of JBake is delayed publishing. The status field in the content header has three accepted values:

* draft - the content is a work in progress and is not included in the published site
* published - the content is published right away
* published-date - the content is published only once its date value has passed

We used the published option since we wanted our content to be available right away. You could easily create a batch of blog entries ahead of time, specifying the date values for when they should be published, with the status values set to published-date so that they are released only after the appropriate date. The downside is that, since JBake is a static generator, you would have to be sure to rebuild the site often enough to pick up the newly available content - maybe with a nightly scheduled build and deployment job.
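Such a nightly job could be as simple as a cron entry along these lines (the path, schedule, and task combination are illustrative):

```
# rebuild and publish the site every night at 2am
0 2 * * * cd /path/to/cookies && ./gradlew jbake publish
```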

When you are ready to release your site out into the greater internet wilderness, you will need a way to publish it; this is another place where my lazybones template comes in handy. If you are hosting your site on GitHub Pages, the template comes with a publishing task built-in, based on the gradle-git plugin. This is where the GitHub username and repository information from the initial project creation comes into play. For this to work, you need a repository named "cookies" associated with your GitHub account. You will also want to double check that the repo clone URL is correct in the publish.gradle file. Then, to publish your site you simply run:

./gradlew publish

and then go check your project site for the updated content (sometimes it takes a minute or two, though it's usually instantaneous).

At this point we have an easily managed static web site; what's left to be done? Well, you could associate it with your own custom domain name rather than the one GitHub provides. I will not go into that here, since I really don't want to purchase a domain name just for this demo; however, I do have a blog post (Custom GitHub Hosting) that goes into how it's done (at least on GoDaddy).

JBake and GitHub with a dash of Groovy provide a nice environment for quick custom blogs and web sites, with little fuss. Everything I have shown here is what I use to create and manage this blog, so, I'd say it works pretty well.

Portions of this discussion are based on a blog post by Cédric Champeau, "Authoring your blog on GitHub with JBake and Gradle", who is also a contributor to JBake (among other things).

Vanilla Test Fixtures

15 May 2015 ~ blog, groovy, testing, vanilla

Unit testing with data fixtures is good practice to get into, and having a simple means of creating and managing reusable fixture data makes it much more likely. I have added a FixtureBuilder and Fixture class to my Vanilla-Testing library.

Unit testing with domain objects, entities, and DTOs can become tedious, and you can end up with a lot of duplication around creating the test fixtures for each test. Say you have an object, Person, defined as:

class Person {
    Name name
    LocalDate birthDate
    int score
}

When writing unit tests for services and controllers that need to create and compare various instances of Person, you end up with constants scattered about, or duplicated code creating custom instances all over the test code.

Using com.stehno.vanilla.test.FixtureBuilder you can create reusable fixtures with a simple DSL. I tend to create a main class to contain my fixtures and to also provide the set of supported fixture keys, something like:

class PersonFixtures {

    static final String BOB = 'Bob'
    static final String LARRY = 'Larry'

    static final Fixture FIXTURES = define {
        fix BOB, [ name:new Name('Bob','Q','Public'), birthDate:LocalDate.of(1952,5,14), score:120 ]
        fix LARRY, [ name:new Name('Larry','G','Larson'), birthDate:LocalDate.of(1970,2,8), score:100 ]
    }
}

Notice that the define method is where you create the data contained by the fixtures, each mapped with an object key. The key can be any object usable as a Map key (i.e. one with proper equals and hashCode implementations).

The reasoning behind using Maps is that Groovy allows them to be used as constructor arguments for creating objects; therefore, the maps give you a reusable and detached dataset for use in creating your test fixture instances. Two object instances created from the same fixture data will be equivalent at the level of the properties defined by the fixture; however, each can be manipulated without affecting the other.
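To see why this matters, here is a standalone sketch (the Name class is simplified from the example) of Groovy's map-based construction in action:

```groovy
// Groovy's default map constructor assigns each map entry to the matching property
class Name {
    String first, middle, last
}

Map data = [first: 'Bob', middle: 'Q', last: 'Public']

def a = new Name(data)
def b = new Name(data)

// both instances are equivalent at the property level...
assert a.first == b.first && a.last == b.last

// ...but detached: changing one does not affect the other
a.first = 'Robert'
assert b.first == 'Bob'
```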

Once your fixtures are defined, you can use them in various ways. You can request the immutable data map for a fixture:

Map data = PersonFixtures.FIXTURES.map(PersonFixtures.BOB)

You can create an instance of the target object using the data mapped to a specified fixture:

Person person = PersonFixtures.FIXTURES.object(Person, PersonFixtures.LARRY)

Or, you can request the data or an instance for a fixture while applying additional (or overridden) properties to the fixture data:

Map data = PersonFixtures.FIXTURES.map(PersonFixtures.BOB, score:53)
Person person = PersonFixtures.FIXTURES.object(Person, PersonFixtures.LARRY, score:200)

You can easily retrieve field property values for each fixture for use in your tests:

assert 100 == PersonFixtures.FIXTURES.field('score', PersonFixtures.LARRY)

This allows field-by-field comparisons for testing and the ability to use the field values as parameters as needed.

Lastly, you can verify that an object instance contains the expected data that is associated with a fixture:

assert PersonFixtures.FIXTURES.verify(person, PersonFixtures.LARRY)

which will compare the given object to the specified fixture and return true if all of the properties defined in the fixture match the same properties of the given object. There is also a second version of the method which allows property customizations before comparison.

One step further... you can combine fixtures with property randomization to make fixture creation even simpler for those cases where you don't care about what the properties are, just that you can get at them reliably.

static final Fixture FIXTURES = define {
    fix FIX_A, [ name:randomize(Name).one(), birthDate:LocalDate.of(1952,5,14), score:120 ]
    fix FIX_B, randomize(Person){
        (Name): randomize(Name),
        (LocalDate): { LocalDate.now() }
    }
}
The fixture mapper accepts PropertyRandomizer instances and uses them to generate the random content once, when the fixture is created; the generated values then remain unchanged throughout testing.

One thing to note about the fixtures is that the fixture container and the maps that are passed in as individual fixture data are all made immutable via the asImmutable() method; however, if the data inside the fixture is mutable, it still may have the potential for being changed. Be aware of this and take proper precautions when you create and interact with such data types.
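A quick illustration of this caveat, in plain Groovy and independent of the Vanilla API - the immutable wrapper protects the map itself, not mutable values held inside it:

```groovy
def inner = ['chocolate', 'oatmeal']
def fixtureData = [flavors: inner].asImmutable()

// the map itself rejects modification...
try {
    fixtureData.extra = 'nope'
    assert false, 'should not be reachable'
} catch (UnsupportedOperationException expected) {
    // expected: the wrapper is unmodifiable
}

// ...but a mutable value held inside it can still change
inner << 'sugar'
assert fixtureData.flavors.size() == 3
```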

Reusable test fixtures can really help to clean up your test code base, and they are a good habit to get into.

Property Randomization for Testing

06 May 2015 ~ blog, groovy, vanilla

Unit tests are great, but sometimes you end up creating a lot of test objects requiring data, such as DTOs and domain objects. Generally, I have always come up with movie quotes or other interesting content for test data. Recently, while working on a Groovy project, I thought it would be interesting to have a way to randomly generate and populate the data for these objects. The randomization would provide a simpler approach to test data as well as providing the potential for stumbling on test data that would break your code in interesting ways.

My Vanilla project now has a PropertyRandomizer class, which provides this property randomization functionality in two ways. You can use it as a builder or as a DSL.

Say you have a Person domain class, defined as:

class Person {
    String name
    Date birthDate
}

You could generate a random instance of it using:

def rando = randomize(Person).typeRandomizers( (Date):{ new Date() } )
def instance = rando.one()

Note that there is no default randomizer for Date, so we had to provide one. The other field, name in this case, would be randomized by the default randomizer.

The DSL usage style for the use case above would be:

def rando = randomize(Person){
    (Date):{ new Date() }
}
def instance = rando.one()

Not really much difference, but sometimes a DSL style construct is cleaner to work with.

What if you need three random instances for the same class, all different? You just ask for them:

def instances = rando.times(3)

// or 

instances = rando * 3

The multiplication operator is overridden to provide a nice shortcut for requesting multiple random instances.
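The shortcut works because Groovy maps the `*` operator onto a `multiply` method; a simplified sketch of the idea (names illustrative, not the actual Vanilla source) looks like:

```groovy
import java.util.concurrent.ThreadLocalRandom

// illustrative only: Groovy dispatches `instance * 3` to instance.multiply(3)
class SimpleRandomizer {
    String one() { "value-${ThreadLocalRandom.current().nextInt(1000)}" }

    List multiply(int count) {
        (1..count).collect { one() }
    }
}

def rando = new SimpleRandomizer()
def instances = rando * 3
assert instances.size() == 3
```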

You can customize the randomizers at either the type or property level or you can configure certain properties to be ignored by the randomization. This allows for nested randomized objects. Say your Person has a new pet property.

class Person {
    String name
    Date birthDate
    Pet pet
}

class Pet {
    String name
}

You can easily provide randomized pets for your randomized people:

def rando = randomize(Person){
    (Date):{ new Date() },
    (Pet): { randomize(Pet).one() }
}
def instance = rando.one()

I have started using this in some of my testing, and it comes in pretty handy. My Vanilla library is not yet available via any public repositories; however, it will be soon, and if there is expressed interest, I can speed this up.

Secure REST in Spring

04 May 2015 ~ blog, groovy

Getting HTTPS to play nice with REST and non-browser web clients in development (with a self-signed certificate) can be a frustrating effort. I struggled for a while down the path of using the Spring RestTemplate, thinking that since I was using Spring MVC as my REST provider, it would make things easier; in this case, Spring did not come to the rescue, but Groovy did - or rather, the Groovy HTTPBuilder did.

To keep this discussion simple, we need a simple REST project using HTTPS. I found the Spring REST Service Guide project useful for this (with a few modifications to follow).

Go ahead and clone the project:

git clone git@github.com:spring-guides/gs-rest-service.git

Since this is a tutorial project, it has a few versions of the code in it. We are going to work with the "complete" version, which is a Gradle project. Let's go ahead and do a build and run just to ensure everything works out of the box:

cd gs-rest-service/complete
./gradlew bootRun

After a bunch of downloading and startup logging you should see that the application has started. You can give it a test by opening http://localhost:8080/greeting?name=Chris in your browser, which should respond with:

{
    "id": 2,
    "content": "Hello, Chris!"
}

Now that we have that running, we want a RESTful client to call it rather than hitting it using the browser. Let's get it working with the simple HTTP case first to ensure that we have everything working before we go into the HTTPS configuration. Create a Groovy script, rest-client.groovy, with the following content:

@Grab(group='org.codehaus.groovy.modules.http-builder', module='http-builder', version='0.7.1')

import groovyx.net.http.HTTPBuilder
import static groovyx.net.http.Method.GET

def http = new HTTPBuilder( 'http://localhost:8080/' )

http.get( path: 'greeting', query:[name:'Chris'] ) { resp, json ->
    println "Status: ${resp.status}"
    println "Content: $json"
}

Since this is not a discussion of HTTPBuilder itself, I will leave most of the details to your own research; however, it's pretty straightforward. We are making the same request we made in the browser, and after another initial batch of dependency downloads (Grapes) it should yield:

Status: 200
Content: [content:Hello, Chris!, id:6]

Ok, our control group is working. Now, let's add in the HTTPS. For the Spring Boot project, it's pretty trivial. We need to add an application.properties file in src/main/resources with the following content:

server.port = 8443
server.ssl.key-store = /home/cjstehno/.keystore
server.ssl.key-store-password = tomcat
server.ssl.key-password = tomcat

Of course, update the key-store path to your home directory. For the server, we also need to install a certificate for our use.

I am not a security certificate expert, so from here on out I will state that this stuff works in development but I make no claims that this is suitable for production use. Proceed at your own risk!

From the Tomcat 8 SSL How To, run the keytool -genkey -alias tomcat -keyalg RSA and run through the questions answering everything with 'localhost' (there seems to be a reason for this).

At this point you should be able to restart the server and hit it via HTTPS (https://localhost:8443/greeting?name=Chris) to retrieve a successful response as before, though you will need to accept the self-signed certificate.

Now try the client. Update the URL to the new HTTPS version:

def http = new HTTPBuilder( 'https://localhost:8443/' )

and give it a run. You should see something like:

Caught: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated

I will start with the simplest method of resolving this problem. HTTPBuilder provides a configuration method that will simply ignore these types of SSL errors. If you add:

http.ignoreSSLIssues()

before you make a request, it will succeed as normal. This should be used only as a development configuration, but there are times when you just want to get something working for testing. If that's all you want here, you're done. From here on out I will show how to get the SSL configuration working for a more formal use case.

Still with me? Alright, let's have fun with certificates! The HTTPBuilder wiki page for SSL gives us most of what we need. To summarize, we need to export our server certificate and then import it into a keyfile that our client can use. To export the server certificate, run:

keytool -exportcert -alias "tomcat" -file mytomcat.crt -keystore ~/.keystore -storepass tomcat

which will export the "tomcat" certificate from the keystore at "~/.keystore" (the one we created earlier) and save it into "mytomcat.crt". Next, we need to import this certificate into the keystore that will be used by our client as follows:

keytool -importcert -alias "tomcat" -file mytomcat.crt -keystore clientstore.jks -storepass clientpass

You will be asked to trust this certificate, which you should answer "yes" to continue.

Now that we have our certificate ready, we can update the client script to use it. The client script becomes:

@Grab(group='org.codehaus.groovy.modules.http-builder', module='http-builder', version='0.7.1')

import groovyx.net.http.HTTPBuilder
import static groovyx.net.http.Method.GET
import java.security.KeyStore
import org.apache.http.conn.scheme.Scheme
import org.apache.http.conn.ssl.SSLSocketFactory

def http = new HTTPBuilder( 'https://localhost:8443/' )

def keyStore = KeyStore.getInstance( KeyStore.defaultType )

new File( args[0] ).withInputStream {
    keyStore.load( it, args[1].toCharArray() )
}

http.client.connectionManager.schemeRegistry.register(new Scheme("https", new SSLSocketFactory(keyStore), 443) )

http.get( path: 'greeting', query:[name:'Chris'] ) { resp, json ->
    println "Status: ${resp.status}"
    println "Content: $json"
}

The main changes from the previous version are the loading and use of the keystore by the connection manager. When you run this version of the script, with:

groovy rest-client.groovy clientstore.jks clientpass

you get:

Status: 200
Content: [content:Hello, Chris!, id:1]

We are now using HTTPS on both the server and client for our REST service. It's not all that bad to set up once you figure out the steps, but in general the information seems to be tough to find.

Tour de Mock 6: Spock

09 April 2015 ~ blog, groovy, testing

My last entry in my "Tour de Mock" series was focused on basic Groovy mocking. In this post, I am going to take a look at the Spock Framework, which is an alternative testing framework with a lot of features, including its own mocking API.

Since it's been a while, let's refer back to the original posting as a refresher of what is being tested. We have a Servlet, the EmailListServlet:

public class EmailListServlet extends HttpServlet {

    private EmailListService emailListService;

    public void init() throws ServletException {
        final ServletContext servletContext = getServletContext();
        this.emailListService = (EmailListService)servletContext.getAttribute(EmailListService.KEY);

        if(emailListService == null) throw new ServletException("No ListService available!");
    }

    protected void doGet(final HttpServletRequest req, final HttpServletResponse res) throws ServletException, IOException {
        final String listName = req.getParameter("listName");
        final List<String> list = emailListService.getListByName(listName);

        PrintWriter writer = null;
        try {
            writer = res.getWriter();

            for(final String email : list){
                writer.println(email);
            }
        } finally {
            if(writer != null) writer.close();
        }
    }
}

which uses an EmailListService

public interface EmailListService {

    public static final String KEY = "com.stehno.mockery.service.EmailListService";

    /**
     * Retrieves the list of email addresses with the specified name. If no list
     * exists with that name an IOException is thrown.
     */
    List<String> getListByName(String listName) throws IOException;
}

to retrieve lists of email addresses, because that's what you do, right? It's just an example. :-)

First, we need to add Spock to our build (recently converted to Gradle, but basically the same) by adding the following line to the build.gradle file:

testCompile "org.spockframework:spock-core:1.0-groovy-2.4"

Next, we need a test class. Spock uses the concept of a test "Specification" so we create a simple test class as:

class EmailListServlet_SpockSpec extends Specification {
    // test stuff here...
}

Not all that different from a JUnit test; conceptually they are very similar.

Just as in the other examples of testing this system, we need to setup our mock objects for the servlet environment and other collaborators:

def setup() {
    def emailListService = Mock(EmailListService) {
        _ * getListByName(null) >> { throw new IOException() }
        _ * getListByName('foolist') >> LIST
    }

    def servletContext = Mock(ServletContext) {
        1 * getAttribute(EmailListService.KEY) >> emailListService
    }

    def servletConfig = Mock(ServletConfig) {
        1 * getServletContext() >> servletContext
    }

    emailListServlet = new EmailListServlet()
    emailListServlet.init servletConfig

    request = Mock(HttpServletRequest)
    response = Mock(HttpServletResponse)
}

Spock provides a setup method that you can override to perform your test setup operations, such as mocking. In this example, we are mocking the service interface and the servlet API interfaces so that they behave in the desired manner.

The mocking provided by Spock took a little getting used to when coming from a primarily mockito-based background, but once you grasp the overall syntax, it's actually pretty expressive. In the code above for the EmailListService, I am mocking the getListByName(String) method such that it will accept any number of calls with a null parameter and throw an exception, as well as any number of calls with a foolist parameter which will return a reference to the email address list. Similarly, you can specify that you expect only N calls to a method as was done in the other mocks. You can dig a little deeper into the mocking part of the framework in the Interaction-based Testing section of the Spock documentation.

Now that we have our basic mocks ready, we can test something. As in the earlier examples, we want to test the condition when no list name is specified and ensure that we get the expected Exception thrown:

def 'doGet: without list'() {
    setup:
    1 * request.getParameter('listName') >> null

    when:
    emailListServlet.doGet request, response

    then:
    thrown(IOException)
}

One thing you should notice right away is that Spock uses label blocks to denote different parts of a test method. Here, the setup block is where we do any additional mocking or setup specific to this test method. The when block is where the actual operations being tested are performed while the then block is where the results are verified and conditions examined.

In our case, we need to mock out the request parameter to return null and then we need to ensure that an IOException is thrown.

Our other test is the case when a valid list name is provided:

def 'doGet: with list'() {
    setup:
    1 * request.getParameter('listName') >> 'foolist'

    def writer = Mock(PrintWriter)

    1 * response.getWriter() >> writer

    when:
    emailListServlet.doGet request, response

    then:
    1 * writer.println(LIST[0])
    1 * writer.println(LIST[1])
    1 * writer.println(LIST[2])
}

In the then block here, we verify that the println(String) method of the mocked PrintWriter is called with the correct arguments in the correct order.

Overall, Spock is a pretty clean and expressive framework for testing and mocking. It actually has quite a few other interesting features that beg to be explored.

You can find the source code used in this posting in my TourDeMock project.

Testing AST Transformations

08 March 2015 ~ blog, groovy, testing, vanilla

While working on my Effigy project, I have gone deep into the world of Groovy AST Transformations and found that they are, in my opinion, the most interesting and useful feature of the Groovy language; however, developing them is a bit of a poorly-documented black art, especially around writing unit tests for your transformations. Since the code you are writing is run at compile-time, you generally have little access or view to what is going on at that point and it can be quite frustrating to try and figure out why something is failing.

After some Googling and experimentation, I have been able to piece together a good method for testing your transformation code, and it's actually not all that hard. Also, you can do your development and testing in a single project, rather than in a main project and a separate testing project (to account for the need to compile the code under test).

The key to making transforms testable is the GroovyClassLoader which gives you the ability to compile Groovy code on the fly:

def clazz = new GroovyClassLoader().parseClass(sourceCode)

During that parseClass method is when all the AST magic happens. This means you can not only easily test your code, but also debug into your transformations to get a better feel for what is going wrong when things break - and they often do.
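As a minimal standalone demonstration (no AST transformation involved yet), parseClass turns a source string into a usable Class at runtime:

```groovy
// compile a class from a String and instantiate it - the same mechanism used
// to exercise AST transformations in tests
def source = '''
    class Greeter {
        String greet(String name) { "Hello, $name" }
    }
'''

def clazz = new GroovyClassLoader().parseClass(source)
def greeter = clazz.newInstance()

assert greeter.greet('AST') == 'Hello, AST'
```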

For my testing, I have started building a ClassBuilder code helper that is a shell for String-based source code. You provide a code template that acts as your class shell, and then you inject code for your specific test case. You end up with a reasonably clean means of building test code and instantiating it:

private final ClassBuilder code = forCode('''
    package testing

    import com.stehno.ast.annotation.Counted

    class CountingTester {
''')

@Test void 'single method'(){
    def instance = code.inject('''
        @Counted
        String sayHello(String name){
            "Hello, $name"
        }
    ''').instantiate()

    assert instance.sayHello('AST') == 'Hello, AST'
    assert instance.getSayHelloCount() == 1

    assert instance.sayHello('Counting') == 'Hello, Counting'
    assert instance.getSayHelloCount() == 2
}

The forCode method creates the builder and prepares the code shell. This construct may be reused for each of your tests.

The inject method adds in the actual code you care about, meaning your transformation code being tested.

The instantiate method uses the GroovyClassLoader internally to load the class and then instantiate it for testing.

I am going to add a version of the ClassBuilder to my Vanilla project once it is more stable; however, I have a version of it and a simple AST testing demo project in the ast-testing CoffeaElectronica sub-repo. This sample code builds a simple AST Transformation for counting method invocations and writes normal unit tests for it (the code above is taken from one of the tests).

Note: I have recently discovered the org.codehaus.groovy.tools.ast.TransformTestHelper class; I have not yet tried it out, but it seems to provide a similar base functionality set to what I have described here.
