Custom Spring Boot Shell Banner

25 March 2016 ~ blog, groovy, spring

I did a Groovy User Group talk recently related to my Spring Boot Remote Shell blog post, and while putting the talk together I stumbled across a bug in the integration between Spring Boot and the CRaSH shell (see Spring-Boot-3988). The custom banner you can add to your Spring Boot application (as /resources/banner.txt) is not applied to the shell by default, so you get the boring Spring logo every time you start up the shell. I had worked with the CRaSH shell previously and remembered that the banner was customizable, so I did a little digging and figured out how to code a work-around - I also added this information to the bug ticket. I considered contributing a pull request, but I am not sure how this would be coded into the default application framework.

The work-around is pretty simple and straightforward if you have worked with the CRaSH shell before: you use its own customization mechanism and have it pull in your Spring Boot custom banner. In your /src/main/resources/commands directory you add a login.groovy file, which CRaSH will load with every shell connection. The file allows customization of the banner and the prompt, and we can load our Spring banner from the classpath. The basic code is as follows:

login.groovy
welcome = { ->
    def hostName
    try {
        hostName = java.net.InetAddress.getLocalHost().getHostName()
    } catch (java.net.UnknownHostException ignore) {
        hostName = 'localhost'
    }

    // load the custom Spring Boot banner from the classpath (YourApplication is your application class)
    String banner = YourApplication.getResourceAsStream('/banner.txt').text

    return """
${banner}
Logged into $hostName @ ${new Date()}
"""
}

prompt = { ->
    return "% ";
}

It’s a silly little thing to worry about, but sometimes it’s the little things that make an application feel more like your own.

I have created a pull request in the spring-boot project to address this issue… we'll see what happens.

Groovy Dependency Injection

19 March 2016 ~ blog, groovy

Dependency Injection frameworks were a dime a dozen for a while - everybody had their own, and probably a spare just in case. For the most part the field has settled down to a few big players; the Spring Framework and Google Guice are the only two that come to mind. While both of these have their pluses and minuses, they both carry a certain level of overhead in libraries and understanding. Sometimes you want to throw something together quickly, or you are in a scenario where you can't use one of these off-the-shelf libraries. I ran into this recently, and while I still wanted to do something Spring/Guice-like, I could not use either of them - but I did have Groovy available.

Note
I want to preface the discussion here by saying that I am not suggesting you stop using Spring or Guice or whatever you are using now in favor of rolling your own Groovy DI - this is purely a sharing of information about how you can, if you ever need to.

Let's use as an example a batch application that processes some game scores and reports on the min/max/average values. We will use a database (H2) just to show a little more configuration depth, and I will use the TextFileReader class from my Vanilla project to keep things simple and focused on DI rather than logic.

First, we need the heart of our DI framework: the configuration class. Let's call it Config. We will also need a means of loading external configuration properties, and this is where our first Groovy helper comes in, the ConfigSlurper. The ConfigSlurper does what it sounds like - it slurps up a configuration file with a Groovy-like syntax and converts it to a ConfigObject. To start with, our Config class looks something like this:

class Config {
    private final ConfigObject config

    Config(final URL configLocation) {
        config = new ConfigSlurper().parse(configLocation)
    }
}

The backing configuration file we will use, looks like this:

inputFile = 'classpath:/scores.csv'

datasource {
    url = 'jdbc:h2:mem:test'
    user = 'sa'
    pass = ''
}

This will live in a file named application.cfg and as can be seen, it will store our externalized config properties.

Next, let's configure our DataSource. Both Spring and Guice have a similar "bean definition" style and, surely based on those influences, I came up with something similar here:

@Memoized(protectedCacheSize = 1, maxCacheSize = 1)
DataSource dataSource() {
    JdbcConnectionPool.create(
        config.datasource.url,
        config.datasource.user,
        config.datasource.pass
    )
}

Notice that I used the @Memoized Groovy transformation annotation. This ensures that once the "bean" is created, the same instance is reused, and since I will only ever have one, I can limit the cache size and make sure it sticks around. As an interesting side item, I created a collector annotation version of the memoized functionality and named it @OneInstance, since @Singleton was already taken.

@Memoized(protectedCacheSize = 1, maxCacheSize = 1)
@AnnotationCollector
@interface OneInstance {}

It just keeps things a little cleaner:

@OneInstance DataSource dataSource() {
    JdbcConnectionPool.create(
        config.datasource.url,
        config.datasource.user,
        config.datasource.pass
    )
}

Lastly, notice how the ConfigObject is used to retrieve the configuration property values - very clean and concise.

Next, we need an input file to read and a TextFileReader to read it, so we will configure those as well.

@OneInstance Path inputFilePath() {
    if (config.inputFile.startsWith('classpath:')) {
        return Paths.get(Config.getResource(config.inputFile - 'classpath:').toURI())
    } else {
        return new File(config.inputFile).toPath()
    }
}

@OneInstance TextFileReader fileReader() {
    new TextFileReader(
        filePath: inputFilePath(),
        firstLine: 2,
        lineParser: new CommaSeparatedLineParser(
            (0): { v -> v as long },
            (2): { v -> v as int }
        )
    )
}

I added a little configuration sugar so that you can define the input file as either a classpath file or an external file. The TextFileReader is set up to parse the CSV data file as three columns of data: an id (long), a username (String) and a score (int). The data file looks like this:

# id,username,score
100,bhoser,4523
200,ripplehauer,235
300,jegenflur,576
400,bobknows,997

The last thing we need in the configuration is our service, which will handle the data management and the stat calculations; we'll call it the StatsService:

@TypeChecked
class StatsService {

    private Sql sql

    StatsService(DataSource dataSource) {
        sql = new Sql(dataSource)
    }

    StatsService init() {
        sql.execute('create table scores (id bigint PRIMARY KEY, username VARCHAR(20) NOT NULL, score int NOT NULL )')
        this
    }

    void input(long id, String username, int score) {
        sql.executeUpdate(
            'insert into scores (id,username,score) values (?,?,?)',
            id,
            username,
            score
        )
    }

    void report() {
        def row = sql.firstRow(
            '''
            select
                count(*) as score_count,
                avg(score) as average_score,
                min(score) as min_score,
                max(score) as max_score
            from scores
            '''
        )

        println "Count  : ${row.score_count}"
        println "Min    : ${row.min_score}"
        println "Max    : ${row.max_score}"
        println "Average: ${row.average_score}"
    }
}

I'm just going to dump it out there, since it's mostly SQL logic to load the data into the table and then report the stats out to standard output. We will wire it in like the others in Config:

@OneInstance StatsService statsService() {
    new StatsService(dataSource()).init()
}

With that, our configuration is done. Now we need to use it in an application, which we’ll call Application:

class Application {

    static void main(args){
        Config config = Config.fromClasspath('/application.cfg')

        StatsService stats = config.statsService()
        TextFileReader reader = config.fileReader()

        reader.eachLine { Object[] line->
            stats.input(line[0], line[1], line[2])
        }

        stats.report()
    }
}

We instantiate a Config object, call the bean accessor methods and use the beans to do the desired work. I added the fromClasspath(String) helper method to simplify loading config from the classpath.
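A minimal version of that helper might look something like the following (a sketch, assuming the URL-based constructor shown earlier):

static Config fromClasspath(final String path) {
    // resolve the config file as a classpath resource relative to the Config class
    new Config(Config.getResource(path))
}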

Like I said, this is no full-time replacement for a real DI framework; however, when I was in a pinch, this came in pretty handy and worked really well. It was also easy to extend the Config class in the testing source so that certain parts of the configuration could be overridden and mocked as needed during testing.
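As an illustrative sketch of that testing approach (the TestConfig class and its stubbed database are hypothetical):

class TestConfig extends Config {

    TestConfig() {
        super(TestConfig.getResource('/application.cfg'))
    }

    @Override
    @OneInstance DataSource dataSource() {
        // swap in an isolated in-memory database for testing
        JdbcConnectionPool.create('jdbc:h2:mem:testing', 'sa', '')
    }
}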

Note
The demo code for this post is on GitHub: cjstehno/groovy-di.

Dependency Duplication Checking

12 March 2016 ~ blog, groovy, gradle

Sometimes it takes a critical mass threshold of running into the same issue repeatedly to really do something about it. How often, when working with a dependency manager like Gradle or Maven, have you run into some runtime issue only to find that it was caused by having two (or more) different versions of a build dependency at runtime? More often than you would like, I am sure. It can be a real surprise when you actually go digging into your aggregated dependency list, only to find more than one duplicate dependency just waiting to become a problem.

What do I mean by duplicate dependency? Basically, it's just what it sounds like: you have two versions of the same dependency. Something like:

org.codehaus.groovy:groovy-all:2.4.4
org.codehaus.groovy:groovy-all:2.4.5

Most likely, your project defines one of them and some other dependency brought the other along for the ride. It is usually pretty easy to resolve these extra dependencies; in Gradle you can run the dependencies task to see which dependency is bringing the extra library in:

./gradlew dependencies > deps.txt

I like to dump the output to a text file for easier viewing. Then, once you find the culprit, you can exclude the transitive dependency:

compile( 'com.somebody:coollib:2.3.5' ){
    exclude group:'org.codehaus.groovy', module:'groovy-all'
}

Then you can run the dependencies task again to ensure that you got rid of it. Generally, this is a safe procedure; however, sometimes you get into a situation where different libraries depend on different versions that have significant code differences - that's when the fun begins, and it usually ends in having to up- or down-grade various dependencies until you get a set that works and is clean.

What is the problem with having multiple versions of the same library in your project? Sometimes nothing, sometimes everything. The classloader will load whichever one is defined first in the classpath. If your project needs a class Foo with a method bar(), and the version you expect to use has it but the other version does not, bad things can happen at runtime.
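As a minimal illustration (Foo here is a hypothetical library class):

// compiled against the newer library, where Foo has a bar() method...
Foo foo = new Foo()
foo.bar() // ...but if the older version is first on the classpath,
          // this fails at runtime with a NoSuchMethodError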

Ok, now we know generally how to solve the multiple dependency problem - we're done, right? Sure, for a month or so. Unless your project is done and no longer touched, new dependencies and duplicates will creep in over time. I did this duplication purge on a project at work a few months ago, and just last week I took a peek at the aggregated dependency list and was truly not so shocked to see three duplicated libraries, one of which was probably the cause of some major performance issues we were facing. That's what inspired me to solve the problem, at least to the point of being notified when duplications creep in.

I created the dependency-checker Gradle plugin. It is available in the Gradle Plugin Repository. At this point, it has one added task, checkDependencies, which, as the name suggests, searches through all the dependencies of the project to see if you have any duplicates within a configuration. If it finds duplicates, it will write them to the output log and fail the build.

Currently, you need to run the task for the checking to occur. I would like to have it run with the default check or build tasks, but the code I had for that was not working - a later version, I guess. You can add that functionality into your own build by adding one or two lines to your build.gradle file:

tasks.check.dependsOn checkDependencies
tasks.build.dependsOn checkDependencies

These will make the appropriate tasks depend on the dependency check so that it will be run with every build - that way you will know right away that you have a potential problem.
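For reference, applying the plugin itself would look something like the following - note that the plugin id and version shown here are illustrative placeholders; use the coordinates listed on the plugin's page in the Gradle Plugin Repository:

plugins {
    id 'com.stehno.gradle.dependency-checker' version '0.1.0' // hypothetical coordinates
}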

I did take a tour around Google and the plugin repository just to make sure there was nothing else providing this functionality - so hopefully I am not duplicating anyone else’s work.

Vanilla TextFileReader/Writer

06 March 2016 ~ blog, groovy, vanilla

Something I have found myself doing quite often over my whole career as a developer is reading and writing simple text file data. Whether it is a quick data dump or a data set to be loaded from a 3rd party, it is something I end up doing a lot, and usually it is something coded mostly from scratch since, surprisingly enough, there are very few tools available for working with formatted text files. Sure, there are a few for CSV, but quite often I get a request to read or write a format that is kind of similar to CSV, but just different enough that it breaks a standard CSV parser for whatever reason. Recently, I decided to add some utility components to my Vanilla project with the aim of making these readers and writers simpler to build.

Let’s start off with the com.stehno.vanilla.text.TextFileWriter and say we have a data source of Person objects in our application that the business wants dumped out to a text file (so they can import it into some business tools that only ever seem capable of importing simple text files). In the application, the data structure looks something like this:

class Person {
    String firstName
    String middleName
    String lastName
    int age
    float height
    float weight
}

With the TextFileWriter you need to define a LineFormatter, which will be used to format the generated lines of text, one per object written. The LineFormatter defines two methods: String formatComment(String) for formatting a comment line, and String formatLine(Object) for formatting a data line. A simple implementation is provided - the CommaSeparatedLineFormatter will generate comment lines prefixed with a # and will format a given Collection as a CSV line.

The available implementation will not work for our case, so we will need to define our own LineFormatter. We want the formatted data lines to be of the form:

# Last-Name,First-Name-Middle-Initial,Attrs
Smith,John Q,{age:42, height:5.9, weight:230.5}

Yes, that’s a bit of a convoluted format, but I have had to generate worse. Our LineFormatter ends up being something like this:

class PersonLineFormatter implements LineFormatter {

    @Override
    String formatComment(String text) {
        "# $text" (1)
    }

    @Override
    String formatLine(Object object) {
        Person person = object as Person
        "${person.lastName},${person.firstName} ${person.middleName[0]},{age:${person.age}, height:${person.height}, weight:${person.weight}}" (2)
    }
}
  1. We specify the comment as being prefixed by a # symbol.

  2. Write out the Person object as the formatted String.

We see that implementing the LineFormatter keeps all the application specific logic isolated from the common operation of actually writing the file. Now we can use our formatter as follows:

TextFileWriter writer = new TextFileWriter(
    lineFormatter: new PersonLineFormatter(),
    filePath: new File(outputDir, 'people.txt')
)

writer.writeComment('Last-Name,First-Name-Middle-Initial,Attrs')

Collection<Person> people = peopleDao.listPeople()

people.each { Person p->
    writer.write(p)
}

This will write out the text file in the desired format with very little new coding required.

Generally, writing out text representations of application data is not really all that challenging, since you have access to the data you need and some control over the formatting of the objects to be represented. The real challenge is usually going in the other direction, when you are reading in a data file from some external source - this is where the com.stehno.vanilla.text.TextFileReader becomes useful.

Let’s say you receive a request to import the data file we described above, maybe it was generated by the same business tools I mentioned earlier. We have something like this:

# Last-Name,First-Name-Middle-Initial,Attrs
Smith,John Q,{age:42, height:5.9, weight:230.5}
Jones,Robert M,{age:38, height:5.6, weight:240.0}
Mendez,Jose R,{age:25, height:6.1, weight:232.4}
Smalls,Jessica X,{age:30, height:5.5, weight:175.2}

The TextFileReader requires a LineParser to parse the input file lines into objects. It defines three methods: boolean parsable(String), used to determine whether or not a line should be parsed; Object[] parseLine(String), used to parse a line of text; and Object parseItem(Object, int), used to parse an individual element of the comma-separated line. There is a default implementation provided - the CommaSeparatedLineParser will parse simple comma-separated lines of text into arrays of Objects based on configured item converters; however, this will not work in the case of our file, since there are commas in the data items themselves (the JSON-like format of the last element). So we need to implement one. Our LineParser will look something like the following:

class PersonLineParser implements LineParser {

    private static final String HASH = '#' // the comment prefix

    boolean parsable(String line){
        line && !line.startsWith(HASH) (1)
    }

    Object[] parseLine(String line){ (2)
        int idx = 0
        def elements = line.split(',').collect { parseItem(it, idx++) }

        [
            new Person(
                firstName:elements[1][0],
                middleName:elements[1][1],
                lastName:elements[0],
                age:elements[2],
                height:elements[3],
                weight:elements[4],
            )
        ] as Object[]
    }

    // Smith,John Q,{age:42, height:5.9, weight:230.5}
    // 0    ,1     ,2      ,3          ,4
    Object parseItem(Object item, int index){ (3)
        switch(index){
            case 0:
                return item as String
            case 1:
                return item.split(' ')
            case 2:
                return item.split(':')[1] as int
            case 3:
                return item.split(':')[1] as float
            case 4:
                return item.split(':')[1][0..-2] as float
        }
    }
}
  1. We want to ignore blank lines or lines that start with a # symbol.

  2. We extract the line items and build the Person object.

  3. We convert the line items to our desired types.

It's not pretty, but it does the job and keeps all the line parsing logic out of the main file loading functionality. Our code to read in the file would look something like:

setup:
TextFileReader reader = new TextFileReader(
    filePath: new File(inputDir, 'people.txt'),
    lineParser: new PersonLineParser(),
    firstLine: 2 (1)
)

when:
def people = []

reader.eachLine { Object[] data ->
    people << data[0]
}
  1. We skip the first line, since it will always be the header

The provided implementations of LineFormatter and LineParser will not account for every scenario, but hopefully they will hit some of them and provide a guideline for implementing your own. If nothing else, these components help to streamline the reading and writing of formatted text data so that you can get it done and focus on other, more challenging development tasks.

Spring Boot Remote Shell

07 November 2015 ~ blog, groovy, spring

Spring Boot comes with a ton of useful features that you can enable as needed, and in general the documentation is pretty good; however, sometimes it feels like they gloss over a feature that you eventually realize is much more useful than it originally seemed. The remote shell support is one of those features.

Let’s start off with a simple Spring Boot project based on the example provided with the Boot documentation. Our build.gradle file is:

build.gradle
buildscript {
    repositories {
        jcenter()
    }

    dependencies {
        classpath 'org.springframework.boot:spring-boot-gradle-plugin:1.2.7.RELEASE'
    }
}

version = "0.0.1"
group = "com.stehno"

apply plugin: 'groovy'
apply plugin: 'spring-boot'

sourceCompatibility = 8
targetCompatibility = 8

mainClassName = 'com.stehno.SampleController'

repositories {
    jcenter()
}

dependencies {
    compile "org.codehaus.groovy:groovy-all:2.4.5"

    compile 'org.springframework.boot:spring-boot-starter-web'
}

task wrapper(type: Wrapper) {
    gradleVersion = "2.8"
}

Then, our simple controller and starter class looks like:

SampleController.groovy
@Controller
@EnableAutoConfiguration
public class SampleController {

    @RequestMapping('/')
    @ResponseBody
    String home() {
        'Hello World!'
    }

    static void main(args) throws Exception {
        SpringApplication.run(SampleController, args)
    }
}

Run it using:

./gradlew clean build bootRun

and you get your run-of-the-mill "Hello world" application. For our demonstration purposes, we need something a bit more interesting. Let's make the controller something like a "Message of the Day" server, which will return a fixed configured message. Remove the hello controller action and add in the following:

String message = 'Message for you, sir!'

@RequestMapping('/') @ResponseBody
String message() {
    message
}

which will return the static message "Message for you, sir!" for every request. Running the application now will still be pretty uninteresting - but wait, it gets better.

Now, we would like to have the ability to change the message as needed without rebuilding or even restarting the server. There are a handful of ways to do this; however, I'm going to discuss one of the seemingly less used options… the CRaSH shell integration provided in Spring Boot (43. Production Ready Remote Shell).

To add the remote shell support in Spring Boot, you add the following line to your dependencies block in your build.gradle file:

compile 'org.springframework.boot:spring-boot-starter-remote-shell'

Now, when you run the application, you will see an extra line in the server log:

Using default password for shell access: 44b3556b-ff9f-4f82-9f1b-54a16da471d5

Since no password was configured, Boot has provided a randomly generated one for you (obviously you would configure this in a real system). You now have an SSH connection available to your application. Using the ssh client of your choice, you can log in using:

ssh -p 2000 user@localhost

Which will ask you for the provided password. Once you have logged in you are connected to a secure shell running inside your application. You can run help at the prompt to get a list of available commands, which will look something like this:

> help
Try one of these commands with the -h or --help switch:

NAME       DESCRIPTION
autoconfig Display auto configuration report from ApplicationContext
beans      Display beans in ApplicationContext
cron       manages the cron plugin
dashboard  a monitoring dashboard
egrep      search file(s) for lines that match a pattern
endpoint   Invoke actuator endpoints
env        display the term env
filter     a filter for a stream of map
java       various java language commands
jmx        Java Management Extensions
jul        java.util.logging commands
jvm        JVM informations
less       opposite of more
mail       interact with emails
man        format and display the on-line manual pages
metrics    Display metrics provided by Spring Boot
shell      shell related command
sleep      sleep for some time
sort       sort a map
system     vm system properties commands
thread     JVM thread commands
help       provides basic help
repl       list the repl or change the current repl

As you can see, you get quite a bit of functionality right out of the box. I will leave the discussion of each of the provided commands to another post. What we are interested in at this point is adding our own command to update the message displayed by our controller.

The really interesting part of the shell integration is the fact that you can extend it with your own commands.

Create a new directory src/main/resources/commands which is where your extended commands will live, and then add a simple starting point class for our command:

message.groovy
package commands

import org.crsh.cli.Usage
import org.crsh.cli.Command
import org.crsh.command.InvocationContext

@Usage('Interactions with the message of the day.')
class message {

    @Usage('View the current message of the day.')
    @Command
    def view(InvocationContext context) {
        return 'Hello'
    }
}

The @Usage annotations provide the help/usage documentation for the command, while the @Command annotation denotes that the view method is a command.

Now, when you run the application and list the shell commands, you will see our new command added to the list:

message    Interactions with the message of the day.

If you run the command as message view you will get the static "Hello" message returned to you on the shell console.

Okay, we need the ability to view our current message of the day. The InvocationContext has attributes which are populated by Spring, one of which is spring.beanfactory, a reference to the Spring BeanFactory for your application. We can access the current message of the day by replacing the content of the view method with the following:

BeanFactory beans = context.attributes['spring.beanfactory']
return beans.getBean(SampleController).message

where we find our controller bean and simply read the message property. Running the application and the shell command now yields:

Message for you, sir!

While that is pretty cool, we are actually here to modify the message, not just view it - and this is just as easy. Add a new command named update:

@Usage('Update the current message of the day.')
@Command
def update(
    InvocationContext context,
    @Usage('The new message') @Argument String message
) {
    BeanFactory beans = context.attributes['spring.beanfactory']
    beans.getBean(SampleController).message = message
    return "Message updated to: $message"
}

Now, rebuild/restart the server and start up the shell. If you execute:

message update "This is cool!"

You will update the configured message, which you can verify using the message view command - or better yet, you can hit your server and see that the returned message has been updated… no restart required. Indeed, this is cool.

Tip
You can find a lot more information about writing your own commands in the CRaSH documentation for Developing Commands. There is a lot of functionality that I am not covering here.

At this point, we are functionally complete. We can view and update the message of the day without requiring a restart of the server. But there are still some added goodies provided by the shell, especially around shell UI support - yes, it's text, but it can still be pretty. One of the ways CRaSH allows you to pretty things up is with colors and formatting via styles and the UIBuilder (which is sadly under-documented).

Let’s add another property to our controller to make things more interesting. Just add a Date lastUpdated = new Date() field. This will give us two properties to play with. Update the view action as follows:

SampleController controller = context.attributes['spring.beanfactory'].getBean(SampleController)

String message = controller.message
String date = controller.lastUpdated.format('MM/dd/yyyy HH:mm')

out.print new UIBuilder().table(separator: dashed, overflow: Overflow.HIDDEN, rightCellPadding: 1) {
    header(decoration: bold, foreground: black, background: white) {
        label('Date')
        label('Message')
    }

    row {
        label(date, foreground: green)
        label(message, foreground: yellow)
    }
}

We still retrieve the instance of the controller as before; however, now our output rendering is a bit more complicated, though still pretty understandable. We are creating a new UIBuilder for a table and then applying the header and row contents to it. It's actually a very powerful construct - I just had to dig around in the project source code to figure out how to make it work.

You will also need to update the update command to set the new date field:

SampleController controller = context.attributes['spring.beanfactory'].getBean(SampleController)
controller.message = message
controller.lastUpdated = new Date()

return "Message updated to: $message"

Once you have that built and running you can run the message view command and get a much nicer multi-colored table output.

> message view
Date             Message
-------------------------------------------------------------
11/05/2015 10:37 And now for something completely different.

Which wraps up what we are trying to do here, and even puts a bow on it. You can find more information on the remote shell configuration options in the Spring Boot documentation, in Appendix A: Common Application Properties. This is where you can configure the port, change the authentication settings, and even disable some of the default provided commands.
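As a quick sketch, the shell port and credentials can be pinned down in application.properties - the property names below are from the Boot 1.2-era appendix, so verify them against your Boot version:

shell.ssh.port=2000
shell.auth=simple
shell.auth.simple.user.name=user
shell.auth.simple.user.password=secret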

The remote shell support is one of the more interesting, but underused features in Spring Boot. Before Spring Boot was around, I was working on a project where we did a similar integration of CRaSH shell with a Spring-based server project and it provided a wealth of interesting and useful opportunities to dig into our running system and observe or make changes. Very powerful.

Multi-Collection Pagination

31 October 2015 ~ blog

A few years ago, I was working on a project where we had collections of data spread across multiple rows of data… and then we had to provide a paginated view of that data. This research was the result of those efforts. The discussion here is a bit more rigorous than I usually get, so if you just want the implementation code, jump to the bottom.

Introduction

Consider that you have a data set representing a collection of collections:

[
    [ A0, A1, A2, A3, A4, A5 ],
    [ B0, B1, B2, B3, B4, B5 ],
    [ C0, C1, C2, C3, C4, C5 ]
]

We want to retrieve the data in a paginated fashion, where the subset (page) with index P and subset size (page size) S is used to retrieve only the desired elements by the most efficient means possible.

Consider also that the data sets may be very large and that the internal collections may not be directly associated with the enclosing collection (e.g. two different databases).

Also consider that the subsets may cross collection boundaries or contain fewer than the desired number of elements.

Lastly, requests for data subsets will more likely be discrete events - one subset per request - rather than iterating over all results.

For a page size of four (S = 4) you would have the following five pages:

P0 : [ A0, A1, A2, A3 ]
P1 : [ A4, A5, B0, B1 ]
P2 : [ B2, B3, B4, B5 ]
P3 : [ C0, C1, C2, C3 ]
P4 : [ C4, C5 ]

Computations

The overall collection is traversed to determine how many elements are contained within each sub-collection; this may be pre-computed or done at runtime. Three counts are calculated or derived for each sub-collection:

  • Count (CI) - the number of elements in the sub-collection.

  • Count-before (CB) - the total count of all sub-collection elements counted before this collection, but not including this collection.

  • Count-with (CW) - the total count of all sub-collection elements counted before and including this collection.

For our example data set we would have:

[
    { CI:6, CB:0, CW:6 [ A0, A1, A2, A3, A4, A5 ] },
    { CI:6, CB:6, CW:12 [ B0, B1, B2, B3, B4, B5 ] },
    { CI:6, CB:12, CW:18 [ C0, C1, C2, C3, C4, C5 ] }
]

This allows for a simple means of selecting only the sub-collections we are interested in; those containing the desired elements based on the starting and ending indices for the subset (START and END respectively). These indices can easily be calculated as:

START = P * S

END = START + S - 1
Note
The indices referenced here are for the overall collection, not the individual sub-collections.

The desired elements will reside in sub-collections whose inclusive count (CW) is greater than the starting index and whose preceding count (CB) is less than or equal to the ending index, or:

CW > START and CB ≤ END

For the case of selecting the second subset of data (P = 1) with a page size of four (S = 4) we would have:

START = 4

END = 7

This will select the first two of the three sub-collections as "interesting" sub-collections containing at least some of our desired elements, namely:

{ CI:6, CB:0, CW:6 [ A0, A1, A2, A3, A4, A5 ] },
{ CI:6, CB:6, CW:12 [ B0, B1, B2, B3, B4, B5 ] }

What remains is to gather from these sub-collections (call them SC[0], SC[1]) the desired number of elements (S).

To achieve this, a local starting and ending index must be calculated while iterating through the "interesting" sub-collections to gather the elements until either the desired amount is obtained (S) or there are no more elements available.

  1. Calculate the initial local starting index (LOCAL_START) by subtracting the non-inclusive preceding count value of the first selected collection (SC[0]) from the overall starting index.

  2. Iterate the selected collections (in order) until the desired amount has been gathered

This is more clearly represented in pseudo code as:

LOCAL_START = START - SC[0].CB
REMAINING = S

for-each sc in SC while REMAINING > 0

    if( REMAINING < (sc.size() - LOCAL_START) )
        LOCAL_END = LOCAL_START + REMAINING - 1
    else
        LOCAL_END = sc.size()-1

    FOUND = sc.sub( LOCAL_START, LOCAL_END )
    G.addAll( FOUND )
    REMAINING = REMAINING - FOUND.size()
    LOCAL_START = 0

end

Where the gathered collection of elements (G) is your resulting data set containing the elements for the specified data page.

It must be stated that the ordering of the overall collection and the sub-collections must be consistent across multiple data requests for this procedure to work properly.

Implementation

Ok now, enough discussion. Let’s see what this looks like with some real Groovy code. First, we need our collections of collections data to work with:

def data = [
    [ 'A0', 'A1', 'A2', 'A3', 'A4', 'A5' ],
    [ 'B0', 'B1', 'B2', 'B3', 'B4', 'B5' ],
    [ 'C0', 'C1', 'C2', 'C3', 'C4', 'C5' ]
]

Next, we need to implement the algorithm in Groovy:

int page = 1
int pageSize = 4

// pre-computation

int before = 0
def prepared = data.collect {d ->
    def result = [
        countIn: d.size(),
        countBefore: before,
        countWith: before + d.size(),
        values:d
    ]

    before += d.size()

    return result
}

// main computation

int start = page * pageSize
int end = start + pageSize - 1

// select only the "interesting" sub-collections (CW > START and CB <= END)
def selected = prepared.findAll { it.countWith > start && it.countBefore <= end }

def localStart = start - selected[0].countBefore
def remaining = pageSize

def gathered = []

selected.each { sc->
    if( remaining ){
        def localEnd
        if( remaining < (sc.values.size() - localStart) ){
            localEnd = localStart + remaining - 1
        } else {
            localEnd = sc.values.size() - 1
        }

        def found = sc.values[localStart..localEnd]
        gathered.addAll(found)

        remaining -= found.size()
        localStart = 0
    }
}

println "P$page : $gathered"

which yields

P1 : [A4, A5, B0, B1]

and if you look all the way back up to the beginning of the article, you see that this is the expected data set for page 1 of the example data.

It’s not a scenario I have run into often, but it was a bit of a tricky one to unravel. The pre-computation steps ended up being the key to keeping it simple and stable.

Spring ViewResolver for "GSP"

26 October 2015 ~ blog, groovy, vanilla, spring

Recently, while working on a Spring MVC application, I was considering which template framework to use for my views, and I was surprised to realize that there was no implementation using the Groovy GStringTemplateEngine. There is one for the Groovy Markup Templates; however, in my opinion, that format seems pretty terrible - the templates are interesting in themselves, but they seem like they would be a nightmare to maintain, and your designers would kill you if they ever had to work with them.

This obvious gap in functionality surprised me, and even a quick Google search did not turn up any implementations, though there was some documentation around using the Grails GSP framework in a standard Spring Boot application - but that seemed like overkill for how simple the templates can be. Generally, implementing extensions to the Spring Framework is pretty simple, so I decided to give it a quick try… and I was right, it was not hard at all.

The ViewResolver implementation I came up with is an extension of the AbstractTemplateViewResolver with one main method of interest, the buildView(String) method which contains the following:

protected AbstractUrlBasedView buildView(final String viewName) throws Exception {
    GroovyTemplateView view = super.buildView(viewName) as GroovyTemplateView (1)

    URL templateUrl = applicationContext.getResource(view.url).getURL() (2)

    view.template = templateEngine.createTemplate(templateUrl) (3)

    view.encoding = defaultEncoding

    return view
}
  1. Call the super class to create a configured instance of the View

  2. Load the template from the ApplicationContext using the url property of the View

  3. Create the Template from the contents of the URL

This method basically just uses the view resolver framework to find the template file and load it with the GStringTemplateEngine - the framework takes care of the caching and model attribute management.

The View implementation is also quite simple; it is an extension of AbstractTemplateView, with the only implemented method being renderMergedTemplateModel():

protected void renderMergedTemplateModel(
    Map<String, Object> model, HttpServletRequest req, HttpServletResponse res
) throws Exception {
    res.contentType = contentType
    res.characterEncoding = encoding

    res.writer.withPrintWriter { PrintWriter out ->
        out.write(template.make(model) as String)
    }
}

The Template content is rendered using the configured model data and then written to the PrintWriter from the HttpServletResponse, which sends it to the client.

Lastly, you need to configure the resolver in your application:

@Bean ViewResolver viewResolver() {
    new GroovyTemplateViewResolver(
        contentType: 'text/html',
        cache: true,
        prefix: '/WEB-INF/gsp/',
        suffix: '.gsp'
    )
}

One thing to notice here is all the functionality you get by default from the Spring ViewResolver framework for very little added code on your part.

Another thing to note is that the "GSP" file in this case is not really a true GSP; however, you have all the functionality provided by the GStringTemplateEngine, which is quite similar. An example template could be something like:

hello.gsp
<html>
    <head><title>Hello</title></head>
    <body>
        [${new Date()}] Hello, ${name ?: 'stranger'}

        <% if(personService.seen(name)){ %>
            You have been here ${personService.visits(name)} times.
        <% } %>
    </body>
</html>

It’s definitely a nice clean template language if you are already coding everything else in Groovy anyway.
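To tie it together, a controller could then render this template simply by returning its logical view name along with a model - a hypothetical example (the PersonService here is whatever provides the data the template expects):

@Controller
class GreetingController {

    @Autowired PersonService personService // hypothetical service used by the template

    @RequestMapping('/hello')
    ModelAndView hello() {
        // 'hello' resolves to /WEB-INF/gsp/hello.gsp via the resolver configured above
        new ModelAndView('hello', [name: 'Chris', personService: personService])
    }
}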

I will be adding a Spring helper library to my Vanilla project; the "vanilla-spring" project will have the final version of this code, though it should be similar to what is discussed here. The full source for the code discussed above is provided below for reference until the actual code is released.

GroovyTemplateViewResolver.groovy
package com.stehno.vanilla.spring.view

// imports removed...

@TypeChecked
class GroovyTemplateViewResolver extends AbstractTemplateViewResolver {

    /**
     * The default character encoding to be used by the template views. Defaults to UTF-8 if not specified.
     */
    String defaultEncoding = StandardCharsets.UTF_8.name()

    private final TemplateEngine templateEngine = new GStringTemplateEngine()

    GroovyTemplateViewResolver() {
        viewClass = requiredViewClass()
    }

    @Override
    protected Class<?> requiredViewClass() {
        GroovyTemplateView
    }

    @Override
    protected AbstractUrlBasedView buildView(final String viewName) throws Exception {
        GroovyTemplateView view = super.buildView(viewName) as GroovyTemplateView

        view.template = templateEngine.createTemplate(
            applicationContext.getResource(view.url).getURL()
        )

        view.encoding = defaultEncoding
        return view
    }
}
GroovyTemplateView.groovy
package com.stehno.vanilla.spring.view

// imports removed...

@TypeChecked
class GroovyTemplateView extends AbstractTemplateView {

    Template template
    String encoding

    @Override
    protected void renderMergedTemplateModel(
        Map<String, Object> model, HttpServletRequest req, HttpServletResponse res
    ) throws Exception {
        res.contentType = contentType
        res.characterEncoding = encoding

        res.writer.withPrintWriter { PrintWriter out ->
            out.write(template.make(model) as String)
        }
    }
}

Copying Data with ObjectMappers

10 October 2015 ~ blog, groovy, vanilla

When working with legacy codebases, I tend to run into a lot of scenarios where I am copying data objects from one format to another while an API is in transition or due to some data model mismatch.

Suppose we have an object in one system - I am using Groovy because it keeps things simple, but it could be Java as well:

class Person {
    long id
    String firstName
    String lastName
    LocalDate birthDate
}

and then you are working with a legacy (or external) API which provides similar data in the form of:

class Individual {
    long id
    String givenName
    String familyName
    String birthDate
}

and now you have to integrate the conversion from the old/external format (Individual) to your internal format (Person).

You can write the code in Java using the Transformer interface from Apache Commons Collections, which ends up with something like this:

public class IndividualToPerson implements Transformer<Individual,Person>{
    
    public Person transform(Individual indiv){
        Person person = new Person();
        person.setId( indiv.getId() );
        person.setFirstName( indiv.getGivenName() );
        person.setLastName( indiv.getFamilyName() );
        person.setBirthDate( LocalDate.parse(indiv.getBirthDate()) );
        return person;
    }
}

I wrote a blog post about this many years ago (Commons Collections - Transformers); however, if you have more than a handful of these conversions, you can end up handwriting a lot of the same code over and over, which can be error-prone and time-consuming. Even switching the code above to full-on Groovy does not really save you much, though it is better:

class IndividualToPerson implements Transformer<Individual,Person>{
    
    Person transform(Individual indiv){
        new Person(
            id: indiv.id,
            firstName: indiv.givenName,
            lastName: indiv.familyName,
            birthDate: LocalDate.parse(indiv.birthDate)
        )
    }
}

What I came up with was a simple mapping DSL which allows for straightforward definitions of the property mappings in the simplest code possible:

ObjectMapper individualToPerson = mapper {
    map 'id'
    map 'givenName' into 'firstName'
    map 'familyName' into 'lastName'
    map 'birthDate' using { d-> LocalDate.parse(d) }
}

which builds an instance of RuntimeObjectMapper, which is stateless and thread-safe. The ObjectMapper interface has a method copy(Object source, Object dest), which will copy the properties from the source object to the destination object. Your transformation code ends up something like this:

def people = individuals.collect { indiv->
    Person person = new Person()
    individualToPerson.copy(indiv, person)
    person
}

or we can use the create(Object, Class) method as:

def people = individuals.collect { indiv->
    individualToPerson.create(indiv, Person)
}

which is just a shortcut method for the same code, as long as you are able to create your destination object with a default constructor, which we are able to do.

There is also a third, slightly more useful option in this specific collector case:

def people = individuals.collect( individualToPerson.collector(Person) )

The collector(Class) method returns a Closure that is also a shortcut to the conversion code shown previously. It's mostly syntactic sugar, but it's nice and clean to work with.

Notice the 'using' method - this allows for conversion of the source data before it is set into the destination object. This is one of the more powerful features of the DSL. Consider the case where your Person class has an embedded Name class:

class Person {
    long id
    Name name
    LocalDate birthDate
}

@Canonical
class Name {
    String first
    String last
}

Now we want to map the name properties into this new embedded object rather than into the main object. The mapper DSL can do this too:

ObjectMapper individualToPerson = mapper {
    map 'id'
    map 'givenName' into 'name' using { p,src-> 
        new Name(src.givenName, src.familyName)
    }
    map 'birthDate' using { d-> LocalDate.parse(d) }
}

It's a bit odd, since you are mapping two properties into one property, but it gets the job done. The conversion closure will accept up to three parameters (or none): the first is the source property value being converted, the second is the source object instance, and the third is the destination object instance. The one thing to keep in mind when using the two- and three-parameter versions is that the order of your property mapping definitions may begin to matter, especially if you are working with the destination object.
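As an illustrative sketch of the three-parameter form (this example is hypothetical), the destination instance could be used to merge a value into a property that an earlier mapping may already have populated - which is exactly why definition order can matter:

ObjectMapper individualToPerson = mapper {
    map 'familyName' into 'name' using { value, src, dest ->
        // value is the source property, src the source instance, dest the destination instance
        dest.name ? new Name(dest.name.first, value) : new Name(src.givenName, value)
    }
}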

So far, we have been talking about runtime-based mappers that take your configuration and resolve your property mappings at runtime. It's reasonably efficient, since it doesn't do all that much; but consider the case where you have a million objects to transform - those extra property mapping operations start to add up, and that's when you go back to hand-coding it, unless there is a way to build the mappers at compile time rather than run time...

There is. This was something I had really wanted to get a hold of for this project and others: the ability to use a DSL to control the AST transformations used in code generation... or, in this case, using the mapper DSL in an annotation to create the mapper class at compile time, so that it is closer to what you would have hand-coded yourself (and also becomes a bit more performant, since there are fewer operations being executed at runtime).

Using the static approach is simple: you just write the DSL code in the @Mapper annotation on a method, property or field:

class Mappers {

    @Mapper({
        map 'id'
        map 'givenName' into 'firstName'
        map 'familyName' into 'lastName'
        map 'birthDate' using { d-> LocalDate.parse(d) }
    })
    static final ObjectMapper personMapper(){}
}

When the code compiles, a new implementation of ObjectMapper will be created and installed as the return value of the personMapper() method. The static version of the DSL has all of the same functionality as the dynamic version, except that it does not support using ObjectMappers directly in the using command; however, a workaround for this is to use a closure.

Object property mapping/copying is one of those things you don't run into all that often, but it is useful to have a simple alternative to hand-writing the code for it. Both the dynamic and static versions of the object mappers discussed here are available in my Vanilla library.

Lazy Immutables

23 September 2015 ~ blog, groovy, vanilla

A co-worker and I were discussing the Groovy @Immutable annotation recently, and I was thinking it would be useful if it allowed you to work on the object as a mutable object until you were ready to make it permanent, at which point you could "seal" it and make it immutable. This would give you a bit more freedom in how the object is configured - sometimes the standard immutable approach can be overly restrictive.

Consider the case of an immutable Person object:

@Immutable
class Person {
    String firstName
    String middleName
    String lastName
    int age
}

With @Immutable you have to create the object all at once:

def person = new Person('Chris','J','Stehno',42)

and then you're stuck with it. You can create a copy of it with one or more different properties using the copyWith method, but you need to specify copyWith=true in the annotation itself; then you can do something like:

Person otherPerson = person.copyWith(firstName:'Bob', age:50)

I'm not sure who "Bob J Stehno" is, though. With more complicated immutables, this all-at-once requirement can be annoying. This is where the @LazyImmutable annotation comes in (part of my Vanilla-Core library). With a similar Person class:

@LazyImmutable @Canonical
class Person {
    String firstName
    String middleName
    String lastName
    int age
}

Using the new annotation, you can create and populate the instance over time:

def person = new Person('Chris')
person.middleName = 'J'
person.lastName = 'Stehno'
person.age = 42

Notice that the @LazyImmutable annotation does not apply any other transforms (as the standard @Immutable does). It's a standard Groovy object, but with an added method: the asImmutable() method is injected via an AST transformation. This method will take the current state of the object and create an immutable version of it - this does imply that the properties of lazy immutable objects should follow the same rules as those of the standard immutables, so that the conversion is determinate. For our example case:

Person immutablePerson = person.asImmutable()

The created object is the same immutable object as would have been created by the @Immutable annotation, and it is generated as an extension of the class you created, so that its type is still valid. The immutable version of the object also has a useful added method: the asMutable() method is used to create a copy of the original mutable object.

Person otherMutable = immutablePerson.asMutable()

It's a fairly simple helper annotation, but it just fills one of those little functional gaps that you run into every now and then. Maybe someone else will find it useful.

Baking Your Blog with JBake, Groovy and GitHub

02 September 2015 ~ blog, groovy

As a developer, it has always bugged me to have my blog or web site content stored on a server managed by someone else, outside of my control. Granted, WordPress and the like are very stable and generally have means of pulling out your data if you need it, but I really just like to have my content under my own control. Likewise, I have other projects I want to work on, so building content management software is not really on my radar at this point; that's where JBake comes in.

JBake is a simple JVM-based static site generation tool that makes casual blogging quite simple once you get everything set up. It's a bit of a raw project at this point, so there are a few rough edges to work with, but I will help to file them down in the discussions below.

Getting started with JBake, you have a couple options. You can install JBake locally and use it as a command line tool, or you can use the JBake Gradle Plugin. The Gradle plugin is currently lacking the local server feature provided by the command line tools; however, it does provide a more portable development environment along with the universe of other Gradle plugins. We will use the Gradle plugin approach here and I will provide some workarounds for the missing features to bring the functionality back on even ground with the command line tool.

The first thing we need is our base project, and for that I am going to use a Lazybones template that I have created (which may be found in my lazybones-templates repository). You can use the Gradle plugin and do all the setup yourself, but the template was fairly simple to put together, and it allowed me to add in the missing features we need.

If you are unfamiliar with Lazybones, it's a Groovy-based project template framework along the lines of Yeoman and the old Maven Archetype plugin. Details for adding my template repo to your configuration can be found on the README page for my templates.

Create the empty project with the following:

lazybones create jbake-groovy cookies

where "cookies" is the name of our project and the name of the project directory to be created. You will be asked a few questions related to template generation. You should have something similar to the following:

$ lazybones create jbake-groovy cookies
Creating project from template jbake-groovy (latest) in 'cookies'
Define value for 'JBake Plugin Version' [0.2]:
Define value for 'JBake Version' [2.3.2]:
Define value for 'Gradle version' [2.3]:
GitHub project: [username/projectname.git]: cjstehno/cookies.git

The "username" should reflect the username of your GitHub account, we'll see what this is used for later. If you look at the generated "cookies" directory now you will see a standard-looking Gradle project structure. The JBake source files reside in the src/jbake directory with the following sub-directories:

You will see that by default, a simple Bootstrap-based blog site is provided, with sample blog posts in HTML, AsciiDoc, and Markdown formats. This is the same sample content as provided by the command line version of the project setup tool. At this point we can build the sample content using:

./gradlew jbake

The Gradle plugin does not provide a means of serving up the "baked" content yet. There is work in progress so hopefully this will be merged in soon. One of the goodies my template provides is a simple Groovy web server script. This allows you to serve up the content with:

groovy serve.groovy

which will start a Jetty instance pointed at the content in build/jbake on the configured port (8080 by default, which can be changed by adding a port number to the command line). Now when you hit http://localhost:8080/ you should see the sample content. Also, you can leave this server running in a separate console while you develop, running the jbake command as needed to rebuild the content.
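For reference, the script amounts to something along these lines (a sketch of the idea - the actual bundled script may differ, and the Jetty coordinates are illustrative):

@Grab('org.eclipse.jetty.aggregate:jetty-all:7.6.15.v20140411')
import org.eclipse.jetty.server.Server
import org.eclipse.jetty.server.handler.ResourceHandler

int port = args ? args[0] as int : 8080

// serve the baked site content from build/jbake
def handler = new ResourceHandler(
    resourceBase: 'build/jbake',
    welcomeFiles: ['index.html'] as String[],
    directoriesListed: true
)

def server = new Server(port)
server.handler = handler
server.start()
server.join()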

First, let's update the general site information. Our site's title is not "JBake", so let's change it to "JCookies" by updating it in the src/jbake/templates/header.gsp and src/jbake/templates/menu.gsp files. While we're in there we can also update the site meta information as well:

<title><%if (content.title) {%>${content.title}<% } else { %>JCookies<% }%></title>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="A site about cookies.">
<meta name="author" content="Chris Stehno">
<meta name="keywords" content="cookies,baking">
<meta name="generator" content="JBake">

Then to apply the changes, run ./gradlew jbake and refresh the browser. Now we see our correct site name.

Note that JBake makes no requirements about the templates or content to be used. It provides special support for blog-style sites; however, you can remove all the content and make a standard simple static site if you wish.

Let's add a new blog entry. The blog entries are stored in the src/jbake/content/blog directory by year, so we need to create a new directory for 2015. Content may be written in HTML, AsciiDoc, or Markdown, based on the file extension. I am a fan of Markdown, so we'll use that for our new blog entry. Let's create an entry file named chocolate-chip.md.

JBake uses a custom header block at the top of content files to store meta information. For our entry we will use the following:

title=Chocolate Chip Cookies
date=2015-05-04
type=post
tags=blog,recipe
status=published
~~~~~~

The title and date are self-explanatory. The type can be post or page, to denote a blog post or a standard page. The tags are used to provide extra tag information to categorize the content. The status field may be draft or published, to denote whether or not the content should be included in the rendered site. Everything below the line of tildes is your standard Markdown content.

For the content of our entry we are going to use the Nestle Chocolate Chip Cookie recipe - it gives us a nice overview of the content capabilities, and they are yummy!

The content, in Markdown format, is as follows:

## Ingredients

* 2 1/4 cups all-purpose flour
* 1 teaspoon baking soda
* 1 teaspoon salt
* 1 cup (2 sticks) butter, softened
* 3/4 cup granulated sugar
* 3/4 cup packed brown sugar
* 1 teaspoon vanilla extract
* 2 large eggs
* 2 cups (12-oz. pkg.) NESTLÉ® TOLL HOUSE® Semi-Sweet Chocolate Morsels
* 1 cup chopped nuts

## Instructions

1. Preheat oven to 375° F.
1. Combine flour, baking soda and salt in small bowl. Beat butter, granulated sugar, brown sugar and vanilla extract in large mixer bowl until creamy. Add eggs, one at a time, beating well after each addition. Gradually beat in flour mixture. Stir in morsels and nuts. Drop by rounded tablespoon onto ungreased baking sheets. 
1. BAKE for 9 to 11 minutes or until golden brown. Cool on baking sheets for 2 minutes; remove to wire racks to cool completely. 

May be stored in refrigerator for up to 1 week or in freezer for up to 8 weeks.

Rebuild/refresh, and now you see we have a new blog post. Now, since we stole... er, borrowed this recipe from another site, we should provide an attribution link back to the original source. The content header fields are dynamic; you can create your own and use them in your pages. Let's add an attribution field and put our link in it.

attribution=https://www.verybestbaking.com/recipes/18476/original-nestle-toll-house-chocolate-chip-cookies/

Then we will want to add it to our rendered page, so we need to open up the blog entry template, the src/jbake/templates/post.gsp file, and add the following line after the page header:

<p>Borrowed from: <a href="${content.attribution}">${content.attribution}</a></p>

Notice now that the templates are just GSP files, which may have Groovy code embedded in them to perform rendering logic. The header data is accessible via the content object in the page.
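One caveat: posts which do not define an attribution header would render an empty link with the line above. A guard using the same GSP conditional style as the header template might look like this (a sketch, not part of the default templates):

<% if (content.attribution) { %>
<p>Borrowed from: <a href="${content.attribution}">${content.attribution}</a></p>
<% } %>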

This post is kind of boring at this point. Yes, it's a recipe for chocolate chip cookies, and that's hard to beat, but the page full of text is not selling it to me. Let's add a photo to really make your mouth water. Grab an image of your favorite chocolate chip cookies and save it in src/jbake/assets/images as cookies.jpg. Static content like images live in the assets folder. The contents of the assets folder will be copied into the root of the rendered site directory.

Now, we need to add the photo to the page. Markdown allows simple HTML tags to be used so we can add:

<img src="/images/cookies.jpg" style="width:300px;float:right;"/>

to the top of our blog post content, which will add the image at the top of the page, floated to the right of the main content text. Now that looks tasty!

You can also create standard pages in a similar manner to blog posts; however, they are based on the page.gsp template. This allows different contextual formatting for each content type.

You can customize any of the templates to get the desired content and functionality for your static site, but what about the overall visual theme? As I mentioned earlier, the default templates use the Twitter Bootstrap library, and there are quite a few resources available for changing the theme to fit your needs, ranging from free to somewhat expensive. We just want a free one for demonstration purposes, so let's download the bootstrap.min.css file for the Bootswatch Cerulean theme. Overwrite the existing theme file in the src/jbake/assets/css directory with this new file, then rebuild the site and refresh your browser. Now you can see that we have a nice blue banner along with other style changes.

The end result at this point will look something like this:

[screenshot: the restyled JCookies blog with the Cerulean theme]

All-in-all, not too bad for a few minutes of coding work!

Another nice feature of JBake is delayed publishing. The status field in the content header has three accepted values:

* draft - the content is not included in the rendered site
* published - the content is included in the rendered site
* published-date - the content is included only once its specified date has passed

We used the published option since we wanted our content to be available right away. You could easily create a bunch of blog entries ahead of time, specifying the date values for when they should be published, but with the status values set to published-date so that they are released only after the appropriate date. The downside is that, since JBake is a static generator, you would have to be sure to build the site often enough to pick up the newly available content - maybe with a nightly scheduled build and deployment job.
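For example, a nightly cron entry along these lines (the project path is purely illustrative) would rebuild and publish the site at 2am every day:

0 2 * * * cd /path/to/cookies && ./gradlew jbake publish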

When you are ready to release your site into the greater internet wilderness, you will need a way to publish it; this is another place where my lazybones template comes in handy. If you are hosting your site as GitHub Pages, the template comes with a publishing task built in, based on the gradle-git plugin. This is where the GitHub username and repository information from the initial project creation comes into play. For this to work, you need a repository named "cookies" associated with your GitHub account. You will also want to double-check that the repo clone URL is correct in the publish.gradle file. Then, to publish your site, you simply run:

./gradlew publish

and then go check your project site for the updated content (sometimes it takes a minute or two, though it's usually instantaneous).
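For reference, a rough sketch of the gradle-git wiring in publish.gradle might look like the following - the plugin version, task name, and repo URL here are illustrative assumptions, so check the actual file generated by the template:

publish.gradle
buildscript {
    repositories { jcenter() }
    dependencies {
        // gradle-git provides the GitHub Pages publishing support (version is illustrative)
        classpath 'org.ajoberstar:gradle-git:1.1.0'
    }
}

apply plugin: 'org.ajoberstar.github-pages'

githubPages {
    // should match the clone URL of your GitHub repository
    repoUri = 'git@github.com:cjstehno/cookies.git'

    // publish the baked site content
    pages {
        from file('build/jbake')
    }
}

// alias so that './gradlew publish' works as described above
task publish(dependsOn: 'publishGhPages')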

At this point we have an easily managed static web site; what's left to be done? Well, you could associate it with your own custom domain name rather than the one GitHub provides. I will not go into that here, since I really don't want to purchase a domain name just for this demo; however, I do have a blog post (Custom GitHub Hosting) that goes into how it's done (at least on GoDaddy).

JBake and GitHub, with a dash of Groovy, provide a nice environment for quick custom blogs and web sites with little fuss. Everything I have shown here is what I use to create and manage this blog, so I'd say it works pretty well.

Portions of this discussion are based on a blog post by Cédric Champeau, "Authoring your blog on GitHub with JBake and Gradle", who is also a contributor to JBake (among other things).

Vanilla Test Fixtures

15 May 2015 ~ blog, groovy, testing, vanilla

Unit testing with data fixtures is a good practice to get into, and having a simple means of creating and managing reusable fixture data makes it much more likely to happen. To that end, I have added a FixtureBuilder and a Fixture class to my Vanilla-Testing library.

Unit testing with domain objects, entities, and DTOs can become tedious, and you can end up with a lot of duplication around creating the test fixtures for each test. Say you have an object, Person, defined as:

import java.time.LocalDate

class Person {
    Name name           // Name is another simple domain class (first/middle/last)
    LocalDate birthDate
    int score
}

You are writing unit tests for services and controllers that need to create and compare various instances of Person, and you end up with constants tucked away somewhere, or duplicated code creating custom instances all over the test code base.

Using com.stehno.vanilla.test.FixtureBuilder you can create reusable fixtures with a simple DSL. I tend to create a main class to contain my fixtures and to also provide the set of supported fixture keys, something like:

import static com.stehno.vanilla.test.FixtureBuilder.define

import com.stehno.vanilla.test.Fixture
import java.time.LocalDate

class PersonFixtures {

    static final String BOB = 'Bob'
    static final String LARRY = 'Larry'

    static final Fixture FIXTURES = define {
        fix BOB, [ name:new Name('Bob','Q','Public'), birthDate:LocalDate.of(1952,5,14), score:120 ]
        fix LARRY, [ name:new Name('Larry','G','Larson'), birthDate:LocalDate.of(1970,2,8), score:100 ]
    }
}

Notice that the define method is where you create the data contained by the fixtures, each entry mapped to an object key. The key can be any object which may be used as a Map key (i.e. one with proper equals and hashCode implementations).

The reasoning behind using Maps is that Groovy allows them to be used as constructor arguments when creating objects; the maps therefore give you a reusable, detached dataset for creating your test fixture instances. Two object instances created from the same fixture data will be equivalent at the level of the properties defined by the fixture; however, each can be manipulated without affecting the other.
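As a quick illustration of the Groovy map-constructor behavior (roughly what the object() method described below does for you, sketched here by hand):

def data = [ name:new Name('Bob','Q','Public'), birthDate:LocalDate.of(1952,5,14), score:120 ]

// two instances built from the same detached map...
Person a = new Person(data)
Person b = new Person(data)

a.score = 53             // ...changing one instance...
assert b.score == 120    // ...does not affect the other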

Once your fixtures are defined, you can use them in various ways. You can request the immutable data map for a fixture:

Map data = PersonFixtures.FIXTURES.map(PersonFixtures.BOB)

You can create an instance of the target object using the data mapped to a specified fixture:

Person person = PersonFixtures.FIXTURES.object(Person, PersonFixtures.LARRY)

Or, you can request the data or an instance for a fixture while applying additional (or overridden) properties to the fixture data:

Map data = PersonFixtures.FIXTURES.map(PersonFixtures.BOB, score:53)
Person person = PersonFixtures.FIXTURES.object(Person, PersonFixtures.LARRY, score:200)

You can easily retrieve field property values for each fixture for use in your tests:

assert 100 == PersonFixtures.FIXTURES.field('score', PersonFixtures.LARRY)

This allows field-by-field comparisons for testing and the ability to use the field values as parameters as needed.

Lastly, you can verify that an object instance contains the expected data that is associated with a fixture:

assert PersonFixtures.FIXTURES.verify(person, PersonFixtures.LARRY)

which will compare the given object to the specified fixture and return true if all of the properties defined in the fixture match the corresponding properties of the given object. There is also a second version of the method which allows property customizations before the comparison.

One step further... you can combine fixtures with property randomization to make fixture creation even simpler for those cases where you don't care what the property values are, just that you can get at them reliably.

static final Fixture FIXTURES = define {
    fix FIX_A, [ name:randomize(Name).one(), birthDate:LocalDate.of(1952,5,14), score:120 ]
    fix FIX_B, randomize(Person){
        typeRandomizers(
            (Name): randomize(Name),
            (LocalDate): { LocalDate.now() }
        )
    }
}

The fixture mapper accepts PropertyRandomizer instances and will use them to generate the random content once, when the fixture is created; the generated values are then available, unchanged, throughout testing.

One thing to note about the fixtures: the fixture container and the maps passed in as individual fixture data are all made immutable via the asImmutable() method; however, if the data inside a fixture is itself mutable, it may still have the potential to be changed. Be aware of this and take proper precautions when you create and interact with such data types.
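For example (the mutable middle property on Name is purely illustrative):

Map data = PersonFixtures.FIXTURES.map(PersonFixtures.BOB)

// the map itself is immutable, so this would throw UnsupportedOperationException:
// data.score = 99

// but if Name were a mutable class, nested state could still be altered:
// data.name.middle = 'X'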

Reusable test fixtures can really help to clean up your test code base, and they are a good habit to get into.

