Grails benefits from comprehensive out-of-the-box implementations for rendering JSON and XML data. Coupled with some Groovy syntactic sugar, developers can generate JSON or XML from a structure as simply as:
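For example, with a hypothetical `Book` domain class, a controller action can hand a whole object graph to the converter in a single line (a minimal sketch, not the original snippet):

```groovy
import grails.converters.JSON

class BookController {

    // Groovy sugar + the Grails converter: one line to serialise a domain instance
    def show() {
        render Book.get(params.id) as JSON
    }
}
```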
However, we may quickly find that we have a little too much data in the response, or even that we want to provide data defined at compile time (without storing it transiently in a domain class, as I have done in the example below).
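By way of illustration, assume a hypothetical `Book` domain class whose instance is built on the fly purely to feed the converter:

```groovy
// Hypothetical domain class; the instance below is never saved,
// it exists only to be handed to the JSON converter
class Book {
    String title
    String isbn
    String notes
    Author author
}
```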
The above, using the Grails JSON converter, would yield (130 characters):
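The exact response isn't reproduced here, but by default the converter emits the class name, the identifier, and every property, nulls included, along these illustrative lines for a hypothetical `Book`:

```json
{"class":"Book","id":1,"title":"Groovy in Action","isbn":null,"notes":null,"author":{"class":"Author","id":5}}
```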
However, as we begin to develop RESTful services where we may (and most probably will!) require both low latency and high throughput, we'll need to reduce the size of our responses and, often, respond only with data that is absolutely required (and to really be RESTful…).
To tackle this across some larger services developed in Groovy/Grails, I've adopted quite a pleasant design pattern that utilizes the existing JSON and XML converters in Grails.
During the application's BootStrap, we can register all of the marshallers we've defined for the classes we've constructed.
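A sketch of that entry point in `BootStrap.groovy`, assuming the hypothetical `Book` class and closure-based marshallers (a dedicated marshaller class can be registered in exactly the same place):

```groovy
import grails.converters.JSON
import grails.converters.XML

class BootStrap {

    def init = { servletContext ->
        // The single place where every custom marshaller is registered
        JSON.registerObjectMarshaller(Book) { Book book ->
            [title: book.title, author: book.author?.name]
        }
        XML.registerObjectMarshaller(Book) { Book book, xml ->
            xml.attribute 'title', book.title
            xml.attribute 'author', book.author?.name
        }
    }

    def destroy = { }
}
```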
We are then presented with a single entry point for setting up all marshalling in our application, allowing for more customized responses in both JSON and XML. Similarly, we have an entry point for any other manipulations that reduce response sizes.
The above is achievable using the minus transformation on the properties map, subtracting all entries whose values are null. Of course, the number of transformations you can apply here is endless; for example, to avoid rendering database identifiers or the class name for referenced objects, we can use a regular expression or a match in the findAll closure.
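That transformation can be sketched as follows (`Book` remains hypothetical; the key match shows the regular-expression variant mentioned above):

```groovy
import grails.converters.JSON

JSON.registerObjectMarshaller(Book) { Book book ->
    def props = book.properties
    // Map minus: subtract every entry whose value is null
    props -= props.findAll { it.value == null }
    // A match on the key works the same way, e.g. dropping class/id noise
    props.findAll { !(it.key ==~ /class|id|metaClass/) }
}
```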
Furthermore, using the optional marshaller member, we can instead marshal according to our own exact needs. Using some of the above, we're presented with a much more concise (39 character) response which we define in code (and which is also great for generating our REST documentation, but I'll leave that for another post). Using GZip, we can shrink this further; however, we must be mindful of the time complexity of decompression versus the size of the data (after all, we're often trying to achieve lower latency and higher throughput).
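The fully custom route might look like this sketch of an `ObjectMarshaller` implementation (the package name shown is the Grails 2.x one and may differ between versions; `Book` is still hypothetical):

```groovy
import grails.converters.JSON
import org.codehaus.groovy.grails.web.converters.marshaller.ObjectMarshaller

class BookMarshaller implements ObjectMarshaller<JSON> {

    boolean supports(Object object) {
        object instanceof Book
    }

    void marshalObject(Object object, JSON converter) {
        Book book = (Book) object
        // Emit exactly the fields we need, nothing more
        converter.writer.object()
                .key('title').value(book.title)
                .key('author').value(book.author?.name)
                .endObject()
    }
}

// Registered alongside the others in BootStrap:
// JSON.registerObjectMarshaller(new BookMarshaller())
```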