A Portability Tale: Serial Monogamy

26 09 2007

I’ve recently been testing a pile of my Java code on different JVMs and operating systems. I’ve done enough of this in the past to expect it to all just work – which still never fails to amaze me – but with the inevitable odd little quirk or two that needs sorting out. Usually not so much in the code itself as in various things around it (build scripts, configuration files and so on).

I’ll write up the particular minor issues that I encountered in a later post, but in the meantime it reminded me of an experience I had a few years ago that I now think of as an example of “serial monogamy” portability.

I’d summarize this as:

for (Platform currentPlatform : arbitrarySequenceOfPlatforms) {
    manager.speak("Portability isn't an issue, we only use " 
        + currentPlatform);
    wait(surprisinglyShortTime);
}

Fortunately, we managed to ignore the implication that we should freely churn out a mass of non-portable code, and survived across a rapid succession of three different “strategic” platforms.

Here’s the full story, followed by some current thoughts on the subject.

The project

It was 2000, maybe 2001, and at the time I was working at an insurance company, relatively new to Java, and about to develop our first web-application.

This was very much an “outside the mainstream” role. The bulk of the IT department worked on mainframe systems using a 4GL and a non-relational database that were both well past their use-by dates. They’d been trying, fairly unsuccessfully, to migrate away from these technologies. You’d think this would have taught everyone a lesson about platform dependencies, but sometimes a problem is so big that it becomes a way of life.

The business people knew we needed to start finding our place on the internet, but weren’t sure what to do about it – the company didn’t deal directly with the public, and commercial use of the internet was all very new and unclear. Similarly, the IT people weren’t up to speed on internet technologies, with these being almost entirely outside of their experience and too bleeding-edge. So there were many, many unknowns, and lots of having to explain things that I was only just learning myself.

Amidst all of this, about the only thing that was made very clear to me right from the start was not to worry about portability. The system would definitely be running on the company’s IBM mainframes, as that was our “strategic” platform.

Therefore I was not to concern myself with portability in any way…

It’s definitely Websphere on IBM Mainframes

Our management also mandated the use of IBM Websphere as the application server. And an old version of it at that, on the basis that this would obviously be more stable than a newer release… Well, the “websfear” story is a saga in its own right, which many others who got ordered to use Websphere at around that time can probably guess, but that will keep for another day.

So the project was strictly aimed at Websphere on an IBM JDK on the IBM mainframe.

Whatever the practicalities for production, that wasn’t suitable for our development work, and it wasn’t even available to us when we started development. So I set up a development environment on our normal MS Windows desktops, with a Sun JDK and Ironflare Orion as a convenient application server (after trying out a few others).

There seemed to be lots of stuff in Websphere where you could get sucked into using IBM-specific classes and methods for no particular reason and without even realizing it. But I made sure we stuck to writing platform-independent Java code and using the standard J2EE APIs. The use of a different application server for development ensured we stayed free of any dependencies on Websphere. It also helped pin down a lot of bugs in Websphere that were clearly its own bugs rather than something we were doing wrong, whatever the IBM people tried to imply (at some point we started using Tomcat as well for the same purpose). Most of the other usual portability issues were ironed out by virtue of developing on MS Windows but with the mainframe as the target platform.

All of this was regarded as OK so long as we kept quiet about it. Any mention of portability or being able to develop on one platform whilst deploying on another would bring an immediate reminder from management that there’s absolutely no point in making any allowance for any other platforms: we must not spend a moment’s effort or worry on portability, and Java being “theoretically” cross-platform is of no relevance whatsoever.

What actually happened, predictably, is that whilst we were writing the application the company decided on a new IT strategy…

It’s definitely Websphere on IBM AS/400

The new strategy was to move everything from the mainframe onto AS/400s.

So at this point I was told that the application needs to run on Websphere on AS/400 – but don’t worry about portability, because that’s our new “strategic” platform for the foreseeable future and there won’t ever be any need to run on anything else.

Thankfully all our code ran on the AS/400 quite happily, despite it having a somewhat unusual operating system that I’d never worked with before.

Guess what? We then got taken over by another company…

It’s definitely Oracle on Unix

The company taking us over had existing non-Java web-applications, and ran everything on Oracle software running on Unix.

So at this point I was told that our application needs to run on Oracle’s application server on Unix. But also that this obviously won’t work, because our application has been developed purely for Websphere on IBM hardware. And anyway, Oracle’s application server is a big and nasty old beast that we’d never manage to get to grips with.

As it happens, about this time Oracle suddenly introduced a re-branded copy of Ironflare Orion (i.e. exactly what we were using for development) as their brand-new “OC4J” J2EE environment. So overnight we became intimately familiar with the new Oracle J2EE software that the company’s Oracle guys were scratching their heads about.

Of course, nobody believed our code would actually port across to Unix (“fine in theory, but it’ll never work in practice”). So we got access to one of the Unix machines and I demonstrated our application running on Unix. I had to make one fix to some image-processing code or something like that, but very minor. Naturally, all the people who’d doubted it were completely unimpressed (“hey, obviously it works, java is supposed to do that anyway”).

There was also some talk of maybe using BEA Weblogic, and I think I checked that our application worked on that as well, but nothing ever came of this.

So at this point I was told that the application needs to run on Oracle OC4J on Unix – but don’t worry about portability, because that’s our “strategic” platform and there won’t ever be any need to run on anything else. Of course.

Back to the Future

Sometime after this I left for wilder and riskier things. I can’t be sure of exactly what has happened since, and the particular application involved is long since dead, but the last thing I knew was that they’d been taken over by an even bigger company.

Now I may be wrong, but as I heard it, all the Oracle and Unix stuff has since been junked so that everything can be consolidated onto the parent company’s infrastructure.

Which I gather is all on mainframes…

Current thoughts

I can see how the “serial monogamy” approach is reasonable from a YAGNI point-of-view. It clearly made sense to our management at the time, given our then-current expectations of software portability.

But in view of how well Java and its related standards handle portability, it probably only really makes sense if the platform truly doesn’t change and if there is a significant cost or problem in making the code as portable as possible – or if portability can be added afterwards at no greater cost than building it in from the start. On the whole this all seems rather unlikely, or at best an unnecessary gamble. Except, of course, where you specifically need to use some particular proprietary and non-portable functionality.

So I tend to see portability in general as a useful code-quality that’s just part of writing good, maintainable code, rather than as an additional “feature” that requires extra effort and which you might or might not actually need at some point.

On the other hand, actually testing the portability does seem to be something that can fairly safely be left until needed – provided the code is written to be portable, you have adequate knowledge, tools and procedures to ensure this, and you have adequate automated tests to exercise the code.

Where that’s true, my own experience has been that proving the portability and carrying out any necessary final corrections can be done quite easily if and when needed, and doesn’t benefit from being done prematurely. As ever, this depends on the project and the people involved, and your mileage may vary.

My own approach at present is to write all code to be as portable as possible, but defer testing the portability until the code is otherwise complete or the portability is actually needed (depending on the nature of the project).





Another work-around for Ant + JUnit classpath problems

14 08 2007

This was originally an article claiming that one possible work-around for Ant classpath problems when running JUnit is to explicitly include the Ant libraries into the JUnit task’s <classpath>.

But… That was just plain dumb!

I’ve since realized this was all just a silly mistake on my part. The junit task had its includeAntRuntime option turned off, which was the cause of the problems.

My so-called work-around was effectively just doing what “includeAntRuntime” would do if turned on (which is its default anyway).
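For the record, the real fix is just to leave the junit task’s “includeAntRuntime” option at its default of “true” (or set it explicitly), rather than adding Ant’s own libraries to the <classpath> by hand. A minimal sketch, in which the classpath reference and test name are just placeholders:

<junit printsummary="yes" includeantruntime="true">
    <!-- "test.classpath" is a placeholder for the project's own path. -->
    <classpath refid="test.classpath"/>
    <test name="com.example.SomeTest"/>
</junit>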

Apologies to anyone who was misled by this. It’s horrible to realize that not only have I made a silly mistake and wasted time implementing a work-around, but I’ve also then published a load of tripe about it!

I really ought to know better – I usually check all the options on Ant tasks, even when copying existing code (which is how I’ve now spotted this mistake). I’m also usually better at diagnosing problems. Somehow, “includeAntRuntime” just didn’t register with me at the time, and finding a quick work-around dissuaded me from delving deeper, which isn’t like me at all. Blame it on too many hours working and not enough rest and recovery – it must be time to take a day or two off!

Hopefully, replacing the article with this note will at least stop anyone being misled if they stumble across this page in future.





First use of ObMimic for out-of-container testing of Servlets and Struts (Part 2)

27 06 2007

Updated Feb 2018: OpenBrace Limited has closed down, and its ObMimic product is no longer supported.

As explained in part 1 of this posting, I’ve recently started trying out my newly-completed ObMimic library for out-of-container POJO-like testing of servlet code.

So, as promised, here are some of my early experiences from starting to use ObMimic for testing of some simple existing filters, listeners, an old “Struts 1” application (including out-of-container running of the Struts 1 controller servlet and configuration file), and tuckey.org’s “UrlRewriteFilter”.

These are just some initial “smoke test” experiments to check how ObMimic copes with simple situations, and to evaluate its usability. The ObMimic code itself has been tested in detail during its development, and its use for more complex and useful scenarios will be examined later.

Also note that this is primarily intended for my own historical records – actual ObMimic documentation, tutorials, Javadoc etc will be published when it’s ready for public release.

Experiment 1: Some simple listeners, filters and other basic Servlet API code

As a gentle first step, I revisited some listeners and filters in various old projects, and some other utility methods that take instances of Servlet API interfaces as arguments. Some of these had existing tests using mock objects, others didn’t. None of them do anything particularly complicated.

Writing out-of-container tests for them using ObMimic was straightforward and all worked as intended. As you’d expect, it basically just involves:

  • Creating and configuring the various objects needed as arguments. For example, using “new ServletContextMimic()” to create a ServletContext for use in a ServletContextEvent, and configuring it via the ServletContextState object returned by its getMimicState() method. Or creating and configuring an appropriate FilterChainMimic for passing to a filter.
  • Making the call to an instance of the class being tested.
  • Carrying out whatever checks are necessary to see if the code being tested has worked correctly.

The details, of course, depend on exactly what the code being tested is supposed to be doing.
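For instance, a test of a simple context listener might look something like the following. This is just a minimal sketch: “MyContextListener” is a hypothetical listener that’s assumed to store a “someAttribute” value into the ServletContext when the context is initialized.

// Create and configure the ServletContextMimic to be used
// as the listener's ServletContext.
ServletContextMimic servletContext = new ServletContextMimic();
servletContext.getMimicState().setContextPath("/examples");

// Wrap it in a standard ServletContextEvent and invoke the
// listener directly, just as a real container would.
ServletContextEvent event = new ServletContextEvent(servletContext);
new MyContextListener().contextInitialized(event);

// Check the listener's effect via the normal Servlet API.
assertEquals("expectedValue", servletContext.getAttribute("someAttribute"));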

For simple listeners, filters and other such code this is pretty straightforward. For simple unit-tests of such code, one could do much the same thing using mock objects, though from my (admittedly biased) point of view I think that the ObMimic code is slightly simpler and more direct than the same tests using mock objects, even for these simple cases. At any rate, it better fits my general preference for “state-based” rather than “interaction-based” testing, and as we’ll see later it can also handle far more complex situations.

For testing more complex code, there’d typically be more set-up involved to get the context, request, response etc configured as required (for example, so as to appropriately handle any “forwarding” done by the code being tested). Similarly, checking of results can become arbitrarily complicated depending on what the code being tested actually does. But that’s just the usual joy of testing something. We’ll see a more complex example later on.

Experiment 2: Struts 1 “Actions”

The next set of components examined were a handful of “Actions” in an old Struts 1 application. Actually, this was a bit of an anti-climax. The Struts “ActionForm” classes are “POJO”s anyway and already had test cases, and the “execute” method of each Struts “Action” class just needs:

  • A suitable HttpServletRequestMimic and HttpServletResponseMimic for use as the request and response (with appropriately-configured ServletContextMimic, HttpSessionMimic etc as necessary for each test).
  • An instance of the relevant ActionForm subclass, configured with the required property values.
  • A Struts ActionMapping instance, configured to map the relevant result strings to a suitable “ActionForward” instance.

The Struts ActionMapping and ActionForward classes are both suitably POJO-like, so are easily configured. There isn’t even any need to configure mapping of the ActionForward’s path to an actual target resource, as the “execute” method just returns the relevant ActionForward rather than actually carrying out the forwarding.

A few of the Action classes did need a fair bit of configuration of the HttpServletRequestMimic, its ServletContextMimic and the relevant HttpSessionMimic for some of the individual tests, but this was all relatively straightforward.

Although such tests check the Action’s “execute” method in isolation, it would also seem useful (and, for purposes of these experiments, more challenging) to be able to test the broader overall handling of a request. That is, including the mapping of a request to the correct ActionForm and Action and their combined operation. So the next experiment was to try and execute the Struts “controller” servlet, so as to be able to do “out-of-container” testing of the Struts configuration file, ActionForm and Action all together.

Experiment 3: Struts 1 controller servlet

The aim for this experiment was to try to get the Struts 1 controller servlet running “out-of-container” using ObMimic. This is partly motivated by wanting to be able to “integration test” the combination of a Struts configuration file, ActionForm and Action. But more importantly, this seemed like a more general and more challenging test of what ObMimic can cope with, and an indication of how easy or hard it might be to get ObMimic working for other web frameworks.

The first step is to configure a ServletContext to be able to run Struts, in much the same way as one would configure a real web-application for Struts. Whilst there are several ways to do this, and the following example includes some things that aren’t strictly necessary, for purposes of this experiment I chose to do this as closely as possible to how it would be done in a web.xml. This resulted in code of the following form (adjusted a bit to help illustrate it):

// Create the ServletContextMimic and (for convenience) retrieve
// its MimicState and relevant objects within its MimicState.

ServletContextMimic servletContext = new ServletContextMimic();
ServletContextState contextState = servletContext.getMimicState();
WebAppConfig webAppConfig = contextState.getWebAppConfig();
WebAppResources webAppResources = contextState.getWebAppResources();

// Give the web-app a context path.

contextState.setContextPath("/examples");

// Add the struts-config file (provided as a system resource file
// in this class's package) as a static resource at the 
// appropriate location.

String strutsConfigResourceName
    = getClass().getPackage().getName().replace('.', '/') 
        + "/ExampleStrutsConfig.xml";
webAppResources.setResource("/WEB-INF/struts-config.xml",
    new SystemReadableResource(strutsConfigResourceName));

// Add a servlet definition for the struts controller, including 
// an init-parameter giving the location of the struts-config file.

InitParameters strutsControllerParameters = new InitParameters();
strutsControllerParameters.set("config", 
    "/WEB-INF/struts-config.xml");
int loadOnStartupOrder = 10;
ServletDefinition strutsController = new ServletDefinition(
    "strutsController", 
    ActionServlet.class.getCanonicalName(),
    strutsControllerParameters, 
    loadOnStartupOrder);
webAppConfig.getServletDefinitions().add(strutsController);

// Add a servlet mapping for the struts controller.

ServletMapping strutsControllerMapping 
    = new ServletMapping("strutsController", "*.do");
webAppConfig.getServletMappings().add(strutsControllerMapping);

// Initialize the context ("load-on-startup" servlets are 
// created and initialized etc).

ServletContextMimicManager contextManager 
    = new ServletContextMimicManager(servletContext);
contextManager.initializeContext();

Here, the SystemReadableResource class used to access the Struts config file is a ReadableResource as described in a previous article. It reads the content of the Struts config file to be used in the test, with this being supplied as a file in the same package as the above code. (Alternatively, the application’s existing Struts config file could be accessed using a “FileReadableResource”, but the details would depend on the project’s directory structures, whereas the approach shown here also allows individual tests to use their own specific Struts configuration and keep it with the test-case code.)

The rest of the classes involved are ObMimic classes. Hopefully the gist of this is fairly clear even without their full details.

One slight concession is that ObMimic doesn’t yet support JSP, so where the struts-config file specifies a path to a JSP file, the test needs to map such paths to a Servlet instead. This involves defining a suitable servlet (e.g. an HttpServlet subclass with a “doPost” method that sets the response’s status code to OK and writes some identifying text into the response’s body content, so that the test can check that the right servlet was reached). The corresponding servlet definition and servlet mapping can then be added to the above configuration of the ServletContextMimic (similar to those for the Struts controller servlet).
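By way of illustration, such a stand-in servlet needs nothing more than the following (a sketch, with the class name and message text just arbitrary examples):

public class JspStandInServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest request,
            HttpServletResponse response) throws IOException {
        // Report success and write identifying text into the body
        // content, so tests can check the right resource was reached.
        response.setStatus(HttpServletResponse.SC_OK);
        response.getWriter().write("Reached JspStandInServlet");
    }
}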

Then we just need a suitable request to process. Again, there are various ways to do this, and the particular details depend on the needs of the individual test. In outline, the code used for this experiment was along these lines (demonstrating a POST with body-content request parameters):

HttpServletRequestMimic request = new HttpServletRequestMimic();
request.getMimicState().setServletContext(servletContext);
request.getMimicState().setHttpMethodName("POST");
request.getMimicState().populateRelativeURIFromUnencodedURI(
    "/examples/exampleStrutsFormSubmit.do");
request.getMimicState().setContentTypeMimeType(
    "application/x-www-form-urlencoded");
try {
    request.getMimicState().setBodyContent("a=1&b=2", 
        "ISO-8859-1");
} catch (UnsupportedEncodingException e) {
    fail("Attempt to configure request body content "
        + "for a POST failed due to unexpected " + e);
}

Here, the “populateRelativeURIFromUnencodedURI” method is one of various such short-cuts provided by ObMimic for setting request URI/URL details from various types of overall URL strings. This one takes a non-URL-encoded container-relative path, interprets it based on the ServletContext’s mappings etc, and populates the request’s context-path, servlet-path and path-info accordingly.

The response can start out as just a plain HttpServletResponseMimic with the correct ServletContext:

HttpServletResponseMimic response = new HttpServletResponseMimic();
response.getMimicState().setServletContext(servletContext);

So then we can invoke the Struts controller servlet, and it should all work just as it would within a servlet container, based on the supplied struts-config file and the ServletContextMimic’s configuration.

We could get hold of the Struts controller servlet from the ServletContextMimic by name, or maybe even just use a new instance of it. However, as we’ve gone to the effort of configuring a mapping for it, we might as well start with the request’s URI and do the actual look-up. For this I use a convenience method on ObMimic’s ServletContextMimicManager class that returns the target resource for a given context-relative path (again, there are various ways to do this, with or without any necessary filter chain etc, but this will do for these purposes):

ServletContextMimicManager contextManager 
    = new ServletContextMimicManager(servletContext);
Servlet actionServlet 
    = contextManager.getServletForRelativePath(
        "/exampleStrutsFormSubmit.do");
try {
    actionServlet.service(request, response);
} catch (IOException e) {
    fail(...suitable message...);
} catch (ServletException e) {
    fail(...suitable message...);
}

Then it’s just a matter of checking the response’s content (using, for example, calls such as “response.getMimicState().getBodyContentAsString()”), and anything else necessary to check that the request has been processed correctly.
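For example, with a stand-in servlet like the sketch shown earlier, the check could be as simple as:

assertEquals("Reached JspStandInServlet",
    response.getMimicState().getBodyContentAsString());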

Well, that’s the theory. So what happened in practice? A couple of minor problems were encountered, but easily overcome:

  • The version of Struts used appears to issue “removeAttribute” calls even where the attribute is not present. Although the Servlet API Javadoc for HttpSession specifies that its removeAttribute method does nothing if the attribute is not present, the Javadoc for ServletContext and ServletRequest don’t explicitly specify whether this is permitted or how it should be handled. ObMimic therefore treats such calls to ServletContext.removeAttribute and ServletRequest.removeAttribute as “API ambiguities”. Its default behaviour for these is to throw an exception to indicate a questionable call. But ObMimic’s handling of such ambiguous API calls is configurable, so the immediate work-around was just to have the test-case programmatically configure ObMimic to ignore this particular ambiguity for these particular methods, such that the removeAttribute calls do nothing if the attribute doesn’t exist. In retrospect it’s probably way too strict to treat this as an ambiguity – it’s a reasonable assumption that removeAttribute should succeed but do nothing if the attribute doesn’t exist, and there is probably lots of code that does this. So I’ve relented on this, and gone back and changed ObMimic so that this isn’t treated as an ambiguity anymore.
  • It turns out that the version of Struts used actually reads the contents of the /WEB-INF/web.xml file. This took a bit of hunting down, as the resulting exception wasn’t particularly explicit, but because the run is all “out-of-container” it was easy to step through the test and into the Struts code in a debugger and find where it failed. The solution is to add a suitable web.xml file to the test class’s package and make this available to the ServletContext as a static resource at /WEB-INF/web.xml (in the same way as the struts-config.xml file). Actually, at least for this particular test, the precise content of the web.xml doesn’t seem to matter – Struts seems perfectly happy with a dummy web.xml file with a valid top-level <web-app> element but no content within it.

And that’s it. Having added a suitable /WEB-INF/web.xml static resource into the ServletContextMimic, Struts happily processes the request, pushes it through the right ActionForm and Action, and forwards it to the servlet that’s standing in for the target JSP. All within a “plain” JUnit test, with no servlet container involved (and easily repeatable with different struts-config.xml files, different context init-parameters, or with ObMimic simulating different Servlet API versions etc etc).

Experiment 4: URL Rewrite Filter

I’ve a few example/demo applications where I’ve played around with the UrlRewriteFilter library from tuckey.org to present “clean” URLs and hide technology-specific extensions such as “.jsp”. So I thought I’d try out-of-container testing of this as well.

The rules files that control the URL rewriting are fairly straightforward, but once you have multiple rules with wildcards etc it can become a bit fiddly to get exactly what you want. Tracking down anything that isn’t as intended can be a bit clumsy when it’s running in a servlet container, just from the nature of being in a deployed and running application. So I like the idea of being able to write and debug normal out-of-container test-cases for the config file, and using dummy or diagnostic servlets instead of the “normal” application resources.

This was pretty quick and straightforward after tackling the Struts controller servlet.

Although the details were very different, it again involves configuring a ServletContextMimic with the definitions, mappings and static resources for the UrlRewriteFilter and its configuration file. Much of the code was just copied and edited from the Struts experiment. Again, it proved useful to write a little servlet to which the “rewritten” URLs can be directed, with this having a “doGet” method that writes a message into the response’s body content, so as to indicate that it was invoked and what request URL it saw.
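For what it’s worth, that diagnostic servlet was along the following lines (again a sketch, with arbitrary names):

public class DiagnosticServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request,
            HttpServletResponse response) throws IOException {
        // Show that this servlet was invoked and what request URL it
        // saw, so tests can check the rewriting behaved as intended.
        response.getWriter().write("DiagnosticServlet saw: "
            + request.getRequestURI());
    }
}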

Then each actual test consists of using the relevant ObMimic facilities to obtain and invoke the filter chain for an example URL, with the filter chain’s ultimate target being a static resource whose content just shows if it was reached. After invoking the filter chain, the response’s body content can be examined to check which servlet processed it and what URL the servlet saw.

This wasn’t a very extensive test, as I just wanted to quickly see if it was basically possible, but it all worked without a hitch.

As with the preceding Struts experiment, the key issues are finding your way around the ObMimic classes in order to get the configuration you need, and figuring out what servlets and stuff you need in order to check the results.

Conclusions

So far, I’m happy with ObMimic technically. It’s particularly encouraging to have got both the Struts 1 controller servlet and the URL-rewrite filter running “out-of-container” so easily, as this suggests that it should be feasible to do the same for a variety of web frameworks and tools (especially once JSP support is implemented, which will be a priority for future versions of ObMimic).

On the other hand, I think the ObMimic Javadoc and other documentation needs more work. In practice, the key to using ObMimic is finding your way around the MimicState classes that encapsulate each Mimic’s internal state. IDE code-completion is hugely useful for all this, as you can hunt around within each MimicState to look for the relevant properties and methods. However, it helps to have a rough idea of the general scheme of things – what’s available, what you’re looking for, and where things are most likely to be found. To a lesser extent it’s also helpful to know your way around the various supporting classes and methods that provide shortcuts for some of the more complex tasks. The documentation needs to provide some high-level help with all this.

Then there’s the Javadoc. This provides a comprehensive and detailed reference, but unfortunately it’s just too big and detailed. As it stands I think it would be too daunting for new users, or for casual use of the Javadoc. The first problem is that the standard Javadoc main index gives a full list of packages in alphabetical order. I’m hoping to deliver ObMimic as a single self-contained library, so there are a lot of packages, and the most useful routes into the Javadoc end up being scattered around the middle of a long list.

More generally, there are lots of specific tasks which are straightforward once you know how to do them, but hard to figure out from scratch. Things like how to make a static resource available within a ServletContext, or set up “POST” requests, or support JNDI look-ups, or maintain sessions across requests, or the easiest way to populate an HttpServletRequestMimic given the text of an HTTP request…

So my initial lessons from these experiments are:

  • ObMimic’s Javadoc needs to be made more approachable. One idea might be to supplement the standard Javadoc index page with a hand-written index page that groups the packages into meaningful categories and shows everything in a more sensible order.
  • It’d be useful to provide some kind of outline “map” of the MimicState classes, summarizing the properties and key methods of each class.
  • The ObMimic Javadoc needs to be supplemented by a set of task-oriented “how-to” guides.




Experiments with out-of-container testing of Servlet code using ObMimic (Part 1)

4 06 2007

Updated Feb 2018: OpenBrace Limited has closed down, and its ObMimic product is no longer supported.

At long last I’ve reached the stage where I can “eat my own dogfood” and use my ObMimic library for out-of-container testing of servlet code, so I’ve been trying it out on some existing applications.

But to start with, I guess I’d better explain just what “ObMimic” is, as I’ve not said anything much about it publicly yet.

So this article introduces ObMimic, and will be followed shortly by another one explaining some findings from my own initial use of it.

ObMimic is an as-yet-unreleased library of classes that supports “out of container” testing of code that depends on the Servlet API, by providing a complete set of fully-configurable POJO implementations of all of the Servlet API’s interfaces and abstract classes.

For every interface and abstract class of the Servlet API, ObMimic provides a concrete class (with a simple no-argument constructor) that is a complete and accurate simulation of the relevant Servlet API interface or class, based on an “internal state” object through which you can configure and query all of the relevant details.

This lets you test servlets, filters, listeners, and any other code that depends on the Servlet API (including, at least to some extent, higher-level frameworks that run on top of it), in the same way as you would for “plain” Java code. Typically, you construct and configure the Servlet API objects you need, pass them to the code being tested, and examine the results – with no need for any servlet containers, deployment, networking overheads, or complex “in-container” test frameworks.

Compared to HTTP-based “end-to-end” tests, this supports finer-grained and faster tests, and makes it far easier to test the effect of different deployment-descriptor values such as “init parameters” (because the relevant details can be changed at any time via normal “setter” methods, instead of requiring changes to the web.xml file and redeployment). You can also readily use ObMimic with JUnit, TestNG or any other test framework, as it doesn’t depend on any special base-class for tests and is entirely orthogonal to any test-framework facilities.

This approach is similar to using mocks or stubs for the Servlet API classes, but unlike mocks or stubs, ObMimic provides ready-made, complete and accurate implementations of the Servlet API functionality as defined by the Servlet API’s Javadoc. This includes proper handling of features such as request-dispatching, session-handling, listener notifications, automatic “commit” of responses when their specified content-length is reached, servlet and filter mapping, access to static resources at context-relative paths, merging of HTTP “POST” body-content and query-string request parameters, the effects of different sequences of Servlet API method calls, and all the other myriad and complex interactions between different Servlet API methods.

I call these implementation classes “mimics” in order to distinguish them from “mocks” and “stubs” and on the basis that they “mimic” the behaviour of real Servlet API implementations. Technically, they are “fake” objects as described by xUnit Patterns and Martin Fowler, with the addition of some stub/mock-like facilities. But the term “fake” doesn’t quite feel right, doesn’t seem to be very widely used, and some people use it as a synonym for “stub” (for example, Wikipedia as at the time of writing). Inventing yet another term isn’t ideal either, but at least it shouldn’t lead to any pre-conceptions or confusion with anything else.

Whilst mocks and stubs are fine for “interaction-based” testing or for arbitrary interfaces for which you don’t have “real” implementations, using “mimic” implementations seems a simpler, more natural and more useful approach for “state-based” testing or when you have access to appropriate “mimic” classes. At least, that’s my personal take on it. For a broader discussion of some of the relevant issues, see Martin Fowler’s article Mocks Aren’t Stubs.

By way of an example, at its simplest ObMimic lets you write test-case code like the following (where everything uses a “default” ServletContextMimic as the relevant ServletContext, and all details not explicitly configured start out with reasonable default values, such as the request being a “GET”):

HttpServletRequestMimic request = new HttpServletRequestMimic();
HttpServletResponseMimic response = new HttpServletResponseMimic();
request.getMimicState().getRequestParameters().set("a", "1"); // just for example
Servlet myServlet = new SomeExampleServletClass();
myServlet.init(new ServletConfigMimic());
myServlet.service(request, response);
// ... check contents of request, response etc...

Actually, the very simplest example is that if you just need, say, a ServletContext to pass as an argument to some method but its content doesn’t matter, you can just do “new ServletContextMimic()” – which must be about as simple as this could ever be.

ObMimic is also potentially usable for higher-level frameworks that run on top of the Servlet API, such as Struts. Such frameworks generally just need a suitably configured ServletContext and the relevant servlet/filter/listener definitions and mappings, plus various configuration files as static resources within the context – all of which are supported by ObMimic’s ServletContextMimic. And in many cases you can test components without needing the whole framework anyway – just requests and responses together with framework components that are themselves POJOs or otherwise suitably configurable. That’s the theory anyway. In practice this will depend on the details of the particular framework and the nature of its own classes and other API dependencies. But more of that in the next article…

Other current features of ObMimic include:

  • Configurability to simulate different Servlet API versions (2.3, 2.4 or 2.5).
  • A “mimic history” feature for recording and inspecting the Servlet API calls made to individual mimics.
  • Explicit checking and control over the many ambiguities in the Servlet API. That is, where the Servlet API Javadoc is ambiguous about how a particular argument value or sequence of calls should be treated, the ObMimic Javadoc documents the ambiguity and by default ObMimic throws an exception if the code being tested issues such a call, but can also be configured to ignore the call, throw a specified exception, or ignore the ambiguity and process the call in some “reasonable” manner.
  • A basic “in memory” JNDI simulation to support JNDI look-ups by the code being tested.
  • Easy to add to projects, as it consists of a single jar archive with no dependencies other than Java 5 or higher and the Servlet API itself (which the code being tested will already need anyway).

Features not yet present but intended for future versions include:

  • Mimics for the JSP API, to support “out-of-container” testing of JSP pages, tag handlers etc.
  • Population of ServletContextMimics from web.xml deployment descriptors.
  • Population of HttpServletRequestMimics from the text of HTTP requests.
  • Production of HTTP response texts from HttpServletResponseMimics.
  • Specific support for particular web-frameworks (depending on demand and any particular issues encountered).

I guess I’ll be writing a lot more about these and other features over the next few months.

Anyway, the ObMimic code has been fully tested during its development, and has certainly been useful during its own testing. However, it’s in the nature of the Servlet API that any non-trivial code tends to depend on a large subset of the API and the interactions between its classes. So it hasn’t seemed particularly worth trying out ObMimic on any “real” projects whilst it was incomplete.

Now that ObMimic has reached the stage where it covers the entire Servlet API, I’ve finally been able to take it for a spin and try it out on some previously-written code. In particular, I’m keen to see how it copes with frameworks such as Struts, as this is likely to be a good way to shake out any problems.

So the next article will look at my initial experiences with using ObMimic to test some existing filters, listeners, Struts components, and overall Struts 1 operation (including out-of-container execution of the Struts 1 “ActionServlet” controller). I hope to follow this with further articles as I try it out for other web-frameworks, and progress towards a beta-test and public release.

By the way, in case you were wondering, the “Ob” in “ObMimic” is based on our not-yet-officially-announced company name.





Testing those difficult-to-reach exceptions

19 12 2006

Summary

Generic parameters can be used in “throws” clauses. This makes it possible to define a method that can throw any exception passed to it as an argument, whilst being treated by the compiler as having only that particular exception’s type in its “throws” clause (for example, using a method declaration of the form “<T extends Throwable> void methodName(T argName) throws T”).

Such a method, together with a boolean property through which test cases can turn the actual throwing of the exception on/off, can provide an easy-to-use, general technique for testing exception-handling paths that would otherwise be difficult or impossible to test, by adding a simple call to the method into any code that requires it.

Background

Every now and then I find myself with exception-handling code that can’t easily be tested because it is difficult or impossible to force the relevant exception to occur. Until recently I’ve taken an ad-hoc approach to testing such code, but I’ve recently adopted a more general solution, and in doing so have stumbled across a neat little feature of “generics” that I hadn’t realized before.

There are a variety of circumstances in which one can end up with code whose exception handling can’t easily be tested. If you always try to write comprehensive tests for your code, you’ve probably encountered such situations at one time or another. Sometimes it’s due to calling a method that can potentially throw a checked exception but where the circumstances of the call are such that the exception can’t actually occur; sometimes the exception is one that the JDK defines as optional depending on the JVM implementation or underlying platform; sometimes the exception is just for truly exceptional circumstances that can’t easily be conjured up on demand. There are probably a variety of other situations where this can arise.

I’ve also encountered this with some code-coverage tools when trying to get 100% code-coverage on “synchronized” blocks, due to the byte-code having a separate path for exiting the block if any exception occurs (that is, although the source code does not have any explicit exception handling, the byte-code includes a separate path for catching any unchecked exceptions thrown during the block).

As a simple example, consider a call to one of the JDK methods that take a “charset name” argument and throw an UnsupportedEncodingException if the specified charset is not supported by the underlying JVM. If you’re specifically using one of the “charset name”s that are guaranteed to be supported on all JVMs, then the exception can never actually occur. You wouldn’t generally want to leave the UnsupportedEncodingException uncaught, as that would force all callers to cater for a checked exception that can’t occur and probably isn’t at all relevant to them (and in any case this would just shift the same problem up to the calling methods). Nor would you usually want to just ignore such an exception, because if it does occur then something is clearly wrong (and this would still leave you with an exception handling path that you can’t test, albeit one that just ignores the error). So you end up with code like the following, with a “catch” clause that you can’t easily test.

try {
    foo = bar.getBytes("US-ASCII");
} catch (UnsupportedEncodingException e) {
    // Shouldn't occur, as US-ASCII always supported.
    throw new InternalFailureRuntimeException(
        "JVM did not recognise charset name US-ASCII.", e);
}

Of course, one could just decide to not test the exception handling. For the above example, that’s probably reasonable, but for other situations it might not be. In any case, I’m always wary of code that isn’t tested (especially error-handling code, where execution of the code only happens when at least one thing has already gone wrong), and for various reasons on my current projects I always require my tests to give a clean “100%” code coverage.

Until recently I’ve taken a fairly ad-hoc approach to each such situation. Often the problem can be avoided by some combination of judicious refactoring, additional configuration or indirection facilities, and the use of suitable mocks/stubs or other such “custom” implementations for relevant method arguments. At worst, one can always introduce code to explicitly throw the required exception at some suitable point under the control of an additional boolean property, with suitable naming and documentation to indicate that this is for testing purposes only.

Inevitably, all solutions to this involve some degree of compromise to the design of the code and its public or protected interface. Tackling each situation independently results in a variety of different solutions, compromises, explanations etc, as well as needing separate analysis and decisions for each situation. So recently I’ve tried to see if I can find a simple, consistent solution for all such situations.

Solution

The most general solution would seem to be to introduce “test only” code to explicitly throw the relevant exception for those particular tests that require it. As a minimum, this involves an “if” condition controlled by some suitable boolean, as well as the actual throwing of the exception itself and suitable explanatory comments. Is it possible to move the “if” logic into a centralized method, and thus reduce this into a single self-documenting method call? Ideally one would want a single, centralized method that can optionally throw whatever type of exception is required, together with methods that test-cases can use to turn the actual throwing of the exception on and off.

To achieve this, the required exception would need to be passed to the method as an argument. How would such a method work in terms of its own “throws” clause and the calling code’s type safety? Well, let’s try to define such a method with the exception type as a generic parameter:

public static <T extends Throwable> 
        void allowTestingWithSpecifiedThrowable(final T e)
        throws T {
    if (testWithSpecifiedThrowable) {
        throw e;
    }
}

Somewhat surprisingly, this seems to be perfectly OK and it works as you would hope – the compiler treats each call to it as if its own “throws” clause specifies the particular type of Throwable that is actually being passed to it. So if you call such a method with an UnsupportedEncodingException, it is treated as a call to a method whose “throws” clause specifies that it can throw UnsupportedEncodingException, but if you call it with a ClassNotFoundException it is treated as a method whose “throws” clause specifies that it can throw ClassNotFoundException. As a result, the calling code can pass this single method any type of exception that the calling code catches or can throw, without being forced to catch or throw any other or more general exceptions.

Together with methods to turn the “testWithSpecifiedThrowable” boolean on and off, that’s the basic mechanism. The rest of the solution is straightforward, “normal” coding – such as deciding on appropriate method/property names, and whether to make the methods “static” or place them on some relevant object instance and provide methods for accessing that instance. Depending on how this is done, some care may also be needed to ensure thread-safety if this is required (in particular, to ensure that individual tests can set and use the “testWithSpecifiedThrowable” boolean without interfering with each other).

For my own implementation of this, I’ve called the relevant methods “allowTestingWithSpecifiedThrowable”, “setTestWithSpecifiedThrowable” and “resetTestWithSpecifiedThrowable”; made them static methods in a non-instantiable “TestUtilities” class; and declared them all as synchronized against the class itself.
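Based on that description, the core of the class looks roughly like this (Javadoc, argument checking and other niceties omitted):

public final class TestUtilities {

    /** Whether allowTestingWithSpecifiedThrowable should actually throw. */
    private static boolean testWithSpecifiedThrowable = false;

    /** Non-instantiable. */
    private TestUtilities() {
    }

    public static synchronized void setTestWithSpecifiedThrowable() {
        testWithSpecifiedThrowable = true;
    }

    public static synchronized void resetTestWithSpecifiedThrowable() {
        testWithSpecifiedThrowable = false;
    }

    public static synchronized <T extends Throwable>
            void allowTestingWithSpecifiedThrowable(final T e) throws T {
        if (testWithSpecifiedThrowable) {
            throw e;
        }
    }
}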

With this in place, the exception handling of the preceding example can be made testable by adding a suitable call to the “allowTestingWithSpecifiedThrowable” method:

try {
    foo = bar.getBytes("US-ASCII");
    TestUtilities.allowTestingWithSpecifiedThrowable(
        new UnsupportedEncodingException("Testing"));
} catch (UnsupportedEncodingException e) {
    // Shouldn't occur (except in testing), 
    // as US-ASCII always supported.
    throw new InternalFailureRuntimeException(
        "JVM did not recognise charset name US-ASCII.", e);
}

The tests of the method’s normal behaviour remain unchanged, but the above code’s exception handling can now be tested using code of the following form:

synchronized (TestUtilities.class) {
    TestUtilities.setTestWithSpecifiedThrowable();
    try {
        // ... call the code and check that it correctly
        // handles the UnsupportedEncodingException
        // as expected...
    } finally {
        TestUtilities.resetTestWithSpecifiedThrowable();
    }
}

All other such “impossible” exceptions and hard-to-reach exception-handling paths can be tested in exactly the same way, by adding a simple call to “allowTestingWithSpecifiedThrowable” (with a suitable example exception) at the end of the method’s normal processing (immediately before the relevant “catch” clause).

In addition, to simplify the common case where I just want to test a method’s handling of any arbitrary runtime exception, I’ve also added methods into my “TestUtilities” class to optionally throw an ExplicitlyRequestedRuntimeException, with this being a simple subclass of RuntimeException. In particular, I use this to test the exception-handling exit from all synchronized blocks, to ensure that code-coverage tools don’t mistakenly report such paths as untested.

Remaining issues

The only remaining issue with this implementation is that there is no way to turn the exception throwing “on” for testing of one method (or one particular exception within it) without it also being “on” for any and all other uses of it that happen to occur within the same test. In practice this doesn’t appear to be a problem, as relatively few methods require the use of this facility, and most individual tests are small enough and specific enough to not encounter combinations of such methods. If it does prove to be a problem in future, one solution would be to introduce some kind of “identifier” into the TestUtilities methods so as to be able to turn the exception-throwing on and off for individual named situations.

Conclusion

Is this all overkill? Probably. But now that it is in place, I’ve found it to be a clean, consistent and easy way to enable testing of a variety of otherwise difficult-to-test exception-handling code.

I guess it’s not that much simpler than coding the “if…throw” directly anywhere it’s required, but it does reduce this to a single unconditional call of a specific named method. As well as being a bit simpler/shorter, this makes it relatively self-documenting (or at least lets me document it all in the TestUtilities class rather than in each place it is used). Usages are also easy to search for. In practice I’ve definitely found this to be simpler and more explicit than the various ad-hoc approaches I’d been using before.

More generally I think it’s really neat that one can use a generic parameter in a method’s “throws” clause.







