Mocking an API should be somebody else’s problem

11 03 2008

In an interview about Next Generation Java Testing: TestNG and Advanced Concepts, the book he co-wrote with Cédric Beust, Hani Suleiman says:

“I’m fairly strongly against the use of mocks for Java EE constructs. These APIs are often complicated and come with whole swathes of tests to verify compliance. End users never see what a pain in the ass it actually is to certify a product as compatible to a given EE API. Mock implementations on the other hand are most certainly not certified. The danger of using them is that over time, people start implementing more and more of the API, its more code that can go wrong, and more code that’s written just for the sake of making your tests look good. Increasingly, it becomes more and more divorced from the real implementation.”

You might think that I would disagree with that, in view of my current work on an ObMimic library of test-doubles for the Servlet API (in the broad meaning of “test-double”).

But actually I strongly agree with it, and it’s one of the motivations behind ObMimic.

Yes, if you write your own stubs or mocks or use a general-purpose “mocking” tool, it can be extremely difficult to accurately simulate or predict the real behaviour of an API, and over time you’re likely to encounter more and more of the API. You can also find yourself needing more and more scaffolding and instrumentation to serve the needs of your tests. So it can be problematic and uneconomical to do this as part of an individual application’s testing. Even when it remains simple and doesn’t grow over time, it’s still additional thought and effort for each individual application, and very easy to get wrong. Whilst it’s all theoretically possible, and can seem very simple at the start, the long-term economics of it don’t look good.
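To make this concrete, here's roughly what the start of that slippery slope looks like: a hand-rolled stub for a trivial, made-up KeyValueStore interface (purely hypothetical, not any real API). It's only a few lines now, but each new test scenario tends to demand more state, more behaviour, and more of the real API's quirks:

```java
import java.util.HashMap;
import java.util.Map;

public class StubExample {

    /** A hypothetical API interface that the application code depends on. */
    public interface KeyValueStore {
        String get(String key);
        void put(String key, String value);
    }

    /** A hand-rolled stub: trivial at first, but each new test tends to
     *  demand more state and more of the real API's actual behaviour. */
    public static class StubStore implements KeyValueStore {
        private final Map<String, String> data = new HashMap<>();
        @Override public String get(String key) { return data.get(key); }
        @Override public void put(String key, String value) { data.put(key, value); }
    }

    /** Example code under test, depending only on the interface. */
    public static String greet(KeyValueStore store, String user) {
        String name = store.get(user);
        return name == null ? "Hello, stranger" : "Hello, " + name;
    }

    public static void main(String[] args) {
        StubStore store = new StubStore();
        store.put("u1", "Alice");
        System.out.println(greet(store, "u1"));   // Hello, Alice
        System.out.println(greet(store, "u2"));   // Hello, stranger
    }
}
```

Multiply this by the size and subtlety of something like the Servlet API, and the economics quickly turn sour.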

But it looks rather different if the necessary facilities are all provided for you by a specialist library developed by someone else. Then it’s up to them to figure out all the quirks of the API and provide complete and accurate coverage, and all you have to do in individual applications is to use it.

This is much like the need for a “container” or other such API implementations in the first place. For example, given the Servlet API, it’s neither feasible nor economic for each application to implement its own servlet container, but it’s perfectly reasonable to have separate, dedicated projects that produce servlet containers that everybody else just uses.

My own opinion is that the same goes for test-doubles for APIs such as the Servlet API: it’s not worth everyone doing this half-heartedly themselves, but it is worth somebody doing it well and producing a comprehensive, high-quality library that everyone else can just use.

Of course, this only works if the resulting library is complete enough and of good enough quality for you to be able to rely on it. This points to the kind of criteria on which to judge such suites of test-doubles:

  • How complete is the API coverage?
  • How accurate is the API simulation?
  • How configurable are the test-doubles?
  • How examinable are the test-doubles?
  • How well documented is it?
  • How easy is it to use?
  • What extra features does it provide to support testing? (e.g. strict or configurable validation of API calls, tracking of the API calls made, configurability for different versions of the API, etc.)
  • What dependencies does it have? (e.g. is it limited to use with specific tools, frameworks or scenarios, or is it more generally applicable?)
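To illustrate the "tracking the API calls made" criterion: a test-double can record each call made against it, so that tests can verify how the code under test actually used the API. A minimal sketch against a made-up Mailer interface (hypothetical, not any real API or any particular library's mechanism):

```java
import java.util.ArrayList;
import java.util.List;

public class TrackingExample {

    /** A hypothetical API interface, for illustration only. */
    public interface Mailer {
        void send(String to, String body);
    }

    /** A test-double that records every call made against the API,
     *  so that tests can examine and verify the calls afterwards. */
    public static class TrackingMailer implements Mailer {
        public final List<String> calls = new ArrayList<>();
        @Override public void send(String to, String body) {
            calls.add("send(" + to + ")");
        }
    }

    public static void main(String[] args) {
        TrackingMailer mailer = new TrackingMailer();
        mailer.send("a@example.com", "hi");
        System.out.println(mailer.calls);  // [send(a@example.com)]
    }
}
```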

Unfortunately, we don’t seem to have many API-specific libraries of test-doubles at the moment, and in my own limited experience those that we do have aren’t generally good enough.

That’s understandable, as it’s a huge amount of work to do this well for any substantial API that wasn’t written with testing in mind. Especially for APIs as complex, imperfect and subject to change as some of the older Java EE APIs.

Apart from my own ObMimic library for the Servlet API, I’m aware of some other attempts at doing this for the Servlet API, such as HttpUnit’s ServletUnit and the Spring framework’s org.springframework.mock.web package. However, in general these tend to be somewhat incomplete, inadequately documented, and lacking in configurability and test-instrumentation. Some are also outdated or defunct, limited to a particular web-app framework, or are a very minor and secondary component within a broader product that has a rather different purpose and priorities (and is thus unlikely to get much attention or maintenance).

In terms of other APIs, I’m aware of MockEJB for EJB, Mock Javamail for JavaMail, and a few such libraries for JNDI. There’s also a discussion of this issue in James Strachan’s blog article “Mocking out protocols and services is damn useful” (though some of the solutions mentioned are lightweight “real” implementations rather than test-doubles as such). But that seems to be about it.

Does anyone know any more? Or have any general views on the quality and suitability of any of these libraries?

As an ideal, I’d like to see the provision of a suitable library of any necessary test-doubles as a mandatory part of any Java API, in the same way that the JCP demands not just a specification but also a reference implementation and a compatibility kit.

That sounds like a big extra burden on API development. However, most new APIs ought to be designed so that test-doubles aren’t generally necessary in the first place, or can be relatively simple. For example, EJB 3.0 has far less need for this sort of thing than EJB 2.0, due to being more “POJO”-based. As another example, I believe the new JSR 310 Date and Time API is being designed so that the API itself will allow you to “stop” or control the time for testing purposes (for example, see slides 89-91 of Stephen Colebourne’s JavaPolis 2007 presentation on JSR 310).
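As it turned out, JSR 310 eventually shipped as the java.time package in Java 8, and this “stoppable time” design survived: a Clock can be fixed at a chosen instant and injected into the code under test. A minimal sketch of the idea:

```java
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;

public class FixedClockExample {

    /** Code under test takes a Clock rather than calling Instant.now()
     *  directly, so that tests can control "now". */
    public static boolean isExpired(Instant deadline, Clock clock) {
        return Instant.now(clock).isAfter(deadline);
    }

    public static void main(String[] args) {
        // "Stop" time at a known instant for the test.
        Instant fixedNow = Instant.parse("2008-11-03T12:00:00Z");
        Clock clock = Clock.fixed(fixedNow, ZoneOffset.UTC);

        Instant past = Instant.parse("2008-11-03T11:00:00Z");
        Instant future = Instant.parse("2008-11-03T13:00:00Z");
        System.out.println(isExpired(past, clock));   // true
        System.out.println(isExpired(future, clock)); // false
    }
}
```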

More generally, if this was always tackled as an intrinsic part of each API’s design then it ought to result in APIs that are more amenable to testing, and developing any test-doubles that are still necessary for such an API ought to be far simpler than trying to provide this retrospectively for an API that has ignored this issue. Having any necessary test-doubles should also be helpful in the development and testing of real implementations. In any case, this ought to be a more efficient division of labour than leaving everybody to hack their own way around the absence of such facilities.

As an absolute ideal, I’d want the resulting libraries of API-specific test-doubles to all take a similar form, with common features and facilities, terminology, naming conventions, usage patterns etc. But that’s probably getting into the realm of fantasy.

Out-of-container JSP testing with ObMimic and JspWeaver

19 02 2008

Updated Feb 2018: OpenBrace Limited has closed down, and its ObMimic product is no longer supported.

Updated May 2013: JspWeaver seems to no longer exist.


I’ve been experimenting with the use of ZeroTurnaround’s JspWeaver tool on top of my own ObMimic library so as to provide out-of-container testing of JSPs. It’s been relatively straightforward so far, though I’ve only tried fairly simple JSPs.

JspWeaver from ZeroTurnaround aims to speed up the development of JSP code by interpreting JSPs on the fly instead of the usual translate-and-compile each time a page is changed. As I understand it, the argument is that when you’re repeatedly editing and trying out a JSP, the translate-and-compile delay for each run interrupts your flow, and the delays mount up over time. So even where the delays are acceptable, it still helps and saves time overall if each run can be made faster. I imagine some people will find this a boon whilst others won’t see much point, depending on what delays they are seeing from JSP translation and compilation. It’s a commercial product but only $49 per seat (with volume discounts), so the price shouldn’t be an issue if it’s helpful to you.

Anyway, what interested me about this was the possibility of combining it with my own “ObMimic” library. This is a library of test-doubles for the Servlet API so as to support out-of-container testing of servlets and other code that depends on Servlet API objects. It’s not yet released, but approaching beta (documentation still being worked on). ObMimic’s test-doubles, which I call “mimics”, provide complete and accurate POJO simulations of all of the Servlet API’s interfaces and abstract classes. This includes full support for the various complex interactions between Servlet API classes (e.g. “forwards” and “includes”, session handling etc), with all their little quirks and oddities. The mimics are also fully configurable, so for example you can set up a ServletContext instance to use specified servlet mappings, static resources etc.

This allows you to use normal JUnit/TestNG tests and plain Java code to do detailed testing of servlets, filters, and any other code that depends on the Servlet API, without having to deploy or run in a servlet container, and with full ability to programmatically configure and inspect the Servlet API objects.

However, whilst ObMimic can be used for out-of-container testing of servlet code, it doesn’t currently have any explicit support for JSPs. I intend to add that in future, but for the time being you can’t use ObMimic to test JSP code. In contrast, JspWeaver can process JSP files but needs to be run in a servlet container as it depends on the Servlet API.

When I heard about JspWeaver, it seemed natural to try running it on top of ObMimic’s simulation of the Servlet API. With ObMimic for out-of-container simulation of the Servlet API, and JspWeaver for JSP processing on top of the Servlet API, the two together ought to allow out-of-container testing of JSP code.

So far I’ve only tried this on some simple JSPs that use a variety of JSP features, but it has all been fairly straightforward and seems to work quite well.

Basic Approach and Example Code

The basic approach to running JspWeaver on top of ObMimic is to configure an ObMimic “ServletContextMimic” in much the same way as one would configure a real web-application to use JspWeaver. This involves defining the JspWeaver servlet and mapping all JSP paths to it, and making the relevant JSP files and other resources available at the appropriate context-relative paths within the servlet context.

In addition, JspWeaver requires its licence file to be present within /WEB-INF/lib, and it issues a warning if it can’t find a web.xml file at /WEB-INF/web.xml (though even a dummy web.xml with minimal content proved sufficient for these tests).

Here’s some example code to illustrate one way to configure such a ServletContextMimic to use JspWeaver:

import com.openbrace.obmimic.mimic.servlet.ServletContextMimic;
import com.openbrace.obmimic.state.servlet.ServletContextState;
import com.openbrace.obmimic.substate.servlet.WebAppConfig;
import com.openbrace.obmimic.substate.servlet.WebAppResources;
import com.openbrace.obmimic.substate.servlet.InitParameters;
import com.openbrace.obmimic.substate.servlet.ServletDefinition;
import com.openbrace.obmimic.substate.servlet.ServletMapping;
import com.openbrace.obmimic.lifecycle.servlet.ServletContextMimicManager;
import com.zeroturnaround.jspweaver.JspInterpretingServlet;


// Create a ServletContextMimic and retrieve its "mimicState"
// (which represents its internal state), along with the
// relevant subcomponents of that mimicState.
ServletContextMimic context = new ServletContextMimic();
ServletContextState contextState 
    = context.getMimicState();
WebAppConfig webAppConfig 
    = contextState.getWebAppConfig();
WebAppResources webAppResources 
    = contextState.getWebAppResources();

// Add a servlet definition for the JspWeaver servlet.
String servletName = "jspWeaverServlet";
String servletClass = JspInterpretingServlet.class.getName();
InitParameters initParams = null;
int loadOnStartup = 1;
webAppConfig.getServletDefinitions().add(
    new ServletDefinition(servletName, servletClass,
        initParams, loadOnStartup));

// Add a servlet mapping for ".jsp" paths.
webAppConfig.getServletMappings().add(
    new ServletMapping(servletName, "*.jsp"));

// Use the contents of a specified directory as the 
// servlet context's "resources".
String webAppRoot = ...path to root of web-application files...
... configure webAppResources to use the files under webAppRoot ...

// Explicitly "initialize" the servlet context so as to force
// creation and initialization of its servlets, filters etc
// (otherwise ObMimic does this automatically when the first 
// Servlet API call occurs, but for this example it's shown as
// being done explicitly at this point).
new ServletContextMimicManager(context).initializeContext();


Note that:

  • This example code configures the servlet context to obtain its context-relative resources from a real directory structure that contains the web-application’s files (i.e. corresponding to an expanded “war” archive, or at least the subset of its files that are actually needed for the particular tests being carried out). This needs to include the JspWeaver licence file in its /WEB-INF/lib, and to avoid JspWeaver warnings it needs to include at least a “dummy” web.xml in its /WEB-INF directory.
  • Although this example shows the use of a whole directory structure to provide the web-application’s resources, ObMimic also lets you configure individual context-relative paths to use a specific resource. The resource itself can be provided by a file, or a classpath resource, or an in-memory byte array, or the content obtained from a URL. So you could, for example, provide the JspWeaver licence file and web.xml as classpath resources held alongside the test case’s class files. Similarly you could use the “real” web-application directory structure but then set particular context-relative paths to have different content for testing purposes.
  • A dummy web.xml proved sufficient to prevent “web.xml not found” warnings from JspWeaver, but more complex situations might require appropriate content within the web.xml – basically this would be necessary for any values that JspWeaver actually reads directly from the web.xml as opposed to accessing via the Servlet API (e.g. explicit TLD declarations). Of course, if you’re using a “real” web-application directory structure, it probably already has a suitable web.xml file.

The above approach is very general, in the sense that it configures the servlet context to use JspWeaver for any JSP files. As an alternative, ObMimic could also be used to explicitly construct and initialize an instance of the JspWeaver servlet which could then be used directly. That might be simpler when testing a single self-contained JSP, but it wouldn’t handle any request dispatching to other JSP files, or testing of servlets that “forward” to JSPs, or when also including some other framework that applies filters or servlets before dispatching to the JSP page (e.g. JSF).

With a suitably-configured ServletContextMimic, JSPs can then be tested by using normal ObMimic facilities to create and configure a suitable HttpServletRequest and HttpServletResponse and using ObMimic’s request-dispatching facilities to process them. This can include testing of JSPs in combination with servlets, filters etc (for example, testing a Servlet that “forwards” to a JSP).

For example, if you simply want to use the JspWeaver servlet to process a specific JSP this can be done by retrieving the JspWeaver servlet from the ServletContextMimic and directly invoking its “service” method with a request that has the appropriate URL. Alternatively (or if a more complex combination of servlets, filters, JSPs etc is to be tested) you can use the normal Servlet API facilities to obtain a RequestDispatcher from the servlet context for the relevant path and then “forward” to it. More generally, you can also use various ObMimic facilities to construct and invoke the appropriate FilterChain for a given context-relative path.

To illustrate this, here’s some example code that shows one way to configure a request with a URL for a particular context-relative path, and then directly invokes the JspWeaver servlet to process it (assuming that “context” is a ServletContextMimic configured as shown above):

import com.openbrace.obmimic.mimic.servlet.http.HttpServletRequestMimic;
import com.openbrace.obmimic.mimic.servlet.http.HttpServletResponseMimic;
import javax.servlet.Servlet;
import javax.servlet.ServletException;
import java.io.IOException;


// Create a request and response, with the 
// ServletContextMimic as their servlet context.
HttpServletRequestMimic request 
    = new HttpServletRequestMimic();
HttpServletResponseMimic response 
    = new HttpServletResponseMimic();
... associate the request and response with "context" as their servlet context ...

// Configure the request, including all of its URL-related
// properties (request URL, context path, servlet path, path 
// info etc).
String contextPath = context.getMimicState().getContextPath();
String contextRelativePath = "/pages/jstl.jsp";
String serverRelativePath = contextPath + contextRelativePath;
... set the request's URL-related properties from serverRelativePath ...

... further configuration of the request as desired ...

// Retrieve the JspWeaver servlet from the servlet context
// and invoke it.
Servlet target = context.getMimicState().getServlets().get(
    "jspWeaverServlet");
try {
    target.service(request, response);
} catch (ServletException e) {
    ... failed with a ServletException ...
} catch (IOException e) {
    ... failed with an IOException ...
}

... examine the response, context, HTTP session etc
    as desired ...


On completion, the normal ObMimic facilities can be used to examine all relevant details of the response, servlet context, HTTP session etc. For example, you can use “response.getMimicState().getBodyContentAsString()” to retrieve the response’s body content.

General Findings

I’ve successfully used the combination of JspWeaver and ObMimic to test simple examples of:

  • A very simple JSP with plain-text content.
  • A JSP with embedded JSP declarations, scriptlets and expressions.
  • A JSP with static and dynamic includes.
  • A JSP that uses a custom tag file.
  • A JSP that makes use of JSTL tags and EL expressions.
  • A Servlet that forwards to a JSP.

In theory this should be able to cope with any JSP code that JspWeaver is able to successfully interpret.

Performance is potentially a concern. JspWeaver’s interpretation of JSP is faster than translate+compile when repeatedly editing and manually viewing a page, and in general its use for “out-of-container” testing should be faster and more convenient than HTTP-based “in-container” tests. But it’s bound to be slower than using a pre-compiled page when repeatedly running many tests against an unchanging page. First impressions are that the performance is good enough for reasonable use, but I wouldn’t want to have huge numbers of tests for each page as part of my frequent/detailed test runs. Overall it looks like a reasonable approach until a suitable translate+compile out-of-container solution is available – at which point one might still want a choice between the two approaches (e.g. use “translate+compile” when running suites of tests in build scripts, but “interpret” when manually running individual tests within an IDE).

Detailed Findings

Whilst this has all basically worked without undue difficulty, there have been a few minor issues and other findings:

  • The “bsh” parser used by JspWeaver to parse JSP pages doesn’t appear to cope with generics yet. Any use of generics within JSP scripting seems to result in parsing exceptions, presumably because the parser is thrown off by the “<” and “>” characters (to the extent that the resulting error messages are rather misleading or unhelpful). Maybe generics just aren’t allowed in JSP scripting, or maybe it depends on whether the particular JSP translator/interpreter copes with them. But off-hand I’m not aware of any general JSP restriction on this.
  • More generally, syntax errors in the JSP can be somewhat hard to find from the error messages produced by JspWeaver when trying to interpret the files (especially when using static/dynamic includes, tag files etc).
  • There might be issues over how best to integrate this into one’s overall build process. For example, if the JSP code needs tag libraries that are built by the same project, these would need to be already built and jar’d before testing the JSPs, even though normally you’d probably run all such tests before “packaging” the libraries. This shouldn’t be a show-stopper, but might need some adjustments and re-thinking of the order in which components are built and tested, and which tests are run at which point.
  • When locating tag files, JspWeaver uses the ServletContext.getResourcePaths method but passes it subdirectory paths without a trailing “/” (for example, “/WEB-INF/tags” rather than “/WEB-INF/tags/”). Although this will generally work, the Javadoc for this Servlet API method isn’t entirely explicit about the form of such paths, and arguably implies that they should end with a trailing “/”, as do all of its examples. By default ObMimic therefore rejects calls to this method for paths that don’t have a trailing “/”, to highlight this possibly suspect use of the Servlet API (i.e. behaviour might vary depending on the servlet container implementation). So for these tests ObMimic has to be configured to ignore this and permit such calls (its behaviour is then to treat the given path as if it did have the trailing “/”).
  • It can be tricky getting the classpath correct for all the libraries needed for JSP processing. The libraries needed for JSTL and EL support have changed as JSTL and EL have evolved, EL has been moved into the JSP API, and there are incompatible versions of the libraries knocking around. It’s just the usual “jar hell”. In particular, I’ve not yet found a combination of Glassfish jars that work with its “javaee.jar” (I always seem to get a java.lang.AbstractMethodError for javax.servlet.jsp.PageContext.getELContext(ELContext)). The only working solution that I’ve found so far is to use the Tomcat 6 jstl.jar and standard.jar. This is presumably solvable with a bit more research and tinkering, unless it’s something in JspWeaver that specifically depends on “old” versions of these libraries, but I haven’t followed this up yet.
  • More generally, it can be hard to work out which jars are needed when any are missing, as the error messages and the nature of the failures aren’t always as useful as they could be (e.g. the console shows an exception/stack trace from some point in JspWeaver’s internal processing of the page, yet the JSP apparently completes, producing truncated output but no exception).

Next Steps

I haven’t yet tried any particularly complex JSPs or “complete” real-life applications, but would like to get around to this if I ever find time. I’ve also got an existing Struts 1 example where the servlet processing is tested using ObMimic but with its JSPs “dummied out”, and at some point I’d like to go back to this and see if its tests can now properly handle the JSP invocations.

I’d also like to see if JSF pages can be tested this way. I’ve had a brief dabble with this, using Sun’s JSF 1.2 implementation as provided by Glassfish. I’ve got as far as seeing JSF “initialize” itself OK (once I’d realized that it needs an implementation-specific context listener, which for the Sun implementation is provided by class com.sun.faces.ConfigureListener). But JspWeaver doesn’t seem to be finding the TLDs for the JSF tags, even when the tag libraries are present in both the classpath and the web-application directory structure. Providing the TLDs as separate files and explicitly specifying these in the web.xml does result in them being found, but JspWeaver then complains that the actual tag classes don’t have the relevant “setter” methods for the tag’s attributes. I’d guess this is a classpath problem (maybe related to which JSTL/EL libraries are used), or otherwise depends on exactly how JspWeaver searches for TLDs and tag libraries (or maybe what classloader it uses). But I haven’t yet got round to digging any deeper. Also, whilst I’d like to see tests of JSF pages working, I’m not even sure what testing JSF pages like this would mean, given their rather opaque URLs and “behind-the-scenes” processing (e.g. what request details are needed to test a particular feature of the page, and what would you examine to see if it worked correctly?).

Another thought is that ObMimic could be made to automatically detect the presence of JspWeaver and configure itself to automatically use it for all JSP files if not otherwise configured. Maybe this could be a configurable option for which servlet to use for processing JSPs, with the default being a servlet that checks for the presence of JspWeaver and delegates to it if present. That would let you automatically use JspWeaver if you have it, whilst also allowing for other JSP processors to be introduced.

Some specific issues from ObMimic portability testing

7 10 2007

As mentioned in my previous Serial Monogamy post, I’ve recently been doing some portability testing on the Java code of my ObMimic library for out-of-container testing of Servlet code.

So, as promised, here are details of the specific issues I encountered, primarily as a record for my own future reference but also in case it’s any help to anyone else.


First, some background:

  • The code consists of about 400 classes plus test-cases.
  • The resulting library is intended to be usable on any JRE for Java SE 5 or higher.
  • There’s nothing intrinsically platform-dependent or likely to raise any major portability issues. In particular, there’s no Swing or other such GUI code. But otherwise the code and its test-cases do use a fairly broad range of facilities. For example, there is some file handling, some charset-sensitive string processing, some URL encoding and decoding, and calls to JDK methods whose error-handling is defined as implementation-dependent.
  • Reasonable efforts have been made to keep the code fully portable (e.g. use of system properties for path and file separators, explicit use of Locales and charsets where appropriate etc).
  • All development and testing has been done on Sun JDKs on MS Windows, with testing of portability deferred until now. This approach was chosen based on the confidence gained from previous experiences with Java portability, and was judged to be the most efficient way to tackle it for this particular project.
  • The test-cases are intended to be reasonably comprehensive, and include tests of all configurable options, all error-handling code, all handling of checked exceptions etc. The EMMA code-coverage tool reports them as giving 100% code coverage.
  • My own build script for this code doesn’t need to be portable. However, one of the deliverables is an “Enterprise Edition” that includes all the source code and test-cases and its own Ant build script. This does need to be as portable as possible.
  • One potential restriction on the “Enterprise Edition” build script is that a custom Javadoc taglet is used to document one particular aspect of ObMimic’s API. In theory, such Javadoc taglets depend on “com.sun” classes in the Sun JDK’s tools.jar archive, and appear to be specific to Sun’s Javadoc tool rather than being a standard part of the Java SE platform. Hence this aspect of the build script theoretically restricts it to Sun JDKs, though the resulting library remains fully portable (and at a pinch you could always remove the taglet processing from the build if you really wanted to run the build on some other JDK). In practice, other JDKs generally claim that their Javadoc tool is fully compatible with Sun’s, and might even use the very same code. So prior to testing this, it was somewhat unclear how portable the custom “taglet” might be.

The Tests

For the moment I’m only testing on Sun, IBM and BEA JRockit JDKs for Java SE 5 and 6 plus the latest Sun JDK 7 build, on MS Windows and on a representative Linux system (actually, Ubuntu 7.04), and only on IA-32 hardware.

I’m assuming this should shake out most of the portability issues. It’s all I have readily to hand at the moment, and probably represents the majority of potential users.

I hope to extend this to Solaris and Macintosh and maybe other JDKs in future, and to other hardware as and when I can afford it. But I don’t expect many further issues once the code is fully working on both MS Windows and a Unix-based system and on JDKs from three different vendors – though I’d be interested if anyone has any experiences that suggest otherwise.

Another aim of the tests is to check that the deliverables don’t have any unexpected dependencies on my own development environment.

So the testing consists of:

  • Installing the various JDKs onto each of the relevant systems, but without all of the other tools and configuration that make up my normal development environment.
  • Putting the deliverables for each of ObMimic’s various “editions” onto each system.
  • On each system, running an Ant script that unzips/installs/configures each of the ObMimic editions, then runs the build script of the ObMimic “Enterprise Edition” (which itself builds the ObMimic library and test-cases from source, builds its Javadoc, and runs the full suite of test-cases). Then runs the test-cases against the pre-built libraries of the other ObMimic editions. And repeats this for each of the system’s JDKs in turn.

Actually, the “Enterprise Edition” build script is run using the full JDK, as it needs javac, javadoc, and the JDK’s tools.jar or equivalent, but all other tests are run using the relevant JRE on its own (to check that only a JRE is required).

The Findings

As expected, there were a few minor issues but most of the code was fine and worked first time under all of the JDKs on both MS Windows and Linux.

Even though I know it ought to work like this, it still makes me jump about like an excited little kid every time I see it! I guess that’s what comes of past lives struggling with the joys of C/C++ macros, EBCDIC, MS Windows “thunking” and the like. Java makes it far too straightforward!

Anyway, here are the details of the few issues that I did encounter.

1. Source-code file encoding.

A few test-cases involving URL encoding/decoding and other handling of non-ASCII characters failed when the code had been compiled on Linux.

This turned out to be due to javac misreading the non-ASCII characters in the test-case source code. The actual problem is that the source files are all written using ISO-8859-1 encoding, but by default javac reads them using a default encoding that depends on the underlying platform. On MS Windows everything was being read correctly, but on Linux javac was trying to read these files as UTF-8 and was therefore misinterpreting the non-ASCII characters.

The solution was to explicitly specify the source file encoding to javac. This is done via javac’s “-encoding” option (or the corresponding “encoding” attribute of Ant’s “javac” task).
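For reference, the Ant side of the fix is a single attribute on the “javac” task (the srcdir/destdir values here are illustrative, not the actual build’s paths):

```xml
<!-- Compile with an explicit source-file encoding, so that the result
     does not depend on the platform's default charset. -->
<javac srcdir="src" destdir="build/classes" encoding="ISO-8859-1"/>
```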

For additional safety, I also decided to limit all my Java source code files to pure 7-bit ASCII, with unicode escape codes for any non-ASCII characters (e.g. \u00A3 for the “pound sterling” character). This is perfectly adequate for my purposes, and should be the safest possible set of characters for text files. Searching for all non-ASCII characters in the code revealed only a handful of such characters, all of them in test data within test-cases.

The Sun JDK’s native2ascii tool might also be of relevance for anyone writing code in a non “Latin 1” language, but for me sticking to pure ASCII is fine.

2. Testing of methods that return unordered sequences.

Testing on the IBM JDK revealed a handful of test-cases that were checking for a specific sequence of data in a returned array, collection, iterator, enumeration or the like, even where the returned data is explicitly specified as having an undefined order.

The Sun and IBM JDKs seem to fairly reliably produce different orderings for many of these cases. The Sun JDKs generally seem to give results in the order one might naively expect if forgetting that the results are unordered, but the IBM JDK generally seems to give results in a very different order. Some of these mistakes thus slipped through the normal testing on Sun JDKs, but were picked up when tested on the IBM JDKs.

In some cases the solution was to rework the test to use a more suitable “expected result” object or comparison technique (especially as I already have a method for order-insensitive comparison of arrays). In other cases it proved simpler to just explicitly cater for each possible ordering.
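The order-insensitive comparison itself doesn’t need anything fancy: comparing element counts is enough. A rough stand-alone sketch of the idea (hypothetical code; my actual helper differs in its details):

```java
import java.util.HashMap;
import java.util.Map;

public class UnorderedAssert {

    // True if the two arrays contain the same elements with the same
    // multiplicities, regardless of order.
    static boolean sameElementsAnyOrder(Object[] expected, Object[] actual) {
        return expected.length == actual.length
            && counts(expected).equals(counts(actual));
    }

    // Count the occurrences of each distinct element.
    private static Map<Object, Integer> counts(Object[] values) {
        Map<Object, Integer> result = new HashMap<>();
        for (Object value : values) {
            result.merge(value, 1, Integer::sum);
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(sameElementsAnyOrder(
            new String[] { "a", "b", "b" }, new String[] { "b", "a", "b" })); // true
        System.out.println(sameElementsAnyOrder(
            new String[] { "a", "b" }, new String[] { "a", "a" })); // false
    }
}
```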

It’s hard to know if all such incorrect tests have now been found. There could be more that just happen to work at the moment on the particular JDKs used. On the other hand, it’s only the tests that are wrong, not the code being tested, and the only impact is that the test is overly restrictive. So the only risk is that the test might unnecessarily fail in future or on other JDKs. For the moment that’s a risk I can live with, and I’ll fix any remaining problems as and when they actually appear.

Potentially this could also be avoided by always using an underlying collection that provides a predictable iteration order, even where this isn’t strictly required (for example, LinkedHashMap). However, that feels wrong if the relevant method’s specification explicitly defines the order as undefined, and could be misleading if callers start to take the reliable ordering for granted. This is especially true for ObMimic, where I’m simulating Servlet API methods to help test the calling code. I don’t want to provide the calling code with a predictable ordering when a normal servlet container might not. If anything, it might be better to deliberately randomise the order for each call, or at least make that a configurable option. So I’ve noted that as a possible enhancement for future versions of ObMimic.

3. All of the IBM JDK’s charsets support “encoding”.

One of the test-cases needs to use a Charset that doesn’t support encoding – that is, one whose Charset#canEncode() returns false. This failed on the IBM JDKs, due to being unable to find a suitable Charset to use.

The test-case tries to find a suitable Charset by searching through whichever Charsets are present and picking the first one it finds that doesn’t support encoding, and fails if it can’t find any such Charset. That’s fine on Sun’s JDK, where a few such charsets exist. But on the IBM JDK, every charset that is present returns true from its canEncode method, and the test therefore fails and reports that it can’t find a suitable charset to use.

The solution was to introduce a custom CharsetProvider into the test classes and have this provide a custom charset whose “canEncode” method returns false. This ensures that the test can always find such a charset, even if there are none provided by the underlying JVM.
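In outline, the custom charset can be as simple as the following sketch (hypothetical and simplified, not the actual test class). A matching CharsetProvider subclass then returns it, and is registered via a META-INF/services/java.nio.charset.spi.CharsetProvider entry so that the JVM can find it:

```java
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.StandardCharsets;

// A minimal charset that reports itself as unable to encode.
// (Name and behaviour are illustrative only.)
public class NonEncodingCharset extends Charset {

    public NonEncodingCharset() {
        super("X-TEST-DECODE-ONLY", null);  // no aliases
    }

    @Override
    public boolean contains(Charset cs) {
        return cs == this;
    }

    @Override
    public boolean canEncode() {
        return false;  // the whole point: encoding is unsupported
    }

    @Override
    public CharsetDecoder newDecoder() {
        // Shortcut for the sketch: borrow ISO-8859-1's decoder so that
        // decoding is at least workable.
        return StandardCharsets.ISO_8859_1.newDecoder();
    }

    @Override
    public CharsetEncoder newEncoder() {
        throw new UnsupportedOperationException("decode-only charset");
    }

    public static void main(String[] args) {
        System.out.println(new NonEncodingCharset().canEncode()); // false
    }
}
```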

I guess I could just use this custom non-encodable charset every time, but for some reason I feel more comfortable keeping the existing code that looks through all available charsets and picks the first suitable one that it finds.

4. Javadoc “taglet” portability.

All of the JDKs handled the Javadoc “taglet” correctly.

In particular, the IBM and BEA JRockit JDKs do contain the “com.sun” classes needed by the custom “taglet”, and they had no problem compiling, testing and using the taglet.


Mostly, everything “just worked” as one would expect it to. The issues encountered were all pretty minor, only affected test-case code, and were easily identified and fixed.

It was worthwhile testing on both MS Windows and Linux as this revealed the source-code encoding problem, and it was worthwhile testing on both Sun and IBM JDKs as their internal implementations proved different enough to shake out a few mistakes and unjustified assumptions in the test-cases.

Some specific lessons I take from this:

  • Always specify the source-code encoding to the javac compiler (but also try to limit the source code to pure ASCII where possible, with unicode escapes for anything more exotic).
  • Whatever the other pros and cons of having comprehensive test-cases with 100% coverage, they’re mightily useful once you have them. With a comprehensive suite of tests, you can easily test things like portability (or, for example, what permissions are needed when running under a security manager). You just run the whole suite of existing tests, confident in the knowledge that this is exercising everything the code might do.
  • Whilst you’d probably assume that the Javadoc tool is a “proper” standard and part of the Java SE platform, technically it’s a Sun-specific tool within Sun’s JDK, and any custom doclets and taglets are dependent on “com.sun” classes. It seems crazy that after all this time the mechanisms for providing customised Javadoc still aren’t a standard part of the Java SE platform, but there you go. Despite this, in practice you can fairly safely regard the Javadoc tool and the “com.sun” classes as a de-facto standard. In particular, the Javadoc tools in the IBM and BEA JRockit JDKs seem to be entirely compatible with Sun’s Javadoc tool, and do provide the necessary “com.sun” classes.

I’m also going to think about whether methods that return “unordered” arrays, iterators, enumerations etc ought to deliberately randomise the order of their returned elements. This would help pick out any tests that incorrectly assume a specific order. The downside is that any resulting test failures wouldn’t be entirely repeatable, which always makes things much harder. It’s also questionable whether this is worth the extra complexity and potential for errors that it would introduce into the “real” code. And it’s not something you’d want to do in any code you’re squeezing for maximum performance. So maybe this is one to ponder for a while, and keep up my sleeve for appropriate situations.
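If I did go down that route, the randomisation itself could be as simple as shuffling a copy of the elements before exposing them. A minimal sketch (hypothetical code, not part of ObMimic):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Enumeration;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ShuffledEnumeration {

    // Return the elements as an Enumeration in a deliberately random order,
    // so that callers can't come to rely on any particular ordering.
    static <T> Enumeration<T> shuffled(List<T> elements) {
        List<T> copy = new ArrayList<>(elements);
        Collections.shuffle(copy);
        return Collections.enumeration(copy);
    }

    public static void main(String[] args) {
        Enumeration<String> e = shuffled(List.of("a", "b", "c"));
        Set<String> seen = new HashSet<>();
        while (e.hasMoreElements()) {
            seen.add(e.nextElement());
        }
        // Order varies from run to run, but the elements are always the same.
        System.out.println(seen.equals(Set.of("a", "b", "c"))); // true
    }
}
```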

First use of ObMimic for out-of-container testing of Servlets and Struts (Part 2)

27 06 2007

Updated Feb 2018: OpenBrace Limited has closed down, and its ObMimic product is no longer supported.

As explained in part 1 of this posting, I’ve recently started trying out my newly-completed ObMimic library for out-of-container POJO-like testing of servlet code.

So, as promised, here are some of my early experiences from starting to use ObMimic for testing of some simple existing filters, listeners, an old “Struts 1” application (including out-of-container running of the Struts 1 controller servlet and configuration file), and the “UrlRewriteFilter” library.

These are just some initial “smoke test” experiments to check how ObMimic copes with simple situations, and to evaluate its usability. The ObMimic code itself has been tested in detail during its development, and its use for more complex and useful scenarios will be examined later.

Also note that this is primarily intended for my own historical records – actual ObMimic documentation, tutorials, Javadoc etc will be published when it’s ready for public release.

Experiment 1: Some simple listeners, filters and other basic Servlet API code

As a gentle first step, I revisited some listeners and filters in various old projects, and some other utility methods that take instances of Servlet API interfaces as arguments. Some of these had existing tests using mock objects, others didn’t. None of them do anything particularly complicated.

Writing out-of-container tests for them using ObMimic was straightforward and all worked as intended. As you’d expect, it basically just involves:

  • Creating and configuring the various objects needed as arguments. For example, using “new ServletContextMimic()” to create a ServletContext for use in a ServletContextEvent, and configuring it via the ServletContextState object returned by its getMimicState() method. Or creating and configuring an appropriate FilterChainMimic for passing to a filter.
  • Making the call to an instance of the class being tested.
  • Carrying out whatever checks are necessary to see if the code being tested has worked correctly.

The details, of course, depend on exactly what the code being tested is supposed to be doing.

For simple listeners, filters and other such code this is pretty straightforward. For simple unit-tests of such code, one could do much the same thing using mock objects, though from my (admittedly biased) point of view I think that the ObMimic code is slightly simpler and more direct than the same tests using mock objects, even for these simple cases. At any rate, it better fits my general preference for “state-based” rather than “interaction-based” testing, and as we’ll see later it can also handle far more complex situations.

For testing more complex code, there’d typically be more set-up involved to get the context, request, response etc configured as required (for example, so as to appropriately handle any “forwarding” done by the code being tested). Similarly, checking of results can become arbitrarily complicated depending on what the code being tested actually does. But that’s just the usual joy of testing something. We’ll see a more complex example later on.

Experiment 2: Struts 1 “Actions”

The next set of components examined were a handful of “Actions” in an old Struts 1 application. Actually, this was a bit of an anti-climax. The Struts “ActionForm” classes are “POJO”s anyway and already had test cases, and the “execute” method of each Struts “Action” class just needs:

  • A suitable HttpServletRequestMimic and HttpServletResponseMimic for use as the request and response (with appropriately-configured ServletContextMimic, HttpSessionMimic etc as necessary for each test).
  • An instance of the relevant ActionForm subclass, configured with the required property values.
  • A Struts ActionMapping instance, configured to map the relevant result strings to a suitable “ActionForward” instance.

The Struts ActionMapping and ActionForward classes are both suitably POJO-like, so are easily configured. There isn’t even any need to configure mapping of the ActionForward’s path to an actual target resource, as the “execute” method just returns the relevant ActionForward rather than actually carrying out the forwarding.

A few of the Action classes did need a fair bit of configuration of the HttpServletRequestMimic, its ServletContextMimic and the relevant HttpSessionMimic for some of the individual tests, but this was all relatively straightforward.

Although such tests check the Action’s “execute” method in isolation, it would also seem useful (and, for purposes of these experiments, more challenging) to be able to test the broader overall handling of a request. That is, including the mapping of a request to the correct ActionForm and Action and their combined operation. So the next experiment was to try and execute the Struts “controller” servlet, so as to be able to do “out-of-container” testing of the Struts configuration file, ActionForm and Action all together.

Experiment 3: Struts 1 controller servlet

The aim for this experiment was to try to get the Struts 1 controller servlet running “out-of-container” using ObMimic. This is partly motivated by wanting to be able to “integration test” the combination of a Struts configuration file, ActionForm and Action. But more importantly this seemed like a useful more general and more challenging test of what ObMimic can cope with, and an indication of how easy or hard it might be to get ObMimic working for other web frameworks.

The first step is to configure a ServletContext to be able to run Struts, in much the same way as one would configure a real web-application for Struts. Whilst there are several ways to do this, and the following example includes some things that aren’t strictly necessary, for purposes of this experiment I chose to do this as closely as possible to how it would be done in a web.xml. This resulted in code of the following form (adjusted a bit to help illustrate it):

// Create the ServletContextMimic and (for convenience) retrieve
// its MimicState and relevant objects within its MimicState.

ServletContextMimic servletContext = new ServletContextMimic();
ServletContextState contextState = servletContext.getMimicState();
WebAppConfig webAppConfig = contextState.getWebAppConfig();
WebAppResources webAppResources = contextState.getWebAppResources();

// Give the web-app a context path.

...set the context path via webAppConfig...

// Add the struts-config file (provided as a system resource file
// in this class's package) as a static resource at the 
// appropriate location.

String strutsConfigResourceName
    = getClass().getPackage().getName().replace('.', '/') 
        + "/ExampleStrutsConfig.xml";
...add a static resource to webAppResources at the appropriate
   context-relative location, with its content supplied by
   new SystemReadableResource(strutsConfigResourceName)...

// Add a servlet definition for the struts controller, including 
// an init-parameter giving the location of the struts-config file.

InitParameters strutsControllerParameters = new InitParameters();
...set an init-parameter giving the location of the struts-config file...
int loadOnStartupOrder = 10;
ServletDefinition strutsController = new ServletDefinition(
    ...servlet name, controller servlet class, init-parameters,
       load-on-startup order etc...);

// Add a servlet mapping for the struts controller.

ServletMapping strutsControllerMapping 
    = new ServletMapping("strutsController", "*.do");
...add the servlet definition and mapping to webAppConfig...

// Initialize the context ("load-on-startup" servlets are 
// created and initialized etc).

ServletContextMimicManager contextManager 
    = new ServletContextMimicManager(servletContext);
...initialize the context via contextManager...

Here, the SystemReadableResource class used to access the Struts config file is a ReadableResource as described in a previous article. It reads the content of the Struts config file to be used in the test, with this being supplied as a file in the same package as the above code. (Alternatively, the application’s existing Struts config file could be accessed using a “FileReadableResource”, but the details would depend on the project’s directory structures, whereas the approach shown here also allows individual tests to use their own specific Struts configuration and keep it with the test-case code).

The rest of the classes involved are ObMimic classes. Hopefully the gist of this is fairly clear even without their full details.

One slight concession is that ObMimic doesn’t yet support JSP, so where the struts-config file specifies a path to a JSP file, the test needs to map such paths to a Servlet instead. This involves defining a suitable servlet (e.g. an HttpServlet subclass with a “doPost” method that sets the response’s status code to OK and writes some identifying text into the response’s body content, so that the test can check that the right servlet was reached). The corresponding servlet definition and servlet mapping can then be added to the above configuration of the ServletContextMimic (similar to those for the Struts controller servlet).

Then we just need a suitable request to process. Again, there are various ways to do this, and the particular details depend on the needs of the individual test. In outline, the code used for this experiment was along these lines (demonstrating a POST with body-content request parameters):

HttpServletRequestMimic request = new HttpServletRequestMimic();
try {
    ...populate the request's URI details, parameters and
       POST body content...
} catch (UnsupportedEncodingException e) {
    fail("Attempt to configure request body content "
        + "for a POST failed due to unexpected " + e);
}
Here, the “populateRelativeURIFromUnencodedURI” method is one of various such short-cuts provided by ObMimic for setting request URI/URL details from various types of overall URL strings. This one takes a non-URL-encoded container-relative path, interprets it based on the ServletContext’s mappings etc, and populates the request’s context-path, servlet-path and path-info accordingly.
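To illustrate the kind of splitting such a method has to do: for an extension mapping like “*.do”, the Servlet API specifies that everything after the context path becomes the servlet path and the path-info is null. A stand-alone sketch of just that rule (hypothetical code, not ObMimic’s implementation):

```java
public class RequestPathParts {

    public final String contextPath;
    public final String servletPath;
    public final String pathInfo;   // null for extension mappings

    public RequestPathParts(String contextPath, String servletPath, String pathInfo) {
        this.contextPath = contextPath;
        this.servletPath = servletPath;
        this.pathInfo = pathInfo;
    }

    // For an extension mapping such as "*.do", the entire remainder after
    // the context path becomes the servlet path, and path-info is null.
    public static RequestPathParts forExtensionMapping(String uri, String contextPath) {
        if (!uri.startsWith(contextPath)) {
            throw new IllegalArgumentException("URI not within context: " + uri);
        }
        return new RequestPathParts(
            contextPath, uri.substring(contextPath.length()), null);
    }

    public static void main(String[] args) {
        RequestPathParts parts
            = forExtensionMapping("/myapp/customer/edit.do", "/myapp");
        System.out.println(parts.contextPath);   // /myapp
        System.out.println(parts.servletPath);   // /customer/edit.do
        System.out.println(parts.pathInfo);      // null
    }
}
```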

The response can start out as just a plain HttpServletResponseMimic with the correct ServletContext:

HttpServletResponseMimic response = new HttpServletResponseMimic();

So then we can invoke the Struts controller servlet, and it should all work just as it would within a servlet container, based on the supplied struts-config file and the ServletContextMimic’s configuration.

We could get hold of the Struts controller servlet from the ServletContextMimic by name, or maybe even just use a new instance of it. However, as we’ve gone to the effort of configuring a mapping for it, we might as well start with the request’s URI and do the actual look-up. For this I use a convenience method on ObMimic’s ServletContextMimicManager class that returns the target resource for a given context-relative path (again, there are various ways to do this, with or without any necessary filter chain etc, but this will do for these purposes):

ServletContextMimicManager contextManager 
    = new ServletContextMimicManager(servletContext);
Servlet actionServlet 
    = contextManager.getServletForRelativePath(
        ...the request's context-relative path...);
try {
    actionServlet.service(request, response);
} catch (IOException e) {
    fail(...suitable message...);
} catch (ServletException e) {
    fail(...suitable message...);
}
Then it’s just a matter of checking the response’s content (using, for example, calls such as “response.getMimicState().getBodyContentAsString()”), and anything else necessary to check that the request has been processed correctly.

Well, that’s the theory. So what happened in practice? A couple of minor problems were encountered, but easily overcome:

  • The version of Struts used appears to issue “removeAttribute” calls even where the attribute is not present. Although the Servlet API Javadoc for HttpSession specifies that its removeAttribute method does nothing if the attribute is not present, the Javadoc for ServletContext and ServletRequest don’t explicitly specify whether this is permitted or how it should be handled. ObMimic therefore treats such calls to ServletContext.removeAttribute and ServletRequest.removeAttribute as “API ambiguities”. Its default behaviour for these is to throw an exception to indicate a questionable call. But ObMimic’s handling of such ambiguous API calls is configurable, so the immediate work-around was just to have the test-case programmatically configure ObMimic to ignore this particular ambiguity for these particular methods, such that the removeAttribute calls do nothing if the attribute doesn’t exist. In retrospect it’s probably way too strict to treat this as an ambiguity – it’s a reasonable assumption that removeAttribute should succeed but do nothing if the attribute doesn’t exist, and there is probably lots of code that does this. So I’ve relented on this, and gone back and changed ObMimic so that this isn’t treated as an ambiguity anymore.
  • It turns out that the version of Struts used actually reads the contents of the /WEB-INF/web.xml file. This took a bit of hunting down, as the resulting exception wasn’t particularly explicit, but because the run is all “out-of-container” it was easy to step through the test and into the Struts code in a debugger and find where it failed. The solution is to add a suitable web.xml file to the test class’s package and make this available to the ServletContext as a static resource at /WEB-INF/web.xml (in the same way as the struts-config.xml file). Actually, at least for this particular test, the precise content of the web.xml doesn’t seem to matter – Struts seems perfectly happy with a dummy web.xml file with a valid top-level <web-app> element but no content within it.
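The lenient removeAttribute behaviour described in the first point above amounts to little more than a map removal, which is already a no-op for an absent key. A minimal sketch (hypothetical, not ObMimic’s actual code):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of lenient attribute-removal semantics: removing an attribute
// that isn't present simply does nothing.
public class AttributeStore {

    private final Map<String, Object> attributes = new HashMap<>();

    public void setAttribute(String name, Object value) {
        attributes.put(name, value);
    }

    public Object getAttribute(String name) {
        return attributes.get(name);
    }

    public void removeAttribute(String name) {
        // Map.remove is already a no-op when the key is absent, so no
        // "attribute not present" check (or exception) is needed.
        attributes.remove(name);
    }

    public static void main(String[] args) {
        AttributeStore store = new AttributeStore();
        store.removeAttribute("neverSet");   // silently does nothing
        store.setAttribute("user", "alice");
        store.removeAttribute("user");
        System.out.println(store.getAttribute("user")); // null
    }
}
```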

And that’s it. Having added a suitable /WEB-INF/web.xml static resource into the ServletContextMimic, Struts happily processes the request, pushes it through the right ActionForm and Action, and forwards it to the servlet that’s standing in for the target JSP. All within a “plain” JUnit test, with no servlet container involved (and easily repeatable with different struts-config.xml files, different context init-parameters, or with ObMimic simulating different Servlet API versions etc etc).

Experiment 4: URL Rewrite Filter

I’ve a few example/demo applications where I’ve played around with the UrlRewriteFilter library to present “clean” URLs and hide technology-specific extensions such as “.jsp”. So I thought I’d try out-of-container testing of this as well.

The rules files that control the URL rewriting are fairly straightforward, but once you have multiple rules with wildcards etc it can become a bit fiddly to get exactly what you want. Tracking down anything that isn’t as intended can be a bit clumsy when it’s running in a servlet container, just from the nature of being in a deployed and running application. So I like the idea of being able to write and debug normal out-of-container test-cases for the config file, and using dummy or diagnostic servlets instead of the “normal” application resources.

This was pretty quick and straightforward after tackling the Struts controller servlet.

Although the details were very different, it again involves configuring a ServletContextMimic with the definitions, mappings and static resources for the UrlRewriteFilter and its configuration file. Much of the code was just copied and edited from the Struts experiment. Again, it proved useful to write a little servlet to which the “rewritten” URLs can be directed, with this having a “doGet” method that writes a message into the response’s body content, so as to indicate that it was invoked and what request URL it saw.

Then each actual test consists of using the relevant ObMimic facilities to obtain and invoke the filter chain for an example URL, with the filter chain’s ultimate target being a static resource whose content just shows if it was reached. After invoking the filter chain, the response’s body content can be examined to check which servlet processed it and what URL the servlet saw.

This wasn’t a very extensive test, as I just wanted to quickly see if it was basically possible, but it all worked without a hitch.

As with the preceding Struts experiment, the key issues are finding your way around the ObMimic classes in order to get the configuration you need, and figuring out what servlets and stuff you need in order to check the results.


So far, I’m happy with ObMimic technically. It’s particularly encouraging to have got both the Struts 1 controller servlet and the URL-rewrite filter running “out-of-container” so easily, as this suggests that it should be feasible to do the same for a variety of web frameworks and tools (especially once JSP support is implemented, which will be a priority for future versions of ObMimic).

On the other hand, I think the ObMimic Javadoc and other documentation needs more work. In practice, the key to using ObMimic is finding your way around the MimicState classes that encapsulate each Mimic’s internal state. IDE code-completion is hugely useful for all this, as you can hunt around within each MimicState to look for the relevant properties and methods. However, it helps to have a rough idea of the general scheme of things – what’s available, what you’re looking for, and where things are most likely to be found. To a lesser extent it’s also helpful to know your way around the various supporting classes and methods that provide shortcuts for some of the more complex tasks. The documentation needs to provide some high-level help with all this.

Then there’s the Javadoc. This provides a comprehensive and detailed reference, but unfortunately it’s just too big and detailed. As it stands I think it would be too daunting for new users, or for casual use. The first problem is that the standard Javadoc main index gives a full list of packages in alphabetical order. I’m hoping to deliver ObMimic as a single self-contained library, so there are a lot of packages, and the most useful routes into the Javadoc end up being scattered around the middle of a long list.

More generally, there are lots of specific tasks which are straightforward once you know how to do them, but hard to figure out from scratch. Things like how to make a static resource available within a ServletContext, or set up “POST” requests, or support JNDI look-ups, or maintain sessions across requests, or the easiest way to populate an HttpServletRequestMimic given the text of an HTTP request…

So my initial lessons from these experiments are:

  • ObMimic’s Javadoc needs to be made more approachable. One idea might be to supplement the standard Javadoc index page with a hand-written index page that groups the packages into meaningful categories and shows everything in a more sensible order.
  • It’d be useful to provide some kind of outline “map” of the MimicState classes, summarizing the properties and key methods of each class.
  • The ObMimic Javadoc needs to be supplemented by a set of task-oriented “how-to” guides.

Experiments with out-of-container testing of Servlet code using ObMimic (Part 1)

4 06 2007

Updated Feb 2018: OpenBrace Limited has closed down, and its ObMimic product is no longer supported.

At long last I’ve reached the stage where I can “eat my own dogfood” and use my ObMimic library for out-of-container testing of servlet code, so I’ve been trying it out on some existing applications.

But to start with, I guess I’d better explain just what “ObMimic” is, as I’ve not said anything much about it publicly yet.

So this article introduces ObMimic, and will be followed shortly by another one explaining some findings from my own initial use of it.

ObMimic is an as-yet-unreleased library of classes that supports “out of container” testing of code that depends on the Servlet API, by providing a complete set of fully-configurable POJO implementations of all of the Servlet API’s interfaces and abstract classes.

For every interface and abstract class of the Servlet API, ObMimic provides a concrete class (with a simple no-argument constructor) that is a complete and accurate simulation of the relevant Servlet API interface or class, based on an “internal state” object through which you can configure and query all of the relevant details.

This lets you test servlets, filters, listeners, and any other code that depends on the Servlet API (including, at least to some extent, higher-level frameworks that run on top of it), in the same way as you would for “plain” Java code. Typically, you construct and configure the Servlet API objects you need, pass them to the code being tested, and examine the results – with no need for any servlet containers, deployment, networking overheads, or complex “in-container” test frameworks.

Compared to HTTP-based “end-to-end” tests, this supports finer-grained and faster tests, and makes it far easier to test the effect of different deployment-descriptor values such as “init parameters” (because the relevant details can be changed at any time via normal “setter” methods, instead of requiring changes to the web.xml file and redeployment). You can also readily use ObMimic with JUnit, TestNG or any other test framework, as it doesn’t depend on any special base-class for tests and is entirely orthogonal to any test-framework facilities.

This approach is similar to using mocks or stubs for the Servlet API classes, but unlike mocks or stubs, ObMimic provides ready-made, complete and accurate implementations of the Servlet API functionality as defined by the Servlet API’s Javadoc. This includes proper handling of features such as request-dispatching, session-handling, listener notifications, automatic “commit” of responses when their specified content-length is reached, servlet and filter mapping, access to static resources at context-relative paths, merging of HTTP “POST” body-content and query-string request parameters, the effects of different sequences of Servlet API method calls, and all the other myriad and complex interactions between different Servlet API methods.
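As one example of the behaviour involved, merging of POST body and query-string parameters has to follow the Servlet API rule that query-string values precede body values for the same parameter name. A stand-alone sketch of just that rule (hypothetical code, not ObMimic’s implementation):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ParameterMerging {

    // Merge query-string and POST body parameters: for any name present in
    // both, the query-string values come first.
    public static Map<String, List<String>> merge(
            Map<String, List<String>> queryParams,
            Map<String, List<String>> bodyParams) {
        Map<String, List<String>> merged = new LinkedHashMap<>();
        queryParams.forEach((name, values)
            -> merged.computeIfAbsent(name, k -> new ArrayList<>()).addAll(values));
        bodyParams.forEach((name, values)
            -> merged.computeIfAbsent(name, k -> new ArrayList<>()).addAll(values));
        return merged;
    }

    public static void main(String[] args) {
        Map<String, List<String>> query = Map.of("a", List.of("1"));
        Map<String, List<String>> body = Map.of("a", List.of("2"), "b", List.of("3"));
        // "a" gets its query-string value first, then its body value.
        System.out.println(merge(query, body).get("a")); // [1, 2]
    }
}
```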

I call these implementation classes “mimics” in order to distinguish them from “mocks” and “stubs” and on the basis that they “mimic” the behaviour of real Servlet API implementations. Technically, they are “fake” objects as described by xUnit Patterns and Martin Fowler, with the addition of some stub/mock-like facilities. But the term “fake” doesn’t quite feel right, doesn’t seem to be very widely used, and some people use it as a synonym for “stub” (for example, Wikipedia as at the time of writing). Inventing yet another term isn’t ideal either, but at least it shouldn’t lead to any pre-conceptions or confusion with anything else.

Whilst mocks and stubs are fine for “interaction-based” testing or for arbitrary interfaces for which you don’t have “real” implementations, using “mimic” implementations seems a simpler, more natural and more useful approach for “state-based” testing or when you have access to appropriate “mimic” classes. At least, that’s my personal take on it. For a broader discussion of some of the relevant issues, see Martin Fowler’s article Mocks Aren’t Stubs.

By way of an example, at its simplest ObMimic lets you write test-case code like the following (where everything uses a “default” ServletContextMimic as the relevant ServletContext, and all details not explicitly configured start out with reasonable default values, such as the request being a “GET”):

HttpServletRequestMimic request = new HttpServletRequestMimic();
HttpServletResponseMimic response = new HttpServletResponseMimic();
request.getMimicState().getRequestParameters().set("a", "1"); // just for example
Servlet myServlet = new SomeExampleServletClass();
myServlet.init(new ServletConfigMimic());
myServlet.service(request, response);
// ... check contents of request, response etc...

Actually, the very simplest example is that if you just need, say, a ServletContext to pass as an argument to some method but its content doesn’t matter, you can just do “new ServletContextMimic()” – which must be about as simple as this could ever be.

ObMimic is also potentially usable for higher-level frameworks that run on top of the Servlet API, such as Struts. Such frameworks generally just need a suitably configured ServletContext and the relevant servlet/filter/listener definitions and mappings, plus various configuration files as static resources within the context – all of which are supported by ObMimic’s ServletContextMimic. And in many cases you can test components without needing the whole framework anyway – just requests and responses together with framework components that are themselves POJOs or otherwise suitably configurable. That’s the theory anyway. In practice this will depend on the details of the particular framework and the nature of its own classes and other API dependencies. But more of that in the next article…

Other current features of ObMimic include:

  • Configurability to simulate different Servlet API versions (2.3, 2.4 or 2.5).
  • A “mimic history” feature for recording and inspecting the Servlet API calls made to individual mimics.
  • Explicit checking and control over the many ambiguities in the Servlet API. That is, where the Servlet API Javadoc is ambiguous about how a particular argument value or sequence of calls should be treated, the ObMimic Javadoc documents the ambiguity and by default ObMimic throws an exception if the code being tested issues such a call, but can also be configured to ignore the call, throw a specified exception, or ignore the ambiguity and process the call in some “reasonable” manner.
  • A basic “in memory” JNDI simulation to support JNDI look-ups by the code being tested.
  • Easy to add to projects, as it consists of a single jar archive with no dependencies other than Java 5 or higher and the Servlet API itself (which the code being tested will already need anyway).

Features not yet present but intended for future versions include:

  • Mimics for the JSP API, to support “out-of-container” testing of JSP pages, tag handlers etc.
  • Population of ServletContextMimics from web.xml deployment descriptors.
  • Population of HttpServletRequestMimics from the text of HTTP requests.
  • Production of HTTP response texts from HttpServletResponseMimics.
  • Specific support for particular web-frameworks (depending on demand and any particular issues encountered).

I guess I’ll be writing a lot more about these and other features over the next few months.

Anyway, the ObMimic code has been fully tested during its development, and has certainly been useful during its own testing. However, it’s in the nature of the Servlet API that any non-trivial code tends to depend on a large subset of the API and the interactions between its classes. So it hasn’t seemed particularly worth trying out ObMimic on any “real” projects whilst it was incomplete.

Now that ObMimic has reached the stage where it covers the entire Servlet API, I’ve finally been able to take it for a spin and try it out on some previously-written code. In particular, I’m keen to see how it copes with frameworks such as Struts, as this is likely to be a good way to shake out any problems.

So the next article will look at my initial experiences with using ObMimic to test some existing filters, listeners, Struts components, and overall Struts 1 operation (including out-of-container execution of the Struts 1 “ActionServlet” controller). I hope to follow this with further articles as I try it out for other web-frameworks, and progress towards a beta-test and public release.

By the way, in case you were wondering, the “Ob” in “ObMimic” is based on our not-yet-officially-announced company name.
