ArbitraryObject: A useful useless class

23 10 2009

I’ve recently introduced a trivial little class called “ArbitraryObject” into my Java test-case code. Here’s the full story…

When writing test cases in Java, every now and then one comes across a situation where an object is needed but its precise type doesn’t matter, and you just need to pick some arbitrary class to use as an example.

Sometimes any class at all will do; sometimes there are constraints on what the class must not be, but anything else will do (e.g. anything that isn’t class X or one of its subclasses or superclasses).

Most commonly this happens for tests of methods that take “Object” arguments – an obvious example is testing that an implementation of “equals(Object)” returns false for any argument that isn’t of the appropriate type.

Another common case is testing of generic classes and methods with “any class” generic parameters, where one needs to pick a class to be used as the generic parameter’s actual type.
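For instance, in the equals() case a test typically ends up with assertions along the following lines (a rough sketch, with java.math.BigDecimal standing in for whatever class is actually under test – the point being that some arbitrary class has to be picked for the argument):

import static org.junit.Assert.assertFalse;

import java.math.BigDecimal;

import org.junit.Test;

public class EqualsWithArbitraryArgumentTest {

    @Test
    public void equalsReturnsFalseForAnUnrelatedType() {
        BigDecimal amount = new BigDecimal("9.99");

        // The argument's precise type doesn't matter, as long as it
        // isn't a BigDecimal - but *some* class has to be chosen here.
        assertFalse(amount.equals(new StringBuilder("9.99")));
        assertFalse(amount.equals(new Object()));
    }
}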

In these situations, what class should one use?

Perhaps the simplest choices are Object or String. However, Object seems a poor choice for this in general – if you’re testing something that takes any Object, you probably want to test it with something more specific than Object itself (even if you do also want to test it with a basic Object). It’s also not going to work where you need something that isn’t inheritance-related to some particular class.

Similarly, although String can be very convenient for this, strings are so common as argument values and in test-case code that their use tends to blend into the background. So it’s hard to see when a string is being used for this purpose as opposed to being a “real” use of a string.

More generally, if you’re trying to show how some code handles any arbitrary type, neither Object nor String seems a particularly useful or convincing example to pick.

What we’re really looking for is a class that meets the following criteria:

  • It shouldn’t be relevant in any way to the class being tested (isn’t the class being tested, doesn’t appear as a method argument anywhere in the class being tested, and isn’t a superclass or subclass of such types);
  • It shouldn’t be used otherwise in the test-case code (so as to avoid any confusion);
  • Ideally it ought to be somewhat out-of-the-ordinary (so that we can reasonably assume that the code being tested doesn’t give it any special treatment, and so that its use in the test-case code stands out as something unusual, and so as to emphasise that it’s just an arbitrary example representing any class you might happen to use);
  • It should be easy to construct instances of the class (it should have a public constructor that doesn’t require any non-trivial arguments or other set-up or configuration);
  • There shouldn’t be any significant side-effects or performance impact from creating and disposing of instances and using their Object methods such as equals/hashCode/toString (e.g. these shouldn’t do anything like thread creation, accessing of system or network resources etc).

Until now I’ve been picking classes for this fairly arbitrarily. Sometimes I just grab one of the primitive-wrapper classes like java.lang.Float or perhaps java.math.BigInteger if these aren’t otherwise involved in the code – even though they’re rather too widely used to be ideal for this. Otherwise I’ve picked something obscure but harmless from deep within the bowels of the JDK, such as java.util.zip.Adler32.

The problems with this approach are:

  • The intention and reason for using the chosen class aren’t obvious from the code;
  • The test-case ends up with an otherwise-unnecessary and rather misleading “import” and dependency on the chosen class (unless it’s a java.lang class, but the most suitable of those suffer the drawback of being too widely used);
  • Any searches for the chosen class will find these uses of it as well as its “genuine” uses;
  • There’s no easy way to find everywhere that this has been done (for example, if I ever want to change how I handle these situations).

So instead I’ve now started using a purpose-built “ArbitraryObject” class.

The only purpose of this class is to provide a suitably-named class that isn’t related to any other classes, isn’t otherwise relevant to either the test-case code or the code being tested, and isn’t used for any other purpose.

The main benefit is that this makes the intention of the test-case entirely explicit. Wherever ArbitraryObject is used, it’s clear that it represents the use of any class, at a point where a test needs this. In addition, the test-case code no longer has any dependencies on obscure classes that aren’t actually relevant; it’s easy to find all the places where this is being done; and searches for other classes aren’t going to find any “accidental” appearances of a class where it’s been used for this purpose.

ArbitraryObject must be the most trivial class I’ve ever written. Not even worth showing the code! It’s just a subclass of Object with a public no-argument constructor and nothing else.
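(All the same, here it is, more or less in its entirety:)

/**
 * A deliberately trivial class, used in test-case code wherever an
 * instance of "some arbitrary class" is needed and its precise type
 * doesn't matter.
 */
public class ArbitraryObject {

    /** Public no-argument constructor; no set-up or configuration needed. */
    public ArbitraryObject() {
        super();
    }
}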

Potentially one could argue for additional features, such as giving each instance a “name” to be shown in its “toString” representation, making it Serializable, and so forth. But none of that seems worth bothering with.

So this ArbitraryObject class is entirely trivial, and as a class it’s kind of useless, but the name in itself is useful to me.

Sometimes all you need is an explicit name.





Don’t you just hate tests that can’t be automated?

30 08 2008

I’ve been spoilt. Thoroughly spoilt. I’ve spent the last few years working on things that can be tested. Sure, it’s a lot of work writing good tests, but once they’re in place testing is a breeze. Run the script (or push the button), the tests run, and either it’s ok or there’s a list of problems to investigate and fix. New version? Made a change? Refactored some code? New environment? Need to prove something still works? Just run the tests again.

But now the day of reckoning has come. I’m doing some part-time work for a company that provides web-sites based on their own CMS product, but with lots of web-page design and bespoke customisation for each client. It’s very good at what it does, but testing each individual site is just a massive manual exercise. No automation at all.

It seems like a huge drain on resources, and unreliable and unrepeatable as well. Each new client or redesign leads to a fresh set of testing, and it all relies on you knowing what to test and what to look for. It’s done on a relatively undocumented and informal basis. Unsurprisingly, they seem to have lots of “regressions”, bugs they have to deal with over and over again.

What really bothers me is that I’m struggling to see any solution that would be more cost-effective.

The bulk of the testing is checking that web pages display exactly as intended, with correct layout, styles, positioning etc – compared to some mock-up pages provided by the graphic designers (which are substantial, but by no means exhaustive), and relying heavily on human judgement (e.g. “surely there’s supposed to be whitespace between those two elements when they are side-by-side… and shouldn’t that line be the same thickness as that one…”).

That sounds simple, but the pages combine variable numbers of different types of content, with lots of different options, combinations of layouts, complex criteria that decide each page’s content, many different optional elements within items, special formatting for particular elements when used in particular combinations, elements where the client can supply arbitrary HTML. The list goes on and on – the permutations are as near to infinite as makes no odds.

The end result is that even when the underlying CMS functionality is taken as being stable and already tested (which is never entirely true, of course), there’s an absolutely huge amount of visual checking of web-pages to be done.

Inspection of the CSS and HTML doesn’t even begin to give you any idea of whether it has the intended effect in all situations and no nasty surprises, let alone whether it works in different browsers and window sizes. Exact pixel-by-pixel comparison of each page’s image against an expected result might work, but there would be an absolutely enormous number of expected results to set up, and it would all be very fragile (e.g. even the smallest change to spacing or to an element that appears everywhere would require all the “expected” images to be re-done).
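To be fair, the pixel comparison itself would be the easy part – a rough sketch along the following lines (using nothing more than javax.imageio) would do the core check; it’s producing and maintaining the “expected” images for every page, browser and window size that kills the idea:

import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class PageImageCheck {

    /** True if both images have the same size and identical pixels. */
    static boolean identical(BufferedImage expected, BufferedImage actual) {
        if (expected.getWidth() != actual.getWidth()
                || expected.getHeight() != actual.getHeight()) {
            return false;
        }
        for (int y = 0; y < expected.getHeight(); y++) {
            for (int x = 0; x < expected.getWidth(); x++) {
                if (expected.getRGB(x, y) != actual.getRGB(x, y)) {
                    return false;
                }
            }
        }
        return true;
    }

    public static void main(String[] args) throws IOException {
        BufferedImage expected = ImageIO.read(new File(args[0]));
        BufferedImage actual = ImageIO.read(new File(args[1]));
        System.out.println(identical(expected, actual) ? "match" : "DIFFERENT");
    }
}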

So there seems little alternative other than to work through the various combinations and visually check each resulting web-page against the mock-ups (with exact comparison of individual elements where this seems appropriate). And do that for all the different combinations of elements and other features. All whilst figuring out which part of which mock-up page to look at for how each part of each page should look.

It might be ok if this was for a single web-site, or a CMS on its own, or a CMS with some simple “skins” with a limited set of variations. Then you could at least consider investing in an overall set of tests, maybe with pixel-exact checking of bitmaps against the design. It’d be a lot of work, and need a lot of maintenance, but if there was just one such set of tests for your entire business (and especially if you could adjust the “design” process to take testing into account), it might be worthwhile. At worst you could at least justify spending time on producing a good test plan for doing the manual testing.

But it isn’t like that. Instead, the styling is different for each client/web-site, and often with other bespoke features just for that particular client. It’s all basically a “one-off” exercise for each client. Having a few people informally bash away at a test site for a few days is far less work than trying to somehow automate this, or even develop and document a solid test plan – much of which would then need substantial revision for the next client.

I have a few ideas to help improve their overall approach, and I’m well aware of tools that could automate testing of the “functional” aspects of their CMS. But it’s come as a bit of a shock to find that not only do I not know of any tools to automate the “visual” aspects of the tests, but I can’t even envisage tools that could automate this (at least, not without requiring even more work than manual testing).

Does anyone else face this problem? Surely it’s a fairly common requirement to want to check the appearance of a set of web-pages against their design? Is there an obvious solution that I’m missing? Any solution at all? Or is this really something where there’s no alternative to Homo sapiens and the Mk1 eyeball?

In the meantime I’d better get on, I have a huge set of web pages to examine…





FindBugs finds bugs (again)

30 07 2008

FindBugs is terrific. I’ve been using it for several years now, and each new release seems to find some more mistakes in my code that were previously slipping through unnoticed.

I’d like to think I’m very careful and precise when writing code, and have the aptitude, experience and education to be reasonably good at it by now. I’m also a stickler for testing everything as comprehensively as seems feasible. So it’s rather humbling to have a tool like FindBugs pointing out silly mistakes, or reporting issues that I’d not been aware of. The first time I ran FindBugs against a large body of existing code the results were a bit of a shock!

In the early days of FindBugs, I found the genuine problems to be mixed with significant numbers of false-positives, and ended up “excluding” (i.e. turning off) lots of rules. Since then it has become progressively more precise and robust, as well as detecting more and more types of problem.

These days I run FindBugs with just a tiny number of specific “excludes”, and make sure all my code stays “clean” against that configuration. The “excludes” are mainly restricted to specific JDK or third-party interfaces and methods that I can’t do anything about.

Further new releases of FindBugs don’t usually find many new problems in the existing code, but do almost always throw up at least one thing worth looking into.

So last weekend I upgraded to FindBugs version 1.3.4, and sure enough it spotted a really silly little mistake in one particular piece of “test-case” code.

The actual problem it identified was an unnecessary “instanceof”. This turned out to be because the wrong object was being used in the “instanceof”. The code is intended to do “instanceof” checks on two different objects to see if both of them are of a particular type, but by mistake the same variable name had been used in both checks. Hence one of the objects was being examined twice (with the second examination being spotted by FindBugs as entirely superfluous), and the other not at all. If this had been in “real” code I’d have almost certainly caught it in testing, but buried away in a “helper” method within the tests themselves it has managed to survive for a couple of years without being noticed.
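To illustrate the shape of it (a hypothetical reconstruction of the kind of mistake, not the actual code – “Widget” is just a stand-in for the type being checked for):

public class InstanceofMistake {

    /** Hypothetical stand-in for the type the real code was checking for. */
    static class Widget { }

    /**
     * Intended to check that BOTH arguments are Widgets, but the same
     * variable has accidentally been used in both checks - so "second"
     * is never examined, and the second check is redundant (which is
     * what FindBugs flagged).
     */
    static boolean bothAreWidgets(Object first, Object second) {
        return first instanceof Widget && first instanceof Widget;
        // Intended: first instanceof Widget && second instanceof Widget
    }
}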

I guess this raises the broader issue of whether (and how) test-case code should itself be tested, but that’s one for another day (…would you then also want to test your tests of your tests…?). Anyway, thanks to FindBugs, this particular mistake has been detected and fixed before causing any harm or confusion.

Every time I find something like this it makes me think how fantastic it is to have such tools. I use PMD and CheckStyle as well, and they’ve all helped me find and fix mistakes and improve my code and my coding. I’ve learnt lots of detailed stuff from them too. But FindBugs especially has proven to be very effective whilst also being easy to use – both in Ant scripts and via its Eclipse plug-in.

If you’re writing Java code and haven’t yet tried FindBugs, it’s well worth a look.





Private beta of ObMimic for out-of-container servlet testing

30 05 2008

Updated Feb 2018: OpenBrace Limited has closed down, and its ObMimic product is no longer supported.

The ObMimic library for out-of-container servlet testing is now being made available to a small number of users as a private “beta” release, in advance of a more public beta.

We’re ready for a few more people to start trying it out, so if you’re interested just let me know – either via this blog’s “contact me” page or via my company e-mail address of mike-at-openbrace-dot-com.

In outline, ObMimic provides a comprehensive set of fully-configurable test doubles for the Servlet API, so that you can use normal “plain java” tools and techniques to test servlets, filters, listeners and any other code that depends on the Servlet API. We call these test doubles “mimics”, because they “mimic” the behaviour of the real object.

We see this as the ultimate set of “test doubles” for this specific API: a set of plain Java objects that completely and accurately mimic the behaviour of the “real” Servlet API objects, whilst being fully configurable and inspectable and with additional instrumentation to support both “state-based” and “interaction-based” testing.

If you find servlet code harder to test than plain Java, ObMimic might be just what you’re looking for.

With ObMimic, you can create instances of any Servlet API interface or abstract class using plain no-argument constructors; configure and inspect all relevant details of their internal state as necessary; and pass them into your code wherever Servlet API objects are needed. This makes it easy to do detailed testing of servlets, filters, listeners and other code that depends on the Servlet API, without needing a servlet container and without any of the complexities and overheads of packaging, deployment, restarts/reloads, networking etc.

ObMimic includes facilities for:

  • Setting values that are “read-only” in the Servlet API (including full programmatic control over “deployment descriptor” values and other values that are normally fixed during packaging/deployment, or that have fixed values in each servlet container).
  • Examining values that are normally “write-only” in the Servlet API (such as a response’s body content).
  • Optionally recording and retrieving details of the Servlet API calls made to each object (with ability to turn this on and off on individual objects).
  • Controlling which version of the Servlet API is simulated, with versions 2.3, 2.4 and 2.5 currently supported (for example, you can programmatically repeat a test using different Servlet API versions).
  • Detecting and reporting any calls to Servlet API methods whose handling isn’t strictly defined by the API (e.g. passing null arguments to Servlet API methods whose Javadoc doesn’t specify whether nulls are permitted or how they are handled).
  • Controlling the simulation of container-specific behaviour (i.e. where the Servlet API allows variations or leaves this open).
  • Explicitly forcing Servlet API methods to throw a checked exception (e.g. so that you can test any code that handles such exceptions).
  • Handling JNDI look-ups using a built-in, in-memory JNDI simulation.

There are no dependencies on any particular testing framework or third-party libraries (other than Java SE 5 or higher and the Servlet API itself), so you can freely use ObMimic with JUnit, TestNG or any other testing framework or tool.

In contrast to traditional “mock” or “stub” objects, ObMimic provides complete, ready-made implementations of the Servlet API interfaces and abstract classes as defined by their Javadoc. As a result, your tests don’t have to depend on your own assumptions about the Servlet API’s behaviour, and both state-based and interaction-based tests can be supported. ObMimic can even handle complex sequences of Servlet API calls, such as for session-handling, request dispatching, incorporation of “POST” body content into request parameters, notification to listeners, and other such complex interactions between Servlet API objects. It can thus be used not only for testing individual components in isolation, but also for testing more complete paths through your code and third-party libraries.

With the appropriate configuration, it’s even possible to test code that uses other frameworks on top of the Servlet API. For example, we’ve been able to use ObMimic to test “Struts 1” code, and to run ZeroTurnaround’s JspWeaver on top of ObMimic to provide out-of-container testing of JSPs (as documented previously).

As a somewhat arbitrary example, the following code illustrates a very simple use of ObMimic to test a servlet (just to show the basics of how Servlet API objects can be created, configured and used):

import com.openbrace.obmimic.mimic.servlet.http.HttpServletRequestMimic;
import com.openbrace.obmimic.mimic.servlet.http.HttpServletResponseMimic;
import com.openbrace.obmimic.mimic.servlet.ServletConfigMimic;
import javax.servlet.Servlet;
import javax.servlet.ServletException;
import java.io.IOException;

...

/* Create a request and configure it as needed by the test. */    
HttpServletRequestMimic request = new HttpServletRequestMimic();
request.getMimicState().getRequestParameters().set("name", "foo");
request.getMimicState().getAttributes().set("bar", 123);
... further request set-up as desired ...

/* Create a response. */
HttpServletResponseMimic response = new HttpServletResponseMimic();

/*
 * Create and initialize the servlet to be tested (assumed to be a
 * class called "MyHttpServlet"), using a dummy/minimal 
 * ServletConfig.
 */
Servlet myServlet = new MyHttpServlet();
try {
    myServlet.init(new ServletConfigMimic());
} catch (ServletException e) {
    ... report that test failed with unexpected ServletException ...
}

/* Invoke the servlet to process the request and response. */
try {
    myServlet.service(request, response);
} catch (ServletException e) {
    ... report that test failed with unexpected ServletException ...
} catch (IOException e) {
    ... report that test failed with unexpected IOException ...
}

/*
 * Retrieve the response's resulting status code and body content,
 * as examples of how the resulting state of the relevant mimic 
 * instances can be examined.
 */
int statusCode 
    = response.getMimicState().getEffectiveHttpStatusCode();
String bodyContent
    = response.getMimicState().getBodyContentAsString();
... then check them as appropriate for the test ...

For further examples and details, refer to the previous posts “First experiments with out-of-container testing of Servlet code using ObMimic” part 1 and part 2, “Out-of-container JSP testing with ObMimic and JspWeaver”, and the related post “Mocking an API should be somebody else’s problem”.

There are also more extensive examples in ObMimic’s documentation.

ObMimic isn’t open-source, but it will have a zero-cost version (full API coverage but a few overall features disabled, such as the ability to configure the Servlet API version, control over how incorrect/ambiguous API calls are handled, and recording of API calls). There will also be a low-cost per-user “Professional” version with full functionality, and an “Enterprise” version that includes all of ObMimic’s source-code and internal tests (with an Ant build script) as well as a licence for up to 200 users.

At the moment there’s no web-site, discussion forums or bug-reporting mechanisms (all still being prepared), but ObMimic already comes with full documentation including both short and detailed “getting started” guides, “how to”s with example code, and extensive Javadoc – and for this private beta I’m providing direct support by e-mail.

Anyway, if you’d like to try out ObMimic, or have any questions or comments, or would like to be informed when there’s a more public release, just let me know via the “contact me” page or by e-mail.





Mocking an API should be somebody else’s problem

11 03 2008

In an interview about Next Generation Java Testing: TestNG and Advanced Concepts, the book he co-wrote with Cédric Beust, Hani Suleiman says:

“I’m fairly strongly against the use of mocks for Java EE constructs. These APIs are often complicated and come with whole swathes of tests to verify compliance. End users never see what a pain in the ass it actually is to certify a product as compatible to a given EE API. Mock implementations on the other hand are most certainly not certified. The danger of using them is that over time, people start implementing more and more of the API, its more code that can go wrong, and more code that’s written just for the sake of making your tests look good. Increasingly, it becomes more and more divorced from the real implementation.”

You might think that I would disagree with that, in view of my current work on an ObMimic library of test-doubles for the Servlet API (in the broad xunitpatterns.com meaning of “test-double”).

But actually I strongly agree with it, and it’s one of the motivations behind ObMimic.

Yes, if you write your own stubs or mocks or use a general-purpose “mocking” tool, it can be extremely difficult to accurately simulate or predict the real behaviour of an API, and over time you’re likely to encounter more and more of the API. You can also find yourself needing more and more scaffolding and instrumentation to serve the needs of your tests. So it can be problematic and uneconomical to do this as part of an individual application’s testing. Even when it remains simple and doesn’t grow over time, it’s still additional thought and effort for each individual application, and very easy to get wrong. Whilst it’s all theoretically possible, and can seem very simple at the start, the long-term economics of it don’t look good.

But it looks rather different if the necessary facilities are all provided for you by a specialist library developed by someone else. Then it’s up to them to figure out all the quirks of the API and provide complete and accurate coverage, and all you have to do in individual applications is to use it.

This is much like the need for a “container” or other such API implementations in the first place. For example, given the Servlet API, it’s neither feasible nor economic for each application to implement its own servlet container, but it’s perfectly reasonable to have separate, dedicated projects that produce servlet containers that everybody else just uses.

My own opinion is that the same goes for test-doubles for APIs such as the Servlet API: it’s not worth everyone doing this half-heartedly themselves, but it is worth somebody doing it well and producing a comprehensive, high-quality library that everyone else can just use.

Of course, this only works if the resulting library is complete enough and of good enough quality for you to be able to rely on it. This points to the kind of criteria on which to judge such suites of test-doubles:

  • How complete is the API coverage?
  • How accurate is the API simulation?
  • How configurable are the test-doubles?
  • How examinable are the test-doubles?
  • How well documented is it?
  • How easy is it to use?
  • What extra features to support testing does it provide? (e.g. strict or configurable validation of API calls, tracking the API calls made, configurable for different versions of the API etc).
  • What dependencies does it have? (e.g. is it limited to only being used with specific tools or frameworks or only in certain scenarios, or is it more generally applicable).

Unfortunately, we don’t seem to have many API-specific libraries of test-doubles at the moment, and in my own limited experience those that we do have aren’t generally good enough.

That’s understandable, as it’s a huge amount of work to do this well for any substantial API that wasn’t written with testing in mind. Especially for APIs as complex, imperfect and subject to change as some of the older Java EE APIs.

Apart from my own ObMimic library for the Servlet API, I’m aware of some other attempts at doing this for the Servlet API, such as HttpUnit’s ServletUnit and the Spring framework’s org.springframework.mock.web package. However, in general these tend to be somewhat incomplete, inadequately documented, and lacking in configurability and test-instrumentation. Some are also outdated or defunct, limited to a particular web-app framework, or are a very minor and secondary component within a broader product that has a rather different purpose and priorities (and is thus unlikely to get much attention or maintenance).

In terms of other APIs, I’m aware of MockEJB for EJB, Mock Javamail for JavaMail, and a few such libraries for JNDI. There’s also a discussion of this issue in James Strachan’s blog article “Mocking out protocols and services is damn useful” (though some of the solutions mentioned are lightweight “real” implementations rather than test-doubles as such). But that seems to be about it.

Does anyone know any more? Or have any general views on the quality and suitability of any of these libraries?

As an ideal, I’d like to see the provision of a suitable library of any necessary test-doubles as a mandatory part of any Java API, in the same way that the JCP demands not just a specification but also a reference implementation and a compatibility kit.

That sounds like a big extra burden on API development. However, most new APIs ought to be designed so that test-doubles aren’t generally necessary in the first place, or can be relatively simple. For example, EJB 3.0 has far less need for this sort of thing than EJB 2.0, due to being more “POJO”-based. As another example, I believe the new JSR 310 Date and Time API is being designed so that the API itself will allow you to “stop” or control the time for testing purposes (for example, see slides 89-91 of Stephen Colebourne’s Javapolis 2007 presentation on JSR 310).

More generally, if this was always tackled as an intrinsic part of each API’s design then it ought to result in APIs that are more amenable to testing, and developing any test-doubles that are still necessary for such an API ought to be far simpler than trying to provide this retrospectively for an API that has ignored this issue. Having any necessary test-doubles should also be helpful in the development and testing of real implementations. In any case, this ought to be a more efficient division of labour than leaving everybody to hack their own way around the absence of such facilities.

As an absolute ideal, I’d want the resulting libraries of API-specific test-doubles to all take a similar form, with common features and facilities, terminology, naming conventions, usage patterns etc. But that’s probably getting into the realm of fantasy.





Out-of-container JSP testing with ObMimic and JspWeaver

19 02 2008

Updated Feb 2018: OpenBrace Limited has closed down, and its ObMimic product is no longer supported.

Updated May 2013: JspWeaver seems to no longer exist.

Background

I’ve been experimenting with the use of ZeroTurnaround’s JspWeaver tool on top of my own ObMimic library so as to provide out-of-container testing of JSPs. It’s been relatively straightforward so far, though I’ve only tried fairly simple JSPs.

JspWeaver from ZeroTurnaround aims to speed up the development of JSP code by using on-the-fly interpretation of JSP code instead of the usual translate-and-compile each time it is changed. As I understand it, the argument is that when you’re repeatedly editing and trying out a JSP, the translate-and-compile delay for each run interrupts your flow and they all mount up over time. So even where the delays are acceptable, it still helps and saves time overall if it can be made faster. I imagine some people will find this a boon whilst others won’t see much point, depending on what delays they are seeing from JSP translation and compilation. It’s a commercial product but only $49 per seat (and with volume discounts), so the price shouldn’t be an issue if it’s helpful to you.

Anyway, what interested me about this was the possibility of combining it with my own “ObMimic” library. This is a library of test-doubles for the Servlet API so as to support out-of-container testing of servlets and other code that depends on Servlet API objects. It’s not yet released, but approaching beta (documentation still being worked on). ObMimic’s test-doubles, which I call “mimics”, provide complete and accurate POJO simulations of all of the Servlet API’s interfaces and abstract classes. This includes full support for the various complex interactions between Servlet API classes (e.g. “forwards” and “includes”, session handling etc), with all their little quirks and oddities. The mimics are also fully configurable, so for example you can set up a ServletContext instance to use specified servlet mappings, static resources etc.

This allows you to use normal JUnit/TestNG tests and plain Java code to do detailed testing of servlets, filters, and any other code that depends on the Servlet API, without having to deploy or run in a servlet container, and with full ability to programmatically configure and inspect the Servlet API objects.

However, whilst ObMimic can be used for out-of-container testing of servlet code, it doesn’t currently have any explicit support for JSPs. I intend to add that in future, but for the time being you can’t use ObMimic to test JSP code. In contrast, JspWeaver can process JSP files but needs to be run in a servlet container as it depends on the Servlet API.

When I heard about JspWeaver, it seemed natural to try running it on top of ObMimic’s simulation of the Servlet API. With ObMimic for out-of-container simulation of the Servlet API, and JspWeaver for JSP processing on top of the Servlet API, the two together ought to allow out-of-container testing of JSP code.

So far I’ve only tried this on some simple JSPs that use a variety of JSP features, but it has all been fairly straightforward and seems to work quite well.

Basic Approach and Example Code

The basic approach to running JspWeaver on top of ObMimic is to configure an ObMimic “ServletContextMimic” in much the same way as one would configure a real web-application to use JspWeaver. This involves defining the JspWeaver servlet and mapping all JSP paths to it, and making the relevant JSP files and other resources available at the appropriate context-relative paths within the servlet context.

In addition, JspWeaver also requires its licence file to be available at a context-relative path of /WEB-INF/lib, and also issues a warning if it can’t find a web.xml file at /WEB-INF/web.xml (though even a dummy web.xml file with minimal content proved sufficient for these tests).

Here’s some example code to illustrate one way to configure such a ServletContextMimic to use JspWeaver:

import com.openbrace.obmimic.mimic.servlet.ServletContextMimic;
import com.openbrace.obmimic.state.servlet.ServletContextState;
import com.openbrace.obmimic.substate.servlet.WebAppConfig;
import com.openbrace.obmimic.substate.servlet.WebAppResources;
import com.openbrace.obmimic.substate.servlet.InitParameters;
import com.openbrace.obmimic.substate.servlet.ServletDefinition;
import com.openbrace.obmimic.substate.servlet.ServletMapping;
import com.openbrace.obmimic.lifecycle.servlet.ServletContextMimicManager;
import com.zeroturnaround.jspweaver.JspInterpretingServlet;

...

// Create ServletContextMimic and retrieve its "mimicState",
// (which represents its internal state) and relevant
// subcomponents of its mimicState.
ServletContextMimic context = new ServletContextMimic();
ServletContextState contextState 
    = context.getMimicState();
WebAppConfig webAppConfig 
    = contextState.getWebAppConfig();
WebAppResources webAppResources 
    = contextState.getWebAppResources();

// Add a servlet definition for the JspWeaver servlet.
String servletName = "jspWeaverServlet";
String servletClass = JspInterpretingServlet.class.getName();
InitParameters initParams = null;
int loadOnStartup = 1;
webAppConfig.getServletDefinitions().add(
    new ServletDefinition(servletName, servletClass,
        initParams, loadOnStartup));

// Add a servlet mapping for ".jsp" paths.
webAppConfig.getServletMappings().add(
    new ServletMapping(servletName, "*.jsp"));

// Use the contents of a specified directory as the 
// servlet context's "resources".
String webAppRoot = ...path to root of web-application files...
webAppResources.loadResourcesFromDirectory(webAppRoot);

// Explicitly "initialize" the servlet context so as to force
// creation and initialization of its servlets, filters etc
// (otherwise ObMimic does this automatically when the first 
// Servlet API call occurs, but for this example it's shown as
// being done explicitly at this point).
new ServletContextMimicManager(context).initializeContext();

...

Note that:

  • This example code configures the servlet context to obtain its context-relative resources from a real directory structure that contains the web-application’s files (i.e. corresponding to an expanded “war” archive, or at least the subset of its files that are actually needed for the particular tests being carried out). This needs to include the JspWeaver licence file in its /WEB-INF/lib, and to avoid JspWeaver warnings it needs to include at least a “dummy” web.xml in its /WEB-INF directory.
  • Although this example shows the use of a whole directory structure to provide the web-application’s resources, ObMimic also lets you configure individual context-relative paths to use a specific resource. The resource itself can be provided by a file, or a classpath resource, or an in-memory byte array, or the content obtained from a URL. So you could, for example, provide the JspWeaver licence file and web.xml as classpath resources held alongside the test case’s class files. Similarly you could use the “real” web-application directory structure but then set particular context-relative paths to have different content for testing purposes.
  • A dummy web.xml proved sufficient to prevent “web.xml not found” warnings from JspWeaver, but more complex situations might require appropriate content within the web.xml – basically this would be necessary for any values that JspWeaver actually reads directly from the web.xml as opposed to accessing via the Servlet API (e.g. explicit TLD declarations). Of course, if you’re using a “real” web-application directory structure, it probably already has a suitable web.xml file.

The above approach is very general, in the sense that it configures the servlet context to use JspWeaver for any JSP files. As an alternative, ObMimic could also be used to explicitly construct and initialize an instance of the JspWeaver servlet which could then be used directly. That might be simpler when testing a single self-contained JSP, but it wouldn’t handle any request dispatching to other JSP files, or testing of servlets that “forward” to JSPs, or when also including some other framework that applies filters or servlets before dispatching to the JSP page (e.g. JSF).

With a suitably-configured ServletContextMimic, JSPs can then be tested by using normal ObMimic facilities to create and configure a suitable HttpServletRequest and HttpServletResponse and using ObMimic’s request-dispatching facilities to process them. This can include testing of JSPs in combination with servlets, filters etc (for example, testing a Servlet that “forwards” to a JSP).

For example, if you simply want to use the JspWeaver servlet to process a specific JSP this can be done by retrieving the JspWeaver servlet from the ServletContextMimic and directly invoking its “service” method with a request that has the appropriate URL. Alternatively (or if a more complex combination of servlets, filters, JSPs etc is to be tested) you can use the normal Servlet API facilities to obtain a RequestDispatcher from the servlet context for the relevant path and then “forward” to it. More generally, you can also use various ObMimic facilities to construct and invoke the appropriate FilterChain for a given context-relative path.

To illustrate this, here’s some example code that shows one way to configure a request with a URL for a particular context-relative path, and then directly invokes the JspWeaver servlet to process it (assuming that “context” is a ServletContextMimic configured as shown above):

import com.openbrace.obmimic.mimic.servlet.http.HttpServletRequestMimic;
import com.openbrace.obmimic.mimic.servlet.http.HttpServletResponseMimic;
import javax.servlet.Servlet;
import javax.servlet.ServletException;
import java.io.IOException;

...

// Create a request and response, with the 
// ServletContextMimic as their servlet context.
HttpServletRequestMimic request 
    = new HttpServletRequestMimic();
HttpServletResponseMimic response 
    = new HttpServletResponseMimic();
request.getMimicState().setServletContext(context);
response.getMimicState().setServletContext(context);

// Configure the request, including all of its URL-related
// properties (request URL, context path, servlet path, path 
// info etc).
String contextPath = context.getMimicState().getContextPath();
String contextRelativePath = "/pages/jstl.jsp";
String serverRelativePath = contextPath + contextRelativePath;
request.getMimicState().populateRelativeURIFromUnencodedURI(
    serverRelativePath);

... further configuration of the request as desired ...

// Retrieve the JspWeaver servlet from the servlet context
// and invoke it.
Servlet target = context.getMimicState().getServlets().get(
    "jspWeaverServlet");
try {
    target.service(request, response);
} catch (ServletException e) {
    ... failed with a ServletException ...
} catch (IOException e) {
    ... failed with an IOException ...
}

... examine the response, context, HTTP session etc
    as desired ...

...

On completion, the normal ObMimic facilities can be used to examine all relevant details of the response, servlet context, HTTP session etc. For example, you can use “response.getMimicState().getBodyContentAsString()” to retrieve the response’s body content.
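As a variation, instead of retrieving and invoking the JspWeaver servlet directly, the same request can presumably be processed via the standard RequestDispatcher mechanism described earlier. A sketch only (not taken from ObMimic’s documentation), reusing the “context”, “request” and “response” mimics configured above:

import javax.servlet.RequestDispatcher;
import javax.servlet.ServletException;
import java.io.IOException;

...

// Obtain a dispatcher for the JSP's context-relative path, and
// "forward" the request and response to it.
RequestDispatcher dispatcher
    = context.getRequestDispatcher("/pages/jstl.jsp");
try {
    dispatcher.forward(request, response);
} catch (ServletException e) {
    ... failed with a ServletException ...
} catch (IOException e) {
    ... failed with an IOException ...
}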

General Findings

I’ve successfully used the combination of JspWeaver and ObMimic to test simple examples of:

  • A very simple JSP with plain-text content.
  • A JSP with embedded JSP declarations, scriptlets and expressions.
  • A JSP with static and dynamic includes.
  • A JSP that uses a custom tag file.
  • A JSP that makes use of JSTL tags and EL expressions.
  • A Servlet that forwards to a JSP.

In theory this should be able to cope with any JSP code that JspWeaver is able to successfully interpret.

Performance is potentially a concern. JspWeaver’s interpretation of JSP is faster than translate+compile when repeatedly editing and manually viewing a page, and in general its use for “out-of-container” testing should be faster and more convenient than HTTP-based “in-container” tests. But it’s bound to be slower than using a pre-compiled page when repeatedly running many tests against an unchanging page. First impressions are that the performance is good enough for reasonable use, but I wouldn’t want to have huge numbers of tests for each page as part of my frequent/detailed test runs. Overall it looks like a reasonable approach until a suitable translate+compile out-of-container solution is available – at which point one might still want a choice between the two approaches (e.g. use “translate+compile” when running suites of tests in build scripts, but “interpret” when manually running individual tests within an IDE).

Detailed Findings

Whilst this has all basically worked without undue difficulty, there have been a few minor issues and other findings:

  • The “bsh” parser used by JspWeaver to parse JSP pages doesn’t appear to cope with generics yet. Any use of generics within JSP scripting seems to result in parsing exceptions, presumably because the parser is thrown off by the presence of “<” and “>” characters (to the extent that the resulting error messages are rather misleading and unhelpful). Maybe this just isn’t allowed in JSP scripting, or just depends on whether the JSP translator/interpreter copes with generics. But off-hand I’m not aware of any general JSP restriction on this.
  • More generally, syntax errors in the JSP can be somewhat hard to find from the error messages produced by JspWeaver when trying to interpret the files (especially when using static/dynamic includes, tag files etc).
  • There might be issues over how best to integrate this into one’s overall build process. For example, if the JSP code needs tag libraries that are built by the same project, these would need to be already built and jar’d before testing the JSPs, even though normally you’d probably run all such tests before “packaging” the libraries. This shouldn’t be a show-stopper, but might need some adjustments and re-thinking of the order in which components are built and tested, and which tests are run at which point.
  • When locating tag files, JspWeaver uses the ServletContext.getResourcePaths method but passes it subdirectory paths without a trailing “/” (for example, “/WEB-INF/tags” rather than “/WEB-INF/tags/”). Although this will generally work, the Javadoc for this Servlet API method isn’t entirely explicit about the form of such paths, and arguably implies that they should end with a trailing “/”, as do all of its examples. By default ObMimic therefore rejects calls to this method for paths that don’t have a trailing “/”, to highlight the possibly suspect use of the Servlet API (i.e. behaviour might vary depending on the servlet container implementation). For these tests ObMimic has to be configured to ignore this and permit such calls (for which its behaviour is then to treat the given path as if it did have the trailing “/”).
  • It can be tricky getting the classpath correct for all the libraries needed for JSP processing. The libraries needed for JSTL and EL support have changed as JSTL and EL have evolved, EL has been moved into the JSP API, and there are incompatible versions of libraries knocking around. It’s just the usual “jar hell”. In particular, I’ve not yet found a combination of Glassfish jars that works with its “javaee.jar” (I always seem to get a java.lang.AbstractMethodError for javax.servlet.jsp.PageContext.getELContext(ELContext)). The only working solution that I’ve found so far is to use the Tomcat 6 jstl.jar and standard.jar. This is presumably solvable with a bit more research and tinkering, unless it’s something in JspWeaver that specifically depends on “old” versions of these libraries, but I haven’t followed this up yet.
  • More generally, it can be hard to work out which jars are needed when any are missing, as the error messages and the nature of the failures aren’t always as useful as they could be (e.g. the console shows an exception and stack trace from some point in the internal JspWeaver processing of the page, yet the JSP appears to complete successfully, with truncated output but no exception).

Next Steps

I haven’t yet tried any particularly complex JSPs or “complete” real-life applications, but would like to get around to this if I ever find time. I’ve also got an existing Struts 1 example where the servlet processing is tested using ObMimic but with its JSPs “dummied out”, and at some point I’d like to go back to this and see if its tests can now properly handle the JSP invocations.

I’d also like to see if JSF pages can be tested this way. I’ve had a brief dabble with this, using Sun’s JSF 1.2 implementation as provided by Glassfish. I’ve got as far as seeing JSF “initialize” itself OK (once I’d realized that it needs an implementation-specific context listener, which for the Sun implementation is provided by class com.sun.faces.ConfigureListener). But JspWeaver doesn’t seem to be finding the TLDs for the JSF tags, even if the tag libraries are present in both the classpath and within the web-application directory structure. Providing the TLDs as separate files and explicitly specifying these in the web.xml does result in them being found, but JspWeaver then complains that the actual tag classes don’t have the relevant “setter” methods for the tag’s attributes. I’d guess this is a classpath problem (maybe related to which JSTL/EL libraries are used), or otherwise depends on exactly how JspWeaver searches for TLDs and tag libraries (or maybe what classloader it uses). But I haven’t yet got round to digging any deeper. Also, whilst I’d like to see tests of JSF pages working, I’m not even sure what testing JSF pages like this would mean, given their rather opaque URLs and “behind-the-scenes” processing (e.g. what request details are needed to test a particular feature of the page, and what would you examine to see if it worked correctly?).

Another thought is that ObMimic could be made to automatically detect the presence of JspWeaver and configure itself to automatically use it for all JSP files if not otherwise configured. Maybe this could be a configurable option for which servlet to use for processing JSPs, with the default being a servlet that checks for the presence of JspWeaver and delegates to it if present. That would let you automatically use JspWeaver if you have it, whilst also allowing for other JSP processors to be introduced.





Some specific issues from ObMimic portability testing

7 10 2007

As mentioned in my previous Serial Monogamy post, I’ve recently been doing some portability testing on the Java code of my ObMimic library for out-of-container testing of Servlet code.

So, as promised, here are details of the specific issues I encountered, primarily as a record for my own future reference but also in case it’s any help to anyone else.

Background

First, some background:

  • The code consists of about 400 classes plus test-cases.
  • The resulting library is intended to be usable on any JRE for Java SE 5 or higher.
  • There’s nothing intrinsically platform-dependent or likely to raise any major portability issues. In particular, there’s no Swing or other such GUI code. But otherwise the code and its test-cases do use a fairly broad range of facilities. For example, there is some file handling, some charset-sensitive string processing, some URL encoding and decoding, and calls to JDK methods whose error-handling is defined as implementation-dependent.
  • Reasonable efforts have been made to keep the code fully portable (e.g. use of system properties for path and file separators, explicit use of Locales and charsets where appropriate etc).
  • All development and testing has been done on Sun JDKs on MS Windows, with testing of portability deferred until now. This approach was chosen based on the confidence gained from previous experiences with Java portability, and was judged to be the most efficient way to tackle it for this particular project.
  • The test-cases are intended to be reasonably comprehensive, and include tests of all configurable options, all error-handling code, all handling of checked exceptions etc. The EMMA code-coverage tool reports them as giving 100% code coverage.
  • My own build script for this code doesn’t need to be portable. However, one of the deliverables is an “Enterprise Edition” that includes all the source code and test-cases and its own Ant build script. This does need to be as portable as possible.
  • One potential restriction on the “Enterprise Edition” build script is that a custom Javadoc taglet is used to document one particular aspect of ObMimic’s API. In theory, such Javadoc taglets depend on “com.sun” classes in the Sun JDK’s tools.jar archive, and appear to be specific to Sun’s Javadoc tool rather than being a standard part of the Java SE platform. Hence this aspect of the build script theoretically restricts it to Sun JDKs, though the resulting library remains fully portable (and at a pinch you could always remove the taglet processing from the build if you really wanted to run the build on some other JDK). In practice, other JDKs generally claim that their Javadoc tool is fully compatible with Sun’s, and might even use the very same code. So prior to testing this, it was somewhat unclear how portable the custom “taglet” might be.

The Tests

For the moment I’m only testing on Sun, IBM and BEA JRockit JDKs for Java SE 5 and 6 plus the latest Sun JDK 7 build, on MS Windows and on a representative Linux system (actually, Ubuntu 7.04), and only on IA-32 hardware.

I’m assuming this should shake out most of the portability issues. It’s all I have readily to hand at the moment, and probably represents the majority of potential users.

I hope to extend this to Solaris and Macintosh and maybe other JDKs in future, and to other hardware as and when I can afford it. But I don’t expect many further issues once the code is fully working on both MS Windows and a Unix-based system and on JDKs from three different vendors – though I’d be interested if anyone has any experiences that suggest otherwise.

Another aim of the tests is to check that the deliverables don’t have any unexpected dependencies on my own development environment.

So the testing consists of:

  • Installing the various JDKs onto each of the relevant systems, but without all of the other tools and configuration that make up my normal development environment.
  • Putting the deliverables for each of ObMimic’s various “editions” onto each system.
  • On each system, running an Ant script that unzips/installs/configures each of the ObMimic editions, then runs the build script of the ObMimic “Enterprise Edition” (which itself builds the ObMimic library and test-cases from source, builds its Javadoc, and runs the full suite of test-cases). Then runs the test-cases against the pre-built libraries of the other ObMimic editions. And repeats this for each of the system’s JDKs in turn.

Actually, the “Enterprise Edition” build script is run using the full JDK, as it needs javac, javadoc, and the JDK’s tools.jar or equivalent, but all other tests are run using the relevant JRE on its own (to check that only a JRE is required).

The Findings

As expected, there were a few minor issues but most of the code was fine and worked first time under all of the JDKs on both MS Windows and Linux.

Even though I know it ought to work like this, it still makes me jump about like an excited little kid every time I see it! I guess that’s what comes of past lives struggling with the joys of C/C++ macros, EBCDIC, MS Windows “thunking” and the like. Java makes it far too straightforward!

Anyway, here are the details of the few issues that I did encounter.

1. Source-code file encoding.

A few test-cases involving URL encoding/decoding and other handling of non-ASCII characters failed when the code had been compiled on Linux.

This turned out to be due to javac misreading the non-ASCII characters in the test-case source code. The actual problem is that the source files are all written using ISO-8859-1 encoding, but by default javac reads them using a default encoding that depends on the underlying platform. On MS Windows everything was being read correctly, but on Linux javac was trying to read these files as UTF-8 and was therefore misinterpreting the non-ASCII characters.

The solution was to explicitly specify the source file encoding to javac. This is done via javac’s “-encoding” option (or the corresponding “encoding” attribute of Ant’s “javac” task).

For additional safety, I also decided to limit all my Java source code files to pure 7-bit ASCII, with unicode escape codes for any non-ASCII characters (e.g. \u00A3 for the “pound sterling” character). This is perfectly adequate for my purposes, and should be the safest possible set of characters for text files. Searching for all non-ASCII characters in the code revealed only a handful of such characters, all of them in test data within test-cases.

The Sun JDK’s native2ascii tool might also be of relevance for anyone writing code in a non “Latin 1” language, but for me sticking to pure ASCII is fine.

2. Testing of methods that return unordered sequences.

Testing on the IBM JDK revealed a handful of test-cases that were checking for a specific sequence of data in a returned array, collection, iterator, enumeration or the like, even where the returned data is explicitly specified as having an undefined order.

The Sun and IBM JDKs seem to fairly reliably produce different orderings for many of these cases. The Sun JDKs generally seem to give results in the order one might naively expect if forgetting that the results are unordered, but the IBM JDK generally seems to give results in a very different order. Some of these mistakes thus slipped through the normal testing on Sun JDKs, but were picked up when tested on the IBM JDKs.

In some cases the solution was to rework the test to use a more suitable “expected result” object or comparison technique (especially as I already have a method for order-insensitive comparison of arrays). In other cases it proved simpler to just explicitly cater for each possible ordering.
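As a hedged illustration of the general idea (not the actual ObMimic tests), comparing such results as sets rather than as ordered lists removes the assumption entirely:

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;

import org.junit.Test;

public class UnorderedResultTest {

    @Test
    public void contentsCheckedWithoutAssumingAnIterationOrder() {
        Map<String, Integer> map = new HashMap<String, Integer>();
        map.put("alpha", 1);
        map.put("beta", 2);
        map.put("gamma", 3);

        // Fragile: asserting a particular iteration order for
        // map.keySet() assumes something HashMap doesn't guarantee,
        // and which genuinely differs between JDKs.

        // Robust: compare as sets, so the iteration order is irrelevant.
        assertEquals(
                new HashSet<String>(Arrays.asList("alpha", "beta", "gamma")),
                map.keySet());
    }
}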

It’s hard to know if all such incorrect tests have now been found. There could be more that just happen to work at the moment on the particular JDKs used. On the other hand, it’s only the tests that are wrong, not the code being tested, and the only impact is that the test is overly restrictive. So the only risk is that the test might unnecessarily fail in future or on other JDKs. For the moment that’s a risk I can live with, and I’ll fix any remaining problems as and when they actually appear.

Potentially this could also be avoided by always using an underlying collection that provides a predictable iteration order, even where this isn’t strictly required (for example, LinkedHashMap). However, that feels wrong if the relevant method’s specification explicitly defines the order as undefined, and could be misleading if callers start to take the reliable ordering for granted. This is especially true for ObMimic, where I’m simulating Servlet API methods to help test the calling code. I don’t want to provide the calling code with a predictable ordering when a normal servlet container might not. If anything, it might be better to deliberately randomise the order for each call, or at least make that a configurable option. So I’ve noted that as a possible enhancement for future versions of ObMimic.

3. All of the IBM JDK’s charsets support “encoding”.

One of the test-cases needs to use a Charset that doesn’t support encoding – that is, one whose Charset#canEncode() returns false. This failed on the IBM JDKs, due to being unable to find a suitable Charset to use.

The test-case tries to find a suitable Charset by searching through whichever Charsets are present and picking the first one it finds that doesn’t support encoding, and fails if it can’t find any such Charset. That’s fine on Sun’s JDK, where a few such charsets exist. But on the IBM JDK, every charset that is present returns true from its canEncode method, and the test therefore fails and reports that it can’t find a suitable charset to use.

The solution was to introduce a custom CharsetProvider into the test classes and have this provide a custom charset whose “canEncode” method returns false. This ensures that the test can always find such a charset, even if there are none provided by the underlying JVM.

I guess I could just use this custom non-encodeable charset every time, but for some reason I feel more comfortable keeping the existing code to look through all available charsets and pick the first suitable one that it finds.
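For anyone needing something similar, here’s a rough sketch of what such a “decode-only” charset might look like (a hypothetical class, not the one actually used in ObMimic’s tests – it borrows a real decoder and simply refuses to encode):

import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CharsetEncoder;

/** A hypothetical charset whose canEncode() returns false. */
public class DecodeOnlyCharset extends Charset {

    private static final Charset DELEGATE = Charset.forName("ISO-8859-1");

    public DecodeOnlyCharset() {
        super("x-decode-only", new String[0]);
    }

    @Override
    public boolean contains(Charset cs) {
        return cs instanceof DecodeOnlyCharset;
    }

    @Override
    public boolean canEncode() {
        return false;
    }

    @Override
    public CharsetDecoder newDecoder() {
        // Decoding isn't the point here, so just borrow a real decoder
        // (note that its charset() will report ISO-8859-1).
        return DELEGATE.newDecoder();
    }

    @Override
    public CharsetEncoder newEncoder() {
        // Per the Charset contract, a charset that cannot encode should
        // throw UnsupportedOperationException from newEncoder().
        throw new UnsupportedOperationException(
                "DecodeOnlyCharset does not support encoding");
    }
}

For it to show up via Charset.availableCharsets() or Charset.forName(), it also needs a small java.nio.charset.spi.CharsetProvider registered under META-INF/services, but that’s just boilerplate.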

4. Javadoc “taglet” portability.

All of the JDKs handled the Javadoc “taglet” correctly.

In particular, the IBM and BEA JRockit JDKs do contain the “com.sun” classes needed by the custom “taglet”, and they had no problem compiling, testing and using the taglet.

Conclusions

Mostly, everything “just worked” as one would expect it to. The issues encountered were all pretty minor, only affected test-case code, and were easily identified and fixed.

It was worthwhile testing on both MS Windows and Linux as this revealed the source-code encoding problem, and it was worthwhile testing on both Sun and IBM JDKs as their internal implementations proved different enough to shake out a few mistakes and unjustified assumptions in the test-cases.

Some specific lessons I take from this:

  • Always specify the source-code encoding to the javac compiler (but also try to limit the source code to pure ASCII where possible, with unicode escapes for anything more exotic).
  • Whatever the other pros and cons of having comprehensive test-cases with 100% coverage, they’re mightily useful once you have them. With a comprehensive suite of tests, you can easily test things like portability (or, for example, what permissions are needed when running under a security manager). You just run the whole suite of existing tests, confident in the knowledge that this is exercising everything the code might do.
  • Whilst you’d probably assume that the Javadoc tool is a “proper” standard and part of the Java SE platform, technically it’s a Sun-specific tool within Sun’s JDK, and any custom doclets and taglets are dependent on “com.sun” classes. It seems crazy that after all this time the mechanisms for providing customised Javadoc still aren’t a standard part of the Java SE platform, but there you go. Despite this, in practice you can fairly safely regard the Javadoc tool and the “com.sun” classes as a de-facto standard. In particular the Javadoc tools in the IBM and BEA JRockit JDKs seem to be entirely compatible with Sun’s Javadoc tool, and do provide the necessary “com.sun” classes.

I’m also going to think about whether methods that return “unordered” arrays, iterators, enumerations etc ought to deliberately randomise the order of their returned elements. This would help pick out any tests that incorrectly assume a specific order. The downside is that any resulting test failures wouldn’t be entirely repeatable, which always makes things much harder. It’s also questionable whether this is worth the extra complexity and potential for errors that it would introduce into the “real” code. And it’s not something you’d want to do in any code you’re squeezing for maximum performance. So maybe this is one to ponder for a while, and keep up my sleeve for appropriate situations.







