The Java EE Verifier and indirect and optional dependencies

3 08 2009

Running the Java EE 5 Verifier can be a useful way of checking EAR files and other Java EE artifacts before deploying and running them.

However, once you start using third-party libraries there’s one set of rules in the verifier that are rather too idealistic: the requirement that all referenced classes need to be present in the application. If any classes are referenced but can’t be found, these are reported by the verifier as failures.

In theory, it’s perfectly reasonable that Java EE applications are basically supposed to be “self-contained”, and that all classes referenced within them need to be present within the application itself (obviously excluding those of the Java EE environment itself). Actually, Java’s “extension” mechanism is also supported as a way of using jars from outside of the application, but this has limitations and drawbacks of its own and doesn’t really change the overall picture. There’s a useful overview of this subject in the “Sun Developer Network” article Packaging Utility Classes or Library JAR Files in a Portable J2EE Application (this dates from J2EE 1.4, but is still broadly appropriate for Java EE 5).

Anyway, verifying that the application’s deliverable includes all referenced classes seems better than risking sudden “class not found” errors at run-time (possibly on a “live” system and possibly only in very specific situations). The trouble is that once you start using third-party libraries, you then also need to satisfy their own dependencies on further libraries, even where these are only needed by optional facilities that you never actually use. Then you also need all the libraries that those libraries reference, and so on. This can easily get out of hand, and require all sorts of libraries that aren’t ever actually used by your application.

As a simple example, take the UrlRewriteFilter library for rewriting URLs within Java EE web-applications. This is limited in scope and its normal use only involves a single jar, so you’d think it would be relatively self-contained.

However, one of its features is that you can configure its “logging” facilities to use any of a number of different logging APIs. In practice, I don’t use anything other than the default setting, which uses the normal servlet-context log. But its code includes references to log4j, commons-logging and SLF4J so that it can offer these as options. The documentation says that you need the relevant jar in your classpath if you’re using one of these APIs, but the Java EE Verifier tells you that they all need to be present – even if you’re not actually using them (on the perfectly reasonable basis that there’s code present that can call them).

That’s not the end of the story. The SLF4J API in turn uses “implementation” jars to talk to actual logging facilities, and includes references to classes that are only present in such implementation jars. So you also need at least one such SLF4J implementation jar. At this point you’re now looking at the SLF4J website and trying to figure out which of its many jars you need. What are they all? Does it matter which one you pick? Perhaps you need all of them? Do they have any further dependencies on yet more jars? Are there any configuration requirements? Are these safe to include in your application without learning more about SLF4J? Do they introduce any security risks?

So apart from anything else, you’re now having to find out more than you ever wanted to know about SLF4J, just because a third-party library you’re using has chosen to include it as an option. Ironically, a mechanism intended to give you a choice between several logging APIs has ended up requiring you to bundle all of them, even when you’re not actually using any of them!

Anyway, in addition to the log4j jar, the commons-logging jar, the SLF4J API jar, and an SLF4J implementation jar, the UrlRewriteFilter also needs a commons-httpclient jar (though again, nothing in my own particular use of UrlRewriteFilter appears to actually use this). That in turn also requires a commons-codec jar.

Fortunately, that’s the limit of it for UrlRewriteFilter. But it’s easy to see how a third-party jar could have a whole chain of dependencies due to “optional” facilities that you’re not actually using.

As a rather different example, another library that I’ve used recently appears to have an optional feature that allows the use of Python scripts for something or other. This is an optional feature in one particular corner of the library, and is something I have no need for. To support this feature, the code includes references to what I presume are Jython classes. As a result the verifier requires Jython to be present (and then presumably any other libraries that Jython might depend on in turn). Now, bundling Jython into my Java EE application just to satisfy the verifier and avoid a purely-theoretical risk of a run-time “class not found” error seems plain crazy. If the code ever does unexpectedly try to use Jython, I’d much rather have it fail with a run-time exception than have it work successfully and silently do who-knows-what. To add insult to injury, Jython is presumably able to call Python libraries that might or might not be present but that the verifier will know nothing about – so bundling Jython in order to satisfy the verifier might actually make the application more vulnerable to code not being found at run-time.

With the mass of third-party libraries available these days, and the variety of dependencies these sometimes have, I suspect there must be cases that are far, far worse than this. (Anyone out there willing to put forward a “worst case”?)

So what’s the answer? Obviously you do need to bundle the jars for all classes that are actually used, but for jars whose classes are referenced but never actually used (and any further jars that they reference in turn) I can see a number of alternatives:

  • Work through all the dependencies and bundle all the jars so that the verifier is happy with everything. Often this is entirely appropriate or at least acceptable, but as we’ve seen above, this cure isn’t always very practical, and in some cases it can be worse than the disease.
  • A variation on the above is to leave the “unnecessary” jars out of the application but run the verifier on an adjusted copy of the application that does include them. That is, produce a “real” deliverable with just the jars that are actually needed, and a separate adjusted copy of it that also includes any other jars necessary to keep the verifier happy but that you know aren’t actually needed by the application. The verification is run on this adjusted copy, which is then discarded. The drawback is that you still have to work through the entire chain of dependencies and track down and get hold of all of the jars, even for those that aren’t really needed. There’s also the risk that you’ll treat a jar as unnecessary when it isn’t, which is exactly the mistake that the verifier is trying to protect you from.
  • Another alternative is to just give up and not use the verifier. But it seems a shame to miss out on the other verification rules just because one particular rule isn’t always practical.
  • Ideally, it’d be nice to be able to configure the verifier to allow particular exceptions (perhaps to specify that this particular rule should be ignored, or maybe to specify an application-specific list of packages or classes whose absence should be tolerated). But as far as I can see there’s no way to do this at present.
  • Another approach is to inspect the verifier’s results manually so that you can ignore these failures where you want to, but can still see any other problems reported by the verifier. However, it’s always cumbersome and error-prone to have to manually check things after each build, especially where you might have to wade through a long list of “acceptable” errors in order to pick out any unexpected problems.
  • Potentially you could script something to examine the verifier output, pick which warnings and failures should and shouldn’t be ignored, and produce a filtered report and overall outcome based on just the failures you’re interested in. In the absence of suitable options built into the verifier, you could use this approach to support appropriate options yourself. This is probably the most flexible approach (in that you could also use it for any other types of verifier-reported errors that you want to ignore). But it seems like more work than this deserves, and it’d be rather fragile if the messages produced by the verifier ever change.
  • As a last resort, if the library containing the troublesome reference is open-source you could always try building your own customised version with the dependency removed (e.g. find and remove the relevant “import” statements and replace any use of the relevant classes with a suitable run-time exception). Clearly, even where this is possible it will usually be more trouble than it’s worth and will usually be a bad idea, but it’s another option to keep up your sleeve for extreme cases (e.g. to remove a dependency on an unnecessary jar that you can no longer obtain).
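The "script something to examine the verifier output" option above can be sketched quite simply. The report format and class names below are hypothetical (a real version would have to match whatever your verifier release actually emits), but the shape of the idea is just: keep an application-specific ignore list, and report only the failures it doesn't cover.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

/**
 * Minimal sketch of filtering verifier output against an ignore list.
 * The "FAILED" line format and the package names are assumptions, not
 * the verifier's real report format.
 */
public class VerifierReportFilter {

    // Package prefixes whose "class not found" failures we choose to tolerate.
    private final Set<String> ignoredPrefixes;

    public VerifierReportFilter(Set<String> ignoredPrefixes) {
        this.ignoredPrefixes = ignoredPrefixes;
    }

    /** Returns only the failure lines NOT covered by the ignore list. */
    public List<String> significantFailures(List<String> reportLines) {
        return reportLines.stream()
                .filter(line -> line.contains("FAILED"))
                .filter(line -> ignoredPrefixes.stream().noneMatch(line::contains))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        VerifierReportFilter filter = new VerifierReportFilter(
                Set.of("org.slf4j.", "org.apache.log4j."));
        List<String> report = List.of(
                "FAILED: class org.slf4j.Logger not found",
                "FAILED: class com.example.MissingHelper not found",
                "PASSED: some other check");
        // Only the com.example failure should survive the filter.
        filter.significantFailures(report).forEach(System.out::println);
    }
}
```

The overall build outcome could then be driven by whether `significantFailures` returns an empty list. As noted above, the fragility is that the line format is at the mercy of future verifier releases.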

The approach I’ve adopted for the time being is to run the verifier on “adjusted” copies of my applications, but only use this for jars that I’m very confident aren’t needed and aren’t wanted in the “real” application. The actual handling of this is built into my standard build script, which builds the “adjusted” application based on an application-specific list of which extra jars need to be added into it.

In the longer term, I’m hoping that the entire approach to this might all change anyway… in a world of dynamic languages, OSGi bundles, and whatever eventually comes of Project Jigsaw and other such “modularization” efforts, the existing Java EE rules and packaging mechanisms just don’t seem very appropriate anymore. It all feels like part of the mess that has grown up around packaging, jar dependencies, classpaths, “extension” jars etc, together with the various quirks and work-arounds that have found their way into individual specifications, APIs and tools (often to handle corner-cases and real-world practicalities that weren’t obvious when the relevant specification was first written).

So I’m hoping that at some point we’ll have a cleaner and more general solution to packaging and modularization, and this little quirk and all the complications around it will simply go away.

Forcing Glassfish V2 to reload an auto-deployed web-application

31 01 2009

If you auto-deploy a war archive on Glassfish V2, any changes to the deployed application’s JSP files are picked up automatically. However, if you make changes to the deployed application’s web.xml file or any other such configuration files, you need some way to make Glassfish “reload” the application using the updated files.

It isn’t immediately apparent how to trigger this. At any rate, it had me scratching my head yesterday when I found myself trying to install a third-party application. The installation instructions led me to auto-deploy its war archive and then edit the deployed files, but the changes didn’t take effect.

I couldn’t see anything in the Glassfish admin console to make it stop and re-load the application, and the command-line facilities that I found for this don’t seem to apply to auto-deployed applications.

The obvious solution was to shut-down and restart Glassfish, but even that seemed to leave the application still using its original configuration and ignoring the changes.

Apparently the trick is that you have to put a file named .reload into the root of the deployed application’s directory structure.

This file’s timestamp is then checked by Glassfish and used to trigger reloading of the application. So you can force a reload at any time by “touching” or otherwise updating this “.reload” file.
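If you want to automate the "touch" from a build script or tool, a minimal Java sketch looks like this (the deployed-application path would of course be whatever your own domain's auto-deploy location is):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;

/**
 * Sketch: force a reload of an auto-deployed application by creating or
 * "touching" the .reload file in the application's root directory.
 */
public class TouchReload {

    public static void touchReload(Path appRoot) throws IOException {
        Path reload = appRoot.resolve(".reload");
        if (Files.notExists(reload)) {
            // Content is irrelevant; an empty file is fine.
            Files.createFile(reload);
        } else {
            // Update the timestamp that Glassfish checks.
            Files.setLastModifiedTime(reload,
                    FileTime.fromMillis(System.currentTimeMillis()));
        }
    }

    public static void main(String[] args) throws IOException {
        // Demo against a temporary directory standing in for the
        // deployed application's root (alongside WEB-INF, not inside it).
        Path demoRoot = Files.createTempDirectory("webapp-root");
        touchReload(demoRoot);
        System.out.println("touched " + demoRoot.resolve(".reload"));
    }
}
```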

I can’t claim any detailed knowledge in this area, and have only had a quick look, but I get the impression that this “.reload” mechanism is used by Glassfish for the reloading of all “exploded” directory deployments. For applications that are explicitly deployed from a specified directory structure, you can use the deploydir command with a “--force=true” option to force re-deployment (there might be other ways to do this, but that’s the most obvious I’ve seen so far). But on Glassfish V2 that doesn’t appear possible for auto-deployed applications, so the answer for those is to manually maintain the “.reload” file yourself.

Some notes:

  • Manually touching/updating a “.reload” file also works for exploded archives that have been deployed via “deploydir” (i.e. as an alternative to using the “deploydir” command to force reloading).
  • The content of the “.reload” file doesn’t matter, and it can even be empty. It just has to be named “.reload” and must be in the root directory of the deployed application (that is, alongside the WEB-INF directory, not inside it).
  • Because the “.reload” file is in the root of the web-application and outside of its WEB-INF, it’s accessible to browsers just like a normal JSP, HTML or other such file would be. So it’s not something you’d want to have present in a live system (or you might want to take other steps to prevent it being accessible).

I haven’t looked in detail at whether Glassfish V3 has any improved mechanism for this, but:

  • The V3 Prelude’s “Application Deployment Guide” does have a page “To Reload Code or Deployment Descriptor Changes” that shows the same solution still in place.
  • Glassfish V3 also seems to have a new redeploy command for redeploying applications, which appears to be equivalent to “deploydir” with “--force=true” but doesn’t require a directory path, so can presumably be used on any application, including auto-deployed applications.

As a personal opinion, I’m quite happy with using auto-deployment for most purposes, but in general I’m very much against the idea of editing the resulting “deployed” files. It just doesn’t seem right to me, and I can see all sorts of potential problems.

So even where a third-party product is delivered as a war archive and requires customisation of its files, I prefer to make the necessary changes to an unzipped copy. I can then use my normal processes to build a finished, already-customized archive that can be deployed without needing any further changes.

But there are still times when it’s handy to auto-deploy a web-application or other component by just dropping its archive into Glassfish, and then be able to play around with it “in place” – for example, when first evaluating a third-party product, or when doing some quick experiments just to try something.

So being able to force reloading of an auto-deployed application remains useful.

Java’s String.trim has a strange idea of whitespace

11 11 2008

Java represents strings using UTF-16, so one might assume that its “trim” method for trimming whitespace would be based on Unicode’s view of which characters are whitespace. Or on Java’s. Or would at least be consistent with other JDK methods.

To my surprise, I’ve just realised that’s far from the case.

The String.trim() method talks about “whitespace”, but defines this in a very precise but rather crude and idiosyncratic way – it simply regards anything up to and including U+0020 (the usual space character) as whitespace, and anything above that as non-whitespace.

This results in it trimming the U+0020 space character and all “control code” characters below U+0020 (including the U+0009 tab character), but not the control codes or Unicode space characters that are above that.

Note that:

  • Some of the characters below U+0020 are control codes that I wouldn’t necessarily always want to regard as whitespace (e.g. U+0007 bell, U+0008 backspace).
  • There are further control codes in the range U+007F to U+009F, which String.trim() treats as non-whitespace.
  • There are plenty of other Unicode characters above U+0020 that should normally be recognized as whitespace (such as U+2003 EM SPACE, U+2007 FIGURE SPACE, U+3000 IDEOGRAPHIC SPACE).

So whilst String.trim() does trim tabs and spaces, it also trims some characters that you might not expect to be treated as whitespace, whilst ignoring other whitespace characters.

This seems far from ideal, and not what you might expect from a method whose headline says “… with leading and trailing whitespace omitted”.
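A tiny demonstration of the actual rule makes the oddity concrete: control codes below U+0020 (even bell and backspace) are trimmed, while a genuine Unicode space character such as U+2003 EM SPACE is left alone.

```java
/**
 * Demonstrates String.trim()'s actual behaviour: everything up to and
 * including U+0020 is trimmed; everything above it is kept -- including
 * Unicode space characters like U+2003 EM SPACE.
 */
public class TrimDemo {
    public static void main(String[] args) {
        // Bell (U+0007), tab and backspace (U+0008) all count as "whitespace" to trim().
        String bellAndTab = "\u0007\t hello \u0008";
        System.out.println("[" + bellAndTab.trim() + "]");   // prints [hello]

        // EM SPACE (U+2003) is real whitespace, but trim() leaves it in place.
        String emSpace = "\u2003hello\u2003";
        System.out.println("[" + emSpace.trim() + "]");
    }
}
```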

In contrast:

  • The Character.isWhitespace(char) and Character.isWhitespace(int) methods are defined in terms of which characters are “whitespace according to Java”. This in turn is specified as the characters classified by Unicode as whitespace except for a few Unicode space characters that are “non-breaking” (though quite why these should always be considered to be non-whitespace isn’t obvious to me), plus a specified list of some other characters that aren’t classified as whitespace by Unicode but which you’d normally want to regard as whitespace (such as U+0009 TAB).
  • The Character.isSpaceChar(char) and Character.isSpaceChar(int) methods test whether a Unicode character is “specified to be a space character by the Unicode standard”.
  • The deprecated Character.isSpace(char) method tests for 5 specific characters that are “ISO-LATIN-1 white space”. Ironically, I suspect this deprecated method’s idea of whitespace is what many people are imagining when they use the non-deprecated String.trim() method.
  • The Character.isISOControl(char) and Character.isISOControl(int) methods test for the control codes below U+0020 whilst also recognising the control codes in the U+007F to U+009F range.
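The differences between these definitions are easy to check directly. Tab is whitespace “according to Java” but not a Unicode space character; a no-break space is the reverse; and U+0085 (NEL) shows the isISOControl range above U+007F.

```java
/** Compares the JDK's different ideas of "whitespace" on a few characters. */
public class WhitespaceDefinitions {
    public static void main(String[] args) {
        char tab = '\u0009';     // TAB: Java whitespace, but not a Unicode space char
        char nbsp = '\u00A0';    // NO-BREAK SPACE: Unicode space char, but not Java whitespace
        char emSpace = '\u2003'; // EM SPACE: both

        System.out.println(Character.isWhitespace(tab));      // true
        System.out.println(Character.isSpaceChar(tab));       // false
        System.out.println(Character.isWhitespace(nbsp));     // false (non-breaking)
        System.out.println(Character.isSpaceChar(nbsp));      // true
        System.out.println(Character.isWhitespace(emSpace));  // true
        System.out.println(Character.isISOControl('\u0085')); // true: NEL, in U+007F..U+009F
    }
}
```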

One can argue over which of these is the best definition of whitespace for any particular purpose, but the one thing that does seem clear is that String.trim() isn’t consistent with any of them, and doesn’t do anything particularly meaningful. It certainly doesn’t seem special enough to deserve being the String class’s only such “trim” method, and having a name that doesn’t indicate what set of characters it trims.

There is an old, old entry for this in Sun’s bug database (bug ID 4080617). However, this was long-ago closed as “not a defect”, on the basis that String.trim() does exactly what its Javadoc specifies (it trims characters which are not higher than U+0020). Never mind whether this is desirable or not, or how misleading it could be.

The most reasonable approach might be to add new methods to java.lang.String for “trimWhitespace” and “trimSpaceChars”, based respectively on the corresponding Character.isWhitespace and Character.isSpaceChar definitions of whitespace. Arguably it might also be worth having a “trimWhitespaceAndSpaceChars” method to trim all characters recognised as whitespace by either of those methods (because each includes characters that the other doesn’t, such as U+0009 TAB and Unicode’s non-breaking spaces, and sometimes you might want to treat all of these as whitespace).

It might also be safer if String.trim() was deprecated, as has been done with Character.isSpace(), possibly replacing it with a more accurately-named method for the existing behaviour (maybe “trimLowControlCodesAndSpace”?).

But in practice, at this point the damage has long since been set in stone, and changing this now could have such widespread impact that it probably isn’t feasible.

As for me, I’ll be removing all use of String.trim() from my code and treating it as if it were deprecated, on the basis that it’s misleading, often inappropriate, and too easy to misuse.

That leaves me looking for an alternative.

There are some existing widely-used libraries with relevant methods:

  • The Apache Commons Lang library has a StringUtils class with a “trim(String)” method that clearly documents that it trims “control characters” by using String.trim(), but also has a separate “strip(String)” method that trims whitespace based on Character.isWhitespace(char).
  • The Spring framework has a StringUtils class with a “trimWhitespace(String)” method (and various other such “trim…” methods) which appears to be based on Character.isWhitespace(char). Its Javadoc doesn’t explicitly commit to any particular definition of “whitespace”, but it does refer to Character.isWhitespace(char) as a “see also”.

There are probably lots of other utility libraries with similar methods for this.

However, many of my projects don’t currently use these libraries, and introducing an additional library just for this doesn’t seem worthwhile. On top of which, some of my current code is critically dependent on which characters are trimmed, and “isWhitespace” might not always be what I want (e.g. if I want to treat both “breaking” and “non-breaking” spaces as whitespace).

Of course, this comes down to the usual arguments and trade-offs between using an existing library from elsewhere versus writing the code yourself (effort vs. further dependencies, licencing/redistribution, other useful facilities in the libraries, versioning, potential for “JAR hell” etc).

At the moment my judgement for my own particular circumstances and current projects is to avoid any dependency on these libraries, and handle this myself instead.

So I’ll probably add “trimWhitespace” and “trimSpaceChars” methods to my own utility routines to use in place of String.trim(). Possibly also a “trimWhitespaceAndSpaceChars”.

These will just be convenience methods built on top of a more fundamental method that takes an argument specifying which characters to regard as whitespace. That in turn will be provided by an interface for “filters” for Unicode characters, with each filter instance indicating yes/no for each character passed to it. Some predefined filters can then be provided for various sets of characters (Unicode whitespace, Java whitespace, ISO control codes etc), and others can be constructed for particular requirements as necessary.

I’ll probably also include a mechanism for combining and negating such filters, so that I can define filters for various sets of characters but also use combinations of them when trimming. Ideally this all needs to cater for Unicode code points rather than just chars, so as to cope correctly with Unicode supplementary characters above U+FFFF and represented within Strings by pairs of chars (in case any of these ever need to be recognised as whitespace, or in any other use of such filters).

An alternative approach might be to supply an explicit java.util.Set of the desired whitespace characters, but that’s not as convenient when you want to base the whitespace definition on an existing method such as Character.isWhitespace. In contrast, the “filter” approach can easily support building a filter from either such a method or from a given set of characters. So I think I’ve talked myself into the “filter” approach.
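As a sketch of that “filter” approach (the names here are illustrative, not an existing API): using java.util.function.IntPredicate as the filter type gives combination and negation for free, and iterating by code point copes with supplementary characters.

```java
import java.util.function.IntPredicate;

/**
 * Sketch of a filter-based trim: works on code points, and accepts any
 * definition of "whitespace" as an IntPredicate. Illustrative names only.
 */
public final class Trimmer {

    public static String trim(String s, IntPredicate isTrimmable) {
        int start = 0;
        int end = s.length();
        // Advance past leading trimmable code points (handles surrogate pairs).
        while (start < end) {
            int cp = s.codePointAt(start);
            if (!isTrimmable.test(cp)) break;
            start += Character.charCount(cp);
        }
        // Retreat past trailing trimmable code points.
        while (end > start) {
            int cp = s.codePointBefore(end);
            if (!isTrimmable.test(cp)) break;
            end -= Character.charCount(cp);
        }
        return s.substring(start, end);
    }

    // Convenience methods built on the filter, as described above.
    public static String trimWhitespace(String s) {
        return trim(s, Character::isWhitespace);
    }

    public static String trimSpaceChars(String s) {
        return trim(s, Character::isSpaceChar);
    }

    public static String trimWhitespaceAndSpaceChars(String s) {
        // IntPredicate.or combines the two definitions.
        return trim(s, ((IntPredicate) Character::isWhitespace)
                .or(Character::isSpaceChar));
    }
}
```

A filter built from a java.util.Set of characters would just be another IntPredicate (e.g. `set::contains` over boxed code points), which is why the filter approach subsumes the explicit-set alternative.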

But more generally, is anyone else surprised by the String.trim() method’s definition? Or has everybody else known it all along? Or does nobody use String.trim anyway? Is everybody using Commons Lang’s “strip” or Spring’s “trimWhitespace” instead?

Or does nobody worry about which characters get trimmed when they trim whitespace?

FindBugs finds bugs (again)

30 07 2008

FindBugs is terrific. I’ve been using it for several years now, and each new release seems to find some more mistakes in my code that were previously slipping through unnoticed.

I’d like to think I’m very careful and precise when writing code, and have the aptitude, experience and education to be reasonably good at it by now. I’m also a stickler for testing everything as comprehensively as seems feasible. So it’s rather humbling to have a tool like FindBugs pointing out silly mistakes, or reporting issues that I’d not been aware of. The first time I ran FindBugs against a large body of existing code the results were a bit of a shock!

In the early days of FindBugs, I found the genuine problems to be mixed with significant numbers of false-positives, and ended up “excluding” (i.e. turning off) lots of rules. Since then it has become progressively more precise and robust, as well as detecting more and more types of problem.

These days I run FindBugs with just a tiny number of specific “excludes”, and make sure all my code stays “clean” against that configuration. The “excludes” are mainly restricted to specific JDK or third-party interfaces and methods that I can’t do anything about.

Further new releases of FindBugs don’t usually find many new problems in the existing code, but do almost always throw up at least one thing worth looking into.

So last weekend I upgraded to FindBugs version 1.3.4, and sure enough it spotted a really silly little mistake in one particular piece of “test-case” code.

The actual problem it identified was an unnecessary “instanceof”. This turned out to be because the wrong object was being used in the “instanceof”. The code is intended to do “instanceof” checks on two different objects to see if both of them are of a particular type, but by mistake the same variable name had been used in both checks. Hence one of the objects was being examined twice (with the second examination being spotted by FindBugs as entirely superfluous), and the other not at all. If this had been in “real” code I’d have almost certainly caught it in testing, but buried away in a “helper” method within the tests themselves it has managed to survive for a couple of years without being noticed.
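An illustrative reconstruction (not the actual code) of that kind of mistake: the second instanceof accidentally re-checks the first object, so one operand is tested twice and the other never, and FindBugs flags the repeated test as redundant.

```java
/**
 * Hypothetical reconstruction of the duplicated-instanceof slip described
 * above -- not the real helper method from the test code.
 */
class InstanceofSlip {

    // Buggy: 'a' is tested twice; 'b' is never examined at all.
    static boolean bothAreStringsBuggy(Object a, Object b) {
        return a instanceof String && a instanceof String;
    }

    // Fixed: each object is checked once.
    static boolean bothAreStrings(Object a, Object b) {
        return a instanceof String && b instanceof String;
    }
}
```

The buggy version wrongly reports true whenever the first argument alone is a String, which is exactly the sort of quietly-wrong behaviour that can survive for years in test-helper code.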

I guess this raises the broader issue of whether (and how) test-case code should itself be tested, but that’s one for another day (…would you then also want to test your tests of your tests…?). Anyway, thanks to FindBugs, this particular mistake has been detected and fixed before causing any harm or confusion.

Every time I find something like this it makes me think how fantastic it is to have such tools. I use PMD and CheckStyle as well, and they’ve all helped me find and fix mistakes and improve my code and my coding. I’ve learnt lots of detailed stuff from them too. But FindBugs especially has proven to be very effective whilst also being easy to use – both in Ant scripts and via its Eclipse plug-in.

If you’re writing Java code and haven’t yet tried FindBugs, it’s well worth a look.

JspC: Switching from Tomcat to Glassfish

21 07 2008

The Ant build script that I use for all of my projects includes, for web-applications, translating and compiling any JSP files. For my purposes this is just to validate the JSPs and report any syntax and compilation errors as part of the build, rather than to put pre-compiled class files into the finished web-app.

I’ve just quickly switched from using Tomcat’s JSP compiler to using Glassfish V2’s JSP compiler, and it seems worth documenting the changes involved and some of the similarities and differences.

Note that I was previously using the Tomcat 5 JSP compiler, and it didn’t seem worth upgrading this to Tomcat 6 just in order to ditch it for Glassfish, so this isn’t a like-for-like comparison – some of the fixes/changes noted might also be present in Tomcat 6.

The actual change-over was relatively painless. It’s basically the same JSP compiler – which I understand is known as “Apache Jasper 2” – so the general nature of it and the options available are essentially the same.

In Tomcat this is provided via a “JspC” Ant task, and needs to be supplied with a classpath that includes the relevant Tomcat libraries. In contrast, Glassfish provides a “jspc” script that supplies the appropriate classpath and invokes Glassfish’s JSP compiler, passing it any supplied command-line arguments.

So switching over basically just consisted of taking out the invocation of the Tomcat-supplied “JspC” Ant task (and the corresponding set-up of its classpath), and replacing it with an Ant “exec” of the Glassfish “jspc” script with equivalent command-line arguments.

However, the Glassfish documentation for this seems a bit on the weak side. At least, I didn’t find it particularly easy to locate any definitive documentation on the command-line options for the Glassfish V2 “jspc” script. Maybe I just didn’t look in the right places. The program itself supports a “-help” option that lists its command-line options, but without much explanation. There’s a more detailed explanation of the options in the Sun Application Server 9.1 Update 2 reference manual, but this doesn’t entirely match the current Glassfish release (e.g. it doesn’t include the recently-added “ignoreJspFragmentErrors” option). Nevertheless, it’s the best documentation I’ve found so far. In any case, the options haven’t yet diverged much from those of Tomcat JspC, so much of the Tomcat documentation remains relevant.

I’m also a bit unsure of the exact relationship between the Tomcat and Glassfish code. They both appear to be “Apache Jasper 2”, but this doesn’t seem to exist as a product in its own right, only as a component within Tomcat. The Glassfish code is presumably a copy or fork of the Tomcat code, but with its own bug-fixes and new features, and maintained and developed as part of Glassfish. With Glassfish being the reference implementation for new JSP versions, I assume the Glassfish implementation is now the main branch going forward, even if some of the changes get incorporated into both.

To add to my uncertainty, I’m also rather confused as to whether Glassfish does or doesn’t also provide an Ant task for invoking its JSP compiler. There is an “asant” script that invokes Glassfish’s internal copy of Ant with a suitable classpath, with various targets and supporting Ant tasks. There’s also documentation for previous releases of the “Sun Application Server” that shows a “sun-appserv-jspc” Ant task. But the current Glassfish V2 documentation doesn’t seem to list any such task amongst its “asant” targets, nor otherwise document a “jspc” or “sun-appserv-jspc” Ant task. Maybe I just didn’t find the right document. I guess I should just hunt around the Glassfish libraries for the relevant class, or try invoking it based on the previous release’s documentation. But for the moment, invoking the “jspc” script is perfectly adequate for my purposes, so I’m sticking with that unless and until I get a chance to look at this again.

A few other findings:

  • When given a complete web-application, the Tomcat 5 JspC compiler seems to process precisely those files that have a “.jsp” or “.jspx” extension. Maybe someone can enlighten me, but I can’t see anything in the Ant task’s attributes that allows it to be configured to process other file extensions. In contrast, Glassfish’s jspc script seems to automatically process all file types that are identified by the web.xml as being JSPs.
  • With the Tomcat JspC task, the JSP translation had to be followed by a separate run of “javac” to compile the resulting java source code. In contrast, the Glassfish jspc script supports a “-compile” option that carries out the compilation as part of its own processing. What’s more, I gather this uses the JSR 199 Java Compiler API for “in process” compilation if this is available (i.e. when running on JDK 6 or higher), and seems much faster as a result.
  • A slight limitation of the Glassfish jspc “-compile” option is that there doesn’t seem to be any control over where the resulting class files are written. Instead, they just get written into the same directory as the java source files. For my purposes this doesn’t matter, but if you wanted to put the class files into a specific location, or deploy them without the source code, you’d have to follow the jspc run with your own moving/copying/filtering of files as necessary.
  • I’m not particularly concerned with the exact performance of this, but subjectively the builds do seem noticeably faster since switching over to the Glassfish JspC and using its “built-in” compile instead of a separate “javac” run.
  • The Glassfish jspc script also supports a “-validate” option, which validates “.tld” and “web.xml” files against their schemas and DTDs. However, I don’t currently use this, and instead use a separate run of Glassfish’s verifier script to verify the finished web-application archive as a whole.
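For reference, the Tomcat-task-plus-javac arrangement described above looks roughly like this in an Ant build file. This is only a sketch based on Tomcat 5’s documented “jasper2” task; the property names, output locations and the “jspc.classpath” path reference are placeholders, not part of any real build:

```xml
<!-- Translate JSPs to .java source using Tomcat 5's JspC task. -->
<taskdef classname="org.apache.jasper.JspC" name="jasper2">
    <classpath refid="jspc.classpath"/>
</taskdef>
<jasper2 uriroot="${webapp.dir}"
         webXmlFragment="${build.dir}/generated_web.xml"
         outputDir="${build.dir}/jsp-java"/>

<!-- Separate compile step: the Tomcat task only generates source. -->
<javac srcdir="${build.dir}/jsp-java"
       destdir="${build.dir}/jsp-classes"
       classpathref="jspc.classpath"/>
```

With the Glassfish jspc script’s “-compile” option, the second step above disappears, which is presumably where much of the speed difference comes from.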

I wonder if anyone can clarify the exact relationship between the Tomcat and Glassfish JspC implementations and the underlying “Jasper 2”? Or the exact status (and maybe classname, location, documentation etc) of any Glassfish “jspc” Ant task?

Glassfish v2 and Kaspersky 7

12 06 2008

I’ve just encountered a problem starting Glassfish V2 on an MS Windows PC, and it seems to be due to Glassfish’s JMX port being blocked by Kaspersky 7 Anti-Virus for some reason. Glassfish gives the appearance of starting successfully, but then after a while it terminates with exceptions due to a time-out whilst trying to connect to port 8686 for JMX/RMI.

There’s nothing in the Kaspersky logs to indicate that anything has been detected/blocked, but the problem goes away if Kaspersky isn’t running during the start-up. It’s also ok if Kaspersky is started after Glassfish is fully up and running – it only fails if Kaspersky is running during the start-up.
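Since the only visible symptom is a connection time-out, a quick way to check from outside whether anything is actually accepting connections on the JMX port is a simple socket probe. This is just a generic diagnostic sketch (the class name, host, port and timeout are my own choices, and it’s not part of Glassfish or Kaspersky):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {

    /** Attempts a TCP connection; true if it succeeds within timeoutMs. */
    public static boolean isReachable(String host, int port, int timeoutMs) {
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // 8686 is Glassfish V2's default JMX/RMI connector port.
        System.out.println("JMX port reachable: "
                + isReachable("localhost", 8686, 2000));
    }
}
```

Running this while Glassfish is “hanging” at start-up would at least distinguish between the port being blocked and the connector simply not having started.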

This might be something peculiar to my own set-up on this particular PC, but it sounds awfully similar to what’s being discussed in the “Does Eclipse Europa support GlassFish v2 UR2” forum thread.

So in case this helps anyone else, here’s what I’ve found, and how I’ve got around it.

The problem seems to be due to Kaspersky “traffic monitoring” of port 8686, and can be fixed by setting Kaspersky to not monitor that particular port. Unfortunately, there doesn’t seem to be any way to make Kaspersky monitor all ports except specific ones. The only way I’ve found so far to keep the monitoring in general but exclude port 8686 is to turn off “monitor all ports” and make sure that 8686 isn’t in the list of monitored ports. You’re then monitoring only those ports that are explicitly specified. This weakens the protection, but is probably acceptable if you’re behind a suitable firewall or otherwise have a known, limited set of ports that are potentially accessible.

I’ve tried playing around with other parts of the Kaspersky “web anti-virus” settings (e.g. weaken/disable “heuristic” scan, exclude localhost URLs), but I haven’t yet found anything else that fixes the problem. There might be things you can configure on Glassfish that would avoid the problem, but without any indication from Kaspersky of what it thinks it’s blocking (and why) there isn’t much to go on.

For info, this is with Glassfish V2 U1 or U2 running on Java SE 6 update 6, and Kaspersky 7. I didn’t previously have any problems with Glassfish V2 U1 whilst using Kaspersky 6, so I suspect this is specific to Kaspersky 7. Unless anyone knows better…

Private beta of ObMimic for out-of-container servlet testing

30 05 2008

Updated Feb 2018: OpenBrace Limited has closed down, and its ObMimic product is no longer supported.

The ObMimic library for out-of-container servlet testing is now being made available to a small number of users as a private “beta” release, in advance of a more public beta.

We’re ready for a few more people to start trying it out, so if you’re interested just let me know – either via this blog’s “contact me” page or via my company e-mail address of mike-at-openbrace-dot-com.

In outline, ObMimic provides a comprehensive set of fully-configurable test doubles for the Servlet API, so that you can use normal “plain java” tools and techniques to test servlets, filters, listeners and any other code that depends on the Servlet API. We call these test doubles “mimics”, because they “mimic” the behaviour of the real object.

We see this as the ultimate set of “test doubles” for this specific API: a set of plain Java objects that completely and accurately mimic the behaviour of the “real” Servlet API objects, whilst being fully configurable and inspectable and with additional instrumentation to support both “state-based” and “interaction-based” testing.

If you find servlet code harder to test than plain Java, ObMimic might be just what you’re looking for.

With ObMimic, you can create instances of any Servlet API interface or abstract class using plain no-argument constructors; configure and inspect all relevant details of their internal state as necessary; and pass them into your code wherever Servlet API objects are needed. This makes it easy to do detailed testing of servlets, filters, listeners and other code that depends on the Servlet API, without needing a servlet container and without any of the complexities and overheads of packaging, deployment, restarts/reloads, networking etc.

ObMimic includes facilities for:

  • Setting values that are “read-only” in the Servlet API (including full programmatic control over “deployment descriptor” values and other values that are normally fixed during packaging/deployment, or that have fixed values in each servlet container).
  • Examining values that are normally “write-only” in the Servlet API (such as a response’s body content).
  • Optionally recording and retrieving details of the Servlet API calls made to each object (with ability to turn this on and off on individual objects).
  • Controlling which version of the Servlet API is simulated, with versions 2.3, 2.4 and 2.5 currently supported (for example, you can programmatically repeat a test using different Servlet API versions).
  • Detecting and reporting any calls to Servlet API methods whose handling isn’t strictly defined by the API (e.g. passing null arguments to Servlet API methods whose Javadoc doesn’t specify whether nulls are permitted or how they are handled).
  • Controlling the simulation of container-specific behaviour (i.e. where the Servlet API allows variations or leaves this open).
  • Explicitly forcing Servlet API methods to throw a checked exception (e.g. so that you can test any code that handles such exceptions).
  • Handling JNDI look-ups using a built-in, in-memory JNDI simulation.

There are no dependencies on any particular testing framework or third-party libraries (other than Java SE 5 or higher and the Servlet API itself), so you can freely use ObMimic with JUnit, TestNG or any other testing framework or tool.

In contrast to traditional “mock” or “stub” objects, ObMimic provides complete, ready-made implementations of the Servlet API interfaces and abstract classes as defined by their Javadoc. As a result, your tests don’t have to depend on your own assumptions about the Servlet API’s behaviour, and both state-based and interaction-based tests can be supported. ObMimic can even handle complex sequences of Servlet API calls, such as for session-handling, request dispatching, incorporation of “POST” body content into request parameters, notification to listeners, and other such complex interactions between Servlet API objects. It can thus be used not only for testing individual components in isolation, but also for testing more complete paths through your code and third-party libraries.

With the appropriate configuration, it’s even possible to test code that uses other frameworks on top of the Servlet API. For example, we’ve been able to use ObMimic to test “Struts 1” code, and to run ZeroTurnaround’s JspWeaver on top of ObMimic to provide out-of-container testing of JSPs (as documented previously).

As a somewhat arbitrary example, the following code illustrates a very simple use of ObMimic to test a servlet (just to show the basics of how Servlet API objects can be created, configured and used):

import com.openbrace.obmimic.mimic.servlet.http.HttpServletRequestMimic;
import com.openbrace.obmimic.mimic.servlet.http.HttpServletResponseMimic;
import com.openbrace.obmimic.mimic.servlet.ServletConfigMimic;
import javax.servlet.Servlet;
import javax.servlet.ServletException;
import java.io.IOException;

/* Create a request and configure it as needed by the test. */
HttpServletRequestMimic request = new HttpServletRequestMimic();
request.getMimicState().getRequestParameters().set("name", "foo");
request.getMimicState().getAttributes().set("bar", 123);
// ... further request set-up as desired ...

/* Create a response. */
HttpServletResponseMimic response = new HttpServletResponseMimic();

/*
 * Create and initialize the servlet to be tested (assumed to be a
 * class called "MyHttpServlet"), using a dummy/minimal
 * ServletConfig.
 */
Servlet myServlet = new MyHttpServlet();
try {
    myServlet.init(new ServletConfigMimic());
} catch (ServletException e) {
    // ... report that test failed with unexpected ServletException ...
}

/* Invoke the servlet to process the request and response. */
try {
    myServlet.service(request, response);
} catch (ServletException e) {
    // ... report that test failed with unexpected ServletException ...
} catch (IOException e) {
    // ... report that test failed with unexpected IOException ...
}

/*
 * Retrieve the response's resulting status code and body content,
 * as examples of how the resulting state of the relevant mimic
 * instances can be examined.
 */
int statusCode
    = response.getMimicState().getEffectiveHttpStatusCode();
String bodyContent
    = response.getMimicState().getBodyContentAsString();
// ... then check them as appropriate for the test ...

For further examples and details, refer to the previous posts “First experiments with out-of-container testing of Servlet code using ObMimic” part 1 and part 2, “Out-of-container JSP testing with ObMimic and JspWeaver”, and the related post “Mocking an API should be somebody else’s problem”.

There are also more extensive examples in ObMimic’s documentation.

ObMimic isn’t open-source, but it will have a zero-cost version (full API coverage but a few overall features disabled, such as the ability to configure the Servlet API version, control over how incorrect/ambiguous API calls are handled, and recording of API calls). There will also be a low-cost per-user “Professional” version with full functionality, and an “Enterprise” version that includes all of ObMimic’s source-code and internal tests (with an Ant build script) as well as a licence for up to 200 users.

At the moment there’s no web-site, discussion forums or bug-reporting mechanisms (all still being prepared), but ObMimic already comes with full documentation including both short and detailed “getting started” guides, “how to”s with example code, and extensive Javadoc – and for this private beta I’m providing direct support by e-mail.

Anyway, if you’d like to try out ObMimic, or have any questions or comments, or would like to be informed when there’s a more public release, just let me know via the “contact me” page or by e-mail.
