ObMimic Public Beta for Out-of-Container Servlet Testing

3 06 2013

Updated August 2014: ObMimic is now out of beta, with latest version available from www.openbrace.com

ObMimic from OpenBrace Limited is a library of complete and fully-configurable plain-Java implementations of Servlet API objects for use as ready-made test-doubles in out-of-container testing of Servlet code. After a long development – and an even longer hiatus – it’s now available as a public beta release at www.openbrace.com.

You can use ObMimic to write comprehensive, detailed tests for your Servlet API code, using the same tools and techniques as for normal plain-Java code – without having to deploy and run your code inside a servlet container, and without having to write your own mocks or stubs and rely on your own assumptions about the Servlet API’s behaviour.

At its simplest, your tests can obtain fully-functional “mimic” instances of Servlet API objects using plain no-argument constructors — for example, you can create an HttpServletRequest with just:

new HttpServletRequestMimic();

Beyond that, you can configure and inspect the logical state of each such object as necessary for your tests. This includes control over details that would normally be “fixed” when running within a Servlet container (e.g. “init” parameters, Servlet API version, behaviours that are allowed to vary between containers, deliberate throwing of exceptions for testing of exception handling etc). There’s a detailed list of features on the website’s Features page.
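As a rough sketch of how this can look in a test (hedged: HttpServletResponseMimic and MyServlet are assumed names used here purely for illustration, the package imports for the mimic classes are omitted, and the state-configuration calls are left out; the ObMimic Javadoc and How To guides describe the actual configuration API):

    import javax.servlet.http.HttpServlet;
    import org.junit.Test;

    public class MyServletTest {

        @Test
        public void handlesRequestOutsideContainer() throws Exception {
            // Fully-functional Servlet API objects from plain no-argument constructors.
            HttpServletRequestMimic request = new HttpServletRequestMimic();
            HttpServletResponseMimic response = new HttpServletResponseMimic(); // assumed name

            // Exercise the servlet as ordinary plain-Java code, with no container involved.
            HttpServlet servlet = new MyServlet(); // the servlet under test (hypothetical)
            servlet.init();
            servlet.service(request, response);

            // ...then configure or inspect the mimics' logical state via ObMimic's own API.
        }
    }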

If you want to test code that uses the Servlet API but find that detailed testing of such code is harder, more restrictive or slower than for normal Java code, ObMimic may be what you’re looking for.

The website provides a free download of ObMimic (including a free licence-key to unlock the “Professional Edition” features during the beta). The website also has a full copy of ObMimic’s documentation (including comprehensive Javadoc and a set of How To guides) and a set of discussion forums.

For some earlier posts that describe ObMimic and show some example code, see Experiments with out-of-container testing of Servlet code using ObMimic (Part 1) and First use of ObMimic for out-of-container testing of Servlets and Struts (Part 2).





Beware of using java.util.Scanner with “\z”

17 12 2011

There are various articles and blog postings around that suggest that using Scanner with a “\z” delimiter is an easy way to read an entire file in one go (with “\z” being the regular expression for “end of input”).

Because a single read with “\z” as the delimiter should read everything until “end of input”, it’s tempting to just do a single read and leave it at that, as such examples typically do.

In most cases that’s OK, but I’ve found at least one situation where reading to “end of input” doesn’t read the entire input – when the input is a SequenceInputStream, each of the constituent InputStreams appears to give a separate “end of input” of its own. As a result, if you do a single read with a delimiter of “\z” it returns the content of the first of the SequenceInputStream’s constituent streams, but doesn’t read into the rest of the constituent streams.

At any rate, that’s what I get on Oracle JDK 5, 6 and 7.

This might be a quirk or bug in Scanner, SequenceInputStream, regular expression processing, or how “end of input” is detected, or it might be some subtlety in the meaning of “\z” that I’m not privy to. Equally, there might be other types of InputStream with constituent sub-components that each report a separate “end of input”. But whatever the underlying reasons and scope of this problem, it seems safest to never assume that a single read delimited by “\z” will always read the whole of an input stream.

So if you really want to use Scanner to read the whole of something, I’d recommend that even when using “\z” you should still iterate the read until the Scanner reports “hasNext” as false (even though that rather reduces the attraction of using Scanner for this, as opposed to some other more direct approach to reading through the whole of the input).
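Here’s a minimal, self-contained sketch of that approach (using a SequenceInputStream just to demonstrate the issue; with the Oracle JDKs mentioned above, a single next() stops at the first constituent stream, whereas the loop collects everything):

    import java.io.ByteArrayInputStream;
    import java.io.InputStream;
    import java.io.SequenceInputStream;
    import java.util.Scanner;

    public class ReadWholeInputWithScanner {

        public static void main(String[] args) {
            InputStream first = new ByteArrayInputStream("first part ".getBytes());
            InputStream second = new ByteArrayInputStream("second part".getBytes());
            InputStream combined = new SequenceInputStream(first, second);

            // A single next() with "\z" may stop at the first constituent stream's
            // "end of input", so keep reading until hasNext() reports false.
            Scanner scanner = new Scanner(combined).useDelimiter("\\z");
            StringBuilder whole = new StringBuilder();
            while (scanner.hasNext()) {
                whole.append(scanner.next());
            }
            scanner.close();

            System.out.println(whole); // expect "first part second part"
        }
    }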





Java Enum as Singleton: Good or Bad?

4 07 2011

Item 3 in the 2nd Edition of Effective Java explains three ways of implementing a singleton in Java, the last of which is “Enum as Singleton”. This uses an Enum with a single element as a simple and safe way to provide a singleton. It’s stated as being the best way to implement a singleton (at least, for Java 5 onwards and where the additional flexibility of the “static factory method” approach isn’t required).
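For reference, the idiom itself is tiny. A minimal sketch (the names here are illustrative rather than taken from the book):

    // A single-element enum: the JVM guarantees exactly one INSTANCE per classloader,
    // and serialization/deserialization cannot be used to conjure up a second instance.
    public enum SingletonService {
        INSTANCE;

        public void doSomething() {
            System.out.println("There is only ever one of me.");
        }
    }

    // Client code accesses the singleton directly via the enum's sole element:
    //   SingletonService.INSTANCE.doSomething();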

But is this technique a good or bad idea? Is anyone actually doing it this way? If you’ve used it or encountered it, are you happy with it or do you have any reservations?

Please note: I’m not interested in any wars over whether singletons are evil or not. The concept exists, one comes across them in real code, and there are reasonable discussions to be had over whether they are always a bad idea or have their uses in particular situations. None of that is relevant to how best to implement a singleton if one ever does wish to do so, or the pros and cons of different implementation techniques.

OK, with that dispensed with, what should we make of the “Enum as Singleton” technique?

From my point of view, it works, the code is trivially simple, and it does automatically take care of the “serialization” issue (that is, maintaining one instance per classloader even in the face of serialization and deserialization of the instance). But it feels too much like a trick, and (arguably) not in the spirit of the concept of an enumeration type. When I see an Enum that isn’t being used to enumerate a set of constants and that only has one element, I think I’m more likely to have to stop and figure out what’s going on rather than immediately and automatically thinking “oh, here’s a singleton”. If it becomes more common I’ll no doubt get used to seeing this idiom, but if so I might then find myself misled by any “normal” enumeration that just happens to only have one element.

Another concern is that whilst the use of a static factory method to provide a singleton offers more flexibility than either the use of a public static member or a single-element Enum, it requires different client code for accessing the singleton. So using either of the latter two approaches means that you risk having to change client code if you ever need to “upgrade” the singleton to the more flexible “static factory method” approach.

A further issue is how best to name Enum classes and instances that are implementing singletons. Should one stick to the usual naming conventions for Enums, or adopt some other naming convention (and maybe include “Singleton” in the name to make the intent clear)? And what if the singleton object is mutable in any way? Or is that a more general issue over the naming of enumeration “constants” if they are actually mutable? Or maybe it makes more sense to say that Enums must be genuine constants and should never, ever be mutable – in which case “Enum as Singleton” shouldn’t be used for any singleton with mutable state, which limits its applicability even more?

So now that the “Enum as Singleton” technique has been widely known for a few years, does anyone have any significant experiences from real-world use of it? Or any other opinions on this technique?





Third-time lucky with EJB

30 12 2010

I learnt EJB 1, but never encountered any situation that justified actually using it.

I learnt EJB 2, but never encountered any situation that justified actually using it.

So despite knowing that EJB 3 was much better, and having a general picture of it, I’d been holding off from any detailed reading or study on EJB 3 until specifically needing it.

Well, now I have a development for which EJB 3 seems appropriate, so this time I’m finally using it for real!

In practice this really means JPA with a tiny bit of EJB on top, as “Entities” aren’t technically EJBs any more (to the extent that if an EJB jar consists of nothing but “entities” it’s considered invalid due to not containing any EJBs).

On the whole I’ve been quite impressed by EJB 3 and JPA. The basic programming is, as advertised, much cleaner and simpler than before, and it lives up to its reputation of being much easier to get started with and needing far less “boilerplate” code.
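As a rough illustration of the kind of code involved (the class names are purely illustrative, each top-level class would live in its own file, and a real application also needs a persistence.xml to define the persistence unit):

    import javax.ejb.Stateless;
    import javax.persistence.Entity;
    import javax.persistence.EntityManager;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.PersistenceContext;

    // A JPA entity: just an annotated plain Java class, with no interfaces,
    // no deployment-descriptor entries and no container-specific base class.
    @Entity
    public class Customer {
        @Id
        @GeneratedValue
        private Long id;

        private String name;

        public Long getId() { return id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    // The "tiny bit of EJB on top": a stateless session bean with the
    // EntityManager injected and transactions handled by the container.
    @Stateless
    public class CustomerService {
        @PersistenceContext
        private EntityManager em;

        public Customer save(Customer customer) {
            em.persist(customer);
            return customer;
        }

        public Customer find(Long id) {
            return em.find(Customer.class, id);
        }
    }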

Inevitably it’s taken a fair amount of work to arrive at suitable design choices, naming and coding conventions, build script enhancements, test facilities etc, and in general to address the myriad assorted issues and choices that crop up whenever one adopts any additional technology. And, of course, once you get into anything non-trivial you’re dragged into the usual pile of quirks and work-arounds and implementation-specific bugs. You only need to take a quick look through the JPA FAQ and read some of the referenced discussions to get a feel for how tricky and arbitrary some of this can be.

But overall, incorporating EJB 3 and JPA into a couple of new projects has all been relatively straightforward, and certainly no worse than is par for the course these days.

I still wish I could somehow claw back the time I’d previously spent learning EJB 1 and 2 (though not nearly as much as the time I spent learning Microsoft COM technologies!). But what’s done is done, and so far I’m very pleased with how easy and effective EJB 3 and JPA now appear to be.

If you’re still holding back from EJB and JPA because of bad experiences with previous versions, I can add my voice to the chorus of people saying it’s worth a fresh look.





Servlet 3.0 – A spaghetti API?

26 07 2010

The introduction in Servlet 3.0 of “web fragments”, along with both annotation-based and programmatic mechanisms for introducing components into a web application, is very welcome.

However, combined with all the other new features, their configuration facilities, the relevant class/jar-finding mechanisms, and the interactions between everything, the overall complexity of the Servlet API seems to have increased horrendously.

To my mind, an awful lot of it is starting to look like a tangled mess of spaghetti – the API equivalent of spaghetti code.

Here’s just one relatively minor example (but please, please, please put me straight if I’ve missed the meaning of this and it’s all really simple and elegant).

Here goes…

The Javadoc of every “since 3.0” method in javax.servlet.ServletContext (for example, getEffectiveMajorVersion) includes a “throws” clause that says:

Throws: java.lang.UnsupportedOperationException – if this ServletContext was passed to the ServletContextListener#contextInitialized method of a ServletContextListener that was neither declared in web.xml or web-fragment.xml, nor annotated with WebListener

So the behaviour of a ServletContext, including things like whether or not you can determine which Servlet version it needs, depends on whether it “was passed to” a ServletContextListener to notify that listener of the context’s initialization – and on which of the various possible ways was used to declare or register that listener.

For now let’s just gloss over the various minor questions and issues raised by this, such as:

  • What does “was passed to” actually mean? Has been passed to, at any time previously? Is currently being processed within a call to? Both? Something else?
  • Does or doesn’t this apply if the ServletContext “was passed to” multiple listeners of which some are of the specified type and some are not?
  • What is the actual purpose of this rule (i.e. why should being passed to a particular type of listener prevent the ServletContext from processing any of its “since 3.0” methods)?

Quite apart from all that, and far more fundamentally, isn’t it rather perverse for an object’s methods to depend directly on what other objects it “was passed to”? Especially where there doesn’t seem to be any immediately obvious reason for such a dependency?

And doesn’t it seem even more wrong that an object’s behaviour should depend on which other objects are “listening” for events on it? Isn’t that the tail wagging the dog?

Even assuming there’s some reasonable reason for this, and that there’s some sense in which it makes some kind of sense, is this really the kind of thing we want to see in an API?

Just in case this still seems too simple for you, the ServletContext also now includes a createListener method for creating listeners, and a number of overloaded addListener methods for adding listeners to itself (but only provided it has not already been initialized). The method for creating listeners does allow the creation of ServletContextListeners, but the methods for adding listeners only support the addition of a ServletContextListener “If this ServletContext was passed to ServletContainerInitializer#onStartup” (which I’ll come to later).

Now both of these methods are subject to various conditions, including the “throws” clause described above. Listeners created and added in this way are, presumably, precisely the sort of listeners that such “throws” clauses are referring to (that is, not defined in web.xml or web-fragment.xml and potentially not annotated with WebListener). But what does it mean for methods that create and add such listeners to also have this “throws” clause themselves? Especially when they also require the ServletContext to have not yet been initialized, in which case it presumably can’t have been passed to any ServletContextListeners yet anyway?

Is anyone else getting confused yet?

If even that still seems simple enough, ServletContextListeners are also no longer the only things listening for the application and/or context’s initialization. There is also now a ServletContainerInitializer interface, for classes that want to handle the application’s start-up (or does it really mean the container’s start-up, as its name would seem to imply?). Clearly, this is another route through which ServletContextListeners can be programmatically created and introduced, in particular by having the ServletContainerInitializer use the ServletContext’s “createListener” and/or “addListener” methods – with the “addListener” methods making specific allowance for this as described above, and requiring the ServletContext to know whether or not it “was passed to” a ServletContainerInitializer.
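To make that route concrete, here’s a rough sketch of such an initializer (the marker interface and listener class are purely illustrative, and the initializer itself is discovered through the container’s service-lookup mechanism, the quirks of which are discussed below):

    import java.util.Set;
    import javax.servlet.ServletContainerInitializer;
    import javax.servlet.ServletContext;
    import javax.servlet.ServletException;
    import javax.servlet.annotation.HandlesTypes;

    // Illustrative only: MyPlugin is a hypothetical marker interface whose
    // implementations the container passes to onStartup, and
    // MyProgrammaticListener is a hypothetical ServletContextListener.
    @HandlesTypes(MyPlugin.class)
    public class MyInitializer implements ServletContainerInitializer {

        @Override
        public void onStartup(Set<Class<?>> handledClasses, ServletContext ctx)
                throws ServletException {
            // Because this ServletContext "was passed to" onStartup, its addListener
            // methods will accept a ServletContextListener here - thereby registering
            // exactly the kind of listener (not declared in web.xml or web-fragment.xml,
            // and not annotated with @WebListener) that the "throws" clauses above
            // are aimed at.
            ctx.addListener(MyProgrammaticListener.class);
        }
    }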

Of course, this ServletContainerInitializer interface has its own complexities and quirks. I won’t go into full detail on these here, but just to give a flavour:

  • It specifies naming conventions and mechanisms for how its implementing classes are found (and these mechanisms have their own quirks and ambiguities, for example the naming convention appears to require classes to be placed in the “javax.servlet” package, in violation of the usual rules and licence terms, and the class-level javadoc says that implementations must be within the application’s /WEB-INF/services directory but the relevant method’s javadoc talks about different behaviour depending on whether it is within /WEB-INF/lib or elsewhere);
  • It uses an annotation to specify what types of classes are to be passed to its sole method as arguments, together with rules for how the relevant classes are to be found, with this in turn including a requirement for the container to provide “a configuration option” to control whether failure to find such classes should be logged;
  • Its javadoc includes the quite wonderful statement “In either case, ServletContainerInitializer services from web fragment JAR files excluded from an absolute ordering must be ignored, and the order in which these services are discovered must follow the application’s classloading delegation model.”.

Am I alone in thinking this is all getting way out of hand? How many features like these (with their accompanying restrictions, exclusions and interactions) does it take before the API as a whole becomes incomprehensible?

At this point I was going to sarcastically suggest some incredibly complex and convoluted fictional requirement for things I’d like to see added into the next version of the API. But I’m too afraid that someone might treat it as a serious feature request, and in any case it’s not easy to come up with anything that’s more convoluted than the existing features (at least, not without sounding completely silly).

So instead I’ll just say that, personally, I fear that the Servlet API may have already jumped the shark.





Why would you ask for zero bytes from a Java InputStream?

12 04 2010

When would one pass a length argument of zero to java.io.InputStream.read(byte[], int, int) so as to not read any bytes? Does anyone have a good example of when this is necessary or convenient?

The method’s javadoc shows that it explicitly caters for being passed a length of zero, but to me that looks like an unnecessary complication that has plenty of potential for misunderstanding, incorrect implementation by subclasses, and a risk of infinite loops in client code.

I’ve been trying to imagine what common situation might justify catering for a request to read zero bytes, but haven’t come up with anything convincing.

The Javadoc actually says “If len is zero, then no bytes are read and 0 is returned; otherwise…”, and then goes on to explain its normal processing within the “otherwise…” clause, including the handling of end-of-stream and any IOExceptions that might occur. There are some separate paragraphs before and after this, and separate explanations of argument validation, but it seems quite clear that if the length is zero the “if len is zero” statement applies instead of the normal processing and its various conditions and outcomes.

At first glance that seems straightforward and simplifies things – the remainder of the rules only apply for non-zero lengths.

However, it’s not as simple as it seems:

  • If you’re already at end-of-stream, reading zero bytes will complete as normal, won’t change anything, and will return zero. It’s easy to see how a caller could get stuck in an infinite loop if they’re not explicitly checking for this. (Conversely, if the caller is explicitly checking for a result of zero, it wouldn’t appear to be any harder for the caller to instead check for a length of zero beforehand and avoid the call altogether). It also means you can’t use a read of zero bytes as a safe way of just checking whether you’ve reached end-of-stream yet.
  • The javadoc says, quite separately, that an IOException is thrown if the stream is already closed. It isn’t clear which condition takes precedence if zero bytes are requested but the stream is also already closed. More generally it’s not clear whether this is specifying that an IOException SHOULD be thrown if the stream is already closed or just explaining that this MAY result in an IOException (i.e. if an attempt to actually use the stream happens to result in such an exception). So depending on how you read it, you can argue either that an attempt to read zero bytes when the stream is already closed should complete normally and return zero, or that it should throw an IOException.
  • It’s invalid to specify an offset and length that together exceed the size of the destination array (such that writing the bytes into the array would go out-of-bounds). This appears to apply even if the length is zero, and that is indeed how it’s implemented in the source code (at least, in the Sun JDK 6 source code). But this is somewhat inconsistent with the general treatment of a zero length as returning zero regardless of other issues (e.g. even if already at end-of-stream). Arguably it would be more appropriate and more consistent with the rest of the specification to completely ignore the offset and array arguments if the length is zero and you’re not actually going to read any bytes into the array.
  • If a call successfully reads one or more bytes but then encounters an exception, the read ends at that point and returns normally, with the exception then being thrown for the first byte of the next read. But if the next read is for zero bytes, it will complete successfully without even attempting a read, and won’t encounter the exception. Whilst that’s in keeping with the normal behaviour of the method, it’s yet another thing that callers asking for zero bytes might need to be aware of and cater for (depending on exactly what they’re doing and how the read of zero bytes arises).
  • InputStream implementations aren’t entirely consistent with this specification, even within the JDK. In particular, the Javadoc for java.io.ByteArrayInputStream says that it tests for end of stream and returns -1 prior to considering whether to read any bytes or return zero. Hence if a ByteArrayInputStream is at end of stream and you ask to read zero bytes, it gives you -1 to indicate end-of-stream rather than zero as specified by the underlying InputStream base class. With the various ambiguities noted above, third-party InputStream implementations of this method are probably even more likely to be inconsistent in how they handle reads of zero bytes (see the short sketch after this list).
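As a small illustration of the zero-length behaviour and the inconsistency just noted (output as observed with the Sun JDK 6 implementations; other implementations may differ):

    import java.io.BufferedInputStream;
    import java.io.ByteArrayInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class ZeroLengthReadDemo {

        public static void main(String[] args) throws IOException {
            byte[] data = {1, 2, 3};
            byte[] buffer = new byte[8];

            // ByteArrayInputStream checks for end-of-stream before checking the length,
            // so a zero-length read at end-of-stream reports -1 rather than 0.
            InputStream bais = new ByteArrayInputStream(data);
            bais.read(buffer, 0, buffer.length);              // consumes all three bytes
            System.out.println(bais.read(buffer, 0, 0));      // prints -1

            // BufferedInputStream follows the base-class contract and returns 0 for a
            // zero-length read even at end-of-stream - so a loop that only stops on -1
            // and can end up asking for zero bytes risks spinning forever.
            InputStream buffered = new BufferedInputStream(new ByteArrayInputStream(data));
            buffered.read(buffer, 0, buffer.length);          // consumes all three bytes
            System.out.println(buffered.read(buffer, 0, 0));  // prints 0, not -1
        }
    }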

So why isn’t a length argument of zero just prohibited? As far as I can see, the typical use of this method shouldn’t normally involve passing a length of zero, and any client code that really can result in a legitimate call for a length of zero is probably going to have to do something to explicitly handle it anyway (for example, to avoid getting stuck in a loop). The length is already required to be non-negative, so why isn’t it just required to be greater than zero instead? That would seem to be a lot simpler and less open to misinterpretation, misuse or incorrect implementation.

What am I missing? Can anyone enlighten me with a good example of something that benefits from being able to ask for zero bytes? That is, a relatively common use of InputStream.read(byte[], int, int) where passing a “len” argument of zero can actually occur, and where allowing this is significantly more convenient for callers than requiring the caller to explicitly check for and handle this case itself.

Please note that I’m not for a moment suggesting that something this well-established could realistically be changed at this point. I’m just curious as to why it is the way it is. Is it a mistake? A lack of attention to detail that we’re now stuck with? Or is there a really good reason for it that I just haven’t come across yet?





In memory of CESIL

18 01 2010

A couple of weeks ago, for some now-forgotten reason, I found myself thinking about the first programming language I ever learnt.

In particular, I was thinking about the pros and cons of it being a minimal, assembler-like language that was designed specifically to teach the fundamentals of computers and programming.

The language was CESIL (“Computer Education in Schools Instructional Language”), the year was 1974, I was 15, and the class was CSE “Computer Studies”. The hardware was hand-punched cards that were sent off by post to some other place, with results returned a week later, just in time for the next lesson.

For the second year we moved on to BASIC using a Teletype crammed into a corner of a little room up in the school’s attic (theoretically that should have been a lot better, but in practice it was so inconvenient and unreliable as to be almost useless).

That sounds unbearably tedious and uninspiring now, but way back then it was amazing that we were able to use an actual real-life computer and were learning about computers as part of our schooling.

Anyway, as clunky as the arrangements were, and as much as others will no doubt disagree, I still think CESIL itself was a great introduction to programming.

Conceptually, it was an assembler-like language that operated on integer values in a single register. It supported:

  • Named variables
  • A handful of simple operations on the register and one variable or constant
  • Conditional branching based on testing of the register value and labels on statements
  • Some minimal input facilities (reading one value at a time from a list supplied at the end of the source file)
  • Some minimal output facilities (a simple “print” statement)
  • Pretty much nothing else… at least as far as I remember.

The syntax was just about the simplest one could imagine for that particular set of features. You can probably already picture the entire language.

The beauty of this was that, with no prior knowledge of computers of any kind (which in those days usually meant not even having a realistic concept of what a “computer” is), you could be taken straight into tackling simple algorithmic problems (like finding the largest of a given set of numbers, sorting numbers etc), without any undue distraction or complications. At the same time the class lessons could use this as a context for explaining computer hardware, what was actually happening when your program ran, and more general computer-science topics.

You were immediately seeing a reasonable model of the fundamental concepts of computers and algorithms, getting a feel for what programming is like, how to test and debug code etc, but without getting bogged-down in syntax rules or complex abstractions. The once-per-week cycle taught you to be careful and precise, and to desk-check and dry-run your code. More generally it forced you to figure out what you were doing, rather than just flapping around and guessing with a trial-and-error approach (like so many people seem to do today).

In practice the limited resources also meant that some things had to be done in pairs or small teams. So whether by design or accident, you also got a taste of what it’s like to develop code with other people.

Against it, there was clearly a limit to how far you could go with CESIL, the practical difficulties made for very slow progress, and the language itself was never going to be of any direct use to you afterwards (as opposed to, say, being taught a real assembler language). So even in a two-year course you really did need to move on to something else fairly quickly and then forget about CESIL.

But on the whole I still think it was a great way to lay the foundations for a lifetime of programming. I’m grateful to whoever instigated getting this set up and introduced into schools – I suspect my career may have been very different without it!

Anyway, once this had crept back into my head for no particular reason, it seemed worth a quick look to see if CESIL has entirely vanished or whether there’s any information on it still knocking around. Whilst I’ve not found any official documentation or history, somebody has blogged about stumbling across the CESIL course book.

More astonishingly, someone has resurrected it – by writing an open-source interpreter for it. I’m rather tickled to see that someone has done this, just for the sake of it!







