
Good magic, bad magic

Philip Potter pinged me his post on Sinatra magic during the week. Mark Needham’s comment and code on solving the mocking problem are good advice for the problem as posed.

At Wazoku, where we use the often equally magical Bottle framework, we don’t use top-down TDD but instead outside-in functional tests (with no funky runners, as we don’t need CI). This solves the whole magic issue by shifting attention to the public interactions of the application. This is one of the massive benefits of using a microapp HTTP/JSON/REST-like architecture: I could flip the API from Bottle to Django or Compojure or Sinatra and my test suite can keep on rocking and telling me whether the behaviour my consumers are relying on is correct.
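
A minimal sketch of what such an outside-in test looks like. Since the test only talks to the public HTTP interface, the client language is irrelevant, so here it is in Java with JUnit and the JDK’s HttpClient; the /ideas endpoint, port and payload shape are all hypothetical:

```java
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.Test;

public class IdeasApiFunctionalTest {

    // The test knows only the public HTTP contract, never the framework behind it.
    private static final String BASE_URL = "http://localhost:8080"; // hypothetical deployment

    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    public void listingIdeasReturnsAJsonArray() throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "/ideas"))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // Assert only on the public contract: status, content type, body shape.
        assertEquals(200, response.statusCode());
        assertTrue(response.headers().firstValue("Content-Type").orElse("").startsWith("application/json"));
        assertTrue(response.body().trim().startsWith("["));
    }
}
```

Swap the implementation from Bottle to Django or Sinatra and this test neither knows nor cares.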

The major thing I felt when reading through Philip’s post was the massive amount of effort going into testing relatively simple behaviour. This is a bit of an anti-pattern among Agile developers (or perhaps it is part of the mastery thing, where rote “correct” behaviour is modified by experience and judgement). One of the massive advantages of using something like Sinatra is that you can get a whole web app with rich behaviour into less than 200 lines. If you then create thousands of lines of test code and battle with the magic for hours on end, you’ve completely destroyed your productivity.

If you have a code base that you expect to be large and highly contested by a large development team, you need good, layered testing and frameworks that support it. If you have an app that is small and, when it’s done, is done, then there is no need to agonise over whether it was done “right”.

The idea that top-down TDD is the only correct way to write software is corrosive. When faced with a generally poorly skilled and educated workforce it is good to have rules. I have imposed a certain style of TDD on a group myself because it gives a good framework for work and achieves very consistent output.

However, with skilled people on small-scale projects, you can kill yourself by imposing arbitrary rules. I love Sinatra, and while I might be equivocal about magic, I think it is ridiculous to moan about it if you are using something as unicorn-packed as Ruby. For example, Philip was trying to use RSpec mocks and stubs to do his TDD. The result amounts to saying that you’re disappointed that your “good” magic for testing didn’t work with the “bad” magic of a DSL for web applications. Even if your RSpec code passed its tests, you still haven’t said anything particularly deep about the production behaviour of your application, as your unit-testing environment was severely compromised by the manipulations of your mocking framework.

So my rule of thumb is: if it’s simple, do it; if it was simple, functionally test it; if it was never really simple, then test-drive it with suitable tools.

Programming, Work

Code Coverage 90% meaningless

I have always been quite sceptical about the value of code coverage metrics. Most of the teams I have seen that were productive and produced code with few defects in production did not use code coverage as a metric. Where code coverage is obsessively tracked, it tends to be as a management reporting metric (often linked to “Quality”) and rarely seems to correlate with lower defects or malleable software; instead it often appears in low-collaboration or low-trust environments.

Code coverage has most benefit in an immature unit-testing environment or in a “test-after” team. With test-after you have to have code coverage to ensure that people remembered to test all the possible execution paths. My personal preference is to push TDD as a practice rather than code coverage, because a side-effect of TDD is that you get 100% code coverage.

Code coverage is also quite a different beast from static or complexity analysis of code bases. Static analysis is a useful tool, and some complexity measures actually make good indicators of the “quality” of a code base. Nor is it the same as instrumenting code, which is invaluable when dealing with code you’ve inherited or for discovering how much of the codebase actually gets used in production.

Java, Programming

Three Little Mockers

At my last client I got the unusual chance to try three Java mocking frameworks within the same project. The project had started with EasyMock, as the project team felt that Mockito didn’t really have any decent documentation (it does, but it’s in the Mockito codebase, not on the Google Code wiki). However, as a ThoughtWorks team was working next door, there was a strong push to use Mockito.

My personal preference is still for JMock, so I also selfishly pushed that into the project to round out the selection. With all three there, we were ready for a Mock Off!

The first distinctive thing is that EasyMock and Mockito can mock concrete classes, not just interfaces. The second is that all three have very different methods of constructing and verifying mocks: EasyMock is record-replay; JMock constructs expectations that are automatically verified by a JUnit runner when the test stops executing; Mockito uses a test spy that lets you verify what happened to the mock whenever you want.
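
To make the three styles concrete, here is the same interaction mocked in each framework. PriceService and the values are invented for illustration, and the sketch assumes EasyMock’s static API, jMock 2 and Mockito:

```java
import static org.easymock.EasyMock.createMock;
import static org.easymock.EasyMock.expect;
import static org.easymock.EasyMock.replay;
import static org.easymock.EasyMock.verify;
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.jmock.Expectations;
import org.jmock.Mockery;
import org.junit.Test;

public class MockStylesTest {

    // A hypothetical collaborator used to contrast the three frameworks.
    public interface PriceService {
        int priceFor(String sku);
    }

    @Test
    public void easyMockRecordsThenReplays() {
        PriceService prices = createMock(PriceService.class);
        expect(prices.priceFor("book")).andReturn(10); // record
        replay(prices);                                // switch modes

        assertEquals(10, prices.priceFor("book"));     // exercise
        verify(prices);                                // verify
    }

    @Test
    public void jmockDeclaresExpectationsUpFront() {
        Mockery context = new Mockery();
        final PriceService prices = context.mock(PriceService.class);

        context.checking(new Expectations() {{
            oneOf(prices).priceFor("book"); will(returnValue(10));
        }});

        assertEquals(10, prices.priceFor("book"));
        context.assertIsSatisfied(); // JMock's JUnit runner normally does this for you
    }

    @Test
    public void mockitoVerifiesAfterTheFact() {
        PriceService prices = mock(PriceService.class);
        when(prices.priceFor("book")).thenReturn(10);

        assertEquals(10, prices.priceFor("book"));

        // Qualified call because EasyMock's verify is statically imported above.
        org.mockito.Mockito.verify(prices).priceFor("book"); // test-spy style: ask afterwards
    }
}
```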

The record-replay paradigm led to EasyMock being discarded early on. It has two kinds of problems as far as I am concerned. Firstly, you have the feeling of “inside out”, where the test is copying the internals of the class under test. Secondly, I was confused several times as to what state my mock was in, and having to switch the mocks to replay mode felt noisy, particularly where you were setting up multiple collaborators in the test (do you switch each one to replay once its recording is done, or do you record all the mocks and then switch them all to replay?).

Mockito’s easy mocking of concrete classes made it a valuable addition to the toolkit, and it does deliver the promised noise-free setup and verification. However its use of global state was frustrating, particularly in that you can create a compiling use of the API that fails at runtime. If you call verify without a following method call then you get an error about illegal use of the framework. Since this error is raised in the test that follows the one with the illegal construction, in practice it is a hideous pain once the test suite is a non-trivial size. It also meant that a test appeared to pass when actually nothing was being verified, while the error surfaced in another pair’s test (the one that implemented the next test), making them think that something was wrong with their code.
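
The failure mode looks roughly like this (a sketch; as I recall Mockito reports an UnfinishedVerificationException, and it surfaces at the framework’s next entry point rather than in the offending test):

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import java.util.List;

import org.junit.Test;

public class UnfinishedVerificationSketch {

    @SuppressWarnings("unchecked")
    @Test
    public void looksLikeItVerifiesButDoesNot() {
        List<String> names = mock(List.class);
        names.add("alice");

        verify(names); // compiles, but no method call follows: nothing is actually verified
        // This test passes anyway; Mockito can only spot the illegal use
        // at its next entry point, i.e. in whichever test runs afterwards.
    }

    @SuppressWarnings("unchecked")
    @Test
    public void innocentTestThatGetsBlamed() {
        List<String> names = mock(List.class); // blows up here with the framework-misuse error
        names.clear();
        verify(names).clear();
    }
}
```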

Mockito also uses a lot of static imports (which I do have a weakness for), but it means that you have to remember the entry points into the framework.

JMock, by comparison with Mockito, feels quite verbose; the price of all that IDE support and a discoverable API is that you have to have a Mockery or two in all your test classes and you are defining all your Expectations against that Mockery. There’s no denying that, even with IDE autocomplete, you’re doing a lot more typing with JMock. From a lazy programmer’s point of view you are going to go with Mockito every time.

And that is pretty much what happened in the project. Which I think is a shame. Now in explaining this I am going to go into a bit of ThoughtWorks testing geekery so if you are lazy feel free to go off and use Mockito because the pain I’m talking about will happen in the future not now.

I feel that Mockito is a framework for Test Driven Development and JMock is a framework for Test Driven Design. A lot of times you want the former: tight-deadline work and brownfield work. You want to verify the work you are doing but often design is getting pushed to the sidelines or you actually lack the ability to change the design as you might want. Mockito is great for that.

However Mockito doesn’t let the code speak to you. It takes care of so much of the detail that you often lose feedback on your code. One thing that making you type out your Expectations in JMock does is make you think, really think, about the way your code is collaborating. I think you are much more likely to design your code with JMock because it offers lots of opportunities to refactor. For example, if two services are constantly being mocked together, then maybe they are so closely related that they should be unified, logically or physically. You don’t see that opportunity for change with Mockito.
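
As a sketch of the sort of signal I mean (Checkout, PaymentGateway and PaymentAuditLog are invented), note how the same pair of expectations turns up together; in JMock you type that duplication out and feel it:

```java
import org.jmock.Expectations;
import org.jmock.Mockery;
import org.junit.Test;

public class CheckoutTest {

    // Hypothetical collaborators that keep appearing together in every test.
    public interface PaymentGateway { String charge(String orderId); }
    public interface PaymentAuditLog { void recordCharge(String orderId); }

    static class Checkout {
        private final PaymentGateway gateway;
        private final PaymentAuditLog auditLog;

        Checkout(PaymentGateway gateway, PaymentAuditLog auditLog) {
            this.gateway = gateway;
            this.auditLog = auditLog;
        }

        String placeOrder(String orderId) {
            String receipt = gateway.charge(orderId);
            auditLog.recordCharge(orderId);
            return receipt;
        }
    }

    private final Mockery context = new Mockery();

    @Test
    public void chargesAndAuditsTogether() {
        final PaymentGateway gateway = context.mock(PaymentGateway.class);
        final PaymentAuditLog auditLog = context.mock(PaymentAuditLog.class);

        // The same pair of expectations appears in test after test: a hint that
        // charging and auditing are so entangled they may want to be one service.
        context.checking(new Expectations() {{
            oneOf(gateway).charge("order-1"); will(returnValue("receipt-1"));
            oneOf(auditLog).recordCharge("order-1");
        }});

        new Checkout(gateway, auditLog).placeOrder("order-1");
        context.assertIsSatisfied();
    }
}
```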

By using both JMock and Mockito I was able to see that Mockito did tend to make us a bit lazy, and our classes tended to grow fatter in terms of functionality and collaborators because there was no penalty for doing so. I was also concerned that Mockito’s ability to mock concrete classes sometimes meant we mocked objects that should have been real or stub implementations.

Mockito is a powerful framework and I suspect that Java developers will take to it as soon as they encounter it. However, for genuine Test Driven Design I think Mockito suffocates the code and requires developers with a lot more discipline and design-nous, almost certainly not the people who wanted an easy-to-use framework in the first place.

Java, Programming

Shtop! You’re copying the wrong JUnit!

I have been using JUnit 4.4 on my most recent project, and this time, instead of just nibbling around the edges, I have actually been trying to use the features it adds. I have switched from using assertX to just using assertThat in combination with the various Hamcrest matchers.
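
The shift in style looks something like this (a small sketch using the CoreMatchers bundled with JUnit 4.4):

```java
import static org.hamcrest.CoreMatchers.equalTo;
import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.CoreMatchers.notNullValue;
import static org.junit.Assert.assertThat;

import java.util.Arrays;
import java.util.List;

import org.junit.Test;

public class AssertThatSketchTest {

    @Test
    public void readsAsASentence() {
        List<String> names = Arrays.asList("ada", "grace");

        // assertEquals(2, names.size()) becomes:
        assertThat(names.size(), is(2));
        assertThat(names.get(0), is(equalTo("ada")));
        assertThat(names, is(notNullValue()));
    }
}
```

The failure messages also describe the expectation in terms of the matcher, which is a nice bonus over a bare assertEquals.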

I have also been using the DataPoint and Theory functionality for a mix of empirical testing (putting through a set of data that has proven problematic in the past) and for creating shorter, specific tests that test a class of behaviour.
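
A sketch of how the DataPoint/Theory pairing fits together under the Theories runner (the absolute-value example is invented):

```java
import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;

import org.junit.experimental.theories.DataPoint;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

@RunWith(Theories.class)
public class AbsoluteValueTheoryTest {

    // Empirical data points, including values that have caused trouble before.
    @DataPoint public static int ZERO = 0;
    @DataPoint public static int POSITIVE = 42;
    @DataPoint public static int NEGATIVE = -7;

    // A theory states a class of behaviour; the runner applies it to every data point.
    @Theory
    public void absoluteValueIsNeverNegative(int value) {
        assertThat(Math.abs(value) >= 0, is(true));
    }
}
```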

I haven’t used Assumptions yet, but everything new I have touched has turned out to be solid gold. The only obvious missing feature is something like RSpec’s pending description; @Ignore is a bit rubbish really.

Therefore, when I am looking at other languages (such as Flex’s ActionScript), I am really disappointed when their testing tool turns out to be a rather literal port of JUnit 3. Porting JUnit 3 is the act of a charlatan. It seems to be about finding the lowest common denominator that allows you to wave some TDD colours rather than genuinely providing the tools that developers need.

During the long hiatus between JUnit 3 and JUnit 4 there was TestNG, and in many ways I would rather people used that as their template for a testing library. My current gold standard is RSpec, though, and I am not really seeing anything that ambitious in the “new” languages (except the very lovely Specs).

Often language teams leave library development to the “community”, but if you are going to the effort of officially porting something, why choose JUnit 3? The only reason I can find is that Java developers are familiar with it, and that is just a rubbish reason; Java developers should get out of their comfort zone and move from JUnit 3 to 4 or TestNG. In fact, since JUnit 4.5 has just been released this month, why don’t you take the chance and plunge in before the month is out?

Java, Software

Test Blight

Another post prompted by a Bliki entry; I’m not sure I would refer to what is described in the article as Test Cancer, though: I would say it is Test Blight. As soon as one test is switched off it weakens all the other tests, and soon the whole “test tree” is dying off as it begins to constrain and describe the system less and less.

Test Cancer is probably a better term for the situation where your test base keeps growing and growing, but in a meaningless way that actually obscures what is important in terms of the system description: for example, JUnit tests that have no assertions but just execute code, or hundreds of test files that do nothing but test the accessors of your value objects.
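
Both varieties in one sketch (the classes are invented so it compiles standalone):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class TestCancerSketch {

    // Executes code but asserts nothing: it can only fail on an exception,
    // so it pads the suite without describing any behaviour.
    @Test
    public void processOrderRuns() {
        new OrderProcessor().process("order-1");
    }

    // Hundreds of trivial accessor tests like this bury the tests that matter.
    @Test
    public void getterReturnsWhatTheSetterSet() {
        Order order = new Order();
        order.setId("order-1");
        assertEquals("order-1", order.getId());
    }

    // Minimal hypothetical classes so the sketch compiles.
    static class OrderProcessor { void process(String orderId) { /* ... */ } }
    static class Order {
        private String id;
        void setId(String id) { this.id = id; }
        String getId() { return id; }
    }
}
```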
