Java, Software

Java IDEs support Subversion

At last! A genuine point of differentiation between NetBeans and Eclipse. NetBeans has long had SVN support but to be honest it is a little ugly and requires you to download the Collab.Net SVN client. Subversive has been a far superior SVN plugin with a choice of connectors and a decent implementation of a diff editor and team synchronisation page.

Therefore I am absolutely delighted that the Eclipse project has adopted the Subversive code base and is incubating it for release as the official in-built SVN plugin for Eclipse. It’s an excellent decision and puts Eclipse ahead as far as SVN support is concerned. I look forward to the official incorporation of the new plugin.

Part of the reason why Subversive is great is that it really makes it easy to use a variety of SVN provider implementations. My favourite is SVNKit, which has always been reliable, fast and fully featured. It surprises me that NetBeans, which is otherwise very much all about the Java, has chosen to use a native SVN implementation. I would really like them to create an SVNKit-based plugin.

Java, Software, Swing

Getting things done with Thinking Rock

I have used all manner of organisation tools but at the moment the one that is really working for me (and indeed which is telling me I need to write this post) is Thinking Rock, a Java application based on the NetBeans RCP that implements the GTD process.

The basic elements of the application are okay, focussing on quick capture of thoughts and providing enough tools to correctly categorise them. However it is in reviewing and working on your actions that the application really shines. A single screen allows you to review and organise both projects and tasks. The filters for managing tasks are excellent and really make it possible to work with hundreds of thoughts and ideas at the same time. Better yet, as a Java Swing program you get exactly the same functionality and features on OSX and Windows, allowing me to use it on all the various computers I own or use at work. A fully-fledged version 2.0 is promised soon but the development version I have been using has been completely stable and fully featured for my use.

I thoroughly recommend it to anyone else with a cross-platform need.

Java, Software

Setting up a Derby Database

Okay, so I am currently setting up a number of databases on my own V-Server and I thought it would be helpful to make a few notes about the current Java databases and how easy they are to run in server mode. The two biggest databases are Derby and HSQLDB; this post is about Derby. Both databases are really geared towards the embedded space and generally focus their documentation and tasks around this. In their default setups they are both pretty insecure for running on a public network; neither, for example, defaults to using passwords to authenticate logins.

All the databases are getting set up in the same basic way. Each database is going to be run in the userspace of a dedicated UNIX user and that user is meant to be accessed via SSH and SSH keys. All the datafiles will reside within the user’s space. I’m not going to go into that as it is fairly straightforward and is probably better covered by the various UNIX admin guides you can find around the internet.

So, Derby first then; I have played around with Derby before so I hoped this would be the easiest. One thing that you need to know about Derby is that its documentation is good but it is organised into several different documents… by a madman. There is no apparent logic, rhyme or reason to the way information is included in one document or another. When setting up the server I had to jump between the Developer’s Guide, the Tuning Guide, the Server Guide and, more unusually, the Tool Guide. If you have the same experience then don’t worry, you’re probably doing it right.

Having downloaded the Derby package, unpack it at the root of the user’s home directory. I find it helps with upgrades to create a symbolic link ~/derby to the actual installation directory. You can also create a profile environment variable DERBY_HOME that points to $HOME/derby.

With that all set up you can simply run ~/derby/startNetworkServer and you should be in the basic network server business. This is where it gets a bit more complicated. Derby will look for its configuration in the directory where you start it, not in DERBY_HOME. So you need to create a derby.properties file in $HOME to configure the behaviour of the server.

To require authentication you need two properties in the properties file. Remember that the format of this file is a standard Java properties file. The two properties are:

  • derby.connection.requireAuthentication=true
  • derby.authentication.provider=BUILTIN

You should now restart the server. From now on, when you want to interact with the database you should need to provide a username and password. You have to check this, though, to make sure it actually works.

Run the ij tool from the command line and try to create a new database with something like the following:

connect 'jdbc:derby:test;create=true';

You should get a message saying something like ‘Invalid authentication’. If the database is actually created then the server is probably not picking up your properties file, which in turn is likely to be an installation issue.

All being well we can now add a user to the properties file. User information is also a property, so to make the file manageable you probably want to use comments to divide the file into relevant sections and try to keep all the user setup in the same place. The format of the user entry is derby.user.USERNAME=PASSWORD. So, for example:

derby.user.test=changeme

Okay, now restart the server and log in via ij again. This time use a connection URL like the one below.

connect 'jdbc:derby:test;create=true;user=test;password=changeme';

This should now create a new database under $HOME and give you access to it normally. Drop out of ij and confirm that the test directory has been created.

That is pretty much that; from here on in you can connect to your server database instance as you would any other database server.
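For completeness, here is a minimal sketch of connecting to the new server instance from Java code. It assumes the server is running on the default network port (1527), that derbyclient.jar is on the classpath and that the test user created above exists; the class name is just an illustration.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class DerbyClientCheck {
        public static void main(String[] args) throws Exception {
            // Only needed on older JVMs; JDBC 4 drivers register themselves.
            Class.forName("org.apache.derby.jdbc.ClientDriver");

            // The user and password must match the entries in derby.properties.
            String url = "jdbc:derby://localhost:1527/test;user=test;password=changeme";

            Connection conn = DriverManager.getConnection(url);
            try {
                Statement stmt = conn.createStatement();
                ResultSet rs = stmt.executeQuery("VALUES CURRENT_DATE");
                while (rs.next()) {
                    System.out.println("Connected, server date is " + rs.getString(1));
                }
            } finally {
                conn.close();
            }
        }
    }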

Written like this the setup must seem very basic and quick to perform, which is good. However when I was finding my way with it, it took over an hour and most of that time was taken up with looking for things in the documentation. Nothing is logically organised with Derby: the configuration properties, for example, are all described in the Tuning Guide. Except the network control properties, which are described in the Admin’s Guide.

Unlike with code, DRY is not a good idea in documentation unless you are going to very rigorously modularise it. Even then, why not repeat something if it means that all the information relevant to a task is in one place?

Derby has a great feature list but I wonder how many users are ignorant of what it can do because of the poor documentation?

Java, Software

Test Blight

Another Bliki entry related post: I’m not sure I would refer to what is described in the article as Test Cancer; I would say it is Test Blight. As soon as one test is switched off it weakens all the other tests and soon the whole “test tree” is dying off as it begins to constrain and describe the system less and less.

Test Cancer is probably a better term for the situation where your test base keeps growing and growing but in a meaningless way that actually obscures what is important in terms of the system description. For example, JUnit tests that have no assertions but just execute code. Or hundreds of test files that are testing the accessors of your value objects.
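To illustrate what I mean (a made-up example, not taken from the Bliki entry), a JUnit 4 test like this will happily go green while asserting nothing and therefore describing nothing:

    import java.util.ArrayList;
    import java.util.List;

    import org.junit.Test;

    public class ListBehaviourTest {

        // Passes as long as no exception is thrown, but it asserts nothing,
        // so it neither describes nor constrains any behaviour.
        @Test
        public void testAddingAnItem() {
            List<String> items = new ArrayList<String>();
            items.add("something");
        }
    }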

Macbook, Software

Music for the Mac

When I switched to OSX I was surprised to find that the “just works” system had no support for Ogg or FLAC. I also missed the simplicity and power of programs such as CDEx and MusikCube. I have had a look at all kinds of replacements but recently I was delighted to find two programs that fill exactly the same niche on OSX. Max is a ripping program with a more elaborate interface than CDEx; this makes simple ripping a little more involved but it is possible to set up different encoding outputs for each rip, so that once the track has been ripped it can then be encoded to different formats. You could produce a lossless FLAC version and a radio-quality MP3 version for a small flash player. I haven’t been able to produce different encoding settings for the same output format and I am not sure that is supported.

Of course having encoded the music you also need something to play it on and finding a decent player on OSX is hard due to the smothering presence of iTunes. Cog is a player with a lot of features and decent format support that has a clean and simple interface. It is now my player of choice on the Mac and I would highly recommend it.

Java, Software

XWiki

I downloaded XWiki and gave the standalone application a whirl. I’m in the market for a simple wiki; I have used JSP Wiki before and it is fine, but it can be a pain to set up correctly and I would also need to figure out how to secure editing offline first (as this is meant to be a public-read, private-edit wiki). My first impression is simply one of sluggish performance. It has a lot of nice features and it does arrive locked down, but viewing and editing the pages is very lethargic and the Ajax is painfully slow. This could be the server but other servers are proving snappier.

Interestingly it seems to create a database in memory and import/export the data to file on startup/shutdown. I’m not sure what the advantage of that is; it seems counter-intuitive. Looking at options for a standalone database I am unsurprised to find a note saying that Hibernate does not play well with Derby. The lack of support between products built for either Derby or HSQLDB is annoying and seems unnecessary. However from past experience I know not to try and swim against the tide on this. If the developer worked exclusively on one or the other then chances are it is going to work better with the intended platform.

I also spotted a release note about a bug that requires Tomcat Security to be switched off. That doesn’t seem to be something you want to ship with. It’s not exactly the way you play with others.

Java, Software

Eclipse 3.3 Impressions

So I’ve been using Eclipse 3.3 Europa to work on OSX and Windows using Google Code as the SVN provider.  Details about the project later but for now just a few thoughts on the new release:

The Awesomes…

  • Cut and Pasting code also results in the imports being added to the destination class.
  • Refactoring is all done inline in the class editor; there’s no separate dialog unless the refactor runs into trouble
  • Drag and drop in the editor works (hallelujah!)
  • Sensible packaging strategy that seems to contain everything you need for Java development (except SVN; Subversive or Subclipse should definitely be part of the standard bundle)
  • Seems to be perfectly cross-platform (Eclipse 3.2 was a bit weird on Vista and I never really liked it on OSX)
  • Ant 1.7!

Pains in the ass…

  • It’s still a little more than flaky in its threading; auto-completion has this weird glitch where it hangs, tells you it’s waited too long and then, when you acknowledge that dialog, it actually shows you the completions. This one seems confined to Windows.
  • On OSX the JUnit Test Runner seems to start up as its own application. I can’t figure out what set of circumstances makes it happen (initially I thought I had clicked an option or used a weird key combo but that does not seem likely). I also had the Class Loader get in a total mess when running the Unit Tests but the two may be linked.
  • No way to quickly adjust the font size in the editor. I’ve got a big screen and that’s great for the big picture but sometimes I want to enlarge the code I’m working on so I don’t have to use ninja precision with the mouse.

Outside of these there are some grey areas. The spell checker is a good idea and god knows there are enough programmers who need it. However it is not a tremendous value-add and when it goes wrong it is a memory and processor monster. At work on Vista it decided that an XML file was a text file that was entirely invalid, grinding the whole machine to a halt. What I would have liked instead is a bundled Servlet engine, like Tomcat in NetBeans. I’m sure there is some plugin that will provide this but I think it should be a part of the bundle (like SVN support).

Overall though Europa is still a bit of an “is that it?” experience. I recognise that there has been a significant amount of effort and everything looks really polished now but sometimes the new features make me think “why was I putting up with that in the first place?”. Why didn’t drag and drop work in the Java editors? If I was doing Java Enterprise why did I have to round up and install my own plugins? Why wasn’t the Ant plugin for Eclipse 3.2 updated as a minor release?

The Europa developers have done a great job and there is no way I would stick with Eclipse 3.2, but there’s nothing so far that blows me away here.

So, what about the big question that always brings readers to the blog? NetBeans 5.5.1 or Eclipse 3.3? Well 3.3 does make some of NetBeans’ features look distinctly archaic: Ant 1.7 and JUnit 4 are big pluses. However it’s about having the right tool for the right job. I always say that if I have a gnarly old codebase then I trust Eclipse to make sense of it, get it working and get me working on it. However for Swing and web app development I think nothing has really changed. You can set up a web app and framework and be experimenting in twenty minutes with NetBeans, and Matisse is pretty unparalleled at the moment. With 3.3 I feel the Eclipse team have definitely done the right thing and made J2EE work a much more pleasant experience; however that was more about making up ground on NetBeans. From here on I think they need to think about how Eclipse can make other types of development easier. Personally I do think that is about bundling things like Tomcat and JavaDB into the IDE so that I can set up a configuration that includes my test databases, my test webserver and anything else I need to get my app up and running, so that I am working on my code rather than my infrastructure.

Still no criticism can detract from a job well done by the Eclipse team. I just want them to do it again; and more often.

Java, Software, Web Applications

Learning Struts 2

I have been trying to get to grips with Struts 2 recently. Lesson one: very little of your Struts 1 knowledge will carry over. Lesson two: documentation is skimpy and much less coherent than Struts 1’s.

My first experience was an hour and a half of bafflement until I saw that I had put the config file (called struts.xml now, to avoid clashing with the original framework I suppose) under the web application initialisation folder instead of the root of the classpath (i.e. under WEB-INF rather than WEB-INF/classes). Because Struts 2 is all about the defaults I could not see any issue with what I was deploying until I realised that my application would generate the same error message (Action not in namespace) whether my XML config was valid or not.

This is a problem with XML-driven configuration in general but it is also a specific defaulting issue. If the framework is defaulting it should say so rather than just silently defaulting everything. The alternative is to explicitly say what packages are being loaded from config, but I think not finding a config file is more likely to be an error situation; after all, what application is going to be deployed in the default state if you cannot interact with it? Even if the config file is empty you are still expecting there to be one…

Like most web frameworks the learning curve on Struts 2 is initially smooth as you put together Hello World, before hitting a vertical climb when you want to do anything serious. Struts 2 relies a lot on injection via marker interfaces and interceptors in stacks, none of which really maps to the Struts 1 world.

The goal of Struts 2 is to have a POJO-based framework that is more unit-testable and less linked to the Servlet spec. I think it is successful in this and it is what has kept me persevering with the framework. However to do so it has made a lot of things very abstract, and in terms of testing there has been some head-scratching as, again, a lot of Struts 1 testing strategies (which focus on mocking the various objects) do not really apply.

For example, when trying to test whether an Action was correctly setting something in the Session Context I was stumped for a while and ended up using Action Context (something that the documentation on the web described as preferred and deprecated in different sources).

This solution didn’t sit well. After a bit of rethinking I finally got to the point where I decided to implement the SessionAware interface (which provides a Map-parameterised setSession method). This worked when deployed but failed unit testing because I couldn’t figure out how to access the value of the session on the exit of the Action’s execute method.

The answer is easy, trivial almost, but it reflects the different way of thinking the new methodology requires. Since the injection engine will add the Session Context attribute map, what goes into the setSession method is actually the container’s session-bound variables. Therefore when unit testing you just create a suitable map (Struts 2 doesn’t seem Generics-aware but I presume the type is <String, ? extends Object>), pass it to the action via setSession, but then retain a reference and test the content of the Map after the execute method of the Action has been called. It is easy but it is not easy to start thinking like this after Struts 1.
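For the record, here is a minimal sketch of that kind of test. The action class, its field and the session key are hypothetical, and I am assuming JUnit 4 and a Struts 2 release where SessionAware declares setSession(Map<String, Object>) (older releases use a raw Map):

    import static org.junit.Assert.assertEquals;

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.struts2.interceptor.SessionAware;
    import org.junit.Test;

    import com.opensymphony.xwork2.ActionSupport;

    // Hypothetical action that stores the logged-in user name in the session.
    class LoginAction extends ActionSupport implements SessionAware {
        private Map<String, Object> session;
        private String username;

        public void setSession(Map<String, Object> session) {
            this.session = session;
        }

        public void setUsername(String username) {
            this.username = username;
        }

        public String execute() {
            session.put("username", username);
            return SUCCESS;
        }
    }

    public class LoginActionTest {

        @Test
        public void executePutsUsernameIntoSession() throws Exception {
            // The map stands in for the container's session-bound variables;
            // in a real deployment the interceptor stack injects it.
            Map<String, Object> session = new HashMap<String, Object>();

            LoginAction action = new LoginAction();
            action.setSession(session);
            action.setUsername("test");

            assertEquals(ActionSupport.SUCCESS, action.execute());

            // Because we retained the reference we can inspect what the
            // action put into the session after execute() has returned.
            assertEquals("test", session.get("username"));
        }
    }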

Java, Software, Work

Eclipse 3.3 M6

So I had a coding assignment to do the other day (it’s job-seeking time again unfortunately) and I decided to take the opportunity to test drive Eclipse 3.3 M6 and Ant 1.7. Trying out software while also trying to make a good coding impression is probably a dumb idea and I don’t think I’ll be doing it again, but still, there’s never any better time than the present.

NetBeans has overtaken Eclipse as far as I am concerned, simply because Sun has managed to live up to the cross-platform hype with their Swing interface and seems to have lots of different interfaces to projects from other IDEs and various deployment platforms.

On the other hand I wouldn’t try and corral a legacy project into NetBeans. Eclipse is still my fave for that (and while things like the Web Tools project can be a mare to install and get running they do bring Enterprise functionality to an IDE that previously made you pay for it). So then is it worth getting excited about Eclipse Europa?

Sadly the answer seems to be no. There are a couple of nice tweaks to the interface but nothing major. In fact although I’ve used M6 at home a couple of times now I’m hard pressed to bring to mind what exactly is different (the refactoring dialog is a bit better, for example). Stranger still, some aspects actually seem worse. Code intelligence and completion used to be Eclipse’s pièce de résistance, with NetBeans being a distinct second. This time though I had exactly the same issues with Eclipse as I did with NetBeans: wacky suggestions from obscure packages rather than the more obvious choices from, say, java.util or even the project I was working on. The good news is that Eclipse seems to learn quickly from previous choices and after an evening of coding was back to its usual self. Only some of the code templating seems to still be quirky.

Overall though, after a year’s worth of feverish competition between NetBeans and Eclipse, I was disappointed that Eclipse seems to be running out of steam, or perhaps new ideas. Now my first thought was that perhaps there are only so many features that an IDE needs and perhaps we’re getting close to completion on them. That seems disappointingly unambitious though, and if there was nothing else to do I would say that some of the basics could do with a thorough revamping. Eclipse’s text editors are nothing to write home about, with a lack of features that are standard in normal programmers’ editors (drag and drop seems particularly weird although there might be progress in 3.3), and options for the editors are still accessed via the labyrinthine preference menu rather than the intuitive right-clicking of the editor tab.

I’m not sure where Eclipse is going at the moment but I would be disappointed to see it just stand still; that’s good for no-one. Oh, and before NetBeans gets all the love I do have to say that I was completely baffled by some Swing programming I was doing the other day in NetBeans (not via Matisse). The problem turned out to be that I was running my class rather than the project, and one rebuilds the application jar while the other doesn’t. There’s probably some logic in that but I don’t quite see it (probably because it’s tucked away in the run profile somewhere).

Java, Software, Work

Transfer Objects versus Value Objects

What are Transfer Objects and what are Value Objects? This is a question that has plagued me since I started Enterprise Java programming. While Transfer Objects actually have a nice definition in the J2EE Design Patterns, Value Objects are a different beast and various companies, individuals and organisations seem to have different ideas about what they are.

Some are naturally pretty hilarious (as are the implementations of most nebulous ideas in IT); the most ridiculous so far is that a Value Object is a collection of public fields, pretty much like a struct in C. I think that came from a misunderstanding of the blueprint definition, which states: the members in the Transfer Object are defined as public, thus eliminating the need for get and set methods. Of course you still need to make the fields final if the object is to be immutable, and a value object by definition is immutable.

Now though I feel I have enough of a working understanding of the ideas to offer the following definitions.

Firstly, a Value Object must be immutable and serializable, and its content must be publicly accessible. The content of a Value Object can be accessible via public final fields, but to avoid the internal data becoming part of the public interface access should ideally be abstracted via getter methods. A Value Object should always be initialised entirely via its constructor, nesting value objects if necessary to avoid excessively long constructors.

A Value Object can only be changed and persisted by the creation of a new Value Object based on the values obtained from the original.
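A minimal sketch of what that looks like in code (the class and its fields are purely illustrative):

    import java.io.Serializable;

    // Immutable, serializable and fully initialised via its constructor.
    public final class MoneyValue implements Serializable {

        private static final long serialVersionUID = 1L;

        private final String currency;
        private final long amountInMinorUnits;

        public MoneyValue(String currency, long amountInMinorUnits) {
            this.currency = currency;
            this.amountInMinorUnits = amountInMinorUnits;
        }

        // Content is publicly accessible, but through getters rather than
        // exposing the fields themselves as part of the public interface.
        public String getCurrency() {
            return currency;
        }

        public long getAmountInMinorUnits() {
            return amountInMinorUnits;
        }

        // "Changing" the value means creating a new Value Object from the old one.
        public MoneyValue add(long extraMinorUnits) {
            return new MoneyValue(currency, amountInMinorUnits + extraMinorUnits);
        }
    }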

A Transfer Object, while similar in most respects, is mutable. In addition there is a reasonable expectation that the Transfer Object will be persisted if it is returned to the originating layer. So for example if a Session Bean provides a Transfer Object as the return value of one of its methods, it is reasonable to expect the API to also provide a method that accepts an instance of the same Transfer Object. Any changes communicated to the Bean will be persisted and consistent, so that if the original method that obtained the instance is called again it returns the values that have been returned to the layer and not the original object.
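As a sketch of that expectation (the bean-style interface, the class and its fields are hypothetical):

    import java.io.Serializable;

    // Mutable bean used to carry state between layers.
    class CustomerTO implements Serializable {
        private static final long serialVersionUID = 1L;

        private String name;
        private String email;

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }

        public String getEmail() { return email; }
        public void setEmail(String email) { this.email = email; }
    }

    // The layer that hands out the Transfer Object is also expected to
    // accept it back and persist any changes made by the caller.
    interface CustomerService {
        CustomerTO findCustomer(long id);
        void updateCustomer(long id, CustomerTO customer);
    }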

In this respect a Transfer Object is more a statement of expected behaviour on a Java Bean. I went to a talk about EJB3 where the speaker mentioned the detached object anti-pattern and I couldn’t agree more. Value Objects and Transfer Objects are really only useful in situations where the recipient layer is not really going to modify the objects that much. As soon as you allow POJO clones of entities to change value during a user transaction then you tend to get into all kinds of problems. It is exactly this kind of situation in which ORMs and Hibernate clones tend to fall apart. They are great at obtaining Value Lists and dreadful at the kind of heavy lifting that is actually difficult, which is where Entity Beans came a cropper in the first place.
