Programming

Migrating Neo4J Python apps on Heroku

Okay, this is quite specialised, but for the four or five of you who will have the same problem I wanted to save you some time and suffering.

So Neo Technologies have been incubating a plugin for their excellent graph database on Heroku for a while. So far the plugin was only available in beta but now anyone can have it. This is excellent news and I would recommend it as a way of getting started with graph-based web programming. However if you were in the beta program then you now need to migrate from the beta plugin to the new one.

The instructions that went out to the beta program implied that this was simply a case of dumping a backup zip, switching out plugins and then uploading your zip. Well, the good news is that exporting and importing the zips works exactly as advertised, but the bad news is that the two plugins are quite different in terms of the environment they expose. The beta plugin had an extensive list of variables that exposed the various parts of your hosted environment. The new one just exposes the variable NEO4J_URL, which is a url for the server with an embedded username and password.

Now the new variable does actually encode all the information that the original manifest did, but in a very limited way, and your library is going to have to work quite hard to correctly construct the base urls and requests required to access the REST API. I’m not sure which libraries handle this (I presume the Java ones do) but neither of the Python ones do.

I’m going to describe what you need to do for neo4j-rest-client, which is the one I use in my apps, but it will probably be similar for py2neo, which you might prefer if you want to use a lot of Cypher.

So the simplest way to explain the solution is code.

import os
import urlparse

from neo4jrestclient.client import GraphDatabase

def extract_credentials(url):
	# NEO4J_URL embeds the username and password in the url itself,
	# e.g. http://user:password@host:port, so parse them back out
	parsed_url = urlparse.urlparse(url)

	if parsed_url.username and parsed_url.password:
		return (parsed_url.username, parsed_url.password)

	return None

# Fall back to a local server when the Heroku environment variable is absent
GRAPH_URL = os.environ.get('NEO4J_URL', "http://localhost:7474") + "/db/data/"

credentials = extract_credentials(GRAPH_URL)

if credentials:
	db = GraphDatabase(GRAPH_URL, username=credentials[0], password=credentials[1])
else:
	db = GraphDatabase(GRAPH_URL)

So the neo4j-rest-client library supports username and password credentials but doesn’t parse them out of the url itself. Fortunately urlparse makes this pretty trivial. The conditional pieces of the code deal with the situation where we are running locally: essentially if we can’t see the Heroku environment variable we want to fall back to the local case (most Heroku stuff works this way).

A more frustrating issue is the difference between the url of the server and the root resource of the REST API. Naturally these are not the same, but few libraries handle being given the wrong url very gracefully. Since the host url does return successfully you usually get some failure about parsing or unpacking the root document. Submitting a patch to detect whether or not the url ends in db/data would seem to be the logical solution.
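Until such a patch exists you can do the normalisation yourself before handing the url to the library. Something along these lines would do it (the helper name is mine, not part of neo4j-rest-client):

def rest_root(url):
	# Accept either the bare server url or the full REST root and always
	# return the latter, which is what the client library actually needs
	url = url.rstrip('/')
	if not url.endswith('/db/data'):
		url = url + '/db/data'
	return url + '/'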

So with that in place you should have a working wrapper around the REST interface and your app should work again.

Except that there seems to be another issue in the registering and deregistering of the plugin manifests. What I have observed is that heroku config lists the beta environment variables and not the new values. So even if you make the code changes the library still gets 404 errors on the root document (because it is looking for the Neo4J environment that has been deallocated).

So the best way to migrate your app in my view is:

  • go to your current app and download a database backup
  • create a new app with a temporary name (or something like my-app-2)
  • carry out your code changes as described above
  • load your new code into your instance
  • upload your backup into the new instance
  • if the app is working rename your old app (to something like my-app-old)
  • name your new app whatever your old app was called

This seems easier and less hassle than migrating in place. Once the beta plugin is turned off you should be able to delete the old app.

This process has allowed me to migrate my two demo apps successfully (pending bug reports): Crumbly Castle and Flow Demo.

Web Applications

The web is a graph

Last week I gave a talk on how I have been creating web applications that very lightly wrap an underlying graph to provide not just content for a page but also the workflow and state of the user’s current interaction with the application.

As part of the talk I have created two demo apps that are available on Heroku. Crumbly Castle is inspired by Dark Souls/Demon’s Souls and allows you to explore a castle that is populated by the ghosts of everyone who has ever played it. The other offers a questionnaire system that generates characters in the style of the Elder Scrolls or Fallout games. The code for the applications is on Github so you can fork it and deploy it for yourself. Both use the hosted Neo4J addon for Heroku which provides hassle-free hosting but is currently only available to beta program members.

You can obviously use both on your local machine.

Both of the demos are metaphors for more serious kinds of enterprise applications but I think it is often easier to produce prototypes or demos that are based on immediately engaging concepts. It certainly helps to have something that the audience can play with during the talk!

So briefly I just wanted to summarise the points I try to make during the talk and explain why you might want to look at using a graph as your web application store. My major point is that web application development is usually page-centric: when you hit a page the controller tends to examine the whole state of the application to find out why you came to the page. Are you logged in? Were you trying to look at something? Is there a session associated with you?

I posit that we should instead be looking at the journeys between the pages as the interesting things. Given where you are in the journey graph, where can you go next? Essentially I am taking the same logic as a state machine or rule engine uses and instead expressing it as relationships in a graph.

The most common trick the applications use is to assign a fixed url to a user session that identifies a node in the graph. With each transition I change the relationships the node has to other data based on the user’s actions and then simply send a redirect back to the fixed url, which renders a different result based on the current state of the node.

This means that the web application becomes very simple to write and the controller simply has to select the template and the related nodes that are needed to generate links and actions.
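A minimal Flask sketch of that pattern, assuming a session node identified by its id in the url, might look like the following; the two helper functions are hypothetical stand-ins for the real graph queries rather than anything from neo4j-rest-client.

from flask import Flask, redirect, render_template, url_for

app = Flask(__name__)

def current_state(node_id):
	# Stand-in: the real app walks the session node's relationships here
	return {'template': 'room.html', 'node_id': node_id}

def apply_action(node_id, action):
	# Stand-in: the real app deletes and creates relationships on the node
	pass

@app.route('/session/<node_id>')
def show(node_id):
	# The fixed url always renders whatever the node currently points at
	state = current_state(node_id)
	return render_template(state['template'], state=state)

@app.route('/session/<node_id>/action/<action>', methods=['POST'])
def act(node_id, action):
	# Change the graph, then bounce straight back to the fixed url
	apply_action(node_id, action)
	return redirect(url_for('show', node_id=node_id))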

I think it is a really interesting approach and a natural fit for simplifying a lot of session-state heavy apps.

Programming, Python

How does the patch decorator in Mock work?

I tend to use Mock more as a stubbing library than for mocking. The patch decorator is pretty handy for this as it takes care of all the resetting once your stubbed test has run, making it easy to have a test where a dependency returns an empty list, followed by a single-entry list and so on.

However I often forget how exactly it works so I’ve decided to write up my latest remembering of how to do this (via John Hartley’s help and reminders) so I have something to look up next time I forget.

The first thing is that the patch decorator takes a string that represents the fully qualified name of the stub/mock you want to create. In a Django app for example that means you should include the app name at the root. The name also reflects the local name of an imported item. Something I commonly get wrong is binding to the absolute name, say ‘random.choice’, rather than ‘myapp.mymodule.random.choice’. If you are in the situation where your stub works when you call it directly but is never used when you run the code under test, I am pretty sure that naming will be at the root of your problems 95% of the time.

For each string argument you have in patch you also need to define a parameter on the test function; this will contain the actual Mock object and is what you use to stub the value to what you want it to be for the test. Use names that make sense here: stub_db, fake_file_reader, not just mock1, mock2 and so on.
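To make that concrete, here is a sketch of the shape such a test takes; myapp.dice and its roll function are invented for the example, and the point is only where the patch string points and how the extra parameter arrives.

from unittest import TestCase

from mock import patch

# myapp.dice is a made-up module that does `import random` and calls
# random.choice inside its roll() function
from myapp import dice

class RollTest(TestCase):

	# Patch the name as the module under test sees it, not the
	# absolute 'random.choice'
	@patch('myapp.dice.random.choice')
	def test_roll_returns_stubbed_value(self, stub_choice):
		stub_choice.return_value = 6
		self.assertEqual(dice.roll(), 6)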

With these relatively few reminders in place you should now be in a position to stub simply with Mock!

Python, Web Applications

Deploying Python apps to Epio

I recently got my beta access to ep.io, the Python application deployment platform. I had the chance today to have a play around and try out some deployments so I thought I would give my view on the experience. I’ve deployed Python apps to Heroku and Gondor before so those services form my reference points here.

So firstly, there’s a command-line client that you install via pip and you effectively deploy to the platform via a client command, SSH keys and what looks like git on the server side. This is more like Gondor than Heroku (which is intimately linked to git). It means you have your choice of source control and if you want to be a Python purist you never need to step outside of Python for anything you are doing.

Applications consist of essentially one configuration file that states where the WSGI application is and what the requirements file is. Compared to Gondor it is a very simple setup but it did feel that it could be even simpler if it made convention-based assumptions such as the requirements file being called requirements.txt, for example.

Leveraging WSGI and configuration this way gives a very flexible platform and I was able to get both Flask and Bottle to work (the former very quickly because it has documentation, the latter via trial and error that might require its own blog-post). I didn’t have time to try Django but I felt pretty confident that I could get whatever framework I wanted working once I understood the basic setup.
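For reference, the entrypoint the configuration file needs to point at is just a module exposing a WSGI application object; a minimal Flask version (the module and names are my own, not an epio requirement) is about this much code.

from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
	return "Hello from ep.io"

if __name__ == '__main__':
	# Local development server; in production the platform's WSGI server
	# just imports the app object above
	app.run()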

Unlike Heroku, Epio provides a fixed framework for executing the apps. It seems you will be running behind NGINX and Gunicorn. Both are good choices and I certainly like them but if you want to play around with different servers like Tornado or CherryPy you may prefer Heroku’s more open deployment model. I did like the way that you can use the configuration file to have NGINX serve static content directly.

Epio naturally has less of an ecosystem than Heroku but has Solr, Postgres and Redis out of the box. All solid choices and covering off the majority of what I would need. I was certainly grateful that I didn’t have to grapple with remote database administration and could prototype apps with just Redis.

Deployment and logging have some rough edges. Being able to access logs directly from the application page was a win for me; however when I was struggling to define the WSGI entrypoint correctly it seemed as if the application wasn’t really being compiled until the first request came in. I would see an entry confirming a new deployment but then nothing until I hit the app. I think there should be some kind of sanity check of what you have uploaded to see whether it will even run.

Right now epio is providing a Python-based cloud deployment platform with a sensible set of supplementary services and no strong opinion about the source control system you use. It feels like if this had been around at the start of the year it would have blown me away. However now there is more competition and therefore questions of price and ease of use will matter in terms of how compelling it is to use the service.

If you do Python web development I would definitely recommend you sign up for the beta and give it a go yourself as it seems a very solid prototyping platform. If you are not a Ruby and Git fan then you may well love what is on offer here because it is already very convenient, makes few demands on you and gets your web app public in minutes.

Work

Google Apps and App Engine

If you use Google Apps to provide you with email then you should really be thinking about enabling and using Google App Engine as well. Internal applications are much easier to deliver to the business as a whole and having a ready-made platform makes it easier to try out ideas that previously would have been impractical.

The first advantage is that App Engine applications bound into your Google Apps domain let you create something that is easy for an existing user to access (no additional login is required) but also give you peace of mind that you are exposing virtually zero surface area for attack.

The second is that for Python at least it is easy to access a very full featured environment with a minimum of code. Want to send emails, have task queues, access to memcache, serve static content? It is all a YAML configuration line or import away.
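As a rough illustration of quite how little code that is, here is a sketch using the bundled first-generation App Engine APIs; the addresses, cache key and task url are made up for the example.

from google.appengine.api import mail, memcache, taskqueue

def deliver_report(user_email, report):
	# Send the report by email from inside the app
	mail.send_mail(sender="reports@example.com", to=user_email,
		subject="Your report is ready", body=report)

	# Cache the generated report for an hour
	memcache.set(user_email, report, time=3600)

	# Queue a background task to archive it later
	taskqueue.add(url='/tasks/archive', params={'email': user_email})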

I love services like Heroku but a lot of internal apps have relatively light usage and benefit from the batteries included approach rather than combining various plugins. It makes it easy to switch between different approaches and react to different demands.

Programming, Python

Django and JSON stores: a match in heaven

My current project is using CouchDB as its store and Django to implement the web frontend. When you have a JSON store such as CouchDB then Python is a natural complement due to its brilliant parsing of JSON into native data structures and its powerful dictionary data type that makes it really easy to work with maps.

In a previous project using Python and Mongo we used Presentation objects to provide domain logic on top of the raw data maps but this time around I wanted to try and cut out a layer and just work with maps as much as possible (perhaps the influence of the Clojure programming I’ve been doing on the side).

However this still leaves two problems that need addressing. Firstly, Django templates generally handle dictionaries like a dream, allowing you to address them with the standard dot syntax. However both Mongo and Couch use leading underscores to indicate “special” variables and this clashes with the Python convention that a leading underscore indicates a private member of a class. The most immediate time you encounter this is when you want to use the id of a document in a url and the naive doc._id does not work.

The other problem is the issue of legitimate domain logic. In Wazoku we want to use people’s names if they have supplied them and fall back to their email if they haven’t.

The answer to both of these problems (without resorting to an intermediary object) is Django’s filters. The necessary logic can be written in a few lines of Python that simply examine the dictionary and do the necessary transformation to derive the id or the user’s name. This is much lighter than the corresponding Presentation solution.
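As a sketch of what I mean (the filter names are my own), the two filters come out at roughly this, used in templates as {{ doc|doc_id }} and {{ user|display_name }}.

from django import template

register = template.Library()

@register.filter
def doc_id(doc):
	# Work around doc._id being unreachable from the template language
	return doc.get('_id')

@register.filter
def display_name(user):
	# Prefer the supplied name and fall back to the email address
	return user.get('name') or user.get('email')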

Programming, Python, Ruby

Truly open classes

Here’s an interesting observation: I needed to write a little script to automate some number calculating for me. I was wondering whether to do it in Ruby or Python. I’m doing a lot of Python at the moment so I felt I ought to give Ruby a little go. Share some of the love.

However the solution I had in mind really didn’t work with Ruby because while Ruby has open classes it has a comparatively fixed idea of attributes. In Python you can set attributes very freely on any object so I have got into the habit of creating something and then enhancing it by applying a function. Example? Okay.

def make_captain(actor):
	# Python lets you attach a brand new attribute to any instance
	actor.rank = "Captain"
	return actor

class Person:
	pass

captain = make_captain(Person())

So this little trick doesn’t work, or rather is much more difficult to do, in Ruby because Ruby, at its dynamic heart, is a language that believes in object-orientation and that classes should encapsulate rather than being little collections of data. You can use instance_variable_get/set but it lacks the elegance of the Python syntax.

In Ruby it would be easier to define the attributes in the class using the existing metaprogramming constructs and then have a class method to generate the content (effectively encapsulating my script logic).

Now this isn’t a straight “Ruby sux, no Python sux more” post. Between Scala, Clojure and Python I have been doing a lot more in a functional style that treats objects as little more than value carriers. The Ruby vision of a class would give me something with a stronger sense of purpose and encapsulation, something that is hard to benefit from in a script for a particular purpose.

What is going to be interesting this year is trying to identify when the value of a piece of code is in the structure of its data definition (i.e. objects) versus its process (functions). Having had a think about it I should perhaps rewrite my script to use some OO modelling because it may answer similar requirements down the line. However from a strict Lean/Waste point of view I should have gone with the Python solution as Ruby was imposing a restriction on me while providing benefits that I was unlikely to realise.
