How we learned to stop worrying and love documenting … and so could you!
In this post-mortem of our Sphinx-based software documentation we describe how we went a few steps further.
I’d like to tell you how we set up our software documentation in a collaborative web project that we finished a while ago, what we learnt from it and where we’re heading.
The Plan
So, suddenly there was this large (for us) web project: several big websites, microsites, smartphone app integration, “business logic” integration and so on, spanning a couple of years of custom module development. And of course we wanted to do everything right®.
Since the team involved was rather loosely organized and partly working remotely, I came up with the idea of using Sphinx for the software documentation part.
Sphinx - in short - is a documentation engine that takes in plain text files and spits out several formats, among which HTML and PDF were just what we needed.
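For anyone who hasn’t used it: a Sphinx project is essentially a conf.py plus a tree of source files, and each output format is one build command. A minimal sketch (generic paths, not our project layout):

```
# one-time scaffolding: creates conf.py, index.rst, a Makefile etc.
sphinx-quickstart doc

# one builder per output format
sphinx-build -b html  doc doc/_build/html     # browsable HTML
sphinx-build -b latex doc doc/_build/latex    # LaTeX sources, turned into a PDF with make
```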
You write Sphinx documents in ReST, a markup language very similar to Markdown but with many documentation-specific enhancements (and a few shortcomings, too, but we’ll get to that later on).
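As a tiny, made-up example of those enhancements: directives and roles give you admonitions, cross-document references and more, things plain Markdown had no standard answer for at the time:

```
# hypothetical snippet, just to show what ReST/Sphinx adds on top of plain Markdown
cat > doc/payment.rst <<'EOF'
Payment module
==============

.. note::
   Admonitions like this render as styled boxes in both HTML and PDF.

See :doc:`setup` for the installation steps (a cross-file reference).
EOF
```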
I didn’t know all the members of the team beforehand and certainly didn’t want to impose tools upon them that they wouldn’t like, so I introduced my idea during our kick-off workshop. Lucky me: it was immediately accepted by everybody involved!
The Execution
My main “idea” was to not only treat documentation like code, but also place the files where the code is: right next to it in the same git repository, in a dedicated “doc” folder.
This way it would be really easy to
- write documentation while developing
- use the power of git (logs, diffs, stash etc.; see the sketch after this list)
- make everyone responsible for the state of their own documentation
- check out everything, test and build it and collaborate with the others
- coordinate, assemble, edit and put the final touches to the documentation as a whole
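To make the list above concrete, here is a rough sketch of the layout and the day-to-day git routine (the module name and file layout are invented for illustration):

```
# every module carried its own doc/ folder right next to the code
ls payment-module/
#   src/   tests/   doc/

# documentation changes travel with the code they describe
git log --oneline -- payment-module/doc/
git diff HEAD~1 -- payment-module/doc/

# my editing round trip: pull, build locally, fix, push back
git pull && sphinx-build -b html payment-module/doc /tmp/doc-build
```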
So they wrote code, lots of it, wrote some documentation from time to time, committed and pushed when it was due; then I pulled, tested, edited and corrected it locally, and pushed it back, too. And since it was clear that I would never commit executable code, only text, we never had a single merge problem with the documentation. OK, maybe once or twice … but those were just small accidents! ;)
The Pros
In practice, of course, still not everybody was as fond of writing documentation as I was, but that was to be expected. On the other hand I think that for a lot of us it was a relief to do it like this, compared to the prior centralized, word-processor-based way we were all used to, where the editor-in-chief was mainly the person who sucked up and stole everyone’s time and produced fantasy bullshit that had to be revised multiple times until it was usable.
One of the many definite positive effects was Jenkins building all of our documentation regularly on the server. Philipp had also set up an integrated "Documentation" tab right inside the respective Redmine projects, so there was always a consolidated URI on the intranet to check the state of our growing documentation while we were rolling out new features to one of those websites.
We liked that very much, and it was cool to be able to link from and to the Redmine Wiki while the project kept growing and growing.
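The Jenkins part was nothing magical; the build step boils down to a plain shell call of roughly this shape (paths are placeholders, not our intranet layout):

```
# the kind of shell step the Jenkins job ran after each push
cd doc
sphinx-build -b html . _build/html                 # a non-zero exit fails the build on errors
rsync -a _build/html/ /var/www/docs/project-x/     # publish under the intranet URL linked from Redmine
```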
The Cons
A little bit problematic, however, was the output of end-user manuals as PDF files for print. To get good-looking printed manuals with properly scaled images and sensible page breaks you have to do a bit of extra tinkering with ReST, and not in a semantic way, but in what feels rather like markup hacking. But this was mainly my part, and since these *special* files were also in the repo it was easy to adapt the same tricks for similar edge cases in neighbouring modules.
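To give an idea of what that tinkering looks like: it is mostly per-image scale options and hand-placed page breaks for the LaTeX/PDF output, roughly along these lines (file names invented, not from our real manuals):

```
# print-only tweaks appended to a chapter file
cat >> doc/manual/editing.rst <<'EOF'

.. image:: screenshots/editor.png
   :scale: 60
   :alt: The editing form

.. raw:: latex

   \clearpage
EOF

# the printable manual comes out of the LaTeX builder (needs a LaTeX toolchain installed)
make -C doc latexpdf
```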
We also struggled with the Sphinx setup on local computers every once in a while. Different operating systems provided different Python versions, hence different Sphinx versions, module versions etc. This was a bit of a mess.
In hindsight those were the same struggles as the ones you had with local Python, PHP, Ruby and Vagrant/VirtualBox setups back then: it was ugly at times, but the pluses outweighed the occasional cursing most of the time … and it has gotten much better since then.
The Verdict
About two years later we had over a thousand *printed* pages of documentation that every person on the team could really be proud of. Even our client had no complaints! ;-)
Next Steps Taken
When Docker came along in 2014, I immediately thought about how it could solve our earlier local setup problems for documentation with Sphinx. So I put together a basic image, set up .gitignore rules for “_build” and the other output folders, pushed the result to our cool local Docker registry and it just worked! WOW!
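Conceptually it boils down to ignoring the generated output in git and running Sphinx from a container instead of a local installation; a minimal sketch, with an invented image name standing in for the one in our registry:

```
# keep the generated output out of version control ("_build" plus the other output folders)
echo '_build/' >> doc/.gitignore

# build the docs with the containerized Sphinx, run from inside the doc folder
docker run -it --rm -v "$(pwd)":/code -w /code our-registry/sphinx:latest \
    sphinx-build -b html . _build/html
```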
But time had also passed since our initial adventures with Sphinx.
The Markdown community had not stood still either and had created MkDocs in the meantime, so why not try that, too?
MkDocs seems much more suitable for smaller projects and also friendlier for developers of the “GitHub generation” - they know their Markdown well enough, no need to teach them new ReST tricks, right? - Done!
Now they can just
docker run -it -v $(pwd):/code -w /code kfinteractive/mkdocs:latest mkdocs build --clean
in their doc folder (or use a suitable bash alias such as run-mkdocs, sketched below).
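The alias is just the fixed part of that command baked into the shell configuration; a sketch for ~/.bashrc (the --rm flag is my addition here, so finished containers don’t pile up):

```
# ~/.bashrc
alias run-mkdocs='docker run -it --rm -v "$(pwd)":/code -w /code kfinteractive/mkdocs:latest mkdocs'

# then, inside any doc folder:
run-mkdocs build --clean
```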
Next, I even created a Sphinx Docker image for the Raspberry Pi and BeagleBone Black (i.e. ARM architecture). Nice, indeed!
So now we create a Docker documentation image for every project, everybody on every team can have a functional and identical version of Sphinx or MkDocs, and the setup seems stable enough for further customizations and enhancements.
I think that with an environment like this, and with documentation treated and run like code, we have really reached a big milestone in creating high-quality, easily maintainable software.