The Clean Coder

I recently finished the latest book from Robert C. Martin, aka Uncle Bob, called The Clean Coder.
Having finished it, I see many pros and cons to this book. At the beginning I was quite skeptical, but in the end I am glad I read it all the way through.

The book is neither a set of rules that makes you a better software developer, nor does it really provide a code of conduct to follow in your professional life. However, if you have spent some time in this industry, you will have many déjà vu moments while reading it.

Very positive (if you are a frequent reader and interested in the people behind the books) is the fact that you learn a lot about Martin as a person. Each chapter is more or less a short essay about a past project or part of his former work, the experiences he made and the (not so surprising) conclusions he drew. There are quite a few sentences worth remembering, sometimes things you have thought of many times without finding the right words to write them down. You might also find some interesting anecdotes to learn from (or did you know where carriage return and line feed, ‘\r\n’, come from and why they vary between operating systems?).

One very positive aspect is that he points out what a professional (software developer) is, how he should behave and what he can do to be recognized as such. In our industry you are still seen as some kind of nerd: a geek who codes 24 hours a day, does not sleep, consumes a lot of caffeine and plays video games, with a lack of social skills. While some of these things might be true, it is often expected that you work more than the regular hours, solve each and every problem without any failure, and come up with miracles and wonders, performing magic, voodoo and code kung-fu each and every day (often for a very conservative salary). Nothing you would expect from other professionals (lawyers, doctors etc.).

Eventually, he writes about many things that I, and probably you too, experience each and every day in our daily work. In the end it is a nice book you might read over a few evenings. Only the very final chapter about the tools he uses in his work (vi, Emacs, Eclipse etc.) and the frequent mentioning of FitNesse (which is Martin’s project) are quite unnecessary.

I am not fully convinced that the book is a must-have read; however, I have worked in this industry for nearly 13 years in various projects, research and product development, large and small companies, consulting and academia, with different teams in different countries. In the very end it is quite calming to see once more that my problems are the same everywhere, have been the same for a long time and probably will stay the same for a long time in this industry.

Teamwork in Scrum

One aspect that has attracted my attention since my very first Scrum training is how a Team actually handles teamwork.

In a conventional software development team you probably find a bunch of hierarchically organized group members, including managers, an architect, engineers and developers. While the architect figures out what to do with respect to the managers’ needs, the engineers define how to do it, and the developers eventually do whatever the hierarchy above them has defined.

In a Scrum Team, the Team is told what to do directly by the ProductOwner in the form of stories. Consequently, what to do, how to do it and the act of doing it are completely within the responsibility of the Team. In the conventional approach, each level can blame the level above in case of a failure. The developers blame the engineers for insufficient definitions, the engineers blame the architect for a faulty architecture or misleading bullet tracing, and the architect might blame the managers for strict budgeting, too much pressure and so on. Of course each manager might blame his manager in turn, and so on.

So, how do we end up with a team effect? I have seen teams where each developer picks a story from the task board, teams where each developer had his or her own area of stories (server, client, user interface etc.) and teams where the whole team works on one story at a time. Boris Gloger holds the view that the entire Team (of developers) should work on the same story. I am not absolutely sure yet how this works in larger teams (5+ developers) with small stories; in theory, however, if a story is done by the whole team, it succeeds as a team.

If each developer works on a separate story, I recall a passage from the latest book by Robert C. Martin about how whoever breaks the code will own it. What is the logical implication of this? If a developer breaks code, he might get blamed by other team members. That is a quick death to a team. If the team separates into smaller groups which then blame this developer, this might be an even quicker death to the team. If the blaming happens in front of the ProductOwner, it might fundamentally harm the overall Scrum Team, in particular the Team – ProductOwner relationship. Suspiciousness, maybe shorter sprint durations and additional tension within the whole Scrum Team might be the result. Even the very best ScrumMaster will have a hard time undoing this. Considering that a really good team might take six to twelve months to form, blaming should be avoided under all circumstances.

So what is the right attitude in this case? If something goes wrong, a story is not fully implemented, code is broken, previous functionality is lost or anything else goes to the dogs, the whole team should stand up and work together to fix the issue as quickly as possible, without caring who introduced it. Of course the team should perform a root cause analysis of what exactly happened; however, as in retrospectives, the topic should not be what went wrong and whose fault it was. Instead, it should be about how to avoid such an issue in the future. That way it is all about improvements, and that is what Scrum is about in the very end.

Processing a Larger Pair

Yesterday, I had my first poker round in a very long time with two good friends of mine. A couple of years ago, we started playing Texas Hold’em – as computer scientists, of course, just because of the maths and statistics.

During yesterday’s game, we had a great hand: a pocket pair of sevens facing a pocket pair of aces. On the flop and the turn two more sevens came up, providing me four of a kind and eventually the pot. Afterwards we had a nice chat about when and how to play a pocket pair, as with three or fewer players at the table, one would play a pocket pair to the very end most of the time. With yesterday’s hand in mind, however, I was quite interested in the statistics and the probability that my opponent might hold a larger pair than 77.

I thought this would be a nice exercise for today: visualizing it using Processing. As for any visualization, I needed some data, so I picked the corresponding table of probabilities from the Poker probability page on Wikipedia.

The visualization itself is straightforward: drawing the probabilities, the axes and finally the labels. I decided to draw the axes after the probability curves simply to keep them on top of any other element on the canvas.

void draw()
{
    // Draw one probability curve per opponent count
    for (int col = 1; col < colCount; col++) {
        drawProbability(col);
    }
    // Axes and labels come last so they stay on top of the curves
    drawAxis();
    drawLabels();
}
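
The drawProbability function is not shown in the post; here is a minimal sketch of how it might look, assuming the Wikipedia table has been parsed into a probabilities[row][col] array holding percentages, with a colors[] palette and a margin value — all of these names are my assumptions, not taken from the original example:

void drawProbability(int col)
{
    // Hypothetical helper: draws one probability curve.
    // probabilities[row][col], colors[] and margin are assumed names.
    stroke(colors[col]);
    noFill();
    beginShape();
    for (int row = 0; row < rowCount; row++) {
        float x = map(row, 0, rowCount - 1, margin, width - margin);
        float y = map(probabilities[row][col], 0, 100, height - margin, margin);
        vertex(x, y);
    }
    endShape();
}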

Finally, the result looks like the following. Indeed, you can see that within a game of three people there is only a 12% chance that someone holds a larger pair, even if you hold a pocket pair of twos (roughly: each opponent has 12 × 6 = 72 higher-pair combinations out of C(50,2) = 1,225 possible hands, about 5.9%, so two opponents together come close to 12%).

Processing chart for Poker Probabilities holding a Pair

Of course, you could create such a chart using Microsoft Excel, and there is no rocket science in this visualization. However, it was quite a nice exercise to re-activate my Processing skills. The positioning of the labels is done relative to the size of the canvas and the length of the text, and the color for each number of opponents is chosen dynamically, as sketched below. The whole example is available at http://aheil.codeplex.com.
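
As a hedged illustration of that label placement, a drawLabels sketch might look like this (the label text, colors[] and the margins are again assumptions, not code from the original example):

void drawLabels()
{
    textAlign(LEFT, CENTER);
    for (int col = 1; col < colCount; col++) {
        // Hypothetical label text; x depends on the rendered text width,
        // y on the canvas height, so the layout scales with the canvas.
        String label = col + " opponent" + (col > 1 ? "s" : "");
        float x = width - textWidth(label) - 10;
        float y = map(col, 1, colCount - 1, 30, height - 30);
        fill(colors[col]);  // same dynamically chosen color as the curve
        text(label, x, y);
    }
}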

I See Clouds of White

For several years, I have run my own local server as well as a root server hosted online. I run all kinds of services: some I use on a regular basis, some I use from time to time, and others I just set up to learn and experiment. However, most of them were set up at a time when there were not many choices if you wanted to host something online. So I ran my own local repository, Microsoft Team Foundation Server, my own online mail server, my FTP and Web server and many other services.

As maintaining all these services became almost a full-time job, I finally decided to move everything into the Cloud – or at least somewhere online. In some way, this is an experiment, as I try to run everything I need (or let’s say want) somewhere online while staying within the budget of my Web and local servers.

For the local server I calculate $42 a month for maintenance and electricity, while the monthly rent for the Web server is $70. All together that is ($42 + $70) × 12 = $1,344, so nearly $1,350 in average fixed costs a year, not including software licenses and the time invested to maintain and update the servers.

Step by step, I am now moving my services to various online services (free and paid). First of all, I moved my blog to wordpress.com. That was a rather easy decision, as I had already switched to the WordPress software several months ago on my own server. Exporting and importing the content was therefore quite an easy job. Finally, I picked domain mapping for http://www.hack-the-planet.net, which is about $12 a year.

To keep track of stuff to do, I have used Remember the Milk for quite some time now. $25 a year is not that cheap for a simple list of to-dos; however, I get the Web application, a fine app for iPhone and iPad, as well as GMail and Google Calendar gadgets, synced all over the place.

A critical step, however, is my source code repositories. I have maintained all the code I have ever written in CVS and Subversion for ages. Without your own server it is not that easy to grant rights on repositories to friends and colleagues you work with. Here, I decided to move to two different platforms. First of all, I started a new project called aheil code (to keep the corporate identity in sync with aheil blog) at CodePlex. That is the place where I plan to share anything under the Ms-PL license. Closed source, however, I store with Assembla. They provide a limited but free plan for private repositories which should be sufficient for my needs.

Instead of using my own FTP server to exchange files between machines (and people), DropBox appeared to be a great solution. I joined DropBox at a very early beta stage and I am still very happy. (If you don’t have a DropBox account yet, follow http://db.tt/kNZcbyI, which gives you and me 250MB of extra free space.) I use about 4GB of space at the moment for free; however, once you need more, there is always the possibility to switch to a paid account. The client is available for almost any platform, and I use it in various scenarios across many of them, including Web, Windows, Mac and iOS. Before that I used Microsoft Live Mesh; however, the canceled beta, the changed name, two Microsoft services running at the same time (Mesh and SkyDrive) that you were not able to combine, and finally the changed software eventually drove me to DropBox.

In terms of productivity tools, I have completely switched to Google Calendar, as it syncs nicely with iPhone and iPad and even iCal on my Mac. I used (and really liked) Outlook for many years, but the lack of syncing with third-party online services seems to be an epic fail in 2011. I can tell you that you won’t notice this fact within Microsoft (living in a happy Exchange- and SharePoint-equipped world), but out there in the wild World Wide Web, connectivity is all that counts.

Also, I joined Evernote to sync, copy and share notes and documents. Again, client software is available for all major platforms, including iOS, Windows and Mac. I am still trying to figure out how to use Evernote on a daily basis, but at the moment the maintenance costs (manual sorting, organizing etc.) are beyond the benefit.

So far, I have not been able to cover all the services I need. For example, I am still looking for a good (and secure) online backup solution, a way to host my IMAP server and Web server, as well as a possibility for a local storage solution. At least the last point seems to be almost solved by my new router, which allows you to use an external HDD as a network drive. With my previous solution, I was able to connect to my local network from anywhere using OpenVPN in a very convenient way. Here too, I am looking for an alternative where maybe the router might take care of this.

So far, the experiment of moving everything to the “cloud” has been quite a success. I was able to migrate quite a lot of my services and have only spent 3% of my available budget on services so far.

DropBox with TrueCrypt on Lion and Windows

After receiving my new MacBook, I wanted to sync a whole set of files between both systems. For convenience, I decided to use DropBox instead of a thumb drive, and for security reasons, TrueCrypt to encrypt some of my confidential data within the DropBox folders.

Using a TrueCrypt container within DropBox is quite convenient, as I sync my DropBox folders with various machines (e.g. at work). However, I do not want to access these files there, nor do I want an admin to check out my “oooh so secret” files (not saying they would, though).

DropBox with TrueCrypt on Lion and Windows

With my rusty Mac OS kung-fu, I had to install TrueCrypt first. Of course, this failed, and as it was the first app I installed on Lion, this was somewhat demotivating. Before, you have to install a version of MacFUSE. It seems that the official version is not up to date; however, there are rumors that you can use the latest version provided at Tuxera.com.

Once MacFUSE and TrueCrypt are set up and the machine is rebooted, create a TrueCrypt container within DropBox. When creating it on OS X Lion, you might want to choose FAT for the container’s file system so you can mount it on the Windows system as well. However, any change within this container would normally synchronize the container as a whole, which is not very efficient if it is a 256 MB file. It seems that one can turn off the timestamp preservation of the TrueCrypt container to avoid this. That prevents the container from being synced as a whole after files within it are changed, while the changed files themselves are still updated. To turn it off, open TrueCrypt, select Settings / Preferences…, choose the Security tab and uncheck the Preserve modification timestamp of file container checkbox.

TrueCrypt Settings on OS X Lion

Of course, the same has to be done on your Windows system.

TrueCrypt Settings on Windows 7

Once both settings are applied, only the initial sync of the container will take some time. Thereafter, only the files within the container are updated. For me this seems to be quite a good solution to keep my boxes in sync and to keep rubbernecks from digging through my private stuff. The setup is done quite easily; only the hassle with MacFUSE was quite annoying.

Heavy WPF Commands – Part I

Recently, I had a conversation about how much logic a WPF command in .NET might provide and how little implementation it should contain. Basically, this conversation was triggered by different understandings of the semantics of WPF commands. I realized that there is a lot of misunderstanding about how to implement and use commands in WPF, which makes the program flow hard to understand and the code hard to maintain in the long term. Therefore, I decided this is definitely worth some research on WPF commands and their usage. In this first article of a series, I will focus on the usage of WPF commands, their meaning, various implementations and best practices.

The intention of commands is described pretty clearly in the Commanding Overview article on MSDN:

The first purpose is to separate the semantics and the object that invokes a command from the logic that executes the command.

Commands therefore allow you to reuse parts of your application logic for different targets of your application while separating this logic from the actual user experience (UIX). Regardless of whether you implement the logic using the code-behind approach in WPF with RoutedCommands, follow the MVVM pattern using RelayCommands, or build composite applications using DelegateCommands and CompositeCommands, the concept behind commands always stays the same.

All this is made possible by the ICommand interface with its two method declarations, CanExecute and Execute. Basically, this allows your UIX to ask whether a certain action can be performed and, of course, to trigger this action. The one who asks is called the command source. If the command tells it that it cannot execute the action, the command source usually disables itself.
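
For reference, this is the shape of the System.Windows.Input.ICommand interface; besides the two methods it also declares the CanExecuteChanged event, which lets a command notify its command sources that they should ask CanExecute again:

public interface ICommand
{
    // Asked by the command source to decide whether to enable itself
    bool CanExecute(object parameter);

    // Invoked by the command source to trigger the actual action
    void Execute(object parameter);

    // Raised when the result of CanExecute might have changed
    event EventHandler CanExecuteChanged;
}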

The Idea behind WPF Commands

Which logic to perform is not the concern of the command source. The command maps the “do something” call from the command source to the actual logic and invokes this logic on behalf of the command source. Consequently, the logic to be invoked should not be implemented in the command instance itself. A command simply serves as a protocol between application logic and UIX and allows you to separate both from each other.
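
To make this concrete, here is a minimal sketch of such a delegating command, along the lines of the widely used RelayCommand pattern (the class name and the CommandManager wiring follow the common MVVM idiom; this is not code from this post): the command forwards both calls to delegates supplied by the application logic instead of implementing anything itself.

using System;
using System.Windows.Input;

// Minimal delegating command: all actual logic is injected,
// the command itself is just the protocol between UIX and logic.
public class RelayCommand : ICommand
{
    private readonly Action<object> execute;
    private readonly Predicate<object> canExecute;

    public RelayCommand(Action<object> execute, Predicate<object> canExecute)
    {
        if (execute == null) throw new ArgumentNullException("execute");
        this.execute = execute;
        this.canExecute = canExecute;
    }

    // The command source asks; we simply forward to the injected predicate
    public bool CanExecute(object parameter)
    {
        return canExecute == null || canExecute(parameter);
    }

    // The command source triggers; we invoke the injected logic
    public void Execute(object parameter)
    {
        execute(parameter);
    }

    // Let WPF's CommandManager suggest re-evaluation of CanExecute
    public event EventHandler CanExecuteChanged
    {
        add { CommandManager.RequerySuggested += value; }
        remove { CommandManager.RequerySuggested -= value; }
    }
}

A view model would then expose, for example, new RelayCommand(o => Save(), o => CanSave()) and bind a button’s Command property to it; the button disables itself whenever CanSave() returns false.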

In the following articles, I will focus on various best practices for implementing commands before having a look at the different ways to use them in plain WPF as well as in composite applications.