This comic is included to demonstrate a point. No ownership, permission, etc. should be inferred. It was modified from its original form (horizontal to vertical) to fit this space. If you're not already reading Dilbert on a daily basis, you can find it here or even read Scott Adams's blog here.

Over the past four years, I've deployed dotProject for organizations ranging from small one- or two-person software shops to universities supporting hundreds, if not thousands, of users, and every size of organization in between. The common reason for adopting dotProject boils down to this:

They need to track the status and costs of their projects.

There may be additional considerations related to invoicing, tracking availability, and drawing pretty Gantt charts, but even those are simply variations on the same theme. Despite this shared goal, organizations aren't always sure how to accomplish it.

Over drinks a few weeks ago, I met with a relatively new PM who has been using dotProject in his organization for approximately three months. His teams have had the opportunity to figure out its strengths and weaknesses, have been logging their time (mostly) diligently, and have even customized reporting to pull data according to their own criteria. In my book, that sounded like a growing success, so I didn't understand his frustration.

It turned out that most of his teams were consistently taking 50-100% longer than the estimated time. It wasn't just a single slow developer; it was the whole team. And it wasn't just the loosely managed projects, but also the closely tracked core infrastructure projects. The only exceptions were a handful of minor projects that came in within +/-10%.

The next day, I came by for lunch and met with some of his team members. After comparing a few of the projects, we found the answer. It wasn't a problem with the tool or the teams: all of the "close" projects were managed by a single PM who had recently been promoted from development team leader, while the others used estimates provided by non-developers (often lifted from an RFP response). The answer was so simple that no one had thought to check it: the estimates were bad.
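
If you want to run the same sanity check on your own projects, a rough sketch of the comparison we did over lunch looks something like this. It assumes you've exported your tasks to a CSV with estimated hours, actual logged hours, and a note of who produced each estimate; the file name and column names below are made up for illustration, not dotProject's actual export format.

```python
import csv
from collections import defaultdict

# Hypothetical CSV export of tasks. Columns are illustrative only:
# project, estimate_source, estimated_hours, actual_hours
totals = defaultdict(lambda: [0.0, 0.0])  # estimate_source -> [estimated, actual]

with open("task_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        source = row["estimate_source"]  # e.g. "developer" or "rfp"
        totals[source][0] += float(row["estimated_hours"])
        totals[source][1] += float(row["actual_hours"])

# Report the overrun for each group of estimates.
for source, (estimated, actual) in totals.items():
    variance = (actual - estimated) / estimated * 100 if estimated else 0.0
    print(f"{source:>12}: estimated {estimated:7.1f}h, "
          f"actual {actual:7.1f}h, variance {variance:+.0f}%")
```

If one bucket comes in around +10% and the other around +80%, the numbers tell you the problem isn't the tool or the team.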

While it seems like a good idea to manage by spreadsheet, bottom line, and so on, it becomes easy to miss key aspects of not just the data but also how the data was collected. Before you blame the team or the tool, check your assumptions, validate the pieces, and know what you're measuring.

Of course, there's also the question of gaming the measurements… but that's a whole other discussion… 😉
