Are you tracking your time the right way? Here’s how to avoid our mistakes.

Written by andrewaskins | Published 2017/12/08

I’ve written before about how bad time estimates almost bankrupted our company, and how we improved them by over 1000%. But I still get a lot of questions about how, exactly, a dev team can track their time better, or when it starts being beneficial.

The problem is that everyone working in this industry is constantly told to track their time. As a result, people do it reflexively — whether they bill hourly or not — but don’t do it in any way that actually yields useful data. The end result? They’re underpaid and burned out, and eventually shutter their doors in favor of getting a job.

For us, the breaking point was two projects in a row where we went over our estimates and couldn’t figure out where in either estimate the overage had happened. We knew we had to change things.

What we used to do

We used to log our time with a detailed description of what we were doing in every entry, filed under very broad headers. The descriptions were specific to the task, but when you’re dealing with a project that has two thousand hours in it, and hundreds upon hundreds of time logs to go with those hours, it’s almost impossible to sort through those descriptions and see where you’re going over.

Here’s exactly what we changed:

More categories and milestones

We’ve always broken estimates down into broad categories, then much shorter milestones. The top-level categories are things like design, backend, iOS app, etc.: chunks of work that typically take 4–8 weeks. Underneath those are milestones, the sub-items within each category. Our new rule of thumb is that no milestone can be larger than 25 hours (and we often break them down into even smaller increments).

For us, 25 hours works out to about a week, and through trial and error we discovered that if a milestone runs longer than a week and we miss the estimate, it’s hard to figure out in the retrospective which specific piece caused the miss. It wasn’t even necessarily scope creep that was throwing off our estimates; most of the time, it’s just easier to be wrong when you’re estimating a huge chunk of work. Keeping milestones to one week lets us easily see which part we over- or underestimated, and makes it less likely we’ll have a domino effect of cascading underestimates.

We also make sure to record the estimates for these milestones and give the breakdown to our clients. Before, we would only give the client the big project number, and maybe the numbers for the categories; those were also the only numbers we would save. That meant the only estimates we had to look back at were 4–8 weeks long, again making it impossible to see which specific things caused an estimate to be inaccurate.
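To make the breakdown concrete, here’s a minimal sketch in TypeScript of what an estimate looks like under the 25-hour rule. The types, names, and numbers are illustrative, not our actual tooling:

```typescript
// A sketch of how an estimate breaks down under our rule of thumb.
// The shape and numbers are illustrative, not our actual tooling.
interface Milestone {
  name: string;
  estimatedHours: number; // rule of thumb: never more than 25
}

interface Category {
  name: string; // e.g. "Design", "Backend", "iOS app"
  milestones: Milestone[];
}

// Guardrail: flag any milestone that breaks the 25-hour rule.
function oversizedMilestones(categories: Category[]): Milestone[] {
  return categories.flatMap((c) =>
    c.milestones.filter((m) => m.estimatedHours > 25)
  );
}

const estimate: Category[] = [
  {
    name: "Backend",
    milestones: [
      { name: "Auth + sessions", estimatedHours: 20 },
      { name: "Payments", estimatedHours: 40 }, // too big: split it further
    ],
  },
];
console.log(oversizedMilestones(estimate)); // [{ name: "Payments", ... }]
```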

Last but not least, all milestones get laid out in Teamweek on a Gantt chart, so we can easily see how they line up and where we’re at.

In our case, each of our team members has a different specialty and tends to work on separate things, so it’s easy to divide up the work and have one person take ownership of a milestone. Past that, it’s up to the individual person — Bill, our backend developer, likes to break his milestones down into smaller, informal milestones of 1–2 days.

All of this means that when we’re done with a project, we can look at how many hours it took us to build a specific feature, and know exactly where those hours went and which part of the feature took the longest.

One area where we’ve had to make a lot of changes is making sure that no one person or part of the process becomes a bottleneck.

For instance, we now write documentation for the API first. We used to build everything and write documentation for it as we went, or write it at the end of the project, but that wound up causing delays and miscommunications between front end and back end work.

If the documentation is done at the beginning, though, the back end and front end both understand how the data will be handed off, and the front end can start mocking things up immediately, then plug in the real back end as it gets built.
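As a hypothetical illustration (the endpoint and field names here are invented, not from a real project), here’s what that contract-first handoff can look like: the front end codes against the documented shape via a mock, then swaps in the real endpoint once it exists.

```typescript
// Hypothetical login contract, written before any implementation exists.
// Both teams code against these types; only the names are illustrative.
interface LoginRequest {
  email: string;
  password: string;
}

interface LoginResponse {
  token: string;     // session token the front end stores
  expiresAt: string; // ISO 8601 timestamp
}

// The front end can build screens against a mock immediately...
async function mockLogin(req: LoginRequest): Promise<LoginResponse> {
  return {
    token: "fake-token",
    expiresAt: new Date(Date.now() + 3600_000).toISOString(),
  };
}

// ...and later swap in the real endpoint without touching calling code.
async function realLogin(req: LoginRequest): Promise<LoginResponse> {
  const res = await fetch("/api/login", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`Login failed: ${res.status}`);
  return res.json() as Promise<LoginResponse>;
}
```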

Changed our tracking style

To match our new big-picture process, we also changed how we track things in Toggl. We have tags in Toggl for the top-level categories like design, front end, back end, iOS, and Android. Then, to track milestones, we put the milestone name in the description field.

All of these changes mean that we can more easily see how much we spent on specific milestones or parts of the project, without sifting through hundreds of individual entries.

Here are some screenshots that show how cluttered our team Toggl used to be:

Note the huge number of tags, some of them overlapping!

Tasks ranging from 18 minutes (too small to be useful data) to 52 hours (which part of bug fixing took so long?)

Here’s an example of how we use Toggl now:

The tag, “Front End Dev,” corresponds to the category. The Project is Albatross MVP. And the description, “Login,” corresponds to the smaller milestone. If we record working on “Login” at two separate times, Toggl will combine them in their reports.
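To make that roll-up behavior concrete, here’s a small sketch of our convention in TypeScript. This isn’t Toggl’s actual data model, just an illustration of how entries sharing a description combine into one milestone total:

```typescript
// A minimal sketch of our tracking convention. The shape is
// illustrative, not Toggl's actual data model.
interface TimeEntry {
  project: string;     // e.g. "Albatross MVP"
  tag: string;         // top-level category, e.g. "Front End Dev"
  description: string; // milestone from the estimate, e.g. "Login"
  hours: number;
}

// Summing entries by description mirrors how the report rolls up
// two separate "Login" sessions into a single line.
function hoursByMilestone(entries: TimeEntry[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const e of entries) {
    totals.set(e.description, (totals.get(e.description) ?? 0) + e.hours);
  }
  return totals;
}

const entries: TimeEntry[] = [
  { project: "Albatross MVP", tag: "Front End Dev", description: "Login", hours: 3.5 },
  { project: "Albatross MVP", tag: "Front End Dev", description: "Login", hours: 2.0 },
];
console.log(hoursByMilestone(entries)); // Map { "Login" => 5.5 }
```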

Created a spreadsheet…then built Albatross

The last thing we needed to do was find a way to easily compare all of these metrics to our original estimates, and see how far over or under we were. We created a Google spreadsheet to easily keep track of that (download your copy here). At the end of every week, we pulled our logs from Toggl and updated the spreadsheet, which gave us a finger on the pulse of the project (and made it easy to course-correct if we were starting to get off track).
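The logic of the spreadsheet itself is simple. Here’s a sketch of the weekly check in code form, with invented milestone names and hours: compare each milestone’s estimate against the hours logged so far and flag anything that’s gone over.

```typescript
// Hypothetical weekly estimate-vs-actual check. Milestone names and
// hours are invented for illustration; in practice the actuals come
// from the Toggl logs, summed per milestone as in the sketch above.
interface MilestoneStatus {
  milestone: string;
  estimatedHours: number; // from the original estimate
  actualHours: number;    // logged so far
}

function varianceReport(rows: MilestoneStatus[]): void {
  for (const r of rows) {
    const diff = r.actualHours - r.estimatedHours;
    const status = diff > 0 ? `OVER by ${diff}h` : `${-diff}h remaining`;
    console.log(
      `${r.milestone}: est ${r.estimatedHours}h, actual ${r.actualHours}h (${status})`
    );
  }
}

varianceReport([
  { milestone: "Login", estimatedHours: 25, actualHours: 28 }, // flag at the weekly check-in
  { milestone: "Profile", estimatedHours: 20, actualHours: 12 },
]);
```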

**The spreadsheet gave us something we’d been sorely missing: a true understanding of our data.** The first time we created it, we updated it at the end of the project, only to discover we’d gone over on several tasks. The difference was, this time we knew exactly where we’d gone over and by how much, instead of just knowing that we were over for the project and having no idea how to make sure that didn’t happen again. With the next project (Case Status), we began updating the spreadsheet weekly. We still went over in a few categories, but at the end of the project we came in two weeks ahead of schedule and under budget in most categories.

It’s also helped us fight scope creep overall. In one case, our client wanted to add a new type of user to the application; looking at the spreadsheet, we knew we’d have to either charge for that addition or cut it, since it wasn’t in the estimate.

The one downside of using the spreadsheet? The time spent updating it. It took about an hour every week to sort through Toggl and update the spreadsheet, which made it easy to skip on particularly busy weeks. And even when we did keep up with it, those four-plus hours a month added up to $500 in lost productivity. That was part of the impetus behind building Albatross: the update happens automatically, and we can see at a glance which items are at risk of going over.

Takeaways:

  • Break projects into a larger number of smaller milestones; aim for each to be 25 hours or less.
  • Write API documentation before writing the API to make the handoff easier.
  • Match your time-tracking entries to the milestones in your initial estimate. ONLY track against those milestones; don’t create new descriptions.
  • Regularly update your estimate with the actual hours you’ve spent on each milestone.

We’re building a tool to help you create more accurate software estimates. Improve your margins and regain control of your time with Albatross.

The MVP is a super-powered version of the time tracking spreadsheet we used to use. As it evolves, we’ll be doing even more to help you leverage your data to create better estimates.

Originally published at getalbatross.com.

