Episode #4: Nothing In, Nothing Out
[The One Where We Realize There Is a Step 0 Before the First Step]
Previously on Tales From a Metrics Journey…
In Episode #3, our intrepid metrics explorer ran into a vicious cycle with top-down metrics. They tried to break out of that vicious cycle - and fell into a "virtuous" cycle of improving metrics without actually using them. Let's settle in for Episode #4 to see what happened!
Nothing In, Nothing Out
This lesser-known version of the “garbage in, garbage out” concept reared its head when we wanted to start measuring something small. The problem was that we didn’t actually have much data to extract metrics from! For example:
- Services weren’t instrumented, which meant we couldn’t get any performance metrics from them (nor did the infrastructure provide instrumentation out of the box).
- When JIRA was used, it wasn’t set up in a way that facilitated gathering metrics. Teams were not using Issue Types, Components, or Labels to distinguish one JIRA issue from another, and tickets did not have sizing. It was all just a blob of JIRA issues.
- JIRA wasn’t always used - some teams didn’t capture their work in JIRA at all. People would drop by someone’s desk and ask for a favour, and they’d just do it. No traceability, no visibility into the work, which in turn made metrics impossible.
- Support issues and questions could arrive through multiple routes (a JIRA ticket, a Slack question, an email, a phone call), so there was no way to aggregate the support work stream in order to see metrics.
Step 0 before Step 1
We realized that before we could extract any metrics, even bad ones, we had to start by categorizing the work and making the work visible. For example:
- All work that took more than a few minutes had to have a JIRA issue (excluding meetings, breaks, etc.; teams created their own agreements on when to use JIRA and when it wasn’t needed).
- We started to use JIRA more effectively - projects got Components and Labels, and teams started using Issue Types (bug, feature, task) to categorize the kind of work they were doing.
- Teams started to use some form of estimation and sizing for issues (t-shirt size, Story Points, right-sizing, etc.).
- Services started to get basic instrumentation (a minimal sketch follows this list).
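Basic instrumentation doesn’t have to mean much: a request counter and a latency histogram per service already yield performance metrics. Here’s a minimal sketch using the Prometheus Python client; the metric names, endpoint label, and port are illustrative assumptions, not details from our actual setup.

```python
# Minimal service instrumentation sketch using prometheus_client.
# Metric names, the endpoint label, and the port are assumptions.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter(
    "app_requests_total", "Total requests handled", ["endpoint"]
)
LATENCY = Histogram(
    "app_request_latency_seconds", "Request latency in seconds", ["endpoint"]
)

def handle_request(endpoint: str) -> None:
    """Record count and latency while doing the (simulated) work."""
    with LATENCY.labels(endpoint=endpoint).time():
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    REQUESTS.labels(endpoint=endpoint).inc()

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for a scraper to collect
    while True:
        handle_request("/checkout")
```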
Resistance and Effort to Change
Some of these changes, like instrumenting our services, took real implementation time. For these we had to carve out time within our existing commitments - and convince stakeholders that the effort would pay off.
Many of the others (making work visible and adjusting how we used JIRA) didn’t need much time investment, but did need discipline from the team and its leadership to get buy-in and enforce the new processes. It wasn’t easy, and we tried various things:
- Automation: JIRA’s built-in automation can set Components, Issue Types, and other fields based on rules, reducing the manual discipline required.
- Explaining the why: periodically reviewing why we needed to make work visible and why it was important to categorize it.
- Audit and enforcement: team leadership was held accountable for work traceability and organization. Periodic metric review and discussion was important here (a sketch of what such an audit check could look like follows this list).
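To make the audit idea concrete, here’s a hedged sketch of a categorization check against JIRA’s REST search API. The base URL, project key, and credentials are placeholders, and the real audit rules would depend on each team’s agreements.

```python
# Sketch of an audit: find JIRA issues missing categorization via JQL.
# The URL, project key "ABC", and credentials are placeholders.
import requests

JIRA_URL = "https://your-company.atlassian.net"  # placeholder
AUTH = ("audit-bot@example.com", "api-token")    # placeholder credentials

# JQL: issues in the project with no Component and no Labels set.
JQL = "project = ABC AND component is EMPTY AND labels is EMPTY"

def uncategorized_issues() -> list[str]:
    """Return keys of issues that would fail the categorization audit."""
    resp = requests.get(
        f"{JIRA_URL}/rest/api/2/search",
        params={"jql": JQL, "fields": "summary", "maxResults": 50},
        auth=AUTH,
    )
    resp.raise_for_status()
    return [issue["key"] for issue in resp.json()["issues"]]

if __name__ == "__main__":
    for key in uncategorized_issues():
        print(f"{key} is missing a Component and Labels - please categorize")
```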
Over time, we finally gathered enough data to extract some metrics!
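As a taste of what that unlocked: once Issue Types and resolution data were reliable, even a simple throughput count became possible. This sketch reuses the placeholder setup from the audit example above; the query and fields are illustrative, not our actual metrics.

```python
# Sketch of a first metric: weekly throughput per Issue Type, i.e. how
# many issues of each type were resolved in the last 7 days.
# URL, project key "ABC", and credentials are placeholders.
import requests

JIRA_URL = "https://your-company.atlassian.net"  # placeholder
AUTH = ("metrics-bot@example.com", "api-token")  # placeholder credentials

def resolved_last_week(issue_type: str) -> int:
    """Count issues of one type resolved in the last 7 days."""
    jql = f'project = ABC AND issuetype = "{issue_type}" AND resolved >= -7d'
    resp = requests.get(
        f"{JIRA_URL}/rest/api/2/search",
        params={"jql": jql, "maxResults": 0},  # we only need the total
        auth=AUTH,
    )
    resp.raise_for_status()
    return resp.json()["total"]

if __name__ == "__main__":
    for issue_type in ("Bug", "Story", "Task"):
        print(f"{issue_type}: {resolved_last_week(issue_type)} resolved this week")
```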
On the Next Episode
Tune in to the next episode, where we find that our initial metrics were typically bad - but that these bad metrics were still valuable when used properly.
Lessons Learned
These are the lessons we learned from going back to Step 0 and breaking out of our "virtuous" cycle:
- “Make work visible” is the foundation on which measurement and metrics must be built. Without this, there is no way to move forward.
- It takes discipline to categorize work, and it is not immediately apparent why it’s useful to do so.
- Periodic review of metrics is an important lever to push this forward. If a team doesn’t have metrics, the periodic review makes this abundantly clear and can create momentum.
[ Guest post from Mojtaba Hosseini ]