#1,614 in Business & money books

Reddit mentions of Actionable Agile Metrics For Predictability: An Introduction

Sentiment score: 2
Reddit mentions: 2

We found 2 Reddit mentions of Actionable Agile Metrics For Predictability: An Introduction. Here are the top ones.

Actionable Agile Metrics For Predictability: An Introduction
Specs:
Release date: August 2015


Found 2 comments on Actionable Agile Metrics For Predictability: An Introduction:

u/Dayleryn · 2 points in r/agile

Your approach is quite interesting and brings a lot of questions and impressions, so kudos for sharing your thoughts!

> One question I have with this approach is at what point is something considered a complete "feature" and should thus produce a velocity point each iteration?

Simply put, when the feature's Definition of Done is satisfied. If you have trouble defining a DoD for certain feature types, then those types might need to be clarified and/or downsized.

> What is a better name for this metric than velocity?

"Added value" seems right, considering you subtract from that metric whenever value-decreasing elements such as bugs and errors are discovered.

Now, my first concern with your approach is, like I started this reply with, that it brings a lot of questions... too many for me to consider this approach viable in the longer term.

First, such a composite metric fuses so many different variables that correlations will become undetectable... unless you measure these variables separately, which will make the composite metric irrelevant. For instance, say your "added value" suffered from an unexpected drop for a few days, one month ago. Based on your musings, this could be correlated with an acute lack of QA, a few developers on vacation, an exceptional hardware error causing hundreds of errors to be logged... anything, really. Therefore, if your metric doesn't help you investigate root causes, how will it be useful to your team?
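To make that first concern concrete, here is a toy sketch (the field names and numbers are mine, purely illustrative): two very different days collapse to the same composite score, so the composite alone cannot tell you which underlying variable moved.

```python
from dataclasses import dataclass

# Hypothetical daily snapshot; fields are illustrative, not from the thread.
@dataclass
class DailySnapshot:
    features_done: int
    bugs_open: int
    errors_logged: int

def added_value(s: DailySnapshot) -> int:
    # Composite "added value": features minus flat penalties for bugs/errors.
    return s.features_done - s.bugs_open - s.errors_logged

# A quiet day and a day with a flood of logged errors score identically:
quiet_day = DailySnapshot(features_done=5, bugs_open=1, errors_logged=1)
bad_day = DailySnapshot(features_done=8, bugs_open=0, errors_logged=5)

print(added_value(quiet_day))  # 3
print(added_value(bad_day))    # 3
```

Unless the component variables are also recorded separately, a drop in the composite cannot be traced back to a root cause.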

Second, your rules related to bugs and errors make the wrongful assumption that all bugs and errors have the same impact, share the same priority and diminish your software's value equally. If your metric came to be used by management for evaluation purposes, it wouldn't be fair for your team to be penalized the same for a production-crashing bug as for a typo in an administration console.
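One way to see the difference is a severity-weighted penalty (a sketch of my own; the weight values are made up): a flat -1-per-bug rule scores a crash and a typo the same, while a weighted rule separates them.

```python
# Illustrative severity weights; the specific values are assumptions.
SEVERITY_WEIGHT = {
    "critical": 8,   # e.g. a production-crashing bug
    "major": 3,
    "trivial": 1,    # e.g. a typo in an administration console
}

def bug_penalty(severities: list[str]) -> int:
    # Sum the weight of each open bug instead of counting them equally.
    return sum(SEVERITY_WEIGHT[sev] for sev in severities)

bugs = ["critical", "trivial"]
flat = len(bugs)             # flat rule penalizes 2 points
weighted = bug_penalty(bugs)  # weighted rule penalizes 9 points
```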

Finally, from the way you describe the metric's heuristics and their evolution, I'm afraid you'll be tempted to spend significant effort tweaking these heuristics further... effort that could instead be invested in investigating bottlenecks, researching testing tools, refining your features' Definitions of Ready and Done, etc. Say your added value is impacted far more than you expected by bugs and errors: will you consider adjusting the penalty to -1 added value point per 5 hours of non-resolution rather than per 3? Or perhaps only certain types of bugs and errors should cause these losses? What about higher-value features that could provide more than 1 added value point? Worst of all: as soon as you make one of these tweaks, your metric's past values will no longer be comparable to the revised values, and your historical data will become near useless.
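One mitigation for that last problem (my sketch, not from the thread): store the raw events rather than the computed scores, so the entire history can be recomputed under any revised rule and stays comparable. The sample data below is made up.

```python
# Hours each bug went unresolved, grouped per sprint (illustrative data).
raw_bug_hours = [[3, 9], [6], []]

def score(bug_hours: list[int], hours_per_point: int) -> int:
    # -1 added value point per `hours_per_point` hours of non-resolution.
    return -sum(h // hours_per_point for h in bug_hours)

# Recomputing the whole history under either rule keeps it self-consistent:
old = [score(sprint, 3) for sprint in raw_bug_hours]  # rule: -1 per 3 hours
new = [score(sprint, 5) for sprint in raw_bug_hours]  # rule: -1 per 5 hours
print(old)  # [-4, -2, 0]
print(new)  # [-1, -1, 0]
```

If only the scores had been stored, the pre-tweak and post-tweak values would sit in the same series while meaning different things.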

If you are interested in alternative approaches differing from Story Points and traditional hourly estimates, I have learned much from The Scrumban Revolution and Actionable Agile Metrics for Predictability; applying their knowledge to my previously-Scrum team brought us many benefits and improved productivity. Hope these can help you as much in return!

u/singhpr · 1 point in r/agile

This is a great read on useful metrics - https://www.amazon.com/dp/B013ZQ5TUQ