Wednesday, July 21, 2010

Information about information

I liked this post, Information about information, from a few days ago by Seth Godin. One of the challenges my team is working on right now is figuring out how to take the huge amounts of data that we generate and turn it into actionable information. We're making progress, but there is still a huge amount of data that doesn't get reviewed in a timely manner. Sometimes it's weeks before we realize that the automated tests have actually found a bug. If we don't look at the correct time scale it appears to be noise in the system, but if we graph a couple of months of results onto a single page, sometimes a pattern leaps out.
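As a rough illustration of the time-scale point (the data here is invented, not from our actual tooling), bucketing daily pass/fail results into weekly counts is one way to make a slow drip of failures visible that looks like noise day to day:

```python
from collections import Counter
from datetime import date, timedelta

def weekly_failure_counts(results):
    """Roll daily automated-test results up into ISO-week buckets.

    results: iterable of (date, passed) tuples, one per test run.
    Returns {(iso_year, iso_week): failure_count} so a months-long
    trend can be scanned (or graphed) on a single page.
    """
    buckets = Counter()
    for day, passed in results:
        iso_year, iso_week, _ = day.isocalendar()
        if not passed:
            buckets[(iso_year, iso_week)] += 1
    return dict(buckets)

# Hypothetical data: one failure every third day reads as random
# noise in a daily report, but the weekly totals show a steady drip.
start = date(2010, 5, 3)
runs = [(start + timedelta(days=i), i % 3 != 0) for i in range(28)]
print(weekly_failure_counts(runs))
```

The same idea scales up: once runs are keyed by week, a one-line-per-week report covers months on a single page, which is where patterns tend to leap out.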

Friday, July 16, 2010

Standards, measurement, autonomy, improvement

I was reading a post today on Gemba Panta Rei: "Igor Stravinsky Agrees: Standards Enable Creativity."

My favorite line was actually a quote from Taiichi Ohno, the "father" of the Toyota Production System.

"Where there are not standards, there can be no improvement."

It got me thinking about the resistance I've run into over the years in various attempts to standardize and measure the work people do in developing and testing software. I've assumed that a lot of the resistance was based on fear. People are afraid if they are measured they are going to fall short. Which leads to lots of conversations about how hard it is to measure what developers and testers do.

The most successful measurement/improvement efforts that I've been part of have happened in places with a high-trust environment. Senior management was able to state that they wanted standardized definitions of work, and wanted to measure things so that they could make the case for more resources. Because they had built up enough trust, they were believed. Or at least believed enough that no one sabotaged the measurements.

I wonder how much of the resistance was also about autonomy? People value being able to work in a particular way--usually the way that they have been working, which feels comfortable and familiar. The best success I've seen with this was at a start-up, so there were not already established ways of doing things that people were invested in. At other places I've seen HUGE resistance from people who have been working in a particular way for a long time and don't want to move to something different.

But it's really hard to improve if you keep doing the same things the same way. The claim "we just need more resources" doesn't fall on receptive ears at the exec level if you aren't making improvements.

Sunday, June 27, 2010

What is the REAL problem?

I was reading a post by Rajesh Setty recently entitled "The problem is never the problem." It brought to mind the hardest recent interpersonal/team challenge I've had to deal with at work. The ostensible issue was around the security implementation in a new tool.

The real issue, as best I could determine, was that some of the people involved don't trust each other very much. We had to address the trust issue before we were able to deal with the technical issues.

Friday, June 18, 2010

Do you know who your real customer is?

I was reading Seth Godin today and found a post that spoke to my experiences running an agile team over the past few years.

His statement was, "If you are working hard to please the wrong people, you'll fail."

My team builds tools for internal use. I've struggled to figure out which customer I should be pleasing. Should I be trying to please the testers who use our tools? Should I be trying to please the test managers? Should I be trying to please the dev managers? Should I be trying to please my boss (hard to go wrong there)?

At various points, I've emphasized one customer over another. And it has changed over time. It's a source of tension for me, and to a lesser extent for my team.

Which customer are you trying to please? Is it the same all the time, or does it change with the wind?

Friday, June 11, 2010

Decisions have costs; change is expensive

I've read some fascinating things lately about the energy cost (as measured by brain imaging) of making decisions, resisting temptation, and why that makes it hard for individuals to change their behavior.

I have vivid recollections of being exhausted by all the decisions I had to make setting up a new household. Some of the decisions had significant consequences--which house was I going to rent? Others were utterly trivial--which of the 16 different kinds of kitchen trash cans that they sell at Home Depot was I going to buy? And the funny thing was that making a bunch of inconsequential decisions was far more draining than making a couple of important ones. Apparently now there is science to back up my personal beliefs about that.

I'm not quite sure how to apply all this to Agile/Lean software development. I think the best match might be in the arena of Real Options. Practitioners consciously choose to delay decisions. Hopefully, in addition to better information, they will have fully re-charged brains when they have to make the final choices.

Thursday, May 20, 2010

Saturated Testing

Nice post from Rikard Edgren. If additional testing in an area doesn't give any important new information, is it worth the effort? I have found situations in which additional testing would not change the outcome, but not very often.

Perhaps a more interesting question from an agile perspective is "will adding additional tests here increase or decrease the total utility of my automated testing suite?"

If adding a set of tests to your automated unit tests makes the suite too long (long enough that developers won't run it as often?), then maybe it would decrease the total utility of your test suite.
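A toy model makes the tradeoff concrete. All the numbers here are invented purely for illustration: the point is only that total utility depends on how often the suite actually gets run, not just on how much each run tells you.

```python
def suite_utility(runtime_minutes, info_per_run):
    """Toy model: utility = expected runs per day * information per run.

    Assumes (purely for illustration) that developers run the suite
    less often as it gets slower: roughly 60/runtime runs per day,
    capped at 20 for very fast suites.
    """
    runs_per_day = min(20, 60 / runtime_minutes)
    return runs_per_day * info_per_run

# Adding tests raised the information per run from 10 to 12,
# but the longer runtime cut the runs per day from 12 to 2.
fast = suite_utility(runtime_minutes=5, info_per_run=10)
slow = suite_utility(runtime_minutes=30, info_per_run=12)
print(fast, slow)  # the bigger suite delivers less total utility
```

Under these made-up numbers, the "more thorough" suite is a net loss, which matches the intuition above: a suite that developers stop running frequently stops catching problems frequently.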

Good pain

I was reading Rajesh Setty's blog this morning and noticed a Tweet he had posted that said "The pain of discipline is now, the pain of regret is later." There was a related tweet from someone else that said the pain of regret is always greater than the pain of discipline.

One of the things that I noticed in my first year or so as a Scrum product owner was that the moment in the planning session when I dragged items from the backlog into the new sprint was, at least for me, simultaneously the best and worst part of the process.

Inevitably, I always wanted to try and squeeze one or two more items into the sprint than would fit. I had to acknowledge that we couldn't do all the work that I had hoped we would be able to (worst part). And having acknowledged that reality, then I would work with the team and stakeholders to make hard decisions about what we were NOT going to work on. We would end up with an amount of work that we were comfortable we could finish (best part).

In this moment, I was experiencing the pain of discipline.

Saturday, May 1, 2010

Infinite possibilities-unlimited freedom, or unsolvable problem?

I read an interesting post by Jon Miller on Gemba Panta Rei today. It asked the question "Is an infinite number of possible solutions a good thing or a bad thing?" Ultimately he concluded that it is a bad thing. A problem with an infinite number of solutions is indistinguishable from an undefined problem. The act of defining the problem creates constraints on the range of solutions.

I've seen this in action many times. I'm a product owner for a team that uses Scrum. I present the team with a user story. They respond "Pete, this is way too vague, we can't estimate how much work it is." So we talk about different ways of breaking the problem up into smaller chunks. Eventually we have several smaller, more constrained user stories that we can estimate.

The possibilities are no longer infinite, but now we see ways to solve the problem.

Wednesday, April 28, 2010

Do you reveal impediments?

I just read an interesting post from Lyssa Adkins on CollabNet's blog. She is writing about the challenges upper management faces in moving to Scrum/Agile practices.

Best quote (for me anyway): "coach the teams to be relentless impediment revealers".

I've found getting into the mindset of revealing impediments to be quite challenging.


What decisions are you going to make with that information?

One of my beliefs about building reporting tools is that the information included in the report should be driven by the questions that you want to be able to answer when using the tool. If you don't include all the information needed to answer those questions, then the tool won't be useful. If you add unneeded information you are A) doing extra work when you build it and B) cluttering up the page and distracting people from what really matters.

I was talking with a colleague this week about our integrated build validation test (BVT) information page. We have multiple BVT "tools", each of which potentially gets run against multiple platforms for each build. Currently we report pass/fail at the tool level. If any platform fails a given tool, that tool reports as "fail". If all the platforms for a given tool pass then that tool reports a pass. My colleague doesn't like this. He thinks we should report all results. He thinks it's important to know that the slam BVT (one of the tools) passed on five platforms but failed on a sixth. In general, I think he's correct. We do want to know that it's working on some platforms but not others. But on this particular web page, that information doesn't seem relevant and I've never quite understood why he feels so strongly about it.

As we were digging into this it seemed that his core concern was that we were "making decisions" with incomplete information.

Certainly that sounds like a bad idea. But is it really? Not all information is relevant to a decision. In court, the most important issue to be decided is sometimes whether or not a particular piece of information can be put before a jury. If the judge rules that it is irrelevant to the question the jury is being asked to answer, then you are not allowed to present it. Even though that leads to decisions being made with incomplete information.

In this instance, what decisions are we making? The decisions we might make based on the BVT info page are "do we move forward with this build now, do we not move forward with this build, or do we need to investigate and get more information before we decide?"

The BVT info page will show that either A) all of the BVT tests have passed against this build, or B) at least one test has failed. If all of the tests have passed, we pretty much always decide to move forward with the build. If most or all of the BVT tools have failed, we usually decide not to move forward with the build. Finally, if one or two of the BVT tools report failure we conclude that we need further investigation before we make the decision.

So how would knowing that for a given tool some platforms passed and other platforms didn't change our decision? As far as I can tell it wouldn't. If everything passes, this problem doesn't apply. If most of the tools have failed on at least one platform, that strongly suggests the build isn't solid enough to go forward. And if a particular tool is failing, we are going to want to investigate. Even if the tool is failing on some platforms but not others, we are still going to want to investigate it.

For some decisions--like whether to send a failure to a developer or a tester, or deciding which developer or tester should investigate the issue--knowing that the tool failed on some platforms and passed on others is significant. But for the decision that is actually being made from this particular report, I don't think it is. So including that information would just clutter up the report without adding value.
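The rollup and decision rules described above can be sketched roughly like this. The platform names and the "most tools" threshold are illustrative assumptions, not the page's actual logic:

```python
def tool_status(platform_results):
    """Roll per-platform BVT results up to a single tool-level status.
    A tool passes only if it passed on every platform it ran on."""
    return all(platform_results.values())

def build_decision(bvt_results):
    """Map tool-level pass/fail onto the three decisions the page supports.

    bvt_results: {tool_name: {platform: passed}}
    Returns "go", "no-go", or "investigate".
    """
    statuses = [tool_status(platforms) for platforms in bvt_results.values()]
    failures = statuses.count(False)
    if failures == 0:
        return "go"            # every tool passed on every platform
    if failures > len(statuses) // 2:
        return "no-go"         # most tools failed: build isn't solid
    return "investigate"       # one or two failures: dig deeper first

# Hypothetical results: the per-platform detail is collapsed away,
# yet the decision comes out the same as it would with full detail.
results = {
    "slam": {"linux": True, "windows": True, "solaris": False},
    "smoke": {"linux": True, "windows": True, "solaris": True},
    "config": {"linux": True, "windows": True, "solaris": True},
}
print(build_decision(results))  # → "investigate": one failing tool of three
```

Note that `build_decision` never looks inside the per-platform dict beyond the rollup, which is the point: for this page's three possible decisions, the tool-level status carries all the information the decision actually uses.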

Communicating with your non-Agile co-workers

I read a nice post by Jurgen Appelo today on communication.

The money quote: "Real communication includes making sure that the meaning that is assigned to a message is the same on both sides."

This is something that we've struggled with quite a bit. My team, which uses Agile methods and terminology, has stumbled many times when talking to our customer--the Test Engineering team.

I think the biggest problem has been with the definition of "done." The software that we write is used internally. As is common with internal tools, there is a desire to say something is done as soon as it hits the bare minimum of functionality/usability and move on to solving the next problem.

We don't do it that way. We expect our code to be tested, robust, and maintainable before we call it done. Several times over the last 3 years we've put up prototypes of tools that did have the bare minimum of functionality and usability. Then people were surprised when we said there was more work to do and that it wasn't finished yet.

Monday, April 26, 2010

Deniability

I read a short but provocative post by Seth Godin on deniability. He asked how much of your time on a project is spent on CYA, excuse preparation, and generally making sure that you won't be blamed if the project doesn't ship.

I've worked places where the cost of being blamable for a project failing was very high. In those organizations a lot of energy was spent on finger-pointing and blame avoidance. How many more projects would have succeeded if all that energy had gone into the project instead of the politics?

Can an organization that is blame oriented successfully implement agile development practices? It seems unlikely. In a blame oriented organization, pointing out impediments is risky. And if no one points out impediments, they don't get removed.

Sunday, April 25, 2010

Standards in software development

I was reading a piece by Jurgen Appelo in which he argues that standards (things like naming conventions, file structures, error log standards, etc.) will emerge from the bottom up when "goals and metrics make it painfully clear for employees that it is more optimal for them to change."

I'm intrigued by this. I run a small (6 person) development team that worked out its own coding standards a couple of years ago. I believe that the primary driver was switching to common code ownership. It's a lot easier to maintain and extend someone else's code if you are sticking to a set of mutual conventions.

Currently there are two separate efforts (that I know of) to develop coding standards at my workplace. The test engineering team, which writes test tools and automation in Python, is working on coding guidelines for use in writing automated tests. As far as I can tell this is coming primarily from the individual contributors with some nudges from management. The development organization, which writes the product code in C and C++ (and a smattering of other languages for bits and pieces), is trying to integrate its various coding standards into a single document. It's a non-trivial task: there are over 100 coders in 6 offices, spread across 3 continents, working on 7 different products. This effort seems to be driven from the top down.

It will be interesting to see how the two efforts progress.

After reading Jurgen's post, I was left with one question. What kinds of goals and metrics can you put into place to make it more likely that team members will see that standardization will make their work easier?

Monday, April 19, 2010

Interesting post from James Shore on hiring and certifications

I've long had reservations about certifications in the tech industry.

As far as I'm concerned the only thing a certification really tells me about someone is that they are trying to make their resume stand out.

James Shore has a post in which he discusses the reason he hears from other people for using certification in the hiring process: it provides a mechanism for filtering candidates.

Which is true. But do I really want to filter on this, which I view primarily as self-promotion? Probably not.

James describes the process he has used in the past for filtering candidates, the first part of which is a screening questionnaire with a bunch of open-ended questions. At F5, we recently interviewed a bunch of college students for this summer's intern slots. The initial screening included an on-line questionnaire. Most of the questions were multiple-choice, but there was one open-ended question that we put in front of every single technical intern candidate. It was very telling as to who was knowledgeable and who wasn't.

Which leaves me wondering what would happen if I tried the rest of James' hiring process. I should give it a try the next time I have an opening to fill.

Thursday, March 11, 2010

Is everyone on your Agile team happy?

I was reading a blog post on the New York Times web site today called "The Secret to Having Happy Employees," by Jay Goltz. He describes his two-part method for having happy employees. Part one: treat people well. Part two: fire the people who are still unhappy even when you treat them well.

As a manager for around 12 years, I've fired people on occasion. It's always unpleasant. But keeping an employee who is unhappy is pretty unpleasant too, and it can undermine everything else you are doing.

Over the past couple of years, I've spent time talking with people who have introduced Agile development practices into both new and existing teams. I've also brought Agile methods (Scrum initially, then engineering practices including test-driven development, common code ownership, user stories, continuous integration, etc.) into the team I've run for the past 3 years.

From what I've seen and heard, most people are neutral-to-happy about adopting Agile methods. Some people love them. And a few people are unhappy about it.

One of the basic principles of Agile development is to treat people as individuals, trust and respect them, and give them the support they need to do the job. To me, this sounds a lot like Jay Goltz's "Part one."

What do we do about Part two? If someone is really unhappy even when trusted, respected, and given the tools to do their job, what do you do about it?

Monday, March 8, 2010

Island of Agile

Last month I went to Agile Open Northwest. I co-led a session about being Agile in an organization that is not. At one point someone asked "Why would you want to be an island of agile?"

I thought to myself, wow, that's a great title for a blog. So here is my blog on being an Island of Agile--running an agile team inside of an organization that is not.