Wednesday, April 28, 2010

Do you reveal impediments?

I just read an interesting post from Lyssa Adkins on CollabNet's blog. She writes about the challenges upper management faces when moving to Scrum/Agile practices.

Best quote (for me anyway): "coach the teams to be relentless impediment revealers".

I've found getting into the mindset of revealing impediments to be quite challenging.

CollabNet Scrum and Agile Blog » Addressing upper management challenges with Scrum

What decisions are you going to make with that information?

One of my beliefs about building reporting tools is that the information included in a report should be driven by the questions you want to be able to answer when using the tool. If you don't include all the information needed to answer those questions, the tool won't be useful. If you add unneeded information, you are A) doing extra work when you build it and B) cluttering up the page and distracting people from what really matters.

I was talking with a colleague this week about our integrated build validation test (BVT) information page. We have multiple BVT "tools", each of which potentially gets run against multiple platforms for each build. Currently we report pass/fail at the tool level. If any platform fails a given tool, that tool reports as "fail". If all the platforms for a given tool pass, then that tool reports a pass. My colleague doesn't like this. He thinks we should report all results. He thinks it's important to know that the slam BVT (one of the tools) passed on five platforms but failed on a sixth. In general, I think he's correct. We do want to know that it's working on some platforms but not others. But on this particular web page, that information doesn't seem relevant, and I've never quite understood why he feels so strongly about it.
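To make the current roll-up concrete, here is a minimal sketch of the rule (the function and platform names are my own invention, not our actual tooling):

    # Hypothetical sketch: a tool reports "pass" only if every platform
    # it ran on passed; a single platform failure flips the whole tool to "fail".
    def tool_status(platform_results):
        """platform_results maps platform name -> True (pass) / False (fail)."""
        return "pass" if all(platform_results.values()) else "fail"

    # The scenario above: the slam BVT passes on five platforms, fails on a sixth.
    slam_results = {"plat1": True, "plat2": True, "plat3": True,
                    "plat4": True, "plat5": True, "plat6": False}
    print(tool_status(slam_results))  # -> "fail"; the per-platform detail is gone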

As we were digging into this it seemed that his core concern was that we were "making decisions" with incomplete information.

Certainly that sounds like a bad idea. But is it really? Not all information is relevant to a decision. In court, the most important issue to be decided is sometimes whether or not a particular piece of information can be put before a jury. If the judge rules that it is irrelevant to the question the jury is being asked to answer, then you are not allowed to present it. Even though that leads to decisions being made with incomplete information.

In this instance, what decisions are we making? The decisions we might make based on the BVT info page are "do we move forward with this build now, do we not move forward with this build, or do we need to investigate and get more information before we decide?"

The BVT info page will show that either A) all of the BVT tests have passed against this build, or B) at least one test has failed. If all of the tests have passed, we pretty much always decide to move forward with the build. If most or all of the BVT tools have failed, we usually decide not to move forward with the build. Finally, if one or two of the BVT tools report failure, we conclude that we need further investigation before we make the decision.
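Roughly speaking (this is just a sketch with an arbitrary cut-off for "most", not how the page is actually implemented), the decision rule amounts to:

    # Hypothetical sketch of the go / no-go / investigate decision.
    # tool_statuses maps each BVT tool name to its rolled-up "pass" or "fail".
    def build_decision(tool_statuses):
        failures = [t for t, s in tool_statuses.items() if s == "fail"]
        if not failures:
            return "move forward"          # everything passed
        if len(failures) > len(tool_statuses) / 2:
            return "do not move forward"   # most or all of the tools failed
        return "investigate"               # a failure or two: dig in before deciding

    print(build_decision({"slam": "fail", "install": "pass",
                          "upgrade": "pass", "config": "pass"}))
    # -> "investigate"

Note that the input is already the rolled-up per-tool status; per-platform detail never enters into it, which is exactly the question at hand.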

So how would knowing that, for a given tool, some platforms passed and others didn't change our decision? As far as I can tell it wouldn't. If everything passes, this problem doesn't apply. If most of the tools have failed on at least one platform, that strongly suggests the build isn't solid enough to go forward. And if a particular tool is failing, we are going to want to investigate. And if the tool is failing on some platforms but not others, we are still going to want to investigate it.

For some decisions, like whether to send an issue to a developer or a tester to look at, or which developer or tester should investigate it, knowing that the tool failed on some platforms and passed on others is significant. But for the decision that is actually being made from this particular report, I don't think it is. So including that information would just clutter up the report without adding value.

Communicating with your non-Agile co-workers

I read a nice post by Jurgen Appelo today on communication.

The money quote: "Real communication includes making sure that the meaning that is assigned to a message is the same on both sides."

This is something that we've struggled with quite a bit. My team, which uses Agile methods and terminology, has stumbled many times when talking to our customer--the Test Engineering team.

I think the biggest problem has been with the definition of "done." The software that we write is used internally. As is common with internal tools, there is a desire to say something is done as soon as it hits the bare minimum of functionality/usability and move on to solving the next problem.

We don't do it that way. We expect our code to be tested, robust, and maintainable before we call it done. Several times over the last 3 years we've put up prototypes of tools that did have the bare minimum of functionality and usability. Then people were surprised when we said there was more work to do and that it wasn't finished yet.

Monday, April 26, 2010

Deniability

I read a short but provocative post by Seth Godin on deniability. He asked how much of your time on a project is spent on CYA, excuse preparation, and generally making sure that you won't be blamed if the project doesn't ship.

I've worked places where the cost of being blamable for a project failing was very high. In those organizations a lot of energy was spent on finger-pointing and blame avoidance. How many more projects would have succeeded if all that energy had gone into the project instead of the politics?

Can an organization that is blame oriented successfully implement agile development practices? It seems unlikely. In a blame oriented organization, pointing out impediments is risky. And if no one points out impediments, they don't get removed.

Sunday, April 25, 2010

Standards in software development

I was reading a piece by Jurgen Appelo in which he argues that standards (things like naming conventions, file structures, error log standards, etc.) will emerge from the bottom up when "goals and metrics make it painfully clear for employees that it is more optimal for them to change."

I'm intrigued by this. I run a small (6 person) development team that worked out its own coding standards a couple of years ago. I believe that the primary driver was switching to common code ownership. It's a lot easier to maintain and extend someone else's code if you are both sticking to a set of shared conventions.

Currently there are two separate efforts (that I know of) to develop coding standards at my workplace. The test engineering team, which writes test tools and automation in Python, is working on coding guidelines for use in writing automated tests. As far as I can tell this is coming primarily from the individual contributors, with some nudges from management. The development organization, which writes the product code in C and C++ (and a smattering of other languages for bits and pieces), is trying to integrate its various coding standards into a single document. It's a non-trivial task: there are over 100 coders in 6 offices, spread across 3 continents, working on 7 different products. This effort seems to be driven from the top down.

It will be interesting to see how the two efforts progress.

After reading Jurgen's post, I was left with one question. What kinds of goals and metrics can you put in place to make it more likely that team members will see that standardization will make their work easier?

Monday, April 19, 2010

Interesting post from James Shore on hiring and certifications

I've long had reservations about certifications in the tech industry.

As far as I'm concerned the only thing a certification really tells me about someone is that they are trying to make their resume stand out.

James Shore has a post in which he discusses the reason he hears from other people for using certifications in the hiring process: they provide a mechanism for filtering candidates.

Which is true. But do I really want to filter on this, which I view primarily as self-promotion? Probably not.

James describes the process he has used in the past for filtering candidates, the first part of which is a screening questionnaire with a bunch of open-ended questions. At F5, we recently interviewed a bunch of college students for this summer's intern slots. The initial screening included an on-line questionnaire. Most of the questions were multiple-choice, but there was one open-ended question that we put in front of every single technical intern candidate. It was very telling as to who was knowledgeable and who wasn't.

Which leaves me wondering what would happen if I tried the rest of James' hiring process. I should give it a try the next time I have an opening to fill.