How to Facilitate Meetings

by Berit Lakey. From Network Service Collective, Movement for a New Society, 4722 Baltimore Avenue, Philadelphia, PA 19143; 215 724-1464.

Meetings are occasions when people come together to get something done, whether it is sharing information or making decisions. They may be good, bad or indifferent.
Some of the ingredients of good meetings are:
  • Commonly understood goals;
  • A clear process for reaching those goals;
  • An awareness that people come with their personal preoccupations and feelings as well as an interest in the subject at hand; and
  • A sense of involvement and empowerment (people feeling that the decisions are their decisions; that they are able to do what needs doing).
While there is no foolproof way to ensure successful meetings, there are a number of guidelines that will go a long way toward helping groups to meet both joyfully and productively. Most people can learn how to facilitate a good meeting, but it does take some time and attention. The more people within a group who are aware of good group process skills, the easier the task of the facilitator and the more satisfactory the meeting.

A facilitator is not quite the same as a leader or chairperson, but more like a clerk in a Quaker meeting. A facilitator accepts responsibility to help the group accomplish a common task: to move through the agenda in the time available and to make necessary decisions and plans for implementation. A facilitator makes no decisions for the group, but suggests ways that will help the group to move forward. He or she works in such a way that the people present at the meeting are aware that they are in charge, that it is their business that is being conducted, and that each person has a role to play.

It is important to emphasize that the responsibility of the facilitator is to the group and its work rather than to the individuals within the group. Furthermore, a person with a high stake in the issues discussed will have a more difficult task functioning as a good facilitator.

AGENDA PLANNING: If at all possible, plan the agenda before the meeting. It is easier to modify it later than to start from scratch at the beginning of the meeting. If very few agenda items are known before the meeting starts, try to anticipate by thinking about the people who will be there and what kind of process will be helpful to them. In the agenda include:
  1. Something to gather people, to bring their thoughts to the present, to make them recognize each other's presence (singing, silence, brief mention of good things that have happened to people lately, etc.);
  2. Agenda Review -- It's a good idea to have the agenda written on large sheets of newsprint or on a blackboard so that everybody can see it. By reviewing the agenda, the facilitator can give the participants a chance to modify the proposed agenda and then to contract to carry it out.
  3. Main Items - If more than one item needs to be dealt with, it is important to set priorities:
    • If at all possible, start with something that can be dealt with reasonably easily. This will give the group a sense of accomplishment and energy.
    • The more difficult or lengthier items, or those of most pressing importance, come next. If there are several, plan to have quick breaks between them to restore energy and attention (just a stretch in place, a rousing song, a quick game).
    • A big item may be broken into several issues and discussed one at a time to make it more manageable. Or it may be helpful to suggest a process of presenting the item with background information and clarification, breaking into small groups for idea sharing and making priorities, and then returning to the main group for discussion.
    • Finish with something short and easy to provide a sense of hope for next time.
  4. Announcements.
  5. Evaluation - Serves several purposes: to provide a quick opportunity for people to think through what happened and to express their feelings about the proceedings and thus to provide a sense of closure to the experience; and to learn to have better meetings in the future.
Estimate the time needed for each item and put it on the agenda chart. This will:
    • Indicate to participants the relative weights of the items;
    • Help participants tailor their participation to the time available; and
    • Give a sense of the progress of the meeting.
FACILITATING A MEETING: The tone of the meeting is usually set in the beginning. It's important to start on a note of confidence and energy and with the recognition that those present are people, not just roles and functions. Sometimes singing will do this - especially in large gatherings - or a quick sharing of good things that have happened to individual people lately. The time it takes is repaid by the contribution it makes to a relaxed and upbeat atmosphere where participants are encouraged to be real with each other.

Agenda Review:
  1. Go through the whole agenda in headline form, giving a brief idea of what is to be covered and how.
  2. Briefly explain the rationale behind the order of the proposed agenda.
  3. Then, and not before, ask for questions and comments.
  4. Don't be defensive about the agenda you have proposed, but don't change everything at the suggestion of one person - check it out with the group first.
  5. If major additions are proposed, make the group aware that adjustments must be made because of limited time available, like taking something out, postponing something until later, etc.
  6. If an item that some people do not want to deal with is suggested for discussion, consider that there is no consensus and it cannot be included at that time.
  7. Remember that your responsibility as facilitator is to the whole group and not to each individual.
  8. When the agenda has been amended, ask the participants if they are willing to accept it - and insist on a response. They need to be aware of having made a contract with you about how to proceed. Besides, it is their meeting!
Agenda Items Proper:
  1. Arrange (before the meeting) to have somebody else present each item.
  2. Encourage the expression of various viewpoints. The more important the decision, the more important it is to have all pertinent information (facts, feelings and opinions) on the table.
  3. Expect differences of opinion; when handled well, they can contribute greatly to creative solutions.
  4. Be suspicious of agreements reached too easily. Test to make sure that people really do agree on essential points.
  5. Don't let the discussion continue between just two people; ask for comments from others. After all, it is the group that needs to make the decisions and carry them out.
  6. As much as possible, hold people to speaking for themselves only and to being specific when they refer to others. NO: "some people say . . . ," "we all know . . . ," "they would not listen . . . ." Even though this is scary in the beginning, it will foster building of trust in the long run.
  7. Keep looking for minor points of agreement and state them. It helps morale.
  8. Encourage people to think of fresh solutions as well as to look for possible compromises.
  9. In tense situations or when solutions are hard to reach, remember humor, affirmation, quick games for energy, change of places, small buzz groups, silence, etc.
  10. When you test for consensus, state in question form everything that you feel participants agree on. Be specific: "Do we agree that we'll meet on Tuesday evenings for the next two months and that a facilitator will be found at each meeting to function for the next one?" Do NOT merely refer to a previous statement: "Do you all agree that we should do it the way it was just suggested?"
  11. Insist on a response. Here again the participants need to be conscious of making a contract with each other.
  12. If you find yourself drawn into the discussion in support of a particular position, it would be preferable to step aside as facilitator until the next agenda item. This can be arranged beforehand if you anticipate a conflict of interest.
  13. Almost any meeting will benefit from quick breaks in the proceedings - energy injections - provided by short games, songs, a common stretch, etc.
Evaluation: In small meetings (up to at least 50 people) it is often wise to evaluate how things went (the meeting process, that is, not the content). A simple format: on top of a large sheet of newsprint or a blackboard put a + on the left side, a - in the middle, and a / on the right side. Under the + list positive comments, things that people felt good about. Under the - list the things that could have been done better, that did not come off so well. Under the / list specific suggestions for how things could have been improved. Don't get into arguments about whether something was in fact helpful or not; people have a right to their feelings. It is not necessary to work out consensus on what was good and what was not about the meeting. A few minutes is usually all that is needed; don't drag it out. Try to end with a positive comment. Meetings almost invariably get better after people get used to evaluating how they function together.

Closing: Try to end the meeting in the same way it started - with a sense of gathering. Don't let it just fizzle. A song, some silence, standing in a circle, shaking hands - anything that affirms the group as such and puts a feeling of closure on the time spent together is good.

SPECIAL ROLES

"VIBESWATCHER" At times when the discussion is expected to be particularly controversial or when there are more people than the facilitator can be awarely [sic] attentive to, it may make sense to appoint a "vibeswatcher," a person who will pay attention to the emotional climate and energy level of the attenders. Such a person is encouraged to interrupt the proceedings when necessary with an observation of how things are going and to suggest remedies when there is a problem.
  1. As "vibeswatcher" you pay most attention to the nonverbal communication, such as:
  2. Body language: Are people yawning, dozing, sagging, fidgeting, leaving?
  3. Facial expressions: Are people alert or "not there," looking upset, staring off into space?
  4. Side conversations: Are they distracting to the facilitator or to the group?
  5. People interrupting each other.
It is often difficult to interpret such behavior correctly. Therefore it may be wise to report what you have observed and possibly to suggest something to do about it. If energy is low, a quick game, stretch or a rousing song may wake people up. If the tension or conflict level is preventing people from hearing each other, simply getting up and finding new places to sit might help. A period of silence might also be helpful, giving people a chance to relax a bit and look for new insights. It is important for the vibeswatcher to keep a light touch - don't make people feel guilty or defensive. Also, be confident in your role. There is no reason for apologizing when you have an observation or a suggestion for the group; you are doing them a favor.

PROCESS OBSERVER From time to time any group can benefit from having somebody observe how it works. During periods of conflict or transition (changing consciousness about sexism, for example) a process observer may be of special value. While functioning as a process observer, be careful not to get involved in the task of the group. A note pad for short notations will help you to be accurate. Remember to notice helpful suggestions or procedures that moved the group forward. Once a group has a sense of its strengths, it is easier to consider the need for improvements.
Here are some specific things you might look for:
  1. What was the general atmosphere in which the group worked (relaxed, tense)?
  2. How were the decisions made?
  3. If there was any conflict, how was it handled?
  4. Did everybody participate? Were there procedures that encouraged participation?
  5. How well did the group members listen to each other?
  6. Were there recognized leaders within the group?
  7. How did the group interact with the facilitator?
  8. Were there differences between male and female participation?
When you as process observer (whether appointed or not) are paying specific attention to patterns of participation, an easy device is to keep score on paper. In a small group a mark can be made next to a person's name every time s/he speaks. If you are looking for differences in participation patterns between categories of people, such as male-female, black-white, new member-old member, etc., keeping track of the number of contributions in each category is enough. In giving feedback to the group, try to be matter-of-fact and specific so that people do not get defensive and can know exactly what you are talking about. Again, remember to mention the strengths you observed in the group. If you take it upon yourself to function as a process observer without checking it with the group beforehand, be prepared for some hostility. Your contribution may turn out to be very valuable, but a lot of tact and sensitivity is called for.

CO-FACILITATOR Instead of the usual practice of having one facilitator, it is often wise to have two facilitators. Here are some of the reasons and circumstances for team facilitation:
  1. More information and ideas are available during the planning.
  2. More energy (physical and emotional) is available to the group, especially during times of conflict or when handling complicated matters.
  3. If a facilitator becomes personally involved in the discussion, it is easy to hand the job over to the co-facilitator for the time being.
  4. Co-facilitation is a way for more people to gain experience and to become skilled facilitators.
  5. It is less exhausting, demanding and scary.
For people who are not used to working as a team, it is probably wise to divide responsibility for the agenda clearly before the meeting. However, co-facilitation means that the person who is currently not "on duty" is still responsible for paying attention as "vibeswatcher" and pitching in to help clarify issues, to test for consensus, etc. In evaluating their work together, people who work as co-facilitators can help each other by giving feedback and support, and thus learn and grow.

Berit Lakey, April 1975

Defined vs. Empirical Process Control

How many would admit to facilitating that "perfect" phased-gate, resource-leveled MS Project plan only to face a massive re-plan caused by some new cross-functional dependency? How long did it take to institute the change? The following everyday examples are more likely:
  • Your plan is hereby obsolete - due to critical new feedback from the voice of the customer
  • Your progress reports are inaccurate - "actuals" materially deviate from "planned"
  • Your functional requirements need to change - caused by vendor performance issues early in the "execution phase"
In those types of projects, manufacturing process control* theory argues for empirical process control, such as the Scrum agile process:
  1. "It is typical to adopt the defined (theoretical) modeling approach when the underlying mechanisms by which a process operates are reasonably well understood."
  2. "When the process is too complicated for the defined approach, the empirical approach is the appropriate choice."


  • In the first case, the Project Plan creates the cost and schedule estimates - it's a plan-driven (command-and-control "waterfall") type of project.
  • In the second case, the Product or Services Vision creates the feature estimates - it's a value- and vision-driven (agile) project.
Learn to recognize the difference between the two, formalizing your type of project early in the inception phase, before any commitments are made!

* Ogunnaike and Ray, Process Dynamics, Modeling, and Control, Oxford University Press, 1992

Is your team leadership organized for Enterprise 2.0?

I used to believe that major organizational changes could only be accomplished by one highly visible individual. Steve Jobs, Bill Gates, and Scott Cook come to mind. It was easy to conclude that the type of leadership so critical to major change can come only from a single larger-than-life person. It's a false belief. Because major change is so difficult to accomplish, a powerful force is required to sustain the process. No one individual, even a monarch-like CEO, is ever able to develop a shared vision, engage one-on-one with the eco-system, eliminate major impediments, deliver short-term successes, lead and manage dozens of change projects, and anchor new approaches deep in the organization's culture. Weak committees are even worse. Strong guiding coalition teams are mandatory - each one optimized around personality traits, trust, and shared objectives. Building such an organization is a critical part of the early stages of any initiative involving re-structuring, re-engineering or re-tooling strategies. I'll talk more about that in my next post.

Managing change is not just about communication to share "information" - Emails (web, desktop, mobile), Content Management, Blogs (private and corporate), RSS aggregation, Instant Messenger choices (private and public), iShare, weTag, myRSS, mySpace, yourFace. Managing change is about leadership models that can keep pace with rapid change. In today's Enterprise 2.0 world, teams are absolutely accountable for rapid decisions in a rapidly changing world. "Lone rangers" and "weak committees" physically cannot attend to all the information required to make good non-routine decisions. Nor do they possess the credibility, the awareness or the time required to convince others to make the personal sacrifices called for in implementing changes. Only teams with the right composition and trust can win today. This reality applies equally to product development coalition teams "in the trenches" and to the very top of an organization during a major transformational initiative. This combined shortage of time and attention is unique in today's information age. This is where Enterprise 2.0 technology combined with coalition leadership is critical.

Geese Graphic

If orchestrated properly, coalition teams can quickly align around a context of shared knowledge on a Wiki platform. Wikis can reward large teams with rapidly evolving shared knowledge and, therefore, accelerated response and decision making. If something is written that annoys other stakeholders, it's just going to be deleted. So if you want your writing to survive, you really have to strive to be cooperative and helpful. If the social engineering with Wikis is properly aimed at creating a cooperative and helpful culture of change, then consensus decisions can be rapidly achieved among all the team coalitions in the community. Votes can be taken, but their results are not binding; they serve as input for further alignment. Overly harsh or argumentative contributors are corrected by their peers and addressed off-line if necessary.

While e-mail, Intranets, webcasts, and a burgeoning array of online collaboration tools have helped stakeholders become more efficient, there's little evidence that the Web has dramatically altered team governance or fundamentally changed the way in which teams make decisions, at least thus far. Looking forward, though, there's every reason to believe that the Internet will change how decisions are made just as thoroughly as it's changed every other facet of commercial life. Why?
Because the Internet is an immensely powerful tool for multiplying human accomplishment - a goal that is central to the work of every leader and the design of every management information system.

Lines of Sight

ARE YOU A DEVELOPMENT, PROJECT, OR TEST MANAGER?

DO YOU BELIEVE YOUR PROJECT IS UNDER CONTROL?

LET ME ASK YOU A FEW SIMPLE QUESTIONS.

  1. When will the system-platform-service be accepted? Really? How do you know?
  2. How many more test cycles should we plan for? No... I mean exactly?
  3. Are the client's needs satisfied? Prove it.

System Stabilization Metrics

Early in my career, in the role of a project manager on a multi-month project, one particular Marketing Director would periodically stop by, asking how long it would be before her new product would be ready for release. I would say "about three or four weeks," which seemed reasonable since we had been testing for two weeks already and things were going well. (I had learned to estimate in date ranges, which are always better than an exact date.) I was pretty happy with "three or four weeks," rather than "June 13." Then the Director asked how I knew it wouldn't be two weeks or five weeks. I didn't have a good answer and didn't want to admit that I was running on gut feel.

Tracking defect inflow and outflow is one of the best ways to verify intuition when forecasting releases. Defect inflow is the rate at which new bugs are reported against the system. Plotted over time, defect inflow will look like an inverted V, because testing is slow going at first. Common causes include:
  1. The system under test contains blocking defects. (A blocking defect prevents deeper characterization of the system under test past the point of the defect.)
  2. Setup and testing extended past the last cycle, delaying test-design activities into the current cycle, impeding characterization.
  3. Testers are unfamiliar with the system under test, due to evolving designs or gaps in engagement with development discussions.
Defect outflow represents the number of defects fixed each week. Typically this will start out slow (it cannot exceed the rate of inflow at first, until a backlog of defects builds), but will grow and level off at a steady rate. Just prior to release, there is usually a push to fix only absolutely critical defects. At this point, outflow drops dramatically. The metrics in the Figure are representative of most successful projects.

On larger projects, it's a good idea to plot defect inflow and outflow on a monthly basis; daily and even weekly ranges can vary too much. I found it best to limit defect charts to eight periods of data. If you are following an agile process such as XP or Scrum, a project can change drastically in that time, so older data is of minimal value. Eight months is also long enough to see most trends.

As you monitor defect inflow and outflow, consider all factors that may influence your measurements. You don't want to make a pronouncement that the system is nearly ready to go live only due to declining inflows. Inflows decline for a number of reasons, for example, testers reworking a significant number of tests rather than characterizing functionality. Since all defects are not equal, you may want to try weighted inflow and outflow tracking. Use a five-tier defect categorization and weighting such as:
  1. Critical: 20
  2. High: 10
  3. Medium: 7
  4. Low: 3
  5. Very Low: 1
By weighting defect inflow and outflow, stakeholders get a much better feel for what is happening on a project. For example, I have typically found that trends in weighted inflow usually lead trends in unweighted inflow. On most projects, there is a point at which testing is still finding a significant number of bugs, but the bugs are lower in severity. In this case, you'll see weighted inflow drop while unweighted inflow will typically remain constant. In most cases, this indicates that you are one to three periods away from seeing a matching drop in unweighted inflow. By measuring weighted defect inflow, you'll be able to forecast this important turning point in a project a few weeks earlier. Monitoring weighted outflow (the weighted defects fixed) helps ensure that your programmers are addressing the highest severity defects first. Regardless of how you define Critical and Low, you would almost assuredly prefer that programmers work on Critical defects. Focusing on weighted outflows directs everyone's attention toward higher-value work.
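To make the weighting concrete, here is a minimal sketch (in Python) of how weighted inflow and outflow per period could be computed from a defect-tracker export. The record format (severity, reported date, optional fixed date) and the `period_of` helper are assumptions made for this illustration, not features of any particular tool.

```python
from collections import defaultdict

# Severity weights from the five-tier scheme above.
WEIGHTS = {"Critical": 20, "High": 10, "Medium": 7, "Low": 3, "Very Low": 1}

def weighted_flows(defects, period_of):
    """Sum severity weights of defects reported (inflow) and fixed (outflow)
    per reporting period. `defects` is a list of dicts with 'severity',
    'reported', and optional 'fixed' dates; `period_of` maps a date to a
    period label such as '2024-03' -- both assumptions for this sketch.
    """
    inflow, outflow = defaultdict(int), defaultdict(int)
    for d in defects:
        weight = WEIGHTS[d["severity"]]
        inflow[period_of(d["reported"])] += weight
        if d.get("fixed"):
            outflow[period_of(d["fixed"])] += weight
    return inflow, outflow
```

Plotting these weighted series alongside the raw counts makes the leading drop in weighted inflow described above easy to spot.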

Programmer Quality Metrics

It is useful to track fix rejects (or QA rejects), meaning defect fixes that QA was unable to verify as, indeed, fixed. You can calculate rejects for the programming team as a whole, for each programmer individually, or both. For many complex enterprise teams, comprising many programmers and testers, it is common to see reject rates of 8-12% or more.

Programmer Quality Metrics Graphic

Be careful when using these metrics. In almost all cases, you should not use them as a significant factor in formal or informal evaluations of programmers. Knowing a team's reject rate is essential in predicting GO-LIVE timing.
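As a simple illustration, a reject rate is just rejected fix attempts divided by total fix attempts, tracked per programmer and for the team. The sketch below assumes a hypothetical list of (programmer, rejected) records pulled from the defect tracker; the field layout is illustrative only.

```python
def reject_rates(fix_attempts):
    """Return (per-programmer reject rates, team reject rate).
    `fix_attempts` is a list of (programmer, rejected) pairs, where
    `rejected` is True when QA could not verify the fix -- an assumed
    record format for this sketch.
    """
    per_programmer = {}
    total, total_rejected = 0, 0
    for programmer, rejected in fix_attempts:
        fixes, rejects = per_programmer.get(programmer, (0, 0))
        per_programmer[programmer] = (fixes + 1, rejects + int(rejected))
        total += 1
        total_rejected += int(rejected)
    team_rate = total_rejected / total if total else 0.0
    return {p: r / f for p, (f, r) in per_programmer.items()}, team_rate
```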

Counting Defect Inflow and Outflow

Counting defect inflow and outflow is complex. One common complication is how to count defects that have been reopened. For example, suppose QA reports a defect one day; a programmer marks it as fixed the next. Now, two weeks later, the defect shows up again. Should it be reported as part of inflow? I say no. Think of defect inflow as defects first reported during the period. This is the easiest way to generate the metrics out of the defect tracking systems I've used. In most defect tracking systems, it is easy to get a count of defects that were marked fixed in a given reporting period, but it is difficult (albeit not impossible) to get a count of defects closed this week that had been closed before. Imagine the query: count all defects opened this week, plus defects reopened this week that had been resolved earlier.

Keep in mind, though, that if you count defect inflow as defects first reported during the period and outflow as any bug marked as closed during the period, you can find yourself with some unusual results. For example, in a two-week period assume that one defect is reported during the first week and no defects are reported during the second week. If a programmer fixes the defect the first week, you'll show an outflow of one. During the second week, a tester discovers that the defect was only hidden and is really still there. If the programmer really fixes the defect the second week, you'll show no inflow but an outflow of one in that second week. In other words, your aggregate metrics will show one defect reported, two fixed. Don't go through the extra work of reducing inflow or outflow counts from previous weeks. The metrics should reflect knowledge at the time they were collected. There will always be some amount of uncertainty in your latest period of metrics, so it is consistent to allow that same amount of uncertainty in prior weeks.

The real solution to situations like this is to make sure that anyone who will make decisions based on your metrics understands how they are calculated and any biases they may contain. For example, the method of counting inflow and outflow described above is what I call a programmer-biased metric. In other words, the metric is completely consistent with how a programmer (as opposed to a tester) would view the counts. As such, it is likely to result in a slightly optimistic view of the program at any point: defect inflow is shown as slightly less than it probably is, and defect outflow is shown as slightly better than it probably is. Be sure that those who use your metrics understand the biases built into them.

If your customers aren't happy, then you won't be happy for long. Separately tracking field-reported defects, collected during early field trials, as well as user acceptance defects collected during phased iterations, is your last line of defense before going live. Trends should follow the same inverted-V pattern. Thresholds should meet or exceed Field Support constraints in order to meet ongoing cost objectives.

Programmer Quality Metrics Graphic
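The counting rule above (inflow counts a defect only in the period it was first reported; outflow counts every period in which it was marked fixed, including re-closes after a reopen) can be sketched in a few lines. The record format, a reported date plus a list of close dates per defect, is an assumption for illustration rather than the schema of any real defect tracker.

```python
from collections import Counter

def inflow_outflow(defects, period_of):
    """Programmer-biased inflow/outflow counts, as described above.
    `defects` is a list of dicts with a 'reported' date and a list of
    'closed' dates (one entry per time the defect was marked fixed);
    `period_of` maps a date to a reporting-period label.
    """
    inflow, outflow = Counter(), Counter()
    for d in defects:
        inflow[period_of(d["reported"])] += 1      # first report only
        for closed_on in d.get("closed", []):
            outflow[period_of(closed_on)] += 1     # every close counts
    return inflow, outflow
```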

Parting Guidance

With any measurement effort, most people will adjust their behavior to measure well against the metric. Others will game the system while avoiding detection. Two methods work well for correcting the latter human tendency to alter behavior once it becomes "measured". First, you can make it very clear that there is no consequence to the measurement. That was my recommendation with defect rejects. Don't take the worst couple of programmers out behind the corporate woodshed; instead, simply identify them for additional training, assignment to more appropriate work, or coaching on estimation awareness. The second method is to measure other complementary areas that will expose, and ultimately counteract, those costly behaviors. At minimum, defect-trend metrics are essential for every significant systems development project. Without them, Project Managers are forced to rely on gut feel or other intangible or anecdotal evidence in answering those critical questions. The needs of each project or development organization are different, but those metrics at least point you in the right direction.

Agile Hallmarks

Summarized below are several of the key characteristics shared by every successful agile project. For some methodologies these correspond exactly with individual practices, whereas for other methodologies there is a looser correspondence.

1. Releases and Fixed-Length Iterations

Agile methods have two main units of delivery: releases and iterations. A release consists of several iterations, each of which is like a micro-project of its own. Features, defects, enhancement requests and other work items are organized, estimated and prioritized, then assigned to a release. Within a release, these work items are then assigned by priority to iterations, as shown in the diagram below.

Iteration Diagram
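The prioritized assignment described above can be thought of as a simple greedy packing of estimated work items into fixed-capacity iterations. The sketch below is one hypothetical way to express it; the data layout (name, priority, estimate) and the single shared unit for estimates and capacity are assumptions made for illustration.

```python
def plan_release(work_items, iteration_capacity, num_iterations):
    """Assign estimated work items to iterations in priority order.
    `work_items` is a list of (name, priority, estimate) tuples, with
    lower priority numbers meaning more important; items that do not
    fit in this release fall back to the backlog.
    """
    iterations = [[] for _ in range(num_iterations)]
    remaining = [iteration_capacity] * num_iterations
    backlog = []
    for name, _, estimate in sorted(work_items, key=lambda w: w[1]):
        for i in range(num_iterations):
            if estimate <= remaining[i]:
                iterations[i].append(name)
                remaining[i] -= estimate
                break
        else:
            backlog.append(name)
    return iterations, backlog
```

In practice the team, not an algorithm, makes these assignments, but the structure (a release, fixed-length iterations, prioritized work items) is the same.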

The outcome from each iteration is working, tested, accepted software and associated work items. Agile development projects thrive on the rhythm or heartbeat of fixed-length iterations. The continuous flow of new running, tested features at each iteration provides the feedback that enables the team to keep both the project and the system on track. Only from the features that emerge from fixed-length ("time-boxed") iterations can you get meaningful answers to questions like "How much work did we do last month compared to what we predicted we would?" and "How much work did we get done compared to the month before?" and our personal favorite, "How many features will we really get done before the deadline?" The cruelty of several tight, fixed deadlines within a release cycle focuses everyone's mind. Face to face with highly-visible outcomes from the last iteration (some positive, some negative), the team finds itself focused on refining the process for the next iteration. They are less tempted to "gold-plate" features, to be fuzzy about scope, or to let scope creep. Everyone can actually see and feel how every week, every day, and every hour counts. Everyone can help each other remain focused on the highest possible business value per unit time. The operating mechanics of an agile development process are highly interdependent. Another way to represent an iterative development process is through a set of interlocking processes that turn at different speeds:

Agile Planning & Delivery Cycle Each day, the team is planning, constructing, and completing work (shared during daily meetings). Software is designed, coded, tested and integrated for customer acceptance. Every iteration includes planning, testing, and delivering working software. Every release includes planning, testing, and deploying software into production. Team communication and collaboration are critical, in order to coordinate and successfully deliver in such a highly adaptive and productive process. As the iterations are fulfilled, the team hits its stride, and the heartbeat of each iteration is welcomed, not dreaded. Teams realize there is time for continuous process improvement, continuous learning and mentoring, and other best practices.

2. Running, Tested Software

Running, tested features are an agile team's primary measure of progress. Working features serve as the basis for enabling and improving team collaboration, customer feedback, and overall project visibility. They provide the evidence that both the system and the project are on track. In early iterations of a new project, the team may not deliver many features. Within a few iterations, the team usually hits its stride. As the system emerges, the application design, architecture, and business priorities are all continuously evaluated. At every step along the way, the team continuously works to converge on the best business solution, using the latest input from customers, users, and other stakeholders. Iteration by iteration, everyone involved can see whether or not they will get what they want, and management can see whether they will get their money's worth. Consistently measuring success with actual software gives a project a very different feeling than traditional projects. Programmers, customers, managers, and other stakeholders are focused, engaged, and confident.

3. Value-Driven Development

Agile methods focus rigorously on delivering business value early and continuously, as measured by running, tested software. This requires that the team focuses on product features as the main unit of planning, tracking, and delivery. From week to week and from iteration to iteration, the team tracks how many running, tested features they are delivering. They may also require documents and other artifacts, but working features are paramount. This in turn requires that each "feature" is small enough to be delivered in a single iteration. Focusing on business value also requires that features be prioritized and delivered in priority order. Different methodologies use different terminology and techniques to describe features, but ultimately they concern the same thing: discrete units of product functionality.
Methodology            Feature Terminology
Extreme Programming    User Stories
Scrum                  Product Backlog
Unified Process        Use Cases & Scenarios
FDD                    Features

4. Continuous (Adaptive) Planning

It is a myth that agile methods forbid up-front planning. It is true that agile methods insist that up-front planning be held accountable for the resources it consumes. Agile planning is also based as much as possible on solid, historical data, not speculation. But most importantly, agile methods insist that planning continues throughout the project. The plan must continuously demonstrate its accuracy: nobody on an agile project will take it for granted that the plan is workable. At project launch, the team does just enough planning to get going with the initial iteration and, if appropriate, to lay out a high-level release plan of features. And iterating is the key to continuous planning. Think of each iteration as a mini-project that receives "just enough" of its own planning. At iteration start, the team selects a set of features to implement, and identifies and estimates each technical task for each feature. (This task estimation is a critical agile skill.) For each iteration, this planning process repeats.

It turns out that agile projects typically involve more planning, and much better planning, than waterfall projects. One of the criticisms of "successful" waterfall projects is that they tend to deliver what was originally requested in the requirements document, not what the stakeholders discover they actually need as the project and system unfold. Waterfall projects, because they can only "work the plan" in its original static state, get married in a shotgun wedding to every flaw in that plan. Agile projects are not bound by these initial flaws. Continuous planning, being based on solid, accurate, recent data, enables agile projects to allow priorities and exact scope to evolve, within reason, to accommodate the inescapable ways in which business needs continuously evolve. Continuous planning keeps the team and the system homed in on maximum business value by the deadline. In the agile community, waterfall projects are sometimes compared to "fire and forget" weapons, for which you painstakingly adjust a precise trajectory, press a fire button, and hope for the best. Agile projects are like cruise missiles, capable of continuous course correction as they fly, and therefore much likelier to hit a target (a feature-set and a date) accurately.

5. Multi-Level Planning

Continuous planning is much more accurate if it occurs on at least two levels:
  • At the release level, we identify and prioritize the features we must have, would like to have, and can do without by the deadline.
  • At the iteration level, we pick and plan for the next batch of features to implement, in priority order. If features are too large to be estimated or delivered within a single iteration, we break them down further.
As features are prioritized and scheduled for an iteration, they are broken down into their discrete technical tasks. This just-in-time approach to planning is easier and more accurate than large-scale up-front planning, because it aligns the level of information available with the level of detail necessary at the time. We do not make wild guesses about features far in the future. We don't waste time trying to plan at a level of detail that the data currently available to us does not support. We plan in little bites, instead of trying to swallow the entire cow at once.

6. Relative Estimation

Many agile development teams use the practice of relative estimation for features to accelerate planning and remove unnecessary complexity. Instead of estimating features across a spectrum of unit lengths, they select a few (3-5) relative estimation categories, or buckets, and estimate all features in terms of these categories. Examples include:
  • 1-5 days
  • 1, 2, or 3 story points
  • 4, 8, 16, 40, or 80 hours
With relative estimation, estimating categories are approximate multiples of one another. For example, a 3-day feature should take 3 times as long as a 1-day feature, just as a 40-hour feature is approximately 5 times as time-consuming as an 8-hour feature. The concepts of relative estimation and/or predefined estimation buckets prevent the team from wasting time debating whether a particular feature is really 17.5 units or 19 units. While each individual estimate may not be as precise, the benefit of additional precision diminishes tremendously when aggregated across a large group of features. The significant time and effort saved by planning with this type of process often outweighs any costs of imprecise estimates. Just as with everything else in an agile project, we get better at it as we go along. We refine our estimation successively. If a feature exceeds an agreed maximum estimate, then it should be broken down further into multiple features. The features generated as a result of this planning ultimately need to be able to be delivered within a single iteration. So if the team determines that features should not exceed 5 ideal days, then any feature that exceeds 5 days should be broken into smaller features. In this way we "normalize" the granularity of our features: the ratio of feature sizes is not enormous.
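One lightweight way to picture bucketed estimation: snap each raw estimate to the nearest agreed bucket and flag anything above the maximum for further breakdown. The bucket values in this sketch are illustrative, not prescribed by any methodology.

```python
BUCKETS = [1, 2, 3, 5]  # example story-point buckets; 5 is the agreed maximum

def bucket_estimate(raw_estimate, buckets=BUCKETS):
    """Snap a raw estimate to the nearest predefined bucket.
    A result of None means the feature is larger than the agreed
    maximum and should be split into smaller features before it is
    scheduled into an iteration.
    """
    if raw_estimate > buckets[-1]:
        return None
    return min(buckets, key=lambda b: abs(b - raw_estimate))
```

So a debate over whether a feature is 2.5 or 2.8 points simply becomes "it's a 3," and a 9-point feature is sent back to be broken down.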

7. Emergent Feature Discovery

As opposed to spending weeks or months detailing requirements before initiating development, agile development projects quickly prioritize and estimate features, and then refine details when necessary. Features for an iteration are described in more detail by the customers, testers, and developers working together. Additional features can be identified along the way, but no feature is described in detail until it is prioritized for an iteration.

8. Continuous Testing

With continuous testing, the team deterministically measures progress and prevents defects. Continuous testing cranks out the running, tested features, which reduces the risk of failure late in the project. What could be riskier than postponing all testing till the end of the project? Many waterfall projects have failed when they have discovered, in an endless late-project "test-and-fix" phase, that the architecture is fatally flawed, or the components of the system cannot be integrated, or the features are entirely unusable, or the defects cannot possibly be corrected in time. With continuous testing the team avoids that risk. At the unit level and the acceptance (feature) level, the team writes tests as the code itself is written or (better yet) beforehand. The most agile of agile projects strive to automate as many tests as possible, relying on manual tests only when absolutely necessary. This speeds testing and makes it more deterministic, which in turn gives us more continuous and more reliable feedback. There is an emerging wealth of new tools, techniques, and best practices for rigorous continuous testing; much of the innovation is originating in the Test-Driven Development (TDD) community. When is a feature done? When all of its unit tests and acceptance tests pass, and the customer accepts it. This is exactly what defines a running, tested feature. There is no better source of meaningful, highly-visible project metrics.
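As a minimal illustration of the kind of automated check meant here, the sketch below uses Python's standard unittest module against a hypothetical discount-calculation feature; the function and its acceptance criteria are invented for the example.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical feature under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

class DiscountAcceptanceTests(unittest.TestCase):
    """Automated checks standing in for the feature's acceptance criteria."""

    def test_ten_percent_discount(self):
        self.assertEqual(apply_discount(100.00, 10), 90.00)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

if __name__ == "__main__":
    unittest.main()
```

When tests like these run automatically on every build, the feature's "done" status is a fact the whole team can see, not an opinion.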

9. Continuous Improvement

The team continuously refines the system and the project by reflecting on what it has done (using both hard metrics like running, tested features and more subjective measures), and then adjusting estimates and plans accordingly. But the team also uses the same mechanism to successively refine and continuously improve the process itself. Especially at the close of major milestones (iterations, releases, etc.), the team may find problems with iteration planning, problems with the build process or integration process, problems with islands of knowledge among programmers, or any number of other problems. It looks for points of leverage from which to shift those problems. The team adjusts its processes, and acquires or invents new ones, improving after every release. Its processes evolve to deliver incremental value per unit time to the customer, the team, and the organization. It matures and evolves, like a healthy organism.

10. Small, Cross-functional Teams

Smaller teams have been proven to be much more productive than larger teams, with the ideal ranging from five to ten people. If you have to scale a project up to more people, make every effort to keep individual teams as small as possible and coordinate efforts across the teams. Scrum-based organizations of up to 800 people have successfully employed a "Scrum of Scrums" approach to project planning and coordination. With increments of production-ready software being delivered every iteration, teams must also be cross-functional in order to be successful. That means teams include members with all of the skills necessary to successfully deliver software, including analysis, design, coding, testing, writing, user interface design, planning, and management. Teams work together to determine how best to take advantage of one another's skills and mentor each other. Teams transition away from designated testers and coders and designers to integrated teams in which each member helps do whatever needs doing to get the iteration done. Individual team members derive less personal identity from being a competitive expert with a narrow focus, and increasingly derive identity and satisfaction from being part of an extraordinarily productive and efficient team. As the positive reinforcement accumulates from iteration to iteration, the team becomes more cohesive, and the ambient levels of trust, camaraderie, empathy, collaboration, and job satisfaction associated with any well-managed agile project increase.

Executing global strategies with the Program Management Office

As technology breakthroughs help teams overcome geographic boundaries, more and more enterprises are going global. The benefits of globalization far outweigh its challenges. Enterprises pursuing the globalization path leverage global best practices and IT to their business advantage, reducing their operations costs while increasing their focus on strategic business needs. Global teams rely on fundamentally different processes, which enable them to remain competitive. Global strategies need to be driven by a well-integrated IT strategy and enabled by a Program Management Office (PMO). Enterprises may adopt an organic or an inorganic strategy to pursue their ongoing global programs.

Global IT Program drivers and the Program Management Office; Globalization requires strategic programs to achieve enterprise business objectives. Today, technology is the essential engine of globalization programs, mandating an IT Strategy that is tightly integrated with the globalization strategy. Any global program, with a well-laid-out IT strategy, is driven by a set of key common business drivers to leverage synergies across the enterprise, irrespective of the nature of the globalization imperative. Common business drivers include:
  • The need for consistent and standard processes
  • Established control mechanisms
  • Consistent enterprise-wide financial reporting
  • Visibility into the global supply chain
The Program Management Office (PMO) serves as the vital link between the global IT strategy and its successful implementation. Sometimes, enterprises unwisely constrain the role of the PMO to managing project portfolios. This limited view of PMO capabilities often leads to a singular focus on project success, missing the holistic benefits of the PMO, which include forming the vision for the enterprise's future strategic initiatives. This project-based view of the PMO leads to suboptimal PMO effectiveness. With such restricted thinking, there is often a lack of enterprise commitment and executive support to empower the PMO. In other words, the PMO is sometimes perceived as a project execution service, fulfilling just-in-time initiatives across the enterprise, but lacking material impact on the enterprise's opportunities for true scale and growth.

However, with constantly changing business dynamics and with the advent of the global economy, globalization requirements are no longer tactical. This raises an important question: once a global initiative is completed successfully, should the PMO be disbanded or should it continue to exist to address further global needs? There are ongoing challenges, demanding considerable governance and planning at the strategic level of the enterprise, spanning all functions including:
  • Measuring revenue and profit - where we've been
  • Measuring customer experience (loyalty) - where we're going
  • Measuring operational metrics/goals that drive behavior - where we're going
  • Tracking work streams impacting the customer experience, including:
    • Six Sigma initiatives, which maximize product serviceability and drive down customer Total Cost of Ownership (TCO)
    • Customer relationships
    • Business readiness
    • Sales/marketing programs
    • Product quality initiatives
    • Professional services programs
    • Customer support initiatives
The emerging role of the PMO; the PMO is uniquely positioned in the globalization arena to help enterprises transform business drivers into cohesive projects through structured planning. The PMO fulfills a bifurcated role in the enterprise: 1) strategic, advising senior management in decision making, and 2) operational, managing project portfolios within the program, enabling globally distributed teams to work synchronously and ensuring all program objectives are achieved. The PMO should be a centrally placed structure with representation from both IT and the business, and should not be limited to IT-centric projects.

Outsourcing the Quality function

In many enterprises, software quality leaders are facing a serious quandary: addressing the complex and ever-changing needs of the business while maintaining high-quality service levels with fewer and fewer resources. Much like kayakers paddling upstream against a fast-moving river, software quality leaders are searching for answers to help them stay on top of the water and avoid the obstacles scattered along the way. More and more organizations are looking toward outsourcing as one way of equipping themselves for the growing complexity of today's environment. Outsourcing the software quality function can achieve the following objectives:

  • Designing, delivering and coordinating the software quality function on a worldwide basis to support global expansion.
  • Developing more effective quality services to address a range of issues, including the challenges of an aging workforce, regulatory compliance, and increasingly complex bundles of products, services and solutions.
  • Reducing costs as the enterprise faces continued pressures that limit quality budgets and headcount, forcing quality leaders to continually "do more with less."
  • Reaping the benefits of enhanced technologies (such as automated testing and performance tools) without the need for significant capital upgrades and additional technological skills.

The evolution of quality outsourcing mirrors much of what we have seen in the HR outsourcing landscape. Many of the original HR outsourcing arrangements focused on individual processes, with a strong emphasis on administrative areas such as benefits and pension administration. However, over the last several years, there has been a trend toward integrated, multi-process outsourcing that incorporates more strategic capabilities, such as recruiting and compensation planning. The quality function appears to be progressing through a similar transformation, incorporating the entire software quality function, including test design, development and delivery across functional, performance and system testing. As the outsourcing market grows and matures, one expects to see continued out-tasking, more end-to-end outsourcing and an increasing number of deals where the entire quality function is externalized. The following highlights some handy lessons gleaned from outsourcing SQA at one of the top 10 software companies in the world.

Four drivers of success

Four pivotal actions are required of software enterprises as they consider outsourcing:

  1. Identify the appropriate leadership capabilities required to oversee the overall outsourcing effort.
  2. Create an overall transition management plan that identifies all the activities required to transfer responsibility to the vendor.
  3. Develop an ongoing governance and relationship management structure to address conflicts and build an effective working relationship between the client and the vendor.
  4. Build a measurement and reporting framework that communicates how well the outsourcing arrangement is operating.


Let's analyze the first driver: identifying the appropriate leadership capabilities required to oversee the overall outsourcing effort.

Given the strategic importance of, and complexity associated with, outsourcing the quality function, organizations need to identify an individual, or individuals, early in the outsourcing process who have the right skills and capabilities to lead the effort. For many companies, this initial challenge becomes a significant issue, as the individuals who are most likely to be effective at taking charge in this new environment are often attached to other efforts and may be difficult to release from their original duties. The skills and competencies that are needed to lead an outsourcing effort are very different from the skills garnered from leading a functional department. In an article published in the Sloan Management Review in 2000, Michael Useem from the Wharton School of Business and Joseph Harder from the Darden School at the University of Virginia identified six key leadership capabilities that are valuable in outsourcing arrangements. Though the context in which they were writing focused primarily on IT outsourcing, the six capabilities provide a relevant framework for the types of leadership skills needed by quality professionals as they take on outsourcing responsibilities.

Strategic quality vision; To be able to determine what capabilities need to be handled internally versus outsourced to an outside vendor, the leader of an outsourcing effort needs to have a solid understanding of the firm's existing and future business needs and its current quality capabilities. Further, the individual must be able to articulate how outsourcing can be a cost-effective alternative to internal resources and how an outsourcing vendor will fit into a larger portfolio of software projects.

Analytical approach to problem solving; In a quality outsourcing environment, the focus moves from managing a set of internal activities to evaluating a set of outcomes and results. Therefore, the leader must be able to analyze and draw conclusions from key metrics, such as service level agreements, to validate that the vendor is delivering on its promised commitments. Also, he or she must be able to evaluate the quality outcomes and determine how those are contributing to the organization's goals.

Deal making; Leaders of an outsourcing effort need to understand both the current and projected capabilities of outsourcing vendors and be able to make decisions regarding their viability as potential partners. Once the potential vendors are determined, the quality outsourcing leader needs to be able to work effectively with other organizational stakeholders, such as the legal and procurement functions, to put together a working arrangement that is agreed to by all parties.

Partnership governance; Given the complexity and interdependence required for a successful outsourcing arrangement, the organization and vendor must frequently interact with one another. Therefore, the ability of a leader to maintain an effective relationship with peers on the vendor side is paramount. This includes being able to address small problems before they become larger issues, while at the same time protecting the company's longer-term interests.

Change management; Given the magnitude of change associated with end-to-end outsourcing of a function the size of the quality function, an outsourcing leader must be able to manage the information and involvement needs of a number of stakeholder groups. This includes individuals at all levels, from internal customers to staff whose jobs are being displaced, relocated to a vendor organization, or significantly altered to meet the needs of the new environment.

Program management; With numerous operational and transformational initiatives occurring simultaneously in a quality outsourcing environment, the leader must be a skilled program manager. Among the key skills necessary is the ability to understand the links and interdependencies between projects, allocate and juggle resources, identify potential roadblocks and communicate status to a variety of interested parties. Because one individual may not possess the full complement of skills required, companies should be open to supplementing with capabilities from inside, or even outside, the quality function when managing a large-scale outsourcing effort. For example, deal making and partnership development skills may be more readily found in a business development or alliance management function. And program and project management capabilities might be more common in an IT department or central project office function.

In my experience, two individuals were selected to lead the outsourcing effort. One brought with him a strong sense of the business issues facing the organization and experience in pursuing deals with a number of learning partners. The second person had a background in managing the quality outsourcing effort, and provided a wealth of operational knowledge on developing metrics and running a large and complex outsourcing program. By combining the skills of both individuals, the organization provided the groundwork for setting up an effective outsourcing relationship.

Trawling for project risks

Successful project teams aggressively and systematically seek out and mitigate project risks, from the project inception phase through every phase transition, including deployment. Along the way, new risks are surfaced, prioritized, tracked, and resolved. Each risk is prioritized by its probability and its overall potential impact on the project (a minimal scoring sketch follows the question list below). Good risk management increases productivity, quality, and ultimately user satisfaction. Review and mitigation methods vary on each project, but how are risks discovered? Here are some questions leaders and review authorities can ask:
  1. What are the top ten risks as determined by government, technical, and program management staff? When did you last check your top ten risks?
  2. How do you identify a risk?
  3. How do you resolve a risk?
  4. How much money and time do you have set aside for risk resolution?
  5. What risks would you classify as "show-stoppers," and how did you derive them?
  6. How many risks are in the risk database? How recently did you update the database?
  7. How many risks have been added in the last month? Describe the most recent risk added to the database. When was it added? What was your mitigation plan for it?
  8. Can you name a risk that you had two months ago and describe what you did to mitigate it?
  9. What risks do you expect to mitigate or resolve in the next six months?
  10. Are risks assessed and prioritized in terms of their likelihood of occurrence and their potential impact on the project? Give an example.
  11. Are as many viewpoints as possible (in addition to the project team's) involved in the risk assessment process? Give an example.
  12. If you will not remain for the project's completion, what risks have been identified that will remain after you leave? Are any of them imminent? Will you leave a transition plan?
  13. Pick a risk and explain the risk mitigation plan for it.
  14. What is your top supportability risk?
  15. What percentage of risks impact the final delivery of the system? How did you arrive at that decision?
  16. To date, how many risks have you closed out?
  17. Who in this meeting/briefing is the Risk Officer? Is the role of Risk Officer his/her primary responsibility? If not, what percentage of the Officer's time is devoted to being the project's Risk Officer?
  18. What percentage of your risks have been identified by stakeholders other than the project team?
  19. How are identified risks given project visibility?
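
Many of these questions assume the team keeps a living risk database. As an illustration only, the sketch below shows a minimal risk register in Python; the field names, the 1-to-5 probability and impact scales, and the helper functions are assumptions made for the example, not a prescribed format.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class Risk:
    """One entry in a project risk database (illustrative fields only)."""
    title: str
    probability: int      # assumed scale: 1 (rare) .. 5 (near certain)
    impact: int           # assumed scale: 1 (minor) .. 5 (show-stopper)
    owner: str
    date_added: date
    mitigation_plan: str = ""
    closed: bool = False

    @property
    def exposure(self) -> int:
        # Prioritize by likelihood of occurrence times potential impact.
        return self.probability * self.impact

def top_risks(register: List[Risk], n: int = 10) -> List[Risk]:
    """Return the top-n open risks, highest exposure first (question 1)."""
    open_risks = [r for r in register if not r.closed]
    return sorted(open_risks, key=lambda r: r.exposure, reverse=True)[:n]

def added_since(register: List[Risk], since: date) -> List[Risk]:
    """How many risks have been added recently? (question 7)"""
    return [r for r in register if r.date_added >= since]

def closed_out(register: List[Risk]) -> int:
    """How many risks have been closed to date? (question 16)"""
    return sum(1 for r in register if r.closed)
```

With a register of this shape, questions such as 1, 6, 7, and 16 above become one-line queries rather than guesswork.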

Addressing the common project risks and remedies

While risk management is integral to other project management processes, project definition and planning must come before fundamental risks can be analyzed. Create or obtain the essential foundation documents for your project:
  1. A sponsor-approved charter that outlines your project's schedule, scope, and resource goals (these are targets, not estimates);
  2. A preliminary project plan that clearly describes the work steps (project tasks), dependencies, schedule, and budget;
  3. A list of assumptions; and
  4. A list of stakeholders who will be working, and are qualified to work, on the project.
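
As a minimal sketch of how those foundation documents might be captured in a reviewable form, the Python structures below are illustrative only; the names (Task, ProjectFoundation, charter_goals, and so on) are assumptions for the example, and most teams would keep this information in planning documents rather than code.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Task:
    """One work step from the preliminary project plan."""
    name: str
    duration_days: int
    depends_on: List[str] = field(default_factory=list)

@dataclass
class ProjectFoundation:
    """Container for the four foundation documents described above."""
    charter_goals: Dict[str, str]   # sponsor-approved targets for scope, schedule, resources
    plan: List[Task]                # work steps, dependencies, estimates
    assumptions: List[str]
    stakeholders: List[str]

    def missing_pieces(self) -> List[str]:
        """Flag foundation documents that are still empty before risk analysis begins."""
        gaps = []
        if not self.charter_goals:
            gaps.append("charter targets")
        if not self.plan:
            gaps.append("project plan tasks")
        if not self.assumptions:
            gaps.append("assumptions list")
        if not self.stakeholders:
            gaps.append("stakeholder list")
        return gaps
```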

Consider the common project risks and remedies listed below. Some risks may be unique to the technology, team, or customers for your systems, and these will require special consideration. Many risks, however, are common and might be encountered in slightly different forms on many projects. When risk events occur, they generally threaten one or more aspects of the "triple constraint" that defines your project: scope, schedule, and resources.
  • Scope is what you are trying to accomplish. Scope risks include failure to successfully perform a specific task, as well as more pervasive issues of quality, performance, reliability, and compliance with applicable regulation or policy. Tasks whose work products must comply with specifications or achieve defined outcomes, and whose failure to do so would substantially harm the project, are sources of scope risk.
  • Schedule is when significant events are desired to occur. Schedule risks tend to be found among tasks on the project's critical path, the tasks with the least amount of "slack" or forgiveness if they start late or exceed their schedule estimates. Review the definitions, estimates, and resource requirements of critical-path tasks to identify potential schedule risks.
  • Resources are the people, money, and materials needed to complete the project. Tasks that consume scarce or large amounts of these resources are sources of resource risk.
A brief sketch of computing critical-path slack follows the remedy lists.

Remedies for scope risks:
  • To make early detection easier, decompose complex tasks into smaller tasks with clearly defined work products and quality gates.
  • Shift tasks that use new tools or techniques to occur earlier in the project. Early experience allows earlier detection of problems and refinement of processes.
  • Acknowledge the risk of deferring defect detection. Build in tasks for early reviews and testing of key work products.
  • Recommend trimming borderline functions early. If you can't eliminate or defer a risky component, prototype it as soon as possible to provide early warning of trouble and increased options.
  • If a particular function or feature represents a disproportionate amount of risk, negotiate that component to a subsequent version of the system, or defer implementation of that component (if possible) until other components are successfully completed and integrated.
  • Set reasonable expectations and plan to meet them. Unrealistic goals are the biggest source of project risk.

Remedies for schedule risks:
  • Add contingency time (lags) to the schedule at key points on the critical path to act as a "shock absorber" to dampen normal schedule fluctuations.
  • Consider duplicate parallel activities for high-risk schedule tasks. For example, transmit a proposal prior to a submission deadline by sending two sets of proposals via two couriers using different routes.
  • Break high-risk tasks into smaller pieces that provide early feedback if they are not completed on time.
  • Consider swapping resources so that your most skilled team members work on critical-path tasks. This decreases the potential variability introduced by learning curves or lack of experience.
  • Review estimates and definitions of critical-path tasks. If one item on the critical path slips, all subsequent items come under pressure. Identifying potential problems early can allow reassignment of resources or reconsideration of approach.
  • Look for external dependencies along the critical path (e.g., computer components to be received). Explore paying to expedite delivery to relieve schedule pressure.

Remedies for resource risks:
  • Review back-up procedures. Back up software and data regularly and comprehensively. Attempt to restore from back-ups periodically.
  • Identify alternative sources. Split large orders between two vendors. If one vendor runs into trouble, the other may still provide half of what you need. It may get you better customer service, too.
  • Assign each team member a "buddy" responsible for fulfilling his or her role in the event of a short-term absence.
  • Establish a budget contingency. If you can, set aside some portion (~10%) of the budget for unanticipated tasks and expenses, rather than fully committing all assigned resources.
  • Build/buy spares. If you need one hundred workstations, consider ordering an extra two or three and having them configured at the same time. Hot spares can facilitate adding new staff and recovering quickly from equipment failure.
  • Consider bringing in outside experts to assist with complex tasks and mentor your staff early in the project.
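
Several of the schedule remedies above depend on knowing which tasks sit on the critical path. As one possible sketch, the Python below runs a simple forward and backward pass over a small task graph to compute each task's slack; tasks with zero slack form the critical path and are the first candidates for the schedule-risk remedies. The task names and durations are invented for illustration, and a real project would normally pull this information from its scheduling tool.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Task:  # same shape as the Task sketched earlier
    name: str
    duration_days: int
    depends_on: List[str] = field(default_factory=list)

def compute_slack(tasks: List[Task]) -> Dict[str, int]:
    """Forward/backward pass over the task graph; zero-slack tasks form the critical path."""
    by_name = {t.name: t for t in tasks}

    # Forward pass: earliest finish of each task.
    earliest_finish: Dict[str, int] = {}
    def ef(name: str) -> int:
        if name not in earliest_finish:
            t = by_name[name]
            start = max((ef(d) for d in t.depends_on), default=0)
            earliest_finish[name] = start + t.duration_days
        return earliest_finish[name]
    for t in tasks:
        ef(t.name)

    project_end = max(earliest_finish.values())

    # Backward pass: latest finish that does not delay the project.
    successors = {t.name: [s.name for s in tasks if t.name in s.depends_on] for t in tasks}
    latest_finish: Dict[str, int] = {}
    def lf(name: str) -> int:
        if name not in latest_finish:
            succ = successors[name]
            if not succ:
                latest_finish[name] = project_end
            else:
                latest_finish[name] = min(lf(s) - by_name[s].duration_days for s in succ)
        return latest_finish[name]
    for t in tasks:
        lf(t.name)

    # Slack = latest finish minus earliest finish.
    return {t.name: latest_finish[t.name] - earliest_finish[t.name] for t in tasks}

# Example with invented tasks and durations:
plan = [
    Task("design", 10),
    Task("build", 20, depends_on=["design"]),
    Task("order hardware", 15, depends_on=["design"]),
    Task("integrate", 5, depends_on=["build", "order hardware"]),
]
print(compute_slack(plan))  # {'design': 0, 'build': 0, 'order hardware': 5, 'integrate': 0}
```

In this example the zero-slack chain (design, build, integrate) is the critical path, so those are the tasks to decompose, staff with the most skilled people, and watch for external dependencies.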

These are interesting times for Enterprise Software

While discussions concerning open-source software development tend to focus on the innovative techniques used to produce complex systems by mobilizing highly distributed programming talent, progressive software enterprise leaders note the significance of open-source software as a platform for effective learning and collaboration through apprenticeship. Open-source programmers often start with code developed by others and then develop enhancements for specific environments. As the code is developed, it is posted for review and testing by a broad community of experienced programmers. Programmers in open-source projects learn at four levels: 1) they observe and work with code from other programmers, 2) they observe their own code in action, 3) they get feedback and commentary from other people who execute their code, and 4) they have access to feedback and commentary about code developed by other open-source programmers. Programmers typically begin on the periphery of the platform and advance, by building their skills, to become coaches and mentors. In this way, they structure their own learning environments, pulling in whatever resources are most relevant and timely.

Instead of accepting the usual limits on available resources, open-source programmers constantly seek to expand their range of resources through ad hoc social networks, through the enterprise, or through various open-source platforms. Rather than seeking to dictate the priorities and actions of the stakeholders involved in open-source projects, programmers take the initiative to use the tools and people required to address opportunities as they arise. Open-source platforms harness programmers' passion, commitment, and desire to learn, thereby creating communities that can rapidly improvise and innovate. The open-source world isn't the only niche community where this kind of learning and innovation takes place.

Software enterprise leaders can leverage this highly efficient development model to their advantage through collaboration platforms tailored to the enterprise. Collaboration platforms foster flexible interactions among project stakeholders, building closer relationships between the user community and programmers. They offer an environment for rapid feedback, rich reflection on the results of distributed experimentation, and greater scalability. The results among successful open-source vendors are clear: deep and durable competitive advantage continues to be realized at a time when traditional distinctions are disappearing. In collaborative open networks, distributed teams are highly skilled at autonomously identifying and mobilizing just-in-time resources, including third-party resources, to accomplish extraordinary results.