Diagnostic Intervention of a Group Facilitation: A Facilitated, Collaborative, Self-Assessment Using the Criteria for Performance Excellence

December 11, 2004

A paper written for “Introduction to Conflict Resolution and Alternate Dispute Resolution” (Leadership 9610).

Note: To protect client confidentiality, while the specifics discussed in this paper are true and accurate, the names of the team leader, the team recorder, and the unit have been changed.

Meeting Purpose, Issue, and Parties
In the late autumn of 2004, ten members of the Coast Guard Group Northeast command, an operational unit responsible for Coast Guard small boat operations in a large coastal and river environment stretching several hundred miles, met to participate in a two-and-a-half day facilitated, collaborative workshop to assess their command against the Commandant’s Performance Excellence Criteria (2003). During the workshop, two facilitators led the meeting participants through a series of questions designed to help the unit personnel assess their approach and deployment of organizational leadership and management practices based on the Criteria. The Criteria, the Commandant’s Performance Excellence Criteria, is the Coast Guard’s version of the Baldrige Criteria for Performance Excellence (2003) published by the National Institute of Standards and is composed of seven categories. The workshop has a set agenda and includes an introductory piece, a process to review each category, and a concluding piece. The facilitation team leader was Juliet; I served as the co-facilitator; a third person, Dwayne, served as the recorder, typing session notes on a computer.

The meeting was held at a local maritime museum’s meeting space, a cathedral-ceilinged room of nearly 2,000 square feet with windows on three sides opening onto a river vista. The ten participants sat at a U-shaped table with a whiteboard and flip charts at the open end of the “U.” A small table for the facilitation team stood at the other end of the room; it served as a place for process review and was where Dwayne sat to type the notes during the meeting. In addition, six other round tables were spread around the room, providing space for small-group and breakout work. The participants were a fairly mature group of long-time Coast Guard military members. Nine were men; all were roughly 35 to 45 years of age. Two were enlisted members of the Coast Guard; the rest were commissioned officers. The facilitation team prepared the room the morning of the first session. As part of the preparation, we hung posters, including the meeting ground rules and a place for a “parking lot,” on the walls around the meeting space.

Agenda and Meeting Process
The meeting was kicked off by the junior commissioned officer who had served as the project officer in setting up the meeting. She opened the meeting, reviewed a few administrative details, and then introduced Juliet. Juliet began the introductory portion of the collaborative assessment with a broad overview of the process. She then had the participants and the facilitation team members introduce themselves; each person was to provide their name, their role at the unit, and their experience with the Criteria. Following this round-robin introduction, Juliet launched into a 15-slide PowerPoint presentation providing an overview of the Criteria and the collaborative assessment process. She did not refer to the ground rules, the parking lot, or the other posters on the walls.

Once the overview was complete, the group took a short break. Juliet then began facilitating the first category, “Customer and Mission Focus.” The collaborative assessment had been developed by a number of Coast Guard consultants; I was on the initial development team.

The collaborative assessment calls for each category facilitation to include three parts: pre-work, processing the category questions to determine strengths and opportunities for improvement, and post-work. The pre-work is composed of five parts. First, the facilitator reviews the content of the category, including the category questions. Participants are then asked to identify which of the Criteria’s core principles relate to the category, which of the other six categories link directly to it, and which parts of the unit’s “performance factors,” a document required to be completed before the start of the assessment, relate to it. For each of these identifications, the participants must articulate the rationale for their choices. The final piece of pre-work is for the participants to develop a list of characteristics, initiatives, processes, and/or systems the “ideal unit” or “ideal organization” would exhibit within the realm of the particular category. In the second major part of the assessment, the facilitator has the participants answer the questions in the category and then identify unit strengths and opportunities for improvement against the category criteria; the process calls for the participants to reach consensus on three strengths and three opportunities for improvement. After processing the category questions, the post-work has the participants identify “results” which might come from the category content, negative “systemic issues” which are beyond the unit’s circle of influence to correct, and any “proven practices” the unit has within the content of the category. Finally, the facilitator reviews, and clears if possible, any items the participants have placed in the “parking lot.”

Each of the seven categories is completed using the same basic agenda outline. Following the seventh category, the facilitation team helps the participants choose several initiatives and then develop action plans to implement those initiatives. Finally, the facilitation team provides an outbrief to the participants, and the collaborative assessment is thus completed.

Participants were in formal session for 50 to 90 minutes at a stretch, with 10- to 15-minute breaks between formal meeting blocks. In addition, participants had an hour-long lunch break; on the second day, they had a working lunch and spent the hour together talking about upcoming personnel changes and an impending reorganization.

Facilitation Process
As the facilitation team leader, Juliet conducted the initial introductions and facilitated the first category. In my experience, the first morning, covering the introductions and the first category, is usually a bit slow. This assessment was no different. The participants were not used to thinking systematically, nor to spending time together thinking about and discussing important, but not urgent, issues. Coast Guard culture puts a priority on urgency, and both the assessment process and its content are anything but urgent. In this particular assessment, that cultural priority was evident in the types of questions posed to the facilitator and in the conversations around them. Juliet did not, however, provide what I thought was a satisfactory “business case” for the assessment or the Criteria. Another credibility issue came to light when Juliet cited the “performance factors” document and the participants determined that the copy she had provided as a handout was incomplete. In addition, during this category Juliet mentioned a reference pocket guide by Mark Graham Brown (2003) which the participants did not have. While Juliet allowed silence to hang, using it as a tool for encouraging participation, all the work for the category was done with the ten participants engaged together around the table. At the conclusion of the first category, Juliet had captured 19 strengths and seven opportunities for improvement, all written on the easel paper at the front of the room. To narrow the field of 26 items, Juliet attempted to have the group reach consensus on the most important issues. She did not, however, use facilitative tools such as the affinity diagram or multi-voting, as provided in the Coast Guard’s Process Improvement Guide (1997), and she allowed the participants to wander conversationally until the top five were identified.

At the end of the entire assessment process, each participant completed a two-page survey indicating their satisfaction with the process and the facilitation team. One participant noted on the form:

A better explanation of the CPC process at the unit level at the beginning of the session would have been helpful. The info glossed over the program at a macro/CG-wide level, but we need a better understanding at the unit level of what we would do, why it was important, what it would do for us, etc. Peter helped explain this better when he introduced a later category after lunch the first day. (Coast Guard Quality Performance Consultants, 2004)

Having noticed this engagement problem, I sought to take a structured approach to the process and to use various tools and techniques to increase participation and buy-in from the participants. In addition, I felt that by teaching an overarching model, I might put the purpose of the meeting into perspective.

In starting the category work, I did a couple of things differently from Juliet. First, I posted a sheet with the category agenda on it; my opening task was to review that agenda and to explain the use of the parking lot. For the first four items of the pre-work, I had the participants work in pairs and then report out to the entire group. For the last pre-work item, identifying characteristics of the ideal unit, I used silent brainstorming: each participant wrote as many ideas as they could come up with, one per “yellow sticky.” I then posted these and read them aloud. For the question review, I assigned each pair two questions and gave them ten minutes to discuss the questions and come up with answers. One volunteer from each pair then reported to the larger group. After completing all the questions, I asked each participant to identify no more than one strength and one opportunity for improvement within the context of the category and, during the break between categories, to write their responses on the flip charts at the front of the room.

Following the second category, Juliet began the third. Taking a cue from my example, she used pairs during the pre-work portion. The day ended before the group got to processing the category questions.

Facilitator Interventions
During the ride from the meeting facility to the hotel, the facilitation team discussed the day and the facilitation process. Juliet was concerned with the pace and content of the introductory piece. Dwayne, having never participated in a facilitated, Baldrige-based collaborative assessment, asked questions about both content and process. The facilitation team also met for dinner and continued to discuss methods for involving the participants. I noted that I thought the large-group discussion wasn’t particularly effective at drawing out participation; Juliet noted that in one prior group she had worked with, pairing participants had not worked, as the participants spent the time talking off-point rather than dealing with the issues or questions provided for the pair work.

As the co-facilitator and not the team lead, I was willing to offer suggestions and to facilitate by example, but I did not feel comfortable directing facilitation method. Indeed, even when I am in the lead facilitator role, I usually coach and lead by example rather than direct.

The second day’s session began with a brief hello from Juliet and then went straight to work. She divided the participants into three groups, choosing the groups to distribute personalities and organizational roles. She then assigned each group a single question from the category and told them to identify two strengths and two opportunities for improvement for their question. She gave them a 20-minute limit, provided each group with a large sheet of paper and a marker, and directed them to spread out in the room to work. After 20 minutes, Juliet brought them back and asked each group to brief out. She allowed each group’s presenter to make the presentation as they saw fit: one presenter sat at the table, another stood at the top of the “U” after taping the flip-chart paper to the whiteboard, and the third stood at the top of the “U” and wrote on the whiteboard. Juliet then repeated the process until all the questions were covered.

This process engaged the participants and allowed for a variety of styles. The small groups remained focused as Juliet occasionally circulated around the room observing the groups at work and answering content questions. The process did take a bit longer than usual.

Following my facilitation of a category, Juliet returned for the fifth category, “Human Resource Focus,” a fairly long category with ten questions. She processed the pre-work as she had during her previous turn before the group until she reached the “ideal unit” portion. For this segment, she had each participant draw a slip of paper, numbered from 1 to 10, from a cup. Juliet then asked the participants to silently brainstorm what the ideal unit would look like or do with regard to the category question matching the number on their slip. She asked them to think of as many items as they could: how would the ideal unit answer that specific category question? After several minutes, Juliet called time; she then told them that as the group answered each question during the question-processing portion of the category, the person holding the corresponding number would provide their additional suggestions as to what the ideal unit would do. I thought this was a novel way to engage the participants.

Juliet then began to process the questions with the entire group, capturing the strengths and opportunities for improvement on two flip charts at the top of the “U,” one for strengths and the other for opportunities for improvement. After each question had been answered, she had the “ideal unit person” offer their thoughts.

During the processing of one of the questions, the commanding officer of the unit began to dominate the conversation. The domination was so pronounced that Dwayne passed me a note asking what I would do when a commanding officer takes over the conversation. I wrote back that I would either shut him down right away, talk to him off-line, or do nothing. Juliet allowed the conversation to run on; after the two senior enlisted members made a comment, Juliet stopped the conversation by saying it was time for lunch (and it was).

There is a tension between allowing people to talk in meetings and staying on track. What I had not seen from the back of the room, but what Juliet told me over lunch, was that the senior enlisted members were champing at the bit to speak, and she did not want to end the conversation until they had had a chance to say their piece. Juliet understood the need for the senior enlisted personnel to be heard and understood the cultural significance of their input as the voice of the enlisted members.

Interestingly, Juliet had facilitated the first seven questions in the standard form with the larger group. I had noticed a lack of energy, as there had been during the first category’s question period. Over lunch she asked for feedback; I noted that when we broke the participants into groups there was more energy and things moved more quickly, or at least seemed to. Certainly, in pairs and groups the participants were more engaged. Juliet noted that this fifth category was one she did not usually facilitate and that she was less familiar with its content. She had not thought of breaking into groups, even though she had done so during her last block of facilitation. Because she wasn’t familiar with the content, she had frozen on process and reverted to the very basics.

For the last three questions, Juliet broke the participants into randomly assigned groups and had each group discuss one of the remaining questions. Energy and conversation increased.

Participant Feedback to Facilitators
At the conclusion of the two-and-a-half-day workshop, each participant was given a two-page survey to determine satisfaction in terms of the purpose of the workshop (to educate participants about the Criteria and to assess the unit against the Criteria) and satisfaction with the facilitation team. Eight of the ten respondents indicated the time spent on the process was “just right”; the other two said it was “too long.” In terms of return on investment (“Based on the time and resources invested, what do you believe your return on investment will be?”), three respondents said 1x, two said 2x, three said 3x, and two indicated the return would be more than 3x.

The participants also had an opportunity to evaluate the competence of the lead facilitator and the co-facilitator. The responses from the participants are indicated in Table 1.

As shown in Table 1, both facilitators received a preponderance of “master” ratings and no ratings below “practiced.” Juliet received 14 “practiced” ratings and 46 “master” ratings, while I received ten and 50, respectively. While the terms novice, practiced, and master were not defined, all participants had experience attending facilitated meetings and working with facilitators. Of particular interest to me were the responses for the characteristic of being “courteous and respectful.” While Juliet received ten “master” responses, I received only eight. This, I believe, is an accurate assessment. As I noted to Dwayne when he asked about handling an overbearing commanding officer, I actually consider shutting the individual down a possible intervention. I am direct and sometimes abrupt, and I can see not being assessed as a master at being respectful; while I always use “sir” and “ma’am,” I have been known to have an “edge” and a sense of sarcasm that is less than courteous. Juliet is another matter, however: she is courteous and respectful no matter the situation.

Another question on the survey, a referral-business indicator, asked whether the participant would refer this facilitator to another unit or leader. All ten respondents answered “yes.”

Intervention Success
Overall, this workshop was a success, in large measure because of the facilitation by both Juliet and me. As Schwarz (1994) notes, working with another facilitator can be challenging. While Juliet and I have different styles and personality types, we are complementary when it comes to facilitation. As Schwarz notes, “cofacilitation can be effective to the extent that the facilitators’ orientations are either congruent or complementary” (p. 211). We are not competitive, but seek to learn from each other. And when we work together, we have a clear definition of roles and responsibilities.

Schwarz (1994) also defines nine types of interventions (p. 123). While neither Juliet nor I did much “reframing,” Schwarz’s ninth intervention, we used the other eight fairly often, generally without consciously noting the type of intervention we were using. Both of us were able to focus on process while keeping an ear toward content, as we were also the technical content experts with regard to the Criteria.

Certainly, the facilitation was not perfect, but as co-facilitators we did a good job of adjusting on the fly and keeping the participants on task and on message, providing them with an opportunity both to learn about the Criteria and to assess their unit against it. All three members of the facilitation team learned new facilitation techniques and were able to grow as facilitators.

Table 1: On-Site Facilitation Competence
Cell values are the number of participant responses in each block. n = 10

                                                       Juliet                      Peter
                                               Novice  Practiced  Master   Novice  Practiced  Master
Knowledge elements
  Exhibits subject matter expertise               0        1         9        0        0        10
  Ability to apply theory to practice in
    client’s world of work                        0        5         5        0        3         7
Oral communication elements
  Ability to keep group interested and engaged    0        3         7        0        2         8
  Is courteous and respectful                     0        0        10        0        2         8
Facilitation elements
  Ability to keep the group focused on
    desired outcome                               0        2         8        0        2         8
  Ability to effectively manage the
    group’s time                                  0        3         7        0        1         9

References

Brown, M.G. (2003). The Pocket Guide to the Baldrige Award Criteria (9th ed.). New York: Productivity, Inc.

Coast Guard Quality Performance Consultants. (2004). [Unit survey responses for Commandant’s Performance Challenge collaborative assessments]. Unpublished raw data.

National Institute of Standards and Technology. (2003). Criteria for Performance Excellence. Gaithersburg, MD: Author.

Schwarz, R. M. (1994). The Skilled Facilitator: Practical wisdom for developing effective groups. San Francisco: Jossey-Bass Publishers.

U.S. Coast Guard. (2003). Commandant’s Performance Excellence Criteria Guidebook. Washington, DC: Author.

U.S. Coast Guard Quality Center. (1997). Process Improvement Guide (3rd ed.). Petaluma, CA: Author.