<--- Score
72. What are the dynamics of the communication plan?
<--- Score
73. What are the compelling stakeholder reasons for embarking on Computer Based Training?
<--- Score
74. Do you have a Computer Based Training success story or case study ready to tell and share?
<--- Score
75. What is the definition of success?
<--- Score
76. Are customer(s) identified and segmented according to their different needs and requirements?
<--- Score
77. Are task requirements clearly defined?
<--- Score
78. If substitutes have been appointed, have they been briefed on the Computer Based Training goals and received regular communications as to the progress to date?
<--- Score
79. What is the scope of the Computer Based Training work?
<--- Score
80. Have the customer(s) been identified?
<--- Score
81. Are accountability and ownership for Computer Based Training clearly defined?
<--- Score
82. Do you all define Computer Based Training in the same way?
<--- Score
83. Are different versions of process maps needed to account for the different types of inputs?
<--- Score
84. What is a worst-case scenario for losses?
<--- Score
85. When are meeting minutes sent out? Who is on the distribution list?
<--- Score
86. Is there any additional Computer Based Training definition of success?
<--- Score
87. How will the Computer Based Training team and the group measure complete success of Computer Based Training?
<--- Score
88. How do you hand over Computer Based Training context?
<--- Score
89. Have all of the relationships been defined properly?
<--- Score
90. Is the current ‘as is’ process being followed? If not, what are the discrepancies?
<--- Score
91. Why are you doing Computer Based Training and what is the scope?
<--- Score
92. Who are the Computer Based Training improvement team members, including Management Leads and Coaches?
<--- Score
93. How do you manage scope?
<--- Score
94. Are the Computer Based Training requirements testable?
<--- Score
95. Is there regularly 100% attendance at the team meetings? If not, have appointed substitutes attended to preserve cross-functionality and full representation?
<--- Score
96. How do you manage changes in Computer Based Training requirements?
<--- Score
97. What information do you gather?
<--- Score
98. Is the team adequately staffed with the desired cross-functionality? If not, what additional resources are available to the team?
<--- Score
99. Is special Computer Based Training user knowledge required?
<--- Score
100. Has anyone else (internal or external to the group) attempted to solve this problem or a similar one before? If so, what knowledge can be leveraged from these previous efforts?
<--- Score
101. What Computer Based Training requirements should be gathered?
<--- Score
102. Does the scope remain the same?
<--- Score
103. What scope should be assessed?
<--- Score
104. What are the tasks and definitions?
<--- Score
105. What is out-of-scope initially?
<--- Score
106. Are approval levels defined for contracts and supplements to contracts?
<--- Score
107. Have the customer needs been translated into specific, measurable requirements? How?
<--- Score
108. What information should you gather?
<--- Score
109. What is the political context: who holds power?
<--- Score
110. How do you gather the stories?
<--- Score
111. Are the required metrics defined? If so, what are they?
<--- Score
112. What are the requirements for audit information?
<--- Score
113. Has the improvement team collected the ‘voice of the customer’ (obtained feedback – qualitative and quantitative)?
<--- Score
114. What critical content must be communicated – who, what, when, where, and how?
<--- Score
115. How do you manage unclear Computer Based Training requirements?
<--- Score
116. Has any Computer Based Training requirement not been met?
<--- Score
117. Where can you gather more information?
<--- Score
118. Have all Computer Based Training requirements been defined first? How?
<--- Score
119. Are resources adequate for the scope?
<--- Score
120. Are all requirements met?
<--- Score
121. Are the Computer Based Training requirements complete?
<--- Score
122. Has a high-level ‘as is’ process map been completed, verified and validated?
<--- Score
123. Is the improvement team aware of the different versions of a process: what they think it is vs. what it actually is vs. what it should be vs. what it could be?
<--- Score
124. Is there a completed, verified, and validated high-level ‘as is’ (not ‘should be’ or ‘could be’) stakeholder process map?
<--- Score
125. How do you gather requirements?
<--- Score
126. Is there