<--- Score
10. Is scope creep really all bad news?
<--- Score
11. What are the compelling stakeholder reasons for embarking on Software reliability testing?
<--- Score
12. How does the Software reliability testing manager ensure against scope creep?
<--- Score
13. What are the tasks and definitions?
<--- Score
14. How do you hand over Software reliability testing context?
<--- Score
15. Is the team equipped with available and reliable resources?
<--- Score
16. Does the scope remain the same?
<--- Score
17. Is Software reliability testing currently on schedule according to the plan?
<--- Score
18. Is the work to date meeting requirements?
<--- Score
19. Are accountability and ownership for Software reliability testing clearly defined?
<--- Score
20. What information do you gather?
<--- Score
21. Why are you doing Software reliability testing and what is the scope?
<--- Score
22. What is the scope of the Software reliability testing effort?
<--- Score
23. What are the requirements for audit information?
<--- Score
24. What was the context?
<--- Score
25. What defines best in class?
<--- Score
26. What intelligence can you gather?
<--- Score
27. What are the Software reliability testing tasks and definitions?
<--- Score
28. Has anyone else (internal or external to the group) attempted to solve this problem or a similar one before? If so, what knowledge can be leveraged from these previous efforts?
<--- Score
29. Will team members regularly document their Software reliability testing work?
<--- Score
30. Is there a critical path to deliver Software reliability testing results?
<--- Score
31. What scope to assess?
<--- Score
32. Are the Software reliability testing requirements complete?
<--- Score
33. Are roles and responsibilities formally defined?
<--- Score
34. The political context: who holds power?
<--- Score
35. Is there a clear Software reliability testing case definition?
<--- Score
36. Have the customer needs been translated into specific, measurable requirements? How?
<--- Score
37. What is a worst-case scenario for losses?
<--- Score
38. What is the context?
<--- Score
39. When are meeting minutes sent out? Who is on the distribution list?
<--- Score
40. How did the Software reliability testing manager receive input into the development of the Software reliability testing improvement plan and the estimated completion dates/times of each activity?
<--- Score
41. Who is gathering information?
<--- Score
42. How and when will the baselines be defined?
<--- Score
43. What are the Roles and Responsibilities for each team member and the team's leadership? Where is this documented?
<--- Score
44. What system do you use for gathering Software reliability testing information?
<--- Score
45. What are the Software reliability testing use cases?
<--- Score
46. Are resources adequate for the scope?
<--- Score
47. Who defines (or who defined) the rules and roles?
<--- Score
48. What is out-of-scope initially?
<--- Score
49. How do you catch Software reliability testing definition inconsistencies?
<--- Score
50. How was the ‘as is’ process map developed, reviewed, verified and validated?
<--- Score
51. What is the definition of success?
<--- Score
52. How are consistent Software reliability testing definitions important?
<--- Score
53. What would be the goal or target for a Software reliability testing’s improvement team?
<--- Score
54. Is data collected and displayed to better understand customers' critical needs and requirements?
<--- Score
55. Is special Software reliability testing user knowledge required?
<--- Score
56. Are different versions of process maps needed to account for the different types of inputs?
<--- Score
57. When is/was the Software reliability testing start date?
<--- Score
58. How do you manage changes in Software reliability testing requirements?
<--- Score
59. Does the team have regular meetings?
<--- Score
60. Are task requirements clearly defined?
<--- Score
61. What are (control) requirements for Software reliability testing Information?
<--- Score
62. How do you gather requirements?
<--- Score
63. Has a project plan, Gantt chart, or similar been developed/completed?
<--- Score
64. Do the problem and goal statements meet the SMART criteria (specific, measurable, attainable, relevant, and time-bound)?
<--- Score
65. What critical content must be communicated?