Sample Quality Plan
1. Purpose and scope
This plan identifies the set of standards, practices, reviews, checkpoints and other quality improvement methods to be implemented for the SampleProject Phase 1 project. The project adheres to UW and Kuali Student QA practices, and UW's Lean-Kanban development process.
This plan applies to the following:
- Software developed and maintained by the University of Washington SampleProject technical team
- Software and systems relied upon to complete the implementation (SDB, Rice, SWS)
- Software project management and development practices
- Necessary education and communication documentation
2. Quality Roles and responsibilities
The project team is responsible for the level of quality of the project deliverables and will participate in quality reviews. Specific responsibilities include:
The following roles hold quality assurance responsibilities:
- Executive project sponsors
- Business Readiness Team
- Technical team members (including Application Integration Services)
- SIS Technical Architect
- Data Decision team
3. Quality checkpoints / deliverable reviews
This project follows the Information Management Software Development Life Cycle and its associated checkpoints and reviews:
- Architecture checkpoint (may be combined with the design checkpoint)
- Design checkpoint
- Production readiness checkpoint
The project follows an iterative approach in which multiple sprints make up a milestone; Milestone 1, for example, is completed in five sprints. Each milestone may include an architecture and design checkpoint. Because deliverables may not be released to production at every milestone, a production readiness checkpoint is held only when a production release is planned. The team decides when these checkpoints occur during the milestone.
Any decision that has a significant impact on the project is documented in JIRA and approved by the "Student Program Technical Coordinating Team."
The technical lead will track Technical Decisions.
3.1 Requirements review
The requirements for the SampleProject are developed and reviewed using Agile techniques: at the beginning of each sprint, the team creates, reviews and prioritizes user stories with high-level acceptance criteria (the validation strategy). The acceptance criteria are reviewed before or during the Specify state by the project manager, the quality manager, and/or the test lead. The test lead uses them as the basis for acceptance tests and more detailed tests, and developers use them for unit tests. Any wireframes developed by the UX designer are also considered requirements and are peer-reviewed during the Specify state.
3.2 Architecture/Design checkpoint
The team reviews the architecture and design (including systems interface and user experience design) informally, tracking decisions from those reviews in JIRA as "decision" issues. Architectural and design issues are discussed with the Student Program Technical Coordinating Team, with additional formal review conducted as agreed between the SampleProject team and the Coordinating Team. Note: the SampleProject architecture is based on the Kuali Student Architecture, and both Kuali Student and Kuali Rice are partners.
The reviews are guided by the architecture and design standards identified in Standards, Practices and Guidelines. This checkpoint determines if security risks have been addressed, if the best solution has been chosen from the alternatives, if all the requirements are addressed in the design and test plan, and if support has been considered.
As development progresses for each milestone, the architecture and designs are peer-reviewed (by at least one other person). Major designs are presented to the technical team, the Coordinating team and to the Kuali Student team for review.
Example of a minor design review on the Kanban board (coding and testing activities omitted):
- User story for the functionality is sized and prioritized during the planning session, and loaded onto the board in the backlog
- A developer moves the user story to 'Specify' when work begins on the design
- Another developer reviews the design, which is reworked if necessary
- Key decisions made during the design are captured in JIRA and mentioned during the standup
- Once the design is acceptable to a second developer, the user story moves to 'Execute' and coding/unit testing begins.
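The Kanban progression above can be sketched as a simple transition check. This is an illustration only: the state names (Backlog, Specify, Execute) come from the list above, but the helper function itself is hypothetical, not a project tool.

```python
# Hypothetical sketch of the Kanban transitions described above.
# Allowed moves: Backlog -> Specify (design work begins) -> Execute
# (coding/unit testing begins, once a second developer accepts the design).
ALLOWED_TRANSITIONS = {
    "Backlog": {"Specify"},
    "Specify": {"Execute"},
    "Execute": set(),
}

def can_move(current: str, target: str) -> bool:
    """Return True if a user story may move from `current` to `target`."""
    return target in ALLOWED_TRANSITIONS.get(current, set())
```

A story cannot skip the design review: moving directly from Backlog to Execute is disallowed.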
The SDLC's Key Quality Questions for Architecture and Design are reviewed at the end of each milestone (for example: Does everyone understand the design? Have solutions to security risks been discussed? Have major decisions been captured?).
3.3 Security reviews
While security and authentication/authorization are discussed during deliverable reviews, the project team also conducts internal reviews of high-risk deliverables against the OWASP Secure Coding Principles for web applications and the OWASP Top Ten most critical web application security flaws (see Standards, Practices and Guidelines).
Authorization and access is tested during integration testing, and revalidated during user acceptance testing.
The quality manager will assist the project team with the UW-IT Application Security Review Process:
- Completing the Web Application Security Self-Assessment Worksheet
- Developing a Security diagram, showing data flows and locations
- Depending on the results of the assessment, conducting a formal security review session with the Business Owners and others.
3.4 Code walkthroughs
Project team members hold informal 'desk check' code reviews for most of the code, including any automated tests, with more formal code walkthroughs for high risk code. High risk code involves new methods, new technology, new design, or restricted/confidential data. High risk deliverables should be identified as early as possible in the milestone.
Code is reviewed against the coding standards identified in Standards, Practices and Guidelines.
The code review tool 'Crucible' is being explored for use by the team.
Kuali Student publishes its own guidelines for coding standards and code reviews.
3.5 Project management/documentation reviews
The product owner, scrum master, and the quality manager will periodically review the project's progress by evaluating:
- progress relative to the overall project plan, scope, deliverables, project budget, and project schedule
- project risks and risk management
- requirements coverage (user story tracking to project deliverables and testing)
- quality issues raised during the quality activities.
Evaluations of the project's development processes will be held during sprint retrospectives and at the end of the project to determine if the processes provide value and how they can be improved.
Other project documents may be reviewed as needed (e.g., support plan, production support documentation).
The project reports to the UW-IT Project Review Board every two weeks.
Issues from these reviews are raised to the stakeholders as needed.
3.6 Test plans and testing
A test plan (or set of plans) will include the overall test strategy for the verification and validation of the software deliverables, the defect tracking processes, test environments, and test roles and responsibilities, using the Test Planning Checklist on the UW Project Management Portal as a guide.
Tests to be run are identified, with detailed test cases prepared for high risk or high priority tests.
Project team members hold informal 'desk check' test case reviews for most of the tests, with more rigorous reviews for high risk areas.
At a minimum, the following types of verification and validation are performed:
- Build, Deploy and Unit testing on a nightly automated basis
- Functional/integration testing (automated and manual interface, end-to-end, security, regression tests)
- Data migration testing
- Performance and load testing
- User acceptance testing
- Documentation/help verification
- Test & production deployment / implementation verification (technical environment validation)
3.7 Production readiness checkpoint
Team members (including appropriate stakeholders and non-project members of UW Information Technology) will review the following project artifacts for completeness and correctness, to determine if the project deliverables are ready to deploy to production.
- Production Readiness Checklist from the PM Portal
- Production readiness recommendation
- Implementation plan
- Support plan
- Production support documentation
User acceptance test results (including any outstanding issues from functional/integration, data migration, and user acceptance testing) are measured against production readiness criteria, to be determined by the project team, the Business Readiness Team, the Product Owners Advisory Group, and the project sponsors. Initial criteria:
- All integration, data migration, and user acceptance tests have been run and all high priority tests have passed
- Load testing results meet the minimum performance thresholds identified in the requirements
- All "Blocker" bugs (SampleProject and Kuali) have been resolved and associated tests rerun successfully
- All unresolved bugs have been documented and include workarounds if needed
- Production environment is available, stable, populated and validated (Kuali and SampleProject components)
- The SampleProject production readiness recommendation has been completed and stakeholders agree
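As an illustration only, the initial criteria above could be captured in a small gating check. The parameter names and the all-must-pass rule below are assumptions for the sketch, not project decisions.

```python
# Hypothetical sketch of a production readiness gate based on the initial
# criteria listed above. All parameter names are illustrative assumptions.
def ready_for_production(high_priority_tests_passed: bool,
                         load_test_meets_thresholds: bool,
                         open_blocker_bugs: int,
                         unresolved_bugs_documented: bool,
                         environment_validated: bool,
                         recommendation_approved: bool) -> bool:
    """Return True only when every initial readiness criterion is met."""
    return (high_priority_tests_passed
            and load_test_meets_thresholds
            and open_blocker_bugs == 0
            and unresolved_bugs_documented
            and environment_validated
            and recommendation_approved)
```

A single unresolved "Blocker" bug, for instance, is enough to hold the release.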
4. Standards, practices and guidelines
The Information Management standards, practices and guidelines, along with the architecture and design standards, naming and coding standards, and source control and build practices identified in the Kuali Student Developer Guide will be followed by this project where appropriate. The project team and the quality manager determine the appropriateness of the guidelines.
Additional standards, guidelines and practices adopted by the project:
4.1 System configuration management practices
4.2 Problem reporting and corrective action practices
4.3 Development Methodology
Quality metrics to measure project deliverable quality will be further defined by the team:
- The ratio of positive vs. negative results from a user satisfaction survey
- The amount of time spent fixing defects after a feature is considered complete
- The amount of time spent fixing defects after implementation
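To make the metrics above concrete, here is a minimal sketch of how each could be computed. The function names and inputs are assumptions; the team has not yet defined the metrics in detail.

```python
# Hypothetical sketch of the two quality metrics listed above.
def survey_ratio(positive: int, negative: int) -> float:
    """Ratio of positive to negative user satisfaction survey responses."""
    if negative == 0:
        raise ValueError("no negative responses; ratio is undefined")
    return positive / negative

def total_fix_hours(defect_fix_hours: list[float]) -> float:
    """Total hours spent fixing defects in a given period (e.g., after a
    feature is considered complete, or after implementation)."""
    return sum(defect_fix_hours)
```

For example, 30 positive and 10 negative responses yield a ratio of 3.0.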