Half-Day Tutorials: Tuesday, April 21st
An Overview of CMMI
Sree Yellayi, Siemens Corporate Research
The Capability Maturity Model Integration (CMMI) helps organizations manage their processes in a structured manner, making a positive impact on product quality and the organization's business objectives. CMMI follows a methodical approach to process improvement. It enables an organization to implement proven software engineering practices, be assessed against its existing capabilities, and take action to improve those capabilities. With a CMMI implementation, the organization, the project, and the individuals are better enabled to deliver the desired product. This tutorial will provide an overview of the CMMI model, its terminology, maturity levels, and process areas. You will explore how to use the CMMI model and assessments to identify process gaps and to prepare a roadmap for process improvement, with or without the goal of seeking a formal CMMI level designation.
- Understand the basics and structure of the staged representation of the CMMI for Development model
- Describe the CMMI maturity levels, process areas, common features, and generic practices
- Examine the use of CMMI assessments to identify process gaps
About the instructor...
Sree Yellayi is a process improvement specialist with Siemens Corporate Research in Princeton, New Jersey. With 14 years of experience in the IT industry, Sree has been associated with ISO 9001, SW-CMM, and CMMI for more than 12 years in various capacities as a practitioner, consultant, and coach. He holds bachelor's and master's degrees in computer science and is an SEI Authorized CMMI Instructor and SCAMPI Lead Appraiser. Sree has taught over 40 SEI-authorized CMMI classes since 2003 in the United States, Japan, Malaysia, and the United Kingdom.
Communicate to Influence
Toby Weber, Dynamic Transitions
Technology professionals have great ideas, clever solutions, and innovative designs and approaches. So what? If they can't communicate powerfully, collaborate effectively, and influence others to value their ideas, assessments, and solutions, what they know will not matter. This workshop presents the components of powerful communication and influence. Understand differences in behavioral styles along with generational differences. Learn how to translate those differences into strategies and language that honor others for who they are and maximize your ability to influence them. Develop the ability to "people read," assess your influencing target, and plan your influencing strategy. Learn skills, tools, and strategies for building trust, motivating, influencing, and enrolling others at every level. Understand what it takes to get others on board and fully committed to moving forward. Come prepared to apply each of the skills and tools to your own influencing challenges.
- Understand that people are different, resulting in different priorities, motivations, and ways of communicating
- Learn to communicate with others as they are; do not expect them to think and communicate just as you do
- Assess your influencing "audience" and target your communication accordingly.
About the instructor...
Toby Weber is President of Dynamic Transitions, Consultants for Organizational and Personal Effectiveness. Toby specializes in organizational development, coaching, and training projects designed to enhance her client's overall productivity, growth, and the effective management of change. She has a proven track record of collaboration with senior corporate and technical leadership to develop and implement innovative solutions to organizational change challenges. Along with her expertise in organizational and process assessment and re-design, Toby's clients value her ability to mediate conflicts as well as coach them in conflict resolution and performance management strategies.
Requirements Based Estimating & Scheduling Best Practices
Steven Rakitin, Software Quality Consulting, Inc.
The increasing demand for complex software, coupled with the inability of many organizations to write clear, concise requirements, results in increased development costs, increased rework, and lower quality products, all of which have a negative impact on your company's bottom line. Since software project teams are often unable to accurately estimate and schedule the work they need to perform, management frequently imposes the delivery date. With an end date mandated, project teams must "schedule backwards." Schedules developed this way are always unrealistic since task duration is estimated based on time available rather than time required. This interactive workshop presents the skills necessary to learn how to under-commit and over-deliver. Steve will focus on the importance of writing good requirements and the specific skills needed to accomplish this. He will discuss basic estimating and scheduling skills and review several requirements-based best practices, including the Wideband Delphi Method and the Yellow Sticky Method. A brief illustrative sketch of a Wideband Delphi round follows the learning objectives below.
- Learn how to write better requirements
- Understand ways to accurately estimate tasks based on those requirements
- Explore the development of accurate schedules based on these estimates
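To give a flavor of how a Wideband Delphi round proceeds, here is a minimal Python sketch, not the tutorial's own material: experts estimate a task anonymously, the spread of estimates is reviewed, and rounds repeat until estimates converge. The 10% convergence tolerance and the person-day figures are assumptions made for illustration.

```python
# Summarize one anonymous Wideband Delphi estimation round and decide
# whether the team's estimates have converged (illustrative sketch).

from statistics import mean, stdev

def delphi_round(estimates: list[float], tolerance: float = 0.10) -> dict:
    """Summarize one round of anonymous task estimates (person-days).

    Convergence is defined here as the standard deviation falling below
    `tolerance` of the mean -- a simplifying assumption for this sketch.
    """
    avg = mean(estimates)
    spread = stdev(estimates) if len(estimates) > 1 else 0.0
    return {
        "mean": round(avg, 1),
        "spread": round(spread, 1),
        "converged": spread <= tolerance * avg,
    }

# Round 1: wide disagreement -> discuss assumptions, estimate again.
print(delphi_round([5, 12, 8, 20]))   # {'mean': 11.2, 'spread': 6.5, 'converged': False}
# Round 2: estimates converge -> record the task estimate.
print(delphi_round([9, 10, 11, 10]))  # {'mean': 10.0, 'spread': 0.8, 'converged': True}
```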
About the instructor...
Steve Rakitin has over 30 years of experience as a software engineer and software quality manager. He frequently speaks on topics related to software development and software quality at conferences worldwide. He has published several papers on the subject of software quality and has written a book titled Software Verification & Validation for Practitioners and Managers. As President of Software Quality Consulting, Inc., he works with clients who are interested in improving the predictability of their development processes and the quality of their products.
Combinatorial Testing Explained
Peter Zimmerer, Siemens AG
Good test designs often require testing many different sets of valid and invalid input parameters, hardware and software environments, and system conditions. This results in a combinatorial explosion of test cases. For example, testing different combinations of possible hardware and software components on a typical PC could involve hundreds or even thousands of possible tests. The classic question for effective testing is always, "Given limited time and resources, which of the combinations should be tested?" This tutorial describes the underlying problems and challenges in test case design for combinatorial testing in different application scenarios. Peter will explain possible solutions to this problem using a variety of testing techniques. He will give an overview of supporting tools, free as well as commercial, including their features, characteristics, and usage scenarios. Finally, successful experiences from real-world projects using the right tools will be presented, clearly demonstrating the necessity and benefits of the proposed testing techniques. An illustrative pairwise selection sketch follows the learning objectives below.
- Gain an awareness of the design dilemmas caused by the combinatorial explosion of test conditions
- Learn about testing techniques absolutely required for testing combinations in real-world projects
- Understand the array of tools available for supporting combinatorial testing
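As a taste of the underlying technique, the following is a minimal Python sketch of greedy pairwise (2-way) test selection over three hypothetical PC configuration parameters; real projects would use the dedicated tools the tutorial surveys rather than this toy generator.

```python
# Greedy pairwise (2-way) test selection over hypothetical parameters.
# Exhaustive testing needs 3 * 3 * 2 = 18 cases; pairwise covers every
# parameter-value *pair* in far fewer tests.

from itertools import product, combinations

params = {
    "os":      ["Windows", "Linux", "macOS"],
    "browser": ["Firefox", "Chrome", "IE"],
    "ram":     ["2GB", "4GB"],
}

def pairs_of(case: dict) -> set:
    """All parameter-value pairs covered by one test case."""
    return set(combinations(sorted(case.items()), 2))

# Every pair that must be covered at least once.
uncovered = set()
for values in product(*params.values()):
    uncovered |= pairs_of(dict(zip(params, values)))

# Greedily pick the candidate covering the most still-uncovered pairs.
suite = []
while uncovered:
    best = max(
        (dict(zip(params, v)) for v in product(*params.values())),
        key=lambda case: len(pairs_of(case) & uncovered),
    )
    suite.append(best)
    uncovered -= pairs_of(best)

print(f"{len(suite)} pairwise tests instead of 18 exhaustive ones")
for case in suite:
    print(case)
```

The greedy loop repeatedly picks the configuration that covers the most still-uncovered pairs; for this toy parameter model it needs only 9 of the 18 exhaustive cases while still exercising every parameter-value pair at least once.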
About the instructor...
Peter Zimmerer is a Principal Engineer at Siemens AG, Corporate Technology, in Munich, Germany. He received his M.Sc. degree (Diplominformatiker) in computer science from the University of Stuttgart. Peter is an ISTQB™ Certified Tester, Full Advanced Level. For more than fifteen years, Peter has been working in the field of software testing and quality engineering for object-oriented, distributed, component-based, and embedded software. He was involved in the design and development of various Siemens in-house testing tools for component and integration testing. At Siemens, he performs consulting on testing strategies, methods, processes, automation, and tools in real-world projects and is responsible for the research activities in this area. He is co-author of several journal and conference contributions and a frequent speaker at international conferences.
Show ROI: Mine the Diamonds in your Data
Rebecca Staton-Reinstein, PhD, Advantage Leadership, Inc.
How do we get management on board? This perennial question from quality professionals has one overwhelming response: show them the money! As a quality professional you sit on a data goldmine. If you test, perform any kind of reviews, have a help-desk log, or run a data center, you have access to critical defect data. These data are waiting for your analysis to find hidden costs and release value. Learn to analyze and monetize the data to make a compelling presentation to your management. Participate in a practical, hands-on workshop to learn to demonstrate the ROI from your quality efforts, whether testing, training, standards development, best practices implementation, or process improvements. Learn to lock in your gains with persuasive measurement reporting using an Impact Tracker and Dashboard. Become management's partner in improving efficiency and effectiveness in tough times. Recession-proof your quality initiatives by showing management how to eliminate or avoid millions of dollars of costs. An illustrative defect-cost sketch follows the learning objectives below.
- Discover the hidden data assets in your operation and develop your own defect treasure map
- Understand how to analyze and put a dollar value on defect, effort, and results data
- Lock in gains, demonstrate ROI, and establish significant ongoing measurement
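As an illustration of the kind of analysis the tutorial covers, here is a small Python sketch that assigns a loaded cost to defects by detection phase and expresses a process change as ROI. All cost and defect figures below are invented.

```python
# Monetize defect data: price defects by the phase in which they were
# found, then express a quality initiative as return on investment.

# Assumed average cost to fix one defect, by detection phase.
COST_PER_DEFECT = {"review": 100, "test": 1_000, "production": 10_000}

def defect_cost(counts: dict) -> int:
    """Total cost of the defects found in each phase."""
    return sum(COST_PER_DEFECT[phase] * n for phase, n in counts.items())

before = {"review": 20, "test": 150, "production": 40}   # last release
after  = {"review": 120, "test": 90, "production": 8}    # with inspections

savings = defect_cost(before) - defect_cost(after)
investment = 60_000  # assumed cost of the inspection program
roi = (savings - investment) / investment

print(f"Cost before: ${defect_cost(before):,}")   # $552,000
print(f"Cost after:  ${defect_cost(after):,}")    # $182,000
print(f"ROI: {roi:.0%}")                          # 517%
```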
About the instructor...
As President of Advantage Leadership, Inc., Rebecca Staton-Reinstein, Ph.D., CSQA, works with companies to improve the quality and productivity of software-related efforts. She helps IT organizations assess the current situation and create strategic plans to engineer successful processes, establish business-oriented measurement, and improve bottom-line results. She works with both technical and managerial staff to discover hidden costs and demonstrate ROI. Rebecca has successfully established three QA organizations, has an international client base, and is the author of books on improving software quality and strategic planning, including Get Great Requirements, The Hard Job of Making Software Work: Building the QA Function Step-by-Step, Success Planning: A 'How-To' Guide for Strategic Planning, and Conventional Wisdom: How Today's Leaders Plan, Perform, and Progress Like the Founding Fathers.
Leading Testing: Optimizing the Mechanism and Organism of a Test Team
Phillip G. Armour, Corvus International Inc.
Overcoming Requirements Based Testing Hidden Pitfalls
Robin F. Goldsmith, JD, Go Pro Management, Inc.
Most testers rely extensively on requirements based tests. Requirements based testing does not refer to a specific proprietary test design technique; rather, it is a generic term describing tests intended to demonstrate strict conformity to system requirements. There are, however, a number of pitfalls associated with this methodology. For example, there are the obvious limitations imposed by inadequately defined requirements. In this interactive session, Robin reveals a number of traps inherent in requirements based testing, including distinguishing business requirements from system requirements, assessing the extent to which the requirements are complete, the premise of one test per requirement, the appropriate level of test case detail, and the inclusion of requirements based unit tests by developers. Join Robin and learn to avoid the hazards that compromise the thoroughness of requirements based testing. A short sketch illustrating the one-test-per-requirement trap follows the learning objectives below.
- Learn to identify strengths and often unrecognized weaknesses of requirements based tests
- Understand the importance of testing based on business, as well as system, requirements
- Discover how to identify more of the necessary but often overlooked tests
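To make the one-test-per-requirement trap concrete, here is a small Python sketch built around a made-up requirement ("accept passwords of 8 to 20 characters") and a deliberately buggy implementation: the single conforming test passes, while boundary-value tests derived from the same requirement expose the defect.

```python
# Why "one test per requirement" under-tests: a lone mid-range value
# passes even though the implementation has an off-by-one defect.

def accepts_password(pw: str) -> bool:
    # Hypothetical implementation with a classic boundary defect:
    # the requirement says 8 to 20 characters inclusive.
    return 8 < len(pw) <= 20

# The single "requirements based" test passes and the defect ships:
assert accepts_password("a" * 12)

# Boundary-value tests derived from the *same* requirement catch it:
cases = {7: False, 8: True, 20: True, 21: False}
for length, expected in cases.items():
    actual = accepts_password("a" * length)
    status = "ok" if actual == expected else "DEFECT"
    print(f"len={length}: expected {expected}, got {actual} -> {status}")
```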
About the instructor...
Robin F. Goldsmith, JD, has been President of the consultancy Go Pro Management, Inc., since 1982. He works directly with and trains business and systems professionals in requirements, quality and testing, metrics, ROI, software acquisition, and project and process management. Previously he was a developer, systems programmer/DBA/QA, and project leader with the City of Cleveland, leading financial institutions, and a "Big 4" consulting firm. A member of the IEEE Software Test Documentation Std. 829-2008 Revision Committee, and formerly International Vice President of the Association for Systems Management and Executive Editor of the Journal of Systems Management, Robin is the author of the Proactive Testing methodology and the recent Artech House book, Discovering REAL Business Requirements for Software Project Success.
Leveraging Reusable Tests and Data
Karen Johns, Mosaic, Inc.
Reuse is a recognized and valuable capability for test automation and software development. The benefits of reuse for manual testing, however, are often overlooked and therefore not realized. Opportunities to build an effective regression test bed, achieve practical test automation, enable more effective use of outsourced resources, and increase the efficiency of the entire test team are lost. This tutorial will give you the techniques to leverage the benefits of reuse in your testing. You will explore the opportunities for reuse of both tests and test data. A process and forms for defining test cases and test data as reusable assets will be presented, along with exercises to provide hands-on experience with the process. Actual experiences from the field will also be discussed. A small data-driven test sketch follows the learning objectives below.
- Learn to develop reusable manual tests positioned for automation
- Gain knowledge to utilize test data in a shared and reusable manner
- Understand how reusable tests can be practically automated
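As one illustration of the idea, here is a minimal Python sketch, using pytest's parametrize feature and invented names (login, LOGIN_CASES), of how a single reusable test procedure can be driven by a shared test-data table that manual and automated testing could both draw on.

```python
# One reusable test procedure driven by a shared, reusable data table.

import pytest

# Shared test data -- in practice this might live in a CSV file or a
# database so manual and automated tests can reuse the same rows.
LOGIN_CASES = [
    ("alice", "correct-pw", True),    # valid credentials
    ("alice", "wrong-pw",   False),   # bad password
    ("",      "any-pw",     False),   # missing user name
]

def login(user: str, password: str) -> bool:
    """Stand-in for the system under test."""
    return user == "alice" and password == "correct-pw"

@pytest.mark.parametrize("user,password,expected", LOGIN_CASES)
def test_login(user, password, expected):
    # The test logic is written once; each data row is a distinct case.
    assert login(user, password) == expected
```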
About the instructor...
Karen Johns is a managing consultant with Mosaic, Inc., a company that specializes in helping organizations manage the risk of developing, maintaining, and installing their mission-critical systems. Karen is the product manager and chief developer for Mosaic's methodology products and was instrumental in integrating Mosaic's object-driven test automation process with their field-proven testing methodology. Bringing over 30 years of experience, Karen has expertise in information systems quality assurance with an emphasis on software testing, test automation, software process improvement, measurement, project management, and training. Karen has presented at QAI and Better Software conferences and at local CSPIN and CQAA gatherings.
Risk Based Testing: Analysis and Strategy
Clyneice Chaney, Quality Squared
Trying to meet ever tighter deadlines while still delivering products that meet customer requirements is the greatest challenge testers face today. Formulating answers to age-old questions like "What should we test?" and "How long do we test?" requires different strategies in fast-paced environments. Risk based testing analysis and strategy development give testers the means to meet deadlines while answering such questions with confidence. This tutorial focuses on identifying and prioritizing risks. You will discuss how to develop a test strategy designed to help testers provide the input management needs to make informed product release decisions. You will learn risk analysis and reduction techniques relevant to software testing, as well as test design strategy based on that risk analysis and reduction. An illustrative risk-ranking sketch follows the learning objectives below.
- Learn to assess risks on current projects
- Discover how to plan, test, and report using a risk based testing strategy
- Determine how to provide the best input for management decisions
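As a minimal illustration of the approach, the following Python sketch ranks hypothetical product areas by risk exposure, computed with the common likelihood-times-impact scheme on 1-5 scales; all feature names and scores are invented.

```python
# Rank product areas by risk exposure = likelihood * impact, then
# spend limited test time from the top of the list down.

features = [
    # (feature, likelihood of failure 1-5, impact of failure 1-5)
    ("payment processing", 4, 5),
    ("report export",      2, 2),
    ("user login",         3, 5),
    ("profile avatars",    4, 1),
]

ranked = sorted(features, key=lambda f: f[1] * f[2], reverse=True)

print("Test priority (highest risk exposure first):")
for name, likelihood, impact in ranked:
    print(f"  {name:20s} exposure = {likelihood} x {impact} = {likelihood * impact}")
```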
About the instructor...
Clyneice Chaney has over 20 years of testing, quality assurance, and process improvement experience. She holds certifications from the American Society for Quality as a Certified Quality Manager, from the QAI Global Institute as a Certified Quality Analyst, and from the Project Management Institute as a Project Management Professional. She has served as an examiner for the state quality awards of Georgia and Virginia. She is currently an instructor for the International Institute for Software Testing and has presented technical papers at the SEI's SEPG Conference, the American Society for Quality's Quality Manager's Conference, the Quality Assurance Institute's International Testing Conference, the International Conference on Software Process Improvement, and the Software Test & Performance Conference.