Resources

The QUEST Conference and EXPO is the best source for new technologies and proven methods for Quality Engineered Software and Testing. In the months leading up to each QUEST Conference, QAI Global Institute is proud to host a series of complimentary webinars focused on a variety of important topics in software quality and testing. Each webinar in this exclusive series is presented live by an industry expert, providing insights to assist IT, QA, and Software Testing professionals in the field. We invite you to participate in any or all of the webinars, and to join us at the QUEST Conference to continue the conversation! Live webinars are available on both desktop and select mobile devices. Once you register for a webinar, you will receive a confirmation email with a link to join the event. View the system requirements in advance to ensure that your desktop or mobile device is ready to connect to the event.

April 6, 2016 | 12:30 PM – 1:30 PM EDT

Improve your Retrospectives with Agile Kaizen with Angela Dugan

View Webinar Recording | Download Presentation Slides

Continuous self-improvement of software teams is traditionally accomplished through retrospectives, a form of post-mortem held at the completion of an iteration. More often than not, retrospectives begin to fade and the list of action items keeps growing until teams simply succumb to business-as-usual practices. In some cases, teams eventually abandon retrospectives altogether because they feel like a waste of time!

  • Do you feel like your retrospectives are a death march where no one is actively participating?
  • Do the same problems seem to resurface repeatedly in the team’s retrospectives?
  • Are your retrospectives ending prematurely or being cancelled in favor of “getting more real work done”?
  • Or maybe you feel great about your agile retrospectives, but just want to learn more about Kaizen…

Join Angela as she explains how you can use Kaizen to analyze and improve your retrospectives, regardless of your team’s process. She will begin with a brief review of what a retrospective is and walk through some examples of both healthy and unhealthy retrospective scenarios she has experienced herself.  Angela will then explain the concept of Kaizen, the Kaizen process, and how you can leverage a Kaizen process to turn your retrospectives back into the effective continuous improvement tools they are meant to be!

Learning Objectives

  • Determine whether your current agile retrospectives are effective
  • Learn Kaizen Burst techniques
  • Use Kaizen in agile retrospectives

About the speaker

Angela Dugan is the ALM Practice Manager for Polaris Solutions, a small technology consulting firm based out of Chicago and St. Louis. She has been in software development since 1999, including 5 years as an ALM Tools evangelist with Microsoft. Angela also runs the Chicago Visual Studio ALM user group, is an active organizer and speaker at several local conferences, is a Microsoft ALM MVP, and is both a Certified ScrumMaster and a SAFe Program Consultant. Outside of wrangling TFS, Angela is an avid board gamer, an aspiring runner, and a Twitter addict. She lives in a 1910 house in Oak Park, Illinois that she is constantly working on (and cursing at) with her husband David.

March 24, 2016 | 12:30 PM – 1:30 PM EDT

Metrics That Matter – In the Context of Software Testing and QA with Bernd Haber

View Webinar Recording | Download Presentation Slides

Oftentimes, IT Management has little patience or time for detailed test status reports because the metrics are hard to interpret. By the same token, they often misunderstand the objectives of software testing. Hence the perception that testing is an effort to improve solution quality, that it is a linear and independent task, and that test results stay valid over time. Control is an important aspect, maybe the most important, of any software project, including during the testing lifecycle. Significant time and effort (money) is invested in preparing testing dashboards with detailed metrics and reports. Yet many wildly successful projects, such as Google Earth and Wikipedia, have proceeded without much control. This session reviews how different kinds of projects have different control needs and changing expectations of what can be controlled. It will present some alternative and less complex approaches to metrics and reporting.

Learning Objectives

  • Expand the understanding of an advanced software test metrics program and framework as related to KPI value levers such as quality, productivity, maturity, and cost
  • Provide a more nuanced insight into the world of software testing metrics for testing practitioners and test project leads
  • Provide a reference point for test project leads to adjust an existing test metrics approach in order to accommodate senior leadership needs and expectations

About the speaker

Bernd Haber is responsible for Accenture’s North America Testing Service for the Products Industry Operating Group, which includes clients in Retail, Life Science, Consumer Goods, Airline, Automotive, and Hospitality. He is a senior executive member of the Accenture Testing Platform, the firm’s Global Testing Practice, and its Testing Community of Practice. Bernd specializes in test strategy development, test operation transformation, process performance and quality assurance, and test metrics and measurements. He is one of the winners of Accenture’s 2011 Inventor Award program for his patent-pending QA Metrics Dashboard solution. Bernd has been with Accenture for more than 22 years and holds a Master’s degree in Mechanical Engineering and Computer Aided Manufacturing.

February 18, 2016 | 12:30 PM – 1:30 PM EST

Pairwise Testing: What It Is, When to Use It and When Not To, with Philip Lew

View Webinar Recording | Download Presentation Slides

Many defects occur only when a particular combination of inputs or events interact with each other. However, testing every combination of inputs could take years, especially with today’s complex systems. How do you choose which combinations of inputs to test? Pairwise testing. Pairwise testing is a combinatorial technique for reducing the number of test cases without drastically compromising functional coverage, in order to get more ‘bang for the buck’.
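
To make the reduction concrete, here is a minimal sketch of the idea in Python. The greedy algorithm below is one common way to build a pairwise-covering suite; the parameters and values are invented for illustration and are not from the webinar.

```python
from itertools import combinations, product

# Hypothetical parameters for a checkout feature (illustrative only).
parameters = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os": ["Windows", "macOS", "Linux"],
    "payment": ["card", "paypal"],
    "currency": ["USD", "EUR"],
}

def pairwise_suite(params):
    """Greedily pick cases until every value pair of every two
    parameters appears in at least one selected test case."""
    names = list(params)
    # Every (parameter, value) pair combination that must be covered.
    uncovered = {
        ((i, vi), (j, vj))
        for i, j in combinations(range(len(names)), 2)
        for vi in params[names[i]]
        for vj in params[names[j]]
    }
    all_cases = list(product(*params.values()))
    suite = []
    while uncovered:
        # Pick the candidate that covers the most still-uncovered pairs.
        best = max(all_cases, key=lambda case: sum(
            ((i, case[i]), (j, case[j])) in uncovered
            for i, j in combinations(range(len(case)), 2)))
        suite.append(dict(zip(names, best)))
        for i, j in combinations(range(len(best)), 2):
            uncovered.discard(((i, best[i]), (j, best[j])))
    return suite

suite = pairwise_suite(parameters)
print(f"{len(suite)} pairwise cases instead of "
      f"{len(list(product(*parameters.values())))} exhaustive cases")
```

For these four parameters the exhaustive suite has 36 cases, while the pairwise suite needs roughly 9 or 10, which is the ‘bang for the buck’ the technique promises.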

Philip Lew explains the nuts and bolts of pairwise testing, how to incorporate pairwise testing into your test design and planning, and when to use other combinatorial techniques. Learn about some of the tools that can be used as Phil examines when and when not to use pairwise testing, along with some of its advantages and limitations. He’ll also provide a live demonstration of pairwise testing and more.

Learning Objectives

  • What pairwise testing is and what its advantages and disadvantages are.
  • When and how to use pairwise testing.
  • How and when to use other combinatorial techniques beyond pairwise testing.

About the speaker

Philip Lew, CEO of XBOSoft, has overseen strategy, operations, and business development since founding the company in 2006. His broad experience spans deep technical expertise as a software engineer, advising on technology and business processes, and founding companies such as Pulse Technologies Inc., a leader in contact center systems integration, until its acquisition by EIS International. Over a span of 25 years he has served as an Ernst and Young consultant, led the Systems Integration Services Group at EIS, held executive-level roles in both the USA and Europe, and serves as an Adjunct Professor at Alaska Pacific University. In addition to presenting at leading worldwide conferences such as STPCon, PNSQC, Better Software East/West, and STAR East/West, he has published papers in ACM, IEEE, Project Management Technology, Telecommunications Magazine, Call Center Magazine, TeleProfessional, and DataPro Research Reports.

January 21, 2016 | 12:30 PM – 1:30 PM EST

Compatibility Testing for Mobile Devices with Michael Yudanin

View Webinar Recording | Download Presentation Slides

Mobile testing presents a number of challenges that require special attention. One of these is testing your apps and websites for compatibility with different mobile platforms. Mobile devices differ in operating systems and their flavors, processing power, memory, display size, and resolution, all in addition to the familiar challenge of multiple browsers. Moreover, the mobile market is not only diverse but also extremely dynamic: new OS versions and new hardware show up quite frequently. How do we focus on what can cause issues rather than repeating all our tests on all platforms? How do we implement the principles of compatibility testing for the mobile market, covering the bases and minimizing risk, without turning testing into a continuous nightmare and our cubicles into smartphone warehouses? What are the relevant factors that determine which devices and operating systems to test on? What are the principal differences between testing mobile websites and native apps as far as compatibility is concerned? What are the criteria for deciding whether to purchase devices or rent a lab?
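
One simple way to reason about device selection, sketched below in Python, is risk-based coverage: rank device/OS combinations by how much of your audience they represent and test the smallest set that reaches a target share. The usage numbers and the 60% target are invented for illustration, not figures from the webinar.

```python
# Hypothetical usage shares for an app's audience (illustrative only).
device_usage_share = {
    ("iPhone 6", "iOS 9"): 0.22,
    ("Galaxy S6", "Android 5.1"): 0.18,
    ("iPhone 5s", "iOS 8"): 0.12,
    ("Nexus 5", "Android 6.0"): 0.09,
    ("Galaxy S4", "Android 4.4"): 0.07,
    # ...long tail of rarer device/OS combinations
}

def pick_devices(shares, target=0.60):
    """Select the highest-share device/OS combos until the chosen
    set covers at least `target` of the user base."""
    chosen, covered = [], 0.0
    for combo, share in sorted(shares.items(), key=lambda kv: -kv[1]):
        if covered >= target:
            break
        chosen.append(combo)
        covered += share
    return chosen, covered

devices, coverage = pick_devices(device_usage_share)
print(f"Test on {len(devices)} platforms covering about {coverage:.0%} of users:")
for name, os_version in devices:
    print(f"  {name} / {os_version}")
```

In practice the ranking would also weight factors the webinar raises, such as OS flavor, screen size, and resolution, but the coverage-versus-cost trade-off is the same.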

Learning Objectives

  • The main differences between mobile compatibility testing and compatibility testing for PC and Mac applications and websites.
  • The main factors that determine the selection of tests and platform combination for mobile compatibility testing.
  • Deciding whether to purchase multiple devices, rent a lab, or use a hybrid approach.

About the speaker

Michael Yudanin is the CEO of Conflair, a QA and testing company. He has been working on automating tests for mobile devices since before mobile apps and smartphones became commonplace. Michael developed RealMobile™, a unique approach to using common automation tools to automate testing of mobile apps and websites. Among the large enterprises that have benefited from this approach to mobile testing are Home Depot, Bank of America, The Weather Channel, and Spirit Airlines. Michael is a frequent speaker at testing conferences and regularly delivers classes on test planning, requirements management, test automation, XML, web services testing, and other subjects.

October 13, 2015

How Agile are you? Creating a High Maturity Agile Implementation with Daniel Tousignant

View Webinar Recording | Download Presentation Slides

Understanding Agile maturity is key to a successful Agile implementation. Like the CMMI levels and many other maturity models, Agile maturity can be measured from Level 1, Initial or “Ad Hoc,” to Level 5, Optimizing or “Culturally Agile.” Fortunately, the Agile Manifesto helps us create a roadmap to assess where we are on our path to Agility. By reviewing the 4 values and 12 principles, we know what questions to ask in order to assess our maturity. This webinar will help you understand the path to Agile maturity and how to gain access to a free self-assessment tool to gauge your organization’s Agile maturity.

About the speaker

Daniel Tousignant
Dan is a lifelong project manager and trainer with extensive experience in managing software development projects. Based upon this experience, he has adopted Agile methods for developing and implementing software, and he is passionate about the Agile approach of leadership emerging from self-organizing teams. Dan has over 20 years of experience providing world-class project management for strategic projects, direct P&L experience managing software development budgets of up to $50 million, experience managing multi-million dollar outsourced software development efforts, and strong, demonstrated, results-driven leadership skills, including the ability to communicate a clear vision, build strong teams, and drive necessary change within organizations. Dan holds a Bachelor of Science in Industrial Engineering from the University of Massachusetts, Amherst; is a certified Project Management Professional, Professional Scrum Master, PMI Agile Certified Practitioner, and Certified Scrum Professional; and is the owner of Cape Project Management, Inc.

September 30, 2015

Lean and Enterprise Agile Frameworks with Dr. David Rico

View Webinar Recording | Download Presentation Slides

Dr. David F. Rico will give a presentation on “Agile Enterprise Frameworks: For Managing Large Cloud Computing Projects.” These are emerging models for managing high-risk, time-sensitive, R&D-oriented new product development (NPD) projects with demanding customers and fast-changing market conditions at the enterprise, portfolio, and program levels. Dr. Rico will establish the context, provide a definition, and describe the value system of lean and agile program and project management. He’ll provide a brief survey and comparative analysis of the pros and cons of emerging lean and agile frameworks such as Enterprise Scrum, LeSS, DAD, SAFe, and RAGE. Then he’ll describe the Scaled Agile Academy’s Scaled Agile Framework (SAFe) in greater detail; it is the de facto international standard for scaling the use of agile methods to the enterprise, portfolio, and program levels for both systems and software development. SAFe is a hybrid model best known for “blending” megatrends such as lean and agile principles into a single unified framework, establishing an authoritative foundation for scaling agile methods to large-scale private and public sector programs, and unifying East (lean) and West (agile) into a common language for systems and software development that is both lean “and” agile. In addition to SAFe case studies, Dr. Rico will discuss late-breaking developments in the use of “Continuous Delivery,” “DevOps,” and bleeding-edge “Unstructured Web Databases” at Google and Amazon to automate large sections of the enterprise value stream, an approach some of the world’s largest firms have used to boost organizational productivity by one or two orders of magnitude.

About the speaker

Dr. David Rico
Dr. Rico helps oversee a portfolio of large multi-billion dollar IT projects. He has been a technical leader in support of NASA, U.S. Navy, U.S. Air Force, and U.S. Army for over 30 years. He has led over 20 change initiatives based on Cloud Computing, Lean Thinking, Agile Methods, SOA, Web Services, Six Sigma, FOSS, PMBoK, ISO 9001, CMMI, SW-CMM, Baldrige, TQM, DoDAF, DoD 5000, etc. He specializes in IT investment analysis, portfolio valuation, and organization change. He has been an international keynote speaker, presented at leading industry conferences, written seven textbooks, published numerous articles, is a reviewer for multiple journals, and is a frequent PMI, INCOSE, ALN, and SPIN speaker. He is a Certified PMP, CSEP, ACP, CSM, and SAFe Agilist, and teaches at five Washington, DC-area universities. He holds a B.S. in Computer Science, M.S. in Software Engineering, and D.M. in Information Systems. He has been in the IT field since 1983.

August 18, 2015

Root Cause Analysis – Making Decisions, with Jeremy Berriault from Manulife Financial

View Webinar Recording | Download Presentation Slides

Decisions are made based on the data available at the time. Some decisions, such as deciding what to eat, can be made fairly easily and quickly. Many other decisions, such as choosing the type of mortgage or investments, require more thought about the right direction. Senior leadership makes numerous decisions that affect many groups within an organization; some decisions may seem trivial and some may negatively affect individuals. What we need to understand is the amount of data required to assist with these decisions. How much data did they have that caused them to reach that decision?

Root cause analysis is one source of data that affects IT project decisions. Decisions such as resourcing, budgets, and project selection are sometimes based on how much rework was done on previous projects. Providing a count of defects within specific categories, such as code or requirements, tells only part of the story of what is occurring within an organization. It can also create friction between groups, as its subjective and vague nature does not provide a deep understanding of what is truly happening. This webinar will provide insight into the right amount of data needed to assist those decisions.
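
As a concrete illustration of why raw category counts fall short, the sketch below tallies a hypothetical defect log by root cause and adds a cumulative Pareto view, one common way to see which few causes drive most of the rework. The categories and counts are invented, not data from the webinar.

```python
from collections import Counter

# Hypothetical defect log: the root-cause category assigned to each
# defect during analysis (categories and counts are illustrative).
defect_root_causes = (
    ["unclear requirement"] * 18 + ["coding error"] * 11 +
    ["missed test case"] * 6 + ["environment config"] * 4 +
    ["test data setup"] * 1
)

counts = Counter(defect_root_causes)
total = sum(counts.values())

# Pareto view: the cumulative share shows how concentrated the
# rework is, a firmer basis for decisions than raw counts alone.
cumulative = 0
for cause, n in counts.most_common():
    cumulative += n
    print(f"{cause:20s} {n:3d}  {n / total:6.1%}  cumulative {cumulative / total:6.1%}")
```

Here the top two causes account for roughly 70% of defects, which points decision-makers at requirements and coding practices rather than at the testing group.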

Learning Objectives

  • Discover how performing root cause analysis can improve your QA group’s value to your organization
  • Learn how to introduce cost/benefit analysis for continuous process improvement efforts
  • Explore how to use root cause analysis to achieve collaboration with stakeholders

About the speaker

Jeremy Berriault
Jeremy Berriault has been in the testing discipline for over 20 years within the Canadian Banking industry. He is currently the Director of the Quality Assurance Center at Manulife Financial, Group Functions Division. He is responsible for QA processes, job families, and the QA training curriculum across the enterprise. Jeremy holds an MBA, having completed a research project on the attitudes and views business and testing groups hold of each other as part of his master’s program. His research provided reasoning as to why each group would think the way they do and solutions to help resolve issues. Jeremy’s main drive is to draw attention to the value testing groups can bring to an organization, not just providing assurance of quality software, but also financial and efficiency benefits across the organization.

Time to Cut the Cord with Dan McFall

View Webinar Recording | Download Presentation Slides

Quality Assurance organizations face tough challenges managing the mobile devices needed to get the job done every day. Existing tools can force you to tie precious devices to a single machine and then pass them around physically when it’s time to share or collaborate. This hurts resource utilization and can limit test coverage. We’ll discuss what makes mobile so different from other types of testing, from an environmental and process standpoint, and general strategies for recapturing that lost efficiency.

Learning Objectives

  • How USB-tethering of devices for use by developers, QA, and support professionals diminishes team agility.
  • Why DevOps efficiencies can evaporate in mobility when devices are not managed as part of a highly available infrastructure.
  • Considerations for the types of mobile testing and the wisest places to spend your time.

About the speaker

Dan McFall is a seasoned, knowledgeable software professional with extensive experience spanning mobility, manual and automated mobile testing, secure test device management, private cloud, Agile software development and technical support. Currently, Dan is the vice president of mobility solutions at Mobile Labs, a leading provider of mobile testing and secure test device management solutions. In his role, Dan works with global organizations to improve their development and QA processes around mobile device and application testing. Dan is a graduate of the Georgia Institute of Technology with a degree in Industrial and Systems Engineering.

Metrics: The Force Awakens with Joseph Ours

View Webinar Recording | Download Presentation Slides

It is often said, “You cannot improve what you cannot measure.” That statement has led to a proliferation of measurement and metrics-gathering programs throughout history. In software testing, metrics are frequently used to inform stakeholders about the quality and/or progress of testing in a project. Many times metrics are presented in visual form in order to tell a compelling story, often to influence decision making. That makes metrics a life force in the universe of quality assurance. In this presentation, we will discuss some common quality assurance and testing metrics and demonstrate how the force can be manipulated for good and evil. Those with ill intent will learn how to manipulate the metrics for their own purposes. Those pure of heart will learn how to see past the visual and defend against the dark arts.
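
A small numeric example of the kind of manipulation the talk alludes to: truncating a chart’s axis. The sketch below, with invented pass rates, computes how much taller one bar looks than another once the axis no longer starts at zero.

```python
# Two sprints with nearly identical pass rates (assumed numbers).
sprint_1, sprint_2 = 94.0, 96.0  # percent

def drawn_ratio(a, b, axis_min):
    """How many times taller bar b is drawn than bar a when the
    chart's y-axis starts at axis_min instead of zero."""
    return (b - axis_min) / (a - axis_min)

print(f"Axis starting at 0:  bar ratio = {drawn_ratio(sprint_1, sprint_2, 0):.2f}")
print(f"Axis starting at 93: bar ratio = {drawn_ratio(sprint_1, sprint_2, 93):.2f}")
# Same data: 1.02x with an honest axis, 3.00x with a truncated one.
```

Seeing that a two-point difference can be drawn as a threefold one is exactly the kind of ‘seeing past the visual’ the session promises.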

Learning Objectives

  • The purpose of metrics
  • How to display metrics to tell your story
  • How to spot when someone is being told an inaccurate story with metrics

About the speaker

Joseph Ours
Joseph Ours draws on 15 years of experience providing executive-level leadership while managing high profile initiatives with a demonstrated ability to lead people toward successful delivery. Throughout his diverse career, he has built a solid reputation as a thought leader who exhibits a results-driven business approach and an exceptional ability to achieve success. He is a strong leader in business processes with a proven history of providing project and portfolio management of large technology initiatives. Joseph brings both a strategic and a tactical thought process to solving IT-related issues. He holds bachelor’s degrees in electronic engineering technology and technical management in addition to a Master of Business Administration.

Enterprise Agility Starts with Healthy Teams – How Healthy Is YOUR Agile Team? with Sally Elatta

View Webinar Recording | Download Presentation Slides

Everyone wants metrics, but which ones really matter? Which metrics can help you ‘actually’ get better and give you visibility into the health of your teams? Take a deeper dive with our dynamic Agile expert, Sally Elatta, as she walks you through the top 5 metrics you need to be looking at and how you can create a continuous growth process where teams, programs, and portfolios get better quarter after quarter. All attendees will have access to download the powerful TeamHealth radar and try it with their own teams!

Learning Objectives

  • How to really measure TeamHealth and which metrics to look for.
  • How to create a continuous growth process that is predictable and measurable.
  • What the powerful TeamHealth radar can do.

About the speaker

Sally Elatta
Sally is a dynamic consultant, trainer, and coach who is passionate about transforming people, teams, and organizations. Her unique mix of technical, business, leadership, and soft skills helps her transform individuals at all levels. Sally has developed a unique set of results-driven training workshops that use real-world best practices.

Why Test Automation Fails with Jim Trentadue

View Webinar Recording | Download Presentation Slides

The challenges testers face in automation often lead to subsequent failures. Learn how to respond to these common challenges by developing a solid business case for increased automation adoption, engaging manual testers in the testing organization, being technology agnostic, and stabilizing test scripts regardless of application changes. Jim Trentadue will examine a variety of automation perceptions and myths:

  • The perception that significantly more time and people are needed to implement automation.
  • The myth that once automation is achieved, testers will not be needed.
  • The myth that automation scripts will serve all the testing needs for an application.
  • The perception that developers and testers can add automation to a project without additional time, resources or training.
  • The belief that anyone can implement automation.

About the speaker

Jim Trentadue
Jim Trentadue has over 15 years of experience as a coordinator/manager in the software testing field. He has filled various roles in testing over his career, focusing on test execution, automation, management, environment management, standards deployment, and test tool implementation. In the area of offshore testing, Jim has worked with multiple large firms on developing and coordinating cohesive relationships. Jim has presented at numerous industry conferences including the Rational Development Conference, IIST, and QAI chapter meetings. Jim has acted as a substitute teacher for the University of South Florida’s software testing class, mentoring students on the testing industry and on trends that inform future job searches and continued training.

Enterprise Agile Failure Modes and Solutions with Hillel Glazer

View Webinar Recording | Download Presentation Slides

Agile adoption holds so much promise it sounds too good to be true. Often, transformation efforts to adopt agile get off to impressive starts. Pilot projects typically succeed and enthusiasm is high. However, when moving toward broader adoption and attempting to institutionalize sustained agile practices, the successes of the pilot efforts fade into the past and organizations find themselves frustrated with poor traction and increased headaches. These experiences are often accompanied by a slide in process maturity, undoing prior accomplishments in establishing standard process assets and tools. These challenges not only appear among companies transitioning to agile, they even appear among companies that have never used anything but agile. So if agile is so great, holds so much promise, and seems to succeed with so many teams, what causes these problems? Shouldn’t agile have solved them?

The webinar will begin by introducing a framework for understanding different types of companies and how these differences are critical to a successful transition to (or use of) agile methods at scale. Companies have differing delivery constraints and business drivers that are likely working against even the best of agile transformations. Next we will explore a strategy for establishing an end-state vision and an operational model to guide transformation. Finally, we’ll define an approach for incrementally introducing change, measuring outcomes, and sustaining the change once things really get going, keeping them going at scale.

About the speaker

Hillel Glazer
Hillel Glazer is the founder, Principal, CEO, and “all-around Performance Jedi” of Entinex, Inc., a Baltimore, MD-based management consulting firm made up of aerospace (and other) engineers. Hillel is an internationally recognized authority on bringing lean and agile values and principles into the regulated world, and he was selected as a Fellow of the Lean Systems Society in its inaugural fellows induction. Hillel and his company have close ties to the CMMI Institute at Carnegie Mellon University in Pittsburgh, where he serves as an advisor on ways to bring together CMMI (Capability Maturity Model) process improvement models that create high-performance, high-maturity cultures, with lean and agile practices. He is the author of the 2011 book, “High Performance Operations,” and has written widely on the subject of high performance systems, models and organizations including the world’s first peer-reviewed, professionally edited article on CMM and Agile in 2001 as well as the SEI’s first official Technical Report on Agile and CMMI in 2008.

You Want to Use Scrum, You Are Told to Use CMMI – How They Can Work Together Elegantly with Neil Potter

View Webinar Recording | Download Presentation Slides

If you are a software engineer or IT professional, your group has very likely shown a strong interest in reducing costs and improving quality and productivity. Your group might also have looked at various pre-packaged frameworks, such as Agile (e.g., Scrum and Extreme Programming), CMMI, and Six Sigma. At first glance, these frameworks might look at odds with one another, making it difficult to use two or more. This perception typically arises because much of the information shared about these frameworks comes from unresearched opinions and failure stories rather than an understanding of the specifics of each framework. Each framework can be implemented successfully depending on how much care is placed on its implementation. In this session, CMMI and Scrum are compared, since they are two of the most commonly used frameworks and groups frequently struggle with using them together.

About the speaker

Neil Potter
Neil has been working in the software application and IT fields since 1985, helping companies improve their performance. He has 28 years of experience in software and process engineering. Neil Potter is co-founder of The Process Group, a company formed in 1990 that consults on process improvement, CMMI, Scrum, software engineering, and project management. Neil is a CMMI Institute-certified lead appraiser for SCAMPI appraisals, an Intro to CMMI instructor (development and services), a Six Sigma Greenbelt, and a Certified Scrum Master. He delivers hands-on workshops on Scrum, CMMI, requirements, planning, estimation, inspection, supplier management, facilitation, and organizational change. He has a B.Sc. in Computer Science from the University of Essex (UK) and is the co-author of Making Process Improvement Work – A Concise Action Guide for Software Managers and Practitioners, Addison-Wesley (2002), and Making Process Improvement Work for Service Organizations, Addison-Wesley (2012).

Agile Resiliency: How CMMI will make Agile Thrive and Survive with Jeff Dalton

View Webinar Recording

Large corporations and the Federal government are increasingly directing software developers to “be agile,” but business practices related to marketing, procurement, project management, and systems definition are anything but. While more developers are living in an agile world, the business continues to live in waterfall surroundings. It’s not a conflict that is easily resolved, but there is an opportunity to take control of the debate. Why not embrace both?

About the speaker

Jeff Dalton
Jeff Dalton is Broadsword’s President, Certified Lead Appraiser, CMMI Instructor, ScrumMaster and author of “agileCMMI,” Broadsword’s leading methodology for incremental and iterative process improvement. He is Chairman of the CMMI Institute’s Partner Advisory Board and President of the Great Lakes Software Process Improvement Network (GL-SPIN).

Automated Software Testing: Practices that Yield Positive Results with Elfriede Dustin

View Webinar Recording

This webinar describes various automated software testing practices that have yielded the positive results required of an automated test program. We will provide proven examples of best practices in a scriptless automated testing environment using image-based capture. Not only is it important that a capable automated software testing solution is used to meet specific automated testing requirements, but also that the appropriate capture techniques are applied. Often, too much time is spent on automated software testing maintenance. This webinar will provide workaround suggestions and ideas for avoiding maintenance difficulties that can lead to shelved automated software testing solutions. You will also gain insight into the ideal requirements an automated testing solution should meet in order to implement the proposed best practices.

About the speaker

Elfriede Dustin, IDT
Elfriede Dustin has over 20 years of IT experience implementing effective testing strategies on both government and commercial programs. She is the author or co-author of six books related to software testing, including Effective Software Testing, Automated Software Testing, Quality Web Systems, and The Art of Software Security Testing. Elfriede has implemented automated testing methodologies as an internal SQA consultant at Symantec, worked as an Assistant Director for Integrated Testing on the IRS modernization effort, built test teams as a QA Director for BNA Software, and was the QA Manager for the Coast Guard MOISE program. Her goal is to continue to help further automated software testing advances.

Career Planning for Agile QA with Bill Rinko-Gay

View Webinar Recording

Based on his experience with Agile transformations and his many years in Quality Assurance, Bill Rinko-Gay will discuss what QA professionals can do to ensure that they have the right skills for employment as the computing industry transforms itself to Agile methods. Join this webinar and gain insight into how QA is different when software is developed with Agile methods and how to adapt QA to popular Agile management and development tools. You will learn the key characteristics of the Agile QA professional and which skills should be acquired or enhanced. Attendees will be able to adjust their training and career plans to take the move to Agile into account.

About the speaker

Bill Rinko-Gay, Agile Integrity, LLC
Bill Rinko-Gay is the founder and a Contributing Member of Agile Integrity, LLC. Currently, Bill is working as a Transformation Agent and ScrumMaster for Macmillan Higher Education. Bill has been involved in software test and quality assurance since 1982, when he began testing command and control software for orbiting satellites for the US Air Force. Since leaving the DOD, Bill has worked on projects in defense, computer manufacturing, publishing, network security, financial services, and state and local government. Beginning his fourth decade in the field, Bill is still improving techniques that allow teams to produce excellence. His most recent work is in Scrum and agile quality assurance, improving software quality in the 21st century. A regular speaker and trainer for QAI affiliate organizations, Bill currently holds PMP and Certified ScrumMaster certifications.

Software Quality Metrics Do’s and Don’ts with Philip Lew

View Webinar Recording

Don’t just measure and track progress and then deliver reports that no one reads. This webinar discusses some of the most common mistakes in using metrics. The primary takeaway is to learn from the mistakes of others, particularly where to use and not use metrics to measure your testing and QA efforts. The last thing you want is to measure the wrong thing and create unwanted behavior. With knowledge of what not to do, we’ll then dive into how to develop a measurement and metrics framework that aligns with the organization’s business objectives. This means taking on a manager’s viewpoint so that your metrics don’t just measure testing progress, but also measure product quality and how it impacts the organization’s bottom line. As part of the webinar, we’ll discuss a variety of metrics that can be used to tie work effort to results and enable you to plan and forecast your testing needs.
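
For readers who want a starting point, here is a minimal sketch of a few widely used testing metrics of the kind such a framework might track. The input numbers are invented for illustration; which metrics align with your business objectives is exactly the question the webinar addresses.

```python
# Illustrative inputs (assumed numbers, not from the webinar).
defects_found_in_test = 42
defects_found_in_production = 8
test_cases_planned = 500
test_cases_executed = 450
test_cases_passed = 405
size_kloc = 120  # product size in thousands of lines of code

# Defect Removal Efficiency: share of defects caught before release,
# a product-quality signal rather than a testing-progress signal.
dre = defects_found_in_test / (defects_found_in_test + defects_found_in_production)

# Execution and pass rates: progress versus quality of what ran.
execution_rate = test_cases_executed / test_cases_planned
pass_rate = test_cases_passed / test_cases_executed

# Defect density: defects normalized by product size.
defect_density = (defects_found_in_test + defects_found_in_production) / size_kloc

print(f"DRE {dre:.1%} | executed {execution_rate:.1%} | "
      f"passed {pass_rate:.1%} | {defect_density:.2f} defects/KLOC")
```

Note that each of these can drive unwanted behavior if used alone (for example, chasing pass rate by running only easy tests), which is the webinar’s central caution.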

About the speaker

Philip Lew, XBOSoft
With extensive experience in a variety of management and technical positions in software development and product management, Philip Lew today leads XBOSoft’s (www.xbosoft.com) direction and strategy as its CEO. His Ph.D. research in software quality and usability resulted in several IEEE and ACM journal publications, and he has also been published in a number of trade journals. He has presented at numerous trade and academic conferences, and over the past 20 years has worked with hundreds of organizations to assess the quality of their software, examine software quality processes, and set forth measurement plans to improve software quality using systematic methods. He received his Bachelor of Science and Master of Engineering degrees in Operations Research from Cornell University. His post-doctorate research focuses on software quality and usability measurement.

Avoiding Common Performance Test Planning Pitfalls with Vic Soder

View Webinar Recording

In large-scale application implementation and integration projects, there are often well-intentioned performance test plans established to demonstrate readiness for a production go-live. Unfortunately, numerous projects have found that even with a formal performance test plan, a strong set of performance testing tools, and a team of well-trained performance test execution staff, the results of a formal performance test and analysis program often fall short of expectations. In this webinar, Vic will discuss a variety of challenges that must be planned for, and overcome, to ensure that a performance testing program is successful. These include having a representative and dedicated performance testing environment, a fully functioning application with production-like test data, and identification of workloads and their impact. Learn how to overcome these and other challenges in performance testing.

About the speaker

Vic Soder, Deloitte Consulting, LLP
Vic Soder is a Director with Deloitte Consulting LLP’s Systems Integration service line, with over 20 years of experience in Deloitte’s Technology practices. He leads Deloitte’s national Application Performance Center of Expertise, specializing in application performance, system sizing, and capacity planning advisory services. Vic has provided performance and capacity planning services to a wide variety of large clients in multiple industries. He previously worked in positions with Boole & Babbage and the Institute for Software Engineering, and holds a B.S. degree in Mathematical Sciences from Stanford University.