Detailed Sessions/Synopses

Conference Programme


Day 1 (Wed, 10 September)

 

Keynote 1

The ROI of Testing

Shaun Bradshaw, VP, Consulting Solutions, Zenergy Technologies, US

 

The primary objective of this presentation is to discuss what makes testing valuable and how to recession-proof your testing initiatives by showing management how to eliminate or avoid millions of dollars in project development costs. Shaun Bradshaw will also share examples of proven Test ROI.

 

 

Tutorial 1 (1000 – 1700)       

TMMi Workshop

Clive Bates, Managing Consultant, Experimentus, UK           

 

TMMi is a guideline and reference framework with a staged architecture for test process improvement. It defines five levels of maturity against which organisations can be assessed, so they can identify their current maturity level and develop improvements to take them to the next one.

 

The objective of the tutorial is to give attendees the background to process improvement and explain why it is important in today’s climate. Clive will walk through the model so attendees gain a solid understanding of how it is structured and how it can be applied to their organisation. The benefits of TMMi will be explained, and attendees will have an opportunity to perform a simple assessment to gauge the maturity level of their own organisation.

 

Throughout the day Clive will provide practical examples and applications of the model, so attendees understand how it can help organisations improve not only their testing activities but also develop an organisation-wide test measurement programme and become capable of continually improving their processes. There will be information on how to deliver qualitative and quantitative improvements using the IDEAL model.

 

Key take-away points:

1. What is process improvement and why do you need to do it

2. Firm understanding of the TMMi reference model

3. How improvements can be introduced using a structured approach

 

 

Tutorial 2 (1000 – 1700)       

Take a Test Drive of Acceptance Test-Driven Development

Ken Pugh, Fellow Consultant, Net Objectives, USA

 

The practice of agile software development requires a clear understanding of business needs. Misunderstanding requirements causes waste, slipped schedules, and mistrust within the organization. 

We show how acceptance tests decrease misunderstanding of requirements. A testable requirement provides a single source that serves as the analysis document, acceptance criteria, regression test suite, and progress-tracker for any given feature. 

In this session, we explore the creation, evaluation, and use of testable requirements by the business and developers.

We examine how to transform requirements into stories – small units of work – each with business value, a small implementation effort, and easy-to-understand acceptance tests.

This tutorial features an interactive exercise that starts with a high level feature, decomposes it into stories, applies acceptance tests to those stories, and estimates the stories for business value and implementation effort.  

The exercise demonstrates how big requirement stories can be decomposed into business-facing stories, rather than technical tasks that are not understood by the business.
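To make the idea of a testable requirement concrete, here is a minimal sketch in Python. The story, the discount rule, and the `order_total` function are all hypothetical illustrations, not part of the tutorial material: the point is that each acceptance criterion becomes a single, business-readable executable check.

```python
# Hypothetical story: "Repeat customers receive a 10% discount
# on orders over RM100."

def order_total(subtotal, repeat_customer):
    """Hypothetical implementation under test."""
    if repeat_customer and subtotal > 100:
        return round(subtotal * 0.90, 2)
    return subtotal

# Acceptance tests: one per business rule, readable by the business side.
assert order_total(200.00, repeat_customer=True) == 180.00   # discount applies
assert order_total(200.00, repeat_customer=False) == 200.00  # new customer pays full price
assert order_total(100.00, repeat_customer=True) == 100.00   # boundary: not *over* RM100
```

Written this way, the same three lines serve as analysis document, acceptance criteria, and regression suite for the story.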

 


Tutorial 3 (1000 – 1700)       

The Six Trumps: Meet the DIMWiTS

Steven ‘Doc’ List, Consultant, US

 

We’ve all been the - umm - beneficiaries of a pedagogical style of training.

Most often, this is billed as a “talk” or “lecture”, and generally involves one person (the trainer or teacher or lecturer or speaker) speaking and the rest of the group (the attendees or students or listeners) listening. This typically leads to numbness, both in the nether regions and in the brain.

 

In the last 20 years, this has also come to be known as “Death by PowerPoint”. Education, meetings, and presentations have all taken on the same dullness, with the exception being those rare speakers/presenters/teachers who can make almost any material interesting and entertaining.

 

What we really want, though, is that the information gets in and turns into knowledge. We want the practices and skills to become a part of the participants, not just words in a book or hand-out. In the past 20 years and more, research has been ongoing into how the human being learns and how the brain takes in and retains information.

 

Based on work in neuroscience, people such as Dave Meier (“The Accelerated Learning Handbook”), John Medina (“Brain Rules”), and Sharon Bowman (“Training from the BACK of the Room” and "Using Brain Science to Make Training Stick!”) have pushed the boundaries of understanding and proposed new ways of facilitating learning.

 

In this session, Doc List will present one part of the work of Sharon Bowman: The Six Trumps™. We explore these six ideas, experience them in the session, develop new ideas of our own, and leave the session with a whole new mind-set about training and presenting.

 

In fact, the Six Trumps form the essence of the session itself. Yes, we will eat our own dog food!

 

Tutorial 4A (1000 – 1300)    

Innovation Thinking: Evolve and Expand Your Capability

Jennifer Bonine, Vice President, tap|QA, USA       

 

Innovation is a word tossed around frequently in organizations today. The standard clichés are Do more with less and Be creative. Companies want to be innovative but often struggle with how to define, implement, prioritize, and track their innovation efforts. Using the Innovation to Types model, Jennifer Bonine will help you transform your thinking regarding innovation and understand if your team and company goals match their innovation efforts. Learn how to classify your activities as "core" (to the business) or "context" (essential, but non-revenue generating). Once you understand how your innovation activities are related to revenue generating activities, you can better decide how much of your effort should be spent on core or context activities. Take away tools including an Innovation to Types model for classifying innovation, a Core and Context model to classify your activities, and a way to map your innovation initiatives to different contexts.

 

Tutorial 5A (1000 – 1300)    

Software Quality Metrics, Systematically Improving Quality

Philip Lew, CEO, XBOSoft, US

 

When implementing software quality metrics, we need to first understand the purpose of the metric and who will be using it. Will the metric be used for measuring people or the process, for illustrating the level of quality in software products, or for driving toward a specific objective? QA managers typically want to deliver productivity metrics to management, while management may want to see only the metrics that support customer or user satisfaction or cost-related (ROI) initiatives. Avoid this gap in communication by delivering software quality metrics with actionable objectives tied to a business objective. Metrics just for the sake of information, while helpful, often just end up in spreadsheets that no one cares about. Not only do you need to learn how to define and develop metrics that connect with potential actions driving toward improvement, you also need to understand and avoid one of the main pitfalls of metrics: driving behaviour that could be both unintended and negative.
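As a small illustration of tying a metric to an action rather than just reporting it, here is a sketch in Python using Defect Removal Efficiency (DRE), a standard quality metric. The threshold and figures are illustrative assumptions, not part of Philip Lew's material:

```python
# Defect Removal Efficiency: defects found before release divided by
# total defects (found in test + escaped to production).
def dre(found_in_test, escaped_to_prod):
    total = found_in_test + escaped_to_prod
    return found_in_test / total if total else 1.0

# Tie the metric to an actionable threshold instead of just filing it away.
quarter = {"found_in_test": 92, "escaped_to_prod": 8}
score = dre(**quarter)
print(f"DRE = {score:.0%}")  # prints "DRE = 92%"
if score < 0.95:
    print("Action: review escaped defects and strengthen coverage in those areas")
```

The metric only earns its keep because a value below the agreed threshold triggers a specific, pre-agreed action.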

 

 

 

Tutorial 4B (1400 – 1700)

Putting Models at the Heart of Testing

Paul Gerrard, Principal, Gerrard Consulting, UK

 

Testing is a process in which we create mental models of the environment, the system, human nature, and the tests themselves. Test design is the process by which we select, from the infinite number possible, the tests that we believe will be most valuable to us and our stakeholders in a systematic way. A test model might be a checklist or set of criteria; it could be a diagram derived from a design document or an analysis of narrative text. Many test models are never committed to paper – they can be mental models constructed specifically to guide the tester whilst they explore the system under test.

 

This session describes what models are and how testers can use modelling to demonstrate the value of testing to stakeholders and to reduce the guesswork in test design. We’ll see how the well-known test design techniques are derived from models and provide a generic method for test design and coverage measurement. Model-based testing is taught mostly as a test automation pattern. Understanding test modelling will transform how you think about testing because modelling underpins all approaches to test design whether planned or exploratory, manual or automated, software or … anything.

 

Tutorial 5B (1400 – 1700)

Test Planning & Test Estimation

Shaun Bradshaw, VP, Consulting Solutions, Zenergy Technologies, US

 

Planning is imperfect. We can never anticipate every problem nor can we plan for every contingency. However, the planning process will give us an opportunity to think about what we need before we need it!

 

Test plans often include many objectives, and some of those might be considered boilerplate. However, the objectives that are most often overlooked are those that relate to business objectives.

 

A plan may change considerably if these objectives are to be aligned with the test objectives. Objectives must also be prioritised based on a number of factors, including customer needs, customer use, monetary value, application interdependencies, and historical failure rates. According to Shaun, every plan needs to consider risks and contingencies, which must be weighed against cost and probability. Next is the development of an estimation model that takes all considerations to this point into account in order to calculate component estimates.
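A minimal sketch of what a component-level estimation model might look like in Python. The components, base efforts, and weighting factors below are illustrative assumptions, not Shaun Bradshaw's actual model: the idea is simply that each component's base estimate is adjusted by the risk and priority factors identified during planning.

```python
# (component, base effort in hours, risk factor, priority weight) -
# all figures are hypothetical.
components = [
    ("checkout", 40, 1.5, 1.2),
    ("search",   24, 1.1, 1.0),
    ("reports",  16, 1.3, 0.8),
]

# Component estimate = base effort x risk x priority.
estimates = {name: round(hours * risk * priority, 1)
             for name, hours, risk, priority in components}
total = sum(estimates.values())

print(estimates)  # per-component test effort
print(total)      # overall test estimate in hours
```

The value of such a model is less the arithmetic than the conversation it forces: every factor has to be justified against the risks and priorities recorded in the plan.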

 

 

 


 

Conference Programme

Day 2 (Thurs, 11 September)

 

Opening Remarks by MSTB President

 

 

Keynote 2 (0910 – 1010)

The Changing Role of Testers

Paul Gerrard, Principal, Gerrard Consulting, UK

 

 

A: 1040 – 1125

 

Concurrent Session 1A

Effectively Communicating with Acceptance Test

Ken Pugh, Fellow Consultant, Net Objectives, USA

 

Defining, understanding, and agreeing on the scope of work to be done is often an area of discomfort for product managers, developers and quality assurance experts alike. Many of the items living happily in our defect tracking systems were born of the difficulty we have in performing said activities. Acceptance Testing roots out these defects by re-examining the process we use to define the work to be done and how it is tested.

 

This session introduces acceptance testing, explains why it works, and outlines the different roles that team members play in the process. We will contrast acceptance testing with unit testing and show examples of how the process clarifies the work to be done.

 

Concurrent Session 2A

The Silent Assassin – Have you seen any?

Clive Bates, Managing Consultant, Experimentus, UK

 

Silent assassins work in every IT development project: software quality issues that cause projects to be delayed, leave them hundreds of thousands of Euros over budget, or surface as failures after a system has gone live.

 

These are software quality issues that nobody focuses on early enough, but which put the success of key IT projects at significant risk. These quality issues in most instances do not come to light until the end of the development process when testing identifies or crystallises the problem, or even until the system is live and performs badly or dangerously.

 

How many times have we reached the testing phase of a project only to see a major increase in management focus, pushing to hit deadlines and get the software live, focus that should have been in place so much earlier in the project?

 

In this talk I will reference experience from many projects and review 6 common quality issues that I see very regularly.  I will focus on how as a test community we should act differently to find and remove these silent assassins as early as possible.

 

The role that testing plays needs to be rethought – no longer should we be just defect chasers.

 

Key take-away points:

1. What are the 6 common quality issues that exist in IT projects?

2. Why do we let these issues happen time after time?

3. How should testers work and act differently to remove the silent assassins?

 

 

Concurrent Session 3A

Managing with Metrics

Shaun Bradshaw, VP, Consulting Solutions, Zenergy Technologies, US

 

Some consider test metrics a thorn in software development and testing, but when used properly, they provide valuable insights into what occurs during projects and what strategic and tactical adjustments must be made on a daily basis. Attendees will learn how key metrics drove test management decisions and how these same metrics will benefit their organisations. 

 

Concurrent Session 4A

Improving Mobile App UX, Don't Just Satisfy Users

Philip Lew, CEO, XBOSoft, US

 

Have you decided to beef up your mobile presence? You have mobile development staff and some ideas on what it should look like, or perhaps you are going to outsource it, and you are ready to begin. If you listen to the mobile developers, this is a piece of cake: just convert what you have on the web, or a subset of it, for easy access by mobile devices. Congratulations! Join hundreds of others who represent their organisations with mediocrity through ‘good enough’ mobile applications.

 

In this workshop, Mobile Usability and UX expert Philip Lew will assist you from start to finish in examining your users’ needs and requirements to roll out a kick-butt mobile application that meets your business goals while satisfying your end users. We’ll cover best practices, as well as industry practices that are not the best, but you must live with anyway, in order to build an app that users accept and will use.

 

B: 1125 - 1215

 

Concurrent Session 1B (1125 – 1215)

The Art of Testing Transformation: Blending Technology with Cutting Edge Processes

Jennifer Bonine, Vice President, tap|QA Inc, USA

 

Technologies, testing processes, and the role of the tester have evolved significantly over the past several years. As testing professionals, it is critical that we evaluate and evolve ourselves to continue to add tangible value to our organizations. In your work, are you focused on the trivial or on real "game changers"? Jennifer Bonine describes critical elements that, like a skilled painter, help you artfully blend people, process, and technology into a masterpiece, woven together to create a synergistic relationship that adds value to your organization. Jennifer shares ideas in the areas of mastering politics, maneuvering core versus context, and innovating your technology strategies and processes. She addresses questions on how many new processes can be introduced in an organization, what the role of organizational culture is in determining the success of a project, and how you can know which tools will add value versus simply adding overhead and complexity. This discussion can lead you to technologies and processes you can stake your career on.

 

 

Concurrent Session 2B (1125 – 1215)

Wrapping Your Brain Around Agile Testing

Steven ‘Doc’ List, Consultant, US

 

Concurrent Session 3B (1125 – 1215)

Auto Testing Life Cycle

Devabalan Theyventheran, Head, Transformation Office and Business Process Dev., CIMB, Malaysia

 

Concurrent Session 4B (1125 – 1215)

Maintaining Product Quality through Certification

Herman Tahir/C.C. Pang, Q-Lab

 

 

 

C: 1215 – 1300

 

Concurrent Session 1C

Essential Characteristics of a Software Tester: Is It for You?

Arina Ramlee, CEO, Cyberfield Technologies Sdn Bhd         

 

Concurrent Session 2C

Usability testing: a non-functional technique every tester should know about

Thomas McCoy, Instructional Designer, Dept of Social Services, Aus   

 

As testing professionals we add value across many areas of software development. While non-functional techniques like performance and security testing are narrow and require highly specialised expertise, usability testing is a technique we should all be familiar with.

 

Effective usability testing can mean the difference between product success and failure because a system that is difficult to use is unlikely to be embraced by clients. Usability testing is different from user acceptance testing (UAT) in that while UAT focuses on the system meeting the specified requirements, usability testing focuses on ease of use. In the past, usability testing required an enormous investment in equipment, but with today’s technology it can be done on a laptop computer with webcam and microphone.

 

In this session Thomas McCoy will explain and demonstrate how usability testing can be done, and will also cover the related topic of accessibility testing, which aims to ensure that IT systems can be used by people with disabilities.

 

Concurrent Session 3C

Competency Metrics for Software Test Professionals: An MSTH Perspective

Dr Zaliman/Assoc. Prof. Dr Roslan, UNITEN

 

Concurrent Session 4C

Introduction to Combinatorial Testing

Mark Kiemele, President and Co-founder, Air Academy Associates, US

 

Test and evaluation are an important duo in almost any endeavour, and software development is certainly no exception. Many who frequently perform testing are unaware that how one tests determines how easy or how hard it will be to evaluate the results of testing.

 

This presentation will provide a brief overview of the best techniques for multivariate testing. Multivariate Testing (MVT) is known by different names; Design of Experiments (DOE) is perhaps the most commonly used alternative. We will begin with Full and Fractional Factorial designs to illustrate the important properties of a test design. Then we will move into higher-dimensional spaces, where Latin Hypercube Designs are the best choice for fractionating, or reducing, the number of test cases that have to be investigated.

 

A special technique called High Throughput Testing will be demonstrated. These testing techniques can be used at all levels, including system-level testing, functional testing, and regression testing. Since testing is essentially a continuous process in software development, it behooves software testers to know the best techniques for testing.
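The case-reduction idea behind these designs can be illustrated with a simple greedy pairwise generator in Python. Pairwise (2-way) coverage is a common combinatorial-testing heuristic; the parameters below and the greedy algorithm itself are an illustrative sketch, not one of the specific designs covered in the session:

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy covering: pick test cases until every value pair of every
    two-parameter combination appears in at least one selected case."""
    names = list(params)
    # Every ((param index, value), (param index, value)) pair to cover.
    uncovered = {
        ((i, a), (j, b))
        for i, j in combinations(range(len(names)), 2)
        for a in params[names[i]]
        for b in params[names[j]]
    }
    candidates = list(product(*params.values()))  # the full-factorial pool
    suite = []
    while uncovered:
        # Choose the candidate covering the most still-uncovered pairs.
        def gain(case):
            return sum(1 for (i, a), (j, b) in uncovered
                       if case[i] == a and case[j] == b)
        best = max(candidates, key=gain)
        uncovered = {((i, a), (j, b)) for (i, a), (j, b) in uncovered
                     if not (best[i] == a and best[j] == b)}
        suite.append(dict(zip(names, best)))
    return suite

params = {"browser": ["Chrome", "Firefox", "Edge"],
          "os":      ["Windows", "macOS", "Linux"],
          "locale":  ["en", "ms", "zh"]}

print(len(list(product(*params.values()))))  # 27 full-factorial cases
print(len(pairwise_suite(params)))           # far fewer (typically ~9 here)
```

Every pair of parameter values still appears in at least one test case, which is why pairwise suites catch the large share of defects triggered by two-factor interactions at a fraction of the full-factorial cost.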

 

Keynote 3 (1415 - 1515)

TESTimonial 360: A Fully Illustrated 360-Degree View of Software Testing

Rob Sabourin, Principal Consultant, Amibug.com, Canada

 

 

Panel Discussion (1515 – 1615)

 

Topic: 360° Testing

 

Moderator:

Prof Jasbir Dhaliwal

Panellists:

Rob Sabourin (Amibug.com)

Paul Gerrard (Gerrard Consulting)

Devabalan Theyventheran (CIMB)

 

 

Special Announcements (1645 – 1730)

 

Winners of Software Test Design Competition 2014

Best Paper Award for Postgraduate Research Workshop

 

Closing Remarks

END

 

 Note: Programme may change without prior notice          MSTB © 2014