Nederlandse Testdag 2023
Thursday 2nd of November 2023, Heerlen.
The 26th edition of the Dutch Testing Day (Nederlandse Testdag) will take place on the 2nd of November at the Open Universiteit in Heerlen. For years, the Nederlandse Testdag has been the main event where science, education and the business world share new ideas and insights in the field of testing.
Location
The Testdag 2023 will take place at the
Open University of the Netherlands
Valkenburgerweg 177
6419 AT Heerlen, the Netherlands
Organisation
The 26th Nederlandse Testdag organisation members:
Dr. Machiel van der Bijl
CEO and founder Axini
Prof. dr. Tanja Vos
Professor of Software Engineering at the Open Universiteit
Program
10:00 – 10:45 Keynote Anne Kramer – The Journey of modelling in testing: from the past until the present
10:45 – 11:15 Coffee break and marketplace
11:15 – 12:45 Talks – The present of modeling in testing
11:15 – 11:45 Julien Schmaltz – Finding your way into Model Based Testing: models and tools
11:45 – 12:15 Niels Doorn – Sensemaking in testmodelling with TestCompass
12:15 – 12:45 Tannaz Zameni – Combining Model-Based Testing and Behavior Driven Development
12:45 – 13:45 Lunch, networking, marketplace with stands from tool providers, universities, service providers, etc.
13:45 – 15:15 Talks – The present of modeling in testing
13:45 – 14:15 Kevin van der Vlist – Large Language Models in the scriptless test loop
14:15 – 14:45 Nathalie Rooseboom de Vries – You are cleared to use a model
14:45 – 15:15 Wishnu Prasetya – Model-based User Experience Assessment
15:15 – 15:45 Coffee break and marketplace
15:45 – 16:15 Erik Deibert – Flaky Testing: The CI/CD Silent Killer
16:15 – 17:00 Keynote Bryan Bakker – The future of models in testing
17:00 – Drinks, snacks, networking, marketplace with stands from tool providers, universities, etc.
Speakers
Anne Kramer - Keynote
Success manager at Smartesting
Bryan Bakker - Keynote
Senior Test Architect at Sioux Technologies
Erik Deibert
Functional Cluster Test Architect at Haystaq
Julien Schmaltz
Director Consulting Expert – IT Automation & CI/CD at CGI
Kevin van der Vlist
Software Engineer at ING
Nathalie Rooseboom de Vries
Project Lead Verification & Validation at Dutch Air Traffic Control LVNL
Niels Doorn
PhD candidate at the Open University
Tannaz Zameni
PhD candidate at the University of Twente
Wishnu Prasetya
Researcher at Utrecht University
Keynote – The Journey of modelling in testing: from the past until the present
Modeling has a long history. Statecharts, for example, were invented by David Harel in the 1980s. From the beginning, the core idea of modeling was to manage complexity. Today, the Unified Modeling Language (UML) is widely accepted. However, despite all the efforts of the International Requirements Engineering Board (IREB) to establish models in requirements engineering, their main application area remains system architecture and design.
The idea of model-based testing (MBT) emerged in 1999. It was originally conceived as an automated, tool-based approach. Take a test case generator, feed it a UML model from the system design, and obtain test cases from it with little to no effort.
Unfortunately, this approach has not proven successful for several reasons. One of them is the fact that testing is more than just checking against a design specification. Today, we understand MBT more broadly. The ISTQB defines model-based testing as “testing based on or involving models.”
In her keynote, Anne will take you on a journey from the early days of MBT to modern agile visual test design approaches. She will share her own experiences of good practices and common pitfalls that she has gathered over more than 15 years of practical application. As passionate as she is about MBT, it would be surprising if Anne didn’t end up convincing you as well.
Anne Kramer
Success manager at Smartesting
Anne Kramer first came into contact with model-based test design in 2006 and has been passionate about the topic ever since. Among other things, she was co-author of the “ISTQB FL Model-Based Tester” curriculum and lead author of the English-language textbook “Model-Based Testing Essentials” (Wiley, 2016). After many years of working as a process consultant, project manager and trainer at sepp.med, Anne joined the French tool manufacturer Smartesting in April 2022. Since then, she has been fully dedicated to visual test design.
Keynote – The future of models in testing
The utilization of models in product development has seen a significant surge over the past years. For instance, models are employed to generate code for subsystems or produce interface code. Furthermore, they provide substantial opportunities for innovative testing methods, opening new avenues for efficient product development.
In software testing the trend of automation is clearly visible. More and more test steps are being automated, integrating seamlessly into the CI/CD pipeline. However, the primary challenge lies in the maintenance of these automated test cases. The absence of good programming practices can lead the test suite into a maintenance hell.
Another emerging trend is the incorporation of formal models during the product design phase. They serve as the blueprint for generating code, rendering handwritten code obsolete. This technique has found notable success in high-tech environments, indicating a potential paradigm shift in the software development process.
As software development begins to embrace formality, test development too can adapt to a similar formal approach. This transformation can be realized with the help of model-based testing. It allows for the generation of test cases directly from a formal model that outlines the system’s behavior. From a maintenance perspective, this approach appears promising, although it also presents its own set of challenges.
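To make this concrete, here is a minimal sketch of test generation from a behavioural model. The vending-machine model, its state names and the all-transitions coverage criterion are invented for illustration; real MBT tools work on much richer formalisms.

```python
# Minimal sketch of model-based test generation (illustrative only): a toy
# state machine stands in for the formal behavioural model, and every
# transition is turned into an executable sequence of test steps.

from collections import deque

# Hypothetical model of a vending machine: state -> [(action, next_state)]
MODEL = {
    "idle":    [("insert_coin", "paid")],
    "paid":    [("select_item", "vending"), ("refund", "idle")],
    "vending": [("take_item", "idle")],
}

def shortest_path(model, start, goal):
    """BFS over the model; returns the list of actions reaching `goal`."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, actions = queue.popleft()
        if state == goal:
            return actions
        for action, nxt in model.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, actions + [action]))
    return None

def generate_tests(model, start):
    """Generate one test path per transition (all-transitions coverage)."""
    tests = []
    for state, transitions in model.items():
        for action, _ in transitions:
            # Reach `state` via the shortest path, then fire the transition
            # under test.
            path = shortest_path(model, start, state)
            if path is not None:
                tests.append(path + [action])
    return tests

for test in generate_tests(MODEL, "idle"):
    print(" -> ".join(test))
```

Each generated test is the shortest path from the initial state to a transition followed by that transition, which is the simplest way to achieve all-transitions coverage.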
In the foreseeable future, it is likely that we will see an extensive use of modeling for not just the system-under-test (for instance, an autonomous car) but also its surrounding environment. This includes factors like weather conditions, pedestrians, and other vehicular traffic, leading to a comprehensive virtual ecosystem. With such advancements, the future of testing is looking extremely promising, paving the way for more reliable and efficient product development.
Bryan Bakker
Senior Test Architect at Sioux Technologies
After receiving his master’s degree in computer science in 1998, Bryan Bakker worked as a software engineer on various technical systems. Several years later, he specialized in testing embedded software in multidisciplinary environments, in which the software interfaces with other disciplines such as mechanics, electronics and optics. He has worked on, among other things, medical products, professional security systems, semiconductor equipment, and electron microscopy systems.
In recent years, Bryan has focused on test automation, reliability testing, design for testability and model-based testing. He is also a tutor of several test-related courses and a frequent speaker at international conferences.
Flaky Testing: The CI/CD Silent Killer
In this presentation, Erik will take you into the fascinating world of flaky tests. What exactly are they? Why are they so dangerous for the CI/CD pipeline? The presentation will also address common root causes and explore what we as testers can do to avoid becoming victims of this silent killer of the CI/CD pipeline.
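To give a flavour of the problem, here is a minimal illustrative example (not taken from the talk): a test that synchronises on a sleep instead of on completion of the work will pass or fail depending on scheduler timing, exactly the kind of intermittent failure that erodes trust in a pipeline.

```python
# Illustrative only: a classic timing-based flaky test. Whether it passes
# depends on scheduler load, so it fails intermittently in a busy CI/CD
# pipeline even though the code under test is correct.

import threading
import time

results = []

def background_job():
    time.sleep(0.05)           # simulated asynchronous work
    results.append("done")

def test_flaky():
    threading.Thread(target=background_job).start()
    time.sleep(0.06)           # hope the job finished in time -> flaky
    assert results == ["done"]

def test_robust():
    results.clear()
    worker = threading.Thread(target=background_job)
    worker.start()
    worker.join(timeout=5)     # synchronise on completion, not on a guess
    assert results == ["done"]
```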
Erik Deibert
Functional Cluster Test Architect at Haystaq
Erik has over 25 years of experience in the field of testing. He began his career as a tester at an automotive company and, after a brief stint in logistics, transitioned to a role as Test Automation Architect at Haystaq. Currently, he serves as a Test Architect at ASML, where he focuses on quality in the broadest sense. Over the last 10 years, he has gained extensive experience with CI/CD pipelines and has become deeply interested in preventing flaky testing.
Finding your way into Model Based Testing: models and tools
In my experience, I have seen many definitions of Model Based Testing (MBT). Indeed, there are many different ways of using models in testing! Which definition applies to a given situation? What is Model Based Testing, actually? Are there tools that support these specific needs? In this talk, I would like to explore some answers to these questions with you. Together, we will look at different types of models and their associated tools and reflect on how they relate to testing objectives. By the end of the talk, we should together have a better understanding of the different approaches to model-based testing and be more efficient at using models for our testing activities.
Julien Schmaltz
Director Consulting Expert – IT Automation & CI/CD at CGI
Dr. Julien Schmaltz is Director Consulting Expert IT Automation at CGI. He holds a PhD from the University of Grenoble, France. His technical expertise is in formal methods and model-based technologies, such as model-based testing and code generation. Over the past years, he has gained experience implementing technological innovations at clients in different domains, such as rail and high-tech. Together with customers and partners, he builds support for the renewal of processes, tools and mindsets. He enjoys working at the intersection of technologies, people and processes. Before joining industry in 2018, he conducted research and taught at several universities (Radboud University, Open University of the Netherlands, Eindhoven University of Technology) in the field of model-driven engineering and model-based testing, with applications to hardware and software systems. In cooperation with universities, he is actively engaged in transferring technology created by academic research to the market.
Large Language Models in the scriptless test loop
Automated end-to-end testing of UIs is not an easy challenge. Although exploratory testing techniques such as those provided by Testar are helpful, three important challenges remain. The first is action selection: determining which UI element should be interacted with next. Determining the ‘best’ choice in a given state is a hard decision, especially when you want to replicate human behaviour as much as possible. The second is generating input for the selected UI element. This should take the journey of the user thus far into account (i.e., the context) and provide meaningful input for the purpose of exploring the application in the best way possible, according to predefined heuristics. The third is state abstraction, which can be used to reason about groups of similar states at once. This is achieved by removing irrelevant details of the (historical) state, retaining only the key characteristics needed to detect state similarity.
In this talk, we will discuss how LLMs can be used to improve action selection by consuming the application state as context and suggesting which action to take. We will also explore how we can use these models to generalise concrete application states, simplifying the challenge of creating state abstractions. Finally, we will discuss how an LLM can be used to generate the actual input that is sent to the selected UI component. Together, these steps show how we can use large language models in the test loop.
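The sketch below shows how these three roles for an LLM could slot into a scriptless test loop. It is an illustrative outline only: query_llm is a hypothetical placeholder for a real model call, and the prompt wording and state/widget format are invented rather than taken from Testar or ING’s implementation.

```python
# Hedged sketch of an LLM-assisted scriptless test loop. `query_llm` is a
# hypothetical stand-in for a real model call; the state/action formats
# are invented for illustration.

import json

def query_llm(prompt: str) -> str:
    """Placeholder for a call to an actual large language model."""
    raise NotImplementedError("plug in your LLM client here")

def select_action(state: dict, history: list[str]) -> dict:
    """Challenge 1: let the LLM pick the next widget to interact with."""
    prompt = (
        "You are exploring a GUI. Previous steps:\n"
        + "\n".join(history)
        + "\nCurrent widgets:\n"
        + json.dumps(state["widgets"], indent=2)
        + "\nReply with the id of the single best widget to exercise next."
    )
    widget_id = query_llm(prompt).strip()
    return next(w for w in state["widgets"] if w["id"] == widget_id)

def generate_input(widget: dict, history: list[str]) -> str:
    """Challenge 2: generate context-aware input for the chosen widget."""
    prompt = (
        f"Given the journey so far ({history}), propose realistic input "
        f"for this widget: {json.dumps(widget)}. Reply with the text only."
    )
    return query_llm(prompt)

def abstract_state(state: dict) -> str:
    """Challenge 3: strip irrelevant detail so similar states collapse."""
    prompt = (
        "Summarise this GUI state, keeping only the characteristics "
        "relevant for deciding whether two states are equivalent:\n"
        + json.dumps(state)
    )
    return query_llm(prompt)
```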
Kevin van der Vlist
Software Engineer at ING
Kevin is a software engineer in an R&D team at ING. The team’s purpose is to help other teams improve their software quality and the overall developer experience of the development lifecycle by researching and validating potential solutions. This is done either by exploring technology from industry that is not used (yet), or by looking into state-of-the-art academic developments that might be of use to ING. We often collaborate in public-private partnerships and try to ensure that the results are of use to industry, and not just to academia. Our interest lies in models, domain-specific languages, model-driven engineering and (software) testing.
You are cleared to use a model
When thinking about models in testing, many people think of ‘model-based testing’: a model of the system is made, and from that the test cases are derived, preferably with elaborate tooling that also enables easy maintenance of the cases when the model changes. Many testers think this is the only ‘allowed’ context of ‘model based testing’. But one can also think about ‘testing based on models’, and those models can be mental models, models that simplify complex systems, or flow and data models used for test designs. When a plane flies to its destination, it gets cleared many times during its journey: it gets the ‘OK’ to proceed to the next part of its departure, flight or landing. In this talk I am going to give you the clearance (do you actually need it?) to use models anywhere during your tests, by sharing various examples of the models I used – and still use – now that I am testing the Dutch air traffic system. Along the way I will share some insights into the air traffic system as well.
Nathalie Rooseboom de Vries
Project Lead Verification and Validation at Dutch Air Traffic Control LVNL
But foremost, she is a test enthusiast, evangelist or, if you prefer, fanatic or pragmatic, hence the alias “FunTESTic”. Her special interests in the field of testing are software testing and information ethics (she started the discussion on ‘Testing and Ethics’), test architecture, standardization, testing processes, end-to-end testing and a bit of data warehouse testing. Next to software testing she is a (certified) herbalist and a relaxation and massage therapist. Last but not least, she has a huge passion for Surinam: she speaks Sranantongo, follows the teachings of Winti (Anyame) and cooks a mean ‘SrananKukru’ meal.
Sensemaking in testmodelling with TestCompass
In the dynamic landscape of software development, testing has emerged as the most commonly used technique for measuring software quality. Despite its significance, software testing often receives insufficient attention in computer science education, leading to suboptimal testing practices among students and graduates.
Learning software testing is a complex intellectual activity for which students need to allocate multiple cognitive resources at the same time. A systematically developed body of knowledge about didactic approaches, the effects of educational settings, and learning outcomes is lacking.
As a first step in determining how software testing education can be improved, we have studied the sensemaking process of students designing test cases with an online modelling tool. We are continuing this research by studying the approaches test experts take when designing test cases. By comparing the students with these experts, we can identify where the students’ deficiencies lie and what we need to train them on. We can use this knowledge to develop instructional designs for different educational contexts.
To address this issue, the presentation will shed light on the intricate process of teaching software testing.
Key Highlights:
- Unveiling the challenges associated with teaching software testing in educational settings.
- Exploring the multifaceted cognitive demands placed on students during software testing education.
- Examining the existing gap in didactic approaches and learning outcomes in software testing education.
- Insightful comparison between student-generated test cases and those created by seasoned test experts.
- Identifying deficiencies and opportunities for improvement in software testing education.
- Future prospects of developing innovative instructional designs to enhance software testing proficiency.
This presentation marks a significant stride toward bridging the gap between theory and practice in software testing education. Join us as we uncover valuable insights that will pave the way for a more effective approach to teaching this essential skill.
Niels Doorn
PhD student at the Open University
Niels Doorn is a dedicated PhD Student at the Open Universiteit, specializing in software testing methodologies. His dual role as a Team Leader and Lecturer / Researcher at the NHL Stenden University of Applied Sciences reflects his commitment to advancing computer science education. With a keen focus on improving software testing practices, Niels brings a unique perspective to the table.
From BDD Scenarios to Test Case Generation
Model-based testing (MBT) offers the possibility of automatic generation and execution of tests. However, it is not yet widely used in industry due to the difficulty of creating and maintaining models. Behavior Driven Development (BDD), on the other hand, is becoming more popular in the agile development process as a way to achieve a common understanding of the system under development among stakeholders and to automate testing. However, BDD scenarios are written in human language and are usually not precise enough. Moreover, tests extracted from BDD scenarios are short and incomplete; they only cover a very small part of the system. Our goal is to combine these two approaches to benefit from the usability of BDD and the test automation capabilities of MBT. In this talk, we first define a formal model of scenarios that we call BDD Transition Systems; second, we create more complete tests by composing scenarios (model composition); and finally, we generate and execute tests automatically. We demonstrate the applicability of this approach in a real-world example: an industrial printer.
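The BDD Transition Systems mentioned here are a formal construction from the speaker’s research; the sketch below only illustrates the underlying intuition with invented states and steps: Given/When/Then steps become labelled transitions, shared state names glue scenarios together, and longer tests are generated from the composed model.

```python
# Rough illustration (not the authors' formalisation): BDD steps become
# labelled transitions, and two scenarios are composed by merging the
# states they share, so tests can be generated that span scenarios.

# Each scenario is a list of (source_state, step_label, target_state).
login = [
    ("start", "Given the printer is idle", "idle"),
    ("idle", "When a user logs in", "logged_in"),
    ("logged_in", "Then the job queue is shown", "queue_shown"),
]
print_job = [
    ("logged_in", "When the user submits a job", "printing"),
    ("printing", "Then a confirmation appears", "confirmed"),
]

def compose(*scenarios):
    """Union of transitions; shared state names glue scenarios together."""
    return [t for scenario in scenarios for t in scenario]

def generate_paths(transitions, state, path=(), limit=6):
    """Enumerate step sequences through the composed model (bounded DFS)."""
    outgoing = [t for t in transitions if t[0] == state]
    if not outgoing or len(path) >= limit:
        yield list(path)
        return
    for _, label, target in outgoing:
        yield from generate_paths(transitions, target, path + (label,), limit)

for test in generate_paths(compose(login, print_job), "start"):
    print(test)
```

Running the sketch prints one test that stops after login and one that continues through the composed print-job scenario, showing how composition yields tests longer than any single BDD scenario.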
Tannaz Zameni
PhD candidate at the University of Twente
Tannaz is a third-year PhD candidate at the University of Twente. Her main research topic is model-based testing in combination with Behaviour Driven Development. She is part of the TiCToC project, which investigates methods and tools to manage and reduce the combinatorial explosion of testing complex high-tech systems. She is interested in implementing formal testing theories in practice and is actively striving to achieve that goal during and after her PhD.
Model-based User Experience Assessment
For many software applications user experience is important, yet assessing how users would experience an application before it is released is very hard. We can of course do beta testing with a group of real users, but it is not always practical to do this between releases. In this talk we will discuss a model-based approach to modelling users’ emotional experience. The approach is rooted in a well-known theory from psychology proposed by Ortony, Clore, and Collins. Compared to a machine-learning approach, a more classical model-based approach has the benefit of being comprehensible (and explainable), but the challenge is indeed how to craft such a model. In the talk we will first discuss the structure of the underlying theory, and then we will discuss what a user model looks like and how we can use it to perform an automated user experience analysis.
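As a toy illustration of the kind of appraisal rules such a theory describes (this is not the speaker’s actual model; the goals, weights and events below are invented), a user model might track how much the user cares about each goal and derive emotion intensities from events:

```python
# Toy sketch in the spirit of OCC-style appraisal theories (not the
# speaker's model): a user model tracks goals, and events are appraised
# against them to produce emotion intensities a UX analysis can inspect.

from dataclasses import dataclass, field

@dataclass
class UserModel:
    # How much the simulated user cares about each goal (0..1).
    goals: dict[str, float]
    emotions: dict[str, float] = field(default_factory=dict)

    def appraise(self, goal: str, likelihood: float, achieved: bool | None):
        """Prospect-based appraisal of one goal against an event."""
        desirability = self.goals.get(goal, 0.0)
        if achieved is None:                      # outcome still uncertain
            self.emotions["hope"] = desirability * likelihood
            self.emotions["fear"] = desirability * (1.0 - likelihood)
        elif achieved:
            self.emotions["joy"] = desirability   # desirable event confirmed
        else:
            self.emotions["distress"] = desirability

# Example: while a checkout flow is loading, hope and fear coexist;
# after a failure, distress dominates.
user = UserModel(goals={"complete_purchase": 0.9})
user.appraise("complete_purchase", likelihood=0.7, achieved=None)
print(user.emotions)   # hope ~0.63, fear ~0.27
user.appraise("complete_purchase", likelihood=0.0, achieved=False)
print(user.emotions)   # distress now recorded as well
```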
Wishnu Prasetya
Researcher at Utrecht University
Dr. Wishnu Prasetya is an assistant professor at Utrecht University, the Netherlands. He received his PhD from Utrecht University for his research on the mechanical verification of self-stabilizing distributed systems. He has worked for many years in the field of software technology. His expertise includes compositional reasoning about temporal properties of distributed systems, symbolic program verification, and automated software testing. His recent interests include automated game testing and the use of agent-based AI in software testing.