Nederlandse Testdag 2024
29th of November 2024.
The 27th edition of the Dutch Testing Day (Nederlandse Testdag) will take place on Friday 29th of November 2024 at the Science Park in Amsterdam. For years, the Nederlandse Testdag has been the main event where science, education and the business world share new ideas and insights in the field of testing.
![conference_illustration_2024 Testdag Conference](https://testdag.testar.org/wp-content/uploads/2024/05/conference_illustration_2024.png)
Registration
![AmsSciencePark-2024 OU Heerlen front](https://testdag.testar.org/wp-content/uploads/2024/05/AmsSciencePark-2024.jpg)
Organisation
The 27th Nederlandse Testdag organisation members:
![Dr. Machiel van der Bijl](https://testdag.testar.org/wp-content/uploads/2023/07/machiel_van_der_bijl.jpg)
Dr. Machiel van der Bijl
CEO and founder Axini
![Bas Dijkstra](https://testdag.testar.org/wp-content/uploads/2023/07/Bas_Dijkstra.jpg)
Bas Dijkstra
The Nederlandse Testdag board members:
![Prof. dr. Tanja Vos](https://testdag.testar.org/wp-content/uploads/2023/07/tanja-vos-rt-cropped-500.jpg)
Prof. dr. Tanja Vos
Hoogleraar Software Engineering Open Universiteit
![Dr. Petra van den Bos](https://testdag.testar.org/wp-content/uploads/2023/07/Petra_van_den_Bos.jpg)
Dr. Petra van den Bos
Call for Presentations
The Call for Presentations is open until Friday, July 26, 5 PM CEST.
The theme for the 2024 conference is
“Testing of, with or by AI”
In the last few years, the impact of Artificial Intelligence on software development and testing processes and practices has grown exponentially. It seems like everyone has formed an opinion on the role of AI in software testing, ranging from
“In five years, AI will have made all of us obsolete”
to
“AI is just another fad, and it’s not really helpful at all”
The truth, as always, is probably somewhere in the middle, and we would love to hear what you think.
Our theme opens up the opportunity to talk about AI from various angles:
- Testing of AI systems – Have you worked on testing AI systems such as LLMs, neural networks or chatbots? We would love to hear about your experiences. How did you determine what is ‘right’ and what is ‘wrong’?
- Testing with AI systems – Have you used AI systems to support your testing efforts? How did it help you? What have you learned about the benefits and the drawbacks of AI in your testing?
- Testing by AI systems – Have AI systems taken over (part of) your testing efforts completely? How did you get there? What lessons have you learned along the way? And how did it make your life as a tester better or easier?
We are looking to build an exciting program around our central theme. More specifically, we are looking for:
- Presentations (40 minutes) about your testing methods, test techniques, new insights or test tools and experiences gained in your test projects.
- Presentations (40 minutes) on the research project you are working on. Even if your thesis or publication is not ready for publication yet, we invite you to share your findings!
- Lightning talks (15 minutes) on any topic related to the conference theme – discussions, comments, forecasts and predictions, anything goes. If you want to demonstrate a new tool or technique you are working on and are looking for some honest feedback, this is the place to get it, too!
All presentations and talks can make use of a projector and audio support.
To submit your proposal, please use this Google Form.
We are looking forward to your submission!
Program
TBD
Speakers
TBD
Keynote – The Journey of modelling in testing: from the past until today
Modeling has a long history. Statecharts, for example, were introduced by David Harel in the 1980s. From the beginning, the core idea of modeling was to manage complexity. Today, the Unified Modeling Language (UML) is widely accepted. However, despite all the efforts of the International Requirements Engineering Board (IREB) to establish models in requirements engineering, their main application area remains system architecture and design.
The idea of model-based testing (MBT) emerged in 1999. It was originally conceived as an automated, tool-based approach. Take a test case generator, feed it a UML model from the system design, and obtain test cases from it with little to no effort.
Unfortunately, this approach has not proven successful for several reasons. One of them is the fact that testing is more than just checking against a design specification. Today, we understand MBT more broadly. The ISTQB defines model-based testing as “testing based on or involving models.”
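To make the core idea concrete for readers new to MBT, here is a minimal, hypothetical sketch (not tied to any specific tool or to the speaker's approach) of deriving test sequences by walking a small state-machine model; the model and names are illustrative only:

```python
# Hypothetical model of a login flow as a state machine:
# (current_state, action) -> next_state. Illustrative only.
MODEL = {
    ("logged_out", "login"): "logged_in",
    ("logged_in", "view_cart"): "logged_in",
    ("logged_in", "logout"): "logged_out",
}

def generate_tests(start, depth):
    """Return every action sequence of length `depth` reachable from `start`."""
    if depth == 0:
        return [[]]
    tests = []
    for (state, action), next_state in MODEL.items():
        if state == start:
            for tail in generate_tests(next_state, depth - 1):
                tests.append([action] + tail)
    return tests

for sequence in generate_tests("logged_out", 2):
    print(" -> ".join(sequence))
# prints: login -> view_cart
#         login -> logout
```

Even this toy generator shows the appeal: when the model changes, the test suite is regenerated rather than maintained by hand.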
In her keynote, Anne will take you on a journey from the early days of MBT to modern agile visual test design approaches. She will share her own experiences of good practices and common pitfalls that she has gathered over more than 15 years of practical application. As passionate as she is about MBT, it would be surprising if Anne didn’t end up convincing you as well.
Anne Kramer
Success manager at Smartesting
Anne Kramer first came into contact with model-based test design in 2006 and has been passionate about the topic ever since. Among other things, she was co-author of the “ISTQB FL Model-Based Tester” curriculum and lead author of the English-language textbook “Model-Based Testing Essentials” (Wiley, 2016). After many years of working as a process consultant, project manager and trainer at sepp.med, Anne joined the French tool manufacturer Smartesting in April 2022. Since then, she has been fully dedicated to visual test design.
Keynote – The future of models in testing
The utilization of models in product development has seen a significant surge over the past years. For instance, models are employed to generate code for subsystems or produce interface code. Furthermore, they provide substantial opportunities for innovative testing methods, opening new avenues for efficient product development.
In software testing, the trend of automation is clearly visible. More and more test steps are being automated, integrating seamlessly into the CI/CD pipeline. However, the primary challenge lies in the maintenance of these automated test cases. The absence of good programming practices can turn the test suite into a maintenance nightmare.
Another emerging trend is the incorporation of formal models during the product design phase. They serve as the blueprint for generating code, rendering handwritten code obsolete. This technique has found notable success in high-tech environments, indicating a potential paradigm shift in the software development process.
As software development begins to embrace formality, test development too can adapt to a similar formal approach. This transformation can be realized with the help of model-based testing. It allows for the generation of test cases directly from a formal model that outlines the system’s behavior. From a maintenance perspective, this approach appears promising, although it also presents its own set of challenges.
In the foreseeable future, it is likely that we will see an extensive use of modeling for not just the system-under-test (for instance, an autonomous car) but also its surrounding environment. This includes factors like weather conditions, pedestrians, and other vehicular traffic, leading to a comprehensive virtual ecosystem. With such advancements, the future of testing is looking extremely promising, paving the way for more reliable and efficient product development.
Bryan Bakker
Senior Test Architect at Sioux Technologies
After receiving his master’s degree in computer science in 1998, Bryan Bakker worked as a software engineer on a variety of technical systems. Several years later, he specialized in testing embedded software in multidisciplinary environments, where the software interfaces with other disciplines such as mechanics, electronics and optics. He has worked on, among others, medical products, professional security systems, semiconductor equipment, and electron microscopy systems.
In recent years, Bryan has focused on test automation, reliability testing, design for testability and model-based testing. He is also a tutor of several test-related courses and a frequent speaker at international conferences.
Flaky Testing: The CI/CD Silent Killer
In this presentation, Erik will take you into the fascinating world of flaky tests. What exactly are they? Why are they so dangerous for the CI/CD pipeline? The presentation will also address common root causes and explore what we as testers can do to avoid becoming victims of this silent killer of the CI/CD pipeline.
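For readers unfamiliar with the phenomenon, here is a tiny illustrative example (hypothetical, not taken from the talk) of one classic root cause – a fixed sleep racing against asynchronous work – and a more robust rewrite that waits for the condition explicitly:

```python
import threading
import time

def start_background_job(result):
    """Simulates asynchronous work that finishes after a small delay."""
    def work():
        time.sleep(0.05)  # occasionally slower on a loaded CI machine
        result["done"] = True
    threading.Thread(target=work).start()

def wait_until(predicate, timeout=2.0, interval=0.01):
    """Poll until `predicate()` is true or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

result = {}
start_background_job(result)
# Flaky version: time.sleep(0.01); assert result.get("done")
# Robust version: wait explicitly for the condition instead of guessing.
assert wait_until(lambda: result.get("done"))
print("test passed deterministically")
```

The flaky variant passes or fails depending on scheduler timing; the polling variant makes the test's assumption (the job finishes within the timeout) explicit and generous.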
Erik Deibert
Functional Cluster Test Architect Haystaq
Erik has over 25 years of experience in the field of testing. He began his career as a tester at an automotive company and later transitioned to a role as a Test Automation Architect at Haystaq after a brief stint in logistics. Currently, he serves as a Test Architect at ASML, where he focuses on quality in the broadest sense. Over the last 10 years, he has gained extensive experience with CI/CD pipelines and has become deeply interested in preventing flaky tests.