Nederlandse Testdag 2024
29th of November 2024.
The 27th edition of the Dutch Testing Day (Nederlandse Testdag) will take place on Friday 29th of November 2024 at the Science Park in Amsterdam. For years, the Nederlandse Testdag has been the main event where science, education and the business world share new ideas and insights in the field of testing.
Registration CLOSED
Organisation
The 27th Nederlandse Testdag organization members:
Dr. Machiel van der Bijl
CEO and founder Axini
Bas Dijkstra
The Nederlandse Testdag board members:
Prof. dr. Tanja Vos
Professor of Software Engineering at the Open Universiteit
Dr. Petra van den Bos
Assistant Professor at Universiteit Twente
Program
(subject to change)
09:15 – 09:30 Welcome
09:30 – 10:15 Keynote – Hans Dekkers
10:15 – 10:45 Coffee break and marketplace
10:45 – 12:15 Accepted talks
10:45 – 11:30 Petra van den Bos
11:30 – 12:15 Ahmed Khalifa
12:15 – 13:15 Lunch, networking, marketplace with stands from tool providers, universities, service providers, etc.
13:15 – 14:45 Accepted talks
13:15 – 14:00 Andreea Cosariu
14:00 – 14:45 Joshua Lobato de Mesquita
14:45 – 15:15 Coffee break and marketplace
15:15 – 16:00 Accepted talks
15:15 – 16:00 Michel Nass
16:00 – 17:00 Panel with Huib Schoots, Linda van de Vooren, Bart Knaack and Hans Dekkers
17:00 Drinks, snacks, networking, marketplace with stands from tool providers, universities, service providers, etc.
Speakers
Hans Dekkers - Keynote
Lecturer at University of Amsterdam
Ahmed Khalifa
Quality Engineering Manager at Accenture
Andreea Cosariu
Manager QA
Joshua Lobato de Mesquita
Test Consultant at Allianz Direct via Valori
Michel Nass
Ph.D. in Software Engineering
Petra van den Bos
Linda van de Vooren
Test Consultant & Coach at Bartosz ICT
Bart Knaack
IT Consultant at Professional Testing
Huib Schoots
Principal Quality Consultant at Sogeti
Navigating Complex Systems: Lessons from the Digital Infrastructure for the Environmental Planning Act and the Role of AI
The digital system for the Environmental Planning Act consists of a large number of interconnected systems. The realization of this system took many years. On January 1st of this year, the law came into effect, and the system was put into operation. Until the very end, there were concerns, including from the Senate, about whether this was a responsible decision. These concerns were difficult to fully alleviate, despite a learning approach and enormous testing efforts. I would like to share our experiences from this interesting process and reflect on how things can be improved and what role AI can play in this.
Panel member
Hans Dekkers
Lecturer University of Amsterdam
Hans Dekkers, born in 1970, studied computer science at the VU (Vrije Universiteit). Since 2017, he has been advising the government in various roles, such as assessing large ICT projects and serving as the CIO of the digital system for the Environmental Planning Act. Hans has been affiliated with the University of Amsterdam (UvA) for 20 years, where he passionately contributes to the Master’s program in Software Engineering. At the UvA, he also established the successful Programming and AI minors. Before joining the UvA, Hans worked on interesting projects at organizations such as the Amsterdam Police, Emendo (tools to make legacy systems Y2K-compliant), and SNT.
An AI Experiment: building a Test Automation Framework using Generative AI
Like many testers, I was initially skeptical about AI’s role in our field. Could it really enhance our work, or is it just another overhyped technology that disrupts our jobs and compromises the quality we strive to maintain? To confront these doubts, I conducted an experiment to see if AI truly delivers on its promises or if it’s a threat to our profession.
In this presentation, I will walk you through the steps I took to build a test automation framework using ChatGPT, sharing not only the insights and challenges encountered along the way, but also the failures and the problem-solving approaches.
The framework is intended for UI testing and it is based on Java, Selenide and Gradle.
My experiment didn’t just test AI’s capabilities; it exposed the gains and debts we face when using AI in testing.
Did the experiment succeed in proving that AI can help, or did it confirm the fears that AI is more of a hindrance than a help? If it did succeed, under what conditions? If it didn’t, what were the reasons, and what does this mean for the future of testing? I will reveal the results during the presentation.
Andreea Cosariu
Manager QA at FNT Software
Experienced in developing long-term strategies, refining testing practices, and optimizing workflows across the organization.
A background in diverse roles, including Software Test Engineer and QA Architect, has built expertise in both the technical and strategic aspects of testing. Currently serving as a QA Manager, the focus is on streamlining QA processes and enhancing team efficiency. Committed to driving innovation and continuous improvement, motivated by the belief that there is always a new perspective to explore and better solutions to uncover.
Transforming the Tester Role
In this presentation, we explore the impact of Artificial Intelligence (AI) and Large Language Models (LLM) on the field of testing. Based on our own experiences and personal journeys, we will share how AI can assist testers in various areas such as requirements review, test case creation, test automation code, and bug ticket creation.
Combining practical examples with theoretical insights, we provide an overview of the possibilities and limitations of AI in software testing. Additionally, we talk about best practices and techniques such as prompt engineering, essential for effectively utilizing AI tools in the testing process.
We will include a 15-minute discussion based on open questions such as:
- What are the main challenges in integrating AI into testing processes?
- How can testers adapt their skills to effectively use AI tools?
- What ethical considerations should be taken into account when using AI in testing?
- What are your thoughts on how the tester role will transform due to AI in the near future?
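As a small illustration of the prompt-engineering idea mentioned in the abstract, a test-case-generation prompt can be assembled from a role, a requirement, and explicit output constraints. The template and field names below are a hypothetical sketch, not taken from the talk itself.

```python
# Hypothetical sketch: build a structured prompt for generating test cases
# from a requirement. The template wording is illustrative only.

def build_test_case_prompt(requirement, fmt="Gherkin", max_cases=5):
    """Assemble a role + requirement + constraints prompt for an LLM."""
    return (
        "You are an experienced software tester.\n"
        f"Requirement: {requirement}\n"
        f"Write at most {max_cases} test cases in {fmt} format.\n"
        "Cover the happy path, boundary values, and one negative case.\n"
        "Output only the test cases, no explanation."
    )

prompt = build_test_case_prompt(
    "Users can reset their password via an emailed link"
)
print(prompt)
```

The point of such a template is repeatability: constraining the role, format, and coverage categories tends to make the model's output easier to review than a free-form request.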
Joshua Lobato de Mesquita
Test Consultant at Allianz Direct via Valori
Test Specialist with over 8 years of experience in functional test execution and test coordination, focusing on process improvement and collaboration with end users to understand their needs. Approaches testing from the user’s perspective, ensuring their journey is central to the work, adapts to dynamic work environments, and is continuously driven to explore new testing methods.
Applying Behaviour-Driven Development for Game Creation
How to test computer games? In computer games, a player interacts with advanced (AI) agents and deals with extensive game worlds. While computer games can be immensely complex, and bugs show up even in well-known games, testing has not been picked up as much in the game software engineering community as it has in traditional software engineering. In this talk I will show how Behavior-Driven Development, a popular technique for specification and testing in traditional software engineering, can be applied in game software engineering as well. Specifically, I will present the highlights of (i) a framework to help express game behaviors as BDD scenarios, (ii) a method to apply BDD in game development, and (iii) tooling to apply BDD in Unity 3D, a major game development platform.
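To make the Given/When/Then scenario style concrete, here is a minimal hand-rolled sketch of a BDD-style game test. The game model (a `Player` with health and a healing potion) is a hypothetical illustration, not the speaker's framework or Unity tooling.

```python
# Minimal BDD-style sketch: a game rule expressed as Given/When/Then,
# encoded directly as comments over a plain test function.

class Player:
    """Hypothetical game entity with capped health."""
    def __init__(self, health=50, max_health=100):
        self.health = health
        self.max_health = max_health

    def drink_potion(self, strength=30):
        # Healing never raises health above max_health.
        self.health = min(self.health + strength, self.max_health)

def test_potion_heals_but_does_not_exceed_max():
    # Given a player with 90 health
    player = Player(health=90)
    # When the player drinks a potion of strength 30
    player.drink_potion(strength=30)
    # Then health is capped at the maximum of 100
    assert player.health == 100

test_potion_heals_but_does_not_exceed_max()
print("scenario passed")
```

BDD frameworks turn the Given/When/Then text into executable steps automatically; the sketch above only mimics that structure by hand to show the shape of a scenario.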
Petra van den Bos
Assistant Professor at Universiteit Twente
I am an assistant professor in the Formal Methods and Tools group of the University of Twente. My current research focuses on software correctness and software quality in general, and on model-based testing specifically. I like working on theory (formal methods) that can be applied in practice as well. Previously, I had a postdoc position in the Formal Methods and Tools group of the University of Twente. Before that, I had a PhD position in the Software Science group of the Radboud University, where I completed my thesis “Coverage and Games in Model-Based Testing”.
A Novel Assessment Tool: Enhancing Test Automation Frameworks with Twelve-Factor App Principles
In this digital era, software applications are delivered as web applications or Software-as-a-Service (SaaS). This underscores that software development should prioritize scalability, portability, and agile development. The Twelve-Factor App methodology provides a standard guideline for developing applications in line with these objectives. This presentation sheds light on how these principles can be extended beyond application development and used to build and maintain effective test automation frameworks.
This tool introduces, for the first time, the use of the Twelve-Factor principles in quality engineering practices. I developed an assessment tool that helps you evaluate your current test automation framework’s compliance with the Twelve-Factor App methodology across various environments. This assessment, combined with a Large Language Model (LLM) analysis of the assessment answers, reveals your framework’s strengths and growth areas. Furthermore, the tool provides recommendations and action steps for implementing the Twelve-Factor principles’ applicable practices in test automation.
The analysis results provided by the LLM offer deep insights to optimize test automation strategies. This optimization translates into significant improvement to the overall test automation framework. The identified strengths and growth areas will be translated into clear recommendations and action items, which guide teams on how to implement these improvements in their test environments.
This solution is generic and can be used by any team that is developing or maintaining a test automation framework. It is mainly relevant for software development and quality engineering teams that are looking to improve their test automation strategies and guarantee that their frameworks are prepared for the needs of modern application development. The adoption of this new assessment will improve the scalability, portability, and agility of the test automation frameworks. Thus, they will be in line with the highest standards of software delivery.
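As a rough illustration of the kind of compliance scoring such an assessment could perform: the factor names below are the standard Twelve-Factor catalogue, but the yes/no scoring scheme is a hypothetical sketch, not the speaker's actual tool or its LLM analysis.

```python
# Hypothetical sketch: score a test automation framework's answers
# against the twelve factors and report strengths and growth areas.

TWELVE_FACTORS = [
    "Codebase", "Dependencies", "Config", "Backing services",
    "Build, release, run", "Processes", "Port binding", "Concurrency",
    "Disposability", "Dev/prod parity", "Logs", "Admin processes",
]

def assess(answers):
    """answers: dict mapping factor name -> True (compliant) / False."""
    strengths = [f for f in TWELVE_FACTORS if answers.get(f)]
    growth_areas = [f for f in TWELVE_FACTORS if not answers.get(f)]
    score = round(100 * len(strengths) / len(TWELVE_FACTORS))
    return {"score": score, "strengths": strengths,
            "growth_areas": growth_areas}

# Example: a framework that keeps config in code and mixes build/run steps.
answers = {f: True for f in TWELVE_FACTORS}
answers.update({"Config": False, "Build, release, run": False})
report = assess(answers)
print(report["score"], report["growth_areas"])
```

In the talk's approach, free-text answers analyzed by an LLM would replace the boolean inputs here; the sketch only shows how factor-level answers roll up into an overall compliance picture.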
Ahmed Khalifa
Quality Engineering Manager at Accenture
Backed by strong credentials, including a master’s degree in quality management, ISTQB certification, and Lean Six Sigma Green Belt certification; advanced command of various testing tools, quality standards, and control techniques; and cross-platform skills in Windows, Linux, and Unix.
Always seeking new challenges where I can apply my quality management and process improvement skills and benefit from my quality and engineering studies and technical working experience, while contributing intensively to the achievement of tangible and intangible organizational objectives.
Explore the Power of Generative AI
This presentation explores the transformative advancements in generative AI technology. It introduces large language models (LLMs) like GPT, Gemini, and Claude, explaining their training processes and the powerful Transformer architecture enabling near-human performance on various tasks.

The presentation traces the evolution of agents from human-operated systems to sophisticated AI agents, highlighting AI’s growing role in traditionally human tasks. It examines emerging AI-enabled interfaces that revolutionize human-computer interaction, making technology more intuitive and accessible. Practical applications of AI, including robots trained via video recognition and advanced self-driving car systems, are showcased to emphasize AI’s tangible benefits in real-world scenarios. Participants will gain valuable insights and practical knowledge applicable across various fields and industries.
Michel Nass
Ph.D. in Software Engineering
Experienced Test Specialist with a demonstrated history of working in the computer software industry. Skilled in Coaching, Test Automation, Test Management, Software Testing, and Java. Strong professional with a Master of Science (M.Sc.) in Computer Science from Chalmers and a Ph.D. in Software Engineering from BTH.
Panel member
Linda van de Vooren
Test Consultant / Quality Coach / Speaker at Bartosz ICT
Panel member
Bart Knaack
IT Consultant at Professional Testing
Bart Knaack has been active in IT for 30 years, 25 of which have been in the field of testing and quality assurance. You might encounter him as a speaker at international conferences, as well as regional ones, since knowledge and expertise can be shared anywhere. Currently, he is a CI/CD coach, helping teams set up pipelines for building, (automated) testing, and deployment.
His interest in AI is reflected in the workshops he gives on testing AI as well as using AI to write programs.
Panel member
Huib Schoots
Principal Quality Consultant at Sogeti