Within the ELSA Lab, innovations are explored through concrete use cases that serve as real‑world contexts for studying and shaping responsible AI. These use cases enable the lab to integrate ethical, legal, and societal aspects (ELSA) throughout the entire lifecycle of AI development and deployment. By working closely with technology developers, users, companies, policymakers, and societal actors, the ELSA Lab examines how AI applications function in practice and how values such as transparency, fairness, accountability, and sustainability can be embedded in their design and use. In this way, the use cases act not only as testbeds for AI innovation, but also as learning environments that generate actionable insights for responsible, context‑specific AI implementation.
Each use case undergoes an ELSA Scan and one or more Quadruple Helix (QH) workshops over the course of the ELSA lab cycle. In the ELSA Scan, ELSA aspects are first identified together with the AI developers. Next, additional stakeholders are involved in a workshop, guided by the ELSA Impact tool, to uncover new ELSA aspects and suggestions for improvement. These can inform both a redesign of the technology and changes to the context of the use case, such as legal and organisational adjustments.