In less than a decade, artificial intelligence (AI) went from beating the world’s best Go players to drafting legal briefs, diagnosing diseases, and generating photorealistic films on command. Quantum computing is now drawing billion-euro investments from governments betting it will crack problems today’s supercomputers cannot touch, while cyber threats have grown more intelligent, more targeted, and more systemic, pushing the world to rethink what trust even means in a digital society.
These are no longer siloed disciplines. As the lines between them blur, the technologies that define the next decade will not only reshape industries but rewrite the rules of computation, security, and control.
What gets built, and how, will influence everything from who wins the global tech race to whether we can truly rely on this ‘digital’ life.
Artificial Intelligence
AI went mainstream in 2023 with the release of ChatGPT. Since then, there has been an explosion of new AI tools, and in 2025, AI agents are the buzzword. AI is no longer niche; it is being woven into every aspect of modern society.
Generative AI (GenAI) models like GPT, Claude, Stable Diffusion, and Midjourney have already made a significant impact on creative and analytical work, from marketing copy to coding, and even drug discovery and protein modelling. This is set to continue over the coming years as GenAI moves beyond static tools into co-creative systems, becoming a core part of how ideas are explored and executed, with McKinsey identifying “63 GenAI use cases spanning 16 business functions that could deliver total value in the range of $2.6 trillion to $4.4 trillion in economic benefits annually when applied across industries.”
AI is not limited to business; it is also becoming part of our everyday lives, embedded in the things we do and use and the environments we live in. Autonomous, or driverless, vehicles are pushing towards Level 4 and 5 autonomy in urban environments, aiming to remove the need for human oversight entirely. And the drive to implement ‘smart’ technology is not limited to cars: homes, offices, and public spaces that adapt to human behaviour in real time are already redefining our expectations, while personalised digital AI assistants are close to becoming the norm.
This rapid spread of AI through society raises deep ethical and governance challenges, with data bias, algorithmic opacity, and the potential for misuse in surveillance or manipulation being key issues. The EU’s recently implemented AI Act is the world’s first comprehensive legal framework for AI. It aims to ensure AI is used in ways that are safe and transparent, categorising AI systems by risk and setting strict rules for how each type must be developed and deployed.
As the world moves towards an AI future, one of the most contentious impacts will be on the workforce. The World Economic Forum’s ‘Future of Jobs Report 2025’ suggests a potential net growth of 78 million jobs by 2030, but the shift will not be painless, or uniform. Many industries will see dramatic changes in job design, which will mean reskilling and continuous learning for many workers to become and stay employable.
Quantum Computing
Quantum computing is a ground-breaking technology that uses the principles of quantum mechanics to process information in ways that are fundamentally different from, and far beyond the capabilities of, ‘classical’ computers. According to IBM, quantum systems could eventually solve tasks in seconds that would take classical supercomputers thousands of years.
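To give a flavour of how differently quantum machines handle information, the minimal sketch below, which assumes the open-source Qiskit library purely for illustration, puts two qubits into an entangled ‘Bell state’: until measured, the pair simultaneously represents both 00 and 11, something no register of classical bits can do.

```python
# Minimal sketch of quantum superposition and entanglement (assumes Qiskit).
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(2)
qc.h(0)      # put qubit 0 into an equal superposition of 0 and 1
qc.cx(0, 1)  # entangle qubit 1 with qubit 0

state = Statevector.from_instruction(qc)
# Only '00' or '11' can ever be observed, each with probability ~0.5.
print(state.probabilities_dict())
```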
And it is not just about raw speed: the potential applications for quantum computing are huge across many industries, and the EIT Deep Tech Talent Initiative’s Pledgers’ Share & Connect in March focused on the work some of our Pledgers are carrying out in this arena.
In cybersecurity, however, quantum poses both a threat and a promise: a sufficiently powerful quantum computer could break today’s widely used public-key encryption, but quantum also enables new forms of quantum-safe cryptography. The EU has issued a roadmap and timeline for adopting post-quantum cryptography (PQC), with the protection of critical infrastructures to be transitioned to PQC by the end of 2030 at the latest.
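To make that transition more concrete, the sketch below shows how a post-quantum key encapsulation mechanism (KEM) establishes a shared secret between two parties. It assumes the open-source liboqs-python bindings and that an ML-KEM (Kyber) variant is enabled in the local liboqs build; it is an illustrative sketch, not a reference implementation.

```python
# Illustrative sketch of post-quantum key establishment with a KEM.
# Assumes the liboqs-python bindings; available algorithm names depend on
# the local liboqs build ("ML-KEM-512" is the NIST-standardised Kyber variant).
import oqs

ALGORITHM = "ML-KEM-512"  # assumed to be enabled in this build

with oqs.KeyEncapsulation(ALGORITHM) as client, oqs.KeyEncapsulation(ALGORITHM) as server:
    # The client generates a post-quantum key pair and publishes the public key.
    public_key = client.generate_keypair()

    # The server encapsulates a fresh shared secret against that public key.
    ciphertext, server_secret = server.encap_secret(public_key)

    # The client decapsulates the ciphertext to recover the same secret.
    client_secret = client.decap_secret(ciphertext)

    assert client_secret == server_secret
    print(f"Established a {len(client_secret)}-byte quantum-safe shared secret")
```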
In the race for quantum supremacy, the Quantum Flagship is one of the European Commission’s most ambitious long-term research and innovation initiatives, supporting the work of hundreds of quantum researchers over 10 years. It consists of a coherent set of research and innovation projects selected through a thorough peer-review process. The goal is to consolidate and expand European scientific leadership and excellence in this research area, to kick-start a competitive European industry in quantum technologies, and to make Europe a dynamic and attractive region for innovative research, business, and investment in this field.
Cybersecurity
Between 2015 and 2025, global cybercrime damage costs grew by approximately 250%, and as we live more and more of our lives online, we are all ‘at risk’ of an attack. This means cybersecurity is no longer just a concern for corporate IT departments; it is a key focus for economic stability, personal and public safety, and geopolitical resilience.
In recent years, as AI has increasingly been used as a weapon, in deepfake fraud and self-mutating malware, for example, AI and machine learning have also become essential tools for cybersecurity teams. By detecting and analysing patterns in real time and automating responses, these tools can help reduce incident response times from days to minutes, which is vital as threats grow. The EIT Deep Tech Talent Initiative’s Pledgers’ Share & Connect back in February focused on the work some of our Pledgers are carrying out in this arena.
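As a simplified illustration of that kind of real-time pattern detection, the sketch below uses scikit-learn’s IsolationForest on a handful of made-up traffic features; real security tooling relies on far richer data and models, so treat this as a toy example rather than a blueprint.

```python
# Toy sketch of ML-based anomaly detection on network traffic features.
# Assumes scikit-learn and NumPy; features and values are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline traffic: [bytes sent, packets per second, distinct ports contacted]
normal_traffic = rng.normal(loc=[500, 30, 3], scale=[150, 10, 1], size=(1000, 3))

# Learn what 'normal' looks like for this environment.
detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_traffic)

# New observations arriving in (near) real time.
new_connections = np.array([
    [520, 28, 3],        # looks like routine traffic
    [50_000, 900, 250],  # possible exfiltration or port scan
])

# predict() returns 1 for inliers and -1 for anomalies.
for features, label in zip(new_connections, detector.predict(new_connections)):
    print(features, "-> ALERT: anomalous" if label == -1 else "-> normal")
```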
However, the future of cybersecurity is not about simply stopping intrusions or eliminating risks; it is about building adaptive, intelligent systems that are private and resilient by design and, more importantly, that people trust. The European Digital Identity (EUDI) Regulation is set to revolutionise digital identity in the EU by enabling the creation of a universal, trustworthy, and secure European digital identity wallet, due to be launched in 2026. Aligned with existing cybersecurity legislation and requirements, the wallet will give citizens the power to choose which aspects of their identity and data they share with third parties, ensuring privacy and control over personal information.
As regulators and users demand better safeguards for sensitive information, privacy-enhancing technologies (PETs) are gaining traction. There are many types of PETs, such as homomorphic encryption, which enables computations on encrypted data without decrypting it first, and trusted execution environments, which provide a secure, isolated area within a computer system for executing sensitive code or operations. But all of them share a similar aim: to act as a safeguard, ensuring that personal information remains private even when it is used for data collaboration and analysis.
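To give a flavour of what ‘computing on encrypted data’ can look like in practice, the minimal sketch below uses the open-source python-paillier (phe) library, chosen here purely for illustration, to average values that stay encrypted throughout: the party doing the arithmetic never sees the plaintext.

```python
# Minimal sketch of (partially) homomorphic encryption with Paillier.
# Assumes the python-paillier package (pip install phe); purely illustrative.
from phe import paillier

# The data owner generates a key pair and encrypts sensitive values.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)
salaries = [42_000, 55_500, 61_250]
encrypted_salaries = [public_key.encrypt(s) for s in salaries]

# An untrusted analyst can add and scale the ciphertexts without ever
# decrypting them - only encrypted intermediate results are handled.
encrypted_total = sum(encrypted_salaries[1:], encrypted_salaries[0])
encrypted_average = encrypted_total * (1 / len(salaries))

# Only the key holder can decrypt the final result.
print("Average salary:", private_key.decrypt(encrypted_average))
```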
EIT Deep Tech Talent Initiative Course Catalogue
The EIT Deep Tech Talent Initiative has assembled a catalogue of over 200 courses and training programmes (as of July 2025), of which 88 (39%) are dedicated to Artificial Intelligence & Machine Learning (including big data), 54 (24%) to Quantum Technologies, and 35 (15.5%) to cybersecurity and data protection.
These courses offer a great opportunity for European talent to skill, upskill, or reskill in these in-demand sectors.