
Why AI Literacy Matters in 2025

In 2025, being AI literate is quickly becoming as important as reading, writing, and even basic digital skills. However, it is not just about asking better questions in ChatGPT or using smart assistants to save time; it is about understanding how these systems work, what powers them, and how they impact society at every level.

AI literacy requires more than the ability to use the technology; it demands critical thinking.

And the stakes are high. As AI becomes more embedded in every aspect of our lives, from education and healthcare to hiring and government services, a lack of public understanding leads to serious risks. The World Economic Forum now classifies AI literacy as a civic skill, essential for participating in democratic processes; without it, people can be left vulnerable to misinformation, biased systems, or decisions made by opaque algorithms.

Whether it is over-relying on AI outputs or blindly trusting systems that have not been ethically tested, the consequences are personal and societal.

But bridging this knowledge gap is about more than digital upskilling; it is about giving individuals the tools they need to think critically, protect their rights, and take part in shaping how AI is used.

The EU AI Act and its role in promoting AI awareness

The EU AI Act (AIA), formally adopted in 2024, is the world’s first major legislation aimed at regulating artificial intelligence. It sets harmonised rules for the development, placement on the market and use of AI systems in the European Union, to ensure AI is used in ways that are safe and transparent.

The AIA prohibits eight practices:

  • Harmful AI-based manipulation and deception
  • Harmful AI-based exploitation of vulnerabilities
  • Social scoring
  • Individual criminal offence risk assessment or prediction
  • Untargeted scraping of the internet or CCTV material to create or expand facial recognition databases
  • Emotion recognition in workplaces and education institutions
  • Biometric categorisation to deduce certain protected characteristics
  • Real-time remote biometric identification for law enforcement purposes in publicly accessible spaces

It categorises AI systems into four risk levels: unacceptable, high, limited, and minimal. It sets strict rules for how each type must be developed and deployed, and is already having a ripple effect well beyond the tech sector.

But its impact goes further than regulation. By raising public understanding of how AI works and where it is being used, the Act is helping users become more informed and critical consumers of AI technologies. Its practical requirements are also pushing AI literacy to the forefront: companies deploying high-risk AI systems, like those used in recruitment or education, must now ensure their staff are adequately trained to understand how AI systems function, recognise their limitations, and be aware of potential biases.

In other words, AI literacy is no longer just a technical skill, but a compliance issue!

What is Article 4 of the AIA?

Article 4 requires organisations, both those building AI systems (providers) and those putting them to use (deployers), to make sure everyone involved understands how AI works, including its risks and impacts.

By AI literacy, the law means that people have enough knowledge and awareness to deploy AI wisely. This includes understanding where AI can help or harm, and knowing what legal rights and responsibilities are at play. The obligation covers all employees, contractors, service providers and other third parties dealing with AI on behalf of an organisation.

There’s no strict training checklist set by Brussels, but organisations must cover key areas if they want to comply:

  • A basic understanding of AI: what it is and how it works, including its benefits and dangers.
  • Clarity about the organisation’s role: are you the AI creator, or just using someone else’s tool?
  • Awareness of the risks tied to the AI systems in question, especially if they are classed as ‘high-risk’.
  • Tailored programmes that reflect users’ backgrounds, levels of technical knowledge and the real-world context of their work.

Although formal testing isn’t mandatory, organisations are expected to consider how well their people understand AI and to build appropriate training or guidance. For higher-risk systems, extra measures may be needed under Article 26, which obliges deployers to ensure that the staff who handle AI systems in practice are sufficiently trained to operate them and to maintain human oversight.

The Article 4 rule came into force on 2 February 2025, but its enforcement doesn’t start until 3 August 2026. This gives organisations time to set up internal records, such as training logs, guidance materials or other proof, though not necessarily formal certificates.

Compliance with Article 4 is about matching an organisation’s AI literacy efforts to who its people are, what AI it is using, and how it might affect others. A flexible, risk-based approach is all that’s needed, but it must be real.
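For organisations wondering what such internal records could look like, here is a minimal, purely illustrative sketch in Python of a training log entry. The Act prescribes no format, and every field name here is an assumption; the point is simply that records should capture who was trained, on which system, at what risk level, and when.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record structure: the AI Act does not prescribe any format.
@dataclass
class TrainingRecord:
    employee: str        # who received the AI literacy training
    ai_system: str       # which AI system the training covered
    risk_level: str      # e.g. "high", "limited", "minimal" (the Act's risk tiers)
    topics: list[str]    # e.g. capabilities, limitations, known biases
    completed_on: date   # when the training took place

# Example entry for a deployer of a hypothetical high-risk recruitment tool.
training_log = [
    TrainingRecord(
        employee="J. Doe",
        ai_system="CV screening assistant",
        risk_level="high",
        topics=[
            "how the system ranks candidates",
            "known limitations and bias risks",
            "when to escalate to human review",
        ],
        completed_on=date(2025, 9, 1),
    ),
]
```

A simple structure like this, kept up to date, is the kind of proof a flexible, risk-based approach can produce without formal certification.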

How the EIT Deep Tech Talent Initiative is helping to improve AI literacy

The EIT Deep Tech Talent Initiative’s mission is to skill, reskill, and upskill the European workforce to help bridge the talent gap and build a strong pool of deep tech talent.

As of June 2025, it has assembled a catalogue of over 200 courses and training programmes, 37% of which are dedicated to Artificial Intelligence and Machine Learning, offering European professionals a great opportunity to improve their AI literacy.

For learners who want to benefit from the EIT Deep Tech Talent training in a personalised way, the EIT Deep Tech Talent Community is an online space that facilitates the path to career growth and fosters expertise in deep tech, both for adult learners in companies and on the job market and for pupils and higher education students with an interest in deep tech.

By creating an account on the platform, learners who want to upgrade their skills in deep tech can:

  • Build their own profile based on their preferred courses and deep tech areas
  • Get a personalised offer from the platform’s AI-based Course Matching Tool
  • Be visible to our partnering organisations and companies (Pledgers)

Additionally, the Tech Radar, developed in collaboration with the Initiative’s Pledgers, is a dynamic digital tool built around 35 technologies relevant to deep tech, including Artificial Intelligence and Machine Learning (with big data), that visualises emerging technologies and the organisations active in them. Alongside this information, users will find a list of relevant courses available through the Initiative’s course catalogue, along with over 60 use cases that anchor these technologies in real-world solutions.

At the end of the day, becoming AI literate is one of the smartest investments you can make in yourself. The goal is learning how to ask the right questions so you can become a more aware and responsible participant in an AI-driven world.
