
The art of securing LLM

50h certification on LLM security: 25h of online theory + 25h of hands-on workshops in Paris and Athens. Build AI guardrails via a competitive ELO system based on attack scenarios. For cybersecurity engineers and data scientists.

Provided by: datacraft

Date and application:

There are currently no dates scheduled.


Course Description

This 50-hour advanced workshop, a collaboration between datacraft and AIandMe, equips professionals to tackle critical security challenges in large language models (LLMs) through competitive, real-world simulations. Combining EU regulatory frameworks (e.g., GDPR, AI Act, Cyber Resilience Act) with automated red-teaming, it targets AI engineers, cybersecurity experts, and compliance officers. The course also introduces standards such as the OWASP Top 10 risk lists for LLM applications and agentic AI.

Objectives:

Gain deep understanding of AI system vulnerabilities, adversarial prompting techniques, and protection methods.
Develop practical skills to design and implement effective security guardrails.
Raise awareness of ethical and compliance issues in AI security, including GenAI standards (e.g., OWASP).

Knowledge and skills acquired:

Identify key vulnerabilities in AI models.
Understand and apply adversarial attack and defence techniques.
Assess model robustness against attacks.
Understand ethical and regulatory challenges (AI Act, GDPR, Cyber Resilience Act).
Become familiar with OWASP security standards.

Course structure:

Theoretical Phase (25h, online, self-paced):

Basics of LLMs and vulnerabilities, international standards, multi-layered security approaches, AI governance and ethics.
Practical Workshop (25h, in-person in Paris & Athens):

Identify and counter common attacks, deploy countermeasures, secure a real-world case, present results and recommendations.

Teaching and assessment methods:

Combination of theoretical presentations, practical demos, interactive exercises, and a final challenge.
Evaluation based on quizzes, active participation, quality of implemented safeguards, and final challenge performance.
Ranking via an ELO scoring system rewarding resistance to rare and common attacks.

Key features:

Hands-on securing of a medical AI model (MED-GPT): configuring guardrails, benchmarking, deployment, continuous monitoring, and evaluation using a harmful question dataset.
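
As an illustration of the guardrail work described above, the following is a minimal sketch of a pattern-based input filter and its evaluation against a harmful-question set. The blocked patterns, dataset, and metrics are illustrative assumptions only, not the course's actual MED-GPT configuration:

```python
import re

# Minimal sketch of a pattern-based input guardrail and its evaluation.
# The patterns and metric names are illustrative assumptions, not the
# course's actual MED-GPT setup.

BLOCKED_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
    re.compile(r"lethal dose", re.IGNORECASE),
]

def guardrail_allows(prompt: str) -> bool:
    """Return True if the prompt passes the guardrail."""
    return not any(p.search(prompt) for p in BLOCKED_PATTERNS)

def evaluate(harmful: list[str], benign: list[str]) -> dict[str, float]:
    """Block rate on harmful prompts, false-positive rate on benign ones."""
    blocked = sum(not guardrail_allows(q) for q in harmful)
    false_pos = sum(not guardrail_allows(q) for q in benign)
    return {
        "block_rate": blocked / len(harmful),
        "false_positive_rate": false_pos / len(benign),
    }
```

Continuous monitoring then amounts to re-running this evaluation as the harmful-question dataset and the attack patterns evolve, watching for drops in block rate or rises in false positives.
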
Dual leaderboard system:

Public leaderboard tracks real-time circumvention attempts.
Private leaderboard ranks participants by ELO score, weighted by the rarity of the exploited vulnerability.
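
A rarity-weighted Elo update of the kind described above might look like the following sketch. The constants, the definition of rarity, and the weight cap are assumptions, not the course's actual scoring rules:

```python
# Hypothetical sketch of a rarity-weighted Elo update for a private
# leaderboard: defenders gain more rating for resisting rare attacks.
# All constants and the weighting scheme are illustrative assumptions.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Standard Elo expected score of player A against player B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update_rating(defender: float, attacker: float, defender_won: bool,
                  attack_rarity: float, k: float = 32.0) -> float:
    """Return the defender's new rating after one attack exchange.

    attack_rarity in (0, 1]: fraction of guardrails in the pool that the
    attack bypasses; a smaller value means a rarer, harder attack and a
    larger rating swing.
    """
    weight = min(1.0 / attack_rarity, 4.0)  # cap the boost for very rare attacks
    expected = expected_score(defender, attacker)
    actual = 1.0 if defender_won else 0.0
    return defender + k * weight * (actual - expected)
```

Under these assumed constants, resisting an attack that only a tenth of the pool's guardrails stop moves the rating four times as much as resisting a common one, matching the idea of rewarding resistance to rare attacks.
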

Note: Venue and catering are not included for the practical sessions.

Course details

Venue

Multiple venues

Deep tech fields

Artificial Intelligence & Machine Learning (including Big Data)
Cybersecurity & Data Protection

Country

France, Greece

Course language

English

Fee

Free course

Duration (hours)

50

Certificate provided

Yes

Skills addressed

Artificial intelligence security;
Adversarial prompting techniques;
Automated red-teaming

Course format

Hybrid

Target group

Professional development learners, Lifelong learners

Quality check

Approved

Dates

Currently no dates scheduled

Course provider

datacraft

Datacraft is a leading organization committed to bridging the deep tech skills gap across Europe by empowering individuals and organizations through specialized training, mentoring, and networking opportunities.

Apply now

Ready to take the next step in your journey? Apply now and embark on a transformative learning experience. Whether you’re pursuing a passion or advancing your career, we’re here to help you succeed. Don’t wait any longer – seize the opportunity and apply today!

Apply to course

Partners