
Practical AI Safety Basics

Hands-on introduction to finding and fixing AI vulnerabilities


Topics

01

Build Your First Safety Tool

Create a simple AI output validator

⏱️ 1 hour · Beginner
02

Building AI Safety Research Artifacts

Learn to package and present AI safety research for maximum impact and visibility

⏱️ 1 hour · Beginner
03

Red Teaming Fundamentals

Learn to think like an attacker to build better defenses

⏱️ 4 hours · Beginner
04

Basic Interpretability

Peek inside AI models to understand their behavior

⏱️ 5 hours · Intermediate
05

Prompt Injection Attacks

Understand and defend against prompt injection

⏱️ 20 minutes · Beginner
06

Jailbreak Techniques

Learn about AI jailbreaking methods and defenses

⏱️ 20 minutes · Beginner
07

Safety Evaluation Methods

Build your first safety benchmark

⏱️ 6 hours · Intermediate
08

AI in Education Safety

Safety considerations for AI deployment in educational settings

⏱️ 4–6 hours · Advanced
09

AI on Consumer Hardware

Safety considerations for AI deployment on consumer devices

⏱️ 4–6 hours · Advanced

Created By

Veylan Solmira

AI Safety Researcher & Educator

✉️ veylan@example.com · 💼 LinkedIn · 🐙 GitHub

About This Project

The AI Safety Research Compiler is a comprehensive curriculum designed to build AI safety research skills systematically. It features dual learning modes, hands-on experiments, and philosophical explorations.

This project represents original work in AI safety education, including case studies, interactive notebooks, and philosophical essays.

Learn more about the project →

© 2025 Veylan Solmira. All rights reserved.

Built with Next.js, TypeScript, and a commitment to AI safety