
Foundation

⏱️ 3 months

Get your hands dirty with AI safety fundamentals


Modules

Module 1

Understanding AI Risks

Deep dive into AI security threats and systemic risks

📚 7 topics · ⏱️ 2 weeks

Module 2

AI Safety Policy & Ethics Primer

Introduction to governance, ethics, and policy approaches in AI safety

📚 5 topics · ⏱️ 2 weeks

Module 3

AI Safety: Why It Matters

Understand the landscape of AI risks and your role in addressing them

📚 5 topics · ⏱️ 1 week

Module 4

Mathematical & Technical Foundations

Essential mathematics and programming for AI safety research

📚 7 topics · ⏱️ 8 weeks

Module 5

Machine Learning Fundamentals

Core ML concepts with safety considerations

📚 4 topics · ⏱️ 6 weeks

Module 6

Essential ML for Safety

Just enough ML to be dangerous (in a good way)

📚 3 topics · ⏱️ 3 weeks

Module 7

Practical AI Safety Basics

Hands-on introduction to finding and fixing AI vulnerabilities

📚 9 topics · ⏱️ 2 weeks

Created By

Veylan Solmira

AI Safety Researcher & Educator

✉️ veylan@example.com · 💼 LinkedIn · 🐙 GitHub

About This Project

The AI Safety Research Compiler is a curriculum designed to systematically build AI safety research skills. It features dual learning modes, hands-on experiments, and philosophical explorations.

This project represents original work in AI safety education, including case studies, interactive notebooks, and philosophical essays.

Learn more about the project →

© 2025 Veylan Solmira. All rights reserved.

Built with Next.js, TypeScript, and a commitment to AI safety