
Understanding AI Risks

A deep dive into AI security threats and systemic risks


Topics

01

AI Agency and Autonomy

Exploring goal-directed behavior and autonomous decision-making in AI systems

Intermediate
02

The Control Problem

Understanding how to maintain control over advanced AI systems

Intermediate
03

Data Poisoning

How malicious data can corrupt AI systems

⏱️ 20 minutes · Beginner
04

The Impenetrability Problem

Challenges in understanding and inspecting advanced AI systems

Intermediate
05

AI Situational Awareness

When AI systems understand their environment, context, and impact

Intermediate
06

AI & Computer Security

The intersection of AI and traditional computer security

⏱️ 30 minutes · Intermediate
07

AI Risk Assessment

Learn to identify and evaluate risks in AI systems

⏱️ 2 hours · Intermediate

Created By

Veylan Solmira

AI Safety Researcher & Educator

✉️ veylan@example.com · 💼 LinkedIn · 🐙 GitHub

About This Project

The AI Safety Research Compiler is a comprehensive curriculum designed to systematically develop AI safety research capabilities. It features dual learning modes, hands-on experiments, and philosophical explorations.

This project represents original work in AI safety education, including case studies, interactive notebooks, and philosophical essays.

Learn more about the project

© 2025 Veylan Solmira. All rights reserved.

Built with Next.js, TypeScript, and a commitment to AI safety