A professional collection of AI/ML fundamentals, ranging from first-principles implementations to advanced applied projects.
Explore Projects • Key Features • Connect
- 📖 About
- ✅ Key Features
- 🚀 Project Showcase
- 🖼️ Showcase Gallery
- 🛠️ Technologies
- 📂 Repository Structure
- 🗺️ Roadmap / Future Work
- 🏁 Getting Started
- 📚 Learning Resources
- 🤝 Connect With Me
This repository serves as a comprehensive laboratory for my journey through the foundations of Artificial Intelligence. Driven by a "Back to Basics" and "First Principles" philosophy, this project documents my transition from academic theory (Stanford/MIT curriculum) to high-fidelity implementation.
Whether it's deriving the gradients for Logistic Regression or architecting a Deep Convolutional GAN, every line of code here is built with rigor, documentation, and a focus on visual performance analytics.
- ✅ From-scratch implementations: Neural networks and ML algorithms built using only NumPy.
- ✅ LaTeX math explanations: Detailed derivations for cost functions, loss functions, and optimization algorithms.
- ✅ Clean documented code: High-quality, professional code with comprehensive comments.
- ✅ Jupyter visualizations: Real-time plotting of training curves, decision boundaries, and model outputs.
- ✅ Real-world projects: Applied computer vision (YOLOv8) and generative models (GANs).
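As a taste of the from-scratch approach, here is a minimal sketch (illustrative, not the repository's actual code) of logistic regression trained with batch gradient descent in plain NumPy — the gradient of the mean cross-entropy loss is `X.T @ (p - y) / n`:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_regression(X, y, lr=0.1, epochs=500):
    """Batch gradient descent on the mean cross-entropy loss."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)        # predicted probabilities
        grad_w = X.T @ (p - y) / n    # dL/dw
        grad_b = np.mean(p - y)       # dL/db
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy separable data: label is 1 exactly when the first feature is positive.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(float)
w, b = train_logistic_regression(X, y)
acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
```

On this toy data the learned weight on the first feature dominates, so the decision boundary lands near `x0 = 0`.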
Focuses on classical algorithms, statistical learning theory, and supervised/unsupervised learning fundamentals.
- Includes: Linear/Logistic Regression, K-Means, PCA, Decision Trees.
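To give a flavor of the classical-ML module, here is a minimal sketch (names and data are my own, not the repository's) of K-Means via Lloyd's algorithm — alternating nearest-centroid assignment with centroid updates:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: alternate assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared Euclidean distance).
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Two well-separated Gaussian blobs around 0 and 5.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
centroids, labels = kmeans(X, k=2)
```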
Deep dives into multi-layer perceptrons, convolutional neural networks (CNNs), and optimization techniques.
- Includes: Backpropagation, CNN Architectures, Regularization, Batch Norm.
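As an example of what the deep-learning module covers, here is a minimal sketch (my own naming, not the repository's code) of the batch-normalization forward pass: normalize each feature over the batch, then scale and shift with learnable parameters:

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    """Batch norm over axis 0.

    x: (batch, features); gamma, beta: learnable (features,) parameters.
    """
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)   # zero mean, unit variance per feature
    return gamma * x_hat + beta

# With gamma = 1 and beta = 0 the output is simply the normalized activations.
rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(64, 8))
out = batchnorm_forward(x, gamma=np.ones(8), beta=np.zeros(8))
```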
Exploration of sequence modeling and language understanding in AI.
- Includes: Word Embeddings (Word2Vec), RNNs, LSTMs, and an introduction to Transformers.
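The core operation behind the Transformer introduction is scaled dot-product attention, `softmax(QK^T / sqrt(d_k)) V`. A minimal NumPy sketch (illustrative shapes and names, not the repository's code):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)    # (n_queries, n_keys) similarity scores
    weights = softmax(scores, axis=-1) # each query's weights sum to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 queries, dimension 8
K = rng.normal(size=(6, 8))   # 6 keys
V = rng.normal(size=(6, 8))   # 6 values
out, weights = attention(Q, K, V)
```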
Implementing agents that learn from interaction with their environment.
- Includes: Q-Learning, SARSA, and Deep Q-Networks (DQN).
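The heart of tabular Q-learning is the update `Q[s,a] += α(r + γ max_a' Q[s',a'] - Q[s,a])`. Here is a minimal sketch (the chain environment is my own toy example, not one from this repository) of an ε-greedy agent learning to walk right along a 4-state chain to a reward of 1:

```python
import numpy as np

# Tiny 4-state chain MDP: start at state 0, terminal reward 1 at state 3.
n_states, n_actions = 4, 2          # actions: 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def step(s, a):
    s_next = max(s - 1, 0) if a == 0 else min(s + 1, n_states - 1)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward, s_next == n_states - 1

for _ in range(500):                # episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next, r, done = step(s, a)
        # Q-learning update: bootstrap from the greedy value of the next state.
        target = r + gamma * Q[s_next].max() * (not done)
        Q[s, a] += alpha * (target - Q[s, a])
        s = s_next
```

After training, the greedy policy is "right" in every non-terminal state, and the values decay geometrically with the discount: `Q[2,1] ≈ 1`, `Q[1,1] ≈ 0.9`, `Q[0,1] ≈ 0.81`.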
This is where the theory meets the real world. This section highlights high-fidelity implementations of state-of-the-art architectures.
Complete inference and training pipeline for YOLOv8, including hardware acceleration (CUDA) and performance analytics.
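Detection performance analytics ultimately rest on Intersection-over-Union between predicted and ground-truth boxes. A minimal sketch (my own helper, not the pipeline's actual code) for boxes in `(x1, y1, x2, y2)` format:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)   # zero if boxes are disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Two 10x10 boxes overlapping in a 5x5 patch: IoU = 25 / 175.
score = iou((0, 0, 10, 10), (5, 5, 15, 15))
```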
Implementation of DCGAN for synthetic image generation, exploring adversarial loss and transposed convolutions.
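A transposed convolution upsamples with output size `(in - 1)·stride - 2·padding + kernel` (ignoring output padding). A small helper (my own naming, shown as an illustration) verifying the classic DCGAN generator's 1 → 4 → 8 → 16 → 32 → 64 resolution pipeline:

```python
def conv_transpose_out(size, kernel, stride, padding):
    """Spatial output size of a transposed convolution (no output_padding)."""
    return (size - 1) * stride - 2 * padding + kernel

# Classic DCGAN generator: project the latent vector to 4x4, then double
# the resolution four times with kernel 4, stride 2, padding 1.
size = conv_transpose_out(1, kernel=4, stride=1, padding=0)   # 1 -> 4
for _ in range(4):
    size = conv_transpose_out(size, kernel=4, stride=2, padding=1)
# size is now 64
```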
> [!NOTE]
> Caption: Real-time object detection with YOLOv8 - detecting people, vehicles, and objects.

> [!NOTE]
> Caption: Fashion-MNIST - real vs. synthetic images generated by the GAN.
```
ai-foundations-lab/
├── 01-machine-learning/         # Classical ML algorithms
├── 02-deep-learning/            # Neural Networks & CNNs
├── 03-nlp/                      # Natural Language Processing
├── 04-reinforcement-learning/   # RL Agents & Environments
├── 05-applied-projects/         # SOTA Architectures (YOLO, GANs)
│   ├── Generative Adversarial Networks/
│   └── Object-Detection-YOLO/
├── assets/                      # Repository images & media
└── requirements.txt             # Project dependencies
```
- Vision Transformers (ViT): Implementation and comparison with CNNs.
- More RL environments: Training agents on Mujoco/Gymnasium environments.
- LLM fine-tuning: Experiments with LoRA and QLoRA on open-source LLMs.
- Web app deployments: Serving models via FastAPI and Docker.
- Clone the repository:

  ```shell
  git clone https://github.com/DaviBonetto/ai-foundations-lab.git
  cd ai-foundations-lab
  ```

- Set up the environment:

  ```shell
  python -m venv .venv
  source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  pip install -r requirements.txt
  ```

- Explore the notebooks: Launch Jupyter and dive into any module!
This project is inspired by and follows the academic rigor of graduate-level computer science courses:
Stanford University:
- CS229: Machine Learning - Supervised learning, unsupervised learning, learning theory
- CS230: Deep Learning - Neural networks, CNNs, RNNs, optimization techniques
- CS224N: Natural Language Processing with Deep Learning - Word embeddings, transformers, attention mechanisms
- CS234: Reinforcement Learning - MDPs, Q-learning, policy gradients, deep RL
- CS221: Artificial Intelligence: Principles and Techniques - Search, reasoning, learning
- CS336: Language Models - Transformer architecture, LLM foundations
Additional Resources:
- MIT 6.S191: Introduction to Deep Learning
- Research papers and textbooks by leading AI researchers
Note: This is a self-directed learning project using publicly available course materials. No formal enrollment or academic credit.