diff --git a/2025/git/01_Git_and_Github_Basics/solution.md b/2025/git/01_Git_and_Github_Basics/solution.md new file mode 100644 index 0000000000..201f724dd3 --- /dev/null +++ b/2025/git/01_Git_and_Github_Basics/solution.md @@ -0,0 +1,26 @@ +Task 1: Fork and Clone the Repository +git clone +cd 2025/git/01_Git_and_Github_Basics +mkdir week-4-challenge +cd week-4-challenge +git init +vim info.txt +git add info.txt +git commit -m "Initial commit: Add info.txt with introductory content" +Task 3: Configure Remote URL with PAT and Push/Pull +git remote set-url origin https://@github.com/apurva-kri/90DaysOfDevOps.git +git push -u origin main +git pull origin main +Task 4: Explore Your Commit History +git log +Task 5: Advanced Branching and Switching +git branch feature-update +git switch feature-update +git add info.txt +git commit -m "Feature update: Enhance info.txt with additional details" +git push origin feature-update + +Branching strategies are essential in collaborative development because they help teams work together efficiently and safely. By isolating features and bug fixes in separate branches, developers can work without affecting the stability of the main codebase. This separation also enables parallel development, allowing multiple team members to work on different tasks simultaneously without interfering with each other’s progress. + +Well-defined branching practices help reduce merge conflicts by keeping changes organized and focused within their own branches. Finally, they make code reviews more effective, as reviewers can easily understand the purpose of a branch, evaluate changes in isolation, and ensure quality before merging into the main branch. 
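The feature-branch workflow described above can be rehearsed end-to-end in a disposable repository. This is a minimal sketch: the repo name, file content, commit messages, and demo identity are illustrative, not part of the original submission.

```shell
#!/bin/sh
# Sketch of the isolate-then-merge branching workflow in a throwaway repo.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo-repo
cd demo-repo
git config user.email "demo@example.com"   # local identity for the throwaway repo only
git config user.name "Demo User"
echo "initial content" > info.txt
git add info.txt
git commit -qm "Initial commit: Add info.txt"
git branch feature-update        # create an isolated branch for the change
git switch -q feature-update     # move onto it; main stays untouched
echo "additional details" >> info.txt
git commit -qam "Feature update: Enhance info.txt"
git switch -q -                  # back to the original branch
git merge -q feature-update      # fast-forward merge; no conflict possible here
git log --oneline                # both commits now on the main line
```

Because the feature work happened on its own branch, the original branch stayed stable until the merge — the property the paragraph above argues for.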
+ diff --git a/2025/git/01_Git_and_Github_Basics/week-4-challenge b/2025/git/01_Git_and_Github_Basics/week-4-challenge new file mode 160000 index 0000000000..72f79fb8b1 --- /dev/null +++ b/2025/git/01_Git_and_Github_Basics/week-4-challenge @@ -0,0 +1 @@ +Subproject commit 72f79fb8b11275f7412e2cc8b295a0a14256a362 diff --git a/2025/networking/Solution/Task1.md b/2025/networking/Solution/Task1.md new file mode 100644 index 0000000000..9955b8db41 --- /dev/null +++ b/2025/networking/Solution/Task1.md @@ -0,0 +1 @@ +this is my submission for week one challenge \ No newline at end of file diff --git a/2025/networking/Solution/task2.md b/2025/networking/Solution/task2.md new file mode 100644 index 0000000000..8d8d7adf01 --- /dev/null +++ b/2025/networking/Solution/task2.md @@ -0,0 +1,2 @@ +1 comment + diff --git a/2026/day-02/task-02/linux-architecture-notes.md b/2026/day-02/task-02/linux-architecture-notes.md new file mode 100644 index 0000000000..829476e8be --- /dev/null +++ b/2026/day-02/task-02/linux-architecture-notes.md @@ -0,0 +1,57 @@ +# Day 02 – Linux Architecture, Processes, and systemd +# Core Components of Linux + +## Kernel + +- Core of the OS; talks directly to hardware +- Manages CPU, memory, disk, devices +- Handles process scheduling and system calls + +## User Space + +- Where users and applications run +- Includes shell (bash), utilities, libraries +- Programs request resources from the kernel + +## Init / systemd (PID 1) + +- First process started by the kernel +- Initializes the system after boot +- Starts and manages background services + +# How Processes Are Created & Managed +## Process Creation + +- fork() → Creates a copy of the parent process +- exec() → Replaces process memory with a new program +- Each process gets a unique PID + +## Process States (Important for Troubleshooting) + +- Running (R) → Actively using CPU +- Sleeping (S) → Waiting for event/input (most common state) +- Stopped (T) → Paused (e.g., via kill -STOP) +- Zombie (Z) → 
Finished execution but parent hasn’t collected status +- Idle → Kernel process doing nothing +- Uninterruptible Sleep (D) → Waiting on I/O (disk/network), cannot be interrupted + +The kernel scheduler manages CPU time and priorities to ensure multitasking works efficiently. + +## What systemd does and why it matters +# What it does: +- Starts services at boot (SSH, networking, Docker, etc.) +- Manages and monitors services +- Restarts failed services automatically +- Handles logging (journald) +- Controls targets (runlevels such as multi-user, graphical) +# Why it matters +- Faster boot with parallel startup +- Centralized service management +- Better reliability and monitoring +- Essential for DevOps & server administration +## 5 Linux commands I'd use daily +- ps aux - View running processes +- top/htop - Monitor CPU & memory usage +- systemctl - Manage services (start, stop, restart, status, enable, disable, etc.) +- journalctl - Check logs +- kill - Stop or signal a process \ No newline at end of file diff --git a/2026/day-03/task-03/linux-commands-cheatsheet.md b/2026/day-03/task-03/linux-commands-cheatsheet.md new file mode 100644 index 0000000000..3d3f095491 --- /dev/null +++ b/2026/day-03/task-03/linux-commands-cheatsheet.md @@ -0,0 +1,25 @@ +## 🐧 Linux Command Cheat Sheet +# 1. Process Management +- ps - Display currently running processes +- ps aux - Show all processes with detailed info +- top - Real-time system process monitoring +- htop - Interactive process viewer +- kill - Terminate a process by PID +- kill -9 - Force kill a process +- pkill - Kill a process by name + +# 📁 2. File System Commands +- ls -l - List files with detailed permissions +- ls -a - Show hidden files +- cd - Change directory +- pwd - Show current directory +- rm -r - Remove a directory recursively +- cat - Display file content +- chmod 755 - Change file permissions + +# 🌐 3.
Networking & Troubleshooting +- ping - Check network connectivity +- curl - Fetch data from a URL (API testing) +- wget - Download a file from the internet +- dig - DNS lookup for a domain +- ip addr - Show IP address configuration \ No newline at end of file diff --git a/2026/day-04/task-04/linux-practice.md b/2026/day-04/task-04/linux-practice.md new file mode 100644 index 0000000000..aae8b6a4dc --- /dev/null +++ b/2026/day-04/task-04/linux-practice.md @@ -0,0 +1,37 @@ +## Day 04 – Linux Practice: Processes and Services +# Service Management (systemd) +- systemctl status ssh - Check the current status of the SSH service (running, stopped, failed, logs) +- systemctl status sshd - Check the status of the SSH daemon (used in some distributions like CentOS/RHEL) +- systemctl is-enabled ssh - Verify whether the SSH service is enabled to start automatically at boot +- systemctl cat ssh - Display the complete unit file configuration of the SSH service +- systemctl list-units - List all currently active units loaded in memory by systemd +- systemctl list-units --type=service - List only active service-type units (filters out other unit types like mount, socket, etc.) +- systemctl list-units --failed - Display all failed units to quickly identify services that crashed or failed to start +# Process Monitoring (ps) +- ps - Display processes running in the current terminal session +- ps aux - Show all running processes with detailed information (user, CPU, memory, etc.)
+- ps aux | head - Show the first 10 lines of the detailed process list (header plus nine processes) +- ps aux | head -n 5 - Display only the first 5 lines of the full process list +- ps aux | tail -n 5 - Display the last 5 processes from the full process list +- ps aux | grep ssh - Filter and show only processes related to SSH +- ps aux | grep ssh | grep -v grep - Show SSH-related processes while excluding the grep command itself +- ps aux --sort=-%cpu - Display all processes sorted by highest CPU usage first +- ps aux --sort=-%mem - Display all processes sorted by highest memory usage first +- ps aux --sort=-%cpu | head -n 5 - Show the top 5 processes consuming the most CPU +- ps aux --sort=-%mem | head -n 5 - Show the top 5 processes consuming the most memory +- ps aux --sort=-%cpu | grep ssh | head -n 5 - Display the top 5 SSH-related processes sorted by CPU usage +- ps -C sshd - Show processes that match the exact command name sshd +- ps -C sshd --sort=-%cpu | head -n 5 - Display the top 5 sshd processes sorted by CPU usage +# Log Inspection (tail, journalctl) +- tail -n 5 file.txt - Display the last 5 lines of a file +- tail -n 50 filename.log - Display the last 50 lines of a log file (useful for reviewing recent activity) +- tail -f filename.log - Monitor a file in real time as new lines are added +- tail -n 5 -f filename.log - Show the last 5 lines and continue monitoring the file live +- tail -n 5 /var/log/auth.log - Display the last 5 SSH authentication log entries (Ubuntu/Debian systems) +- tail -f /var/log/auth.log - Monitor SSH authentication logs in real time +- journalctl -u ssh - Display all logs related to the SSH service (systemd-based systems) +- journalctl -u ssh -n 5 - Show the last 5 log entries for the SSH service +- journalctl -u ssh -f - Monitor SSH service logs live using the systemd journal + + diff --git a/2026/day-05/task-05/linux-troubleshooting-runbook.md b/2026/day-05/task-05/linux-troubleshooting-runbook.md
new file mode 100644 index 0000000000..9261b44131 --- /dev/null +++ b/2026/day-05/task-05/linux-troubleshooting-runbook.md @@ -0,0 +1,110 @@ +# Day 05 – Linux Troubleshooting Drill: CPU, Memory, and Logs +Target Service : SSH +## Environment basics +- `uname -a` -> Tell me everything about this system. uname is unix name and -a indicates all information. +``` +Linux ip-172-31-44-23 6.14.0-1018-aws #18~24.04.1-Ubuntu SMP Mon Nov 24 19:46:27 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux +``` +- `lsb_release -a` -> Tell me the Linux distribution details +``` +Distributor ID: Ubuntu +Description: Ubuntu 24.04.1 LTS +Release: 24.04 +Codename: noble +``` +# Filesystem sanity +- `mkdir /tmp/runbook-demo` +- `cp /etc/hosts /tmp/runbook-demo/hosts-copy` +- `ls -l /tmp/runbook-demo` - Filesystem writable. Copy successful. Permissions intact. +``` +total 4 +-rw-r--r-- 1 ubuntu ubuntu 221 Feb 8 22:08 hosts-copy +``` +# CPU / Memory +- `ps -o pid,pcpu,comm -p 30607` -Process-level view +``` + PID %CPU COMMAND + 30607 0.1 sshd + ``` + - `free -h` -Memory overview + ``` + total used free shared buff/cache available + Mem: 914Mi 383Mi 129Mi 2.8Mi 569Mi 530Mi + Swap: 0B 0B 0B + ``` +# Disk / IO + - `df -h` + ``` + Filesystem Size Used Avail Use% Mounted on + /dev/root 6.8G 2.9G 3.9G 42% / + tmpfs 458M 0 458M 0% /dev/shm + tmpfs 183M 928K 182M 1% /run + tmpfs 5.0M 0 5.0M 0% /run/lock + efivarfs 128K 3.8K 120K 4% /sys/firmware/efi/efivars + /dev/nvme0n1p16 881M 151M 669M 19% /boot + /dev/nvme0n1p15 105M 6.2M 99M 6% /boot/efi + tmpfs 92M 12K 92M 1% /run/user/1000 + ``` + - `du -sh /var/log` - log directory size + ``` + 178M /var/log -Total visible log size ≈ 178 MB /Disk usage normal + ``` + # Network + -`ss -tlunp` - “Show me all TCP & UDP ports that are currently listening, with numeric IPs and the process name.” + - ss = socket statistics + - t → TCP + - l → Listening sockets only + - u → UDP + - n → Show numbers (don’t resolve names) + - p → Show process using the port + ``` + Netid 
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process + udp UNCONN 0 0 127.0.0.1:323 0.0.0.0:* + udp UNCONN 0 0 127.0.0.54:53 0.0.0.0:* + ``` + # ping - Can I reach this host over the network? It sends small network packets (ICMP Echo Requests) and waits for replies. + - `ping google.com` + ``` + --- www.google.com ping statistics --- + 5 packets transmitted, 5 received, 0% packet loss, time 4005ms + ``` + # Logs + - `journalctl -u ssh -n 2` - Modern Linux systems (like this Ubuntu 24.04 instance) use systemd, which stores service logs in a centralized logging system called the journal. This shows the last 2 log entries. + ``` + Feb 08 22:33:53 ip-172-31-44-23 sshd[30508]: Accepted publickey for ubuntu from 157.49.9.58 port 51173 ssh2: RSA> + Feb 08 22:33:53 ip-172-31-44-23 sshd[30508]: pam_unix(sshd:session): session opened for user ubuntu(uid=1000) by> + - `tail -n 50 /var/log/auth.log | grep ssh` - Take the last 50 lines of the log file and filter them for ssh entries. + - `grep ssh /var/log/auth.log | tail -n 5` - Find all ssh log lines first, then show the last 5 of those.
+ ``` + 2026-02-08T21:24:51.274485+00:00 ip-172-31-44-23 sshd[30011]: Accepted publickey for ubuntu from 157.49.28.21 port 64832 ssh2: RSA SHA256:HBvpRqJvL+gVCvTfZNpc5T6VWAfFukD48PFoOIj5s08 + 2026-02-08T21:24:51.276370+00:00 ip-172-31-44-23 sshd[30011]: pam_unix(sshd:session): session opened for user ubuntu(uid=1000) by ubuntu(uid=0) + ``` + # If This Worsens (Next Steps) + If SSH becomes unstable (high CPU, refusing connections, hanging sessions): + - `sudo systemctl restart ssh` + If the issue persists: + - `sudo systemctl status ssh` + - `journalctl -u ssh -n 100` + If repeated failures: + Check port conflicts (ss -tlunp) + Confirm firewall rules (sudo ufw status) + Validate no disk-full issues (df -h) + # Increase Log Verbosity (Temporary) - If logs are unclear or insufficient: + - Edit the SSH config - `sudo nano /etc/ssh/sshd_config` + - Change or add: `LogLevel VERBOSE` + - Then restart SSH: `sudo systemctl restart ssh` + - This provides: detailed authentication logs, connection debugging info, more granular failure reasons + # Deep Process & System Analysis - If CPU/memory spikes or SSH freezes: + - Check live process behavior: `top` + - If disk-related symptoms appear: - `df -h` + + + + + + + + + + diff --git a/2026/day-06/task-06/file-io-practice.md b/2026/day-06/task-06/file-io-practice.md new file mode 100644 index 0000000000..1303016b72 --- /dev/null +++ b/2026/day-06/task-06/file-io-practice.md @@ -0,0 +1,50 @@ + +# Practice Basic File Read/Write + +## 1. Creating a File and Writing Text to a File +`touch notes.txt` + +`echo "Learning basic linux file operations." > notes.txt` + +## 2. Appending new lines + +`echo "Using touch to create files" >> notes.txt` { append a new line to the file } + +`echo "Using echo to write content" >> notes.txt` + +`echo "understanding overwrite with >" >> notes.txt` + +`echo "Appending new lines using >>" | tee -a notes.txt` { -a = append; **tee** writes to the file and also displays the text } + +## 3.
Reading a Full File and Parts of a File +`cat notes.txt` + +**Output Snippet:** +``` +Learning basic Linux file operations +Using touch to create files +Using echo to write content +Understanding overwrite with > +Appending new lines using >> +Reading files using cat +Viewing partial content with head +Checking last lines with tail +Using tee to write and display +``` + +`head -n 3 notes.txt` {head -n 3 = print the first 3 lines} + +**Output Snippet:** +``` +Learning basic Linux file operations +Using touch to create files +Using echo to write content +``` +`tail -n 3 notes.txt` {tail -n 3 = print the last 3 lines} + +**Output Snippet:** +``` +Viewing partial content with head +Checking last lines with tail +Using tee to write and display +``` \ No newline at end of file diff --git a/2026/day-07/task-07/day-07-linux-fs-and-scenarios.md b/2026/day-07/task-07/day-07-linux-fs-and-scenarios.md new file mode 100644 index 0000000000..573a6252a9 --- /dev/null +++ b/2026/day-07/task-07/day-07-linux-fs-and-scenarios.md @@ -0,0 +1,84 @@ +# 🗂 Linux File System Hierarchy – Practice Notes +## 🔹 Core Directories (Must Know) +- / (root) - The root directory is the top-level directory in Linux. Everything starts from here. Command: `ls -l /`. I would use this when I need to understand the overall filesystem structure or navigate to system-level directories. Eg: bin, etc, home +- /home - Contains personal directories for normal users. `ls -l /home`, I would use this when I need to access or manage user files and personal data. Eg: ubuntu +- /root - Home directory for the root (administrator) user. `ls -l /root` I would use this when I am logged in as root and need access to root-specific configuration or scripts. Eg: .bashrc, .profile +- /etc - Stores system-wide configuration files. `ls -l /etc` I would use this when I need to modify service configurations (e.g., SSH, networking, users). Eg: ssh/, passwd, hosts +- /var/log - Contains system and application log files. Very important for troubleshooting.
Very important for troubleshooting. `ls -l /var/log` I would use this when I am investigating system errors, service failures, or login issues. Eg: auth.log, syslog, journal/ +- /tmp - Stores temporary files created by users and applications. `ls -l /tmp` I would use this when I need a safe place to test file operations without affecting production data. Eg: runbook-demo, systemd-private-* +## 🔹 Additional Directories (Good to Know) +- /bin - Contains essential system command binaries required for booting and basic operations. `ls -l /bin` I would use this when I want to verify where core Linux commands are located. Eg: ls, cp, mv +- /usr/bin - Contains most user-level command binaries and applications. `ls -l /usr/bin` I would use this when I need to check if a program is installed on the system. Eg: git, python3, vim +- /opt - Used for installing optional or third-party software. `ls -l /opt` I would use this when I install custom software like Jenkins, Docker packages, or vendor applications. Eg: may be empty or contain application folders +# Hands-on task +`du -sh /var/log/* 2>/dev/null | sort -h | tail -5` - Show me the 5 largest log files in /var/log. +- du -sh /var/log/* +- du → Disk usage +- -s → Summary (don’t go inside subfolders deeply) +- -h → Human readable (MB, GB instead of bytes) +- /var/log/* → All files/folders inside /var/log +- 2>/dev/null - This hides permission errors + - 2> = redirect error output + - /dev/null = throw it away +- sort -h - Sort in human-readable size order; now logs are sorted from smallest → largest. +- tail -5 - Show the last 5 lines; since the list is sorted smallest → largest, the last 5 are the biggest +- OUTPUT +``` +4.0K /var/log/boot.log +12M /var/log/syslog +85M /var/log/journal +``` +# Part 2: Scenario-Based Practice +- Question: How do you check if the 'nginx' service is running? +- Step-by-step solution +- Step 1: Check service status `systemctl status nginx` - Why this command?
It shows if the service is active, failed, or stopped +- Step 2: If the service is not found, list all services - `systemctl list-units --type=service` Why this command? To see what services exist on the system +- Step 3: Check if the service is enabled on boot `systemctl is-enabled nginx` - Why this command? To know if it will start automatically after reboot +- What I learned - Always check status first, then investigate based on what you see. + +# Scenario 1: Service Not Starting +``` +A web application service called 'myapp' failed to start after a server reboot. +What commands would you run to diagnose the issue? +Write at least 4 commands in order. +``` +- Step 1: `systemctl status myapp` - To check whether the service is running, failed, inactive, or stuck, and see the immediate error message. +- Step 2: `journalctl -u myapp -n 50` - To view the last 50 log entries for the service and identify the exact failure reason (config error, port conflict, permission issue, etc.). +- Step 3: `systemctl is-enabled myapp` - To check if the service is configured to automatically start on boot. +- Step 4: `systemctl list-units --type=service | grep myapp` - To confirm the service is properly registered with systemd and recognized by the system. + +# Scenario 2: High CPU Usage +``` +Your manager reports that the application server is slow. +You SSH into the server. What commands would you run to identify +which process is using high CPU? +``` +- Step 1: `top` - To view live CPU usage and quickly identify which process is consuming the most CPU in real time. +- Step 2: `ps aux --sort=-%cpu | head -10` To list processes sorted by highest CPU usage and display the top 10 consumers. +- Step 3: `ps -fp ` To get detailed information about the specific process, including parent process and command path. +- Step 4: `top -p ` To monitor only that specific process and confirm whether CPU usage remains high.
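The high-CPU triage steps above can be chained into one short sketch that captures the top CPU consumer's PID and then inspects it. This assumes a GNU/Linux system with procps `ps` (the `--sort` option is not POSIX).

```shell
#!/bin/sh
# Grab the PID of the single highest-CPU process, then inspect it in detail.
set -e
top_pid=$(ps aux --sort=-%cpu | awk 'NR==2 {print $2}')  # row 1 is the ps header
echo "Top CPU consumer: PID $top_pid"
# Full details: owner, parent PID, start time, full command line.
# The process may exit between the two samples, so tolerate that race.
ps -fp "$top_pid" || echo "process $top_pid exited between samples"
```

Scripting the lookup this way removes the manual copy-paste of the PID between `ps aux --sort=-%cpu` and `ps -fp` described in the steps.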
+ # Scenario 3: Script Not Executing (Permission denied) +- `ls -l /home/user/backup.sh` - Check current permissions +- ``` -rw-r--r-- 1 user user 245 Feb 10 21:30 backup.sh ``` - no execute (x) permission, so the script cannot run +- Add execute permission - `chmod +x /home/user/backup.sh` - Adds execute permission for owner, group, and others (subject to umask). Now the system allows the file to be executed as a program. +- Verify It Worked - `ls -l /home/user/backup.sh` - +```-rwxr-xr-x 1 user user 245 Feb 10 21:30 backup.sh +``` +- The owner can read, write, and execute; group and others can read and execute +- Run the script - `./backup.sh` - Now it executes. +# 🎯 Core Concept +- To run a file directly with ./filename: +- It must have x permission +- It must be in your current directory (or use the full path) +- It must have a valid interpreter if it’s a script + +# Scenario 4: Finding Logs for the docker Service +- Since docker is managed by systemd, its logs are stored in journald. +- Step 1: Check Service Status - `systemctl status docker` - To confirm whether the service is running, failed, or inactive and see recent log lines directly in the status output. +- Step 2: View Recent Logs - `journalctl -u docker -n 50` - To view the last 50 log entries for the docker service and identify errors, crashes, or startup issues. +- Step 3: Follow Logs in Real-Time - `journalctl -u docker -f` - To monitor live logs while restarting Docker or reproducing the issue (similar to tail -f).
+ - Step 4: Optional but powerful: filter logs within a specific time window when the issue occurred - `journalctl -u docker --since "10 minutes ago"` + + + diff --git a/2026/day-08/task-08/day-08-cloud-deployment.md b/2026/day-08/task-08/day-08-cloud-deployment.md new file mode 100644 index 0000000000..5b2709a00b --- /dev/null +++ b/2026/day-08/task-08/day-08-cloud-deployment.md @@ -0,0 +1,35 @@ +# Day 08 – Cloud Server Setup: Docker, Nginx & Web Deployment + ## Install Docker & Nginx + - Update system - `sudo apt update && sudo apt upgrade -y` + - Install Nginx - `sudo apt install nginx -y` + - Check Nginx status - `systemctl status nginx` + - Check Nginx logs - `journalctl -u nginx -n 50` + - Security Group Configuration - Opened port 80 (HTTP) in the Security Group; allowed inbound traffic from 0.0.0.0/0 for testing. + - Test Web Access - http://your-instance-ip - Successfully saw the Nginx Welcome Page + ![Welcome to Nginx](image.png) + - Extract Nginx Logs - `ls -l /var/log/nginx` , `cat /var/log/nginx/access.log` + - Create the file & save logs to it - `touch nginx-logs.txt` `sudo cp /var/log/nginx/access.log nginx-logs.txt` + - Then fix ownership: `sudo chown ubuntu:ubuntu nginx-logs.txt` + - Confirm the file exists - `ls -l nginx-logs.txt` & exit the server + - On the local machine, download it - `scp -i linux_tasks.pem ubuntu@ec2-51-20-8-125.eu-north-1.compute.amazonaws.com:~/nginx-logs.txt .` + - Check the logs - `cat nginx-logs.txt` - the Nginx access logs are shown here + # ⚠️ Challenges Faced + + - ❌ Tried installing docker instead of docker.io (wrong package name). + - ❌ SCP failed because the file did not exist in the home directory. + - ❌ Permission denied while copying logs from /var/log/nginx/. + - ❌ Initially ran scp from inside EC2 instead of the local machine. + + # ✅ How I Solved Them + - Verified correct package names using apt search. + - Confirmed file existence using ls -l. + - Used sudo cp and chown to fix permissions.
+ - Understood that scp must be run from the local machine to download files. + + # 📚 What I Learned + - The importance of verifying file paths before using scp. + - How Linux file ownership (chown) affects access and downloads. + - Default Nginx log locations (/var/log/nginx/access.log, error.log). + - Difference between running commands locally vs remotely. + - Real-world troubleshooting mindset: verify → fix → test → validate. + diff --git a/2026/day-08/task-08/image.png b/2026/day-08/task-08/image.png new file mode 100644 index 0000000000..d1fc192674 Binary files /dev/null and b/2026/day-08/task-08/image.png differ diff --git a/2026/day-09/task-09/day-09-user-management.md b/2026/day-09/task-09/day-09-user-management.md new file mode 100644 index 0000000000..9268aee3d3 --- /dev/null +++ b/2026/day-09/task-09/day-09-user-management.md @@ -0,0 +1,117 @@ +# day-09-user-management.md +## 🎯 Objective +- Practice Linux user and group management by: +- Creating users and setting passwords +- Creating and assigning groups +- Managing shared directories with proper permissions +- Testing real access control scenarios + +# Task 1: Create Users + ``` +sudo useradd -m tokyo +sudo useradd -m berlin +sudo useradd -m professor +``` +# Set Passwords +``` +sudo passwd tokyo +sudo passwd berlin +sudo passwd professor +``` +# Check home directories: +`ls -l /home/` +# 👥 Task 2: Create Groups +``` +sudo groupadd developers +sudo groupadd admins +``` +# ✅ Verify Groups +``` +cat /etc/group | grep developers +cat /etc/group | grep admins + +``` +# Task 3: Assign Users to Groups +``` +sudo usermod -aG developers tokyo +sudo usermod -aG developers,admins berlin +sudo usermod -aG admins professor +``` +# ✅ Verify Group Membership +``` +groups tokyo +groups berlin +groups professor +``` +# 📁 Task 4: Shared Directory Setup +`sudo mkdir /opt/dev-project` - Create Directory +`sudo chown root:developers /opt/dev-project` - Set Group Owner +`sudo chmod 775 /opt/dev-project` -
Set Permissions (775) +- Test as tokyo +``` +su - tokyo +touch /opt/dev-project/tokyo.txt +exit +``` +- Test as berlin +``` +su - berlin +touch /opt/dev-project/berlin.txt +exit +``` +- Verify +``` +ls -ld /opt/dev-project +ls -l /opt/dev-project +``` +# 👨‍👩‍👧 Task 5: Team Workspace +- Create User +``` +sudo useradd -m nairobi +sudo passwd nairobi +``` +- Create group +``` +sudo groupadd project-team +``` +- Add users to group +``` +sudo usermod -aG project-team nairobi +sudo usermod -aG project-team tokyo +``` +- Create workspace directory +``` +sudo mkdir /opt/team-workspace +sudo chown root:project-team /opt/team-workspace +sudo chmod 775 /opt/team-workspace +``` +- test as nairobi +``` +su - nairobi +touch /opt/team-workspace/nairobi.txt +exit +``` +- Verify +``` +ls -ld /opt/team-workspace +ls -l /opt/team-workspace +``` +# 📚 Key Learnings + +- Difference between user ownership and group ownership +- Importance of chmod and chown in access control +- How usermod -aG appends users to groups +- How directory permissions control file creation +- Real-world structure for team-based access management + +# 🏆 Summary + +- Day 09 strengthened my understanding of: + +✔ Linux access control +✔ User & group administration +✔ Shared workspace design +✔ Permission troubleshooting + +Hands-on practice made the concepts much clearer than theory alone. 
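The shared-directory pattern from Tasks 4–5 can be verified without root by reading the mode bits back with `stat`. This sketch uses a scratch path rather than /opt, and adds the setgid bit (2775) — a common refinement, not part of the original task — so files created inside automatically inherit the directory's group. GNU `stat -c` is assumed.

```shell
#!/bin/sh
# Demonstrate shared-workspace permissions on a scratch directory.
# 2775 = setgid + rwxrwxr-x: group members can create files, and new
# files inherit the directory's group instead of the creator's primary group.
set -e
ws=$(mktemp -d)/team-workspace
mkdir -p "$ws"
chmod 2775 "$ws"
stat -c '%a %U:%G' "$ws"   # prints the octal mode plus owner:group
touch "$ws/demo.txt"       # any group member could do the same thanks to g+w
ls -l "$ws"
```

In a real multi-user setup (as in the tasks), `chown root:project-team` plus mode 2775 keeps every file in the workspace group-owned by project-team without relying on each user's umask.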
+ diff --git a/2026/day-10/task-10/day-10-file-permissions.md b/2026/day-10/task-10/day-10-file-permissions.md new file mode 100644 index 0000000000..d30124eb88 --- /dev/null +++ b/2026/day-10/task-10/day-10-file-permissions.md @@ -0,0 +1,44 @@ +# Day 10 – File Permissions & File Operations Challenge +## Challenge Tasks +# Task 1: Create Files +- Create an empty file - `touch devops.txt` +- Create notes.txt with some content using cat or echo - `echo "This is my devops notes file" > notes.txt` +- Create script.sh using vim with content: echo "Hello DevOps" - `vim script.sh` +- Check permissions - `ls -l` +# Task 2: Read Files +- Read notes.txt using cat - `cat notes.txt` +- View script.sh in vim read-only mode - `vim -R script.sh` and exit by pressing Esc, then typing :q and Enter +- Display the first 5 lines of /etc/passwd using head - `head -n 5 /etc/passwd` +- Display the last 5 lines of /etc/passwd using tail - `tail -n 5 /etc/passwd` +# Task 3: Understand & Modify Permissions +# 1 : Make script.sh executable → run it with ./script.sh +- Step 1: First check the permissions with `ls -l script.sh` > -rw-rw-r-- 1 ubuntu ubuntu 20 Feb 13 11:27 script.sh +- Step 2: It doesn't have execute permission to run as a script, so grant it - `chmod 775 script.sh` +- Now check the permissions - `ls -l script.sh` > -rwxrwxr-x 1 ubuntu ubuntu 20 Feb 13 11:27 script.sh +- Now run it as a script - `./script.sh` > Hello DevOps, which prints because the file now has execute permission.
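The execute-permission fix above can be reproduced end-to-end in a scratch directory. A minimal sketch; the "Hello DevOps" script mirrors the task, and the failing first run is shown deliberately.

```shell
#!/bin/sh
# Reproduce Task 3: a script is not runnable until it has the x bit.
set -e
cd "$(mktemp -d)"
printf 'echo "Hello DevOps"\n' > script.sh
# First attempt: no execute bit yet, so the kernel refuses to run it.
./script.sh 2>/dev/null && echo "unexpected" || echo "Permission denied (as expected)"
chmod +x script.sh   # grant execute (a+x, subject to umask)
./script.sh          # now runs and prints the greeting
```

Note the script has no shebang line; when `execve` fails with ENOEXEC, POSIX shells fall back to interpreting the file with `sh`, which is why `./script.sh` still works here.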
+# 2 : Set devops.txt to read-only (remove write for all) +- Set devops.txt to read-only (remove write for all) - `chmod 444 devops.txt` > -r--r--r-- 1 ubuntu ubuntu 0 Feb 13 11:24 devops.txt OR we can give execute and read only permission > `chmod 555 devops.txt` > -r-xr-xr-x 1 ubuntu ubuntu 0 Feb 13 11:24 devops.txt +- Set notes.txt to 640 (owner: rw, group: r, others: none) > `chmod 640 notes.txt` > -rw-r----- 1 ubuntu ubuntu 29 Feb 13 11:25 notes.txt +- Create directory project/ with permissions 755 +Steps: +- `mkdir project` +- `ls -ld project` - to check the permission of the directory - drwxrwxr-x 2 ubuntu ubuntu 4096 Feb 13 11:47 project +- `chmod 755 project` - permission set to directory - drwxr-xr-x 2 ubuntu ubuntu 4096 Feb 13 11:47 project +# Task 4: Test Permissions +- Try writing to a read-only file - what happens? - You cannot write to a read-only file because it does not have write (w) permission, so the system blocks any modification attempts. +- Try executing a file without execute permission - You cannot execute a file without execute (x) permission because the system does not allow it to run as a program. +- Document the error messages - +# Issue: +- I initially got confused between the passwd command and the /etc/passwd file. + +# Clarification: +- passwd is a command used to set or change user passwords. +- /etc/passwd is a system file that stores user account information (username, UID, home directory, shell), not actual passwords. + +# Key Learning: +- Similar names in Linux can represent different things — one is a command, the other is a configuration file. 
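The octal modes used in Task 3 above can be double-checked mechanically instead of by eyeballing `ls -l`. A sketch on scratch files; GNU `stat -c` is assumed.

```shell
#!/bin/sh
# Verify the Task 3 permission changes by reading back the octal mode.
set -e
cd "$(mktemp -d)"
touch devops.txt notes.txt
chmod 444 devops.txt    # r--r--r--  read-only for everyone
chmod 640 notes.txt     # rw-r-----  owner rw, group r, others none
stat -c '%a %n' devops.txt notes.txt
# 444 devops.txt
# 640 notes.txt
```

`stat -c '%a'` prints exactly the octal value passed to `chmod`, which makes it handy in scripts that assert permissions rather than describe them.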
+ + + + + diff --git a/2026/day-11/task-11/day-11-file-ownership.md b/2026/day-11/task-11/day-11-file-ownership.md new file mode 100644 index 0000000000..6f1fe030b6 --- /dev/null +++ b/2026/day-11/task-11/day-11-file-ownership.md @@ -0,0 +1,48 @@ +# Day 11 – File Ownership Challenge (chown & chgrp) + +## Files & Directories Created +- devops-file.txt +- team-notes.txt +- project-config.yaml +- app-logs/ +- heist-project/ +- heist-project/vault/ +- heist-project/plans/ +- heist-project/vault/gold.txt +- heist-project/vault/strategy.conf +- bank-heist/ +- bank-heist/access-codes.txt +- bank-heist/blueprints.pdf +- bank-heist/escape-plan.txt + + +## Ownership Changes +### Task 2: Basic chown Operations +**Before** + +![Before](image-2.png) + +**after** + +![After](image-1.png) + + +## Commands Used +- `touch devops-file.txt` +- `sudo chown tokyo devops-file.txt` (changing user of **file**) +- `sudo groupadd heist-team` +- `sudo chown ubuntu:heist-team team-notes.txt` (changing group of **file**) +- `touch project-config.yaml` +- `sudo chown professor:heist-team project-config.yaml` (changing both **user** and **group** of **file**) +- `mkdir app-logs` +- `sudo chown berlin:heist-team app-logs` (changing both **user** and **group** of **directory**) +- `mkdir -p heist-project/vault` +- `touch heist-project/vault/gold.txt` +- `sudo chown -R professor:planners heist-project` (changing both **user** and **group** **-R recursively**) +- `sudo chown tokyo:vault-team bank-heist/access-codes.txt` (changing **user** and **group** by given path of file) + +## What I Learned +- Difference between file owner and group +- How chown and chgrp control access in Linux +- Importance of recursive ownership in real projects + diff --git a/2026/day-11/task-11/image-1.png b/2026/day-11/task-11/image-1.png new file mode 100644 index 0000000000..0946eb9a80 Binary files /dev/null and b/2026/day-11/task-11/image-1.png differ diff --git a/2026/day-11/task-11/image-2.png 
b/2026/day-11/task-11/image-2.png new file mode 100644 index 0000000000..1d9eca11bb Binary files /dev/null and b/2026/day-11/task-11/image-2.png differ diff --git a/2026/day-11/task-11/image.png b/2026/day-11/task-11/image.png new file mode 100644 index 0000000000..af2f11b485 Binary files /dev/null and b/2026/day-11/task-11/image.png differ diff --git a/2026/day-13/task-13/day-13-lvm.md b/2026/day-13/task-13/day-13-lvm.md new file mode 100644 index 0000000000..f4bf1c638f --- /dev/null +++ b/2026/day-13/task-13/day-13-lvm.md @@ -0,0 +1,83 @@ +# Day 13 – Linux Volume Management (LVM) +## Commands Used +- `sudo su` / `sudo -i` +- `lsblk` (list all block devices) + +*After attaching volumes (EBS)* + +![volume](image.png) + +--- + +- `pvcreate` / `pvs` + +*After creating a physical volume* + +![pvcreate](image-1.png) + +--- +- `vgcreate` / `vgs` + +*Creating a volume group from two physical volumes, nvme1n1 and nvme2n1* + +![vgcreate](image-2.png) + +--- + +- `lvcreate` / `lvs` + +*Creating a logical volume using the volume group* + +![lvcreate](image-3.png) + +--- + +- `mkfs.ext4 ` +- `mount ` + +*After mounting the logical volume* + +![mount](image-4.png) + +--- + +- `lvextend -L +300M /dev/aws_vg/aws_lv` **OR** `lvresize -L +300M /dev/aws_vg/aws_lv` + +*extend 220 MB
size on the logical volume* + +![extendvolume](image-5.png) + +**The logical volume is now 772MB, but the filesystem size is still 452MB (you must resize the filesystem manually).** + + +![resize](image-7.png) +--- + +- `resize2fs /dev/aws_vg/aws_lv` + +![resize](image-6.png) + + +*verify* + +![resized](image-8.png) +--- + +- `lvresize -r -L +200M /dev/aws_vg/aws_lv` + +`-r` = resize the filesystem along with the logical volume + +![extend](image-9.png) + + +*verify* + +![verify](image-10.png) + +--- + +# What I Learned +LVM Architecture Hierarchy +- Physical Volumes (PV) +- Volume Groups (VG) +- Logical Volumes (LV) \ No newline at end of file diff --git a/2026/day-13/task-13/image-1.png b/2026/day-13/task-13/image-1.png new file mode 100644 index 0000000000..2047a3b4fa Binary files /dev/null and b/2026/day-13/task-13/image-1.png differ diff --git a/2026/day-13/task-13/image-10.png b/2026/day-13/task-13/image-10.png new file mode 100644 index 0000000000..f8c469e34c Binary files /dev/null and b/2026/day-13/task-13/image-10.png differ diff --git a/2026/day-13/task-13/image-2.png b/2026/day-13/task-13/image-2.png new file mode 100644 index 0000000000..679fd639c3 Binary files /dev/null and b/2026/day-13/task-13/image-2.png differ diff --git a/2026/day-13/task-13/image-3.png b/2026/day-13/task-13/image-3.png new file mode 100644 index 0000000000..b6b31f0bd8 Binary files /dev/null and b/2026/day-13/task-13/image-3.png differ diff --git a/2026/day-13/task-13/image-4.png b/2026/day-13/task-13/image-4.png new file mode 100644 index 0000000000..5290c16182 Binary files /dev/null and b/2026/day-13/task-13/image-4.png differ diff --git a/2026/day-13/task-13/image-5.png b/2026/day-13/task-13/image-5.png new file mode 100644 index 0000000000..d76b6ae8f7 Binary files /dev/null and b/2026/day-13/task-13/image-5.png differ diff --git a/2026/day-13/task-13/image-6.png b/2026/day-13/task-13/image-6.png new file mode 100644 index 0000000000..795e2eb384 Binary files /dev/null and b/2026/day-13/task-13/image-6.png differ diff --git
a/2026/day-13/task-13/image-7.png b/2026/day-13/task-13/image-7.png new file mode 100644 index 0000000000..32c3e154e0 Binary files /dev/null and b/2026/day-13/task-13/image-7.png differ diff --git a/2026/day-13/task-13/image-8.png b/2026/day-13/task-13/image-8.png new file mode 100644 index 0000000000..2ae81fa91e Binary files /dev/null and b/2026/day-13/task-13/image-8.png differ diff --git a/2026/day-13/task-13/image-9.png b/2026/day-13/task-13/image-9.png new file mode 100644 index 0000000000..8af3e1134c Binary files /dev/null and b/2026/day-13/task-13/image-9.png differ diff --git a/2026/day-13/task-13/image.png b/2026/day-13/task-13/image.png new file mode 100644 index 0000000000..8b3a948dd9 Binary files /dev/null and b/2026/day-13/task-13/image.png differ diff --git a/2026/day-14/task-14/day-14-networking.md b/2026/day-14/task-14/day-14-networking.md new file mode 100644 index 0000000000..05be723b3f --- /dev/null +++ b/2026/day-14/task-14/day-14-networking.md @@ -0,0 +1,52 @@ +# OSI Model (L1–L7) vs TCP/IP Stack +## OSI Model (7 Layers) +- A conceptual framework with 7 layers (Physical → Application) to understand how data moves through a network. +- Mainly used for learning and troubleshooting. +## TCP/IP Model (4 Layers) +- Practical model used in real-world networking (Link, Internet, Transport, Application). +- Simpler and aligns with how the internet actually works. + +# Where Protocols Sit in the Stack +- IP → Internet Layer (Handles addressing & routing) +- TCP/UDP → Transport Layer (Handles data delivery & ports) +- HTTP/HTTPS → Application Layer (Used for web communication) +- DNS → Application Layer (Resolves domain names to IP addresses) + +# Real Example +## curl https://example.com +- Application Layer → HTTPS +- Transport Layer → TCP +- Internet Layer → IP +- So it works as: Application (HTTPS) → TCP → IP → Network + +# 🌐 Networking Hands-On Checklist +## 1️⃣ Identity +- `hostname -I` - Server IP is 172.31.44.23 (private IP inside cloud network). 
Confirms instance network configuration. +## 2️⃣ Reachability +- `ping google.com` - Average latency ~20–30 ms, 0% packet loss. Indicates stable internet connectivity. `ping` checks whether another device/server is reachable over the network: it asks “Are you there?” and measures how fast the reply comes back. +![ping](image.png) +## 3️⃣ Path Check +- `traceroute google.com` OR `tracepath google.com` - Multiple hops across the ISP backbone. No major timeouts. One hop showed slightly higher latency (~80 ms). It shows the path (hops) your network traffic takes to reach a destination server, i.e. how your data travels across the internet, step by step. +![traceroute](image-1.png) +## 4️⃣ Open Ports +- `ss -tulpn` - Found the SSH service listening on port 22 (sshd). Confirms remote access is active. It shows all TCP and UDP services that are currently listening, and tells which process is using each one. +## 5️⃣ Name Resolution +- `dig google.com` - Domain resolved to IP 142.250.x.x. Confirms DNS resolution is working properly. dig stands for Domain Information Groper. It is used to query DNS servers and check how a domain name resolves to an IP address. In simple words, dig asks: “What is the IP address for this domain?” +## 6️⃣ HTTP Check +- `curl -I https://google.com` - Received HTTP/1.1 200 OK (or a 301 redirect). Confirms application-layer communication works. +## 7️⃣ Connection Snapshot +- `netstat -an | head` - Multiple LISTEN states (port 22). A few ESTABLISHED connections (active SSH session). netstat (network statistics) displays network connections, routing tables, interface statistics, and open ports on a system.
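The `ss`/`netstat` snapshots above can also be post-processed. This is a small sketch (the field position assumes the default `ss -tuln` column layout and IPv4 `addr:port` endpoints) that extracts just the listening port numbers; the snapshot variable is a captured example so the demo is reproducible offline:

```shell
#!/bin/bash
# extract_ports: pull just the local port number out of `ss -tuln`-style
# output. Assumes the default column layout (local address is field 5).
extract_ports() {
  awk 'NR>1 {n=split($5, a, ":"); print a[n]}' | sort -un
}

# Run it against a captured snapshot instead of live `ss` output:
snapshot='Netid State  Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp    LISTEN 0      128    0.0.0.0:22         0.0.0.0:*
tcp    LISTEN 0      511    0.0.0.0:80         0.0.0.0:*'
printf '%s\n' "$snapshot" | extract_ports
```

Piping live `ss -tuln` output through `extract_ports` gives the same result on a real host.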
+ +# 🔎 Mini Task: Port Probe & Interpret +## 🔹 Step 1: Identify a Listening Port +- `ss -tulnp` - SSH running on port 22 +``` +tcp LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=1234)) +``` +## 🔹 Step 2: Test From Same Machine +- `nc -zv localhost 22` - One-line interpretation: Port 22 is reachable; the SSH service is running and accepting connections. +- Here `nc` stands for Netcat, used to check whether a port is open and reachable. `-z` means zero-I/O mode: it checks if the port is open without sending any data; it only tests the connection. `-v` is verbose mode, which shows detailed output; without -v the output would be minimal. +## If port 22 is not reachable, we can check: +- Is the service running? - `systemctl status ssh` +- Is the firewall blocking the port? - `sudo ufw status` +- In short: port 22 is reachable on localhost; if not, I would check service status and firewall rules. `UFW` stands for Uncomplicated Firewall. UFW controls which ports are allowed or blocked on your server. \ No newline at end of file diff --git a/2026/day-14/task-14/image-1.png b/2026/day-14/task-14/image-1.png new file mode 100644 index 0000000000..581893acf7 Binary files /dev/null and b/2026/day-14/task-14/image-1.png differ diff --git a/2026/day-14/task-14/image.png b/2026/day-14/task-14/image.png new file mode 100644 index 0000000000..e268afcda9 Binary files /dev/null and b/2026/day-14/task-14/image.png differ diff --git a/2026/day-15/task-15/day-15-networking-concepts.md b/2026/day-15/task-15/day-15-networking-concepts.md new file mode 100644 index 0000000000..b22fceafcf --- /dev/null +++ b/2026/day-15/task-15/day-15-networking-concepts.md @@ -0,0 +1,119 @@ +# Day 15 – Networking Concepts: DNS, IP, Subnets & Ports +## Task 1: DNS – How Names Become IPs +- What Happens When You Type google.com in a Browser? +- 1️⃣ Your system checks its local DNS cache. +- 2️⃣ If not found, it queries a DNS resolver (usually your ISP or public DNS like 8.8.8.8).
+- 3️⃣ The resolver finds the IP address for google.com. +- 4️⃣ Your browser connects to that IP using TCP (usually port 443 for HTTPS). +- In short: Domain → DNS lookup → IP address → Web connection +## DNS Record Types (One Line Each) +- A → Maps a domain to an IPv4 address. +- AAAA → Maps a domain to an IPv6 address. +- CNAME → Alias record that points one domain to another domain. +- MX → Specifies mail servers for a domain. +- NS → Defines authoritative name servers for a domain. +## Run DNS Lookup +- `dig google.com` - +``` +google.com. 4 IN A 216.58.211.14 +``` +- A Record IP: 216.58.211.14 +- TTL: 4 seconds - TTL (Time To Live) means how long the DNS response can be cached before it must be refreshed. +- DNS translates human-readable domain names into IP addresses so computers can communicate. +# 🌐 Task 2: IP Addressing +## What is an IPv4 Address? +- An IPv4 address is a 32-bit numerical label assigned to a device on a network. +``` +192.168.1.10 +``` +- It is structured as four octets (8 bits each) separated by dots. Each octet ranges from 0 to 255. +## Public vs Private IP Address +- Public IP is accessible over the internet and assigned by your ISP. Eg: 8.8.8.8 - (Google Public DNS) +- Private IP - Used inside local networks. Not directly accessible from the internet. +- Private IP Ranges - These ranges are reserved for internal networks: +- 10.0.0.0 – 10.255.255.255 +- 172.16.0.0 – 172.31.255.255 +- 192.168.0.0 – 192.168.255.255 +- These IPs require NAT (Network Address Translation) to access the internet. +## Identify Your Private IP +- `ip addr show` +``` + inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 + ``` + - Since 172.17.x.x falls within the private range (172.16–172.31), this is a private IP address. + ## ONE LINE SUMMARY + - IPv4 addresses uniquely identify devices on a network, and private IPs are used internally while public IPs are internet-facing. 
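The private ranges listed above can be checked mechanically. A small bash sketch (illustrative only; assumes a well-formed dotted quad and uses a bash-specific here-string):

```shell
#!/bin/bash
# is_private_ip: classify an IPv4 address against the RFC 1918 private
# ranges (10/8, 172.16/12, 192.168/16). Assumes a well-formed dotted quad.
is_private_ip() {
  local o1 o2 o3 o4
  IFS=. read -r o1 o2 o3 o4 <<< "$1"
  if [ "$o1" -eq 10 ]; then
    echo private
  elif [ "$o1" -eq 172 ] && [ "$o2" -ge 16 ] && [ "$o2" -le 31 ]; then
    echo private
  elif [ "$o1" -eq 192 ] && [ "$o2" -eq 168 ]; then
    echo private
  else
    echo public
  fi
}

is_private_ip 172.17.0.1   # the docker0 address above -> private
is_private_ip 8.8.8.8      # Google public DNS -> public
```

This is just the classification logic; a real tool would also validate the input and handle CIDR boundaries bitwise.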
+ + # Task 3: CIDR & Subnetting (Classless Inter-Domain Routing: CIDR is a way to write an IP address together with how big the network is) + ## What does /24 mean in 192.168.1.0/24? + - The /24 means the first 24 bits are for the network and the last 8 bits are for devices (computers, servers, etc.). Since IPv4 has 32 total bits, we do: + ``` + 32 - 24 = 8 bits for devices +``` +- 8 bits means 2⁸ = 256 total IP addresses, but 1 IP is the network address and 1 IP is the broadcast address, so 256 - 2 = 254 usable devices +## 🧮 How Many Devices? +- /24 - 254 usable devices - used in a small office network +- /16 - 65534 usable devices, used in a large organization, a much bigger network +- /28 - 14 usable devices, a very small network used for a small subnet like servers +## Why do we subnet? +- Imagine you have 1 big network with 10,000 devices. That would be: +❌ Hard to manage +❌ Slow (too much broadcast traffic) +❌ Less secure +So we divide it into smaller groups. Think of subnetting like: +🏢 Big building → divided into floors +Each floor → separate group +It keeps things organized and secure. + +# Quick exercise — fill in: +|CIDR|Subnet Mask|Total IPs|Usable hosts| +|---|---|---|---| +|/24|255.255.255.0|256|254| +|/16|255.255.0.0|65,536|65,534| +|/28|255.255.255.240|16|14| + +# Task 4: Ports – The Doors to Services +## What is a Port? +- A port is a numbered endpoint on a computer used to identify a specific service or application. In simple words: the IP address tells you the house, the port tells you the door. Example: same server IP, but different services run on different ports +## Why Do We Need Ports? +- Because one server can run many services at the same time: website, SSH, database, DNS. Ports help the system know which service should receive the data.
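Looking back at Task 3, the host-count arithmetic can be verified with a tiny shell function (a sketch; the /31 and /32 special cases are deliberately ignored):

```shell
#!/bin/bash
# usable_hosts: usable host addresses for a given prefix length, i.e.
# 2^(32 - prefix) minus the network and broadcast addresses.
usable_hosts() {
  echo $(( (1 << (32 - $1)) - 2 ))
}

usable_hosts 24   # -> 254
usable_hosts 16   # -> 65534
usable_hosts 28   # -> 14
```

The three calls reproduce the rows of the quick-exercise table above.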
+ +# 🔹 Common Ports +| Port | Service | +| --- | --- | +| 22 | SSH (Secure Shell) | +| 80 | HTTP (Web traffic) | +| 443 | HTTPS (Secure web traffic) | +| 53 | DNS | +| 3306 | MySQL | +| 6379 | Redis | +| 27017 | MongoDB | + +## 🔍 Check Listening Ports +- `ss -tulpn` +``` +tcp LISTEN 0 128 0.0.0.0:22 +tcp LISTEN 0 128 0.0.0.0:80 +``` +## 🔹 Match Ports to Services +- Example observation: +- Port 22 → SSH (sshd) +- Port 80 → Nginx / Apache (web server) +- If using a cloud VM, you’ll almost always see: 22 -> sshd +- **Summary** - Ports allow multiple services to run on the same machine by assigning each service a unique communication number. + +# 1️⃣ You run curl http://myapp.com:8080 — what networking concepts are involved? +- First, DNS resolves myapp.com to an IP address. +- Then TCP establishes a connection to port 8080 on that IP. +- Finally, HTTP (Application layer) sends the request over TCP over IP. +- In short: DNS → TCP → Port 8080 → HTTP +# 2️⃣ Your app can't reach the database at 10.0.1.50:3306 — what would you check first? +- First, verify network reachability (ping or traceroute). +- Then check if port 3306 (MySQL) is listening using ss -tulpn on the DB server. +- Finally, check firewall rules or security groups blocking the port. + +# 📚 What I Learned (3 Key Points) +- DNS converts domain names into IP addresses, enabling applications to communicate over the network. +- Ports act as entry points for services, allowing multiple applications (SSH, HTTP, MySQL, etc.) to run on the same server. +- Network troubleshooting follows a logical flow: +- Check DNS → Verify reachability → Confirm open ports → Inspect firewall/security rules.
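The layering in question 1 (DNS → TCP → Port 8080 → HTTP) can be made concrete by splitting the URL the way the stack does. This sketch uses plain parameter expansion and handles only the simple `scheme://host:port` form, not full URL syntax:

```shell
#!/bin/bash
# Split the URL from question 1 into the pieces each layer cares about.
url="http://myapp.com:8080"
hostport="${url#*://}"   # strip the scheme   -> myapp.com:8080
host="${hostport%%:*}"   # DNS resolves this  -> myapp.com
port="${hostport##*:}"   # TCP connects here  -> 8080
echo "DNS lookup: $host"
echo "TCP port:   $port"
```

A URL without an explicit `:port` would need a default (80 for http, 443 for https), which this sketch does not handle.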
diff --git a/2026/day-16/task-16/day-16-shell-scripting.md b/2026/day-16/task-16/day-16-shell-scripting.md new file mode 100644 index 0000000000..f467530b37 --- /dev/null +++ b/2026/day-16/task-16/day-16-shell-scripting.md @@ -0,0 +1,86 @@ +## Day 16 – Shell Scripting Basics +# Task 1: Your First Script +- Create a file hello.sh - `vim hello.sh` +- Add the shebang line #!/bin/bash at the top - `#!/bin/bash` +- Print Hello, DevOps! using echo - `echo "Hello, DevOps!"` +- Make it executable and run it - `chmod 755 hello.sh` , `./hello.sh` -> O/P - Hello, DevOps! + +# Task 2: Variables +- Create variables.sh - `vim variables.sh` +- A variable for your NAME - `name="Apurva"` +- A variable for your ROLE (e.g., "DevOps Engineer") - `role="DevOps Engineer"` +- Print: Hello, I am <NAME> and I am a <ROLE> - `echo "I am $name and I am $role"` + +# Task 3: User Input with read +- Create greet.sh - `vim greet.sh` +- Asks the user for their name using read - `read -p "What is your username? " name` +- Asks for their favourite tool - `read -p "What is your favourite tool? " tool` +- Prints: Hello <NAME>, your favourite tool is <TOOL> - `echo "Your username is $name & your favourite tool is $tool"` + +# Task 4: If-Else Conditions +- Create check_number.sh - `vim check_number.sh` +- Takes a number using read - `read -p "Enter a number: " number` +- Prints whether it is positive, negative, or zero - +``` +#!/bin/bash + +read -p "Enter a number: " number + +if ! [[ "$number" =~ ^-?[0-9]+$ ]]; then + echo "Invalid input. Please enter a valid integer." +elif [ "$number" -gt 0 ]; then + echo "Positive number." +elif [ "$number" -lt 0 ]; then + echo "Negative number." +else + echo "Zero." +fi +``` +- Create file_check.sh - `vim file_check.sh` +- Checks if the file exists using -f +``` +#!/bin/bash + +read -p "Enter file name: " file + +if [ -f "$file" ]; then + echo "File exists." +else + echo "File does not exist."
+fi +``` +- Prints appropriate message +![message](image.png) + +# Task 5: Combine It All +- Stores a service name in a variable (e.g., nginx, sshd) - `service="nginx"` +- Asks the user: "Do you want to check the status? (y/n)" - `read -p "Do you want to check the status of $service?(y/n)" choice` +- If y — runs systemctl status and prints whether it's active or not, If n — prints "Skipped." +``` +#!/bin/bash + +# Store service name +service="nginx" + +# Ask user +read -p "Do you want to check the status of $service? (y/n): " choice + +if [ "$choice" = "y" ]; then + # Check if service is active + if systemctl is-active --quiet "$service"; then + echo "$service is ACTIVE." + else + echo "$service is NOT ACTIVE." + fi +elif [ "$choice" = "n" ]; then + echo "Skipped." +else + echo "Invalid input. Please enter y or n." +fi +``` +![status](image-1.png) + +## Key Learning +- Learned the importance of the shebang (#!/bin/bash), execution permissions, and how the shell processes scripts step by step. + +- Practiced using variables, echo, and read, and understood the difference between single vs double quotes for variable expansion. + +- Implemented conditional logic (if-else), file checks (-f), and service status validation using systemctl to build real-world automation scripts. 
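The single- vs double-quote difference called out in the key learnings can be demonstrated in two lines:

```shell
#!/bin/bash
# Double quotes expand variables; single quotes keep the text literal.
name="DevOps"
echo "Hello, $name!"   # double quotes expand the variable -> Hello, DevOps!
echo 'Hello, $name!'   # single quotes keep it literal     -> Hello, $name!
```

This is why prompts and messages that embed variables must use double quotes.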
diff --git a/2026/day-16/task-16/image-1.png b/2026/day-16/task-16/image-1.png new file mode 100644 index 0000000000..1f7727e71b Binary files /dev/null and b/2026/day-16/task-16/image-1.png differ diff --git a/2026/day-16/task-16/image.png b/2026/day-16/task-16/image.png new file mode 100644 index 0000000000..f54dd1cb19 Binary files /dev/null and b/2026/day-16/task-16/image.png differ diff --git a/2026/day-17/task-17/day-17-scripting.md b/2026/day-17/task-17/day-17-scripting.md new file mode 100644 index 0000000000..7b41c059e1 --- /dev/null +++ b/2026/day-17/task-17/day-17-scripting.md @@ -0,0 +1,97 @@ +## Day 17 – Shell Scripting: Loops, Arguments & Error Handling +# Task 1: For Loop +- Create for_loop.sh that loops through a list of 5 fruits and prints each one +``` +#!/bin/bash + +# List of fruits +fruits=("Apple" "Banana" "Mango" "Orange" "Grapes") + +# Loop through each fruit +for fruit in "${fruits[@]}" +do + echo "Fruit: $fruit" +done +``` +- Create count.sh that prints numbers 1 to 10 using a for loop +``` +#!/bin/bash + +for num in {1..10} +do + echo $num +done +``` +# Task 2: While Loop +- Takes a number from the user, counts down to 0 using a while loop, and prints "Done!" at the end +``` +#!/bin/bash + +read -p "Enter a number: " num + +while [ "$num" -ge 0 ] +do + echo $num + ((num--)) +done + +echo "Done!"
+``` +# Task 3: Command-Line Arguments +- Accepts a name as $1; prints Hello, <name>!; if no argument is passed, prints "Usage: ./greet.sh <name>" +``` +#!/bin/bash +if [ $# -eq 0 ]; then + echo "Usage: ./greet.sh <name>" + exit 1 +else + echo "Hello, $1!" +fi +``` +- Create args_demo.sh that: prints the total number of arguments `$#`, prints all arguments $@, prints the script name ($0) +``` +#!/bin/bash +echo "Script name: $0" +echo "Total number of arguments: $#" +echo "All arguments: $@" +``` +# Task 4: Install Packages via Script +``` +#!/bin/bash + +packages=("nginx" "curl" "wget") + +echo "**** Updating package list ****" +# -qq = even quieter than -q/--quiet (equivalent to passing --quiet twice) +apt-get update -qq + + +for package in "${packages[@]}"; +do + echo "***** Checking $package ***** " + if dpkg -s "$package" &> /dev/null; then + version=$(dpkg -s "$package" | awk '/^Version:/ {print $2}') + echo "$package is already installed. Version: $version" + + else + echo "***** $package not found. Installing ***** " + + if apt-get install -y "$package" &> /dev/null; then + echo "Successfully installed $package" + else + echo "Failed to install $package" + fi + fi +done +``` +# Task 5: Error Handling +- Create safe_script.sh that: uses set -e at the top (exit on error), tries to create a directory /tmp/devops-test, tries to navigate into it, creates a file inside, and uses the || operator to print an error if any step fails +``` +#!/bin/bash + +# Exit immediately if a command exits with non-zero status +set -e + +echo "Creating directory..." +mkdir /tmp/devops-test || { echo "Directory already exists"; exit 1; } +# The { ...; } is a command group. It lets you run multiple commands after ||. +cd /tmp/devops-test || { echo "Could not enter directory"; exit 1; } +touch demo-file.txt || { echo "Could not create file"; exit 1; }
+``` \ No newline at end of file diff --git a/2026/day-18/task-18/day-18-scripting.md b/2026/day-18/task-18/day-18-scripting.md new file mode 100644 index 0000000000..489e888aa8 --- /dev/null +++ b/2026/day-18/task-18/day-18-scripting.md @@ -0,0 +1,172 @@ +## Day 18 – Shell Scripting: Functions & Slightly Advanced Concepts +# Task 1: Basic Functions +- Create functions.sh with: a function greet that takes a name as an argument and prints Hello, <name>!, a function add that takes two numbers and prints their sum, and calls to both functions from the script +``` +#!/bin/bash + +greet() +{ + echo "Hello, $1!" +} + +greet "Apurva" + +add() +{ + sum=$(($1 + $2)) + echo "The sum is $sum" +} + +add 5 7 +``` +# Task 2: Functions with Return Values +- Create disk_check.sh with: a function check_disk that checks disk usage of / using df -h, a function check_memory that checks free memory using free -h, and a main section that calls both and prints the results +``` +#!/bin/bash + +check_disk() +{ + echo "---- Disk Usage: / ----" + df -h / +} + +check_memory() +{ + echo "---- Memory Status ----" + free -h +} + +check_disk +check_memory +``` +# Task 3: Strict Mode — set -euo pipefail +- Create strict_demo.sh with set -euo pipefail at the top +- Try using an undefined variable — what happens with set -u? +- Try a command that fails — what happens with set -e? +- Try a piped command where one part fails — what happens with set -o pipefail? +- Document: What does each flag do? (set -e, set -u, set -o pipefail) +``` +#!/bin/bash +set -euo pipefail + +echo "Strict mode enabled!" + +# undefined variable test +echo "value of name is: $name" + +# failing command test +echo "Trying to list a non-existent file..." +ls name.txt + +# pipe failure test +echo "Testing pipe failure..." +grep "text" name.txt | sort +echo "script completed!" +``` +- set -e → Exit immediately if any command fails (non-zero exit code).
+- set -u → Treat undefined variables as errors and stop the script. +- set -o pipefail → If any command in a pipeline fails, the whole pipeline fails. +# Task 4: Local Variables +- Create local_demo.sh with:A function that uses local keyword for variables, Show that local variables don't leak outside the function & Compare with a function that uses regular variables +``` +#!/bin/bash + +echo "=== Demonstrating local vs global variables ===" +echo + +# Function using local variable +function_with_local() { + local message="I am LOCAL" + echo "Inside function_with_local: $message" +} + +# Function using normal variable (global) +function_without_local() { + message="I am GLOBAL" + echo "Inside function_without_local: $message" +} + +# Call first function +function_with_local + +# Try accessing variable outside +echo "Outside after function_with_local: ${message:-Not Set}" +echo + +# Call second function +function_without_local + +# Access variable outside +echo "Outside after function_without_local: $message" +``` +# Task 5: Build a Script — System Info Reporter +- Create system_info.sh that uses functions for everything: + +- A function to print hostname and OS info +- A function to print uptime +- A function to print disk usage (top 5 by size) +- A function to print memory usage +- A function to print top 5 CPU-consuming processes +- A main function that calls all of the above with section headers +- Use set -euo pipefail at the top +``` +#!/bin/bash + +#A function to print hostname and OS info + +set -euo pipefail +printsystem_info() +{ + echo "---system information--" + echo "Host Name: $(hostname)" + echo "OS: $(uname -s)" + echo "kernel: $(uname -r)" +} + +# A function to print uptime +print_uptime() +{ + echo "---uptime---" + uptime -p + echo +} +# A function to print disk usage (top 5 by size) + +disk_usage() +{ + echo "---disk usage---" + sudo df -h |sort -rh | head -n 5 +} +# A function to print memory usage + +memory_usage() +{ + echo "---memory usage---" 
+ free -h +} +#A function to print top 5 CPU-consuming processes +print_top_process() +{ + echo "--top 5 CPU consuming process---" + ps aux --sort=-%cpu | head -n 6 +} +#A main function that calls all of the above with section headers + +main() +{ + echo "----system information report---" + printsystem_info + print_uptime + disk_usage + memory_usage + print_top_process + echo "--Report completed successfully---" + +} +main +``` \ No newline at end of file diff --git a/2026/day-19/day-19-project.md b/2026/day-19/day-19-project.md new file mode 100644 index 0000000000..dd8fb42d93 --- /dev/null +++ b/2026/day-19/day-19-project.md @@ -0,0 +1,59 @@ + +# Day 19 – Shell Scripting Project: Log Rotation, Backup & Crontab +## Task 1: Log Rotation Script + +This script ensures your disk doesn't fill up with stale logs. It targets logs older than a 5 Days. + +[script: log_rotate.sh](task-19/log_rotate.sh) + +``` +-c → create a new archive +-z → compress using gzip +-f → specify the filename of the archive +``` + +**OUTPUT** + +Screenshot 2026-02-22 at 12 48 25 AM + + +## Task 2. Server Backup Script (backup.sh) + +Standardized backups are vital. This script uses tar for archiving and includes a cleanup mechanism for backups older than 14 Days. + +[scripts: backup.sh](task-19/backup.sh) + +**OUTPUT** + +Screenshot 2026-02-22 at 12 56 36 AM + + +## Task 3. Crontab Entries +``` +# Log rotation daily at 2 AM +0 2 * * * /Users/fahadjaseem/documents/work/90DaysOfDevOps/2026/day-19/scripts/log_rotate.sh /var/log/myapp >> /var/log/rotate.log 2>&1 + +# Server backup every Sunday at 3 AM +0 3 * * 0 /Users/fahadjaseem/documents/work/90DaysOfDevOps/2026/day-19/scripts/backup.sh /etc /backup >> /var/log/backup.log 2>&1 + +# Daily 1 AM +0 1 * * * /Users/fahadjaseem/documents/work/90DaysOfDevOps/2026/day-19/scripts/maintenance.sh +``` + +## Task 4. Maintenance Script + +The maintenance.sh combines both functions with logging. 
+ +[script: maintenance.sh](task-19/maintanance.sh) + +**OUTPUT** + +![Output](image.png) + +## Key Learnings + +- Exit Codes Matter: Using exit 1 on errors prevents a script from continuing blindly and potentially deleting the wrong files. + +- The find Command is King: The ability to filter by time (-mtime) and execute actions (-exec) makes it the most powerful tool for cleanup tasks. + +- Error Handling is Crucial: Always check if directories exist before operating on them, verify backup integrity after creation, and provide meaningful error messages. This makes scripts production-ready and prevents data loss. diff --git a/2026/day-19/image.png b/2026/day-19/image.png new file mode 100644 index 0000000000..e47605a9c2 Binary files /dev/null and b/2026/day-19/image.png differ diff --git a/2026/day-19/task-19/backup.sh b/2026/day-19/task-19/backup.sh new file mode 100644 index 0000000000..45f31a1518 --- /dev/null +++ b/2026/day-19/task-19/backup.sh @@ -0,0 +1,56 @@ +#!/bin/bash + +# backup.sh +# Usage: ./backup.sh <source_dir> <backup_dir> + +# Display usage + +if [ $# -ne 2 ]; then + echo "Usage: $0 <source_dir> <backup_dir>" + exit 1 +fi + +source_dir=$1 +backup_dir=$2 + +# Check if source exists and is a directory + +if [ ! -d "$source_dir" ]; then + echo "source directory $source_dir does not exist" + exit 1 +fi + +# generate timestamp (avoid colons, which are awkward in filenames) + +timestamp=$(date '+%Y-%m-%d-%H-%M-%S') + +#ARCHIVE_NAME="backup-$TIMESTAMP.tar.gz" +#ARCHIVE_PATH="$DEST/$ARCHIVE_NAME" + +# function to create a backup +backup(){ + # Create a compressed tar.gz archive of the source directory in the backup directory + tar -czf "${backup_dir}/backup_${timestamp}.tar.gz" "${source_dir}" > /dev/null + + # Check if tar command succeeded + if [ $?
-eq 0 ]; then + echo "Backup created successfully: backup_${timestamp}" + # List all backups in the backup directory with their sizes + du -h "$backup_dir"/*.tar.gz + else + echo "Backup failed" + exit 1 + fi +} + +# Call the backup function +backup + +delete_oldbackup() +{ + echo "Deleting old backups" + # -mmin +15 (older than 15 minutes) is for testing; use -mtime +14 for the 14-day policy + find "$backup_dir" -type f -name "backup_*.tar.gz" -mmin +15 -exec rm -f {} \; + exit 0 +} + +delete_oldbackup \ No newline at end of file diff --git a/2026/day-19/task-19/demo/file1.txt b/2026/day-19/task-19/demo/file1.txt new file mode 100644 index 0000000000..e69de29bb2 diff --git a/2026/day-19/task-19/demo/file2.txt b/2026/day-19/task-19/demo/file2.txt new file mode 100644 index 0000000000..e69de29bb2 diff --git a/2026/day-19/task-19/demo/mylog.log.gz b/2026/day-19/task-19/demo/mylog.log.gz new file mode 100644 index 0000000000..ec252f94be Binary files /dev/null and b/2026/day-19/task-19/demo/mylog.log.gz differ diff --git a/2026/day-19/task-19/log_rotate.sh b/2026/day-19/task-19/log_rotate.sh new file mode 100644 index 0000000000..8f772426e5 --- /dev/null +++ b/2026/day-19/task-19/log_rotate.sh @@ -0,0 +1,28 @@ +#!/bin/bash + +# Store the first argument as the log directory +LOG_DIR="$1" + +# check if the argument was passed +if [ $# -ne 1 ]; then + echo "Usage: ./log_rotate.sh <log_dir>" + exit 1 +fi + +# Exit with an error if the directory doesn't exist +if [ ! -d "$LOG_DIR" ]; then + echo "Directory doesn't exist: $LOG_DIR" + exit 1 +fi + +# Compress .log files older than 1 day (-mtime +1; use -mtime +7 for a 7-day policy) and count them +compressed=$(find "$LOG_DIR" -type f -name "*.log" -mtime +1 -exec gzip {} \; -print | wc -l) + + +# Delete .gz files older than 2 minutes (-mmin +2 is for testing; use -mtime +30 for a 30-day policy) and count them +deleted=$(find "$LOG_DIR" -type f -name "*.gz" -mmin +2 -delete -print | wc -l) + + +# Print how many files were compressed and deleted +echo "Compressed $compressed files." +echo "Deleted $deleted files."
\ No newline at end of file diff --git a/2026/day-19/task-19/maintanance.sh b/2026/day-19/task-19/maintanance.sh new file mode 100644 index 0000000000..35b707c451 --- /dev/null +++ b/2026/day-19/task-19/maintanance.sh @@ -0,0 +1,33 @@ +#!/bin/bash + +# ============================== +# Maintenance Script +# ============================== + +LOG_FILE="/var/log/maintanance.log" +# Function to log messages with timestamp +log() +{ + echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOG_FILE" +} + +# Run both scripts + +rotate_logs() +{ + log "Starting log rotation..." + sudo bash ./log_rotate.sh /home/ubuntu/logs 2>&1 | tee -a "$LOG_FILE" + log "Log rotation completed." +} + +run_backup(){ + log "Starting backup..." + sudo bash ./backup.sh /app-logs /backups 2>&1 | tee -a "$LOG_FILE" + log "Backup completed" +} + +#call the functions +rotate_logs +run_backup + +log "Maintenance completed successfully!" diff --git a/2026/day-20/task-20/day-20-solution.md b/2026/day-20/task-20/day-20-solution.md new file mode 100644 index 0000000000..b4ece11c70 --- /dev/null +++ b/2026/day-20/task-20/day-20-solution.md @@ -0,0 +1,67 @@ +## Task 1: Input and Validation +``` +#!/bin/bash + +log_file=$1 + +if [ $# -ne 1 ]; then + echo "Usage: ./log_analyzer.sh <logfile>" + exit 1 +elif [ !
-f "$log_file" ]; then + echo "Log file doesn't exist" + exit 1 +fi +``` +## Task 2: Error Count +``` +# Count the total number of lines in the log file +total_lines_count=$(wc -l < "$log_file") + +# Count the number of error messages +error_message_count=$(grep -c "ERROR" "$log_file") +``` +## Task 3: Critical Events +``` +# Search for lines containing the keyword CRITICAL, with line numbers +critical_event=$(grep -n "CRITICAL" "$log_file" | sed 's/^\([0-9]*\):/Line \1:/') +``` +## Task 4: Top Error Messages +``` +top_error=$(grep "ERROR" "$log_file" | awk '{$1=$2=$3=""; print}' | sort | uniq -c | sort -rn | head -5) +``` +## Task 5: Summary Report +``` +summary_report="log_report_$(date +%Y-%m-%d).txt" + +{ + echo "Date of Analysis: $(date)" + echo "Log file name: $log_file" + echo "Total lines processed: $total_lines_count" + echo "Total error count: $error_message_count" + echo "Top 5 error messages: $top_error" + echo "List of critical events: $critical_event" + +} | tee "$summary_report" + +echo "Summary report generated: $summary_report" +``` +## Task 6 (Optional): Archive Processed Logs +``` +# Create an archive/ directory if it doesn't exist +archive_dir="./archive_dir" + +if [ !
-d "$archive_dir" ]; then + mkdir "$archive_dir" +fi + +# Move the processed log file into archive/ after analysis +mv "$log_file" "$archive_dir/" + +# Print a confirmation message +echo "$log_file moved to $archive_dir" + +echo "Log analysis completed" +``` +# OUTPUT +![output](image-1.png) +![output2](image-2.png) \ No newline at end of file diff --git a/2026/day-20/task-20/image-1.png b/2026/day-20/task-20/image-1.png new file mode 100644 index 0000000000..7f2a40e625 Binary files /dev/null and b/2026/day-20/task-20/image-1.png differ diff --git a/2026/day-20/task-20/image-2.png b/2026/day-20/task-20/image-2.png new file mode 100644 index 0000000000..ba76aa9923 Binary files /dev/null and b/2026/day-20/task-20/image-2.png differ diff --git a/2026/day-20/task-20/image.png b/2026/day-20/task-20/image.png new file mode 100644 index 0000000000..4dc4f60afa Binary files /dev/null and b/2026/day-20/task-20/image.png differ diff --git a/2026/day-21/task-21/shell_scripting_cheatsheet.md b/2026/day-21/task-21/shell_scripting_cheatsheet.md new file mode 100644 index 0000000000..de3f05e428 --- /dev/null +++ b/2026/day-21/task-21/shell_scripting_cheatsheet.md @@ -0,0 +1,232 @@ + +# Day 21 – Shell Scripting Cheat Sheet: Build Your Own Reference Guide +## Task 1: Basics +### Shebang +Tells the system which interpreter to use, e.g. `#!/bin/bash`. + +### Execution +- chmod +x script.sh (make executable) +- ./script.sh (run directly) +- bash script.sh (run via bash) + +### Comments +Use # for single-line or inline notes. + +### Variables +``` +NAME="Fahad" + +echo $NAME +echo "$NAME" +echo '$NAME' +``` +Difference: +- $NAME → expands to the value (subject to word splitting) +- "$NAME" → expands but preserves spaces +- '$NAME' → literal text, no expansion + + +### Reading User Input +`read -p "Enter name: " NAME` +`echo "Hello $NAME"` + + +### Command Line Arguments +``` +echo "Script name: $0" +echo "First arg: $1" +echo "Total args: $#" +echo "All args: $@" +echo "Exit status: $?" +``` + +`./script.sh file1 file2` + +--- + +## Task 2. 
Operators and Conditionals
### Comparison Operators
- Strings: = (equal), != (not equal), -z (string is empty), -n (not empty).
- Integers: -eq (equal), -ne (not equal), -lt (less than), -gt (greater than), -le / -ge (less/greater than or equal).

### File Tests

`-f`: Is a regular file | `-d`: Is a directory | `-e`: Path exists.

`-r/-w/-x`: Is readable/writable/executable.

### IF/ELSE
```
if [ -f file.txt ]; then
    echo "Exists"
elif [ -d folder ]; then
    echo "Directory exists"
else
    echo "Not found"
fi
```

### Logical Operators
```
[ -f file ] && echo "Exists"

[ -f file ] || echo "Not exists"

[ ! -f file ]
```

### Case Statement
```
case $VAR in
    start)
        echo "Starting"
        ;;
    stop)
        echo "Stopping"
        ;;
    *)
        echo "Unknown"
        ;;
esac
```

---

## Task 3: Loops
```
# List-based For
for item in apple orange banana; do echo "$item"; done

# C-style For
for ((i=0; i<5; i++)); do echo $i; done

# While Loop (runs while COUNT is greater than 0)
COUNT=5
while [ $COUNT -gt 0 ]; do ((COUNT--)); done

# Looping over Files
for file in *.log; do mv "$file" "${file}.bak"; done

# Reading Command Output/File line-by-line
ls | while read -r line; do echo "Processing $line"; done
```

---

## Task 4: Functions
```
# Definition
deploy_app() {
    local version=$1   # local scope variable
    echo "Deploying version $version"
    return 0           # Return status (0-255)
}

# Calling
deploy_app "v1.2.0"

# Result capture
status=$(deploy_app "v2")   # Captures echo output, not return value
```

---

## Task 5: Text Processing Commands
### GREP
```
grep "error" file.log
grep -i "error" file.log
grep -r "error" /var/log
grep -c "error" file.log
grep -n "error" file.log
grep -v "info" file.log
grep -E "error|fail" file.log
```

### AWK
```
awk '{print $1}' file.txt

awk -F: '{print $1}' /etc/passwd

awk 'BEGIN {print "Start"} {print $1} END {print "End"}' file.txt
```

### SED
```
sed 's/old/new/g' file.txt

sed -i 's/error/warning/g' file.txt

sed '2d' file.txt
```

### 
CUT
```
cut -d: -f1 /etc/passwd
cut -c1-5 file.txt
```

### SORT
```
sort file.txt
sort -n file.txt
sort -r file.txt
sort -u file.txt
```

### UNIQ
```
uniq file.txt
uniq -c file.txt
```

### TR
```
tr 'a-z' 'A-Z' < file.txt

tr -d ' ' < file.txt
```

### WC
```
wc -l file.txt
wc -w file.txt
wc -c file.txt
```

### Head and Tail
```
head -n 10 file.txt

tail -n 10 file.txt

tail -f file.log
```

---

## Task 6: Useful Patterns and One-Liners

- Delete files older than 30 days: `find /path -type f -mtime +30 -delete`

- Count lines in all logs: `wc -l *.log`

- Check if a service is active: `systemctl is-active --quiet nginx || echo "Nginx Down"`

- Replace a string in multiple files: `grep -rl 'old' . | xargs sed -i 's/old/new/g'`

- Monitor disk usage alert: `df -h | awk '$5+0 > 90 {print $0}'` (prints partitions > 90% full; `$5+0` forces the "90%"-style field to be compared numerically).

---

## Task 7: Error Handling and Debugging

- set -e: Script exits immediately if a command returns a non-zero status.

- set -u: Exit if an undefined variable is used.

- set -o pipefail: Catch errors hidden inside pipes (e.g., grep | sed).

- set -x: Print every command before executing (trace mode).

- Trap: trap 'rm -f /tmp/lock' EXIT (execute cleanup code when the script finishes/fails).
diff --git a/2026/day-22/task-22/git-commands.md b/2026/day-22/task-22/git-commands.md
new file mode 100644
index 0000000000..de343d29bd
--- /dev/null
+++ b/2026/day-22/task-22/git-commands.md

## Day 22 – Introduction to Git: My First Repository
# Task 1: Install and Configure Git

# Task 2: Create Your Git Project

# Task 3: Create Your Git Commands Reference
## Git Commands Reference
# Setup & Config

- `git --version` - shows the installed Git version.
```
git --version
```
- `git config --global user.name` - sets your global Git username.
```
git config --global user.name "Rahul Sharma"
```
- `git config --global user.email` - sets your global Git email address.
```
git config --global user.email "rahul@gmail.com"
```
- `git config --list` - displays all Git configuration settings.
```
git config --list
```
# Basic Workflow
- `git init` - initializes a new Git repository.
```
git init
```
- `mkdir` - creates a new directory.
```
mkdir devops-git-practice
```
- `cd` - changes the current directory.
```
cd devops-git-practice
```
- `touch` - creates a new file.
```
touch git-commands.md
```
# Viewing Changes
- `git status` - shows the current status of the repository, including tracked and untracked files.
```
git status
```
# Task 4: Stage and Commit - Staging & Committing

- `git add` - adds files to the staging area.
```
git add git-commands.md
```
- `git commit -m` - saves staged changes with a message.
```
git commit -m "Add git commands reference documentation"
```
![commit](image.png)
- `git log` - displays commit history.
```
git log
```
- `git log --oneline` - shows a compact version of commit history.
```
git log --oneline
```
![git oneline](image-1.png)

# Visual Git Workflow (Very Important)
```
Working Directory
      │
      │ git add
      ▼
Staging Area
      │
      │ git commit
      ▼
Git Repository (History)
```
## Task 5: Make More Changes and Build History
## Branching
- `git branch` - shows all branches in the repository.
```
git branch
```
- `git checkout` - switches to another branch.
```
git checkout main
```
# Task 6: Understand the Git Workflow
- Step 1: Create the Notes File
```
touch day-22-notes.md
```
- What is the difference between git add and git commit?
- Ans: `git add` moves changes from the working directory to the staging area. It prepares the files that will be included in the next commit.
`git commit` saves the staged changes into the Git repository as a permanent snapshot, with a message explaining what changed.
- Example workflow:
```
git add file.txt
git commit -m "Added new feature"
```
- What does the staging area do? Why doesn't Git just commit directly?
- Ans: The staging area is a place where you prepare changes before committing them. It allows you to select exactly which changes should go into the next commit. Git uses a staging area so developers can group related changes together instead of committing everything at once.

- What information does git log show you?
- Ans: git log shows the commit history of the repository.
It includes:
- Commit ID (unique hash)
- Author name and email
- Date of the commit
- Commit message

- What is the .git/ folder and what happens if you delete it?
- Ans: The .git folder is the hidden directory that stores all Git data for the repository. It contains commits, branches, configuration, and history. If the .git folder is deleted, the project is no longer a Git repository and all version history is lost.

- What is the difference between a working directory, staging area, and repository?
- Working Directory: the folder where you edit and create files on your computer.
- Staging Area: where files are prepared before committing. Files are added here using git add.
- Repository: where Git permanently stores committed versions of files along with the project history.
- List all branches in your repo
```
git branch
```
- Create a New Branch Called feature-1
```
git branch feature-1
```
- Switch to feature-1
```
git checkout feature-1
```
## Create a new branch and switch to it in a single command — call it feature-2
```
git checkout -b feature-2
```
## Try using git switch to move between branches — how is it different from git checkout?
```
git switch feature-1
```
## Difference Between checkout and switch
- git checkout - Older command used for switching branches AND restoring files
- git switch - Newer command used only for switching branches, simpler and safer

## Make a Commit on feature-1 (Not on Main)
- `git switch feature-1` - switch to feature-1 (run `git branch` to list all branches and confirm where you are)
- `git add git-commands.md`
- `git commit -m "Add branching commands documentation"`
- Switch back to main and verify - the commit from feature-1 is NOT in main.
![git switch](image-3.png)
![git main](image.png)

- Delete a Branch You No Longer Need
```
git branch -D feature-2
```
## Task 3: Push to GitHub
diff --git a/2026/day-22/task-22/image-1.png b/2026/day-22/task-22/image-1.png
new file mode 100644
index 0000000000..606bf2528d
Binary files /dev/null and b/2026/day-22/task-22/image-1.png differ
diff --git a/2026/day-22/task-22/image-2.png b/2026/day-22/task-22/image-2.png
new file mode 100644
index 0000000000..8019c002e3
Binary files /dev/null and b/2026/day-22/task-22/image-2.png differ
diff --git a/2026/day-22/task-22/image.png b/2026/day-22/task-22/image.png
new file mode 100644
index 0000000000..538d500863
Binary files /dev/null and b/2026/day-22/task-22/image.png differ
diff --git a/2026/day-23/task-23/day-23-notes.md b/2026/day-23/task-23/day-23-notes.md
new file mode 100644
index 0000000000..d4153835bd
--- /dev/null
+++ b/2026/day-23/task-23/day-23-notes.md

# Day 23 – Git Branching & Working with GitHub
## Task 1: Understanding Branches
## 1. What is a branch in Git?

A branch in Git is a separate line of development that allows developers to work on new features, fixes, or experiments without affecting the main project. Each branch has its own commits and history until it is merged back.

---

## 2. Why do we use branches instead of committing everything to main?

Branches help keep the main branch stable and production-ready.
Developers can work on new features or bug fixes in separate branches without breaking the main codebase. Once the changes are tested and ready, they can be merged into the main branch.

---
## 3. What is HEAD in Git?

HEAD is a pointer that tells Git which branch or commit you are currently working on. Most of the time, HEAD points to the latest commit on the current branch.

Example:
If you are on the main branch, HEAD points to the latest commit on main.

## 4. What happens to your files when you switch branches?

When you switch branches, Git updates the files in your working directory to match the version stored in that branch. Files may change, appear, or disappear depending on what exists in that branch.

## Task 2: Branching Commands — Hands-On
- List all branches in your repo
```
git branch
```
- Create a New Branch Called feature-1
```
git branch feature-1
```
- Switch to feature-1
```
git checkout feature-1
```
## Create a new branch and switch to it in a single command — call it feature-2
```
git checkout -b feature-2
```
## Try using git switch to move between branches — how is it different from git checkout?
```
git switch feature-1
```
## Difference Between checkout and switch
- git checkout - Older command used for switching branches AND restoring files
- git switch - Newer command used only for switching branches, simpler and safer

## Make a Commit on feature-1 (Not on Main)
- `git switch feature-1` - switch to feature-1 (run `git branch` to confirm where you are)
- `git add git-commands.md`
- `git commit -m "Add branching commands documentation"`
- Switch back to main and verify - the commit from feature-1 is NOT in main.

- Delete a Branch You No Longer Need
```
git branch -D feature-2
```
## Task 3: Push to GitHub
- What is origin? - origin is the default name for the main remote repository that your local repo is connected to.
When you clone a repository, Git automatically creates origin.
- Example:
```
git clone https://github.com/user/project.git
git remote -v
# you will see:
origin https://github.com/user/project.git (fetch)
origin https://github.com/user/project.git (push)
```
- So origin usually means your GitHub repository, the one you push your code to, e.g. `git push origin main`.

- What is upstream? - upstream is usually used when you fork someone else's repository.
In this case:

origin → your forked repository
upstream → the original repository you forked from

- Example structure:
```
Original repo (someone else's project)
        ↑
     upstream
        ↑
Your fork on GitHub
        ↑
      origin
        ↑
Your local machine
```
## Task 4: Pull from GitHub
- What is the difference between git fetch and git pull?
- git fetch downloads the latest changes from the remote repository but does NOT modify your local working files. It only updates the remote-tracking branches. Fetch means download updates only.
- git pull downloads changes AND immediately merges them into your current branch.

## Task 5: Clone vs Fork
- What is the difference between clone and fork?
Clone creates a copy of a repository from GitHub (or another remote) to your local machine.
You use it when you want to download a repository and start working on it locally.
```
GitHub Repository
        │
        │ git clone
        ▼
Your Local Machine
```
Clone = copy repo from remote → local machine

- A fork creates a copy of someone else's repository in your own GitHub account.
You usually fork a project when you do not have write access to the original repository but want to contribute.
Forking is done on GitHub's website, not through a Git command.

- When would you clone vs fork?
Use clone when you want to work directly on a repository you have access to.
- This is common in:
  - Your own repositories
  - Team/company projects
  - Internal projects where you already have write permission
```
GitHub Repo
    │
    │ git clone
    ▼
Local Machine
    │
    │ commit + push
    ▼
Same Repository
```
- Use fork when you want to contribute to a repository but you do NOT have write access.

This is common in:
  - Open-source projects
  - External repositories
  - Projects owned by other organizations
- You cannot push directly. So you:

1️⃣ Fork the repository
2️⃣ Clone your fork
3️⃣ Push changes to your fork
4️⃣ Create a Pull Request
```
Original Repo
    │
    │ fork
    ▼
Your GitHub Repo
    │
    │ clone
    ▼
Local Machine
    │
    │ push
    ▼
Your Fork
    │
    │ Pull Request
    ▼
Original Repo
```
- After forking, how do you keep your fork in sync with the original repo?
To keep your fork updated with the original repository, you need to connect the original repo as upstream and pull changes from it.
- Step 1: Clone Your Fork - First, clone your fork from GitHub:
``` git clone https://github.com/YOUR-USERNAME/repository-name.git ```
- Step 2: Add the Original Repository as upstream
``` git remote add upstream https://github.com/ORIGINAL-OWNER/repository-name.git ```
- Check remotes:
``` git remote -v ```
Example output:
```
origin https://github.com/YOUR-USERNAME/repository-name.git
upstream https://github.com/ORIGINAL-OWNER/repository-name.git
```
- Step 3: Fetch Changes from the Original Repo - Download updates from the original repo:
```git fetch upstream``` This downloads changes but does not merge them yet.
- Step 4: Update Your Local Branch - Switch to the main branch:
``` git switch main ```
Merge updates from upstream:
``` git merge upstream/main ``` Now your local repo has the latest changes.
- Step 5: Push Updates to Your Fork - Update your fork on GitHub:
``` git push origin main ```
- Full Sync Workflow
```
git fetch upstream
git switch main
git merge upstream/main
git push origin main
```
```
Original Repo (upstream)
        │
        │ fetch
        ▼
Local Repository
        │
        │ push
        ▼
Your Fork (origin)
```
diff --git a/2026/day-24/task-24/day-24-notes.md b/2026/day-24/task-24/day-24-notes.md
new file mode 100644
index 0000000000..dfd8c4af81
--- /dev/null
+++ b/2026/day-24/task-24/day-24-notes.md

## Day 24 – Advanced Git: Merge, Rebase, Stash & Cherry Pick
# Task 1: Git Merge — Hands-On
- 1 - Create a new branch feature-login from main, add a couple of commits to it
- `git switch -c feature-login` - creates the branch feature-login and switches to it
- Add two commits to feature-login and merge it into main - this results in a fast-forward merge, because main had no new commits after the branch was created. Git simply moves the main pointer forward.
```
A --- B --- C
main
feature-login
```
- 2 - Now create another branch feature-signup, add commits to it — but also add a commit to main before merging.
A merge commit is created, because both branches now have different histories.
```
A --- D ------- M
 \             /
  B -------- C
```
Here M = merge commit
## What is a fast-forward merge?
- A fast-forward merge happens when the target branch has no new commits since the feature branch was created. Git simply moves the branch pointer forward instead of creating a new merge commit.
```
Example:
main: A
feature: A --- B --- C
After merge:
main: A --- B --- C
```
## When does Git create a Merge Commit?
- Git creates a merge commit when both branches have new commits and their histories have diverged.
Example:
```
main: A --- D
       \
feature: B --- C
```
After merge:
```
A --- D ------ M
 \            /
  B -------- C
```
M is the merge commit.
## What is a Merge Conflict?
- A merge conflict occurs when Git cannot automatically combine changes from two branches. This usually happens when:
  - The same line of code is edited in both branches
  - The same file is modified differently
  - Binary files (images, etc.) are changed in both branches

Example conflict scenario: changes to the same file and line
```
main branch: Hello World
feature branch: Hello Git
```
Git cannot decide which one to keep, so it asks the user to resolve the conflict manually.

✅ Short summary

| Situation | Result |
|-----------|--------|
| Main has no new commits | Fast-forward merge |
| Both branches changed | Merge commit |
| Same line edited in both branches | Merge conflict |

## Task 2: Git Rebase — Hands-On
- Create a branch feature-dashboard from main, add 2-3 commits -> ```git switch -c feature-dashboard```
- While on main, add a new commit (so main moves ahead)
- What does rebase actually do to your commits?
- Rebase moves your branch commits and reapplies them on top of another branch. It rewrites commit history by creating new commits.
```
Before rebase:
main: A --- E
feature: B --- C --- D

After rebase:
A --- E --- B' --- C' --- D'
```
## How is the history different from a merge?
- Merge keeps branch history and creates a merge commit.
```
A --- E ---- M
 \          /
  B--C--D
```
- Rebase replays commits and creates a linear history.
```
A --- E --- B' --- C' --- D'
```
So the history looks cleaner and easier to read.
## Why should you never rebase commits that have been pushed and shared?
- Because rebase rewrites commit history. If commits were already pushed:
- Other developers may already have those commits.
- Rewriting them causes history conflicts.
- It can break the repository for others.
- Rule:
❗ Never rebase public/shared branches.

## When would you use rebase vs merge?
- Use rebase to: clean up local commits, update a feature branch with the latest main, maintain a linear history, or tidy a branch before creating a pull request.
Rebase reapplies commits from one branch onto another branch to create a linear history.
- Use merge to: combine branches safely, work on a shared branch, preserve full branch history, or collaborate in a team.
Merge combines the histories of two branches by creating a merge commit.
## Task 3: Squash Commit vs Merge Commit
- What does squash merging do?
- Squash merging combines multiple commits from a feature branch into a single commit before adding it to the main branch. This creates a clean and simple commit history. Command - ``` git merge --squash feature-profile ```
## When would you use squash merge vs regular merge?
- Use squash merge when: there are many small commits, the commit history is messy, or you want a clean history.
- Use regular merge when: you want to preserve the complete commit history, you are working with a team, or important commits should stay visible.
## What is the trade-off of squashing?
- The trade-off is that you lose the detailed commit history of the feature branch. This means you cannot easily see:
  - individual fixes
  - step-by-step changes
  - who made specific small changes

Simple Visual Summary
SQUASH
```
A --- S
```
MERGE
```
A --- M
     / \
    B   C
```
## Task 4: Git Stash — Hands-On
- Start making changes to a file but do not commit. Now imagine you need to urgently switch to another branch — try switching. What happens? If the uncommitted changes conflict with the target branch, Git refuses to switch and tells you to commit or stash them first.
![git_stash](image.png)
## What is the difference between git stash pop and git stash apply?
- git stash pop -> Applies the stash and removes it from the stash list
- git stash apply -> Applies the stash but keeps it in the stash list

## When would you use stash in a real-world workflow?
- Developers use git stash when:
  - They are in the middle of work
  - They need to quickly switch branches
  - They are not ready to commit changes

- Example situations:
  - Urgent bug fix on another branch
  - Pulling latest changes
  - Testing something quickly

Stash temporarily saves unfinished work without committing it.

## Useful stash commands
- git stash -> save changes
- git stash list -> view stashes
- git stash pop -> apply + remove stash
- git stash apply -> apply stash only
- git stash drop stash@{0} -> delete stash
- git stash clear -> delete all stashes
## Task 5: Cherry Picking
- Create a branch feature-hotfix, make 3 commits with different changes - ```git checkout -b feature-hotfix```
- Switch to main & cherry-pick only the second commit from feature-hotfix onto main - ```git log --oneline``` shows the second commit's id, then ```git cherry-pick 953aaa7```
- I got a conflict, which I resolved manually, and ran ```git cherry-pick --continue```, but Vim failed to open for the commit message (`Waiting for your editor to close the file... Vim: Error reading input, exiting...`). So I ran ``` git commit -m "Cherry-picked commit 953aaa7" ``` and checked the graph with ```git log --oneline --graph```:
```
* a2e208d (HEAD -> master) Cherry-picked commit 953aaa7
* 329c683 Added picture
* 2c2dadf Add profile feature
* 3fe48e4 new line added
* c08f35e (origin/master, origin/HEAD) Merge branch 'TrainWithShubham:master' into master
|\
| * 4cba04c Added day 16 tasks
| * 929b190 Added day 15 tasks
```
The graph shows the commit `a2e208d (HEAD -> master) Cherry-picked commit 953aaa7`: Git created the final commit using the staged changes from the cherry-pick, so commit 953aaa7 became a2e208d.
- Even though you ran git commit manually, the commit is still considered a cherry-picked commit because the changes came from the cherry-pick operation.
- You can confirm with ```git show a2e208d```
![git_show](image-1.png)
diff --git a/2026/day-24/task-24/image-1.png b/2026/day-24/task-24/image-1.png
new file mode 100644
index 0000000000..98e85b4ea0
Binary files /dev/null and b/2026/day-24/task-24/image-1.png differ
diff --git a/2026/day-24/task-24/image.png b/2026/day-24/task-24/image.png
new file mode 100644
index 0000000000..a68a819359
Binary files /dev/null and b/2026/day-24/task-24/image.png differ
diff --git a/2026/day-25/task-25/day-25-notes.md b/2026/day-25/task-25/day-25-notes.md
new file mode 100644
index 0000000000..daecb30a10
--- /dev/null
+++ b/2026/day-25/task-25/day-25-notes.md

## Day 25 – Git Reset vs Revert & Branching Strategies
## Task
You'll learn how to undo mistakes safely — one of the most important skills in Git. You'll also explore branching strategies used by real engineering teams to manage code at scale.
# Task 1: Git Reset — Hands-On
- Difference between --soft, --mixed, and --hard

| Reset Type | Commit History | Staging Area | Working Directory |
| ------------------- | -------------- | -------------------- | ----------------- |
| `--soft` | Moves HEAD | Keeps staged changes | No change |
| `--mixed` (default) | Moves HEAD | Unstages changes | Keeps changes |
| `--hard` | Moves HEAD | Clears staging | Deletes changes |

## Which one is destructive?
- `git reset --hard`, because it deletes commits and removes file changes from the working directory. If not backed up, the changes are lost permanently.
## When would you use each one?
- Soft Reset: fix commit messages, combine commits, re-commit quickly
- Mixed Reset: undo a commit but keep the code changes, modify files before committing again
- Hard Reset: completely discard changes, reset the repository to a previous state
## Should you use reset on pushed commits?
- ❌ Generally no. Reset rewrites history, other developers may already have those commits, and it causes conflicts when pulling or pushing. Use git revert instead for shared repositories.

- SHORT ANSWER: ```Git reset moves the HEAD pointer to a previous commit and optionally modifies the staging area and working directory depending on the reset mode.```

## Task 2: Git Revert — Hands-On
- How is git revert different from git reset?

| Git Reset | Git Revert |
| ---------------------------- | -------------------- |
| Moves the branch pointer | Creates a new commit |
| Deletes commits from history | Keeps commit history |
| Rewrites history | Preserves history |

Example: reset
```
X --- Y --- Z
↑
reset here
```
Result: X
Example: revert
```
X --- Y --- Z --- Y'
```
Y remains in history, but its changes are undone by the new commit Y'.
## Why is revert safer for shared branches?
- Because it does not rewrite history. In team environments, other developers may already have pulled the commits, and a reset would change commit history and cause conflicts. Revert simply adds another commit, so history remains consistent.

## When would you use revert vs reset?
- Use git revert: on shared branches (main, master, production), when a bad commit has already been pushed, and when working in a team
- Use git reset: on local branches, when commits have not been pushed yet, and when cleaning up commit history

- Short Interview Definition: Git revert creates a new commit that undoes changes from a previous commit while keeping the original commit in history.
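The revert behaviour described above can be demonstrated end-to-end in a disposable repository. This is a minimal sketch: the file name, commit messages, and demo identity are illustrative, not from the original notes.

```shell
#!/bin/bash
# Sketch: git revert undoes a bad change while keeping it in history
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com   # throwaway identity for the demo
git config user.name demo

echo "hello" > app.txt
git add app.txt
git commit -qm "good commit"             # commit X

echo "bug" >> app.txt
git commit -qam "bad commit"             # commit Y, the one we regret

git revert --no-edit HEAD                # creates Y', which undoes Y
cat app.txt                              # file content is back to just "hello"
git log --oneline                        # three commits: revert, bad, good
```

Note that `git log` still shows the bad commit; revert adds history rather than rewriting it, which is exactly why it is safe on branches others have already pulled.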
+## Task 3: Git Reset vs Git Revert - Summary + +| Feature | `git reset` | `git revert` | +|--------|-------------|-------------| +| **What it does** | Moves the branch pointer (HEAD) to a previous commit and optionally changes the staging area and working directory | Creates a new commit that **undoes the changes** made by a previous commit | +| **Removes commit from history?** | Yes, it can remove commits from history | No, the original commit remains in history | +| **Safe for shared/pushed branches?** | ❌ No, because it rewrites history | ✅ Yes, because it preserves history | +| **When to use** | When working locally and you want to undo or modify recent commits | When a bad commit has already been pushed and you need to undo it safely | + +## Task 4: Branching Strategies +- A branching strategy is a set of rules that defines how and when developers create, use, and merge branches in Git. It keeps teamwork organized and prevents code conflicts. + + +### 1) GitFlow + +**How it works (short):** +A structured model with two long-lived branches: `main` (production) and `develop` (integration). Work happens in short-lived `feature/*` branches from `develop`. Releases are stabilized in `release/*`, and urgent fixes go to `hotfix/*` from `main`. 
+ +**Simple Flow (text diagram):** +``` +main ────────────────────────────────●────────────── + ↑ +hotfix ──────────────────●──────────┤ + ↓ │ +develop ──────●──────────●──────────●────── + ↑ ↑ +feature/A ────● │ + │ +feature/B ───────────────● +``` +### How it works +GitFlow uses multiple long-lived branches, each with a specific role: + +| Branch | Purpose | +|--------|---------| +| `main` | Only production-ready, released code | +| `develop` | Ongoing development work lives here | +| `feature/*` | One branch per new feature, branched off `develop` | +| `release/*` | Final testing/stabilization before going live | +| `hotfix/*` | Emergency fixes applied directly to `main` | + +### Pros and Cons +| Pros | Cons | +|------|------| +| Very structured and organized | Many branches to manage | +| Clear release process | Slow — not ideal for daily shipping | +| Supports parallel development | Merge conflicts are common | +| Multiple versions can be maintained | Overkill for small teams | + +## 2. GitHub Flow + +### How it works +Super simple — only two things exist: `main` and a short-lived feature branch. + +1. Create a branch from `main` +2. Make your changes and commit +3. Open a Pull Request (PR) +4. Get it reviewed +5. Merge into `main` +6. Deploy immediately + +### Flow diagram +``` +main ──●──────────────────●──────────────────●── + └── feature/login ─┘ └── feature/dashboard ──┘ + (PR + review + merge) (PR + review + merge) +``` +### When / where it's used +- Startups shipping fast +- SaaS products with continuous delivery +- Open source projects (e.g. React, Vue) +- Small to medium teams + +### Pros and Cons +| Pros | Cons | +|------|------| +| Simple and easy to learn | No staging / release buffer | +| Fast — ship every day | Risky without strong automated tests | +| Perfect for CI/CD pipelines | Not great for managing multiple versions | + +## 3. 
Trunk-Based Development + +### How it works +Everyone commits directly to `main` (the "trunk") — or merges tiny short-lived branches within hours (not days). Incomplete features are hidden using **feature flags** so the app stays working at all times. + +- No long-lived branches +- CI (Continuous Integration) runs on every commit +- `main` is always in a deployable state + +### Flow diagram +``` +main ──●──●──●──────────●──●──●──────────────────► + ↑ ↑ + short branch short branch + (merged same day) + + [feature flag hides unfinished work] +``` + +### When / where it's used +- High-performance engineering teams (Google, Meta, Netflix) +- Teams with mature CI/CD and strong test coverage +- When you want zero merge conflicts + +### Pros and Cons +| Pros | Cons | +|------|------| +| Fewest merge conflicts | Requires discipline and feature flags | +| Forces frequent integration | Needs solid automated test coverage | +| Codebase always up to date | Hard for junior teams to adopt | +| Scales well for large teams | Mistakes can affect everyone immediately | + +--- + +## Quick Comparison + +| | GitFlow | GitHub Flow | Trunk-Based | +|---|---------|-------------|-------------| +| Complexity | High | Low | Medium | +| Speed to ship | Slow | Fast | Fastest | +| Best team size | Large | Small–Medium | Any (with discipline) | +| Release style | Scheduled | Continuous | Continuous | +| Merge conflicts | Common | Rare | Rarest | + +## Answers + +### Which strategy for a startup shipping fast? +**GitHub Flow.** It's simple, fast, and has almost no overhead. Create a branch, make changes, open a PR, merge, ship. A startup doesn't need the complexity of GitFlow. + +### Which strategy for a large team with scheduled releases? +**GitFlow.** When you have many developers, a QA phase, and a release calendar, GitFlow's structure pays off. The release branch gives a buffer to stabilize before going live, and hotfixes are clean and traceable. 
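The GitFlow structure described above can be sketched with plain git commands in a disposable repository. Branch names, the version number, and the temp-dir setup are illustrative, and `git init -b` assumes Git 2.28 or newer.

```shell
#!/bin/bash
# Sketch: the GitFlow branch flow (feature -> develop -> release -> main)
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q -b main
git config user.email demo@example.com   # throwaway identity for the demo
git config user.name demo
git commit -q --allow-empty -m "initial production state"

git checkout -q -b develop               # long-lived integration branch
git checkout -q -b feature/login         # feature branches off develop
git commit -q --allow-empty -m "add login form"
git checkout -q develop
git merge -q feature/login               # feature lands in develop

git checkout -q -b release/1.2.0         # stabilize before going live
git checkout -q main
git merge -q release/1.2.0               # release lands in main
git tag v1.2.0                           # production releases are tagged

git branch --list                        # develop, feature/login, main, release/1.2.0
```

In a real team, the merges into `develop` and `main` would go through pull requests, and a `hotfix/*` branch would be cut from `main` the same way `release/1.2.0` is cut from `develop` here.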
### Which does a popular open-source project use?
**React (facebook/react)** uses a **GitHub Flow** style — feature branches with PRs merged into `main`. You can verify at [github.com/facebook/react](https://github.com/facebook/react) by checking the branches and pull requests tabs.
\ No newline at end of file
diff --git a/2026/day-26/task-26/day-26-notes.md b/2026/day-26/task-26/day-26-notes.md
new file mode 100644
index 0000000000..c84d7713b5
--- /dev/null
+++ b/2026/day-26/task-26/day-26-notes.md

# Day 26 – GitHub CLI: Manage GitHub from Your Terminal
Every time you switch to the browser to create a PR, check an issue, or manage a repo — you lose context. The GitHub CLI (gh) lets you do all of that without leaving your terminal. For DevOps engineers, this is essential — especially when you start automating workflows, scripting PR reviews, and managing repos at scale.

## Task 1: Install and Authenticate
### Step 1. Install GitHub CLI
- Install the GitHub CLI (`gh`) on your system.
```winget install --id GitHub.cli```
### Step 2 — Restart the terminal
After installation: close PowerShell / VS Code and open a new terminal.
- Run ```gh --version```
- Expected output:
```
gh version 2.x.x
https://github.com/cli/cli/releases
```
### Step 3 — Login to GitHub
``` gh auth login```
- Choose:
```
GitHub.com
HTTPS
Login with browser
```
- It will open your browser for authentication.
### Step 4 — Verify login
``` gh auth status```
- ✓ Logged in to github.com as ap****

## Authentication Methods Supported by gh
### 1. Web Browser Authentication
- Opens a browser window
- You log in and authorize the CLI
- Most common and recommended method

### 2. Personal Access Token (PAT)
- You can authenticate using a GitHub Personal Access Token.
```gh auth login --with-token```
Useful for: automation, CI/CD pipelines

### 3.
SSH Authentication
+- Uses your existing SSH key configured with GitHub.
+Best for: secure Git operations, developers already using SSH
+
+### 4. GitHub Enterprise Authentication
+- Allows authentication with private GitHub Enterprise servers.
+## Task 2: Working with Repositories using GitHub CLI
+### 1. Create a New Repository from the Terminal
+Create a public repository with a README:
+
+```bash
+gh repo create test-repo --public --clone --add-readme
+```
+- Explanation:
+`test-repo` → repository name
+`--public` → makes the repository public
+`--clone` → automatically clones it locally
+`--add-readme` → adds a README file
+### 2. Clone a Repository using GitHub CLI
+- ```gh repo clone username/repo-name```
+### 3. View Repository Details
+```gh repo view test-repo```
+### 4. List All Your Repositories
+```gh repo list```
+- You can limit the results:
+```gh repo list --limit 20```
+### 5. Open Repository in Browser
+```gh repo view --web```
+### 6. Delete the Repository
+```gh repo delete test-repo```
+
+## Managing GitHub Issues using GitHub CLI
+
+### 1. Create an Issue from the Terminal
+
+Create an issue with a title, body, and label:
+```bash
+gh issue create --title "Bug: Login page error" \
+  --body "Users are unable to log in due to an API failure." \
+  --label bug
+```
+
+- Explanation:
+`--title` → Issue title
+`--body` → Description of the issue
+`--label` → Assign a label to categorize the issue
+
+### 2. List All Issues
+```gh issue list```
+
+### 3. View a Specific Issue
+```gh issue view 12```
+This shows:
+Issue title
+Description
+Labels
+Status
+Comments
+
+### 4. Close an Issue from the Terminal
+```gh issue close 12``` Here 12 is the issue number
+
+### How could you use gh issue in a script or automation?
+- The `gh issue` command can be used in automation scripts to manage issues automatically.
+- **Automatically create issues for CI/CD failures:**
+``` gh issue create --title "Build Failed" --body "The CI pipeline failed.
Please check logs."```
+
+## Managing Pull Requests using GitHub CLI
+
+### 1. Create a Branch and Make Changes
+- Create a new branch: ```git checkout -b feature-upt```
+- Make changes to a file and commit them:
+```
+git add .
+git commit -m "Update documentation"
+```
+- Push the branch to GitHub:
+```git push origin feature-upt```
+
+### 2. Create a Pull Request from the Terminal
+```
+gh pr create --title "Update documentation" \
+--body "Improved documentation and added new examples."
+```
+This will create a pull request from your branch to the main branch.
+### 3. List All Open Pull Requests
+```gh pr list```
+### 4. View Pull Request Details
+```gh pr view 15```
+### 5. Merge a Pull Request from the Terminal
+```gh pr merge 15```
+- Merge Methods Supported by ```gh pr merge```
+GitHub CLI supports three merge methods:
+
+| Method | Command | Description |
+|---|---|---|
+| Merge Commit | `gh pr merge --merge` | Keeps all commits and creates a merge commit |
+| Squash Merge | `gh pr merge --squash` | Combines all commits into one commit |
+| Rebase Merge | `gh pr merge --rebase` | Reapplies commits on top of the base branch |
+
+## How to Review Someone Else's Pull Request using gh
+- View the PR: ```gh pr view <pr-number>```
+- Check out the PR locally: ```gh pr checkout <pr-number>```
+- Review the changes: ```git diff```
+- Approve the PR: ```gh pr review <pr-number> --approve```
+- Request changes: ```gh pr review <pr-number> --request-changes --body "Needs changes"```
+- Add a comment: ```gh pr comment <pr-number> --body "Looks good!"```
+
+## GitHub Actions & Workflows (Preview)
+
+GitHub CLI allows you to interact with GitHub Actions workflows directly from the terminal.
+
+---
+
+### 1. List Workflow Runs
+
+You can list workflow runs for the current repository:
+
+```bash
+gh run list
+```
+### Example Output
+```
+STATUS  TITLE                 WORKFLOW     BRANCH  EVENT  ID
+✓       Update documentation  CI Pipeline  main    push   123456
+✓       Fix API tests         CI Pipeline  main    push   123455
+```
+### 2.
View the Status of a Specific Workflow Run
+```gh run view <run-id>``` e.g. ```gh run view 123456```
+You can also open it in the browser:
+```gh run view 123456 --web```
+- List Available Workflows
+```gh workflow list```
+Example:
+```
+NAME             STATE
+CI Pipeline      active
+Deploy Workflow  active
+```
+### How could gh run and gh workflow be useful in a CI/CD pipeline?
+- The GitHub CLI commands help developers and DevOps engineers manage CI/CD pipelines directly from the terminal. For example:
+- Monitor CI/CD pipeline status: ```gh run list``` – lets engineers quickly check if builds or deployments are successful.
+- Debug workflow failures: ```gh run view <run-id>``` – helps inspect logs and identify why a pipeline failed.
+- Trigger or manage workflows: using `gh workflow` commands, teams can manage automation workflows without leaving the terminal.
+
+## Useful GitHub CLI (`gh`) Tricks
+
+The GitHub CLI provides powerful commands that allow developers to interact with GitHub directly from the terminal.
+
+---
+
+### 1. `gh api` – Call the GitHub API
+
+You can make raw GitHub API requests directly from the terminal.
+
+Example: Get information about the authenticated user.
+
+```bash
+gh api user
+```
+Use cases:
+- Automate GitHub tasks
+- Retrieve repository data
+- Integrate GitHub with scripts
+
+### 2. `gh gist` – Manage GitHub Gists
+- A Gist is a feature on GitHub that lets you store and share small pieces of code, text, or notes online.
+- Create a gist: ```gh gist create file.txt```
+- Create a public gist: ```gh gist create file.txt --public```
+- List your gists: ```gh gist list```
+
+### 3. `gh release` – Manage Releases
+- Create and manage GitHub releases: ```gh release create v1.0.0```
+- Create a release with notes: ```gh release create v1.0.0 --notes "Initial release"```
+- List releases: ```gh release list```
+
+### 4. `gh alias` – Create Command Shortcuts
+-
Define a shortcut: ```gh alias set prs "pr list"```
+Now you can run ```gh prs``` to list pull requests.
+Use cases:
+- Save time with frequently used commands
+- Simplify long CLI commands
+
+### 5. `gh search repos` – Search Repositories
+- Search for repositories: ```gh search repos devops```
+- Limit results: ```gh search repos devops --limit 10```
+- Search by language: ```gh search repos "kubernetes language:go"```
+
+Note: ```gh pr create --fill``` auto-fills the PR title and body from your commits
\ No newline at end of file diff --git a/2026/day-29/task-29/day-29-docker-basics.md b/2026/day-29/task-29/day-29-docker-basics.md new file mode 100644 index 0000000000..93dc312428 --- /dev/null +++ b/2026/day-29/task-29/day-29-docker-basics.md @@ -0,0 +1,179 @@
+# Day 29 – Introduction to Docker
+## Task 1: What is Docker?
+- Docker is a platform for containerization — a way to package software so it runs consistently across different environments.
+- Instead of saying "it works on my machine," Docker lets you bundle your application along with everything it needs (code, runtime, libraries, config) into a single unit called a container. That container runs the same way everywhere — on your laptop, a colleague's machine, or a cloud server.
+
+## What is a Container & Why Do We Need Them?
+- A container is a lightweight, standalone, executable package that includes everything needed to run a piece of software:
+1. Application code
+2. Runtime (e.g., Node.js, Python)
+3. Libraries & dependencies
+4. Configuration files
+
+- Think of it like a shipping container in the real world — it boxes up your "cargo" (app) in a standard format that can be loaded onto any "ship" (server) without worrying about what's inside or how it was built.
+- A container freezes the environment your app needs and carries it everywhere.
+```
+[ Your App Code ]
+[ Runtime (Python 3.11) ]    ← All bundled together
+[ Libraries & Deps ]           in one container
+[ Config & Env Variables ]
+```
+- Now it runs identically on every machine that has Docker installed — no setup, no conflicts.
+
+## Containers vs Virtual Machines — what's the real difference?
+### How a Virtual Machine Works
+- A VM simulates an entire physical computer — including its own OS, kernel, drivers, and hardware. Each VM carries a full operating system (GBs of overhead) and runs on a hypervisor (e.g., VMware, VirtualBox, Hyper-V) that emulates hardware.
+```
+┌──────────────────────────────────────┐
+│          Your Host Machine           │
+│                                      │
+│   ┌─────────┐      ┌─────────┐       │
+│   │  VM 1   │      │  VM 2   │       │
+│   │─────────│      │─────────│       │
+│   │  App A  │      │  App B  │       │
+│   │  Libs/  │      │  Libs/  │       │
+│   │  Deps   │      │  Deps   │       │
+│   │  Guest  │      │  Guest  │       │
+│   │   OS    │      │   OS    │       │
+│   └────┬────┘      └────┬────┘       │
+│        └──────┬─────────┘            │
+│          Hypervisor                  │
+│           Host OS                    │
+│           Hardware                   │
+└──────────────────────────────────────┘
+```
+### How a Container Works
+- Containers share the host OS kernel and only bundle the app + its dependencies — nothing more. No duplicate OS. No hypervisor. Just isolated processes running on the same kernel.
+``` +┌──────────────────────────────────────┐ +│ Your Host Machine │ +│ │ +│ ┌─────────┐ ┌─────────┐ │ +│ │ Cont 1 │ │ Cont 2 │ │ +│ │─────────│ │─────────│ │ +│ │ App A │ │ App B │ │ +│ │ Libs/ │ │ Libs/ │ │ +│ │ Deps │ │ Deps │ │ +│ └────┬────┘ └────┬────┘ │ +│ └──────┬─────┘ │ +│ Container Runtime │ +│ Host OS Kernel (SHARED) │ +│ Hardware │ +└──────────────────────────────────────┘ +``` +### Side-by-Side Comparison + +| Feature | Virtual Machine | Container | +|---|---|---| +| **Size** | GBs (full OS inside) | MBs (just app + libs) | +| **Boot Time** | Minutes | Milliseconds – Seconds | +| **OS** | Full guest OS per VM | Shares host OS kernel | +| **Isolation** | Strong (hardware-level) | Good (process-level) | +| **Performance** | Slower (emulation overhead) | Near-native speed | +| **Portability** | Heavier, harder to move | Lightweight, runs anywhere | +| **Density** | ~10s per host | ~100s per host | +| **Security** | Very strong boundary | Strong, but shared kernel | +| **Use Case** | Full OS isolation needed | App packaging & scaling | + +## What is the Docker architecture? (daemon, client, images, containers, registry) Draw or describe the Docker architecture in your own words. +- The Docker client is what you interact with directly — when you run docker build or docker run in your terminal, that's the client sending instructions. +- The Docker daemon (dockerd) is the background process that does the actual work — building images, running containers, managing networks and volumes. The client talks to it over a REST API. +- Images are read-only blueprints. They're built in layers (each RUN, COPY, or FROM in a Dockerfile adds a layer), and layers are cached and reused across images to save space. +- Containers are running instances of images — isolated processes on your machine. You can run many containers from the same image simultaneously. +- The registry (Docker Hub by default) is the remote store where images live. 
When you run `docker pull`, the daemon fetches an image from the registry; when you run `docker push`, you upload one.
+![Docker-arch](image.png)
+
+### Here's how the pieces connect in plain terms:
+- You type `docker run nginx` in your terminal (the client)
+- The client sends that instruction to the daemon over a REST API
+- The daemon checks if the nginx image exists locally — if not, it pulls it from the registry
+- The image is a stack of read-only layers (base OS → libs → app)
+- The daemon spins up a container — a live, writable process built on top of that image
+- Multiple containers can run from the same image simultaneously, each fully isolated
+The daemon is the brain of the whole operation — the client is just how you talk to it.
+
+## Task 2: Install Docker
+- Install Docker on your machine (or use a cloud instance), verify the installation, and run the hello-world container.
+- Verify the installation: ```docker --version``` OR ```docker info```
+
+- ```docker run hello-world``` – output below:
+![Output-hello-world](image-1.png)
+
+- What just happened — explained
+1. Step 1 — Client talked to daemon: ```The Docker client contacted the Docker daemon.```
+Your ```docker run hello-world``` command in Git Bash (client) sent the instruction to Docker Desktop running in the background (daemon).
+
+2. Step 2 — Image pulled from Docker Hub: ```The Docker daemon pulled the "hello-world" image from Docker Hub. (amd64)```
+The daemon checked your machine — no ```hello-world``` image found locally. So it went to Docker Hub and downloaded it. ```amd64``` means it pulled the version matching your laptop's processor architecture (Intel/AMD 64-bit).
+
+3. Step 3 — Container created and run: ```The Docker daemon created a new container from that image which runs the executable that produces the output you are currently reading.```
+The daemon took the image, spun up a container from it, and that container ran a tiny program whose only job was to print this message.
+
+4.
Step 4 — Output streamed back to you: ```The Docker daemon streamed that output to the Docker client, which sent it to your terminal.```
+The container's output travelled from daemon → client → your Git Bash screen.
+
+## Task 3: Run Real Containers
+1. Run an Nginx container and access it in your browser
+```docker run -d -p 80:80 nginx```
+
+2. Run an Ubuntu container in interactive mode — explore it like a mini Linux machine
+```docker run -it ubuntu```
+![ubuntu-container](image-2.png)
+
+3. List all running containers
+```docker ps```
+![dockerps](image-3.png)
+Note: Ubuntu will not show here because we exited it and it's stopped.
+
+4. List all containers (including stopped ones)
+```docker ps -a```
+It will show all the containers, including the stopped ones.
+![allcontaines](image-4.png)
+
+5. Stop and remove a container
+```docker stop 2ee82b72a142```
+Here 2ee82b72a142 is a container ID.
+```docker rm 2ee82b72a142```
+
+## Task 4: Explore
+1. Run a container in detached mode — what's different?
+- There are two ways to run a container — attached and detached.
+- With attached mode (without `-d`), your terminal gets **blocked**. Nginx output streams directly to your screen. You can't type any other commands. To stop it you have to press `Ctrl+C`.
+- With detached mode, Docker starts the container in the **background** and immediately gives you back your terminal. It just prints the container ID and you're free to run more commands.
+
+2. Give a container a custom name
+```docker run -d -p 8080:80 --name my-nginx nginx```
+- Without `--name`, Docker assigns a random name like quirky_hopper or sad_einstein. That's fine for experiments but annoying when you want to stop, inspect, or remove a specific container.
+- Naming rules:
+Lowercase letters, numbers, hyphens, underscores
+Must be unique — two running containers can't share the same name
+```
+# Good names
+--name my-nginx
+--name web-server
+--name app_v2
+```
+3.
Map a port from the container to your host
+- This is the `-p` flag — one of the most important Docker concepts. A container runs in its own isolated network. Even if nginx is listening on port 80 inside the container, your browser on your laptop **cannot reach it** — because the container's network is invisible to the outside by default.
+Syntax: `-p <host-port>:<container-port>`
+```
+# Your laptop's 8080 → container's 80
+docker run -d -p 8080:80 nginx
+# Access at http://localhost:8080
+```
+
+4. Check logs of a running container
+```docker logs b1f7ce10459e``` OR ```docker logs my-nginx``` Here `my-nginx` is the container's custom name; we can use either the container ID or the name.
+
+5. Run a command inside a running container
+```docker exec -it b1f7ce10459e bash```
+- Why use `exec` and not `run`?
+This **enters a container that is already running** in the background — like sneaking into a running machine through a side door.
+- The difference in one line:
+```
+docker run  = start a new container + enter it
+docker exec = enter a container that is already running
+```
+
+
+ diff --git a/2026/day-29/task-29/image-1.png b/2026/day-29/task-29/image-1.png new file mode 100644 index 0000000000..9809413487 Binary files /dev/null and b/2026/day-29/task-29/image-1.png differ diff --git a/2026/day-29/task-29/image-2.png b/2026/day-29/task-29/image-2.png new file mode 100644 index 0000000000..9aa0a27ac4 Binary files /dev/null and b/2026/day-29/task-29/image-2.png differ diff --git a/2026/day-29/task-29/image-3.png b/2026/day-29/task-29/image-3.png new file mode 100644 index 0000000000..195ce0b077 Binary files /dev/null and b/2026/day-29/task-29/image-3.png differ diff --git a/2026/day-29/task-29/image-4.png b/2026/day-29/task-29/image-4.png new file mode 100644 index 0000000000..33e1b0abae Binary files /dev/null and b/2026/day-29/task-29/image-4.png differ diff --git a/2026/day-29/task-29/image.png b/2026/day-29/task-29/image.png new file mode 100644 index 0000000000..3cc98d7dda Binary files /dev/null and
b/2026/day-29/task-29/image.png differ diff --git a/2026/day-30/task-30/day-30-images.md b/2026/day-30/task-30/day-30-images.md new file mode 100644 index 0000000000..31a12474c8 --- /dev/null +++ b/2026/day-30/task-30/day-30-images.md @@ -0,0 +1,96 @@
+# Day 30 – Docker Images & Container Lifecycle
+## Task 1: Docker Images
+1. Pull the nginx, ubuntu, and alpine images from Docker Hub
+`docker pull nginx`, `docker pull ubuntu`, `docker pull alpine`
+2. List all images on your machine — note the sizes
+`docker images`
+3. Ubuntu vs Alpine — Why the Size Difference?
+- Ubuntu is a full Linux distro that includes package managers, system utilities, and libraries (~119 MB on disk), whereas Alpine is a minimal Linux distro (~13 MB). It's lightweight and uses musl libc instead of glibc.
+![size](image.png)
+4. Inspect an image - `docker image inspect ubuntu` - you will see detailed JSON output:
+Important fields:
+Id → Unique image ID
+RepoTags → Image name & tag
+Created → When the image was built
+Size → Image size
+Architecture → (amd64, arm, etc.)
+OS → Linux
+Layers → Image layers
+Env variables → Default environment settings
+Cmd → Default command when the container runs
+👉 This helps you understand:
+
+How the image is built
+What runs inside it
+
+5. Remove an Image - `docker rmi ubuntu`
+- If a container is using it, force remove: `docker rmi -f ubuntu`
+
+## Task 2: Image Layers
+1. Run docker image history nginx — what do you see?
+![dockerimagehistory](image-1.png)
+
+2. Each line is a layer. Note how some layers show sizes and some show 0B
+- Each row is a layer. Layers with size > 0 add actual data, e.g. installing packages or copying files. These increase the image size.
+- Layers with size 0B are metadata layers, e.g. CMD, ENV, EXPOSE. They don't add files, just instructions.
+3. What Are Docker Layers?
+- Docker images are built using multiple read-only layers stacked on top of each other. Each layer represents:
+1. A change
+2.
A command in Dockerfile
+- Example (Simple Dockerfile)
+```
+FROM ubuntu
+RUN apt-get update
+RUN apt-get install -y nginx
+COPY . /app
+CMD ["nginx", "-g", "daemon off;"]
+```
+Creates layers like: base OS (ubuntu) → update packages → install nginx → copy files → start command
+
+4. Why Does Docker Use Layers?
+- Faster builds - if a layer doesn't change → Docker reuses it. E.g. if only code changes → OS + dependencies are reused
+- Storage efficiency - layers are shared between images. E.g. ubuntu used by multiple images → stored once
+- Faster downloads - only new/changed layers are pulled
+- Version control - easy to track changes between image versions
+```
+Docker images are built using a layered architecture where each layer represents a change made by a Dockerfile instruction. Layers are read-only and stacked on top of each other to form the final image.
+
+Docker uses layers to enable caching, reduce storage usage, speed up builds, and allow reuse of common components across multiple images.
+```
+## Task 3: Container Lifecycle
+- Practice the full lifecycle on one container:
+1. Create a Container (Without Starting) - `docker create --name mynginx nginx` - `create` only creates it; it does not run
+2. Start the Container - `docker start mynginx`
+3. Pause the Container - `docker pause mynginx`
+4. Unpause the Container - `docker unpause mynginx`
+5. Stop the Container - `docker stop mynginx`
+6. Restart the Container - `docker restart mynginx`
+7. Kill the Container - `docker kill mynginx`
+8. Remove the Container - `docker rm mynginx`
+
+## Task 4: Working with Running Containers
+1. Run Nginx in Detached Mode - `docker run -d --name mynginx -p 8080:80 nginx`
+2. View logs - `docker logs mynginx` - it will show access logs, errors, and startup logs
+3. View Real-Time Logs (Follow Mode) - `docker logs -f mynginx` - live requests will appear as you refresh the browser
+4. Exec into the Container (Interactive Mode) - `docker exec -it mynginx bash`
+5.
Run a Single Command (Without Entering the Container) - `docker exec mynginx ls /usr/share/nginx/html`
+6. Inspect the Container - `docker inspect mynginx`
+```
+Running a container in detached mode allows it to run in the background. Logs can be viewed using docker logs, and real-time logs with the -f flag. The docker exec command is used to interact with a running container or execute commands inside it. The docker inspect command provides detailed metadata including IP address, port mappings, and mounted volumes.
+```
+## Task 5: Cleanup
+1. Stop All Running Containers (One Command) - `docker stop $(docker ps -q)`
+- `docker ps -q` → gets IDs of running containers
+- `docker stop` → stops all of them
+2. Remove All Stopped Containers - `docker container prune`
+- Alternative (force, no prompt): `docker container prune -f`
+3. Remove Unused Images - `docker image prune`
+- Remove ALL unused images (more aggressive): `docker image prune -a`
+4. Check Docker Disk Usage - `docker system df`
+
+![dockerdf](image-2.png)
+
+
+
+
+ diff --git a/2026/day-30/task-30/image-1.png b/2026/day-30/task-30/image-1.png new file mode 100644 index 0000000000..a42c78ec01 Binary files /dev/null and b/2026/day-30/task-30/image-1.png differ diff --git a/2026/day-30/task-30/image-2.png b/2026/day-30/task-30/image-2.png new file mode 100644 index 0000000000..80c46da83c Binary files /dev/null and b/2026/day-30/task-30/image-2.png differ diff --git a/2026/day-30/task-30/image.png b/2026/day-30/task-30/image.png new file mode 100644 index 0000000000..eb1be8c82b Binary files /dev/null and b/2026/day-30/task-30/image.png differ diff --git a/2026/day-31/task-31/day-31-dockerfile.md b/2026/day-31/task-31/day-31-dockerfile.md new file mode 100644 index 0000000000..038ae6dbda --- /dev/null +++ b/2026/day-31/task-31/day-31-dockerfile.md @@ -0,0 +1,88 @@
+# Day 31 – Dockerfile: Build Your Own Images
+## Task 1: Your First Dockerfile
+1. Create a folder called my-first-image - `mkdir my-first-image`
+2.
Inside it, create a Dockerfile that:
+Uses ubuntu as the base image
+Installs curl
+Sets a default command to print "Hello from my custom image!"
+
+### Dockerfile
+```
+FROM ubuntu
+
+WORKDIR /app
+
+# install curl
+RUN apt-get update && apt-get install -y curl
+
+# default command
+CMD ["echo", "Hello from my custom image!"]
+```
+Step 1: `docker build -t my-ubuntu:v1 .` - builds the image; `.` is the current folder where the Dockerfile is located.
+Step 2: `docker images` - will show the image `my-ubuntu` with the tag `v1`.
+Step 3: `docker run my-ubuntu:v1` - creates a container from the image.
+Output: `Hello from my custom image!`
+
+## Task 2: Dockerfile Instructions
+```
+FROM ubuntu
+
+RUN apt-get update && apt-get install -y curl
+
+WORKDIR /app
+
+COPY app.sh /app/app.sh
+
+RUN chmod +x /app/app.sh
+
+EXPOSE 8080
+
+CMD ["./app.sh"]
+```
+## Write in your notes: When would you use CMD vs ENTRYPOINT?
+- Use ENTRYPOINT when your container has a single, fixed purpose — like a tool (ping, curl, ffmpeg). The command should always run and shouldn't change.
+- Use CMD when you want a default behavior that users can easily override at runtime.
+- Use both together when you want a fixed executable (ENTRYPOINT) but flexible default arguments (CMD) — this is the most common production pattern.
+
+## Task 4: Build a Simple Web App Image
+### Dockerfile
+```
+FROM nginx:alpine
+
+WORKDIR /app
+
+COPY index.html /usr/share/nginx/html/
+```
+1. Step 1: `docker build -t webapp-demo:v1 .`
+2. Step 2: `docker run -d -p 8080:80 webapp-demo:v1`
+3. Step 3: Open the page in your browser on port 8080. Make sure the port is open in the security group.
+
+## Task 5: .dockerignore
+1. Step 1: `vim .dockerignore`
+2. Step 2: Make the files and folders - `mkdir node_modules`, `touch test.md`, `touch .env` & `mkdir .git`
+3. Step 3: `vim Dockerfile`
+```
+FROM ubuntu
+
+WORKDIR /app
+
+COPY . .
+
+CMD ["ls", "-a"]
+```
+4. `docker build -t ignore-demo:v1 .`
+5. `docker run ignore-demo:v1`
+6.
Output - the files and folders listed in `.dockerignore` are excluded from the image:
+```
+.
+..
+.dockerignore
+Dockerfile
+app.sh
+devops-nginx-demo
+```
+## Task 6: Build Optimization
+### Note: Why does layer order matter for build speed?
+- Docker builds images layer by layer and caches each one. When a layer changes, Docker invalidates the cache for that layer and every layer after it — forcing them all to rebuild. By putting frequently changing lines (like `COPY . .`) at the bottom, the expensive layers above (like `RUN npm install`) stay cached and are skipped. This can reduce build time from minutes to just seconds.
\ No newline at end of file diff --git a/Images/Git Advanced.png b/Images/Git Advanced.png new file mode 100644 index 0000000000..ff36597c76 Binary files /dev/null and b/Images/Git Advanced.png differ diff --git a/feature.txt b/feature.txt new file mode 100644 index 0000000000..c90fd529f7 --- /dev/null +++ b/feature.txt @@ -0,0 +1 @@ +New feature diff --git a/infor.txt b/infor.txt new file mode 100644 index 0000000000..084ccfda91 --- /dev/null +++ b/infor.txt @@ -0,0 +1,5 @@
+My name is apurva
+I am living in Delhi
+Currently working and learning DevOps.
+this is a bug
+this is fine