Binary file modified 01_materials/slides/slides_01.pdf
Binary file not shown.
Binary file modified 01_materials/slides/slides_03.pdf
Binary file not shown.
2 changes: 1 addition & 1 deletion 02_activities/assignments/Cohort_8/Assignment1.md
@@ -3,7 +3,7 @@
🚨 **Please review our [Assignment Submission Guide](https://github.com/UofT-DSI/onboarding/blob/main/onboarding_documents/submissions.md)** 🚨 for detailed instructions on how to format, branch, and submit your work. Following these guidelines is crucial for your submissions to be evaluated correctly.

#### Submission Parameters:
* Submission Due Date: `August 10, 2025`
* Submission Due Date: `November 17, 2025`
* Weight: 30% of total grade
* The branch name for your repo should be: `assignment-one`
* What to submit for this assignment:
2 changes: 1 addition & 1 deletion 02_activities/assignments/Cohort_8/Assignment2.md
@@ -3,7 +3,7 @@
🚨 **Please review our [Assignment Submission Guide](https://github.com/UofT-DSI/onboarding/blob/main/onboarding_documents/submissions.md)** 🚨 for detailed instructions on how to format, branch, and submit your work. Following these guidelines is crucial for your submissions to be evaluated correctly.

#### Submission Parameters:
* Submission Due Date: `August 17, 2025`
* Submission Due Date: `November 24, 2025`
* Weight: 70% of total grade
* The branch name for your repo should be: `assignment-two`
* What to submit for this assignment:
10 changes: 8 additions & 2 deletions 02_activities/assignments/DC_Cohort/Assignment2.md
@@ -54,7 +54,9 @@ The store wants to keep customer addresses. Propose two architectures for the CU
**HINT:** search type 1 vs type 2 slowly changing dimensions.

```
Your answer...
Type 1 Slowly Changing Dimension is used when the history of the customer’s address is not important. In this approach, the customer’s address record will be overwritten whenever a new address is entered. The store can use this method if they do not need to keep the old address for purposes such as audit, analytics, fraud management, or marketing personalization. This approach is suitable when the store prefers simple processing, requires less storage, and only needs a basic system.

If the store requires a more advanced system and needs to keep historical address information for audit, analytics, fraud management, or marketing personalization, then the store should use Type 2. In Type 2, the old address data is retained, and a new record (row) is created every time the customer changes address. This means one customer can have multiple address records stored over time.
```
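
A minimal DDL sketch of the two candidate architectures (the column names here are illustrative assumptions, not given in the assignment):

```
-- Type 1: one row per customer; a new address simply overwrites the old one
CREATE TABLE customer_address_type1 (
    customer_id INT PRIMARY KEY,
    address     VARCHAR(255)
);

-- Type 2: one row per address version; history is preserved
CREATE TABLE customer_address_type2 (
    customer_address_id INT PRIMARY KEY, -- surrogate key, since customer_id now repeats
    customer_id         INT,
    address             VARCHAR(255),
    valid_from          DATE,
    valid_to            DATE             -- NULL marks the currently active address
);
```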

***
@@ -183,5 +185,9 @@ Consider, for example, concepts of labour, bias, LLM proliferation, moderating c


```
Your thoughts...
Machine learning and artificial intelligence (AI) are relatively new concepts in human history. Because of this, they exist in a grey area of society where there are still no clear rules or regulations to guide or monitor their development. The capabilities of machine learning and AI are highly attractive, and every country, corporation, and researcher wants to gain a strong position in this fast-growing field. In the race to become a major player, many may overlook or ignore the ethical issues involved in developing these technologies in order to stay competitive in the market.

The author of the article clearly highlights these ethical concerns, such as hidden labour, bias in data and classification, and accountability and power. Many AI systems depend on thousands of low-paid workers who manually label data to train algorithms. Their work is often invisible, raising concerns about fairness and exploitation in the global tech industry. In addition, the data used to train AI models often carries social and cultural biases — related to gender, race, or culture — which can lead to unfair or discriminatory outcomes.

Finally, there is the issue of accountability, as it is often unclear who should be responsible when AI systems make mistakes or cause harm — the developers, companies, or the technology itself. This lack of transparency and regulation creates serious ethical challenges. Therefore, while AI offers great potential, it is crucial to ensure that its development and application are guided by strong ethical principles and social responsibility.
```
166 changes: 159 additions & 7 deletions 02_activities/assignments/DC_Cohort/assignment2.sql
@@ -20,6 +20,12 @@ The `||` values concatenate the columns into strings.
Edit the appropriate columns -- you're making two edits -- and the NULL rows will be fixed.
All the other rows will remain the same.) */

SELECT
    product_name || ', ' ||
    COALESCE(product_size, '') || ' (' ||
    COALESCE(product_qty_type, 'unit') || ')' AS product_details
FROM product;



-- Windowed Functions
@@ -32,17 +38,68 @@ each new market date for each customer, or select only the unique market dates p
(without purchase details) and number those visits.
HINT: One of these approaches uses ROW_NUMBER() and one uses DENSE_RANK(). */

-- Approach 1: select only the unique market dates per customer, then number
-- those visits with ROW_NUMBER()
SELECT
    customer_id,
    market_date,
    ROW_NUMBER() OVER (
        PARTITION BY customer_id
        ORDER BY market_date
    ) AS visit_number
FROM (
    SELECT DISTINCT customer_id, market_date
    FROM customer_purchases
);

-- Approach 2: number every purchase row with DENSE_RANK(), so purchases on
-- the same market date share a visit number
SELECT
    customer_id,
    market_date,
    DENSE_RANK() OVER (
        PARTITION BY customer_id
        ORDER BY market_date
    ) AS visit_number
FROM customer_purchases;

/* 2. Reverse the numbering of the query from part a so each customer’s most recent visit is labeled 1,
then write another query that uses this one as a subquery (or temp table) and filters the results to
only the customer’s most recent visit. */


SELECT
    customer_id,
    market_date,
    DENSE_RANK() OVER (
        PARTITION BY customer_id
        ORDER BY market_date DESC
    ) AS visit_number
FROM (
    SELECT DISTINCT customer_id, market_date
    FROM customer_purchases
);

SELECT *
FROM (
    SELECT
        customer_id,
        market_date,
        DENSE_RANK() OVER (
            PARTITION BY customer_id
            ORDER BY market_date DESC
        ) AS visit_number
    FROM (
        SELECT DISTINCT customer_id, market_date
        FROM customer_purchases
    )
) AS visit_ranking
WHERE visit_number = 1;

/* 3. Using a COUNT() window function, include a value along with each row of the
customer_purchases table that indicates how many different times that customer has purchased that product_id. */

SELECT
    customer_id,
    product_id,
    market_date,
    COUNT(*) OVER (
        PARTITION BY customer_id, product_id
    ) AS purchase_count
FROM customer_purchases;


-- String manipulations
@@ -57,11 +114,30 @@ Remove any trailing or leading whitespaces. Don't just use a case statement for

Hint: you might need to use INSTR(product_name,'-') to find the hyphens. INSTR will help split the column. */


-- Approach 1 (no CASE statement): names without a hyphen return the full name,
-- since INSTR() yields 0 and SUBSTR(..., 1) starts at the first character
SELECT
    product_name,
    TRIM(
        SUBSTR(
            product_name,
            INSTR(product_name, '-') + 1
        )
    ) AS product_description
FROM product;

-- Alternative with CASE: names without a hyphen return NULL instead
SELECT
    product_name,
    CASE
        WHEN INSTR(product_name, '-') > 0 THEN
            TRIM(SUBSTR(product_name, INSTR(product_name, '-') + 1))
        ELSE NULL
    END AS product_description
FROM product;

/* 2. Filter the query to show any product_size value that contains a number with REGEXP. */


SELECT *
FROM product
WHERE product_size REGEXP '[0-9]';

-- UNION
/* 1. Using a UNION, write a query that displays the market dates with the highest and lowest total sales.
@@ -73,8 +149,37 @@ HINT: There are possibly a few ways to do this query, but if you're struggling
3) Query the second temp table twice, once for the best day, once for the worst day,
with a UNION binding them. */

-- 1) Create a CTE/temp table to find sales values grouped by market date

WITH sales_per_day AS (
    SELECT
        market_date,
        SUM(quantity * cost_to_customer_per_qty) AS total_sales
    FROM customer_purchases
    GROUP BY market_date
),

-- 2) Create another CTE/temp table with a rank window function on the previous query to find the "best day" and "worst day"

ranked_sales AS (
    SELECT
        market_date,
        total_sales,
        RANK() OVER (ORDER BY total_sales DESC) AS best_rank,
        RANK() OVER (ORDER BY total_sales ASC) AS worst_rank
    FROM sales_per_day
)

-- 3) Query the second temp table twice, once for the best day, once for the worst day, with a UNION binding them

SELECT market_date, total_sales, 'Highest' AS type
FROM ranked_sales
WHERE best_rank = 1

UNION

SELECT market_date, total_sales, 'Lowest' AS type
FROM ranked_sales
WHERE worst_rank = 1;

/* SECTION 3 */

@@ -89,6 +194,18 @@ Think a bit about the row counts: how many distinct vendors, product names are t
How many customers are there (y).
Before your final group by you should have the product of those two queries (x*y). */
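
-- A quick sanity check on that row count, as a sketch (assuming the standard
-- farmers market table names used elsewhere in this file): x distinct
-- vendor/product pairs multiplied by y customers.
SELECT
    (SELECT COUNT(*) FROM (SELECT DISTINCT vendor_id, product_id FROM vendor_inventory)) *
    (SELECT COUNT(*) FROM customer) AS expected_pre_group_rows;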

SELECT
    vp.vendor_name,
    vp.product_name,
    SUM(5 * vp.original_price) AS total_potential_revenue
FROM (
    -- de-duplicate to one row per vendor/product/price before the cross join,
    -- so the row count before the GROUP BY is exactly x * y
    SELECT DISTINCT
        v.vendor_name,
        p.product_name,
        vi.original_price
    FROM vendor_inventory vi
    JOIN vendor v ON vi.vendor_id = v.vendor_id
    JOIN product p ON vi.product_id = p.product_id
) AS vp
CROSS JOIN customer c
GROUP BY vp.vendor_name, vp.product_name
ORDER BY vp.vendor_name, vp.product_name;



-- INSERT
@@ -97,19 +214,45 @@ This table will contain only products where the `product_qty_type = 'unit'`.
It should use all of the columns from the product table, as well as a new column for the `CURRENT_TIMESTAMP`.
Name the timestamp column `snapshot_timestamp`. */


CREATE TABLE product_units AS
SELECT
    *,
    CURRENT_TIMESTAMP AS snapshot_timestamp
FROM product
WHERE product_qty_type = 'unit';

/* 2. Using `INSERT`, add a new row to the product_units table (with an updated timestamp).
This can be any product you desire (e.g. add another record for Apple Pie). */


INSERT INTO product_units (
    product_id,
    product_name,
    product_size,
    product_category_id,
    product_qty_type,
    snapshot_timestamp
)
VALUES (
    24,
    'Apple Pie',
    'small',
    -- product_category_id is an integer key, so look it up from the existing
    -- Apple Pie row rather than hard-coding the category name
    (SELECT product_category_id FROM product WHERE product_name = 'Apple Pie'),
    'unit',
    CURRENT_TIMESTAMP
);

-- DELETE
/* 1. Delete the older record for whatever product you added.

HINT: If you don't specify a WHERE clause, you are going to have a bad time.*/


DELETE FROM product_units
WHERE product_name = 'Apple Pie'
  AND snapshot_timestamp < (
      SELECT MAX(snapshot_timestamp)
      FROM product_units
      WHERE product_name = 'Apple Pie'
  );

-- UPDATE
/* 1. We want to add the current_quantity to the product_units table.
@@ -128,6 +271,15 @@ Finally, make sure you have a WHERE statement to update the right row,
you'll need to use product_units.product_id to refer to the correct row within the product_units table.
When you have all of these components, you can run the update statement. */

-- First, add a new column, current_quantity, to the table using the following syntax.

ALTER TABLE product_units
ADD current_quantity INT;

-- Then populate it from vendor_inventory. "Current" is taken to mean the most
-- recent (latest market_date) quantity; products that never appear in
-- vendor_inventory default to 0.
UPDATE product_units AS pu
SET current_quantity = COALESCE(
    (
        SELECT vi.quantity
        FROM vendor_inventory vi
        WHERE vi.product_id = pu.product_id -- correlate on the correct product_units row
        ORDER BY vi.market_date DESC
        LIMIT 1
    ),
    0
)
WHERE pu.product_qty_type = 'unit';
39 changes: 18 additions & 21 deletions 03_instructional_team/markdown_slides/slides_01.md
@@ -39,17 +39,26 @@ $ echo "Data Sciences Institute"

---

# About Us (Anjali)

- Holds a Bachelor’s in Electrical Engineering from the University of Mumbai, India
- IT professional with 13 years of experience in software development, analysis, and design using a variety of programming languages and platforms for diverse clients
- DSI Cohort 3 and LS for subsequent cohorts
- Experienced in Agile environments, mentoring, and fostering collaborative, solution-driven work cultures.
- Works as an Emergency Early Childhood Educator and Education Assistant, supporting students in Elementary (K–5) public school, Peel District School Board
- Passionate about art, crocheting, and gardening; and actively volunteers with Ecosource’s Community Cultivator Program, growing food for local food banks
# About Us (Edward)

- Graduated from the Master of Science in Applied Computing program at UofT
- Currently working as a Research Analyst at the University Health Network
- Will start PhD in Medical Biophysics at UofT in September
- Have worked on creating course material for data science and machine learning at UofT
- Hobbies: Gaming, Crochet, Archery

![bg right:35% w:350](./images/01_anjali.png)
![bg right:35% w:350](./images/01_edward.png)

---

# About Us (Moniz)

- Master’s in Biomedical Engineering 🎓
- Project and Data Coordinator in a healthcare setting 🏥
- DSI cohort 3 📈
- Hobbies: camping 🏕️, travelling ✈️, and seeing the world

![bg right:35% w:350](./images/01_moniz.png)

---

@@ -66,18 +75,6 @@

---

# About Us (Sergii)

- Solution Architect with a PhD in Engineering
- Have national and international awards
- DSI cohort 5
- Over 25 years in the IT industry across diverse domains
- Proficient in multiple programming languages

![bg right:35% w:350](./images/01_sergii.png)

---


# Welcome / Course Content

6 changes: 3 additions & 3 deletions 03_instructional_team/markdown_slides/slides_03.md
@@ -48,7 +48,7 @@ For example, a query wanting to know the number of days in each month:
,months

FROM calendar
GROUP by months
GROUP BY months
```

- `GROUP BY` comes after a `WHERE` clause
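
A minimal end-to-end sketch of that ordering, reusing the calendar example (the `day_type` filter column is an illustrative assumption):

```
SELECT
 COUNT(*) AS days_count
 ,months
FROM calendar
WHERE day_type = 'weekday' -- WHERE filters rows first
GROUP BY months            -- then GROUP BY groups what remains
```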
@@ -71,7 +71,7 @@ For example, a query wanting to know the number of days in each month:
,years

FROM calendar
GROUP by years
GROUP BY years
```

---
@@ -117,7 +117,7 @@ For example, a query wanting to know the number of days in each month:

- `SUM` performs the sum total of any numeric column
- Be wary, SQLite may be more permissive for columns with numbers; it's best practice to coerce (`CAST`) these values into numbers before summing to be certain of their validity
- e.g. `CAST(SUM(column1) AS INTEGER) AS column1`
- e.g. `SUM(CAST(column1 AS INTEGER)) AS column1`
- SUM can accommodate multiple columns using the plus `+` operator
- e.g. `SUM(column1 + column2)`
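
A small sketch of why the order matters under SQLite's permissive typing (`sales` is a hypothetical table whose `column1` may arrive as text):

```
-- Casting first: each value becomes an INTEGER before it is added,
-- so the result is a true integer sum of integers
SELECT SUM(CAST(column1 AS INTEGER)) AS column1 FROM sales;

-- Casting last: raw values are summed under SQLite's loose coercion,
-- and only the finished total is cast
SELECT CAST(SUM(column1) AS INTEGER) AS column1 FROM sales;
```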
