diff --git a/02_activities/assignments/Cohort_8/Assignment2.md b/02_activities/assignments/Cohort_8/Assignment2.md
index 47118b2ba..5db7647e3 100644
--- a/02_activities/assignments/Cohort_8/Assignment2.md
+++ b/02_activities/assignments/Cohort_8/Assignment2.md
@@ -53,9 +53,60 @@ The store wants to keep customer addresses. Propose two architectures for the CU
**HINT:** search type 1 vs type 2 slowly changing dimensions.
-```
-Your answer...
-```
+
+**Option A: Overwrite model (Type 1 Slowly Changing Dimension)**
+
+Design
+- Single CUSTOMER_ADDRESS table with one row per customer address (current only).
+- Columns:
+ - customer_id
+ - address_line1
+ - address_line2
+ - city
+ - province_state
+ - postal_code
+ - country
+ - last_updated_timestamp
+
+
+When a customer changes their address, the existing row is overwritten with an UPDATE statement. Only the latest address is retained; historical addresses no longer exist.
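+
+A minimal sketch of the overwrite, assuming a CUSTOMER_ADDRESS table with the columns above (the customer_id and address values here are made up):
+
+```sql
+-- Hypothetical example: customer 42 moves, so their single address row is overwritten in place.
+UPDATE customer_address
+SET address_line1          = '123 New Street',
+    address_line2          = NULL,
+    city                   = 'Toronto',
+    province_state         = 'ON',
+    postal_code            = 'M5V 2T6',
+    country                = 'Canada',
+    last_updated_timestamp = CURRENT_TIMESTAMP
+WHERE customer_id = 42;
+```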
+
+This is fine for a simple store, but there is no way to audit or report on address history. If there are multiple stores, or you care about reporting or auditing by location (e.g., postal code), that information is lost once the address is overwritten.
+
+This corresponds to a **Type 1** slowly changing dimension.
+
+---
+
+**Option B: History-preserving model (Type 2 Slowly Changing Dimension)**
+
+Design
+- CUSTOMER_ADDRESS_HISTORY table that keeps every address change as a new row.
+- Columns:
+ - address_sk
+ - customer_id
+ - address_line1
+ - address_line2
+ - city
+ - province_state
+ - postal_code
+ - country
+ - effective_from (date)
+ - effective_to (date)
+ - is_current (BOOLEAN)
+
+
+When the address changes, insert a new row with effective_from set to the change date and is_current = TRUE, and close out the previous row by setting its effective_to to the change date and its is_current to FALSE.
+To get the current address, filter with WHERE customer_id = X AND is_current = TRUE.
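+
+A minimal sketch of the change procedure, assuming the CUSTOMER_ADDRESS_HISTORY columns above with address_sk generated automatically (the ids, dates, and address values here are made up):
+
+```sql
+-- Hypothetical example: customer 42 reports a new address effective 2024-06-01.
+
+-- 1) Close out the current row.
+UPDATE customer_address_history
+SET effective_to = '2024-06-01',
+    is_current   = FALSE
+WHERE customer_id = 42
+  AND is_current = TRUE;
+
+-- 2) Insert the new address as the current row.
+INSERT INTO customer_address_history
+    (customer_id, address_line1, address_line2, city, province_state, postal_code, country,
+     effective_from, effective_to, is_current)
+VALUES
+    (42, '123 New Street', NULL, 'Toronto', 'ON', 'M5V 2T6', 'Canada',
+     '2024-06-01', NULL, TRUE);
+
+-- 3) Current address lookup.
+SELECT *
+FROM customer_address_history
+WHERE customer_id = 42
+  AND is_current = TRUE;
+```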
+
+
+All address history is kept, which supports auditing and historical reporting.
+
+The trade-off is more complex queries, which may not be needed for a small store.
+
+**Recommendation**
+If the business requires historical reporting or auditing (e.g., mailing history, billing address history), use the Type 2 model.
+If only the current address matters for operational tasks (shipping, contact), and you want simplicity with less room for error (such as a query accidentally returning an old address), use the Type 1 model.
+
***
@@ -84,6 +135,10 @@ FROM product
But wait! The product table has some bad data (a few NULL values).
Find the NULLs and then using COALESCE, replace the NULL with a blank for the first column with nulls, and 'unit' for the second column with nulls.
+
+```sql
+SELECT
+    product_name || ', '
+        || COALESCE(product_size, '')
+        || ' (' || COALESCE(product_qty_type, 'unit') || ')'
+    AS product_detail
+FROM product;
+```
+
**HINT**: keep the syntax the same, but edited the correct components with the string. The `||` values concatenate the columns into strings. Edit the appropriate columns -- you're making two edits -- and the NULL rows will be fixed. All the other rows will remain the same.
-
@@ -95,12 +150,44 @@ You can either display all rows in the customer_purchases table, with the counte
**HINT**: One of these approaches uses ROW_NUMBER() and one uses DENSE_RANK().
+
+```sql
+SELECT
+    customer_id,
+    market_date,
+    -- After the GROUP BY, each (customer_id, market_date) pair is one visit, so DENSE_RANK
+    -- numbers the visits 1, 2, 3, ... per customer. Run without the GROUP BY to keep every
+    -- purchase row instead: DENSE_RANK gives purchases on the same date the same visit number.
+    DENSE_RANK() OVER (PARTITION BY customer_id ORDER BY market_date) AS visit_number
+FROM customer_purchases
+GROUP BY customer_id, market_date
+ORDER BY customer_id, market_date;
+```
+
2. Reverse the numbering of the query from a part so each customer’s most recent visit is labeled 1, then write another query that uses this one as a subquery (or temp table) and filters the results to only the customer’s most recent visit.
+
+```sql
+SELECT
+    x.customer_id,
+    x.market_date
+FROM (
+    -- Rank each customer's visits from most recent (1) to oldest.
+    SELECT
+        customer_id,
+        market_date,
+        RANK() OVER (PARTITION BY customer_id ORDER BY market_date DESC) AS recent_visit_rank
+    FROM customer_purchases
+    GROUP BY customer_id, market_date
+) x
+WHERE x.recent_visit_rank = 1
+ORDER BY x.customer_id;
+```
+
3. Using a COUNT() window function, include a value along with each row of the customer_purchases table that indicates how many different times that customer has purchased that product_id.
-
+
+```sql
+SELECT
+    *,
+    -- How many times this customer has purchased this product (counted across all their rows).
+    COUNT(product_id) OVER (PARTITION BY customer_id, product_id) AS product_purchase_count
+FROM customer_purchases
+ORDER BY customer_id, product_id;
+```
+
+
#### String manipulations
1. Some product names in the product table have descriptions like "Jar" or "Organic". These are separated from the product name with a hyphen. Create a column using SUBSTR (and a couple of other commands) that captures these, but is otherwise NULL. Remove any trailing or leading whitespaces. Don't just use a case statement for each product!
@@ -109,17 +196,60 @@ You can either display all rows in the customer_purchases table, with the counte
| Habanero Peppers - Organic | Organic |
**HINT**: you might need to use INSTR(product_name,'-') to find the hyphens. INSTR will help split the column.
+
+```sql
+SELECT
+    product_name,
+    CASE
+        -- Only products whose name contains a hyphen have a description after it.
+        WHEN INSTR(product_name, '-') > 0
+            -- Take everything after the hyphen, then strip leading/trailing whitespace.
+            THEN TRIM(SUBSTR(product_name, INSTR(product_name, '-') + 1))
+        ELSE NULL
+    END AS description
+FROM product;
+```
+
2. Filter the query to show any product_size value that contain a number with REGEXP.
-
+
+```sql
+SELECT *
+FROM product
+WHERE product_size REGEXP '[0-9]';
+```
+
+
#### UNION
1. Using a UNION, write a query that displays the market dates with the highest and lowest total sales.
**HINT**: There are a possibly a few ways to do this query, but if you're struggling, try the following: 1) Create a CTE/Temp Table to find sales values grouped dates; 2) Create another CTE/Temp table with a rank windowed function on the previous query to create "best day" and "worst day"; 3) Query the second temp table twice, once for the best day, once for the worst day, with a UNION binding them.
-***
+
+```sql
+-- 1) Total sales per market date.
+WITH DailySales AS (
+    SELECT
+        market_date,
+        SUM(quantity * cost_to_customer_per_qty) AS total_sales
+    FROM customer_purchases
+    GROUP BY market_date
+),
+-- 2) Rank the dates from both directions.
+RankedSales AS (
+    SELECT
+        market_date,
+        total_sales,
+        RANK() OVER (ORDER BY total_sales DESC) AS rank_high,
+        RANK() OVER (ORDER BY total_sales ASC) AS rank_low
+    FROM DailySales
+)
+-- 3) Best day and worst day, bound together with a UNION.
+SELECT market_date, total_sales, 'Highest' AS sales_type
+FROM RankedSales
+WHERE rank_high = 1
+
+UNION
+
+SELECT market_date, total_sales, 'Lowest' AS sales_type
+FROM RankedSales
+WHERE rank_low = 1;
+```
+
## Section 3:
You can start this section following *session 5*.
@@ -136,22 +266,46 @@ Steps to complete this part of the assignment:
1. Suppose every vendor in the `vendor_inventory` table had 5 of each of their products to sell to **every** customer on record. How much money would each vendor make per product? Show this by vendor_name and product name, rather than using the IDs.
**HINT**: Be sure you select only relevant columns and rows. Remember, CROSS JOIN will explode your table rows, so CROSS JOIN should likely be a subquery. Think a bit about the row counts: how many distinct vendors, product names are there (x)? How many customers are there (y). Before your final group by you should have the product of those two queries (x\*y).
-
--
+
+```sql
+SELECT
+    x.vendor_name,
+    x.product_name,
+    -- The CROSS JOIN repeats each product once per customer, so summing 5 * price
+    -- over those rows is 5 units sold to every customer on record.
+    ROUND(SUM(5 * x.original_price), 2) AS potential_revenue
+FROM (
+    SELECT DISTINCT v.vendor_name, p.product_name, vi.original_price
+    FROM vendor_inventory vi
+    JOIN vendor v ON vi.vendor_id = v.vendor_id
+    JOIN product p ON vi.product_id = p.product_id
+) x
+CROSS JOIN customer c
+GROUP BY x.vendor_name, x.product_name
+ORDER BY x.vendor_name, x.product_name;
+```
+
#### INSERT
1. Create a new table "product_units". This table will contain only products where the `product_qty_type = 'unit'`. It should use all of the columns from the product table, as well as a new column for the `CURRENT_TIMESTAMP`. Name the timestamp column `snapshot_timestamp`.
+
+```sql
+DROP TABLE IF EXISTS product_units;
+
+CREATE TABLE product_units AS
+SELECT
+    p.*,
+    CURRENT_TIMESTAMP AS snapshot_timestamp
+FROM product p
+WHERE p.product_qty_type = 'unit';
+```
+
2. Using `INSERT`, add a new row to the product_unit table (with an updated timestamp). This can be any product you desire (e.g. add another record for Apple Pie).
--
+
+```sql
+INSERT INTO product_units
+    (product_id, product_name, product_size, product_category_id, product_qty_type, snapshot_timestamp)
+VALUES
+    (999, 'Super-Sized Apple Pie', 'Extra Large', 3, 'unit', CURRENT_TIMESTAMP);
+```
+
+
#### DELETE
1. Delete the older record for the whatever product you added.
**HINT**: If you don't specify a WHERE clause, [you are going to have a bad time](https://imgflip.com/i/8iq872).
--
+
+```sql
+DELETE FROM product_units
+WHERE product_id = 999;
+```
+
+
#### UPDATE
1. We want to add the current_quantity to the product_units table. First, add a new column, `current_quantity` to the table using the following syntax.
@@ -163,3 +317,16 @@ ADD current_quantity INT;
Then, using `UPDATE`, change the current_quantity equal to the **last** `quantity` value from the vendor_inventory details.
**HINT**: This one is pretty hard. First, determine how to get the "last" quantity per product. Second, coalesce null values to 0 (if you don't have null values, figure out how to rearrange your query so you do.) Third, `SET current_quantity = (...your select statement...)`, remembering that WHERE can only accommodate one column. Finally, make sure you have a WHERE statement to update the right row, you'll need to use `product_units.product_id` to refer to the correct row within the product_units table. When you have all of these components, you can run the update statement.
+
+```sql
+UPDATE product_units
+SET current_quantity = COALESCE(
+    (
+        -- Most recent quantity recorded for this product in vendor_inventory.
+        SELECT vi.quantity
+        FROM vendor_inventory vi
+        WHERE vi.product_id = product_units.product_id
+        ORDER BY vi.market_date DESC
+        LIMIT 1
+    ),
+    0  -- If the subquery finds no rows (returns NULL), default the quantity to 0.
+);
+```
\ No newline at end of file
diff --git a/02_activities/assignments/Cohort_8/Book_store.pdf b/02_activities/assignments/Cohort_8/Book_store.pdf
new file mode 100644
index 000000000..0a23919bc
Binary files /dev/null and b/02_activities/assignments/Cohort_8/Book_store.pdf differ