Commit 5f23dc2

Merge branch 'master' of https://github.com/jzarnett/ece459
2 parents 064e81f + f49140a commit 5f23dc2

37 files changed

Lines changed: 64 additions & 58 deletions

lectures/L03-slides.tex

Lines changed: 21 additions & 2 deletions
@@ -239,17 +239,36 @@
 
 You may be tempted to just always use \texttt{unwrap()}
 
+
+Cloudflare on November 18, 2025:
+\vspace{-4em}
 \begin{center}
 \includegraphics[width=0.3\textwidth]{images/coyote.jpg}
 \end{center}
 
-Don't deny yourself the opportunity to add debug information.
+\end{frame}
+
 
-It's better to use \texttt{expect()}.
+\begin{frame}
+\frametitle{Cloudflare Issue}
+This is taken from the post-mortem Cloudflare published:
+
+\begin{center}
+\includegraphics[width=\textwidth]{images/cloudflare-unwrap.png}
+\end{center}
+
+A large number of features were unavailable and many websites were down!
 
 \end{frame}
 
 
+\begin{frame}
+\frametitle{Building Debugging?}
+You can use it in debugging but... Don't deny yourself the opportunity to add debug information.
+
+It's better to use \texttt{expect()}.
+\end{frame}
+
 \begin{frame}
 \frametitle{Plan for the Future}
 

lectures/L03.tex

Lines changed: 4 additions & 1 deletion
@@ -107,7 +107,10 @@ \subsection*{Unwrap the Panic}
 
 If we try to open a file but the file doesn't exist, that's an error but one that's foreseeable and we can handle it. There's three ways to handle it: a \texttt{match} expression (this is like the \texttt{switch} statement), \texttt{unwrap()}, and \texttt{expect()}.
 
-You may be tempted to just always use \texttt{unwrap()} because it gives you the result and calls the \texttt{panic!} macro if there's an error. This, however, just shows you the lower level error that is the problem and you are denying yourself the opportunity to add information that will help you debug. For that reason, it's better to use \texttt{expect()}, which lets you add your own error message that will make it easier to find out where exactly things went wrong.
+You may be tempted to just always use \texttt{unwrap()} because it gives you the result and calls the \texttt{panic!} macro if there's an error. Ask Cloudflare how that went for them! In an incident report about an outage on November 18, 2025\footnote{ \url{https://blog.cloudflare.com/18-november-2025-outage/} }, they identified the cause of a 500 error as a call to \texttt{unwrap()} with no checking. That incident caused major outages across the internet for many customers. Could this have been avoided? Maybe -- checking and handling this error might have resulted in a less chaotic failure mode (e.g., all requests are denied), and it might also have been a lot easier to track down the cause of the problem.
+
+
+Even in development and debugging, using \texttt{unwrap()} just shows you the lower-level error that is the problem, and you are denying yourself the opportunity to add information that will help you debug. For that reason, it's better to use \texttt{expect()}, which lets you add your own error message that will make it easier to find out where exactly things went wrong.
 
 It's recommended to use \texttt{Result} types for functions you write too. Make your future self happy by giving yourself the information you need to debug what's gone wrong!
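
A minimal sketch of the three options described above, assuming a file that does not exist; the filename and messages are our own invention, not from the lecture:

```rust
use std::fs::File;

fn main() {
    // Option 1: a match expression handles each case explicitly.
    match File::open("definitely-missing-file.txt") {
        Ok(_) => println!("opened the file"),
        Err(e) => println!("could not open the file: {}", e),
    }

    // Option 2: unwrap() would panic here with only the low-level
    // io::Error message, e.g. "No such file or directory".
    // let f = File::open("definitely-missing-file.txt").unwrap();

    // Option 3: expect() also panics, but with a message we wrote,
    // which makes it much easier to see where things went wrong.
    // let f = File::open("definitely-missing-file.txt")
    //     .expect("could not open the input file; did you run setup first?");

    // The error really is foreseeable and handleable:
    assert!(File::open("definitely-missing-file.txt").is_err());
}
```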

lectures/L04.tex

Lines changed: 2 additions & 2 deletions
@@ -137,12 +137,12 @@ \section*{Holodeck Safeties are Offline}
 \end{lstlisting}
 
 Anyone who wants to use an unsafe function also has to sign away their life, or at least their safety, by acknowledging that they know the function in question is unsafe (by enclosing the call in an
-unsafe block), and promising to ensure the necessary safety condtions. (Readbacks are a safety convention used in aviation, among other places.)
+unsafe block), and promising to ensure the necessary safety conditions. (Readbacks are a safety convention used in aviation, among other places.)
 
 If you try to use an unsafe function without it being in an unsafe block, the compiler will, naturally, forbid such a thing. Just smashing the unsafe block around it is enough to make the compiler quiet, but not a thorough code reviewer. They would ask about whether you've read carefully the documentation of the function in question and whether you are sure you're calling it with the right arguments\ldots You did read the documentation, right? Right?
 
 Conversely, unsafe blocks don't have to be in unsafe functions; if not in an unsafe function, you're saying
-that the function is uncondtionally safe to call, i.e. you are encapsulating the unsafety.
+that the function is unconditionally safe to call, i.e. you are encapsulating the unsafety.
 
 \paragraph{Mutable static variables.} Rust tries pretty hard to discourage you from using global variables, and they are right to do so. It's a quick shortcut and we do it a lot in course assignments, exercises, labs, and even exam questions. On an exam question, the thing I want to test is something like how you use the mutex and queue constructs to solve the problem, not how well you pass the mutex and queue pointers from the main thread to the newly created threads. In production code, though, global variables are really not recommended because of how harmful it is to good software engineering principles.
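
A small sketch of both sides of that bargain; the function names (`read_raw`, `read_value`) are our own, not from the notes:

```rust
// A hypothetical unsafe function: callers must promise (the "readback")
// that `p` points to a valid i32 for the duration of the call.
unsafe fn read_raw(p: *const i32) -> i32 {
    unsafe { *p }
}

// An unsafe block inside a NON-unsafe function: by leaving read_value
// safe, we claim it is unconditionally safe to call -- we encapsulate
// the unsafety, because the pointer always comes from a valid reference.
fn read_value(x: &i32) -> i32 {
    unsafe { read_raw(x as *const i32) }
}

fn main() {
    let v = 459;
    // Calling read_raw here without an unsafe block would not compile.
    println!("{}", read_value(&v)); // prints 459
}
```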

lectures/L08-slides.tex

Lines changed: 11 additions & 5 deletions
@@ -156,11 +156,6 @@
 its cache, it will either {\it invalidate} or {\it update} the
 data.
 \end{itemize}
-~\\
-
-For write-through caches: normally, when you write to an invalidated
-location, you bypass the cache and go directly to memory (aka {\bf
-write no-allocate}).
 
 
 \end{frame}
@@ -239,6 +234,9 @@
 \item Events are either from a processor ({\bf Pr}) or the {\bf Bus}.
 \end{itemize}
 \vfill
+\begin{changemargin}{2em}
+This table is for write-through write-allocate:
+\end{changemargin}
 \begin{center}
 \begin{tabular}{llll}
 {\bf State} & {\bf Observed} & {\bf Generated} & {\bf Next State}\\
@@ -248,6 +246,14 @@
 Invalid & PrWr & BusWr & Valid\\
 Invalid & PrRd & BusRd & Valid\\
 \end{tabular}
+\begin{changemargin}{2em}
+~\\
+There are write-allocate and write-no-allocate variants of write-through. In all cases, the written value goes to memory.\\[0em]
+\begin{itemize}
+\item write-allocate: Invalid/PrWr line has next state Valid: the written value is also immediately cached.
+\item write-no-allocate: Invalid/PrWr line has next state Invalid: the written value is not immediately cached.
+\end{itemize}
+\end{changemargin}
 \end{center}
 
 \end{frame}
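
The Valid/Invalid transition table in the hunk above can be sketched as a tiny state machine. This is our own illustrative code (write-through write-allocate, with the generated bus events omitted), not from the notes:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum State { Invalid, Valid }

#[derive(Clone, Copy)]
enum Event { PrRd, PrWr, BusRd, BusWr }

// Transition function for write-through write-allocate: processor reads
// and writes from Invalid go to Valid (the written value is also cached);
// observing a write on the bus from another CPU invalidates our copy.
fn next(state: State, ev: Event) -> State {
    use {Event::*, State::*};
    match (state, ev) {
        (Valid, BusWr) => Invalid, // another CPU wrote: invalidate
        (Invalid, PrRd) => Valid,  // read miss: fetch and cache
        (Invalid, PrWr) => Valid,  // write-allocate: cache the written value
        (s, _) => s,               // everything else: no state change
    }
}

fn main() {
    let mut s = State::Invalid;
    s = next(s, Event::PrWr); // our write: the line becomes Valid
    assert_eq!(s, State::Valid);
    s = next(s, Event::BusWr); // a remote write: our copy is invalidated
    assert_eq!(s, State::Invalid);
    println!("{:?}", s);
}
```

Under the write-no-allocate variant, the `(Invalid, PrWr)` arm would return `Invalid` instead.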

lectures/L08.tex

Lines changed: 6 additions & 1 deletion
@@ -66,7 +66,7 @@ \subsection*{Write-Through Caches}
 data.
 \end{itemize}
 
-Invalidation is the most common protocol. It means the data in the cache of other CPUs is not updated, it's just noted as being out of date (invalid). Normally, when you write to an invalidated location, you bypass the cache and go directly to memory (aka {\bf write no-allocate}). This kind of thing happens if you're just doing \texttt{x = 42;}---it doesn't matter what value of \texttt{x} was there before; you're just overwriting it.
+Invalidation is the most common protocol. It means the data in the cache of other CPUs is not updated, it's just noted as being out of date (invalid). In some cases, when you write to an invalidated location, you bypass the cache and go directly to memory (aka {\bf write no-allocate}). This kind of thing happens if you're just doing \texttt{x = 42;}---it doesn't matter what value of \texttt{x} was there before; you're just overwriting it.
 
 If we want to do a read and there's a miss, we can ask around the other caches to see who has the most recent cached version. This is a bit like going into a room and yelling ``Does anybody have block\ldots?'', in some sort of multicast version of the card game ``Go Fish''. Regardless, the most recent value appears in memory, always, so if nobody else has it in cache (or they don't feel like sharing) you can get it from there.
 
@@ -88,6 +88,11 @@ \subsection*{Write-Through Caches}
 Invalid & PrRd & BusRd & Valid\\
 \end{tabular}
 \end{center}
+There are write-allocate and write-no-allocate variants of write-through. In all cases, the written value goes to memory.
+\begin{itemize}[noitemsep]
+\item write-allocate: Invalid/PrWr line has next state Valid: the written value is also immediately cached.
+\item write-no-allocate: Invalid/PrWr line has next state Invalid: the written value is not immediately cached.
+\end{itemize}
 
 \paragraph{Example.} For simplicity (this isn't an architecture course), assume all cache
 reads/writes are atomic. \footnote{If you're a hardware person, this line probably makes you cry. There's a whole lot that goes into making this work. There are potential write races, which have to be dealt with by contending for the bus and then completing the transaction, possibly restarting a command if necessary. If we have a split transaction bus it's really ugly, because we can have multiple interleaved misses. And down the rabbit hole we go.} Using the same example as before:

lectures/L09.tex

Lines changed: 2 additions & 2 deletions
@@ -39,7 +39,7 @@ \subsection*{Accidentally Quadratic}
 
 There are a number of good examples at \url{https://accidentallyquadratic.tumblr.com/} if you are interested in seeing some more, though they are in different languages and the most recent post seems to be from mid-2019. To find a Rust-specific example, I had to go pretty far back in the archives and came up with this one from 2016(!): \url{https://accidentallyquadratic.tumblr.com/post/153545455987/rust-hash-iteration-reinsertion}. Let's recap their explanation:
 
-Rust's hash tables used a strategy called \textit{Robin-Hood Hashing}\footnote{See this paper! \url{https://cs.uwaterloo.ca/research/tr/1986/CS-86-14.pdf}}, which is based on open addressing with linear probing. Remember that open addressing is finding an alternate location in the event of a collision, rather than chaining and linear probing means you start from the bucket we should land in and move forward until you find a free space. The Robin-Hood part says if we have an item that's farther from its intended bucket than the current item, swap them. So if the order of items is 2 - 0 - 1, we'll swap until 0 - 1 - 2. This reduces the variance of items -- how far away from where they ``should'' be on average.
+Rust's hash tables used a strategy called \textit{Robin-Hood Hashing}\footnote{See this paper! \url{https://cs.uwaterloo.ca/research/tr/1986/CS-86-14.pdf}}, which is based on open addressing with linear probing. Remember that open addressing is finding an alternate location in the event of a collision, rather than chaining; linear probing means you start from the bucket the item should land in and move forward until you find a free space. The Robin-Hood part says if we have an item that's farther from its intended bucket than the current item, swap them. So if the order of items is 2 - 0 - 1, we'll swap until 0 - 1 - 2. This reduces the variance of items---how far away from where they ``should'' be on average.
 
 Suppose you want to copy the data to a new hash table. To illustrate the problem, start by copying the data to a table half the size of the original. This works fine for the first half, but then as the second hash table is getting full we have to scan longer and longer to find the right place for it to go. Because this linear scan of finding a free bucket is already within the linear loop of ``copy all items from table one to table two'' we have indeed found an accidentally quadratic situation. When the second table is full enough to trigger a resize, then the problem goes away... at least until the table is almost full. See the time taken diagram:
 
@@ -239,7 +239,7 @@ \section*{Software Design Issues: Will it Parallelize?}
 
 
 \paragraph{Locking and Synchronization Points.}
-Think back to a concurrency course and the discussion of locking. We'll be coming back to this subject before too long. But for now, suffice it to say, that the more locks and locking we need, the less scalable the code is going to be. You may think of the lock as a resource. The more threads or processes that are looking to acquire that lock, the more ``resource contention'' we have, and the more waiting and coordination are going to be necessary. We're going to revisit the subject of wise use of locks in more detail soon.
+Think back to a concurrency course and the discussion of locking. We'll be coming back to this subject before too long. But for now, suffice it to say that the more locks and locking we need, the less scalable the code is going to be. You may think of the lock as a resource. The more threads or processes that are looking to acquire that lock, the more ``resource contention'' we have, and the more waiting and coordination are going to be necessary. We're going to revisit the subject of wise use of locks in more detail soon.
 
 The previous paragraph applies as well to other concurrency constructs like semaphores, condition variables, etc. Any time a thread is forced to wait is going to be a limitation on the ability to parallelize the problem.
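
The Robin-Hood insertion described in the first hunk above can be sketched as a toy. This is our own illustrative code (the "hash" is just key mod table size, and we assume the table never fills up), not the standard library's implementation:

```rust
// Toy Robin-Hood hashing insert: open addressing + linear probing.
fn insert(table: &mut Vec<Option<u64>>, key: u64) {
    let n = table.len() as u64;
    let mut k = key;
    let mut dist = 0; // how far k has probed past its home bucket
    loop {
        let idx = ((k % n + dist) % n) as usize;
        match table[idx] {
            None => {
                table[idx] = Some(k);
                return;
            }
            Some(resident) => {
                // How far is the resident entry from ITS home bucket?
                let resident_dist = (idx as u64 + n - resident % n) % n;
                if resident_dist < dist {
                    // Robin Hood: the entry closer to home gives up its
                    // spot to the one that has probed farther.
                    table[idx] = Some(k);
                    k = resident;
                    dist = resident_dist;
                }
            }
        }
        dist += 1;
    }
}

fn main() {
    let mut t: Vec<Option<u64>> = vec![None; 8];
    for key in [0, 1, 8] {
        insert(&mut t, key);
    }
    // Key 8 hashes to bucket 0 (taken by key 0) and probes to bucket 1,
    // where it evicts key 1 (which was sitting in its home bucket);
    // key 1 moves one slot down. Probe-distance variance stays low.
    assert_eq!(&t[0..3], &[Some(0), Some(8), Some(1)]);
    println!("{:?}", &t[0..3]);
}
```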

lectures/L17-slides.tex

Lines changed: 1 addition & 1 deletion
@@ -118,7 +118,7 @@
 \begin{itemize}
 \item You can load a bunch of data and perform
 arithmetic.
-\item Intructions process multiple data items simultaneously.
+\item Instructions process multiple data items simultaneously.
 (Exact number is hardware-dependent).
 \end{itemize}
 For x86-class CPUs, MMX and SSE extensions provide SIMD instructions.
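
A sketch of the kind of loop this refers to, in our own code: one operation applied uniformly across many data items, which optimizing compilers typically turn into SSE/AVX instructions that process several elements at once.

```rust
// Element-wise addition: the same add applied to every lane.
fn add_arrays(a: &[f32], b: &[f32], out: &mut [f32]) {
    for ((o, &x), &y) in out.iter_mut().zip(a).zip(b) {
        *o = x + y;
    }
}

fn main() {
    let a = [1.0_f32; 8];
    let b = [2.0_f32; 8];
    let mut out = [0.0_f32; 8];
    add_arrays(&a, &b, &mut out);
    assert!(out.iter().all(|&v| v == 3.0));
    println!("{:?}", out);
}
```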

lectures/L17.tex

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@
 
 
 \section*{Data and Task Parallelism}
-There are two broad categories of paralellism: data parallelism and
+There are two broad categories of parallelism: data parallelism and
 task parallelism. An analogy to data parallelism is hiring a call
 center to (incompetently) handle large volumes of support calls,
 \emph{all in the same way}. Assembly lines are an analogy to task
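
The two categories can be sketched side by side; this is our own minimal illustration using `std::thread`, not from the lecture:

```rust
use std::thread;

fn main() {
    let calls: Vec<u32> = (1..=8).collect();

    // Data parallelism (the call-centre analogy): every worker applies
    // the SAME operation to a different chunk of the data.
    let (left, right) = calls.split_at(calls.len() / 2);
    let (l, r) = (left.to_vec(), right.to_vec());
    let h1 = thread::spawn(move || l.iter().map(|c| c * 2).sum::<u32>());
    let h2 = thread::spawn(move || r.iter().map(|c| c * 2).sum::<u32>());
    let total = h1.join().unwrap() + h2.join().unwrap();
    assert_eq!(total, 72); // 2 * (1 + 2 + ... + 8)

    // Task parallelism (the assembly-line analogy): workers perform
    // DIFFERENT operations, here on copies of the same data.
    let (a, b) = (calls.clone(), calls.clone());
    let sum = thread::spawn(move || a.iter().sum::<u32>());
    let max = thread::spawn(move || *b.iter().max().unwrap());
    assert_eq!(sum.join().unwrap(), 36);
    assert_eq!(max.join().unwrap(), 8);
    println!("both styles done");
}
```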

lectures/L18-slides.tex

Lines changed: 1 addition & 1 deletion
@@ -268,7 +268,7 @@
 \frametitle{Scalar Replacement}
 
 \emph{Scalar replacement} replaces an array read {\tt a[i]}
-occuring multiple times with a single read {\tt temp = a[i]} and references
+occurring multiple times with a single read {\tt temp = a[i]} and references
 to {\tt temp} otherwise.
 
 It needs to know that {\tt a[i]} won't change

lectures/L18.tex

Lines changed: 1 addition & 1 deletion
@@ -156,7 +156,7 @@ \subsection*{Loop Optimizations}
 but there may be others, which may be functions computable from a primary induction variable. \emph{Induction variable elimination} finds and eliminates (of course!) extra induction variables.
 
 \emph{Scalar replacement} replaces an array read {\tt a[i]}
-occuring multiple times with a single read {\tt temp = a[i]} and references
+occurring multiple times with a single read {\tt temp = a[i]} and references
 to {\tt temp} otherwise. It needs to know that {\tt a[i]} won't change
 between reads.
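
A before/after sketch of the scalar replacement described in this hunk, in our own code (the compiler does this automatically when it can prove `a[i]` is unchanged between reads; writing `temp` by hand just makes the transformation visible):

```rust
// Before: a[i] is read on every iteration of the loop.
fn sum_scaled(a: &[i64], i: usize, n: i64) -> i64 {
    let mut total = 0;
    for k in 0..n {
        total += a[i] * k; // repeated read of a[i]
    }
    total
}

// After scalar replacement: one read into a temporary, then only the
// temporary is referenced. Same result, fewer memory accesses.
fn sum_scaled_replaced(a: &[i64], i: usize, n: i64) -> i64 {
    let temp = a[i]; // single read
    let mut total = 0;
    for k in 0..n {
        total += temp * k;
    }
    total
}

fn main() {
    let a = [3, 5, 7];
    assert_eq!(sum_scaled(&a, 1, 4), sum_scaled_replaced(&a, 1, 4));
    println!("{}", sum_scaled_replaced(&a, 1, 4)); // 5 * (0+1+2+3) = 30
}
```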
