Commit cd0e1ae

adjustments for compat in exercise nodes
1 parent 7d40b1e commit cd0e1ae

File tree: 3 files changed, +30 -81 lines

lectures/eigen_I.md

Lines changed: 10 additions & 42 deletions
@@ -997,12 +997,6 @@ Here is one solution.
 We start by looking into the distance between the eigenvector approximation and the true eigenvector.
 
 ```{code-cell} ipython3
----
-mystnb:
-  figure:
-    caption: Power iteration
-    name: pow-dist
----
 # Define a matrix A
 A = np.array([[1, 0, 3],
               [0, 2, 0],
@@ -1040,20 +1034,14 @@ print('The real eigenvalue is', np.linalg.eig(A)[0])
 plt.figure(figsize=(10, 6))
 plt.xlabel('iterations')
 plt.ylabel('error')
+plt.title('Power iteration')
 _ = plt.plot(errors)
 ```
 
-+++ {"user_expressions": []}
-
 Then we can look at the trajectory of the eigenvector approximation.
 
 ```{code-cell} ipython3
----
-mystnb:
-  figure:
-    caption: Power iteration trajectory
-    name: pow-trajectory
----
+
 # Set up the figure and axis for 3D plot
 fig = plt.figure()
 ax = fig.add_subplot(111, projection='3d')
@@ -1081,11 +1069,11 @@ ax.legend(points, ['actual eigenvector',
                    r'approximated eigenvector ($b_k$)'])
 ax.set_box_aspect(aspect=None, zoom=0.8)
 
+ax.set_title('Power iteration trajectory')
+
 plt.show()
 ```
 
-+++ {"user_expressions": []}
-
 ```{solution-end}
 ```
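For reference, the power iteration this solution plots can be sketched in a few lines. The first two rows of the matrix below match the `A` shown in the hunk above; the third row is truncated in the diff, so it is an assumption chosen to make a symmetric example with dominant eigenvalue 4.

```python
import numpy as np

def power_iteration(A, num_iters=50):
    """Estimate the dominant eigenvalue and eigenvector of A."""
    b = np.random.rand(A.shape[0])
    for _ in range(num_iters):
        b = A @ b
        b = b / np.linalg.norm(b)   # normalize each step to avoid overflow
    # Rayleigh quotient gives the eigenvalue estimate
    return b @ A @ b, b

A = np.array([[1.0, 0.0, 3.0],
              [0.0, 2.0, 0.0],
              [3.0, 0.0, 1.0]])    # third row assumed for illustration
lam, b = power_iteration(A)
```

The error plotted in the lecture decays geometrically at the rate of the second-largest to largest eigenvalue ratio, which is why the log-scale error curve is roughly linear.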

@@ -1119,21 +1107,14 @@ print(f'eigenvectors:\n {eigenvectors}')
 plot_series(A, v, n)
 ```
 
-+++ {"user_expressions": []}
-
 The result seems to converge to the eigenvector of $A$ with the largest eigenvalue.
 
 Let's use a [vector field](https://en.wikipedia.org/wiki/Vector_field) to visualize the transformation brought by A.
 
 (This is a more advanced topic in linear algebra, please step ahead if you are comfortable with the math.)
 
 ```{code-cell} ipython3
----
-mystnb:
-  figure:
-    caption: Convergence towards eigenvectors
-    name: eigen-conv
----
+
 # Create a grid of points
 x, y = np.meshgrid(np.linspace(-5, 5, 15),
                    np.linspace(-5, 5, 20))
@@ -1165,13 +1146,12 @@ plt.legend(lines, labels, loc='center left',
 
 plt.xlabel("x")
 plt.ylabel("y")
+plt.title("Convergence towards eigenvectors")
 plt.grid()
 plt.gca().set_aspect('equal', adjustable='box')
 plt.show()
 ```
 
-+++ {"user_expressions": []}
-
 Note that the vector field converges to the eigenvector of $A$ with the largest eigenvalue and diverges from the eigenvector of $A$ with the smallest eigenvalue.
 
 In fact, the eigenvectors are also the directions in which the matrix $A$ stretches or shrinks the space.
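The stretch-or-shrink claim in the context line above can be checked numerically: along an eigenvector, applying the matrix is pure scaling by the eigenvalue. The 2×2 matrix here is a made-up example, not one from the lecture.

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 1.0]])         # hypothetical example matrix

eigvals, eigvecs = np.linalg.eig(A)
for lam, v in zip(eigvals, eigvecs.T):   # eigenvectors are the columns
    # A acts on its eigenvector as multiplication by the eigenvalue
    assert np.allclose(A @ v, lam * v)
```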
@@ -1200,12 +1180,7 @@ Use the visualization in the previous exercise to explain the trajectory of the
 Here is one solution
 
 ```{code-cell} ipython3
----
-mystnb:
-  figure:
-    caption: Vector fields of the three matrices
-    name: vector-field
----
+
 figure, ax = plt.subplots(1, 3, figsize=(15, 5))
 A = np.array([[sqrt(3) + 1, -2],
               [1, sqrt(3) - 1]])
@@ -1264,24 +1239,18 @@ for i, example in enumerate(examples):
     ax[i].grid()
     ax[i].set_aspect('equal', adjustable='box')
 
+plt.title("Vector fields of the three matrices")
 plt.show()
 ```
 
-+++ {"user_expressions": []}
-
 The vector fields explain why we observed the trajectories of the vector $v$ multiplied by $A$ iteratively before.
 
 The pattern demonstrated here is because we have complex eigenvalues and eigenvectors.
 
 We can plot the complex plane for one of the matrices using `Arrow3D` class retrieved from [stackoverflow](https://stackoverflow.com/questions/22867620/putting-arrowheads-on-vectors-in-a-3d-plot).
 
 ```{code-cell} ipython3
----
-mystnb:
-  figure:
-    caption: 3D plot of the vector field
-    name: 3d-vector-field
----
+
 class Arrow3D(FancyArrowPatch):
     def __init__(self, xs, ys, zs, *args, **kwargs):
         super().__init__((0, 0), (0, 0), *args, **kwargs)
@@ -1334,11 +1303,10 @@ ax.set_ylabel('y')
 ax.set_zlabel('Im')
 ax.set_box_aspect(aspect=None, zoom=0.8)
 
+plt.title("3D plot of the vector field")
 plt.draw()
 plt.show()
 ```
 
-+++ {"user_expressions": []}
-
 ```{solution-end}
 ```
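The "complex eigenvalues" remark can be verified for the first matrix appearing in this solution, `A = [[sqrt(3) + 1, -2], [1, sqrt(3) - 1]]`, which is quoted directly from the hunk above:

```python
import numpy as np
from math import sqrt

A = np.array([[sqrt(3) + 1, -2],
              [1, sqrt(3) - 1]])

eigvals = np.linalg.eigvals(A)
# Eigenvalues are sqrt(3) ± i: complex with modulus 2, so iterates
# rotate while being scaled, producing the spiral trajectories seen
# in the vector-field plots.
```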

lectures/inequality.md

Lines changed: 20 additions & 35 deletions
@@ -4,7 +4,7 @@ jupytext:
     extension: .md
     format_name: myst
     format_version: 0.13
-    jupytext_version: 1.15.1
+    jupytext_version: 1.17.2
 kernelspec:
   display_name: Python 3 (ipykernel)
   language: python
@@ -95,8 +95,6 @@ import wbgapi as wb
 import plotly.express as px
 ```
 
-
-
 ## The Lorenz curve
 
 One popular measure of inequality is the Lorenz curve.
@@ -239,9 +237,6 @@ ax.legend()
 plt.show()
 ```
 
-
-
-
 ### Lorenz curves for US data
 
 Next let's look at US data for both income and wealth.
@@ -333,7 +328,6 @@ ax.legend()
 plt.show()
 ```
 
-
 One key finding from this figure is that wealth inequality is more extreme than income inequality.
 
 
@@ -402,7 +396,7 @@ G = \frac{A}{A+B}
 $$
 
 where $A$ is the area between the 45-degree line of
-perfect equality and the Lorenz curve, while $B$ is the area below the Lorenze curve -- see {numref}`lorenz_gini2`.
+perfect equality and the Lorenz curve, while $B$ is the area below the Lorenz curve -- see {numref}`lorenz_gini2`.
 
 ```{code-cell} ipython3
 ---
@@ -427,8 +421,6 @@ ax.legend()
 plt.show()
 ```
 
-
-
 ```{seealso}
 The World in Data project has a [graphical exploration of the Lorenz curve and the Gini coefficient](https://ourworldindata.org/what-is-the-gini-coefficient)
 ```
@@ -442,7 +434,6 @@ The code below computes the Gini coefficient from a sample.
 (code:gini-coefficient)=
 
 ```{code-cell} ipython3
-
 def gini_coefficient(y):
     r"""
     Implements the Gini inequality index
@@ -503,11 +494,13 @@ for σ in σ_vals:
 Let's build a function that returns a figure (so that we can use it later in the lecture).
 
 ```{code-cell} ipython3
-def plot_inequality_measures(x, y, legend, xlabel, ylabel):
+def plot_inequality_measures(x, y, legend, xlabel, ylabel, title=None):
     fig, ax = plt.subplots()
     ax.plot(x, y, marker='o', label=legend)
     ax.set_xlabel(xlabel)
     ax.set_ylabel(ylabel)
+    if title is not None:
+        ax.set_title(title)
     ax.legend()
     return fig, ax
 ```
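The extended helper keeps backward compatibility because `title` defaults to `None`. A quick smoke test of the new signature (hypothetical data; the function body is reproduced from the hunk above so the snippet is self-contained, and a headless matplotlib backend is assumed):

```python
import matplotlib
matplotlib.use("Agg")              # headless backend, assumed for testing
import matplotlib.pyplot as plt
import numpy as np

def plot_inequality_measures(x, y, legend, xlabel, ylabel, title=None):
    fig, ax = plt.subplots()
    ax.plot(x, y, marker='o', label=legend)
    ax.set_xlabel(xlabel)
    ax.set_ylabel(ylabel)
    if title is not None:
        ax.set_title(title)       # only set when a title is supplied
    ax.legend()
    return fig, ax

x = np.linspace(0.2, 4, 5)
fig, ax = plot_inequality_measures(x, np.sqrt(x), "demo", "x", "y",
                                   title="Demo title")
```

Existing call sites without a title are unaffected, while the later cells in this commit pass the former `mystnb` caption as the sixth positional argument.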
@@ -546,7 +539,7 @@ We now know the series ID is `SI.POV.GINI`.
 
 (Another way to find the series ID is to use the [World Bank data portal](https://data.worldbank.org) and then use `wbgapi` to fetch the data.)
 
-To get a quick overview, let's histogram Gini coefficients across all countries and all years in the World Bank dataset.
+To get a quick overview, let's histogram Gini coefficients across all countries and all years in the World Bank dataset.
 
 ```{code-cell} ipython3
 ---
@@ -572,7 +565,7 @@ plt.show()
 
 We can see in {numref}`gini_histogram` that across 50 years of data and all countries the measure varies between 20 and 65.
 
-Let us fetch the data `DataFrame` for the USA.
+Let us fetch the data `DataFrame` for the USA.
 
 ```{code-cell} ipython3
 data = wb.data.DataFrame("SI.POV.GINI", "USA")
@@ -583,7 +576,6 @@ data.columns = data.columns.map(lambda x: int(x.replace('YR','')))
 
 (This package often returns data with year information contained in the columns. This is not always convenient for simple plotting with pandas so it can be useful to transpose the results before plotting.)
 
-
 ```{code-cell} ipython3
 data = data.T          # Obtain years as rows
 data_usa = data['USA'] # pd.Series of US data
@@ -616,8 +608,7 @@ In the previous section we looked at the Gini coefficient for income, focusing o
 
 Now let's look at the Gini coefficient for the distribution of wealth.
 
-We will use US data from the {ref}`Survey of Consumer Finances<data:survey-consumer-finance>`
-
+We will use US data from the {ref}`Survey of Consumer Finances<data:survey-consumer-finance>`
 
 ```{code-cell} ipython3
 df_income_wealth.year.describe()
@@ -953,50 +944,44 @@ for σ in σ_vals:
 ```{code-cell} ipython3
 ---
 mystnb:
-  figure:
-    caption: Top shares of simulated data
-    name: top_shares_simulated
   image:
     alt: top_shares_simulated
 ---
 fig, ax = plot_inequality_measures(σ_vals,
                                    topshares,
                                    "simulated data",
                                    "$\sigma$",
-                                   "top $10\%$ share")
+                                   "top $10\%$ share",
+                                   "Top shares of simulated data")
 plt.show()
 ```
 
 ```{code-cell} ipython3
 ---
 mystnb:
-  figure:
-    caption: Gini coefficients of simulated data
-    name: gini_coef_simulated
   image:
     alt: gini_coef_simulated
 ---
 fig, ax = plot_inequality_measures(σ_vals,
                                    ginis,
                                    "simulated data",
                                    "$\sigma$",
-                                   "gini coefficient")
+                                   "gini coefficient",
+                                   "Gini coefficients of simulated data")
 plt.show()
 ```
 
 ```{code-cell} ipython3
 ---
 mystnb:
-  figure:
-    caption: Lorenz curves for simulated data
-    name: lorenz_curve_simulated
   image:
     alt: lorenz_curve_simulated
 ---
 fig, ax = plt.subplots()
 ax.plot([0,1],[0,1], label=f"equality")
 for i in range(len(f_vals)):
     ax.plot(f_vals[i], l_vals[i], label=f"$\sigma$ = {σ_vals[i]}")
+ax.set_title("Lorenz curves for simulated data")
 plt.legend()
 plt.show()
 ```
@@ -1037,9 +1022,6 @@ for f_val, l_val in zip(f_vals_nw, l_vals_nw):
 ```{code-cell} ipython3
 ---
 mystnb:
-  figure:
-    caption: 'US top shares: approximation vs Lorenz'
-    name: top_shares_us_al
   image:
     alt: top_shares_us_al
 ---
@@ -1051,6 +1033,7 @@ ax.plot(years, top_shares_nw, marker='o', label="net wealth-lorenz")
 
 ax.set_xlabel("year")
 ax.set_ylabel("top $10\%$ share")
+ax.set_title('US top shares: approximation vs Lorenz')
 ax.legend()
 plt.show()
 ```
@@ -1112,9 +1095,11 @@ def gini(y):
     g_sum = np.sum(np.abs(y_1 - y_2))
     return g_sum / (2 * n * np.sum(y))
 ```
+
 ```{code-cell} ipython3
 gini(data.n_wealth.values)
 ```
+
 Let's simulate five populations by drawing from a lognormal distribution as before
 
 ```{code-cell} ipython3
@@ -1125,6 +1110,7 @@ n = 2_000
 μ_vals = -σ_vals**2/2
 y_vals = np.exp(μ_vals + σ_vals*np.random.randn(n))
 ```
+
 We can compute the Gini coefficient for these five populations using the vectorized function, the computation time is shown below:
 
 ```{code-cell} ipython3
@@ -1133,14 +1119,13 @@ gini_coefficients =[]
 for i in range(k):
     gini_coefficients.append(gini(y_vals[i]))
 ```
+
 This shows the vectorized function is much faster.
 This gives us the Gini coefficients for these five households.
 
 ```{code-cell} ipython3
 gini_coefficients
 ```
-```{solution-end}
-```
-
-
 
+```{solution-end}
+```
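For context, the vectorized `gini` this solution times has the following shape. The `g_sum` line and the final return expression are quoted from the hunk above; the broadcasting setup for `y_1` and `y_2` is a reconstruction, since those definitions fall outside the diff.

```python
import numpy as np

def gini(y):
    """Gini coefficient via pairwise absolute differences (vectorized)."""
    n = len(y)
    y_1 = np.reshape(y, (n, 1))   # column vector
    y_2 = np.reshape(y, (1, n))   # row vector; broadcasting forms all pairs
    g_sum = np.sum(np.abs(y_1 - y_2))
    return g_sum / (2 * n * np.sum(y))

gini(np.ones(5))                  # perfect equality gives 0.0
```

Replacing the double Python loop with one broadcasted subtraction is what makes this version "much faster", at the cost of an n-by-n intermediate array.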

lectures/pv.md

Lines changed: 0 additions & 4 deletions
@@ -339,8 +339,6 @@ $$ (eq:Ainv)
 
 Check this by showing that $A A^{-1}$ is equal to the identity matrix.
 
-
-
 ```{exercise-end}
 ```
@@ -473,8 +471,6 @@ following settings for $d$ and $p_{T+1}^*$:
 Plugging each of the above $p_{T+1}^*, d_t$ pairs into Equation {eq}`eq:ptpveq` yields:
 
 1. $ p_t = \sum^T_{s=t} \delta^{s-t} g^s d_0 = d_t \frac{1 - (\delta g)^{T+1-t}}{1 - \delta g}$
-
-
 2. $p_t = \sum^T_{s=t} \delta^{s-t} g^s d_0 + \frac{\delta^{T+1-t} g^{T+1} d_0}{1 - \delta g} = \frac{d_t}{1 - \delta g}$
 3. $p_t = 0$
 4. $p_t = c \delta^{-t}$
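The first closed form above can be sanity-checked numerically: with $d_s = g^s d_0$ the sum is geometric in $\delta g$. Parameter values here are arbitrary illustrations, not values from the lecture.

```python
# Check: sum_{s=t}^{T} delta^(s-t) g^s d_0
#        == d_t * (1 - (delta g)^(T+1-t)) / (1 - delta g),  where d_t = g^t d_0
delta, g, d0 = 0.95, 1.01, 1.0    # arbitrary illustrative parameters
T, t = 40, 5

lhs = sum(delta**(s - t) * g**s * d0 for s in range(t, T + 1))
d_t = g**t * d0
rhs = d_t * (1 - (delta * g)**(T + 1 - t)) / (1 - delta * g)
# lhs and rhs agree to floating-point precision
```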
