
Commit 7da86fe: Update job search lecture (#190)

* misc
* misc
* misc

Parent: 021d61b

3 files changed: +122 −45 lines


lectures/autodiff.md

Lines changed: 10 additions & 3 deletions
@@ -13,6 +13,12 @@ kernelspec:
 
 # Adventures with Autodiff
 
+
+```{include} _admonition/gpu.md
+```
+
+## Overview
+
 This lecture gives a brief introduction to automatic differentiation using
 Google JAX.
 
@@ -25,14 +31,15 @@ powerful implementations available.
 One of the best of these is the automatic differentiation routines contained
 in JAX.
 
+While other software packages also offer this feature, the JAX version is
+particularly powerful because it integrates so well with other core
+components of JAX (e.g., JIT compilation and parallelization).
+
 As we will see in later lectures, automatic differentiation can be used not only
 for AI but also for many problems faced in mathematical modeling, such as
 multi-dimensional nonlinear optimization and root-finding problems.
 
 
-```{include} _admonition/gpu.md
-```
-
 We need the following imports
 
 ```{code-cell} ipython3
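
For readers skimming this diff, a minimal illustration of the feature the lecture introduces (a sketch using only `jax.grad`; not code from this commit):

```python
import jax
import jax.numpy as jnp

def f(x):
    return jnp.sin(x) + 0.5 * x**2

# jax.grad returns the derivative function: x -> cos(x) + x
f_prime = jax.grad(f)

print(f_prime(1.0))  # ≈ cos(1.0) + 1.0 ≈ 1.5403
```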

lectures/job_search.md

Lines changed: 109 additions & 35 deletions
@@ -17,11 +17,15 @@ kernelspec:
 ```
 
 
-In this lecture we study a basic infinite-horizon job search with Markov wage
+In this lecture we study a basic infinite-horizon job search problem with Markov wage
 draws.
 
-The exercise at the end asks you to add recursive preferences and compare
-the result.
+```{note}
+For background on infinite horizon job search see, e.g., [DP1](https://dp.quantecon.org/).
+```
+
+The exercise at the end asks you to add risk-sensitive preferences and see how
+the main results change.
 
 In addition to what’s in Anaconda, this lecture will need the following libraries:
 
@@ -49,23 +53,32 @@ We study an elementary model where
 
 * jobs are permanent
 * unemployed workers receive current compensation $c$
-* the wage offer distribution $\{W_t\}$ is Markovian
 * the horizon is infinite
 * an unemployed agent discounts the future via discount factor $\beta \in (0,1)$
 
-The wage process obeys
+### Set up
+
+At the start of each period, an unemployed worker receives wage offer $W_t$.
+
+To build a wage offer process we consider the dynamics
 
 $$
-W_{t+1} = \rho W_t + \nu Z_{t+1},
-\qquad \{Z_t\} \text{ is IID and } N(0, 1)
+W_{t+1} = \rho W_t + \nu Z_{t+1}
 $$
 
-We discretize this using Tauchen's method to produce a stochastic matrix $P$
+where $(Z_t)_{t \geq 0}$ is IID and standard normal.
+
+We then discretize this wage process using Tauchen's method to produce a stochastic matrix $P$.
+
+Successive wage offers are drawn from $P$.
+
+### Rewards
 
 Since jobs are permanent, the return to accepting wage offer $w$ today is
 
 $$
-w + \beta w + \beta^2 w + \cdots = \frac{w}{1-\beta}
+w + \beta w + \beta^2 w +
+\cdots = \frac{w}{1-\beta}
 $$
 
 The Bellman equation is
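
The diff references Tauchen's method without showing its output; here is a small sketch of what the discretization step produces, assuming the `quantecon` and `numpy` packages used by the lecture:

```python
import numpy as np
import quantecon as qe

n, ρ, ν = 500, 0.9, 0.2
mc = qe.tauchen(n, ρ, ν)           # discretize W' = ρ W + ν Z'

w_vals = np.exp(mc.state_values)   # wage grid (wages are exp of the state)
P = mc.P                           # n x n transition (stochastic) matrix

assert np.allclose(P.sum(axis=1), 1.0)  # each row is a distribution
```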
@@ -79,30 +92,50 @@
 
 We solve this model using value function iteration.
 
+
++++
+
+## Code
 
 Let's set up a `namedtuple` to store information needed to solve the model.
 
 ```{code-cell} ipython3
 Model = namedtuple('Model', ('n', 'w_vals', 'P', 'β', 'c'))
 ```
 
-The function below holds default values and populates the namedtuple.
+The function below holds default values and populates the `namedtuple`.
 
 ```{code-cell} ipython3
 def create_js_model(
         n=500,     # wage grid size
         ρ=0.9,     # wage persistence
         ν=0.2,     # wage volatility
         β=0.99,    # discount factor
-        c=1.0      # unemployment compensation
+        c=1.0,     # unemployment compensation
     ):
     "Creates an instance of the job search model with Markov wages."
     mc = qe.tauchen(n, ρ, ν)
-    w_vals, P = jnp.exp(mc.state_values), mc.P
-    P = jnp.array(P)
+    w_vals, P = jnp.exp(mc.state_values), jnp.array(mc.P)
     return Model(n, w_vals, P, β, c)
 ```
 
+Let's test it:
+
+```{code-cell} ipython3
+model = create_js_model(β=0.98)
+```
+
+```{code-cell} ipython3
+model.c
+```
+
+```{code-cell} ipython3
+model.β
+```
+
+```{code-cell} ipython3
+model.w_vals.mean()
+```
+
 Here's the Bellman operator.
 
 ```{code-cell} ipython3
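
The Bellman operator's body lies outside this diff's context. A sketch consistent with `get_greedy` in the next hunk (an assumption, not taken from the commit):

```python
import jax
import jax.numpy as jnp

@jax.jit
def T(v, model):
    "Bellman operator: pointwise max of stopping and continuation values."
    n, w_vals, P, β, c = model
    e = w_vals / (1 - β)     # value of accepting: w/(1-β)
    h = c + β * P @ v        # value of rejecting: c + β E[v(W')]
    return jnp.maximum(e, h)
```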
@@ -135,13 +168,13 @@ $$
 
 Here $\mathbf 1$ is an indicator function.
 
-The statement above means that the worker accepts ($\sigma(w) = 1$) when the value of stopping
-is higher than the value of continuing.
+* $\sigma(w) = 1$ means stop
+* $\sigma(w) = 0$ means continue.
 
 ```{code-cell} ipython3
 @jax.jit
 def get_greedy(v, model):
-    """Get a v-greedy policy."""
+    "Get a v-greedy policy."
     n, w_vals, P, β, c = model
     e = w_vals / (1 - β)
     h = c + β * P @ v
@@ -153,8 +186,7 @@ Here's a routine for value function iteration.
 
 ```{code-cell} ipython3
 def vfi(model, max_iter=10_000, tol=1e-4):
-    """Solve the infinite-horizon Markov job search model by VFI."""
-
+    "Solve the infinite-horizon Markov job search model by VFI."
     print("Starting VFI iteration.")
     v = jnp.zeros_like(model.w_vals)  # Initial guess
     i = 0
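
The middle of `vfi` is also truncated by the diff; a plausible completion, assuming the `T` and `get_greedy` sketched above and consistent with the `return` statement in the next hunk:

```python
def vfi(model, max_iter=10_000, tol=1e-4):
    "Solve the infinite-horizon Markov job search model by VFI."
    print("Starting VFI iteration.")
    v = jnp.zeros_like(model.w_vals)  # Initial guess
    i = 0
    error = tol + 1
    while error > tol and i < max_iter:
        new_v = T(v, model)                  # apply the Bellman operator
        error = jnp.max(jnp.abs(new_v - v))  # sup-norm deviation
        i += 1
        v = new_v
    v_star = v                               # approximate fixed point
    σ_star = get_greedy(v_star, model)       # optimal policy from v_star
    return v_star, σ_star
```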
@@ -171,29 +203,47 @@ def vfi(model, max_iter=10_000, tol=1e-4):
     return v_star, σ_star
 ```
 
-### Computing the solution
+
++++
+
+## Computing the solution
 
 Let's set up and solve the model.
 
 ```{code-cell} ipython3
 model = create_js_model()
 n, w_vals, P, β, c = model
 
-%time v_star, σ_star = vfi(model)
+v_star, σ_star = vfi(model)
 ```
 
-We run it again to eliminate compile time.
+Here's the optimal policy:
 
 ```{code-cell} ipython3
-%time v_star, σ_star = vfi(model)
+fig, ax = plt.subplots()
+ax.plot(σ_star)
+ax.set_xlabel("wage values")
+ax.set_ylabel("optimal choice (stop=1)")
+plt.show()
 ```
 
 We compute the reservation wage as the first $w$ such that $\sigma(w)=1$.
 
 ```{code-cell} ipython3
-res_wage = w_vals[jnp.searchsorted(σ_star, 1.0)]
+stop_indices = jnp.where(σ_star == 1)
+stop_indices
+```
+
+```{code-cell} ipython3
+res_wage_index = min(stop_indices[0])
+```
+
+```{code-cell} ipython3
+res_wage = w_vals[res_wage_index]
 ```
 
+Here's a joint plot of the value function and the reservation wage.
+
 ```{code-cell} ipython3
 fig, ax = plt.subplots()
 ax.plot(w_vals, v_star, alpha=0.8, label="value function")
@@ -228,13 +278,37 @@
 $$
 
 
-When $\theta < 0$ the agent is risk sensitive.
+When $\theta < 0$ the agent is risk averse.
 
 Solve the model when $\theta = -0.1$ and compare your result to the risk neutral
 case.
 
 Try to interpret your result.
 
+You can start with the following code:
+
+```{code-cell} ipython3
+
+RiskModel = namedtuple('Model', ('n', 'w_vals', 'P', 'β', 'c', 'θ'))
+
+def create_risk_sensitive_js_model(
+        n=500,     # wage grid size
+        ρ=0.9,     # wage persistence
+        ν=0.2,     # wage volatility
+        β=0.99,    # discount factor
+        c=1.0,     # unemployment compensation
+        θ=-0.1     # risk parameter
+    ):
+    "Creates an instance of the job search model with Markov wages."
+    mc = qe.tauchen(n, ρ, ν)
+    w_vals, P = jnp.exp(mc.state_values), mc.P
+    P = jnp.array(P)
+    return RiskModel(n, w_vals, P, β, c, θ)
+
+```
+
+Now you need to modify `T` and `get_greedy` and then run value function iteration again.
+
 ```{exercise-end}
 ```
 
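A possible completion of the exercise, assuming the standard risk-sensitive recursion in which $(\beta/\theta) \log \mathbb{E}[e^{\theta v}]$ replaces the risk-neutral $\beta \mathbb{E}[v]$ (the exercise's displayed equation sits outside this diff, so this is a sketch, not the commit's solution):

```python
import jax
import jax.numpy as jnp

@jax.jit
def T_rs(v, model):
    "Risk-sensitive Bellman operator (sketch)."
    n, w_vals, P, β, c, θ = model
    e = w_vals / (1 - β)
    # (β/θ) log E[exp(θ v)] replaces the risk-neutral β E[v]
    h = c + (β / θ) * jnp.log(P @ jnp.exp(θ * v))
    return jnp.maximum(e, h)

@jax.jit
def get_greedy_rs(v, model):
    "v-greedy policy under risk-sensitive preferences (sketch)."
    n, w_vals, P, β, c, θ = model
    e = w_vals / (1 - β)
    h = c + (β / θ) * jnp.log(P @ jnp.exp(θ * v))
    return jnp.where(e >= h, 1, 0)
```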
@@ -311,25 +385,25 @@ model_rs = create_risk_sensitive_js_model()
 
 n, w_vals, P, β, c, θ = model_rs
 
-%time v_star_rs, σ_star_rs = vfi(model_rs)
+v_star_rs, σ_star_rs = vfi(model_rs)
 ```
 
-We run it again to eliminate the compilation time.
-
-```{code-cell} ipython3
-%time v_star_rs, σ_star_rs = vfi(model_rs)
-```
+Let's plot the results together with the original risk neutral case and see what we get.
 
 ```{code-cell} ipython3
-res_wage_rs = w_vals[jnp.searchsorted(σ_star_rs, 1.0)]
+stop_indices = jnp.where(σ_star_rs == 1)
+res_wage_index = min(stop_indices[0])
+res_wage_rs = w_vals[res_wage_index]
 ```
 
 ```{code-cell} ipython3
 fig, ax = plt.subplots()
-ax.plot(w_vals, v_star, alpha=0.8, label="RN $v$")
-ax.plot(w_vals, v_star_rs, alpha=0.8, label="RS $v$")
-ax.vlines((res_wage,), 150, 400, ls='--', color='darkblue', alpha=0.5, label=r"RV $\bar w$")
-ax.vlines((res_wage_rs,), 150, 400, ls='--', color='orange', alpha=0.5, label=r"RS $\bar w$")
+ax.plot(w_vals, v_star, alpha=0.8, label="risk neutral $v$")
+ax.plot(w_vals, v_star_rs, alpha=0.8, label="risk sensitive $v$")
+ax.vlines((res_wage,), 100, 400, ls='--', color='darkblue',
+          alpha=0.5, label=r"risk neutral $\bar w$")
+ax.vlines((res_wage_rs,), 100, 400, ls='--', color='orange',
+          alpha=0.5, label=r"risk sensitive $\bar w$")
 ax.legend(frameon=False, fontsize=12, loc="lower right")
 ax.set_xlabel("$w$", fontsize=12)
 plt.show()

lectures/newtons_method.md

Lines changed: 3 additions & 7 deletions
@@ -20,18 +20,14 @@ kernelspec:
 
 One of the key features of JAX is automatic differentiation.
 
-While other software packages also offer this feature, the JAX version is
-particularly powerful because it integrates so closely with other core
-components of JAX, such as accelerated linear algebra, JIT compilation and
-parallelization.
+We introduced this feature in {doc}`autodiff`.
 
-The application of automatic differentiation we consider is computing economic equilibria via Newton's method.
+In this lecture we apply automatic differentiation to the problem of computing economic equilibria via Newton's method.
 
 Newton's method is a relatively simple root and fixed point solution algorithm, which we discussed
 in [a more elementary QuantEcon lecture](https://python.quantecon.org/newton_method.html).
 
-JAX is almost ideally suited to implementing Newton's method efficiently, even
-in high dimensions.
+JAX is ideally suited to implementing Newton's method efficiently, even in high dimensions.
 
 We use the following imports in this lecture
 
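As a sketch of why JAX suits Newton's method (illustrative only; the lecture's own implementation is not shown in this diff), autodiff pairs naturally with the update $x_{k+1} = x_k - J(x_k)^{-1} f(x_k)$:

```python
import jax
import jax.numpy as jnp

def newton(f, x0, tol=1e-5, max_iter=50):
    "Find x with f(x) = 0 via Newton steps; autodiff supplies the Jacobian."
    jac = jax.jacobian(f)
    x = x0
    for _ in range(max_iter):
        step = jnp.linalg.solve(jac(x), f(x))  # solve J(x) step = f(x)
        x = x - step
        if jnp.linalg.norm(step) < tol:
            break
    return x

# Example: componentwise roots of x^2 - (2, 3)
f = lambda x: x**2 - jnp.array([2.0, 3.0])
print(newton(f, jnp.ones(2)))   # ≈ [1.4142, 1.7321]
```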