
Commit dcaa880

Image Rendering problem resolved (aimacode#1178)
1 parent d2d3f31 commit dcaa880

6 files changed, +23 -23 lines changed

notebooks/chapter19/Learners.ipynb

+2-2
@@ -318,7 +318,7 @@
 "\n",
 "By default we use dense networks with two hidden layers, which has the architecture as the following:\n",
 "\n",
-"<img src=\"images/nn.png\" width=500/>\n",
+"<img src=\"images/nn.png\" width=\"500\"/>\n",
 "\n",
 "In our code, we implemented it as:"
 ]
@@ -500,7 +500,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.7.2"
+"version": "3.6.9"
 }
 },
 "nbformat": 4,

notebooks/chapter19/Loss Functions and Layers.ipynb

+3-3
@@ -40,7 +40,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"<img src= images/mse_plot.png width=500/>"
+"<img src=\"images/mse_plot.png\" width=\"500\"/>"
 ]
 },
 {
@@ -88,7 +88,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"<img src= images/corss_entropy_plot.png width=500/>"
+"<img src=\"images/corss_entropy_plot.png\" width=\"500\"/>"
 ]
 },
 {
@@ -390,7 +390,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.7.2"
+"version": "3.6.9"
 }
 },
 "nbformat": 4,

notebooks/chapter19/Optimizer and Backpropagation.ipynb

+3-3
@@ -251,7 +251,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"<img src=\"images/backprop.png\" width=500/>"
+"<img src=\"images/backprop.png\" width=\"500\"/>"
 ]
 },
 {
@@ -260,7 +260,7 @@
 "source": [
 "Applying optimizers and back-propagation algorithm together, we can update the weights of a neural network to minimize the loss function with alternatively doing forward and back-propagation process. Here is a figure form [here](https://medium.com/datathings/neural-networks-and-backpropagation-explained-in-a-simple-way-f540a3611f5e) describing how a neural network updates its weights:\n",
 "\n",
-"<img src=\"images/nn_steps.png\" width=700></img>"
+"<img src=\"images/nn_steps.png\" width=\"700\"></img>"
 ]
 },
 {
@@ -303,7 +303,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.7.2"
+"version": "3.6.9"
 }
 },
 "nbformat": 4,

notebooks/chapter19/RNN.ipynb

+6-6
@@ -12,7 +12,7 @@
 "\n",
 "Recurrent neural networks address this issue. They are networks with loops in them, allowing information to persist.\n",
 "\n",
-"<img src=\"images/rnn_unit.png\" width=500/>"
+"<img src=\"images/rnn_unit.png\" width=\"500\"/>"
 ]
 },
 {
@@ -21,7 +21,7 @@
 "source": [
 "A recurrent neural network can be thought of as multiple copies of the same network, each passing a message to a successor. Consider what happens if we unroll the above loop:\n",
 " \n",
-"<img src=\"images/rnn_units.png\" width=500/>"
+"<img src=\"images/rnn_units.png\" width=\"500\"/>"
 ]
 },
 {
@@ -30,7 +30,7 @@
 "source": [
 "As demonstrated in the book, recurrent neural networks may be connected in many different ways: sequences in the input, the output, or in the most general case both.\n",
 "\n",
-"<img src=\"images/rnn_connections.png\" width=700/>"
+"<img src=\"images/rnn_connections.png\" width=\"700\"/>"
 ]
 },
 {
@@ -303,7 +303,7 @@
 "\n",
 "Autoencoders are an unsupervised learning technique in which we leverage neural networks for the task of representation learning. It works by compressing the input into a latent-space representation, to do transformations on the data. \n",
 "\n",
-"<img src=\"images/autoencoder.png\" width=800/>"
+"<img src=\"images/autoencoder.png\" width=\"800\"/>"
 ]
 },
 {
@@ -314,7 +314,7 @@
 "\n",
 "Autoencoders have different architectures for different kinds of data. Here we only provide a simple example of a vanilla encoder, which means they're only one hidden layer in the network:\n",
 "\n",
-"<img src=\"images/vanilla.png\" width=500/>\n",
+"<img src=\"images/vanilla.png\" width=\"500\"/>\n",
 "\n",
 "You can view the source code by:"
 ]
@@ -479,7 +479,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.6.8"
+"version": "3.6.9"
 }
 },
 "nbformat": 4,

notebooks/chapter24/Image Edge Detection.ipynb

+6-6
@@ -69,7 +69,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"<img src=\"images/gradients.png\" width=700/>"
+"<img src=\"images/gradients.png\" width=\"700\"/>"
 ]
 },
 {
@@ -105,7 +105,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"<img src=\"images/stapler.png\" width=500/>\n",
+"<img src=\"images/stapler.png\" width=\"500\"/>\n",
 "\n",
 "We will use `matplotlib` to read the image as a numpy ndarray:"
 ]
@@ -226,7 +226,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"<img src=\"images/derivative_of_gaussian.png\" width=400/>"
+"<img src=\"images/derivative_of_gaussian.png\" width=\"400\"/>"
 ]
 },
 {
@@ -318,7 +318,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"<img src=\"images/laplacian.png\" width=200/>"
+"<img src=\"images/laplacian.png\" width=\"200\"/>"
 ]
 },
 {
@@ -334,7 +334,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"<img src=\"images/laplacian_kernels.png\" width=300/>"
+"<img src=\"images/laplacian_kernels.png\" width=\"300\"/>"
 ]
 },
 {
@@ -400,7 +400,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.7.2"
+"version": "3.6.9"
 }
 },
 "nbformat": 4,

notebooks/chapter24/Objects in Images.ipynb

+3-3
@@ -306,7 +306,7 @@
 "source": [
 "The bounding boxes are drawn on the original picture showed in the following:\n",
 "\n",
-"<img src=\"images/stapler_bbox.png\" width=500/>"
+"<img src=\"images/stapler_bbox.png\" width=\"500\"/>"
 ]
 },
 {
@@ -324,7 +324,7 @@
 "\n",
 "[Ross Girshick et al.](https://arxiv.org/pdf/1311.2524.pdf) proposed a method where they use selective search to extract just 2000 regions from the image. Then the regions in bounding boxes are feed into a convolutional neural network to perform classification. The brief architecture can be shown as:\n",
 "\n",
-"<img src=\"images/RCNN.png\" width=500/>"
+"<img src=\"images/RCNN.png\" width=\"500\"/>"
 ]
 },
 {
@@ -446,7 +446,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.7.2"
+"version": "3.6.9"
 }
 },
 "nbformat": 4,

0 commit comments
