
Commit af77378

committed: typos

1 parent c3fac15

File tree

6 files changed, +271 -303 lines changed


doc/pub/week48/html/week48-reveal.html

Lines changed: 24 additions & 37 deletions
Original file line number | Diff line number | Diff line change
@@ -243,7 +243,7 @@ <h2 id="random-forest-algorithm-reminder-from-last-week">Random Forest Algorithm
243243
<p>The algorithm described here can be applied to both classification and regression problems.</p>
244244

245245
<p>We will grow a forest of, say, \( B \) trees.</p>
246-
<ol>
246+
<ul>
247247
<p><li> For \( b=1:B \)
248248
<ol type="a"></li>
249249
<p><li> Draw a bootstrap sample from the training data organized in our \( \boldsymbol{X} \) matrix.</li>
@@ -259,8 +259,9 @@ <h2 id="random-forest-algorithm-reminder-from-last-week">Random Forest Algorithm
259259
<p>
260260
</ol>
261261
<p>
262-
<p><li> Output then the ensemble of trees \( \{T_b\}_1^{B} \) and make predictions for either a regression type of problem or a classification type of problem.</li>
263-
</ol>
262+
</ul>
263+
<p>
264+
<p>Finally, we output the ensemble of trees \( \{T_b\}_1^{B} \) and make predictions for either a regression or a classification problem.</p>
264265
</section>
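The bagging loop in this hunk (for \( b=1:B \), draw a bootstrap sample, grow a tree \( T_b \), then output the ensemble \( \{T_b\}_1^{B} \)) can be sketched in plain Python. The per-tree step below is a deliberately tiny one-feature stump standing in for a full decision tree, and all names (`grow_forest`, `predict`) are illustrative, not from the course code:

```python
import numpy as np

rng = np.random.default_rng(0)

def grow_forest(X, y, B=25):
    """Grow B 'trees' on bootstrap samples of (X, y).

    Each tree here is a one-level stump: threshold on a randomly
    chosen feature, majority class (assuming binary 0/1 labels)
    in each daughter node. It is a tiny stand-in for the full
    tree-growing step of the algorithm."""
    forest = []
    n, p = X.shape
    for b in range(B):
        idx = rng.integers(0, n, size=n)    # bootstrap sample, with replacement
        Xb, yb = X[idx], y[idx]
        j = int(rng.integers(0, p))         # random feature (subset of size 1)
        t = Xb[:, j].mean()                 # split point for the node
        left, right = yb[Xb[:, j] <= t], yb[Xb[:, j] > t]
        cl = int(round(left.mean())) if left.size else 0    # majority class, left daughter
        cr = int(round(right.mean())) if right.size else 1  # majority class, right daughter
        forest.append((j, t, cl, cr))
    return forest

def predict(forest, x):
    """Classification: majority vote over the ensemble {T_b}."""
    votes = [cl if x[j] <= t else cr for j, t, cl, cr in forest]
    return int(round(np.mean(votes)))
```

For a regression problem one would instead average the numerical outputs of the trees rather than take a majority vote.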
265266

266267
<section>
@@ -1361,8 +1362,7 @@ <h2 id="statistical-analysis-and-optimization-of-data">Statistical analysis and
13611362
<section>
13621363
<h2 id="machine-learning">Machine learning </h2>
13631364

1364-
<p>The following topics will be covered</p>
1365-
<ol>
1365+
<ul>
13661366
<p><li> Linear methods for regression and classification:
13671367
<ol type="a"></li>
13681368
<p><li> Ordinary Least Squares</li>
@@ -1393,7 +1393,7 @@ <h2 id="machine-learning">Machine learning </h2>
13931393
<p><li> Regression</li>
13941394
</ol>
13951395
<p>
1396-
</ol>
1396+
</ul>
13971397
</section>
13981398

13991399
<section>
@@ -1468,17 +1468,17 @@ <h2 id="starting-your-machine-learning-project">Starting your Machine Learning P
14681468
<section>
14691469
<h2 id="choose-a-model-and-algorithm">Choose a Model and Algorithm </h2>
14701470

1471-
<ol>
1471+
<ul>
14721472
<p><li> Supervised?</li>
14731473
<p><li> Start with the simplest model that fits your problem</li>
14741474
<p><li> Start with minimal processing of data</li>
1475-
</ol>
1475+
</ul>
14761476
</section>
14771477

14781478
<section>
14791479
<h2 id="preparing-your-data">Preparing Your Data </h2>
14801480

1481-
<ol>
1481+
<ul>
14821482
<p><li> Shuffle your data</li>
14831483
<p><li> Mean center your data</li>
14841484
<ul>
@@ -1500,51 +1500,38 @@ <h2 id="preparing-your-data">Preparing Your Data </h2>
15001500
<p><li> Can be hit or miss</li>
15011501
</ul>
15021502
<p>
1503-
<p><li> When to do train/test split?</li>
1504-
</ol>
1503+
<p><li> When to do train/test split?</li>
1504+
</ul>
15051505
</section>
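The checklist above (shuffle, mean center, and the question of when to do the train/test split) can be sketched as one preprocessing routine; computing the centering statistics on the training set only, i.e. splitting before preprocessing, is one common answer to that question. Function and variable names here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def prepare(X, y, test_frac=0.2):
    """Shuffle, split into train/test, then mean center using
    training-set statistics only, so no information from the
    test set leaks into the preprocessing."""
    n = X.shape[0]
    perm = rng.permutation(n)            # shuffle the data
    X, y = X[perm], y[perm]
    n_test = int(test_frac * n)          # split AFTER shuffling
    X_train, X_test = X[n_test:], X[:n_test]
    y_train, y_test = y[n_test:], y[:n_test]
    mu = X_train.mean(axis=0)            # mean from the training set only
    return X_train - mu, X_test - mu, y_train, y_test
```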
15061506

15071507
<section>
15081508
<h2 id="which-activation-and-weights-to-choose-in-neural-networks">Which activation and weights to choose in neural networks </h2>
15091509

1510-
<ol>
1510+
<ul>
15111511
<p><li> ReLU? ELU? GELU? etc.</li>
15121512
<p><li> Sigmoid or Tanh?</li>
1513-
<p><li> Set all weights to 0?</li>
1514-
<ul>
1515-
1516-
<p><li> Terrible idea</li>
1517-
</ul>
1518-
<p>
1519-
<p><li> Set all weights to random values?</li>
1520-
<ul>
1521-
1522-
<p><li> Small random values</li>
1513+
<p><li> Set all weights to 0? Terrible idea</li>
1514+
<p><li> Set all weights to random values? Small random values</li>
15231515
</ul>
1524-
<p>
1525-
</ol>
15261516
</section>
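The initialization points in this hunk reduce to a one-liner per layer: all-zero weights leave every hidden unit identical after each update (the symmetry problem), while small random values break that symmetry. The 0.01 scale below is a common illustrative choice, not a prescription from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_in, n_out, scale=0.01):
    """Small random Gaussian weights break the symmetry that
    all-zero initialization would leave in place; biases can
    safely start at zero since the weights already differ."""
    W = scale * rng.standard_normal((n_in, n_out))  # small random values
    b = np.zeros(n_out)                             # zero biases are fine
    return W, b
```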
15271517

15281518
<section>
15291519
<h2 id="optimization-methods-and-hyperparameters">Optimization Methods and Hyperparameters </h2>
1530-
<ol>
1531-
<p><li> Stochastic gradient descent
1532-
<ol type="a"></li>
1533-
<p><li> Stochastic gradient descent + momentum</li>
1534-
</ol>
1535-
<p>
1536-
<p><li> State-of-the-art approaches:</li>
15371520
<ul>
1538-
1539-
<p><li> RMSProp</li>
1540-
1541-
<p><li> Adam</li>
1542-
1543-
<p><li> and more</li>
1521+
<p><li> Stochastic gradient descent</li>
1522+
<ul>
1523+
<p><li> Stochastic gradient descent + momentum</li>
15441524
</ul>
15451525
<p>
1526+
<p><li> State-of-the-art approaches:
1527+
<ol type="a"></li>
1528+
<p><li> RMSProp</li>
1529+
<p><li> Adam</li>
1530+
<p><li> and more</li>
15461531
</ol>
15471532
<p>
1533+
</ul>
1534+
<p>
15481535
<p>Which regularization and hyperparameters? \( L_1 \) or \( L_2 \), soft
15491536
classifiers, the depth of trees, and many others. One needs to explore a large
15501537
set of hyperparameters and regularization methods.
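The optimizer hierarchy listed above (SGD, SGD with momentum, then state-of-the-art methods such as RMSProp and Adam) can be illustrated with a minimal momentum update. This is a sketch on a deterministic quadratic rather than the stochastic mini-batch setting, and the per-parameter rescaling that RMSProp and Adam add is left out:

```python
import numpy as np

def sgd_momentum(grad, theta0, lr=0.1, beta=0.9, steps=200):
    """Gradient descent with momentum: the velocity v accumulates an
    exponentially weighted sum of past gradients, which damps the
    oscillations of plain gradient descent. Setting beta=0 recovers
    the plain method; parameter names are illustrative."""
    theta = np.asarray(theta0, dtype=float)
    v = np.zeros_like(theta)
    for _ in range(steps):
        v = beta * v + grad(theta)   # velocity (momentum) update
        theta = theta - lr * v       # parameter update
    return theta

# Minimize the quadratic f(theta) = theta^2, whose gradient is 2*theta.
theta_min = sgd_momentum(lambda t: 2.0 * t, [5.0])
```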

doc/pub/week48/html/week48-solarized.html

Lines changed: 23 additions & 29 deletions
Original file line number | Diff line number | Diff line change
@@ -353,7 +353,7 @@ <h2 id="random-forest-algorithm-reminder-from-last-week">Random Forest Algorithm
353353
<p>The algorithm described here can be applied to both classification and regression problems.</p>
354354

355355
<p>We will grow a forest of, say, \( B \) trees.</p>
356-
<ol>
356+
<ul>
357357
<li> For \( b=1:B \)
358358
<ol type="a"></li>
359359
<li> Draw a bootstrap sample from the training data organized in our \( \boldsymbol{X} \) matrix.</li>
@@ -364,8 +364,9 @@ <h2 id="random-forest-algorithm-reminder-from-last-week">Random Forest Algorithm
364364
<li> split the node into daughter nodes</li>
365365
</ol>
366366
</ol>
367-
<li> Output then the ensemble of trees \( \{T_b\}_1^{B} \) and make predictions for either a regression type of problem or a classification type of problem.</li>
368-
</ol>
367+
</ul>
368+
<p>Finally, we output the ensemble of trees \( \{T_b\}_1^{B} \) and make predictions for either a regression or a classification problem.</p>
369+
369370
<!-- !split --><br><br><br><br><br><br><br><br><br><br>
370371
<h2 id="random-forests-compared-with-other-methods-on-the-cancer-data">Random Forests Compared with other Methods on the Cancer Data </h2>
371372

@@ -1377,8 +1378,7 @@ <h2 id="statistical-analysis-and-optimization-of-data">Statistical analysis and
13771378
<!-- !split --><br><br><br><br><br><br><br><br><br><br>
13781379
<h2 id="machine-learning">Machine learning </h2>
13791380

1380-
<p>The following topics will be covered</p>
1381-
<ol>
1381+
<ul>
13821382
<li> Linear methods for regression and classification:
13831383
<ol type="a"></li>
13841384
<li> Ordinary Least Squares</li>
@@ -1405,7 +1405,7 @@ <h2 id="machine-learning">Machine learning </h2>
14051405
<li> Kernel methods</li>
14061406
<li> Regression</li>
14071407
</ol>
1408-
</ol>
1408+
</ul>
14091409
<!-- !split --><br><br><br><br><br><br><br><br><br><br>
14101410
<h2 id="learning-outcomes-and-overarching-aims-of-this-course">Learning outcomes and overarching aims of this course </h2>
14111411

@@ -1470,15 +1470,15 @@ <h2 id="starting-your-machine-learning-project">Starting your Machine Learning P
14701470
<!-- !split --><br><br><br><br><br><br><br><br><br><br>
14711471
<h2 id="choose-a-model-and-algorithm">Choose a Model and Algorithm </h2>
14721472

1473-
<ol>
1473+
<ul>
14741474
<li> Supervised?</li>
14751475
<li> Start with the simplest model that fits your problem</li>
14761476
<li> Start with minimal processing of data</li>
1477-
</ol>
1477+
</ul>
14781478
<!-- !split --><br><br><br><br><br><br><br><br><br><br>
14791479
<h2 id="preparing-your-data">Preparing Your Data </h2>
14801480

1481-
<ol>
1481+
<ul>
14821482
<li> Shuffle your data</li>
14831483
<li> Mean center your data</li>
14841484
<ul>
@@ -1493,37 +1493,31 @@ <h2 id="preparing-your-data">Preparing Your Data </h2>
14931493
<li> Decorrelates data</li>
14941494
<li> Can be hit or miss</li>
14951495
</ul>
1496-
<li> When to do train/test split?</li>
1497-
</ol>
1496+
<li> When to do train/test split?</li>
1497+
</ul>
14981498
<!-- !split --><br><br><br><br><br><br><br><br><br><br>
14991499
<h2 id="which-activation-and-weights-to-choose-in-neural-networks">Which activation and weights to choose in neural networks </h2>
15001500

1501-
<ol>
1501+
<ul>
15021502
<li> ReLU? ELU? GELU? etc.</li>
15031503
<li> Sigmoid or Tanh?</li>
1504-
<li> Set all weights to 0?</li>
1505-
<ul>
1506-
<li> Terrible idea</li>
1507-
</ul>
1508-
<li> Set all weights to random values?</li>
1509-
<ul>
1510-
<li> Small random values</li>
1504+
<li> Set all weights to 0? Terrible idea</li>
1505+
<li> Set all weights to random values? Small random values</li>
15111506
</ul>
1512-
</ol>
15131507
<!-- !split --><br><br><br><br><br><br><br><br><br><br>
15141508
<h2 id="optimization-methods-and-hyperparameters">Optimization Methods and Hyperparameters </h2>
1515-
<ol>
1516-
<li> Stochastic gradient descent
1517-
<ol type="a"></li>
1518-
<li> Stochastic gradient descent + momentum</li>
1519-
</ol>
1520-
<li> State-of-the-art approaches:</li>
15211509
<ul>
1522-
<li> RMSProp</li>
1523-
<li> Adam</li>
1524-
<li> and more</li>
1510+
<li> Stochastic gradient descent</li>
1511+
<ul>
1512+
<li> Stochastic gradient descent + momentum</li>
15251513
</ul>
1514+
<li> State-of-the-art approaches:
1515+
<ol type="a"></li>
1516+
<li> RMSProp</li>
1517+
<li> Adam</li>
1518+
<li> and more</li>
15261519
</ol>
1520+
</ul>
15271521
<p>Which regularization and hyperparameters? \( L_1 \) or \( L_2 \), soft
15281522
classifiers, the depth of trees, and many others. One needs to explore a large
15291523
set of hyperparameters and regularization methods.

doc/pub/week48/html/week48.html

Lines changed: 23 additions & 29 deletions
Original file line number | Diff line number | Diff line change
@@ -430,7 +430,7 @@ <h2 id="random-forest-algorithm-reminder-from-last-week">Random Forest Algorithm
430430
<p>The algorithm described here can be applied to both classification and regression problems.</p>
431431

432432
<p>We will grow a forest of, say, \( B \) trees.</p>
433-
<ol>
433+
<ul>
434434
<li> For \( b=1:B \)
435435
<ol type="a"></li>
436436
<li> Draw a bootstrap sample from the training data organized in our \( \boldsymbol{X} \) matrix.</li>
@@ -441,8 +441,9 @@ <h2 id="random-forest-algorithm-reminder-from-last-week">Random Forest Algorithm
441441
<li> split the node into daughter nodes</li>
442442
</ol>
443443
</ol>
444-
<li> Output then the ensemble of trees \( \{T_b\}_1^{B} \) and make predictions for either a regression type of problem or a classification type of problem.</li>
445-
</ol>
444+
</ul>
445+
<p>Finally, we output the ensemble of trees \( \{T_b\}_1^{B} \) and make predictions for either a regression or a classification problem.</p>
446+
446447
<!-- !split --><br><br><br><br><br><br><br><br><br><br>
447448
<h2 id="random-forests-compared-with-other-methods-on-the-cancer-data">Random Forests Compared with other Methods on the Cancer Data </h2>
448449

@@ -1454,8 +1455,7 @@ <h2 id="statistical-analysis-and-optimization-of-data">Statistical analysis and
14541455
<!-- !split --><br><br><br><br><br><br><br><br><br><br>
14551456
<h2 id="machine-learning">Machine learning </h2>
14561457

1457-
<p>The following topics will be covered</p>
1458-
<ol>
1458+
<ul>
14591459
<li> Linear methods for regression and classification:
14601460
<ol type="a"></li>
14611461
<li> Ordinary Least Squares</li>
@@ -1482,7 +1482,7 @@ <h2 id="machine-learning">Machine learning </h2>
14821482
<li> Kernel methods</li>
14831483
<li> Regression</li>
14841484
</ol>
1485-
</ol>
1485+
</ul>
14861486
<!-- !split --><br><br><br><br><br><br><br><br><br><br>
14871487
<h2 id="learning-outcomes-and-overarching-aims-of-this-course">Learning outcomes and overarching aims of this course </h2>
14881488

@@ -1547,15 +1547,15 @@ <h2 id="starting-your-machine-learning-project">Starting your Machine Learning P
15471547
<!-- !split --><br><br><br><br><br><br><br><br><br><br>
15481548
<h2 id="choose-a-model-and-algorithm">Choose a Model and Algorithm </h2>
15491549

1550-
<ol>
1550+
<ul>
15511551
<li> Supervised?</li>
15521552
<li> Start with the simplest model that fits your problem</li>
15531553
<li> Start with minimal processing of data</li>
1554-
</ol>
1554+
</ul>
15551555
<!-- !split --><br><br><br><br><br><br><br><br><br><br>
15561556
<h2 id="preparing-your-data">Preparing Your Data </h2>
15571557

1558-
<ol>
1558+
<ul>
15591559
<li> Shuffle your data</li>
15601560
<li> Mean center your data</li>
15611561
<ul>
@@ -1570,37 +1570,31 @@ <h2 id="preparing-your-data">Preparing Your Data </h2>
15701570
<li> Decorrelates data</li>
15711571
<li> Can be hit or miss</li>
15721572
</ul>
1573-
<li> When to do train/test split?</li>
1574-
</ol>
1573+
<li> When to do train/test split?</li>
1574+
</ul>
15751575
<!-- !split --><br><br><br><br><br><br><br><br><br><br>
15761576
<h2 id="which-activation-and-weights-to-choose-in-neural-networks">Which activation and weights to choose in neural networks </h2>
15771577

1578-
<ol>
1578+
<ul>
15791579
<li> ReLU? ELU? GELU? etc.</li>
15801580
<li> Sigmoid or Tanh?</li>
1581-
<li> Set all weights to 0?</li>
1582-
<ul>
1583-
<li> Terrible idea</li>
1584-
</ul>
1585-
<li> Set all weights to random values?</li>
1586-
<ul>
1587-
<li> Small random values</li>
1581+
<li> Set all weights to 0? Terrible idea</li>
1582+
<li> Set all weights to random values? Small random values</li>
15881583
</ul>
1589-
</ol>
15901584
<!-- !split --><br><br><br><br><br><br><br><br><br><br>
15911585
<h2 id="optimization-methods-and-hyperparameters">Optimization Methods and Hyperparameters </h2>
1592-
<ol>
1593-
<li> Stochastic gradient descent
1594-
<ol type="a"></li>
1595-
<li> Stochastic gradient descent + momentum</li>
1596-
</ol>
1597-
<li> State-of-the-art approaches:</li>
15981586
<ul>
1599-
<li> RMSProp</li>
1600-
<li> Adam</li>
1601-
<li> and more</li>
1587+
<li> Stochastic gradient descent</li>
1588+
<ul>
1589+
<li> Stochastic gradient descent + momentum</li>
16021590
</ul>
1591+
<li> State-of-the-art approaches:
1592+
<ol type="a"></li>
1593+
<li> RMSProp</li>
1594+
<li> Adam</li>
1595+
<li> and more</li>
16031596
</ol>
1597+
</ul>
16041598
<p>Which regularization and hyperparameters? \( L_1 \) or \( L_2 \), soft
16051599
classifiers, the depth of trees, and many others. One needs to explore a large
16061600
set of hyperparameters and regularization methods.
0 Bytes
Binary file not shown.
