Lab10
for cs344 at Calvin University, spring 2020
mrsillydog committed Apr 18, 2020
1 parent 1241d51 commit 66cf93c
Showing 4 changed files with 88 additions and 0 deletions.
16 changes: 16 additions & 0 deletions lab10/lab10_1.txt
@@ -0,0 +1,16 @@
Exercise 10.1
a. I would rather use Keras. If I were a more experienced model developer and knew how to use TensorFlow's greater flexibility to my advantage, I would probably choose it, but as a novice, Keras gets my vote because it is significantly simpler to use.
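
For illustration, here is roughly what a comparable two-hidden-layer regressor looks like in Keras: a minimal sketch, with the layer sizes borrowed from Task 1 below and a 9-feature input shape assumed.

import tensorflow as tf

# Hypothetical sketch: two ReLU hidden layers feeding one linear output.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(20, activation='relu', input_shape=(9,)),
    tf.keras.layers.Dense(5, activation='relu'),
    tf.keras.layers.Dense(1)  # single linear output for regression
])
model.compile(optimizer='sgd', loss='mse')
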
b.
Task 1:
dnn_regressor = train_nn_regression_model(
    learning_rate=0.0005,
    steps=2000,
    batch_size=100,
    hidden_units=[20, 5],
    training_examples=training_examples,
    training_targets=training_targets,
    validation_examples=validation_examples,
    validation_targets=validation_targets)

Task 2:
Final RMSE (on test data): 106.96
62 changes: 62 additions & 0 deletions lab10/lab10_2.txt
@@ -0,0 +1,62 @@
Exercise 10.2
a. The Adagrad optimizer adapts the learning rate for each coefficient in the model individually, rather than updating every coefficient with the same fixed global learning rate. Because it accumulates the squared gradients seen so far, the effective learning rate decreases monotonically.
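
A minimal NumPy sketch of the idea (illustrative only, not TensorFlow's actual implementation):

import numpy as np

def adagrad_update(param, grad, accum, lr=0.6, eps=1e-7):
    # Accumulate the squared gradient seen so far for each coefficient.
    accum += grad ** 2
    # Dividing by the growing accumulator shrinks the effective step,
    # so the effective learning rate only ever decreases.
    param -= lr * grad / (np.sqrt(accum) + eps)
    return param, accum
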
b.
Task 1
_ = train_nn_regression_model(
    my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.006),
    steps=4000,
    batch_size=100,
    hidden_units=[10, 10],
    training_examples=normalized_training_examples,
    training_targets=training_targets,
    validation_examples=normalized_validation_examples,
    validation_targets=validation_targets)

Final RMSE (on training data): 67.03
Final RMSE (on validation data): 66.90

Task 2
_, adagrad_training_losses, adagrad_validation_losses = train_nn_regression_model(
    my_optimizer=tf.train.AdagradOptimizer(learning_rate=0.6),
    steps=600,
    batch_size=100,
    hidden_units=[10, 10],
    training_examples=normalized_training_examples,
    training_targets=training_targets,
    validation_examples=normalized_validation_examples,
    validation_targets=validation_targets)

Final RMSE (on training data): 67.99
Final RMSE (on validation data): 67.88

_, adam_training_losses, adam_validation_losses = train_nn_regression_model(
    my_optimizer=tf.train.AdamOptimizer(learning_rate=0.01),
    steps=600,
    batch_size=100,
    hidden_units=[10, 10],
    training_examples=normalized_training_examples,
    training_targets=training_targets,
    validation_examples=normalized_validation_examples,
    validation_targets=validation_targets)

Final RMSE (on training data): 68.56
Final RMSE (on validation data): 68.34

Task 3

z_score_normalize:

_ = train_nn_regression_model(
    my_optimizer=tf.train.GradientDescentOptimizer(learning_rate=0.0015),
    steps=5000,
    batch_size=100,
    hidden_units=[10, 10],
    training_examples=normalized_training_examples,
    training_targets=training_targets,
    validation_examples=normalized_validation_examples,
    validation_targets=validation_targets)

Final RMSE (on training data): 67.24
Final RMSE (on validation data): 66.93
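
The z_score_normalize helper used above isn't shown in this file; a minimal sketch of the usual implementation, assuming pandas DataFrames like the exercise's:

import pandas as pd

def z_score_normalize(examples_dataframe):
    # Rescale each column to zero mean and unit standard deviation.
    processed = pd.DataFrame()
    for column in examples_dataframe.columns:
        mean = examples_dataframe[column].mean()
        std = examples_dataframe[column].std()
        processed[column] = (examples_dataframe[column] - mean) / std
    return processed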

c. I skipped the optional challenge.
5 changes: 5 additions & 0 deletions lab10/lab10_3.txt
@@ -0,0 +1,5 @@
Exercise 10.3
a. The confusion matrix indicates which classes, or digits, are commonly misclassified as each other. It displays both the number of examples of each class that were correctly classified and the number that were misclassified as each other class.
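
For example, a minimal sketch of computing one with scikit-learn (the label arrays here are hypothetical):

from sklearn.metrics import confusion_matrix

y_true = [3, 5, 3, 8, 9, 5]  # hypothetical true digit labels
y_pred = [3, 5, 5, 8, 9, 3]  # hypothetical predicted labels
cm = confusion_matrix(y_true, y_pred, labels=list(range(10)))
# cm[i][j] counts examples of digit i classified as digit j;
# the diagonal holds the correct classifications.
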
b. The TensorFlow network architecture had two hidden layers, just like the Keras network given in class, but instead of a 512-node layer with ReLU activation and a 10-node layer with softmax activation, the TF hidden layers each had 100 nodes with ReLU activation.
I couldn't make any significant improvement over the baseline test-set accuracy for this task.
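
In the exercise's estimator style, that architecture looks roughly like the sketch below; the feature column and learning rate are assumptions, not values from my actual code.

import tensorflow as tf

feature_columns = [tf.feature_column.numeric_column('pixels', shape=784)]
classifier = tf.estimator.DNNClassifier(
    feature_columns=feature_columns,
    n_classes=10,
    hidden_units=[100, 100],  # two hidden layers, 100 nodes each, ReLU by default
    optimizer=tf.train.AdagradOptimizer(learning_rate=0.05))
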
c. At 10 steps, the images are essentially random colored noise, with no apparent pattern at all. At 1000 steps, they're still grainy, but some patterns are now easily discerned.
5 changes: 5 additions & 0 deletions lab10/lab10_4.txt
@@ -0,0 +1,5 @@
Exercise 10.4
a. i. For both cats and dogs, there are 1000 training images and 500 validation images in the datasets. Each image is 150x150 pixels.
ii. Unlike the CNN we built in class, the first CNN here has an explicit input layer, and it adds an extra max pooling layer between the final convolution layer and the flattening layer. Every layer also has significantly more nodes than its in-class counterpart, with the exception of the final Dense layer. (A rough sketch of this architecture follows a.iii below.)
iii. The obvious pattern is that the representations become less and less interpretable to the human eye as the image moves through the network. At first, you can more or less tell that the representation is of a cat, but by the end it looks like random colored pixels.
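
Here's the rough shape of that first CNN as a Keras sketch, reconstructed for illustration rather than copied verbatim from the exercise:

from tensorflow.keras import layers, Model

img_input = layers.Input(shape=(150, 150, 3))  # 150x150 RGB images
x = layers.Conv2D(16, 3, activation='relu')(img_input)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(32, 3, activation='relu')(x)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(64, 3, activation='relu')(x)
x = layers.MaxPooling2D(2)(x)  # the extra final pooling layer noted in a.ii
x = layers.Flatten()(x)
x = layers.Dense(512, activation='relu')(x)
output = layers.Dense(1, activation='sigmoid')(x)  # binary cat-vs-dog output
model = Model(img_input, output)
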
b. I skipped Exercises 2 & 3.
