Commit 3c8b017 (parent: 64b643a)

Clarify steps if reload weights for activation based priming

File tree

1 file changed: +1 −1

ch7/priming/README.md

@@ -64,7 +64,7 @@ You can optionally explore turning the `Lrate` parameter down to .01 or even low
 
 Next, we can see to what extent residual activation from one trial to the next can bias processing. To set up this test, we want the network to have a weight-based bias to respond with the `b` output:
 
-* Click `Set Env` and select `TrainB`, and do `Step Epoch`, to ensure that it generally responds `b` (if you experimented with learning rate, or did a lot of tests with the weight-based case, it might be better to start fresh with `Init`, `Open Trained Wts` and then the `TrainB` etc).
+* Click `Set Env` and select `TrainB`, and do `Step Epoch`, to ensure that it generally responds `b`. (If you experimented with the learning rate, or ran many tests with the weight-based case, it may be better to start fresh with `Init` and `Open Trained Wts`, and then `Step Epoch` with `TrainB` until you see that it generally responds `b`.)
 
 Next, we will use the `TrainAll` patterns for testing, because they alternate between the `a` and `b` versions of each input when presented sequentially -- we will test for the extent to which the residual activation from the `a` item can bias processing on the subsequent `b` case. Note that we record the response of the network in the *minus* phase, and the specific `Output` is then clamped in the plus phase (even during testing), so we can observe the effects of, e.g., the `0_a` `Output` activation (with the `a` pattern) on the tendency to bias the network to produce an `a` response again for the 0 input, despite the weights being biased in favor of producing the `b` output.
 
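The activation-based priming effect described in the changed text can be illustrated with a minimal sketch. This is not the emergent simulation or its API; it is a hypothetical two-output-unit model in which the names `respond`, `w_bias`, and `persist` are invented for illustration. Weights are biased toward output `b`, but decayed residual activation carried over from a prior `a` trial can tip an ambiguous input back toward `a`:

```python
# Minimal sketch (not the emergent API) of activation-based priming:
# weights favor output 'b', but residual activation from the previous
# trial persists into the next trial and can bias the response to 'a'.

import numpy as np

def respond(input_drive, residual, w_bias=0.2, persist=0.5):
    """Net input = feedforward drive plus decayed residual activation;
    w_bias adds a weight-based bias favoring output unit 1 ('b')."""
    net = input_drive + persist * residual
    net = net + np.array([0.0, w_bias])   # weight-based bias toward 'b'
    act = np.maximum(net, 0.0)            # simple rectified activation
    return act, ("a", "b")[int(np.argmax(act))]

# an ambiguous input drives both output units equally
ambiguous = np.array([1.0, 1.0])

# no residual activation: the weight bias wins -> responds 'b'
_, resp1 = respond(ambiguous, residual=np.zeros(2))
print(resp1)  # b

# residual activation left over from an 'a' trial -> responds 'a'
_, resp2 = respond(ambiguous, residual=np.array([1.0, 0.0]))
print(resp2)  # a
```

The `persist` parameter plays the role of residual activation strength: at 0 the response is purely weight-driven, while larger values let the previous trial's activation override the weight-based `b` bias.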
