
Commit 79bc0d7

update activation-based priming
Parent: 5fbfaf3

File tree

1 file changed (+2, -2 lines)

ch7/priming/README.md

+2-2
```diff
@@ -68,11 +68,11 @@ Next, we will use the `AltAB` patterns for testing, because they alternate betwe
 
 * Click `Set Env` and select `Test alt AB` to use this full set of alternating `AltAB` patterns during _testing_, and then switch to `Test` instead of `Train` mode, and do `Init` (which will not initialize the weights because we are in `Test` mode), `Run` to see the baseline level of responding, while looking at the `Test Trial Plot`.
 
-This is a baseline, because we are still clearing all of the activation out of the network between each input, due to the `Decay` parameter being set to the default of 1. You should see that the network responds _consistently_ to both instances of the same input pattern. For example, if it responds `a` to the first `0` input, then it also responds `a` to the second input right after that. Similarly, if the network responds `b` to the first trial of an input pattern, then it also responds `b' to the second trial of the input pattern. There is no biasing toward `a` after the first trial, and no evidence of activation priming here.
+This is a baseline, because we are still clearing all of the activation out of the network between each input, due to the `Decay` parameter being set to the default of 1. You should see that the network responds _consistently_ to both instances of the same input pattern. For example, if it responds `a` to the first `0` input, then it also responds `a` to the second input right after that. Similarly, if the network responds `b` to the first trial of an input pattern, then it also responds `b` to the second trial of the input pattern. There is no biasing toward `a` after the first trial, and no evidence of activation priming here.
 
 * Set `Decay` to 0 instead of 1, and do another `Init` and `Run`. You should now observe a very different pattern, where the responses to the second trial of an input pattern are more likely to be `a` than the first trial of the same input pattern. This looks like a "sawtooth" kind of jaggy pattern in the test plot.
 
-> **Question 7.8:** Comparing the 1st trials and 2nd trials of each input pattern (the 1st and 2nd 0, the 1st and 2nd 1, and so on), report the number of times the network responded 'b' to the first trial and 'a' to the second trial. How does this number of instances of activation-based priming compare to the 0 instances observed at baseline with Decay set to 1?.
+> **Question 7.8:** Comparing the 1st trials and 2nd trials of each input pattern (the 1st and 2nd 0, the 1st and 2nd 1, and so on), report the number of times the network responded `b` to the first trial and `a` to the second trial. How does this number of instances of activation-based priming compare to the 0 instances observed at baseline with Decay set to 1?
 
 You can explore the extent of residual activity needed to show this activation-based priming by adjusting the `Decay` parameter and running `Test` again. (Because no learning takes place during testing, you can explore at will, and go back and verify that Decay = 1 still produces mostly `b`'s). In our tests increasing Decay (using this efficient search sequence: 0, .5, .8, .9, .95, .98, .99), we found a critical transition between .98 and .99. That is, a tiny amount of residual activation with Decay = .98 (= .02 residual activity) was capable of driving some activation-based priming. This suggests that the network is delicately balanced between the two attractor states, and even a tiny bias can push it one way or the other. The similar susceptibility of the human brain to such activation-based priming effects suggests that it too may exhibit a similar attractor balancing act.
 
```
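The README text in this diff describes how the `Decay` parameter controls how much activation carries over between test trials (e.g., Decay = .98 leaves .02 residual activity). As a minimal sketch of that relationship (this helper is hypothetical, not code from the repository):

```python
def residual_activity(act, decay):
    """Fraction of a trial's activation that carries over to the next
    trial, given a between-trial decay in [0, 1]."""
    return act * (1.0 - decay)

# The search sequence suggested in the text: 0, .5, .8, .9, .95, .98, .99
for decay in (0.0, 0.5, 0.8, 0.9, 0.95, 0.98, 0.99):
    print(f"Decay={decay:.2f} -> residual={residual_activity(1.0, decay):.2f}")
```

With Decay = 1 (the default) nothing carries over, giving the baseline; with Decay = .98 a residual of .02 remains, which the text reports is already enough to drive some activation-based priming.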
