
Commit 2be679c

Update README.md: renumber Qs to match new sim order
1 parent c298d9e commit 2be679c


1 file changed: +2 -2 lines changed


ch10/ss/README.md (+2 -2)
@@ -69,7 +69,7 @@ There are several important lessons from looking at the weights. First, the netw

* Poke around some more at the network's weights, and document a relatively clear example of how the representations across the OrthoCode and Hidden layers make sense in terms of the input/output mapping being performed. Looking at the front row of OrthoCode in the central pool is a good strategy, for keeping track of where you are, and don't neglect the weights from Hidden to Phon output which can be particularly informative (use the above figures to decode which phonemes are encoded -- the first and last 3 pools are consonants, with vowels in the middle). Each pool on the input has the letters of the alphabet in order, from left-to-right, bottom-to-top. You may want to decrease the Min/Max range in the color scale in the upper right (e.g., to -0.8 to 0.8) to better see the strongest weights.

-> **Question 10.7:** Specify what OrthoCode units you have chosen (unit pool, row, col position within pool), what letters those OrthoCode units encode, and how the hidden unit(s) combine the OrthoCode units together -- describe how this combination of letters across locations makes sense in terms of the need for both spatial invariance and conjunctive encoding of multiple letters.
+> **Question 10.4:** Specify what OrthoCode units you have chosen (unit pool, row, col position within pool), what letters those OrthoCode units encode, and how the hidden unit(s) combine the OrthoCode units together -- describe how this combination of letters across locations makes sense in terms of the need for both spatial invariance and conjunctive encoding of multiple letters.

# Nonword Pronunciation

@@ -98,7 +98,7 @@ The total percentages for both our model, PMSP (where reported) and the comparab

We tried to determine for each error why the network might have produced the output it did. In many cases, this output reflected a valid pronunciation present in the training set, but it just didn't happen to be the pronunciation that the list-makers chose. This was particularly true for the Glushko (1979) exception list (for the network and for people). Also, the McCann & Besner (1987) lists contain four words that have a "j" in the final set of phonemes after the vowel (the *coda*), `faije`, `jinje`, `waije`, `binje`, which never occurs in the training set (i.e., in the entire corpus of English monosyllabic words). These words were excluded by PMSP, and we discount them here too. Nevertheless, the network did sometimes get these words correct.

-> **Question 10.8:** Can you explain why the present model was sometimes able to pronounce the "j" in the coda correctly, even though none of the training words had a "j" there? (Hint: Think about the effect of translating words over different positions, e.g., the word "jet," in terms of the input the model receives.)
+> **Question 10.5:** Can you explain why the present model was sometimes able to pronounce the "j" in the coda correctly, even though none of the training words had a "j" there? (Hint: Think about the effect of translating words over different positions, e.g., the word "jet," in terms of the input the model receives.)

One final aspect of the model that bears on empirical data is its ability to simulate naming latencies as a function of different word features. The features of interest are word frequency and consistency (as enumerated in the Probe codes listed above). The empirical data shows that, as one might expect, higher frequency and more consistent words are named faster than lower frequency and inconsistent words. However, frequency interacts with consistency, such that the frequency effect decreases with increasing consistency (e.g., highly consistent words are pronounced at pretty much the same speed regardless of their frequency, whereas inconsistent words depend more on their frequency). The PMSP model shows the appropriate naming latency effects (and see that paper for more discussion of the empirical literature).
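
To make the translation point in Question 10.5's hint concrete, here is a minimal sketch assuming a simplified slot-based letter input -- the slot count, the `encode_at` helper, and the `_` padding are illustrative assumptions, not the model's actual encoding. It shows how presenting "jet" at different positions places the "j" in many different slots, including late ones that otherwise hold coda consonants.

```python
# Hypothetical slot-based orthographic input, for illustration only:
# translating a word across positions places each letter in many slots.
N_SLOTS = 7  # illustrative slot count, not the model's actual input size

def encode_at(word: str, start: int, n_slots: int = N_SLOTS) -> list[str]:
    """Place `word` into position slots beginning at `start`;
    unused slots are padded with '_'."""
    slots = ["_"] * n_slots
    for i, letter in enumerate(word):
        slots[start + i] = letter
    return slots

# "jet" presented at every position it fits: the "j" lands in slots 0..4
# in turn, so the network is trained on "j" across many positions --
# including positions that normally carry coda consonants.
for start in range(N_SLOTS - len("jet") + 1):
    print(start, "".join(encode_at("jet", start)))
```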
