ch10/sem/README.md

This network takes a while to train, so we will start by loading in pre-trained weights.
To start, let's examine the weights of individual units in the network.
* Select `Wts` -> `r.Wt`, and then select various `Hidden` units at random to view.
You should observe sparse patterns of weights, with different units picking up on different patterns of words in the input. However, because the input units are too small to be labeled, you can't really tell which words a given unit is activated by. The `Wt Words` button provides a solution to this problem.
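
In code form, `Wt Words` is essentially just ranking the input words by a given hidden unit's receiving weights. The following is a minimal Go sketch of that ranking; the `wts` and `words` values are made-up stand-ins for illustration, not the sim's actual data structures.

```go
package main

import (
	"fmt"
	"sort"
)

// topWords ranks the input words by one hidden unit's receiving
// weights and returns the k strongest. wts[i] is the weight from
// input word words[i] into the unit.
func topWords(wts []float32, words []string, k int) []string {
	idx := make([]int, len(wts))
	for i := range idx {
		idx[i] = i
	}
	// Sort indexes by descending weight.
	sort.Slice(idx, func(a, b int) bool { return wts[idx[a]] > wts[idx[b]] })
	top := make([]string, 0, k)
	for _, i := range idx[:k] {
		top = append(top, words[i])
	}
	return top
}

func main() {
	// Invented weights and vocabulary for one hidden unit.
	wts := []float32{0.02, 0.91, 0.15, 0.77}
	words := []string{"the", "attention", "of", "binding"}
	fmt.Println(topWords(wts, words, 2)) // [attention binding]
}
```
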
To probe these distributed representations further, we can present words to the network and compare the resulting hidden activity patterns.
You should see that attention and spelling are correlated at only around 0.06, indicating low similarity. This should match your overall intuition: we talk about attention as being critical for solving the binding problem in several different situations, but we don't talk much about the role of attention in spelling.
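
This similarity score is just the correlation between the hidden activity patterns evoked by the two words. For reference, here is a generic Pearson correlation over two activity vectors in Go; the patterns are invented for illustration, and this is a sketch rather than the sim's own code.

```go
package main

import (
	"fmt"
	"math"
)

// correlation returns the Pearson correlation between two
// equal-length activity patterns.
func correlation(a, b []float64) float64 {
	n := float64(len(a))
	var ma, mb float64
	for i := range a {
		ma += a[i]
		mb += b[i]
	}
	ma, mb = ma/n, mb/n
	var cov, va, vb float64
	for i := range a {
		da, db := a[i]-ma, b[i]-mb
		cov += da * db
		va += da * da
		vb += db * db
	}
	return cov / math.Sqrt(va*vb)
}

func main() {
	// Invented hidden-layer patterns for two words.
	attention := []float64{0.9, 0.1, 0.0, 0.8, 0.0}
	spelling := []float64{0.0, 0.7, 0.9, 0.1, 0.2}
	fmt.Printf("r = %.2f\n", correlation(attention, spelling))
}
```
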
* Compare several other words that the network should know about from reading this textbook. (Tip: Click `Envs` in the left control panel, then `Train`, then `Words` in the window that appears to see a list of all the words, and scroll through that to see what words are in the valid list. These are words with frequency greater than 5, and not purely syntactic.)
> **Question 10.2:** Report the correlation values for several additional sets of Words comparisons, along with how well each matches your intuitive semantics from having read this textbook yourself.
We can present this same quiz to the network, and determine how well it does relative to human performance.
* Press the `Quiz All` button, and then click on the `Validate Epoch` tab to see the overall results. You will see a table with a row for each question; the `Response` is the answer with the highest correlation, and the correlations for each answer are shown in the corresponding columns. At the end, the `Total` row gives summary statistics for overall performance, with the `Correct` column showing the percent correct.
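
The scoring itself is simple: the response to each question is the answer with the highest correlation, and `Correct` is the fraction of questions where that response is the designated one. Here is a minimal Go sketch of that computation, assuming (as in this quiz) that answer A is always the designated correct response; all numbers are invented for illustration.

```go
package main

import "fmt"

// scoreQuiz takes, for each question, the correlations between the
// network's response pattern and the three candidate answers (A, B, C),
// and returns the fraction answered correctly. This sketch assumes
// answer A (index 0) is always the designated correct answer.
func scoreQuiz(corrs [][3]float64) float64 {
	nCorrect := 0
	for _, c := range corrs {
		best := 0
		for j := 1; j < 3; j++ {
			if c[j] > c[best] {
				best = j
			}
		}
		if best == 0 { // response matches the correct answer A
			nCorrect++
		}
	}
	return float64(nCorrect) / float64(len(corrs))
}

func main() {
	// Toy correlations for five questions (A, B, C).
	corrs := [][3]float64{
		{0.62, 0.41, 0.05},
		{0.55, 0.58, 0.10}, // B wins here: scored as an error
		{0.70, 0.30, 0.02},
		{0.48, 0.22, 0.11},
		{0.66, 0.35, 0.08},
	}
	fmt.Printf("Correct: %.0f%%\n", scoreQuiz(corrs)*100)
}
```

With these toy numbers, one of the five questions is scored as an error, giving 80 percent correct.
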
You should observe that the network does pretty well, but not perfectly, getting 80 percent correct. The network does a very good job of rejecting the obviously unrelated answer C, but it does not always match our sense of A being better than B. In question 6, the B phrase was often mentioned in the context of the question phrase, but as a *contrast* to it, not a similarity. Because the network lacks the syntactic knowledge to pick up on this kind of distinction, it considers the two phrases closely related simply because they appear together. This probably reflects at least some of what goes on in humans; we have a strong association between "black" and "white" even though they are opposites. However, we can also use syntactic information to further refine our semantic representations. This skill is lacking in this network, and is taken up in the final simulation in this chapter.