
Bug in HMM normalization? #30

Open · wants to merge 1 commit into master
Conversation

vene (Contributor) commented on Sep 22, 2016:

Hi,

I was trying seqlearn on a realistic problem, and the HMM's performance was much worse than a structured perceptron's (1% vs. ~30% F-score). Suspecting something was wrong, I also tried the implementation in NLTK, which performs almost as well as the perceptron.

Upon inspecting the learned parameters I found unexpected results, while NLTK's looked sensible. After going through the calculation step by step, I think the problem is this normalization step; with this patch I get good performance.

I should write a failing test to actually verify that this is correct, and I will try to do so, but it might take a while to find the time...
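For context, a minimal sketch of the kind of log-space normalization at issue (hypothetical count matrix and names; this illustrates the general technique, not seqlearn's actual code). The point is that each per-state distribution must be normalized along its own axis, rather than over the whole flattened array:

```python
import numpy as np
from scipy.special import logsumexp

# Hypothetical transition counts: rows index the current state,
# columns index the next state.
trans_counts = np.array([[8.0, 2.0],
                         [3.0, 7.0]])

log_trans = np.log(trans_counts)
# Each row must sum to 1 in probability space, so subtract the
# per-row log-normalizer (axis=1), not logsumexp over everything.
log_trans -= logsumexp(log_trans, axis=1, keepdims=True)

assert np.allclose(np.exp(log_trans).sum(axis=1), 1.0)
```

Note that normalizing over the wrong axis still yields finite log-probabilities, so the model trains without errors but decodes poorly, which would be consistent with the F-score gap described above.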

@@ -68,10 +68,10 @@ def fit(self, X, y, lengths):
final_prob -= logsumexp(final_prob)
vene (Contributor, Author) commented on the diff:

Another observation: final_prob is the same as init_prob here. Should this be Y[end - 1] instead?
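For illustration, a minimal sketch of how start and end indices typically relate to Y under one plausible reading of this code (hypothetical data; the names Y, lengths, and end are taken from the surrounding discussion):

```python
import numpy as np

# Hypothetical one-hot label rows over 2 states, forming two
# sequences of lengths 3 and 2.
Y = np.array([[1, 0], [0, 1], [0, 1],
              [1, 0], [1, 0]])
lengths = np.array([3, 2])

end = np.cumsum(lengths)   # one past each sequence's last row: [3, 5]
start = end - lengths      # each sequence's first row: [0, 3]

init_counts = Y[start].sum(axis=0)     # initial-state counts: [2, 0]
final_counts = Y[end - 1].sum(axis=0)  # final-state counts: [1, 1]
```

Under that indexing, building final_prob from Y[start] would indeed make it identical to init_prob, whereas Y[end - 1] picks out each sequence's last label.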

kmike (Collaborator) commented:

like #29?

vene (Contributor, Author) commented on Sep 22, 2016:

I suppose so, indeed! At first glance the NLTK implementation doesn't seem to have an equivalent. Maybe it would be good to write up the specifics of the model implemented here.

