
Conversation

Alex-Jordan
Contributor

I was using answerHints.pl for an answer that should be ln|x|, and wanted a special message if the student entered ln(x). I noticed the answer hint was given even when the student entered ln|x|, despite the default setting for checkCorrect being 0.

It turned out that checkCorrect was only respected if the specified wrong answer was given as code, rather than as a MathObject. So I adjusted the logic to wrap a check around the entire loop, checking if checkCorrect is 1 or if the score is less than 1. While doing this, it seemed a few other things should be checked outside the loop as well: namely, the check for checkTypes, and the check for whether a prior message should be replaced.

@dpvc
Member

dpvc commented Sep 29, 2025

We had a discussion of some of this in my response to #1171, where I argued that

My reasoning for removing the conditions in #974 was that if an answer is given explicitly, it should be checked, regardless of whether it or the student answer is correct. If it is explicitly in the AnswerHints list, then if it is the correct answer, its message is clearly meant to be used, and you shouldn't have to add another flag to get it. The checkTypes and checkCorrect flags were really designed for use with a CODE reference, not with explicit answers ... . Any explicit answer should be processed, except when there is already a message in place and the replaceMessage flag is not set. The reason for that exception is that it stops the later messages from being used if an earlier one has been issued.

I still stand by this. If you are trying to distinguish ln|x| from ln(x), and the student has entered ln|x| but it is matching the ln(x) in the answer hints list, then that means the test points are not being handled properly, not that the AnswerHints code is not working. Dealing with this properly is a little bit subtle. The wiki documentation for the Formula object suggests using either

$f = Formula("ln(|x|)")->with(test_points => [[-3],[-2],[-1],[1],[2],[3]]);

or

$f = Formula("ln(|x|)")->with(test_at => [[-1],[1]]);

in order to force checks at negative values that will distinguish between the two forms. Without that, there is no guarantee that the points selected for ln|x| will include any negative values, and so the two may not be distinguishable without forcing the needed points. With either of the two forms above, ln|x| will fail to match ln(x) and will produce the message The domain of your function doesn't match that of the correct answer if the student enters ln(x) rather than ln|x|.

You don't give the code of the problem that inspired this PR, so I can't tell whether you have included such test points, but there are additional subtleties when using AnswerHints that you need to be aware of.

First, when two formulas $f and $g are being compared as $f == $g, the test points of $f (the one on the left) are the ones that get used. This is so that when an answer checker runs, when it checks $correct == $student, the points from the correct answer (as set by the author, as in the code above) will be used. (Note that this means that if a custom checker uses $student == $correct, the points set by the author would be lost.)

When the answer hints are checked, however, the correct answer isn't involved, as the check is between the answer from the hint list and the student answer. That is, if the hint answer is $wrong, the check is effectively $wrong == $student. So to get the answer message for that answer, you would need that equality to be true.

If $wrong is ln(x) and $student is ln|x|, then it is the test points for $wrong that are used, and if random points are used, under normal circumstances, only positive ones will be chosen for ln(x), and so ln(x) and ln|x| will always agree and will be recorded as a match. This is what you are seeing in your case.

To distinguish ln(x) from ln|x| when ln(x) is on the left, you also need to provide test points that include values that distinguish them (i.e., negative values). So you would need to start with something like

$f = Formula("ln(x)")->with(test_at => [[-1], [1]]);

but if you try this, you will see that $f == Formula("ln(x)") will fail with a message about not being able to evaluate the formula at -1, which is true. That means that in addition to test_at, you also need to include checkUndefinedPoints => 1 so that the fact that ln(x) is not defined at -1 is considered to be significant. That is, with

$f = Formula("ln(x)")->with(test_at => [[-1], [1]], checkUndefinedPoints => 1);

then $f == Formula("ln(x)") will be true, but $f == Formula("ln|x|") will be false. This is what you need in order to distinguish the two functions when ln(x) is on the left.

With this in mind, if you have

$ans = Formula("ln|x|")->with(test_at => [[-1], [1]]);

as the correct answer, then use

$ans->cmp->withPostFilter(AnswerHints(
  Formula("ln(x)")->with(test_at => [[-1], [1]], checkUndefinedPoints => 1) => [
    "(your message here)",
    replaceMessage => 1,
  ]
))

as the answer evaluator. Note that replaceMessage => 1 is needed, since the original check of the correct answer against the student's will have produced the message about the domain mismatch, which has to be replaced.

One would have liked to use

$ans->cmp->withPostFilter(AnswerHints(
  "ln(x)" => [
    "(your message here)",
    replaceMessage => 1,
    cmp_options => [test_at => [[-1], [1]], checkUndefinedPoints => 1]
  ]
))

or better yet

$ans = Formula("ln|x|");
$ans->cmp(test_at => [[-1], [1]], checkUndefinedPoints => 1)->withPostFilter(AnswerHints(
  "ln(x)" => ["(your message here)", replaceMessage => 1]
))

instead, but it turns out that the AnswerHints code doesn't properly tell the context about its flags, and so these options don't get passed on in a way that the Formula object can use them during comparisons. It is not hard to fix that, though, so I would recommend adding

	my $context = $self->context;
	my $hash = $context->{answerHash};
	$context->{answerHash} = $ans;

to the AnswerHints::Compare function after the line that sets the score to 0, and then

	$context->{answerHash} = $hash;

before the return at the end. I might also move

$context->{answerHash} = $hash;

to right before the return in that function as well, so that the main answer hash flags will be available throughout. (The changes to AnswerHints::Compare are still needed, however, to get the cmp_options to be properly handled.)
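Put together, the suggested change would sit roughly like this inside AnswerHints::Compare (a sketch only: the surrounding lines are paraphrased placeholders, not the actual PG source; just the context-save, assignment, and restore lines come from the suggestion above):

```perl
sub Compare {
    ...                                 # existing setup code
    $ans->{score} = 0;                  # existing line that sets the score to 0
    my $context = $self->context;       # added: save the current answerHash ...
    my $hash = $context->{answerHash};
    $context->{answerHash} = $ans;      # ... and expose the hint's answer hash,
                                        # so cmp_options flags can reach the
                                        # Formula during the comparison
    ...                                 # existing comparison code
    $context->{answerHash} = $hash;     # added: restore before returning
    return $result;
}
```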

One might argue that needing to specify the test points is more complicated for authors, and that it is easier to use the checkCorrect setting to handle this. I would respond that if one is trying to make fine distinctions between functions like this, then it is important to be thinking about exactly the issues that arise here. Hiding that complication does authors a disservice by clouding the true issue involved, and it is one they should be aware of in other situations as well. These are things they should be thinking about, not something we should hide from them.

If you really want to move forward with a change like this, then I would recommend setting the default for checkCorrect to undef, and only skip an explicit (that is, non-CODE) answer if checkCorrect is explicitly set to 0. That is, force the author to be explicit about wanting not to check this specific answer if it is marked correct. This should not harm existing problems where checkCorrect is used for CODE answers. But because you are excusing the author from thinking about why a student answer of ln|x| is matching the incorrect answer of ln(x) in the hints, this will lead to subtler failures of the problem.
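That recommendation might be sketched as a guard like the following (hypothetical variable names, not a tested patch against the actual AnswerHints source):

```perl
# Sketch: skip an explicit (non-CODE) hint answer only when the author has
# explicitly set checkCorrect => 0 and the student's answer scored correct.
my $isCode = ref($wrong) eq 'CODE';
next
    if !$isCode
    && defined($options{checkCorrect})
    && !$options{checkCorrect}
    && $ans->{score} >= 1;
```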

In particular, if you aren't already taking care to force negative test points for ln|x| itself, the problem will incorrectly accept ln(x) as correct for some sets of random points (when there is no negative one in the set): ln(x) will then match the correct answer ln|x| initially, and your changes to AnswerHints would skip the ln(x) hint, so the student would not get the answer message for that case. That is, if you think using ln(x) in the answer hints will catch a student's use of ln(x), that won't always be true unless you still run the test when the answer is "correct". That is another reason to always check explicit answers in the answer hint list.

So even with your changes, you have to be thinking about test points properly, and if you are already doing that for the correct answer, you should be doing so for the answer hints as well. Skipping answers when the student answer is "correct" in this case leads to marking some incorrect answers as correct.

So I stand by my original claim above: explicit answers in the list should always be checked.

I agree with you, however, that some of the checks should be moved earlier. Again, in #1171, I mentioned that

If you all do decide to make this change, I'd recommend moving the test to before the loop over the wrong list, since it gets used by both conditions in the code below. It would also be good to move the answer message check to above the loop as well, since it is in both conditions, too.

but since that PR was closed without merging, this was not done at that time. I also pointed out that

the documentation for AnswerHints needs improvement, in particular, that the checkTypes and checkCorrect options only apply to CODE answers, and that if a custom checker is provided, it will be called with the AnswerHints answers as the $correct answer, so the checker needs to take $correct into account in order to process those.

Of course the checkTypes flag only makes sense for CODE answers: if the types aren't correct, an explicit answer will not match, so the flag is not needed in that case. The checkCorrect flag probably should never have been there in the first place, as a CODE checker could easily check for that itself, but it is probably too late to remove it now. Its use should certainly be better documented, however.
