
Conversation

@zyksir (Collaborator) commented on Oct 21, 2025

Motivation

In the previous PR (#182), @jiapingW discovered that some Qwen3 thinking models did not produce the expected output: an unexpected <think>\n\n</think>\n\n sequence appeared when enable_thinking was set to False.

This PR adds a new parser for Qwen3 thinking models. For non-thinking models (e.g. Qwen/Qwen3-30B-A3B-Instruct-2507), you can keep using the old qwen parser; for thinking models such as Qwen/Qwen3-30B-A3B, use the new qwen3-thinking parser.
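For example (registry usage as in the accuracy test below; the "qwen" key for the old template is an assumption based on the description above):

from specforge.data.template import TEMPLATE_REGISTRY

# Non-thinking model, e.g. Qwen/Qwen3-30B-A3B-Instruct-2507: old parser.
chat_template = TEMPLATE_REGISTRY.get("qwen")

# Thinking model, e.g. Qwen/Qwen3-30B-A3B: new parser from this PR.
chat_template = TEMPLATE_REGISTRY.get("qwen3-thinking")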

Modifications

Related Issues

Accuracy Test

The following code was used for testing:

from transformers import AutoTokenizer

from specforge.data.preprocessing import preprocess_conversations
from specforge.data.template import TEMPLATE_REGISTRY

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-30B-A3B", trust_remote_code=True)
messages = [[
    {"role": "user", "content": "Who are you?"},
    {"role": "assistant", "content": "I am a model."},
    {"role": "user", "content": "What is your name?"},
    {"role": "assistant", "content": "My name is QwQ."},
    {"role": "user", "content": "What is 1+1"},
    {"role": "assistant", "content": "=2."},
]]
chat_template = TEMPLATE_REGISTRY.get("qwen3-thinking")
max_length = 2048

# Raw chat-template output, for comparison.
conversation = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=False,
    enable_thinking=True,
)
print(conversation)

# Output of the new qwen3-thinking parser.
conversation = preprocess_conversations(tokenizer, messages, chat_template, max_length)
print([tokenizer.decode(conversation["input_ids"][0].view(-1))])

This is the result. The first list is the raw apply_chat_template output; the second is the decoded input_ids produced by preprocess_conversations with the qwen3-thinking template:

['<|im_start|>user\nWho are you?<|im_end|>\n<|im_start|>assistant\nI am a model.<|im_end|>\n<|im_start|>user\nWhat is your name?<|im_end|>\n<|im_start|>assistant\nMy name is QwQ.<|im_end|>\n<|im_start|>user\nWhat is 1+1<|im_end|>\n<|im_start|>assistant\n\n\n\n\n=2.<|im_end|>\n']
['<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nWho are you?<|im_end|>\n<|im_start|>assistant\nI am a model.<|im_end|>\n<|im_start|>user\nWhat is your name?<|im_end|>\n<|im_start|>assistant\nMy name is QwQ.<|im_end|>\n<|im_start|>user\nWhat is 1+1<|im_end|>\n<|im_start|>assistant\n=2.<|im_end|>\n']

Benchmark & Profiling

Checklist

@gemini-code-assist (Contributor)

Summary of Changes

Hello @zyksir, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses an issue with Qwen3 thinking models producing unwanted <think> tags in their output when the thinking feature is disabled. It introduces a specialized Qwen3ThinkingParser and a corresponding chat template to intelligently handle these models. The new parser dynamically configures itself based on whether thinking is enabled, and a general parsing improvement ensures that specified ignored tokens are removed from the conversation, resulting in cleaner and more accurate model output.

Highlights

  • New Parser for Qwen3 Thinking Models: Introduced a new Qwen3ThinkingParser class, inheriting from GeneralParser, specifically designed to handle the unique output characteristics of Qwen3 models that incorporate 'thinking' capabilities.
  • Dynamic Token Handling: The Qwen3ThinkingParser dynamically adjusts the assistant_message_separator and sets an ignored_token based on the enable_thinking flag, ensuring correct parsing and clean output whether the thinking feature is active or not.
  • Removal of Unwanted Tokens: A general mechanism was added to the GeneralParser to remove any specified ignored_token strings from the conversation, directly addressing the issue of unexpected <think> tags appearing in the output.
  • New Chat Template Registration: A new chat template named qwen3-thinking has been registered, linking it to the Qwen3ThinkingParser and explicitly defining the <think>\n\n</think>\n\n string as an ignored_token for removal.
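Taken together, the highlights amount to a small piece of wiring. The sketch below illustrates the idea and is not the actual specforge implementation: the ChatTemplate stub, the method signatures, and the tokenization placeholder are assumptions, while the separator and ignored_token values come from the diff snippet quoted in the review below.

from dataclasses import dataclass

# Minimal stand-in for the real specforge template type (assumed shape).
@dataclass
class ChatTemplate:
    end_of_turn_token: str = "<|im_end|>\n"
    assistant_message_separator: str = "<|im_start|>assistant\n"
    ignored_token: str = ""

class GeneralParser:
    def __init__(self, tokenizer, chat_template):
        self.tokenizer = tokenizer
        self.chat_template = chat_template

    def parse(self, messages, max_length, preformatted=False, **kwargs):
        text = self.tokenizer.apply_chat_template(
            messages, tokenize=False, add_generation_prompt=False, **kwargs
        )
        # General mechanism from this PR: strip every occurrence of the
        # template's ignored_token before tokenizing.
        if self.chat_template.ignored_token:
            text = text.replace(self.chat_template.ignored_token, "")
        ...  # tokenize up to max_length and build input_ids / loss_mask

class Qwen3ThinkingParser(GeneralParser):
    def parse(self, messages, max_length, preformatted=False, **kwargs):
        # Reconfigure the template so the empty think block emitted after the
        # assistant header is marked for removal. (How enable_thinking gates
        # this is not shown in the quoted diff and may differ in the PR.)
        self.chat_template.assistant_message_separator = (
            f"{self.chat_template.end_of_turn_token}<|im_start|>assistant\n"
        )
        self.chat_template.ignored_token = "<think>\n\n</think>\n\n"
        return super().parse(messages, max_length, preformatted, **kwargs)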

@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces a new parser, Qwen3ThinkingParser, to handle Qwen3 thinking models, addressing an issue where unexpected <think> tags appeared when thinking was disabled. The changes include adding the new parser class, updating the preprocess_conversations function to use the new parser based on the chat_template.parser_type, and registering a new chat template named qwen3-thinking. The review focuses on ensuring the correctness and maintainability of the new parser and its integration.

f"{self.chat_template.end_of_turn_token}<|im_start|>assistant\n"
)
self.chat_template.ignored_token = "<think>\n\n</think>\n\n"
return super().parse(conversation, max_length, preformatted, **kwargs)
@gemini-code-assist commented (severity: high):

The super().parse method returns input_ids and loss_mask. However, the return type annotation Dict[str, List[torch.Tensor]] suggests a dictionary is expected. This discrepancy could lead to confusion or errors if the caller expects a dictionary. Consider updating the return type annotation or modifying the return statement to return a dictionary.

Suggestion:

return {"input_ids": input_ids, "loss_mask": loss_mask}

Alternatively, update the return type annotation to Tuple[torch.Tensor, torch.Tensor] to match the actual return type.

Suggested change:

-        return super().parse(conversation, max_length, preformatted, **kwargs)
+        return {"input_ids": input_ids, "loss_mask": loss_mask}

The next comment refers to this snippet from GeneralParser.parse (diff context, truncated at the top):

            messages, tokenize=False, add_generation_prompt=False, **kwargs
        )

        if self.chat_template.ignored_token:
@gemini-code-assist commented (severity: medium):

Consider adding a check to ensure self.chat_template.ignored_token is not None or empty before calling replace. This prevents potential errors if the token is not defined.

if self.chat_template.ignored_token and self.chat_template.ignored_token != "":
    conversation = conversation.replace(self.chat_template.ignored_token, "")
Suggested change:

-        if self.chat_template.ignored_token:
+        if self.chat_template.ignored_token and self.chat_template.ignored_token != "":
             conversation = conversation.replace(self.chat_template.ignored_token, "")

@zyksir requested a review from FlamingoPg as a code owner on October 21, 2025 at 08:10
@jiapingW (Contributor) commented:

I think the implementation of adding an ignore_token is concise and reasonable. That's great!
