In ollama.py, OllamaLanguageModel.infer():
response = self._ollama_query(
    prompt=prompt,
    model=self._model,
    structured_output_format='json'
    if self.format_type == core_types.FormatType.JSON
    else 'yaml',
    model_url=self._model_url,
    **combined_kwargs,
)
yield [core_types.ScoredOutput(score=1.0, output=response['response'])]
The bug: response['response'] comes back empty, and the model's output lands in response['thinking'] instead. I suspect this is due to an Ollama update, since recent Ollama releases return a thinking model's reasoning in a separate 'thinking' field of the generate response.
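A minimal sketch of a possible workaround, assuming the response dict shape described above: prefer 'response', and fall back to 'thinking' only when 'response' is empty. The helper name `extract_output` is hypothetical, not part of the library.

```python
def extract_output(response: dict) -> str:
    """Return the model's text from an Ollama generate-style response dict.

    Prefers the 'response' field; falls back to 'thinking' because newer
    Ollama versions may leave 'response' empty for thinking models and
    put the output in 'thinking' instead (assumption based on the bug above).
    """
    output = response.get('response') or ''
    if not output:
        output = response.get('thinking') or ''
    return output
```

In infer(), the yield line could then use `extract_output(response)` instead of indexing `response['response']` directly, so an empty 'response' no longer produces an empty ScoredOutput.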