French Responses: let's simplify them drastically #1354
Replies: 3 comments 8 replies
-
Ping a few French contributors. And of course pinging the language leader.
-
+1. BTW, I noticed inconsistencies in Whisper (small-int8) outputs: for example, "Quelles fenêtres sont ouvertes" is sometimes returned as "Quels fenêtres sont ouvertes".
-
My opinion is, if we just want an acknowledgment, then … With that said, as the service is brand new, I can understand that people do not trust it yet, so a full acknowledgment can help build trust in the service, especially when you don't have visual feedback on your old phone. The ideal solution would be to have a …
-
Hello 👋
This discussion makes the case for massively simplifying the responses of our French intents.
I decided to write that post in English because I believe it could be interesting for other language leaders if they want to do something similar.
Context
Today, the French responses are very long and very specific:
"name_of_the_specific_light turned on"
"name_of_the_specific_light set to 47%"
As I said: long and specific.
Issues
Issue 1: Our responses are badly cached
As explained in Chapter 2 of the Year of the Voice, Home Assistant does a good job of caching voice responses so they can be re-used as much as possible.
This is particularly true for the few of us that have decided to use a local TTS component in their pipeline (Piper).
If the same text is output twice for two different queries, the second TTS is free, because it is already cached.
Conclusion: The more specific our responses are, the less they can be cached, the longer the response time, and the higher the "CPU cost" (wherever it is: locally or in the Nabu Casa cloud).
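To make the caching argument concrete, here is a minimal sketch (an assumption for illustration, not the actual Piper/Home Assistant implementation) of a TTS cache keyed on the response text. It shows why entity-specific wording defeats the cache while a shared "OK" is synthesized once and then free:

```python
# Toy TTS cache keyed on response text: any variation in the text
# forces a fresh (expensive) synthesis; identical text is free.
synth_calls = 0                       # counts "expensive" syntheses
tts_cache: dict[str, bytes] = {}

def speak(text: str) -> bytes:
    """Return audio for `text`, synthesizing only on a cache miss."""
    global synth_calls
    if text not in tts_cache:
        synth_calls += 1                              # cache miss: real TTS work
        tts_cache[text] = f"<audio:{text}>".encode()  # stand-in for synthesis
    return tts_cache[text]

# Specific responses: every entity name yields distinct text -> no reuse.
for name in ["lampe salon", "lampe cuisine", "lampe bureau"]:
    speak(f"{name} allumée")
assert synth_calls == 3               # three syntheses for three lights

# Simplified responses: one shared text -> one synthesis, then cached.
synth_calls = 0
tts_cache.clear()
for _ in range(3):
    speak("OK")
assert synth_calls == 1               # only the first call does TTS work
```

The design point is simply that the cache key is the full response text, so simplification directly raises the hit rate.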
Issue 2: Our responses slow us down
Our French intents, and by extension almost every language, are in an early stage of the development process.
This is a phase where we want to be able to contribute fast in order to support as much functionality as possible.
I would happily accept the complexity of our responses if the project were mature, but as it is not, I question its usefulness right now.
Today, supporting a new functionality means having to deal with the complexity of its answers, and most of us are trying to stick to the conventions already in place in the project.
For example, we have different answers if we query …
Conclusion: The more specific our responses are, the more complex our project becomes, and the slower we become.
Issue 3: Our responses are too specific but not mature enough, so we sometimes create very unnatural responses
For example, today if you ask about the state of a temperature sensor, the answer will be
"the state of the device living room temperature is 23.04 °C"
Which may be the most unnatural way of answering this question.
This is because we are specific.... but not specific enough.
It's like we opened a Pandora box trying to be specific, and we struggle to close it now.
Another common example of this lack of maturity is that we fail to handle masculine vs feminine forms. So if I ask to turn off the lights, I get as a reply
"lumières éteins"
which sounds unnatural.
Conclusion: Being specific will always raise weird exceptions where the answers feel unnatural... and the only way to fix it is even more specificity.
Proposed solution
We drastically simplify all our responses, so that …
Concrete proposal
For commands, the response becomes simply "OK" or "D'accord". Note, this is "Alexa's way" ... even they, being way more mature than us, decided not to care about specifying their answers.
For state queries, the response becomes just the value: "47%", "23.7 °C".
For yes/no questions (today, if I ask "La machine à café est allumée ?", "Is the coffee machine on?", I get "Non, off"), the response becomes just "Non" / "No".
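To sketch what this could look like in practice: in the home-assistant/intents repository, responses live in per-language YAML files, so the simplification would mostly be a matter of collapsing the response templates. The file path, intent name, and structure below are assumptions based on that repo's conventions, not an exact diff:

```yaml
# Hypothetical simplified response file, e.g. responses/fr/HassTurnOn.yaml
# (path and layout assumed from the intents repo conventions).
language: fr
responses:
  intents:
    HassTurnOn:
      default: "D'accord"
```

One shared `default` response per intent, with no entity name or state interpolated, is what makes the TTS caching win possible.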
I'll ping the few of us that are contributing to the French intents, but ultimately, I am waiting for an answer from our language leader.
Thx!
JLo