Description
So, this is a crazy idea. LLMs are very good at producing English text, rewording things, and understanding context. What if we gave an LLM a source (such as the ESV) and a backtranslation and said, "make more backtranslations using the ESV as a source"? It could add explications, shift contexts, and imitate phrase reordering. Moreover, we could also add Bible reference material to the context, so the LLM could give the source better target-language context, mirroring what the existing backtranslations already do, both scripturally and culturally.
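As a rough sketch of what that generation step could look like: a few-shot prompt pairing ESV verses with their existing backtranslations, plus optional reference material, asking the LLM to continue the pattern. Everything here (`build_prompt`, the instruction wording) is illustrative, not an existing API.

```python
def build_prompt(esv_verse: str, examples: list[tuple[str, str]], reference: str = "") -> str:
    """Assemble a few-shot prompt pairing ESV verses with their backtranslations."""
    lines = [
        "You rewrite English Bible verses to match the style of the",
        "backtranslations below: add explications, adjust context, and",
        "mirror the phrase ordering the target language prefers.",
        "",
    ]
    # Few-shot examples: (ESV verse, existing backtranslation) pairs.
    for src, bt in examples:
        lines.append(f"ESV: {src}")
        lines.append(f"Backtranslation: {bt}")
        lines.append("")
    # Optional scriptural/cultural reference material for extra context.
    if reference:
        lines.append(f"Reference notes: {reference}")
        lines.append("")
    # The new verse the LLM should produce a backtranslation-style rewrite for.
    lines.append(f"ESV: {esv_verse}")
    lines.append("Backtranslation:")
    return "\n".join(lines)
```

The resulting string would be sent to whatever LLM we pick; the few-shot pairs are what let it pick up the target language's phrasing habits.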
We could take these newly generated "target-aligned sources" and (optionally) give them to the translators to correct so they more accurately say what they should. After that optional step, we can feed them to an NLLB model trained only on backtranslation and target data, and it should spit out pretty close target data.
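The pipeline in that paragraph could be sketched like this. The function name and the two injected callables are hypothetical: `correct` stands in for the optional translator pass (identity if skipped), and `nllb_translate` stands in for inference with the fine-tuned backtranslation-to-target NLLB model, which is stubbed out here rather than actually loaded.

```python
from typing import Callable, Iterable


def synthesize_target_data(
    generated_backtranslations: Iterable[str],
    correct: Callable[[str], str],
    nllb_translate: Callable[[str], str],
) -> list[str]:
    """Run each LLM-generated backtranslation through the optional
    translator-correction step, then through the (stubbed) NLLB model
    trained on backtranslation -> target pairs."""
    return [nllb_translate(correct(bt)) for bt in generated_backtranslations]


# Example wiring with stand-in callables; real use would plug in a
# translator-review step and a fine-tuned NLLB checkpoint.
drafts = ["generated backtranslation one", "generated backtranslation two"]
targets = synthesize_target_data(
    drafts,
    correct=lambda s: s,          # skip the optional correction step
    nllb_translate=lambda s: s,   # placeholder for the NLLB model
)
```

The point of keeping both steps as injected callables is that the translator-correction step can be dropped in or left out without changing the rest of the pipeline.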
@ddaspit - what do you think?