Use big LLM to better align source and enhance source corpora #622
More ideas:
- How would we prompt the LLM? What would we tell it, and what would we put in the context window?
- Should we fine-tune a 7GB model each time, a 70GB model (using Apollo) once on the 2 H100s, or run inference off of a 400GB model? Which would give the best results?
- Reinforcement learning using a whole bunch of translations and their back-translations?
- A way to do a "spike" without LLMs?
For each type, use the following amounts of training data: This test would show us the upper limit (type 3) for this concept - both whether it helps with partial NTs (crossbow) and across the whole Bible. We need to find a set of Bibles with back-translations to be used as references for these experiments.
Recommendation - get at least 5 Bibles and do types 1-3 (no LLM) and use that data to direct and prioritize future work.
@woodwardmw - this may be interesting to you as well. I don't know if you want to test it out.
Yeah, very interesting. I like the idea of training on the back translation and then creating extra "back translation" to use as a source for inference. As long as it can be generated without going too far from the actual Bible text. My feeling is that the way forward in general is to keep the current NLLB (or MADLAD) model as the main translation engine, and to focus on LLM pre- and post-processing to improve results.
For "extra" back translation, i.e. LLM-BT (T1->S1 via LLM) or NLLB-BT (T1->S1 via NLLB), to evaluate "naturalness" consider UniEval, a human-judgment evaluator (coherence, consistency, fluency, relevance) trained on T5 using BM5. Fluency is the closest to what you may call "naturalness." We could even explore a good recipe/formula consisting of a combination of these scores to represent "naturalness". Another method to evaluate "naturalness" is to use an LLM prompt as a custom evaluation metric.
I have an interesting test case from the past couple of days that we could use to try this. We've been given a full Bible in a Quechua language, and we've tried to finetune both NLLB (via Serval) and MADLAD. We made the NLLB model available to the person who requested it, and they said it wasn't good. Which was actually reassuring to me, because my MADLAD training was really struggling too. I was getting a lot of model collapse, and the best I could get was 50 CHRF on the training data. (I calculate CHRF scores on the training data for exactly this kind of scenario - so we can catch a model that isn't training well, and not put it into production). I suspect it's quite a dynamic translation, and obviously a very different language family, and pairing it with either an English or Spanish published source text (we've tried both) just doesn't give close enough translation pairs to be able to train a good model. So this seems to be the kind of use case where we could take a published English or Spanish Bible, and the Quechua text, and try to get an LLM to amend the English/Spanish text to be closer to the Quechua.
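The training-data chrF check described above can be sketched in pure Python. This is a simplified toy version for illustration (a real pipeline would use sacrebleu's chrF implementation): it averages character n-gram F-scores for n = 1..6 with beta = 2, roughly matching chrF's defaults.

```python
from collections import Counter

# Simplified chrF sketch (assumption: production scoring uses sacrebleu).
# Character n-gram F-score, averaged over n = 1..6, with beta = 2 so that
# recall is weighted more heavily than precision.

def char_ngrams(text: str, n: int) -> Counter:
    s = text.replace(" ", "")  # chrF ignores whitespace by default
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hyp: str, ref: str, max_n: int = 6, beta: float = 2.0) -> float:
    f_scores = []
    for n in range(1, max_n + 1):
        h, r = char_ngrams(hyp, n), char_ngrams(ref, n)
        if not h or not r:
            continue  # strings shorter than n contribute nothing
        overlap = sum((h & r).values())  # clipped n-gram matches
        prec = overlap / sum(h.values())
        rec = overlap / sum(r.values())
        if prec + rec == 0:
            f_scores.append(0.0)
        else:
            f_scores.append((1 + beta**2) * prec * rec / (beta**2 * prec + rec))
    return 100 * sum(f_scores) / len(f_scores) if f_scores else 0.0

# Identical strings score 100; a model stuck around 50 on its own
# training data is a red flag for collapse.
print(round(chrf("In the beginning", "In the beginning"), 1))  # 100.0
```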
@woodwardmw - is there any backtranslation for that Quechua translation? Also, I don't think the LLM could "guess" the back translation of an existing translation, even with examples - this is more for creating an "enhanced source" that wouldn't be used for training, but only for pretranslating. If there was a full backtranslation, then we might be able to get something to work.
@johnml1135 There's no back translation that I'm aware of in this case. I agree that an LLM guessing the back translation is a difficult task, but to me it's not as difficult as training an LLM to do translation from scratch in a low-resource language. You could input the low-resource language Bible text, the high-resource language Bible text in several versions, and potentially some word alignment data. Then you either rely on in-context learning, or try to fine-tune an LLM on this task, using languages for which we have good quality back translations available as supervised target data. This was my original idea for situations where we don't have a back translation. And I agree it's a difficult task, but I'm wondering if it's worth trying...
We should probably test the simpler case where we have a back translation available to see what kind of benefit we get. Once we have that information, we can determine if it is worth the effort to tackle the more difficult case of no back translation data. |
Yep, sounds good. Here's an extreme example from eng-qub: NUM 1:26-27
Wow, quite a difference.
Here is a bit of what LLAMA 3.1 can do. Without the extra prompting for "preserving word order and idioms", it pretty much just spit out a standard translation. With enough direction though, it appears to mirror it more closely, though it still adds words like "and" for "[And] light came into being".

Input to LLAMA 3.1:
```
Transliterate from Swahili to english, preserving word order and the Swahili idoms: \v 1 Mwanzo, Mungu aliumba mbingu na dunia.
```
Output from LLAMA 3.1:
```
Here is the transliteration from Swahili to English, preserving word order and Swahili idioms:
```
Further LLAMA 3.1 prompt:
```
What does the Swahili word ikawako mean
```
Further LLAMA 3.1 response:
```
In Swahili, "ikawako" is a verb phrase that can be broken down into several parts:
```
Here is a document where I am trying the "take this backtranslation and make more" idea: https://docs.google.com/document/d/18fr9JJI71zh_ClbF5gAQ4gI-erJW31mj0ZerWyY8vAg/edit?usp=sharing. Much more work is needed.
Note that LLMs often know Swahili very well, so doing this on a low-resource language will be much harder. (Although Llama's analysis of ikawako is pretty much completely wrong!) I'm planning to experiment today with training a LRL->eng model first, then asking an LLM to combine that with a published English translation to guess an English back translation. |
So, this is a crazy idea. LLMs are very good at generating English text, rewording things, and understanding context. What if we gave an LLM a source (such as the ESV) and a back translation and said, "make more of the back translation using the ESV as a source"? It could add explications, adjust contexts, and imitate phrase reordering. Moreover, we could also add Bible reference material to the context, and it should be able to give the source better target context, mirroring what the existing back translations have, both scripturally and culturally.
We could take this newly generated "target-aligned source" and then (optionally) give it to the translators and let them correct it to be more accurate to what it should say. After that optional step, we can feed it to an NLLB model trained only on back translation and target data, and it would then spit out pretty close target data.
@ddaspit - what do you think?
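The proposed pipeline could be sketched as a small orchestration function. `enhanced_pretranslate` and its injected callables are hypothetical placeholders, with lambdas standing in for the LLM, the optional translator review, and the BT-trained NLLB model:

```python
# Sketch of the proposed pipeline: expand a synthetic back translation from a
# published source, optionally let translators correct it, then feed it to an
# NLLB model trained only on (back translation -> target) pairs. The model
# calls are injected as callables; the lambdas below are stubs, not real
# model code.
from typing import Callable

def enhanced_pretranslate(source_verses: list[str],
                          llm_make_bt: Callable[[str], str],
                          translator_review: Callable[[str], str],
                          nllb_bt_to_target: Callable[[str], str]) -> list[str]:
    targets = []
    for verse in source_verses:
        synthetic_bt = llm_make_bt(verse)               # "make more back translation"
        corrected_bt = translator_review(synthetic_bt)  # optional human step
        targets.append(nllb_bt_to_target(corrected_bt))
    return targets

# Placeholder stubs so the sketch runs end to end.
out = enhanced_pretranslate(
    ["In the beginning, God created the heavens and the earth."],
    llm_make_bt=lambda s: f"[BT-style] {s}",
    translator_review=lambda s: s,
    nllb_bt_to_target=lambda s: f"[target] {s}",
)
print(out[0])
```

Keeping the steps as injected callables would let the translator-review stage be skipped or swapped without touching the rest of the flow.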