Multiple Targets in a Gradient Boosting Model #3
-
@robert-robison Thanks for your interest in the project!
We are training multiple trees within the same model, where one tree is built for each parameter. This is similar to training a model for multi-class classification. There are built-in routines in LightGBM for that, so you'd only need to reformulate the problem a little.
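To illustrate what that reformulation could look like, here is a minimal sketch using LightGBM's built-in multi-class machinery; the data, parameter values, and the stand-in objective below are placeholders, not the project's actual setup:

```python
import numpy as np
import lightgbm as lgb

# Minimal sketch (not the released project code): with num_class = n_params,
# LightGBM grows one tree per "class" (here, per parameter of the target
# time series model) in every boosting round.
n_rows, n_features, n_params = 1000, 10, 3

rng = np.random.default_rng(0)
X = rng.normal(size=(n_rows, n_features))
# Placeholder labels; in the actual setting the built-in multi-class loss
# would be replaced by a loss defined through the target time series model.
y = rng.integers(0, n_params, size=n_rows)

train_set = lgb.Dataset(X, label=y)
params = {
    "objective": "multiclass",  # stand-in objective for this sketch
    "num_class": n_params,      # one tree per parameter per iteration
    "learning_rate": 0.1,
    "verbosity": -1,
}
booster = lgb.train(params, train_set, num_boost_round=100)

preds = booster.predict(X)
print(preds.shape)  # (n_rows, n_params): one column per parameter
```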
Indeed, MLPs would be a reasonable choice for learning the parameters of the time series model, since this gets around the scaling problem of decision trees, where you need to train as many trees as there are parameters. For MLPs this is easy, since you'd only need to adjust the size of the output layer. However, since the concept of Hyper-Networks is available already and since we wanted to leverage the strength of decision trees, we opted for LightGBM as a model for learning the parameters of a target model.
-
Whoa, I never realized until right now that that's how LightGBM multiclass works. Awesome. So it's basically what I suggested in my original question (training several GBMs at once, updating each one iteration at a time), but that capability is built into LightGBM. Thanks for the response!
-
Is there a place in the documentation or in the source code where I can see how LightGBM can train multiple trees at once, one for each of the parameters? I was also wondering how this would be implemented. Great work on the paper, I found it really interesting!
-
@AoEtechman Please find my answers below.
Thanks for your interest in our project.
We haven't released the code yet. We are training multiple trees within the same model, where one tree is built for each parameter. This is similar to training a model for multi-class classification. There are built-in routines in LightGBM for that, so you'd only need to reformulate the problem a little. Here is some pseudo-code:

```python
params = {
    "learning_rate": some_value,
    "other_param": some_value,
    "num_class": n_params,  # here goes the number of parameters
}
```

The output of lgb is now an array with one column per parameter, i.e. of shape (n_rows, n_params). Let me know in case of further questions.
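As a purely illustrative follow-up, each row of that output could then be mapped into valid parameter ranges of whatever target model is being fit; the link functions and parameter names below are hypothetical stand-ins, not taken from the paper:

```python
import numpy as np

# Stand-in for the booster output: one column per parameter, shape (n_rows, n_params).
n_rows, n_params = 1000, 3
raw = np.random.randn(n_rows, n_params)  # replace with booster.predict(X)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical link functions constraining raw scores to valid ranges:
alpha = sigmoid(raw[:, 0])  # e.g. a smoothing parameter in (0, 1)
beta = sigmoid(raw[:, 1])   # another bounded parameter
scale = np.exp(raw[:, 2])   # a strictly positive parameter

# Each row now carries one complete parameter set for the target model.
```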
-
Love the work you all are doing, I've had similar ideas in the past, and I think this will work really well.
One question I had after reading your paper: how are you using Gradient Boosted models to predict multiple parameters at the same time? Seems like a task more suited for a neural network, but I could be missing something. I guess you could be training several GBMs at once, updating each one iteration at a time? Or is there a way to structure it to work with a single tree?
Anyway, thanks for the work you all are doing!