I think most modern LLMs should be able to understand code even without the comments (or at least most of it).
This means that in some cases, sending the comments along with the code is probably wasting tokens.
An option to strip the comments would help fit even more code into the prompt (especially with the new "file select" feature).
One could even go further and minify the code; I'd be curious how well an LLM would handle that (I guess it could be tested manually, so I'll run some tests next time I need to send a large amount of code).
Maybe even one (configurable) feature that could work nicely with #6: strip comments/minify only once the token limit is reached. When the extension detects that the limit is exceeded, it strips comments file by file until the prompt fits. If all the comments are gone and the prompt is still too large, it starts minifying the files one by one. And if you want to get really fancy, you could use an LLM (with a user-configured API key / API URL) to decide which files to minify based on their filenames and the rest of the prompt / the question the files are meant to help with, but that's maybe a bit too crazy :)
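To make the idea concrete, here is a rough sketch of what the first escalation step could look like in Python. It is only an illustration, not the extension's actual code: `count_tokens` is a hypothetical callback standing in for whatever tokenizer the extension uses, and the comment stripping shown here only handles Python sources (via the standard `tokenize` module; docstrings are left in place).

```python
import io
import tokenize


def strip_comments(source: str) -> str:
    """Remove comment tokens from Python source code.

    Sketch only: works for Python files; docstrings are kept,
    since they are string literals rather than COMMENT tokens.
    """
    tokens = [
        tok
        for tok in tokenize.generate_tokens(io.StringIO(source).readline)
        if tok.type != tokenize.COMMENT
    ]
    # untokenize() preserves the original token positions, so the
    # result stays valid (possibly with some trailing whitespace).
    return tokenize.untokenize(tokens)


def fit_to_budget(files, token_limit, count_tokens):
    """Strip comments file by file until the combined prompt fits.

    `count_tokens` is a hypothetical tokenizer callback (str -> int);
    minifying as a second escalation step is left out of this sketch.
    """
    files = dict(files)  # don't mutate the caller's mapping
    for name in list(files):
        total = sum(count_tokens(src) for src in files.values())
        if total <= token_limit:
            break  # already under budget, stop stripping
        files[name] = strip_comments(files[name])
    return files
```

The same loop could then fall through to a minification pass when stripping every comment still isn't enough, as described above.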
(Again: amazing project, and I'm really impressed by how quickly suggestions get picked up.)