
adding ftc coach to resources #493

Closed
MonishSaravana wants to merge 1 commit into gamemanual0:main from MonishSaravana:feature/ftc-coach

Conversation

@MonishSaravana

Added FTC coach (https://coach.team31000.org/), which is an AI portfolio analysis tool meant to take in engineering portfolios, and give users a detailed breakdown of their portfolios' strengths, weaknesses, and actionable improvements for every award.

@abidingabi
Member

gm0 is not a venue for self-promotion; we only recommend resources that are widely established. We also do not recommend inspirebait or AI slop (this is both).

Please either make substantive PRs or do not make any; as it stands your PRs have been a waste of everybody’s time.

abidingabi closed this Mar 5, 2026
@MonishSaravana
Author

MonishSaravana commented Mar 5, 2026

I don't think this is AI slop at all when hundreds of teams have used it, seem to love it, and come back to use it for future competitions.

People submit testimonials of their own accord, and it has been the most popular thread in the portfolios channel in the FTC Discord for around a month now. I waited to see whether the community genuinely liked the tool, and until it had a substantial number of users, before I submitted a PR to gm0. If I had rushed the PR out immediately, I would agree that would be excessive self-promotion, but there are now many teams all over the world who use this tool.

I don't know what constitutes "widely established" when there are very likely tools on the useful resources page that have had fewer teams use them than FTC Coach.

https://web.archive.org/web/20231203044359/https://ftctutorials.com/en/

^ is listed there even though the links don't seem to work.

I do agree the last PR was not high quality; that was my fault and I apologize. But I do not think this is a useless PR by any means.

It happens to be our team that made the tool, but I don't think that should get it auto-rejected, as long as there is legitimate value being provided, which you can verify in the FTC Discord or via the testimonials on the page itself.

@novabansal
Contributor

Use of a large language model in this manner actively hinders learning outcomes from FIRST Tech Challenge. It is a skill to be able to critically analyze your own content within the Engineering Portfolio against the given rubrics, and to be able to reach out within the educational and engineering community to obtain additional feedback. One could argue that not all teams have the ability to find connections who can help them with their portfolio, but in the age of the internet, that is far from true - anyone from anywhere can help out, including fellow students online. To that end, analyzing peer content manually instead of with AI is also an educational opportunity that this tool would diminish.

There are also cognitive risks. A randomized study found that LLM users consistently underperform across neural, linguistic, and behavioral measures over a four-month period, with students happily substituting AI for the difficult work of learning and ceding control to it with minimal effort [1]. A systematic review of 70 empirical studies found that when learners interact with LLMs, risks extend to cognitive and behavioral outcomes, including reduced neural activity, over-reliance, diminished independent learning skills, and a loss of student agency [2].

Large language models are also structurally unfit for the type of knowledge this tool claims to possess. Hallucination and generalization still plague large language models, especially when considering narrow technical domains like FTC. I can't imagine a large percentage of training data covers FTC, much less FTC judging with current updates to the game. This poses an ethical risk - what if your tool provides incorrect information to a team?

I could go on, but overall, including a large language model like this poses acute risks to the educational efficacy of the program, and while it is an amazing accomplishment from a technical perspective, I find it wholly unsuited for GM0.

@Eeshwar-Krishnan
Collaborator

Eeshwar-Krishnan commented Mar 5, 2026

The idea of the tool is neat. However, I tend to agree with non-inclusion, and I am seconding that decision. I have been following that thread in the FTC Discord for a bit, but my thoughts are roughly:

  1. You correctly pointed out that we have some outdated/dead links in the manual. We are a volunteer team, and pruning those is an ongoing battle. However, the existence of stale links doesn't bypass the "widely established" requirement for new tools. A popular Discord thread is a great start, but we look for sustained, multi-season reliance across the global community. It's a bar we keep so that short-lived hype projects don't constantly have to be cycled out, which was an issue in the past.

  2. The tool itself is nondeterministic and black-boxed, being centered around AI. That makes it extremely hard to recommend from the get-go, because beyond good faith in the tool's creator, we have no way of verifying its overall accuracy or that it remains relevant as rules change over time. gm0 already struggles with the manpower required to verify the links we do have (as you have identified yourself), so this presents a pretty significant manpower burden. I've been keeping an eye on the FTC Discord thread, and while the tool doesn't give objectively bad advice, I'm not certain I've seen evidence that the benefit is large enough to justify that burden.

  3. The aggressively prominent attribution requests on the site flag it as promotion. While gm0 doesn't explicitly disallow Inspire promotion, the combination of that with what look like paid features for a similar tool on another part of the site (https://coach.team31000.org/frc-impact) is a huge red flag. Again, a large amount of trust inherently goes into whatever resources are linked remaining in the state as presented.

For what it's worth, I'm not inherently against LLMs. I do think it's an interesting tool and a neat idea. However, the verification burden is just too high and there are too many red flags for me to give my approval on this personally, and after discussion with others, that is where we stand on it right now.

