Created an llms.txt file for textual #5929
Replies: 3 comments
-
If your goal is to feed this into an agent for vibe coding Textual apps, I would say that close to 80% of this file is completely unnecessary and not helping the agent in any way. There's far too much information in here that is just not relevant. It might be relevant if you were trying to vibe code the Textual framework itself, which you shouldn't do anyway. Agents work better when they are given small amounts of very high quality data. This is the opposite.
-
@ybugrara any updates on this file? Are you having good results with it?
-
The main thing I found that helps LLMs deal with Textual is keeping a local copy of the Textual source and docs, telling the LLMs to use it, and also telling them to use Pilot for headless testing wherever possible. The thing about agents, and we are all moving from vibing to agents, is that they don't necessarily have a real terminal to work with all the time, so they need Pilot. Incidentally, Pilot is Textual's superpower in the AI age: agents, especially now with Claude 2 and Codex, plus agent templating, are all that is needed to put a massive amount of compute to work on a UI. It's far more efficient than telling them to spin up a bunch of VMs, get Playwright going, and test web apps. Just my 2 cents.
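For anyone new to the headless testing mentioned above, here is a minimal sketch. The `CounterApp`, its widget IDs, and the test name are invented for illustration; `App.run_test()` and the `Pilot` methods (`click`, `pause`) are Textual's standard test harness, typically driven with pytest and pytest-asyncio.

```python
# test_counter.py -- a minimal sketch of a headless Pilot test (hypothetical app).
import pytest
from textual.app import App, ComposeResult
from textual.widgets import Button, Label


class CounterApp(App):
    """Tiny example app invented for this sketch; not from the thread."""

    def __init__(self) -> None:
        super().__init__()
        self.count = 0

    def compose(self) -> ComposeResult:
        yield Label("0", id="display")
        yield Button("Increment", id="inc")

    def on_button_pressed(self, event: Button.Pressed) -> None:
        self.count += 1
        self.query_one("#display", Label).update(str(self.count))


@pytest.mark.asyncio
async def test_button_increments_counter() -> None:
    app = CounterApp()
    # run_test() runs the app headless and yields a Pilot for simulated input.
    async with app.run_test() as pilot:
        await pilot.click("#inc")   # simulate a mouse click on the button
        await pilot.pause()         # let pending messages settle
        assert app.count == 1
```

With pytest-asyncio installed, `pytest test_counter.py` exercises the UI without a real terminal, which is exactly the property that matters for agents.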
-
Hi all,
I wanted to share an llm.txt file I created for Textual, with information as of 7-6-25. You can use it to give your LLM the context it needs to vibe code a solution. I needed something for myself and used Gemini 2.5 Flash Lite to aggregate and summarize the Textual GitHub repo. This was my first pass at generating the file, so if anyone finds any issues, let me know and I'll update it.
llm.txt