Conversation
Whose idea was it to add this dependency in the first place? (it was me)
Some things still need to be (re)implemented, including - but probably not limited to - forced actions, request ID error logging, periodic wake-ups, minimum wait times, and various configuration options. I also want to rework how actions work, which is kinda big.
I don't remember why I didn't do this to begin with.
I did this so I'd be able to use Promise.withResolvers().
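For context, `Promise.withResolvers()` (ES2024, Node 22+) returns a promise together with its `resolve`/`reject` functions, which is handy for settling a pending wait from outside the promise body. Below is a minimal sketch of the pattern, with a manual equivalent for older runtimes; the names are illustrative, not the actual Jippity code.

```typescript
// Manual equivalent of Promise.withResolvers() for runtimes that
// predate ES2024 / Node 22.
interface Resolvers<T> {
    promise: Promise<T>;
    resolve: (value: T) => void;
    reject: (reason?: unknown) => void;
}

function withResolvers<T>(): Resolvers<T> {
    let resolve!: (value: T) => void;
    let reject!: (reason?: unknown) => void;
    const promise = new Promise<T>((res, rej) => {
        resolve = res;
        reject = rej;
    });
    return { promise, resolve, reject };
}

// Usage: park the resolver somewhere and settle the promise later,
// e.g. when a stimulus arrives or an idle timer fires.
const { promise, resolve } = withResolvers<string>();
setTimeout(() => resolve("IdleTimer"), 10);
promise.then((v) => console.log(v)); // prints "IdleTimer"
```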
How embarrassing...
I verified that they get logged by trying to use an invalid API key.
I need to rethink how things get queued up. Having a message backlog causes the LLM to receive outdated context. This might not matter for some games, but it makes Cookie Clicker really hard for the LLM to play.
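One possible shape for the rethought queue, sketched with made-up names (not the actual implementation): coalesce pending updates so the LLM only ever sees the newest snapshot of each kind of stimulus, rather than replaying a stale backlog.

```typescript
// Illustrative stimulus shape; the real queue may differ.
type Stimulus = { kind: string; payload: string; at: number };

// Keep only the most recent stimulus of each kind, preserving
// arrival order among the survivors. Older context updates for the
// same kind are dropped instead of being sent to the LLM.
function coalesce(queue: Stimulus[]): Stimulus[] {
    const latest = new Map<string, Stimulus>();
    for (const s of queue) {
        const seen = latest.get(s.kind);
        if (!seen || s.at > seen.at) latest.set(s.kind, s);
    }
    return [...latest.values()].sort((a, b) => a.at - b.at);
}
```

For a fast-moving game like Cookie Clicker, this would mean an old "context" snapshot is replaced by the newest one instead of both being queued.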
This might seem like a step backward, but I have a plan.
Not yet sure if this is smart or way overcomplicating things.
This was an initial design doc that I didn't intend to commit. It's mostly inaccurate by now.
I was busy, so I couldn't review today; I'll do that tomorrow (probably in ~7–8 hours).
I've probably figured out how to work out the OpenAI API issues, so I'll try later today (assuming nothing comes up between now and then).

Edit: it turns out a ton of stuff is coming up soon, which means I can't do comparison tests, so this will have to be delayed; I'm not sure by how long.

While we're here, I think I should mention that the issue mentioned in #18 probably isn't actually a problem on your end, but possibly a problem with my account and the (IMO weird) way it handles orgs vs. personal accounts. I'll double-check when I test out Jippity "v2".
I have a feeling this is because the prompt says he's a streamer AI meant to entertain and play games, so he thinks that interacting with "chat" entertains them while playing NeuroPilot, the "game", since our default game name is set to Visual Studio Code. (There's also likely training data indicating that programming would be "boring" on stream, even though the dev stream proves otherwise. Regardless, the logic seems to be that asking "chat" what to do next is a way to "entertain them" while programming.)
Most likely down to how those two models interpret their incoming context as opposed to their training data. Although the idea that the evolution made it more clueless about who Vedal is, is very funny to me.
Pasu4 left a comment
I didn't look at everything that thoroughly, but from skimming through the files nothing stood out to me as "wrong" or "bad". There are a few things I would format differently, but that's just a matter of personal preference. Note, though, that I'm completely self-taught on the subject of TypeScript; NeuroPilot was my first project in the language.
I also had Jippity create a website with NeuroPilot, which now works, but also ran into the issue that he kept asking what to do next. If you're still planning to add a frontend, my suggestion would be to add a "live stream chat" you can use to talk to Jippity / give suggestions, maybe with randomized user names so that Jippity doesn't think he has only one viewer. Also, it would be nice to have the option to control manually when Jippity acts/talks, instead of being on a fixed interval. For example, when testing NeuroPilot, Jippity may talk multiple times in the time it takes me to write a message in the chat window.
Other than that, the known issue in the README should be removed as it's fixed by this PR, if I understand the "Update" paragraph correctly.
```ts
                log.debug("waitForStimulus: timeout reached, resolving with IdleTimer");
                this.stimulusResolver.resolve({ type: "IdleTimer" });
            }
        }, 1000);
```
Currently, Jippity uses neither the environment variable nor the default value specified in the README for the action interval. This change uses the environment variable, applies the minimum of 1000 ms, and uses the default value of 10000 ms.
```diff
-        }, 1000);
+        }, Math.max(1000, parseInt(process.env.JIPPITY_INTERVAL_MS ?? "10000")));
```
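One caveat worth flagging, sketched below with an illustrative helper (not actual NeuroPilot code): if the environment variable is set to a non-numeric value, `parseInt` returns `NaN`, and `Math.max(1000, NaN)` is also `NaN`, so the minimum clamp silently stops working. A guard that falls back to the default may be safer:

```typescript
// Illustrative helper: parse an interval from an env var, falling
// back to the default on missing or non-numeric input, and clamping
// to a minimum. Without the NaN check, Math.max(1000, NaN) === NaN.
function intervalMs(raw: string | undefined, fallback = 10000, min = 1000): number {
    const parsed = parseInt(raw ?? "", 10);
    return Math.max(min, Number.isNaN(parsed) ? fallback : parsed);
}

console.log(intervalMs(undefined));  // 10000 (default)
console.log(intervalMs("500"));      // 1000 (clamped to minimum)
console.log(intervalMs("garbage"));  // 10000 (falls back on NaN)
```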
```ts
    "If you are feeling lonely because no one is talking in chat, you can make up stories about Randy and Neuro. " +
    "Try to limit your monologues to a sentence or two at a time. " +
```
Removes the duplicated instruction telling Jippity that he can talk about Randy and Neuro.
Also adds a bit of information about Vedal, Neuro and Evil. I mainly added this because Jippity usually thinks Neuro is male, but a bit of basic information about them would probably be useful considering that Neuro & co. usually appear in game jam games, which I assume is one of the intended use cases for Jippity.
I'm not sure if that's the best way to give an LLM background information, and I'm definitely not an expert on how LLMs process information, as you can see from the last dev stream's editing issues.
```diff
-    "If you are feeling lonely because no one is talking in chat, you can make up stories about Randy and Neuro. " +
-    "Try to limit your monologues to a sentence or two at a time. " +
+    "Try to limit your monologues to a sentence or two at a time. " +
+    "\n" +
+    "Neuro-sama is a popular AI streamer portrayed as a young girl. " +
+    "She was created by Vedal, a programmer with a turtle avatar, who is considered her father. " +
+    "Neuro-sama has a little sister called Evil, who tries to act evil but is actually kinder than her sister.\n" +
+    "\n" +
```
@EnterpriseScratchDev How did you get Jippity to work? Tried connecting NeuroPilot to it and got this error:
Turns out gpt-4o-mini (the default) doesn't support reasoning effort. This should probably be caught by the backend, which should then remove the reasoning effort (or the default model should be a higher tier, with a note that the model must support reasoning effort and tools). I did a small test run with Jippity (gpt-5-mini) as a sanity check; this is the final result:
And here are the runtime logs: Quite confused on why the

I've forced myself to learn some basic JS/TS concepts and have majorly redesigned Jippity.
I believe this redesign resolves #16 and #17, but it's hard to say if it's an overall improvement or not.
From the user's perspective, nothing needs to be changed apart from running `npm install` again. However, there are some parameters that you'll likely want to change at the top of the `Jippity` class. Making users edit the source code to change basic parameters isn't my plan long-term, but I wanted to get some feedback on these changes before adding config options that may be removed later.
@KTrain5169 and @Pasu4, I'd appreciate it if you could switch to this branch and give me any feedback you have. No doubt the code is still quite messy, but I believe it's functionally solid. Thanks in advance.
Yapping
This definitely looks as janky as what's on `main` right now, maybe even jankier, but it seems to be more stable. I haven't run into any issues while testing it (other than reaching the context window limit).

Jippity made this site using NeuroPilot. It's a bit of a struggle to get it to actually do anything. It keeps asking "chat" what it should do next.

Update
I made some changes to how old messages get trimmed in order to keep response times "low".
Although response times are lower, I still needed to set the timeout in NeuroPilot to 60 seconds.
Responses are definitely faster now, but the LLM might not be getting enough context now.


It's like it has no context for what it's working on and only knows that Vedal sometimes gives it cookies and sometimes denies it.
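The trimming idea described in the update above could look something like this (illustrative names, not the real Jippity code): keep the system prompt and only the most recent N messages, so the request stays small while the model still sees the rules it's supposed to follow.

```typescript
// Illustrative message shape, roughly mirroring chat-style APIs.
type Msg = { role: "system" | "user" | "assistant"; content: string };

// Keep all system messages (the persona/rules) plus the newest
// `keep` non-system messages; everything older is dropped.
function trimHistory(history: Msg[], keep = 20): Msg[] {
    const system = history.filter((m) => m.role === "system");
    const rest = history.filter((m) => m.role !== "system");
    return [...system, ...rest.slice(-keep)];
}
```

The trade-off shown in the screenshots follows directly from this: a smaller `keep` lowers response times, but the model loses awareness of what it was working on.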
Funnily enough, `gpt-5-mini` consistently thinks that "Vedal" is some kind of API. I think GPT-4 had some awareness that Vedal was a streamer, or at least a person.