Commit 6eab540

Merge pull request #55 from ferrislucas/default-to-gpt4-turbo-preview
Default to gpt4 turbo preview
2 parents 5891d73 + e6272b3 commit 6eab540

5 files changed: +75 -44 lines changed

.gitignore (+2 -1)

```diff
@@ -19,4 +19,5 @@ dist/
 # Operating system files
 .DS_Store
 Thumbs.db
-tmp
+tmp
+prompts
```

README.md (+69 -38)

````diff
@@ -1,6 +1,29 @@
 # Promptr
 
-Promptr is a CLI tool that lets you use plain English to instruct GPT3 or GPT4 to make changes to your codebase. This is most effective with GPT4 because of its larger context window, but GPT3 is still useful for smaller scopes.
+Promptr is a CLI tool that lets you use plain English to instruct OpenAI LLM models to make changes to your codebase. <br /><br />
+## Usage
+
+`promptr [options] -p "your instructions" <file1> <file2> <file3> ...`
+
+<br />
+<br />
+
+## Examples
+__Cleanup the code in a file__
+```bash
+$ promptr -p "Cleanup the code in src/index.js"
+```
+Promptr recognizes that the file `src/index.js` is referenced in the prompt, so the content of `src/index.js` is sent to the model along with the user's prompt.
+<br />The model's response is automatically applied to the relevant files.
+<br /><br />
+<br />
+
+__Alphabetize the methods in all of the javascript files__
+```bash
+$ promptr -p "Alphabetize the method names in all of these files" $(git ls-tree -r --name-only HEAD | grep ".js" | tr '\n' ' ')
+```
+The command above uses `git ls-tree`, `grep`, and `tr` to pass a list of javascript file paths to promptr.
+
 <br /><br />
 
 The PRs below are good examples of what can be accomplished using Promptr. You can find links to the individual commits and the prompts that created them in the PR descriptions.
@@ -10,65 +33,72 @@ The PRs below are good examples of what can be accomplished using Promptr. You
 
 I've found this to be a good workflow:
 - Commit any changes, so you have a clean working area.
-- Author your prompt in a text file. The prompt should be specific, clear instructions.
+- Author your prompt in a file. The prompt should be specific, clear instructions.
 - Make sure your prompt contains the relative paths of any files that are relevant to your instructions.
 - Use Promptr to execute your prompt. Provide the path to your prompt file using the `-p` option:
 `promptr -p my_prompt.txt`
-*If you have access to GPT4 then use the `-m gpt4` option to get the best results.*
 
-Complex requests can take a while. If a task is too complex then the request will time out - try breaking the task down into smaller units of work when this happens. When the response is ready, promptr applies the changes to your filesystem. Use your favorite git UI to inspect the results.
+Promptr applies the model's response to your files. Use your favorite git UI to inspect the results.
 
 <br /><br />
+## Templating
 
+Promptr supports templating using [liquidjs](https://liquidjs.com/), which allows users to incorporate templating commands within their prompt files. This feature enhances the flexibility and reusability of prompts, especially when working on larger projects with repetitive patterns or standards.
 
-## Examples
-__Cleanup the code in a file__
-Promptr recognizes that the file `src/index.js` is referenced in the prompt, so the contents of `src/index.js` are automatically sent to the model along with the prompt.
-```bash
-$ promptr -p "Cleanup the code in src/index.js"
-```
-<br />
+#### Using Includes
 
-__Alphabetize the methods in all of the javascript files__
-<br />
-This example uses `git ls-tree`, `grep`, and `tr` to pass a list of javascript file paths to promptr:
-```bash
-$ promptr -m gpt4 -p "Alphabetize the method names in all of these files" $(git ls-tree -r --name-only HEAD | grep ".js" | tr '\n' ' ')
+Projects can have one or more "includes"—reusable snippets of code or instructions—that can be included from a prompt file. These includes may contain project-specific standards, instructions, or code patterns, enabling users to maintain consistency across their codebase.
+
+For example, you might have an include file named `_project.liquid` with the following content:
+
+```liquid
+This project uses Node version 18.
+Use yarn for dependency management.
+Use import not require in Javascript.
+Don't include `module.exports` at the bottom of Javascript classes.
+Alphabetize method names and variable declarations.
 ```
-<br />
 
-__Given some tests, ask the model for an implementation that makes the tests pass__
-<br />
-The following example asks GPT4 to modify app/models/model.rb so that the tests in spec/models/model_spec.rb will pass:
-```bash
-$ promptr -m gpt4 -t test-first spec/models/model_spec.rb app/models/model.rb -o app/models/model.rb
+In your prompt file, use the `render` function from liquidjs to pull the include into the prompt you're working on:
+
+```liquid
+{% render '_project.liquid' %}
+// your prompt here
 ```
-<br /><br />
 
-## Usage
+This approach allows for the development of reusable include files that can be shared across multiple projects or within different parts of the same project.
 
-`promptr -m <model> [options] <file1> <file2> <file3> ...`
+#### Example Use Cases
 
-<br />
-<br />
+- **Project-Wide Coding Standards**: Create an include file with comments outlining coding standards, and include it in every new code file for the project.
+
+- **Boilerplate Code**: Develop a set of boilerplate code snippets for different parts of the application (e.g., model definitions, API endpoints) and include them as needed.
+
+- **Shared Instructions**: Maintain a set of instructions or guidelines for specific tasks (e.g., how to document functions) and include them in relevant prompt files.
+
+By leveraging the templating feature, prompt engineers can significantly reduce redundancy and ensure consistency in prompt creation, leading to more efficient and standardized modifications to the codebase.
+
+<br /><br />
 
 ## Options
-- `-m, --model <model>`: Optional flag to set the model, defaults to `gpt-4-0613`. Using the value "gpt3" will use the `gpt-3.5-turbo-0613` model.
-- `-d, --dry-run`: Optional boolean flag that can be used to run the tool in dry-run mode where only the prompt that will be sent to the model is displayed. No changes are made to your filesystem when this option is used.
-- `-i, --interactive`: Optional boolean flag that enables interactive mode where the user can provide input interactively. If this flag is not set, the tool runs in non-interactive mode.
-- `-p, --prompt <prompt>`: Optional string flag that specifies the prompt to use in non-interactive mode. If this flag is not set then a blank prompt is used. A path or a url can also be specified - in this case the content at the specified path or url is used as the prompt. The prompt is combined with the template to form the payload sent to the model.
-- `-t, --template <templateName | templatePath | templateUrl>`: Optional string flag that specifies a built in template name, the absolute path to a template file, or a url for a template file that will be used to generate the output. The default is the built in `refactor` template. The available built in templates are: `empty`, `refactor`, `swe`, and `test-first`. The prompt is interpolated with the template to form the payload sent to the model.
-- `-x`: Optional boolean flag. Promptr parses the model's response and applies the resulting operations to your file system when using the default template. You only need to pass the `-x` flag if you've created your own template, and you want Promptr to parse and apply the output in the same way that the built in "refactor" template output is parsed and applied to your file system.
-- `-o, --output-path <outputPath>`: Optional string flag that specifies the path to the output file. If this flag is not set, the output will be printed to stdout.
-- `-v, --verbose`: Optional boolean flag that enables verbose output, providing more detailed information during execution.
-- `-dac, --disable-auto-context`: Prevents files referenced in the prompt from being automatically included in the context sent to the model.
-- `--version`: Display the version and exit
+
+| Option | Description |
+| ------ | ----------- |
+| `-p, --prompt <prompt>` | Specifies the prompt to use in non-interactive mode. A path or a url can also be specified - in this case the content at the specified path or url is used as the prompt. The prompt can leverage the liquidjs templating system. |
+| `-m, --model <model>` | Optional flag to set the model, defaults to `gpt-4-1106-preview`. Using the value "gpt3" will use the `gpt-3.5-turbo` model. |
+| `-d, --dry-run` | Optional boolean flag that can be used to run the tool in dry-run mode where only the prompt that will be sent to the model is displayed. No changes are made to your filesystem when this option is used. |
+| `-i, --interactive` | Optional boolean flag that enables interactive mode where the user can provide input interactively. If this flag is not set, the tool runs in non-interactive mode. |
+| `-t, --template <templateName \| templatePath \| templateUrl>` | Optional string flag that specifies a built in template name, the absolute path to a template file, or a url for a template file that will be used to generate the output. The default is the built in `refactor` template. The available built in templates are: `empty`, `refactor`, `swe`, and `test-first`. The prompt is interpolated with the template to form the payload sent to the model. |
+| `-x` | Optional boolean flag. Promptr parses the model's response and applies the resulting operations to your file system when using the default template. You only need to pass the `-x` flag if you've created your own template, and you want Promptr to parse and apply the output in the same way that the built in "refactor" template output is parsed and applied to your file system. |
+| `-o, --output-path <outputPath>` | Optional string flag that specifies the path to the output file. If this flag is not set, the output will be printed to stdout. |
+| `-v, --verbose` | Optional boolean flag that enables verbose output, providing more detailed information during execution. |
+| `-dac, --disable-auto-context` | Prevents files referenced in the prompt from being automatically included in the context sent to the model. |
+| `--version` | Display the version and exit |
 
 Additional parameters can specify the paths to files that will be included as context in the prompt. The parameters should be separated by a space.
 
 <br />
 <br />
-
 ## Requirements
 - Node 18
 - [API key from OpenAI](https://beta.openai.com/account/api-keys)
@@ -108,3 +138,4 @@ npm run test-binary
 ## License
 
 Promptr is released under the [MIT License](https://opensource.org/licenses/MIT).
+
````

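The README's auto-context feature ("Promptr recognizes that the file `src/index.js` is referenced in the prompt") can be sketched as a simple scan of the prompt for tokens that match known file paths. This is an illustrative sketch, not Promptr's actual implementation; `extractReferencedPaths` and the file list are hypothetical.

```javascript
// Hypothetical sketch of auto-context detection: keep prompt tokens
// that exactly match a path in the project's file list.
function extractReferencedPaths(prompt, existingFiles) {
  const known = new Set(existingFiles);
  // Split on whitespace and common quoting characters, then keep
  // only tokens that are known file paths.
  return prompt
    .split(/[\s"'`]+/)
    .filter((token) => known.has(token));
}

const files = ["src/index.js", "src/CliState.js", "README.md"];
const prompt = 'Cleanup the code in src/index.js';
console.log(extractReferencedPaths(prompt, files)); // logs ['src/index.js']
```

A real implementation would also need to handle trailing punctuation and relative-path variants; the sketch only shows the basic idea behind the `-dac, --disable-auto-context` toggle.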
src/CliState.js (+1 -1)

```diff
@@ -13,7 +13,7 @@ export default class CliState {
     this.program.option('-t, --template <template>', 'Template name, template path, or a url for a template file')
     this.program.option('-o, --output-path <outputPath>', 'Path to output file. If no path is specified, output will be printed to stdout.')
     this.program.option('-v, --verbose', 'Verbose output')
-    this.program.option('-m, --model <model>', 'Specify the model to use', 'gpt4')
+    this.program.option('-m, --model <model>', 'Specify the model to use', 'gpt-4-1106-preview')
     this.program.option('-dac, --disable-auto-context', 'Prevents files referenced in the prompt from being automatically included in the context sent to the model.');

    this.program.version(version, '--version', 'Display the current version')
```

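The CliState change swaps the third argument of commander's `.option()`, which is the default value used when the flag is absent. The fallback behavior can be sketched without commander; `resolveModel` below is a hypothetical stand-in for the parsed option, not Promptr code.

```javascript
// Sketch of the default-value behavior this commit changes: when no
// -m/--model flag is present, fall back to the new default model.
const DEFAULT_MODEL = "gpt-4-1106-preview"; // new default from this commit

function resolveModel(argv) {
  const i = argv.findIndex((a) => a === "-m" || a === "--model");
  return i !== -1 && argv[i + 1] ? argv[i + 1] : DEFAULT_MODEL;
}

console.log(resolveModel([]));             // "gpt-4-1106-preview"
console.log(resolveModel(["-m", "gpt3"])); // "gpt3"
```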
src/services/OpenAiGptService.js (+2 -2)

```diff
@@ -7,8 +7,8 @@ import SystemMessage from "./SystemMessage.js";
 export default class OpenAiGptService {

   static async call(prompt, model, requestJsonOutput = true) {
-    if (model == "gpt3") model = "gpt-3.5-turbo-0613";
-    if (model == "gpt4") model = "gpt-4-0613";
+    if (model == "gpt3") model = "gpt-3.5-turbo";
+    if (model == "gpt4") model = "gpt-4-1106-preview";

     const configuration = new Configuration({
       apiKey: process.env.OPENAI_API_KEY
```

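The two changed lines implement a small alias table: the shorthand values "gpt3" and "gpt4" map to concrete OpenAI model names, and anything else passes through to the API unchanged. Extracted as a standalone function (`resolveAlias` is a hypothetical name, not in the codebase), the logic is:

```javascript
// Alias resolution mirroring the diff: shorthands map to the concrete
// model names this commit introduces; unknown values pass through.
function resolveAlias(model) {
  if (model == "gpt3") return "gpt-3.5-turbo";
  if (model == "gpt4") return "gpt-4-1106-preview";
  return model;
}

console.log(resolveAlias("gpt4")); // "gpt-4-1106-preview"
```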
test/OpenAiGptService.test.js (+1 -2)

```diff
@@ -104,9 +104,8 @@ describe('OpenAiGptService', () => {
     const prompt = 'What is the capital of France?';
     const expectedResult = 'The capital of France is Paris.';
     const models = ['gpt3', 'gpt4'];
-    const expectedModels = ['gpt-3.5-turbo-0613', 'gpt-4-0613'];
+    const expectedModels = ['gpt-3.5-turbo', 'gpt-4-1106-preview'];

-    const configStub = sinon.stub(ConfigService, 'retrieveConfig').resolves({ api: { temperature: 0.5 } });
     const openaiStub = sinon.stub(OpenAIApi.prototype, 'createChatCompletion').resolves({
       data: {
         choices: [
```

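One plausible reason for dropping the extra `ConfigService` stub in the test: sinon throws a `TypeError` if asked to wrap a method that is already wrapped, so a second `sinon.stub` on the same method (e.g. one already created in a setup hook) would fail. The guard can be illustrated with a minimal plain-JS stub helper; this is a hypothetical stand-in for `sinon.stub`, not its real implementation.

```javascript
// Minimal stub helper illustrating sinon's "already wrapped" guard.
function stub(obj, name, impl) {
  if (obj[name] && obj[name].__stubbed) {
    throw new TypeError(`Attempted to wrap ${name} which is already wrapped`);
  }
  const original = obj[name];
  const replacement = (...args) => impl(...args);
  replacement.__stubbed = true;
  replacement.restore = () => { obj[name] = original; }; // undo the wrap
  obj[name] = replacement;
  return replacement;
}

const svc = { retrieveConfig: () => ({}) };
stub(svc, "retrieveConfig", () => ({ api: { temperature: 0.5 } }));

let threw = false;
try {
  stub(svc, "retrieveConfig", () => ({})); // second stub, like the removed line
} catch (e) {
  threw = true; // the guard rejects double wrapping
}
console.log(threw); // true
```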