
Commit edf8471

Big update with overhauls for langchain, anthropic and more. Breaking changes.
1. Support for OpenRouter removed
2. Support for Hugging Face removed
3. Anthropic support added
4. Updated to the latest flavor of LangChain, with LCEL
5. Chat memory disabled for sidekick (to be put back in the future)
6. All config files moved to `$HOME/.aicodebot/`
7. Experimental features "sidekick-agent" and "learn" removed
1 parent 49b205e commit edf8471

18 files changed, +137 −658 lines

README.md

+13-23
```diff
@@ -2,11 +2,16 @@
 ## Your AI-powered coding sidekick
 
-AICodeBot is a coding assistant designed to make your coding life easier. Think of it as your AI version of a pair programmer. Perform code reviews, create helpful commit messages, debug problems, and help you think through building new features. A team member that accelerates the pace of development and helps you write better code.
+AICodeBot is a terminal-based coding assistant designed to make your coding life easier.
+Think of it as your AI version of a pair programmer.
+Perform code reviews, create helpful commit messages, debug problems, and help you think through building new features.
+A team member that accelerates the pace of development and helps you write better code.
 
 We've planned to build out multiple different interfaces for interacting with AICodeBot. To start, it's a [command-line tool](https://pypi.org/project/aicodebot/) that you can install and run in your terminal, and a [GitHub Action for Code Reviews](https://github.com/marketplace/actions/aicodebot-code-review).
 
-Status: This project is in its early stages, but it already improves the software development workflow, and has a healthy roadmap of features (below).
+Status: This project was built before AI Coding Assistants were cool. 🤓 As such, much of the functionality has
+been replicated in various IDEs. Where AICodeBot shines is a) it's in the terminal, not a GUI, and b) it can be used
+in processes like GitHub Actions.
 
 We're using AICodeBot to build AICodeBot, and it's upward spiraling all the time. We're looking for contributors to help us build it out. See [CONTRIBUTING](https://github.com/TechNickAI/AICodeBot/blob/main/CONTRIBUTING.md) for more.
```

````diff
@@ -92,11 +97,12 @@ Commands:
   sidekick   Coding help from your AI sidekick
 ```
 
-### Open AI key setup
+### API Key setup
 
-The first time you run it, you'll be prompted to enter your OpenAI API Key, which is required, as we use OpenAI's large language models for the AI. You can get one for free by visiting your [API key settings page](https://platform.openai.com/account/api-keys).
+AICodeBot supports multiple large language models, including Anthropic's Claude 3.x and OpenAI's GPT-3/4x.
+Pull requests for Gemini or Ollama are welcome, but we feel these two do the trick.
 
-Note: You will be billed by OpenAI based on how much you use it. Typical developers will use less than $10/month - which if you are a professional developer you'll likely more than make up for with saved time and higher quality work. See [OpenAI's pricing page](https://openai.com/pricing/) for more details. Also, see the note about increasing your token size and using better language models below.
+The first time you run AICodeBot, you'll be prompted to enter your API keys.
 
 ## Integration with GitHub Actions
````

````diff
@@ -161,30 +167,15 @@ It's also not a "build a site for me in 5 minutes" tool that takes a well-constr
 ## Configuring the language model to use
 
-Not all OpenAI accounts have GPT-4 API access enabled. By default, AICodeBot will use GPT-4. If your OpenAI account supports it, we will check the first time you run it. If your OpenAI API does not support GPT-4, you can ask to be added to the waitlist [here](https://openai.com/waitlist/gpt-4-api). In our testing, GPT-4 is the best model and provides the best results.
-
 To specify a different model, you can set the `language_model` in your `$HOME/.aicodebot.yaml` file. For example:
 
 ```yaml
 openai_api_key: sk-*****
 language_model: gpt-3.5-turbo
 personality: Stewie
-version: 1.2
-```
-
-You can also use openrouter.ai to get access to advanced models like GPT-4 32k and Anthropic's 100k model for larger context windows. See [openrouter.ai](https://openrouter.ai) for more details. Here's a sample config:
-
-```yaml
-openai_api_key: sk-*****
-openrouter_api_key: sk-or-****
-language_model_provider: OpenRouter
-language_model: openai/gpt-4-32k # or anthropic/claude-2 for 100k token limit
-personality: Stewie
-version: 1.2
+version: 1.3
 ```
 
-Note: We'll be adding more options for AI models in the future, including those that can be run locally, such as [GPT4all](https://gpt4all.io/) and HuggingFace's [Transformers](https://huggingface.co/transformers/).
-
 ### Understanding Tokens and Using Commands Efficiently
 
 In AI models like OpenAI's GPT-4, a "token" is a piece of text, as short as a character or as long as a word. The total tokens in an API call, including input and output, affect the cost, time, and whether the call works based on the maximum limit.
````
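The token explanation above can be made concrete with a rough rule of thumb: for typical English text, one token is on the order of four characters. The sketch below is an approximation only and is not part of AICodeBot — the 4-characters-per-token ratio and the `response_reserve` parameter are assumptions; a real tokenizer (e.g. OpenAI's tiktoken) is the authoritative count.

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate: English text averages ~4 characters per token.

    This is a heuristic only; use a real tokenizer for billing or for
    hard context-limit decisions.
    """
    return max(1, round(len(text) / chars_per_token))


def fits_in_context(text: str, context_limit: int, response_reserve: int = 500) -> bool:
    """Check whether `text` plus a reserved response budget fits a model's context."""
    return estimate_tokens(text) + response_reserve <= context_limit
```

Under this heuristic, a 21,414-token context like the error message below corresponds to roughly 85 KB of source text, which is why trimming the loaded files is the first remedy.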
```diff
@@ -205,8 +196,7 @@ The context is too large (21414) for any of the models supported by your API key
 There are a couple of things you can do:
 
 1. Load fewer files into the context (only what you need to work with)
-2. Apply for GPT-4-32k access from OpenAI by contacting them.
-3. Use openrouter.ai - this allows you to use the full power of GPT-4-32k, which offers a 4x larger context window. See [openrouter.ai](https://openrouter.ai) for more details. Once you sign up and set your `openrouter_api_key` in your `$HOME/.aicodebot.yaml` file, you can have access to larger models. Soon we will have support for Claude 2, which has a 100k token limit.
+2. Use Anthropic's Claude, which has a much larger context window
 
 ## Development / Contributing
```

aicodebot/agents.py

-82
This file was deleted.

aicodebot/cli.py

+3-6
```diff
@@ -1,8 +1,8 @@
 from aicodebot import AICODEBOT, version as aicodebot_version
-from aicodebot.commands import alignment, commit, configure, debug, learn, review, sidekick, sidekick_agent
+from aicodebot.commands import alignment, commit, configure, debug, review, sidekick
 from aicodebot.config import read_config
 from aicodebot.output import get_console
-import click, langchain, os, sys
+import click, langchain_core, os, sys
 
 # -------------------------- Top level command group ------------------------- #
 
@@ -26,7 +26,7 @@ def cli(ctx, debug_output):
     os.environ["OPENAI_API_KEY"] = existing_config["openai_api_key"]
 
     # Turn on langchain debug output if requested
-    langchain.debug = debug_output
+    langchain_core.debug = debug_output
 
 
 # -------------------------- Subcommands ------------------------- #
@@ -37,9 +37,6 @@ def cli(ctx, debug_output):
 cli.add_command(debug)
 cli.add_command(review)
 cli.add_command(sidekick)
-if os.getenv("AICODEBOT_ENABLE_EXPERIMENTAL_FEATURES"):
-    cli.add_command(learn)
-    cli.add_command(sidekick_agent)
 
 
 if __name__ == "__main__":  # pragma: no cover
```

aicodebot/commands/__init__.py

+2-3
```diff
@@ -2,8 +2,7 @@
 from .commit import commit
 from .configure import configure
 from .debug import debug
-from .learn import learn
 from .review import review
-from .sidekick import sidekick, sidekick_agent
+from .sidekick import sidekick
 
-__all__ = ["alignment", "commit", "configure", "debug", "learn", "review", "sidekick", "sidekick_agent"]
+__all__ = ["alignment", "commit", "configure", "debug", "review", "sidekick"]
```

aicodebot/commands/alignment.py

+4-4
```diff
@@ -7,7 +7,7 @@
 
 
 @click.command()
-@click.option("-t", "--response-token-size", type=int, default=350)
+@click.option("-t", "--response-token-size", type=int, default=500)
 def alignment(response_token_size):
     """A message about AI Alignment 🤖 + ❤"""
     console = get_console()
@@ -18,13 +18,13 @@ def alignment(response_token_size):
     lmm = LanguageModelManager()
 
     with Live(OurMarkdown(f"Talking to {lmm.model_name} via {lmm.provider}"), auto_refresh=True) as live:
-        chain = lmm.chain_factory(
-            prompt=prompt,
+        llm = lmm.model_factory(
             response_token_size=response_token_size,
             temperature=CREATIVE_TEMPERATURE,
             streaming=True,
             callbacks=[RichLiveCallbackHandler(live, console.bot_style)],
         )
+        chain = prompt | llm
 
-        response = chain.run({})
+        response = str(chain.invoke({}))
         live.update(OurMarkdown(response))
```
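The change above is the heart of the LangChain migration: the old `chain = lmm.chain_factory(prompt=...)` / `chain.run(...)` style is replaced with LCEL's pipe composition, `chain = prompt | llm` followed by `chain.invoke(...)`. As a dependency-free illustration of what the `|` operator does, here is a minimal sketch of the composition pattern — the `Runnable` class and the fake prompt/model below are hypothetical stand-ins, not LangChain's actual classes.

```python
class Runnable:
    """Minimal stand-in for an LCEL runnable: supports `a | b` composition."""

    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # `a | b` yields a new Runnable that pipes a's output into b
        return Runnable(lambda value: other.invoke(self.invoke(value)))


# Hypothetical prompt and model steps, analogous to `prompt | llm` above
prompt = Runnable(lambda inputs: f"Tell me about {inputs['topic']}")
llm = Runnable(lambda text: f"[model response to: {text}]")

chain = prompt | llm
result = chain.invoke({"topic": "AI alignment"})
```

The pipe builds a composite whose `invoke` runs each step in order, which is why the diff can delete the `chain_factory(prompt=...)` coupling: the prompt is no longer a constructor argument, it is just the first stage of the pipeline.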

aicodebot/commands/commit.py

+4-4
```diff
@@ -69,7 +69,7 @@ def commit(response_token_size, yes, skip_pre_commit, files):  # noqa: PLR0915
         )
     else:
         console.print("Running pre-commit checks...")
-        result = subprocess.run(["pre-commit", "run", "--files"] + files)
+        result = subprocess.run(["pre-commit", "run", "--files"] + files, check=False)
         if result.returncode != 0:
             console.print("🛑 Pre-commit checks failed. Please fix the issues and try again.")
             return
@@ -85,13 +85,13 @@ def commit(response_token_size, yes, skip_pre_commit, files):  # noqa: PLR0915
 
     console.print("Analyzing the differences and generating a commit message")
     with Live(OurMarkdown(f"Talking to {lmm.model_name} via {lmm.provider}"), auto_refresh=True) as live:
-        chain = lmm.chain_factory(
-            prompt=prompt,
+        llm = lmm.chain_factory(
             response_token_size=response_token_size,
             streaming=True,
             callbacks=[RichLiveCallbackHandler(live, console.bot_style)],
         )
-        response = chain.run({"diff_context": diff_context, "languages": languages})
+        chain = prompt | llm
+        response = chain.invoke({"diff_context": diff_context, "languages": languages})
         live.update(OurMarkdown(response))
 
     commit_message_approved = not console.is_terminal or click.confirm(
```
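The `check=False` added to `subprocess.run` above makes the intent explicit: the call should return a `CompletedProcess` even on a non-zero exit, so the surrounding code can inspect `returncode` itself instead of catching an exception. A small self-contained sketch of the two behaviors, using `python -c` as a portable failing command:

```python
import subprocess
import sys

# check=False: a failing command returns normally and the caller
# inspects returncode, as the commit command does after pre-commit runs.
result = subprocess.run([sys.executable, "-c", "raise SystemExit(3)"], check=False)
assert result.returncode == 3

# check=True: the same failure raises CalledProcessError instead.
try:
    subprocess.run([sys.executable, "-c", "raise SystemExit(3)"], check=True)
    failed_code = None
except subprocess.CalledProcessError as e:
    failed_code = e.returncode
```

`check=False` is already the default; passing it explicitly mostly silences linters (e.g. ruff's `PLW1510`) and documents that the non-zero exit is handled deliberately.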

aicodebot/commands/configure.py

+1-11
```diff
@@ -1,10 +1,9 @@
 from aicodebot import AICODEBOT
 from aicodebot.config import get_config_file, read_config
 from aicodebot.helpers import create_and_write_file
-from aicodebot.lm import openai_supported_engines
 from aicodebot.output import get_console
 from aicodebot.prompts import DEFAULT_PERSONALITY, PERSONALITIES
-import click, openai, os, sys, webbrowser, yaml
+import click, os, sys, webbrowser, yaml
 
 
 @click.command()
@@ -68,15 +67,6 @@ def write_config_file(config_data):
 
     config_data["openai_api_key"] = click.prompt("Please enter your OpenAI API key").strip()
 
-    # Validate the API key
-    try:
-        openai.api_key = config_data["openai_api_key"]
-        with console.status("Validating the OpenAI API key", spinner=console.DEFAULT_SPINNER):
-            openai_supported_engines(config_data["openai_api_key"])
-    except Exception as e:
-        raise click.ClickException(f"Failed to validate the API key: {str(e)}") from e
-    console.print("✅ The API key is valid.")
-
     # ---------------------- Collect the personality choice ---------------------- #
 
     # Pull the choices from the name from each of the PERSONALITIES
```
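With the network round-trip validation deleted above, the configure command now accepts whatever the user types. A lightweight local sanity check is one possible middle ground; the sketch below is hypothetical and not part of this commit — it only checks the widely-known `sk-` prefix shape of OpenAI keys, which OpenAI could change at any time, and it cannot confirm the key actually works (only an API call can).

```python
def looks_like_openai_key(key: str) -> bool:
    """Hypothetical local sanity check for an OpenAI API key.

    Verifies only the familiar `sk-` prefix and a plausible length;
    a typo'd but well-formed key will still pass.
    """
    key = key.strip()
    return key.startswith("sk-") and len(key) > 20
```

This keeps the configure flow offline and fast while still catching pasted-in-the-wrong-field mistakes.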

aicodebot/commands/debug.py

+4-4
```diff
@@ -16,7 +16,7 @@ def debug(ctx, command):
     # Run the command and capture its output
     command_str = " ".join(command)
     console.print(f"Executing the command:\n{command_str}", highlight=False)
-    process = subprocess.run(command_str, shell=True, capture_output=True, text=True)  # noqa: S602
+    process = subprocess.run(command_str, shell=True, capture_output=True, text=True, check=False)  # noqa: S602
 
     # Print the output of the command
     output = f"Standard Output:\n{process.stdout}\nStandard Error:\n{process.stderr}"
@@ -36,12 +36,12 @@ def debug(ctx, command):
     lmm = LanguageModelManager()
 
     with Live(OurMarkdown(f"Talking to {lmm.model_name} via {lmm.provider}"), auto_refresh=True) as live:
-        chain = lmm.chain_factory(
-            prompt=prompt,
+        llm = lmm.model_factory(
             streaming=True,
             callbacks=[RichLiveCallbackHandler(live, console.bot_style)],
         )
-        response = chain.run({"command_output": output, "languages": ["unix", "bash", "shell"]})
+        chain = prompt | llm
+        response = chain.invoke({"command_output": output, "languages": ["unix", "bash", "shell"]})
         live.update(OurMarkdown(response))
 
     sys.exit(process.returncode)
```
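The debug command's capture pattern above can be sketched in isolation: run the user's command line through the shell, capture both streams as text, and hand back the combined output plus the exit code to propagate. A minimal sketch (the `run_and_capture` helper name is ours, and the `echo` example assumes a shell where `echo` exists, which is true on POSIX and Windows):

```python
import subprocess


def run_and_capture(command_str: str) -> tuple[str, int]:
    """Run a shell command, returning combined output text and the exit code.

    Mirrors the debug command's pattern: shell=True so the user's command
    line is interpreted by the shell, capture_output/text to collect both
    streams as strings, check=False so failures are reported, not raised.
    """
    process = subprocess.run(command_str, shell=True, capture_output=True, text=True, check=False)
    output = f"Standard Output:\n{process.stdout}\nStandard Error:\n{process.stderr}"
    return output, process.returncode


output, code = run_and_capture("echo hello")
```

Note the trade-off `# noqa: S602` acknowledges in the real code: `shell=True` is flagged by security linters because the string is shell-interpreted, but here that interpretation is exactly the point — the user is asking AICodeBot to debug the command as their shell would run it.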
