This repository has been archived by the owner on May 13, 2024. It is now read-only.

Some bug fixes (#37)
Chris Lemke authored Mar 23, 2023
1 parent 4bdab51 commit ca7edea
Showing 10 changed files with 225 additions and 98 deletions.
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -88,7 +88,7 @@ repos:
exclude: "workflow/src/libs"

- repo: https://github.com/charliermarsh/ruff-pre-commit
rev: "v0.0.255"
rev: "v0.0.258"
hooks:
- id: ruff
args: [--fix, --exit-non-zero-on-fix]
2 changes: 1 addition & 1 deletion README.md
@@ -199,7 +199,7 @@ You can tweak the workflow to your liking. The following parameters are availabl
- **ChatGPT transformation prompt**: Use this prompt to automatically transform either highlighted text through Universal actions or by adding a hotkey to process the content of the clipboard.
- **ChatGPT aliases**: If you use a certain prompt over and over again you can create an alias for it. This will save you from typing the same prompt over and over again. It is similar to the aliases in the command line. Format `alias=prompt;`
- **ChatGPT jailbreak prompt**: Add your ChatGPT jailbreak prompt which will be automatically included to your request. You can use it by hitting <kbd>⌘</kbd> <kbd>⏎</kbd>. Default: `None`.
- **InstructGPT model**: Following models are available: `Ada`, `Babbage`, `Curie`, `Davinci`, `Code-Davinci`, `Code-Cushman`. Default: `Davinci`. ([Read more](https://platform.openai.com/docs/models/overview))
- **InstructGPT model**: Following models are available: `Ada`, `Babbage`, `Curie`, `Davinci`. Default: `Davinci`. ([Read more](https://platform.openai.com/docs/models/overview))
- **ChatGPT model**: Following models are available: `ChatGPT-3.5`, `GPT-4` ([limited beta](https://openai.com/waitlist/gpt-4-api)), `GPT-4 (32k)` ([limited beta](https://openai.com/waitlist/gpt-4-api)). Default: `ChatGPT-3.5`. ([Read more](https://platform.openai.com/docs/models/overview))
- **Temperature**: The temperature determines how greedy the generative model is (between `0` and `2`). If the temperature is high, the model can output words other than the highest probability with a fairly high probability. The generated text will be more diverse, but there is a higher probability of grammar errors and the generation of nonsense . Default: `0`.
- **Maximum tokens**: The maximum number of tokens to generate in the completion. Default (InstructGPT): `50`. Default (ChatGPT): `4096`.
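The model, temperature, and maximum-tokens settings described in the README map onto the standard parameters of an OpenAI completion request. A minimal sketch of how those values could be sent with the `openai` Python package (the prompt text, the pre-1.0 client interface, and the `gpt-3.5-turbo` id are assumptions for illustration; only `text-davinci-003` and the listed defaults come from this workflow):

```python
import openai  # assumes the pre-1.0 "openai" package interface

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

# InstructGPT-style request: the "Davinci" option corresponds to text-davinci-003,
# temperature ranges from 0 (deterministic) to 2 (more diverse output), and
# max_tokens caps the length of the generated completion.
completion = openai.Completion.create(
    model="text-davinci-003",
    prompt="Explain what an Alfred workflow is in one sentence.",
    temperature=0,  # workflow default
    max_tokens=50,  # workflow default for InstructGPT
)
print(completion.choices[0].text.strip())

# ChatGPT-style request: the "ChatGPT-3.5" option is assumed to map to a chat model.
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Explain what an Alfred workflow is."}],
    temperature=0,
)
print(chat.choices[0].message.content)
```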
130 changes: 120 additions & 10 deletions info.plist
@@ -79,6 +79,16 @@
<key>vitoclose</key>
<false/>
</dict>
<dict>
<key>destinationuid</key>
<string>F127DDE4-1464-48AA-AE75-648EA4013D87</string>
<key>modifiers</key>
<integer>131072</integer>
<key>modifiersubtext</key>
<string>Speak the answer 🗣️</string>
<key>vitoclose</key>
<false/>
</dict>
</array>
<key>2CD4970E-43C0-4C91-BE0F-3FB55C662C1D</key>
<array>
@@ -217,6 +227,19 @@
<false/>
</dict>
</array>
<key>51AA54E6-C0D7-4A43-B5C9-D18B0A805FA9</key>
<array>
<dict>
<key>destinationuid</key>
<string>A714CC5B-5F89-4D9A-A71D-DB9DD4F85305</string>
<key>modifiers</key>
<integer>0</integer>
<key>modifiersubtext</key>
<string></string>
<key>vitoclose</key>
<false/>
</dict>
</array>
<key>51EEBBF2-BE53-4A88-965F-02AC2D651EF5</key>
<array>
<dict>
@@ -472,6 +495,16 @@
</array>
<key>8D3B95CA-78A1-4E37-920A-29C777780804</key>
<array>
<dict>
<key>destinationuid</key>
<string>51AA54E6-C0D7-4A43-B5C9-D18B0A805FA9</string>
<key>modifiers</key>
<integer>131072</integer>
<key>modifiersubtext</key>
<string>Speak the answer 🗣️</string>
<key>vitoclose</key>
<false/>
</dict>
<dict>
<key>destinationuid</key>
<string>A714CC5B-5F89-4D9A-A71D-DB9DD4F85305</string>
@@ -800,6 +833,29 @@
<false/>
</dict>
</array>
<key>F127DDE4-1464-48AA-AE75-648EA4013D87</key>
<array>
<dict>
<key>destinationuid</key>
<string>703A86F6-9BC5-4F81-8E45-AB13862691F3</string>
<key>modifiers</key>
<integer>0</integer>
<key>modifiersubtext</key>
<string></string>
<key>vitoclose</key>
<false/>
</dict>
<dict>
<key>destinationuid</key>
<string>45B5F36B-A668-4C79-A272-2948ABAF36C3</string>
<key>modifiers</key>
<integer>0</integer>
<key>modifiersubtext</key>
<string></string>
<key>vitoclose</key>
<false/>
</dict>
</array>
<key>F3547C17-564C-4320-BDF6-EF35EDAF73C2</key>
<array>
<dict>
@@ -1096,6 +1152,26 @@ RESPONSE:'{query}'</string>
<key>version</key>
<integer>1</integer>
</dict>
<dict>
<key>config</key>
<dict>
<key>argument</key>
<string>{query}</string>
<key>passthroughargument</key>
<true/>
<key>variables</key>
<dict>
<key>always_speak</key>
<string>1</string>
</dict>
</dict>
<key>type</key>
<string>alfred.workflow.utility.argument</string>
<key>uid</key>
<string>51AA54E6-C0D7-4A43-B5C9-D18B0A805FA9</string>
<key>version</key>
<integer>1</integer>
</dict>
<dict>
<key>config</key>
<dict>
@@ -1658,6 +1734,26 @@ sys.stdout.write(ast.literal_eval(sys.argv[1])["text"])</string>
<key>version</key>
<integer>2</integer>
</dict>
<dict>
<key>config</key>
<dict>
<key>argument</key>
<string>{query}</string>
<key>passthroughargument</key>
<true/>
<key>variables</key>
<dict>
<key>always_speak</key>
<string>1</string>
</dict>
</dict>
<key>type</key>
<string>alfred.workflow.utility.argument</string>
<key>uid</key>
<string>F127DDE4-1464-48AA-AE75-648EA4013D87</string>
<key>version</key>
<integer>1</integer>
</dict>
<dict>
<key>config</key>
<dict>
@@ -2439,7 +2535,7 @@ You can tweak the workflow to your liking. The following parameters are availabl
- **ChatGPT transformation prompt**: Use this prompt to automatically transform either highlighted text through Universal actions or by adding a hotkey to process the content of the clipboard.
- **ChatGPT aliases**: If you use a certain prompt over and over again you can create an alias for it. This will save you from typing the same prompt over and over again. It is similar to the aliases in the command line. Format `alias=prompt;`
- **ChatGPT jailbreak prompt**: Add your ChatGPT jailbreak prompt which will be automatically included to your request. You can use it by hitting &lt;kbd&gt;&lt;/kbd&gt; &lt;kbd&gt;&lt;/kbd&gt;. Default: `None`.
- **InstructGPT model**: Following models are available: `Ada`, `Babbage`, `Curie`, `Davinci`, `Code-Davinci`, `Code-Cushman`. Default: `Davinci`. ([Read more](https://platform.openai.com/docs/models/overview))
- **InstructGPT model**: Following models are available: `Ada`, `Babbage`, `Curie`, `Davinci`. Default: `Davinci`. ([Read more](https://platform.openai.com/docs/models/overview))
- **ChatGPT model**: Following models are available: `ChatGPT-3.5`, `GPT-4` ([limited beta](https://openai.com/waitlist/gpt-4-api)), `GPT-4 (32k)` ([limited beta](https://openai.com/waitlist/gpt-4-api)). Default: `ChatGPT-3.5`. ([Read more](https://platform.openai.com/docs/models/overview))
- **Temperature**: The temperature determines how greedy the generative model is (between `0` and `2`). If the temperature is high, the model can output words other than the highest probability with a fairly high probability. The generated text will be more diverse, but there is a higher probability of grammar errors and the generation of nonsense . Default: `0`.
- **Maximum tokens**: The maximum number of tokens to generate in the completion. Default (InstructGPT): `50`. Default (ChatGPT): `4096`.
@@ -2625,6 +2721,17 @@ Please refer to OpenAI's [safety best practices guide](https://platform.openai.c
<key>ypos</key>
<real>1670</real>
</dict>
<key>51AA54E6-C0D7-4A43-B5C9-D18B0A805FA9</key>
<dict>
<key>colorindex</key>
<integer>3</integer>
<key>note</key>
<string>Sets: "always_speak"</string>
<key>xpos</key>
<real>295</real>
<key>ypos</key>
<real>475</real>
</dict>
<key>51EEBBF2-BE53-4A88-965F-02AC2D651EF5</key>
<dict>
<key>colorindex</key>
@@ -3006,6 +3113,17 @@ Please refer to OpenAI's [safety best practices guide](https://platform.openai.c
<key>ypos</key>
<real>1495</real>
</dict>
<key>F127DDE4-1464-48AA-AE75-648EA4013D87</key>
<dict>
<key>colorindex</key>
<integer>3</integer>
<key>note</key>
<string>Sets: "always_speak"</string>
<key>xpos</key>
<real>205</real>
<key>ypos</key>
<real>1110</real>
</dict>
<key>F3547C17-564C-4320-BDF6-EF35EDAF73C2</key>
<dict>
<key>colorindex</key>
@@ -3241,14 +3359,6 @@ Please refer to OpenAI's [safety best practices guide](https://platform.openai.c
<string>Davinci</string>
<string>text-davinci-003</string>
</array>
<array>
<string>Code-Davinci</string>
<string>code-davinci-002</string>
</array>
<array>
<string>Code-Cushman</string>
<string>code-cushman-001</string>
</array>
<array>
<string>GPT-4</string>
<string>gpt-4</string>
@@ -3543,7 +3653,7 @@ Please refer to OpenAI's [safety best practices guide](https://platform.openai.c
</dict>
</array>
<key>version</key>
<string>1.3.0</string>
<string>1.3.1</string>
<key>webaddress</key>
<string>https://github.com/chrislemke/ChatFred</string>
</dict>
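The new `alfred.workflow.utility.argument` objects above (uids 51AA54E6-… and F127DDE4-…) pass `{query}` through unchanged while setting the workflow variable `always_speak` to `1`; they are reached via the added connections whose modifier mask is 131072 (the ⇧ key) and whose subtext reads "Speak the answer 🗣️". Alfred hands workflow variables to downstream script objects as environment variables, so a consuming script could honour the flag roughly like this (a sketch under that assumption; the function name and the `say` command are illustrative, not taken from the workflow's code):

```python
import os
import subprocess
import sys


def maybe_speak(answer: str) -> None:
    """Speak the answer aloud when the 'always_speak' workflow variable is set."""
    # Alfred exposes workflow variables to downstream scripts via the environment.
    if os.environ.get("always_speak") == "1":
        # macOS built-in text-to-speech; the real workflow may use another mechanism.
        subprocess.run(["say", answer], check=False)


if __name__ == "__main__":
    text = sys.argv[1] if len(sys.argv) > 1 else ""
    maybe_speak(text)
    sys.stdout.write(text)  # pass the answer along unchanged
```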
8 changes: 6 additions & 2 deletions workflow/src/history_manager.py
@@ -46,9 +46,13 @@ def provide_history():
if __history_type == "search" and prompt != "":
history = [tuple[0] for tuple in process.extract(prompt, history, limit=20)]

if prompt != "":
if prompt in ["", " "]:
history.insert(0, (str(uuid.uuid1()), "...", "Talk to ChatGPT 💬", "0"))
else:
history.insert(0, (str(uuid.uuid1()), prompt, "Talk to ChatGPT 💬", "0"))

non_hist_text = [prompt, "..."]

response_dict = {
"variables": {
"user_prompt": prompt,
@@ -61,7 +65,7 @@ def provide_history():
"arg": [entry[0], entry[1]],
"autocomplete": entry[1],
"icon": {
"path": f"./{'icon.png' if index == 0 and entry[1] == prompt else 'magnifying_glass.png'}"
"path": f"./{'icon.png' if index == 0 and entry[1] in non_hist_text else 'magnifying_glass.png'}"
},
}
for index, entry in enumerate(history)
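The change above makes `provide_history` always prepend one synthetic row: a `...` placeholder when the typed prompt is empty or a single space, otherwise the prompt itself, and the icon choice now keys off the new `non_hist_text` list so this top row shows `icon.png` while real history entries keep `magnifying_glass.png`. A stripped-down, self-contained sketch of that pattern (the history tuple layout and the `variables` key come from the diff; the `title`/`subtitle` fields and the JSON emission are simplified for illustration):

```python
import json
import uuid


def build_items(prompt: str, history: list) -> dict:
    """Build an Alfred Script Filter response with a 'Talk to ChatGPT' row on top."""
    # Prepend either the typed prompt or a "..." placeholder when nothing was typed.
    if prompt in ["", " "]:
        history.insert(0, (str(uuid.uuid1()), "...", "Talk to ChatGPT 💬", "0"))
    else:
        history.insert(0, (str(uuid.uuid1()), prompt, "Talk to ChatGPT 💬", "0"))

    non_hist_text = [prompt, "..."]

    return {
        "variables": {"user_prompt": prompt},
        "items": [
            {
                "title": entry[1],
                "subtitle": entry[2],
                "arg": [entry[0], entry[1]],
                "autocomplete": entry[1],
                "icon": {
                    # Only the freshly inserted top row shows the chat icon;
                    # genuine history rows keep the magnifying glass.
                    "path": f"./{'icon.png' if index == 0 and entry[1] in non_hist_text else 'magnifying_glass.png'}"
                },
            }
            for index, entry in enumerate(history)
        ],
    }


if __name__ == "__main__":
    print(json.dumps(build_items("", []), ensure_ascii=False))
```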
8 changes: 4 additions & 4 deletions workflow/src/libs/Levenshtein-0.20.9.dist-info/RECORD
@@ -2,12 +2,12 @@ Levenshtein-0.20.9.dist-info/COPYING,sha256=h_G9SlK0ApR2toT83VH-08ayVA5GLTwtO_yb
Levenshtein-0.20.9.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
Levenshtein-0.20.9.dist-info/METADATA,sha256=vtRAVE08_DzEkMa02aRLYKbx-nyVg18C7jUb-gkKYrk,3411
Levenshtein-0.20.9.dist-info/RECORD,,
Levenshtein-0.20.9.dist-info/WHEEL,sha256=pRHZb-iwv4SVV1EHWzl2OvElmydUOLWzREX5w9i-CLc,104
Levenshtein-0.20.9.dist-info/WHEEL,sha256=qs8Yz0ZbqM5VOyCb1-BnB3rFDQdhd1TcVdDfo2VM4zc,104
Levenshtein-0.20.9.dist-info/top_level.txt,sha256=RvPuPevm_7j5S0VBmjD7AQr8RlHYl2CTA_4GqFW-gfA,12
Levenshtein/StringMatcher.py,sha256=i0xwHTR1dmzQsijtjxXFOHrwWiDNidQd29WNkYD06qg,2275
Levenshtein/__init__.py,sha256=PErygLmnBJwidH9M0FiKIL9qdK6tRo9yivwE8lxlFTk,16115
Levenshtein/__init__.pyi,sha256=mQJjf8RiWDLrqwZNpSyA05ysVSu7KjsgcY-4O4pxb9A,2887
Levenshtein/__pycache__/StringMatcher.cpython-311.pyc,,
Levenshtein/__pycache__/__init__.cpython-311.pyc,,
Levenshtein/levenshtein_cpp.cpython-311-darwin.so,sha256=PuX0yLQZvUAlSn_PpyVvgKDoe7Jkiv-83IB3rKqp78s,399234
Levenshtein/__pycache__/StringMatcher.cpython-310.pyc,,
Levenshtein/__pycache__/__init__.cpython-310.pyc,,
Levenshtein/levenshtein_cpp.cpython-310-darwin.so,sha256=Z6RRcKRQeUs5K6dTal3NyLkEc4BcYk2GUC6LaHdf9B8,398978
Levenshtein/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
2 changes: 1 addition & 1 deletion workflow/src/libs/Levenshtein-0.20.9.dist-info/WHEEL
@@ -1,5 +1,5 @@
Wheel-Version: 1.0
Generator: skbuild 0.16.4
Root-Is-Purelib: false
Tag: cp311-cp311-macosx_11_0_arm64
Tag: cp310-cp310-macosx_11_0_arm64
