Commit

changed requirements
NavodPeiris committed Oct 9, 2024
1 parent 297231a commit 7eb565d
Showing 4 changed files with 15 additions and 7 deletions.
7 changes: 5 additions & 2 deletions README.md
@@ -88,13 +88,16 @@ transcript will also indicate the timeframe in seconds where each speaker speaks
### Transcription example:

```
import os
from speechlib import Transcriptor
file = "obama_zach.wav" # your audio file
voices_folder = "" # voices folder containing voice samples for recognition
language = "en" # language code
log_folder = "logs" # log folder for storing transcripts
modelSize = "tiny" # size of model to be used [tiny, small, medium, large-v1, large-v2, large-v3]
quantization = False # setting this 'True' may speed up the process but lower the accuracy
-ACCESS_TOKEN = "your hf key" # get permission to access pyannote/[email protected] on huggingface
+ACCESS_TOKEN = "huggingface api key" # get permission to access pyannote/[email protected] on huggingface
# quantization only works on faster-whisper
transcriptor = Transcriptor(file, log_folder, language, modelSize, ACCESS_TOKEN, voices_folder, quantization)
@@ -112,7 +115,7 @@ res = transcriptor.custom_whisper("D:/whisper_tiny_model/tiny.pt")
res = transcriptor.huggingface_model("Jingmiao/whisper-small-chinese_base")
# use assembly ai model
-res = transcriptor.assemby_ai_model("your api key")
+res = transcriptor.assemby_ai_model("assemblyAI api key")
res --> [["start", "end", "text", "speaker"], ["start", "end", "text", "speaker"]...]
```
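
As a quick usage note, the returned list can be iterated directly; a minimal sketch, assuming res follows the [start, end, text, speaker] shape shown above:

```
# res holds [start, end, text, speaker] entries (per the shape documented above)
for start, end, text, speaker in res:
    print(f"[{start}s - {end}s] {speaker}: {text}")
```
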
9 changes: 7 additions & 2 deletions library.md
@@ -70,13 +70,16 @@ transcript will also indicate the timeframe in seconds where each speaker speaks
### Transcription example:

```
import os
from speechlib import Transcriptor
file = "obama_zach.wav" # your audio file
voices_folder = "" # voices folder containing voice samples for recognition
language = "en" # language code
log_folder = "logs" # log folder for storing transcripts
modelSize = "tiny" # size of model to be used [tiny, small, medium, large-v1, large-v2, large-v3]
quantization = False # setting this 'True' may speed up the process but lower the accuracy
-ACCESS_TOKEN = "your hf key" # get permission to access pyannote/[email protected] on huggingface
+ACCESS_TOKEN = "huggingface api key" # get permission to access pyannote/[email protected] on huggingface
# quantization only works on faster-whisper
transcriptor = Transcriptor(file, log_folder, language, modelSize, ACCESS_TOKEN, voices_folder, quantization)
@@ -94,7 +97,9 @@ res = transcriptor.custom_whisper("D:/whisper_tiny_model/tiny.pt")
res = transcriptor.huggingface_model("Jingmiao/whisper-small-chinese_base")
# use assembly ai model
-res = transcriptor.assemby_ai_model("your api key")
+res = transcriptor.assemby_ai_model("assemblyAI api key")
res --> [["start", "end", "text", "speaker"], ["start", "end", "text", "speaker"]...]
```

#### if you don't want speaker names: keep voices_folder as an empty string ""
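
If you do want speaker names, voices_folder needs voice samples to match against. A minimal sketch of one possible setup, assuming one sub-folder per speaker; the exact layout expected by speechlib is not shown in this commit, so treat the paths as hypothetical:

```
# hypothetical layout, one sub-folder per speaker (verify against the speechlib docs):
# voices/
#   obama/obama_sample.wav
#   zach/zach_sample.wav
voices_folder = "voices"  # pass this instead of "" to enable speaker recognition
```
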
4 changes: 2 additions & 2 deletions setup.py
@@ -5,7 +5,7 @@

setup(
name="speechlib",
-version="1.1.9",
+version="1.1.10",
description="speechlib is a library that can do speaker diarization, transcription and speaker recognition on an audio file to create transcripts with actual speaker names. This library also contain audio preprocessor functions.",
packages=find_packages(),
long_description=long_description,
@@ -19,7 +19,7 @@
"Programming Language :: Python :: 3.10",
"Operating System :: OS Independent",
],
-install_requires=["transformers", "torch", "torchaudio", "pydub", "pyannote.audio", "speechbrain==0.5.16", "accelerate", "faster-whisper", "openai-whisper", "assemblyai"],
+install_requires=["transformers>=4.36.2, <5.0.0", "torch>=2.1.2, <3.0.0", "torchaudio>=2.1.2, <3.0.0", "pydub>=0.25.1, <1.0.0", "pyannote.audio>=3.1.1, <4.0.0", "speechbrain>=0.5.16, <1.0.0", "accelerate>=0.26.1, <1.0.0", "faster-whisper>=0.10.1, <1.0.0", "openai-whisper>=20231117, <20240927", "assemblyai"],
python_requires=">=3.8",
)
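
To sanity-check an existing environment against the new version ranges, a minimal sketch using importlib.metadata (available on Python 3.8+, matching python_requires); the distribution names are taken from install_requires above:

```
from importlib.metadata import version, PackageNotFoundError

# distribution names as pinned in install_requires above
pinned = ["transformers", "torch", "torchaudio", "pydub", "pyannote.audio",
          "speechbrain", "accelerate", "faster-whisper", "openai-whisper", "assemblyai"]

for dist in pinned:
    try:
        print(f"{dist}=={version(dist)}")  # compare manually against the ranges above
    except PackageNotFoundError:
        print(f"{dist} is not installed")
```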

2 changes: 1 addition & 1 deletion setup_instruction.md
@@ -9,7 +9,7 @@ for publishing:
pip install twine

for install locally for testing:
-pip install dist/speechlib-1.1.9-py3-none-any.whl
+pip install dist/speechlib-1.1.10-py3-none-any.whl

finally run:
twine upload dist/*
