Commit 0aaaaa0

Add a standalone live captions demo
Uses microphone-captured speech to display Moonshine transcriptions as live captions on the console.
1 parent 88230f0 commit 0aaaaa0

3 files changed

moonshine/demo/README.md

# Moonshine Demos

This directory contains various scripts to demonstrate the capabilities of the
Moonshine ASR models.

- [Moonshine Demos](#moonshine-demos)
- [Demo: Standalone file transcription with ONNX](#demo-standalone-file-transcription-with-onnx)
- [Demo: Live captioning from microphone input](#demo-live-captioning-from-microphone-input)
  - [Installation](#installation)
    - [0. Setup environment](#0-setup-environment)
    - [1. Clone the repo and install extra dependencies](#1-clone-the-repo-and-install-extra-dependencies)
  - [Running the demo](#running-the-demo)
  - [Script notes](#script-notes)
    - [Speech truncation and hallucination](#speech-truncation-and-hallucination)
    - [Running on a slower processor](#running-on-a-slower-processor)
    - [Metrics](#metrics)
- [Citation](#citation)

# Demo: Standalone file transcription with ONNX

The script [`onnx_standalone.py`](/moonshine/demo/onnx_standalone.py)
demonstrates how to run a Moonshine model with the `onnxruntime` package
alone, without depending on `torch` or `tensorflow`. This enables running on
SBCs such as the Raspberry Pi. Follow the instructions below to set up and
run.

1. Install the `onnxruntime` (or `onnxruntime-gpu` if you want to run on GPUs) and `tokenizers` packages using your Python package manager of choice, such as `pip`.

2. Download the `onnx` files from the Hugging Face Hub to a directory.

   ```shell
   mkdir moonshine_base_onnx
   cd moonshine_base_onnx
   wget https://huggingface.co/UsefulSensors/moonshine/resolve/main/onnx/base/preprocess.onnx
   wget https://huggingface.co/UsefulSensors/moonshine/resolve/main/onnx/base/encode.onnx
   wget https://huggingface.co/UsefulSensors/moonshine/resolve/main/onnx/base/uncached_decode.onnx
   wget https://huggingface.co/UsefulSensors/moonshine/resolve/main/onnx/base/cached_decode.onnx
   cd ..
   ```

3. Run `onnx_standalone.py` to transcribe a wav file:

   ```shell
   moonshine/moonshine/demo/onnx_standalone.py --models_dir moonshine_base_onnx --wav_file moonshine/moonshine/assets/beckett.wav
   ['Ever tried ever failed, no matter try again fail again fail better.']
   ```
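
The four files correspond to stages of Moonshine's inference pipeline. As a
rough, illustrative sketch of how a script can wire them together with
`onnxruntime` (the real tensor names and decode loop live in
`onnx_standalone.py`):

```python
# Rough sketch of the four-stage ONNX pipeline (illustrative only).
import onnxruntime

preprocess = onnxruntime.InferenceSession("moonshine_base_onnx/preprocess.onnx")
encode = onnxruntime.InferenceSession("moonshine_base_onnx/encode.onnx")
uncached_decode = onnxruntime.InferenceSession("moonshine_base_onnx/uncached_decode.onnx")
cached_decode = onnxruntime.InferenceSession("moonshine_base_onnx/cached_decode.onnx")

# A transcription then flows: audio -> preprocess -> encode, followed by one
# uncached_decode step and repeated cached_decode steps that reuse attention
# caches until an end-of-text token is produced, with the `tokenizers`
# package decoding the token ids into text.
```
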
# Demo: Live captioning from microphone input

https://github.com/user-attachments/assets/aa65ef54-d4ac-4d31-864f-222b0e6ccbd3

This folder contains a demo of live captioning from microphone input, built on
Moonshine. The script runs the Moonshine ONNX model on segments of speech
detected in the microphone signal by the
[`silero-vad`](https://github.com/snakers4/silero-vad) voice activity
detector, and prints scrolling text, or "live captions", assembled from the
model predictions to the console.
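
To give a feel for the pipeline before reading the script, here is a minimal
sketch of the same idea. It is an illustration, not the demo's actual code,
and the `transcribe()` stub stands in for a wrapper around the Moonshine ONNX
model:

```python
# Simplified sketch: segment microphone audio with silero-vad and hand each
# detected speech segment to a transcription function.
import numpy as np
import sounddevice as sd
import torch
from silero_vad import VADIterator, load_silero_vad

SAMPLE_RATE = 16000
CHUNK_SIZE = 512  # silero-vad expects 512-sample chunks at 16 kHz

def transcribe(audio: np.ndarray) -> str:
    # Placeholder: run the Moonshine ONNX model here. As a stub, report the
    # duration of the detected speech segment instead.
    return f"[speech segment: {len(audio) / SAMPLE_RATE:.1f}s]"

vad = VADIterator(load_silero_vad(), sampling_rate=SAMPLE_RATE)
speech, in_speech = [], False

with sd.InputStream(samplerate=SAMPLE_RATE, channels=1, dtype="float32") as stream:
    while True:
        chunk, _ = stream.read(CHUNK_SIZE)
        chunk = chunk.flatten()
        event = vad(torch.from_numpy(chunk))  # {'start': n}, {'end': n}, or None
        if event and "start" in event:
            speech, in_speech = [], True
        if in_speech:
            speech.append(chunk)
        if event and "end" in event:
            in_speech = False
            print(transcribe(np.concatenate(speech)), flush=True)
```
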
The following steps have been tested in a `uv` (v0.4.25) virtual environment
on these platforms:

- macOS 14.1 on a MacBook Pro M3
- Ubuntu 22.04 VM on a MacBook Pro M2
- Ubuntu 24.04 VM on a MacBook Pro M2

## Installation

### 0. Setup environment

Steps to set up a virtual environment are available in the
[top level README](/README.md) of this repo. Note that this demo is
standalone: there is no need to install the `useful-moonshine` package.
Instead, you will clone the repo.

### 1. Clone the repo and install extra dependencies

You will need to clone the repo first:

```shell
git clone git@github.com:usefulsensors/moonshine.git
```

Then install the demo's requirements:

```shell
uv pip install -r moonshine/moonshine/demo/requirements.txt
```

There is a dependency on `torch` because of the `silero-vad` package; there is
no dependency on `tensorflow`.

#### Ubuntu: Install PortAudio

On Ubuntu, the `sounddevice` package needs PortAudio to run. The latest
version (19.6.0-1.2build3 as of this writing) is suitable.

```shell
sudo apt update
sudo apt upgrade -y
sudo apt install -y portaudio19-dev
```

## Running the demo

First, check that your microphone is connected and that the volume is not
muted in your host OS or system audio drivers. Then, run the script:

```shell
python3 moonshine/moonshine/demo/live_captions.py
```
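
If the script appears to hear nothing, it can help to confirm which input
device Python sees. This illustrative snippet, not part of the demo, uses
`sounddevice`'s device query:

```python
# List audio devices and show the default input device (illustrative).
import sounddevice as sd

print(sd.query_devices())              # all devices, with default markers
print(sd.query_devices(kind="input"))  # the default input device only
```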

By default, this will run the demo with the Moonshine Base model using the
ONNX runtime. The optional `--model_name` argument sets the model to use:
supported values are `moonshine/base` and `moonshine/tiny`.

When running, speak English into the microphone and watch the live captions
in the terminal. Quit the demo with `Ctrl+C` to see a full printout of the
captions.

An example run on an Ubuntu 24.04 VM on a MacBook Pro M2 with the Moonshine
base ONNX model:

```console
(env_moonshine_demo) parallels@ubuntu-linux-2404:~$ python3 moonshine/moonshine/demo/live_captions.py
Error in cpuinfo: prctl(PR_SVE_GET_VL) failed
Loading Moonshine model 'moonshine/base' (ONNX runtime) ...
Press Ctrl+C to quit live captions.

hine base model being used to generate live captions while someone is speaking. ^C

model_name : moonshine/base
MIN_REFRESH_SECS : 0.2s

number inferences : 25
mean inference time : 0.14s
model realtime factor : 27.82x

Cached captions.
This is an example of the Moonshine base model being used to generate live captions while someone is speaking.
(env_moonshine_demo) parallels@ubuntu-linux-2404:~$
```

For comparison, this is the `faster-whisper` base model running on the same
instance. The value of `MIN_REFRESH_SECS` was increased because the model
inference is too slow for a value of 0.2 seconds. Our Moonshine base model
runs ~7x faster for this example.

```console
(env_moonshine_faster_whisper) parallels@ubuntu-linux-2404:~$ python3 moonshine/moonshine/demo/live_captions.py
Error in cpuinfo: prctl(PR_SVE_GET_VL) failed
Loading Faster-Whisper float32 base.en model ...
Press Ctrl+C to quit live captions.

r float32 base model being used to generate captions while someone is speaking. ^C

model_name : base.en
MIN_REFRESH_SECS : 1.2s

number inferences : 6
mean inference time : 1.02s
model realtime factor : 4.82x

Cached captions.
This is an example of the Faster Whisper float32 base model being used to generate captions while someone is speaking.
(env_moonshine_faster_whisper) parallels@ubuntu-linux-2404:~$
```

## Script notes

You may customize this script to display the Moonshine text transcriptions as
you wish.

The script `live_captions.py` loads the English-language version of the
Moonshine base ONNX model. It includes logic to detect speech activity and to
limit the context window of speech fed to the Moonshine model. The returned
transcriptions are displayed as scrolling captions. Speech segments with
pauses are cached, and these cached captions are printed on exit.
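
For example, a scrolling effect can be produced by rewriting a single console
line in place. The following is a minimal illustration of the idea, not the
demo's exact code; `MAX_LINE_LENGTH` is an assumed name and width:

```python
# Illustrative scrolling-caption printer: show only the tail of the
# transcript that fits on one line, rewriting the line in place.
MAX_LINE_LENGTH = 80  # assumed console width

def print_captions(text: str) -> None:
    tail = text[-MAX_LINE_LENGTH:]
    print("\r" + tail.ljust(MAX_LINE_LENGTH), end="", flush=True)
```

Clipping to the tail of the transcript like this is also why the live caption
line in the example outputs above appears to begin mid-word.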

### Speech truncation and hallucination

You may see some hallucinations while the script is running. One reason is
that speech is truncated out of necessity to generate the frequent refresh
and timeout transcriptions. Truncated speech contains partial or sliced
words, and the model's transcriptions of such audio are unpredictable. For
the best results, read the cached captions printed when the script exits.

### Running on a slower processor

If you run this script on a slower processor, consider using the `tiny` model.

```shell
python3 ./moonshine/moonshine/demo/live_captions.py --model_name moonshine/tiny
```

The value of `MIN_REFRESH_SECS` is ineffective when the model inference time
exceeds it. Conversely, on a faster processor, consider reducing the value of
`MIN_REFRESH_SECS` for more frequent caption updates. On a slower processor,
you might also reduce the value of `MAX_SPEECH_SECS` to avoid the slower model
inference encountered with longer speech segments.
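
If you experiment with these settings, both are constants defined in
`live_captions.py`. The sketch below shows example values only: 0.2 s is the
default `MIN_REFRESH_SECS` mentioned above, while the `MAX_SPEECH_SECS` value
is an assumption for illustration, not the script's default:

```python
# Example values only; edit the constants in live_captions.py to tune.
MIN_REFRESH_SECS = 0.2  # minimum interval between caption refreshes
MAX_SPEECH_SECS = 15    # assumed cap on speech segment length fed to the model
```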

### Metrics

The metrics shown on program exit will vary with the talker's speaking style.
If the talker speaks with more frequent pauses, the speech segments are
shorter and the mean inference time is lower. This is a feature of the
Moonshine model, described in [our paper](https://arxiv.org/abs/2410.15608).
When benchmarking, use the same speech for each run, e.g., a recording of
someone talking.
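
The reported realtime factor is, in essence, the ratio of speech duration to
inference time, so short segments that infer quickly push it higher. A sketch
of how such a measurement can be taken (illustrative; `timed_transcribe` is
not a function from the script):

```python
import time

def timed_transcribe(transcribe, audio, sample_rate=16000):
    """Run `transcribe` on `audio` (a 1-D array of samples) and
    return the text along with the realtime factor."""
    start = time.time()
    text = transcribe(audio)
    inference_secs = time.time() - start
    speech_secs = len(audio) / sample_rate
    return text, speech_secs / inference_secs  # >1.0 is faster than realtime
```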

# Citation

If you benefit from our work, please cite us:

```
@misc{jeffries2024moonshinespeechrecognitionlive,
      title={Moonshine: Speech Recognition for Live Transcription and Voice Commands},
      author={Nat Jeffries and Evan King and Manjunath Kudlur and Guy Nicholson and James Wang and Pete Warden},
      year={2024},
      eprint={2410.15608},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2410.15608},
}
```
