Feature/spech to text #349
base: main
Conversation
Hi @mappweb, any PR should include a comprehensive description of what you worked on, with code examples, use cases, and unit tests. There is also a template for publishing PRs that can guide you in providing all the necessary information.
Furthermore, you should target the 2.x branch, not main.
Hi @ilvalerione. I've added an example.
I don't understand how it integrates with the framework. It seems like a standalone implementation. By the way, can you use appropriate code formatting? As I said, it should also be documented, target the 2.x branch, and eventually be tested.
@ilvalerione An agent that interacts by voice. This would be the first layer: transcription.
Any news about this feature? |
So @mappweb, why don't you keep that logic in your own agent?
// Example: a dedicated agent that owns the transcription configuration.
class SpeechToTextAgent extends Transcribe
{
    public function transcribe(): TranscribeProviderInterface
    {
        return new OpenAITranscribeProvider(
            config('neuron-ai.providers.openai.api_key'),
            'whisper-1',
            'es',
        );
    }
}

// Usage: transcribe an audio file through the agent.
$fullPath = '.../.../.oga';
$agent = SpeechToTextAgent::make();
$string = $agent->transcribeAudio(new AudioContent($fullPath, SourceType::URL));
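For context, here is a minimal sketch of where the transcription would fit next: the string returned by the speech-to-text agent can be handed to an ordinary chat agent, so the voice input becomes the entry point of a normal conversation. It assumes the usual NeuronAI Agent/provider/chat API; VoiceAssistantAgent, the model name, and the config key are hypothetical and only illustrate the idea.

// A minimal sketch, not part of this PR: the transcription output feeding a
// regular chat agent, assuming the usual NeuronAI Agent/provider/chat API.
// VoiceAssistantAgent, the model name, and the config key are hypothetical.
use NeuronAI\Agent;
use NeuronAI\Chat\Messages\UserMessage;
use NeuronAI\Providers\AIProviderInterface;
use NeuronAI\Providers\OpenAI\OpenAI;

class VoiceAssistantAgent extends Agent
{
    public function provider(): AIProviderInterface
    {
        // (API key, model): mirrors the provider setup in the example above.
        return new OpenAI(
            config('neuron-ai.providers.openai.api_key'),
            'gpt-4o-mini',
        );
    }

    public function instructions(): string
    {
        return 'You are a voice assistant. Answer the transcribed user request concisely.';
    }
}

// $string holds the transcription returned by SpeechToTextAgent above,
// so the voice input drives an ordinary text conversation.
$reply = VoiceAssistantAgent::make()->chat(new UserMessage($string));
echo $reply->getContent();

Keeping the transcription inside its own agent, as suggested above, means downstream chat agents need no changes to accept voice input.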