OpenAI Whisper tutorial: How to use OpenAI Whisper
Introducing Whisper: OpenAI's Groundbreaking Speech Recognition System
Whisper is OpenAI's state-of-the-art speech recognition system, trained on 680,000 hours of multilingual and multitask data collected from the web. This large and diverse dataset makes the model robust to accents, background noise, and technical language. Whisper can transcribe speech in multiple languages and translate it into English. OpenAI has released the models and code, so developers can build useful applications on top of its speech recognition capabilities.
How to use Whisper
The Whisper model is available on GitHub. You can install it with the following command directly from a Jupyter notebook:
!pip install git+https://github.com/openai/whisper.git
Whisper needs ffmpeg installed on the current machine to work. You may already have it, but it's likely that your local machine needs this program to be installed first.
OpenAI lists multiple ways to install this package, but we will be using the Scoop package manager (there is also a tutorial on how to install it manually). On Windows, you can install it from a PowerShell terminal with the following commands:
irm get.scoop.sh | iex
scoop install ffmpeg
After the installation, a restart is required if you are using your local machine.
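Before continuing, it can help to verify that ffmpeg is actually on your PATH. The following check is a small sketch (not part of the original notebook) using only Python's standard library:

import shutil

# Prints the path to the ffmpeg binary, or None if it is not found on the PATH.
print("ffmpeg found at:", shutil.which("ffmpeg"))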
Now we can continue. Next we import all necessary libraries:
import whisper
import os
import numpy as np
import torch
Using a GPU is the preferred way to run Whisper. If you are using a local machine, you can check whether a GPU is available. The first line below returns False if a CUDA-compatible NVIDIA GPU is not available and True if it is. The second line sets the device to the GPU whenever one is available.
torch.cuda.is_available()
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
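To double-check which device was selected, you can simply print it:

print(f"Using device: {DEVICE}")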
Now we can load the Whisper model. The model is loaded with the following command:
model = whisper.load_model("base", device=DEVICE)
print(
f"Model is {'multilingual' if model.is_multilingual else 'English-only'} "
f"and has {sum(np.prod(p.shape) for p in model.parameters()):,} parameters."
)
Please keep in mind that there are multiple models available. You can find all of them here. Each of them trades off accuracy against speed (compute needed). We will use the 'base' model for this tutorial.
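If you need higher accuracy and have the compute to spare, you can swap in a larger checkpoint. The snippet below is a sketch using the model sizes shipped with the Whisper repository ("tiny", "base", "small", "medium", "large"):

# Larger models are more accurate but slower and need more memory.
# Replace "base" with e.g. "small" if you prefer accuracy over speed.
model = whisper.load_model("small", device=DEVICE)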
Next, load the audio file you want to transcribe.
audio = whisper.load_audio("audio.mp3")
audio = whisper.pad_or_trim(audio)
mel = whisper.log_mel_spectrogram(audio).to(model.device)
The detect_language function detects the language of your audio file:
_, probs = model.detect_language(mel)
print(f"Detected language: {max(probs, key=probs.get)}")
We transcribe the first 30 seconds of the audio (pad_or_trim cut it to that length) using DecodingOptions and the decode command, then print out the result:
options = whisper.DecodingOptions(language="en", without_timestamps=True, fp16=False)
result = whisper.decode(model, mel, options)
print(result.text)
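Whisper can also translate speech into English instead of transcribing it in the source language. As a sketch, you can set the task option of DecodingOptions to "translate" (it defaults to "transcribe"):

# Translate the same 30-second segment into English.
translate_options = whisper.DecodingOptions(task="translate", without_timestamps=True, fp16=False)
translation = whisper.decode(model, mel, translate_options)
print(translation.text)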
Next, we can transcribe the whole audio file with the transcribe method.
result = model.transcribe("../input/audiofile/audio.mp3")
print(result["text"])
This will print out the entire transcribed audio file once the execution has finished.
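The result returned by transcribe also contains per-segment timestamps under the "segments" key. Here is a short sketch for printing them:

# Print each segment with its start and end time in seconds.
for segment in result["segments"]:
    print(f"[{segment['start']:.1f}s - {segment['end']:.1f}s] {segment['text']}")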
You can find the full code as a Jupyter Notebook here.
But how can I leverage this knowledge?
Now it's up to you to create your own Whisper application. Get creative and have fun!
I'm sure you will find a lot of useful applications for Whisper. The best way is to identify a problem around you and craft a solution to it. Maybe during one of our AI Hackathons?
Thank you for reading. If you enjoyed this tutorial, you can find more and continue reading on our dedicated AI tutorial page. - Fabian Stehle, Junior Data Scientist at New Native