A few months ago, I wrote an article about podcast transcription with Whisper (from OpenAI): "Transcribe your podcast for free using only a laptop and Whisper". Installing Whisper is pretty easy, but getting CUDA to work with your GPU drivers can be tricky… Whisper was originally written in Python (which is used for many A.I. projects). Thanks to Georgi Gerganov, things got a lot easier! His port, Whisper.cpp, is supported on almost all platforms (macOS, iOS, Android, Linux, FreeBSD, WebAssembly, Windows, Raspberry Pi). Note that it currently does not run on GPU (CPU only), but it is optimized for Apple silicon.

I ran some tests to see how fast Whisper.cpp runs compared to the Python version on CPU (Intel Core i7-13700KF, 3.4 GHz) and the Python version on GPU (dual 12 GB GeForce RTX 3060). A few notes on the methodology:

- "10×" means that it takes 1 minute to transcribe a 10-minute audio file.
- All files were converted to 16 kHz WAV format before transcription.
- The time spent loading the models was not excluded.
- The Python version uses the "BeamSearch" decoder (its default parameter), while Whisper.cpp uses the "Greedy" decoder.

The results differ depending on the model used:

- Whisper.cpp is always much faster than Whisper on CPU: over 6 times faster for the tiny model, up to over 7 times faster for the large one.
- Whisper.cpp is faster than Whisper on GPU for the tiny (1.8× faster) and base (1.3× faster) models.
- It runs slightly slower than Whisper on GPU for the small, medium and large models (1.3× slower).
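To make the "N×" notation above concrete, here is a minimal sketch of how such a realtime speed factor can be computed from an audio duration and a measured transcription time. The function name `realtime_factor` and the example durations are my own illustration, not from the benchmark itself.

```python
def realtime_factor(audio_seconds: float, transcribe_seconds: float) -> float:
    """Return how many seconds of audio are transcribed per second of wall time.

    A result of 10.0 is reported as "10x": a 10-minute file
    transcribed in 1 minute.
    """
    if transcribe_seconds <= 0:
        raise ValueError("transcription time must be positive")
    return audio_seconds / transcribe_seconds


# Example: a 10-minute (600 s) file transcribed in 1 minute (60 s) -> 10.0, i.e. "10x"
print(f"{realtime_factor(600, 60):.1f}x")
```

A factor below 1.0 would mean transcription is slower than realtime, which matters if you plan to process long podcast archives on CPU-only hardware.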