Even being in Tauri, this application takes around 120MB on my M3 Max just by doing these things:
- it sets an icon in the menu bar
- it displays a window where I can choose which model to use

That's it. 120MB for doing nothing. It's truly astonishing how modern desktop apps do essentially nothing and yet consume so many resources.
I feel the same astonishment! Our computers today are surely faster, stronger, and smaller than yesterday's, but did this really translate into something tangible for the user? I feel that besides boot-up (thanks to SSDs rather than gigahertz), nothing is any faster. It's like all this extra power is used to the maximum, for good and bad reasons, but not focused on making 'it' faster. I get a bit puzzled as to why my Mac can freeze for half a second when I cmd+A in a folder with 1000+ files.

Why doesn't Excel appear instantly, and why is it 2.29GB now when Excel 98 for Mac was... 154.31MB? Why is a LAN transfer between two computers still as slow as in 1999, around 10MB/s, when both can simultaneously download at over 100MB/s? And I'm not even starting on GB-memory-hoarding browser tabs; when you think about it, that part is managed well as a whole, holding 700+ tabs without complaining.

And what about logs? This is a new branch of philosophy: open Console and witness the era of hyperreal siloxal, where computational potential expands asymptotically while user experience flatlines into philosophical absurdity.
It takes me longer to install a large Mac program from the .dmg than it takes to download it in the first place. My internet connection is fairly slow and my disk is an SSD. The only hypothesis that makes sense to me is that macOS is still riddled with O(n) or even O(n^2) algorithms that have never been improved, and this incompetence has been made less visible by ever-faster hardware.
A piece of evidence supporting this hypothesis: rsync (a program written by people who know their craft) on macOS does essentially the same job as Time Machine, but the former is orders of magnitude faster than the latter.
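(For anyone who wants to try this: the Time-Machine-like behavior in rsync comes from --link-dest, which hard-links unchanged files into the previous snapshot, so every backup looks complete but only changed files cost space. A minimal sketch, with illustrative paths:)

    # Snapshot-style backup: unchanged files become hard links into the
    # previous snapshot, so each run is cheap but looks like a full copy.
    SRC="$HOME/Documents/"
    DST="/Volumes/Backup"
    NEW="$DST/$(date +%Y-%m-%d-%H%M%S)"
    rsync -a --link-dest="$DST/latest" "$SRC" "$NEW"
    ln -sfn "$NEW" "$DST/latest"   # repoint "latest" at the new snapshot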
You can make this app yourself in an hour if you're on Linux and can do some scripting. Mockup below for illustration, but this is the beating heart of a real script:

    # whisper-live.sh: run once and it listens (blocking), run again and it stops listening.
    if ! test -f whisper.quit ; then
        touch whisper.quit
        notify-send -a whisper "listening"
        m="/usr/share/whisper.cpp-model-tiny.en-q5_1/ggml-tiny.en-q5_1.bin"
        txt="$(ffmpeg -hide_banner -loglevel -8 -f pulse -i default -f wav pipe:1 < whisper.quit \
            | whisper-cli -np -m "$m" -f - -otxt -sns 2>/dev/null \
            | tr \\n " " | sed -e 's/^\s*//' -e 's/\s\s*$//')"
        rm -f whisper.quit
        notify-send -a whisper "done listening"
        printf %s "$txt" | wtype -
    else
        printf %s q > whisper.quit
    fi
You can trivially modify it to use wl-copy to copy to the clipboard instead, if you prefer that over immediately sending the text to the current window. I set up sway to run a script like this on $mod+Shift+w so it can be done one-handed -- not push-to-listen, but the script itself toggles listen state on each invocation, so push once to start, again to stop.
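(For the clipboard variant, assuming wl-clipboard is installed, the only change is the line that emits the text:)

    # copy to the Wayland clipboard instead of typing into the focused window
    printf %s "$txt" | wl-copy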
It's a matter of trade-offs.

In theory, Handy could be developed by hand-rolling assembly. Maybe even binary machine code.
- It would probably be much faster, smaller and use less memory. But...
- It would probably not be cross-platform (Handy works on Linux, macOS, and Windows)
- It would probably take years or decades to develop (Handy was developed by a single dev in single digit months for the initial version)
- It would probably be more difficult to maintain. Instead of re-using general purpose libraries and frameworks, it would all be custom code with the single purpose of supporting Handy.
- Also, Handy uses an LLM for transcription. LLMs are known to require a lot of RAM to perform well, so most of the RAM is probably being used by the transcription model. An LLM is basically a large auto-complete, so you need a lot of RAM to store all the mappings from inputs to outputs. So the hand-rolled assembly version could still use a lot of RAM...
The tech industry has such inefficiencies nearly everywhere. There's no good explanation for why an AI model that knows so much can be smaller than a typical OS installation.

I was once able to optimize a solution to produce an over 500x improvement. I can't write about how this came about, but it was much easier than initially expected.
See also: Wirth's Law: https://en.wikipedia.org/wiki/Wirth%27s_law
A lot of the bloat comes from dependencies like ONNX or whisper.cpp used to accelerate running the model itself. While the UI is doing "nothing", most of the bloat is not from the UI.
But does it start ONNX and whisper.cpp on a fresh install/start? I did nothing. I literally just installed the app and started it without selecting a model.
Oh interesting. I totally misread the original comment, I didn't realize you're talking about RAM usage. 120MB is quite a lot. This surprises me too. There's nothing fancy going on really until the model is chosen.
Yeah exactly. Tbh Tauri is touted as more lightweight than Electron but I have never seen a Tauri application that lived up to this claim.
Shameless plug: a brutally minimalist, Linux-only, whisper.cpp-only app: https://github.com/daaku/whispy
I wanted speech-to-text in arbitrary applications on my Linux laptop, and I realized that loading the model was one of the slowest parts. So a daemon process, which toggles recording on/off using SIGUSR2, records using `pw-record`, passes the data to an already-loaded whisper model, and finally types the text using `ydotool`, turned out to be a relatively simple application to build. ~200 lines in Go, or ~150 in Rust (check history for the Rust version).
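(The nice part of the signal-based design is that the "toggle" is just a signal send, so any key binding works. A hypothetical sway binding, assuming the daemon runs as a single process named whispy:)

    # ~/.config/sway/config: one key toggles recording on/off
    bindsym $mod+Shift+w exec pkill -USR2 -x whispy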
I'm very curious about the rewrite. Was Rust slowing you down too much?
Just for fun. I like both languages. I thought Rust would be a better fit on account of interop with whisper.cpp, but it turns out the use of cgo was straightforward in this case. I like that the Go version has minimal third-party dependencies compared to the Rust version.
Why Linux only? Aren't Go and whisper.cpp cross-platform?
It relies on `pw-record` for recording audio and `ydotool` for triggering keyboard input. These are Linux specific. I don't know about Windows, but on my Mac I have a not-yet-public Swift + whisper + CoreAudio + Accessibility based solution that provides similar functionality.
That was my guess. Cross-platform audio input isn't exactly as trivial as using PipeWire.
Why does the title specify the language used when it's not even mentioned on the home page?
If it's Rust or Go, it means I won't have to fuss with a runtime like Python or JS, nor with a C++ build system.

You don't have that problem with an Electron app either. The runtime is bundled with the binary.
Honestly the only thing I avoid these days is python. If something is written in python I generally give it a miss, especially if it has a GUI.
I just copied the title verbatim from the original Show HN: https://hw.leftium.com/#/item/44302416
In case you also have a problem with not using the original HN link: https://news.ycombinator.com/item?id=44302416
(I think the first link is easier to read (CSS/formatting/dark mode), slightly more compact, and contains a link to the original HN post. It's also simple to recreate the HN link manually by inspecting the ID.)
Marketing. Honestly, it might not be good here since it is not a library and not completely written in Rust.
Marketing for what exactly?
I mean... why would I want this app instead of some other app? Just because it's written in the language of the week? If it said "20% faster than xyz" it would be much better marketing than saying it's written in Rust, even though more than half the code is TypeScript.
I think there are tangible benefits to this being "not Java or JavaScript", or any language that brings a resource-intensive runtime with it.
More than half is TypeScript to be fair.
The title also mentions that it’s open source, so it could be marketing for potential contributors.
It's primarily this. I'm a novice Rust developer and really would like to improve the code quality across the board, and some of this comes to attracting the right kind of developers to help. Maybe "Rust" in the title helps, maybe it doesn't. Clearly HN doesn't like it and that's okay.
I stated my need for help on the about page as well:
> This is my first Rust project, and it shows. There are bugs, rough edges, and architectural decisions that could be better. I’m documenting the known issues openly because I want everyone to understand what they are getting into, and encourage improvement in the project.
> Maybe "Rust" in the title helps, maybe it doesn't. Clearly HN doesn't like it and that's okay
HN definitely likes it when it is used in the correct context. Using Rust in the title is a soft promise of better-than-average reliability and quality for the software. But it starts to get controversial when Rust is no longer purely the controlling part of the software. So people start to complain, because it can be misleading marketing based on the promise that Rust offers.
Fair enough; most of the critical code in this case is written in Rust. A Rust transcription library, `transcribe-rs`, popped out of the project. And there is a real-time audio library I'd like to put out which allows for filters. I could have called out to ffmpeg or similar, but I chose to implement an audio pipeline myself (for better or worse).
So it makes sense, but there are benefits to writing a desktop application backend in Rust for the ecosystem as well.
I'm not sure if it's purely down to "hype".
For me, I do tend to prefer apps written in Rust/Go (/C/etc.-compiled) as they are usually less problematic to install (quite often a single binary; less headache compared to Python stuff, for example) and most of the time less resource-hungry (versus anything JS/Electron-based)... in the end, a "convenient shortcut to convey aforementioned benefits" :)
It's targeting a very specific group of devs who like to follow trendy stuff.

To that group, saying something is "made in Rust" is equivalent to saying "it's modern, fast, secure, and made by an expert programmer, not some pleb who can't keep up with the times".
> and made by an expert programmer
Quite the opposite. You have to be more of an expert programmer to achieve those same goals in C. Rust lowers the skill bar.
Anyways, I agree that the editorialization here is silly.
But also, I am unashamed that "in Rust" does increase my interest in a piece of software, for several of the reasons you mentioned.
Read the creator's description in the original Show HN: https://hw.leftium.com/#/item/44302416
I love it.
How do you clear the history of recordings?
Next version will have it! The main branch already has it; I've just not released the next version yet.
I don't think it's possible (yet), but only the last five recordings are stored.
Nicely done! Seeing that it uses a port of Whisper, here's my shameless plug for a GNOME extension I made using Whisper:
https://extensions.gnome.org/extension/8238/gnome-speech2tex...
Awesome. I was looking to build this on my own. Will look at the code and consider contributing. Cheers.
Hey author of Handy here! Would absolutely love any help, please let me know if there's any way I can make contributing easier!
Hi mate, is there a way to make this persistent, so I can give a long dictation instead of holding down the space bar all the time?
Yes! Turn off “push to talk”, it will activate when you click the shortcut and stop when you click it again
Cool, you just might've saved me some carpal tunnel in the long run xD.
I guess there's no way for the AppImage to use GPU compute, right? Not that it matters much, because Parakeet is fast enough on CPU anyway.
I think the Whisper models will all use GPU. Only the Parakeet model is limited to CPU.
(I'm unfamiliar with AppImage. Was the model included in the app image, or was there a download after selecting the model?)
Not sure if this might help, but when you launch the .AppImage in a terminal, it shows you the command to extract the files it contains (to speed up loading); this might help you find the files you're searching for, maybe :)
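(If it uses the standard AppImage runtime, the option in question is --appimage-extract; the binary name here is illustrative:)

    ./Handy.AppImage --appimage-extract   # unpacks into ./squashfs-root
    ls squashfs-root/                     # inspect the bundled files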
Whisper uses Vulkan and Metal acceleration with whisper.cpp
Parakeet is currently CPU only
Built something similar for terminal lovers. It's a CLI tool built in Python called hns [1] that uses faster-whisper for completely local speech-to-text. It automatically copies the transcription to the clipboard as well as writing it to stdout, so you can seamlessly paste the transcription into any other application or pipe/redirect it to other programs/files.
[1]: https://github.com/primaprashant/hns
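(Usage sketch based purely on the stdout behavior described above, e.g. dictating straight into a notes file while the clipboard copy happens automatically:)

    hns >> ~/notes/inbox.md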
How good will this local model be compared to, say, your iPhone's built-in STT?
It’s way better. iPhone’s is awful. On macOS, interestingly, the built in dictation seems a bit better than on iOS, but still not as good as Whisper and Parakeet. Worth noting I have never used Whisper Small, only large and turbo. Another comment says Parakeet is the default now, though, despite what the site says.
Author here!
The default recommendation is Parakeet (mainly because it runs fast on a lot more hardware), but definitely think people should experiment with different models and see what is best for them. Personally I found Whisper Medium to be far better than Turbo and Large for my speech, and Parakeet is about on par with Medium, but each have their own quirks.
I'll update the site soon!
That's really interesting about medium being better than large. I never bothered trying the smaller models since the big ones were fast enough.
Benchmarks definitely say otherwise, but my anecdotal experience says medium is the best for this application with my voice and microphone
This is local, but I've found that external inference is fast enough, as long as you're okay with the possible lack of privacy. My PC isn't beefy enough to really run whisper locally without impacting my workflow, so I use Groq via a shell script. It records until I tell it to stop, then it either copies it to the clipboard or writes it into the last position the cursor was in.
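(For anyone wanting to replicate that setup: Groq exposes an OpenAI-compatible transcription endpoint, so the core of such a script is roughly the sketch below. The recorder choice and model name are assumptions; check the current docs.)

    # record until Enter, then transcribe remotely and copy the text
    arecord -f cd /tmp/dictation.wav & pid=$!   # or pw-record / ffmpeg
    read -r _                                   # press Enter to stop
    kill "$pid"
    curl -s https://api.groq.com/openai/v1/audio/transcriptions \
        -H "Authorization: Bearer $GROQ_API_KEY" \
        -F model=whisper-large-v3 \
        -F file=@/tmp/dictation.wav | jq -r .text | wl-copy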
What computer are you using? You really should give Parakeet a try, I find it runs in a few hundred milliseconds even on a Skylake i5 from 10 years ago.
Just a heads up: there are many more accurate and faster models than Whisper nowadays. https://huggingface.co/spaces/hf-audio/open_asr_leaderboard
It also uses one of the fastest and most accurate on the ASR leaderboard, Parakeet.
Very cool. Uses Whisper small under the hood.
https://github.com/openai/whisper
Nvidia Parakeet v3 was the default out of the box and it works surprisingly well.

It offers all the different sizes of the OpenAI models too.
Amazing! I have been desperately wanting this. Livecaptions doesn't seem to be maintained super well.
+1, happy user and a humble contributor.
You're awesome Vlad!
Nice! There's also the VoiceInk open-source project https://github.com/Beingpax/VoiceInk/
macOS only.
Anyone know of the opposite? A really easy-to-use text-to-speech program that is cross-platform?
I've tried a lot of them, and the best I've found so far is the Edge browser's built-in Microsoft (natural) voices, which I call via JavaScript or the browser's read-aloud function.
Check out https://github.com/rany2/edge-tts , which exposes it as a Python library and a CLI tool.
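(Basic usage is along these lines; flags quoted from the project's README from memory, so double-check:)

    pip install edge-tts
    edge-tts --list-voices                # pick a voice
    edge-tts --voice en-US-AriaNeural \
        --text "Hello, world" --write-media hello.mp3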
I’ve been enjoying Kokoro
Amazing what it can do with only 82M parameters
https://www.kokorotts.io/
Curious about your use case. I now have quite a lot of experience with releasing desktop apps, and I have done some accessibility work as well, and may be interested in putting together a TTS toolkit as a desktop app too (or even into Handy).
Piper's amy voice is pleasant enough to me for reading articles, and it's instantaneous and trivial to use: just download the binary and the model file.
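(Minimal sketch; the model filename assumes the amy voice with its .json config downloaded next to it:)

    # text on stdin, wav out
    echo 'Piper makes this pretty painless.' | \
        piper --model en_US-amy-medium.onnx --output_file article.wav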
Wow, this is much faster and higher quality than the meloTTS program I was using before, and has many more voices available... although it doesn't appear to support Japanese.
Thank you!
I've used Speech Note, which works well for STT and TTS.
Been having fun with this one
https://addons.mozilla.org/en-CA/firefox/addon/read-aloud/
Read Aloud allows you to select from a variety of text-to-speech voices, including those provided natively by the browser, as well as by text-to-speech cloud service providers such as Google Wavenet, Amazon Polly, IBM Watson, and Microsoft. Some of the cloud-based voices may require additional in-app purchase to enable.
...
the shortcut keys ALT-P, ALT-O, ALT-Comma, and ALT-Period can be used to Play/Pause, Stop, Rewind, and Forward, respectively.
TypeScript 53.9% Rust 44.9%
FYI
The README is very clear about it:
Frontend: React + TypeScript with Tailwind CSS for the settings UI
Backend: Rust for system integration, audio processing, and ML inference
Lmao. At least it's typescript and not JavaScript!
Who’s gonna tell him?
Yeah. Rust compiles to machine code.
I thought it was a clever joke
Don’t you dare!
How handy is this for coding? ;)
That's great; nice to see more and more machine learning projects being written in Rust.
It’s not really a machine learning project. It’s an application that calls existing models.
Repo says:
CPU-optimized speech recognition with Parakeet models
I understand that it uses ML models. My point is that it is an end-user application making use of such models. It is recording audio, passing it to the model, and pasting in the resulting text to the focused input. The fact that the middle step happens to involve an ML model is not really intrinsic to anything the app does. If there was a good speech to text program that did not use ML, the app could use that instead and not really be any different.
To be fair, on the other side, there is a real lack of ML inference libraries in Rust, and this project is pushing some of that forward, with Parakeet at the very least. The Rust library `transcribe-rs` came from it and will hopefully support more models in the future.
While it's certainly not an ML project in the sense that I am not training models, the inference stack is just as important. The fact is the application does do inference using ONNX and whisper.cpp.
More than half the code is TypeScript.
It's TypeScript because it is a Tauri app, which uses the system webview to render the UI.

Most of the audio and inference code is Rust or bindings to libraries like whisper.cpp.
Is it able to isolate the speaker from background noises / voices?
Right now there is fairly minimal processing done to the audio. There is a VAD filter to reduce the non-speech areas, but there is no noise reduction as such. The audio pipeline could support it, though, so if you know any good real-time noise reduction filters, let me know. Would love to improve the SNR going into the models.
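(One candidate that comes up a lot for this is RNNoise, which runs in real time on a CPU; ffmpeg exposes it as the arnndn filter, so a quick offline A/B test looks like the sketch below. The .rnnn model file is an assumption, grab one from a rnnoise-models repo; ffmpeg's built-in afftdn filter is another easy baseline.)

    # quick A/B test of RNNoise denoising on a recorded sample
    ffmpeg -i noisy.wav -af arnndn=m=model.rnnn denoised.wav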
This is a great landing page. I downloaded it.

Great onboarding too; using it now.
Very handy, thanks!
Landing page is indeed very refreshing
thank you!!!
How's it differ from macOS dictation?
I find state of the art speech to text models like Whisper and Nvidia Parakeet are a lot better than macOS dictation. I use them through MacWhisper, but this is basically the same.
It downloads the model at first execution and also checks versions on GitHub.

That is OK for what it brings. Nice program. Very "handy".
If you prefer a more stripped down version: the original releases (0.1.0 and 0.1.1) shipped with Whisper tiny included and no auto-update feature
How can I call this library from C++?