Setup
Meeting transcription works out of the box — no calendar connection required. OpenWhispr detects meetings automatically through process and microphone monitoring.
Grant screen recording (macOS)
macOS requires screen recording permission to capture meeting audio. You’ll be prompted during onboarding.
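If you want to verify the permission state yourself, the snippet below is a minimal sketch assuming an Electron-based build; the helper name and the System Settings deep link are illustrative, not OpenWhispr's actual onboarding code.

```typescript
import { shell, systemPreferences } from "electron";

// Hypothetical helper: returns true when meeting audio capture is allowed.
function ensureScreenRecordingPermission(): boolean {
  if (process.platform !== "darwin") return true; // only macOS gates screen capture
  const status = systemPreferences.getMediaAccessStatus("screen");
  if (status === "granted") return true;
  // Screen recording can't be requested programmatically, so open the
  // relevant System Settings pane and let the user grant it there.
  void shell.openExternal(
    "x-apple.systempreferences:com.apple.preference.security?Privacy_ScreenCapture"
  );
  return false;
}
```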
Start a meeting
Meeting detection is on by default. When a meeting starts (Zoom, Teams, FaceTime, Google Meet), a notification asks if you want to record.
How detection works
OpenWhispr uses two primary signals to detect meetings, with an optional third (a sketch of the combined check follows this list):
- Process monitoring — detects Zoom, Teams, FaceTime, and Webex
- Microphone activity — catches browser-based calls like Google Meet
- Calendar awareness (optional) — when connected, shows the event name in the notification and auto-fills meeting details in the note
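A minimal sketch of how the two primary signals could be combined; the names and process list are purely illustrative, not OpenWhispr's actual detection code.

```typescript
// Known desktop meeting apps caught by process monitoring (illustrative list).
const MEETING_PROCESSES = ["zoom.us", "Microsoft Teams", "FaceTime", "Webex"];

interface DetectionSignals {
  runningProcesses: string[]; // from a periodic process-list poll
  micInUse: boolean;          // from OS microphone-activity monitoring
}

function looksLikeMeeting(signals: DetectionSignals): boolean {
  const knownApp = MEETING_PROCESSES.some((app) =>
    signals.runningProcesses.some((proc) => proc.includes(app))
  );
  // Microphone activity alone covers browser-based calls such as Google Meet.
  return knownApp || signals.micInUse;
}
```

When the check turns true, the notification asking whether to record is shown.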
Live transcription
Meeting audio is transcribed in real time via the OpenAI Realtime API (a streaming sketch follows this list). During a meeting:
- Text appears as it’s spoken
- Speaker labels are assigned live and refined after the call
- A dedicated meeting hotkey lets you start/stop independently from dictation
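As a rough sketch of what streaming to the Realtime API looks like: event names follow OpenAI's Realtime documentation, while audio capture, session configuration, and error handling are omitted, and this is not OpenWhispr's actual client code.

```typescript
import WebSocket from "ws";

// Open a Realtime transcription session (session configuration omitted).
const ws = new WebSocket("wss://api.openai.com/v1/realtime?intent=transcription", {
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    "OpenAI-Beta": "realtime=v1",
  },
});

// Append captured meeting audio (base64-encoded PCM16) as it arrives.
function sendChunk(chunk: Buffer): void {
  ws.send(
    JSON.stringify({ type: "input_audio_buffer.append", audio: chunk.toString("base64") })
  );
}

ws.on("message", (raw) => {
  const event = JSON.parse(raw.toString());
  // Completed transcript segments stream back as events; render them as they arrive.
  if (event.type === "conversation.item.input_audio_transcription.completed") {
    console.log(event.transcript);
  }
});
```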
Speaker diarization
OpenWhispr identifies who’s speaking during a meeting (a fingerprint-matching sketch follows this list):
- Live labels — assigned during recording
- Post-processing — clusters are refined into stable speaker groups
- Voice fingerprints — attach voice profiles to contacts so recognized speakers carry their name across meetings
- One-click reassignment — fix any mislabeled speakers after the call
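The fingerprint step can be pictured as comparing a refined cluster's voice embedding against saved profiles; the profile shape and similarity threshold below are assumptions, not OpenWhispr's internals.

```typescript
interface VoiceProfile {
  contactName: string;
  embedding: number[]; // stored voice fingerprint
}

// Cosine similarity between two voice embeddings of equal length.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Label a post-processed speaker cluster with the closest known contact.
function labelSpeaker(clusterEmbedding: number[], profiles: VoiceProfile[]): string {
  let best = { name: "", score: 0 };
  for (const profile of profiles) {
    const score = cosine(clusterEmbedding, profile.embedding);
    if (score > best.score) best = { name: profile.contactName, score };
  }
  // Fall back to a generic label when nothing is similar enough (threshold assumed).
  return best.score >= 0.8 ? best.name : "Unknown speaker";
}
```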
After the meeting
Meeting transcriptions are saved as notes. You can:
- Review and edit in the note editor
- Apply AI actions to clean up or summarize (a sketch of one such action follows this list)
- Search across past meetings using semantic search
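As an example of an AI action, the sketch below summarizes a saved transcript with the OpenAI Chat Completions API; the prompt wording and model choice are illustrative, not the prompts OpenWhispr ships with.

```typescript
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Turn a saved meeting transcript into a short summary with action items.
async function summarizeMeeting(transcript: string): Promise<string> {
  const response = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "system",
        content: "Summarize this meeting transcript into key points and action items.",
      },
      { role: "user", content: transcript },
    ],
  });
  return response.choices[0].message.content ?? "";
}
```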