
Detect a muted microphone

JavaScript's Audio Context Analyser, more formally the Web Audio API's AnalyserNode, can be used to check if a microphone is muted (or, more correctly, if any sound is getting picked up). This is done by analysing the audio input from the microphone in real time.

Basically, this is what we need to do (there's a bare-bones sketch right after the list):

  1. Create an Audio Context object, which represents an audio graph that can be used to process and analyse audio signals.
  2. Create an Analyser Node, which allows you to extract frequency data from an audio signal.
  3. Create a MediaStreamSource Node to access the audio input from a microphone.
  4. Connect the MediaStreamSource Node to the Analyser Node, so that the Analyser Node can analyse the audio input.
  5. Use the Analyser Node to continuously analyse the audio input from the microphone. If the input is zero for some time, it can be interpreted as the microphone being muted.
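
In plain JavaScript, these five steps boil down to just a few lines. Here's a bare-bones sketch (run from an async function, since getUserMedia returns a promise):

const stream = await navigator.mediaDevices.getUserMedia({ audio: true })

const audioContext = new AudioContext() // 1. the audio graph
const analyser = audioContext.createAnalyser() // 2. extracts frequency data
const source = audioContext.createMediaStreamSource(stream) // 3. the mic input
source.connect(analyser) // 4. let the analyser analyse the input

// 5. Check the current frequency amplitudes; all zeros suggests a muted mic
const data = new Uint8Array(analyser.frequencyBinCount)
analyser.getByteFrequencyData(data)
const isSilent = data.every((amplitude) => amplitude === 0)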

Let's have a look at how this can be done in React.

First, we create a hook that will be responsible for analysing the audio input. It will return a boolean isMuted as well as functions to start and stop the audio analysis.

import { useEffect, useRef, useState } from 'react'

const useMicMuteCheck = () => {
  const audioContextRef = useRef()
  const analyserRef = useRef()
  const streamSourceRef = useRef()

  const intervalRef = useRef()
  const timeoutRef = useRef()

  const [isMuted, setIsMuted] = useState(false)

  useEffect(() => {
    // Create an Audio Context Analyser node
    audioContextRef.current = new AudioContext()
    analyserRef.current = audioContextRef.current.createAnalyser()

    // Clean up on unmount
    return () => {
      if (audioContextRef.current) {
        audioContextRef.current.close()
      }
    }
  }, [])

  const startAnalysing = (stream) => {
    // The browser may have created the Audio Context in a suspended
    // state (autoplay policy), so make sure it is running
    if (audioContextRef.current.state === 'suspended') {
      audioContextRef.current.resume()
    }

    // Connect a MediaStreamSource Node to the Analyser Node
    streamSourceRef.current =
      audioContextRef.current.createMediaStreamSource(stream)
    streamSourceRef.current.connect(analyserRef.current)

    // Analyse the audio input every 100 ms
    intervalRef.current = setInterval(() => analyseAudio(), 100)
  }

  const stopAnalysing = () => {
    // Clean up
    clearInterval(intervalRef.current)
    intervalRef.current = null

    clearTimeout(timeoutRef.current)
    timeoutRef.current = null

    streamSourceRef.current?.disconnect()
    streamSourceRef.current = null

    setIsMuted(false)
  }

  const analyseAudio = () => {
    // Get frequency amplitude values
    const data = new Uint8Array(analyserRef.current.frequencyBinCount)
    analyserRef.current.getByteFrequencyData(data)

    // Check for audio input by looking at the frequency amplitudes
    const didRegisterSound = data.some((amplitude) => amplitude > 0)

    if (didRegisterSound) {
      // Clear the timeout if sound was registered
      clearTimeout(timeoutRef.current)
      timeoutRef.current = null

      setIsMuted(false)
    } else {
      if (!timeoutRef.current) {
        // Set isMuted after a second of silence
        timeoutRef.current = setTimeout(() => setIsMuted(true), 1000)
      }
    }
  }

  return {
    startAnalysing,
    stopAnalysing,
    isMuted,
  }
}

So, what's going on here?
We set up an Audio Context Analyser on mount. When a component using the hook calls startAnalysing, a MediaStreamSource Node is created with the provided audio stream. The source node is then connected to the analyser node and we start analysing the audio every 100 ms.

We get the data we need (frequency amplitude values) by passing an array to the analyser's getByteFrequencyData method. The analyser fills the array with one value per frequency bin, so we size the array to frequencyBinCount.
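
As a point of reference, frequencyBinCount is always half of the analyser's fftSize, which defaults to 2048:

const analyser = new AudioContext().createAnalyser()

console.log(analyser.fftSize) // 2048 by default
console.log(analyser.frequencyBinCount) // 1024, always fftSize / 2

const data = new Uint8Array(analyser.frequencyBinCount)
analyser.getByteFrequencyData(data) // fills data with amplitudes (0 to 255)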

If any frequency has an amplitude larger than 0 (meaning some audio input was registered), we clear the timeout that is responsible for setting isMuted. In case there isn't any audio input, we start a timeout (if not already started) and give the analyser one second to register sounds before setting isMuted.
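
Checking for amplitudes above exactly zero is the simplest possible heuristic. If your setup registers a faint noise floor even when the microphone is muted, a variation is to compare the average amplitude against a small threshold instead. A sketch, where SILENCE_THRESHOLD is a made-up value you would tune yourself:

// Inside analyseAudio, after getByteFrequencyData has filled `data`
const SILENCE_THRESHOLD = 2 // made-up value, tune for your use case

const sum = data.reduce((acc, amplitude) => acc + amplitude, 0)
const didRegisterSound = sum / data.length > SILENCE_THRESHOLD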

We also added a function stopAnalysing that can be called to disconnect the stream source, clear the interval and timeout, and reset the muted state.

Next, let's create a React component with a button for listening to the microphone input. By using our new hook we can show a warning when the microphone seems to be muted.

const MicMuteDetector = () => {
  const streamRef = useRef()
  const [isListening, setIsListening] = useState(false)
  const { isMuted, startAnalysing, stopAnalysing } = useMicMuteCheck()

  const handleClick = async () => {
    if (!streamRef.current) {
      // Access the microphone
      try {
        streamRef.current = await navigator.mediaDevices.getUserMedia({
          audio: true,
        })
        startAnalysing(streamRef.current)
        setIsListening(true)
      } catch (error) {
        console.error('Failed to produce media stream:', error)
      }
    } else {
      // Clean up
      stopAnalysing()
      streamRef.current.getTracks().forEach((track) => track.stop())
      streamRef.current = null

      setIsListening(false)
    }
  }

  return (
    <>
      <button onClick={handleClick}>
        {isListening ? 'Stop listening' : 'Start listening'}
      </button>
      {isMuted && <p>It seems like your microphone is muted</p>}
    </>
  )
}

When the button is clicked, we try to access the microphone and pass the audio stream to the analyser. If the microphone is muted, a warning message appears under the button. Clicking the button again stops both the analyser and the stream.

Thanks to the Audio Context Analyser, we can now give a friendly nudge to let the user know their microphone may be on mute.