Adding Voice Input to Your React App

While working on my latest React project, I realized that voice input could meaningfully improve the user experience. It lets users interact with the app by speaking, which is particularly useful for accessibility and convenience. In this post, I’ll walk you through how I implemented voice input in my React app using the react-speech-recognition library.

Setting Up Voice Recognition

To get started, I installed the react-speech-recognition package with npm, along with its TypeScript type definitions:

Plaintext
npm install react-speech-recognition
npm install @types/react-speech-recognition --save-dev

Next, I imported the SpeechRecognition object and the useSpeechRecognition hook from the library:

TypeScript
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';

Implementing Voice Input

Here’s how I implemented voice input in my React app:

  1. Import and Use the Hook:
    I used the useSpeechRecognition hook to read the transcript and listening state, and called SpeechRecognition.startListening and SpeechRecognition.stopListening to control the microphone. By default, listening stops after a pause in speech; I show how to keep it going in a sketch after this list.
TypeScript
   import React from 'react';
   import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';

   const App = () => {
     const { transcript, listening, browserSupportsSpeechRecognition } = useSpeechRecognition();

     if (!browserSupportsSpeechRecognition) {
       return <span>Browser doesn't support speech recognition.</span>;
     }

     return (
       <div>
         {/* Read-only for now; the transcript is owned by the hook */}
         <textarea value={transcript} readOnly />
         <button
           onClick={() =>
             listening ? SpeechRecognition.stopListening() : SpeechRecognition.startListening()
           }
         >
           {listening ? 'Stop' : 'Speak'}
         </button>
       </div>
     );
   };

   export default App;
  2. Editing the Transcribed Text:
    To let users edit the transcribed text, I mirrored the transcript into local state with a useEffect and added an onChange handler to the text area.
TypeScript
   import React, { useState, useEffect } from 'react';
   import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';

   const App = () => {
     const { transcript, listening, browserSupportsSpeechRecognition } = useSpeechRecognition();
     const [editedText, setEditedText] = useState('');

     // Sync local state with the latest transcript. Hooks must run
     // unconditionally, so this effect comes before any early return.
     useEffect(() => {
       setEditedText(transcript);
     }, [transcript]);

     if (!browserSupportsSpeechRecognition) {
       return <span>Browser doesn't support speech recognition.</span>;
     }

     return (
       <div>
         <textarea value={editedText} onChange={(e) => setEditedText(e.target.value)} />
         <button
           onClick={() =>
             listening ? SpeechRecognition.stopListening() : SpeechRecognition.startListening()
           }
         >
           {listening ? 'Stop' : 'Speak'}
         </button>
       </div>
     );
   };

   export default App;
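
Two refinements I found useful once the basics worked: by default, the browser’s speech recognition stops after a pause in speech, but startListening accepts a continuous option that keeps the microphone open until you stop it explicitly (browser support for this varies, so treat it as progressive enhancement), and the hook also returns a resetTranscript function for clearing the current text. Here’s a minimal sketch of both, using a hypothetical Dictation component:

TypeScript
import React from 'react';
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';

const Dictation = () => {
  const { transcript, listening, resetTranscript } = useSpeechRecognition();

  return (
    <div>
      <textarea value={transcript} readOnly />
      {/* continuous: true keeps listening through pauses in speech */}
      <button onClick={() => SpeechRecognition.startListening({ continuous: true })}>
        Start
      </button>
      <button onClick={() => SpeechRecognition.stopListening()} disabled={!listening}>
        Stop
      </button>
      {/* Clears the transcript without touching the microphone */}
      <button onClick={resetTranscript}>Clear</button>
    </div>
  );
};

export default Dictation;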

Handling Common Issues

As I worked with speech recognition, I ran into a common issue with async/await syntax: “ReferenceError: regeneratorRuntime is not defined.” This error shows up when a build tool such as Babel transpiles async/await into generator-based code that depends on regenerator-runtime, but the runtime itself never makes it into the bundle. To fix this, I installed the regenerator-runtime package and imported it at the top of my main JavaScript file.

Plaintext
npm install regenerator-runtime

TypeScript
import 'regenerator-runtime/runtime';

This ensures that the necessary polyfills are included, allowing async/await to work correctly.
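
Another issue worth handling is the user denying microphone access. Recent versions of react-speech-recognition also return an isMicrophoneAvailable flag from the hook (older releases don’t expose it, so check your installed version); here’s a minimal sketch of guarding on it, based on the component from earlier:

TypeScript
import React from 'react';
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition';

const App = () => {
  const { transcript, listening, browserSupportsSpeechRecognition, isMicrophoneAvailable } =
    useSpeechRecognition();

  if (!browserSupportsSpeechRecognition) {
    return <span>Browser doesn't support speech recognition.</span>;
  }

  // isMicrophoneAvailable becomes false if the user blocks microphone access
  if (!isMicrophoneAvailable) {
    return <span>Please allow microphone access to use voice input.</span>;
  }

  return (
    <div>
      <textarea value={transcript} readOnly />
      <button
        onClick={() =>
          listening ? SpeechRecognition.stopListening() : SpeechRecognition.startListening()
        }
      >
        {listening ? 'Stop' : 'Speak'}
      </button>
    </div>
  );
};

export default App;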

Conclusion

Implementing voice input in my React app was a rewarding experience that noticeably improved how users interact with it. The react-speech-recognition library made the integration straightforward, and the feature can be a valuable addition to a wide range of applications, from simple text editors to complex interfaces. As you explore these methods, you’ll find that voice input can make your apps more accessible and user-friendly.
