JavaScript tutorial: Add speech recognition to your web app

While browsers are marching toward supporting speech recognition and more futuristic capabilities, web application developers are typically constrained to the keyboard and mouse. But what if we could augment keyboard and mouse interactions with other modes of interaction, like voice commands or hand positions?

In this series of posts, we'll build up a basic map explorer with multimodal interactions. First up: voice commands. Before we can incorporate any commands, though, we'll need to lay out the structure of our app.

Our app, bootstrapped with create-react-app, will be a full-screen map powered by the React components for Leaflet.js. After running create-react-app, yarn add leaflet, and yarn add react-leaflet, we’ll open up our App component and define our Map component:

import React, { Component } from 'react';
import { Map, TileLayer } from 'react-leaflet';
import './App.css';

class App extends Component {
  // Initial viewport: centered on Chicago at a city-level zoom.
  state = {
    center: [41.878099, -87.648116],
    zoom: 12,
  };

  // Keep our state in sync with the map as the user pans and zooms.
  updateViewport = (viewport) => {
    this.setState({
      center: viewport.center,
      zoom: viewport.zoom,
    });
  };

  render() {
    const {
      center,
      zoom,
    } = this.state;

    return (
      <div className="App">
        <Map
          style={{height: '100%', width: '100%'}}
          center={center}
          zoom={zoom}
          onViewportChange={this.updateViewport}>
          <TileLayer
            url="https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png"
            attribution="&copy; <a href=&quot;http://osm.org/copyright&quot;>OpenStreetMap</a> contributors"
          />
        </Map>
      </div>
    );
  }
}

export default App;
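
One detail the snippet above doesn't show is Leaflet's own stylesheet, which the map needs in order to lay out its tiles and controls correctly. Your project may already pull it in elsewhere (for example, via a link tag in index.html); if not, one common approach with create-react-app is to import it alongside the other imports in App.js:

// Leaflet's stylesheet is required for tiles and controls to render properly.
// create-react-app's bundler handles CSS imports like this out of the box.
import 'leaflet/dist/leaflet.css';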

The App component is a stateful component that keeps track of the center and zoom properties, passing them into the Map component. When the user interacts with the map via the built-in mouse and keyboard interactions, the onViewportChange callback fires and we update our state with the new center and zoom levels.
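
Because center and zoom live in React state rather than only inside Leaflet, any other interaction mode can drive the map the same way the mouse and keyboard do: by updating that state. As a rough sketch of where this is heading (the moveTo helper and its arguments are hypothetical, not part of the app yet), a voice command handler could later recenter the map through a method like this on the App component:

// Hypothetical helper on App: recenter the map programmatically.
// A future voice-command handler could call this, just as updateViewport
// responds to the built-in mouse and keyboard interactions.
moveTo = (center, zoom) => {
  this.setState({
    center,                                       // e.g. [41.878099, -87.648116]
    zoom: zoom != null ? zoom : this.state.zoom,  // keep the current zoom if none given
  });
};

Since the Map component receives center and zoom as props, the viewport follows whenever this state changes.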

With a full-screen component defined, our app looks like the following: