Offline Speech Recognition using Vosk (Kaldi-ASR): meta-offline-voice-agent
=========================================================================

meta-offline-voice-agent is an AGL layer that uses the Vosk API, built on the Kaldi ASR toolkit, to enable offline speech recognition capabilities in Automotive Grade Linux.


WIP
========
In its current state, the layer packages the Vosk library and is capable of recognizing speech, as verified with the test scripts in https://github.com/alphacep/vosk-api/tree/master/python/example.
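
For reference, the sketch below is modeled on those upstream examples; the `model` directory and `test.wav` file names are placeholders, and the audio is assumed to be 16 kHz mono PCM WAV.

```python
# Minimal vosk-api sketch (modeled on the upstream Python examples).
# "model" is an unpacked Vosk model directory; "test.wav" is 16 kHz mono PCM.
import json
import wave

from vosk import Model, KaldiRecognizer

wf = wave.open("test.wav", "rb")           # placeholder audio file
model = Model("model")                     # placeholder model directory
rec = KaldiRecognizer(model, wf.getframerate())

while True:
    data = wf.readframes(4000)
    if len(data) == 0:
        break
    if rec.AcceptWaveform(data):
        # A chunk produced a final result; print the recognized text.
        print(json.loads(rec.Result())["text"])

# Flush any remaining audio and print the final hypothesis.
print(json.loads(rec.FinalResult())["text"])
```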

### Working features:
- [vosk-api (python)](https://github.com/alphacep/vosk-api/tree/master/python)
- [vosk-websocket-server](https://github.com/alphacep/vosk-server/tree/master/websocket) (see the client sketch below)
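
For the websocket server, a client streams raw PCM audio and receives JSON results. The sketch below follows the upstream test client; it assumes the server is already running and reachable at ws://localhost:2700 (the upstream default), and that `test.wav` is a 16 kHz mono PCM file.

```python
# Sketch of a Vosk websocket-server client, based on the upstream test client.
# Assumes a running server at ws://localhost:2700 and a 16 kHz mono WAV file.
import asyncio
import json
import wave

import websockets


async def recognize(path="test.wav", uri="ws://localhost:2700"):
    wf = wave.open(path, "rb")
    async with websockets.connect(uri) as ws:
        # Tell the server the sample rate before streaming PCM chunks.
        await ws.send(json.dumps({"config": {"sample_rate": wf.getframerate()}}))
        while True:
            data = wf.readframes(4000)
            if len(data) == 0:
                break
            await ws.send(data)
            print(await ws.recv())       # partial or final result as JSON
        await ws.send('{"eof" : 1}')     # signal end of stream
        print(await ws.recv())           # final result


asyncio.run(recognize())
```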


Testing vosk-api on AGL
======================

### 1. Initializing the build environment:

The `agl-offline-voice-agent` feature needs to be enabled when sourcing `aglsetup.sh`:

```shell
$ source meta-agl/scripts/aglsetup.sh -m qemux86-64 -b build-voice-qemux86-64 agl-demo agl-offline-voice-agent ${AGL_META_PYTHON}

$ bitbake agl-demo-platform
```

### 2. Running the image:

```shell
$ runqemu tmp/deploy/images/qemux86-64/agl-demo-platform-qemux86-64.qemuboot.conf kvm serialstdio slirp publicvnc audio
```

### 3. Running the test with ptest-runner:

```shell
$ ptest-runner python3-vosk-api
```

### Currently supported targets:
- QEMU x86-64: Work in progress.

Maintainers:
- Aman Arora <aman.arora9848@gmail.com>