path: root/meta-offline-voice-agent/README.md
author    Malik Talha <talhamalik727x@gmail.com>    2023-10-06 01:46:44 +0500
committer Jan-Simon Moeller <jsmoeller@linuxfoundation.org>    2023-10-06 13:19:50 +0000
commit    713efdf66dca8e60c3db1e720a9bb2bd074c40f3 (patch)
tree      2a0442fee5812d4d5c19016ab479ab3daa30406b /meta-offline-voice-agent/README.md
parent    88775acac57bdd2184180ad672a410b1155f1e1f (diff)
Fix Scipy, OpenBlas, and NumPy library linking issues
This fixes the linking issues, primarily between SciPy and OpenBLAS, caused by SciPy expecting a different name for the OpenBLAS dynamic linking library.

Bug-AGL: SPEC-4925
Change-Id: Idb8f620134d63e7d9425a0df8942370430b3f700
Signed-off-by: Malik Talha <talhamalik727x@gmail.com>
Diffstat (limited to 'meta-offline-voice-agent/README.md')
-rw-r--r--    meta-offline-voice-agent/README.md    6
1 file changed, 0 insertions, 6 deletions
diff --git a/meta-offline-voice-agent/README.md b/meta-offline-voice-agent/README.md
index abde819a..bed8b35e 100644
--- a/meta-offline-voice-agent/README.md
+++ b/meta-offline-voice-agent/README.md
@@ -59,12 +59,6 @@ EXTRA_IMAGE_FEATURES += "ptest-pkgs"
The above method may be the easiest one but it's not recommended because `ptests` increase the image build times by a substantial amount. You can look into the official [vosk-api docs](https://alphacephei.com/vosk/install) for usage and other ways of testing.
### Test Snips
-(**Important**) Currently, there are some library linking issues between NumPy, SciPy, and OpenBLAS. While we investigate and fix them you need to use `LD_PRELOAD` method as a workaround for Snips to work properly. Input the following command as soon as you boot into the target image:
-```shell
-$ export LD_PRELOAD=/usr/lib/libopenblas.so.0
-```
-
-
In order to test the Snips NLU Intent Engine you can use the sample [pre-trained model](https://github.com/malik727/snips-model-agl), which by default is automatically built into the target image when you include this layer. To perform inference using this model you can run the following command inside your target image:
```shell
$ snips-inference parse /usr/share/nlu/snips/model/ -q "your command here"
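The hunk above deletes the documented `LD_PRELOAD` workaround from the README. For anyone still running an image built before the recipe-level fix, the removed command and a quick way to check whether the preload is actually needed can be sketched as follows; the SciPy extension path in the `ldd` line is an assumption for illustration and may differ between images:

```shell
# Workaround removed by this commit -- only needed on images built
# before the OpenBLAS/SciPy soname fix landed:
$ export LD_PRELOAD=/usr/lib/libopenblas.so.0

# Hypothetical check (extension path is an assumption): a "not found"
# entry here indicates the soname mismatch this commit addresses.
$ ldd /usr/lib/python3*/site-packages/scipy/linalg/_fblas*.so | grep -i blas
```

On images that include the fix, the preload is unnecessary and can be dropped (`unset LD_PRELOAD`).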