The onboard high-performance microphone of the Nicla Vision is the MP34DT06JTR from STMicroelectronics.
The OpenMV IDE includes several examples for the Nicla Vision onboard microphone, found under **File > Examples > Audio**. We will use the one called `micro_speech.py` to test the board's machine-learning speech-recognition capabilities.
First, download the pre-trained model file from the [example repository](https://raw.githubusercontent.com/iabdalkader/microspeech-yesno-model/main/model.tflite) and **copy** it to the Nicla Vision **storage drive**.
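If you prefer to script this step, the download-and-copy can be sketched in Python run on your computer (not on the board). The function name and the example mount points below are illustrative assumptions; where the board's storage drive appears varies by operating system.

```python
import os
import shutil
import urllib.request

MODEL_URL = ("https://raw.githubusercontent.com/iabdalkader/"
             "microspeech-yesno-model/main/model.tflite")

def copy_model_to_drive(drive_mount):
    """Download model.tflite and copy it to the Nicla Vision storage drive.

    `drive_mount` is wherever the board's mass-storage drive is mounted,
    e.g. "D:\\" on Windows or "/media/<user>/NICLA" on Linux (both paths
    are illustrative -- check your own system).
    """
    local_path, _ = urllib.request.urlretrieve(MODEL_URL, "model.tflite")
    return shutil.copy(local_path, os.path.join(drive_mount, "model.tflite"))
```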
Reset the board and run the following code in the OpenMV IDE.

```python
# ... (earlier lines of the micro_speech.py example, including the model
# loading and the callback definition, are omitted in this excerpt)

# Starts the audio streaming and processes incoming audio to recognize speech commands.
# If a callback is passed, listen() will loop forever and call the callback when a keyword
# is detected. Alternatively, `listen()` can be called with a timeout (in ms), and it
# returns if the timeout expires before detecting a keyword.
speech.listen(callback=callback, threshold=0.8)
```
After running the code, the matches are printed on the Serial Monitor whenever the board hears `No` or `Yes`, turning on the red or the green LED respectively.
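The `callback` passed to `speech.listen()` is defined earlier in the example file. As a minimal off-device sketch of its logic, the `LED` class below stands in for `pyb.LED` (which only exists on the board), and all names here are illustrative rather than taken from the example:

```python
class LED:
    """Stand-in for pyb.LED so the callback logic can run off-device."""
    def __init__(self, color):
        self.color = color
        self.lit = False

    def on(self):
        self.lit = True

    def off(self):
        self.lit = False

led_red = LED("red")
led_green = LED("green")

def callback(label, scores):
    # Print the match, then light the LED matching the keyword:
    # `No` -> red, `Yes` -> green, anything else -> no LED.
    print(label, scores)
    if label == "No":
        led_red.on()
    elif label == "Yes":
        led_green.on()
```

On the board, the real callback would call `pyb.LED(...).on()` directly; the stand-in class just makes the keyword-to-LED decision easy to inspect.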