Configuring audio with WebRTC

Configure Speech Recognizer before and after call

By default, the X2's Speech Recognizer holds the system microphone, and the glasses access it continuously. Because the microphone cannot be shared by multiple processes at the same time, make sure the voice recognizer is paused before initiating a WebRTC call. To pause it, execute this before initiating or accepting the call:

const val THIRD_EYE_VOICE_COMMAND_INTENT_PACKAGE =
    "com.thirdeyegen.api.voicecommand"

fun pauseVoiceRecognizer(context: Context) {
    val intent = Intent(THIRD_EYE_VOICE_COMMAND_INTENT_PACKAGE)
    intent.putExtra("instructions", "interrupt")
    context.sendBroadcast(intent)
}

And when the call is over, resume the recognizer like this:

fun resumeVoiceRecognizer(context: Context) {
    val intent =
        Intent(THIRD_EYE_VOICE_COMMAND_INTENT_PACKAGE)
    intent.putExtra("instructions", "resume")
    context.sendBroadcast(intent)
}
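
As an illustration, the two helpers above can be wrapped around the call lifecycle so the recognizer is always resumed when the call ends. This is a minimal sketch, not part of the official API; the `call` lambda stands in for your own WebRTC call logic:

```kotlin
import android.content.Context
import android.content.Intent

// Hypothetical wrapper: pause the recognizer for the duration of a call
// and resume it afterwards, even if the call logic throws.
fun runCallWithRecognizerPaused(context: Context, call: () -> Unit) {
    pauseVoiceRecognizer(context)       // sends the "interrupt" broadcast
    try {
        call()                          // initiate/accept and run the WebRTC call
    } finally {
        resumeVoiceRecognizer(context)  // sends the "resume" broadcast
    }
}
```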

Configure your application to use the built-in speaker

Another possible issue is that the glasses may try to stream audio through the headphone line. To make sure the glasses route audio through the built-in speakers, execute these lines before or during the call:

private var audioManager = getSystemService(Context.AUDIO_SERVICE) as AudioManager

audioManager.isSpeakerphoneOn = true
audioManager.mode = AudioManager.MODE_IN_COMMUNICATION
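
Since `MODE_IN_COMMUNICATION` and speakerphone routing change global audio behavior, it is good practice to restore the previous state once the call ends. The save/restore pattern below is a sketch and an assumption, not part of the original guide:

```kotlin
import android.content.Context
import android.media.AudioManager

class CallAudioRouter(context: Context) {
    private val audioManager =
        context.getSystemService(Context.AUDIO_SERVICE) as AudioManager
    private var savedMode = AudioManager.MODE_NORMAL
    private var savedSpeakerphone = false

    // Route audio to the built-in speaker for the call.
    fun onCallStarted() {
        savedMode = audioManager.mode
        savedSpeakerphone = audioManager.isSpeakerphoneOn
        audioManager.mode = AudioManager.MODE_IN_COMMUNICATION
        audioManager.isSpeakerphoneOn = true
    }

    // Restore whatever audio state the system was in before the call.
    fun onCallEnded() {
        audioManager.mode = savedMode
        audioManager.isSpeakerphoneOn = savedSpeakerphone
    }
}
```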

Modifying WebRTC Configuration

If the audio is still noisy after making the modifications described in the previous sections, make sure that you are using the correct WebRTC audio configuration, like this:

AudioDeviceModule adm = createAudioDeviceModule();

peerConnectionFactory = PeerConnectionFactory.builder()
    .setVideoEncoderFactory(encoderFactory)
    .setVideoDecoderFactory(decoderFactory)
    .setAudioDeviceModule(adm) // Important
    .setOptions(options)
    .createPeerConnectionFactory();

private AudioDeviceModule createAudioDeviceModule() {
    return JavaAudioDeviceModule.builder(activity)
        .setAudioSource(MediaRecorder.AudioSource.MIC)
        .setUseHardwareAcousticEchoCanceler(false) // Important
        .setUseHardwareNoiseSuppressor(false) // Important
        // .setSampleRate(16000)
        .createAudioDeviceModule();
}

When using Twilio, making these modifications requires downloading the Twilio source code from GitHub and applying the changes there. The modified code should then be used as a dependency, instead of the pre-built Gradle dependency. This fixes the noise and must be in place when creating the peer connection factory.
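
One way to consume the modified source is to include it as a local Gradle module in place of the pre-built artifact. This is a sketch using the Kotlin DSL; the folder path and module name are illustrative assumptions, not from the original guide:

```kotlin
// settings.gradle.kts — register the cloned, modified library as a local module.
// Assumes the source was cloned into "twilio-video-android/library" next to
// the project root; adjust the path to wherever you placed it.
include(":video")
project(":video").projectDir = File(rootDir, "twilio-video-android/library")

// app/build.gradle.kts — depend on the local module instead of the
// pre-built artifact previously pulled from Maven.
dependencies {
    implementation(project(":video"))
}
```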


Last updated 3 years ago
