Audio Share (Beta)

This feature is still in Beta. To know more or report any issues, feel free to reach out to us over Discord.

This feature is the audio analog of screen capture. There may be cases where the application needs to stream music into the room the peer has joined, either music stored locally on the device or audio playing in another app on the device.

Examples of such use cases include an FM-radio-like application where the host wants to stream music while also interacting with others in the room, or a host in a gaming app who wants to stream music from their device into the room along with their regular audio track.

Android Setup

  • Audio share is currently available only on Android 10 and above.

How does audio share work in Android

The 100ms SDK uses Android's MediaProjection APIs to capture the device audio and stream it along with the user's regular audio track. To achieve this, the SDK starts a foreground service, captures the device audio, and muxes those bytes with the data collected from the mic, so that the stream contains both the system audio and the mic data.

This API lets apps copy the audio being played by other apps that have set their audio usage to USAGE_MEDIA, USAGE_GAME, or USAGE_UNKNOWN (audio from apps like YouTube, for example, can be captured).

To use this, you need to pass the activity result intent from the Android native side to HMSSDK. In your app's MainActivity add:

import live.hms.hmssdk_flutter.HmssdkFlutterPlugin
import android.app.Activity
import android.content.Intent
import live.hms.hmssdk_flutter.Constants

override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)
    if (requestCode == Constants.AUDIO_SHARE_INTENT_REQUEST_CODE && resultCode == Activity.RESULT_OK) {
        HmssdkFlutterPlugin.hmssdkFlutterPlugin?.requestAudioShare(data)
    }
}

Do not forget to add the foreground service permission in AndroidManifest.xml:

<uses-permission android:name="android.permission.FOREGROUND_SERVICE" />

How to stream device audio from the app

To start streaming device audio, the app needs to call the startAudioShare method of HMSSDK, which takes two parameters:

  • The first is an HMSActionResultListener, a callback object used to report the success or failure of the action.
  • The second is the mode, of type HMSAudioMixingMode, in which the user wants to stream. This can be one of three available values:
      ◦ TALK_ONLY: only data captured by the mic is streamed in the room
      ◦ TALK_AND_MUSIC: data captured by the mic as well as the playback audio captured from the device is streamed in the room
      ◦ MUSIC_ONLY: only the playback audio captured from the device is streamed in the room

Following is the snippet on how to use this:

hmsSDK.startAudioShare(
    hmsActionResultListener: hmsActionResultListener,
    audioMixingMode: HMSAudioMixingMode.TALK_AND_MUSIC);
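
The hmsActionResultListener above is any object that implements HMSActionResultListener. Below is a minimal sketch of such a listener dedicated to audio share actions; the exact named parameters of onSuccess and onException are assumptions based on the hmssdk_flutter interface and should be checked against the SDK version you use.

import 'package:hmssdk_flutter/hmssdk_flutter.dart';

// Minimal sketch of a result listener for audio share actions.
// The callback signatures below are assumptions based on the
// HMSActionResultListener interface in hmssdk_flutter; verify them
// against your SDK version.
class AudioShareResultListener implements HMSActionResultListener {
  @override
  void onSuccess(
      {HMSActionResultListenerMethod? methodType,
      Map<String, dynamic>? arguments}) {
    // Called when startAudioShare (or stopAudioShare) succeeds.
    print("Audio share action succeeded: $methodType");
  }

  @override
  void onException(
      {HMSActionResultListenerMethod? methodType,
      Map<String, dynamic>? arguments,
      required HMSException hmsException}) {
    // Called when the action fails, with the failure details.
    print("Audio share action failed: $hmsException");
  }
}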

How to change mode

To change the mode in which the user is streaming audio, call the setAudioMixingMode API and pass one of TALK_ONLY, TALK_AND_MUSIC, or MUSIC_ONLY.

hmsSDK.setAudioMixingMode(audioMixingMode: HMSAudioMixingMode.MUSIC_ONLY);

Note that TALK_ONLY mode is equivalent to the regular mode, that is, the behaviour you get without calling this API at all.

How to stop audio sharing

To stop capturing device audio and streaming it into the room, call the stopAudioShare API and provide an HMSActionResultListener to listen for the success or error callbacks.

hmsSDK.stopAudioShare(hmsActionResultListener: hmsActionResultListener);
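
Tying the Android flow together, a small helper could start sharing, switch modes mid-session, and stop when done. This is a sketch using only the calls shown above; the class and method names are illustrative.

import 'package:hmssdk_flutter/hmssdk_flutter.dart';

// Hypothetical helper wrapping the audio share lifecycle.
// `hmsSDK` is an already-built, joined HMSSDK instance and `listener`
// is your HMSActionResultListener implementation.
class AudioShareController {
  final HMSSDK hmsSDK;
  final HMSActionResultListener listener;

  AudioShareController(this.hmsSDK, this.listener);

  // Start streaming mic plus device playback audio into the room.
  void start() {
    hmsSDK.startAudioShare(
        hmsActionResultListener: listener,
        audioMixingMode: HMSAudioMixingMode.TALK_AND_MUSIC);
  }

  // Switch to streaming only the device playback audio.
  void musicOnly() {
    hmsSDK.setAudioMixingMode(
        audioMixingMode: HMSAudioMixingMode.MUSIC_ONLY);
  }

  // Stop capturing and streaming device audio.
  void stop() {
    hmsSDK.stopAudioShare(hmsActionResultListener: listener);
  }
}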

iOS Setup

  • Minimum iOS version required to support Audio Share is iOS 13

How audio sharing works in iOS

The audio that you share goes to other peers through the mic channel. To be able to share audio, you need to set up the SDK to use a custom audio source instead of the default mic. To do that, you pass an instance of a custom audio source to HMSAudioTrackSetting on your HMSSDK instance.

How to use HMSSDK to share audio from a file

  1. Create an instance of HMSAudioFilePlayerNode and an instance of HMSMicNode as below. HMSAudioFilePlayerNode requires a String parameter which is later used to control the music player in the room.
HMSAudioFilePlayerNode audioFilePlayerNode = HMSAudioFilePlayerNode("nodeName");
HMSMicNode micNode = HMSMicNode();
  2. Next, create an instance of HMSAudioMixerSource, passing an array of the nodes created in the step above, like below:
HMSAudioMixerSource audioMixerSource = HMSAudioMixerSource(nodes: [audioFilePlayerNode, micNode]);
  3. Next, pass this custom audio source to the audioSource parameter of HMSAudioTrackSetting that you set on the HMSSDK instance, like so:
HMSAudioTrackSetting audioSettings = HMSAudioTrackSetting(..., audioSource: audioMixerSource);
HMSTrackSetting trackSetting = HMSTrackSetting(..., audioTrackSetting: audioSettings);
HMSSDK hmsSDK = HMSSDK(hmsTrackSetting: trackSetting);
hmsSDK.build();

That's all you need to set up the SDK to use your custom audio source.

  4. Call the play function on audioFilePlayerNode to play a file from the local device in the meeting room, passing its file URL like below:
HMSAudioFilePlayerNode audioFilePlayerNode = HMSAudioFilePlayerNode("nodeName");
audioFilePlayerNode.play(fileUrl: ...);

Note that the parameter value passed to HMSAudioFilePlayerNode must be the same as the one defined at the time of initializing HMSSDK.
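
One way to keep the two in sync is to define the node name once and reuse it. The following is a minimal sketch: the constant and function names are illustrative, and the fileUrl parameter is assumed to be the path or URL of a local audio file as a String.

import 'package:hmssdk_flutter/hmssdk_flutter.dart';

// Illustrative: keep the node name in one constant so it matches at
// SDK setup time and when controlling playback later.
const String musicNodeName = "musicNode";

HMSAudioMixerSource buildMixerSource() {
  HMSAudioFilePlayerNode audioFilePlayerNode =
      HMSAudioFilePlayerNode(musicNodeName);
  HMSMicNode micNode = HMSMicNode();
  return HMSAudioMixerSource(nodes: [audioFilePlayerNode, micNode]);
}

void playMusic(String fileUrl) {
  // Re-create the node with the same name used during SDK setup,
  // then play the given local file through it.
  HMSAudioFilePlayerNode playerNode = HMSAudioFilePlayerNode(musicNodeName);
  playerNode.play(fileUrl: fileUrl);
}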

How to change mix volume of different nodes

You can use the setVolume method on nodes to control their volume.

audioFilePlayerNode.setVolume(0.5);
micNode.setVolume(0.9);

Note that the volume value ranges from 0.0 to 1.0.

How to schedule multiple audio files for back-to-back playback

You can set the interrupts parameter to false to tell audioFilePlayerNode not to interrupt the current file's playback, but to schedule the new file after the current one finishes, like below:

audioFilePlayerNode.play(fileUrl: url to file 1)
audioFilePlayerNode.play(fileUrl: url to file 2, interrupts: false)
audioFilePlayerNode.play(fileUrl: url to file 3, interrupts: false)
...

How to play multiple files concurrently

You can create multiple instances of HMSAudioFilePlayerNode and pass them as nodes when creating audioMixerSource, like so:

HMSAudioFilePlayerNode backgroundMusicNode = HMSAudioFilePlayerNode("backgroundMusicNode");
backgroundMusicNode.setVolume(0.2);
HMSAudioFilePlayerNode audioFilePlaybackNode = HMSAudioFilePlayerNode("audioFilePlaybackNode");
audioFilePlaybackNode.setVolume(0.5);
HMSMicNode micNode = HMSMicNode();
HMSAudioMixerSource audioMixerSource = HMSAudioMixerSource(nodes: [backgroundMusicNode, audioFilePlaybackNode, micNode]);

Now you can play looping background music at a low volume and an audio file at the same time:

backgroundMusicNode.play(fileUrl: ..., loops: true);
audioFilePlaybackNode.play(fileUrl: ...);

How to pause, resume, stop playback and more

You can use the following methods on HMSAudioFilePlayerNode to pause, resume, or stop playback, and more:

audioFilePlayerNode.pause();
audioFilePlayerNode.resume();
audioFilePlayerNode.stop();
bool isPlaying = await audioFilePlayerNode.isPlaying();
double currentPlaybackTime = audioFilePlayerNode.currentDuration();
double totalPlaybackDuration = audioFilePlayerNode.duration();

How to share audio that's playing on your iPhone

Note: iOS allows access to the audio playing on an iOS device (for example, from another app like Spotify) only while broadcasting your entire iPhone screen. For this to work, you should implement screen sharing in your app. You can follow the Screen Share guide to set it up.

Once you have implemented the screen share feature above, follow the steps below to enable system audio broadcasting while sharing your screen:

  1. Get an instance of HMSScreenBroadcastAudioReceiverNode and add it to your mixer source.
HMSScreenBroadcastAudioReceiverNode screenAudioNode = HMSScreenBroadcastAudioReceiverNode();
HMSAudioMixerSource audioMixerSource = HMSAudioMixerSource(nodes: [audioFilePlaybackNode, micNode, screenAudioNode]);

Note: you can pass only a single instance of HMSMicNode and HMSScreenBroadcastAudioReceiverNode to HMSAudioMixerSource, otherwise you will receive an error.

Now your mixer source is set to receive audio from your broadcast extension.
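
As a reminder of how this mixer source reaches the SDK, it is passed through the same track settings shown earlier. The following is a minimal sketch; other track settings are omitted and assumed optional, and the function name is illustrative.

import 'package:hmssdk_flutter/hmssdk_flutter.dart';

// Sketch: wire the mixer source (containing the file player, mic and
// screen audio nodes) into the SDK's audio track settings before
// building HMSSDK. Other track settings are omitted for brevity.
HMSSDK buildSdkWithMixer(HMSAudioMixerSource audioMixerSource) {
  HMSAudioTrackSetting audioSettings =
      HMSAudioTrackSetting(audioSource: audioMixerSource);
  HMSTrackSetting trackSetting =
      HMSTrackSetting(audioTrackSetting: audioSettings);
  HMSSDK hmsSDK = HMSSDK(hmsTrackSetting: trackSetting);
  hmsSDK.build();
  return hmsSDK;
}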

  2. Next, set up the broadcast extension to send audio to the main app.

The broadcast extension receives the audio playing on your iOS device in the processSampleBuffer function of your RPBroadcastSampleHandler class. To send audio from the broadcast extension to the main app, call the process(audioSampleBuffer:) function on HMSScreenRenderer:

override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer, with sampleBufferType: RPSampleBufferType) {
    ...
    case RPSampleBufferType.audioApp:
        _ = self.screenRenderer.process(audioSampleBuffer: sampleBuffer)
        break
    ...
}

Now your broadcast extension is set up to send audio to the main app.

And that's it. Now your custom mixer source in the main app can receive the audio from broadcast extension as well.