How FixHealth is leveraging AI with Live Video Stack

December 14, 2023 · 3 min read


FixHealth is a global platform that provides access to high-quality physiotherapy care. Founded in 2020 by Dr. Sheetal Mundhada, the company set out to make pain relief as accessible and convenient as watching a Netflix movie. Physiotherapy sessions take place at the patient’s home, with no wait time, and are administered by experienced, high-quality physiotherapists.

Since its launch, the platform has grown to hundreds of thousands of sessions across thousands of cities in the US, UK, and India.

Background

FixHealth’s use case for video was straightforward: add a video calling feature in their web app for video consultations between patients and their physiotherapists.

Initially, the team provided 1-on-1 physiotherapy sessions online via Zoom. However, their vision was always to incorporate technology to aid the physiotherapist. By utilizing modern posture detection machine learning models, they could measure outcomes across sessions and provide real-time feedback to patients.
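To make “real-time feedback on posture” concrete: browser pose models (e.g. those in TensorFlow.js) return 2D keypoints per joint, and many correctness checks reduce to the angle formed at a joint. A minimal sketch of that geometry, with illustrative names and thresholds that are not FixHealth’s actual logic:

```typescript
// Hypothetical sketch: turning pose-model keypoints into posture feedback.
// A pose model returns 2D keypoints; the angle at a joint (say, the knee)
// falls out of the two segments meeting there.

interface Keypoint { x: number; y: number }

// Angle in degrees at joint `b`, formed by segments b→a and b→c.
function jointAngle(a: Keypoint, b: Keypoint, c: Keypoint): number {
  const ab = { x: a.x - b.x, y: a.y - b.y };
  const cb = { x: c.x - b.x, y: c.y - b.y };
  const dot = ab.x * cb.x + ab.y * cb.y;
  const mag = Math.hypot(ab.x, ab.y) * Math.hypot(cb.x, cb.y);
  return (Math.acos(dot / mag) * 180) / Math.PI;
}

// e.g. hip, knee, and ankle roughly collinear → angle ≈ 180° (straight leg),
// so a feedback rule might flag angles well below a target threshold.
```

Running this per frame on the live video feed is what turns a pose model into session-by-session outcome measurement.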

Initial Implementation

Raunak Khandelwal, co-founder, initially selected Twilio to build an integrated 1-on-1 live video call, with TensorFlow models running in the physiotherapist’s browser to detect posture in real time.

However, they quickly encountered two issues:

  • Twilio didn’t offer a natural plugin interface to access video frames, leading to the double rendering of video – once on a hidden canvas for ML models and once for actual display.
  • Twilio left much of the audio-video optimization, including handling disconnections and setting optimal initial settings, to the developer, resulting in a noticeable drop in video quality.
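The workaround in the first bullet means pixels flow through two render paths: the remote `<video>` element renders once for display, and is drawn again onto a hidden canvas every frame so the model can read raw pixels. A minimal sketch of that pattern (illustrative names, not FixHealth’s actual code):

```typescript
// Sketch of the double-render workaround: each frame of the displayed video
// is copied to an off-DOM canvas purely so a pose model can read pixels.

// Pure helper: run the model only on every Nth frame to limit CPU cost.
function shouldRunModel(frameIndex: number, every: number): boolean {
  return frameIndex % every === 0;
}

function startAnalysis(
  video: HTMLVideoElement,
  detectPose: (frame: HTMLCanvasElement) => Promise<void>, // e.g. a TensorFlow.js model
  modelEveryNthFrame = 3,
) {
  const hidden = document.createElement("canvas"); // never attached to the DOM
  const ctx = hidden.getContext("2d")!;
  let frame = 0;

  const loop = async () => {
    hidden.width = video.videoWidth;
    hidden.height = video.videoHeight;
    ctx.drawImage(video, 0, 0); // the second, ML-only render
    if (shouldRunModel(frame++, modelEveryNthFrame)) {
      await detectPose(hidden);
    }
    requestAnimationFrame(loop);
  };
  requestAnimationFrame(loop);
}
```

Even with frame-skipping, every displayed frame is still copied, which is the per-frame overhead that made the physiotherapists’ machines sluggish.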

After two months of experimentation, physiotherapists were dissatisfied with the sluggish performance of their systems, attributed to inefficiencies in the machine learning pipeline, and raised concerns about subpar audio-video quality: video dropped out frequently, even under favorable internet conditions.

Switching to 100ms

When 100ms approached them, Raunak was immediately excited by our first-class plugin interface for running ML models.

Lead engineer Ashhar Akhlaque started to build a proof of concept (POC) and integrated their ML models, going live in just one week. The improvements were immediate:

  • 100ms’ template configuration and built-in degradation handling significantly enhanced video quality.
  • 100ms’ first-class plugin interface allowed the entire app to run smoothly without slowing down the physiotherapists’ systems.
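With a first-class plugin interface, the SDK hands each video frame to the plugin directly, so an analysis model needs no hidden canvas and no second render. The sketch below approximates the shape of an analyze-style video plugin; the interface and registration call are paraphrased from 100ms’ web plugin documentation, so verify exact names and signatures against the current docs:

```typescript
// Hypothetical sketch of an analyze-style video plugin in roughly the shape
// 100ms' web plugin interface uses. Names are approximated, not exact.

type PluginType = "TRANSFORM" | "ANALYZE";

interface VideoFramePlugin {
  getName(): string;
  getPluginType(): PluginType;
  init(): Promise<void>;
  processVideoFrame(input: HTMLCanvasElement): Promise<void>;
  stop(): void;
}

class PosturePlugin implements VideoFramePlugin {
  constructor(private onPose: (frame: HTMLCanvasElement) => Promise<void>) {}
  getName() { return "posture-analyze"; }
  // An ANALYZE plugin only reads frames; the SDK keeps the display path untouched.
  getPluginType(): PluginType { return "ANALYZE"; }
  async init() { /* load the pose model here, e.g. a TensorFlow.js graph */ }
  async processVideoFrame(input: HTMLCanvasElement) {
    await this.onPose(input); // the SDK delivers the frame — no hidden canvas
  }
  stop() { /* release model resources */ }
}

// Registration is roughly (hypothetical call shape — see the 100ms docs):
// await hmsActions.addPluginToVideoTrack(new PosturePlugin(runPoseModel), 15 /* fps */);
```

Because the plugin can also be registered at a capped frame rate, the model’s cost is bounded independently of the display frame rate.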

Over the next two months, they transitioned their customers from Zoom to 100ms, with real-time posture-detection models.

Their platform now uses ML to aid in root-problem detection, provide real-time feedback to the patient and physiotherapist on posture correctness, and also measure improvement across sessions.
