Google

Flutter Forward Holobooth


We partnered with Google and the TensorFlow team to create a virtual photo booth experience showcasing Flutter and machine learning. The demo was featured as part of the Flutter Forward event in January 2023.

TensorFlow.js for facial detection

The Holobooth builds on the first version of the Photo Booth app from Google I/O 2021. This version uses the camera plugin for web to detect a user’s face within the camera frame and the TensorFlow.js MediaPipe FaceMesh model to map facial features live. This let us create an immersive, futuristic photo booth experience where a user’s facial expressions are reflected in real time on a virtual Dash or Sparky.
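At its core, this kind of mapping turns raw landmark coordinates from the face model into a few normalized expression values that can drive an avatar. A minimal sketch of that idea, with the landmark choices, ratios, and thresholds entirely illustrative (not the actual indices or tuning Holobooth uses):

```typescript
// Sketch: convert FaceMesh-style landmarks into normalized expression values.
// A landmark is an (x, y) point in image space; which mesh indices map to
// "upper lip", "face top", etc. is an assumption for illustration.
interface Point { x: number; y: number; }

// Ratio of mouth gap to overall face height, clamped to [0, 1].
function mouthOpenness(upperLip: Point, lowerLip: Point, faceTop: Point, faceBottom: Point): number {
  const gap = Math.abs(lowerLip.y - upperLip.y);
  const faceHeight = Math.abs(faceBottom.y - faceTop.y);
  if (faceHeight === 0) return 0;
  // Treat a gap of ~25% of face height as a fully open mouth.
  return Math.min(1, gap / (faceHeight * 0.25));
}

// Head tilt (roll) estimated from the angle between the eyes, in radians.
function headRoll(leftEye: Point, rightEye: Point): number {
  return Math.atan2(rightEye.y - leftEye.y, rightEye.x - leftEye.x);
}
```

Values like these would be recomputed on every detection frame and fed to the avatar, which is what makes the mirroring feel live.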

Rive background animations and avatars

Beyond face detection, we needed to animate the app's background and the Dash and Sparky avatars to create a high-quality, interactive, and fun experience for users. We turned to Rive, a web app built in Flutter that specializes in highly performant animations, to add a spark to Dash and Sparky. We used Rive State Machines to control how the avatars responded to movement, so an avatar could mimic a user's behavior and expressions. Using the FaceMesh model’s feature detection, we correlated the detected features to specific coordinates on our avatar models. To transform the model's output and determine how the avatar appears on screen, we used the StateMachineController.
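Conceptually, a state machine controller exposes named inputs that the animation blends respond to. A small, library-free sketch of that pattern (the input names and clamping are our own illustration, not the Holobooth Rive setup, which uses the Flutter rive package's StateMachineController):

```typescript
// Stand-in for a state machine's named inputs: each frame, detected facial
// metrics are written to inputs, and the animation reads them to pick a pose.
class AvatarInputs {
  private values = new Map<string, number>();

  constructor(names: string[]) {
    for (const n of names) this.values.set(n, 0);
  }

  // Clamp to [0, 1] so an out-of-range detection can't push the
  // animation into an undefined pose.
  set(name: string, value: number): void {
    if (!this.values.has(name)) throw new Error(`unknown input: ${name}`);
    this.values.set(name, Math.max(0, Math.min(1, value)));
  }

  get(name: string): number {
    const v = this.values.get(name);
    if (v === undefined) throw new Error(`unknown input: ${name}`);
    return v;
  }
}

// Per-frame update: detected expression values drive the avatar.
const dash = new AvatarInputs(["mouthOpen", "leftEyeClosed", "rightEyeClosed"]);
dash.set("mouthOpen", 0.7);
```

Keeping the detection-to-input translation in one place like this is also what makes it easy to swap in a different face model without touching the animations.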

Sharing your experience with the help of Firebase

Playing with the animations and avatars in real time is pretty fun, but sharing the fun makes it that much cooler! Using Cloud Functions for Firebase and Cloud Storage for Firebase, we were able to create a GIF or video that users can share to social media. With the click of a button, you can share your Holobooth experience with friends, who can click into your post and create their very own!
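Once the generated video is hosted, sharing boils down to building a prefilled social post link that points at it. A sketch of that last step using Twitter's web intent URL (the caption and asset URL here are made up for illustration):

```typescript
// Build a prefilled tweet link for a generated video hosted in Cloud Storage.
// URLSearchParams handles percent-encoding of the caption and asset URL.
function twitterShareUrl(text: string, assetUrl: string): string {
  const params = new URLSearchParams({ text, url: assetUrl });
  return `https://twitter.com/intent/tweet?${params.toString()}`;
}

const shareUrl = twitterShareUrl(
  "Check out my Holobooth video!",
  "https://example.com/holobooth/my-video.mp4", // illustrative hosted asset
);
```

Opening a link like this drops the user into a compose window with the caption and video link already filled in, which is what makes one-click sharing possible.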

Scalable architecture

Of course, scalable architecture and testing practices are at the heart of everything we do. We used Very Good CLI to generate the project, which meant our first commit had null safety, internationalization, and 100% unit and widget test coverage. Check out the open source code to see how we break the code down into a layered architecture.
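The idea behind a layered architecture is that each layer only talks to the one directly below it, so each can be tested in isolation against a fake. A tiny sketch of the pattern (the names and types are illustrative, not taken from the Holobooth codebase):

```typescript
// Data layer: raw access to an external source (network, plugin, SDK).
interface PhotosApi {
  fetchRawPhoto(id: string): { id: string; bytes: number[] };
}

// Domain type the rest of the app works with.
interface Photo { id: string; sizeInBytes: number; }

// Repository layer: translates raw data into domain types, hiding the API
// details from everything above it.
class PhotoRepository {
  constructor(private api: PhotosApi) {}
  getPhoto(id: string): Photo {
    const raw = this.api.fetchRawPhoto(id);
    return { id: raw.id, sizeInBytes: raw.bytes.length };
  }
}

// The UI layer depends only on the repository, so in tests a fake API
// stands in for the real data source.
const fakeApi: PhotosApi = { fetchRawPhoto: (id) => ({ id, bytes: [1, 2, 3] }) };
const repo = new PhotoRepository(fakeApi);
```

Because the repository's contract is all the UI sees, swapping the data source (or mocking it for that 100% test coverage) never touches the layers above.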

Machine learning and what’s to come

Flutter Forward Holobooth was our first major project incorporating machine learning into a Flutter app. Much of our initial development work involved experimenting with various machine learning models and figuring out how to incorporate TensorFlow.js into a Flutter web experience. We’re excited to further explore how to incorporate machine learning into digital experiences for users on any device. Give Holobooth a try!

Industry
Technology
Project Type
Showcase for building a web app with Flutter and Firebase
VGV Services
Engineering
Open Source
Program Management

