Deep Learning in Android using TensorFlow Lite with Flutter

VAIBHAV HARIRAMANI
12 min read · Jun 20, 2021

Deploy TensorFlow Lite models on Android for machine learning projects without using ML Kit.

Nowadays everyone is working on web app development and machine learning. While working on some projects, I decided to integrate machine learning into a mobile app, but found very few articles on the topic. So I decided to write about using the Flutter framework with TensorFlow Lite. This tutorial demonstrates how to use the TensorFlow Lite machine learning library in a Flutter project to perform binary image classification (cats vs. dogs), a fundamental use case.

Today almost everybody has a smartphone equipped with a fantastic GPU, capable not only of running video games but also of performing ML calculations directly on the device. You can think of this as your smartphone having received an artificial brain. This may sound unrealistic, but it isn't! You can deploy a trained neural network into such an artificial brain and connect it to your Flutter app!

Sounds good? Then let's start.

App

What will we have at the end of this tutorial? We'll develop a simple Android app that identifies "Cat or Dog", but we will learn much more than that.

As you probably guessed, our app will use a deep neural network (DNN) to analyze the photo and classify it into one of two classes: Cat or Dog.

Dataset

As in any ML project, the first step is to find a dataset for your model; crafting a dataset yourself can be a very painful task.

What is a dataset? In our case, it is a big collection of photos, each labeled with a "cat" or "dog" tag.

Fortunately, there is a place where you can find ready-to-use datasets: the Kaggle.com portal.

For example, here is what we are looking for: https://www.kaggle.com/tongpython/cat-and-dog

Download and extract the archive somewhere on your computer.

Before going further, some readers may want a high-level understanding of what's going on inside those DNNs. If you're not familiar with how CNNs work, I recommend checking out my blog post, A Brief Intro to Convolutional Neural Networks.

The main idea is to use the TensorFlow Lite plugin to classify an image of an animal as either a dog or a cat. Along the way, we will also use the Image Picker library to fetch images from the device gallery or camera. The main process is to load our pre-trained cat/dog model with the TensorFlow Lite library and classify a test animal image with it.

Using the TensorFlow Lite library will help us load as well as apply the model for image classification on mobile. Image classification is a computer vision task that works to identify and categorize various elements of images and/or videos. Image classification models are trained to take an image as input and output one or more labels describing the image.

So, let’s get started!

Training the ML model

To do this, we’ll need a pre-trained classification model intended for mobile use.

Image classification with TensorFlow Lite Model Maker.

Create a model with default options

The first step is to install TensorFlow Lite Model Maker.

$ pip install -q tflite-model-maker

Obtaining the dataset

Let's use the popular cats and dogs dataset to create a TF Lite model that classifies them. The first step is to download the dataset and then set up the training and test set paths.
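Here is a minimal sketch of this step, assuming the Kaggle archive has been extracted locally (the directory names below are placeholders; adjust them to your actual layout):

import os

# Placeholder location; point this at wherever you extracted the Kaggle archive.
dataset_root = os.path.expanduser('~/datasets/cat-and-dog')
train_path = os.path.join(dataset_root, 'training_set')
test_path = os.path.join(dataset_root, 'test_set')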

Split the data

The next step is to load this data and split it into a training and testing set. This is done using the ImageClassifierDataLoader module from tflite_model_maker.
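A sketch of the loading and splitting step, using the ImageClassifierDataLoader API mentioned above (newer Model Maker releases moved this to image_classifier.DataLoader, so match your installed version):

from tflite_model_maker import ImageClassifierDataLoader

# Each subfolder name under train_path ("cats", "dogs") becomes a class label.
data = ImageClassifierDataLoader.from_folder(train_path)
# Keep 90% of the images for training and hold out 10% for testing.
train_data, test_data = data.split(0.9)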

Create the image classifier

You can now use the model maker to create a new classifier from this dataset. The model maker will then train the model using the default parameters.
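With the data loaded, creating and training the classifier is a single call (at the time of writing, Model Maker defaults to transfer learning on an EfficientNet-Lite0 backbone):

from tflite_model_maker import image_classifier

# Train a classifier on our data with default hyperparameters.
model = image_classifier.create(train_data)
model.summary()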

Evaluate the model

When the training process is done, you can evaluate the model on the test data. You will notice that you get a model with high accuracy in just a couple of steps. The reason is that TensorFlow Lite Model Maker uses transfer learning with a pre-trained model; this happens in the create step shown above.

loss, accuracy = model.evaluate(test_data)

Exporting the model

At this point, you can export the model and use it in your application. The export function is responsible for doing that. Let’s export this model to the current working directory.

model.export(export_dir='.')

As expected, you can see the new TensorFlow Lite model.

There is an even quicker solution: a cloud-based service. We'll use the Teachable Machine web portal to train our CNN and download the resulting model.

Image classification with Teachable Machine web portal.
If you'd prefer, you can also train your own model using Teachable Machine, a no-code model-building service from Google.

Open this link in your browser and click “get started”:

Now choose “Image project”:

At this step, enter the names of your classes for classification. Replace "Class 1" with "Dog", and "Class 2" with "Cat".

Now let's upload the image set for the "Dog" class. Click "Upload"; a file selection dialog should appear.

Open the folder where you extracted your dataset. Then go to “dataset/training_set”. You will find two folders here:

Quite naturally, open the "dogs" folder and select all the images in it. Then click "Open".

Do the same with the dataset for the "Cat" class. Now click the "Train model" button. Your CNN is learning now:

It should take a couple of minutes. After training, you can verify how well your trained model recognizes samples outside its training set. Choose "File" as the input source and click the button "Choose images from your files":

In the file selection dialog, open the folder where your dataset was extracted and go to the "dataset/test_set" folder. Then pick a "dog" image.

As you can see, your CNN classified it correctly with 100% confidence:

Now verify a "cat" image:

Wow, this is an impressive result! You can verify more images of your choice.

Now it's time to download your trained model. You will actually download a specific representation of your model. For Flutter, we have to choose the "TensorFlow Lite" format. Use the default "model conversion type" and click "Download my model":

It will take a couple of minutes. Finally, you will get a downloaded zip archive containing these two files:

"labels.txt" contains, well, the label names "Cat" and "Dog".

“model_unquant.tflite” — contains the representation of your trained model.

Flutter: preparation of the project

Create a new Flutter project

First, we need to create a new Flutter project. For that, make sure that the Flutter SDK and other Flutter app development-related requirements are properly installed. If everything is properly set up, then in order to create a project, we can simply run the following command in the desired local directory:

flutter create cat_dog_identifier

(Note: Flutter project names must be in lowercase_with_underscores, so a name like catDogIdentifier would be rejected.)

After the project has been set up, we can navigate inside the project directory and execute the following command in the terminal to run the project in either an available emulator or an actual device:

flutter run

After successfully building, we will get the following result in the emulator screen:

Creating an Image View on Screen

Here, we are going to implement the UI to fetch an image from the device library and display it on the app screen. For fetching the image from the gallery, we’re going to make use of the Image Picker library. This library offers modules to fetch image and video sources from the device camera, gallery, etc.

First, we need to install the image_picker and tflite libraries. For that, we add the following lines under the dependencies: section of our project's pubspec.yaml file:

dependencies:
  image_picker: ^0.8.0+3
  tflite: ^1.1.2
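After saving pubspec.yaml, fetch the new packages:

flutter pub get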

For Android, we need to add the following setting to the android object of the ./android/app/build.gradle file:

aaptOptions {
    noCompress 'tflite'
    noCompress 'lite'
}

Here, we need to check to see if the app builds properly by executing a flutter run command.

If an error occurs, we may need to increase the minimum SDK version to ≥19 in ./android/app/build.gradle file for the tflite plugin to work.

minSdkVersion 19

Once the app builds properly, we’ll be ready to use the TensorFlow Lite package in our Flutter project.

Now, we need to import the necessary packages in the main.dart file of our project:

import 'dart:io';
import 'package:flutter/material.dart';
import 'package:image_picker/image_picker.dart';

In the main.dart file, we will have the MyHomePage stateful widget class. In its State class, we need to declare variables to hold the fetched image file and a loading flag. Here, we do that with a File variable named _image and a bool variable named _loading:

bool _loading = true;
File _image;

We will also declare an _output list to hold the result from the model:

List _output;

Now, we are going to implement the UI, which will enable users to pick and display an image. The UI will have an image view section and two buttons that allow users to capture a photo with the camera or pick one from the gallery. The overall UI template is provided in the code snippet below:

@override
Widget build(BuildContext context) {
  return Scaffold(
    // Alpha byte added: 0x004242 alone has zero alpha and renders fully transparent.
    backgroundColor: Color(0xFF004242),
    body: Container(
      padding: EdgeInsets.symmetric(horizontal: 24),
      child: Column(
        crossAxisAlignment: CrossAxisAlignment.start,
        children: [
          SizedBox(height: 50),
          Text(
            'Geeky Bawa',
            style: TextStyle(color: Colors.white, fontSize: 20),
          ),
          SizedBox(height: 5),
          Text(
            'Cats and Dogs Detector App',
            style: TextStyle(
              color: Colors.white,
              fontWeight: FontWeight.w500,
              fontSize: 30,
            ),
          ),
          SizedBox(height: 50),
          Center(
            child: _loading
                // Placeholder shown until the user picks an image.
                ? Container(
                    width: 350,
                    child: Column(
                      children: [
                        Image.asset('assets/cat_dog_icon.png'),
                        SizedBox(height: 50),
                      ],
                    ),
                  )
                // Selected image plus the top classification label.
                : Container(
                    child: Column(
                      children: [
                        Container(
                          height: 250,
                          child: Image.file(_image),
                        ),
                        SizedBox(height: 20),
                        _output != null
                            ? Text(
                                '${_output[0]['label']}',
                                style: TextStyle(color: Colors.white, fontSize: 15),
                              )
                            : Container(),
                        SizedBox(height: 10),
                      ],
                    ),
                  ),
          ),
          Container(
            width: MediaQuery.of(context).size.width,
            child: Column(
              children: [
                GestureDetector(
                  onTap: () {}, // wired up to pickImage() later
                  child: Container(
                    width: MediaQuery.of(context).size.width - 250,
                    alignment: Alignment.center,
                    padding: EdgeInsets.symmetric(horizontal: 10, vertical: 18),
                    decoration: BoxDecoration(
                      color: Colors.redAccent,
                      borderRadius: BorderRadius.circular(6),
                    ),
                    child: Text(
                      'Capture a Photo',
                      style: TextStyle(color: Colors.white, fontSize: 16),
                    ),
                  ),
                ),
                SizedBox(height: 5),
                GestureDetector(
                  onTap: () {}, // wired up to pickGalleryImage() later
                  child: Container(
                    width: MediaQuery.of(context).size.width - 250,
                    alignment: Alignment.center,
                    padding: EdgeInsets.symmetric(horizontal: 10, vertical: 18),
                    decoration: BoxDecoration(
                      color: Colors.redAccent,
                      borderRadius: BorderRadius.circular(6),
                    ),
                    child: Text(
                      'Select a Photo',
                      style: TextStyle(color: Colors.white, fontSize: 16),
                    ),
                  ),
                ),
              ],
            ),
          ),
        ],
      ),
    ),
  );
}

Here, we've used a Container widget with a card-like style for the image display, along with conditional rendering to show a placeholder image until an actual image is selected and loaded. We've also used GestureDetector widgets to render the buttons just below the image view section.

Using TensorFlow Lite for Image Classification

First, we need to import the package into our main.dart file, as shown in the code snippet below:

import 'package:tflite/tflite.dart';

Performing Image classification

Now, we are going to write the code that actually performs the image classification. The _output list we declared earlier will store the result of the model's classification.

Loading the Model

Now, we need to load the model files into the app. For that, we're going to define a function called loadModel. Using the loadModel method provided by the Tflite instance, we load our model files from the assets folder into the app, setting the model and labels parameters as shown in the code snippet below:

loadModel() async {
  // Load the trained model and its labels from the assets folder.
  await Tflite.loadModel(
    model: 'assets/model_unquant.tflite',
    labels: 'assets/labels.txt',
  );
}
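For loadModel to find these files, the exported model_unquant.tflite and labels.txt must live in an assets folder at the project root and be declared in pubspec.yaml under the flutter: section:

flutter:
  assets:
    - assets/model_unquant.tflite
    - assets/labels.txt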

Next, we need to call the function inside the initState method so that the function triggers as soon as we enter the screen:

@override
void initState() {
  super.initState();
  // Load the model as soon as the screen is created.
  loadModel().then((value) {
    setState(() {});
  });
}
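It's also good practice to release the interpreter when the screen is disposed; the tflite plugin provides a close method for this:

@override
void dispose() {
  // Free the resources held by the TensorFlow Lite interpreter.
  Tflite.close();
  super.dispose();
}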

Next, we need to define a function called detectImage that takes an image file as a parameter. The overall implementation of the function is provided in the code snippet below:

detectImage(File image) async {
  // Run the TFLite model on the selected image file.
  var output = await Tflite.runModelOnImage(
    path: image.path,
    numResults: 2,     // we only have two classes: Cat and Dog
    threshold: 0.6,    // minimum confidence for a result to be included
    imageMean: 127.5,  // normalization values matching the model's training
    imageStd: 127.5,
  );
  setState(() {
    _output = output;
    _loading = false;
  });
}

Here, we have used the runModelOnImage method provided by the Tflite instance to classify the selected image. As parameters, we passed the image path, the number of results to return, the classification threshold, and normalization settings (imageMean and imageStd) for better classification. After a successful classification, we store the result in the _output list.
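For reference, the tflite plugin returns the result as a list of maps containing the class index, label, and confidence; with our two-class model, _output looks roughly like this (the values here are made up):

// Hypothetical contents of _output after classifying a dog photo:
// [
//   {"index": 1, "label": "Dog", "confidence": 0.97}
// ]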

Functions to fetch and display the image

Next, we're going to implement functions that enable users to capture a photo with the camera or select an image from the gallery, and then show it in the image view section. The overall implementation is provided in the code snippet below:

final picker = ImagePicker();

pickImage() async {
  // Capture a photo with the device camera.
  final image = await picker.getImage(source: ImageSource.camera);
  if (image == null) return null;

  setState(() {
    _image = File(image.path);
  });

  detectImage(_image);
}

pickGalleryImage() async {
  // Pick an image from the device gallery.
  final image = await picker.getImage(source: ImageSource.gallery);
  if (image == null) return null;

  setState(() {
    _image = File(image.path);
  });

  detectImage(_image);
}

Here, we have initialized an ImagePicker instance and used its getImage method to fetch an image from the camera or the gallery into the image variable. Then, we set the _image state to the fetched image file using the setState method. This causes the main build method to re-render and show the image on the screen.

Next, we need to call the pickImage() function and pickGalleryImage() function in the onTap property of the GestureDetector widget, as shown in the code snippet below:

Container(
  width: MediaQuery.of(context).size.width,
  child: Column(
    children: [
      GestureDetector(
        onTap: () {
          pickImage();
        },
        child: Container(
          width: MediaQuery.of(context).size.width - 250,
          alignment: Alignment.center,
          padding: EdgeInsets.symmetric(horizontal: 10, vertical: 18),
          decoration: BoxDecoration(
            color: Colors.redAccent,
            borderRadius: BorderRadius.circular(6),
          ),
          child: Text(
            'Capture a Photo',
            style: TextStyle(color: Colors.white, fontSize: 16),
          ),
        ),
      ),
      SizedBox(height: 5),
      GestureDetector(
        onTap: () {
          pickGalleryImage();
        },
        child: Container(
          width: MediaQuery.of(context).size.width - 250,
          alignment: Alignment.center,
          padding: EdgeInsets.symmetric(horizontal: 10, vertical: 18),
          decoration: BoxDecoration(
            color: Colors.redAccent,
            borderRadius: BorderRadius.circular(6),
          ),
          child: Text(
            'Select a Photo',
            style: TextStyle(color: Colors.white, fontSize: 16),
          ),
        ),
      ),
    ],
  ),
)

Hence, we will get the result as shown in the emulator screenshot below:

Inside the Column widget, we conditionally render the top label from the classification result inside a Text widget, as shown in the snippet below:

_output != null
    ? Text(
        '${_output[0]['label']}',
        style: TextStyle(color: Colors.white, fontSize: 15),
      )
    : Container(),
SizedBox(height: 10),

Hence, we will get the result as shown in the demo below:

We can see that as soon as we select an image from the gallery, the classification result is displayed on the screen as well. Because the model runs on-device, predictions happen in real time.

And that’s it! We have successfully implemented our cat-or-dog classifier in a Flutter app using TensorFlow Lite.

Conclusion

In this tutorial, we built a demo app that correctly classifies images of cats and dogs.

The overall process was simplified by the availability of the TensorFlow Lite library for Flutter, as well as a pre-trained model. The model files were made available for this tutorial, but you can also create your own trained models using Teachable Machine, a no-code service from Google.

Now, the challenge is to train your own model, load it into the Flutter app, and apply it to classify images. The TensorFlow Lite library is also capable of other machine learning tasks like object detection, pose estimation, and gesture detection.

All code available on GitHub.


VAIBHAV HARIRAMANI

Hi there! I am Vaibhav Hariramani, a travel & tech blogger who loves to seek out new technologies and experience cool stuff.