Yanuar Bimantoro

Face Recognition System Using Flutter & Node.js

January 25, 2023


(Updated on February 2, 2023)


Facial recognition is now widely used beyond security: it can also validate actions a person performs, such as attendance. In this tutorial, we will build a backend using Node.js and a mobile frontend using Flutter.

Prerequisites

Before going further, you need to have the following installed on your system:

  • Node.js
  • Node Package Manager (NPM, installed automatically with Node.js)
  • Flutter
  • Ngrok (for connecting the backend to our mobile app without deploying to a server)

    Backend Section

    The backend will expose two endpoints:

  • POST /add (saves face data to the database; rejects the request if the face is already registered)
  • POST /detect (compares the requested face with those in the database and returns the matching name)

    First Endpoint (/add)

    This endpoint requires a JSON body as follows:

    JSON
    {
      "image": "base64 face image",
      "name": "username, attached to face"
    }

    If the face is successfully registered, it will return status code 200 with a response like this:

    JSON
    {
      "message": "Face successfully registered"
    }

    If the face is already registered in the database, it will return status code 400 with a response like this:

    JSON
    {
      "message": "Face already registered"
    }

    Second Endpoint (/detect)

    This endpoint requires a JSON body as follows:

    JSON
    {
      "image": "base64 face image"
    }

    If the face is successfully recognized, it will return status code 200 with a response like this:

    JSON
    {
      "message": "Face successfully recognized",
      "name": "name attached to face",
      "age": 40.5,
      "gender": "detected gender based on face image"
    }

    If the face is not recognized, it will return status code 400 with a response like this:

    JSON
    {
      "message": "Face not registered"
    }

    Implementation

    In the previous section the backend requirements were laid out; now it's time for the implementation. We will use the typescript-express-starter package to skip the boilerplate.

  • npm install -g typescript-express-starter
  • npx typescript-express-starter "face_recognition_backend"
  • select the typeorm template
  • cd face_recognition_backend
  • code .

    In this tutorial, I will use SQLite for simplicity. Go to src/databases/index.ts, remove all the DB connection information, then add the database path, like this:

    TYPESCRIPT
    src/databases/index.ts
    export const dbConnection: ConnectionOptions = {
      type: 'sqlite',
      database: 'database.sql',
      synchronize: true,
      logging: false,
      entities: [join(__dirname, '../**/*.entity{.ts,.js}')],
      migrations: [join(__dirname, '../**/*.migration{.ts,.js}')],
      subscribers: [join(__dirname, '../**/*.subscriber{.ts,.js}')],
      cli: {
        entitiesDir: 'src/entities',
        migrationsDir: 'src/migration',
        subscribersDir: 'src/subscriber',
      },
    };

    Install the SQLite package with npm i sqlite3, then run the server with npm run dev (with synchronize: true, TypeORM creates the database.sql file automatically on first run).

    First, we need to create an interface for Face. Go to src/interfaces and create face.interface.ts with id, name, face, gender, and age fields.

    TYPESCRIPT
    face.interface.ts
    export interface Face {
      id: number;
      name: string;
      face: string;
      gender: string;
      age: number;
    }

    Create an entity to be saved in the database. Go to src/entities and copy users.entity.ts to face.entity.ts; this face table implements the Face interface. It will look like this:

    TYPESCRIPT
    face.entity.ts
    import { Face } from '@/interfaces/face.interface';
    import { IsNotEmpty } from 'class-validator';
    import { BaseEntity, Entity, PrimaryGeneratedColumn, Column } from 'typeorm';

    @Entity()
    export class FaceEntity extends BaseEntity implements Face {
      @PrimaryGeneratedColumn()
      id: number;

      @Column()
      @IsNotEmpty()
      name: string;

      @Column()
      @IsNotEmpty()
      face: string;

      @Column()
      gender: string;

      @Column()
      age: number;
    }

    After creating the interface and entity, create a Data Transfer Object (DTO) for both endpoints (/add & /detect). Go to src/dtos and create a new file called face.dto.ts with class-validator decorators that validate the types in the request body; for example, the name / image field will be rejected if it is not a string or is an empty string.

    TYPESCRIPT
    face.dto.ts
    import { IsString, IsNotEmpty } from 'class-validator';

    export class AddFaceDto {
      @IsString()
      @IsNotEmpty()
      public name: string;

      @IsString()
      @IsNotEmpty()
      public image: string;
    }

    export class DetectFaceDto {
      @IsString()
      @IsNotEmpty()
      public image: string;
    }

    Now create a service for processing requests. Go to src/services, create a new file called face.service.ts, and add a function for each endpoint. We will fill in each function later.

    TYPESCRIPT
    face.service.ts
    import { EntityRepository, Repository } from 'typeorm';
    import { FaceEntity } from '@/entities/face.entity';
    import { AddFaceDto, DetectFaceDto } from '@/dtos/face.dto';

    @EntityRepository()
    class FaceService extends Repository<FaceEntity> {
      public async addFace(faceData: AddFaceDto): Promise<any> {}

      public async detectFace(faceData: DetectFaceDto): Promise<any> {}
    }

    const faceService = new FaceService();
    export default faceService;

    We need a controller to handle incoming requests and direct them to the service functions. Go to src/controllers and create a new file called face.controller.ts.

    TYPESCRIPT
    face.controller.ts
    import { NextFunction, Request, Response } from 'express';
    import faceService from '@/services/face.service';
    import { AddFaceDto, DetectFaceDto } from '@/dtos/face.dto';

    class FaceController {
      public addFace = async (req: Request, res: Response, next: NextFunction): Promise<void> => {
        try {
          const faceData: AddFaceDto = req.body;
          const result = await faceService.addFace(faceData);
          res.status(200).json(result);
        } catch (error) {
          next(error);
        }
      };

      public detectFace = async (req: Request, res: Response, next: NextFunction): Promise<void> => {
        try {
          const faceData: DetectFaceDto = req.body;
          const result = await faceService.detectFace(faceData);
          res.status(200).json(result);
        } catch (error) {
          next(error);
        }
      };
    }

    export default FaceController;

    Map the endpoint routing: go to src/routes and create a new file called face.route.ts.

    TYPESCRIPT
    face.route.ts
    import { Router } from 'express';
    import FaceController from '@controllers/face.controller';
    import { Routes } from '@interfaces/routes.interface';
    import validationMiddleware from '@middlewares/validation.middleware';
    import { AddFaceDto, DetectFaceDto } from '@/dtos/face.dto';

    class FaceRoute implements Routes {
      public path = '/face';
      public router = Router();
      public faceController = new FaceController();

      constructor() {
        this.initializeRoutes();
      }

      private initializeRoutes() {
        this.router.post(`${this.path}/add`, validationMiddleware(AddFaceDto, 'body'), this.faceController.addFace);
        this.router.post(`${this.path}/detect`, validationMiddleware(DetectFaceDto, 'body'), this.faceController.detectFace);
      }
    }

    export default FaceRoute;

    Don’t forget to add routing in server.ts

    TYPESCRIPT
    server.ts
    const app = new App([new FaceRoute()]);

    Time to implement the service. Install the packages for processing face features with npm i @vladmandic/face-api @tensorflow/tfjs-node; we are not using the original face-api.js because it is no longer maintained. Create a models folder under src (src/models), download the required models here, then move them into the models folder.

    Add the new imports to face.service.ts. Besides the TensorFlow and face-api imports, the HttpException, Face, and logger imports below (using the starter's existing path aliases) will be needed by the code we add next:

    TYPESCRIPT
    face.service.ts
    import * as tf from '@tensorflow/tfjs-node';
    import * as faceapi from '@vladmandic/face-api';
    import path from 'path';
    import { HttpException } from '@exceptions/HttpException';
    import { Face } from '@/interfaces/face.interface';
    import { logger } from '@utils/logger';

    Add a new function to initialize the TensorFlow models:

    TYPESCRIPT
    // Load models from disk; runs only once at startup
    public async initModels(): Promise<any> {
      try {
        const modelPathRoot = '../models';
        await faceapi.tf.setBackend('tensorflow');
        await faceapi.tf.enableProdMode();
        await faceapi.tf.ENV.set('DEBUG', false);
        await faceapi.tf.ready();

        console.log('Loading FaceAPI models');
        const modelPath = path.join(__dirname, modelPathRoot);
        await faceapi.nets.ssdMobilenetv1.loadFromDisk(modelPath);
        await faceapi.nets.tinyFaceDetector.loadFromDisk(modelPath);
        await faceapi.nets.faceLandmark68Net.loadFromDisk(modelPath);
        await faceapi.nets.faceRecognitionNet.loadFromDisk(modelPath);
        await faceapi.nets.faceExpressionNet.loadFromDisk(modelPath);
        await faceapi.nets.ageGenderNet.loadFromDisk(modelPath);
      } catch (error) {
        logger.error(error);
      }
    }

    Initialize the models in server.ts:

    TYPESCRIPT
    import faceService from './services/face.service';
    faceService.initModels();
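
    Putting the two server.ts changes together, the file ends up roughly like this (a sketch based on the typescript-express-starter layout; the validateEnv import and any other default routes depend on your template version):

    TYPESCRIPT
    server.ts
    import App from '@/app';
    import FaceRoute from '@routes/face.route';
    import validateEnv from '@utils/validateEnv';
    import faceService from './services/face.service';

    validateEnv();

    // Load the face-api models once at startup
    faceService.initModels();

    const app = new App([new FaceRoute()]);

    app.listen();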

    Extend the Express body size limit to 50mb (or whatever you prefer), since a real photo from the camera can easily reach several megabytes. Go to app.ts, in the initializeMiddlewares() function:

    TYPESCRIPT
    app.ts
    private initializeMiddlewares() {
      this.app.use(morgan(LOG_FORMAT, { stream }));
      this.app.use(cors({ origin: ORIGIN, credentials: CREDENTIALS }));
      this.app.use(hpp());
      this.app.use(helmet());
      this.app.use(compression());
      this.app.use(express.json({ limit: '50mb' }));
      this.app.use(express.urlencoded({ limit: '50mb', extended: true, parameterLimit: 50000 }));
      this.app.use(cookieParser());
    }

    After that, go back to face.service.ts and add new helper functions for processing face data:

    TYPESCRIPT
    // Returns the face descriptor, age, and gender for a base64 image
    private async getFaceComputation(base64Image: string): Promise<any> {
      const buffer = Buffer.from(base64Image, 'base64');
      const decoded = tf.node.decodeImage(buffer);
      const casted = decoded.toFloat();
      const tensor = casted.expandDims(0);
      const tfOptions = new faceapi.TinyFaceDetectorOptions();
      const result = await faceapi
        .detectSingleFace(tensor, tfOptions)
        .withFaceLandmarks()
        .withFaceExpressions()
        .withFaceDescriptor()
        .withAgeAndGender();
      // Dispose tensors to avoid memory leaks
      tf.dispose([decoded, casted, tensor]);
      return result;
    }

    // Converts a Float32Array descriptor into a base64 string for saving to the database
    private encodeBase64(descriptor: any) {
      return btoa(String.fromCharCode(...new Uint8Array(descriptor.buffer)));
    }

    // Converts a base64 string descriptor back into a Float32Array
    // so it can be compared against a freshly computed descriptor
    private decodeBase64(encodedDescriptor: any) {
      return new Float32Array(new Uint8Array([...atob(encodedDescriptor)].map(c => c.charCodeAt(0))).buffer);
    }
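
    As a quick sanity check of these helpers (a throwaway snippet, not part of the service), a descriptor should survive the base64 round trip unchanged:

    TYPESCRIPT
    const original = new Float32Array([0.12, -0.53, 0.99]);
    // same logic as encodeBase64
    const encoded = btoa(String.fromCharCode(...new Uint8Array(original.buffer)));
    // same logic as decodeBase64
    const decoded = new Float32Array(new Uint8Array([...atob(encoded)].map(c => c.charCodeAt(0))).buffer);
    // decoded now holds the same three values as original (within float32 precision)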

    Fill the addFace function with this code:

    TYPESCRIPT
    public async addFace(faceData: AddFaceDto): Promise<any> {
      tf.engine().startScope();
      let alreadyRegistered = false;
      // Process the requested face
      const processedFaceData = await this.getFaceComputation(faceData.image);
      // All face data stored in the database
      const faces: Face[] = await FaceEntity.find();
      // Compare the requested face with every stored face
      for (let index = 0; index < faces.length; index++) {
        const element = faces[index];
        // Compute the distance between the two face descriptors
        const distance = faceapi.euclideanDistance(processedFaceData.descriptor, this.decodeBase64(element.face));
        // If the distance is below 0.3, it is recognized as the same face
        // (the smaller the distance, the more similar the faces; 0.3 is used here)
        if (distance < 0.3) {
          alreadyRegistered = true;
          break;
        }
      }
      tf.engine().endScope();
      // If already registered, return a bad request
      if (alreadyRegistered) {
        throw new HttpException(400, 'Face already registered');
      } else {
        // If not registered, save the new data into the database
        await FaceEntity.save({ ...processedFaceData, face: this.encodeBase64(processedFaceData.descriptor), name: faceData.name });
        return { message: 'Face successfully registered' };
      }
    }
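
    For reference, face-api.js's own FaceMatcher uses 0.6 as its default match threshold, so 0.3 is noticeably stricter: duplicates are less likely to slip through, but a genuine re-registration attempt under very different lighting or pose may also be rejected. Tune the threshold to your own data.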

    Next, fill the detectFace function with this code:

    TYPESCRIPT
    public async detectFace(faceData: DetectFaceDto): Promise<any> {
      tf.engine().startScope();
      let returnFace: Face;
      // Process the requested face
      const processedFaceData = await this.getFaceComputation(faceData.image);
      // All face data stored in the database
      const faces: Face[] = await FaceEntity.find();
      // Compare the requested face with every stored face
      for (let index = 0; index < faces.length; index++) {
        const element = faces[index];
        // Compute the distance between the two face descriptors
        const distance = faceapi.euclideanDistance(processedFaceData.descriptor, this.decodeBase64(element.face));
        // If the distance is below 0.3, it is recognized as the same face
        if (distance < 0.3) {
          returnFace = element;
          break;
        }
      }
      tf.engine().endScope();
      // If a match was found, return the fields defined by the /detect response contract
      if (returnFace) {
        return { message: 'Face successfully recognized', name: returnFace.name, age: returnFace.age, gender: returnFace.gender };
      } else {
        throw new HttpException(400, 'Face not registered');
      }
    }

    Great, all backend preparations are complete; now it's time for testing. Go to src/http and create a face.http file. Convert an image to base64 with any converter and copy the result into this file. Here I will use a Dwayne Johnson image.

    HTTP
    # baseURL
    @baseURL = http://localhost:3000/face
    @image = "base64 converted image"

    ###
    # Add Face
    POST {{baseURL}}/add
    Content-Type: application/json

    {
      "name": "Dwayne Johnson",
      "image": {{image}}
    }

    ###
    # Detect Face
    POST {{baseURL}}/detect
    Content-Type: application/json

    {
      "image": {{image}}
    }

    Run the request; if you use VS Code with the REST Client extension, you can click the Send Request link above each request.


    The first run will return success.


    When we run it again, it returns a bad request, because the same face has already been registered.


    Now try the /detect endpoint; if the face is already registered, it returns the face information.


    Don't be happy just yet, the process is only about 50% done 😄. Next, we will integrate this RESTful service with a Flutter mobile app.

    Run Ngrok

    Now we will run ngrok so our mobile app can reach the backend on localhost. Run ngrok http 3000 and note the forwarding URL in its output.


    Use the forwarding URL as the backend base URL in the mobile app.

    Mobile Section

    For mobile, there will be 3 screens:

  • Home (two buttons: one for adding a face and one for detecting a face)
  • Camera (shows the live stream from the camera and sends the captured image to the backend, with a parameter indicating add or detect)
  • Result (displays information about the detected face)

    Okay, the mobile application has been described. Let's create the Flutter project.

  • Create a new Flutter project: flutter create face_recognition_mobile
  • cd face_recognition_mobile
  • flutter pub add camera google_mlkit_face_detection flutter_bloc image_editor dio

    The Flutter project has been created; now it's time to set up those packages for Android and iOS.

    Android Setup

    Change the minimum, target, and compile Android SDK in your android/app/build.gradle file.

  • minSdkVersion: 21
  • targetSdkVersion: 31
  • compileSdkVersion: 31
    iOS Setup

  • Minimum iOS Deployment Target: 10.0
  • Xcode 13 or newer
  • Swift 5
  • ML Kit only supports 64-bit architectures (x86_64 and arm64). Check this list to see if your device has the required device capabilities.
  • Since ML Kit does not support 32-bit architectures (i386 and armv7), you need to exclude armv7 architectures in Xcode in order to run flutter build ios or flutter build ipa. More info here.

    Go to Project > Runner > Build Settings > Excluded Architectures > Any SDK > armv7


    Then your Podfile should look like this:

    RUBY
    # add this line:
    $iOSVersion = '10.0'

    post_install do |installer|
      # add these lines:
      installer.pods_project.build_configurations.each do |config|
        config.build_settings["EXCLUDED_ARCHS[sdk=*]"] = "armv7"
        config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = $iOSVersion
      end

      installer.pods_project.targets.each do |target|
        flutter_additional_ios_build_settings(target)
        # add these lines:
        target.build_configurations.each do |config|
          if Gem::Version.new($iOSVersion) > Gem::Version.new(config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'])
            config.build_settings['IPHONEOS_DEPLOYMENT_TARGET'] = $iOSVersion
          end
        end
      end
    end

    Notice that the minimum IPHONEOS_DEPLOYMENT_TARGET is 10.0; you can set it to something newer, but not older.

    Read more about how to integrate those packages here: camera, google_mlkit_face_detection. Setup is done; let's move on to the Flutter code.

    Implementation

    DART
    main.dart
    import 'package:face_recognition_mobile/cubit/face_cubit.dart';
    import 'package:flutter/material.dart';
    import 'package:flutter_bloc/flutter_bloc.dart';
    import 'pages/home_page.dart';
    void main() {
    runApp(const MyApp());
    }
    class MyApp extends StatelessWidget {
    const MyApp({super.key});
    Widget build(BuildContext context) {
    return BlocProvider(
    create: (context) => FaceCubit(),
    child: MaterialApp(
    title: 'Flutter Face Recognition',
    theme: ThemeData(
    primarySwatch: Colors.blue,
    ),
    home: const HomePage(title: 'Flutter Face Recognition'),
    ),
    );
    }
    }

    Create a class to hold API endpoint URLs

    DART
    constants/api_constants.dart
    class APIConstants {
    static const String baseUrl =
    'https://4071-2001-448a-2020-e604-9a2-3af2-5487-1f8d.ap.ngrok.io';
    static const String detectUrl = '/face/detect';
    static const String addUrl = '/face/add';
    }
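
    Keep in mind that the free ngrok tier generates a new forwarding URL every time the tunnel restarts, so update baseUrl whenever you restart ngrok.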

    Create a request & response model for each endpoint

    DART
    model/add_request.dart
    class AddRequest {
    AddRequest({
    required this.image,
    required this.name,
    });
    late final String image;
    late final String name;
    AddRequest.fromJson(Map<String, dynamic> json) {
    image = json['image'];
    name = json['name'];
    }
    Map<String, dynamic> toJson() {
    final data = <String, dynamic>{};
    data['image'] = image;
    data['name'] = name;
    return data;
    }
    }

    DART
    model/detect_request.dart
    class DetectRequest {
    DetectRequest({
    required this.image,
    });
    late final String image;
    DetectRequest.fromJson(Map<String, dynamic> json) {
    image = json['image'];
    }
    Map<String, dynamic> toJson() {
    final data = <String, dynamic>{};
    data['image'] = image;
    return data;
    }
    }

    DART
    model/detect_response.dart
    class DetectResponse {
    DetectResponse({
    required this.message,
    required this.name,
    required this.age,
    required this.gender,
    });
    late final String message;
    late final String name;
    late final double age;
    late final String gender;
    DetectResponse.fromJson(Map<String, dynamic> json) {
    message = json['message'];
    name = json['name'];
    age = json['age'];
    gender = json['gender'];
    }
    Map<String, dynamic> toJson() {
    final data = <String, dynamic>{};
    data['message'] = message;
    data['name'] = name;
    data['age'] = age;
    data['gender'] = gender;
    return data;
    }
    }

    Create a cubit named face for state management and for connecting to the backend.

    DART
    cubit/face_cubit.dart
    import 'dart:convert';
    import 'dart:io';
    import 'package:dio/dio.dart';
    import 'package:face_recognition_mobile/constants/api_constants.dart';
    import 'package:face_recognition_mobile/model/add_request.dart';
    import 'package:face_recognition_mobile/model/detect_request.dart';
    import 'package:face_recognition_mobile/model/detect_response.dart';
    import 'package:flutter/material.dart';
    import 'package:flutter_bloc/flutter_bloc.dart';
    part 'face_state.dart';
    class FaceCubit extends Cubit<FaceState> {
    FaceCubit() : super(FaceInitial());
    final _dio = Dio();
    Future<void> addFace(String name, File image) async {
    try {
    emit(FaceLoading());
    final request = AddRequest(
    image: base64.encode(await image.readAsBytes()), name: name);
    await _dio.post(
    '${APIConstants.baseUrl}${APIConstants.addUrl}',
    data: request.toJson(),
    );
    emit(AddFaceSuccess());
    } on DioError catch (e) {
    emit(FaceError(e.response!.data['message']));
    }
    }
    Future<void> detectFace(File image) async {
    try {
    emit(FaceLoading());
    final request = DetectRequest(
    image: base64.encode(await image.readAsBytes()),
    );
    final result = await _dio.post(
    '${APIConstants.baseUrl}${APIConstants.detectUrl}',
    data: request.toJson(),
    );
    emit(DetectFaceSuccess(DetectResponse.fromJson(result.data)));
    } on DioError catch (e) {
    emit(FaceError(e.response!.data['message']));
    }
    }
    }
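
    Note that this uses DioError from dio 4.x (current at the time of writing); in dio 5 and later the class is renamed to DioException, so adjust the catch clauses if you are on a newer version.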

    DART
    cubit/face_state.dart
    part of 'face_cubit.dart';
    abstract class FaceState {}
    class FaceInitial extends FaceState {}
    class FaceLoading extends FaceState {}
    class AddFaceSuccess extends FaceState {}
    class DetectFaceSuccess extends FaceState {
    final DetectResponse data;
    DetectFaceSuccess(this.data);
    }
    class FaceError extends FaceState {
    final String message;
    FaceError(this.message);
    }

    Later, the camera image stream will show a face contour overlay; to achieve this we will create a custom painter.


    Convert the coordinates from the ML Kit package so they can be drawn by Flutter (adapted from here).

    DART
    util/coordinates_translator.dart
    import 'dart:io';
    import 'dart:ui';
    // ignore: depend_on_referenced_packages
    import 'package:google_mlkit_commons/google_mlkit_commons.dart';
    double translateX(
    double x, InputImageRotation rotation, Size size, Size absoluteImageSize) {
    switch (rotation) {
    case InputImageRotation.rotation90deg:
    return x *
    size.width /
    (Platform.isIOS ? absoluteImageSize.width : absoluteImageSize.height);
    case InputImageRotation.rotation270deg:
    return size.width -
    x *
    size.width /
    (Platform.isIOS
    ? absoluteImageSize.width
    : absoluteImageSize.height);
    default:
    return x * size.width / absoluteImageSize.width;
    }
    }
    double translateY(
    double y, InputImageRotation rotation, Size size, Size absoluteImageSize) {
    switch (rotation) {
    case InputImageRotation.rotation90deg:
    case InputImageRotation.rotation270deg:
    return y *
    size.height /
    (Platform.isIOS ? absoluteImageSize.height : absoluteImageSize.width);
    default:
    return y * size.height / absoluteImageSize.height;
    }
    }
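
    The 90° and 270° cases divide by the image height (or width on iOS) instead of the width because the camera delivers frames in landscape while the preview is drawn in portrait, so the image's axes are swapped relative to the screen before the translation.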

    DART
    util/face_painter.dart
    import 'package:flutter/material.dart';
    import 'package:google_mlkit_face_detection/google_mlkit_face_detection.dart';
    import 'coordinates_translator.dart';
    class FaceDetectorPainter extends CustomPainter {
    FaceDetectorPainter(this.face, this.absoluteImageSize, this.rotation);
    final Face face;
    final Size absoluteImageSize;
    final InputImageRotation rotation;
    void paint(Canvas canvas, Size size) {
    final Paint facePaint = Paint()
    ..style = PaintingStyle.stroke
    ..strokeWidth = 1.0
    ..color = Colors.blue;
    void paintContour(FaceContourType type) {
    final faceContour = face.contours[type];
    if (faceContour?.points != null) {
    for (var i = 0; i < faceContour!.points.length; i++) {
    final point = faceContour.points[i];
    final startOffset = Offset(
    translateX(point.x.toDouble(), rotation, size, absoluteImageSize),
    translateY(point.y.toDouble(), rotation, size, absoluteImageSize),
    );
    canvas.drawCircle(startOffset, 1, facePaint);
    canvas.drawLine(
    startOffset,
    i < faceContour.points.length - 1
    ? Offset(
    translateX(faceContour.points[i + 1].x.toDouble(), rotation,
    size, absoluteImageSize),
    translateY(faceContour.points[i + 1].y.toDouble(), rotation,
    size, absoluteImageSize),
    )
    : type == FaceContourType.face
    ? Offset(
    translateX(faceContour.points[0].x.toDouble(), rotation,
    size, absoluteImageSize),
    translateY(faceContour.points[0].y.toDouble(), rotation,
    size, absoluteImageSize),
    )
    : startOffset,
    facePaint,
    );
    }
    }
    }
    // Paint every available face contour
    paintContour(FaceContourType.face);
    paintContour(FaceContourType.leftEyebrowTop);
    paintContour(FaceContourType.leftEyebrowBottom);
    paintContour(FaceContourType.rightEyebrowTop);
    paintContour(FaceContourType.rightEyebrowBottom);
    paintContour(FaceContourType.leftEye);
    paintContour(FaceContourType.rightEye);
    paintContour(FaceContourType.upperLipTop);
    paintContour(FaceContourType.upperLipBottom);
    paintContour(FaceContourType.lowerLipTop);
    paintContour(FaceContourType.lowerLipBottom);
    paintContour(FaceContourType.noseBridge);
    paintContour(FaceContourType.noseBottom);
    paintContour(FaceContourType.leftCheek);
    paintContour(FaceContourType.rightCheek);
    }
    bool shouldRepaint(FaceDetectorPainter oldDelegate) {
    return oldDelegate.absoluteImageSize != absoluteImageSize ||
    oldDelegate.face != face;
    }
    }

    The custom painter for the face is done; now it's time to implement the widgets and pages. When navigating from the home page to the camera page to add a face, a dialog asks for the username that will be attached to the processed face.

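    Here is a minimal sketch of that dialog (the widget name NameDialog matches the import used by the home page, but the exact layout and field names are my own assumptions, so adapt them as you like): it asks for a name and then opens the camera page in add mode.

    DART
    widget/name_dialog.dart
    import 'package:face_recognition_mobile/pages/camera_page.dart';
    import 'package:flutter/material.dart';

    class NameDialog extends StatefulWidget {
      const NameDialog({super.key});

      @override
      State<NameDialog> createState() => _NameDialogState();
    }

    class _NameDialogState extends State<NameDialog> {
      final _nameController = TextEditingController();

      @override
      void dispose() {
        _nameController.dispose();
        super.dispose();
      }

      @override
      Widget build(BuildContext context) {
        return AlertDialog(
          title: const Text('Enter a name'),
          content: TextField(
            controller: _nameController,
            decoration: const InputDecoration(hintText: 'Name attached to this face'),
          ),
          actions: [
            TextButton(
              onPressed: () => Navigator.pop(context),
              child: const Text('Cancel'),
            ),
            TextButton(
              onPressed: () {
                final name = _nameController.text.trim();
                if (name.isEmpty) return;
                final navigator = Navigator.of(context);
                // Close the dialog, then open the camera page in "add" mode
                navigator.pop();
                navigator.push(
                  MaterialPageRoute(
                    builder: (_) => CameraPage(isAdd: true, name: name),
                  ),
                );
              },
              child: const Text('Continue'),
            ),
          ],
        );
      }
    }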

    Great. So how do we show an error message from the backend (if any)? We will use a snackbar, so create a wrapper around it to avoid repeating boilerplate and make it easy to call.

    DART
    widget/snackbar.dart
    import 'package:flutter/material.dart';
    void showSnackbar(
    BuildContext context,
    String message,
    bool isSuccess, {
    bool floating = true,
    Color? color,
    }) {
    ScaffoldMessenger.of(context)
    ..hideCurrentSnackBar()
    ..showSnackBar(
    SnackBar(
    content: Text(
    message,
    style: const TextStyle(color: Colors.white, fontSize: 12),
    ),
    behavior: floating ? SnackBarBehavior.floating : SnackBarBehavior.fixed,
    backgroundColor: color ?? (isSuccess ? Colors.green : Colors.red),
    ),
    );
    }

    Let's continue to the pages. First, the home page: as mentioned before, this page has two buttons, one for add and one for detect.

    DART
    pages/home_page.dart
    import 'package:face_recognition_mobile/pages/camera_page.dart';
    import 'package:face_recognition_mobile/widget/name_dialog.dart';
    import 'package:flutter/material.dart';
    class HomePage extends StatefulWidget {
    const HomePage({super.key, required this.title});
    final String title;
    State<HomePage> createState() => _HomePageState();
    }
    class _HomePageState extends State<HomePage> {
    Widget build(BuildContext context) {
    return Scaffold(
    appBar: AppBar(
    title: Text(widget.title),
    ),
    body: Column(
    mainAxisAlignment: MainAxisAlignment.center,
    crossAxisAlignment: CrossAxisAlignment.center,
    children: <Widget>[
    Center(
    child: SizedBox(
    width: 100,
    child: ElevatedButton(
    onPressed: () {
    showDialog(
    context: context, builder: (_) => const NameDialog());
    },
    child: const Text('Add'),
    ),
    ),
    ),
    Center(
    child: SizedBox(
    width: 100,
    child: ElevatedButton(
    onPressed: () {
    Navigator.push(
    context,
    MaterialPageRoute(
    builder: (_) => const CameraPage(
    isAdd: false,
    ),
    ),
    );
    },
    child: const Text('Detect'),
    ),
    ),
    )
    ],
    ),
    );
    }
    }

    Now move to the camera page. The code for this page is long, so I apologize in advance, even though this article is already very long 😄.

    DART
    pages/camera_page.dart
    import 'dart:developer';
    import 'dart:io';
    import 'dart:math' as math;
    import 'package:camera/camera.dart';
    import 'package:face_recognition_mobile/cubit/face_cubit.dart';
    import 'package:face_recognition_mobile/pages/result_page.dart';
    import 'package:face_recognition_mobile/util/face_painter.dart';
    import 'package:face_recognition_mobile/widget/snackbar.dart';
    import 'package:flutter/foundation.dart';
    import 'package:flutter/material.dart';
    import 'package:flutter_bloc/flutter_bloc.dart';
    import 'package:google_mlkit_face_detection/google_mlkit_face_detection.dart';
    import 'package:image_editor/image_editor.dart';
    class CameraPage extends StatefulWidget {
    const CameraPage({Key? key, required this.isAdd, this.name})
    : super(key: key);
    // To distinguish whether the process is adding or detecting
    final bool isAdd;
    // If process is adding will have name from dialog before navigating to this page
    final String? name;
    State<CameraPage> createState() => _CameraPageState();
    }
    class _CameraPageState extends State<CameraPage> with WidgetsBindingObserver {
    CameraController? controller;
    List<CameraDescription>? _cameras;
    int cameraIndex = 0;
    String? cameraException;
    File? cameraFile;
    bool takingPicture = false;
    double zoomLevel = 0.0, minZoomLevel = 0.0, maxZoomLevel = 0.0;
    late FaceDetector faceDetector;
    CustomPaint? _customPaint;
    int counter = 0;
    bool _isBusy = false;
    List<double> processedFrame = [];
    bool imageCanSend = false;
    void initState() {
    _initCamera();
    super.initState();
    }
    void didChangeAppLifecycleState(AppLifecycleState state) {
    final cameraController = controller;
    // App state changed before we got the chance to initialize.
    if (cameraController != null && !cameraController.value.isInitialized) {
    return;
    }
    // Disposing camera stream for release unused memory
    // And avoiding memory leak
    if (state == AppLifecycleState.inactive) {
    cameraController?.dispose();
    } else if (state == AppLifecycleState.resumed) {
    onNewCameraSelected(cameraController!.description);
    }
    }
    void dispose() {
    _stopCamera();
    super.dispose();
    }
    // Camera Function
    Future<void> _initCamera() async {
    // Initialize face detector options
    faceDetector = FaceDetector(
    options: FaceDetectorOptions(
    enableLandmarks: true,
    enableClassification: true,
    enableContours: true,
    ),
    );
    // Check for available cameras
    try {
    _cameras = await availableCameras();
    } catch (e) {
    if (e is CameraException) {
    cameraExceptionParser(e);
    } else {
    cameraException = "Can't initialize camera";
    }
    showSnackbar(context, cameraException!, false);
    }
    // If multiple cameras available, e.g back and front
    // Then will be forced to use front camera
    try {
    CameraDescription? cameraDescription;
    for (var i = 0; i < _cameras!.length; i++) {
    final element = _cameras![i];
    if (element.lensDirection == CameraLensDirection.front) {
    cameraDescription = element;
    cameraIndex = i;
    setState(() {});
    break;
    }
    }
    // Otherwise the default camera will be used
    if (cameraDescription == null && _cameras!.isNotEmpty) {
    cameraDescription = _cameras!.first;
    }
    // Assign camera controller with max resolution and audio false
    controller = CameraController(cameraDescription!, ResolutionPreset.max,
    enableAudio: false);
    controller!.initialize().then((_) {
    if (!mounted) {
    return;
    }
    // Assign default zoom level
    controller?.getMinZoomLevel().then((value) {
    zoomLevel = value;
    minZoomLevel = value;
    });
    controller?.getMaxZoomLevel().then((value) {
    maxZoomLevel = value;
    });
    controller?.startImageStream(_processCameraImage);
    setState(() {});
    }).catchError((Object e) {
    if (e is CameraException) {
    cameraExceptionParser(e);
    } else {
    cameraException = "Can't initialize camera";
    }
    showSnackbar(context, cameraException!, false);
    });
    } catch (e) {
    if (e is CameraException) {
    cameraExceptionParser(e);
    } else {
    cameraException = "Can't initialize camera";
    }
    showSnackbar(context, cameraException!, false);
    }
    }
    // Stop camera stream then disposing camera and face detector
    // For better memory management
    Future _stopCamera() async {
    if (controller != null && controller!.value.isStreamingImages) {
    await controller!.stopImageStream();
    }
    if (cameraFile != null) {
    await cameraFile!.delete();
    }
    await controller?.dispose();
    await faceDetector.close();
    controller = null;
    }
    // Re-Assign previous camera controller if app inactive then active again
    void onNewCameraSelected(CameraDescription cameraDescription) {
    controller = CameraController(cameraDescription, ResolutionPreset.max,
    enableAudio: false);
    controller!.initialize().then((_) {
    if (!mounted) {
    return;
    }
    setState(() {});
    }).catchError((Object e) {
    if (e is CameraException) {
    cameraExceptionParser(e);
    } else {
    cameraException = "Can't initialize camera";
    }
    showSnackbar(context, cameraException!, false);
    });
    setState(() {});
    }
    // Parsing camera package error to be readable by user
    void cameraExceptionParser(CameraException e) {
    switch (e.code) {
    case 'CameraAccessDenied':
    cameraException = 'User denied camera access.';
    break;
    default:
    cameraException = "Can't initialize camera";
    break;
    }
    }
    // Converting camera into an image file
    Future<void> takePicture() async {
    if (controller != null) {
    try {
    takingPicture = true;
    setState(() {});
    // Stop current camera stream
    if (controller!.value.isStreamingImages) {
    await controller!.stopImageStream();
    }
    // Taking picture
    final xfile = await controller!.takePicture();
    // There's a bug in the camera package where the captured front-camera image comes out mirrored
    // To fix this, use the image_editor package to flip it back so it matches the camera stream
    if (_cameras![cameraIndex].lensDirection == CameraLensDirection.front) {
    // 1. read the image from disk into memory
    final tempFile = File(xfile.path);
    Uint8List? imageBytes = await tempFile.readAsBytes();
    // 2. flip the image on the X axis
    final ImageEditorOption option = ImageEditorOption();
    option.addOption(const FlipOption(horizontal: true));
    imageBytes = await ImageEditor.editImage(
    image: imageBytes, imageEditorOption: option);
    // 3. write the image back to disk
    if (imageBytes != null) {
    await tempFile.delete();
    await tempFile.writeAsBytes(imageBytes);
    cameraFile = tempFile;
    } else {
    cameraFile = File(xfile.path);
    }
    } else {
    cameraFile = File(xfile.path);
    }
    if (widget.isAdd) {
    BlocProvider.of<FaceCubit>(context)
    .addFace(widget.name!, cameraFile!);
    } else {
    BlocProvider.of<FaceCubit>(context).detectFace(cameraFile!);
    }
    takingPicture = false;
    setState(() {});
    log('Take Picture');
    } catch (e) {
    log('Camera Exception: $e');
    }
    }
    }
    // If the backend response was not successful, delete cameraFile and restart the camera stream
    Future<void> clearCameraFile() async {
    if (cameraFile != null) {
    await cameraFile!.delete();
    }
    cameraFile = null;
    processedFrame.clear();
    imageCanSend = false;
    setState(() {});
    if (controller != null && controller!.value.isStreamingImages) {
    await controller?.stopImageStream();
    }
    await controller?.startImageStream(_processCameraImage);
    }
    // Process face detection on the camera stream, but only every 5th frame
    // and only when not busy taking a picture or uploading to the backend,
    // for better memory management
    Future _processCameraImage(CameraImage image) async {
    if (counter % 5 == 0) {
    if (_isBusy) return;
    _isBusy = true;
    setState(() {});
    // Write buffer from image plane
    final WriteBuffer allBytes = WriteBuffer();
    for (final Plane plane in image.planes) {
    allBytes.putUint8List(plane.bytes);
    }
    final bytes = allBytes.done().buffer.asUint8List();
    // Assign image size from original camera width and height
    final Size imageSize =
    Size(image.width.toDouble(), image.height.toDouble());
    // Check camera orientation
    final camera = _cameras![cameraIndex];
    final imageRotation =
    InputImageRotationValue.fromRawValue(camera.sensorOrientation);
    if (imageRotation == null) return;
    // Check image format
    final inputImageFormat =
    InputImageFormatValue.fromRawValue(image.format.raw);
    if (inputImageFormat == null) return;
    // Use the platform's supported preview plane size
    // Android: 720 x 480
    // iOS: 640 x 480
    final planeData = image.planes.map(
    (Plane plane) {
    return InputImagePlaneMetadata(
    bytesPerRow: plane.bytesPerRow,
    height: Platform.isAndroid ? 720 : 640,
    width: 480,
    );
    },
    ).toList();
    // Input image data to be processed by MLKit
    final inputImageData = InputImageData(
    size: imageSize,
    imageRotation: imageRotation,
    inputImageFormat: inputImageFormat,
    planeData: planeData,
    );
    final inputImage =
    InputImage.fromBytes(bytes: bytes, inputImageData: inputImageData);
    final List<Face> faces = await faceDetector.processImage(inputImage);
    // Painting face
    if (faces.isNotEmpty) {
    final painter = FaceDetectorPainter(
    faces.first,
    inputImage.inputImageData!.size,
    inputImage.inputImageData!.imageRotation);
    _customPaint = CustomPaint(painter: painter);
    } else {
    _customPaint = null;
    }
    for (Face face in faces) {
    // If more than 10 frames have been processed, the current image can be sent to the backend.
    // This avoids sending a face that was captured immediately after the page opened.
    if (processedFrame.length > 10) {
    imageCanSend = true;
    processedFrame.clear();
    } else {
    processedFrame.add(0);
    }
    // If landmark was enabled with FaceDetectorOptions:
    final FaceLandmark? nose = face.landmarks[FaceLandmarkType.noseBase];
    final FaceLandmark? leftEye = face.landmarks[FaceLandmarkType.leftEye];
    final FaceLandmark? rightEye =
    face.landmarks[FaceLandmarkType.rightEye];
    // Will process if face straight to the camera
    // With recognized left eye & right eye & nose
    // You can add more face landmark for validating if face straight to the camera
    if (leftEye != null && rightEye != null && nose != null) {
    final math.Point<int> leftEyePos = leftEye.position;
    final math.Point<int> rightEyePos = rightEye.position;
    final math.Point<int> nosePos = nose.position;
    log('Position: Left(${leftEyePos.x}) Right(${rightEyePos.x}) Nose(${nosePos.x})');
    // If already taking picture will ignore
    if (!takingPicture && imageCanSend) {
    await takePicture();
    }
    }
    // If all process done then update current process not busy
    _isBusy = false;
    if (mounted) {
    setState(() {});
    }
    }
    }
    // Counting frame
    // Don't let counter go out of control forever
    if (counter == 1000) {
    counter = 0;
    } else {
    counter++;
    }
    }
    Widget build(BuildContext context) {
    return BlocListener<FaceCubit, FaceState>(
    listener: (context, state) async {
    if (state is FaceError) {
    // Re-init camera stream
    await clearCameraFile();
    showSnackbar(context, state.message, false);
    } else if (state is AddFaceSuccess) {
    showSnackbar(context, 'Face added successfully', true);
    await Future.delayed(const Duration(seconds: 3));
    // Navigating back to home page
    Navigator.pop(context);
    } else if (state is DetectFaceSuccess) {
    // Re-init camera stream
    await clearCameraFile();
    // Then disposing
    await _stopCamera();
    // Navigate to result page
    Navigator.pushReplacement(
    context,
    MaterialPageRoute(
    builder: (context) => ResultPage(data: state.data),
    ),
    );
    }
    },
    child: Scaffold(
    appBar: AppBar(
    elevation: 0,
    leading: IconButton(
    onPressed: () {
    Navigator.pop(context);
    },
    icon: const Icon(
    Icons.chevron_left_rounded,
    size: 30,
    ),
    ),
    title: const Text(
    'Camera Page',
    ),
    ),
    body: Padding(
    padding: const EdgeInsets.symmetric(horizontal: 20, vertical: 20),
    child: cameraView(),
    ),
    ),
    );
    }
    Widget cameraView() {
    final size = MediaQuery.of(context).size;
    // calculate scale depending on screen and camera ratios
    // this is actually size.aspectRatio / (1 / camera.aspectRatio)
    // because camera preview size is received as landscape
    // but we're calculating for portrait orientation
    var scale = size.aspectRatio *
    (controller != null && controller!.value.isInitialized
    ? controller!.value.aspectRatio
    : 0);
    // to prevent scaling down, invert the value
    if (scale < 1) scale = 1 / scale;
    // Showing camera file when not null
    // Indicating the face still processed at backend
    return cameraFile != null
    ? Stack(
    fit: StackFit.expand,
    children: [
    Positioned.fill(
    child: Transform.scale(
    scale: scale,
    child: Image.file(
    cameraFile!,
    width: double.maxFinite,
    height: double.maxFinite,
    fit: BoxFit.cover,
    ),
    ),
    ),
    Positioned.fill(
    child: Transform.scale(
    scale: scale,
    child: Container(
    color: Colors.black12,
    child: const Center(
    child: CircularProgressIndicator(
    valueColor: AlwaysStoppedAnimation(Colors.white),
    ),
    ),
    ),
    ),
    )
    ],
    )
    : controller == null || !controller!.value.isInitialized
    ? const SizedBox()
    : Stack(
    fit: StackFit.expand,
    children: [
    Transform.scale(
    scale: scale,
    child: Center(child: CameraPreview(controller!)),
    ),
    if (_customPaint != null)
    Transform.scale(scale: scale, child: _customPaint!),
    ],
    );
    }
    }

    Now, let's output the result of the face detection on a new page.

    DART
    pages/result_page.dart
    import 'package:face_recognition_mobile/model/detect_response.dart';
    import 'package:flutter/material.dart';
    class ResultPage extends StatelessWidget {
    const ResultPage({super.key, required this.data});
    final DetectResponse data;
    Widget build(BuildContext context) {
    return Scaffold(
    appBar: AppBar(
    title: const Text('Result'),
    ),
    body: SizedBox(
    width: double.maxFinite,
    child: Column(
    mainAxisAlignment: MainAxisAlignment.center,
    children: <Widget>[
    Text(
    'Name: ${data.name}',
    style: const TextStyle(fontSize: 16),
    ),
    Text(
    'Age: ${data.age}',
    style: const TextStyle(fontSize: 16),
    ),
    Text(
    'Gender: ${data.gender}',
    style: const TextStyle(fontSize: 16),
    ),
    ],
    ),
    ),
    );
    }
    }

    Great, all preparations are done; time to test this mobile app.

    Let's check detect first, when our face has never been added before.

    Nice, the face is detected as never registered before; let's add the face.

    The face was successfully added; now test detect again to confirm.

    Excellent, the face is successfully recognized, and surprisingly the detected age matches my actual age at the time of writing. That's it, thank you for reading this article; there will be more interesting articles, so don't forget to subscribe to get the latest updates ✨✨.

    The full code for this article is available here.
