Flutter Plugin for Google ML Kit Vision

Overview

Google ML Kit Plugin

(https://pub.dev/packages/google_ml_vision)

A Flutter plugin for using the on-device capabilities of Google ML Kit.

Usage

To use this plugin, add google_ml_vision as a dependency in your pubspec.yaml file.
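The dependency entry might look like the following (the version shown is illustrative; check pub.dev for the current release):

```yaml
dependencies:
  flutter:
    sdk: flutter
  # Illustrative version constraint; use the latest release from pub.dev.
  google_ml_vision: ^0.0.7
```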

Using an ML Vision Detector

1. Create a GoogleVisionImage.

Create a GoogleVisionImage object from your image. To create a GoogleVisionImage from an image File object:

final File imageFile = getImageFile();
final GoogleVisionImage visionImage = GoogleVisionImage.fromFile(imageFile);
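If you only have a path to the image (for example, one returned by image_picker), the plugin also provides a fromFilePath constructor, so you do not need to construct the File yourself. A minimal sketch (the path is a placeholder):

```dart
import 'package:google_ml_vision/google_ml_vision.dart';

// '/path/to/photo.jpg' is a placeholder path for illustration.
final GoogleVisionImage visionImage =
    GoogleVisionImage.fromFilePath('/path/to/photo.jpg');
```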

2. Create an instance of a detector.

final BarcodeDetector barcodeDetector = GoogleVision.instance.barcodeDetector();
final FaceDetector faceDetector = GoogleVision.instance.faceDetector();
final ImageLabeler labeler = GoogleVision.instance.imageLabeler();
final TextRecognizer textRecognizer = GoogleVision.instance.textRecognizer();

You can also configure any detector except TextRecognizer with the desired options:

final ImageLabeler labeler = GoogleVision.instance.imageLabeler(
  ImageLabelerOptions(confidenceThreshold: 0.75),
);
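The face-specific getters used in step 4b (landmarks, classification probabilities, tracking IDs) are only populated when the corresponding options are enabled. A sketch, assuming the option names mirror those of the old firebase_ml_vision plugin:

```dart
import 'package:google_ml_vision/google_ml_vision.dart';

// Assumed option names (enableLandmarks, enableClassification,
// enableTracking, mode), mirroring firebase_ml_vision.
final FaceDetector faceDetector = GoogleVision.instance.faceDetector(
  FaceDetectorOptions(
    enableLandmarks: true,      // needed for face.getLandmark(...)
    enableClassification: true, // needed for face.smilingProbability
    enableTracking: true,       // needed for face.trackingId
    mode: FaceDetectorMode.accurate,
  ),
);
```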

3. Call detectInImage() or processImage() with visionImage.

final List<Barcode> barcodes = await barcodeDetector.detectInImage(visionImage);
final List<Face> faces = await faceDetector.processImage(visionImage);
final List<ImageLabel> labels = await labeler.processImage(visionImage);
final VisionText visionText = await textRecognizer.processImage(visionImage);

4. Extract data.

a. Extract barcodes.

for (Barcode barcode in barcodes) {
  final Rectangle<int> boundingBox = barcode.boundingBox;
  final List<Point<int>> cornerPoints = barcode.cornerPoints;

  final String rawValue = barcode.rawValue;

  final BarcodeValueType valueType = barcode.valueType;

  // See the API reference for the complete list of supported types.
  switch (valueType) {
    case BarcodeValueType.wifi:
      final String ssid = barcode.wifi.ssid;
      final String password = barcode.wifi.password;
      final BarcodeWiFiEncryptionType type = barcode.wifi.encryptionType;
      break;
    case BarcodeValueType.url:
      final String title = barcode.url.title;
      final String url = barcode.url.url;
      break;
    default:
      break;
  }
}

b. Extract faces.

for (Face face in faces) {
  final Rectangle<int> boundingBox = face.boundingBox;

  final double rotY = face.headEulerAngleY; // Head is rotated to the right rotY degrees
  final double rotZ = face.headEulerAngleZ; // Head is tilted sideways rotZ degrees

  // If landmark detection was enabled with FaceDetectorOptions (mouth, ears,
  // eyes, cheeks, and nose available):
  final FaceLandmark leftEar = face.getLandmark(FaceLandmarkType.leftEar);
  if (leftEar != null) {
    final Point<double> leftEarPos = leftEar.position;
  }

  // If classification was enabled with FaceDetectorOptions:
  if (face.smilingProbability != null) {
    final double smileProb = face.smilingProbability;
  }

  // If face tracking was enabled with FaceDetectorOptions:
  if (face.trackingId != null) {
    final int id = face.trackingId;
  }
}

c. Extract labels.

for (ImageLabel label in labels) {
  final String text = label.text;
  final String entityId = label.entityId;
  final double confidence = label.confidence;
}

d. Extract text.

String text = visionText.text;
for (TextBlock block in visionText.blocks) {
  final Rect boundingBox = block.boundingBox;
  final List<Offset> cornerPoints = block.cornerPoints;
  final String text = block.text;
  final List<RecognizedLanguage> languages = block.recognizedLanguages;

  for (TextLine line in block.lines) {
    // Same getters as TextBlock
    for (TextElement element in line.elements) {
      // Same getters as TextBlock
    }
  }
}

5. Release resources with close().

barcodeDetector.close();
faceDetector.close();
labeler.close();
textRecognizer.close();
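Putting the steps together, a minimal end-to-end text-recognition helper might look like this (a sketch; the null-aware handling of visionText.text assumes the null-safe API):

```dart
import 'dart:io';

import 'package:google_ml_vision/google_ml_vision.dart';

Future<String> recognizeText(File imageFile) async {
  final GoogleVisionImage visionImage = GoogleVisionImage.fromFile(imageFile);
  final TextRecognizer textRecognizer = GoogleVision.instance.textRecognizer();
  try {
    final VisionText visionText =
        await textRecognizer.processImage(visionImage);
    return visionText.text ?? '';
  } finally {
    // Release native resources even if recognition throws.
    textRecognizer.close();
  }
}
```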

Getting Started

See the example directory for a complete sample app using Google ML Kit.

Comments
  • ALL_FACE contour positions wrong on iOS

    ALL_FACE contour positions wrong on iOS

    Same Flutter code, using the same contour indexes (based on the official docs); iOS clearly has an issue. The code worked fine with the now-dead firebase_ml_vision plugin.

    The code below looks okay to me, since it's pretty much the same as on Android. But I guess there is some issue with the contour parts 🤔 order: https://github.com/brianmtully/flutter_google_ml_vision/blob/e8dfedc8335fdd4bde861008cf290df6523f94e9/ios/Classes/FaceDetector.m#L123

    [Side-by-side screenshots comparing contour positions on Android vs. iOS]

    opened by shliama 5
  • Face detection for iOS not working

    Face detection for iOS not working

    Thanks for creating this plugin. It's really useful, but I have a problem with it. I used the example code, but face detection is totally not working on iOS. On Android it works.

    https://user-images.githubusercontent.com/17062085/119457275-b327ab00-bd65-11eb-92aa-89f6e955c9ac.mp4

    Device info:

    • iPhone 6S
    • iOS 14.4.1
    log
    [Nusawork] findWriterForTypeAndAlternateType:119: unsupported file format 'public.heic'
    [Nusawork] findWriterForTypeAndAlternateType:119: unsupported file format 'public.heic'
    flutter: path: /private/var/mobile/Containers/Data/Application/33276A8E-8373-41B4-8CA8-389B492DFD52/tmp/image_picker_EBC530C9-94FB-4F43-A3E4-ED922EB80521-2839-000001DA9BE2991F.jpg
    flutter: []
    2021-05-25 2:20:28.351 PM Nusawork[2839/0x1057d3880] [lvl=3] +[MLKITx_CCTClearcutUploader crashIfNecessary] Multiple instances of CCTClearcutUploader were instantiated. Multiple uploaders function correctly but have an adverse affect on battery performance due to lock contention.
    Initialized TensorFlow Lite runtime.
    [Nusawork] findWriterForTypeAndAlternateType:119: unsupported file format 'public.heic'
    [Nusawork] findWriterForTypeAndAlternateType:119: unsupported file format 'public.heic'
    flutter: path: /private/var/mobile/Containers/Data/Application/33276A8E-8373-41B4-8CA8-389B492DFD52/tmp/image_picker_7C9F0886-5B5A-4C57-84AC-E49A0B2B62F5-2839-000001DAB1875B05.jpg
    
    flutter doctor -v
    [✓] Flutter (Channel stable, 2.2.0, on Mac OS X 10.15.7 19H2 darwin-x64, locale en-EC)
        • Flutter version 2.2.0 at /Users/yudisetiawan/Downloads/flutter
        • Framework revision b22742018b (10 days ago), 2021-05-14 19:12:57 -0700
        • Engine revision a9d88a4d18
        • Dart version 2.13.0
    
    [✓] Android toolchain - develop for Android devices (Android SDK version 30.0.2)
        • Android SDK at /Users/yudisetiawan/Library/Android/sdk
        • Platform android-30, build-tools 30.0.2
        • Java binary at: /Applications/Android Studio.app/Contents/jre/jdk/Contents/Home/bin/java
        • Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b3-6915495)
        • All Android licenses accepted.
    
    [✓] Xcode - develop for iOS and macOS
        • Xcode at /Applications/Xcode.app/Contents/Developer
        • Xcode 12.4, Build version 12D4e
        • CocoaPods version 1.10.1
    
    [✓] Chrome - develop for the web
        • Chrome at /Applications/Google Chrome.app/Contents/MacOS/Google Chrome
    
    [✓] Android Studio (version 4.1)
        • Android Studio at /Applications/Android Studio.app/Contents
        • Flutter plugin can be installed from:
          🔨 https://plugins.jetbrains.com/plugin/9212-flutter
        • Dart plugin can be installed from:
          🔨 https://plugins.jetbrains.com/plugin/6351-dart
        • Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b3-6915495)
    
    [✓] IntelliJ IDEA Community Edition (version 2020.2.3)
        • IntelliJ at /Applications/IntelliJ IDEA CE.app
        • Flutter plugin version 55.1.2
        • Dart plugin version 202.8443
    
    [✓] VS Code (version 1.55.2)
        • VS Code at /Applications/Visual Studio Code.app/Contents
        • Flutter extension version 3.21.0
    
    [✓] Connected device (3 available)
        • Yudi’s iPhone (mobile) • 1b7890540306c7d8155bacffabc03043d4c28bf9 • ios            • iOS 14.4.1
        • macOS (desktop)        • macos                                    • darwin-x64     • Mac OS X 10.15.7 19H2 darwin-x64
        • Chrome (web)           • chrome                                   • web-javascript • Google Chrome 90.0.4430.212
    
    • No issues found!
    
    opened by CoderJava 4
  • Thank you for this!

    Thank you for this!

    The official firebase_ml_vision package has been dead/unmaintained for months now. This was a very easy switch-out and everything seems to work as before.

    Thank you for creating this!

    opened by acoutts 4
  • Why ALL_POINTS face contour isn't included?

    Why ALL_POINTS face contour isn't included?

    https://github.com/brianmtully/flutter_google_ml_vision/blob/a6ab314fe549696e16af6591aab8056b867ff7e6/android/src/main/java/com/brianmtully/flutter/plugins/googlemlvision/GMLKFaceDetector.java#L109

    Man I had to debug it for hour until decided to go check the source code (my bad). Any reason to not include the ALL_POINTS face contour?

    opened by shliama 3
  • CameraPreviewScanner different results iOS

    CameraPreviewScanner different results iOS

    First, thanks for your work implementing the standalone ML Kit, because firebase_ml_vision does not work with Flutter 2. I am getting different results between the example CameraPreviewScanner and PictureScanner. I tried changing ResolutionPreset.high and ResolutionPreset.veryHigh, but still get the same results.

    Results from Android 10 (Xiaomi Mi 9T): [screenshot]

    Result from iOS 14 (iPhone X): [screenshot]

    opened by thegenet 3
  • Exception: Null check operator used on a null value

    Exception: Null check operator used on a null value

    First of all... Love this package. Thank you for all of the work you put in so far.

    Now to my problem: everything worked like a charm, but since the barcode detection process is pretty heavy, I want to move it into a separate Isolate. Here is my stripped (pseudo) code:

    Future<void> callerFunction() async {
        String path = "getting path from camera (file is XFile)";
        await compute(doDetection, path);
    }
    
    FutureOr<String> doDetection(String imagePath) async {
        // initialize barcode detector
        final BarcodeDetector _barcodeDetector =
          GoogleVision.instance.barcodeDetector();
    
        // detect barcodes in camera image
        final GoogleVisionImage visionImage =
            GoogleVisionImage.fromFilePath(imagePath);
    
        // the error is thrown in 
        final List<Barcode> barcodes =
            await _barcodeDetector.detectInImage(visionImage);
    
        // other code here, but this code does not get executed after error is thrown
    }
    

    After the refactoring I get the following error: "Exception: Null check operator used on a null value". After a bit of digging, I am pretty sure that this is because of the following line:

    final List<Barcode> barcodes =
            reply!.map((barcode) => Barcode._(barcode)).toList();
    

    The reply is null.

    So is it possible to do the recognition in a separate Isolate? Or how can I fix this issue? Any help or hints are appreciated. If you need more information, just let me know :).

    Best regards, Louis :)

    opened by Throvn 2
  • Returning detected Text from Image

    Returning detected Text from Image

    Hello,

    I am using the latest version and I have a quick & dirty function to take a photo and to read the text from that photo.

    onPressed: () async {
                            final picker = ImagePicker();
    
                            final pickedFile = await picker.getImage(
                              source: ImageSource.camera,
                            );
    
                            var finalImageFile = File(pickedFile!.path);
                            logger.d(finalImageFile);
                            final GoogleVisionImage visionImage =
                                GoogleVisionImage.fromFile(finalImageFile);
    
                            final TextRecognizer textRecognizer =
                                GoogleVision.instance.textRecognizer();
                            final VisionText visionText =
                                await textRecognizer.processImage(visionImage);
    
                            String? text = visionText.text;
                            logger.d(text);
    
                            for (TextBlock block in visionText.blocks) {
                              final Rect boundingBox = block.boundingBox!;
                              final List<Offset> cornerPoints = block.cornerPoints;
                              final String? text = block.text;
                              final List<RecognizedLanguage> languages =
                                  block.recognizedLanguages;
    
                              for (TextLine line in block.lines) {
                                // Same getters as TextBlock
                                for (TextElement element in line.elements) {
                                  logger.d(element.text!);
                                  logger.d('element');
                                }
                              }
                            }
                            logger.d('endloop');
                            logger.d(text!);
                            textRecognizer.close();
                          },
    

    I am using a Logger Package, to debug on a physical device.

    At first, the following error appears immediately after opening the camera (so it's not related to this package):

    [Camera] Failed to read exposureBiasesByMode dictionary: Error Domain=NSCocoaErrorDomain Code=4864 "*** -[NSKeyedUnarchiver _initForReadingFromData:error:throwLegacyExceptions:]: data is NULL" UserInfo={NSDebugDescription=*** -[NSKeyedUnarchiver _initForReadingFromData:error:throwLegacyExceptions:]: data is NULL}

    After taking a photo it returns the path correctly.

    But how do I return all the detected text? This is not clear to me. Even though I have several log calls in the for loop, for example, none of them print any text.

    Is there something I'm doing wrong? I just need the recognised text, without any color, preview etc.

    Thank you for any kind of help - it's really appreciated! :)

    opened by Patrick-Vogt 2
  • Potential performance enhancements?

    Potential performance enhancements?

    First and foremost, thank you so much for this package. I had no issues setting this up and it seems to be working as intended. This may be out of scope for the library but I wanted to explore any potential areas for performance gains. I'm testing on an older physical device (iPhone 6S from 2015) and it runs fairly smooth using the 100ms delay between detection calls and I even upped the resolution to high. On a 60fps device I guess the 100ms delay equates to roughly detecting every 6 frames? What are everyone else's experience regarding frame rendering, battery life, memory, CPU, etc.? Using DevTools performance profiler the raster UI and UI thread times are steady and on average well below the threshold for jank, so all good there.

    I came across this older flutter issue discussing image processing in isolates and garbage collection and wondered if it applied at all to this package? Are isolates in general something that could be used for performance gains?

    opened by cswkim 2
  • Build error on Android

    Build error on Android

    FAILURE: Build failed with an exception.

    • What went wrong: Execution failed for task ':onesignal_flutter:generateReleaseRFile'.

    In project 'google_ml_vision' a resolved Google Play services library dependency depends on another at an exact version (e.g. "[10.2.1, 17.3.99]"), but isn't being resolved to that version. Behavior exhibited by the library will be unknown.

    Dependency failing: com.onesignal:OneSignal:3.16.0 -> com.google.firebase:firebase-messaging@[10.2.1, 17.3.99], but firebase-messaging version was 17.3.4.

    The following dependencies are project dependencies that are direct or have transitive dependencies that lead to the artifact with the issue. -- Project 'google_ml_vision' depends on project 'onesignal_flutter' which depends onto com.onesignal:OneSignal@3.16.0 -- Project 'google_ml_vision' depends on project 'onesignal_flutter' which depends onto com.onesignal:OneSignal@{strictly 3.16.0} -- Project 'google_ml_vision' depends on project 'onesignal_flutter' which depends onto com.google.firebase:firebase-messaging@{strictly 17.3.4}

    For extended debugging info execute Gradle from the command line with ./gradlew --info :google_ml_vision:assembleDebug to see the dependency paths to the artifact. This error message came from the strict-version-matcher-plugin Gradle plugin; report issues at https://github.com/google/play-services-plugins and disable by removing the reference to the plugin ("apply 'strict-version-matcher-plugin'") from build.gradle.

    opened by ashkryab-fl 1
  • [Android] Face detection only works with Samsung smartphones

    [Android] Face detection only works with Samsung smartphones

    I have initialized a face detector object using google_ml_vision v. ^5.0.0.

    I'm using the Flutter CameraController. Each time the method controller.startImageStream() is called, the image taken from the CameraPreview is saved and processed in order to create an image metadata object:

    CameraImage? mlCameraImage;
    GoogleVisionImageMetadata? mlMetaData;
      
    Future<void> setInputImage(CameraImage image, int rotationDegrees) async {
        mlCameraImage = image;
        late ImageRotation rotation;
    switch(rotationDegrees) {
      case 0:
        rotation = ImageRotation.rotation0;
        break;
      case 90:
        rotation = ImageRotation.rotation90;
        break;
      case 180:
        rotation = ImageRotation.rotation180;
        break;
      case 270:
        rotation = ImageRotation.rotation270;
        break;
      default:
        // Without this, `late rotation` stays unassigned for other values.
        rotation = ImageRotation.rotation0;
        break;
    }
        mlMetaData = GoogleVisionImageMetadata(
            rawFormat: image.format.raw,
            size: Size(image.width.toDouble(),image.height.toDouble()),
            planeData: image.planes.map((currentPlane) => GoogleVisionImagePlaneMetadata(
                bytesPerRow: currentPlane.bytesPerRow,
                height: currentPlane.height,
                width: currentPlane.width
            )).toList(),
            rotation: rotation,
        );
      }
    

    Then I use mlCameraImage and mlMetaData as input values for face detection algorithm.

    My detector is

    _mlDetector = GoogleVision.instance.faceDetector(
            FaceDetectorOptions(enableClassification: true,
            enableContours: true)
        );
    

    This configuration performs excellently on Samsung smartphones, but doesn't actually work on other smartphones (for example, Xiaomi) or on tablets (even Samsung tablets).

    I tried to rotate my input image using all existing ImageRotation objects, but can't notice any particular change in my app behavior.

    Any help would be very welcome, thanks!

    opened by nicola-sarzimadidini 1
  • Bump dependencies

    Bump dependencies

    I had an issue when adding this library alongside the latest FlutterFire libraries.

    It always tried to resolve very old Google Vision libs (0.0.60).

    So I just added constraints to the pubspec.

    opened by radvansky-tomas 1
  • Error when compiling IOS

    Error when compiling IOS

    For some reason whenever I want to compile I get this error:

    Undefined symbol: _OBJC_CLASS_$_MLKTextRecognizer
    Undefined symbol: _OBJC_CLASS_$_MLKBarcodeScanner
    Undefined symbol: _OBJC_CLASS_$_MLKImageLabelerOptions
    Undefined symbol: _OBJC_CLASS_$_MLKVisionImage
    Undefined symbol: _OBJC_CLASS_$_MLKImageLabeler
    Undefined symbol: _MLKFaceLandmarkTypeRightCheek
    Undefined symbol: _MLKFaceLandmarkTypeMouthLeft
    Undefined symbol: _OBJC_CLASS_$_MLKBarcodeScannerOptions
    Undefined symbol: _MLKFaceLandmarkTypeLeftEye
    Undefined symbol: _MLKFaceContourTypeUpperLipBottom
    Undefined symbol: _MLKFaceLandmarkTypeMouthBottom
    Undefined symbol: _MLKFaceContourTypeLowerLipBottom
    Undefined symbol: _MLKFaceLandmarkTypeLeftEar
    Undefined symbol: _MLKFaceLandmarkTypeLeftCheek
    Undefined symbol: _MLKFaceLandmarkTypeRightEar
    Undefined symbol: _MLKFaceContourTypeNoseBottom
    Undefined symbol: _MLKFaceContourTypeUpperLipTop
    Undefined symbol: _MLKFaceContourTypeRightEyebrowBottom
    Undefined symbol: _MLKFaceContourTypeLeftEyebrowTop
    Undefined symbol: _MLKFaceContourTypeRightEyebrowTop
    Undefined symbol: _MLKFaceContourTypeRightEye
    Undefined symbol: _MLKFaceContourTypeNoseBridge
    Undefined symbol: _MLKFaceContourTypeFace
    Undefined symbol: _MLKFaceContourTypeLeftEye
    Undefined symbol: _MLKFaceContourTypeLeftEyebrowBottom
    Undefined symbol: _MLKFaceLandmarkTypeMouthRight
    Undefined symbol: _OBJC_CLASS_$_MLKFaceDetector
    Undefined symbol: _MLKFaceContourTypeLowerLipTop
    Undefined symbol: _MLKFaceContourTypeLeftCheek
    Undefined symbol: _OBJC_CLASS_$_MLKFaceDetectorOptions
    Undefined symbol: _MLKFaceLandmarkTypeRightEye
    Undefined symbol: _MLKFaceLandmarkTypeNoseBase
    Undefined symbol: _MLKFaceContourTypeRightCheek

    opened by Lawati97 0
  • Confidence is null of TextRecognizer

    Confidence is null of TextRecognizer

    The confidence of the blocks inside TextBlock is null, regardless of the text I take a photo of. It resembles an issue in the firebase repo.

    I arbitrarily use:

    return visionText.blocks[0].text
    

    And I get the text mostly right (in English).

    I would like to be able to take TextBlock with the highest confidence.

    opened by Lelelo1 1
  • Not able to detect text in Arabic language

    Not able to detect text in Arabic language

    Hello,

    I am trying to detect text from an image that contains text written in Arabic. But every time it returns an empty string. Here is the minimal reproducible code:

    void checkText() async {
      final GoogleVisionImage visionImage =
          GoogleVisionImage.fromFile(File.fromUri(Uri.parse(prov.getImage.path)));
      final TextRecognizer textRecognizer = GoogleVision.instance.textRecognizer();
      final VisionText visionText = await textRecognizer.processImage(visionImage);
      print("Detected text ---> ${visionText.text}");
    }

    The link to image i am processing -> https://www.verifave.com/wp-content/uploads/2020/11/Old-Egyptian-Driving-License-.png

    Can someone suggest what's wrong here, or am I missing some configuration?

    opened by bharat8 0
  • How are bounding boxes interpreted?

    How are bounding boxes interpreted?

    I followed the basic docs to create the FaceDetector. However, I'm not quite sure how the bounding box values are computed, because the values I get are out of range for every mobile device.

    Example:

    Rect.fromLTRB(1285.0, 2859.0, 3054.0, 4627.0)

    I want to add the bounding box to the taken image.

    Am I missing something?

    opened by fluttered 0
Owner
Brian M Tully
A Fast QR Reader widget for Flutter. For both Android and iOS

Fast QR Reader View Plugin See in pub A Flutter plugin for iOS and Android allowing access to the device cameras to scan multiple type of codes (QR, P

null 287 Oct 4, 2022
A note-taking app powered by Google services such as Google Sign In, Google Drive, and Firebase ML Vision.

Smart Notes A note-taking app powered by Google services such as Google Sign In, Google Drive, and Firebase ML Vision. This is an official entry to Fl

Cross Solutions 88 Oct 26, 2022
Ml kit ocr - Plugin which provides native ML Kit OCR APIs

MLKit OCR Plugin which provides native ML Kit OCR APIs Requirements Android Set

Madhav tripathi 0 Aug 3, 2022
A Flutter example to use Google Maps in iOS and Android apps via the embedded Google Maps plugin Google Maps Plugin

maps_demo A Flutter example to use Google Maps in iOS and Android apps via the embedded Google Maps plugin Google Maps Plugin Getting Started Get an A

Gerry High 41 Feb 14, 2022
Flutter implementation of Google Mobile Vision.

flutter_mobile_vision Flutter implementation for Google Mobile Vision. Based on Google Mobile Vision. Android Samples -=- iOS Samples Liked? ⭐ Star th

Eduardo Folly 449 Nov 12, 2022
Google Vision images REST API Client

Native Dart package that integrates Google Vision features, including image labeling, face, logo, and landmark detection, optical character recognition (OCR), and detection of explicit content, into applications.

C Davis 5 Sep 1, 2022
A Translator App Which is Build using Flutter, Speech To Text, Google ML Kit, Google Translator and Text To Speech.

AI Translator This is a Translator App Which is Build using Flutter, Speech To Text, Google ML Kit, Google Translator and Text To Speech. Download App

null 4 Jul 16, 2022
Flutter-Shop-UI-Kit - Create An E-commerce App UI kit Using Flutter

Flutter Shop UI kit If you are planning to create an e-commerce app using Flutte

Abu Anwar 390 Nov 18, 2022
A starter kit for beginner learns with Bloc pattern, RxDart, sqflite, Fluro and Dio to architect a flutter project. This starter kit build an App Store app as a example

Flutter Starter Kit - App Store Example A starter kit for beginner learns with Bloc pattern, RxDart, sqflite, Fluro and Dio to architect a flutter pro

kw101 669 Nov 16, 2022
A flutter plugin that implements google's standalone ml kit

A flutter plugin that implements google's standalone ml kit

Bharat Biradar 389 Nov 19, 2022
A flutter plugin that implements google's standalone ml kit

Google's ML Kit for Flutter Google's ML Kit for Flutter is a set of Flutter plugins that enable Flutter apps to use Google's standalone ML Kit. Featur

kyle reginaldo 2 Aug 29, 2022
Flutter sample app using MLKit Vision API for text recognition

Flutter ML Kit Vision This a sample Flutter app integrated with the ML Kit Vision API for recognition of email addresses from an image. NOTE: The ML K

Souvik Biswas 21 Oct 12, 2022
A unique flutter application aimed at helping people getting their vitals using Photoplethysmography and Computer Vision

A unique flutter application aimed at helping people getting their vitals using Photoplethysmography and Computer Vision Current Goals: Use the camera

Smaranjit Ghose 36 Nov 21, 2022
A flutter widget that show the camera stream and allow ML vision recognition on it, it allow you to detect barcodes, labels, text, faces...

Flutter Camera Ml Vision A Flutter package for iOS and Android to show a preview of the camera and detect things with Firebase ML Vision. Installation

Rushio Consulting 253 Nov 8, 2022
Simple face recognition authentication (Sign up + Sign in) written in Flutter using Tensorflow Lite and Firebase ML vision library.

FaceNetAuthentication Simple face recognition authentication (Sign up + Sign in) written in Flutter using Tensorflow Lite and Google ML Kit library. S

Marcos Carlomagno 273 Nov 25, 2022
ReverseHand is a mobile application that was created with the vision of helping to reduce any power imbalances that consumers may face when seeking trade services.

ReverseHand is a mobile application that was created with the vision of helping to reduce any power imbalances that consumers may face when seeking trade services. To achieve this, the mobile application allows consumers to make their needs for services known in the form of job listings, where tradesmen are able to place bids in order to be chosen and hired.

COS 301 - 2022 7 Nov 2, 2022
Google places picker plugin for flutter. Opens up the google places picker on ios and android returning the chosen place back to the flutter app.

flutter_places_dialog Shows a places picker dialog in ios and android, returning the data in the places picker to the app. Getting Started Generate yo

null 46 Jan 5, 2022
Google mobile ads applovin - AppLovin mediation plugin for Google Mobile Ads (Flutter).

AppLovin mediation plugin for Google Mobile Ads Flutter Google Mobile Ads Flutter mediation plugin for AppLovin. Use this package as a library depende

Taeho Kim 1 Jul 5, 2022
A Flutter plugin to use the Firebase ML Kit.

mlkit A Flutter plugin to use the Firebase ML Kit. ⭐ Only your star motivate me! ⭐ this is not official package The flutter team now has the firebase_

Naoya Yoshizawa 384 Nov 11, 2022