Overview

TFlite Audio Plugin for Flutter



Audio classification TFLite package for Flutter (iOS & Android). Can also support Google Teachable Machine models.

If you are a complete newbie to audio classification, you can read the tutorial here. Credit to Carolina for writing a comprehensive article.

To keep this project alive, consider giving a star or a like. Pull requests or bug reports are also welcome.


(Demo GIFs: recording and inference result)



About This Plugin

The plugin has several features that you can use:

  1. Audio recognition for stored audio files. There are some things to note however:

    • Can only run inferences on mono wav files. (In the future, an audio converter will be included in this plugin.)
    • Avoid very large audio files; they may cause performance problems.
    • For best results, make sure the sample rate of the wav file matches the model's inputSize. For example, GTM models have an input size of 44032, so a sample rate of 44100 should be used (44032 samples at 44.1 kHz is roughly one second of audio). Similarly, decodedWav models have an inputSize of 16000, so a sample rate of 16000 should be used. A minimal sketch is shown after this list.
  2. Audio recognition for recordings. You can adjust the following with this plugin:

    • Recording length/time (bufferSize)
    • Sample rate
    • Number of inferences per recording
  3. Ability to tune your model's output, such as reducing false positives. Please look at the parameters below for more information.
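
For example, here is a minimal sketch of file recognition, using only calls documented in this README (loadModel() and startFileRecognition()) and assuming sampleAudio.wav is a mono wav recorded at 44100 Hz and listed under assets:

    import 'package:tflite_audio/tflite_audio.dart';

    // Sketch: classify a stored mono wav with a GTM model (input size 44032).
    // The wav's 44100 Hz sample rate matches the model's inputSize of 44032.
    void classifyStoredFile() {
      TfliteAudio.loadModel(
          model: 'assets/google_teach_machine_model.tflite',
          label: 'assets/google_teach_machine_label.txt',
          inputType: 'rawAudio');

      TfliteAudio.startFileRecognition(audioDirectory: 'assets/sampleAudio.wav')
          .listen((event) => print(event['recognitionResult']));
    }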


It can also support several model types:

  1. Models from Google Teachable Machine

    • For beginners with little to no machine learning knowledge. You can read the tutorial here if you are a newbie.
    • Training can be done here
  2. Raw audio inputs.

    • Can recognize the following inputs: float32[recordingLength, 1] or float32[1, recordingLength]
    • For more information on how to train your own model, take a look here.
  3. Supports models with decoded wave inputs.

    • Supports two inputs: float32[recordingLength, 1] and int32[1]
    • For more information on how to train your own model, take a look here.
    • To train a decoded wave with MFCC, take a look here.
  4. (Feature in progress) Raw audio with additional dynamic inputs. Take a look at this branch for the work in progress.

    • Supports two inputs: float32[recordingLength, 1] and float32[dynamic input, 1]
    • Also supports reverse inputs: float32[1, recordingLength] and float32[1, dynamic input]
    • Will support dynamic outputs
    • Will add dynamic support for different input/output data types
    • Add support on iOS
  5. (Future feature) Spectrogram, MFCC, and mel as input types. Will support the model from this tutorial.


Known Issues/Commonly asked questions

  1. My model won't load

    You need to configure permissions and dependencies to use this plugin. Please follow the installation steps further below.

  2. How to adjust the recording length/time

    There are two ways to adjust the recording length/time:

    • You can increase the recording time by adjusting the bufferSize to a lower value.

    • You can also increase the recording time by lowering the sample rate.

    Note: stretching the bufferSize too low will cause problems with model accuracy; in that case, you may want to lower your sample rate as well. Likewise, a very low sample rate can also cause accuracy problems. It is your job to find the sweet spot for both values. The sketch below illustrates the trade-off.
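
    The numbers below are rough and illustrative only (not the plugin's exact internal formula); they show how a lower sample rate stretches the capture time for a fixed number of samples, while a lower bufferSize means more buffer reads (and more CPU work) per inference:

      // Illustrative estimate only; recordingLength is the model's input
      // size in samples (e.g. 44032 for GTM models).
      void estimateRecording(int recordingLength, int sampleRate, int bufferSize) {
        final seconds = recordingLength / sampleRate; // audio captured per inference
        final reads = (recordingLength / bufferSize).ceil(); // buffer reads per inference
        print('~${seconds.toStringAsFixed(2)}s per inference, $reads buffer reads');
      }

      void main() {
        estimateRecording(44032, 44100, 22016); // ~1.00s per inference
        estimateRecording(44032, 22050, 22016); // ~2.00s: lower rate, longer recording
      }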

  3. How to reduce false positives in my model

    To reduce false positives, adjust the default values of detectionThreshold=0.3 and averageWindowDuration=1000 upwards. Good values are 0.7 and 1500 respectively, as in the sketch below. For more details about these parameters, please visit this section.
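
    A hedged example with stricter detection settings (the values follow the suggestions above; tune them for your own model):

      final recognitionStream = TfliteAudio.startAudioRecognition(
        sampleRate: 44100,
        bufferSize: 22016,
        detectionThreshold: 0.7,     // default 0.3; higher = fewer false positives
        averageWindowDuration: 1500, // default 1000; longer smoothing window
      );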

  4. I am getting build errors on iOS

    There are several ways to fix this:

    • Some have reported fixing this issue by replacing the following line:

      target 'Runner' do
        use_frameworks! 
        use_modular_headers!
        #pod 'TensorFlowLiteSelectTfOps' #Old line
        pod 'TensorFlowLiteSelectTfOps', '~> 2.6.0' #New line
      
        flutter_install_all_ios_pods File.dirname(File.realpath(__FILE__))
      end
    • Others have fixed this issue by building the app without the line pod 'TensorFlowLiteSelectTfOps', then re-adding the line and rebuilding the app.

  5. I am getting a TensorFlow Lite error on iOS: "Regular TensorFlow ops are not supported by this interpreter. Make sure you apply/link the Flex delegate before inference."

    • Please make sure that you have enabled ops-select in your Podfile (step 4), Xcode (step 5), and build.gradle (step 3).

    • If you have tried the above, please run the example on a device (not an emulator). If you still receive this error, it's very likely that there's an issue with your CocoaPods or Xcode configuration. Please check issue #7.

    • If you received this error from your custom model (not GTM), it's likely that you're using TensorFlow operators that are not supported by TFLite, as found in issue #5. For more details on which operators are supported, look at the official documentation here.

    • Take a look at issue #4 if none of the above works.

  6. (iOS) App crashes when running Google's Teachable Machine model

    Please run your app on an actual iOS device. Running on M1 Macs should also be fine.

    As of this moment, there is limited support for x86_64 architectures in the TensorFlow Lite select-ops framework. If you absolutely need to run on a simulator, you can consider building the select-ops framework yourself. Instructions can be found here.

  7. (Android) Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0xfffffff4 in tid 5403

    It seems the latest tflite package for Android is causing this issue. Until it is fixed, please run this package on an actual Android device.


Please Read If You Are Using Google's Teachable Machine. (Otherwise Skip)


BE AWARE: Google's Teachable Machine requires select TensorFlow operators to work. This feature is experimental and will cause the following issues:

  1. It will increase the overall size of your app. If this is unacceptable, it's recommended that you build your own custom model. Tutorials can be found in the About This Plugin section.

  2. iOS emulators do not work due to limited support for x86_64 architectures. You need to run on an actual device. The issue can be found here.

  3. You will need to manually enable ops-select in your Podfile (step 4), Xcode (step 5), and build.gradle (step 3).


How to add your tflite model and label to Flutter:


  1. Place your custom tflite model and labels into the assets folder.
  2. In pubspec.yaml, link your tflite model and label under 'assets'. (For files stored outside the asset bundle, see the sketch after this example.) For example:
  assets:
    - assets/decoded_wav_model.tflite
    - assets/decoded_wav_label.txt
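
If the model and label live outside the asset bundle (for example in external storage), skip the pubspec entry and pass isAsset: false to loadModel(), as described in the parameters section below. A sketch with made-up paths:

    // The paths below are hypothetical examples; point them at wherever
    // you actually stored the files, and set isAsset: false.
    TfliteAudio.loadModel(
      model: '/storage/emulated/0/Download/decoded_wav_model.tflite',
      label: '/storage/emulated/0/Download/decoded_wav_label.txt',
      inputType: 'decodedWav',
      isAsset: false,
    );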


How to use this plugin


Please look at the example to see how to implement these features.

  1. Import the plugin. For example:

    import 'package:tflite_audio/tflite_audio.dart';

  2. To load your model:
  //Example for decodedWav models
   TfliteAudio.loadModel(
        model: 'assets/conv_actions_frozen.tflite',
        label: 'assets/conv_actions_label.txt',
        inputType: 'decodedWav');


  //Example for Google's Teachable Machine models
    TfliteAudio.loadModel(
        model: 'assets/google_teach_machine_model.tflite',
        label: 'assets/google_teach_machine_label.txt',
        inputType: 'rawAudio');

  //Example if you want to take advantage of all optional parameters from loadModel()
    TfliteAudio.loadModel(
      model: 'assets/conv_actions_frozen.tflite',
      label: 'assets/conv_actions_label.txt',
      inputType: 'decodedWav',
      outputRawScores: false, 
      numThreads: 1,
      isAsset: this.isAsset,
    );
  3. To start and listen to the stream for inference results:

    • Declare stream value

      Stream<Map<dynamic, dynamic>> recognitionStream;
    • If you want to use the recognition stream for recording:

      //Example values for Google's Teachable Machine models
      recognitionStream = TfliteAudio.startAudioRecognition(
        sampleRate: 44100,
        bufferSize: 22016,
        );
      
      //Example values for decodedWav
      recognitionStream = TfliteAudio.startAudioRecognition(
        sampleRate: 16000,
        bufferSize: 2000,
        );
        
      //Example for advanced users who want to utilise all optional parameters from this package.
      //Note the values are default.
      recognitionStream = TfliteAudio.startAudioRecognition(
        sampleRate: 44100,
        bufferSize: 22016,
        numOfInferences: 5,
        detectionThreshold: 0.3,
        averageWindowDuration: 1000,
        minimumTimeBetweenSamples: 30,
        suppressionTime: 1500,
        );
      
    • If you want to use the recognition stream for stored audio files:

      //Example values for both GTM or decodedwav
      recognitionStream = TfliteAudio.startFileRecognition(
        audioDirectory: "assets/sampleAudio.wav",
        );
      
      //Example for advanced users who want to utilise all optional parameters from this package. 
      recognitionStream = TfliteAudio.startFileRecognition(
        audioDirectory: "assets/sampleAudio.wav",
        detectionThreshold: 0.3,
        averageWindowDuration: 1000,
        minimumTimeBetweenSamples: 30,
        suppressionTime: 1500,
        );
    • Listen for results

      String result = '';
      int inferenceTime = 0;
      
      recognitionStream.listen((event){
            result = event["recognitionResult"];
            inferenceTime = event["inferenceTime"];
            })
          .onDone(() {
             //Do something here when the stream closes
           });
  4. To forcibly cancel the recognition stream:

    TfliteAudio.stopAudioRecognition();
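
    For example, a sketch that stops recognition after a timeout, assuming recognitionStream was started as above (closing the stream should also fire the onDone callback shown earlier):

      Future<void> stopAfterTimeout() async {
        await Future.delayed(const Duration(seconds: 10));
        TfliteAudio.stopAudioRecognition();
      }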

Rough guide on the parameters

  • outputRawScores - Outputs the result as an array in string format, for example '[0.2, 0.6, 0.1, 0.1]'. See the parsing sketch after this list.

  • numThreads - A higher thread count will reduce inferenceTime, but will use more CPU resources.

  • isAsset - Is your model, label or audio file in the assets folder? If yes, set true. If the files are outside (such as in external storage), set false.

  • numOfInferences - Determines how many times you want to loop the recording and inference. For example, numOfInferences = 3 will repeat the recording three times, so the recording length will be (1 to 2 seconds) x 3 = (3 to 6 seconds), and the model will output its scores three times.

  • sampleRate - A higher sample rate may improve accuracy. Recommended values are 16000, 22050, 44100.

  • recordingLength - Determines the size of your tensor input. If the value is not equal to your tensor input, it will crash.

  • bufferSize - Make sure this value is equal to or below your recording length. Be aware that a higher value may not give the recording enough time to capture your voice. A lower value gives more time, but it is more CPU intensive. Remember that the optimal value varies depending on the device.

  • detectionThreshold - Ignores any predictions whose probability does not exceed the detection threshold. Useful for situations where you pick up unwanted/unintentional sounds. Lower the value if your model misses too many detections.

  • suppressionMs - If your detection triggers too early, the result may be poor or inaccurate. Adjust this value to avoid that situation.

  • averageWindowDurationMs - Used to discard results that are too old.

  • minimumTimeBetweenSamples - Ignores any results that arrive too frequently.
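
For example, a small sketch for turning the outputRawScores string into numbers (the string format follows the example above):

    // Parse a raw-score string such as '[0.2, 0.6, 0.1, 0.1]' into doubles.
    List<double> parseRawScores(String raw) {
      return raw
          .replaceAll(RegExp(r'[\[\]]'), '')
          .split(',')
          .map((s) => double.parse(s.trim()))
          .toList();
    }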


Android Installation & Permissions

  1. Add the permission below to your AndroidManifest.xml, found under /android/app/src/main. For example:

    <uses-permission android:name="android.permission.RECORD_AUDIO" />
  2. Add the following to your build.gradle, found under /android/app. For example:

    aaptOptions {
        noCompress 'tflite'
    }

NOTE: Skip below if you are not using Google Teachable Machine (Android)


  1. Enable select-ops under dependencies in your build.gradle.

    dependencies {
        implementation 'org.tensorflow:tensorflow-lite-select-tf-ops:+'
    }

iOS Installation & Permissions

  1. Add the following key to your Info.plist, found under /ios/Runner. For example:

    <key>NSMicrophoneUsageDescription</key>
    <string>Record audio for playback</string>
  2. Change the deployment target to at least 12.0. This could be done by:

    • Open your project workspace in Xcode

    • Select root runner on the left panel

    • Under the info tab, change the iOS deployment target to 12.0

  3. Open the Podfile in your iOS folder and change the platform to iOS 12:

    platform :ios, '12.0'

NOTE: Skip below if you are not using Google Teachable Machine (iOS)


  1. Add pod 'TensorFlowLiteSelectTfOps' under your target.

    target 'Runner' do
      use_frameworks! 
      use_modular_headers!
      pod 'TensorFlowLiteSelectTfOps' #Add this line here. 
    
      flutter_install_all_ios_pods File.dirname(File.realpath(__FILE__))
    end
  2. Force load the Select Ops for TensorFlow. To do that:

    • Open your project in Xcode

    • Click on Runner under "Targets"

    • Click on the "Build Settings" tab

    • Click on the "All" tab

    • Click on the empty space to the right of "Other Linker Flags"

    • Add: -force_load $(SRCROOT)/Pods/TensorFlowLiteSelectTfOps/Frameworks/TensorFlowLiteSelectTfOps.framework/TensorFlowLiteSelectTfOps

  3. Install the ops-select package with CocoaPods. To do this:

    • cd into the ios folder

    • Run flutter pub get in the terminal

    • Run pod install in the terminal

    • Run flutter clean in the terminal


References

  1. https://github.com/tensorflow/examples/tree/master/lite/examples/speech_commands
  2. https://www.tensorflow.org/lite/guide/ops_select
Comments
  • iOS issue with Background service plugin / outputRawScores

    Hi @Caldarie. I'm testing the app on iOS but the package doesn't work. I have followed the guidelines for the implementation but it still doesn't work. This is the exception:

    `Unhandled Exception: MissingPluginException(No implementation found for method loadModel on channel tflite_audio) #0 MethodChannel._invokeMethod (package:flutter/src/services/platform_channel.dart:165:7)

    ══╡ EXCEPTION CAUGHT BY SERVICES LIBRARY ╞══════════════════════════════════════════════════════════ The following MissingPluginException was thrown while activating platform stream on channel AudioRecognitionStream: MissingPluginException(No implementation found for method listen on channel AudioRecognitionStream)

    When the exception was thrown, this was the stack: #0 MethodChannel._invokeMethod (package:flutter/src/services/platform_channel.dart:165:7) #1 EventChannel.receiveBroadcastStream. (package:flutter/src/services/platform_channel.dart:506:9) ════════════════════════════════════════════════════════════════════════════════════════════════════`

    How can I fix it???

    Thank you.

    opened by bobosette 63
  • Counting specific sound occurances in the audio

    I am trying to count the number of specific sound occurrences in the audio

    The problem I have is that I am calling TfliteAudio.startAudioRecognition and trying to listen to the events steam I am receiving events every 1 second. And I can't find the possibility to increase events' frequency to receive events every 50-100 ms. Is it possible to decrease interval duration to 50-100 ms?

    Another problem I have is that event['recognitionResult'] always returns "1 Result": result: {hasPermission=true, inferenceTime=75, recognitionResult=1 Result} However, there are more than 1 repetitions of sound I am trying to count in each 1-second interval. Should it work like this and what does number "1" means, is this number of the sound in a single audio interval or something else?

    Is it possible to implement specific sound counting with this package or I should look somewhere else? Any feedback would be helpful, thanks!

    opened by nazdream 27
  • Making predictions with MFCC/stored audio file

    Hi,

    I'm very new to the topics flutter and tensorflow. Just so you know that maybe some things I ask may not make any sense :).

    I'm trying to build an app that allows me to record some audio samples. Then I would like to do some classification with the recorded files.

    My questions are:

    • Is it possible to make a prediction with a recorded file instead of using the audio stream? (á la model.predict(data) like in python/tensorflow)
    • I'm using mfcc in my trained model. I expect that I would need to do some transformation with the recorded audio files to load them with the model (as I'm doing it in python). To which degree is that even possible with this plugin?

    I hope you understand my problem.

    Thanks in advance!

    enhancement help wanted 
    opened by PeteSahad 17
  • Reducing false positives/ non divisible bufferRate outputs NaN

    Hi @Caldarie. I'm facing an issue reguarding the detection. I create my model with a lot of samples to recognize a certain noise, it works pretty well but tflite_audio recognizes also other noises like the one I would like to recognize. How can I fix this to adjust precision? Maybe I have to play with this parameters: detectionThreshold, averageWindowDuration, minimumTimeBetweenSamples, suppressionTime??

    Thank you

    bug 
    opened by bobosette 14
  • Tensorflow Lite errors when running in iOS devices

    Running my application in iOS device (iPhone 7 with iOS 14.4) it crashes when the model is processing the data. I believe that happens due to Tensorflow Lite Errors (see output) but I have no idea how to fix it:

    carolinaalbuquerque ~/Documents/audio_recognition_app (*main) > flutter run
    Launching lib/main.dart on iPhone de Carolina in debug mode...
    Automatically signing iOS for device deployment using specified development team in Xcode project: 5SSNTW7HP4
    Running Xcode build...                                                  
     └─Compiling, linking and signing...                        19,5s
    Xcode build done.                                           28,9s
    (lldb) 2021-02-20 18:18:18.623713+0000 Runner[414:13363] Warning: Unable to create restoration in progress marker file
    fopen failed for data file: errno = 2 (No such file or directory)       
    Errors found! Invalidating cache...                                     
    fopen failed for data file: errno = 2 (No such file or directory)       
    Errors found! Invalidating cache...                                     
    Installing and launching...                                        36,1s
    Initialized TensorFlow Lite runtime.
    TensorFlow Lite Error: Regular TensorFlow ops are not supported by this interpreter. Make sure you apply/link the Flex delegate before inference.
    TensorFlow Lite Error: Node number 2 (FlexSize) failed to prepare.
    
    
    Failed to create the interpreter with error: Failed to allocate memory for input tensors.
    ["0 Background Noise", "1 Bell", "2 Whistle", "3 Xylophone"]
    Activating Dart DevTools...                                         5,9s
    Syncing files to device iPhone de Carolina...                       176ms
    
    Flutter run key commands.
    r Hot reload. 🔥🔥🔥
    R Hot restart.
    h Repeat this help message.
    d Detach (terminate "flutter run" but leave application running).
    c Clear the screen
    q Quit (terminate the application on the device).
    An Observatory debugger and profiler on iPhone de Carolina is available at: http://127.0.0.1:53066/z1fkaZhV7VE=/
    
    Flutter DevTools, a Flutter debugger and profiler, on iPhone de Carolina is available at:
    http://127.0.0.1:9101?uri=http%3A%2F%2F127.0.0.1%3A53066%2Fz1fkaZhV7VE%3D%2F
    
    Running with unsound null safety
    For more information see https://dart.dev/null-safety/unsound-null-safety
    requesting permission
    start microphone
    recordingBuffer length: 11008
    recordingBuffer length: 22016
    recordingBuffer length: 33024
    recordingBuffer length: 44032
    reached threshold
    Running model
    * thread #21, queue = 'conversionQueue', stop reason = EXC_BAD_ACCESS (code=1, address=0x0)
        frame #0: 0x00000001d6d0a128 libsystem_platform.dylib`_platform_memmove + 72
    libsystem_platform.dylib`_platform_memmove:
    ->  0x1d6d0a128 <+72>: stnp   x12, x13, [x0]
        0x1d6d0a12c <+76>: stnp   x14, x15, [x0, #0x10]
        0x1d6d0a130 <+80>: subs   x2, x2, #0x40             ; =0x40 
        0x1d6d0a134 <+84>: b.ls   0x1d6d0a158               ; <+120>
    Target 0: (Runner) stopped.
    Lost connection to device.
    

    I am using a Google Teachable Machine model and I followed these steps for iOS configuration.

    I have tested already in Android device and it works perfectly but I need to guarantee iOS support!

    opened by cmalbuquerque 11
  • Android Permission bug

    Hi @Caldarie . I found another issue to fix. On my app it's the dart code which ask for permission (It asks for all the permissions inside the home page > I need to do this cause tflite_audio is not the only package that need permissions). When the app ask for the permission and user grants them, I don't know why but tflite_audio shows a dialog with this message: 'Microphone permission denied. Go to settings etc..'. But it isn't true cause user granted that permission. After a lot of time, I found the issue inside TfliteAudioPlugin.java, raw 330 (inside onRequestPermissionResult() method). I don't know why but it seems like that method doesn't understand that permission has been already granted. Can you provide a little update on this thing? Thank you

    opened by bobosette 10
  • Recognition Raw Scores returns [NaN, NaN, NaN, NaN]

    Hi @Caldarie I found a problem when I used GTM model. The Raw Score returns Nan with the latest version 0.2.1+1

    D/AudioRecord(27796): stop(1446): 0x7543619a00, mActive:0 D/AudioRecord(27796): ~AudioRecord(1446): mStatus 0 D/AudioRecord(27796): stop(1446): 0x7543619a00, mActive:0 D/Tflite_audio(27796): Recording stopped. V/Tflite_audio(27796): Raw Scores: [NaN, NaN, NaN, NaN] D/Tflite_audio(27796): Recognition stopped. V/Tflite_audio(27796): result: {hasPermission=true, inferenceTime=89, recognitionResult=Background Noise} D/Tflite_audio(27796): Recognition Stream stopped

    But I tried a non-GTM model and it works fine.

    opened by kyledevfy 10
  • Permission request error

    D/Tflite_audio( 7874): Check for permissions
    D/Tflite_audio( 7874): Permission requested.
    E/EventChannel#startAudioRecognition( 7874): Failed to open event stream
    E/EventChannel#startAudioRecognition( 7874): java.lang.NullPointerException: Attempt to invoke virtual method 'void android.app.Activity.requestPermissions(java.lang.String[], int)' on a null object reference
    E/EventChannel#startAudioRecognition( 7874): 	at androidx.core.app.ActivityCompat.requestPermissions(ActivityCompat.java:502)
    E/EventChannel#startAudioRecognition( 7874): 	at flutter.tflite_audio.TfliteAudioPlugin.requestMicrophonePermission(TfliteAudioPlugin.java:310)
    E/EventChannel#startAudioRecognition( 7874): 	at flutter.tflite_audio.TfliteAudioPlugin.checkPermissions(TfliteAudioPlugin.java:303)
    E/EventChannel#startAudioRecognition( 7874): 	at flutter.tflite_audio.TfliteAudioPlugin.onListen(TfliteAudioPlugin.java:221)
    E/EventChannel#startAudioRecognition( 7874): 	at io.flutter.plugin.common.EventChannel$IncomingStreamRequestHandler.onListen(EventChannel.java:188)
    E/EventChannel#startAudioRecognition( 7874): 	at io.flutter.plugin.common.EventChannel$IncomingStreamRequestHandler.onMessage(EventChannel.java:167)
    E/EventChannel#startAudioRecognition( 7874): 	at io.flutter.embedding.engine.dart.DartMessenger.handleMessageFromDart(DartMessenger.java:85)
    E/EventChannel#startAudioRecognition( 7874): 	at io.flutter.embedding.engine.FlutterJNI.handlePlatformMessage(FlutterJNI.java:818)
    E/EventChannel#startAudioRecognition( 7874): 	at android.os.MessageQueue.nativePollOnce(Native Method)
    E/EventChannel#startAudioRecognition( 7874): 	at android.os.MessageQueue.next(MessageQueue.java:335)
    E/EventChannel#startAudioRecognition( 7874): 	at android.os.Looper.loop(Looper.java:206)
    E/EventChannel#startAudioRecognition( 7874): 	at android.app.ActivityThread.main(ActivityThread.java:8512)
    E/EventChannel#startAudioRecognition( 7874): 	at java.lang.reflect.Method.invoke(Native Method)
    E/EventChannel#startAudioRecognition( 7874): 	at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:602)
    E/EventChannel#startAudioRecognition( 7874): 	at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1130)
    
    ════════ Exception caught by services library ══════════════════════════════════
    The following PlatformException was thrown while activating platform stream on channel startAudioRecognition:
    PlatformException(error, Attempt to invoke virtual method 'void android.app.Activity.requestPermissions(java.lang.String[], int)' on a null object reference, null, null)
    
    When the exception was thrown, this was the stack
    #0      StandardMethodCodec.decodeEnvelope
    package:flutter/…/services/message_codecs.dart:597
    #1      MethodChannel._invokeMethod
    package:flutter/…/services/platform_channel.dart:158
    <asynchronous suspension>
    #2      EventChannel.receiveBroadcastStream.<anonymous closure>
    package:flutter/…/services/platform_channel.dart:545
    <asynchronous suspension>
    ════════════════════════════════════════════════════════════════════════════════
    
    opened by andrejvujic 8
  • Error while runing on IOS

    Getting this error every time when I'm trying to listen to sounds exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: IsFormatSampleRateAndChannelCountValid(format)'

    only on IOS (12, debug mode), on Android everything is ok

    opened by virtyos 6
  • Google Teachable Machine raw output returns NaN

    I have the same problem but only on one device. I created a model with Google Teachable Machine and tested this on two devices:

    Samsung Galaxy S9 Plus The first label is always detected here. The logged raw scores are: [NaN, NaN, NaN]

    Samsung Galaxy S20 The detection works perfectly here

    Both were tested under the same conditions.

    The S20 outputs a NaN in one of hundreds of cases. The S9 always outputs NaN. I haven't yet been able to get a score on the S9.

    Any suggestions?

    Originally posted by @fabian-rump in https://github.com/Caldarie/flutter_tflite_audio/issues/10#issuecomment-894099752

    opened by Caldarie 5
  • iOS build error (Solved. Pinned for reference)

    The solution may be here https://github.com/tensorflow/tensorflow/issues/52042

    Build Error

    duplicate symbol '_TfLiteXNNPackDelegateCreate' in:
        /Users/sky/Downloads/flutter_tflite_audio-master/example/ios/Pods/TensorFlowLiteSelectTfOps/Frameworks/TensorFlowLiteSelectTfOps.framework/TensorFlowLiteSelectTfOps(xnnpack_delegate.o)
        /Users/sky/Downloads/flutter_tflite_audio-master/example/ios/Pods/TensorFlowLiteC/Frameworks/TensorFlowLiteC.framework/TensorFlowLiteC
    duplicate symbol '_TfLiteXNNPackDelegateDelete' in:
        /Users/sky/Downloads/flutter_tflite_audio-master/example/ios/Pods/TensorFlowLiteSelectTfOps/Frameworks/TensorFlowLiteSelectTfOps.framework/TensorFlowLiteSelectTfOps(xnnpack_delegate.o)
        /Users/sky/Downloads/flutter_tflite_audio-master/example/ios/Pods/TensorFlowLiteC/Frameworks/TensorFlowLiteC.framework/TensorFlowLiteC
    duplicate symbol '_TfLiteXNNPackDelegateGetThreadPool' in:
        /Users/sky/Downloads/flutter_tflite_audio-master/example/ios/Pods/TensorFlowLiteSelectTfOps/Frameworks/TensorFlowLiteSelectTfOps.framework/TensorFlowLiteSelectTfOps(xnnpack_delegate.o)
        /Users/sky/Downloads/flutter_tflite_audio-master/example/ios/Pods/TensorFlowLiteC/Frameworks/TensorFlowLiteC.framework/TensorFlowLiteC
    duplicate symbol '_TfLiteXNNPackDelegateOptionsDefault' in:
        /Users/sky/Downloads/flutter_tflite_audio-master/example/ios/Pods/TensorFlowLiteSelectTfOps/Frameworks/TensorFlowLiteSelectTfOps.framework/TensorFlowLiteSelectTfOps(xnnpack_delegate.o)
        /Users/sky/Downloads/flutter_tflite_audio-master/example/ios/Pods/TensorFlowLiteC/Frameworks/TensorFlowLiteC.framework/TensorFlowLiteC
    ld: 4 duplicate symbols for architecture arm64
    
    opened by zxl777 4
  • issue using google's yamnet model

    Hi,

    Can you use this library to run google's yamnet tflite version model (used this one https://tfhub.dev/google/lite-model/yamnet/classification/tflite/1)? when i am trying to use i am getting generic error PFA. Screenshot_20221228_163747

    Thanks!

    opened by hbtalha7 1
  • openai whisper

    Hi,

    could you use this library to run openai whisper with a tflite model? In the examples there are always labels provided, but for whisper there would not be any labels I think?

    Thanks!

    opened by SomeCodingUser 4
  • implement function for extracting MFCCs in dart

    Hello, I have been able to extract MFCCs with good performance on Android smartphones.

    I advise you to look at an implementation I made in this repository. I've used this same implementation to classify bee audio and have achieved 90% accuracy so far.

    This implementation follows the study of another implementation in python that I found in Kaggle that is in this link.

    What makes this implementation really efficient is the FFT used. This implementation of the FFT is not naive. Look at the repository of this implementation here.

    opened by certainlyWrong 3
  • startRecording() called on an uninitialized AudioRecord.

    Future<Timer?> startListningClap(BuildContext context) async {
        // if service already running
        if (await FlutterForegroundTask.isRunningService) {
          setForceStopFlashlight(false);
          return Timer.periodic(const Duration(milliseconds: 500), (Timer ct) {
            try {
              clapAudioSubscriber.cancel();
            } catch (_) {}
    
            try {
              recognitionStream = TfliteAudio.startAudioRecognition(
                sampleRate: 44100,
                bufferSize: /*22016*/ 11016,
                detectionThreshold: 0.3,
              );
            } catch (_) {}
    
            // start listning to clap/whistle
            clapAudioSubscriber = recognitionStream.listen(
                (event) async {
                  try {
                    if (clapServiceStatus == true &&
                        event['recognitionResult'] == 'clap') {
                      // stop listening when clap detected
    
                      ct.cancel();
                      UtilityFunctions.showPhoneFoundAlertDialog(
                          context, () => stopStartClapListning(context));
                      // if vibration is set to on then vibrate phone
                      bool clapVib = prefs.getBool('clapVibration') ?? false;
                      if (await (Vibration.hasVibrator()) == true && clapVib) {
                        Vibration.vibrate(duration: 1000, amplitude: 255);
                      }
    
                      // if flashlight is set to on then turn flashlight
                      bool clapFlash = prefs.getBool('clapFlashLight') ?? false;
                      if (clapFlash) {
                        turnOnFlashLight();
                      }
    
                      // play melody if enabled by user
                      if (clapMelody == true) playMelody(volume);
                    }
                  } catch (_) {}
                },
                cancelOnError: true,
                onError: (_) {
                  clapAudioSubscriber.cancel();
                },
                onDone: () {
                  clapAudioSubscriber.cancel();
                });
          });
        }
        return null;
      }
    

    E/AndroidRuntime(10013): Process: com.example.flutter_application_test, PID: 10013 E/AndroidRuntime(10013): java.lang.IllegalStateException: startRecording() called on an uninitialized AudioRecord. E/AndroidRuntime(10013): at android.media.AudioRecord.startRecording(AudioRecord.java:1147) E/AndroidRuntime(10013): at flutter.tflite_audio.Recording.start(Recording.java:91) E/AndroidRuntime(10013): at flutter.tflite_audio.TfliteAudioPlugin.record(TfliteAudioPlugin.java:592) E/AndroidRuntime(10013): at flutter.tflite_audio.TfliteAudioPlugin.lambda$GvBCQqT11rP0XXTQzopagqcPxcA(Unknown Source:0) E/AndroidRuntime(10013): at flutter.tflite_audio.-$$Lambda$TfliteAudioPlugin$GvBCQqT11rP0XXTQzopagqcPxcA.run(Unknown Source:2) E/AndroidRuntime(10013): at java.lang.Thread.run(Thread.java:923) I/ExceptionHandle(10013): at android.media.AudioRecord.startRecording(AudioRecord.java:1147) I/ExceptionHandle(10013): at flutter.tflite_audio.Recording.start(Recording.java:91) I/ExceptionHandle(10013): at flutter.tflite_audio.TfliteAudioPlugin.record(TfliteAudioPlugin.java:592) I/ExceptionHandle(10013): at flutter.tflite_audio.TfliteAudioPlugin.lambda$GvBCQqT11rP0XXTQzopagqcPxcA(Unknown Source:0) I/ExceptionHandle(10013): at flutter.tflite_audio.-$$Lambda$TfliteAudioPlugin$GvBCQqT11rP0XXTQzopagqcPxcA.run(Unknown Source:2) I/ExceptionHandle(10013): at java.lang.Thread.run(Thread.java:923) D/TfliteAudio(10013): Parameters: {detectionThreshold=0.3, minimumTimeBetweenSamples=0, method=setAudioRecognitionStream, numOfInferences=1, averageWindowDuration=0, audioLength=0, sampleRate=44100, suppressionTime=0, bufferSize=11016} D/TfliteAudio(10013): AudioLength has been readjusted. Length: 44032 D/TfliteAudio(10013): Transpose Audio: false D/TfliteAudio(10013): Check for permission. Request code: 13 D/TfliteAudio(10013): Permission already granted.

    opened by taimoor522 1
  • How to handle models generating multiple outputs

    Hi, Is there a way to manage multiple output models? I am trying to implement this model indeed: https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/models/modify/model_maker/audio_classification.ipynb which is based on Yamnet model and generates 2 outputs: first one from Yamnet model, 2nd one from trained model. Implementing flutter_tflite_audio only gives me access to first output (and by default ask for labels of 1rst model/output only) Thank you Fabrice

    opened by tiofabby 2
Releases(0.3.0)
  • 0.3.0(Mar 18, 2022)

    0.3.0

    • BREAKING CHANGE: Recording bufferSize now takes in 2x the number of samples. To keep the same recording length, simply divide your previous bufferSize by 2.
    • Experimental: Support MFCC, melSpectrogram and spectrogram inputs
    • Feature: Can automatically or manually set audio length
    • Feature: Can automatically or manually transpose input shape
    • Improvement: Stability of asynchronous operations with RxJava and RxSwift
    • Improvement: (iOS) Removed meta info when extracting data from audio files.
    • Improvement: (Android) Splicing algorithm passes all test cases. Audio recognition should now be more accurate.
    • Fixed: (iOS) Duplicate symbol error. Set version of TensorFlowLite to 2.6.0. Problem found here.
    • Fixed: (Android & iOS) Incorrect padding when splicing audio files. All test cases have passed.
  • 0.2.1+1(Dec 3, 2021)

    • Fixed inaccurate numOfInference count for iOS and android.
    • Improved recognition accuracy for Google Teachable Machine models
    • Fixed memory crash on android
    • Improved memory performance on iOS
    • Added feature to output raw scores
    • moved inputType to loadModel() instead of startAudioRecognition()
  • 0.2.0(Oct 17, 2021)

  • 0.1.9(Sep 26, 2021)

  • 0.1.8+1(Jul 20, 2021)

    • Added null safety compatibility
    • Fixed a problem bridging NSNumber to Float
    • Merged rawAudioRecognize() and decodedWavRecognize() on native platforms
    • Set detection parameters to 0 for better performance.
  • V0.1.7+1(Mar 26, 2021)

    • (0.1.7) Fixed an iOS bug where the stream won't close when permission has been denied.
    • (0.1.7) Added a feature to adjust the detection sensitivity of the model
    • (0.1.7+1) Hotfixed an iOS crash when casting double to float for detectionThreshold
Owner
Michael Nguyen
Educator with a background in Business and Language Acquisition. Loves to develop side projects in machine learning and Flutter for fun.