Using TensorFlow Lite for Image Processing in Kotlin Android Apps

In today's digital era, image processing has become an integral part of many Android applications. From applying filters to performing complex transformations, image processing techniques enhance the visual appeal and functionality of mobile apps.


In this blog, we will explore how to implement image processing in Android apps using Kotlin, one of the most popular programming languages for Android development, together with TensorFlow Lite.


Prerequisites


Before diving into image processing, ensure that you have the following prerequisites:

  1. Android Studio: The official IDE for Android app development.

  2. Kotlin: A modern programming language for Android development.

  3. Basic knowledge of Android app development.

Setting up the Project


To get started, follow these steps:

  1. Open Android Studio and create a new project.

  2. Select "Empty Activity" and click "Next."

  3. Provide a name for your project and select the desired package name and location.

  4. Choose the minimum SDK version and click "Finish."

Once the project is set up, we can proceed with image processing implementation.


Step 1: Add the Required Dependencies

To perform the camera and image processing tasks, add the following dependencies to the app-level build.gradle file:

implementation 'org.jetbrains.kotlinx:kotlinx-coroutines-android:1.6.0-RC1'
implementation 'androidx.camera:camera-camera2:1.3.0-alpha07'
implementation 'androidx.camera:camera-lifecycle:1.3.0-alpha07'
implementation 'androidx.camera:camera-view:1.3.0-alpha07'
implementation 'org.tensorflow:tensorflow-lite:2.7.0'

Step 2: Capture and Display the Image

To process an image, we first need to capture it. Add a button for capturing the image to the app's layout file (e.g., activity_main.xml). Here's an example:

<Button
    android:id="@+id/captureButton"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="Capture Image" />
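
The camera code later in this step also refers to a CameraX PreviewView with the id viewFinder, which displays the live camera feed. If your layout does not have one yet, a minimal declaration could look like this:

<androidx.camera.view.PreviewView
    android:id="@+id/viewFinder"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />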

Next, open the MainActivity.kt file and add the following code to set up the camera preview and capture the image:

import android.net.Uri
import android.os.Bundle
import android.util.Log
import android.widget.Button
import android.widget.Toast
import androidx.appcompat.app.AppCompatActivity
import androidx.camera.core.CameraSelector
import androidx.camera.core.ImageCapture
import androidx.camera.core.ImageCaptureException
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.view.PreviewView
import androidx.core.content.ContextCompat
import java.io.File

class MainActivity : AppCompatActivity() {

    private var imageCapture: ImageCapture? = null

    // App-specific directory where captured photos are stored.
    private val outputDirectory: File by lazy {
        File(filesDir, "images").apply { mkdirs() }
    }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        val captureButton: Button = findViewById(R.id.captureButton)
        captureButton.setOnClickListener {
            takePhoto()
        }

        // The PreviewView declared in activity_main.xml that shows the live camera feed.
        val viewFinder: PreviewView = findViewById(R.id.viewFinder)

        val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
        cameraProviderFuture.addListener({
            val cameraProvider = cameraProviderFuture.get()

            imageCapture = ImageCapture.Builder()
                .build()

            val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA

            val preview = Preview.Builder()
                .build()
                .also {
                    it.setSurfaceProvider(viewFinder.surfaceProvider)
                }

            try {
                // Rebind the use cases to the activity's lifecycle.
                cameraProvider.unbindAll()
                cameraProvider.bindToLifecycle(
                    this, cameraSelector, preview, imageCapture
                )
            } catch (exc: Exception) {
                Log.e(TAG, "Error: ${exc.message}")
            }
        }, ContextCompat.getMainExecutor(this))
    }

    private fun takePhoto() {
        // Bail out if the camera has not finished initializing yet.
        val imageCapture = imageCapture ?: return

        val photoFile = File(
            outputDirectory,
            "IMG_${System.currentTimeMillis()}.jpg"
        )

        val outputOptions = ImageCapture.OutputFileOptions.Builder(photoFile).build()

        imageCapture.takePicture(
            outputOptions,
            ContextCompat.getMainExecutor(this),
            object : ImageCapture.OnImageSavedCallback {
                override fun onError(exc: ImageCaptureException) {
                    Log.e(TAG, "Photo capture failed: ${exc.message}", exc)
                }

                override fun onImageSaved(output: ImageCapture.OutputFileResults) {
                    val savedUri = Uri.fromFile(photoFile)
                    val msg = "Photo capture succeeded: $savedUri"
                    Toast.makeText(baseContext, msg, Toast.LENGTH_SHORT).show()
                }
            }
        )
    }

    companion object {
        private const val TAG = "MainActivity"
    }
}
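
Note that CameraX also needs the CAMERA permission: declare <uses-permission android:name="android.permission.CAMERA" /> in AndroidManifest.xml and, on Android 6.0 and above, request it from the user at runtime before starting the camera. A minimal runtime check might look like the sketch below; the ensureCameraPermission name and the REQUEST_CODE_CAMERA constant are just placeholders, not part of the code above.

import android.Manifest
import android.content.pm.PackageManager
import androidx.core.app.ActivityCompat

// Hypothetical helper inside MainActivity: request the CAMERA permission if it
// has not been granted yet. REQUEST_CODE_CAMERA is an arbitrary request code.
private fun ensureCameraPermission() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
        != PackageManager.PERMISSION_GRANTED
    ) {
        ActivityCompat.requestPermissions(
            this,
            arrayOf(Manifest.permission.CAMERA),
            REQUEST_CODE_CAMERA
        )
    }
}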

Step 3: Implement Image Processing

Now that we have captured the image, we can proceed with image processing. For simplicity, we will demonstrate how to apply a grayscale filter to the captured image using the TensorFlow Lite library.


First, add the grayscale model file (e.g., grayscale.tflite) to the "assets" folder of your project (app/src/main/assets). Ensure that the grayscale model is trained and compatible with TensorFlow Lite.
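
Because the ImageProcessor class below memory-maps the model straight out of the APK, it also helps to keep the .tflite file uncompressed. One way to do that (the exact syntax may vary with the Android Gradle Plugin version) is in the app-level build.gradle:

android {
    aaptOptions {
        noCompress "tflite"
    }
}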

Next, create a new Kotlin class called "ImageProcessor" and add the following code:

import android.content.Context
import android.graphics.Bitmap
import org.tensorflow.lite.Interpreter
import java.io.FileInputStream
import java.nio.ByteBuffer
import java.nio.ByteOrder
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel

class ImageProcessor(context: Context, modelPath: String) {

    private val interpreter: Interpreter

    init {
        val options = Interpreter.Options()
        // Memory-map the model file (e.g., "grayscale.tflite") from the assets folder.
        interpreter = Interpreter(loadModelFromAssets(context, modelPath), options)
    }

    private fun loadModelFromAssets(context: Context, modelPath: String): MappedByteBuffer {
        context.assets.openFd(modelPath).use { fd ->
            FileInputStream(fd.fileDescriptor).use { input ->
                return input.channel.map(
                    FileChannel.MapMode.READ_ONLY,
                    fd.startOffset,
                    fd.declaredLength
                )
            }
        }
    }

    fun processImage(bitmap: Bitmap): Bitmap {
        // Tensor shapes are [batch, height, width, channels].
        val inputShape = interpreter.getInputTensor(0).shape()
        val outputShape = interpreter.getOutputTensor(0).shape()

        // Allocate buffers that match the exact byte size of the model's tensors.
        val inputBuffer = ByteBuffer.allocateDirect(interpreter.getInputTensor(0).numBytes()).apply {
            order(ByteOrder.nativeOrder())
        }
        val outputBuffer = ByteBuffer.allocateDirect(interpreter.getOutputTensor(0).numBytes()).apply {
            order(ByteOrder.nativeOrder())
        }

        // Scale the bitmap to the model's expected width/height and copy its pixels.
        // Note: this assumes the model consumes and produces raw ARGB_8888 pixel data;
        // if your model expects normalized float input, convert the pixels accordingly.
        val scaledBitmap = Bitmap.createScaledBitmap(bitmap, inputShape[2], inputShape[1], false)
        scaledBitmap.copyPixelsToBuffer(inputBuffer)
        inputBuffer.rewind()

        interpreter.run(inputBuffer, outputBuffer)

        // Copy the model output back into a bitmap for display.
        val outputBitmap = Bitmap.createBitmap(outputShape[2], outputShape[1], Bitmap.Config.ARGB_8888)
        outputBuffer.rewind()
        outputBitmap.copyPixelsFromBuffer(outputBuffer)

        return outputBitmap
    }
}
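
Two small additions worth considering (they are not part of the class above): logging the model's tensor shapes and data types makes it easy to confirm that the .tflite file matches what processImage expects, and the interpreter holds native resources that should be released once you are done with it. Both methods would live inside ImageProcessor and need an android.util.Log import:

// Optional: log the model's input/output shapes and data types for debugging.
fun logTensorInfo() {
    val input = interpreter.getInputTensor(0)
    val output = interpreter.getOutputTensor(0)
    Log.d("ImageProcessor", "input: ${input.shape().contentToString()} ${input.dataType()}")
    Log.d("ImageProcessor", "output: ${output.shape().contentToString()} ${output.dataType()}")
}

// Release the interpreter's native resources when the processor is no longer needed.
fun close() {
    interpreter.close()
}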

Step 4: Display the Processed Image

To display the processed image, add an ImageView to the activity_main.xml layout file:

<ImageView
    android:id="@+id/processedImage"
    android:layout_width="match_parent"
    android:layout_height="wrap_content" />

Finally, modify the MainActivity.kt file as follows to display the processed image:

import android.graphics.BitmapFactory
import android.widget.ImageView

class MainActivity : AppCompatActivity() {

    // ...

    private lateinit var imageProcessor: ImageProcessor

    override fun onCreate(savedInstanceState: Bundle?) {
        // ...

        // Load the grayscale model from the assets folder.
        imageProcessor = ImageProcessor(this, "grayscale.tflite")
    }

    private fun takePhoto() {
        // ...

        imageCapture.takePicture(
            outputOptions,
            ContextCompat.getMainExecutor(this),
            object : ImageCapture.OnImageSavedCallback {
                override fun onError(exc: ImageCaptureException) {
                    // ...
                }

                override fun onImageSaved(output: ImageCapture.OutputFileResults) {
                    // Decode the saved photo, run it through the model, and show the result.
                    val bitmap = BitmapFactory.decodeFile(photoFile.absolutePath)
                    val processedBitmap = imageProcessor.processImage(bitmap)
                    findViewById<ImageView>(R.id.processedImage).setImageBitmap(processedBitmap)
                }
            }
        )
    }
}
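
Decoding the photo and running TensorFlow Lite inference are relatively heavy operations, and the kotlinx-coroutines-android dependency added in Step 1 makes it easy to keep them off the main thread. Here's a minimal sketch, assuming a hypothetical processAndDisplay helper inside MainActivity rather than the inline call shown above:

import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.withContext

// Hypothetical helper: decode and process the captured photo on a background
// dispatcher, then switch back to the main thread to update the ImageView.
private fun processAndDisplay(photoFile: File) {
    CoroutineScope(Dispatchers.Main).launch {
        val processedBitmap = withContext(Dispatchers.Default) {
            val bitmap = BitmapFactory.decodeFile(photoFile.absolutePath)
            imageProcessor.processImage(bitmap)
        }
        findViewById<ImageView>(R.id.processedImage).setImageBitmap(processedBitmap)
    }
}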

Conclusion


In this blog post, we explored how to implement image processing in Android apps using Kotlin. We covered the steps to capture and display an image, as well as how to apply a grayscale filter using TensorFlow Lite.


By following this guide, you can enhance your Android apps with powerful image processing capabilities. Remember to explore further and experiment with different image processing techniques to create stunning visual experiences in your applications.

