
How to use Core ML in Swift iOS Apps

Core ML is a framework provided by Apple that allows developers to integrate machine learning models into their iOS applications effortlessly. By leveraging the power of Core ML, developers can enhance their apps with intelligent features like image recognition, natural language processing, and more.


In this blog, we will explore the potential use cases of Core ML in Swift iOS apps and delve into the specific use case of image classification.


Use Cases Where Core ML Fits In

  1. Image Recognition: Core ML enables the integration of pre-trained image recognition models into iOS apps. This can be utilized in applications such as augmented reality, object detection, and image classification.

  2. Natural Language Processing: Core ML can process and analyze natural language, allowing developers to build applications with features like sentiment analysis, language translation, chatbots, and speech recognition.

  3. Recommendation Systems: By leveraging Core ML, developers can build recommendation systems that provide personalized content, product recommendations, and suggestions based on user preferences and behavior.

  4. Anomaly Detection: Core ML can be used to detect anomalies in data, enabling developers to build applications that identify unusual patterns or outliers in various domains such as fraud detection, network monitoring, and predictive maintenance.

  5. Audio and Sound Analysis: Core ML's capabilities can be harnessed to analyze and process audio, enabling applications like voice recognition, speech synthesis, and music classification.

Using Core ML for Image Classification


To showcase how to use Core ML, we'll build an iOS app that uses Core ML to classify images. We'll leverage a pre-trained model called MobileNetV2, which can identify objects in images.


MobileNetV2 is a convolutional neural network architecture that is designed for mobile devices. It is based on an inverted residual structure, which allows it to achieve high performance while keeping the number of parameters and computational complexity low.


Let's get started!


Step 1: Set Up the Project


To start integrating Core ML into your Swift iOS app, follow these steps:

  1. Launch Xcode and create a new project: Open Xcode and select "Create a new Xcode project" from the welcome screen, or go to File → New → Project. Choose the appropriate template for your app (e.g., the iOS "App" template, called "Single View App" in older versions of Xcode) and click "Next."

  2. Configure project details: Provide the necessary details such as the product name, organization name, and organization identifier for your app, and select Swift as the language. Click "Next."

  3. Choose project options: On the next screen, you can select additional options based on your project requirements. Ensure that the "Use Core Data," "Include Unit Tests," and "Include UI Tests" checkboxes are unchecked for this particular example. Click "Next."

  4. Choose a location to save the project: Select a destination folder where you want to save your project and click "Create."

  5. Import the Core ML framework: In most projects this step is optional, because Xcode links system frameworks automatically once you add import CoreML to a Swift file. If you want to link it explicitly, select your project in Xcode's project navigator, select your target under "Targets," open the "General" tab, scroll down to the "Frameworks, Libraries, and Embedded Content" section, click the "+" button, search for "CoreML.framework," select it from the list, and click "Add."

  6. Add the MobileNetV2 model: To use the MobileNetV2 model for image classification, you need to add the model file to your project. Download the MobileNetV2.mlmodel file from Apple's Core ML models page (developer.apple.com/machine-learning/models), create and train your own model with Create ML, or convert an existing model (for example, a TensorFlow model) with coremltools. Once you have the model file, drag and drop it into your Xcode project's file navigator. Ensure that the model file is added to your app's target by checking the checkbox next to your target name in the "Target Membership" section of the File Inspector panel.

  7. Check Core ML compatibility: Verify that the Core ML model you're using is compatible with the minimum iOS version your app targets. You can find compatibility information in the model's documentation or from the source where you obtained it; a quick runtime check is also sketched after these steps.

With these steps completed, you have set up your Xcode project to integrate Core ML and are ready to move on to implementing the image classification logic using the MobileNetV2 model.
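
To make step 7 concrete: when you build the app, Xcode compiles MobileNetV2.mlmodel into a MobileNetV2.mlmodelc bundle inside the app. Below is a minimal sketch of a runtime check (the function name is just illustrative) that the compiled model is present in the bundle and loads on the current device:

import CoreML
import Foundation

// Returns true if the compiled MobileNetV2 model is in the app bundle and loads on this device.
func bundledModelLoads() -> Bool {
    guard let url = Bundle.main.url(forResource: "MobileNetV2", withExtension: "mlmodelc") else {
        return false // the model file was not added to the app target
    }
    return (try? MLModel(contentsOf: url)) != nil
}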


Step 2: Add the Core ML Model


Drag and drop the MobileNetV2.mlmodel file into your Xcode project (if you did not already do so in Step 1) and ensure that the model file is added to your app's target. Once it is added, Xcode automatically generates a Swift wrapper class named MobileNetV2 that you can use to load and query the model.
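
As a minimal sketch, assuming Apple's MobileNetV2.mlmodel, you can instantiate the generated wrapper and inspect the inputs and outputs it expects; the names and sizes mentioned in the comments may differ if you use a different model:

import CoreML

// Instantiate the Xcode-generated wrapper for the bundled model.
if let mobileNet = try? MobileNetV2(configuration: MLModelConfiguration()) {
    // Inspect the underlying MLModel's interface.
    let description = mobileNet.model.modelDescription
    print(description.inputDescriptionsByName)   // e.g. an "image" input (224x224 color image)
    print(description.outputDescriptionsByName)  // e.g. "classLabel" and "classLabelProbs" outputs
}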


Step 3: Create the Image Classifier


In your project, create a new Swift file containing a class called ImageClassifier. Import the Core ML and Vision frameworks, define an error type for failures, and declare a property that wraps the ML model for use with Vision:

import CoreML
import UIKit
import Vision

// Errors surfaced by the image classifier.
enum ImageClassifierError: Error {
    case invalidImage
    case classificationFailed
}

class ImageClassifier {
    // Wrap the Xcode-generated MobileNetV2 class in a Vision model (force-try for brevity).
    private let model = try! VNCoreMLModel(for: MobileNetV2(configuration: MLModelConfiguration()).model)

    // Image classification logic
}
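
A quick note on why the model is wrapped in VNCoreMLModel instead of calling the generated MobileNetV2 prediction API directly: going through Vision lets the framework handle image preprocessing for you. Vision scales and crops the input image to the size the model expects and converts it into the pixel buffer format Core ML requires, and you can tune that behavior through the request's imageCropAndScaleOption if needed.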

Step 4: Implement the Image Classification Logic


Inside the ImageClassifier class, add a method called classifyImage that takes a UIImage as input and delivers the classification results through a completion handler:

func classifyImage(_ image: UIImage, completion: @escaping (Result<[VNClassificationObservation], Error>) -> Void) {
    guard let ciImage = CIImage(image: image) else {
        completion(.failure("Failed to convert image to CIImage"))
        return
    }
    
    let imageRequestHandler = VNImageRequestHandler(ciImage: ciImage)
    
    do {
        try imageRequestHandler.perform([createClassificationRequest(completion: completion)])
    } catch {
        completion(.failure(error))
    }
}

private func createClassificationRequest(completion: @escaping (Result<[VNClassificationObservation], Error>) -> Void) -> VNCoreMLRequest {
    let request = VNCoreMLRequest(model: model) { request, error in
        if let error = error {
            completion(.failure(error))
            return
        }
        guard let classifications = request.results as? [VNClassificationObservation] else {
            completion(.failure(ImageClassifierError.classificationFailed))
            return
        }
        
        completion(.success(classifications))
    }
    
    return request
}
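
One caveat with the code above: creating a CIImage from a UIImage drops the photo's orientation, which can reduce accuracy for images taken in portrait. VNImageRequestHandler also accepts an orientation parameter; below is a minimal sketch of a helper (not part of the original example) that maps UIImage.Orientation to the CGImagePropertyOrientation Vision expects:

// Maps a UIImage orientation onto the corresponding CGImagePropertyOrientation.
private func cgOrientation(from orientation: UIImage.Orientation) -> CGImagePropertyOrientation {
    switch orientation {
    case .up: return .up
    case .down: return .down
    case .left: return .left
    case .right: return .right
    case .upMirrored: return .upMirrored
    case .downMirrored: return .downMirrored
    case .leftMirrored: return .leftMirrored
    case .rightMirrored: return .rightMirrored
    @unknown default: return .up
    }
}

With this helper in place, the handler in classifyImage could be created as VNImageRequestHandler(ciImage: ciImage, orientation: cgOrientation(from: image.imageOrientation)) instead.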

Step 5: Integrate the Image Classifier in your App


In your app's view controller or any other appropriate place, create an instance of the ImageClassifier class and call the classifyImage method to classify an image:

let imageClassifier = ImageClassifier()

func classify(image: UIImage) {
    imageClassifier.classifyImage(image) { result in
        switch result {
        case .success(let classifications):
            // Handle the classification results
            print(classifications)
        case .failure(let error):
            // Handle the error
            print(error)
        }
    }
}
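
As a minimal sketch of handling the results, assuming imageClassifier is a property of the view controller as declared above and the view controller has a UILabel outlet named resultLabel (a hypothetical name), the classify(image:) method could be expanded as follows. Vision's perform(_:) runs synchronously, so the work is dispatched to a background queue, and the UI update hops back to the main queue; the highest-confidence observation is shown with its confidence as a percentage:

func classify(image: UIImage) {
    // Vision's perform(_:) is synchronous, so keep it off the main thread.
    DispatchQueue.global(qos: .userInitiated).async { [weak self] in
        self?.imageClassifier.classifyImage(image) { result in
            DispatchQueue.main.async {
                switch result {
                case .success(let classifications):
                    // Show the highest-confidence prediction.
                    if let top = classifications.max(by: { $0.confidence < $1.confidence }) {
                        let percent = Int(top.confidence * 100)
                        self?.resultLabel.text = "\(top.identifier) (\(percent)%)"
                    }
                case .failure(let error):
                    self?.resultLabel.text = "Classification failed: \(error.localizedDescription)"
                }
            }
        }
    }
}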

Conclusion


Core ML empowers iOS developers to incorporate machine learning capabilities seamlessly into their Swift apps. In this blog, we explored the potential use cases of Core ML and focused on image classification as a specific example. By following the steps outlined above, you can integrate a pre-trained Core ML model, such as MobileNetV2, into your app and perform image classification with ease. Core ML opens up a world of possibilities for creating intelligent and engaging applications that cater to the needs of modern users.


Happy coding!
