
Building an MLOps Model API

10 May 2021 · CPOL · 1 min read
In this article, we build an API that loads our model from the production registry to enable the prediction service described in the Google MLOps Maturity Model.

In this series of articles, we walk you through the process of applying CI/CD to AI tasks. You’ll end up with a functional pipeline that meets the requirements of level 2 in the Google MLOps Maturity Model. We assume you have some familiarity with Python, Deep Learning, Docker, DevOps, and Flask.

In the previous article, we discussed the unit testing step in our ML CI/CD pipeline. In this one, we’ll build the model API to support the prediction service.

The diagram below shows where we are in our project process.

Image 1

The code file structure is as follows:

Image 2

Most of the code in this article is virtually the same as in the previous one, so we’ll only look at the differences.

You can find the full code in this repository; the snippets shown below are condensed versions.

The file that orchestrates the program execution within the container looks as follows:

import tensorflow as tf
from tensorflow.keras.models import load_model
import jsonpickle
import data_utils, email_notifications
import sys
import os
from google.cloud import storage
import datetime
import numpy as np
import cv2
from flask import flash, Flask, Response, request, jsonify
import threading
import requests
import time

# If you're running this container locally, the API listens on port 5000
# at the container's IP address.

# Starting the Flask app
app = Flask(__name__)

# General variable declarations
model_name = 'best_model.hdf5'
bucket_name = 'automatictrainingcicd-aiplatform'
global model

@app.before_first_request
def before_first_request():
    def initialize_job():
        if len(tf.config.experimental.list_physical_devices('GPU')) > 0:
            tf.config.set_soft_device_placement(True)
        global model
        # Checking if there's any model saved at production on GCS
        model_gcs = data_utils.previous_model(bucket_name, model_name)
        # If a model exists at production, load it, and use it in the API
        if model_gcs[0] == True:
            model_gcs = data_utils.load_model(bucket_name, model_name)
            if model_gcs[0] == True:
                try:
                    model = load_model(model_name)
                except Exception as e:
                    email_notifications.exception('Something went wrong trying to load the production model. Exception: ' + str(e))
            else:
                email_notifications.exception('Something went wrong when trying to load the production model. Exception: ' + str(model_gcs[1]))
        if model_gcs[0] == False:
            email_notifications.send_update('There are no artifacts at the model registry. Check GCP for more information.')
        if model_gcs[0] == None:
            email_notifications.exception('Something went wrong when trying to check if the production model exists. Exception: ' + model_gcs[1] + '. Aborting execution.')
    thread = threading.Thread(target=initialize_job)
    thread.start()

@app.route('/init', methods=['GET','POST'])
def init():
    message = {'message': 'API initialized.'}
    response = jsonpickle.encode(message)
    return Response(response=response, status=200, mimetype="application/json")

@app.route('/', methods=['POST'])
def index():
    if request.method == 'POST':
        try:
            # Converting the string that contains the image to uint8
            image = np.frombuffer(request.data, np.uint8)
            image = image.reshape((128, 128, 3))
            image = [image]
            image = np.array(image)
            image = image.astype(np.float16)
            result = model.predict(image)
            result = np.argmax(result)
            message = {'message': '{}'.format(str(result))}
            return jsonify(message)
        except Exception as e:
            message = {'message': 'Error'}
            email_notifications.exception('Something went wrong when trying to make a prediction via the production API. Exception: ' + str(e) + '. Aborting execution.')
            return jsonify(message)
    message = {'message': 'Error. Please use this API in a proper manner.'}
    return jsonify(message)

def self_initialize():
    def initialization():
        global started
        started = False
        while started == False:
            try:
                # The address below assumes the API runs locally on port 5000
                server_response = requests.get('http://localhost:5000/init')
                if server_response.status_code == 200:
                    print('API has started successfully, quitting initialization job.')
                    started = True
            except Exception:
                print('API has not started. Still attempting to initialize it.')
            time.sleep(3)
    thread = threading.Thread(target=initialization)
    thread.start()

if __name__ == '__main__':
    self_initialize()
    app.run(host='0.0.0.0', port=5000)
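The preprocessing chain in `index()` expects the raw request body to be exactly a 128×128 RGB image serialized as bytes. A minimal sketch of that round trip (NumPy only; the `preprocess` helper name is our own, mirroring the steps in the handler above):

```python
import numpy as np

def preprocess(raw_bytes):
    # Decode the raw byte string into a uint8 array, reshape it to the
    # 128x128 RGB layout the model expects, add a batch dimension,
    # and cast to float16, exactly as the API's index() handler does.
    image = np.frombuffer(raw_bytes, np.uint8)
    image = image.reshape((128, 128, 3))
    batch = np.array([image]).astype(np.float16)
    return batch

# A client would serialize its image the same way before POSTing:
payload = np.zeros((128, 128, 3), dtype=np.uint8).tobytes()
batch = preprocess(payload)
print(batch.shape, batch.dtype)  # (1, 128, 128, 3) float16
```

Any payload whose length is not exactly 128 × 128 × 3 = 49,152 bytes will fail the reshape, which is what triggers the `except` branch and the error notification email in the handler.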

The file differs from its previous version only in the part that loads the model: it now reads from the production registry instead of the testing one. The differences are:

  • status = storage.Blob(bucket=bucket, name='{}/{}'.format('testing',model_filename)).exists(storage_client) becomes status = storage.Blob(bucket=bucket, name='{}/{}'.format('production',model_filename)).exists(storage_client)
  • blob1 = bucket.blob('{}/{}'.format('testing',model_filename)) becomes blob1 = bucket.blob('{}/{}'.format('production',model_filename))
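As a sketch of how that stage switch might look inside the utilities file (the GCS calls mirror the snippets above; the `model_blob_path` helper and the `stage` parameter are our own additions for illustration):

```python
def model_blob_path(stage, model_filename):
    # Build the registry path for a given stage ('testing' or 'production').
    return '{}/{}'.format(stage, model_filename)

def previous_model(bucket_name, model_filename, stage='production'):
    # Returns (True, None) if the model exists at the given stage,
    # (False, None) if it does not, and (None, error_message) on failure.
    try:
        # Imported lazily so the path helper above works without GCS installed.
        from google.cloud import storage
        storage_client = storage.Client()
        bucket = storage_client.get_bucket(bucket_name)
        status = storage.Blob(
            bucket=bucket,
            name=model_blob_path(stage, model_filename)).exists(storage_client)
        return (status, None)
    except Exception as e:
        return (None, str(e))
```

Parameterizing the stage this way keeps a single code path for both the testing and production containers; only the default argument changes between them.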


In our Dockerfile, replace

RUN git clone

with

RUN git clone

Once you have built and run the container locally, you should get a fully functional prediction service accessible through POST requests at the container's address on port 5000.
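As a usage sketch, a client can serialize an image to raw bytes and POST it to the service; the URL below assumes a default local deployment, and the helper names are our own:

```python
import numpy as np
import requests

API_URL = 'http://localhost:5000/'  # assumed address of the local container

def serialize_image(image):
    # Flatten a 128x128 RGB uint8 image into the raw byte string the API expects.
    return np.asarray(image, dtype=np.uint8).tobytes()

def predict(image):
    # POST the serialized image and return the predicted class index as a string.
    response = requests.post(
        API_URL,
        data=serialize_image(image),
        headers={'Content-Type': 'application/octet-stream'})
    return response.json()['message']
```

Calling `predict(np.zeros((128, 128, 3), dtype=np.uint8))` against a running container returns the class index chosen by `np.argmax` in the API handler.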

Next Steps

In the next series of articles, we’ll see how to chain the individual containers together into an actual pipeline, with some help from Kubernetes, Jenkins, and Google Cloud Platform. Stay tuned!

This article is part of the series 'Automatic Training, Testing, and Deployment of AI using CI/CD'.


This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

Written By
United States
Sergio Virahonda grew up in Venezuela, where he obtained a bachelor's degree in Telecommunications Engineering. He moved abroad 4 years ago and has since focused on building a meaningful data science career. He currently lives in Argentina, writing code as a freelance developer.
