
Deploying Models at Scale on Azure - Part 3: Deploying and Scaling TensorFlow Models

30 Mar 2022 · CPOL · 10 min read
How to publish a TensorFlow model
This is Part 3 of a three-part series demonstrating how to take AI models built using various Python AI frameworks and deploy and scale them using Azure ML Managed Endpoints. In this article, we use online endpoints to publish a TensorFlow model. Then, we create an Azure Function as a public proxy to this endpoint. Finally, we explore configuration options for managed endpoints, such as autoscaling and blue-green deployment.
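To give a flavor of what consuming such a published model looks like, here is a minimal sketch of calling an Azure ML managed online endpoint from Python. The endpoint URL, API key, and payload shape below are placeholder assumptions for illustration, not values from this article; managed online endpoints accept a JSON POST authenticated with a key or token passed as a `Bearer` header.

```python
# Sketch: scoring against an Azure ML managed online endpoint.
# The URL, key, and payload format are placeholders (assumptions).
import json
import urllib.request


def build_scoring_request(endpoint_url: str, api_key: str, payload: dict) -> urllib.request.Request:
    """Build an authenticated POST request for a managed online endpoint."""
    body = json.dumps(payload).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        # Key-based auth: the endpoint key (or an AAD token) goes in a Bearer header.
        "Authorization": f"Bearer {api_key}",
    }
    return urllib.request.Request(endpoint_url, data=body, headers=headers, method="POST")


def score(endpoint_url: str, api_key: str, payload: dict) -> dict:
    """Send the payload to the endpoint and decode the JSON response."""
    req = build_scoring_request(endpoint_url, api_key, payload)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

An Azure Function acting as a public proxy would do essentially the same thing inside its HTTP trigger: read the client request body, forward it with the endpoint key (kept in application settings, never exposed to clients), and relay the response.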

This article is a sponsored article. Articles such as these are intended to provide you with information on products and services that we consider useful and of value to developers.


This article is part of the series 'Deploying Models at Scale'.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)


Written By
Architect
Poland
Jarek has two decades of professional experience in software architecture and development, machine learning, business and system analysis, logistics, and business process optimization.
He is passionate about creating software solutions with complex logic, especially with the application of AI.
