Build and Deploy ML Models with Amazon SageMaker

A comprehensive tutorial on Amazon SageMaker for smooth deployment of your projects.

By Shivanshu Aggarwal

A 2020 survey by O’Reilly found that Amazon SageMaker is the second most used machine learning platform after TensorFlow.

With over 100,000 active users globally, Amazon SageMaker has quickly become a go-to tool for companies looking to incorporate machine learning into their products or services. One interesting example of Amazon SageMaker's production-level use comes from DeepVision, a startup that uses computer vision and machine learning to improve safety measures in the construction industry. They have developed a product called "SiteEye," which uses cameras and machine learning to monitor construction sites in real time, detecting potential hazards and alerting workers about them.



To build and deploy their machine learning models, DeepVision used Amazon SageMaker. They were able to use SageMaker's pre-built algorithms and libraries to quickly and easily train their ML models and then deploy them to the edge (i.e., on-site at the construction sites) using SageMaker's deployment options.

Because of Amazon SageMaker, DeepVision could bring their product to market faster and at a lower cost than if they had built their own machine learning infrastructure from scratch. Today, SiteEye is used at construction sites worldwide to help improve safety measures and reduce accidents.

But what makes Amazon SageMaker such an important part of applications like SiteEye? Read this article till the end to find out, as we delve into how SageMaker works and how it can be used to train, evaluate, and deploy ML models in your applications.

What is Amazon SageMaker?

Amazon SageMaker is a fully managed machine learning platform that allows data scientists and developers to build, train, and deploy machine learning models quickly and easily. It features an integrated Jupyter notebook for data exploration and analysis, as well as optimized machine learning algorithms for running against large data sets in a distributed environment.


SageMaker also supports building customized algorithms and frameworks and allows for flexible distributed training options. Models can be easily deployed into a secure and scalable environment through SageMaker Studio or the SageMaker Console.

Now, let us explore the various features that make Amazon SageMaker unique and help it stand out from other tools in the market.


Why use Amazon SageMaker?

There are numerous capabilities of Amazon SageMaker that any developer or data scientist can leverage. Some of them are:

  • Fully managed: SageMaker is a fully managed service, which means it is AWS's responsibility to set up and maintain the underlying infrastructure.

  • Easy to use: SageMaker provides a simple, intuitive interface that makes it easy to work with, even for someone new to machine learning (ML).

  • Wide range of algorithms and frameworks: SageMaker supports a variety of algorithms and frameworks, including TensorFlow, PyTorch, and scikit-learn, making it suitable for a wide range of machine learning tasks.

  • Scalability: SageMaker is highly scalable and allows you to easily train and deploy ML models at any scale.

  • Cost-effective: SageMaker can help reduce the cost of building ML models by up to 70%, making it an economical choice for organizations of all sizes.

  • Integration with other AWS services: SageMaker integrates seamlessly with other AWS services, such as Amazon Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2), making it easy to incorporate machine learning into existing workflows and infrastructure.

  • Time-saving: SageMaker automates many tasks by creating a pipeline that runs from data preparation through ML model training, which saves time and resources.

  • Customization: SageMaker supports custom algorithms alongside its built-in ones, giving you complete control over your ML models.

  • Monitoring and debugging: SageMaker provides tools for monitoring and debugging ML models, making it easy to identify and fix issues.


Let us now explore the SageMaker architecture to understand what makes Amazon SageMaker unique and so widely popular.

Deep Dive into Amazon SageMaker Architecture

The architecture below includes an Amazon S3 bucket that contains a processed dataset, and an Amazon SageMaker training job that trains on the dataset and creates a predictive model. The trained model is deployed to a SageMaker endpoint, which can be invoked through Amazon API Gateway. Amazon S3 is also used to store model artifacts and predictions, and Amazon CloudWatch collects the logs generated during training and by the endpoint.

Amazon SageMaker Architecture

Source: aws.amazon.com

Here's what valued users are saying about ProjectPro

I think that they are fantastic. I attended Yale and Stanford and have worked at Honeywell,Oracle, and Arthur Andersen(Accenture) in the US. I have taken Big Data and Hadoop,NoSQL, Spark, Hadoop Admin, Hadoop projects. I have been happy with every project. They have really brought me into the...

Ray han

Tech Leader | Stanford / Yale University

I come from Northwestern University, which is ranked 9th in the US. Although the high-quality academics at school taught me all the basics I needed, obtaining practical experience was a challenge. This is when I was introduced to ProjectPro, and the fact that I am on my second subscription year...

Abhinav Agarwal

Graduate Student at Northwestern University

Not sure what you are looking for?

View All Projects

Amazon SageMaker Tutorial

The first step in the model training process is preparing input data in an appropriate format and structure. Once data is prepared, various machine learning models can be trained on it and then evaluated to ensure they are performing correctly and do not suffer from underfitting or overfitting.

After evaluating the model, it can be deployed for production use to make predictions on new data. The model's performance should also be monitored to ensure it is functioning correctly and has not degraded due to changes in the data or in the underlying business conditions.

Amazon SageMaker Project Pipeline

 

How to Prepare Data using Amazon SageMaker?

Amazon SageMaker provides various tools and features to help prepare data for machine learning tasks. It provides Processing Jobs to prepare the data, and data scientists or Python developers can use Boto3, the AWS SDK for Python, to access AWS services like Glue, S3, and EC2 to perform ETL operations.
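
For instance, here is a minimal sketch of launching a Processing Job with the SageMaker Python SDK; the script name, S3 paths, framework version, and instance type are assumptions for illustration:

import sagemaker
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.sklearn.processing import SKLearnProcessor

role = sagemaker.get_execution_role()  # IAM role with SageMaker permissions

# Managed processor that runs a scikit-learn preprocessing script
processor = SKLearnProcessor(
    framework_version="1.2-1",     # assumed; use a version your account supports
    role=role,
    instance_type="ml.m5.xlarge",
    instance_count=1,
)

# "preprocessing.py" and the bucket paths are placeholders for this sketch
processor.run(
    code="preprocessing.py",
    inputs=[ProcessingInput(source="s3://my-bucket/raw/",
                            destination="/opt/ml/processing/input")],
    outputs=[ProcessingOutput(source="/opt/ml/processing/output",
                              destination="s3://my-bucket/processed/")],
)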

Additionally, SageMaker Studio includes a Data Wrangler feature that offers an end-to-end solution to import, prepare, transform, and analyze data.

Amazon SageMaker Data Wrangler

Source: aws.amazon.com

Data Wrangler also provides the following core functionalities:

  • Import – Connect to and import data from Amazon S3, Amazon Athena, Amazon Redshift, Snowflake, and Databricks.

  • Data Flow – A data flow allows you to specify a series of steps for preparing data for machine learning. This includes combining data from various sources, applying various transformations to the data, and creating a workflow for data preparation that can be integrated into an ML pipeline.

  • Transform – Clean and transform the dataset by applying standard techniques such as formatting string, vector, and numeric data and adding features through techniques such as text and date/time embedding and categorical encoding.

  • Generate Data Insights – Automatically verify data quality and detect abnormalities in the data with Data Wrangler Data Insights and Quality Report.

  • Analyze – Data Wrangler allows you to analyze the features in your dataset at any stage of the data preparation process. You can use the built-in data visualization tools such as scatter plots and histograms to visualize your data and use data analysis tools like target leakage analysis and quick modelling to understand how the features in your dataset are correlated.

  • Export – Export the data preparation workflow to a different location like:

      • Amazon S3 bucket

      • Amazon SageMaker Model Building Pipelines –  Data can be exported directly to the model building pipelines.

      • Amazon SageMaker Feature Store – To store the features and their data in a centralized store.

      • Python script – Store the data and its transformations in a Python script for custom workflows.


How to Train a Model using Amazon SageMaker?

Once data is prepared, the next step is building machine learning models and training them. Depending on the use case, algorithms are selected for training, and the resulting models are then compared and evaluated. The diagram below illustrates the process of training and deploying a model using Amazon SageMaker:

Training a model using Amazon SageMaker

Source: aws.amazon.com

The area labelled Amazon SageMaker contains the two components of SageMaker: model training and model deployment.

To train a model using Amazon SageMaker, a training job has to be created, which includes several key pieces of information:

  1. The location of training data: This can be an S3 bucket or a local file system that is accessible to the SageMaker training instances.

  2. The computation resources to be used for training: SageMaker provides a variety of options for training instances, including CPU-only, GPU-accelerated, and clusters of instances with both CPU and GPU capabilities. 

  3. The location to store the output of the training job: This is typically an S3 bucket, which contains the trained model artifacts as well as any output generated by the training code.

  4. The training algorithm: SageMaker provides a number of built-in algorithms and pre-trained models that can be used out of the box. Alternatively, a custom algorithm or model can be used by packaging it as a Docker container and specifying the registry path in the training job.

The following options are available for a training algorithm:

  1. Utilize an algorithm provided by SageMaker – SageMaker offers a range of built-in training algorithms and pre-trained models that may meet your needs and provide a quick solution for model training.

  2. Use SageMaker Debugger – It allows inspection of training parameters and data during the training process when working with TensorFlow, PyTorch, Apache MXNet, or the XGBoost algorithm. It also automatically detects and alerts users to common errors such as extreme parameter values.

  3. Utilize Apache Spark with SageMaker – SageMaker offers a library that can be used within Apache Spark to train models with SageMaker. The library's usage is similar to that of Apache Spark MLlib.

  4. Submit custom code for training with deep learning frameworks – Custom Python code that uses TensorFlow, PyTorch, or Apache MXNet can be used for model training.

  5. Use own custom algorithms – Code can be combined and packaged into a Docker image, which can then be specified in the registry path of a SageMaker ‘CreateTrainingJob’ API call.

The training job in SageMaker launches ML compute instances and utilizes the specified training code and dataset to train the model. The resulting model artifacts and other outputs are saved in the designated S3 bucket.
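
To make those four pieces of information concrete, here is a hedged sketch using the generic Estimator from the SageMaker Python SDK; the algorithm version, bucket paths, and instance type are assumptions:

import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
role = sagemaker.get_execution_role()

# 4. The training algorithm: a built-in algorithm container in this region
image_uri = sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.7-1")

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,                       # 2. compute resources for training
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/output/",   # 3. where model artifacts are stored
    sagemaker_session=session,
)

# 1. The location of the training data
estimator.fit({"train": TrainingInput("s3://my-bucket/train/", content_type="text/csv")})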

How to deploy a model using Amazon SageMaker?

After the machine learning model is trained, it can be deployed to an Amazon SageMaker endpoint to serve predictions in several ways, depending on the use case (a short deployment sketch follows the list):

  • Real-time hosting services – For persistent, real-time endpoints that make a single prediction at a time.

  • Serverless Inference – For workloads that have idle periods between traffic spurts and can tolerate cold starts.

  • Asynchronous Inference – For requests with large payload sizes up to 1GB, long processing times, and near real-time latency requirements.

  • Batch transform – If predictions are required for an entire dataset.
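
As an illustration of the first two options, a hedged sketch with the SageMaker Python SDK might look as follows, continuing the estimator from the training sketch above (instance type, memory size, and concurrency are assumptions):

from sagemaker.serverless import ServerlessInferenceConfig

# Real-time hosting: a persistent endpoint on dedicated instances
realtime_predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)

# Serverless inference: no instances to manage, but cold starts are possible
serverless_predictor = estimator.deploy(
    serverless_inference_config=ServerlessInferenceConfig(
        memory_size_in_mb=2048,
        max_concurrency=5,
    )
)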

SageMaker also provides capabilities to manage resources and optimize inference performance:

  • SageMaker Edge Manager – A service that allows for optimization, security, monitoring, and maintenance of machine learning models deployed on a variety of edge devices, including smart cameras, robots, personal computers, and mobile devices.

  • Amazon SageMaker Neo – A machine learning model compiler that optimizes models to run faster, with a smaller footprint, and on a broader set of platforms. It enables developers to train models once and deploy them anywhere in the cloud or at the edge. Neo can optimize models built in MXNet, PyTorch, and TensorFlow, and supports models trained using other frameworks as well. It can optimize models for a variety of hardware platforms, including Arm, Intel, and Qualcomm architectures.

How to Validate a Model using Amazon SageMaker?

Evaluating a model's performance and accuracy is a crucial step after training. It helps determine whether the trained model meets the desired business goals. It is common for data science teams to build multiple models using different methods and evaluate each one individually.

For instance, different business rules might be applied to each model and various measures can be used to assess the suitability of each model. Metrics like sensitivity versus specificity may also be taken into account while aiming to track ML models’ performance.

The model can be evaluated on historical data as well as new live data.

  1. Offline testing — The trained model can be deployed to an alpha endpoint, and historical data can be used to send inference requests to it. The requests can be sent from a Jupyter notebook in the SageMaker notebook instance, using either the AWS SDK for Python (Boto3) or the high-level Python library provided by SageMaker. This testing is done before deploying the model to the production SageMaker endpoint.

  2. Online testing with live data — Evaluating the model's performance with live data can be done through A/B testing with production variants in SageMaker. These variants are models that share the same inference code and are deployed on the same endpoint. A small percentage of live traffic, such as 20%, can be routed to the model variant being evaluated. Once the model's performance is satisfactory, all traffic can be directed to the updated model.


For example: In order to test multiple models by distributing traffic between them, specify the percentage of traffic that should be routed to each model by assigning a weight to each production variant in the endpoint configuration.
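
A minimal Boto3 sketch of such an endpoint configuration could look like this; the model and configuration names are placeholders, and the 80/20 weights mirror the example above:

import boto3

sm = boto3.client("sagemaker")

# Two production variants behind one endpoint; weights split live traffic 80/20
sm.create_endpoint_config(
    EndpointConfigName="churn-ab-test-config",
    ProductionVariants=[
        {
            "VariantName": "current-model",
            "ModelName": "churn-model-v1",   # placeholder model name
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 0.8,     # 80% of traffic
        },
        {
            "VariantName": "candidate-model",
            "ModelName": "churn-model-v2",   # placeholder model name
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 0.2,     # 20% of traffic
        },
    ],
)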

Model Validation using Amazon SageMaker

Source: aws.amazon.com

How to Monitor a Model in Production using Amazon SageMaker?

Amazon SageMaker Model Monitor can continuously monitor the quality of machine learning models in real time after deployment to a production environment. This includes setting up alerts for deviations in model quality, such as data drift and anomalies. Model Monitor uses Amazon CloudWatch Logs to collect and store log files, which can be directed to an Amazon S3 bucket of your choice. By proactively detecting and addressing deviations in model quality, it is possible to maintain and improve the performance of deployed machine learning models.

For example: Model Monitor can be used to monitor a batch transform job as well as a real-time endpoint. For a batch transform, instead of receiving requests to an endpoint and tracking the predictions, Model Monitor will monitor inference inputs and outputs.
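
As a rough sketch, a data-quality monitoring schedule for a real-time endpoint could be set up with the SageMaker Python SDK as follows; the S3 paths and schedule name are assumptions, and the role and predictor are carried over from the earlier sketches:

from sagemaker.model_monitor import CronExpressionGenerator, DefaultModelMonitor
from sagemaker.model_monitor.dataset_format import DatasetFormat

monitor = DefaultModelMonitor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

# Baseline statistics and constraints computed from the training data
monitor.suggest_baseline(
    baseline_dataset="s3://my-bucket/train/train.csv",
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://my-bucket/monitoring/baseline/",
)

# Hourly job that compares live endpoint traffic against the baseline
monitor.create_monitoring_schedule(
    monitor_schedule_name="churn-data-drift-monitor",
    endpoint_input=realtime_predictor.endpoint_name,
    output_s3_uri="s3://my-bucket/monitoring/reports/",
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)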

Monitoring a model in production using Amazon SageMaker

Source: aws.amazon.com


Amazon SageMaker Tutorial in Action with Code

Let’s take a familiar use case: predicting customer churn probability.

Why is Predicting Customer Churn an Important Use Case in the Industry?

It is important for businesses to understand why and when a customer might stop using their services or switch to a competitor, in order to take proactive measures to retain them. This is due to the high cost of customer acquisition and the potential cost of losing a customer to a competitor. Once the ML model identifies customers likely to churn, one way to encourage them to stay is to provide incentives or offer upgrades to new packages.

Steps to Create a Customer Churn Prediction Model

Initial Setup:

  1. Create a notebook instance and select the ‘ml.m4.xlarge’ instance type.

Creating a Notebook Instance using Amazon SageMaker

 

  2. Specify the Amazon S3 bucket and prefix for storing training and model data within the same region.

  3. Provide the ARN of the IAM role that gives training and hosting access to the data.

  4. Run the following code to start a SageMaker session; it first imports the boto3 and sagemaker libraries.
    Starting a SageMaker session
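
A minimal version of that setup cell might look like the following; the bucket name and prefix are placeholders:

import boto3
import sagemaker

# Start a SageMaker session and resolve the notebook's execution role
session = sagemaker.Session()
role = sagemaker.get_execution_role()
region = session.boto_region_name

# S3 bucket and prefix for training and model data (placeholders)
bucket = "my-churn-bucket"
prefix = "sagemaker/churn-prediction"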

 

  5. Import the other Python libraries needed to complete the various tasks.
    Importing the required libraries
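
A typical imports cell for the rest of this walkthrough might look like this (a sketch, not the article's original screenshot):

import numpy as np                               # numerical operations
import pandas as pd                              # tabular data handling
import matplotlib.pyplot as plt                  # plots for data exploration
from sagemaker.inputs import TrainingInput       # pointers to data in S3
from sagemaker.serializers import CSVSerializer  # CSV requests to the endpoint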

 

Data Reading and Processing:

The ML model is trained on historical records. In this example, we use a mobile operator's historical records of customers who churned as well as customers who continued using the service. Once the ML algorithm has been trained on both classes (churn and non-churn), the model is evaluated by passing it random, unseen customer data and having it predict whether a given customer is going to churn.

  1. Download the data from S3 using Boto3 and use it in the notebook instance.
    Boto3 is the AWS SDK for Python, which allows code to use AWS services like S3 and EC2.
    Downloading data from S3 using boto3
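
A sketch of that download cell, assuming the dataset has already been staged under the bucket and prefix defined earlier:

# Download the churn dataset from S3 to the notebook instance
s3 = boto3.client("s3")
s3.download_file(bucket, f"{prefix}/churn.txt", "churn.txt")

# Load it into a DataFrame and take a first look
churn = pd.read_csv("churn.txt")
churn.head()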

 

This is a relatively small dataset, with only 5,000 records and 21 attributes.
Sample Dataset for the Amazon SageMaker project

 

"Churn?" is the binary target attribute that the ML model will try to predict.

  2. Data exploration is generally carried out next; based on it, existing features might be type-cast, normalized, encoded, or even dropped, and new features might be created.
    Data Exploration using Amazon SageMaker
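
A sketch of typical exploration steps for this dataset; the dropped and typecast columns are assumptions based on a standard telecom churn dataset:

# Class balance of the target and summary statistics of the features
churn["Churn?"].value_counts()
churn.describe()

# Histograms of numeric features to spot skew and outliers
churn.hist(figsize=(16, 10))
plt.show()

# Drop an identifier column that will not generalize, and treat the
# area code as categorical rather than numeric (assumed columns)
churn = churn.drop("Phone", axis=1)
churn["Area Code"] = churn["Area Code"].astype(object)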

  3. We are using the XGBoost algorithm, which uses gradient-boosted trees to provide accurate predictions. SageMaker's implementation of XGBoost can train on data stored in either CSV or LibSVM format, with the predictor variable in the first column and no header row. It is necessary to convert categorical features into numeric ones before training.
    Data Preparation
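
A sketch of that preparation step; the 'Churn?_True.'/'Churn?_False.' names are how get_dummies labels the target in this dataset and may differ in yours:

# One-hot encode categorical features; XGBoost requires numeric inputs
model_data = pd.get_dummies(churn)

# SageMaker XGBoost CSV convention: target in the first column, no header
model_data = pd.concat(
    [model_data["Churn?_True."],
     model_data.drop(["Churn?_False.", "Churn?_True."], axis=1)],
    axis=1,
)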

 

  4. Now the data will be divided into training, validation, and test sets. This helps prevent overfitting and allows testing the model's performance on data it has not already seen. Once done, the splits can be uploaded to S3.

Splitting the Dataset
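
A sketch of the split-and-upload step, assuming a 70/20/10 split and the bucket and prefix from earlier:

# Shuffle, then split into 70% train, 20% validation, 10% test
train_data, validation_data, test_data = np.split(
    model_data.sample(frac=1, random_state=1729),
    [int(0.7 * len(model_data)), int(0.9 * len(model_data))],
)

# Write CSVs without headers or indices, as the algorithm expects
train_data.to_csv("train.csv", header=False, index=False)
validation_data.to_csv("validation.csv", header=False, index=False)

# Upload the prepared splits to S3
s3_resource = boto3.Session().resource("s3")
s3_resource.Bucket(bucket).Object(f"{prefix}/train/train.csv").upload_file("train.csv")
s3_resource.Bucket(bucket).Object(f"{prefix}/validation/validation.csv").upload_file("validation.csv")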

Model Training:

  1. To begin training, the location of the XGBoost algorithm container must be specified. Then, because the CSV file format is being used, TrainingInput objects must be created for the training function to use as pointers to the files in S3.
    Training the model
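
A sketch of that step; the container version is an assumption:

# Resolve the region-specific XGBoost container image
container = sagemaker.image_uris.retrieve("xgboost", region, version="1.7-1")

# Pointers to the CSV training and validation data in S3
s3_input_train = TrainingInput(f"s3://{bucket}/{prefix}/train/", content_type="text/csv")
s3_input_validation = TrainingInput(f"s3://{bucket}/{prefix}/validation/", content_type="text/csv")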

 

  2. Other parameters, like the type of training instances and the XGBoost hyperparameters, are specified when setting up training.
    Specifying the hyperparameters
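
A sketch of the estimator with typical XGBoost hyperparameters for binary classification; the specific values are assumptions, not tuned results:

xgb = sagemaker.estimator.Estimator(
    container,
    role,
    instance_count=1,
    instance_type="ml.m4.xlarge",    # training instance type
    output_path=f"s3://{bucket}/{prefix}/output",
    sagemaker_session=session,
)

# Hyperparameters for a binary classification objective (assumed values)
xgb.set_hyperparameters(
    max_depth=5,
    eta=0.2,
    gamma=4,
    min_child_weight=6,
    subsample=0.8,
    objective="binary:logistic",
    num_round=100,
)

# Launch the training job on the prepared channels
xgb.fit({"train": s3_input_train, "validation": s3_input_validation})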

Model Hosting:

Once the model is trained, it can be deployed to a hosted endpoint so that real-time predictions can be generated.

Model Hosting
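
A sketch of the deployment cell; the instance type is an assumption:

# Deploy the trained model to a persistent real-time endpoint
xgb_predictor = xgb.deploy(
    initial_instance_count=1,
    instance_type="ml.m4.xlarge",
    serializer=CSVSerializer(),   # send request rows as CSV
)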

Model Evaluation:

Model validation can be done on historical data before hosting the model, or it can be done on live data after the model has been hosted.

  1. Once a hosted endpoint is running, real-time predictions can be obtained from the model by making an HTTP POST request.
    Model Evaluation
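
A sketch of scoring the held-out test set against the endpoint (the predictor issues the HTTP POST under the hood); the target column name follows the earlier preparation sketch, and very large datasets should be sent in batches to stay under payload limits:

# Features only -- drop the target column before sending rows for scoring
test_features = test_data.drop("Churn?_True.", axis=1).to_numpy()

# The endpoint returns churn probabilities as delimited text
raw = xgb_predictor.predict(test_features).decode("utf-8")
predictions = np.array(raw.replace("\n", ",").strip(",").split(","), dtype=float)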

 

  2. Once the predictions are generated, the probability cutoff can be adjusted to achieve a clean separation between the two classes. Further, performance metrics like Precision, Recall, and F1 score can also be calculated, as sketched below.
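
A sketch of those metrics with scikit-learn, assuming the predictions and test split from the previous steps and an illustrative 0.5 cutoff:

from sklearn.metrics import f1_score, precision_score, recall_score

# Apply the probability cutoff, then compare against the true labels
y_true = test_data["Churn?_True."].to_numpy()
y_pred = (predictions > 0.5).astype(int)

print("Precision:", precision_score(y_true, y_pred))
print("Recall:   ", recall_score(y_true, y_pred))
print("F1 score: ", f1_score(y_true, y_pred))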


Amazon SageMaker Examples

By using Amazon SageMaker:

  • Airbnb can improve the accuracy of its revenue management system. 

  • Intuit, the financial software company, developed a model for detecting fraudulent tax returns and improving the accuracy of its TurboTax software.

  • Lyft, the ride-sharing company, can improve the accuracy of its dynamic pricing model. 

  • Netflix uses AWS SageMaker to predict user preferences and improve recommendation algorithms for its streaming service.

  • The New York Times can personalize its news recommendations and improve its ad targeting.

What’s the point of learning so much about SageMaker if you don’t get to explore its practical applications? None, right? So, don’t miss this opportunity to gain access to end-to-end solved projects in Data Science and Big Data by ProjectPro.

FAQs

When did Amazon launch SageMaker?

Amazon launched SageMaker in November 2017.

Can SageMaker be used for ETL?

Yes, SageMaker can be used for ETL operations.

Is SageMaker SaaS or PaaS?

SageMaker is a PaaS offering from Amazon Web Services.

 


About the Author

Shivanshu Aggarwal

Has around 9 years of experience in Data Science and Analytics. Skilled in leading teams and projects. Holds an MBA in Business Analytics from NMIMS. Experienced in solving business problems using disciplines such as Machine Learning, Deep Learning, Reinforcement Learning, and Operational Research.
