Quickly Deploying Gradio on AWS

You’ve built a demo for your machine learning model with Gradio. Now, deploy it with a permanent link on an AWS instance.

Abubakar Abid
5 min read · Jan 25, 2021

If you’ve built a machine learning model that works well, you might want to demo it, so that others can try it out. The Gradio library makes it really easy to create a shareable GUI for your model, as long as the model is running on your local computer. But what if you need a long-term hosted solution?

In this tutorial, I’m going to show you, step by step, how to create and deploy your machine learning model and UI on an AWS EC2 instance. This is useful if you need a permanent link for your model interface to share within your company or organization.

I’m going to use this image denoising model and the interface I created for it using Gradio as an example:

A state-of-the-art image denoising model, Noise2Same, along with its Gradio interface. You can try out the interface here: https://www.gradio.app/g/Noise2Same

Getting your model up and running is a quick process, so let’s get started!

The Gradio Preliminaries

I’m assuming that you’re already somewhat familiar with the Gradio library (https://github.com/gradio-app/gradio). If not, this quick start is a good place to begin. As a refresher, gradio is a Python library that lets you create UIs around your machine learning model by specifying three things: a Python function, an input component, and an output component.

The AWS EC2 Preliminaries

I’m also assuming that you already have an EC2 instance running Linux and Python3 and you know how to SSH into the machine to run commands on it. If not, here’s a good tutorial to get started with an EC2 instance and a tutorial to install Python3 on your instance.

Step 1: Create a Gradio App File for Your Model

The first step is to create a file that launches the Gradio GUI for your model. Give it a simple name like demo.py. This file should:

  • Load the model
  • Define a prediction function using the model
  • Launch a Gradio interface with the prediction function and appropriate UI components

Here’s an example of such a file for the Noise2Same repo, which contains a state-of-the-art image denoising model:

You should run this file locally (for example, by running python3 demo.py) to make sure that the Gradio app launches successfully.

Step 2: Make the One Change Needed to Run Gradio on an EC2 Instance

You’re going to need to make just one change so that the Gradio app runs successfully on the EC2 instance: pass a server_name argument to the launch() method and set it to "0.0.0.0". So the modified code becomes:

This allows your Gradio app to be accessible to users from other computers (i.e. across the web).

Step 3: Publish Your Model as a GitHub Repo and Include a requirements.txt File

Once your code is ready, create a GitHub repository with all of the necessary files, including the Gradio app file that you just created. You can use a different version control system if you’d like; we just need to get the code onto the EC2 instance.

We also need to make sure that we designate all of the Python libraries that we will need as dependencies, along with their versions if necessary. So create a requirements.txt file in the root of your repo. Don’t forget gradio. In the case of the Noise2Same repo, this is what the requirements look like:
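The exact dependency list from the Noise2Same repo isn’t reproduced here, but as an illustration, a requirements.txt for a Gradio app generally looks something like this (your own packages and version pins will differ):

```
# One dependency per line; pin versions where reproducibility matters.
gradio
numpy==1.19.5
Pillow
```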

Once you have added these files to your GitHub project, push your updated repo to GitHub.

Step 4: SSH into Your EC2 Instance, Clone the Repo, and Install the Dependencies

SSH into your EC2 instance, and then do the following:

  • (Recommended) create a new tmux session by typing tmux (this will allow the app to continue running even after you end your connection)
  • In the tmux session, clone the repository for your Gradio application and cd into it
  • Run pip3 install -r requirements.txt to install all of the requirements for your Python project

Step 5: Run the Gradio App

Finally, run your Gradio application! For example, if your Gradio file is called demo.py, then simply run the following command:

python3 demo.py

Step 6: Allow Web Traffic to the Port

Now, if everything ran smoothly, you should get something like the following output:

Expected output when running your Gradio script
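With the server_name setting from Step 2, recent Gradio versions print something along these lines (the exact wording varies by version):

```
Running on local URL:  http://0.0.0.0:7860
```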

This tells you that in our case, the Gradio app is running on port 7860. We now need to allow people to visit that port on our machine. To do that, we edit the security group associated with our instance:

  • Navigate to Network & Security settings on the left-hand navigation, and click Security Groups.
  • Find the security group connected to your instance
  • Choose Inbound rules and click Edit inbound rules
  • Click the Add rule button
  • For the new rule, under Type, select “Custom TCP.” In Port Range, type the port number (in our case 7860).
  • In the Source field, enter the range of IP addresses that should be able to access the machine. If you select My IP, only your current IP address will be able to reach the interface. If you select Anywhere, anyone on the web will be able to access it. You can also specify a custom range of IP addresses if you’d like finer control over access.
  • Then click Save rules

And that’s it! Your Gradio interface should now be accessible by simply typing the public IP address of your machine followed by a colon and the port number. In my case, that was: 35.160.54.199:7860.

Hope that helps! Although this solution lets you quickly deploy a Gradio interface, this is not meant to be a production-ready deployment! For that, you’ll want to read more detailed guides, or talk to a Gradio expert.

If you’d like to consult with a Gradio expert in launching your own Gradio interface or learning about advanced features like authentication, security, and traffic handling, then send us a message at support@gradio.app!
