Web Application For Predictive Model

After a model has been developed from the important variables and an appropriate statistical method, operationalizing it allows an organization to gain consistent access to the built-in intelligence. Embedding the model in a web application makes that intelligence available organization wide.

The value of a model lies in the quality of its prediction.

Motivation

Ultimately, Data Science is about solving problems and answering questions based on quantifiable empirical data. The old adage holds true: a problem is half solved once the question behind it is clearly articulated. Chasing solutions without knowing the question leads nowhere. In the predictive modeling blog, I discussed the steps to collect, tidy, and analyze the data we used to build the model.

The question in this use case comes from a (hypothetical) farmer who wants to plant coffee and is searching for farmland with a climate conducive to growing it. The farmer wanted a model that predicts the temperature of a plot of farmland in the US given the latitude of its location. To answer his question, we collected, explored, and visualized historic temperature data for major US metropolitan areas from a National Oceanic and Atmospheric Administration (NOAA) website, and applied linear regression to generate a model that predicts temperature from latitude.
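For context, the core of that earlier step is a simple linear fit of temperature against latitude. The sketch below is only a minimal illustration, not the original script; the file name and column names (`temperature` as the response, `newLat` as the latitude predictor) are assumptions chosen to match the prediction code used later in this post.

```r
# Minimal sketch of the model-building step from the earlier blog
# (file name and column names are assumptions)
temp.data <- read.csv("noaa_city_temperatures.csv")

# Fit a simple linear regression: temperature as a function of latitude
regression.lm <- lm(temperature ~ newLat, data = temp.data)
summary(regression.lm)

# Save the fitted model so the web app and the API server can load it later
saveRDS(regression.lm, "regression_lm.rds")
```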

In this blog, we will entertain a scenario where the farmer wants to operationalize the model; that is, he wants to make it available to every one of his agents scattered around the country. The agents can use the model's intelligence to scout for a suitable farm to purchase or rent for planting coffee. When they consider a piece of land, they can enter its latitude, gathered from their smartphone's GPS app, into the web application that contains the model, and get back a temperature prediction that guides whether or not to consider the farm for coffee planting.

As mentioned, this blog is a follow-up to the Predictive Model blog found here.

Operationalize the model

Once the model is generated, it can be made available for consumption in a number of ways. The two options we will discuss here are a Shiny web application, a graphical user interface framework from RStudio, and a web Application Programming Interface (API) built on R packages.

Firing up a web URL and entering the latitude directly into the app is the friendlier option, so we start by making the model available as a Shiny web application. One reason to access the model via a web API instead is to allow it to be integrated into pre-existing applications. I will discuss building an in-house web API server for our model directly from R later in this blog. Users don't need to understand how to set up the API server; they simply send a request to it and get the predicted temperature back.

Model Available with Shiny Web Application

Shiny is one of the best web frameworks for creating a user-friendly interface. For this application, we use a slider to input the latitude and display the predicted temperature on the right side of the application. As the latitude changes, the predicted temperature changes along with it, based on the model. The script for building the application is hosted in my github repository and can be seen here. Once the application is developed, the next step is hosting it in the cloud; this one is deployed on shinyapps.io (RStudio's app hosting service). Feel free to play around with the latitude and watch the temperature prediction change.
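For readers who want a feel for the structure without opening the repository, here is a minimal sketch of such a Shiny app. It is not the exact script from the repository; it assumes the fitted model was saved as `regression_lm.rds` and that the predictor column is named `newLat`.

```r
library(shiny)

# Load the previously fitted linear model (file name is an assumption)
regression.lm <- readRDS("regression_lm.rds")

ui <- fluidPage(
  titlePanel("Temperature Prediction by Latitude"),
  sidebarLayout(
    sidebarPanel(
      # Slider for the latitude input (range roughly covers the continental US)
      sliderInput("lat", "Latitude:", min = 25, max = 50, value = 35, step = 0.1)
    ),
    mainPanel(
      # Predicted temperature shown on the right side of the app
      textOutput("prediction")
    )
  )
)

server <- function(input, output) {
  output$prediction <- renderText({
    newData <- data.frame(newLat = input$lat)
    paste("Predicted temperature:", round(predict(regression.lm, newData), 2))
  })
}

shinyApp(ui = ui, server = server)
```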

In case you have an issue with your browser (I had some trouble with Firefox), use Google's Chrome web browser, or just click this link to go directly to the app. Warning: it may take a few seconds to load the application - be patient!

Model Available with Web API

The second method of making the model available is through an API. There are two steps to set this up and get a temperature prediction from the regression model we developed: first, setting up the API server; second, sending a RESTful web API query from the client to fetch the temperature.

Setting Up the Web API Server

Disclaimer: this has only been partly tested from my machine. For production, more robust testing and a different API hosting provider should be considered, based on the traffic and security requirements.

There are a number of R packages built to run an HTTP and WebSocket server. One of them is the httpuv package, which "Allows R code to listen for and interact with HTTP and WebSocket clients, so you can serve web traffic directly out of your R process." On top of httpuv's functions, another package named jug is built. For this API server demo, we will use jug. Since we already have the model built, as discussed in the last blog, we simply load it in the API server. The following script shows the steps.

library(jug)         # API server framework built on httpuv
library(magrittr)    # provides the pipe operator '%>%'

# Load the fitted linear model from the previous blog
# (the file name here is an assumption; point it at wherever you saved the model)
regression.lm <- readRDS("regression_lm.rds")

# new_temp: takes a latitude and returns the predicted temperature
new_temp <- function(newLat) {
  newData <- data.frame(newLat = as.numeric(newLat))
  round(predict(regression.lm, newData), 2)
}

# testing the function
new_temp(22)

# API server launch: POST requests to /temp_api are routed to new_temp
jug() %>%
  post("/temp_api", decorate(new_temp)) %>%
  simple_error_handler() %>%
  serve_it()

When the above script is executed in the R console, it creates a web API interface to the model, and you should see `Serving the jug at http://127.0.0.1:8080`. From here, when a user makes a RESTful web API request, the server responds with the predicted temperature value. (Although it isn't covered here, the API response could also be returned in XML or JSON format.)
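As a rough illustration of the JSON option, the function behind the endpoint could return a JSON string instead of a bare number. This is only a sketch, assuming the jsonlite package is installed; setting the response content type in jug is not covered here.

```r
library(jsonlite)

# Variant of new_temp that wraps the prediction in a JSON object
new_temp_json <- function(newLat) {
  newData <- data.frame(newLat = as.numeric(newLat))
  pred <- round(predict(regression.lm, newData), 2)
  toJSON(list(latitude = as.numeric(newLat), predicted_temp = pred),
         auto_unbox = TRUE)
}
```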

Fetching a Prediction with the Web API

For our simple example, we will use the curl command on Linux and Mac OS X to get the temperature prediction from our own API server. While the server is listening, type the following command to send the API request.

`curl -s --data "newLat=22" http://127.0.0.1:8080/temp_api`

Here, a temperature prediction is requested for latitude 22.
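The same request can also be made from R itself, which may be more convenient if the agents' tools are already R based. The sketch below uses the httr package; it is only an illustration of the client side and assumes the jug server above is still running locally.

```r
library(httr)

# Send the latitude as form data to the local jug server
response <- POST("http://127.0.0.1:8080/temp_api",
                 body = list(newLat = "22"),
                 encode = "form")

# The response body holds the predicted temperature
content(response, as = "text")
```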

Take Away

After developing a model from the important variables and a statistical method, operationalizing the model allows organizations to gain consistent access to the built-in intelligence. Here we developed an interactive web application that can easily be accessed from a mobile device or a desktop computer. We also showed how to set up a web API server embedded with the model, serving predictions in response to client requests. For big data with hundreds or thousands of features, machine learning algorithms are used to generate the model logic. Perhaps that is a good subject for the next blog. Stay tuned!

Code

The code for the Shiny app can be found here on my github repository.

Credit:

Diagram 1 - Concept borrowed from RStudio.
Stefan Milton Bache - magrittr vignette
Bart Smeets - Create a Simple Web API for your R Functions
RStudio - HTTP and WebSocket Server Library