Materials ML API

Using computational methods such as density functional theory (DFT) to calculate the properties of materials is both computationally expensive and time-consuming. For large systems, calculations can take weeks to months to complete and require access to high-performance hardware. As a result, machine learning has gained considerable traction as a way to alleviate these issues.

Materials science is in a favorable position for machine learning due to the wealth of information available. Over the past decade, high-throughput frameworks such as AFLOW have yielded large databases containing millions of unique structures along with their computed properties. Consequently, a number of models have emerged in the past few years.

Library download and white paper:

plmf diagram

In particular, our group and collaborators have developed models of our own using the AFLOW database as a training set. The first model, known as property-labeled materials fragments (plmf), represents a crystal structure as a colored periodic graph. Graphs are constructed by determining the topological neighbors of each atom using a Voronoi tessellation. The nodes of the graph are then decorated with a set of reference properties of the atomic species located at each vertex. Edges of the graph are likewise decorated by taking the difference between the reference properties of the two neighboring atoms. Finally, the graph is partitioned into smaller subgraphs, known as fragments, which act as the feature vectors for the model. This model was trained using gradient boosting decision trees and is able to predict the following properties: band gap, band gap type, bulk and shear moduli, Debye temperature, heat capacities at constant pressure or volume, thermal conductivity, and thermal expansion.
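As a rough illustration of the graph-construction step, here is a minimal sketch using pymatgen and networkx. The reference properties and function name are illustrative placeholders, not the actual plmf implementation, and the partitioning into fragments is omitted:

import networkx as nx
from pymatgen.core import Structure
from pymatgen.analysis.local_env import VoronoiNN

# Illustrative reference properties per species; the real plmf
# descriptor uses a much larger property set.
REFERENCE = {
    'Si': {'electronegativity': 1.90, 'covalent_radius': 1.11},
    'O':  {'electronegativity': 3.44, 'covalent_radius': 0.66},
}

def structure_to_colored_graph(structure: Structure) -> nx.Graph:
    graph = nx.Graph()
    vnn = VoronoiNN()
    # Decorate each node with the reference properties of its species.
    for i, site in enumerate(structure):
        graph.add_node(i, **REFERENCE[site.specie.symbol])
    # Connect topological neighbors found via the Voronoi tessellation,
    # decorating each edge with property differences between neighbors.
    for i in range(len(structure)):
        props_i = REFERENCE[structure[i].specie.symbol]
        for neighbor in vnn.get_nn_info(structure, i):
            j = neighbor['site_index']
            props_j = REFERENCE[structure[j].specie.symbol]
            diffs = {k: abs(props_i[k] - props_j[k]) for k in props_i}
            graph.add_edge(i, j, **diffs)
    return graph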

The second model, known as the molar fraction descriptor (mfd), predicts vibrational properties of a material from its chemical composition alone. This model was trained using nonlinear support vector machines with a radial basis function kernel. It is able to predict the vibrational free energy, heat capacity, and vibrational entropy.
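Training such a model might look like the following scikit-learn sketch; the features and data here are random placeholders standing in for composition-derived descriptors, not the actual mfd training set:

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Placeholder data: rows are composition-derived feature vectors
# (e.g. molar fractions); y holds a target such as vibrational free energy.
rng = np.random.default_rng(0)
X = rng.random((200, 5))
y = rng.random(200)

# Nonlinear SVM regressor with a radial basis function kernel.
model = make_pipeline(StandardScaler(), SVR(kernel='rbf', C=10.0, gamma='scale'))
model.fit(X, y)
prediction = model.predict(X[:1])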

The problem

While our group has developed two models capable of accurate predictions, machine learning is still relatively new within the realm of materials science. As a result, the average user lacks the expertise to effectively utilize machine learning codebases. Our goal, therefore, was to create an accessible means of integrating machine learning frameworks into a materials discovery workflow.

To do so, I developed the AFLOW-ML API, which provides the community a web-accessible interface that distills functionality down to its essence: from user input, return a prediction. This makes our machine learning models widely accessible and unburdens users from having to understand the intricacies of machine learning or struggle with installing and setting up existing codebases.

Design

aflow ml design

When designing the API I wanted to keep things as simple as possible. In early iterations, I considered having only a single endpoint, where different actions would shape the response. However, it quickly became apparent that this was not ideal for a number of reasons.

The biggest issue was that, depending on the size of the structure, getting a prediction from one of the models could take a great deal of time. Running that logic inside the route handler would block the server for the duration of the calculation and leave the client's request hanging. The solution was to employ a task queue and run the predictions asynchronously.

The code for the models was written in Python, so the stack I chose was Flask (web app framework), Celery (distributed task queue), and Redis (as the broker for Celery). Since predictions are now handled as asynchronous tasks, the API consists of two routes, sketched below:

  • /{model}/prediction
  • /prediction/result/{id}
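A minimal sketch of this two-route pattern follows; the predict_task body, broker URLs, and returned fields are illustrative stand-ins, not the production code:

from celery import Celery
from flask import Flask, jsonify, request

app = Flask(__name__)
celery = Celery(__name__,
                broker='redis://localhost:6379/0',
                backend='redis://localhost:6379/0')

@celery.task
def predict_task(model_name, poscar_str):
    # Run the chosen ML model on the structure (omitted here).
    return {'model': model_name, 'ml_egap': 0.0}

@app.route('/<model>/prediction', methods=['POST'])
def submit_prediction(model):
    # Queue the prediction and hand the client a task id to poll.
    task = predict_task.delay(model, request.form['file'])
    return jsonify(
        id=task.id,
        model=model,
        results_endpoint=f'/prediction/result/{task.id}'
    )

@app.route('/prediction/result/<task_id>', methods=['GET'])
def prediction_result(task_id):
    task = predict_task.AsyncResult(task_id)
    payload = {'status': task.state, 'description': ''}
    if task.state == 'SUCCESS':
        # Append the prediction results once the task has finished.
        payload.update(task.result)
    return jsonify(payload)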

API usage involves uploading a material structure to a POST endpoint, /{model}/prediction, and retrieving a prediction object from a GET endpoint, /prediction/result/{id}. With the first endpoint, users specify the model via the URL (plmf or mfd) and POST the contents of a POSCAR file, e.g.

curl http://aflow.org/API/aflow-ml/v1.0/plmf/prediction --data-urlencode file@test.poscar

When a structure is posted, the prediction is added to the task queue. The endpoint then responds with the following JSON:

{
 "id": String,
 "model": String,
 "results_endpoint": String
}

The keyword id is the task id for the prediction. It is used to fetch the results of the prediction once the job has completed. Fetching results is handled by the second endpoint, e.g.:

curl http://aflow.org/API/aflow-ml/v1.0/prediction/result/{id}

If the task has not completed, the endpoint responds with a task object:

{
 "status": String,
 "description": String
}

Depending on the status, one can decide whether or not to continue polling this endpoint. When the status is set to 'SUCCESS', the results of the prediction are appended to the response:

// plmf model
{
 "status": String,
 "description": String,
 "model": String,
 "citation": String,
 "ml_egap_type": String,
 "ml_egap": Number,
 "ml_energy_per_atom": Number,
 "ml_ael_bulk_modulus_vrh": Number,
 "ml_ael_shear_modulus_vrh": Number,
 "ml_agl_debye": Number,
 "ml_agl_heat_capacity_Cp_300K": Number,
 "ml_agl_heat_capacity_Cv_300K": Number,
 "ml_agl_heat_capacity_Cp_300K_per_atom": Number,
 "ml_agl_heat_capacity_Cv_300K_per_atom": Number,
 "ml_agl_thermal_conductivity_300K": Number,
 "ml_agl_thermal_expansion_300K": Number
}

// mfd model
{
 "status": String,
 "description": String,
 "model": String,
 "citation": String,
 "ml_Cv": Number,
 "ml_Fvib": Number,
 "ml_Svib": Number
}
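Putting the two endpoints together, a client submits a structure and then polls the result endpoint until the status is 'SUCCESS'. A minimal sketch using the requests library (the helper name and poll interval are my own choices, not part of the API):

import time
import requests

BASE_URL = 'http://aflow.org/API/aflow-ml/v1.0'

def get_prediction(poscar_str, model='plmf', interval=10):
    # Submit the structure; the response contains the task id.
    task = requests.post(
        f'{BASE_URL}/{model}/prediction',
        data={'file': poscar_str}
    ).json()
    # Poll the results endpoint until the prediction is ready.
    while True:
        result = requests.get(
            f"{BASE_URL}/prediction/result/{task['id']}"
        ).json()
        if result['status'] == 'SUCCESS':
            return result
        time.sleep(interval)

In practice one would likely also handle error statuses and set a timeout rather than polling indefinitely.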

Python Client

While the API may be accessed using any HTTP library (Requests in Python, URLSession in the iOS SDK, HttpURLConnection in Java, axios in JavaScript, etc.), we wanted to make using our API as simple as possible. As a result, we created a Python client for the AFLOW-ML API, available for download (here). Using the client is straightforward: one simply opens a POSCAR file, creates an instance of the AFLOWmlAPI object, and calls the method get_prediction(file_str, model_name), which returns the prediction as a dictionary. An example is shown below:

from aflowml.client import AFLOWmlAPI

# Read the structure and request a plmf prediction as a dictionary.
with open('your.poscar', 'r') as input_file:
    aflowML = AFLOWmlAPI()
    data = aflowML.get_prediction(
        input_file.read(),
        'plmf'
    )

CLI

Finally, to streamline use even further, installing the Python client also installs a command line interface (CLI). The CLI exposes all the functionality of the Python client and is targeted at users who are not familiar with Python or REST APIs. For a list of all its arguments, please refer to the publication (here). To illustrate its simplicity, a prediction can be retrieved with the following command:

aflow-ml your.poscar --model=plmf

Deploying

To deploy, I went with the typical pattern I've used for other Python web apps (Django, Flask): NGINX as a reverse proxy and Gunicorn as the WSGI HTTP server serving the actual app. For the first iteration I did everything system-side, and while this is straightforward for someone with experience, it may prove a challenge for those unfamiliar with it. Considering the person who inherits this project after me may not have the same skill set, I wanted to make sure deploying was painless.
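For reference, a minimal Gunicorn invocation for a Flask app looks something like the following (the module path, worker count, and socket path are placeholders, not the production configuration):

gunicorn --workers 4 --bind unix:/tmp/aflowml.sock app:app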

As a result I decided to give Docker a go, as everyone was talking about it and up until this point I hadn't had a chance to test it in a production environment. To my surprise, "dockerizing" the app was very easy, and from this project I've created a simple boilerplate for dockerizing a Flask + Celery + Redis + MongoDB + NGINX + Gunicorn app (here).