APIs and machine learning: how to predict a company’s success

4 min read
Developers / 05 April 2016

BBVA API Market

A big crystal ball for reading the present and anticipating the future: that, with a touch of fantasy, is what machine learning has become. It is the large-scale use of data to create predictive models that improve on their own as they obtain results from their predictions, a type of artificial intelligence that learns by itself and provides business guidance in sectors such as banking, technology and energy.

The companies that offer machine learning services have also understood, almost better than anyone else, that developing their APIs and making them easy to use is a huge help to companies that want to start bringing machine learning into their strategic decisions: they make it possible to do in days what used to take weeks. Not only specialists such as BigML, with its REST API BigML.io, but also giants such as Google, Amazon and IBM offer them.

BigML, a Spanish-American startup

BigML.io is a REST API for easily developing and applying predictive models to any company's projects. It is a very flexible application programming interface: it can be used to implement supervised and unsupervised machine learning tasks and to build all the processes a more complex machine learning system requires. With BigML.io, both the simplest and the most complex predictive models are within reach of any company's development team. The BigML REST API is always accessed with standard HTTP methods.

What can be done with BigML.io?

– Real-time predictions.

– Access to datasets, models and anomaly detectors.

– Use of BigML resources through programming. There are four types of resources: source, dataset, model and prediction. The normal flow when using BigML.io is to upload training data to create a source, turn that source into a dataset and use the dataset to build a model. The model, fed with new input data, is then used to generate predictions (see the sketch below).
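As a minimal sketch of that four-step flow, the snippet below drives BigML.io with Python's requests library. The credentials, the remote CSV and the input fields are placeholders, and the exact payload options should be checked against the BigML.io documentation; real code would also poll each resource until BigML has finished building it.

```python
import requests

# Placeholder credentials -- taken from your BigML account in practice.
AUTH = "username=YOUR_USERNAME;api_key=YOUR_API_KEY"
BASE = "https://bigml.io"

def create(resource_type, payload):
    """POST a new BigML resource (source, dataset, model or prediction)."""
    response = requests.post(f"{BASE}/{resource_type}?{AUTH}", json=payload)
    response.raise_for_status()
    return response.json()["resource"]   # e.g. "dataset/5a1b..."

# 1. Source: upload training data (here, a publicly reachable example CSV).
source = create("source", {"remote": "https://example.com/churn.csv"})

# 2. Dataset: a structured, serialized version of the source.
dataset = create("dataset", {"source": source})

# 3. Model: trained from the dataset (resources are built asynchronously,
#    so production code should wait until each one is ready).
model = create("model", {"dataset": dataset})

# 4. Prediction: apply the model to a new instance.
prediction = create("prediction", {
    "model": model,
    "input_data": {"monthly_charges": 70.5, "tenure": 3},
})
print(prediction)
```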

The training data for the future model normally comes as a table: each row is an instance or example, and each column a field or attribute. These fields are also called predictors or covariates. In a machine learning process, one of the columns, normally the last, represents a special attribute called the objective or target field, which assigns a label or class to each instance (each row of the dataset). In these cases we have a labeled dataset and a supervised learning process.

The idea is that one data source can create several datasets. In turn, a dataset can generate several models, and a single model, several predictions. If the objective field is a category, we are looking at a classification model; if it is a number, a regression model, as the sketch below illustrates. Reusing datasets in this way across models makes the whole workflow more efficient.
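As a small illustration of these ideas (the data and column names are invented, and scikit-learn estimators stand in for the models BigML would build for you), the snippet builds a labeled table whose last column is the objective field and picks a classifier or a regressor depending on its type:

```python
import pandas as pd
from pandas.api.types import is_numeric_dtype
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# Each row is an instance; each column a field (predictor / covariate).
# The last column, "churned", is the objective field, so the dataset is labeled.
training_data = pd.DataFrame({
    "tenure_months":   [3, 25, 48, 7],
    "monthly_charges": [70.5, 20.0, 99.9, 55.0],
    "support_calls":   [4, 0, 1, 6],
    "churned":         ["yes", "no", "no", "yes"],   # objective / target field
})

target = training_data.columns[-1]
X, y = training_data.drop(columns=[target]), training_data[target]

# Categorical objective field -> classification; numeric -> regression.
model = RandomForestRegressor() if is_numeric_dtype(y) else RandomForestClassifier()
model.fit(X, y)
print(type(model).__name__)   # RandomForestClassifier in this example
```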

If there is a dataset without an objective field, we are looking at an unsupervised learning process, with unlabeled data. These kinds of datasets are normally used to create anomaly detectors. Whereas models make predictions, anomaly detectors score anomalies, which in turn improves the quality of the model's predictions.
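The idea can be illustrated with scikit-learn's IsolationForest, used here purely as a stand-in for BigML's own anomaly detectors: it is trained on unlabeled data and returns an anomaly score for each new instance.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Unlabeled data: no objective field, just observed values (synthetic here).
normal_traffic = np.random.normal(loc=100, scale=10, size=(500, 2))

detector = IsolationForest(random_state=0).fit(normal_traffic)

# score_samples: the lower the score, the more anomalous the instance.
new_instances = np.array([[102.0, 98.0], [400.0, 5.0]])
print(detector.score_samples(new_instances))
```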

Large companies also have machine learning APIs 

Google is known as the search giant, but that definition began to fall short a few years ago. The Mountain View company also has an application programming interface for making predictions: the Google Prediction API. Predictions that can anticipate when a company or startup is heading for trouble and point to possible solutions for specific problems.

This Google interface is a RESTful API that works asynchronously and is cloud-based. It also lets developers feed in training datasets on the fly to build the predictive model; a sketch of a prediction request follows the list of use cases below. In which areas of business does the Google Prediction API work?

– Customer sentiment analysis.

– Recommendation systems.

– Spam detection.

– Sales opportunity analysis.

– Identification of fraud or suspicious activities.
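As an illustration of the kind of call involved, this sketch posts a new instance to the Prediction API's REST endpoint with Python's requests. The project, model ID, OAuth token and the exact v1.6 path are assumptions to be checked against Google's documentation.

```python
import requests

# Placeholders: real values come from your Google Cloud project and OAuth flow.
PROJECT, MODEL_ID, TOKEN = "my-project", "sentiment-model", "ya29.EXAMPLE_TOKEN"

# Assumed v1.6 endpoint for querying a trained model.
url = (f"https://www.googleapis.com/prediction/v1.6/"
       f"projects/{PROJECT}/trainedmodels/{MODEL_ID}/predict")

# A csvInstance carries the feature values of the instance to classify.
body = {"input": {"csvInstance": ["I love this product, shipping was fast"]}}

response = requests.post(url, json=body,
                         headers={"Authorization": f"Bearer {TOKEN}"})
print(response.json())
```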

There are also two interesting tutorials on using the API to build services for customers and to detect fraud in the health field.

Another large company that has spent a lot of time in the machine learning field, and very successfully at that, is IBM. Its star product is IBM Watson, the artificial intelligence platform that uses cognitive computing and machine learning to make predictions for other companies in fields such as data and natural language processing. That is why it offers various APIs and SDKs in Node, Java and Python, for the iOS operating system and also for Unity (a request sketch follows the list below):

– API for language functionalities: this API is used to develop applications that are able to understand language, extract value and knowledge from text and improve their performance over time.

– API for data processing: makes it easier to manage Big Data and simplifies working with another Watson tool, Watson Data Insight.

– API for image processing.

– API for speech functionalities.
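As a rough sketch of how these Watson services are typically consumed over HTTP, the snippet below sends text to a language-analysis endpoint with basic authentication. The host, path, version date and credentials are placeholders rather than Watson's documented values; the real ones live in each service's API reference.

```python
import requests

# Placeholder values -- replace with the endpoint and service credentials
# from the Watson API reference for the service you are using.
SERVICE_URL = "https://gateway.watsonplatform.net/tone-analyzer/api/v3/tone"
USERNAME, PASSWORD = "service-username", "service-password"

response = requests.post(
    SERVICE_URL,
    params={"version": "2016-05-19"},   # assumed API version date
    auth=(USERNAME, PASSWORD),           # Watson services used basic auth
    json={"text": "The quarterly results exceeded every forecast."},
)
print(response.json())
```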

A third large company with machine learning APIs is Amazon, specifically the Amazon Machine Learning API. The interface facilitates the development of predictive applications and hosts them in the cloud on Amazon Web Services. The full package. Development teams get visualization wizards and tools to create models without having to write prediction-generation code or manage infrastructure.
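A minimal sketch of that idea: requesting a real-time prediction from an Amazon Machine Learning model with boto3. The model ID, endpoint and record fields below are placeholders for illustration; the real values come from the Amazon Machine Learning console or API.

```python
import boto3

# Placeholder identifiers for an already-trained model and its
# real-time prediction endpoint.
ML_MODEL_ID = "ml-EXAMPLE123"
PREDICT_ENDPOINT = "https://realtime.machinelearning.us-east-1.amazonaws.com"

client = boto3.client("machinelearning", region_name="us-east-1")

# Record: the new instance to score, as a dict of string field values.
response = client.predict(
    MLModelId=ML_MODEL_ID,
    Record={"tenure_months": "3", "monthly_charges": "70.5"},
    PredictEndpoint=PREDICT_ENDPOINT,
)
print(response["Prediction"])
```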

The Amazon Machine Learning API gives companies that use it in the day-to-day management of their business very important competitive advantages:

– Fraud detection.

– Content personalization.

– Propensity models for marketing campaigns.

– Document classification.

– Client renewal prediction.

– Recommendation of solutions for customer support services.

If you are interested in the world of APIs, find out more about BBVA’s APIs here.

Follow us on @BBVAAPIMarket
