Jupyter Notebooks are arguably among the most popular tools used by data engineers and data scientists worldwide. Data ETL, machine learning training, experimentation, model testing and model inference can all be done from the Jupyter Notebook interface itself. Notebooks are also excellent for generating visual reports and dashboards and for training ML models. But while Jupyter Notebook is an awesome IDE for these tasks, it is not easy to put notebooks into an automated pipeline that performs them on a recurring basis, and reports, dashboards and ML models need to be refreshed regularly as new data comes in.
Often people resort to…
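One building block any such pipeline needs is running a notebook headlessly. A minimal sketch of that step, using `jupyter nbconvert --execute` via the standard library (the notebook filenames are placeholders):

```python
import shutil
import subprocess


def build_execute_command(notebook_path: str, output_path: str) -> list[str]:
    """Build the jupyter nbconvert command that runs a notebook end to end."""
    return [
        "jupyter", "nbconvert",
        "--to", "notebook",
        "--execute",              # execute all cells top to bottom
        "--output", output_path,  # write the executed copy here
        notebook_path,
    ]


def run_notebook(notebook_path: str, output_path: str) -> None:
    """Execute a notebook headlessly; raises if jupyter is not installed."""
    if shutil.which("jupyter") is None:
        raise RuntimeError("jupyter is not on PATH")
    subprocess.run(build_execute_command(notebook_path, output_path), check=True)


cmd = build_execute_command("daily_report.ipynb", "daily_report_out.ipynb")
print(" ".join(cmd))
```

A scheduler (cron, Airflow, etc.) can then invoke `run_notebook` on whatever cadence the report needs.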
Transfer learning is a recognised hack for building high-performance deep learning models with limited datasets and much less compute. Deep learning is widely known to give great results, but it typically also needs huge datasets and large compute capacity. Even when deep learning is the best tool for a problem, limited dataset availability and/or high compute costs can be significant limiting factors. Transfer learning can come to the rescue in such cases.
Transfer learning allows us to take the learnings gained while solving another problem within the…
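The core idea can be sketched with plain NumPy: keep a feature extractor frozen and train only a small new head on the target task. Here the "pretrained" body is a random projection purely for illustration (in real transfer learning its weights would come from a model trained on a large source dataset), and the dataset is a made-up pair of Gaussian blobs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained network body: a frozen projection + ReLU.
# These weights are random here only to keep the sketch self-contained.
W_frozen = rng.normal(size=(2, 16))

def extract_features(X):
    return np.maximum(X @ W_frozen, 0.0)  # frozen: never updated below

# Tiny hypothetical target-task dataset: two separable blobs.
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Only the new head (logistic regression on frozen features) is trained.
F = extract_features(X)
w = np.zeros(F.shape[1])
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid predictions
    grad = p - y                            # gradient of the log loss
    w -= 0.1 * (F.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

preds = 1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5
accuracy = (preds == y).mean()
```

Training only the head means far fewer parameters to fit, which is why this works with small datasets and modest compute.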
KNIME is a free and open-source data analytics, reporting and integration platform. It lets us create workflows to ingest, analyse and pre-process data, and to train and build data science models. It has great support for exporting the different kinds of models built on the platform to PMML format.
In this article we will
Today numerous platforms and frameworks support exporting models to PMML format, making it easier to deploy and integrate these models with other applications.
Analytics platforms like Alteryx, RapidMiner, SAS and Dataiku have a direct way of exporting trained models to PMML. Models built using open-source libraries like scikit-learn, XGBoost and LightGBM can also be exported to PMML using libraries such as Nyoka.
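Under the hood a PMML file is plain XML, which is what makes it portable across all these tools. A minimal sketch of inspecting one with only the standard library; the fragment below is hand-written for illustration (real exporters emit far richer documents):

```python
import xml.etree.ElementTree as ET

# Toy, hand-written PMML fragment for illustration only.
PMML_DOC = """\
<PMML version="4.4" xmlns="http://www.dmg.org/PMML-4_4">
  <Header description="toy example"/>
  <DataDictionary numberOfFields="2">
    <DataField name="sepal_length" optype="continuous" dataType="double"/>
    <DataField name="species" optype="categorical" dataType="string"/>
  </DataDictionary>
</PMML>
"""

NS = {"pmml": "http://www.dmg.org/PMML-4_4"}

root = ET.fromstring(PMML_DOC)
# The DataDictionary declares every field the model expects as input/output.
fields = [f.get("name") for f in root.findall(".//pmml:DataField", NS)]
print(fields)  # → ['sepal_length', 'species']
```

Because the format is just declarative XML, any PMML-aware scoring engine can load a model like this without the library that trained it.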
Today we will see how a PMML model can be deployed (to AWS, GCP, Azure or a local machine) in a couple of minutes.
H2O.ai is one of the most popular AutoML platforms, helping citizen data scientists create predictive ML models in minutes. Being free and open source makes it all the more attractive for the growing ML community.
H2O.ai offers Flow, a notebook-like UI that allows users to easily import, explore and analyse their data and generate predictive models from it. The interface is automated and intuitive, ensuring that users without advanced data science expertise can make good progress in creating ML models.
Today we will try to set up a single node H2O.ai …
H2O.ai helps businesses create machine learning models that extract insights from their data, without in-house expertise in designing and tuning such models. It is one of the most popular AutoML platforms, helping citizen data scientists import their business data and create highly effective machine learning models from it.
But unlike other software deliverables, ML models are tricky to use. “Deploying” an ML model refers to the process of setting up a production pipeline and workflow where inputs are fed to the model and it returns scored output. Generally the team responsible and capable of setting up…
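Stripped of platform specifics, that workflow is just: load a trained model artifact, feed it batches of input, and persist the scored output. A stdlib-only sketch with a dummy model standing in for a real exported model:

```python
import csv
import io
import pickle


class DummyModel:
    """Stand-in for a real trained model artifact (e.g. an H2O export)."""

    def predict(self, row):
        # Trivial "score" for illustration: sum of the numeric inputs.
        return sum(float(v) for v in row.values())


def score_csv(model, csv_text):
    """Feed each input row to the model and attach its score."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    for row in rows:
        row["score"] = model.predict(row)
    return rows


# Round-trip the model artifact the way a deployment pipeline would:
# serialize once at training time, load at scoring time.
artifact = pickle.dumps(DummyModel())
model = pickle.loads(artifact)

scored = score_csv(model, "f1,f2\n1,2\n3,4\n")
print(scored)
```

A production pipeline wraps exactly this loop with scheduling, monitoring and storage, which is why deployment is usually more engineering than data science.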
Once we have designed, trained and perfected our deep learning model, a quick and easy way to deploy it, evaluate it on real-life inputs and demo it to others is always welcome. While there are numerous comprehensive DL model-serving frameworks like TensorFlow Serving and MXNet Model Server (MMS), most of them are designed for scalable production deployment, are resource-intensive and are time-consuming to script and set up.
ML developers looking for a cheap and lightweight solution often end up writing a light RESTful endpoint using Flask, hosting it publicly and consuming these REST APIs using curl or some…
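A minimal sketch of such a Flask endpoint, assuming Flask is installed; the model is mocked by a `predict` function that simply doubles each input, since the real inference call depends on the framework in use:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)


def predict(inputs):
    """Placeholder for the real model's inference call."""
    return [x * 2 for x in inputs]  # dummy "model": doubles each input


@app.route("/predict", methods=["POST"])
def predict_endpoint():
    payload = request.get_json(force=True)
    return jsonify({"outputs": predict(payload["inputs"])})


if __name__ == "__main__":
    # For demos only; put a real WSGI server in front for anything public.
    app.run(host="0.0.0.0", port=5000)
```

It can then be exercised from the command line, e.g. `curl -X POST -H "Content-Type: application/json" -d '{"inputs": [1, 2, 3]}' http://localhost:5000/predict`.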