More opportunities for customization and predictive modeling
Machine learning and artificial intelligence are as hot a topic as ever! At Datrics, we keep evolving our no-code analytics platform so that teams can build data experiments faster, collaborate on the platform, and get more out of their data.

In this month's updates we have focused on modeling possibilities with no- and low-code. First, we added the Neural Prophet model to extend the time series forecasting capabilities in Datrics. We have also updated the model performance dashboard to make it easier to assess the quality of a model, check the metrics, and decide how to proceed. Last but not least, there are more ways to work with custom code and custom objects, including models.
Model performance dashboard
The model performance dashboard is available for all the modeling bricks, as well as the Predict brick. Depending on the model, the performance dashboard includes the main metrics and a list of charts for assessing the quality and details of the results. The left panel of the dashboard lists the available charts, so you can navigate between graphs, find the information you need, and scroll through the reports more easily.
From the model performance dashboard in Datrics, you can save the model to assets, download the model, or run what-if analysis.
Neural Prophet - a new model for time series forecasting
Neural Prophet is a time-series forecasting model inspired by Facebook Prophet and AR-Net (for autocorrelation modeling). It combines neural networks with traditional time-series algorithms.
How to train Neural Prophet in Datrics
Neural Prophet includes all the components from the original Prophet model: trend, seasonality, recurring events, and regressors. Moreover, Neural Prophet provides support for auto-regression and lagged covariates. That's particularly relevant in the kinds of applications in which the near-term future depends on the current state of the system.
The Neural Prophet brick in Datrics supports two setup modes: Simple and Advanced.
In Simple mode, you define the date column and frequency, the target column, and, optionally, future regressors and autoregression components. Future regressors are external variables whose future values are known. For instance, future interest rates may already be set, or a weather forecast may be available. If the future values supplied for the regressors are inaccurate, the quality of Neural Prophet's forecasts suffers.
As a result, you get the trained model and a dataset with predictions. Performance metrics and plots, covering the forecasting results and all the model's components, are available in the model performance dashboard.
Once the model is trained, run the Predict brick to get the forecast. The Predict brick takes the trained Neural Prophet model and a future dataset, and calculates the forecast.
The future dataset consists of the dates you need in the forecast, the future regressors, and the lagged features, if they were used during model training. You may prepare the future dataset yourself, or use the Prepare Future Dataset brick, which automatically prepares the data in the format required by the Predict brick.
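As a sketch of what such a future dataset looks like (the column names and the "rate" regressor are illustrative assumptions), it is simply the future dates plus the known future values of each regressor:

```python
import pandas as pd

# 14 future dates at daily frequency, plus the known future
# values of the "rate" regressor used during training.
# There is no target column: these are the rows to forecast.
future = pd.DataFrame({
    "ds": pd.date_range("2024-04-01", periods=14, freq="D"),
    "rate": [0.05] * 14,
})
```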
The Predict brick returns a dataset with the forecasted values; performance indicators and forecast plots are available in the model performance dashboard.
Custom code with custom types
In previous updates we talked about the capability to use non-data (custom) inputs/outputs for custom bricks. This lets you create atomised bricks for specific features and tasks and reuse them across multiple steps: for example, a dedicated brick that trains a new model and another that predicts with it. Atomised bricks make pipeline support and maintenance much simpler, and widen the range of use cases.
Today we are adding even more flexibility to working with custom objects. It becomes easy to use the same object at different stages of a pipeline, as well as across pipelines.
You may export a custom object in .pkl format to object storage (AWS S3 or Google Cloud Storage) with the Export brick. First, you need an object storage data source created. Then connect the custom object as an input to the Export brick, select the data source, and define the path to the file where you would like to store the object. If the file already exists at the desired destination, there are two behavior options: replace the file or fail the export.
In the pipeline below, we have created a custom brick that trains a KNN classifier and returns the trained model. We save it to AWS S3; each time the model is retrained, it is saved to the same location.
When a custom object is already stored in object storage, you may load it into a pipeline with the Load Custom Type brick. To load the file, you just need to specify the data source and the file path. Following the previous example, the custom model saved in AWS S3 can be loaded into a pipeline and used in a custom Predict brick.
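The save/load cycle behind these bricks boils down to pickle serialization. Here is a minimal local sketch of that round trip, with scikit-learn's KNeighborsClassifier standing in for the custom model; in Datrics, the Export and Load Custom Type bricks read and write object storage (S3/GCS) rather than a local file, and the file name below is made up for the example.

```python
import pickle
from sklearn.neighbors import KNeighborsClassifier

# Train a stand-in "custom model" on a toy 1-D dataset
X = [[0.0], [1.0], [2.0], [10.0], [11.0], [12.0]]
y = [0, 0, 0, 1, 1, 1]
model = KNeighborsClassifier(n_neighbors=3).fit(X, y)

# Export: serialize the trained object to a .pkl file
with open("knn_model.pkl", "wb") as f:
    pickle.dump(model, f)

# Load: restore the object (e.g. in another pipeline) and predict
with open("knn_model.pkl", "rb") as f:
    loaded = pickle.load(f)

print(loaded.predict([[1.5], [10.5]]))  # -> [0 1]
```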
This functionality allows data scientists to create custom bricks for specific features, just like standard Datrics bricks, and use those bricks and their inputs/outputs independently in different pipelines. It also makes it possible to use custom models and transformers in production versions of pipelines.
XGBoost serialization to JSON
Last but not least, in this month's updates we have extended the model serialization option to XGBoost models (Classification and Regression). Model serialization lets you download the model in JSON format. The option is available for all the built-in models in Datrics.