ForeTiS.evaluation.eval_metrics

Module Contents

Functions

smape(y_true, y_pred)

Compute the Symmetric Mean Absolute Percentage Error (sMAPE) between predicted and actual values

mape(y_true, y_pred)

Compute the Mean Absolute Percentage Error (MAPE) between predicted and actual values

get_evaluation_report(y_true, y_pred[, prefix, ...])

Get values for common evaluation metrics

ForeTiS.evaluation.eval_metrics.smape(y_true, y_pred)

Compute the Symmetric Mean Absolute Percentage Error (sMAPE) between predicted and actual values

Parameters:
  • y_true (numpy.array) – actual values

  • y_pred (numpy.array) – prediction values

Returns:

sMAPE between prediction and actual values

Return type:

float
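
As an illustration, here is a minimal NumPy sketch of sMAPE under one common definition (2·|y_pred − y_true| / (|y_true| + |y_pred|), averaged over all points and scaled to percent). The exact scaling and zero handling inside ForeTiS may differ, so the helper name and numbers below are assumptions, not the package's output:

    import numpy as np

    def smape_sketch(y_true, y_pred):
        """Common sMAPE definition in percent; scaling/zero handling may differ from ForeTiS."""
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        diff = 2.0 * np.abs(y_pred - y_true)
        denom = np.abs(y_true) + np.abs(y_pred)
        # Treat points where both actual and predicted values are zero as zero error
        ratio = np.divide(diff, denom, out=np.zeros_like(diff), where=denom != 0)
        return float(100.0 * np.mean(ratio))

    print(smape_sketch([100, 200, 300], [110, 190, 310]))  # ~6.0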

ForeTiS.evaluation.eval_metrics.mape(y_true, y_pred)

Compute the Mean Absolute Percentage Error (MAPE) between predicted and actual values

Parameters:
  • y_true (numpy.array) – actual values

  • y_pred (numpy.array) – prediction values

Returns:

MAPE between prediction and actual values

Return type:

float
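
For comparison, a minimal sketch of the conventional MAPE (|y_true − y_pred| / |y_true|, averaged and scaled to percent); the exact scaling and the handling of zero actual values in ForeTiS are assumptions here:

    import numpy as np

    def mape_sketch(y_true, y_pred):
        """Common MAPE definition in percent; zero handling may differ from ForeTiS."""
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        # MAPE is undefined where the actual value is zero; mask those points as a simple safeguard
        mask = y_true != 0
        return float(100.0 * np.mean(np.abs((y_true[mask] - y_pred[mask]) / y_true[mask])))

    print(mape_sketch([100, 200, 300], [110, 190, 310]))  # ~6.1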

ForeTiS.evaluation.eval_metrics.get_evaluation_report(y_true, y_pred, prefix='', current_model_name=None)

Get values for common evaluation metrics

Parameters:
  • y_true (numpy.array) – true values

  • y_pred (numpy.array) – predicted values

  • prefix (str) – prefix to be added to the key if multiple eval metrics are collected

  • current_model_name (str) – name of the current model, matching the name of the corresponding .py file in the model subpackage

Returns:

dictionary with common metrics

Return type:

dict
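
A short usage sketch tying the three functions together; current_model_name is left at its default, and the keys of the returned dictionary depend on the metrics ForeTiS collects and on the chosen prefix, so the printed output is only indicative:

    import numpy as np
    from ForeTiS.evaluation.eval_metrics import get_evaluation_report, smape, mape

    y_true = np.array([100.0, 200.0, 300.0])
    y_pred = np.array([110.0, 190.0, 310.0])

    # Individual percentage-error metrics, each returning a float
    print(smape(y_true, y_pred))
    print(mape(y_true, y_pred))

    # Full report: a dictionary with common metrics; the prefix keeps keys distinct
    # when results from several evaluation runs (e.g. validation vs. test) are merged
    report = get_evaluation_report(y_true=y_true, y_pred=y_pred, prefix='test_')
    print(report)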