diff --git a/_freeze/docs/src/how_to_guides/timeseries/execute-results/md.json b/_freeze/docs/src/how_to_guides/timeseries/execute-results/md.json
new file mode 100644
index 0000000..77c46b5
--- /dev/null
+++ b/_freeze/docs/src/how_to_guides/timeseries/execute-results/md.json
@@ -0,0 +1,10 @@
+{
+ "hash": "2db56facb6438b999de6409bd63fd4b9",
+ "result": {
+ "markdown": "---\ntitle: How to Conformalize a Time Series Model\n---\n\n\n\n\n\nTime series data is prevalent across various domains, such as finance, weather forecasting, energy, and supply chains. However, accurately quantifying uncertainty in time series predictions is often a complex task due to inherent temporal dependencies, non-stationarity, and noise in the data. In this context, Conformal Prediction offers a valuable solution by providing prediction intervals which offer a sound way to quantify uncertainty. \n\nThis how-to guide demonstrates how you can conformalize a time series model using Ensemble Batch Prediction Intervals (EnbPI) [@xu2022conformal]. This method enables the updating of prediction intervals whenever new observations are available. This dynamic update process allows the method to adapt to changing conditions, accounting for the potential degradation of predictions or the increase in noise levels in the data.\n\n## The Task at Hand \n\nInspired by [MAPIE](https://mapie.readthedocs.io/en/latest/examples_regression/4-tutorials/plot_ts-tutorial.html), we employ the Victoria electricity demand dataset. This dataset contains hourly electricity demand (in GW) for Victoria state in Australia, along with corresponding temperature data (in Celsius degrees). \n\n\n::: {.cell execution_count=2}\n``` {.julia .cell-code}\nusing CSV, DataFrames\ndf = CSV.read(\"./dev/artifacts/electricity_demand.csv\", DataFrame)\n```\n:::\n\n\n## Feature engineering\n\nIn this how-to guide, we only focus on date, time and lag features.\n\n### Date and Time-related features\n\nWe create temporal features out of the date and hour:\n\n::: {.cell execution_count=3}\n``` {.julia .cell-code}\nusing Dates\ndf.Datetime = Dates.DateTime.(df.Datetime, \"yyyy-mm-dd HH:MM:SS\")\ndf.Weekofyear = Dates.week.(df.Datetime)\ndf.Weekday = Dates.dayofweek.(df.Datetime)\ndf.hour = Dates.hour.(df.Datetime) \n```\n:::\n\n\nAdditionally, to simulate sudden changes caused by unforeseen events, such as blackouts or lockdowns, we deliberately reduce the electricity demand by 2GW from February 22nd onward. \n\n::: {.cell execution_count=4}\n``` {.julia .cell-code}\ncondition = df.Datetime .>= Date(\"2014-02-22\")\ndf[condition, :Demand] .= df[condition, :Demand] .- 2\n```\n:::\n\n\n### Lag features\n\n::: {.cell execution_count=5}\n``` {.julia .cell-code}\nusing ShiftedArrays\nn_lags = 5\nfor i = 1:n_lags\n DataFrames.transform!(df, \"Demand\" => (x -> ShiftedArrays.lag(x, i)) => \"lag_hour_$i\")\nend\n\ndf_dropped_missing = dropmissing(df)\ndf_dropped_missing\n```\n:::\n\n\n## Train-test split\n\nAs usual, we split the data into train and test sets. 
We use the first 90% of the data for training and the remaining 10% for testing.\n\n::: {.cell execution_count=6}\n``` {.julia .cell-code}\nfeatures_cols = DataFrames.select(df_dropped_missing, Not([:Datetime, :Demand]))\nX = Matrix(features_cols)\ny = Matrix(df_dropped_missing[:, [:Demand]])\nsplit_index = floor(Int, 0.9 * size(y , 1)) \nprintln(split_index)\nX_train = X[1:split_index, :]\ny_train = y[1:split_index, :]\nX_test = X[split_index+1 : size(y,1), :]\ny_test = y[split_index+1 : size(y,1), :]\n```\n:::\n\n\n## Loading model using MLJ interface\n\nAs our baseline model, we use a boosted tree regressor:\n\n::: {.cell execution_count=7}\n``` {.julia .cell-code}\nusing MLJ\nEvoTreeRegressor = @load EvoTreeRegressor pkg=EvoTrees verbosity=0\nmodel = EvoTreeRegressor(nrounds =100, max_depth=10, rng=123)\n```\n:::\n\n\n## Conformal time series\n\nNext, we conformalize the model using EnbPI. First, we will proceed without updating training set residuals to build prediction intervals. The result is shown in the following figure:\n\n::: {.cell execution_count=8}\n``` {.julia .cell-code}\nusing ConformalPrediction\n\nconf_model = conformal_model(model; method=:time_series_ensemble_batch, coverage=0.95)\nmach = machine(conf_model, X_train, y_train)\ntrain = [1:split_index;]\nfit!(mach, rows=train)\n\ny_pred_interval = MLJ.predict(conf_model, mach.fitresult, X_test)\nlb = [ minimum(tuple_data) for tuple_data in y_pred_interval]\nub = [ maximum(tuple_data) for tuple_data in y_pred_interval]\ny_pred = [mean(tuple_data) for tuple_data in y_pred_interval]\n```\n:::\n\n\n::: {.cell execution_count=9}\n\n::: {.cell-output .cell-output-display execution_count=10}\n![](timeseries_files/figure-commonmark/cell-10-output-1.svg){}\n:::\n:::\n\n\nWe can use `partial_fit` method in EnbPI implementation in ConformalPrediction in order to adjust prediction intervals to sudden change points on test sets that have not been seen by the model during training. In the below experiment, sample_size indicates the batch of new observations. You can decide if you want to update residuals by sample_size or update and remove first $n$ residuals (shift_size = n). The latter will allow to remove early residuals that will not have a positive impact on the current observations. 
\n\nThe chart below compares the results to the previous experiment without updating residuals:\n\n::: {.cell execution_count=10}\n``` {.julia .cell-code}\nsample_size = 10\nshift_size = 10\nlast_index = size(X_test , 1)\nlb_updated , ub_updated = ([], [])\nfor step in 1:sample_size:last_index\n if last_index - step < sample_size\n y_interval = MLJ.predict(conf_model, mach.fitresult, X_test[step:last_index , :])\n partial_fit(mach.model , mach.fitresult, X_test[step:last_index , :], y_test[step:last_index , :], shift_size)\n else\n y_interval = MLJ.predict(conf_model, mach.fitresult, X_test[step:step+sample_size-1 , :])\n partial_fit(mach.model , mach.fitresult, X_test[step:step+sample_size-1 , :], y_test[step:step+sample_size-1 , :], shift_size) \n end \n lb_updatedᵢ= [ minimum(tuple_data) for tuple_data in y_interval]\n push!(lb_updated,lb_updatedᵢ)\n ub_updatedᵢ = [ maximum(tuple_data) for tuple_data in y_interval]\n push!(ub_updated, ub_updatedᵢ)\nend\nlb_updated = reduce(vcat, lb_updated)\nub_updated = reduce(vcat, ub_updated)\n```\n:::\n\n\n::: {.cell execution_count=11}\n\n::: {.cell-output .cell-output-display execution_count=12}\n![](timeseries_files/figure-commonmark/cell-12-output-1.svg){}\n:::\n:::\n\n\n## Results\n\nIn time series problems, unexpected incidents can lead to sudden changes, and such scenarios are highly probable. As illustrated earlier, the model's training data lacks information about these change points, making it unable to anticipate them. The top figure demonstrates that when residuals are not updated, the prediction intervals solely rely on the distribution of residuals from the training set. Consequently, these intervals fail to encompass the true observations after the change point, resulting in a sudden drop in coverage.\n\nHowever, by partially updating the residuals, the method becomes adept at capturing the increasing uncertainties in model predictions. It is important to note that the changes in uncertainty occur approximately one day after the change point. This delay is attributed to the requirement of having a sufficient number of new residuals to alter the quantiles obtained from the residual distribution.\n\n## References\n\n",
+ "supporting": [
+ "timeseries_files"
+ ],
+ "filters": []
+ }
+}
\ No newline at end of file
diff --git a/_freeze/docs/src/how_to_guides/timeseries/figure-commonmark/cell-10-output-1.svg b/_freeze/docs/src/how_to_guides/timeseries/figure-commonmark/cell-10-output-1.svg
new file mode 100644
index 0000000..1237297
--- /dev/null
+++ b/_freeze/docs/src/how_to_guides/timeseries/figure-commonmark/cell-10-output-1.svg
@@ -0,0 +1,54 @@
+
+
diff --git a/_freeze/docs/src/how_to_guides/timeseries/figure-commonmark/cell-12-output-1.svg b/_freeze/docs/src/how_to_guides/timeseries/figure-commonmark/cell-12-output-1.svg
new file mode 100644
index 0000000..843c977
--- /dev/null
+++ b/_freeze/docs/src/how_to_guides/timeseries/figure-commonmark/cell-12-output-1.svg
@@ -0,0 +1,94 @@
+
+
diff --git a/bib.bib b/bib.bib
index 9f2317f..663a89b 100644
--- a/bib.bib
+++ b/bib.bib
@@ -1,3 +1,18 @@
+@TechReport{xu2022conformal,
+ author = {Xu, Chen and Xie, Yao},
+ date = {2022-06},
+ institution = {arXiv},
+ title = {Conformal prediction set for time-series},
+ doi = {10.48550/arXiv.2206.07851},
+ note = {arXiv:2206.07851 [cs, stat] type: article},
+ url = {http://arxiv.org/abs/2206.07851},
+ urldate = {2023-07-22},
+ abstract = {When building either prediction intervals for regression (with real-valued response) or prediction sets for classification (with categorical responses), uncertainty quantification is essential to studying complex machine learning methods. In this paper, we develop Ensemble Regularized Adaptive Prediction Set (ERAPS) to construct prediction sets for time-series (with categorical responses), based on the prior work of [Xu and Xie, 2021]. In particular, we allow unknown dependencies to exist within features and responses that arrive in sequence. Method-wise, ERAPS is a distribution-free and ensemble-based framework that is applicable for arbitrary classifiers. Theoretically, we bound the coverage gap without assuming data exchangeability and show asymptotic set convergence. Empirically, we demonstrate valid marginal and conditional coverage by ERAPS, which also tends to yield smaller prediction sets than competing methods.},
+ annotation = {Comment: Strongly accepted by the Workshop on Distribution-Free Uncertainty Quantification at ICML 2022},
+ file = {arXiv Fulltext PDF:https\://arxiv.org/pdf/2206.07851.pdf:application/pdf},
+ keywords = {Statistics - Machine Learning, Computer Science - Machine Learning, Statistics - Methodology},
+}
+
@TechReport{kingma2017adam,
author = {Kingma, Diederik P. and Ba, Jimmy},
date = {2017-01},
@@ -3032,4 +3047,19 @@ @TechReport{martens2020optimizing
keywords = {Computer Science - Machine Learning, Computer Science - Neural and Evolutionary Computing, Statistics - Machine Learning},
}
+@TechReport{fong2021conformal,
+ author = {Fong, Edwin and Holmes, Chris},
+ date = {2021-06},
+ institution = {arXiv},
+ title = {Conformal {Bayesian} {Computation}},
+ doi = {10.48550/arXiv.2106.06137},
+ note = {arXiv:2106.06137 [stat] type: article},
+ url = {http://arxiv.org/abs/2106.06137},
+ urldate = {2023-07-19},
+ abstract = {We develop scalable methods for producing conformal Bayesian predictive intervals with finite sample calibration guarantees. Bayesian posterior predictive distributions, \$p(y {\textbackslash}mid x)\$, characterize subjective beliefs on outcomes of interest, \$y\$, conditional on predictors, \$x\$. Bayesian prediction is well-calibrated when the model is true, but the predictive intervals may exhibit poor empirical coverage when the model is misspecified, under the so called \$\{{\textbackslash}cal\{M\}\}\$-open perspective. In contrast, conformal inference provides finite sample frequentist guarantees on predictive confidence intervals without the requirement of model fidelity. Using 'add-one-in' importance sampling, we show that conformal Bayesian predictive intervals are efficiently obtained from re-weighted posterior samples of model parameters. Our approach contrasts with existing conformal methods that require expensive refitting of models or data-splitting to achieve computational efficiency. We demonstrate the utility on a range of examples including extensions to partially exchangeable settings such as hierarchical models.},
+ annotation = {Comment: 19 pages, 4 figures, 12 tables; added references and fixed typos},
+ file = {arXiv Fulltext PDF:https\://arxiv.org/pdf/2106.06137.pdf:application/pdf},
+ keywords = {Statistics - Methodology, Statistics - Computation},
+}
+
@Comment{jabref-meta: databaseType:biblatex;}
diff --git a/docs/Manifest.toml b/docs/Manifest.toml
index 241069d..31b15f7 100644
--- a/docs/Manifest.toml
+++ b/docs/Manifest.toml
@@ -2,7 +2,7 @@
julia_version = "1.9.1"
manifest_format = "2.0"
-project_hash = "347af1ad749e1c928f82064592bd19f36512aeff"
+project_hash = "fbcccb0b07a4f882cff058c86276ea686d16ca1d"
[[deps.ANSIColoredPrinters]]
git-tree-sha1 = "574baf8110975760d391c710b6341da1afa48d8c"
@@ -399,7 +399,7 @@ version = "2.2.0"
[[deps.ConformalPrediction]]
deps = ["CategoricalArrays", "ChainRules", "Flux", "LazyArtifacts", "LinearAlgebra", "MLJBase", "MLJEnsembles", "MLJFlux", "MLJModelInterface", "MLUtils", "NaturalSort", "Plots", "Serialization", "StatsBase"]
-path = ".."
+path = "/Users/patrickaltmeyer/.julia/dev/ConformalPrediction"
uuid = "98bfc277-1877-43dc-819b-a3e38c30242f"
version = "0.1.7"
diff --git a/docs/Project.toml b/docs/Project.toml
index 423a49a..3d47997 100644
--- a/docs/Project.toml
+++ b/docs/Project.toml
@@ -4,6 +4,7 @@ CSV = "336ed68f-0bac-5ca0-87d4-7b16caf5d00b"
CategoricalArrays = "324d7699-5711-5eae-9e2f-1d82baa6b597"
ConformalPrediction = "98bfc277-1877-43dc-819b-a3e38c30242f"
DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
+Dates = "ade2ca70-3891-5945-98fb-dc099432e06a"
DecisionTree = "7806a523-6efd-50cb-b5f6-3fa6f1930dbb"
Distributions = "31c24e10-a181-5473-b8eb-7969acd0382f"
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
@@ -32,6 +33,7 @@ PlutoUI = "7f904dfe-b85e-4ff6-b463-dae2292396a8"
Polynomials = "f27b6e38-b328-58d1-80ce-0feddd5e7a45"
PrettyTables = "08abe8d2-0d0c-5749-adfa-8a2ac140af0d"
Serialization = "9e88b42a-f829-5b0c-bbe9-9e923198166b"
+ShiftedArrays = "1277b4bf-5013-50f5-be3d-901d8477a67a"
StatsBase = "2913bbd2-ae8a-5f71-8c99-4fb6c76f3a91"
Transformers = "21ca0261-441d-5938-ace7-c90938fde4d4"
UnicodePlots = "b8865327-cd53-5732-bb35-84acbb429228"
diff --git a/docs/make.jl b/docs/make.jl
index 92bdb99..ea25350 100644
--- a/docs/make.jl
+++ b/docs/make.jl
@@ -42,6 +42,7 @@ makedocs(;
"Overview" => "how_to_guides/index.md",
"How to Conformalize a Deep Image Classifier" => "how_to_guides/mnist.md",
"How to Conformalize a Large Language Model" => "how_to_guides/llm.md",
+ "How to Conformalize a Time Series Model" => "how_to_guides/timeseries.md",
],
"🤓 Explanation" => [
"Overview" => "explanation/index.md",
diff --git a/docs/pluto/intro.jl b/docs/pluto/intro.jl
index 318b09d..770f0c8 100644
--- a/docs/pluto/intro.jl
+++ b/docs/pluto/intro.jl
@@ -1,5 +1,5 @@
### A Pluto.jl notebook ###
-# v0.19.22
+# v0.19.27
using Markdown
using InteractiveUtils
@@ -22,7 +22,10 @@ macro bind(def, element)
end
# ╔═╡ aad62ef1-4136-4732-a9e6-3746524978ee
+# ╠═╡ show_logs = false
begin
+ using Pkg
+ Pkg.develop(; url="https://github.com/JuliaTrustworthyAI/ConformalPrediction.jl")
using ConformalPrediction
using DecisionTree: DecisionTreeRegressor
using Distributions
@@ -36,9 +39,6 @@ begin
using PlutoUI
end;
-# ╔═╡ 8a2e7eb1-3a7a-49b0-83db-09966e8e891a
-pwd()
-
# ╔═╡ bc0d7575-dabd-472d-a0ce-db69d242ced8
md"""
# Welcome to `ConformalPrediction.jl`
@@ -368,7 +368,6 @@ Enjoy!
"""
# ╔═╡ Cell order:
-# ╠═8a2e7eb1-3a7a-49b0-83db-09966e8e891a
# ╟─bc0d7575-dabd-472d-a0ce-db69d242ced8
# ╠═aad62ef1-4136-4732-a9e6-3746524978ee
# ╟─55a7c16b-a526-41d9-9d73-a0591ad006ce
@@ -404,7 +403,7 @@ Enjoy!
# ╟─ad3e290b-c1f5-4008-81c7-a1a56ab10563
# ╟─b3a88859-0442-41ff-bfea-313437042830
# ╟─98cc9ea7-444d-4449-ab30-e02bfc5b5791
-# ╟─d1140af9-608a-4669-9595-aee72ffbaa46
+# ╠═d1140af9-608a-4669-9595-aee72ffbaa46
# ╟─f742440b-258e-488a-9c8b-c9267cf1fb99
# ╟─f7b2296f-919f-4870-aac1-8e36dd694422
# ╟─74444c01-1a0a-47a7-9b14-749946614f07
diff --git a/docs/setup_docs.jl b/docs/setup_docs.jl
index c45f9f5..11fecbe 100644
--- a/docs/setup_docs.jl
+++ b/docs/setup_docs.jl
@@ -8,6 +8,7 @@ setup_docs = quote
using ConformalPrediction
using CSV
using DataFrames
+ using Dates
using Flux
using MLJBase
using MLJFlux
@@ -15,6 +16,7 @@ setup_docs = quote
using Plots.PlotMeasures
using Random
using Serialization
+ using ShiftedArrays
using StatsBase
using Transformers
using Transformers.TextEncoders
diff --git a/docs/src/explanation/architecture.qmd b/docs/src/explanation/architecture.qmd
index e1d66b3..da51cde 100644
--- a/docs/src/explanation/architecture.qmd
+++ b/docs/src/explanation/architecture.qmd
@@ -8,9 +8,9 @@ The goal is to make this package as compatible as possible with MLJ to tab into
%%| echo: false
flowchart TB
mmi[MLJModelInterface]
- subgraph ConformalModel
+ subgraph ConformalModel
interval[ConformalInterval]
- set[ConformalSet]
+ set[ConformalProbabilisticSet]
prob[ConformalProbabilistic]
struct1([NaiveRegressor])
struct2([...])
@@ -22,7 +22,7 @@ flowchart TB
end
mmi --<:MMI.Interval--> interval
- mmi --<:MMI.Supervised--> set
+ mmi --<:MMI.ProbabilisticSet--> set
mmi --<:MMI.Probabilistic--> prob
```
diff --git a/docs/src/how_to_guides/timeseries.md b/docs/src/how_to_guides/timeseries.md
new file mode 100644
index 0000000..64887f0
--- /dev/null
+++ b/docs/src/how_to_guides/timeseries.md
@@ -0,0 +1,134 @@
+# How to Conformalize a Time Series Model
+
+Time series data is prevalent across various domains, such as finance, weather forecasting, energy, and supply chains. However, accurately quantifying uncertainty in time series predictions is often a complex task due to inherent temporal dependencies, non-stationarity, and noise in the data. In this context, Conformal Prediction offers a valuable solution by providing prediction intervals, which are a sound way to quantify uncertainty.
+
+This how-to guide demonstrates how you can conformalize a time series model using Ensemble Batch Prediction Intervals (EnbPI) (Xu and Xie 2022). This method enables the updating of prediction intervals whenever new observations are available. This dynamic update process allows the method to adapt to changing conditions, accounting for the potential degradation of predictions or the increase in noise levels in the data.
+
+## The Task at Hand
+
+Inspired by [MAPIE](https://mapie.readthedocs.io/en/latest/examples_regression/4-tutorials/plot_ts-tutorial.html), we employ the Victoria electricity demand dataset. This dataset contains hourly electricity demand (in GW) for the state of Victoria in Australia, along with corresponding temperature data (in degrees Celsius).
+
+``` julia
+using CSV, DataFrames
+df = CSV.read("./dev/artifacts/electricity_demand.csv", DataFrame)
+```
+
+## Feature engineering
+
+In this how-to guide, we focus only on date, time, and lag features.
+
+### Date and Time-related features
+
+We create temporal features out of the date and hour:
+
+``` julia
+using Dates
+df.Datetime = Dates.DateTime.(df.Datetime, "yyyy-mm-dd HH:MM:SS")
+df.Weekofyear = Dates.week.(df.Datetime)
+df.Weekday = Dates.dayofweek.(df.Datetime)
+df.hour = Dates.hour.(df.Datetime)
+```
+
+Additionally, to simulate sudden changes caused by unforeseen events, such as blackouts or lockdowns, we deliberately reduce the electricity demand by 2 GW from February 22nd onward.
+
+``` julia
+condition = df.Datetime .>= Date("2014-02-22")
+df[condition, :Demand] .= df[condition, :Demand] .- 2
+```
+
+### Lag features
+
+``` julia
+using ShiftedArrays
+n_lags = 5
+for i = 1:n_lags
+ DataFrames.transform!(df, "Demand" => (x -> ShiftedArrays.lag(x, i)) => "lag_hour_$i")
+end
+
+df_dropped_missing = dropmissing(df)
+df_dropped_missing
+```
+
+## Train-test split
+
+As usual, we split the data into train and test sets. We use the first 90% of the data for training and the remaining 10% for testing.
+
+``` julia
+features_cols = DataFrames.select(df_dropped_missing, Not([:Datetime, :Demand]))
+X = Matrix(features_cols)
+y = Matrix(df_dropped_missing[:, [:Demand]])
+split_index = floor(Int, 0.9 * size(y, 1))
+println(split_index)
+X_train = X[1:split_index, :]
+y_train = y[1:split_index, :]
+X_test = X[split_index+1:size(y, 1), :]
+y_test = y[split_index+1:size(y, 1), :]
+```
+
+## Loading model using MLJ interface
+
+As our baseline model, we use a boosted tree regressor:
+
+``` julia
+using MLJ
+EvoTreeRegressor = @load EvoTreeRegressor pkg=EvoTrees verbosity=0
+model = EvoTreeRegressor(nrounds=100, max_depth=10, rng=123)
+```
+
+## Conformal time series
+
+Next, we conformalize the model using EnbPI. First, we build prediction intervals without updating the training-set residuals. The result is shown in the following figure:
+
+``` julia
+using ConformalPrediction
+
+conf_model = conformal_model(model; method=:time_series_ensemble_batch, coverage=0.95)
+mach = machine(conf_model, X_train, y_train)
+train = [1:split_index;]
+fit!(mach, rows=train)
+
+y_pred_interval = MLJ.predict(conf_model, mach.fitresult, X_test)
+lb = [minimum(tuple_data) for tuple_data in y_pred_interval]
+ub = [maximum(tuple_data) for tuple_data in y_pred_interval]
+y_pred = [mean(tuple_data) for tuple_data in y_pred_interval]
+```
+
+![](timeseries_files/figure-commonmark/cell-10-output-1.svg)
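+
+For reference, the figure above was produced with the following plotting code from the Quarto source (hidden in the rendered output); it assumes the `Plots` package is loaded:
+
+``` julia
+cutoff_point = findfirst(df_dropped_missing.Datetime .== Date("2014-02-15"))
+p1 = plot(df_dropped_missing[cutoff_point:split_index, [:Datetime]].Datetime, y_train[cutoff_point:split_index], label="train", color=:blue, legend=:bottomleft)
+plot!(df_dropped_missing[split_index+1:size(y, 1), [:Datetime]].Datetime, y_test, label="test", color=:orange)
+plot!(df_dropped_missing[split_index+1:size(y, 1), [:Datetime]].Datetime, y_pred, label="prediction", color=:green)
+plot!(df_dropped_missing[split_index+1:size(y, 1), [:Datetime]].Datetime, lb, fillrange=ub, fillalpha=0.2, label="PI without update of residuals", color=:green, linewidth=0)
+```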
+
+We can use the `partial_fit` method of the EnbPI implementation in ConformalPrediction to adjust prediction intervals to sudden change points in test data that the model has not seen during training. In the experiment below, `sample_size` indicates the size of each batch of new observations. You can decide whether to simply update the residuals with each batch of size `sample_size`, or to additionally remove the first *n* residuals (`shift_size = n`). The latter discards early residuals that no longer reflect the current observations.
+
+The chart below compares the results to the previous experiment without updating residuals:
+
+``` julia
+sample_size = 10
+shift_size = 10
+last_index = size(X_test, 1)
+lb_updated, ub_updated = ([], [])
+for step in 1:sample_size:last_index
+    # Predict intervals for the next batch, then update the residuals with the new observations:
+    if last_index - step < sample_size
+        y_interval = MLJ.predict(conf_model, mach.fitresult, X_test[step:last_index, :])
+        partial_fit(mach.model, mach.fitresult, X_test[step:last_index, :], y_test[step:last_index, :], shift_size)
+    else
+        y_interval = MLJ.predict(conf_model, mach.fitresult, X_test[step:step+sample_size-1, :])
+        partial_fit(mach.model, mach.fitresult, X_test[step:step+sample_size-1, :], y_test[step:step+sample_size-1, :], shift_size)
+    end
+    lb_updatedᵢ = [minimum(tuple_data) for tuple_data in y_interval]
+    push!(lb_updated, lb_updatedᵢ)
+    ub_updatedᵢ = [maximum(tuple_data) for tuple_data in y_interval]
+    push!(ub_updated, ub_updatedᵢ)
+end
+lb_updated = reduce(vcat, lb_updated)
+ub_updated = reduce(vcat, ub_updated)
+```
+
+![](timeseries_files/figure-commonmark/cell-12-output-1.svg)
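+
+The corresponding plotting code for this comparison figure (also hidden in the rendered output) is:
+
+``` julia
+p2 = plot(df_dropped_missing[cutoff_point:split_index, [:Datetime]].Datetime, y_train[cutoff_point:split_index], label="train", color=:blue, legend=:bottomleft)
+plot!(df_dropped_missing[split_index+1:size(y, 1), [:Datetime]].Datetime, y_test, label="test", color=:orange)
+plot!(df_dropped_missing[split_index+1:size(y, 1), [:Datetime]].Datetime, y_pred, label="prediction", color=:green)
+plot!(df_dropped_missing[split_index+1:size(y, 1), [:Datetime]].Datetime, lb_updated, fillrange=ub_updated, fillalpha=0.2, label="PI with adjusted residuals", color=:green, linewidth=0)
+plot(p1, p2, layout=(2, 1))
+```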
+
+## Results
+
+In time series problems, unexpected incidents can lead to sudden changes in the data, and such scenarios are not uncommon in practice. As illustrated earlier, the model’s training data lacks information about these change points, making it unable to anticipate them. The top figure demonstrates that when residuals are not updated, the prediction intervals rely solely on the distribution of residuals from the training set. Consequently, these intervals fail to cover the true observations after the change point, resulting in a sudden drop in coverage.
+
+However, by partially updating the residuals, the method becomes adept at capturing the increasing uncertainty in the model predictions. Note that the intervals only widen approximately one day after the change point: a sufficient number of new residuals is needed before the quantiles of the residual distribution change.
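+
+To quantify the effect visible in the figures, we can compute the empirical coverage of both sets of intervals on the test set. The snippet below is a minimal sketch, assuming `y_test`, `lb`, `ub`, `lb_updated`, and `ub_updated` from the code above are still in scope:
+
+``` julia
+# Fraction of test observations that fall inside the respective prediction intervals:
+coverage_static = mean(lb .<= vec(y_test) .<= ub)
+coverage_updated = mean(lb_updated .<= vec(y_test) .<= ub_updated)
+println("Empirical coverage without residual updates: ", round(coverage_static; digits=3))
+println("Empirical coverage with residual updates: ", round(coverage_updated; digits=3))
+```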
+
+## References
+
+Xu, Chen, and Yao Xie. 2022. “Conformal Prediction Set for Time-Series.” arXiv. <http://arxiv.org/abs/2206.07851>.
diff --git a/docs/src/how_to_guides/timeseries.qmd b/docs/src/how_to_guides/timeseries.qmd
new file mode 100644
index 0000000..c1c19f3
--- /dev/null
+++ b/docs/src/how_to_guides/timeseries.qmd
@@ -0,0 +1,163 @@
+```{julia}
+#| echo: false
+include("$(pwd())/docs/setup_docs.jl")
+eval(setup_docs)
+```
+
+# How to Conformalize a Time Series Model
+
+Time series data is prevalent across various domains, such as finance, weather forecasting, energy, and supply chains. However, accurately quantifying uncertainty in time series predictions is often a complex task due to inherent temporal dependencies, non-stationarity, and noise in the data. In this context, Conformal Prediction offers a valuable solution by providing prediction intervals, which are a sound way to quantify uncertainty.
+
+This how-to guide demonstrates how you can conformalize a time series model using Ensemble Batch Prediction Intervals (EnbPI) [@xu2022conformal]. This method enables the updating of prediction intervals whenever new observations are available. This dynamic update process allows the method to adapt to changing conditions, accounting for the potential degradation of predictions or the increase in noise levels in the data.
+
+## The Task at Hand
+
+Inspired by [MAPIE](https://mapie.readthedocs.io/en/latest/examples_regression/4-tutorials/plot_ts-tutorial.html), we employ the Victoria electricity demand dataset. This dataset contains hourly electricity demand (in GW) for the state of Victoria in Australia, along with corresponding temperature data (in degrees Celsius).
+
+```{julia}
+using CSV, DataFrames
+df = CSV.read("./dev/artifacts/electricity_demand.csv", DataFrame)
+```
+
+## Feature engineering
+
+In this how-to guide, we focus only on date, time, and lag features.
+
+### Date and Time-related features
+
+We create temporal features out of the date and hour:
+
+```{julia}
+using Dates
+df.Datetime = Dates.DateTime.(df.Datetime, "yyyy-mm-dd HH:MM:SS")
+df.Weekofyear = Dates.week.(df.Datetime)
+df.Weekday = Dates.dayofweek.(df.Datetime)
+df.hour = Dates.hour.(df.Datetime)
+```
+
+Additionally, to simulate sudden changes caused by unforeseen events, such as blackouts or lockdowns, we deliberately reduce the electricity demand by 2 GW from February 22nd onward.
+
+```{julia}
+condition = df.Datetime .>= Date("2014-02-22")
+df[condition, :Demand] .= df[condition, :Demand] .- 2
+```
+
+### Lag features
+
+```{julia}
+using ShiftedArrays
+n_lags = 5
+for i = 1:n_lags
+ DataFrames.transform!(df, "Demand" => (x -> ShiftedArrays.lag(x, i)) => "lag_hour_$i")
+end
+
+df_dropped_missing = dropmissing(df)
+df_dropped_missing
+```
+
+## Train-test split
+
+As usual, we split the data into train and test sets. We use the first 90% of the data for training and the remaining 10% for testing.
+
+```{julia}
+features_cols = DataFrames.select(df_dropped_missing, Not([:Datetime, :Demand]))
+X = Matrix(features_cols)
+y = Matrix(df_dropped_missing[:, [:Demand]])
+split_index = floor(Int, 0.9 * size(y, 1))
+println(split_index)
+X_train = X[1:split_index, :]
+y_train = y[1:split_index, :]
+X_test = X[split_index+1:size(y, 1), :]
+y_test = y[split_index+1:size(y, 1), :]
+```
+
+## Loading model using MLJ interface
+
+As our baseline model, we use a boosted tree regressor:
+
+```{julia}
+using MLJ
+EvoTreeRegressor = @load EvoTreeRegressor pkg=EvoTrees verbosity=0
+model = EvoTreeRegressor(nrounds=100, max_depth=10, rng=123)
+```
+
+## Conformal time series
+
+Next, we conformalize the model using EnbPI. First, we build prediction intervals without updating the training-set residuals. The result is shown in the following figure:
+
+```{julia}
+using ConformalPrediction
+
+conf_model = conformal_model(model; method=:time_series_ensemble_batch, coverage=0.95)
+mach = machine(conf_model, X_train, y_train)
+train = [1:split_index;]
+fit!(mach, rows=train)
+
+y_pred_interval = MLJ.predict(conf_model, mach.fitresult, X_test)
+lb = [minimum(tuple_data) for tuple_data in y_pred_interval]
+ub = [maximum(tuple_data) for tuple_data in y_pred_interval]
+y_pred = [mean(tuple_data) for tuple_data in y_pred_interval]
+```
+
+```{julia}
+#| echo: false
+#| output: true
+
+cutoff_point = findfirst(df_dropped_missing.Datetime .== Date("2014-02-15"))
+p1 = plot(df_dropped_missing[cutoff_point:split_index, [:Datetime]].Datetime, y_train[cutoff_point:split_index], label="train", color=:blue, legend=:bottomleft)
+plot!(df_dropped_missing[split_index+1:size(y, 1), [:Datetime]].Datetime, y_test, label="test", color=:orange)
+plot!(df_dropped_missing[split_index+1:size(y, 1), [:Datetime]].Datetime, y_pred, label="prediction", color=:green)
+plot!(df_dropped_missing[split_index+1:size(y, 1), [:Datetime]].Datetime, lb, fillrange=ub, fillalpha=0.2, label="PI without update of residuals", color=:green, linewidth=0)
+```
+
+We can use the `partial_fit` method of the EnbPI implementation in ConformalPrediction to adjust prediction intervals to sudden change points in test data that the model has not seen during training. In the experiment below, `sample_size` indicates the size of each batch of new observations. You can decide whether to simply update the residuals with each batch of size `sample_size`, or to additionally remove the first $n$ residuals (`shift_size = n`). The latter discards early residuals that no longer reflect the current observations.
+
+The chart below compares the results to the previous experiment without updating residuals:
+
+```{julia}
+sample_size = 10
+shift_size = 10
+last_index = size(X_test, 1)
+lb_updated, ub_updated = ([], [])
+for step in 1:sample_size:last_index
+    # Predict intervals for the next batch, then update the residuals with the new observations:
+    if last_index - step < sample_size
+        y_interval = MLJ.predict(conf_model, mach.fitresult, X_test[step:last_index, :])
+        partial_fit(mach.model, mach.fitresult, X_test[step:last_index, :], y_test[step:last_index, :], shift_size)
+    else
+        y_interval = MLJ.predict(conf_model, mach.fitresult, X_test[step:step+sample_size-1, :])
+        partial_fit(mach.model, mach.fitresult, X_test[step:step+sample_size-1, :], y_test[step:step+sample_size-1, :], shift_size)
+    end
+    lb_updatedᵢ = [minimum(tuple_data) for tuple_data in y_interval]
+    push!(lb_updated, lb_updatedᵢ)
+    ub_updatedᵢ = [maximum(tuple_data) for tuple_data in y_interval]
+    push!(ub_updated, ub_updatedᵢ)
+end
+lb_updated = reduce(vcat, lb_updated)
+ub_updated = reduce(vcat, ub_updated)
+```
+
+```{julia}
+#| echo: false
+#| output: true
+
+p2 = plot(df_dropped_missing[cutoff_point:split_index, [:Datetime]].Datetime, y_train[cutoff_point:split_index], label="train", color=:blue, legend=:bottomleft)
+plot!(df_dropped_missing[split_index+1:size(y, 1), [:Datetime]].Datetime, y_test, label="test", color=:orange)
+plot!(df_dropped_missing[split_index+1:size(y, 1), [:Datetime]].Datetime, y_pred, label="prediction", color=:green)
+plot!(df_dropped_missing[split_index+1:size(y, 1), [:Datetime]].Datetime, lb_updated, fillrange=ub_updated, fillalpha=0.2, label="PI with adjusted residuals", color=:green, linewidth=0)
+plot(p1, p2, layout=(2, 1))
+```
+
+## Results
+
+In time series problems, unexpected incidents can lead to sudden changes in the data, and such scenarios are not uncommon in practice. As illustrated earlier, the model's training data lacks information about these change points, making it unable to anticipate them. The top figure demonstrates that when residuals are not updated, the prediction intervals rely solely on the distribution of residuals from the training set. Consequently, these intervals fail to cover the true observations after the change point, resulting in a sudden drop in coverage.
+
+However, by partially updating the residuals, the method becomes adept at capturing the increasing uncertainty in the model predictions. Note that the intervals only widen approximately one day after the change point: a sufficient number of new residuals is needed before the quantiles of the residual distribution change.
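+
+To quantify the effect visible in the figures, we can compute the empirical coverage of both sets of intervals on the test set. The snippet below is a minimal sketch, assuming `y_test`, `lb`, `ub`, `lb_updated`, and `ub_updated` from the code above are still in scope:
+
+```{julia}
+#| output: true
+
+# Fraction of test observations that fall inside the respective prediction intervals:
+coverage_static = mean(lb .<= vec(y_test) .<= ub)
+coverage_updated = mean(lb_updated .<= vec(y_test) .<= ub_updated)
+println("Empirical coverage without residual updates: ", round(coverage_static; digits=3))
+println("Empirical coverage with residual updates: ", round(coverage_updated; digits=3))
+```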
+
+## References
\ No newline at end of file
diff --git a/docs/src/how_to_guides/timeseries_files/figure-commonmark/cell-10-output-1.svg b/docs/src/how_to_guides/timeseries_files/figure-commonmark/cell-10-output-1.svg
new file mode 100644
index 0000000..1237297
--- /dev/null
+++ b/docs/src/how_to_guides/timeseries_files/figure-commonmark/cell-10-output-1.svg
@@ -0,0 +1,54 @@
+
+
diff --git a/docs/src/how_to_guides/timeseries_files/figure-commonmark/cell-12-output-1.svg b/docs/src/how_to_guides/timeseries_files/figure-commonmark/cell-12-output-1.svg
new file mode 100644
index 0000000..843c977
--- /dev/null
+++ b/docs/src/how_to_guides/timeseries_files/figure-commonmark/cell-12-output-1.svg
@@ -0,0 +1,94 @@
+
+
diff --git a/src/ConformalPrediction.jl b/src/ConformalPrediction.jl
index 8032cda..4d6abec 100644
--- a/src/ConformalPrediction.jl
+++ b/src/ConformalPrediction.jl
@@ -3,7 +3,7 @@ module ConformalPrediction
# Conformal Models:
include("conformal_models/conformal_models.jl")
export ConformalModel
-export conformal_model, fit, predict
+export conformal_model, fit, predict, partial_fit
export available_models, tested_atomic_models
export set_size
export soft_assignment
diff --git a/src/conformal_models/conformal_models.jl b/src/conformal_models/conformal_models.jl
index 1a7d291..7873dd4 100644
--- a/src/conformal_models/conformal_models.jl
+++ b/src/conformal_models/conformal_models.jl
@@ -75,6 +75,7 @@ const TransductiveModel = Union{
CVPlusRegressor,
CVMinMaxRegressor,
NaiveClassifier,
+ TimeSeriesRegressorEnsembleBatch,
}
"A container listing all available methods for conformal prediction."
@@ -89,6 +90,7 @@ const available_models = Dict(
:cv_minmax => CVMinMaxRegressor,
:jackknife_plus_ab => JackknifePlusAbRegressor,
:jackknife_plus_ab_minmax => JackknifePlusAbMinMaxRegressor,
+ :time_series_ensemble_batch => TimeSeriesRegressorEnsembleBatch,
),
:inductive => Dict(:simple_inductive => SimpleInductiveRegressor),
),
diff --git a/src/conformal_models/transductive_regression.jl b/src/conformal_models/transductive_regression.jl
index aa78a3f..b0997bd 100644
--- a/src/conformal_models/transductive_regression.jl
+++ b/src/conformal_models/transductive_regression.jl
@@ -773,3 +773,129 @@ function MMI.predict(conf_model::JackknifePlusAbMinMaxRegressor, fitresult, Xnew
ŷ = reformat_interval(ŷ)
return ŷ
end
+
+# Ensemble Batch Prediction Intervals (EnbPI) for time series regression
+"Constructor for `TimeSeriesRegressorEnsembleBatch`."
+mutable struct TimeSeriesRegressorEnsembleBatch{Model<:Supervised} <: ConformalInterval
+ model::Model
+ coverage::AbstractFloat
+ scores::Union{Nothing,AbstractArray}
+ heuristic::Function
+ nsampling::Int
+ sample_size::AbstractFloat
+ aggregate::Union{Symbol,String}
+end
+
+function TimeSeriesRegressorEnsembleBatch(
+ model::Supervised;
+ coverage::AbstractFloat=0.95,
+ heuristic::Function=f(y, ŷ) = abs(y - ŷ),
+ nsampling::Int=50,
+ sample_size::AbstractFloat=0.3,
+ aggregate::Union{Symbol,String}="mean",
+)
+ return TimeSeriesRegressorEnsembleBatch(
+ model, coverage, nothing, heuristic, nsampling, sample_size, aggregate
+ )
+end
+
+@doc raw"""
+ MMI.fit(conf_model::TimeSeriesRegressorEnsembleBatch, verbosity, X, y)
+
+For the [`TimeSeriesRegressorEnsembleBatch`](@ref) nonconformity scores are computed as
+
+``
+S_i^{\text{J+ab}} = s(X_i, Y_i) = h(agg(\hat\mu_{B_{K(-i)}}(X_i)), Y_i), \ i \in \mathcal{D}_{\text{train}}
+``
+
+where ``agg(\hat\mu_{B_{K(-i)}}(X_i))`` denotes the aggregate predictions, typically mean or median, for each ``X_i`` (with ``K_{-i}`` the bootstraps not containing ``X_i``). In other words, ``B`` models are trained on bootstrapped samples, and the fitted models are then used to produce aggregated out-of-sample predictions for each ``X_i``. The corresponding nonconformity score is then computed by applying a heuristic uncertainty measure ``h(\cdot)`` to the fitted value ``agg(\hat\mu_{B_{K(-i)}}(X_i))`` and the true value ``Y_i``.
+"""
+function MMI.fit(conf_model::TimeSeriesRegressorEnsembleBatch, verbosity, X, y)
+ samples, fitresult, cache, report, scores = ([], [], [], [], [])
+ nsampling = conf_model.nsampling
+ sample_size = conf_model.sample_size
+ aggregate = conf_model.aggregate
+ # bootstrap size
+ T = size(y, 1)
+ m = floor(Int, T * sample_size)
+ for _ in 1:nsampling
+ samplesᵢ = blockbootstrap(1:T, m)
+ yᵢ = y[samplesᵢ]
+ Xᵢ = selectrows(X, samplesᵢ)
+ μ̂ᵢ, cacheᵢ, reportᵢ = MMI.fit(
+ conf_model.model, 0, MMI.reformat(conf_model.model, Xᵢ, yᵢ)...
+ )
+ push!(samples, samplesᵢ)
+ push!(fitresult, μ̂ᵢ)
+ push!(cache, cacheᵢ)
+ push!(report, reportᵢ)
+ end
+ for t in 1:T
+ index_samples = indexin([v for v in samples if !(t in v) && any(t .> v)], samples)
+ selected_models = fitresult[index_samples]
+ Xₜ = selectrows(X, t)
+ yₜ = y[t]
+ ŷ = [
+ reformat_mlj_prediction(
+ MMI.predict(conf_model.model, μ̂₋ₜ, MMI.reformat(conf_model.model, Xₜ)...)
+ ) for μ̂₋ₜ in selected_models
+ ]
+ ŷₜ = _aggregate(ŷ, aggregate)
+ push!(scores, @.(conf_model.heuristic(yₜ, ŷₜ))...)
+ end
+ scores = filter(!isnan, scores)
+ conf_model.scores = scores
+ return (fitresult, cache, report)
+end
+
+@doc raw"""
+ partial_fit(conf_model::TimeSeriesRegressorEnsembleBatch, fitresult, X, y, shift_size)
+For the [`TimeSeriesRegressorEnsembleBatch`](@ref), nonconformity scores are updated with the most recent data ``(X, y)``. `shift_size`
+determines how many of the oldest nonconformity scores are discarded.
+
+"""
+function partial_fit(
+ conf_model::TimeSeriesRegressorEnsembleBatch, fitresult, X, y, shift_size=0
+)
+ ŷ = [
+ reformat_mlj_prediction(
+ MMI.predict(conf_model.model, μ̂₋ₜ, MMI.reformat(conf_model.model, X)...)
+ ) for μ̂₋ₜ in fitresult
+ ]
+ aggregate = conf_model.aggregate
+ ŷₜ = _aggregate(ŷ, aggregate)
+ push!(conf_model.scores, @.(conf_model.heuristic(y, ŷₜ))...)
+ conf_model.scores = filter(!isnan, conf_model.scores)
+ conf_model.scores = conf_model.scores[(shift_size + 1):length(conf_model.scores)]
+ return conf_model.scores
+end
+
+# Prediction
+@doc raw"""
+    MMI.predict(conf_model::TimeSeriesRegressorEnsembleBatch, fitresult, Xnew)
+
+For the [`TimeSeriesRegressorEnsembleBatch`](@ref) prediction intervals are computed as follows,
+
+``
+\hat{C}_{n,\alpha, B}^{J+ab}(X_{n+1}) = \left[ \hat{q}_{n, \alpha}^{-} \{\hat\mu_{agg(-i)}(X_{n+1}) - S_i^{\text{J+ab}} \}, \hat{q}_{n, \alpha}^{+} \{\hat\mu_{agg(-i)}(X_{n+1}) + S_i^{\text{J+ab}}\} \right] , i \in \mathcal{D}_{\text{train}}
+``
+
+where ``\hat\mu_{agg(-i)}`` denotes the aggregated models ``\hat\mu_{1}, ..., \hat\mu_{B}`` fitted on bootstrapped data (B) that does not include the ``i``th data point. The jackknife``+`` procedure is more stable than the [`JackknifeRegressor`](@ref).
+"""
+function MMI.predict(conf_model::TimeSeriesRegressorEnsembleBatch, fitresult, Xnew)
+
+ # Get all bootstrapped predictions for each Xnew:
+ ŷ = [
+ reformat_mlj_prediction(
+ MMI.predict(conf_model.model, μ̂₋ᵢ, MMI.reformat(conf_model.model, Xnew)...)
+ ) for μ̂₋ᵢ in fitresult
+ ]
+ # Applying aggregation function on bootstrapped predictions across columns for each Xnew across rows:
+ aggregate = conf_model.aggregate
+ ŷ = _aggregate(ŷ, aggregate)
+ v = conf_model.scores
+ q̂ = StatsBase.quantile(v, conf_model.coverage)
+ ŷ = map(x -> (x .- q̂, x .+ q̂), eachrow(ŷ))
+ ŷ = reformat_interval(ŷ)
+ return ŷ
+end
diff --git a/src/conformal_models/utils.jl b/src/conformal_models/utils.jl
index 0514717..22df640 100644
--- a/src/conformal_models/utils.jl
+++ b/src/conformal_models/utils.jl
@@ -102,3 +102,16 @@ function size_indicator(ŷ::AbstractVector; bins=5, tol=1e-10)
return idx
end
+
+"""
+    blockbootstrap(time_series_data, block_size)
+
+Generate a block bootstrap sample of the given time series data.
+"""
+function blockbootstrap(time_series, block_size)
+    n = length(time_series)
+    # Draw a random starting index and take a contiguous block of length `block_size`:
+    rand_block = rand(1:(n - block_size))
+    bootstrap_sample = time_series[rand_block:(rand_block + block_size - 1), :]
+    return vec(bootstrap_sample)
+end