A short tutorial on Fuzzy Time Series — Part II

With a case study on Solar Energy

In the first part of this tutorial I briefly explained time series analysis, fuzzy sets and what Fuzzy Time Series — FTS — are, with a short introduction to the pyFTS library. To finish, we will employ some FTS models to model and predict solar radiation time series, which is useful for photovoltaic energy prediction. All the examples of this tutorial are available on Google Colab, at http://bit.ly/short_tutorial_colab2.

In short: the model learned the noise of the data. It is known that it is not possible to completely eliminate the side effects of bias and variance, and the best fit is achieved by a balance between them — this is the challenge of estimation models.

Fuzzy Time Series parameters

Several parameters determine the best fit of an FTS model, but the principal ones are the partitioning and the order. These two parameters account for 90% (an empirical value) of the accuracy of the model.

1. Partitioning

The partitioning is composed of three parameters, listed here according to their importance:

1a) Number of partitions (or fuzzy sets)

This is simply the most influential parameter in the model's accuracy. The more fuzzy sets, the more precisely the characteristics of the time series are captured. And there is a trap that lies right here:

Too few fuzzy sets generate underfitting, due to oversimplification of the signal;
Too many fuzzy sets generate overfitting, making the model start to learn the noise in the data.

[Figures: several numbers of partitions for the sine function; accuracy of several partitionings for the sine function]

The number of sets is a parameter that must be benchmarked. In time series that have been differentiated, 10 partitions is a good number to start with.
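To make grid partitioning concrete, here is a minimal sketch in plain Python — not pyFTS code; the names `grid_partitioner`, `triangular_mf` and `fuzzify` are illustrative — of evenly spaced triangular fuzzy sets over a universe of discourse, and of the fuzzification of a crisp value:

```python
def grid_partitioner(vmin, vmax, npart):
    """Build `npart` evenly spaced triangular fuzzy sets over [vmin, vmax].

    Each set is a (lower, center, upper) triple; neighboring sets overlap
    by half their width, in the spirit of a grid partitioning."""
    step = (vmax - vmin) / (npart - 1)
    return [(vmin + (i - 1) * step, vmin + i * step, vmin + (i + 1) * step)
            for i in range(npart)]

def triangular_mf(x, lower, center, upper):
    """Triangular membership degree of x in the set (lower, center, upper)."""
    if x <= lower or x >= upper:
        return 0.0
    if x <= center:
        return (x - lower) / (center - lower)
    return (upper - x) / (upper - center)

def fuzzify(x, sets):
    """Membership degree of x in every set of the partitioning."""
    return [triangular_mf(x, *s) for s in sets]

# 10 partitions over the range of a sine wave, as in the example above
sets = grid_partitioner(-1.0, 1.0, 10)
degrees = fuzzify(0.12, sets)  # nonzero only for the two nearest sets
```

With this layout any interior value belongs to at most two overlapping sets and its membership degrees sum to 1, which is what makes the fuzzified series a faithful, readable encoding of the original one.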
In other cases, 35 partitions is a good number.

1b) Partitioning type

There are many types of partitioning, from Grid partitioning (GridPartitioner), where all sets are evenly distributed and have the same shape, to partitioners where the sets have distinct sizes — such as the entropy-based and cluster-based partitioners. I will not go deep into this discussion here, but for the curious there are several examples of partitioning types in the PYFTS/notebooks repository. Always start with Grid partitioning; then, if needed, explore the other types.

1c) Membership functions

This parameter has little real influence on the accuracy of the model, but depending on the case you may have a good reason for using a Gaussian or trapezoidal function instead of the triangular function, which is the default. One justification may be the number of parameters (the Gaussian uses 2, the triangular 3 and the trapezoidal 4), the legibility of the model, or other issues related to the nature of the process and the data. Once again I will not delve into this discussion here; take a look at the PYFTS/notebooks repository for more details.

2. Order

The order of the model is the second most important parameter of an FTS, since these are autoregressive models (they use lagged values to predict the next ones). The order parameter is the memory size of the model, or how much past information is needed to describe future events. To decide this parameter it is important to be familiar with the concept of the Autocorrelation Function — ACF. The ACF is able to indicate not only the order but also the indexes of the most important lags.

2a) Number of lags (or order)

The order is the number of lags (past values) that are used by the model. Look at the ACF and see how many lags are significant.

[Figure: model accuracy by order]

However, there is a trap here: the more lags the model uses (especially if the number of partitions is large!)
and the larger the model gets, the slower learning and inference become. In my experience, it takes no more than 3 lags to describe a time series' behavior. But of course everything depends on the data.

2b) Lag indexes

By default, the most recent lags, in order, are chosen by the model. But depending on the seasonality of the time series these may not be the best lags. So look at the ACF and see which lag indexes are the most significant.

3. Types of methods

The literature on FTS methods is very diversified, but two features are extremely important:

3a) Weighted vs weightless

The weights increase the accuracy of the model by balancing which sets in the rules of the model are more influential for the forecast. If you have to choose, always prefer the weighted models! In the example below we can compare the models HOFTS (without weights), WHOFTS (with weights in the consequent of the rules) and PWFTS (with weights in both the consequent and the precedent of the rules):

3b) Monovariate vs multivariate

Most of the time we only have time series with one variable — the endogenous variable. Other times this variable is aided by other information (the exogenous variables) from which we can take advantage. For instance the date, usually associated with time series measurements, is very valuable information in the case of seasonal data — social, environmental, etc. The correlation coefficient points only to simple linear relationships, so it should not be the only tool you use; cross-entropy is a good alternative.

Another tip: if you have a monovariate time series you can enrich your model by creating a multivariate series from it, where the other variables are transformations of the endogenous variable. For example, you can have a multivariate series with the original (endogenous) variable and the differentiated endogenous variable, providing extra information about the recent fluctuations of the values.

Case Study: Solar radiation

It is time to have some fun! We assume that
neighboring things influence each other and have similar behaviors; hence the rules of a specific time are also influenced by the neighboring hours and months.

Let's now take a look at the multivariate methods and see examples of the rules generated by them:

mvfts.MultivariateFTS: weightless first order method (order = 1);

Jan,8hs,VL0 → VL1,VL2,VL3,VL4,VL5
Jan,8hs,VL1 → VL1,VL2,VL3,VL4,VL5,L0,L1,L3,L4

wmvfts.WeightedMultivariatedFTS: weighted first order method;

Jan,8hs,VL0 → VL2 (0.353), VL1 (0.253), VL4 (0.147), VL3 (0.207), VL5 (0.04)
Jan,8hs,VL1 → VL2 (0.276), VL3 (0.172), VL1 (0.198), VL5 (0.083), VL4 (0.151), VL6 (0.021), L0 (0.036), L4 (0.005), L1 (0.036), L2 (0.021)

cmvfts.ClusteredFTS: weighted high order method;

Jan11hsV3,Jan12hsL1 → Jan13hsVL6 (1.0)
Jan12hsL1,Jan13hsVL6 → Jan15hsVL3 (1.0)

Benchmarking the models

Obviously the main criterion for evaluating a predictive model is its accuracy. The steps_ahead parameter feeds the output values back into the input over the next steps_ahead iterations:

forecasts = model.predict(input_data, steps_ahead=48)

For multivariate models this is a little bit tricky: we are not generating only the values of the endogenous variable but also the values of the exogenous variables.
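The feedback loop behind multi-step forecasting can be sketched as follows — plain Python, not the pyFTS implementation; `one_step_model` stands in for any one-step-ahead forecaster, and the name `forecast_ahead` is illustrative:

```python
def forecast_ahead(one_step_model, history, order, steps_ahead):
    """Recursive multi-step forecasting: each one-step forecast is appended
    to the lag window and fed back as input for the next iteration."""
    window = list(history[-order:])  # the `order` most recent lags
    forecasts = []
    for _ in range(steps_ahead):
        y = one_step_model(window)
        forecasts.append(y)
        window = window[1:] + [y]  # drop the oldest lag, feed the forecast back
    return forecasts

# Toy one-step model: forecast the mean of the lag window
mean_model = lambda lags: sum(lags) / len(lags)
preds = forecast_ahead(mean_model, [1.0, 2.0, 3.0], order=2, steps_ahead=3)
```

Because each forecast becomes an input for the next step, errors compound as the horizon grows — which is why multi-step accuracy degrades faster than one-step accuracy, and why the multivariate case also requires generating the future exogenous values.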
