
Modeling variable seasonal features with the Fourier transform


By Florin Andrei, Towards Data Science

Modeling time series data and forecasting it are complex topics. There are many techniques that could be used to improve model performance on a forecasting job. Here we will discuss one that may improve the way forecasting models absorb and utilize time features, and generalize from them. The main focus will be the creation of the seasonal features that feed the time series forecasting model in training: there are easy gains to be made here if you include the Fourier transform in the feature creation process.

This article assumes you are familiar with the basic aspects of time series forecasting; we will not discuss the topic in general, only a refinement of one aspect of it. For an introduction to time series forecasting, see the Kaggle course on the topic. The technique discussed here builds on top of their lesson on seasonality.

Consider the Quarterly Australian Portland Cement production dataset, which records the total quarterly production of Portland cement in Australia (in millions of tonnes) from 1956:Q1 to 2014:Q1.

This is time series data with a trend, seasonal components, and other attributes. The observations are made quarterly and span several decades. Let's take a look at the seasonal components first, using the periodogram plot (all code is included in the companion notebook, linked at the end).

The periodogram shows the power density of the spectral components (the seasonal components). The strongest component is the one with a period of 4 quarters, or 1 year. This confirms the visual impression that the strongest up-and-down variations in the graph happen about 10 times per decade. There is also a component with a period of 2 quarters; that is the same seasonal phenomenon, and it simply means the 4-quarter periodicity is not a simple sine wave but has a more complex shape. We will ignore components with periods longer than 4 quarters, for simplicity.

If you try to fit a model to this data, perhaps in order to forecast cement production beyond the end of the dataset, it would be a good idea to capture this yearly seasonality in a feature or set of features, and provide those to the model in training. Let's see an example.

Seasonal components can be modeled in a number of different ways. Often they are represented as time dummies, or as sine-cosine pairs. These are synthetic features with a period equal to the seasonality you're trying to model, or a submultiple of it.

Yearly time dummies:

Yearly sine-cosine pairs:

Time dummies can capture very complex waveforms of the seasonal phenomenon. On the flip side, if the period is N, then you need N time dummies, so if the period is very long you will need a lot of dummy columns, which may not be desirable. For non-penalized models, often just N-1 dummies are used; one is dropped to avoid collinearity issues (we will ignore that here).

Sine/cosine pairs can model arbitrarily long time periods. Each pair models a pure sine waveform with some initial phase. As the waveform of the seasonal feature becomes more complex, you will need to add more pairs (increase the order of the output from CalendarFourier).

(Side note: we use a sine/cosine pair for each period we want to model. What we really want is just one column of A*sin(2*pi*t/T + phi), where A is the weight assigned by the model to the column, t is the time index of the series, and T is the component period. But the model cannot adjust the initial phase phi while fitting the data, since the sine values are pre-computed. Luckily, the formula above is equivalent to A1*sin(2*pi*t/T) + A2*cos(2*pi*t/T), and the model only needs to find the weights A1 and A2.)
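As a concrete illustration of the two representations, here is a minimal sketch of how such features could be generated for the quarterly index of this dataset, using pandas for the time dummies and CalendarFourier from statsmodels for the sine/cosine pairs. This is not the companion notebook's code; the variable names, column prefix, and frequency strings are assumptions made for the example.

```python
import pandas as pd
from statsmodels.tsa.deterministic import CalendarFourier, DeterministicProcess

# Quarterly index covering 1956:Q1 through 2014:Q1 (the span of the dataset).
idx = pd.date_range("1956-01-01", "2014-01-01", freq="QS")

# Yearly time dummies: one 0/1 column per quarter of the year (q_1 .. q_4).
time_dummies = pd.get_dummies(idx.quarter, prefix="q", dtype=float)
time_dummies.index = idx

# Yearly sine/cosine pairs: CalendarFourier produces sin/cos columns with an
# annual period; order=2 adds a second pair that cycles twice per year,
# matching the 2-quarter component seen in the periodogram.
fourier = CalendarFourier(freq="A", order=2)  # "A" = annual; newer pandas may want "YE"
dp = DeterministicProcess(index=idx, constant=False, order=0, additional_terms=[fourier])
fourier_pairs = dp.in_sample()

print(time_dummies.head())
print(fourier_pairs.head())
```

Both dataframes repeat the same pattern every 4 quarters, with values confined to fixed intervals, which is exactly the limitation discussed next.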
TLDR: What these two techniques have in common is that they represent seasonality via a set of repetitive features, with values that cycle as often as once per time period being modeled (the time dummies, and the base sine/cosine pair), or several times per period (the higher-order sine/cosine pairs). And each of these features has values varying within a fixed interval: from 0 to 1, or from -1 to 1. These are all the features we have to model seasonality here.

Let's see what happens when we fit a linear model on such seasonal features. But first, we need to add trend features to the feature collection used to train the model.

Let's create trend features and then join them with the seasonal_year time dummies generated above.

This is the X dataframe (features) that will be used to train/validate the model. We're modeling the data with quadratic trend features, plus the 4 time dummies needed for the yearly seasonality. The y dataframe (target) will be just the cement production numbers.

Let's carve a validation set out of the data, containing the year 2010 observations. We will fit a linear model on the X features shown above and the y target (cement production) over the training portion, and then evaluate model performance in validation. We will also do all of the above with a trend-only model that fits the trend features but ignores seasonality.

Blue is train, red is validation. The full model is the sharp, thin line. The trend-only model is the wide, fuzzy line.

This is not bad, but there is one glaring issue: the model has learned a constant-amplitude yearly seasonality. Even though the yearly variation actually increases in time, the model can only stick to fixed-amplitude variations. Obviously, this is because we gave the model only fixed-amplitude seasonal features, and there's nothing else in the features (target lags, etc.) to help it overcome this issue.

Let's dig deeper into the signal (the data) to see if there's anything there that could help with the fixed-amplitude issue.

The periodogram highlights all spectral components in the signal (all seasonal components in the data) and provides an overview of their overall strength, but it's an aggregate, a sum over the whole time interval. It says nothing about how the amplitude of each seasonal component may vary in time across the dataset.

To capture that variation, you have to use the Fourier spectrogram instead. It's like the periodogram, but computed repeatedly over many time windows across the entire dataset. Both the periodogram and the spectrogram are available as functions in the scipy library.

Let's plot the spectrogram for the seasonal components with periods of 2 and 4 quarters, mentioned above. As always, the full code is in the companion notebook linked at the end.

What this diagram shows is the amplitude, the "strength", of the 2-quarter and 4-quarter components over time. They are pretty weak initially, but become very strong around 2010, which matches the variations you can see in the dataset plot at the top of this article.

What if, instead of feeding the model fixed-amplitude seasonal features, we adjust the amplitude of these features in time, matching what the spectrogram tells us? Would the final model perform better in validation?

Let's pick the 4-quarter seasonal component.
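Here is one way the time-varying amplitude of that component could be extracted with scipy.signal.spectrogram, as a hedged sketch rather than the companion notebook's actual code; the window length, overlap, and variable names (cement, envelope_4q) are assumptions made for the example.

```python
import numpy as np
from scipy.signal import spectrogram

# `cement` is assumed to hold the quarterly production figures as a pandas
# Series or 1-D array, aligned with the quarterly index used earlier.
values = np.asarray(cement, dtype=float)

fs = 1.0          # sampling rate: 1 observation per quarter
nperseg = 16      # 4-year windows
f, t, Sxx = spectrogram(
    values,
    fs=fs,
    window="hann",
    nperseg=nperseg,
    noverlap=nperseg - 1,  # slide the window one quarter at a time
    detrend="linear",      # remove the strong trend inside each window
    scaling="spectrum",
    mode="magnitude",
)

# The yearly component has a period of 4 quarters, i.e. 0.25 cycles/quarter.
row_4q = np.argmin(np.abs(f - 0.25))
envelope_4q = Sxx[row_4q]  # amplitude of the 4-quarter component over time
# `t` holds the window centres (in quarters), useful for aligning the
# envelope back onto the original quarterly index.
```

The same indexing with f - 0.5 would pick out the 2-quarter component instead.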
We could fit a linear model (call it the envelope model) to the order=2 trend of this component's amplitude, using the train data (model.fit()), and extend that trend into validation/testing (model.predict()):

The blue line is the strength of the variation of the 4-quarter component in the train data, fitted as a quadratic trend (order=2). The red line is the same thing, extended (predicted) over the validation data.

We have now modeled the variation in time of the 4-quarter seasonal component. We can take the output from the envelope model and multiply it by the time dummies corresponding to the 4-quarter seasonal component:

The 4 time dummies are no longer just 0 or 1. They've been multiplied by the component envelope, and now their amplitude varies in time, just like the envelope.

Let's train the main model again, now using the modified time dummies.

We're not aiming for a perfect fit here, since this is just a toy model, but it's obvious the model performs better: it follows the 4-quarter variations in the target more closely. The performance metric has also improved, by 51%, which is not bad at all. (A condensed code sketch of these steps is included after the links at the end of the article.)

Improving model performance is a vast topic. The behavior of any model does not depend on a single feature, or on a single technique. However, if you're looking to extract all you can out of a given model, you should probably feed it meaningful features. Time dummies, or sine/cosine Fourier pairs, are more meaningful when they reflect the variation in time of the seasonality they are modeling.

Adjusting the seasonal component's envelope via the Fourier transform is particularly effective for linear models. Boosted trees do not benefit as much, but you can still see improvements with almost any model. If you're using a hybrid model, where a linear model handles deterministic features (calendar, etc.) while a boosted-trees model handles more complex features, it is important that the linear model "gets it right", leaving less work for the other model.

It is also important that the envelope models you use to adjust seasonal features are trained only on the train data, and do not see any testing data during training, just like any other model. Once you adjust the envelope, the time dummies contain information from the target: they are no longer purely deterministic features that can be computed ahead of time over any arbitrary forecast horizon. So at this point, information could leak from validation/testing back into the training data if you're not careful.

The dataset used in this article is available here under the Public Domain (CC0) license: www.key2stats.com

The code used in this article can be found here: github.com

A notebook submitted to the Store Sales - Time Series Forecasting competition on Kaggle, using ideas described in this article: www.kaggle.com

A GitHub repo containing the development version of the notebook submitted to Kaggle: github.com

All images and code used in this article are created by the author.
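To close, here is the condensed sketch of the envelope adjustment walked through above, building on the feature and spectrogram sketches from earlier sections. It is an illustration under stated assumptions, not the companion notebook's implementation: `env` is assumed to be the 4-quarter amplitude already interpolated onto the quarterly index, `X` holds the trend features plus the q_1..q_4 dummies, `y` is the production series, and `train`/`val` are boolean masks for the pre-2010 and 2010 portions.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Assumed inputs (illustrative names):
#   env   : pandas Series, amplitude of the 4-quarter component per quarter
#   X     : DataFrame with trend features and the dummies q_1 .. q_4
#   y     : pandas Series with the cement production figures
#   train : boolean array marking the training rows (before 2010)
#   val   : boolean array marking the validation rows (the year 2010)

# 1. Envelope model: fit a quadratic (order=2) trend to the component's
#    amplitude on the training window only, then extend it over validation.
steps = np.arange(len(env), dtype=float).reshape(-1, 1)
steps_poly = np.hstack([steps, steps**2])
env_model = LinearRegression()
env_model.fit(steps_poly[train], env[train])
env_hat = pd.Series(env_model.predict(steps_poly), index=env.index)

# 2. Scale the yearly time dummies by the predicted envelope, so their
#    amplitude grows over time the way the spectrogram indicates.
dummy_cols = ["q_1", "q_2", "q_3", "q_4"]
X_adj = X.copy()
X_adj[dummy_cols] = X_adj[dummy_cols].mul(env_hat, axis=0)

# 3. Refit the main model on the adjusted features and check it in validation.
main_model = LinearRegression()
main_model.fit(X_adj[train], y[train])
y_pred = main_model.predict(X_adj[val])
```

Note that the envelope model, like the main model, only ever sees the training rows; the envelope over the validation year is an extrapolation, which is what keeps the leakage concern discussed above under control.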


