Real-world time series data are often generated from several sources of variation. Learning representations that capture the factors contributing to this variability enables a better understanding of the data via its underlying generative process and can improve performance on downstream machine learning tasks. In this paper, we propose a novel generative approach for learning representations of the global and local factors of variation in time series data. The local representation of each sample models non-stationarity over time with a stochastic process prior, while the global representation encodes the sample's time-independent characteristics. To encourage decoupling between the representations, we introduce a counterfactual regularization that minimizes the mutual information between the two variables. In experiments, we demonstrate successful recovery of the true local and global factors of variability on simulated data, and show that representations learned with our method yield superior performance on downstream tasks on real-world datasets. We believe that this way of defining representations benefits data modelling and can provide deeper insight into the complexity of real-world data.
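To make the global/local distinction concrete, the following is a minimal toy sketch, not the proposed model: it splits a time series into a time-independent summary (standing in for the global representation) and per-timestep residuals (standing in for the local representation), so that the two factors jointly account for the sample. All function names here are illustrative assumptions, and the actual method uses learned encoders with a stochastic process prior and mutual-information regularization rather than this closed-form split.

```python
import numpy as np

def decompose(x):
    """Toy illustration only: split a series of shape (T, D) into a
    time-independent 'global' summary and time-varying 'local' residuals."""
    g = x.mean(axis=0)   # global: one vector, constant across time
    z = x - g            # local: per-timestep deviations (non-stationarity)
    return g, z

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 3))   # 50 timesteps, 3 features
g, z = decompose(x)

print(g.shape)                 # (3,)   one global vector per sample
print(z.shape)                 # (50, 3) one local vector per timestep
print(np.allclose(g + z, x))   # True: the two factors jointly explain x
```

In the paper's setting, the analogous property is enforced by learning rather than arithmetic: the generative model reconstructs each sample from its global and local variables, while the counterfactual regularizer discourages information leaking between them.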