TexTSC: Class-Texture Preserving Data Condensation for Time Series Classification
Abstract
Dataset condensation seeks to generate a small set of synthetic examples that can replace large real datasets for training, but existing methods for time series often rely on unstable training-trajectory matching or capture only limited signal structure. We present TexTSC, a condensation framework that preserves class structure using spectro-temporal second-order statistics instead of trajectory replay. TexTSC models each class’s “texture” as the co-activation pattern among intermediate teacher features, aligning Gram matrices of activations in time to capture temporal correlations and in frequency to capture spectral envelopes and harmonics. A short-lag autocorrelation term stabilizes local rhythm, while a lightweight gradient anchor at the final layer ensures discriminative power. TexTSC optimizes synthetic sequences directly, remains model-agnostic, and requires only closed-form statistics, making it simple and stable. Experiments on standard benchmarks show that TexTSC produces compact datasets that retain class-conditional structure and achieve higher classification accuracy than first-order or single-domain baselines.
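The texture-matching objective described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`gram`, `short_lag_autocorr`, `texture_loss`), the use of NumPy instead of a deep-learning framework, the loss weighting (unweighted sum), and the fixed maximum lag are all assumptions for exposition. Feature maps are assumed to be (channels, time) arrays from an intermediate teacher layer; the gradient-anchor term is omitted.

```python
import numpy as np

def gram(feats):
    # feats: (channels, length) feature map.
    # The Gram matrix captures channel co-activation ("texture").
    return feats @ feats.T / feats.shape[1]

def short_lag_autocorr(x, max_lag=4):
    # Normalized autocorrelation of a 1-D signal at lags 1..max_lag
    # (max_lag=4 is an illustrative choice, not from the paper).
    x = x - x.mean()
    denom = (x * x).sum() + 1e-8
    return np.array([(x[:-k] * x[k:]).sum() / denom
                     for k in range(1, max_lag + 1)])

def texture_loss(real_feats, syn_feats, real_signal, syn_signal):
    # Time-domain Gram alignment: match temporal co-activation patterns.
    l_time = np.mean((gram(real_feats) - gram(syn_feats)) ** 2)
    # Frequency-domain Gram alignment on spectral magnitudes:
    # match spectral envelopes and harmonic structure.
    fr = np.abs(np.fft.rfft(real_feats, axis=1))
    fs = np.abs(np.fft.rfft(syn_feats, axis=1))
    l_freq = np.mean((gram(fr) - gram(fs)) ** 2)
    # Short-lag autocorrelation alignment on the raw signals
    # stabilizes local rhythm.
    l_ac = np.mean((short_lag_autocorr(real_signal)
                    - short_lag_autocorr(syn_signal)) ** 2)
    # Unweighted sum for illustration; per-term weights would be tuned.
    return l_time + l_freq + l_ac
```

In the full method these statistics would be computed per class (class-conditional averages over real examples) and the synthetic sequences optimized directly against them by gradient descent; because every term is a closed-form second-order statistic, no training trajectories need to be stored or replayed.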