

Poster

Supervised Feature Selection via Ensemble Gradient Information from Sparse Neural Networks

Kaiting Liu · Zahra Atashgahi · Ghada Sokar · Mykola Pechenizkiy · Decebal Constantin Mocanu

MR1 & MR2 - Number 45
Fri 3 May 8 a.m. PDT — 8:30 a.m. PDT

Abstract:

Feature selection algorithms aim to select a subset of informative features from a dataset to reduce its dimensionality, thereby lowering resource consumption and improving the model's performance and interpretability. In recent years, feature selection based on neural networks has become a new trend, demonstrating superiority over traditional feature selection methods. However, most existing methods use dense neural networks to detect informative features, which incurs significant computational and memory overhead. In this paper, taking inspiration from the successful application of local sensitivity analysis to neural networks, we propose "GradEnFS", a novel resource-efficient supervised feature selection algorithm based on a sparse multi-layer perceptron. By utilizing the gradient information of various sparse models from different training iterations, our method successfully detects the informative feature subset. We performed extensive experiments on nine classification datasets spanning various domains to evaluate the effectiveness of our method. The results demonstrate that our proposed approach outperforms state-of-the-art methods at selecting informative features while substantially reducing resource consumption. Moreover, we show that using a sparse neural network for feature selection not only reduces resource consumption but also offers a significant advantage over other methods when performing feature selection on noisy datasets.
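The abstract does not give implementation details, but its core idea — accumulating input-gradient magnitudes from a sparse network across training iterations and ranking features by the accumulated sensitivity — can be sketched roughly as follows. Everything here (the toy data, the network size, the static random sparsity mask, and the hyperparameters) is a hypothetical illustration, not the paper's actual GradEnFS algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: 500 samples, 10 features; only features 0 and 1
# determine the target (a hypothetical stand-in dataset).
X = rng.normal(size=(500, 10))
y = (3.0 * X[:, 0] - 2.0 * X[:, 1]).reshape(-1, 1)

d_in, d_hidden = 10, 16
W1 = rng.normal(scale=0.5, size=(d_in, d_hidden))
b1 = np.zeros(d_hidden)
W2 = rng.normal(scale=0.5, size=(d_hidden, 1))
b2 = np.zeros(1)

# Static random sparsity mask on the input layer — a crude proxy for the
# sparse training the paper relies on (roughly 50% of weights removed).
mask = rng.random(W1.shape) < 0.5
W1 *= mask

importance = np.zeros(d_in)  # accumulated |dL/dx| per input feature
lr = 0.05

for step in range(400):
    # Forward pass: one tanh hidden layer, linear output, squared loss.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    # Backward pass (mean squared error).
    dpred = 2.0 * (pred - y) / len(X)
    dW2 = h.T @ dpred
    db2 = dpred.sum(axis=0)
    dh = dpred @ W2.T
    dz1 = dh * (1.0 - h ** 2)
    dW1 = (X.T @ dz1) * mask      # masked weights stay exactly zero
    db1 = dz1.sum(axis=0)
    dX = dz1 @ W1.T               # gradient of the loss w.r.t. the inputs
    # "Ensemble" step: accumulate per-feature sensitivity over iterations.
    importance += np.abs(dX).sum(axis=0)
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2

# Rank features by accumulated gradient magnitude and keep the top k.
k = 2
selected = np.argsort(importance)[-k:]
print(sorted(selected.tolist()))
```

On this synthetic task the two informative features receive much larger accumulated input gradients than the noise features, so they are the ones selected; accumulating over many training iterations, rather than reading the gradient of a single converged model, is what makes the ranking robust to any one snapshot's noise.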
