Congratulations! Paper Authored by Liu Zhining, a 2019 Master's Candidate, Accepted by a CCF-A Conference

Date: 2021-06-29

The paper “Self-paced Ensemble for Highly Imbalanced Massive Data Classification,” authored by Liu Zhining, a 2019 Master’s candidate under the supervision of Prof. Chang Yi, has been accepted by a CCF-A conference (ICDE 2020).

Liu Zhining received his Bachelor’s degree from the Tang Aoqing Honors Program at Jilin University. Since his senior year, he has been jointly supervised by Prof. Chang Yi and Microsoft Research Asia, conducting research in machine learning and data mining. This work was carried out in collaboration with Cao Wei and Bian Jiang, researchers at Microsoft Research Asia.

The IEEE International Conference on Data Engineering (ICDE) is one of the top three conferences in the database field.

Paper Information:

First Author: Liu Zhining

Title: Self-paced Ensemble for Highly Imbalanced Massive Data Classification

Conference Name: 36th IEEE International Conference on Data Engineering (ICDE 2020)

Conference Category: CCF-A

Conference Date and Location: April 20-24, 2020, Dallas, Texas, USA

Summary:

Many real-world applications reveal difficulties in learning classifiers from imbalanced data. The big data era has brought more classification tasks with large-scale but extremely imbalanced and low-quality datasets. Most existing learning methods suffer from poor performance or low computational efficiency in such a scenario. To tackle this problem, we conduct a deep investigation into the nature of class imbalance, which reveals that not only the disproportion between classes, but also other difficulties embedded in the nature of the data, especially noise and class overlap, prevent us from learning effective classifiers. Taking these factors into consideration, we propose a novel framework for imbalanced classification that aims to generate a strong ensemble by self-paced harmonizing of data hardness via under-sampling. Extensive experiments show that this new framework, while being very computationally efficient, leads to robust performance even under highly overlapping classes and extremely skewed distributions. Note that our method can be easily adapted to most existing learning methods (e.g., C4.5, SVM, GBDT, and neural networks) to boost their performance on imbalanced data.
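To give a rough sense of the idea described in the summary, the sketch below implements a simplified under-sampling ensemble with a self-paced hardness schedule in Python with scikit-learn. It is an illustration only, not the authors’ reference implementation: the hardness measure (the current ensemble’s prediction error on majority samples), the number of bins, and the easy-to-uniform bin-weighting schedule are simplifying assumptions standing in for the exact formulas given in the paper.

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier


def fit_self_paced_ensemble(X, y, n_estimators=10, n_bins=5,
                            base_estimator=None, random_state=0):
    """Simplified self-paced under-sampling ensemble (illustrative sketch).

    Assumes a binary task where the minority class is labeled 1 and the
    majority class 0, with far more majority than minority samples.
    Returns the list of fitted base estimators.
    """
    rng = np.random.default_rng(random_state)
    base_estimator = base_estimator or DecisionTreeClassifier()
    X_min, X_maj = X[y == 1], X[y == 0]
    estimators = []

    for i in range(n_estimators):
        if not estimators:
            # First round: plain random under-sampling of the majority class.
            idx = rng.choice(len(X_maj), size=len(X_min), replace=False)
        else:
            # Hardness of each majority sample = current ensemble's error on it
            # (the majority label is 0, so predicted P(class=1) is the error).
            proba = np.mean(
                [e.predict_proba(X_maj)[:, 1] for e in estimators], axis=0)
            bins = np.minimum((proba * n_bins).astype(int), n_bins - 1)

            # Self-paced schedule (an assumption, not the paper's formula):
            # start by favoring easy bins, then flatten toward equal weight
            # per hardness bin as training progresses.
            t = i / max(1, n_estimators - 1)
            bin_hardness = np.array(
                [proba[bins == b].mean() if np.any(bins == b) else np.inf
                 for b in range(n_bins)])
            weights = (1.0 / (bin_hardness + 1e-6)) ** (1.0 - t)

            # Spread each bin's weight over its members, then sample a
            # minority-sized subset of the majority class.
            counts = np.bincount(bins, minlength=n_bins)
            p = weights[bins] / counts[bins]
            p /= p.sum()
            idx = rng.choice(len(X_maj), size=len(X_min), replace=False, p=p)

        # Train the next base learner on the balanced subset.
        X_bal = np.vstack([X_min, X_maj[idx]])
        y_bal = np.hstack([np.ones(len(X_min)), np.zeros(len(idx))])
        estimators.append(clone(base_estimator).fit(X_bal, y_bal))

    return estimators
```

Predictions would then be made by averaging `predict_proba` over the returned estimators. Because each base learner only ever sees a minority-sized slice of the majority class, training cost stays low even on massive datasets, which is the efficiency property the summary highlights; the base estimator can be swapped for any classifier with a scikit-learn-style interface.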