Aligning Large Language Models via Fine-grained Supervision (ACL 2024)

Pre-trained large language models (LLMs) excel at producing coherent articles, yet their outputs may be untruthful, toxic, or fail to align with user expectations. Current approaches focus on using reinforcement learning with human feedback …

Toward Robustness in Multi-label Classification: A Data Augmentation Strategy against Imbalance and Noise (AAAI 2024)

Multi-label classification poses challenges due to imbalanced and noisy labels in training data. We propose a unified data augmentation method, named BalanceMix, to address these challenges. Our approach includes two samplers for imbalanced labels, …

Debiasing Neighbor Aggregation for Graph Neural Network in Recommender Systems (CIKM 2022)

Graph neural networks (GNNs) have achieved remarkable success in recommender systems by representing users and items based on their historical interactions. However, little attention has been paid to GNNs' vulnerability to exposure bias -- users are …

Meta-Learning for Online Update of Recommender Systems (AAAI 2022)

Online recommender systems should always be aligned with users' current interests to accurately suggest items that each user would like. Since user interest usually evolves over time, the update strategy should be flexible to quickly catch users' …

COVID-EENet: Predicting Fine-Grained Impact of COVID-19 on Local Economies (AAAI 2022)

Assessing the impact of the COVID-19 crisis on economies is fundamental to tailoring government responses for recovery from the crisis. In this paper, we present a novel approach to assessing the economic impact with a large-scale credit card …

Task-Agnostic Undesirable Feature Deactivation Using Out-of-Distribution Data (NeurIPS 2021)

Deep neural networks (DNNs) have achieved great success in many machine learning tasks by virtue of their high expressive power. However, their predictions can easily be biased toward undesirable features, which are not essential for solving the target task …

Robust Learning by Self-Transition for Handling Noisy Labels (KDD 2021)

Real-world data inevitably contains noisy labels, which degrade the generalization of deep neural networks. It is known that a network typically begins to rapidly memorize false-labeled samples after a certain point of training. Thus, to …

PREMERE: Meta-Reweighting via Self-Ensembling for Point-of-Interest Recommendation (AAAI 2021)

Point-of-interest (POI) recommendation has become an important research topic in recent years. The user check-in history used as the input to POI recommendation is very imbalanced and noisy because of sparse and missing check-ins. Although sample …

Carpe Diem, Seize the Samples Uncertain "At the Moment" for Adaptive Batch Selection (CIKM 2020)

The performance of deep neural networks is significantly affected by how well mini-batches are constructed. In this paper, we propose a novel adaptive batch selection algorithm called Recency Bias that exploits the uncertain samples predicted …

Hi-COVIDNet: Deep Learning Approach to Predict Inbound COVID-19 Patients and Case Study in South Korea (KDD 2020)

The escalating crisis of COVID-19 has put people all over the world in danger. Owing to the high contagion rate of the virus, COVID-19 cases continue to increase globally. To further suppress the threat of the COVID-19 pandemic and minimize its …