STAIR Research Group | Scalable & Trustworthy AI Research
Publications
Preprint
a1: Steep Test-time Scaling Law via Environment Augmented Generation
Lingrui Mei, Shenghua Liu, Yiwei Wang, Baolong Bi, Yuyao Ge, Jun Wan, Yurong Wu, Xueqi Cheng
Cite
PDF
Innate Reasoning is Not Enough: In-Context Learning Enhances Reasoning Large Language Models with Less Overthinking
Yuyao Ge, Shenghua Liu, Yiwei Wang, Lingrui Mei, Lizhe Chen, Baolong Bi, Xueqi Cheng
Cite
DOI
PDF
Parameters vs. Context: Fine-Grained Control of Knowledge Reliance in Language Models
Baolong Bi, Shenghua Liu, Yiwei Wang, Yilong Xu, Junfeng Fang, Lingrui Mei, Xueqi Cheng
Cite
DOI
PDF
"Not Aligned" is Not "Malicious": Being Careful about Hallucinations of Large Language Models' Jailbreak
Lingrui Mei, Shenghua Liu, Yiwei Wang, Baolong Bi, Jiayi Mao, Xueqi Cheng
Cite
DOI
PDF
Adaptive Token Biaser: Knowledge Editing via Biasing Key Entities
Baolong Bi, Shenghua Liu, Yiwei Wang, Lingrui Mei, Hongcheng Gao, Yilong Xu, Xueqi Cheng
Cite
DOI
PDF
Context-DPO: Aligning Language Models for Context-Faithfulness
Baolong Bi, Shaohan Huang, Yiwei Wang, Tianchi Yang, Zihan Zhang, Haizhen Huang, Lingrui Mei, Junfeng Fang, Zehao Li, Furu Wei, Weiwei Deng, Feng Sun, Qi Zhang, Shenghua Liu
Cite
DOI
PDF
Decoding by Contrasting Knowledge: Enhancing LLMs' Confidence on Edited Facts
Baolong Bi, Shenghua Liu, Lingrui Mei, Yiwei Wang, Pengliang Ji, Xueqi Cheng
Cite
DOI
PDF
HiddenGuard: Fine-Grained Safe Generation with Specialized Representation Router
Lingrui Mei, Shenghua Liu, Yiwei Wang, Baolong Bi, Ruibin Yuan, Xueqi Cheng
Cite
DOI
PDF
Is Factuality Decoding a Free Lunch for LLMs? Evaluation on Knowledge Editing Benchmark
Baolong Bi, Shenghua Liu, Yiwei Wang, Lingrui Mei, Xueqi Cheng
Cite
DOI
PDF
Scalable Link Prediction on Large-Scale Heterogeneous Graphs with Large Language Models
Baolong Bi, Shenghua Liu, Yiwei Wang, Lingrui Mei, Xueqi Cheng
Cite
DOI
PDF