Unraveling and Mitigating Safety Alignment Degradation of Vision-Language Models
The safety alignment ability of Vision-Language Models (VLMs) is prone to be degraded by the integra
In the evolving landscape of autonomous vehicles, ensuring robust in-vehicle network (IVN) security
Generating diverse responses from large language models (LLMs) is crucial for applications such as p
Large Language Models (LLMs) have displayed remarkable performances across various complex tasks by
Ptychography is an advanced computational imaging technique in X-ray and electron microscopy. It has
The electric power sector is one of the largest contributors to greenhouse gas emissions in the worl
The robustness of LLMs to jailbreak attacks, where users design prompts to circumvent safety measure
Foundation models (FMs) such as large language models (LLMs) have significantly impacted many fields
There have been key advancements to building universal approximators for multi-goal collections of r
Large language models encode the correlational structure present in natural language by fitting segm
Fine-tuning Large Language Models (LLMs) has proven effective for a variety of downstream tasks. How
Recently, Knowledge Graphs (KGs) have been successfully coupled with Large Language Models (LLMs) to
Reinforcement learning (RL) is rapidly reaching and surpassing human-level control capabilities. How
Reliable estimation of treatment effects from observational data is important in many disciplines su
UniGlyph is a constructed language (conlang) designed to create a universal transliteration system u
Active Learning aims to minimize annotation effort by selecting the most useful instances from a poo
Hallucinations in Large Language Models (LLMs) remain a major obstacle, particularly in high-stakes
The current paradigm for safety alignment of large language models (LLMs) follows a one-size-fits-al
Large Language Models (LLMs) have achieved state-of-the-art performance across numerous tasks. Howev
Federated Kolmogorov-Arnold Networks (F-KANs) have already been proposed, but their assessment is at