[2412.13666] Evaluation of LLM Vulnerabilities to Being Misused for Personalized Disinformation Generation

[Submitted on 18 Dec 2024 (v1), last revised 25 Jul 2025 (this version, v2)] ...
Read more

[2507.17849] Dynamic and Generalizable Process Reward Modeling

arXiv:2507.17849v1 Announce Type: new Abstract: Process Reward Models (PRMs) are crucial for guiding Large Language Models (LLMs) in complex scenarios ...
Read more

[2505.22334] Advancing Multimodal Reasoning via Reinforcement Learning with Cold Start

[Submitted on 28 May 2025 (v1), last revised 23 Jul 2025 (this version, v2)] ...
Read more

[2503.03460] Visualising Policy-Reward Interplay to Inform Zeroth-Order Preference Optimisation of Large Language Models

[Submitted on 5 Mar 2025 (v1), last revised 23 Jul 2025 (this version, v2)] ...
Read more

[2507.15844] Hierarchical Budget Policy Optimization for Adaptive Reasoning

[Submitted on 21 Jul 2025 (v1), last revised 22 Jul 2025 (this version, v2)] ...
Read more

[2507.15007] Hear Your Code Fail, Voice-Assisted Debugging for Python

[Submitted on 20 Jul 2025 (v1), last revised 22 Jul 2025 (this version, v2)] ...
Read more

[2311.17741] End-to-end Joint Punctuated and Normalized ASR with a Limited Amount of Punctuated Training Data

[Submitted on 29 Nov 2023 (v1), last revised 21 Jul 2025 (this version, v3)] ...
Read more

[2507.13822] RAG-based Architectures for Drug Side Effect Retrieval in LLMs

arXiv:2507.13822v1 Announce Type: cross Abstract: Drug side effects are a major global health concern, necessitating advanced methods for their accurate ...
Read more

[2409.04617] Sparse Rewards Can Self-Train Dialogue Agents

[Submitted on 6 Sep 2024 (v1), last revised 18 Jul 2025 (this version, v3)] ...
Read more

[2402.13722] Exploiting Adaptive Contextual Masking for Aspect-Based Sentiment Analysis

[Submitted on 21 Feb 2024 (v1), last revised 17 Jul 2025 (this version, v2)] ...
Read more