
Commit

change font
yiranyyu committed Nov 30, 2023
1 parent fb091e9 commit f1874a2
Showing 20 changed files with 7 additions and 7 deletions.
Binary file modified .DS_Store
Binary file removed demos/p1.png
Binary file removed demos/p2.png
Binary file removed demos/p3.png
Binary file removed demos/p4.png
Binary file removed demos/p5.png
Binary file removed demos/p6.png
Binary file removed demos/p7.png
Binary file removed demos/p8.png
Binary file removed demos/p9.png
Binary file modified images/.DS_Store
Binary file removed images/Intro.png
Binary file removed images/VPGTrans.png
Binary file removed images/blip2.png
Binary file removed images/cost.png
Binary file added images/icon.jpg
Binary file removed images/icon.png
Binary file removed images/overview.png
Binary file removed images/vl-llama.png
index.html: 14 changes (7 additions & 7 deletions)
@@ -33,10 +33,10 @@
<link rel="stylesheet" href="./static/css/fontawesome.all.min.css">
<link rel="stylesheet"
href="https://cdn.jsdelivr.net/gh/jpswalsh/academicons@1/css/academicons.min.css">
<link rel="icon" href="images/icon.png">
<link rel="icon" href="images/icon.jpg">
<link rel="stylesheet" href="./static/css/index.css">

<link rel="shortcut icon" href="images/icon.png" type="image/x-icon">
<link rel="shortcut icon" href="images/icon.jpg" type="image/x-icon">

<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<script defer src="./static/js/fontawesome.all.min.js"></script>
@@ -82,7 +82,7 @@
<div class="columns is-centered">
<div class="column has-text-centered">
<h1 class="title is-1 publication-title">RLHF-V</h1>
<h2 class="title is-2 publication-title">Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback</h2>
<h2 class="title is-3 publication-title">Towards Trustworthy MLLMs via Behavior Alignment from Fine-grained Correctional Human Feedback</h2>
<div class="is-size-5">
<span class="author-block">
<a href="https://github.com/yiranyyu" style="color:#008AD7;font-weight:normal;">Tianyu Yu<sup>1</sup>
@@ -359,16 +359,16 @@ <h2 class="title is-3">Abstract</h2>
Existing Multimodal Large Language Models prevalently suffer from serious <b>hallucination</b> problems, generating text that is not factually grounded in associated images. Our <b>RLHF-V framework</b> enhances MLLM trustworthiness via behavior alignment from fine-grained correctional human feedback.
<ul>
<li>
- <b>1.4K Fine-Grained and Diverse Human Preference Data</b>: We collect 1.4K pieces of segment-level corrections of human feedback on hallucinations, covering hallucination types including objects (41.2%), positions (20.3%), numbers (16.5%), attributes (10%), actions (5.3%), and others (6.8%).
+ <b>1.4K Fine-Grained and Diverse Human Preference Data</b>: <span style="font-size: 95%;">We collect 1.4K pieces of segment-level corrections of human feedback on hallucinations, covering hallucination types including objects (41.2%), positions (20.3%), numbers (16.5%), attributes (10%), actions (5.3%), and others (6.8%).</span>
</li>
<li>
- <b>High Data Efficiency and Scalability</b>: With just 1.4K annotated data, we achieved a 34.8% reduction in model hallucinations. Moreover, the decrease in hallucinations becomes more significant as more data used.
+ <b>High Data Efficiency and Scalability</b>: <span style="font-size: 95%;">With just 1.4K annotated data, we achieved a 34.8% reduction in model hallucinations. Moreover, the decrease in hallucinations becomes more significant as more data used.</span>
</li>
<li>
- <b>Enhanced Performance and Computational Efficiency with DDPO</b>: Our Dense Direct Preference Optimization (DDPO) algorithm can better exploit the fine-grained human feedback, allowing training in under 1 hour on 8 A100 GPUs.
+ <b>Enhanced Performance and Computational Efficiency with DDPO</b>: <span style="font-size: 95%;">Our Dense Direct Preference Optimization (DDPO) algorithm can better exploit the fine-grained human feedback, allowing training in under 1 hour on 8 A100 GPUs.</span>
</li>
<li>
- <b>Outstanding Trustworthiness without Compromising Helpfulness</b>: Our model surpasses existing open-source MLLMs in reducing hallucination rates, mitigates hallucination from over-generalization, and maintains informativeness.
+ <b>Outstanding Trustworthiness without Compromising Helpfulness</b>: <span style="font-size: 95%;">Our model surpasses existing open-source MLLMs in reducing hallucination rates, mitigates hallucination from over-generalization, and maintains informativeness.</span>
</li>
</ul>
<!-- <br>

