diff --git a/README.md b/README.md
index 9ed8dab6cd10138986f59668dcdcaa0e62003866..f777216a04e88c32fde63da48ed807c288abc074 100644
--- a/README.md
+++ b/README.md
@@ -3,7 +3,7 @@
 Code for "Dialogue Evaluation with Offline Reinforcement Learning" paper. 
 
 <p align="center">
-  <img width="700" src="all2.pdf">
+  <img width="700" src="all2.png">
 </p>
 
 In this paper, we propose the use of offline reinforcement learning for dialogue evaluation based on static data. Such an evaluator is typically called a critic and is utilized for policy optimization. We go one step further and show that offline RL critics can be trained as external evaluators for any dialogue system, allowing dialogue performance comparisons across various types of systems. This approach has the benefit of being corpus- and model-independent while attaining strong correlation with human judgements, which we confirm via an interactive user trial.
diff --git a/all2.png b/all2.png
new file mode 100644
index 0000000000000000000000000000000000000000..0d6627d8c030257290f6b539d3d0b3b8da0b7bf9
Binary files /dev/null and b/all2.png differ