The Project

Being able to duplicate published research results is an important part of conducting research, whether to build upon those findings or to compare against them. This process is called “replicability” when using the original authors' artifacts (e.g., their code), and “reproducibility” otherwise (e.g., re-implementing the algorithms). Reproducibility and replicability of research results have recently gained considerable interest, with assessment studies being conducted in various fields, and they are often seen as a driver of better result diffusion and transparency. In this project, we assess replicability in Computer Graphics by evaluating whether the code is available and whether it works properly. As a proxy for the field, we compiled, ran, and analyzed the codes of 192 out of 454 papers from SIGGRAPH conferences (exhaustive for 2014, 2016, and 2018). In the analysis described in

we show a clear increase in the number of papers with available and operational research code, with variations across subfields, and we exhibit a correlation between code replicability and citation count.

This website provides an interactive tool to explore our results and evaluation data. It also lets you comment on the various codes, either as an author or as a user. All materials (data, scripts, etc.) used to generate these results are available on the GitHub project. The website contains the data for all papers in:

and partial data for:

As a long-term goal, we would like to collect data for more SIGGRAPH venues, for SIGGRAPH Asia editions, for ToG papers, and for other computer graphics events. If you want to help, see contributions, and check out the contributors.

Our project aims to provide the community with tools to improve the replicability of Computer Graphics research. While the Graphics Replicability Stamp Initiative (GRSI) encourages authors to make their research replicable, our project checks whether existing research is replicable.

You can contribute new code analyses for computer graphics papers. We look forward to your contributions. You can also contact us.

Data Digest

Key numbers

Number of papers reviewed: 454
Number of reviews: 461
Number of papers with code: 192
Number of replicable papers: 146
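
The headline rates implied by these counts can be derived directly; the snippet below is a simple illustration computing ratios from the numbers listed above (no additional data is introduced):

```python
# Ratios derived from the key numbers listed above.
papers_reviewed = 454
papers_with_code = 192
replicable_papers = 146

code_rate = papers_with_code / papers_reviewed          # share of reviewed papers providing code
replicable_rate = replicable_papers / papers_with_code  # share of those codes found replicable

print(f"{code_rate:.1%} of reviewed papers provide code")        # 42.3%
print(f"{replicable_rate:.1%} of those codes were replicable")   # 76.0%
```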

Number of reviews per year of publication

Replicability results for reviewed papers

PDF accessibility for reviewed papers

The Team



This work was funded in part by ANR-16-CE23-0009 (ROOT), ANR-16-CE33-0026 (CALiTrOp), ANR-15-CE40-0006 (CoMeDiC) and ANR-16-CE38-0009 (e-ROMA).


There are several ways in which you can contribute and help us improve Computer Graphics research replicability:

  • Add a comment (alternative compilation tricks, details on the code...) via the discussion on each paper page.
  • Add a new variant (replicability test), i.e. edit an existing JSON file (see below).
  • Add a new entry to the system (new paper), i.e. submit a new JSON file.

For the last two cases, you can either submit a proper JSON file as a pull request to this project, or send the JSON file to

Note about self-reviewing: we accept reviews provided by the authors of a paper. These reviews will be clearly identified and may be double-checked to validate the replicability. If you're an author, please mention it clearly when submitting your review.

In our system, the website is fully generated from data stored in JSON documents. You can browse the database on the GitHub project page. Each paper is a single JSON file, named after the paper's DOI, with multiple "variant" records; each variant is a build test on a specific system or environment, or by a specific reviewer. For example, the article with DOI 10.1145/2601097.2601102 has a 10.1145-2601097.2601102.json file which looks like:

                    ... data for variant A ...
                    ... data for variant B ...

We highly recommend having a look at our template JSON, which explains all the fields we are using.
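
As a rough sketch of what consuming one of these per-paper files could look like — the field names used here ("variants", "os", "replicable") are hypothetical placeholders; the template JSON in the repository defines the actual schema:

```python
import json

# Hedged sketch: parse a per-paper JSON document and iterate over its
# variant records. The structure and field names below are illustrative
# only -- consult the template JSON for the real schema.
example = """
{
  "variants": [
    {"os": "Ubuntu 18.04", "replicable": true},
    {"os": "Windows 10", "replicable": false}
  ]
}
"""

paper = json.loads(example)
for variant in paper["variants"]:  # one record per build test
    print(variant["os"], "->", variant["replicable"])
```

In practice you would read the file named after the paper's DOI (e.g. with `json.load` on an open file handle) instead of an inline string.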

If you have any questions, feature requests, or bug reports, do not hesitate to open an Issue.

Contact us

Drop us an email for more information or to let us know what you think.