ECONOMIC INQUIRY | Data Availability Policy

Economic Inquiry requires that authors of all published empirical papers submitted to the journal after December 1, 2021 provide enough detail on their work for their results to be replicated. Prior to publication (though not necessarily before submission), data from empirical papers must be uploaded to a suitable archive, along with the instructions an interested party would need to reconstruct the key tables and figures in the published manuscript. For any paper that involves gathering a unique data set, including economic experiments, surveys, and similar data-gathering exercises, all instruments used in gathering that data should also be provided in the archive. The replication materials will be linked to the published paper.

The WEAI has a sponsored data archive at OpenICPSR to which all authors can upload their replication materials free of charge after the paper has passed the review process (visit the WEAI repository for more information and instructions). Authors should expect to use WEAI's archive site but may, if necessary, use an alternative archive. Any alternative site must be approved by the editor to ensure that it meets the journal's requirements for data integrity.

After a paper receives an initial acceptance, the authors will be asked to upload their data archive and provide the URL of its location. A member of the EI Data Team will then review the archive to ensure that it meets the journal's requirements. Authors are encouraged to pre-review their own data package, before uploading it, using the same checklist our Data Team will use, confirming that a reviewer could answer “Yes” to each relevant question. For any question where “No” would be selected, make certain that your README file explains the reason for the omission. Pre-reviewing your own data archive in this way shortens the data review process and gets your paper published more quickly.

Review Data Archive Checklist

Once your materials are submitted, our Data Team will review them. If the materials are incomplete, the Data Team will tell authors which elements of the checklist the archive failed to satisfy and indicate how to rectify the issues. This process will continue until an acceptable archive is produced; under normal circumstances, it should not take long. If authors fail to provide replication materials, or cannot provide an acceptable archive within six months, the journal reserves the right to withdraw an acceptance decision. As with our editorial decisions, we seek to limit the number of rounds of data-package submissions, so authors should check their packages carefully before submission to ensure they meet the requirements and that multiple rounds are unnecessary. Should replication materials turn out to be problematic after publication, the editor will handle cases as they arise, and in serious cases papers may be retracted by the journal.

Replication packages must include the following elements:

  1. A summary (README) file, preferably in plain txt or PDF format, describing the contents of the replication package. It should explain the role and function of each included file and list all software necessary to run the code, along with any required add-on packages. It should also give simple instructions for how to run the code to generate the results and state where the results can be found once the code has finished. This file must follow the format specified in the README Template provided by the Social Science Data Editors (see the illustrative sketch after this list).
  2. All data files and code necessary to produce the main tables and figures in the manuscript should be included.
  3. Ideally, authors should provide the code used to clean and organize their data from the original data files. When that is not feasible, authors should provide a clear description of that process, including any decision criteria for dropping or excluding observations, imputing values, or performing other data transformations between the original data files and the final data file. When base data files are not included, authors should explain the origin of those files and how another researcher could access them.
  4. For any projects that involve the creation of an original data set via surveys, experiments or similar methods, authors should provide full details on the methods used. This means providing data-gathering instruments, experiment programs, instruction scripts and so on, along with a brief description of how these materials were used in gathering the data.
  5. For any projects which involve simulations or computational elements, the code generating those calculations should be included, along with an explanation of how to run it.
  6. It is expected that some data sets may be proprietary or otherwise unable to be publicly archived. If that is the case, it should be explained at submission. In lieu of providing the data, authors should give clear directions on how other researchers might obtain the data, as well as a clear explanation of how the data were processed into the form used. The code for conducting the work should still be included and explained even if the data themselves cannot be uploaded.
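
For illustration only, a minimal package satisfying the elements above might be organized along the following lines. All file and folder names here are hypothetical examples, not journal requirements, and the Stata-style script names are simply one possibility; the README Template from the Social Science Data Editors remains the authoritative guide to the README's structure.

    README.txt          - describes every file, lists required software and add-on
                          packages, gives run instructions, and says where output appears
    data/raw/           - original data files (or a note on how to obtain restricted data)
    data/final/         - cleaned analysis data set(s)
    code/clean_data.do  - builds the final data set from the raw files
    code/analysis.do    - reproduces the main tables and figures
    output/             - tables and figures produced by the code
    instruments/        - survey questionnaires, experiment programs, instruction scripts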

If there are reasons you would be unable to comply with this policy, explain them when the paper is submitted. Waivers to this policy may be granted when appropriate at the discretion of the editor.