About JEDI



The “Journal Editors Discussion Interface” (JEDI) Listserv was created in March 2017 by several members of the Data Preservation Alliance for the Social Sciences (Data‐PASS). JEDI offers the editors and editorial staff of social science journals, and personnel from the data management community, a shared forum in which to ask and answer questions, pool information and expertise, and build a fund of collective knowledge. JEDI aims to facilitate discussion between the two groups about evolving practices, to help and encourage them to develop a common language and set of norms, and to generate solutions to problems new and old.


While the details of policies and workflows differ from journal to journal, there is a core set of functional tasks that most journal editors undertake. For example, editors typically develop and provide submission instructions and guidelines to authors, register and manage submissions, identify reviewers, organize the review process, interact repeatedly with authors and reviewers to improve submissions, and guide worthy manuscripts and their supporting materials through publication.

These editorial processes are crucial steps on the road linking research to published knowledge. These processes allow research results to be shared and integrated into the overall stock of social scientific knowledge. The great majority of these editorial processes are instantiated in a journal’s standard operating procedures, represented in its guidelines for authors, its workflows, and its journal management software.

While editorial teams acquire great expertise in carrying out these processes, questions and difficulties arise from time to time. Moreover, editors may find it helpful to stay current on, and contribute to, debates about best practices so that they are prepared to respond when innovations become mainstream editorial conduct.

Given the many demands on editors’ time – and given that most editors face similar procedural challenges – there is great value in their interacting with one another about these key issues and pooling their collective wisdom: sharing lessons, examples, insights, and solutions. The benefits can be further multiplied if experts on relevant topics are included in the conversation. JEDI seeks to generate that interaction and those benefits.


The majority of JEDI members are editors and editorial staff of social science journals, and personnel from the data management community, in particular representatives from digital repositories that safely store, publish, and preserve digital social science data. Any individual who plays a key editorial role at a social science journal or a leading or supporting role at an organization involved in data management is welcome to join JEDI.


JEDI’s initial focus is on the parts of the editorial process that deal with data and their analysis (e.g., effective data management, data citation, and linking data and analysis to published conclusions), and making research more transparent. This is an area where change is currently underway and where a discussion among journal editors is thus likely to be most lively and helpful.

For example, queries on the following topics might arise:

  • Assistance locating template language for authors’ guidelines on citation and the use of digital object identifiers (DOIs).
  • Advice on best practice for citing dynamic data.
  • Advice on the relative ease or difficulty of incorporating a new practice, such as prepublication verification of analysis, preregistration of research designs, or pre-analysis plans.
  • Discussion of second-order risks that arise from improved transparency – for example, how journals should take notice of subsequent appraisals, including replications.

Of course, Listserv members are free to raise any questions about the editorial process that they think their fellow editors might be willing and able to address. It is anticipated that the Listserv will eventually cover any topic about which editors would find it valuable to share information.


A strong consensus is emerging within the discipline of political science that knowledge claims are easier to understand and evaluate when scholars describe the research processes they used to reach their conclusions. Citing and showing the evidence on which those claims rest (when this can be done within ethical and legal constraints), discussing the processes through which that evidence was gathered, and explicating the analysis that produced the claims all facilitate expression, interpretation, reproduction, and replication.

A vibrant community of institutions and individuals has made substantial progress in developing new ways to manage, cite, and share data and related supplemental materials. Journal editors have been at the forefront of these discussions. New tools make sharing and citing data, increasing the transparency of research, and integrating data into a journal’s workflow both easier and less expensive. Some journals have been early adopters of these techniques, demonstrating that it is possible both to achieve openness and to publish first-rate substantive research.

Notwithstanding these advances, generating effective strategies for making research transparent – and in particular, sharing data in legal and ethical ways, developing enduring citations to data, and connecting diverse types of data to published claims – is challenging. On the one hand, multiple communities are engaged in conversation about transparency, making it difficult to remain up to date on all aspects of the dialogue. For example, conversations have begun about the increasingly common practice of pre-publication verification of analyses, with some journals making final acceptance of an article conditional on success in replicating its results. In the qualitative realm, a vibrant conversation has begun, with the protection of human subjects and logistical concerns front and center.

On the other hand, a set of stable and easily adoptable core practices has begun to emerge, especially with regard to data citation and management, and in these areas editors can proceed with confidence. For example, social (including political) science is moving toward the widespread use of permanent identifiers, such as digital object identifiers (DOIs), for a variety of research products, articles and datasets alike. Similarly, there is now a strong consensus that where data are located matters, and that (for example) trusted digital repositories are preferable to personal websites for data access. Notwithstanding such consensus, further change is likely on many fronts. To identify just one, editorial management software is likely to become increasingly integrated with data infrastructure, so that journal workflows can easily track the status of data and other supplemental materials.

JEDI seeks to help editors prepare for and work to address these impending challenges together.