CODECHECK tackles one of the main challenges of computational research with a workflow and roles for evaluating the computer programs underlying scientific papers for their reproducibility. Codecheckers conduct independent, time-stamped runs that award a “certificate of executable computation” and increase the availability, discovery, documentation, and reproducibility of crucial artifacts of computer-based research. A CODECHECK certificate states that one codechecker was able to follow the author’s instructions at one point in time and largely arrived at the same outputs and results, much as peer review does for a paper’s scientific contribution. CODECHECK emphasises openness, communication, and recognition of diverse research outputs and reviewing contributions.
Validation: See the CODECHECK paper for a full description of the challenges, solutions, and goals of CODECHECK. Browse the CODECHECK register of all completed checks.
DataSeer fills the urgent need for low-cost, scalable solutions to measure open science, show researchers how to comply with policy, and deploy just-in-time interventions.
Antibody transparency report from the Only Good Antibodies community: This service evaluates the quality of information about the choice and use of antibodies within a research protocol or manuscript. It provides a table showing which experiments relied (or will rely) on antibodies, along with information about the suitability of those antibodies for these specific experiments – drawn both from the submission/experimental plan and from other sources where available. The report indicates where further experimental work to confirm antibody specificity is needed or desirable, and where an alternative antibody is recommended because the chosen antibody is poorly suited to its intended use. This service is provided by experts within the OGA community, a non-profit organisation with the mission to enhance the quality of research that relies on antibodies.
Validation: This is an experimental platform that relies on human checking assisted by AI to enhance speed and accuracy. Evaluations provided for Lifecycle Journal submissions are part of our validation study.
Paper-Wizard is a cutting-edge AI-powered evaluation service that delivers comprehensive, actionable feedback on academic papers in minutes, not months. The system harnesses advanced processing to provide detailed insights on theoretical soundness, methodological rigor, statistical appropriateness, and writing quality, helping authors critically evaluate and enhance their manuscripts with unprecedented speed and precision. Operating as a fully autonomous digital peer reviewer, Paper-Wizard helps researchers polish their work to meet the highest academic standards.
Validation: Paper-Wizard leverages developments in generative AI to deliver manuscript evaluation capabilities. The system is in active development, with an established user base of over 1,500 researchers. While formal studies are upcoming, participation in Lifecycle Journal serves as part of our systematic validation process.
“Peer Community in” (PCI) is a non-profit, non-commercial scientific organisation that gathers communities of researchers who review and recommend, for free, preprints in their field, e.g. PCI Ecology or PCI Archaeology. PCI is thus a peer-review and endorsement service that publicly evaluates – through peer review – and recommends/highlights high-quality preprints deposited in open archives. In the case of a positive evaluation, PCI publishes a recommendation text along with the whole evaluation process.
Validation: So far, we have published over 4500 public peer reviews and over 800 public recommendations. Participating PCI Communities:
- PCI Animal Science
- PCI Archaeology
- PCI Ecology
- PCI Ecotoxicology and Environmental Chemistry
- PCI Evolutionary Biology
- PCI Genomics
- PCI Health and Movement Science
- PCI Infection
- PCI Microbiology
- PCI Neuroscience
- PCI Organisation Studies
- PCI Palaeontology
Peer Community in Registered Reports (PCI RR) is a free, non-profit, non-commercial arm of the broader Peer Community In platform, dedicated to reviewing and recommending Registered Reports preprints across the full spectrum of STEM, medicine, the social sciences and humanities. Established in 2021 by the founders of the Registered Reports format, PCI RR has publicly recommended hundreds of Stage 1 and Stage 2 preprints.
PREreview supports and empowers diverse and historically excluded communities of researchers (particularly those at the early stages of their careers) and other expert reviewers to engage in open preprint peer review. In addition to training and collaborative review facilitation services, we offer a free-to-use online review publication platform with a request-a-review feature that helps users find preprints to review. Reviewers can publish their preprint reviews with us for free by registering with their ORCID iD and then receive DOIs for published reviews and credit for their public review activity in the peer review sections of their ORCID profiles. We support dozens of preprint servers and offer COAR Notify Protocol integrations to partnering servers, like bioRxiv and SciELO Preprints, that want to give their preprint authors the ability to request reviews from PREreview’s community with the click of a button.
RegCheck leverages large language models to automatically compare preregistration plans with the resulting scientific publications, letting researchers quickly identify if and how executed studies deviated from the initial plan.
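To make the idea concrete, here is a minimal sketch of LLM-assisted preregistration checking. It is not RegCheck’s actual implementation: the `complete` argument is a placeholder for whatever LLM completion API is available, and the item list is illustrative.

```python
# Illustrative sketch only; RegCheck's pipeline is not reproduced here.
# `complete` is a placeholder for an LLM API call supplied by the caller.

ITEMS = ["hypotheses", "sample size", "exclusion criteria",
         "primary outcomes", "analysis plan"]

def compare_prereg_to_paper(prereg_text: str, paper_text: str, complete) -> dict:
    """Ask an LLM, item by item, whether the paper deviates from the plan."""
    report = {}
    for item in ITEMS:
        prompt = (
            f"Preregistration:\n{prereg_text}\n\n"
            f"Published paper:\n{paper_text}\n\n"
            f"For the item '{item}', state whether the paper follows the "
            "preregistration, deviates from it, or does not report it, "
            "and quote the relevant passages from both documents."
        )
        report[item] = complete(prompt)  # placeholder LLM call
    return report
```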
ResearchHub provides an innovative open peer review marketplace for authors and journals, dramatically reducing review timelines from months to an average of 10 days. We financially incentivize reviewers with ResearchCoin (RSC), encouraging rapid, high-quality feedback while promoting open science. With a transparent and efficient process, ResearchHub accelerates knowledge dissemination, empowers researchers with fair compensation, and ultimately advances the pace of scientific progress.
Validation: 1) Our editorial team evaluates each peer review against a rigorous internal rubric as part of the validation process, and only high-quality, scientifically rigorous reviews are rewarded. 2) To date, we’ve completed over 4,000 peer reviews, with a current weekly output exceeding 150 reviews. 3) The majority of authors rated the reviews as high quality (4/5 or 5/5 on a 1–5 scale).
ReviewerZero AI spots potential integrity issues in research manuscripts—like statistical mistakes, citation errors, duplicated content in figures and tables, and potentially manipulated data. See our evolving set of features.
Validation: ReviewerZero technology has been validated through multiple studies, demonstrating effectiveness in identifying figure duplication and citation requirements, improving the accessibility and interpretability of scientific figures, and automating peer review. Key studies include Automated detection of figure reuse in bioscience literature, Modeling citation worthiness by using attention-based bidirectional long short-term memory networks and interpretable models, Computational evaluation of figure interpretability in scientific publications, MAMORX: A multi-modal system for scientific review generation.
SciScore is a scientific-methods integrity checking tool covering several checklists, including MDAR (Materials Design Analysis Reporting) and STAR Methods. We provide authors with a report that points out which facets of these guidelines were found and which were not found in the submitted text, while attempting to assess whether each criterion is applicable to the specific manuscript (see also Menke et al. 2022).
Validation: Babic et al., 2019; Menke et al., 2020; Menke et al., 2022.
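As a toy illustration of what a “found / not found” checklist report involves – SciScore itself uses trained text-mining models rather than keyword matching, so everything below is an assumption made for clarity – the following sketch scans a methods section for a few MDAR-style rigor criteria.

```python
import re

# Toy keyword patterns for a few MDAR-style rigor criteria. This is purely
# illustrative; SciScore's detection is based on trained classifiers.
CRITERIA = {
    "randomization": r"\brandomi[sz](ed|ation)\b",
    "blinding": r"\bblind(ed|ing)\b",
    "power analysis": r"\bpower (analysis|calculation)\b",
    "sex as a biological variable": r"\b(male|female)s?\b",
    "antibody identifiers (RRID)": r"\bRRID:\s*AB_\d+",
}

def checklist_report(methods_text: str) -> dict:
    """Report, for each criterion, whether any matching statement was found."""
    return {name: bool(re.search(pattern, methods_text, flags=re.IGNORECASE))
            for name, pattern in CRITERIA.items()}

example = ("Female mice were randomized to treatment groups; "
           "investigators were blinded to group allocation.")
print(checklist_report(example))
# randomization, blinding, and sex are found; power analysis and RRIDs are not.
```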
The Social Science Prediction Platform (SSPP) offers researchers a systematic tool for collecting expert forecasts about their studies’ outcomes. While not an evaluation service, SSPP serves as a valuable resource for researchers submitting Research Plans to Lifecycle Journal. By gathering expert predictions ex ante, researchers can establish clear benchmarks against which to compare their eventual findings, helping to mitigate publication bias and improve experimental design. This approach aligns with Lifecycle Journal’s commitment to transparency and rigorous research practices across the full research lifecycle. Researchers can include SSPP forecasts as supplementary documentation with their submissions, providing valuable context for subsequent evaluation services, and in their papers.
Validation: See this Science article for more information on forecasting, and see our Example Research page for uses of the SSPP in published papers.
Statcheck can be considered a “spellchecker for statistics”: it searches the text for results of null hypothesis significance tests and checks whether the reported p-value matches its accompanying test statistic and degrees of freedom. If the p-value does not match, statcheck flags it as an inconsistency. If the reported p-value is < .05, but the recalculated p-value is > .05, or vice versa, it is flagged as a decision inconsistency. Statcheck is currently designed to find statistics in APA reporting style. It finds about 60% of reported statistics and reaches an accuracy of 96.2% to 99.9% in classifying detected results (Nuijten et al., 2017; doi.org/10.31234/osf.io/tcxaj).
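The core of this check is easy to reproduce. The sketch below (in Python with SciPy; statcheck itself is an R package, and the extraction of APA-formatted results from the text is omitted, so the function and rounding rule here are simplifying assumptions) recomputes the two-tailed p-value for a reported t-test and flags the two kinds of inconsistency described above.

```python
from scipy import stats

# Simplified sketch of statcheck's consistency check for a single t-test;
# statcheck itself is an R package that also extracts the statistics from
# APA-formatted text, a step omitted here.
def check_t_test(t_value, df, reported_p, decimals=2, alpha=0.05):
    """Recompute the two-tailed p-value for t(df) = t_value and compare it
    with the p-value reported in the manuscript."""
    recomputed_p = 2 * stats.t.sf(abs(t_value), df)
    # Inconsistency: the recomputed p, rounded to the reported precision,
    # does not match the reported p.
    inconsistent = round(recomputed_p, decimals) != round(reported_p, decimals)
    # Decision inconsistency: reported and recomputed p fall on opposite
    # sides of the significance threshold.
    decision_inconsistent = (reported_p < alpha) != (recomputed_p < alpha)
    return recomputed_p, inconsistent, decision_inconsistent

# "t(28) = 2.20, p = .04" is consistent; "t(28) = 1.20, p = .03" is a
# decision inconsistency (the recomputed p is about .24).
print(check_t_test(2.20, 28, 0.04))
print(check_t_test(1.20, 28, 0.03))
```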
The Unjournal (est. 2022) is a grant-funded nonprofit organization aiming to make research more impactful and rigorous. Our team prioritizes research in economics and quantitative social science based on its potential for global impact. Our managers recruit research experts as evaluators, whom we pay to write reports providing high-quality feedback and discussion. We elicit percentile ratings across several criteria, emphasizing credibility, transparency, and methodological rigor, as well as an overall assessment benchmarked to standard journal-tier metrics. Authors can respond (and evaluators can adjust in turn) before we publish the evaluation package along with a manager’s synthesis. See our guidelines for evaluators and our public evaluation output (Validation: ~35 evaluation packages).
Other Partners

Become an Evaluation Service Partner
Do you run an evaluation service for scholarly research, or do you have an innovative idea for how planned or completed research could be evaluated? Contact us to explore partnering in the marketplace of evaluation services.