1) Text similarity (Crossref Similarity Check / iThenticate)
We compare the manuscript’s text to a large corpus to detect substantial overlap with published or web content.
Overlap can be legitimate (methods, boilerplate) or problematic (uncited reuse, salami slicing).
What editors do: We review the report manually, focusing on context rather than raw percentages.
Common phrases and references are discounted; uncredited overlap prompts a request for revision or clarification.
Example: A manuscript shows 23% overall similarity, with most matches in the methods section. Editors note acceptable reuse
of standard procedures; two paragraphs match an earlier preprint by the same authors, so the authors add a citation and rephrase.
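Similarity Check itself is proprietary, but the underlying idea is easy to sketch. The toy function below scores the fraction of word n-grams a passage shares with a prior source; the function names and the 5-word window are assumptions made only for this illustration, and a score like this would only ever prompt the manual, in-context review described above.

```python
import re

def ngrams(text, n=5):
    """Lowercase the text, keep only words, and return its set of word n-grams."""
    words = re.findall(r"[a-z]+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(manuscript_passage, source_passage, n=5):
    """Fraction of the passage's n-grams that also occur in the source (0.0 to 1.0)."""
    m, s = ngrams(manuscript_passage, n), ngrams(source_passage, n)
    return len(m & s) / len(m) if m else 0.0

draft = "Samples were centrifuged at 4000 g for ten minutes before analysis."
prior = "All samples were centrifuged at 4000 g for ten minutes before analysis began."
print(f"overlap: {overlap_score(draft, prior):.0%}")  # high overlap: flag for human review
```

A methods sentence like this one can legitimately score high, which is exactly why the percentage alone decides nothing.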
2) Paper-mill & manipulation signals (STM Integrity Hub)
Pattern-based checks highlight risky combinations such as fabricated email domains, recycled images across submissions,
off-topic citations, or implausible author affiliations. These are signals, not verdicts.
What editors do: We verify identity details, request raw data or ethics documentation where needed,
and may consult an image-forensics report. Strong signals pause the workflow until concerns are resolved.
Example: Multiple manuscripts from unrelated authors list the same non-institutional domain and share figure layouts.
Editors request underlying data; one paper is withdrawn; others proceed after satisfactory evidence.
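The Integrity Hub's actual rules are not public, so the sketch below is purely hypothetical: one simplified signal (corresponding-author email domains repeated across a batch, or free-mail addresses) expressed as code. The field names, the free-mail list, and the batch data are all invented for illustration.

```python
from collections import Counter

FREE_MAIL = {"gmail.com", "outlook.com", "yahoo.com"}  # illustrative list only

def email_domain(address):
    return address.rsplit("@", 1)[-1].lower()

def screen_batch(submissions):
    """Attach advisory signals to each submission; a signal triggers follow-up, never a verdict."""
    counts = Counter(email_domain(s["corresponding_email"]) for s in submissions)
    report = {}
    for s in submissions:
        dom = email_domain(s["corresponding_email"])
        signals = []
        if dom in FREE_MAIL:
            signals.append("non-institutional corresponding email")
        if counts[dom] > 1 and dom not in FREE_MAIL:
            signals.append("same unusual domain across unrelated submissions")
        report[s["id"]] = signals
    return report

batch = [
    {"id": "MS-101", "corresponding_email": "a.lee@unseen-institute.org"},
    {"id": "MS-117", "corresponding_email": "b.chan@unseen-institute.org"},
    {"id": "MS-120", "corresponding_email": "c.diaz@gmail.com"},
]
print(screen_batch(batch))
```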
3) Statistics sanity passes (e.g., statcheck for NHST)
Where applicable (e.g., reported t or F statistics with degrees of freedom and p values),
we automatically recalculate basic tests to catch common transcription inconsistencies. This is advisory, not a substitute for specialist review.
What editors do: If inconsistencies appear, we ask for a checked analysis file or clarification in the response letter.
Complex models are directed to subject-expert reviewers or a statistical editor.
Example: Reported t(58) = 2.10, p = 0.12 recalculates to p ≈ 0.04.
The authors correct the transcription error and upload their analysis notebook; reviewers confirm the conclusions remain supported.
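statcheck itself is an R package, but the arithmetic it performs for a two-tailed t test is easy to show. The Python sketch below redoes the example above; the 0.005 tolerance is an arbitrary choice for this illustration, not statcheck's own rule.

```python
from scipy import stats

def two_tailed_p(t_value, df):
    """p value implied by a reported t statistic and its degrees of freedom."""
    return 2 * stats.t.sf(abs(t_value), df)

reported_p = 0.12
recomputed = two_tailed_p(2.10, 58)
print(f"recomputed p = {recomputed:.3f}")        # about 0.040
if abs(recomputed - reported_p) > 0.005:
    print("flag: reported p is inconsistent with t and df -- ask the authors to check")
```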
4) Reference validation (Crossref DOIs)
We resolve reference metadata and DOIs automatically and flag missing or malformed entries.
Clean references improve citation tracking and help readers reach the right source.
What editors do: We return a highlighted list for author correction, fix obvious typos, and ensure
key datasets/software are cited with persistent identifiers.
Example: 8 of 42 references lack DOIs; two have mismatched years. After automated suggestions,
authors add DOIs and correct metadata before the manuscript moves forward.
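Resolution itself goes through the public Crossref REST API (api.crossref.org). Below is a minimal sketch, assuming the `requests` library and using Crossref's sample DOI purely as a placeholder; production use should identify the caller (Crossref asks for a contact address in the User-Agent) and handle rate limits.

```python
import requests

def crossref_metadata(doi):
    """Look up a DOI on the Crossref REST API; returns None if it does not resolve."""
    r = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if r.status_code != 200:
        return None
    msg = r.json()["message"]
    year = msg.get("issued", {}).get("date-parts", [[None]])[0][0]
    return {"title": (msg.get("title") or [""])[0], "year": year}

# Compare what the API returns with what the reference list claims;
# no record, or a mismatched year, goes back to the authors for correction.
print(crossref_metadata("10.5555/12345678"))  # Crossref's sample DOI, used as a placeholder
```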
5) Image authenticity (C2PA / Content Credentials)
We verify figures and graphical abstracts with Content Credentials (C2PA).
Manifests record when, how, and with which tools media were created or edited, making alterations transparent.
What editors do: If credentials are missing or inconsistent with claims, we request the original files
or a signed explanation. Clear manipulations result in rejection; honest edits with labels are acceptable.
Example: A gel image appears to show duplicated bands. The authors provide the raw images and a corrected figure labeled “contrast adjusted.”
The record includes a brief note, and the corrected figure proceeds.
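The same check can be run locally with c2patool, the C2PA project's open-source command-line tool, which prints a file's manifest store as JSON when one is present. A minimal sketch, assuming c2patool is on the PATH and treating a non-zero exit or empty output as “no credentials”; the figure file name is hypothetical and the exact output shape varies by tool version.

```python
import json
import subprocess

def read_content_credentials(path):
    """Return a file's C2PA manifest as a dict, or None if none can be read."""
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0 or not result.stdout.strip():
        return None
    return json.loads(result.stdout)

manifest = read_content_credentials("figure2_gel.tif")  # hypothetical figure file
if manifest is None:
    print("no Content Credentials: request original files or a signed explanation")
else:
    # Editors compare the recorded creation/edit actions with the authors' stated workflow.
    print(json.dumps(manifest, indent=2))
```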