Lab AI Policy
Todd M. Gureckis · 5 minutes to read

After a community forum meeting at NYU on the use of AI in science, I drafted this lab AI policy (with the help of AI). I am happy to share it freely with others, and welcome feedback and comments.
Lab Policy on the Use of Generative AI Tools
Generative AI tools — including large language models (LLMs) for writing, coding assistants, and multimodal systems — are powerful additions to the scientific toolkit. Like any powerful tool, they can speed up good science and help catch errors; used carelessly, they can introduce subtle bugs, fabricate evidence, and undermine scientific integrity. This policy sets out clear expectations for how every member of this lab is to use these tools responsibly, transparently, and in a way that upholds our core commitment to rigorous, reproducible, open science.
Foundational Principle: Human Authors Bear Full Responsibility
No AI system — regardless of how much it contributed to a paper, piece of code, or analysis — will be listed as an author on any work from this lab. Every lab member assumes personal responsibility for all content: text, figures, code, citations, and methodology. Using an AI tool does not transfer, dilute, or share that responsibility.
1. Alignment with Field-Wide Standards
This policy reflects an emerging consensus across the major AI/ML conference venues, all of which have reached the same conclusion: AI tools may assist human researchers, but humans remain solely accountable.
| Conference | AI Authorship | AI Assistance | Key Requirement |
|---|---|---|---|
| NeurIPS 2025 | Prohibited | Yes, as a tool | Authors fully responsible; must verify all citations; misuse may result in retraction |
| ICML 2025/2026 | Prohibited | Yes, with disclosure | Authors take full responsibility; LLMs not eligible for authorship |
| ICLR 2026 | Prohibited | Yes, must disclose | Must declare LLM use in paper; authors personally verify all AI-generated contributions |
| AAAI 2025 | Prohibited | Limited (editing only) | AI-generated text prohibited in body; AI not a citable source; full responsibility on authors |
| CVPR 2025 | Prohibited | Yes, permissive | No AI in peer reviews; authors responsible for all content |
| ACL / EMNLP | Prohibited | Yes, with disclosure | Standard disclosure; authors bear full responsibility |
| CogSci 2025/2026 | Prohibited | Yes, must disclose | Describe AI use in Methods or Acknowledgements; reviewers prohibited from uploading manuscripts to AI tools |
Our lab policy aligns with and, in some respects, exceeds these requirements.
2. What Responsible Use Looks Like
Responsible use of AI tools can genuinely accelerate science. We actively encourage lab members to use these tools — but always with verification and transparency.
Encouraged Uses
- Brainstorming research directions, counterarguments, or experimental designs
- Generating boilerplate code for standard tasks (data loading, plotting, etc.)
- Using coding assistants (e.g., GitHub Copilot, Claude Code) for programming tasks
- Improving grammar, clarity, and readability
- Documenting code bases, standardized workflows, and lab instructions
Uses Requiring Extra Caution
- Generating citations or bibliographies: use a reference manager (Paperpile, Zotero, Endnote) as your source of truth; see mandatory verification requirements in Section 3
- Writing methods sections or statistical analyses: verify every claim against actual results
- Generating code for core analyses: test rigorously before trusting output
- Interpreting results: AI models often have faulty reasoning in complex domains
3. The Verification Principle: Prefer Code Over Trust
This lab's core epistemological stance is that claims should be tested, not assumed. This applies directly to AI-assisted work: wherever possible, write code to verify what an AI tool produces rather than trusting its output on faith.
Verification Over Trust
A script that checks whether every citation in a paper exists and resolves correctly is worth more than asking an LLM "do these references look right?" The LLM will confidently confirm hallucinated citations; the script will not.
A unit test that verifies a model implementation produces known correct outputs on toy data is worth more than asking an LLM to review the code. Code can be wrong in exactly the ways LLMs fail to catch.
Specific Verification Requirements
The following checks are mandatory before any manuscript submission.
Citations. Verify that every DOI resolves, every URL is live, and a human has read at least the abstract of every paper cited to confirm it is being accurately characterized.
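A minimal sketch of an automated DOI check along these lines, using only the Python standard library. The regex, the `references.bib` file name, and the use of a `HEAD` request against `doi.org` are all illustrative assumptions, not a prescribed lab tool:

```python
import re
import urllib.request

# Loose DOI pattern; trailing punctuation is stripped after matching.
DOI_PATTERN = re.compile(r'10\.\d{4,9}/[-._;()/:A-Za-z0-9]+')

def extract_dois(text):
    """Pull DOI-like strings out of a bibliography or manuscript."""
    found = DOI_PATTERN.findall(text)
    return sorted({d.rstrip('.;,)') for d in found})

def doi_resolves(doi, timeout=10):
    """Return True if https://doi.org/<doi> answers with a non-error status."""
    req = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False

# Illustrative usage on an inline string; in practice, read your .bib file.
sample = "See Smith (2020), doi:10.1037/xge0000001, for details."
print(extract_dois(sample))
```

This automates only the resolution step; the requirement that a human read the abstract of every cited paper cannot be scripted away.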
Statistical code. Any analysis generated or substantially assisted by an AI tool must be independently verified on simulated data with known ground-truth properties before application to real data. The verification script must be preserved and submitted with the code repository.
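A parameter-recovery check of this kind can be very short. The sketch below assumes a simple linear analysis; the true slope, noise level, sample size, and tolerance are all illustrative values, not lab standards:

```python
import random

def fit_slope(xs, ys):
    """Ordinary least-squares slope for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

def simulate(n, true_slope, noise_sd, seed=0):
    """Generate data with a known ground-truth slope."""
    rng = random.Random(seed)
    xs = [rng.uniform(-1, 1) for _ in range(n)]
    ys = [true_slope * x + rng.gauss(0, noise_sd) for x in xs]
    return xs, ys

def check_recovery(true_slope=2.0, n=5000, noise_sd=0.1, tol=0.05):
    """Run the analysis on simulated data and confirm the truth is recovered."""
    xs, ys = simulate(n, true_slope, noise_sd)
    est = fit_slope(xs, ys)
    assert abs(est - true_slope) < tol, f"recovered {est:.3f}, expected {true_slope}"
    return est
```

The point of the check is that the analysis pipeline, whoever or whatever wrote it, is run end-to-end on data where the right answer is known before it ever touches real data. The script itself is the artifact to preserve in the repository.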
Model implementations. Cognitive model code must include unit tests covering known closed-form cases, boundary conditions, and, when possible, an end-to-end test reproducing a result from a prior published paper. Tests must pass before any results obtained with the model are published.
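As one illustration of a closed-form unit test (a softmax choice rule is used here purely as an example, not as any specific lab model): equal option values must yield equal choice probabilities, probabilities must sum to one, and a very low temperature must approach a deterministic choice.

```python
import math

def softmax_choice(values, temperature=1.0):
    """Probability of choosing each option under a softmax rule."""
    m = max(values)  # subtract the max for numerical stability
    exps = [math.exp((v - m) / temperature) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

def test_softmax_closed_form():
    # closed-form case: equal values -> uniform probabilities
    probs = softmax_choice([1.0, 1.0, 1.0])
    assert all(abs(p - 1 / 3) < 1e-12 for p in probs)
    # invariant: probabilities always sum to 1
    assert abs(sum(softmax_choice([0.3, -2.0, 5.1])) - 1.0) < 1e-12
    # boundary condition: low temperature approaches deterministic choice
    probs = softmax_choice([1.0, 0.0], temperature=0.01)
    assert probs[0] > 0.999
```

Tests like these are cheap to write and catch exactly the class of sign, indexing, and normalization bugs that an LLM reviewing its own code tends to miss.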
AI-generated prose. Every factual claim in AI-assisted text must be traceable to a primary source. "The AI wrote it" is not a source.
Plan ahead!
These checks add a verification step that wasn't previously required. Build this time into your submission timeline.
4. Open Science Requirements
Responsible AI use raises the bar for open science, because the verification trail is now essential for others to evaluate AI-assisted work.
Code
- All analysis code must be published, not merely available upon request.
- Repositories must include the verification and testing scripts from Section 3, so reviewers and readers can see what checks were performed.
- Where AI tools generated substantial portions of code, note this in comments or the README.
- Code must be documented well enough for another lab member to run it on the original data and reproduce reported results.
- Except in rare cases, generative model output should not itself be a primary method of analysis (e.g., using an LLM to classify open-ended responses instead of writing a verifiable classifier).
Data
- Raw data (subject to IRB and participant privacy constraints) must be deposited in a public repository (OSF, GitHub, Zenodo, or equivalent) at the time of submission.
- Preprocessing and exclusion scripts must be included. Avoid unpublished intermediate processing steps.
Disclosure of AI Use
- Any paper that used AI assistance in a way required to be disclosed by the target venue must include that disclosure, drafted by the human author — not the AI.
- For internal lab reports, preprints, and grant documents: include a brief AI use statement noting which tools were used and for what purposes.
- If an AI tool was used to help generate experimental stimuli, analysis pipelines, or model code, describe this in the methods section, not only in an acknowledgment.
Pre-registration
- Confirmatory analyses must be pre-registered. AI tools may be used to help draft a pre-registration; the registered hypotheses and analysis plan are then binding in the same way as any pre-registration.
5. Individual Responsibility of Lab Members
Every lab member — PhD students, postdocs, research staff, undergraduate research assistants, and visiting researchers — is individually responsible for the work they produce. If an AI tool wrote a section, you are responsible for verifying it. If you wrote the section, you are responsible for every claim in it.
When Things Go Wrong
If you discover that AI-assisted work in a submitted or published paper contains an error — a hallucinated citation, a bug in AI-generated code, a fabricated statistic — report it immediately. This lab will always prioritize correcting the scientific record. Issuing prompt corrections and retractions earns more respect in the field than hiding errors. Prompt correction is not a failure; undisclosed error is.
6. Policy Updates
The norms around AI in research are evolving rapidly. This policy will be reviewed periodically, especially as major conferences or journals significantly update their own policies. The current version is maintained on the lab documentation site.