Using AI In Your Lab Without Breaking Research Integrity
Authors: ResearchDock Team
If you walk through any lab or research office today, you will find the same quiet reality: almost everyone is using AI tools somewhere in their workflow, whether they admit it or not.
PhD students are pasting draft paragraphs into language models to “polish” their writing, asking for alternative wordings or clearer structure. Supervisors are testing AI for quick literature overviews, email replies, and even code review. At the same time, national regulators, universities, and journals are issuing increasingly strong warnings about academic integrity and generative AI, and updating their policies on disclosure and authorship.
As researchers, we do not think AI in research is going away. The real question is:
How do we integrate these tools into our labs in a way that is transparent, ethical, and actually improves the quality of our work rather than quietly undermining it?
In this post, we sketch a pragmatic “AI playbook” at the lab or research-group level, and show how a central workspace (like ResearchDock) can help make the rules real rather than just another policy PDF that no one reads.
1. Everyone is already using AI. The question is how.
There is now a rapidly growing literature on AI-assisted academic writing and research. People have documented real benefits:
AI can assist with idea generation and structuring manuscripts.
It can help with summarising existing literature and improving clarity, especially for non-native English speakers.
It can speed up some of the “glue work” of research: emails, cover letters, outlines, checklists.
At the same time, the drawbacks are serious:
Large language models can hallucinate facts and generate fabricated references that look plausible but do not exist.
When summarising scientific papers, models can oversimplify or distort results, especially in sensitive domains where nuance and uncertainty are critical.
AI-generated code may be syntactically correct yet contain subtle bugs or security vulnerabilities.
Regulators and universities have noticed. Many are now publishing guidance on generative AI, integrity risks, and how assessment and supervision need to adapt.
So the problem is not “AI: good or bad?”. The problem is unstructured AI use: students and staff experimenting in an ad hoc way, without group-level norms on what is allowed, what must be disclosed, and what must always be checked.
2. Four places AI can help in a research project (and where it can hurt)
Below are four stages of a typical project where AI can add value, paired with the main risks we try to guard against.
2.1 Literature triage and exploration
Helpful uses
Brainstorming search terms and synonyms before diving into databases.
Asking for high-level overviews of a topic you already know something about.
Generating question lists or conceptual maps to guide a manual search.
Risks
Letting AI invent references or misrepresent the conclusions of papers.
Using AI summaries as a substitute for reading the original work, especially when methods and limitations are crucial.
Practically, any AI-assisted summary should be treated as a rough guide, not as the basis for precise claims.
2.2 Project planning and admin
Helpful uses
Drafting supervision meeting agendas, progress report templates, or email updates.
Turning a rough list of tasks into a more structured project plan.
Generating alternative formulations of aims and hypotheses for discussion.
Risks
Allowing AI to “decide” realistic milestones without human knowledge of local constraints, ethics lead times, or experimental bottlenecks.
Treating AI-generated plans as authoritative rather than as a starting point for negotiation within the group.
Here, we treat AI as a well-prompted secretary: useful for drafting, never for deciding.
2.3 Writing and revising manuscripts
Helpful uses
Language polishing: improving clarity, reducing repetition, and smoothing transitions.
Suggesting alternative structures for an introduction or discussion.
Turning bullet point notes into a first rough paragraph that you heavily rework.
Many editors now explicitly recognise these potential benefits for readability and efficiency, while insisting that authors remain responsible for the scientific content.
Risks
Letting AI generate large blocks of text that the author barely edits, then failing to disclose this.
Allowing AI to rewrite technical sections in ways that inadvertently change the meaning, especially for statistics, limitations, and claims.
A useful rule of thumb:
If I would not be comfortable defending a sentence under examination without access to the model, it does not belong in the manuscript.
2.4 Code and analysis
Helpful uses
Asking for boilerplate code, examples of common analysis pipelines, or testing strategies.
Using AI to explain unfamiliar error messages or library behaviour.
Brainstorming unit tests or sanity checks for existing code.
Risks
Copying and pasting AI-generated code directly into production analysis without reviewing it.
Ignoring secure coding practices or subtle numerical issues.
For research code that affects published results, there should always be a second pair of human eyes, ideally from someone who was not involved in the prompt engineering.
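To make this concrete, here is a minimal sketch of what such a check can look like. The effect-size function and test names are hypothetical, not code from any particular project; the point is the pattern of testing AI-drafted analysis code against a value you can compute by hand, plus a simple invariant, before it touches published results.

```python
# Toy sanity checks for a hypothetical AI-drafted effect-size function.
# The function (cohens_d) is made up for illustration; the pattern is what
# matters: hand-checkable values plus a simple invariant.

import math

def cohens_d(group_a, group_b):
    """AI-drafted helper (human-reviewed): Cohen's d with a pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a = sum(group_a) / n_a
    mean_b = sum(group_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

def test_hand_checked_value():
    # Means differ by 1.0 and both groups have SD 1.0, so d should be exactly 1.0.
    assert abs(cohens_d([1.0, 2.0, 3.0], [0.0, 1.0, 2.0]) - 1.0) < 1e-9

def test_antisymmetry():
    # Swapping the groups should only flip the sign of d.
    a, b = [1.2, 3.4, 2.2, 4.1], [0.9, 1.5, 1.1, 2.0]
    assert abs(cohens_d(a, b) + cohens_d(b, a)) < 1e-9

if __name__ == "__main__":
    test_hand_checked_value()
    test_antisymmetry()
    print("Sanity checks passed.")
```

Even two tests like these force the reviewer to think about what the code should produce, which is exactly the step that gets skipped when AI output is pasted in directly.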
3. A simple lab-level AI policy you can adapt
Every lab and discipline will have its own norms, but it helps to have something written down that everyone can point to. Here is a minimal structure that can be adapted.
3.1 Allowed uses (with disclosure and checking)
AI tools may be used for:
Brainstorming ideas, outlines, and alternative phrasings.
Language editing for clarity, grammar, and style.
Drafting administrative text such as emails or cover letters.
Generating code snippets as starting points, followed by human review and testing.
In all these cases, the human researcher remains responsible for ensuring that the final text, code, or analysis is accurate and appropriate.
3.2 Prohibited uses
AI tools must not be used for:
Fabricating or altering data, results, or images.
Generating peer review reports or reference letters that are then passed off as human-written.
Producing entire sections of theses, dissertations, or articles that are largely unedited AI output, particularly where authorship or originality is being assessed.
Circumventing peer review or editorial processes (for example, using AI to generate “friendly” reviews).
Many integrity guidelines explicitly identify these behaviours as misconduct or serious integrity breaches.
3.3 Always required
Regardless of discipline, three universal requirements are useful:
- Human verification of factual content
Any factual claim, reference, or result proposed by an AI must be verified against primary sources, datasets, or trusted documentation.
- AI-usage transparency for major outputs
For theses, publications, and formal reports, students and staff should keep a short AI-usage log that states:
Which tools were used.
For which parts of the work.
How the outputs were checked.
This log can be summarised in a methods or acknowledgements section, in line with emerging journal policies.
- Respect for institutional and disciplinary policies
Local university policies, disciplinary codes, and funding-body rules take precedence. In some contexts there are stricter bans (for example, on using AI for grant reviews or confidential data), and these must be followed.
4. Making the policy real with a central workspace
The hardest part is not drafting a policy. The hardest part is operationalising it in daily work so that it supports, rather than polices, students and supervisors.
This is where a central workspace such as ResearchDock is helpful. The specific tool is less important than the pattern: the lab needs a shared space where tasks, drafts, notes, and references live together.
Here are a few concrete patterns.
4.1 A dedicated “AI usage” note for each project
For every active project, keep a short “AI usage log” note in the project workspace. Whenever someone uses AI in a substantive way, they quickly add a line such as:
2025-10-02 – Used an LLM to suggest subheadings for Section 2; rewrote all text myself.
2025-11-14 – Used coding assistant for initial plotting script; refactored and added tests X, Y, Z.
In ResearchDock, this simply lives as a shared note within the project. At submission time, you can glance through it and decide what needs to be disclosed formally. Over time, it also becomes an internal record of “how we actually used AI”.
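Because the entries follow a simple date-plus-description format, the log is also easy to review mechanically at submission time. As a purely illustrative sketch (not a ResearchDock feature), here is what a small pre-submission check might look like if the note were exported as plain text; the verification keywords are assumptions you would adapt to your own lab's wording.

```python
# Toy sketch: scan an exported AI-usage log (one "YYYY-MM-DD – description"
# entry per line) and flag entries that never mention how the output was
# checked. The keyword list is an assumption, not a standard.

import re

ENTRY_PATTERN = re.compile(r"^(\d{4}-\d{2}-\d{2})\s*[–-]\s*(.+)$")
CHECK_HINTS = ("rewrote", "refactored", "verified", "checked", "tested", "reviewed")

def flag_unchecked_entries(log_text: str) -> list[str]:
    """Return log entries that do not mention a verification step."""
    flagged = []
    for line in log_text.splitlines():
        match = ENTRY_PATTERN.match(line.strip())
        if not match:
            continue
        date, description = match.groups()
        if not any(hint in description.lower() for hint in CHECK_HINTS):
            flagged.append(f"{date}: {description}")
    return flagged

if __name__ == "__main__":
    sample_log = """
    2025-10-02 – Used an LLM to suggest subheadings for Section 2; rewrote all text myself.
    2025-11-14 – Used coding assistant for initial plotting script; refactored and added tests X, Y, Z.
    2025-11-20 – Asked an LLM to summarise Smith et al. (2024).
    """
    for entry in flag_unchecked_entries(sample_log):
        print("Needs a note on verification:", entry)
```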
4.2 Tasks that explicitly label AI-dependent work
Whenever AI is used to generate code or complex text, create a follow-up task such as:
“Manually verify AI-generated power analysis script.”
“Check AI summary of Smith et al. (2024) against the original paper.”
In a platform like ResearchDock, those tasks can be tagged (for example ai-assisted and needs-check) and assigned to a specific lab member. This makes verification visible, rather than an informal hope that “someone will look at it”.
As a supervisor, you can filter tasks by these tags and make AI verification part of normal progress monitoring.
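Purely to illustrate the pattern (ResearchDock does this filtering in its interface, so the Task structure and tag names below are hypothetical), here is the filter logic in miniature:

```python
# Miniature illustration of tag-based filtering for AI verification work.
# The Task structure and tags are hypothetical; the point is how little
# structure is needed to make verification visible rather than informal.

from dataclasses import dataclass, field

@dataclass
class Task:
    title: str
    assignee: str
    tags: set[str] = field(default_factory=set)
    done: bool = False

def open_ai_checks(tasks: list[Task]) -> list[Task]:
    """All unfinished tasks that still need a human check of AI-assisted work."""
    return [t for t in tasks if {"ai-assisted", "needs-check"} <= t.tags and not t.done]

if __name__ == "__main__":
    tasks = [
        Task("Manually verify AI-generated power analysis script", "Dana",
             {"ai-assisted", "needs-check"}),
        Task("Check AI summary of Smith et al. (2024) against the original paper", "Lee",
             {"ai-assisted", "needs-check"}),
        Task("Book seminar room for lab meeting", "Sam", {"admin"}),
    ]
    for task in open_ai_checks(tasks):
        print(f"{task.assignee}: {task.title}")
```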
4.3 Draft management with comments and version history
Draft management is one of the areas where AI and integrity issues bite hardest. In a shared workspace:
Students upload or write drafts directly in the project.
Supervisors comment inline, including questions like:
“Did you use AI for this paragraph? If so, please log it and rephrase in your own words.”
“This claim seems stronger than the underlying data. Check against methods.”
Version history then provides a natural record of how the text evolved. If a question ever arises about originality or over-reliance on AI, you have earlier drafts and the comment trail.
4.4 Reference manager as the single source of truth
One very practical way to reduce AI hallucination problems is to insist that all references come from the project’s reference manager, not directly from AI output.
With ResearchDock’s in-built project-specific reference manager:
Everyone in the group can see the shared library of papers.
People can comment on papers and reply to others’ notes.
You can flag papers to read or follow up, and link them directly to tasks or drafts.
When AI suggests a citation, the rule is:
It does not “exist” until it has been checked and added to the reference manager.
This simple discipline filters out most hallucinated or mismatched references and keeps the citation record tied to actual PDFs and notes, not just model output.
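As a small sketch of the rule, assuming a hypothetical library export with titles and DOIs, an AI-suggested citation could be screened like this before anyone cites it:

```python
# Sketch of the "it does not exist until it is in the reference manager" rule.
# The library format (title + DOI dictionaries) is a hypothetical export; an
# AI-suggested citation is accepted only if it matches a verified entry.

def normalise(title: str) -> str:
    """Lowercase and strip punctuation so minor formatting differences do not matter."""
    return "".join(ch for ch in title.lower() if ch.isalnum() or ch.isspace()).strip()

def is_in_library(suggestion: dict, library: list[dict]) -> bool:
    """Accept an AI-suggested reference only if its DOI or title is already in the library."""
    for entry in library:
        if suggestion.get("doi") and suggestion["doi"].lower() == entry.get("doi", "").lower():
            return True
        if normalise(suggestion["title"]) == normalise(entry["title"]):
            return True
    return False

if __name__ == "__main__":
    library = [{"title": "A Real Paper We Have Read", "doi": "10.1000/example.123"}]
    suggestions = [
        {"title": "A Real Paper We Have Read", "doi": "10.1000/example.123"},
        {"title": "A Plausible-Sounding Paper That Does Not Exist", "doi": ""},
    ]
    for s in suggestions:
        status = "OK, in library" if is_in_library(s, library) else "REJECT: verify and add first"
        print(f"{s['title']}: {status}")
```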
5. AI, structure, and wellbeing
There is a growing literature showing that PhD wellbeing is closely tied to supervision quality, clarity of expectations, and structured progress, rather than just individual resilience.
AI can either help or hurt here.
Used thoughtfully, AI can:
Make it easier for students to draft emails, reports, and early manuscript versions.
Reduce language barriers.
Help supervisors provide more frequent but lighter-weight feedback (for example, commenting on AI-generated outlines).
Used carelessly, it can:
Increase anxiety about “cheating” or inappropriate use.
Create opaque standards (“how much AI is too much?”).
Mask deeper issues in supervision and project design.
Clear group-level norms and a shared workspace reduce this anxiety. Students know what is allowed, supervisors know what to expect, and everyone has the same place to record and review AI involvement.
6. Closing thoughts
AI tools are now part of the everyday fabric of research life. Ignoring them or relying entirely on unreliable detection tools is not a sustainable strategy.
Instead, we can:
Be explicit about what we consider acceptable AI support.
Require human verification and transparent logging of AI use.
Use central platforms like ResearchDock to weave these practices into our daily workflows, rather than bolting them on as an afterthought.
If you are a supervisor or group leader, a good starting point is simply this:
Write down your lab’s AI rules, set up an “AI usage” note template in your project workspace, and ask your group to try it for one semester.
You can refine from there, but at least you will be dealing with AI in the open, as part of how the lab works, rather than as a hidden source of risk and uncertainty.