Artificial Intelligence (AI) Questions and Answers

The ISMPP Artificial Intelligence (AI) Task Force provides answers to questions received from ISMPP members. Below are questions and answers from the AI plenary session and workshops at the ISMPP 20th Annual Meeting, April 28-May 1, 2024, and the public ISMPP U webinar, "The AI Evolution: Medical Communications in the Digital Age," held on June 12, 2024. Email [email protected] to submit your questions related to AI in medical publications/communications.

AI in Medical Publications/Communications Development

Q: What is the best way for small medical communication groups to begin learning how to incorporate AI to write first drafts?
A: Start slowly. Try AI on discrete manuscript or abstract sections, such as the background or introduction (non-proprietary information for "external" AI), or the methods and other sections (with "internal" or private AI systems). Experiment with prompting AI to generate summaries of published literature, and practice on published manuscripts to understand output nuances before moving to full manuscript use.

Q: Is there any validity to the idea that you could rewrite a manuscript that was drafted with AI to the point that it was free of AI influence?
A: If you significantly rewrite content from any source, disclose the original, regardless of whether it was human- or AI-generated.

Q: Have you received any feedback on a document created using AI from the end user/audience (i.e., publishers, patients, regulators)?
A: ISMPP member research has compared the readability of AI- and human-generated documents, and most platforms can generate output comparable to that of humans. However, AI still makes mistakes and therefore necessitates a human in the loop.

Q: How can we use AI day-to-day without negatively affecting data privacy or security? What are the safeguards for protecting patient privacy or proprietary data when using an AI model?
A: Understand the copyright, privacy, and security attributes of the AI system you're using, along with relevant company policies and the privacy considerations of contributing entities.

Q: If your source uses AI, should it be reported? For example, using the ConcertAI database to write an RWE paper.
A: Yes. In the example, the use of ConcertAI as a data source should be included as part of the methods.

Q: We're struggling with copyright challenges in terms of using AI to write summaries of open access and published data. Is the workaround partnerships with the journal? Are there other things we're missing?
A: When summarizing content, whether done entirely by a human or assisted by AI, the same copyright rules apply to the accountable human.

Q: If an editor is interested in using AI to help reword a sentence (for example, rewording text from passive voice to active voice), are there any concerns with data privacy, and should inputting text from medical writers into AI be avoided?
A: Data privacy concerns exist, so use secure AI solutions that don't train on user inputs. Various tools can aid in text rewording without violating privacy.

Q: If there is a plan to use AI to help prepare a manuscript/abstract/poster/PLS, during the author kick-off call, do we need author approval first and foremost?
A: Obtain author approval before using AI for manuscript preparation, similar to medical writing support.
Q: Some medical conferences and journals prohibit the use of AI in abstract/manuscript generation. Do you think they will come around? Will prohibitions change?
A: As AI becomes commonplace, current prohibitions may convert to disclosure requirements. Always check the latest journal or congress guidelines.

Q: Is it safe to enter an unpublished manuscript into an AI tool for creating a PLS?
A: It's safest to process unpublished manuscripts within your company's firewalled systems to prevent data leaks.

Q: Could you please provide a guide on how to develop a prompt for PLS publication?
A: Different prompts suit different tasks, with conversational prompting useful for beginners. Identify target audiences in prompts, and consider refining prompts through LLMs or prompt-writing tools. Supporting information with two example prompts can be found here: The use of a large language model to create plain language summaries of evidence reviews in healthcare: A feasibility study - Ovelman - 2024 - Cochrane Evidence Synthesis and Methods - Wiley Online Library.

Q: If we create PLS for everything using AI, might this not add to the ever-growing flood of scientific data? Is there a need for greater curation too? Or some form of patient lay synthesis of multiple publications?
A: Decide strategically whether generating PLS with AI aligns with audience needs. AI streamlines the creation process, but convenience alone is not a reason to generate a PLS.

Q: How can we measure the efficacy of AI-driven solutions in improving medical communication outcomes?
A: Use measures such as time saved on specific tasks, readability scores, error rates, and accuracy. As more use cases are discovered, their key performance measures will become apparent; they will be project- and situation-specific.

Q: Assuming AI will take over publications and med comms for the most part at some point in the future, how do you see the role of med comms stakeholders (humans) changing?
A: AI expedites many tasks, but human oversight is vital for accountability and quality. Strategic insight remains necessary, even as AI potentially takes over many med comms functions.

Q: Do you envision that pubs teams in pharma will build out a new role for AI experts, to assist the pub leads and heads of pubs with the use of AI?
A: AI expertise might become a dedicated role, much like that of a statistician. Teams should have AI champions, though eventually all professionals should be proficient.

AI Disclosure

Q: ICMJE/WAME guidelines ask for full disclosure (and sometimes publication in the manuscript itself) of all prompts and outputs generated. Personally, I would look mad if I wrote out my full conversations with a chatbot! Does the AI Task Force think these guidelines, whilst encouraging transparency, are also themselves barriers to medical writers experimenting with and using large language models (LLMs)?
A: We don't believe this is a barrier. The AI Task Force interprets the ICMJE requirement to disclose prompts as reporting the prompts used when applying AI to data collection, analysis, or figure generation. Generally, if AI generated new content, it is best practice to keep (and report if required) the prompts used to generate it. This reinforces our call to practice with AI before using it on submitted drafts.
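Keeping prompts in a reportable form is far easier if they are captured at the time of use rather than reconstructed later. Below is a minimal sketch of one way a writer might do this; the file name, tool name, and example prompt are all hypothetical, and any real workflow should follow your organization's record-keeping and privacy policies.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical log file; one JSON record per line (JSONL)
LOG_FILE = Path("ai_disclosure_log.jsonl")

def log_ai_use(tool: str, prompt: str, output: str) -> None:
    """Append one prompt/output record so AI use can be disclosed later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "prompt": prompt,
        "output": output,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical example: record a summarization prompt and its output
log_ai_use(
    tool="ExampleLLM v1",
    prompt="Summarize the methods of this published trial report in plain language.",
    output="(model output captured here)",
)
```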
Q: ICMJE/WAME/ISMPP state that we should be disclosing the "use" of AI, and in some cases specifying the prompts used. Does the panel think that's realistic? Prompts are becoming increasingly complex and can be pages long, but also, where is the line on "use"? I disagree that the ISMPP statement is clear ;)
A: It's realistic to disclose AI use. See tools such as the "ISMPP Author Checklist - Use of AI in Research and Manuscript Preparation." Methodology prompts are reasonable to include for the purpose of providing sufficient detail for research reproducibility.

Q: In terms of disclosure, where wording states "use of AI should be disclosed," where do you draw the line on "use" of AI? Content generation is an obvious yes, but what if just prompting ideas? Summarizing a disease area for background experience?
A: When AI generates content included in the publication, it is important to disclose this. If AI is used for a brainstorm, but no content (edited or not) from the AI is included in the publication, this might not need to be disclosed. For example, using AI simply to educate yourself might not need to be disclosed. However, if AI is used as part of the research methodology, such as generating a list of publications as part of a review, then it would be important to disclose its use per ICMJE guidance.

Q: To what degree can someone utilize AI before it must ethically be disclosed? Currently, I use it to assist in literature searches, but that's about it. Should that be disclosed?
A: In scientific publications, disclose AI use for formal research, such as literature searches and data analysis, in the methods section. Content generated through AI must also be disclosed. Editorial uses, like grammar and formatting checks, don't require disclosure unless specified by journals/congresses. Live AI use, such as during calls, demands disclosure in advance due to data privacy concerns.

Q: Some publishers ask if you used AI and have a disclosure that the use of AI doesn't meet authorship requirements. There seem to be no unified guidelines regarding AI, so what can publishers do to reinforce their position? Will there be ICMJE guidelines in the near future to guide authors on what they can and cannot do using AI without getting in trouble with the journals?
A: There are currently high-level guidelines from various organizations, though interpretations vary. Journals and publishers may develop more unified guidelines as AI usage expands.

Q: Some journals allow for the use of AI in publication development if appropriately disclosed; are reviewers made aware of the use of AI in the pubs they are reviewing? If so, do you think that disclosure biases reviews?
A: Reviewers *should* be made aware of AI use in the publications they are reviewing, as it is a recommended disclosure per journal requirements and ICMJE guidance.

General AI Learning & Queries

Q: Other than ChatGPT, Gemini, etc., are there any other AI tools you may recommend?
A: The LLM Overview, posted on the ISMPP AI web page, provides an overview of various large language models (LLMs) available as of August 2024, in an Excel sheet format.

Q: Will the AI Lexicon be available online for the public or just ISMPP members once it is complete?
A: The AI Lexicon is posted on the ISMPP website and is available to the public.

Q: What is the best resource to learn how to best create AI prompts?
A: Prompt Engineering Guide: https://www.promptingguide.ai/ or Gemini for Google Workspace Prompt Guide: https://inthecloud.withgoogle.com/gemini-for-google-workspace-prompt-guide/dl-cd.html

Q: Would writers be expected to learn prompt engineering in the coming months/years?
A: Writers will initially need to learn prompt engineering. However, emerging tools may gradually handle this task, allowing indirect use.

Q: Does AI support proofreading?
A: Yes, AI supports proofreading.
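As a simple illustration of AI-assisted proofreading, the sketch below sends a draft sentence to a general-purpose LLM. This is a minimal sketch assuming the OpenAI Python SDK and an API key in the environment; the model name and system prompt are assumptions, not recommendations, and, per the privacy guidance above, confidential text should only be sent to tools approved for that purpose.

```python
# Minimal proofreading sketch; assumes `pip install openai` and an
# OPENAI_API_KEY environment variable. The model name is an assumption.
from openai import OpenAI

client = OpenAI()

draft = "The results was consistent with the findings of prior studys."

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": "You are a careful medical copyeditor. Correct grammar, "
                       "spelling, and punctuation only; do not change meaning. "
                       "Return the corrected text, then list each change.",
        },
        {"role": "user", "content": draft},
    ],
)

# The human in the loop reviews the suggested corrections before accepting them
print(response.choices[0].message.content)
```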
Q: Writers learn to write and craft a good story by practicing and receiving feedback, and good reviewing comes from being a good writer. How do we reconcile this if there is less writing to start with?
A: Writers benefit from using AI as an enhancement tool, gaining feedback and honing their skills; those relying solely on AI may miss out on skill development. Writing and reviewing remain complementary yet distinct skills.

Q: How do we encourage professionals to learn about AI and how it can benefit their outputs?
A: Experiment with AI in a safe environment. Have AI open at every opportunity and have fun with it to see how it might or might not benefit you. ISMPP has considerable educational opportunities about AI and has more planned for its members.

Q: Everyone wants to explore AI, but trepidation still persists. What can we do to encourage more test-and-learn scenarios?
A: Practice and experiment with AI continuously, and remember to "keep a human in the loop" to understand AI's limitations.

Q: Do you have any suggestions on how to increase AI adoption (change of mindset) within an organization?
A: Invest in change management to gain buy-in by showcasing benefits. Share use cases, ensure compliance, prioritize security, and reward innovation. Incorporate AI objectives into personal goals and allocate time for learning.

Q: How can we mitigate the risk of biases from the underlying data used to train the AI models?
A: Test the AI output and remember that final accountability for content lies with the human authors. Also, be aware of how specific you are with your prompt; more detail may help mitigate bias, but you still need to verify the output.

Q: Is it really possible to get to zero bias when bias is inherent in the training of LLMs?
A: Absolute zero bias isn't achievable because AI is trained on human data, which inherently contains biases. However, bias can be mitigated through informed training sources and transparency, and tools within Responsible AI can help address and reduce it.

Q: I work for a large pharma company with its own AI tools and high restrictions on the use of public tools; however, we've seen an increase in agency partners using tools like Zoom's AI tool, Read.ai, etc., and sometimes they turn them on before we (the client) are made aware. Any best practices for addressing incongruency between agency practices and client guardrails?
A: Using AI during an online call is analogous to recording the call; therefore, it is best practice to gain acknowledgement before using AI during a call. If you're not comfortable with AI being used during a call, feel free to ask for it to be turned off.

Q: The opposite of the question above: I work for an agency and would never use AI without talking to clients first, but do you have any advice on how to approach those conversations? It may require escalation to procurement teams to grant that permission, and they may demand reduced prices - another massive barrier.
A: On an individual basis, discuss the concerns and benefits of using AI: confidentiality, privacy, and speed or efficiency gains. Discuss the value of using AI and how to mitigate the risks of its use. AI is an evolving landscape, and we are still evaluating how its use translates into time and cost savings.

Q: What could be the implications if every pharma company arrives with a different AI tool to produce, for instance, PLS?
A: It's not just the tools but the alignment on outcomes that matters, especially regarding PLS readership metrics.
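Readability scores, mentioned earlier as one efficacy measure for AI-generated content such as PLS, can be computed with open-source tools regardless of which AI tool produced the text. Below is a minimal sketch using the Python textstat package (one option among several); the sample sentence is invented for illustration.

```python
# Requires: pip install textstat
import textstat

# Invented sample PLS sentence for illustration
draft = (
    "This plain language summary describes a study that compared two "
    "treatments for people living with a long-term heart condition."
)

# Flesch Reading Ease: higher scores indicate easier text
# (roughly 60-70 corresponds to plain English)
print("Flesch Reading Ease:", textstat.flesch_reading_ease(draft))

# Flesch-Kincaid Grade: approximate US school grade level of the text
print("Flesch-Kincaid Grade:", textstat.flesch_kincaid_grade(draft))
```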
Q: AI hasn't really yet had its regulatory moment. How do you see that going?
A: Regulation typically follows innovation. The EU AI Act provides a framework for potential future regulations.

Q: How will you distinguish between "artificial" and "augmented" intelligence? Does AI always mean artificial intelligence? Will AI always be taken to refer to artificial? Will you have to speak/spell out "augmented" every time?
A: Artificial Intelligence mimics human thought through machines, while Augmented Intelligence involves humans collaborating with AI (1+1=3) for more effective outcomes. In medical publications, "Augmented Intelligence" should be spelled out, whereas "AI" signifies Artificial Intelligence.