Training Design – By Mary Barlow
Content creation tools have astounding abilities — and bring troubling concerns
Artificial intelligence (AI) writing tools have exploded onto the scene, and life sciences editorial teams are working out how to leverage them. There are now dozens of programs, some devoted to medical writing, that create content when given a prompt.
You’ve heard it before. At first glance, these tools demonstrate an astounding ability to complete sentences and write reams of literature on complex topics. However, seasoned writers can easily spot troubling concerns, the biggest of which is the generation of inaccurate information.
That fear was validated by research published in Skeletal Radiology on using a common large language model, ChatGPT, to develop journal articles. According to Hannah Murphy, writing about this study in an article for Health Imaging, “Out of the five ChatGPT articles included in their analysis, four were ‘significantly inaccurate with fictitious references.’ The remaining paper was well written and included good information, the authors noted, but it also shared fictitious references.”
Though these issues with accuracy are a clear hurdle, we’re all waiting, some with anticipation and some with fear, to see whether a single request will ever yield a fully and perfectly crafted essay, researched and referenced as reliably as the work of the most talented human writers.
The most likely scenario at the moment is one in which AI assists writers and editors much like personal assistants assigned to specific, compliant tasks.
According to The New York Times book critic Dwight Garner, “To use these programs well, you need to know a lot. You must be a literary minded person. The more you know, and the better you can direct it, the better stuff you get out of it. The best users will be the best thinkers.”
Garner was commenting on his discussion with Stephen Marche, who wrote “The Death of an Author,” a murder mystery created with the help of three common AI tools. To develop the content for the book, Marche repeatedly prompted these systems for ideas and prose. Then he spent significant effort crafting and weaving the story together.
The Road to Adoption
Today, AI systems can easily aid with nonproprietary tasks, such as:
- Expanding topical ideas
- Finding and inserting boilerplate language
- Formatting content into tables or graphs
- Providing learning activities
- Delivering draft information that a human writer can then vet (a minimal sketch of this workflow follows the list)
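To make the last item concrete, here is a minimal sketch of the “draft, then vet” workflow in Python. It assumes the `openai` client library, an API key in the `OPENAI_API_KEY` environment variable and an illustrative model name; it is not a recommendation of any particular vendor, and a real setup would follow your organization’s own guidelines for tool selection and data handling.

```python
# Minimal sketch: the model produces a first pass, and the output is
# explicitly flagged for a human reviewer rather than treated as final copy.
# Assumes the `openai` Python client; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_for_review(topic: str) -> str:
    """Ask the model for a rough draft that a human writer will vet."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name, swap in your own
        messages=[
            {"role": "system",
             "content": "Draft concise copy for editorial review."},
            {"role": "user",
             "content": f"Write a 150-word draft introduction on: {topic}"},
        ],
    )
    draft = response.choices[0].message.content
    # Label the output so it cannot be mistaken for vetted copy downstream.
    return f"[DRAFT - REQUIRES HUMAN REVIEW AND FACT-CHECK]\n{draft}"

if __name__ == "__main__":
    print(draft_for_review("trends in decentralized clinical trials"))
```

The design point is the label on the return value: nothing the model produces enters the workflow without being marked as unvetted.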
The first step toward adoption, though, is to answer the question, “How can we ensure the entire company uses AI in a way that honors all that’s important to our organization and those we serve?”
The American Association for the Advancement of Science (AAAS) and its partners began to address this question with an initiative they call (AI)²: Artificial Intelligence — Applications/Implications. With (AI)², the organization will begin its journey to AI by advocating for “the responsible development and application of AI, such that it alleviates, rather than exacerbates, social inequalities.”
Why? Because these are important values to AAAS. Through this initiative, AAAS will create a framework for the ethical and responsible use of AI, considering a wide array of ethical principles. Their plan is then to share recommendations with key stakeholders, to understand the legal implications and, finally, to develop applications with human rights needs in mind.
It Begins With a Statement
Forward-thinking organizations like AAAS can take the first two steps toward the adoption of AI by establishing a responsible use statement and mobilizing a cross-functional thought leadership team to investigate capabilities and, later, manage how to leverage them.
Let’s talk about the statement first. This language should demonstrate how you plan to embrace AI innovation while securing the integrity of your content. Here’s an example:

“Our organization seeks to leverage AI technology to elevate the caliber of the work and products we deliver to our clients, while protecting intellectual property and maintaining the utmost integrity and exceptional standard of accuracy we value. While AI holds the potential to innovate, create and reshape how industries operate, we recognize the tremendous responsibility to use this technology with care. It is our organization’s policy to apply such tools in only the most ethical ways as we seek to maximize efficiency and explore new innovations.”
In addition, it’s important to provide a set of guidelines organization-wide to ensure everyone understands the dos and don’ts of submitting content to, and using content from, AI tools in their jobs. For example, these guidelines may say to avoid entering proprietary information into external platforms and to never rely exclusively on AI-generated research. At the same time, they may also encourage employees to explore available tools for mundane, nonproprietary tasks, in ways that align with all your guidelines.
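One way to operationalize the first of those rules is a simple screen that checks text for proprietary markers before it ever reaches an external platform. The sketch below is hypothetical: the marker list and function names are illustrative assumptions, not a real product’s API, and a real list would come from your legal and compliance teams.

```python
# Hypothetical guardrail: block prompts that appear to contain proprietary
# content before they are sent to an external AI platform.
import re

# Illustrative markers only; supply your own from legal/compliance review.
PROPRIETARY_MARKERS = [
    r"\bconfidential\b",
    r"\binternal use only\b",
    r"\bproject\s+\w+\b",  # e.g., internal code names
]

def safe_to_submit(prompt: str) -> bool:
    """Return False if the prompt appears to contain proprietary content."""
    return not any(
        re.search(pattern, prompt, re.IGNORECASE)
        for pattern in PROPRIETARY_MARKERS
    )

prompt = "Summarize this CONFIDENTIAL protocol amendment."
if safe_to_submit(prompt):
    print("OK to send to the external tool.")
else:
    print("Blocked: review against the responsible use guidelines first.")
```

A keyword screen like this is deliberately crude; its value is in making the guideline a checkpoint in the workflow rather than a document employees must remember on their own.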
Grow Internal AI Thought Leadership
Regarding a cross-functional AI thought leadership team, these individuals needn’t be tech experts. Clever employees can enter this exploration with little or no knowledge about AI, quickly build expertise and then advise on its potential for their jobs.
Recognize that their commitment to the cause will become an ongoing effort. The team will begin to explore what makes sense now and into the future, as technology matures. Later, you’ll need governance to manage where AI finds content, how it makes decisions and what it is fed, among other considerations.
A benefit of this approach is the ability to scale the organization’s knowledge as greater advancements are discovered. Also, gaining the team’s perspectives ensures your decisions are made with your colleagues’ and stakeholders’ ideas, attitudes and advice in mind.
Be sure to have your responsible use statement and guidelines in place to share with your cross-functional thought leadership team before they begin. As they experiment in external AI systems, they’ll want to act and seek solutions in alignment with your rules.
Here are some suggested steps to get your AI thought leadership team started:
- Define your goals and what type of AI you will focus on.
- Ask the group to create a series of questions to explore, such as:
  - How will we safeguard proprietary information?
  - Do any tools exist to suit our organization’s needs?
  - What AI tools are on the horizon?
  - How much do the tools cost?
  - What is the cost-benefit analysis of using them?
- Categorize the questions into phases. Since some questions are dependent on others, you may have to investigate one series of inquiries before moving to the next. For example, you cannot identify costs until you understand systems and usage.
- Break the larger group into subgroups that can study specific areas of interest.
- Choose what’s most interesting and have members dig deeper.
- Begin incorporating sensible elements from the research into work processes.
- Decide each next phase of research and who should be involved.
Conclusion
With AI development moving so quickly, it’s tempting to take a wait-and-see approach. But then how would you know when to jump in? By establishing in-house guardrails and thought leadership to guide and educate your organization, you can take the optimal route to whatever level of adoption you choose — and get there at a steady pace.
Mary Barlow is a senior director of media production with Encompass Communications and Learning. Email Mary at mbarlow@encompasscnl.com.