The Pan American Health Organization recently published a new ‘how to’ guide for developing artificial intelligence prompts in public health, a potential game-changer for the role generative AI plays in creating, translating and disseminating health information throughout the Americas.
According to the official announcement, the guide outlines steps jurisdictions can take to craft clear, safe and culturally sensitive prompts, demonstrating that in public health, how a question is asked can be as important as the answer itself.
The guide’s core message is deceptively simple: AI needs not only data but context.
The guide defines prompts as “living protocols” that are shaped over time with communities, languages and circumstances.
It advises health workers to design prompts that take regional variation into account — say, offering vaccine advice to rural families in an informal tone instead of using technical language.
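As a purely illustrative sketch of that idea (not taken from the PAHO guide, and with all names, presets and wording invented), a region- and audience-aware prompt might be parameterized along these lines:

```python
# Hypothetical sketch: assembling a prompt that adapts tone and context
# to the audience. Nothing here comes from the PAHO guide; the presets,
# function name and template text are invented for illustration.

TONE_PRESETS = {
    "rural_family": "Use plain, friendly language. Avoid medical jargon.",
    "clinician": "Use precise clinical terminology and dosing schedules.",
}

def build_vaccine_prompt(audience: str, region: str, language: str) -> str:
    """Assemble a vaccine-advice prompt tailored to audience and region."""
    tone = TONE_PRESETS.get(audience, TONE_PRESETS["rural_family"])
    return (
        f"You are a public health assistant serving {region}. "
        f"Respond in {language}. {tone} "
        "Explain when and where children can receive routine vaccines."
    )

prompt = build_vaccine_prompt("rural_family", "a rural Andean district", "Spanish")
```

The point of the sketch is that the audience, region and language are explicit inputs rather than afterthoughts, which is one way to operationalize the guide’s advice about regional variation and informal tone.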
Similar shifts toward human-centered AI have been apparent in projects supported by the World Health Organization, which emphasize that same trade-off between innovation and responsibility.
This pivot highlights, once again, how generative AI has become more than a data-processing tool: it is now an interlocutor.
The difference between a well-crafted prompt and an ambiguous one can be the difference between accurate health guidance and an alarmed community.
That was reiterated in recent findings from the European Commission’s health AI initiative, which observe that language precision and ethical framing are now paramount to safe AI use in medical outreach.
But, spoiler alert: this won’t be easy. For smaller or resource-strained public health agencies, maintaining “living” prompt libraries, ensuring accuracy and keeping cultural nuances current can be a logistical nightmare.
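To make that maintenance burden concrete, here is a minimal, hypothetical sketch of what a “living” prompt library might look like: each prompt keeps a review date and a revision history so wording changes can be audited and rolled back. This is an invented illustration, not PAHO’s tooling.

```python
# Hypothetical sketch of a "living" prompt library entry. Every revision
# archives the previous wording, so reviewers can trace how a prompt
# evolved. All names and sample text are invented for illustration.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PromptRecord:
    key: str
    text: str
    reviewed_on: date
    history: list = field(default_factory=list)

    def revise(self, new_text: str, when: date) -> None:
        """Archive the current wording, then install the reviewed revision."""
        self.history.append((self.reviewed_on, self.text))
        self.text = new_text
        self.reviewed_on = when

record = PromptRecord("vaccine_outreach_es", "Texto inicial...", date(2024, 1, 10))
record.revise("Texto revisado con matices locales...", date(2024, 6, 2))
```

Even this toy version hints at the real cost: every cultural or linguistic update implies a review cycle, an archive entry and someone accountable for the change.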
Harvard’s Berkman Klein Center warns that this kind of public-sector AI work risks going astray or replicating bias when clear governance measures are not in place.
Yet there’s something quietly revolutionary about what PAHO is doing.
Rather than treating AI as a black box, it is asking health professionals to take the wheel: to write, review and evolve, for themselves, the language that drives AI.
At a time when misinformation spreads far faster than truth, the approach feels both pragmatic and hopeful.
If how we talk to machines determines how they speak back, then teaching AI how to “listen” might be one of the most brilliant public health plays of the decade.