Unlocking the power of ChatGPT: Best practices for using generative AI


Artificial intelligence (AI) is transforming how we work, think, and solve problems. Tools like ChatGPT are reshaping everything from drafting content to analyzing data. But with great power comes great responsibility, especially in the healthcare and benefits administration space.

It’s not a matter of asking, “What can AI do?” but rather, “What should AI do?” That distinction is critical — for your teams, your clients, and the people we serve together.

Whether you’re exploring AI for the first time or looking to scale usage within your organization, these best practices will help you harness AI tools like ChatGPT safely, effectively, and ethically.

Be intentional and purpose-driven

AI should solve real problems, not just add noise. Before engaging with ChatGPT or similar tools, identify the purpose of the interaction. Ask yourself:

  • What am I trying to accomplish?
  • Is AI the right tool for the job?
  • How will this improve client experience, productivity, or outcomes?

When used with clear intent, AI can deliver significant efficiencies — accelerating workflows, enhancing quality, and freeing up time for more strategic work. That’s the kind of meaningful impact you should aim for.

Provide clear context and prompts

When drafting a prompt, include relevant background, your audience, tone preferences, or specific constraints. This ensures responses are aligned with your goals and brand voice.

Instead of: “Write something about HSAs”

Try: “Write a 100-word summary of HSA benefits for a new employee onboarding guide, using a conversational tone.”

Context-rich prompts lead to more accurate and actionable results, especially when applied to client communications or member-facing resources.

Use AI for first drafts, not final products

Think of AI as a creative partner, not a replacement for human judgment. It’s a tool to accelerate first drafts, surface insights, and ideate quickly. But final outputs should always be vetted, refined, and approved by a real person.

Safeguard data: What NOT to share

Publicly available AI models are highly capable, yet they’re not designed to be secure vaults for confidential information. Never enter personally identifiable or sensitive data into an AI tool unless you’re working within an explicitly approved, secured enterprise environment.

Avoid sharing:

  • PII: Names, SSNs, addresses, birth dates, contact info
  • PHI: Claims details, diagnosis/treatment info, plan IDs
  • Financials: Bank info, credit/debit card numbers, contribution records
  • Client data: Custom plan designs, service issues, contracts
  • Credentials: Login info, system screenshots, API tokens
  • Legal documents: Contracts, SLAs, audit materials, regulatory filings

A good rule of thumb: If it’s protected under HIPAA, sensitive to an employer group, or subject to compliance rules, don’t enter it into an AI tool.

Turn early successes into scalable wins

AI works best when adoption is supported by structure. We’ve seen that when a “power user” leads the way, team-wide adoption rises dramatically. Sharing workflows, templates, and success stories encourages broader and safer use.

Consider launching AI lunch-and-learns or sharing internal use cases to fuel excitement and responsible experimentation.

Build AI into the workflow, not on top of it

AI should integrate naturally into your team’s process, not create extra work. Encourage use for things like brainstorming campaign ideas, generating knowledge base articles, creating initial versions of presentations or emails, and drafting FAQs or guides. Think about where your team has friction or bottlenecks and test AI as a solution.

Encourage creativity, but establish guardrails

AI can inspire innovation, but it’s not infallible. Be wary of “hallucinations” (confident but incorrect responses), and never use AI to replace critical thinking or decision-making.

Governance is what puts those guardrails in place. Develop internal guidance and compliance standards that define acceptable use cases, set expectations for accuracy review, and outline escalation paths when uncertainty arises. And be sure to provide points of contact for compliance questions.

Our AI rollout is governed by an enterprise steering committee to ensure safety, compliance, and alignment with our values. Partners should similarly ensure that enthusiasm is matched by oversight.

Transforming possibility into reality

Generative AI tools like ChatGPT have opened new doors for efficiency and ideation. By adopting best practices and aligning use with your organization’s values and goals, you can unlock the full potential of these tools, while keeping privacy, accuracy, and quality top of mind.

Want to learn more about how Alegeus is exploring AI to drive innovation in consumer-directed healthcare? Reach out to your partner success team — we’d love to share insights and collaborate.