How it actually works: Building trustworthiness into GenAI
In a world increasingly shaped by artificial intelligence, trustworthiness isn’t just a feature—it’s a necessity. AI tools are transforming businesses and government agencies, promising increased efficiency, improved decision-making, and new opportunities. But these benefits come with a responsibility: to ensure that the AI solutions we create and implement are trustworthy.
At ICF, we recognize that trustworthiness in AI isn’t just about avoiding mistakes or mitigating risks. It’s about creating technologies that people and organizations can rely on to support their goals, protect individuals, and deliver meaningful results. So, what does trustworthiness in AI look like? How do we work to build it into every solution we develop? Can we consistently build solutions that leverage AI benefits, address AI’s drawbacks, and enhance safety and security? Is that how it actually works? Let’s get into it.
The Situation: Building trustworthy, conversational AI-powered internal efficiency tools
The Subject: Applying a human-centric, ethical, safe approach to next-generation AI-powered chatbots
The Expert: Ken Butler, Engineering Director
We’ll start with a practical example. ICF has been developing a conversational user interface (UI) for one of our federal clients’ websites. The team built an internal tool, trained on all of the agency’s content, that acts as a “digital librarian” for their staff. The goal is to improve access to the agency’s assets, national strategies, and even blog posts and articles, all in support of the agency’s overall mission. But because the library holds sensitive medical and research data, none of this works unless the tool is built on a foundation of trust.
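Every deployment differs, but tools like this digital librarian are commonly built with retrieval-augmented generation (RAG): the chatbot first retrieves the agency documents most relevant to a question, then grounds its answer in that content rather than in the model’s general training data. Here is a minimal, self-contained sketch of that pattern in Python. The document set, the keyword-overlap scoring, and the stubbed-out model call are all illustrative stand-ins for this sketch, not ICF’s actual system.

```python
# Minimal retrieval-augmented "digital librarian" sketch (illustrative only).
# A real deployment would use an embedding model and a vector store; here,
# keyword overlap stands in for semantic search so the example is self-contained.

DOCS = {
    "national-strategy.pdf": "The national strategy prioritizes secure data sharing ...",
    "blog-post-42.html": "Our library now offers researchers streamlined access to ...",
}

def score(query: str, text: str) -> int:
    """Count query terms appearing in the document (a stand-in for embedding similarity)."""
    terms = set(query.lower().split())
    return sum(1 for t in terms if t in text.lower())

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the names of the k documents most relevant to the query."""
    ranked = sorted(DOCS, key=lambda name: score(query, DOCS[name]), reverse=True)
    return ranked[:k]

def answer(query: str) -> str:
    """Ground the response in retrieved agency content, not the model's training data."""
    context = "\n\n".join(DOCS[name] for name in retrieve(query))
    prompt = f"Answer using only this agency content:\n{context}\n\nQuestion: {query}"
    # llm_complete(prompt) would call the hosted model here; stubbed out in this sketch.
    return prompt

print(answer("What does the national strategy say about data sharing?"))
```

In production, the keyword scorer would give way to an embedding model and vector database, and safety filters (more on those below) would wrap both the user’s query and the retrieved context before anything reaches the model.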
What does it mean for AI to be trustworthy? At its core, trustworthiness in AI encompasses fairness, transparency, accountability, and reliability. These qualities ensure that the technology operates as intended, respects ethical boundaries, and serves its users effectively. However, trust isn’t something you can “tack on” at the end of development. It must be baked into every stage of the process—from initial design to deployment and beyond. This is where frameworks like the NIST AI Risk Management Framework (AI RMF) come into play.
As the emerging standard framework for AI implementation across the federal government, the NIST AI RMF provides organizations with guidelines for identifying, assessing, and managing the risks associated with AI. It encourages developers to consider factors like accuracy, fairness, privacy, and security, ensuring that the technology aligns with best practices and ethical standards. At ICF, we balance frameworks like this with our human-centric approach to guide our development process and deliver AI solutions that are both innovative and trustworthy.
The AI RMF includes:
- A systematic way to recognize, evaluate, and reduce AI risks
- A focus on social responsibility, risk management, testing and evaluation, and ascertaining trustworthiness
- A definition of seven "characteristics of trustworthy AI"
- The AI RMF Core, which is made up of four functions: govern, map, measure, and manage (one illustrative encoding follows this list)
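The RMF doesn’t prescribe tooling, but to make the four Core functions concrete, here is one hypothetical way a team might structure a risk-register entry around them in Python. The schema and field names are our own illustration, not something NIST defines.

```python
from dataclasses import dataclass, field

# Hypothetical risk-register entry organized by the AI RMF Core functions.
# NIST does not prescribe a schema; this is one possible team-level mapping.

@dataclass
class AIRisk:
    name: str                      # e.g., "PII leakage in chat responses"
    govern: str                    # accountability: who owns the policy and sign-off
    map: str                       # context: where and how the risk can arise
    measure: str                   # how the risk is tested or quantified
    manage: str                    # mitigation and ongoing monitoring plan
    trust_characteristics: list[str] = field(default_factory=list)

risk = AIRisk(
    name="PII leakage in chat responses",
    govern="Privacy officer reviews releases; redaction policy is documented",
    map="User questions and source documents may contain PII",
    measure="Red-team prompts; percentage of seeded PII caught by filters",
    manage="Input/output redaction, audit logs, quarterly re-testing",
    trust_characteristics=["privacy-enhanced", "safe", "accountable and transparent"],
)
```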
At ICF, trustworthiness isn’t an afterthought—it’s a core principle that informs everything we do. Here’s how we make it happen:
1. Thoughtful application of filters and safeguards: When building AI solutions, we proactively design tools that prioritize ethical considerations. For example, we’ve developed systems capable of detecting and redacting personally identifiable information (PII) during data input (see the sketch after this list). This not only prevents misuse but also reinforces confidence that sensitive information is handled responsibly.
2. Balancing risk and value: Trustworthy AI is about minimizing risk while maximizing value. By aligning technology with our clients’ goals, we ensure that the solutions we provide don’t just work but actively support the people they’re designed for. Protecting individuals and delivering value go hand in hand.
3. A human-centric approach: AI should serve people, not the other way around. We work closely with our clients to ensure that the technologies we develop align with their unique needs and objectives. This collaborative approach helps us deliver solutions that are as effective as they are trustworthy.
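To make the first item concrete, here is a minimal sketch of a pattern-based redaction pass in Python. The two regex patterns are illustrative only; a production redaction system (including anything ICF deploys) would combine far more patterns with ML-based entity recognition and human review.

```python
import re

# Illustrative PII redaction pass applied to user input before it reaches the model.
# This sketch covers only two easily recognized formats; real systems layer
# ML-based named-entity recognition on top of pattern matching like this.

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with labeled placeholders so it never enters the prompt."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Patient jane.doe@example.gov, SSN 123-45-6789, requested the report."))
# -> Patient [EMAIL REDACTED], SSN [SSN REDACTED], requested the report.
```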
Circling back to our practical example, the conversational “digital librarian” tool was a success, building on existing content and data and giving users straightforward, intuitive access without sacrificing safety or accuracy. While it is an internal tool today, plans include a public release. The architecture is also highly scalable, opening up a world of potential implementations for other federal agencies that need a focused conversational AI experience to improve efficiency in accessing and using their own data. With the guidance of frameworks like the NIST AI RMF and our own responsible AI principles, that transition will be much smoother for developers and users.
So: IS THAT HOW IT ACTUALLY WORKS? YES. In practice, we can consistently build trustworthiness into AI implementations, resulting in solutions that:
- Improve the efficiency and trustworthiness of AI products, services, and systems
- Help teams optimize the benefits of AI technologies
- Identify and address the drawbacks of AI technologies
- Enhance AI safety and security by identifying gaps in AI risk management
- Help teams monitor and evaluate AI systems
Trustworthy AI isn’t just a buzzword—it’s a commitment to ethical, reliable, and effective technology. By integrating solid frameworks, prioritizing client goals, and designing with people in mind, we are proving that it’s possible to build trustworthiness into every AI solution. Whether you’re a business or a government agency, you can count on ICF to deliver AI solutions that you—and the people you serve—can trust. Ready to explore how trustworthy AI can transform your organization? Let’s start the conversation today.
AI is evolving at a breakneck pace, and ICF is helping organizations define the rules for how to integrate it efficiently and ethically. We’re already working with multiple government and energy clients to develop strategies and tactics for harnessing this powerful technology to deliver outcomes — all while minimizing risks and ensuring accuracy. Learn more about our Responsible AI principles.