AI needs better explainability to build decision-makers’ trust
Transparency and collaboration are key to meaningful AI adoption.
Artificial intelligence is rapidly evolving, but the industry is overlooking a critical issue: how AI models present their outputs to decision-makers. Without transparency and reasoning, trust in AI remains a challenge.
“What we are not speaking about that often in AI is how we actually present the output of an AI model so that the decision-makers, who are typically not data scientists, know when to trust and when not to trust AI,” said Dr. Adrienne Heinrich, AI Center of Excellence Head at UnionBank.
“Any type of AI has to include more reasoning, more explainability, more transparency, so that it’s really a handshake between the human and the AI,” Heinrich added.
Beyond technical improvements, Heinrich emphasized the need for stronger collaboration between AI developers, fintech firms, and organizations that serve underrepresented communities.
“We need to have several bridges between players to create a very customer-centric solution,” she said. “It’s easy to create something that’s possible, but we must focus on what truly benefits the target group.”
AI is also transforming financial inclusion in the Philippines by enabling alternative credit scoring for the underbanked and unbanked. Traditional credit models often fail this segment, but AI can assess creditworthiness using behavioral data.
“We can use data such as utility bill payments, internet access, and spending behavior to build AI models for credit scoring,” Heinrich explained.
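To make this concrete, here is a minimal sketch of what alternative-data credit scoring could look like. It is an illustration, not UnionBank's actual system: the feature names (on-time utility-payment rate, months of internet-access history, spending volatility) and the training data are hypothetical, and a simple linear model is chosen deliberately so that each feature's contribution to the score can be shown to a decision-maker, in the spirit of Heinrich's point about presenting outputs with reasoning.

```python
# Minimal sketch (hypothetical features and synthetic data, not a real system):
# score an applicant from alternative behavioral data, then show a per-feature
# breakdown so a non-technical decision-maker can see why the model scored
# the applicant the way it did.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical alternative-data features.
feature_names = ["on_time_utility_rate", "internet_history_months", "spend_volatility"]

# Synthetic training records standing in for real behavioral data.
X = rng.random((500, 3)) * [1.0, 60.0, 1.0]
# Synthetic label: repaid (1) vs. defaulted (0), loosely tied to the features.
y = (0.6 * X[:, 0] + 0.01 * X[:, 1] - 0.5 * X[:, 2]
     + rng.normal(0, 0.2, 500) > 0.4).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# One (hypothetical) unbanked applicant: pays 90% of utility bills on time,
# 24 months of internet history, low spending volatility.
applicant = np.array([[0.9, 24.0, 0.3]])
prob = model.predict_proba(applicant)[0, 1]

# For a linear model, coefficient * feature value is that feature's
# contribution to the log-odds: a simple, transparent explanation.
contributions = model.coef_[0] * applicant[0]
print(f"Estimated repayment probability: {prob:.2f}")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name}: {c:+.2f} log-odds")
```

The design choice matters here: a more complex model might score marginally better, but the linear form lets the system hand the decision-maker a ranked list of reasons alongside the score, which is the kind of human-AI "handshake" Heinrich describes.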