Finance chiefs cite deepfakes, synthetic identity attacks as top AI concerns
Human oversight remains crucial to manage AI safely.
Artificial intelligence (AI) may be the defining force of the decade for global finance, but some of the industry’s most influential leaders say its biggest risks are already testing the limits of trust, security and resilience.
At the 2025 Singapore Fintech Festival, executives from Mastercard, DBS, Swift, Ant International, and Sumitomo Mitsui Banking Corporation (SMBC) highlighted that AI-driven cyberattacks, deepfakes, data breaches, and algorithm failures—not financial losses—are the biggest threats, potentially undermining trust and institutional stability.
Craig Vosburg, chief services officer at Mastercard, said that whilst AI promises major productivity gains, its “dark side” cannot be ignored. “The bad guys are also excited about the new capabilities that AI brings to their business, and cybercrime and the risks associated with cybersecurity are a huge risk for all of us.”
He noted that cybercrime, in absolute dollar terms, “if it were a country, would be the third largest GDP in the world.” With such lucrative opportunities available to bad actors, he said they would “leverage this technology to its full extent.”
Vosburg said that as billions of devices connect and interact in real time, security must be constant and collective.
“No one party can protect the system from cybersecurity threats, whether that’s large-scale breaches or the advent of deepfakes that trick consumers into scams,” he said. “It takes partnerships, both in the private sector and with the public sector, to share information and to innovate with security by design.”
He added that embedding protection into new solutions from the outset and “investing in technologies that we all need to protect our environments, our supply chains and the broader ecosystem” would be essential for sustaining trust.
Ant International’s Chief Executive Officer, Yang Peng, said that cyberattacks driven by AI-generated content have already become part of daily operations for financial technology firms.
“To fake a photo, fake a video, fake a voice is so easy,” he said. “For payment and banking, more and more of our operations rely on eKYC [electronic Know Your Customer], but with the development of AIGC [AI-generated content], it’s now possible to breach these systems.”
Yang said Ant first detected an AI-generated deepfake attack on its systems in January last year.
“Nowadays, when we are talking, every day there are more than 20,000 attempts around the world, mostly in Asia,” he said. “By eyeball, you cannot tell the difference at all. So we have to keep developing our technology to make sure those eKYC will not be breached.”
Beyond defending against fake identities, Yang said AI models themselves can become a source of risk. Ant has replaced more than 200 smaller models with a single foundation model for foreign exchange forecasts, improving accuracy but also creating new vulnerabilities.
“If we rely everything into just one model, what if this model doesn’t work?” he said. To manage that, Ant applies an “n minus one” policy, keeping one previous model active to compare results. “If the discrepancy exceeds a certain amount, then people's intervention will happen,” he said. “We will keep doing that.”
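The "n minus one" safeguard Yang describes can be sketched in a few lines: the previous model stays live alongside the new foundation model, and a large enough discrepancy between their forecasts escalates the decision to a human. The function name, the relative-discrepancy measure, and the 2 per cent threshold below are illustrative assumptions, not Ant's actual implementation.

```python
def n_minus_one_check(current_forecast: float,
                      previous_forecast: float,
                      max_discrepancy: float = 0.02) -> str:
    """Return 'auto' when the two models agree, 'human_review' otherwise."""
    # Relative discrepancy between the new foundation model and its predecessor.
    denom = max(abs(previous_forecast), 1e-12)
    discrepancy = abs(current_forecast - previous_forecast) / denom
    if discrepancy > max_discrepancy:
        return "human_review"   # "people's intervention", in Yang's phrasing
    return "auto"


print(n_minus_one_check(1.085, 1.083))  # small gap -> automated path
print(n_minus_one_check(1.085, 1.150))  # large gap -> escalate to a human
```

The point of the pattern is that the old model acts as a cheap sanity check on the new one, so a single-model failure degrades to a manual process rather than a silent error.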
Yoshihiro Hyakutome, deputy president executive officer and co-head of global banking unit at SMBC, said he “can’t agree more about the risk of cyber,” pointing to Japan’s recent experience with major cyber incidents.
“Cyber is going to be very relevant to AI and also data integrity,” he said. “Because if the data set is undermined by a third party, that’s going to change how agents look at the models and produce numbers.”
From a regulated bank’s perspective, Hyakutome said the key priority was ensuring “a clear standard for auditability of AI models” so that when security breaches or unintended consequences occur, institutions can identify what went wrong.
“That’s going to be one of the biggest risks for us,” he said.
DBS Bank Chief Executive Officer Tan Su Shan said that whilst cybersecurity remains the top concern, “technological resiliency” is another growing area of risk.
“As everyone has moved to microservices and APIs, and we all have third-party vendors — cloud, software services, large language models — you may not be able to control everyone’s resiliency,” she said.
Tan cited incidents such as the CrowdStrike outage as a reminder that “if something is down, not of your own doing, then what is your plan B to enable payments, to enable systems to work?”
“Given this complicated intertwining of everyone’s tech and the complexity of microservices, having those resiliency dry runs and alternative pathways is something we need to test and learn,” she added.
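The "plan B" Tan is asking for is essentially a fallback pathway around a failed third-party dependency. A minimal sketch of that routing pattern, with hypothetical provider names and a simulated vendor outage standing in for a real incident like the CrowdStrike one:

```python
def route_payment(amount: float, providers: list) -> str:
    """Try each payment pathway in order; raise only if all of them fail."""
    errors = []
    for name, send in providers:
        try:
            return send(amount)          # first healthy pathway wins
        except ConnectionError as exc:   # outage "not of your own doing"
            errors.append((name, str(exc)))
    raise RuntimeError(f"all pathways down: {errors}")


def primary(amount):                     # simulate a third-party vendor outage
    raise ConnectionError("primary rail unavailable")

def backup(amount):
    return f"settled {amount:.2f} via backup rail"


print(route_payment(250.0, [("primary", primary), ("backup", backup)]))
```

The resiliency dry runs Tan mentions amount to exercising the `backup` branch deliberately, before a real outage forces it.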
From Swift, Chief Executive Officer Javier Pérez-Tasso said the conversation about AI risk must extend beyond cybersecurity into governance and global standardisation. He described AI’s rapid adoption as a catalyst for what he called “a fundamental reshaping for financial services operating models.”
“Technology very often is the easy part,” he said. “It’s governance, controls, frameworks, the right processes, standardisation, upskilling and reskilling that are fundamental for scaling AI safely.”
Pérez-Tasso added that financial systems must prepare for the next frontier of cybersecurity — post-quantum cryptography.
“Post-quantum will massively change the way we scale solutions in the future,” he said. “We will be changing the known public key algorithms like RSA to quantum-ready standards being developed by NIST.”
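A common transition pattern on the road to post-quantum readiness is a hybrid key exchange: combine a classical shared secret (for example from ECDH or RSA key transport) with one from a NIST-standardised KEM such as ML-KEM (FIPS 203), so a session stays safe if either primitive is broken. The sketch below only illustrates the combining step; the secrets are placeholders, and a real deployment would obtain them from an actual KEM library rather than `os.urandom`.

```python
import hashlib
import hmac
import os

def hybrid_session_key(classical_secret: bytes, pq_secret: bytes) -> bytes:
    """Derive one session key from two key-exchange secrets (HKDF-style)."""
    # Concatenate-and-KDF: an attacker must break BOTH exchanges to recover the key.
    ikm = classical_secret + pq_secret
    prk = hmac.new(b"hybrid-kex", ikm, hashlib.sha256).digest()          # extract
    return hmac.new(prk, b"session-key" + b"\x01", hashlib.sha256).digest()  # expand

key = hybrid_session_key(os.urandom(32), os.urandom(32))
print(len(key))  # 32-byte session key
```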
He said such measures, along with standardisation of AI frameworks and cross-border cooperation, would be “the foundation of the future industry.”
Vosburg of Mastercard added that AI adoption was also forcing new thinking around data management.
“AI is only as good as the data that it’s trained on,” he said. “For all of us, to the extent we’re using it to deliver outcomes that are proprietary to our business, that largely rides on the power of the ability to leverage proprietary data.”
He said Mastercard had created a hub-and-spoke model with a central centre of excellence focused on “governance, use case prioritisation, partner selection, and tool selection,” and spoke-level teams focused on upskilling engineers and managing AI responsibly.
Hyakutome of SMBC said the Japanese bank is also using AI to transform staff engagement.
“We created an avatar CEO so that junior staff can reach out to this avatar CEO and ask tough questions,” he said. “This AI will challenge them. It’s basically enabling employees to feel that AI is their boss, their colleague, their co-worker.”
Yang of Ant International said AI would fundamentally change how firms structure their workforces.
“The number of models is more than our employee number,” he said. “In future, the agents will probably also be larger than my employee size. How you govern this — how to maximise this new workforce — will be a brand new topic for us.”
He said Ant had created a “modelling management team” to ensure responsible deployment and reduce model-related risks. Talent shortages, however, remain a challenge.
“Right now, the talent pool only exists in the US and China, and acquisition is becoming tougher and tougher,” Yang said.
Tan of DBS said that amid these changes, maintaining human purpose will be critical.
“AI will take away the mundane work, but humans still need to do the meaningful work, the creative work, the empathetic work,” she said. “One human can have multiple agents, but we need to learn how to manage them safely.”
She cautioned against “weaponisation of technology” and said banks must train their employees and customers “to evolve with AI, to ask the right questions, and to make sure it doesn’t harm our ecosystem.”
Yang added that Ant’s infrastructure is designed to preserve privacy while allowing collaboration.
“We have 11 data centres globally and provide 99.99959 per cent availability,” he said. “While we collaborate, the raw data will not go out from each company. Only the privacy-preserved compute inside can collaborate.”
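As a back-of-the-envelope check, the availability figure Yang quotes translates into only a couple of minutes of downtime a year:

```python
def annual_downtime_minutes(availability_pct: float) -> float:
    """Convert an availability percentage into expected downtime per year."""
    minutes_per_year = 365.25 * 24 * 60  # average year, including leap days
    return (1 - availability_pct / 100) * minutes_per_year

# 99.99959 per cent availability is roughly 2.16 minutes of downtime per year.
print(round(annual_downtime_minutes(99.99959), 2))
```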
Pérez-Tasso of Swift said interoperability between jurisdictions will be essential as AI scales across borders.
“In our case, we are an international cooperative. The big success has been that it is a standard decision body,” he said. “With AI, we are going to have domestic frameworks that will need to interoperate globally. Public and private sector collaboration will be fundamental.”
He added that Swift’s migration to the ISO 20022 messaging standard demonstrates how shared frameworks can help institutions exchange data securely and consistently.
Across the hour-long discussion, one consensus was clear: as AI transforms finance, collaboration will define its safety. Vosburg put it simply — “partnerships will be foundational” — while Tan reminded the audience that the human element remains irreplaceable.
“We need to keep AI safe for our employees, for our stakeholders and our customers,” she said. “Experiment, learn quickly, contain the risks, and evolve as we go along.”