Beyond the Hype: Building Trust in Generative AI for a Secure Financial Future

By Andy Ng

Financial organisations lean on AI to unravel complex issues, a shift propelled by an exponential rise in big and alternative data sources.

AI is no longer science fiction – it is a gamechanger today. With advances in modern computing, machines can learn, adapt and make autonomous decisions. Nowhere is this more evident than in the launch of ChatGPT, a generative AI tool that has taken the world by storm and played a pivotal role in creating mainstream public awareness of AI.

Today, AI has become an integral part of enterprises around the world, particularly in sectors like finance and healthcare. From improving customer experiences to optimising business operations, its potential is all but limitless. Financial organisations lean on AI to unravel complex issues, a shift propelled by an exponential rise in big and alternative data sources. From retail banking and wholesale banking to wealth management, AI is having a transformative effect across the financial industry.

AI as a double-edged sword
Adopting AI tools can help organisations enhance their efficiency, gain a competitive edge, and reduce manual workloads. In fact, the financial sector has been at the forefront of leveraging AI for cybersecurity purposes, including threat detection and behavioural analytics. For instance, AI enables faster detection of anomalous or deceptive activity, improving the ability of financial institutions to prevent theft and recover fraudulently transferred funds.
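To illustrate the idea behind this kind of behavioural analytics, here is a minimal sketch of unsupervised anomaly detection over transactions, assuming scikit-learn's IsolationForest; the features, values and thresholds are purely hypothetical and not any institution's actual model.

```python
# A minimal sketch of transaction anomaly detection. Assumes scikit-learn;
# all feature names, values and thresholds below are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulated history: [amount_sgd, hour_of_day] for routine card activity.
normal = np.column_stack([
    rng.normal(loc=80, scale=30, size=1000).clip(min=1),  # typical spend
    rng.normal(loc=14, scale=4, size=1000).clip(0, 23),   # daytime hours
])

# Fit an unsupervised model on historical behaviour only.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score incoming transactions: a large transfer at 3 a.m. vs. a routine one.
incoming = np.array([[5000.0, 3.0], [75.0, 13.0]])
for tx, label in zip(incoming, model.predict(incoming)):
    status = "flag for review" if label == -1 else "ok"
    print(f"amount={tx[0]:>7.2f} SGD  hour={tx[1]:>4.1f}  ->  {status}")
```

Real systems combine far more behavioural signals, but the principle is the same: learn what routine activity looks like, then flag deviations fast enough to stop a fraudulent transfer.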

On the flip side, the use of AI, especially public generative AI tools, can spark a myriad of concerns spanning the ethical, security and legal domains, raising critical questions about how much trust should be placed in intelligent systems that lack robust checks and balances.

Unlike traditional AI systems, which rely on predefined rules and algorithms, generative AI has the capacity to generate novel outputs autonomously. While this unleashes exciting possibilities, it also introduces inherent risks. Without proper guardrails in place, there's a very real danger of sensitive data being compromised, leading to privacy violations and financial loss. 

In an industry already plagued by constant fraud concerns, with the global cost of fraud predicted to surpass $40 billion by 2027, it is no surprise that the widespread adoption of AI within the sector does not bring cheer to all. Given that even the least technical criminals can turn to AI technology to pilfer data for profit, we are not far from ushering in a golden age of cybercrime. Need innovative malware or a more sophisticated phishing script? Just ask.

Despite the widespread excitement surrounding the adoption of generative AI, access to these tools remains unequal. According to our latest Veritas research, 58% of office workers in Singapore said they use these tools weekly, while 20% do not use them at all. This disparity is likely due to the lack of guidelines, as only 62% of respondents have received any guidance from their employers on what is acceptable and what is not. This is causing a worrying divide in the workplace: 56% of office workers in Singapore feel that colleagues who use generative AI have an unfair advantage over those who do not, potentially leading to resentment and a negative culture.

Organisations also miss out when employees fail to raise their efficiency through the appropriate use of generative AI. Those who do use it report benefits such as faster access to information, increased productivity and even the generation of new ideas.

This underscores the urgent need to bridge the gap between employee demand for training and employer offerings. Concerns about data leakage, compliance risks, and inaccuracies in generated information loom large, further exacerbating the divide between employees. Such hesitancy not only impedes organisational productivity but also widens the talent gap, particularly in roles reliant on data-intensive tasks or stringent regulatory compliance.

The need for responsible AI in finance
In a financial landscape where decisions wield significant monetary impact, it is paramount for AI systems to prioritise fairness, mitigate bias, and uphold transparency throughout their lifecycle. The quality of data fed into AI engines emerges as a pivotal concern: the accuracy of AI-generated information can be compromised, particularly when training data is tainted with malicious intent to distort models or inject harmful biases.

With the unregulated use of AI, security risks escalate as the technology evolves and becomes more sophisticated over time. Malicious actors keen on exploiting vulnerabilities inherent in generative AI models may launch cyberattacks, jeopardising the integrity and availability of financial data, applications, and systems. In the same Veritas research, 80% of office workers in Singapore acknowledged using generative AI tools such as ChatGPT and Bard at work – including risky behaviour like inputting customer details, employee information and company financials into the tools. As such, it is imperative for organisations to fortify their defences and implement stringent measures to safeguard data integrity.
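As one concrete illustration of such a measure, the sketch below screens a prompt for sensitive patterns before it ever leaves the organisation for a public generative AI tool. The regular expressions and policy names are illustrative assumptions, not a real data loss prevention rule set.

```python
# A hypothetical pre-submission guardrail: redact sensitive values from a
# prompt before it is sent to a public generative AI tool. The patterns
# below are illustrative assumptions, not production DLP rules.
import re

# Example patterns: Singapore NRIC/FIN numbers and 16-digit card numbers.
SENSITIVE_PATTERNS = {
    "nric": re.compile(r"\b[STFGM]\d{7}[A-Z]\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def screen_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact sensitive matches and report which policies were triggered."""
    hits = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[REDACTED-{name.upper()}]", prompt)
    return prompt, hits

cleaned, violations = screen_prompt(
    "Summarise the dispute for customer S1234567D, card 4111 1111 1111 1111."
)
print(cleaned)      # sensitive values replaced before leaving the bank
print(violations)   # ['nric', 'card_number'] -> log for compliance review
```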

Bridging the gap and building trust
It is critical for financial organisations to develop, implement and clearly communicate guidelines and policies on the appropriate use of generative AI, along with the right data compliance and governance tools in place for ongoing enforcement. 

Based on our latest research findings, 95% of office workers in Singapore said guidelines and policies on generative AI use are important, but only 43% of employers currently provide any mandatory usage directions to employees. To encourage responsible adoption of generative AI and create a more level playing field, these guidelines and policies must extend beyond mere ethical considerations to encompass alignment with company objectives, robust risk mitigation strategies, and tailored employee training programmes. Financial organisations must also undertake comprehensive risk assessment. This entails reviewing data storage practices, refining access controls, and establishing stringent data-sharing protocols to ensure compliance with regulatory mandates and industry standards.

When customers choose a bank or financial provider to do business with, they trust that the vast amounts of highly sensitive personal information they share will be in safe hands. With data privacy as a linchpin of ethical AI, complying with and staying ahead of privacy regulations can be a catalyst for positive change. By prioritising responsible AI practices like data transparency, privacy and fairness, organisations build trust with users and regulators. This focus on responsible development becomes a long-term competitive advantage, as ethical AI is now an essential commitment for any company embracing the technology.

The future of finance is undoubtedly intertwined with AI. To chart a course towards a more secure and innovative future, financial organisations should adopt AI tools alongside comprehensive safeguards and prioritise training employees on the latest compliance and cybersecurity protocols. Done right, they can unlock the power of AI while ensuring cyber resilience.
