It’s been almost a year since ChatGPT broke into the world’s collective consciousness.
The launch of OpenAI’s artificial intelligence (AI) chatbot last year brought a wave of interest in generative AI, spurring optimism that the technology would revolutionize how we address humanity’s greatest challenges—from health-care disparity to economic inequality to climate change. With its ability to process vast amounts of data and identify patterns quickly, AI has emerged as a powerful tool for individuals and institutions alike.
AI’s influence is set to be transformative, not least in global finance, where the technology is already being deployed to enhance efficiency and decision-making. The financial-services industry is exploring bespoke generative AI models for trading, risk management, and cybersecurity. There was even a recent proposal that chatbots such as ChatGPT should be subjected to a “modern Turing Test” for measuring human-like intelligence, with the task of turning a US$100,000 investment into a US$1 million portfolio.
The global financial crisis of 2008 highlighted the catastrophic consequences of a lack of trust in the ecosystem. Since then, regulators and industry stakeholders around the world have emphasized the importance of rebuilding trust to ensure the market’s orderly functioning. For AI to be a true game-changer for society, and for adoption to be meaningful and effective for the greater good, its implementation must be underpinned by trust.
Humans and AI
Trust is the bedrock of any successful relationship, and the interaction between humans and AI is no exception. Individuals and institutions must trust that AI systems are reliable, unbiased, and transparent in their decision-making processes. Only then can society be confident in harnessing the technology to meet the challenges of the day. To this end, it is crucial to address concerns around data privacy, algorithmic bias, and the explainability of AI-driven decisions.
Singapore, Asia’s only AAA-rated jurisdiction, is a leading example of how trust can bridge complexity and opportunity. As an international financial center, our stable and transparent regulatory environment has attracted global institutions seeking not only a flight-to-safety but also a flight-to-quality destination for their investments. The Monetary Authority of Singapore has also introduced the “FEAT Principles”—Fairness, Ethics, Accountability, and Transparency—to guide the responsible use of AI and data analytics among financial institutions. This approach builds trust across the community, enabling emerging technologies to be integrated in tackling issues facing the market.
Singapore’s commitment to fostering a culture of trust extends beyond its borders. It actively participates in international collaborations and discussions concerning AI governance, data protection, and ethical AI use. By engaging in these dialogues, we can contribute to the development of global norms and best practices, further strengthening trust in AI technologies worldwide.
Responsible Actions
To be sure, trust in AI cannot be achieved solely through policy. While regulators and the industry work together to establish frameworks towards responsible innovation and to uphold market integrity, the actions of participants matter too. Companies must demonstrate that their systems are transparent and be accountable for unintended consequences of AI-driven decisions. They should also constantly communicate and engage with their stakeholders to ensure concerns are being addressed.
In Singapore, trust enables the deployment of AI algorithms for tasks such as risk assessment, fraud detection, and portfolio management. The technology’s ability to process vast data sets in real time empowers financial institutions to make informed decisions, reducing friction and risk exposure. The industry’s commitment to data protection and ethical AI use also bolsters investor confidence—leading to a virtuous cycle of trust as well as impactful outcomes.
In capturing the efficiencies that innovative technologies such as AI may bring, SGX Group seeks to retain the values the world knows us for, such as reliability and accountability. As an international exchange and clearinghouse, we are also working hard behind the scenes to safeguard the resilience of our platforms. Cybersecurity threats pose significant risks to financial institutions, with impacts that often extend to their clients and beyond, so systems must be fortified against potential breaches.
We will actively adopt AI to protect our platforms against cybersecurity risks and technological disruptions, while strengthening our capabilities to serve market participants. Our approach will be built on a foundation of trust, with the use of AI and data analytics aligning with the FEAT Principles.
As Bill Gates declared: The “Age of AI” has begun. A new global arms race is on, and as companies seek to distinguish themselves by how well they can harness AI, there will be many winners and losers along the way. Trust can be the bridge that makes the difference.