Securing data in the AI world | Inquirer Opinion
Business Matters

Securing data in the AI world

In just a few years, artificial intelligence (AI) has become a transformative force—one that’s reshaping industries and influencing how we work and live. AI and it’s associated technologies—including machine learning, deep learning, and foundation models—have significant potential to redefine economies everywhere by accelerating innovation and delivering productivity increases due to greater efficiency.

A recent Access Partnership study found that in the Philippines alone, generative AI could unlock up to $79.3 billion worth of productive capacity—equivalent to one-fifth of GDP in 2022. In work with business and government leaders across Asia Pacific, I’m also seeing that capabilities to secure AI tools and systems to deliver trustworthiness are much needed.

That’s because even as technology becomes more sophisticated, so do cyber threats. For example, while generative AI adoption creates tremendous value, it also expands the adversary attack surfaces, and enterprises must broaden their capabilities to accommodate.

Article continues after this advertisement

Ramping up cybersecurity measures is vital as digital reliance increases—especially in the Philippines. New research indicates the country had Southeast Asia’s highest number of disruptive cyberattacks in 2023, with 29 percent of organizations experiencing a 50-percent or more increase in incidents.

FEATURED STORIES

A core issue is that AI training sets ingest massive amounts of data—often valuable and sensitive data—which makes tools such as chatbots prime targets for extracting confidential data. Data sets are also subject to data poisoning—when attackers intentionally tamper with a model’s training data so that it generates inaccurate responses, an attack on integrity.

Externally facing chatbots are also exposed to new attacks such as prompt injection. Here, threat actors feed malicious prompts into AI tools to trick them into taking harmful actions such as leaking or deleting data, or generating convincing phishing emails.

Article continues after this advertisement

In response, organizations must act now to secure their AI data, models, and data usage—although working out how to achieve this can seem overwhelming. With new vulnerabilities and attack vectors constantly emerging, it’s becoming increasingly difficult for organizations to keep pace with data protection requirements.

Article continues after this advertisement

That’s not the only challenge. When multiple tools are used for different aspects of data security—from preventing data loss to detecting threats—it can lead to fragmented security controls that can be difficult to manage cohesively.

Article continues after this advertisement

Compliance issues add to the complexity, with regulations to govern AI usage poised to rapidly increase. The Philippines, for example, is planning to propose the creation of a Southeast Asian regulatory framework to set new AI rules, based on its own draft legislation.

However, where organizations lack integrated monitoring capabilities and visibility, it’s not easy to get a comprehensive view of data security across the organization.

Article continues after this advertisement

Fortunately, cybersecurity leaders can fight back with the right strategies and tools. The most powerful systems de-risk AI adoption in a five-step approach. The first step involves implementing AI security fundamentals such as encrypting data, managing identities and actions, and incorporating security through awareness training.

The second step involves securing AI models by continuously scanning for vulnerabilities, malware, and corruption across the AI pipeline, and configuring controls so that no one person or thing has access to all data or AI model functions.

A third step involves securing the way AI models can be used by implementing controls to monitor for prompt injections and detect and respond to data poisoning, among other threats.

Securing the infrastructure that enables the safe use of AI is also vital. This step includes refining access control, implementing robust data encryption, and deploying vigilant intrusion detection and prevention systems around AI environments.

The fifth step involves establishing solid governance to ensure AI systems remain safe and ethical.

An AI-powered security platform can help with all five steps, protecting increasing attack surfaces through measures such as threat intelligence and automation.

Savvy business leaders instinctively know that acting boldly with AI is a smart move. No enterprise wants to be left behind in the march toward such a transformative technology.

However, they also know that from customer, investor, and employee perspectives, a license is required to operate these exciting new systems. That license is trust—and AI cannot be trusted if it’s insecure or able to be manipulated by attackers.

By proactively embracing the right strategies, organizations can remain ahead of fast-evolving threats. They can also build a strong foundation for their customers to choose trusted AI—and identify themselves as AI leaders in the process.

——————

Chris Hockings is chief technical officer (cybersecurity) at IBM Asia Pacific.

——————

Your subscription could not be saved. Please try again.
Your subscription has been successful.

Subscribe to our daily newsletter

By providing an email address. I agree to the Terms of Use and acknowledge that I have read the Privacy Policy.

Business Matters is a project of Makati Business Club ([email protected]).

TAGS:

No tags found for this post.
Your subscription could not be saved. Please try again.
Your subscription has been successful.

Subscribe to our newsletter!

By providing an email address. I agree to the Terms of Use and acknowledge that I have read the Privacy Policy.

© Copyright 1997-2024 INQUIRER.net | All Rights Reserved

This is an information message

We use cookies to enhance your experience. By continuing, you agree to our use of cookies. Learn more here.