Corporate race to use Artificial Intelligence risky

February 27, 2024 · 5 min read

A rush by Australian companies to use generative Artificial Intelligence (AI) is escalating privacy and security risks to the public.

According to a University of the Sunshine Coast (UniSC) study, AI poses a threat to staff, customers and stakeholders.

The research was published in the Springer Nature journal AI and Ethics on the weekend. It warns that rapid AI take-up is leaving companies open to wide-ranging consequences.

These include mass data breaches that expose third-party information, and business failures based on manipulated or “poisoned” AI modelling – whether accidental or deliberate.

The study included a five-point checklist for businesses to ethically implement AI solutions.

ChatGPT fraught with moral issues in AI rush

UniSC Lecturer in Cyber Security Dr Declan Humphreys said the corporate race to adopt generative AI solutions like ChatGPT, Google’s Bard or Gemini was fraught not just with technical, but with moral issues.

Generative AI applications turn large amounts of real-world data into content that appears to be created by humans. ChatGPT is an example of a language-based AI application.

“The research shows it’s not just tech firms rushing to integrate AI into their everyday work. There are call centres, supply chain operators, investment funds, companies in sales, new product development and human resource management,” Dr Humphreys said.

“While there is a lot of talk around the threat of AI for jobs, or the risk of bias, few companies are considering the cyber security risks.

“Organisations caught in the hype can leave themselves vulnerable by either over-relying on or over-trusting AI systems.” 

The paper was co-authored by UniSC experts in cyber security, computer science and AI, including Dr Dennis Desmond, Dr Abigail Koay and Dr Erica Mealy.

It found that many companies were building their own Artificial Intelligence models, or using third-party providers, without considering the potential for hacking.

“Hacking could involve accessing user data, which is put into the models, or even changing how the model responds to questions or the answers it gives,” Dr Humphreys said.

“This could mean data leaks, or otherwise negatively affect business decisions.”

He said legislation had not kept pace with issues of data protection and generative AI.

“This study recommends how organisations can ethically implement AI solutions by taking into consideration the cyber security risks.”

The five-point checklist includes:

  • Secure and ethical AI model design
  • Trusted and fair data collection process
  • Secure data storage
  • Ethical AI model retraining and maintenance
  • Upskilling, training and managing staff

Dr Humphreys said privacy and security should be a top priority for businesses, especially when implementing Artificial Intelligence systems in 2024 and beyond.

“The rapid adoption of generative AI seems to be moving faster than the industry’s understanding of the technology and its inherent ethical and cyber security risks,” he said.

“A major risk is its adoption by workers without guidance or understanding of how various generative AI tools are produced or managed, or of the risks they pose.

“Companies will need to introduce new forms of governance and regulatory frameworks to protect workers, sensitive information and the public.” 
