Why Security is the Black Box in the AI Race

Solutions Review’s Contributed Content Series is a collection of contributed articles written by industry thought leaders in enterprise software categories. Chaz Lever of DEVO argues why AI security is the black box in the next leg of the artificial intelligence technology race.

The rapid rise of new, more powerful generative AI chatbot platforms has enterprises and governments scrambling to rein in the potential negative impacts of this disruptive technology. JP Morgan, among others, has prohibited the use of ChatGPT in the workplace. Dozens of artificial intelligence leaders issued an open letter in March calling for a pause on ChatGPT development so safety measures could be reinforced. And the Biden Administration recently weighed in with several moves to develop “responsible AI” initiatives within the federal government.

They’re all worried about security. Concerns about AI are nothing new, but ChatGPT, Bard, and their ilk have upped the ante, and leaders across the spectrum are sounding the alarm. This reassessment of AI threats comes at a good time, especially with some analysts predicting that AI will contribute upwards of $15 trillion to the global economy by 2030. The technology clearly isn’t going away; the genie is out of the bottle, and it’s not going back in. It’s already fueling futuristic applications such as autonomous transportation, weather forecasting, insurance, marketing, and scientific research. But before AI can reach its true potential, people have to trust that it’s secure and not creating more threats than it eliminates.

Security in the AI Race


AI Systems Can Be Used by Attackers

AI systems are widely used as cybersecurity assets. Their powerful algorithms can analyze large amounts of data to identify patterns that could tip organizations off to a cyber-attack. They can be used to proactively identify unknown cyber threats and trigger automated remediations that segment off breached systems or quarantine malicious files.
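
To make that pattern-spotting idea concrete, here is a minimal sketch, assuming a scikit-learn Isolation Forest and made-up log features, of how statistically unusual events might be flagged for investigation. The feature names and numbers are purely illustrative and not a reference to any particular product.

```python
# Minimal sketch: flagging anomalous log events with an Isolation Forest.
# Feature names and data are hypothetical illustrations, not a real pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes_sent, failed_logins, distinct_ports] for one host-hour.
baseline = np.random.default_rng(0).normal(loc=[5_000, 1, 3],
                                           scale=[1_000, 1, 1],
                                           size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

new_events = np.array([
    [5_200, 0, 3],       # looks like normal traffic
    [90_000, 40, 150],   # large transfer, many failures, scan-like behavior
])
print(model.predict(new_events))  # 1 = normal, -1 = flagged as anomalous
```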

At the same time, AI introduces new attack vectors for malicious actors. It can be used by cyber-attackers to generate sophisticated phishing attacks that are designed to evade detection. AI-based malware can also adapt and evolve to avoid detection by traditional security systems.

AI Models Can Be Poisoned

Machine learning (ML) systems use very large amounts of data to train and refine their models, which means organizations must ensure their datasets maintain the highest possible degree of integrity and authenticity. Any failure on this front will cause their AI/ML models to produce false or harmful predictions.

Attackers can purposely sabotage an AI model by damaging, or “poisoning,” the data itself. By secretly altering the source information used to train algorithms, whether by feeding the system false inputs or by gradually manipulating existing records, data-poisoning attacks trick the learning system into building inaccurate models that produce wayward results. That makes them particularly destructive, and manipulated data can also be used to evade AI-powered defenses. Most companies aren’t prepared to deal with this escalating challenge.
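
As a toy illustration of how poisoned data skews a model, the hedged sketch below flips a fraction of training labels in a synthetic scikit-learn dataset and compares the result against a cleanly trained classifier. The dataset, model, and 30 percent flip rate are illustrative assumptions only.

```python
# Minimal sketch of label-flipping "data poisoning" against a toy classifier.
# Dataset, model choice, and poisoning rate are illustrative assumptions only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Attacker silently flips 30 percent of the training labels.
rng = np.random.default_rng(1)
flip = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```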

Information Leakage Can Haunt Future Models

It’s bad enough when AI use opens an organization up to being hacked. It can be worse when sensitive information is shared inadvertently and used inappropriately. This can happen with AI models. If a developer inserts proprietary company secrets into a model, there’s an ongoing risk that those secrets will funnel back into future models, and people outside the circle of trust will end up learning things only a few insiders are supposed to know. Organizations can also face data privacy questions based on where their AI models are developed and where they are hosted.
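
One partial mitigation is to scrub obvious secrets before any text is sent to an external model. The sketch below, assuming simple regular-expression patterns and a hypothetical redact() helper, shows the general idea; real deployments would need far more robust detection and policy controls.

```python
# Minimal sketch: redact obvious secrets before text leaves the organization.
# The patterns and the redact() helper are hypothetical, illustrative examples.
import re

PATTERNS = {
    "EMAIL":   re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known secret pattern with a placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label}]", text)
    return text

prompt = "Summarize this: contact jane.doe@example.com, token sk-ABCDEF1234567890XYZZ"
print(redact(prompt))
# Summarize this: contact [REDACTED_EMAIL], token [REDACTED_API_KEY]
```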

What are the trade-offs of developing and running models locally versus in the cloud? From a privacy perspective, the answer may well influence what organizations are willing to do.

Generative AI Can Create Convincing Fake Images and Profiles

Using AI, scammers can more easily create highly realistic fake content that they use to deceive targets – and the public. Applications include phishing emails, fake profiles, fake social media posts, and messages that appear legitimate to unsuspecting victims. In late May, a deepfake image of an explosion at the Pentagon briefly caused the stock market to dip by 0.26 percent before rebounding. After a scam artist posted the image on Twitter, police in Arlington, Va., quickly debunked it, and photography experts identified it as AI-generated. As generative AI technology continues to improve, these situations will likely become more prevalent and more problematic.

Generative AI can also be used to create photos of people who don’t exist. Once a scammer has such a photo, it can be used to create fake profiles on social media platforms. It can also be used to create “deepfake” videos – superimposing a face onto someone else’s body – to manipulate people into believing a person has done something they haven’t. Deepfakes have already targeted celebrities and been used for blackmail.

Complicating Data Privacy

When AI collects personal data, does its use comply with the stipulations spelled out by GDPR? Not necessarily. Ideally, AI algorithms should be designed to limit the use of personal data and keep it secure and confidential. GDPR is very specific when it comes to the use of personal data. It permits automated decision-making only if humans are involved in the decision, if the person whose information is being used has given consent, if the processing is necessary to perform a contract, or where it is authorized by law. GDPR also requires organizations to tell individuals what information is being held about them and how it is being used. As a result, significant legal issues will have to be addressed around GDPR and the use of personal data, and new policies will need to be set accordingly.
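
As a rough sketch of what keeping humans in the loop could look like in practice, the example below routes an automated decision to manual review unless an explicit lawful basis is recorded. The field names, basis values, and routing rule are hypothetical illustrations, not legal guidance.

```python
# Minimal sketch of a human-in-the-loop gate for automated decisions.
# Field names, lawful-basis values, and the routing rule are hypothetical.
from dataclasses import dataclass
from typing import Optional

LAWFUL_BASES = {"explicit_consent", "contract", "authorized_by_law"}

@dataclass
class DecisionRequest:
    subject_id: str
    model_score: float
    lawful_basis: Optional[str]  # recorded basis for fully automated processing

def route(request: DecisionRequest) -> str:
    """Allow a fully automated decision only when a recorded lawful basis exists."""
    if request.lawful_basis in LAWFUL_BASES:
        return "automated_decision"
    return "queue_for_human_review"

print(route(DecisionRequest("subject-001", 0.91, "explicit_consent")))  # automated_decision
print(route(DecisionRequest("subject-002", 0.34, None)))                # queue_for_human_review
```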

Proceeding with Caution

AI is already an important driver of innovation and value, and it will continue to be. But it comes with risks that need to be addressed now. Generative AI applications have brought security and ethical issues to the surface, forcing stakeholders to ask questions and push for solutions that can keep the technology a net positive for years to come.


Chaz Lever