2025-01-02 15:11:27

6 AI-Related Security Trends to Watch in 2025
https://www.darkreading.com/cyber-risk/6-ai-related-security-trends-watch-2025

# Generative Artificial Intelligence and Security: Key Trends and Concerns

Most industry analysts expect organizations will accelerate efforts to harness generative artificial intelligence (GenAI) and large language models (LLMs) in a variety of use cases over the next year.

Typical examples include customer support, fraud detection, content creation, data analytics, knowledge management, and, increasingly, software development. In a recent survey of 1,700 IT professionals conducted by Centient on behalf of OutSystems, 81% of respondents described their organizations as currently using GenAI to assist with coding and software development. Nearly three-quarters (74%) plan to build 10 or more apps over the next 12 months using AI-powered development approaches.

While such use cases promise to deliver significant efficiency and productivity gains for organizations, they also introduce new privacy, governance, and security risks. Here are six AI-related security issues that industry experts say IT and security leaders should pay attention to in the next 12 months.

## AI Coding Assistants Will Go Mainstream — and So Will Risks

Use of AI-based coding assistants, such as GitHub Copilot, Amazon CodeWhisperer, and OpenAI Codex, will move from experimental and early-adopter status to mainstream, especially among startup organizations. The touted upsides of such tools include improved developer productivity, automation of repetitive tasks, error reduction, and faster development times. However, as with all new technologies, there are downsides as well. From a security standpoint, these include AI-generated suggestions containing vulnerable code, data exposure, and the propagation of insecure coding practices.

> "While AI-based code assistants undoubtedly offer strong benefits when it comes to auto-complete, code generation, re-use, and making coding more accessible to a non-engineering audience, it is not without risks," says Derek Holt, CEO of Digital.ai.

The biggest risk is that the AI models are only as good as the code they are trained on. Early users saw coding errors, security anti-patterns, and code sprawl while using AI coding assistants for development, Holt says.

> "Enterprises users will continue to be required to scan for known vulnerabilities with [Dynamic Application Security Testing, or DAST; and Static Application Security Testing, or SAST] and harden code against reverse-engineering attempts to ensure negative impacts are limited and productivity gains are driving expect benefits."

## AI to Accelerate Adoption of xOps Practices

As more organizations work to embed AI capabilities into their software, expect to see DevSecOps, DataOps, and ModelOps — or the practice of managing and monitoring AI models in production — converge into a broader, all-encompassing xOps management approach, Holt says.

The push to AI-enabled software is increasingly blurring the lines between traditional declarative apps that follow predefined rules to achieve specific outcomes, and LLMs and GenAI apps that dynamically generate responses based on patterns learned from training data sets, Holt says. The trend will put new pressures on operations, support, and QA teams, and drive adoption of xOps, he notes.

> "xOps is an emerging term that outlines the DevOps requirements when creating applications that leverage in-house or open source models trained on enterprise proprietary data," he says.

> "This new approach recognizes that when delivering mobile or web applications that leverage AI models, there is a requirement to integrate and synchronize traditional DevSecOps processes with that of DataOps, MLOps, and ModelOps into an integrated end-to-end life cycle."

Holt believes this emerging set of best practices will become hyper-critical for companies seeking to deliver quality, secure, and supportable AI-enhanced applications.

## Shadow AI: A Bigger Security Headache

The easy availability of a wide and rapidly growing range of GenAI tools has fueled unauthorized use of the technologies at many organizations and spawned a new set of challenges for already overburdened security teams.

One example is the rapidly proliferating — and often unmanaged — use of AI chatbots among workers for a variety of purposes. The trend has heightened concerns about the inadvertent exposure of sensitive data at many organizations.

Security teams can expect to see a spike in the unsanctioned use of such tools in the coming year, predicts Nicole Carignan, vice president of strategic cyber AI at Darktrace.

> "We will see an explosion of tools that use AI and generative AI within enterprises and on devices used by employees," leading to a rise in shadow AI, Carignan says.

> "If unchecked, this raises serious questions and concerns about data loss prevention as well as compliance concerns as new regulations like the EU AI Act start to take effect," she says.

Carignan expects that chief information officers (CIOs) and chief information security officers (CISOs) will come under increasing pressure to implement capabilities for detecting, tracking, and rooting out unsanctioned use of AI tools in their environment.
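A common starting point for that kind of detection, offered here as an illustration rather than anything the article prescribes, is to sweep proxy or DNS logs for known GenAI endpoints. The domain list and log format below are assumptions:

```python
import csv
from collections import Counter

# Illustrative (and deliberately non-exhaustive) GenAI service domains.
GENAI_DOMAINS = {"api.openai.com", "claude.ai", "gemini.google.com"}

def flag_shadow_ai(log_path: str) -> Counter:
    """Count requests per user to known GenAI domains.

    Assumes a CSV proxy log with 'user' and 'host' columns."""
    hits: Counter = Counter()
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["host"] in GENAI_DOMAINS:
                hits[row["user"]] += 1
    return hits

# Example: surface users with unsanctioned GenAI traffic.
# for user, count in flag_shadow_ai("proxy.csv").most_common():
#     print(user, count)
```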

## AI Will Augment, Not Replace, Human Skills

AI excels at processing massive volumes of threat data and identifying patterns in that data. But for some time at least, it remains at best an augmentation tool that is adept at handling repetitive tasks and enabling automation of basic threat detection functions.

The most successful security programs over the next year will continue to be ones that combine AI's processing power with human creativity, according to Stephen Kowski, field CTO at SlashNext Email Security+.

> "The key is achieving the right balance where AI handles high-volume routine detection while skilled analysts investigate novel attack patterns and determine strategic responses."

## Attackers Will Leverage AI to Exploit Open Source Vulns

Venky Raju, field CTO at ColorTokens, expects threat actors will leverage AI tools to exploit vulnerabilities and automatically generate exploit code in open source software.

> "Even closed source software is not immune, as AI-based fuzzing tools can identify vulnerabilities without access to the original source code. Such zero-day attacks are a significant concern for the cybersecurity community," Raju says.

## Verification, Human Oversight Will Be Critical

Organizations will continue to find it hard to fully and implicitly trust AI to do the right thing.

> "Trust in AI will remain a complex balance of benefits versus risks, as current research shows that eliminating bias and hallucinations may be counterproductive and impossible," SlashNext's Kowski says.

The practical approach is to implement robust verification systems and maintain human oversight rather than seeking perfect trustworthiness, he says.
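In practice, "robust verification" often amounts to gating model output behind deterministic checks before acting on it, with humans reviewing whatever fails. A minimal sketch, assuming a hypothetical JSON response schema rather than anything from the article:

```python
import json

def verify_model_output(raw: str) -> tuple[bool, str]:
    """Gate an LLM's JSON answer behind deterministic checks.

    Returns (ok, reason); anything not ok goes to human review."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return False, "not valid JSON"
    if set(data) != {"action", "target", "confidence"}:
        return False, "unexpected fields"
    if data["action"] not in {"allow", "block", "escalate"}:
        return False, "action outside the approved vocabulary"
    if not 0.0 <= data["confidence"] <= 1.0:
        return False, "confidence out of range"
    return True, "passed deterministic checks"

ok, reason = verify_model_output(
    '{"action": "block", "target": "10.0.0.5", "confidence": 0.9}'
)
print(ok, reason)  # True passed deterministic checks
```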

originally posted at https://stacker.news/items/833647