
AI-Powered Virtual Employees Are Coming—And So Are New Cybersecurity Challenges

By NG AI Admin
2025-04-23

As AI technology continues its rapid evolution, companies may soon welcome a new type of digital worker: AI-powered virtual employees. According to Anthropic’s Chief Information Security Officer, Jason Clinton, these intelligent agents could begin operating across corporate networks as soon as next year—bringing with them transformative potential and unprecedented security risks, as Axios reports.

From Task-Based Agents to Autonomous AI Workers

Unlike traditional AI agents that execute specific, limited tasks—such as flagging phishing attempts or responding to system alerts—virtual employees will function more autonomously. Clinton explained in an interview with Axios that these AI identities could have their own digital “memories,” roles, logins, and access credentials within a company’s infrastructure.

This leap in capability means virtual employees might someday contribute to real business operations with minimal human oversight. But their deeper integration into core systems also raises critical cybersecurity questions.

“In that world, there are so many problems we haven’t solved yet from a security perspective,” said Clinton.

Rethinking Cybersecurity for the AI Workforce

Managing the identity, permissions, and behavior of these digital employees will require a complete rethink of identity and access management (IAM).

Clinton warns that these AI workers could, unintentionally or otherwise, interfere with sensitive operations—such as compromising a continuous integration system (used for testing and deploying new software), with no clear line of accountability for the outcome.
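To make the accountability problem concrete, here is a minimal, hypothetical sketch of what an identity record for such an AI worker might look like: a non-human account with explicitly scoped permissions and an audit trail recording every access decision. The class and scope names are illustrative assumptions, not any vendor's actual API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Identity:
    """Hypothetical IAM record distinguishing human from non-human accounts."""
    name: str
    kind: str                           # "human" or "ai"
    scopes: set = field(default_factory=set)
    audit_log: list = field(default_factory=list)

    def request_access(self, resource: str) -> bool:
        # Allow only explicitly granted scopes, and log every decision
        # so there is a record of what the AI account did, and when.
        allowed = resource in self.scopes
        self.audit_log.append({
            "who": self.name,
            "kind": self.kind,
            "resource": resource,
            "allowed": allowed,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return allowed

# An AI "virtual employee" scoped to read-only access on the CI system:
bot = Identity(name="release-agent-01", kind="ai", scopes={"ci:read"})
bot.request_access("ci:read")    # in scope: allowed, and logged
bot.request_access("ci:deploy")  # out of scope: denied, and logged
```

The point of the sketch is that a denied request still leaves an audit entry, which is one way the "no clear line of accountability" gap Clinton describes could begin to be closed.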

A New Frontier in Security Investment

Clinton believes that virtual employee management will become one of the most critical security investment areas for AI companies. Solutions that offer transparency into what AI accounts are doing—and that support a new identity classification system to separate human from non-human entities—are likely to gain traction. Some vendors are already making moves. For instance, Okta launched a platform in February that aims to protect non-human identities by continuously monitoring their access rights and activity across systems.

Despite the potential, integrating AI into the workplace remains a delicate balancing act. Last year, performance software firm Lattice faced backlash after briefly suggesting that AI bots could be added to company org charts. The idea was quickly abandoned.

Related Topics

AI in cybersecurity, AI threat detection
