A10 Networks Unveils New AI Firewall and Predictive Performance Features for Securing AI Infrastructures

As AI applications continue to evolve, organizations worldwide are deploying AI-ready data centers to automate operations and drive efficiencies. This growth has created a pressing need for ultra-high performance AI systems, particularly in large language model (LLM) inference environments, which demand real-time responses and robust security measures.
In response to this need, A10 Networks is unveiling new AI firewall capabilities and predictive performance solutions at the upcoming Interop Tokyo conference, slated for June 11-13, 2025.
New AI Firewall Capabilities for Enhanced Security
A10 Networks is introducing AI firewall capabilities designed specifically for APIs and URLs that expose LLMs, whether custom-built or based on popular solutions like OpenAI or Anthropic.
These capabilities, built on edge-optimized architecture with GPU-enabled hardware, aim to provide high-performance protection for AI environments. The AI firewall helps prevent, detect, and mitigate threats at the AI inference model level, such as prompt injections and sensitive data leaks.
Through proprietary safeguarding techniques, A10 Networks’ AI firewall inspects request and response traffic at the prompt level, enforcing security policies necessary to mitigate AI-level threats. These capabilities will be deployed incrementally, allowing businesses to integrate them into existing infrastructures without disruption.
Also read: Syneris.tech Deploys Decentralized AI Infrastructure to Democratize AI Development
Enabling Resilient and Secure AI Environments
Dhrupad Trivedi, president and CEO of A10 Networks, said, “Enterprises are deploying and training AI and LLM inference models on-premises or in the cloud at a rapid pace. New capabilities must be developed to address three key challenges of these new environments: latency, security and operational complexity.
He added, “With over 20 years of experience in securing and delivering applications, we are expanding our capabilities to deliver on these needs to provide resilience, high performance and security for AI and LLM infrastructures.”
Predictive Performance for AI Inference Environments
In addition to AI security, A10 Networks is focusing on enhancing the performance and resilience of AI and LLM-enabled applications. The firm's new features offload processor-intensive functions such as TLS/SSL decryption, optimize traffic flow, and enhance network performance.
Such measures ensure that AI applications function seamlessly by actively spotting and fixing probable network problems ahead of time when they might become critical.
A10 Networks' predictive performance solutions run on GPU-enabled appliances that allow for accelerated data processing and the capability to analyze enormous amounts of data. This feature gives early alerts of network congestion or capacity problems so that organizations can take preventive action before experiencing unplanned downtime and get the best performance.