A10 Networks is expanding its high-performance infrastructure and security solutions to now include an AI firewall and LLM safety tooling.
Developing New Approaches to Secure New Applications
Organizations that are putting LLMs into production will require new insights and easy-to-integrate solutions to deliver security and availability of inference workloads. These include tools that discover vulnerabilities such as responding to prompts that cause the model to hallucinate or divulge information that is proprietary or personally identifiable in nature. Additional tools would be required to, in some instances, break a secured model to help identify ways to make the model stronger.
New solutions like an AI firewall can be used to secure production LLMs. An AI firewall is a new approach to securing a new set of applications. It is a solution that can inspect the request and response path of traffic to an AI Inference application. An AI firewall can be applied to enforce specific policies, for example, to drop prompts that can be harmful to the untested and tested LLM. An AI firewall operates at a high speed to minimize latency and to work in conjunction with a proxy, or it may have a proxy function built in to terminate encrypted traffic and process traffic directly. Ultimately, an AI firewall can take actions to increase the security of AI applications, while also helping to provide availability and low latency.
“AI adoption is rapidly evolving as organizations forecast the benefits of AI in bringing new services and capabilities for their customers. But security remains a chief concern. Building on our expertise in helping secure service providers and hybrid cloud infrastructures, we are driving our AI strategy forward to develop new approaches. An AI firewall can help secure these new and evolving applications, while minimizing latency and maximizing availability so AI applications perform as they are intended,” said Dhrupad Trivedi, president and CEO, A10 Networks.