New Relic AI monitoring helps enterprises use AI with confidence
New Relic announced New Relic AI monitoring with a suite of new features to meet the evolving needs of organizations developing AI applications.
New features include in-depth AI response tracing with real-time user feedback and model comparison, helping teams continuously improve AI application performance, quality, and cost while maintaining data security and privacy.
With 60+ integrations, New Relic AI monitoring helps organizations find the root cause of AI application issues faster, furthers their adoption of AI, and supports them at every stage of their AI journey.
“Based on my conversations with CIOs, CTOs, and executives across our customer base, it is clear that every company is thinking about how to scale their business with AI,” said New Relic Chief Customer Officer Arnie Lopez. “The adoption of AI can be costly and introduce complexity into their stack. IT and technology leaders are turning to New Relic because observability is essential to help them confidently navigate the exciting future of AI, optimize performance and quality, and control costs, ultimately delivering exceptional customer experiences.”
Organizations are eager to adopt AI to offer better digital experiences to their customers. According to Gartner, over 80% of enterprises will have used generative AI APIs or deployed generative AI-enabled applications by 2026. While there is a strong and growing demand for AI, organizations struggle to bring AI into their tech stacks because of the complexity it introduces.
New Relic AI monitoring directly addresses this challenge, making it easier for organizations to manage the complexities of their AI stack by providing a unified view of the entire AI ecosystem alongside the rest of their performance data.
Key features include:
- Auto instrumentation: New Relic agents offer easy setup for popular frameworks and model providers like OpenAI, AWS Bedrock, and LangChain across Python, Node.js, Ruby, Go, and .NET (see the setup sketch after this list).
- Full AI stack visibility: A holistic view across the application, infrastructure, and AI layers, including AI metrics such as request counts, response time, and token usage.
- AI response view with end-user feedback: Quickly identify trends and outliers in AI responses, analyze sentiment, and see user feedback in a single consolidated view (see the feedback sketch after this list).
- Deep trace insights for every response: Trace the lifecycle of AI responses with tools like LangChain to fix performance and quality issues like bias, toxicity, and hallucinations.
- Enhanced data security: Maintain your organization's security and compliance policies by excluding sensitive data (PII) contained in AI requests and responses from monitoring.
- Model comparison: Compare performance and cost of foundational models running in production in a single view to choose the model that best fits your needs.
- Quickstart integrations: One of the most comprehensive solutions for monitoring the AI ecosystem, with 60+ integrations for critical components such as NVIDIA GPUs and vector databases including Pinecone and Weaviate.
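To make the auto-instrumentation item above concrete, here is a minimal Python sketch of how an application might be wired up. It assumes the New Relic Python agent is installed (`pip install newrelic`) and that AI monitoring and content redaction are toggled via `ai_monitoring.enabled` and `ai_monitoring.record_content.enabled` settings in `newrelic.ini`; those configuration keys, the app name, and the model string are illustrative assumptions to verify against the agent documentation, not details confirmed by the announcement.

```python
# newrelic.ini (excerpt) -- assumed configuration keys, verify against your agent version:
#
#   [newrelic]
#   app_name = my-llm-service
#   ai_monitoring.enabled = true                  # capture LLM request/response telemetry
#   ai_monitoring.record_content.enabled = false  # exclude prompt/response content (PII) from events

import newrelic.agent

# Initialize the agent from the config file before importing the libraries it instruments.
newrelic.agent.initialize("newrelic.ini")

from openai import OpenAI  # supported LLM clients are auto-instrumented once the agent is active

client = OpenAI()  # reads OPENAI_API_KEY from the environment


@newrelic.agent.background_task(name="summarize-ticket")
def summarize(text: str) -> str:
    # Request counts, response time, token usage, and model name for this call
    # are reported by the agent; no manual instrumentation of the call is needed.
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"Summarize this support ticket: {text}"}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarize("Customer reports intermittent 502 errors after the latest deploy."))
```

Because the agent instruments the LLM client itself, the AI metrics for the call above would land alongside the application's existing APM data without any changes to the OpenAI calls.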
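The end-user feedback view implies an API for tying a rating to a specific traced AI response. The sketch below is hypothetical in its details: `newrelic.agent.current_trace_id()` is an existing agent call, but the `record_llm_feedback_event(...)` helper, its parameters, and the stubbed `run_llm`/`ask_user_for_rating` functions are assumptions made for illustration.

```python
import newrelic.agent

newrelic.agent.initialize("newrelic.ini")


def run_llm(question: str) -> str:
    # Stand-in for the instrumented model call shown in the previous sketch.
    return f"(model answer to: {question})"


def ask_user_for_rating(answer: str) -> int:
    # Stand-in for whatever UI collects the rating; hard-coded here.
    return 5


@newrelic.agent.background_task(name="chat-with-feedback")
def answer_and_collect_feedback(question: str) -> None:
    # Grab the trace ID while the transaction is active so the feedback
    # event can be correlated with this specific AI response trace.
    trace_id = newrelic.agent.current_trace_id()

    answer = run_llm(question)
    rating = ask_user_for_rating(answer)

    # Assumed feedback-recording call; check the exact name and parameters
    # in your agent version's documentation before relying on it.
    newrelic.agent.record_llm_feedback_event(
        trace_id=trace_id,
        rating=rating,
        category="helpfulness",
        message="thumbs up" if rating >= 4 else "needs improvement",
        metadata={"channel": "web"},
    )


if __name__ == "__main__":
    answer_and_collect_feedback("How do I rotate my API key?")
```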