The second annual Index, which ranks 22 leading language models, lists Anthropic's Claude 3.5 Sonnet as the best-performing model across all tasks
SAN FRANCISCO, July 29, 2024 /PRNewswire/ -- Galileo, a leader in developing generative AI for the enterprise, today announced the launch of its latest Hallucination Index, a Retrieval Augmented Generation (RAG)-focused evaluation framework, which ranks the performance of 22 leading Generative AI (Gen AI) large language models (LLMs) from brands like OpenAI, Anthropic, Google, and Meta.
This year's Index added 11 models to the framework, reflecting the rapid growth in both open- and closed-source LLMs in just the past 8 months. As brands race to create bigger, faster, and more accurate models, hallucinations remain the main hurdle to deploying production-ready Gen AI products.
Which LLM Performed the Best
The Index tests open- and closed-source models using Galileo's proprietary evaluation metric, Context Adherence, designed to check for output inaccuracies and help enterprises make informed decisions about balancing price and performance. Models were tested with inputs ranging from 1,000 to 100,000 tokens, to understand performance across short (less than 5k tokens), medium (5k to 25k tokens), and long (40k to 100k tokens) context lengths.
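For illustration only, the sketch below shows one way results could be grouped into the length buckets described above; the tokenizer and scoring function are placeholders, not Galileo's methodology.

# Hypothetical sketch: averaging an adherence score per context-length bucket.
# Bucket boundaries come from the article; count_tokens and adherence_score
# are assumed callables supplied by the caller.
from typing import Callable, Iterable, Optional, Tuple

BUCKETS = {
    "short": (0, 5_000),        # less than 5k tokens
    "medium": (5_000, 25_000),  # 5k to 25k tokens
    "long": (40_000, 100_000),  # 40k to 100k tokens
}

def bucket_for(token_count: int) -> Optional[str]:
    for name, (lo, hi) in BUCKETS.items():
        if lo <= token_count < hi:
            return name
    return None  # lengths between 25k and 40k fall outside the reported buckets

def mean_adherence_by_length(
    samples: Iterable[Tuple[str, str]],
    count_tokens: Callable[[str], int],
    adherence_score: Callable[[str, str], float],
) -> dict:
    """Average an adherence score for each context-length bucket."""
    totals, counts = {}, {}
    for context, response in samples:
        name = bucket_for(count_tokens(context))
        if name is None:
            continue
        totals[name] = totals.get(name, 0.0) + adherence_score(context, response)
        counts[name] = counts.get(name, 0) + 1
    return {name: totals[name] / counts[name] for name in totals}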
"In today's rapidly evolving AI landscape, developers and enterprises face a critical challenge: how to harness the power of generative AI while balancing cost, accuracy, and reliability. Current benchmarks are often based on academic use-cases, rather than real-world applications. Our new Index seeks to address this by testing models in real-world use cases that require the LLMs to retrieve data, a common practice in enterprise AI implementations," says Vikram Chatterji, CEO and Co-founder of Galileo. "As hallucinations continue to be a major hurdle, our goal wasn't to just rank models, but rather give AI teams and leaders the real-world data they need to adopt the right model, for the right task, at the right price."
Key Findings and Trends:
See a complete breakdown of Galileo's Hallucination Index results here.
About Galileo's Context Adherence Evaluation Model
Context Adherence uses ChainPoll, a proprietary method created by Galileo Labs, to measure how well an AI model adheres to the information it is given, helping spot when a model makes up information that is not in the original text.
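As a rough illustration of the polling idea behind a ChainPoll-style check (the prompt wording and the ask_judge helper below are hypothetical, not Galileo's implementation), a judge model can be queried several times and its yes/no verdicts averaged into an adherence score:

# Illustrative ChainPoll-style adherence check; ask_judge is a placeholder
# for any LLM call that takes a prompt string and returns free-form text.
JUDGE_PROMPT = (
    "Context:\n{context}\n\nResponse:\n{response}\n\n"
    "Think step by step, then end your answer with YES if every claim in the "
    "response is supported by the context, or NO otherwise."
)

def chainpoll_adherence(context: str, response: str, ask_judge, n_polls: int = 5) -> float:
    """Poll a judge LLM several times and return the fraction of YES votes.

    A score near 1.0 means the response sticks to the given context;
    a low score suggests the model introduced unsupported information.
    """
    votes = 0
    for _ in range(n_polls):
        verdict = ask_judge(JUDGE_PROMPT.format(context=context, response=response))
        if verdict.strip().upper().endswith("YES"):
            votes += 1
    return votes / n_polls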
About Galileo
San Francisco-based Galileo is the leading platform for enterprise GenAI evaluation and observability. The Galileo platform, powered by Luna™ Evaluation Foundation Models (EFMs), supports AI teams across the development lifecycle, from building and iterating to monitoring and protecting. Galileo is used by AI teams from startups to Fortune 100 companies. Visit rungalileo.io to learn more about the Galileo suite of products.
View original content to download multimedia: https://www.prnewswire.com/news-releases/galileo-releases-new-hallucination-index-revealing-growing-intensity-in-llm-arms-race-302208202.html
SOURCE Galileo