Llamas, Alpacas, Koalas, Falcons... there is a veritable zoo of LLMs out there! In today's episode, Caterina Constantinescu breaks down LLM leaderboards and evaluation benchmarks to help you pick the right LLM for your use case.
Caterina:
• Is a Principal Data Consultant at GlobalLogic, a full-lifecycle software development services provider with over 25,000 employees worldwide.
• Previously, she worked as a data scientist for financial services and marketing firms.
• Is a key player in data science conferences and Meetups in Scotland.
• Holds a PhD from The University of Edinburgh.
In this episode, Caterina details:
• The best leaderboards (e.g., HELM, Chatbot Arena and the Hugging Face Open LLM Leaderboard) for comparing the quality of both open-source and proprietary Large Language Models (LLMs).
• The advantages and issues associated with LLM evaluation benchmarks (e.g., evaluation-dataset contamination is a big issue because top-performing LLMs are often trained on all the publicly available data they can find... including the benchmark-evaluation datasets themselves).
The SuperDataScience podcast is available on all major podcasting platforms, YouTube, and at SuperDataScience.com.