OpenLLMetry - Observing the Quality of LLMs with Nir Gazit

PurePerformance - A podcast by PurePerformance - Mondays

It's only been a year since ChatGPT was introduced. Since then, we have seen LLMs (Large Language Models) and generative AI being integrated into everyday software applications. Developers face the hard choice of picking the right model for their use case to produce the output quality their end users demand.

Tune in to this session where Nir Gazit, CEO and Co-founder of Traceloop, educates us about how to observe and quantify the quality of LLMs. Besides performance and cost, engineers need to look into quality attributes such as accuracy, readability, and grammatical correctness.

Nir introduces us to OpenLLMetry - a set of open source extensions built on top of OpenTelemetry that provide automated observability into the usage of LLMs, helping developers better understand how to optimize it. His advice to every developer: start measuring the quality of your LLMs on Day 1 and continuously evaluate as you change your model, your prompt, and the way you interact with your LLM stack!

If you have more questions about LLM observability, check out the following links:

OpenLLMetry GitHub Page: https://github.com/traceloop/openllmetry
Traceloop Website: https://www.traceloop.com/
OpenLLMetry Documentation: https://traceloop.com/docs/openllmetry
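To give a flavor of what that automated observability looks like in practice, here is a minimal sketch of tracing an OpenAI call with OpenLLMetry's Python SDK (traceloop-sdk). The app name, model, and prompt are illustrative, and the exact API surface may differ between SDK versions:

```python
# Minimal sketch: auto-instrumenting an LLM call with OpenLLMetry.
# Assumes `pip install traceloop-sdk openai` and OPENAI_API_KEY is set;
# app_name and the model choice are illustrative, not prescriptive.
from openai import OpenAI
from traceloop.sdk import Traceloop

# Initialize OpenLLMetry's OpenTelemetry-based auto-instrumentation.
Traceloop.init(app_name="llm-quality-demo")

client = OpenAI()

# This call is captured automatically as an OpenTelemetry span,
# including the prompt, completion, token usage, and latency.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "Summarize OpenTelemetry in one sentence."}
    ],
)
print(response.choices[0].message.content)
```

From there, the emitted spans can be exported to any OpenTelemetry-compatible backend, which is the starting point for the continuous quality evaluation Nir recommends.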