Aparna Dhinakaran in Towards Data Science

- "Choosing Between LLM Agent Frameworks" (Sep 21): The tradeoffs between building bespoke code-based agents and the major agent frameworks.
- "Navigating the New Types of LLM Agents and Architectures" (Aug 30): The failure of ReAct agents gives way to a new generation of agents — and possibilities.
- "Evaluating SQL Generation with LLM as a Judge" (Jul 31): Results point to a promising approach.
- "Large Language Model Performance in Time Series Analysis" (May 12): How do major LLMs stack up at detecting anomalies or movements in the data when given a large set of time series data within the context…
- "Tips for Getting the Generation Part Right in Retrieval Augmented Generation" (Apr 6): Results from experiments to evaluate and compare GPT-4, Claude 2.1, and Claude 3.0 Opus.
- "Model Evaluations Versus Task Evaluations" (Mar 26): Understanding the difference for LLM applications.
- "Why You Should Not Use Numeric Evals For LLM As a Judge" (Mar 8): Testing major LLMs on how well they conduct numeric evaluations.
- "The Needle In a Haystack Test" (Feb 15): Evaluating the performance of RAG systems.
- "LLM Evals: Setup and the Metrics That Matter" (Oct 13, 2023): How to build and run LLM evals — and why you should use precision and recall when benchmarking your LLM prompt template.
- "Safeguarding LLMs with Guardrails" (Sep 1, 2023): A pragmatic guide to implementing guardrails, covering both Guardrails AI and NVIDIA's NeMo Guardrails.