How to Use LLMs for Powerful Automatic Evaluations

This article discusses how you can perform automatic evaluations using an LLM as a judge. LLMs are widely used today for a variety ... (a minimal judge sketch follows below).
Read more
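The core idea behind LLM-as-a-judge evaluation can be sketched in a few lines: send the question and a candidate answer to a judge model and ask for a numeric score. This is a minimal sketch assuming the `openai` Python client and a placeholder judge model and rubric; the article's actual prompts and scoring setup may differ.

```python
# Minimal LLM-as-a-judge sketch. Assumes the `openai` Python client; the model
# name and rubric are placeholders, not the article's exact configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

JUDGE_PROMPT = (
    "You are an impartial judge. Rate the answer to the question on a scale "
    "of 1-5 for factual correctness and helpfulness.\n"
    "Question: {question}\nAnswer: {answer}\n"
    "Reply with a single integer between 1 and 5."
)

def judge(question: str, answer: str) -> int:
    """Ask the judge model for a 1-5 quality score of `answer`."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder judge model
        messages=[{"role": "user",
                   "content": JUDGE_PROMPT.format(question=question, answer=answer)}],
        temperature=0,
    )
    return int(response.choices[0].message.content.strip())

# Score a candidate answer produced by the system under evaluation.
print(judge("What is the capital of France?", "Paris is the capital of France."))
```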
Coconut: A Framework for Latent Reasoning in LLMs

Paper link: https://arxiv.org/abs/2412.06769. Released: 9 December 2024. Figure 1: The two reasoning modes of Coconut. In Language Mode (left), ...
Read more
Demystifying Cosine Similarity

Cosine similarity is a commonly used metric for operationalizing tasks such as semantic search and document comparison in the field of natural ... (the metric is sketched below).
Read more
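For reference, cosine similarity measures the angle between two vectors: cos(θ) = (a · b) / (‖a‖ ‖b‖). A minimal NumPy sketch; in practice the vectors would be embeddings from whatever model you use.

```python
# Cosine similarity: cos(theta) = (a . b) / (||a|| * ||b||).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Return the cosine of the angle between vectors a and b."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])
print(cosine_similarity(a, b))  # 1.0 -- the vectors point in the same direction
```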
Generating Structured Outputs from LLMs

The default interface for interacting with LLMs is the classic chat UI found in ChatGPT, Gemini, or DeepSeek. The interface is ... (a schema-validation sketch follows below).
Read more
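One common pattern for structured outputs, sketched here with Pydantic, is to ask the model for JSON matching a schema and then validate the reply. The article may instead rely on provider features such as function calling or constrained decoding; the `Invoice` schema and the raw reply string below are made up for illustration.

```python
# Structured-output sketch: prompt for schema-shaped JSON, validate with Pydantic.
from pydantic import BaseModel

class Invoice(BaseModel):  # hypothetical target schema
    vendor: str
    total: float
    currency: str

prompt = (
    "Extract the invoice fields and reply with JSON only, matching this schema:\n"
    f"{Invoice.model_json_schema()}\n\n"
    "Invoice text: ACME Corp, total due EUR 120.50"
)

# `prompt` would be sent to the LLM; pretend this string came back:
raw_reply = '{"vendor": "ACME Corp", "total": 120.5, "currency": "EUR"}'

invoice = Invoice.model_validate_json(raw_reply)  # raises ValidationError on bad JSON
print(invoice.total)  # 120.5
```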
Finding Golden Examples: A Smarter Approach to In-Context Learning

When using Large Language Models (LLMs), In-Context Learning (ICL), where example inputs and outputs are provided to the model to learn from, ... (a few-shot prompt sketch follows below).
Read more
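In its simplest form, in-context learning means prepending a few input/output examples ("shots") to the prompt so the model infers the task. The sketch below uses toy sentiment examples; how to find the golden examples is the article's actual subject and is not reproduced here.

```python
# Few-shot (in-context learning) prompt construction with placeholder examples.
examples = [
    ("I loved this movie!", "positive"),
    ("The plot was dull and predictable.", "negative"),
]

def build_few_shot_prompt(query: str) -> str:
    """Prepend example input/output pairs so the model infers the task."""
    shots = "\n".join(f"Review: {x}\nSentiment: {y}" for x, y in examples)
    return f"{shots}\nReview: {query}\nSentiment:"

print(build_few_shot_prompt("A surprisingly moving film."))
```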
Talk to my Agent

Over the past several months, I’ve had the opportunity to immerse myself in the task of adapting APIs and backend systems ...
Read more
How I Fine-Tuned Granite-Vision 2B to Beat a 90B Model — Insights and Lessons Learned

Fine-tuning large language models or vision-language models is a powerful technique that unlocks their potential on specialized tasks. However, despite their effectiveness, these approaches ... (a LoRA sketch follows below).
Read more
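Fine-tuning a small model is usually done parameter-efficiently, for example with LoRA via Hugging Face `transformers` and `peft`. The sketch below uses a text-only causal LM with a placeholder model id, target modules, and hyperparameters; it is not the article's Granite-Vision 2B recipe.

```python
# Parameter-efficient fine-tuning sketch with LoRA (transformers + peft).
# Model id, target modules, and hyperparameters are placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model_name = "your-org/your-2b-model"  # placeholder; swap in the real checkpoint
model = AutoModelForCausalLM.from_pretrained(model_name)

lora_config = LoraConfig(
    r=16,                                 # low-rank dimension
    lora_alpha=32,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights will train
# ...then train with the usual Trainer / SFT loop on the task-specific dataset.
```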
How To Significantly Enhance LLMs by Leveraging Context Engineering

Context engineering is the science of providing LLMs with the correct context to maximize performance. When you work with LLMs, you typically ... (a bare-bones sketch follows below).
Read more
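At its most basic, context engineering means assembling the prompt from instructions, selected documents, and the user question. The sketch below uses a stub keyword retriever purely for illustration; real pipelines would add vector search, reranking, summarization, and so on.

```python
# Bare-bones context assembly: instructions + selected documents + question.
def retrieve_relevant_docs(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Stub retriever: naive keyword overlap instead of real vector search."""
    words = question.lower().split()
    scored = sorted(corpus, key=lambda d: -sum(w in d.lower() for w in words))
    return scored[:k]

def build_context(question: str, corpus: list[str]) -> str:
    docs = retrieve_relevant_docs(question, corpus)
    context = "\n\n".join(f"[doc {i + 1}] {d}" for i, d in enumerate(docs))
    return (
        "Answer using only the documents below. Say 'I don't know' if they don't help.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )

corpus = ["The refund window is 30 days.", "Shipping takes 3-5 business days."]
print(build_context("How long do refunds take?", corpus))
```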
Your 1M+ Context Window LLM Is Less Powerful Than You Think

LLMs are now able to handle vast inputs — their context windows range between 200K (Claude) and 2M tokens (Gemini 1.5 Pro). That’s ...
Read more
Do You Really Need a Foundation Model?

Foundation models are everywhere — but are they always the right choice? In today’s AI world, it seems like everyone wants to use foundation ...
Read more