Talk to my Agent | Towards Data Science

Over the past several months, I’ve had the opportunity to immerse myself in the task of adapting APIs and backend systems ...
How I Fine-Tuned Granite-Vision 2B to Beat a 90B Model — Insights and Lessons Learned

Fine-tuning large language models or vision-language models is a powerful technique that unlocks their potential on specialized tasks. However, despite their effectiveness, these approaches ...
How To Significantly Enhance LLMs by Leveraging Context Engineering

Context engineering is the science of providing LLMs with the correct context to maximize performance. When you work with LLMs, you typically ...
Your 1M+ Context Window LLM Is Less Powerful Than You Think

LLMs are now able to handle vast inputs — their context windows range between 200K (Claude) and 2M tokens (Gemini 1.5 Pro). That’s ...
Do You Really Need a Foundation Model?

Foundation models are everywhere — but are they always the right choice? In today’s AI world, it seems like everyone wants to use foundation ...
Are You Being Unfair to LLMs?

Amid the hype surrounding AI, some ill-informed ideas about the nature of LLM intelligence are floating around, and I’d like to address ...
Building a Custom MCP Chatbot

The Model Context Protocol (MCP) is a method to standardise communication between AI applications and external tools or data sources. This standardisation helps to reduce the ...
Fairness Pruning: Precision Surgery to Reduce Bias in LLMs

Introducing a new model optimization method can be challenging, but the goal of this article is crystal clear: to showcase a ...
AI Agent with Multi-Session Memory

In computer science, just like in human cognition, there are different levels of memory: primary memory (like RAM) is ...
A Developer’s Guide to Building Scalable AI: Workflows vs Agents

I had just started experimenting with CrewAI and LangGraph, and it felt like I’d unlocked a whole new dimension of ...