Author: Mamta Upadhyay

Model-on-Model Attacks

When language models interact, even safe ones can amplify hidden threats

[LLM Build] Building your first AI Agent

Learn how to build a lightweight AI agent using a local LLM and simple tools

Feedback Loops in AI Agents

Feedback loops in AI agents can be silently exploited to manipulate behavior over time without ever touching the prompt.

Open Source Agents: Memory Poisoning and Tool Access

How memory poisoning and tool access in open-source agents can silently lead to critical security breaches

Cognitive Overload in Agents

Context flooding, also known as cognitive overload, does not cause immediate failures; instead, it gradually skews the agent's decision-making

[LLM Build] A Tiny Context-Aware Q&A Bot

A simple, context-aware QA bot that runs locally or with OpenAI. Perfect for beginners exploring LLM builds and RAG workflows.

[Local Lab] Agentic Overdelegation

A demo exploring how AI agents can be manipulated into misusing their tools

MCP Chains That Use Web Scraping

What would you target first in a prompt pipeline that scrapes the web?

Toolchain Integrity in MCP

MCP architectures create hidden pathways for LLM compromise

AI Security vs AI Safety

Understanding the Critical Divide in Responsible AI
