Hameed's dev blog

Agentic AI

12 min read

🎯 Definition of Agentic AI

Agentic AI refers to a system where an AI model (like an LLM) is used as an agent that can:

  1. Reason: Think through a problem.
  2. Plan: Break a complex task into smaller steps.
  3. Use Tools: Access external resources (Search, APIs, Databases) to execute actions.
  4. Iterate: Self-correct and work toward a specific goal autonomously.
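The four capabilities above form a loop: the agent reasons about the goal, picks the next step, executes it with a tool, and repeats until done. A minimal sketch of that loop, using a hypothetical `llm_reason` stand-in for the model call and a toy tool registry (none of these names are a real API):

```python
# Minimal sketch of the agentic loop: reason -> plan -> act -> iterate.
# `llm_reason` and TOOLS are hypothetical stand-ins, not a real library.

def llm_reason(goal, history):
    # Stand-in for an LLM call: decide the next action from goal + history.
    if "draft_jd" not in history:
        return ("draft_jd", {})
    if "post_jd" not in history:
        return ("post_jd", {})
    return ("done", {})

TOOLS = {
    "draft_jd": lambda **kw: "JD for Backend Engineer",
    "post_jd": lambda **kw: "posted to LinkedIn",
}

def run_agent(goal, max_steps=10):
    history = {}
    for _ in range(max_steps):                    # iterate toward the goal
        action, args = llm_reason(goal, history)  # reason + plan next step
        if action == "done":
            return history
        history[action] = TOOLS[action](**args)   # use a tool, record result
    return history
```

The `max_steps` cap is the simplest guard against an agent looping forever, a theme that returns under "Controlling Autonomy" below.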

⚡ Generative AI vs. Agentic AI: The Key Difference

| Feature | Generative AI (Reactive) | Agentic AI (Proactive) |
| --- | --- | --- |
| Core Nature | Reactive: only speaks when spoken to. | Proactive: takes initiative to reach a goal. |
| Workflow | Zero-shot: one prompt leads to one final answer. | Iterative: loops through reasoning and acting. |
| Analogy | The librarian: fetches the exact book you asked for. | The scholar: owns the thesis from start to finish, including the research. |
| Autonomy | Low: a human must manage every step and connect the dots. | High: the AI manages the process and solves sub-tasks. |
| Reasoning | Limited to the current response window. | High: plans steps before taking action. |
| Execution | Just text/code generation. | Action-oriented: calls APIs, searches the web, runs code. |
| Error Handling | If it fails, you must prompt it again to fix it. | Self-correction: it identifies and fixes its own bugs. |
| Outcome | Delivers content (an answer). | Delivers results (a completed task). |

🏗️ Case Study: The "Agentic" Hiring Pipeline

In this scenario, as an HR manager, I want to hire a candidate based on a provided job description (JD).

Now you are building an autonomous system that "owns" the hiring goal end to end with minimal interruption.

  • Traditional GenAI (Reactive): You ask ChatGPT to "Write a JD for a Python Backend Engineer." It gives you text. You then have to post it on job portals, check emails, and prompt the AI again to "Summarize this resume."
  • Agentic AI (Proactive): You give one high-level goal: "Hire a Backend Engineer with 3+ years of experience in FastAPI and AWS." The agent then plans and executes the entire multi-day workflow without you prompting every step.

Step 1: Goal Initialization

  • The input: "I need to hire a Backend Engineer with 3+ years of experience in FastAPI and AWS."
  • Agent Reasoning: Instead of just writing a Job Description (JD), the agent analyzes the intent. It understands that "FastAPI" implies a need for Python expertise and "AWS" requires cloud infrastructure knowledge.

Step 2: Planning & Research

  • Proactive Action: The agent researches modern industry standards for Backend roles in 2025.
  • Drafting: It creates a comprehensive JD that includes technical skills, soft skills, and company-specific culture.

Step 3: Human-in-the-Loop (HITL) Approval 🛡️

  • The Pause: The agent does not post the job immediately. It stops and waits for the HR Manager.
  • The Goal: The HR reviews the JD to ensure it aligns with the actual team needs.
  • Feedback Loop: If the human says, "Add Docker to the requirements," the agent updates its internal "Plan" and resumes.
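The pause-review-resume cycle in Step 3 can be sketched as a small loop: the agent submits its draft, applies any feedback to its plan, and only continues on explicit approval. The `review` callback and the feedback format here are illustrative assumptions:

```python
# Sketch of a human-in-the-loop (HITL) checkpoint. The agent pauses,
# applies reviewer feedback, and resumes only after approval.
# `review` is a hypothetical callback returning (decision, feedback).

def hitl_approval(draft_jd, review, max_rounds=3):
    for _ in range(max_rounds):
        decision, feedback = review(draft_jd)     # e.g., ("revise", "Add Docker")
        if decision == "approve":
            return draft_jd
        draft_jd = draft_jd + " | " + feedback    # update the internal plan
    raise RuntimeError("No approval after max rounds; escalating to human")

# Usage: a reviewer who asks for Docker once, then approves.
def reviewer(jd):
    if "Docker" not in jd:
        return ("revise", "Add Docker to the requirements")
    return ("approve", "")
```

Note the hard cap on revision rounds: if the human and agent keep disagreeing, the agent escalates rather than looping indefinitely.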

Step 4: Autonomous Sourcing (Tool Use)

  • The Action: Once approved, the agent uses Tools (APIs) to scan LinkedIn, GitHub, and Indeed.
  • The Filter: It doesn't just find names; it uses its "Brain" to evaluate profiles against the JD. It might say, "This candidate has 2 years of experience but 50+ GitHub contributions to FastAPI; I will shortlist them."
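The shortlisting judgment in Step 4 (strict requirements softened by strong evidence, like open-source contributions) could be sketched as a filter. The fields and thresholds are illustrative assumptions, not a real scoring model:

```python
# Sketch of the shortlisting filter: strict years-of-experience matching,
# softened by an "evidence" signal (e.g., FastAPI open-source contributions).
# Field names and thresholds are illustrative.

def shortlist(candidates, min_years=3, evidence_threshold=50):
    picked = []
    for c in candidates:
        meets_years = c["years"] >= min_years
        strong_evidence = c.get("fastapi_contributions", 0) >= evidence_threshold
        if meets_years or strong_evidence:   # reason beyond the literal JD
            picked.append(c["name"])
    return picked

candidates = [
    {"name": "A", "years": 2, "fastapi_contributions": 55},  # shortlisted on evidence
    {"name": "B", "years": 4, "fastapi_contributions": 0},   # shortlisted on years
    {"name": "C", "years": 1, "fastapi_contributions": 3},   # filtered out
]
```

In a real agent the `or` condition would be an LLM judgment call rather than a hard rule, but the shape is the same: the filter encodes *why* a profile passes, not just *whether* it matches keywords.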

Step 5: Automated Outreach & Persistence

  • Execution: The agent drafts and sends personalized emails to the top 5 candidates.
  • Memory & State: The agent remembers who it emailed. If "Candidate A" doesn't reply in 48 hours, the agent automatically triggers a Follow-up Nudge.
  • Self-Correction: If candidates are rejecting the offer due to salary, the agent reports this trend back to the HR manager to suggest a budget adjustment.
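The "Memory & State" behavior in Step 5 boils down to tracking who was contacted and when, and flagging anyone past the 48-hour window. A minimal sketch, with an illustrative record format:

```python
# Sketch of outreach state tracking: remember who was emailed and when,
# and trigger a follow-up nudge after 48 hours without a reply.
from datetime import datetime, timedelta

def due_for_nudge(outreach, now, wait_hours=48):
    """Return candidates emailed more than `wait_hours` ago with no reply."""
    cutoff = now - timedelta(hours=wait_hours)
    return [name for name, rec in outreach.items()
            if not rec["replied"] and rec["sent_at"] <= cutoff]

now = datetime(2025, 6, 30, 12, 0)
outreach = {
    "Candidate A": {"sent_at": datetime(2025, 6, 27, 9, 0), "replied": False},
    "Candidate B": {"sent_at": datetime(2025, 6, 29, 18, 0), "replied": False},
    "Candidate C": {"sent_at": datetime(2025, 6, 27, 9, 0), "replied": True},
}
```

Only Candidate A is due: B was emailed too recently, and C already replied. Persisting this state (rather than keeping it in the prompt) is what lets the workflow survive across days.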

Step 6: Interview Scheduling & Handling Unpredictability

  • Tool Integration: The agent accesses the manager’s calendar and the candidate’s availability.
  • Edge Case Handling: If a candidate asks, "Is this role remote?", the agent checks its internal knowledge base. If it knows the answer, it replies; if not, it escalates the question to the human.

🌟 Key Characteristics of Agentic AI


Autonomy

Autonomy refers to the AI system’s ability to make decisions and take actions on its own to achieve a given goal, without needing step-by-step human instructions.

Key Aspects of Autonomy

  1. Goal Ownership: Our AI recruiter agent is autonomous: once given the hiring goal, it carries the pipeline forward without step-by-step instructions.

  2. Proactivity: Unlike a chatbot that waits for your next prompt, an autonomous agent is proactive: it takes the initiative to complete the next step in a sequence.

  3. Facets of Autonomy: It manifests in three main areas:

    • Execution: Carrying out the task on its own with minimal HITL involvement.
    • Decision Making: Choosing between different paths (e.g., "Should I search Google or Wikipedia?").
    • Tool Usage: Knowing which external API or software to trigger.
  4. Controlling Autonomy: Because full autonomy can be unpredictable, there are four ways to rein in the agent:

    • Permission Scope: Limiting which actions the agent is allowed to perform independently (e.g., it can screen resumes but cannot send rejection emails without a human).
    • Human-in-the-Loop (HITL): Strategic checkpoints where the agent must pause and wait for human approval before proceeding (e.g., "May I post this JD?").
    • Override Controls: The ability for a user to stop, pause, or manually change the agent's behavior mid-task (e.g., pausing screening due to a budget constraint).
    • Guardrails / Policies: Hard-coded ethical or logical rules (e.g., "Never schedule an interview on a weekend").
  5. The Risks ("Autonomy can be Dangerous")

    Unchecked autonomy can lead to significant real-world failures:

    • Financial Risk: An agent might overspend on LinkedIn ads or cloud resources without realizing it.
    • Legal/Ethical Risk: An autonomous recruiter might accidentally shortlist candidates based on age or nationality, violating anti-discrimination laws.
    • Accuracy Risk: The system could autonomously send out job offers with incorrect salary figures or terms.

In production systems, the "Permission Scope" and "Guardrails" described above are often the most important parts of the code. They ensure the AI is helpful without becoming a liability.
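A permission scope plus guardrails usually reduces to a gate that every proposed action passes through before execution. A minimal sketch; the action names, protected attributes, and rules are illustrative assumptions:

```python
# Sketch of a permission scope + guardrails check that gates every
# proposed action before execution. Names and rules are illustrative.

ALLOWED_ACTIONS = {"screen_resume", "draft_email", "schedule_interview"}
PROTECTED_ATTRIBUTES = {"age", "nationality", "gender"}

def check_action(action, params):
    """Return (allowed, reason). Deny anything outside the permission
    scope or violating a hard-coded guardrail."""
    # Permission scope: unknown/high-risk actions need a human.
    if action not in ALLOWED_ACTIONS:
        return False, f"'{action}' requires human approval"
    # Legal/ethical guardrail: never filter on protected attributes.
    used = PROTECTED_ATTRIBUTES & set(params.get("filter_by", []))
    if used:
        return False, f"guardrail: cannot filter on {sorted(used)}"
    # Policy guardrail: hard scheduling rule.
    if action == "schedule_interview" and params.get("day") in ("Saturday", "Sunday"):
        return False, "guardrail: never schedule an interview on a weekend"
    return True, "ok"
```

The deny-by-default first check is the key design choice: anything the agent invents that is not explicitly whitelisted falls back to human approval, which directly addresses the financial, legal, and accuracy risks listed above.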

Goal-Oriented Behavior

  • A goal acts as a "compass" for autonomy: without a goal, an autonomous agent would just wander aimlessly. The agent constantly asks: "Does this action bring me closer to the goal?"
  • Goals can come with constraints
  • Goals are stored in core memory
```json
{
  "main_goal": "Hire a backend engineer",
  "constraints": {
    "experience": "2-4 years",
    "remote": true,
    "stack": ["Python", "Django", "Cloud"]
  },
  "status": "active",
  "created_at": "2025-06-27",
  "progress": {
    "JD_created": true,
    "posted_on": ["LinkedIn", "AngelList"],
    "applications_received": 8,
    "interviews_scheduled": 2
  }
}
```
  • Goals can be altered (mid-stream updates): A user can jump in and change a constraint (e.g., changing "experience: 2-4 years" to "experience: 5+ years") while the agent is already searching. Goals may also be altered due to roadblocks or refinement.

The agent uses Planning to handle these constraints when it runs into a problem.

Planning

Planning is the agent's ability to break down a high-level goal into a structured sequence of actions or sub-goals and to decide the best path to achieve the desired outcome.

  • Step 1: Generating multiple candidate plans
    • Plan A: Post JD on LinkedIn, GitHub Jobs, AngelList
    • Plan B: Skip job boards and use internal referrals and hiring agencies instead.
  • Step 2: Evaluate each plan
    • Efficiency : Which path is faster?
    • Tool Availability : Does the agent actually have access to the LinkedIn API or the database of agencies?
    • Cost : Does one plan require expensive premium tool credits?
    • Risk : What is the likelihood that Plan A results in zero applicants?
    • Alignment : Does the plan respect your initial constraints (e.g., "remote-only")?
  • Step 3: Select the best plan with the help of:
    • Human-in-the-loop: The agent presents its options to you: "I have Plan A and Plan B. Which do you prefer?"
    • A pre-programmed policy : The agent follows a hard rule you've set, such as "Always favor low-cost channels first".
    An agent evaluates these plans in real time through Reasoning (an actual chain of thought).
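The evaluate-and-select steps above can be sketched as a weighted scoring pass over candidate plans. The criteria weights and the 0-1 scores below are illustrative assumptions (with "cost" and "risk" stored so that higher means cheaper/safer), not a real evaluation model:

```python
# Sketch of Step 2-3: score each candidate plan on weighted criteria
# and pick the best. Weights and scores are illustrative; 1.0 means
# fast / available / cheap / safe / aligned.

WEIGHTS = {"efficiency": 0.3, "tool_availability": 0.2,
           "cost": 0.3, "risk": 0.1, "alignment": 0.1}

def score(plan):
    return sum(WEIGHTS[k] * plan["scores"][k] for k in WEIGHTS)

plans = [
    {"name": "Plan A: job boards",
     "scores": {"efficiency": 0.8, "tool_availability": 1.0,
                "cost": 0.4, "risk": 0.7, "alignment": 1.0}},
    {"name": "Plan B: referrals + agencies",
     "scores": {"efficiency": 0.5, "tool_availability": 0.9,
                "cost": 0.9, "risk": 0.6, "alignment": 1.0}},
]

best = max(plans, key=score)
```

With these numbers, Plan B wins despite being slower, because the cost weight encodes the pre-programmed policy "always favor low-cost channels first". Swapping the weights is how you would change the policy without touching the plans.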

Reasoning: The Agent's Cognitive Engine

Reasoning is the cognitive process through which an agentic AI system interprets information, draws conclusions, and makes decisions—both while planning ahead and while executing actions in real-time.

A. Reasoning During Planning (Pre-Action)

Before the agent takes its first step, it uses reasoning to build a "mental model" of the project:

  1. Goal Decomposition: Breaking down abstract goals into concrete, manageable steps.
  2. Tool Selection: Deciding exactly which tools (APIs, search engines, databases) will be needed for each specific step.
  3. Resource Estimation: Estimating the time required, identifying dependencies (e.g., "I can't do Step B until Step A is finished"), and assessing potential risks.

B. Reasoning During Execution (Real-Time)

Once the agent is "active," it must think on its feet to handle the messiness of the real world:

  1. Decision-Making: Choosing between multiple valid options.
    • Example: If 3 candidates match a job description, the agent might reason that it should schedule the 2 best and reject the 1 that is over budget.
  2. HITL (Human-in-the-Loop) Handling: Knowing when its own intelligence isn't enough. The agent must reason: "I am unsure about this salary range; I should pause here and ask the human for help".
  3. Error Handling: Interpreting why a tool or API failed (was it a temporary crash or a permanent block?) and deciding how to recover.
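The temporary-vs-permanent distinction in the error-handling point can be sketched as a retry wrapper: back off and retry on transient failures, escalate on permanent ones. The error classes, the flaky tool, and the retry budget are illustrative assumptions:

```python
# Sketch of execution-time error handling: retry temporary failures with
# exponential backoff, escalate permanent ones. Names are illustrative.

class TemporaryError(Exception): pass
class PermanentError(Exception): pass

def call_with_recovery(tool, max_retries=3):
    delays = []                           # recorded instead of sleeping
    for attempt in range(max_retries):
        try:
            return tool(), delays
        except TemporaryError:
            delays.append(2 ** attempt)   # backoff: 1s, 2s, 4s, ...
        except PermanentError:
            return "escalate to human", delays
    return "escalate to human", delays

# A flaky calendar API that crashes twice, then succeeds.
calls = {"n": 0}
def flaky_calendar_api():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TemporaryError("calendar API down")
    return "slot booked"
```

The important reasoning step is the `except` split itself: a crash worth retrying and a block worth escalating look identical to a naive script, but an agent must treat them differently.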

Adaptability: The Resilience Factor

Adaptability is the agent's ability to modify its plans, strategies, or actions in response to unexpected conditions—all while staying aligned with the original goal.

Three common scenarios where an agent must adapt:

  1. Failures: If a primary tool fails (e.g., the Calendar API is down), an adaptable agent doesn't crash; it searches for an alternative way to reach the person or waits and retries later.
  2. External Feedback: If the agent observes that its current plan isn't working (e.g., getting a low number of applications on LinkedIn), it adapts by suggesting a new plan, like posting on a different platform.
  3. Changing Goals: If the user changes the mission mid-way (e.g., "Stop looking for a full-timer, let's hire a freelancer instead"), the agent adapts its reasoning and tool usage to fit the new objective immediately.

Context Awareness

Context awareness is the agent's ability to understand, retain, and utilize relevant information from ongoing tasks, past interactions, user preferences, and environmental cues to make better decisions across a multi-step process.

  • Types of Context:
    • The original goal.
    • Progress so far & Interaction History: e.g., "The job description was finalized and posted to LinkedIn and GitHub."
    • Environment State: e.g., "8 applicants so far on LinkedIn; the LinkedIn promotional ad ends in 2 days."
    • Tool Responses: e.g., resume parser → "Candidate has 3 years of AWS experience", or calendar API → "No conflicts at 2 pm Monday".
    • User Specific Preferences: e.g., "Prefers remote-first candidates".
    • Policy or Guardrails: e.g., "Do not send offer without explicit approval".
  • Implementation: Context is maintained through
    • Short term memory (for the current task flow) - Short-term memory acts like a "temporary notepad" or a computer's RAM: it holds information only for the duration of the current task or conversation thread and is limited by the model's context window.
    • Long term memory (for durable knowledge and history) - Long-term memory acts like a "hard drive" or permanent database. It allows an agent to retain knowledge across different sessions and days, enabling it to learn from past feedback and recognize patterns.
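The notepad/hard-drive split maps naturally onto a bounded buffer plus a persistent store. A minimal sketch, where the window size and keys are illustrative assumptions:

```python
# Sketch of the two memory tiers: a bounded short-term buffer (the
# "notepad", limited like a context window) and a persistent long-term
# store (the "hard drive"). Sizes and keys are illustrative.
from collections import deque

class AgentMemory:
    def __init__(self, window=4):
        self.short_term = deque(maxlen=window)  # drops oldest when full
        self.long_term = {}                     # survives across sessions

    def observe(self, event):
        self.short_term.append(event)

    def remember(self, key, value):
        self.long_term[key] = value

mem = AgentMemory(window=3)
for event in ["JD drafted", "JD approved", "JD posted", "8 applications"]:
    mem.observe(event)
mem.remember("user_preference", "remote-first candidates")
```

After four observations, the 3-slot short-term buffer has already dropped "JD drafted", exactly like a context window overflowing, while the user preference persists for future sessions. In production the long-term store would typically be a database or vector index rather than an in-process dict.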

Core components of Agentic AI

  1. 🧠 Brain (LLM)
    • Goal Interpretation : Understands user instructions and translates them into objectives.
    • Planning : Breaks down high-level goals into subgoals and ordered steps.
    • Reasoning : Makes decisions, resolves ambiguity, and evaluates trade-offs.
    • Tool Selection : Chooses which tool(s) to use at a given step.
    • Communication : Generates natural language outputs for humans or other agents.
  2. ⚙️ Orchestrator
    • Task Sequencing : Determines the order of actions (step 1 → step 2 → …).
    • Conditional Routing : Directs flow based on context (e.g., failure, retry, or escalate).
    • Retry Logic : Handles failed tool calls or reasoning attempts with backoff.
    • Looping & Iteration : Repeats steps (e.g., keep checking job apps until 10 are received).
    • Delegation : Decides whether to hand off work to tools, an LLM, or a human.
  3. 🛠️ Tools
    • External Actions : Perform API calls (e.g., post a job, send an email, trigger onboarding).
    • Knowledge Base Access : Retrieve factual or domain-specific information using RAG or search tools to ground responses.
  4. 💾 Memory
    • Short-Term Memory : Maintains the active session’s context — recent user messages, tool calls, and immediate decisions.
    • Long-Term Memory : Persists high-level goals, past interactions, user preferences, and decisions across sessions.
    • State Tracking : Monitors progress: what is completed, what is pending (e.g., “JD posted,” “Offer sent”).
  5. 🛡️ Supervisor
    • Approval Requests (HITL) : Agent checks with human before high-risk actions (e.g., sending offers).
    • Guardrails Enforcement : Blocks unsafe or non-compliant behavior.
    • Edge Case Escalation : Alerts humans when uncertainty or conflict arises.
