Gen AI & Agentic AI Crash Course

Become Industry-Ready in Just 7 Days

✔ Working Professionals Across All Industries
✔ 7 Days
✔ 6 Hours / Day
✔ 42 Hours Total
✔ Python → Deep Learning → Gen AI → Agentic AI → Real-World Applications

Why Choose This Program?

AI is no longer a future topic — it is happening right now across every industry, and the professionals who understand it will lead the next decade. This course is your fast-track to becoming one of them.

In just 7 days, you will go from zero to building real AI systems that organisations are deploying today — systems that automate complex workflows, serve customers 24/7, qualify leads intelligently, process documents in minutes, and make decisions that used to take days.

You do not need to be a programmer or data scientist to start. Every concept is taught step by step, with real-world examples drawn from banking, e-commerce, sales, HR, and operations.

By the end, you will know how AI thinks, how to direct it, and how to apply it to the real challenges your organisation faces.

DAY 1 — Python & Deep Learning Foundations

Duration: 6 Hours | Level: Foundations

Build the technical base — Python programming, data handling, neural networks, and NLP fundamentals that power every modern AI system.

Session 1 — Python Fundamentals

Topics Covered:
  • Variables, Data Types, Control Flow, Functions, OOP
  • Lists, Dictionaries, Sets, Comprehensions
  • File I/O, Exception Handling, Modules
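A minimal sketch tying several of these topics together — a function, a dict comprehension, and exception handling. The function names here are illustrative, not part of any course material.

```python
def word_lengths(words):
    """Map each word to its length, skipping non-string items (dict comprehension)."""
    return {w: len(w) for w in words if isinstance(w, str)}

def safe_divide(a, b):
    """Return a / b, or None when b is zero (exception handling)."""
    try:
        return a / b
    except ZeroDivisionError:
        return None

lengths = word_lengths(["gen", "agentic", 42, "ai"])
ratio = safe_divide(10, 0)
```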

Session 2 — NumPy & Pandas for Data Analysis

Topics Covered:
  • NumPy Arrays, Broadcasting, Vectorized operations
  • Pandas DataFrames — slicing, filtering, groupby, merge
  • Time-series handling, rolling windows, aggregations
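A small sketch of the pandas workflow above — vectorized filtering and a groupby aggregation on a toy DataFrame (the `sales` data is made up for illustration):

```python
import pandas as pd

# Toy sales data to demonstrate filtering and groupby.
sales = pd.DataFrame({
    "region": ["North", "North", "South", "South"],
    "amount": [100, 150, 80, 120],
})

# Vectorized boolean filter: rows with amount >= 100.
big = sales[sales["amount"] >= 100]

# Aggregate total amount per region.
totals = sales.groupby("region")["amount"].sum()
```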

Session 3 — Data Visualization

Topics Covered:
  • Matplotlib — line, bar, scatter, histogram plots
  • Seaborn — heatmaps, pairplots, distribution plots
  • Storytelling with data

Session 4 — Neural Networks & Deep Learning

Topics Covered:
  • Perceptrons & Multi-Layer Perceptrons (MLPs)
  • Activation Functions: ReLU, Sigmoid, Tanh, Softmax
  • Forward & Backward Propagation
  • Loss Functions & Optimizers: SGD, Adam, RMSprop
  • Regularization: Dropout, L1/L2
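The forward pass of a tiny two-layer MLP with ReLU and Softmax can be sketched in NumPy. The weights below are fixed toy values, not trained parameters:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def softmax(z):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(z - z.max())
    return e / e.sum()

x = np.array([1.0, -2.0])                     # input vector
W1 = np.array([[0.5, -0.3], [0.8, 0.2]])      # layer-1 weights (toy values)
h = relu(W1 @ x)                              # hidden activations
W2 = np.eye(2)                                # layer-2 weights (identity, for clarity)
probs = softmax(W2 @ h)                       # output probability distribution
```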

Session 5 — RNNs & LSTMs

Topics Covered:
  • Sequential data processing, hidden states, temporal dependencies
  • Vanishing/Exploding Gradient Problem
  • LSTM architecture: Forget, Input, Output gates
  • Bidirectional LSTMs
  • CNNs: convolutional layers, pooling, feature maps

Session 6 — NLP Fundamentals

Topics Covered:
  • Text preprocessing: tokenization, stop words, stemming, lemmatization
  • Bag of Words (BoW), TF-IDF
  • Word Embeddings: Word2Vec, GloVe
  • Encoder-Decoder Seq2Seq architecture
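TF-IDF, listed above, can be computed by hand on a toy corpus — term frequency times the log of inverse document frequency. The three documents are illustrative:

```python
import math
from collections import Counter

docs = ["the cat sat", "the dog sat", "the cat ran"]
tokenized = [d.split() for d in docs]

def tf_idf(term, doc_tokens, corpus):
    # Term frequency within this document.
    tf = Counter(doc_tokens)[term] / len(doc_tokens)
    # Document frequency across the corpus.
    df = sum(1 for d in corpus if term in d)
    idf = math.log(len(corpus) / df)
    return tf * idf

score_cat = tf_idf("cat", tokenized[0], tokenized)   # appears in 2 of 3 docs
score_the = tf_idf("the", tokenized[0], tokenized)   # appears in every doc
```

Note how "the" scores exactly zero: a term present in every document carries no discriminative weight, which is the point of the IDF factor.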

DAY 2 — Transformers, LLMs & Prompt Engineering

Duration: 6 Hours | Level: Intermediate

Understand the architecture behind ChatGPT, Claude, and every modern AI assistant. Master prompts that work in production.

Session 1 — Transformer Architecture Mastery

Topics Covered:
  • Self-Attention: Query, Key, Value (QKV) matrices
  • Scaled Dot-Product Attention
  • Multi-Head Attention
  • Why attention beats RNNs for long documents
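Scaled dot-product attention — softmax(QKᵀ/√d_k)V — fits in a few lines of NumPy. The Q, K, V matrices below are toy values chosen so that each query matches one key:

```python
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # similarity of each query to each key
    # Row-wise softmax over the keys.
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ V, w                          # weighted mix of values, plus the weights

Q = np.array([[1.0, 0.0], [0.0, 1.0]])       # 2 queries, d_k = 2
K = np.array([[1.0, 0.0], [0.0, 1.0]])       # 2 keys
V = np.array([[10.0, 0.0], [0.0, 10.0]])     # 2 values
out, weights = attention(Q, K, V)
```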


Session 2 — Positional Encodings & Tokenization

Topics Covered:
  • Why position matters in transformers
  • Sinusoidal positional encoding
  • Relative positional encodings: RoPE, ALiBi
  • Tokenization: BPE, WordPiece, SentencePiece
  • Subword tokenization advantages


Session 3 — LLM Architecture & Variants

Topics Covered:
  • BERT (encoder-only) — classification, NER
  • GPT (decoder-only) — generation, completion
  • T5 (encoder-decoder) — summarization, translation
  • Embedding layers, Transformer blocks, LayerNorm
  • Context length limitations, KV cache optimization


Session 4 — Sampling, Generation & Alignment

Topics Covered:
  • Temperature, Top-k, Top-p (nucleus) sampling
  • Beam search, greedy decoding
  • RLHF — Reinforcement Learning from Human Feedback
  • DPO — Direct Preference Optimization
  • Constitutional AI
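Temperature scaling and top-k masking can be sketched together in NumPy. The logits below are toy values; a real decoder would produce them from the model's output layer:

```python
import numpy as np

def sample(logits, temperature=1.0, top_k=None, seed=0):
    rng = np.random.default_rng(seed)
    z = np.asarray(logits, dtype=float) / temperature   # temperature scaling
    if top_k is not None:
        cutoff = np.sort(z)[-top_k]                     # k-th largest logit
        z = np.where(z >= cutoff, z, -np.inf)           # mask everything else
    p = np.exp(z - z[np.isfinite(z)].max())             # softmax over survivors
    p = p / p.sum()
    return int(rng.choice(len(p), p=p)), p

token, p = sample([2.0, 1.0, 0.1, -1.0], temperature=0.7, top_k=2)
```

Lower temperature sharpens the distribution toward the top logit; top-k zeroes out the long tail entirely.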


Session 5 — Prompt Engineering Fundamentals
Topics Covered:
  • Zero-shot, One-shot, Few-shot prompting
  • System, User, Assistant role structure
  • Chain-of-Thought (CoT) prompting
  • Tree of Thoughts (ToT)
  • Self-consistency
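A few-shot prompt in the System/User/Assistant role structure looks like this as a message list. The schema follows the widely used OpenAI-style chat format; the example reviews are invented, and the model call itself is omitted:

```python
# Worked examples shown to the model before the real query (few-shot).
few_shot_examples = [
    ("The delivery was late again.", "negative"),
    ("Support resolved my issue in minutes!", "positive"),
]

messages = [{"role": "system",
             "content": "Classify the sentiment of each review as positive or negative."}]
for review, label in few_shot_examples:
    messages.append({"role": "user", "content": review})        # example input
    messages.append({"role": "assistant", "content": label})    # example output
messages.append({"role": "user", "content": "Great course, well structured."})
```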


Session 6 — Structured Outputs & Prompt Security
Topics Covered:
  • JSON mode and schema enforcement
  • Function calling patterns
  • Prompt injection awareness and delimiter usage
  • Token optimization, prompt versioning
  • LangSmith for prompt debugging

DAY 3 — Agentic AI Foundations & RAG Systems

Duration: 6 Hours | Level: Intermediate

Move from chat to action — build AI agents that can reason, plan, and retrieve information from your organisation’s documents.

Session 1 — Agentic AI Fundamentals
Topics Covered:
  • AI Agents vs Models — key distinctions
  • Autonomous vs semi-autonomous agents
  • Perception-Action Loop: Observe → Think → Act → Feedback
  • Autonomy levels and use cases
  • Agent limitations and failure modes


Session 2 — Planning, Reasoning & Tool Use
Topics Covered:
  • Task decomposition and goal setting
  • Planning algorithms: A*, MCTS
  • ReAct paradigm (Reasoning + Acting)
  • Plan-and-Execute pattern
  • Tool use: function calling, web browsing, code execution


Session 3 — Agent Memory Systems
Topics Covered:
  • Working memory (context window)
  • Short-term memory: conversation history
  • Long-term memory: vector stores, databases
  • Episodic vs semantic memory
  • State tracking and session management


Session 4 — RAG Fundamentals
Topics Covered:
  • RAG architecture: Retriever + Reader + Generator pattern
  • Why RAG? Hallucination mitigation
  • When to use RAG vs fine-tuning
  • Embedding models: OpenAI, Cohere, BGE, E5
  • Cosine similarity and retrieval metrics
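Cosine similarity — the retrieval metric named above — is just the dot product of two embedding vectors divided by the product of their norms. The three-dimensional vectors below are toy values standing in for real embeddings:

```python
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = [1.0, 1.0, 0.0]
doc_a = [2.0, 2.0, 0.0]   # same direction as the query -> similarity 1
doc_b = [0.0, 0.0, 5.0]   # orthogonal to the query    -> similarity 0
sim_a, sim_b = cosine(query, doc_a), cosine(query, doc_b)
```

A retriever ranks document chunks by this score against the query embedding and returns the top matches to the generator.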


Session 5 — Vector Databases & Document Processing
Topics Covered:
  • Vector DBs: FAISS, Pinecone, Weaviate, Chroma, Qdrant, Milvus
  • Indexing strategies: HNSW, IVF
  • Metadata filtering
  • Chunking strategies: fixed-size, semantic, overlap
  • Document processing: OCR, PDF parsing, table extraction
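Fixed-size chunking with overlap — one of the strategies listed above — can be sketched in a few lines. Chunk and overlap sizes here are tiny for illustration; production systems typically chunk by tokens rather than characters:

```python
def chunk_text(text, chunk_size=20, overlap=5):
    """Split text into character chunks where consecutive chunks share `overlap` chars."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk_text("0123456789" * 5, chunk_size=20, overlap=5)
```

The overlap ensures a sentence cut at a chunk boundary still appears whole in at least one chunk, which improves retrieval recall.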


Session 6 — Advanced RAG Patterns
Topics Covered:
  • Hybrid search: dense + sparse (BM25)
  • Re-ranking mechanisms
  • Hypothetical Document Embeddings (HyDE)
  • Query decomposition, multi-query retrieval
  • Graph RAG: knowledge graphs + entity linking
  • RAG evaluation: RAGAS, ARES frameworks

DAY 4 — Agentic Frameworks & Multi-Agent Systems

Duration: 6 Hours | Level: Advanced

Build production-grade AI pipelines using LangChain, LangGraph, AutoGen, and CrewAI.

Session 1 — LangChain
Topics Covered:
  • LCEL (LangChain Expression Language) and chains
  • Agents and tools
  • Memory management in chains
  • Callbacks and tracing
  • Custom tool development


Session 2 — LangGraph for Stateful Workflows
Topics Covered:
  • Graph-based stateful workflows
  • State machines and conditional routing
  • Checkpointing and persistence
  • Human-in-the-Loop (HITL) integration


Session 3 — AutoGen for Multi-Agent Systems
Topics Covered:
  • Conversable agents and group chat
  • Multi-agent orchestration
  • Code execution agents
  • Human-in-the-loop patterns


Session 4 — CrewAI for Role-Based Agent Teams
Topics Covered:
  • Role-based agent design
  • Task assignment and delegation
  • Crew orchestration patterns
  • Framework comparison: LangChain vs LangGraph vs AutoGen vs CrewAI


Session 5 — Multi-Agent Architecture & Communication
Topics Covered:
  • Centralized vs decentralized control
  • Communication protocols: message passing, pub-sub, event-driven
  • Agent routing and semantic routing
  • Fallback mechanisms and error handling


Session 6 — Hierarchical Agents & Human-in-the-Loop
Topics Covered:
  • Manager-Worker patterns
  • Task decomposition (RACI matrix)
  • Collaborative vs competitive agents
  • Approval workflows and human feedback integration
  • Reflection agents and critique-refine loops

DAY 5 — Fine-Tuning, Multimodal AI & Agent Memory

Duration: 6 Hours | Level: Advanced

Customize AI models for your domain, process documents and audio visually, and build persistent agent memory.

Session 1 — Fine-Tuning Fundamentals
Topics Covered:
  • When to fine-tune vs RAG vs prompt engineering
  • Full fine-tuning vs PEFT
  • Data requirements: JSONL format, train/validation split
  • Quality-over-quantity principle


Session 2 — Parameter-Efficient Fine-Tuning (PEFT)
Topics Covered:
  • LoRA (Low-Rank Adaptation) — how it works
  • QLoRA (Quantized LoRA) — memory-efficient training
  • Adapter layers and prefix tuning
  • Instruction tuning and SFT (Supervised Fine-Tuning)
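LoRA's core idea fits in one equation: the frozen weight W is adapted as W' = W + (α/r)·BA, where A and B are small trainable low-rank matrices. A NumPy sketch with toy dimensions (not from any real model) shows the shapes and the parameter savings:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 16            # hidden size, LoRA rank, scaling factor (toy values)

W = rng.normal(size=(d, d))       # frozen pretrained weight (not updated)
A = rng.normal(size=(r, d))       # trainable down-projection
B = np.zeros((d, r))              # trainable up-projection, zero-initialized,
                                  # so the adapter starts as a no-op
W_adapted = W + (alpha / r) * (B @ A)

trainable = A.size + B.size       # parameters actually trained: 2*d*r
full = W.size                     # parameters a full fine-tune would update: d*d
```

At realistic sizes (d in the thousands, r of 8 or 16), the trainable fraction drops to well under 1% of the full matrix, which is what makes LoRA and QLoRA feasible on modest hardware.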


Session 3 — Agent Memory & Cross-Session Persistence
Topics Covered:
  • Multi-tier memory architecture
  • Short-term: context window and conversation history
  • Long-term: vector stores and database storage
  • Episodic memory: event sequence storage
  • Semantic memory: fact storage, knowledge graph integration
  • State management: SQLite, Redis, PostgreSQL
  • State checkpointing, privacy considerations

DAY 6 — Safety, Evaluation, MCP & Observability

Duration: 6 Hours | Level: Advanced

Deploy AI responsibly — guardrails, evaluation frameworks, the Model Context Protocol, and production monitoring.

Session 1 — AI Safety & Content Moderation
Topics Covered:
  • AI safety principles: alignment, robustness, transparency
  • Content moderation: toxicity detection, PII detection
  • Input validation and sanitization: length limits, format checks
  • Output validation: factuality verification, hallucination detection
  • Constitutional AI and value alignment


Session 2 — Prompt Injection Defense & Hallucination Mitigation
Topics Covered:
  • Understanding prompt injection attacks
  • Delimiter-based defenses
  • Instruction hierarchy and system message hardening
  • Jailbreak prevention: refusal training, multi-layer defenses
  • Hallucination mitigation: Chain-of-Verification (CoVe), confidence scoring, citation requirements


Session 3 — Evaluation Frameworks
Topics Covered:
  • Evaluation framework design: metric selection, benchmark creation
  • Offline evaluation: automated metrics (BLEU, ROUGE, F1, embedding similarity)
  • Online evaluation: A/B testing, canary deployment, champion-challenger
  • LLM-as-Judge: criteria-based evaluation, bias considerations
  • RAG-specific: context relevance, answer faithfulness, RAGAS and ARES
  • Agent evaluation: task completion rate, tool usage accuracy


Session 4 — Observability & Monitoring
Topics Covered:
  • Observability fundamentals: metrics, logs, traces
  • Distributed tracing for AI pipelines
  • LangSmith platform: trace visualization, dataset management, prompt playground
  • Grafana dashboards: metrics visualization and alerting
  • Cost tracking, latency analysis, token usage monitoring


Session 5 — Model Context Protocol (MCP)
Topics Covered:
  • MCP protocol: architecture, MCP vs traditional APIs
  • MCP server development: tool registration, schema definition
  • MCP client integration: tool discovery, error handling
  • LLM connectors: connecting LLMs to external systems
  • MCP security: authentication, permission model, rate limiting
  • MCP deployment: containerization, orchestration, monitoring


Session 6 — Red-Teaming & Responsible AI
Topics Covered:
  • Red-teaming methodology for AI systems
  • Adversarial testing and penetration testing
  • OWASP Top 10 for LLMs
  • Human oversight: approval workflows, audit logging, escalation
  • Safety classifiers: multi-stage filtering
  • Building audit trails for compliance and governance

DAY 7 — Real-World Projects & Capstone

Duration: 6 Hours | Level: Production

Build 3 complete AI systems across different domains — from design to deployment. Portfolio-ready projects you can showcase immediately.

PROJECT 1 — Autonomous Loan Processing Agent

Domain: Banking & Financial Services
Type: Agentic AI | Frameworks: LangGraph (7-node workflow)

Problem:
Manual loan processing takes 5–7 days, involves 28+ processors, and handles only 600 loans per month with high error rates and poor customer experience.

What You Build:
An end-to-end autonomous loan processing agent — from application submission to disbursement — with no manual intervention for standard cases.

Workflow Nodes:

  1. Document intake + OCR extraction

  2. ID authenticity verification

  3. Credit score analysis

  4. Debt-to-income ratio calculation

  5. Property / collateral valuation

  6. Risk scoring + interest rate determination

  7. Auto-approve / escalate to human / disburse

Agentic Capabilities:

  • Multi-step autonomous decision making

  • Human-in-the-loop escalation for edge cases

  • Document verification using Vision AI

  • Real-time status updates to applicant

Tech Stack:
LangGraph, GPT-4o, Qdrant, Unstructured.io (OCR), e-signature API, Streamlit

Real Benchmark:
HDFC Bank — Loan approval: 5 days → 2 minutes | Capacity: 600 → 15,000 loans/month (25x) | Default-prediction accuracy: 78% → 94%

PROJECT 2 — Multi-Agent Customer Support System

Domain: E-commerce / Customer Service
Type: Multi-Agent + MCP | Frameworks: LangGraph + MCP

Problem:
Customer support costs businesses an estimated $1.3 trillion globally. Average wait time is 11 minutes, causing 75% of customers to abandon. Agents spend 60% of their time on repetitive tasks. Traditional chatbots fail at multi-step problems and cannot take real actions such as processing refunds. For most businesses, support is unavailable 16 hours a day.

What You Build:
An autonomous multi-agent support system handling 70% of queries end-to-end — checking orders, processing refunds, updating records, and escalating only when needed.

Multi-Agent System:

  • Triage Agent — classifies intent, routes to correct agent

  • Order Management Agent — checks order status, tracking, delivery

  • Product Info Agent — answers product queries, availability, specs

  • Billing Agent — handles payments, invoices, refunds

  • Technical Support Agent — troubleshooting, step-by-step guidance

  • Escalation Agent — hands off to human with full context

MCP Integration:
Centralized tool registry connecting all business systems — order database, inventory system, payment processor, email service, ticket creation

Decision Logic:
Business rules engine for refunds, returns, and escalation thresholds

Tech Stack:
LangGraph, GPT-4o-mini, Weaviate, MCP (business system connectors), WebSocket, Streamlit

Key Metrics:

70% queries resolved end-to-end | Response time < 30 seconds | 24/7 availability | Support cost reduced by 60%+


PROJECT 3 — Lead Scoring Agent

Domain: Sales & Marketing
Type: Agentic AI + Web Search | Frameworks: CrewAI + Web Intelligence

Problem:
Sales teams waste 60% of their time on unqualified leads — contacting 100 prospects to close just 3 deals. Manual research takes 15–30 minutes per lead with no systematic qualification process, resulting in $50,000+ wasted effort per sales rep annually. There is no data-driven way to prioritize high-probability prospects.

What You Build:
An autonomous lead qualification agent that researches, scores, and prioritizes prospects — giving sales teams a ranked list of high-conversion leads with personalized talking points ready to go.

Agentic Workflow:

  • Research Agent — scrapes company website, LinkedIn, news, funding data, tech stack

  • Analysis Agent — matches lead against Ideal Customer Profile (ICP)

  • Contact Agent — identifies decision-makers and generates contact information

  • Scoring Agent — scores lead 0–100 based on conversion probability

Data Sources:
Web search, company websites, LinkedIn, tech stack databases, funding databases, news APIs

Scoring Model Inputs:

  • ICP matching (industry, company size, revenue range)

  • Buying signals (recent funding, hiring trends, tech adoption)

  • Engagement signals (website visits, content downloads)

  • Decision-maker accessibility

Output Per Lead:

  • Lead score (0–100) with reasoning

  • Decision-maker contacts and roles

  • Personalized talking points tailored to company context

  • Recommended outreach timing and channel

Tech Stack:
CrewAI, GPT-4o, Tavily (web search), Qdrant, LinkedIn API, Streamlit

Key Metrics:

15–30 min manual research → 2 min automated | Sales team focuses only on top 20% leads | Conversion rate improved 3–5x


    Talk to an Expert





    "I authorise DV Data & Analytics & its representatives to contact me with updates and notifications via Email/SMS/WhatsApp/Call. This will override DND/NDNC." Privacy Policy and Terms & Conditions

    🚀 Ready to Become an Expert in Generative AI & Agentic AI?

    Enroll Now and Secure Your AI Future

    Training Supports & Benefits

    • Learn from the World's Best Faculty & Industry Experts
    • Learn with Fun Hands-on Exercises & Assignments
    • Participate in Hackathons & Group Activities
    • Dedicated Faculty
    • 9 AM to 6 PM Support
    • Resume Building & Mock Interview Preparation
    • Personalized Access to Our Learning Management System (LMS)
    • Mock Interview Practice + Real-Time Tests


      Our Students Placed In...

      Best Training and Placement Institutes in Bangalore

      Student Reviews

      • I would like to thank the DV Analytics support staff and faculty, especially Dev sir. I joined DV recently and was not confident because of my non-IT background, but after attending regular classes and doing the assignments I feel very confident that I can be a data scientist. This is one of the best institutes to learn data science for IT as well as non-IT students.

        — Soumyaranjan Sutar

        Best in-class institute for all data-driven skills.

        — Gaurav Rathore

        DV is a great place to learn Data Science. Dev sir is very committed to every student's success. All classes are live, and all doubts are clarified. The live projects are the key to success. The class material and assignments are more than adequate to grasp all concepts. I would highly recommend DV Analytics to anyone interested in Data Science.

        — Abhisek Debata

        I am a student of DV Analytics. I'm not from an IT background, but because of the faculty and Dev sir I feel I can be a data scientist, and I will definitely achieve my goals. Thank you 😊 #DVAnalytics

        — Supriya Mona

        DV Analytics is one of the best Data Science institutes. Along with wide-ranging and extraordinary classes, they also help us with business development strategies and projects across different industries.

        — rajat chaudhary

        DV is a place you can look to for a career change. For me, the course content and delivery are what lead you to a good profile as a Data Analyst. DV is one of India's best institutes. Many training institutes and online platforms are available these days, but in most of them the content is not well aligned and, in my experience, you end up with no results after working hard. At DV, the content is well aligned with industry requirements, and the assignments are designed such that if you practice them, no one can stop you from bagging a very good offer. It is all about developing a skill set; do your part and don't worry about anything else. DV will always be supportive of every student.

        — raina goswami
      • Highly recommended to all who believe the sky is the limit and see themselves succeeding and growing in life. Based on my personal experience, DV is one of the best training institutes. The way all the programs have been designed, be it mock interviews, classes, or assignments, makes you ready for the competitive world out there. The faculty members are also very supportive and motivate you at each and every step.

        — kritika raina

        If you really think it's hard to be a data scientist or data analyst, then DV Analytics is the right place to join to learn how easy it can be. I say this because earlier even I thought coding was really hard, but from the day I joined and started learning, I came to know it's all easy if you get the right tutor. The way they nurture you in these 6 months will definitely get you placed in a good MNC. #DVAnalytics #DVtian

        — Roopesh Mohapatra

        DV Analytics is a good place to learn Data Science and enhance your technical skills. My experience at this center was really great: a good training environment, friendly work culture, and supportive management.

        — Karthik Mutyala

        Good institute with well-experienced faculty. It is the most highly rated data science training institute in Bangalore, with a great team of teachers. I appreciate that doubt-clearing sessions are conducted regularly. DV Analytics is also a great platform for resolving queries.

        — tarun somisetty

        It's a great place to learn and build a career in data science. The atmosphere of this institute is very good, and I am glad I took this decision. Anyone looking to make a career in this field must join DV Analytics 👍🏻

        — Payal Udhwani

        One of the best and finest institutes to learn Data Science. Dev sir teaches the subject well with real-time examples. They have an amazing, industry-updated syllabus. They also provide internships with guaranteed job assistance. I really recommend this institute for the Data Science course.

        — srinija 2000
      • Based on my personal experience, DV is one of the best Data Science training institutes in India. I really appreciate that the DV Analytics team takes care of each and every student, from joining DV to finding success in life.

        — Suryakanta lenka

        DV Analytics Training Institute boasts a friendly work culture that fosters collaboration. The training environment is conducive to effective learning, with supportive instructors and a management team that genuinely cares about your career.

        — sahal roshan

        Anyone who wants to get into the field of data can definitely check out this place. You won't just learn a lot; the placement support DV gives its students is insanely strong. Deb sir and Venky sir are geniuses in their field, and learning from them was an amazing experience.

        — Vijith Visweswaran

        Great place to learn! Dev sir is very committed to every student's success. They have offered online classes since the pandemic began. All classes are live, and all doubts are clarified. The class material and assignments are more than adequate to grasp all concepts. I would highly recommend it to anyone interested in Data Science.

        — Shreejil PV

        I'm a student at DV Analytics right now, and I couldn't be more satisfied. I wholeheartedly endorse DV Analytics to anyone wishing to advance their data analytics skills, whether they are novices or seasoned professionals. The course material is interesting and applicable. Deb sir is a true Data Science expert, with a wealth of knowledge and experience in the field. Thank you, DV Analytics!

        — ajit

        DV Analytics provided an excellent learning experience in Data Science, facilitated by experienced and helpful tutors who made the whole learning journey enriching at every level.

        — Satyajit Panda
      • DV Analytics is a highly professional institute dedicated to guiding students along the path of data science: supportive staff, interactive and regular classes, and a vision to help everyone find success. And the best part is the placement.

        — Abdul Sameer

        DV Analytics is one of the best Data Science institutes. Along with wide-ranging and extraordinary classes, they also help us with business development strategies and projects across different industries. I took the courses to gain knowledge in AI, machine learning, and more. One can have a solid learning experience with long hours devoted to the course. It offers project-based learning that we can apply in real time, which helps us enhance our decision-making abilities. Dev sir is a really good person and has broad knowledge across the data analytics industry.

        — Chandan A

        Five stars for DV Analytics! The courses are well structured, and the institute's commitment to empowering students in data science and AI is evident. Grateful for the knowledge and confidence gained here.

        — Harsha Damaraju

        DV Analytics is the best institute in India for anyone who wants to move their career into Data Science. Dev sir is committed to each student's success, and the faculty and staff are supportive.

        — Anil Mamodi

        DV Analytics is not just an institute; it is a temple of knowledge. The people there are so down to earth and supportive; it is one of the best institutes I have been to. The entire training was a fun session with lots of learning and creativity. Their team is very good, with excellent people in the organization who support us at every step of our journey. The student mentors make sure we complete the assignments on time, and the placement team supports us until we get placed. Every person in the organization has supported us in some way or other. Thanks, DV, for shaping my future.

        — Shashank Nr

        If someone wants to build a career in the Data Science field, DV Analytics is the best place for it. The faculty are the best in their respective subjects, especially Dev sir; he is a teacher and guide for us. 👍 Best place to learn.

        — Rahul Ghorpade

      BECOME GLOBALLY CERTIFIED

      MASTER PROGRAM IN GEN AI & AGENTIC AI

      100% Placement Assistance