
AI Schedule Adjustment Agent

Commissioned Work

Overview

Designed and developed an AI agent that learns implicit user preferences through dialogue to improve the accuracy of its schedule proposals. Adopted a hybrid architecture that orchestrates four LLM agents alongside rule-based processing via LangGraph, implemented as a full-stack system with a FastAPI backend and a Next.js 16 frontend.

Architecture

A hybrid design applies LLM agents where flexibility matters (intent understanding, priority estimation, proposal generation) and rule-based processing where precision matters (calendar calculations, preference filtering). By combining LLM flexibility with rule accuracy, the system achieves high-precision scheduling while mitigating hallucination risks.

Human-in-the-Loop, implemented via LangGraph's interrupt(), provides an interactive loop for candidate confirmation and follow-up questions. When a user rejects a candidate, the rejection reason is fed back into the context for subsequent proposals, so the agent avoids repeating the same patterns.
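The rejection-feedback loop can be sketched in pure Python as below; in the real system, LangGraph's interrupt() pauses the graph at the confirmation step and resumes with the user's answer. All names here are hypothetical, not the actual implementation.

```python
def propose_with_feedback(candidates, ask_user, max_rounds=3):
    """Offer candidates one at a time; on rejection, record the reason
    so later proposals can steer away from the same pattern."""
    rejection_context = []
    for candidate in candidates[:max_rounds]:
        # In the real graph this is where interrupt() hands control
        # back to the user and waits for a response.
        accepted, reason = ask_user(candidate, rejection_context)
        if accepted:
            return candidate, rejection_context
        rejection_context.append({"candidate": candidate, "reason": reason})
    return None, rejection_context
```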

  1. parse_request (LLM): Intent analysis and structuring of natural language requests
  2. estimate_priority (LLM): Meeting priority estimation
  3. extract_free_busy (RULE): Calendar free slot extraction, holiday exclusion, movable event detection
  4. load_memory (STORE): Loading user preferences and QA history
  5. apply_prefs_filters (RULE): Candidate filtering through 8 preference-based filters
  6. propose_schedule (LLM): Schedule proposal generation considering learned preferences
  7. confirm / select_questions (HITL): User confirmation and preference learning loop via follow-up questions
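The pipeline above can be sketched as plain functions over a shared state dict; the real system wires these as LangGraph nodes, and the LLM-backed steps are stubbed with fixed outputs here. All names are illustrative.

```python
def parse_request(state):        # LLM node in the real system
    state["intent"] = {"action": "schedule", "duration_min": 30}
    return state

def extract_free_busy(state):    # rule-based node
    state["free_slots"] = [s for s in state["calendar"] if s["free"]]
    return state

def apply_prefs_filters(state):  # rule-based node
    allowed = state["preferences"]["allowed_hours"]
    state["candidates"] = [s for s in state["free_slots"] if s["hour"] in allowed]
    return state

# Nodes run in order, each reading and enriching the shared state.
PIPELINE = [parse_request, extract_free_busy, apply_prefs_filters]

def run(state):
    for node in PIPELINE:
        state = node(state)
    return state
```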

Memory & Learning

Implemented two Semantic Memory patterns using LangGraph Store (BaseStore). The Profile pattern manages structured preference data as a single document per user, continuously learning preferences through deep merge updates. The Collection pattern accumulates QA history as append-only individual records, maintaining past dialogue context in a searchable format.
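The Profile pattern's deep-merge update might look like the following sketch: nested preference sections merge key-by-key, while scalar values are overwritten, so each dialogue turn refines the profile without discarding earlier learning. This is an assumed shape, not the actual implementation.

```python
def deep_merge(profile: dict, update: dict) -> dict:
    """Recursively merge an update into a stored preference profile.
    Nested dicts merge key-by-key; scalars and lists are replaced."""
    merged = dict(profile)
    for key, value in update.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged
```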

Implemented progressive preference elicitation through 24 question templates across 6 categories (time of day, event rescheduling, participants, format, schedule, workload). When no candidates are found, the agent progressively relaxes constraints (lunch slot usage -> event shuffling -> participant reduction -> duration shortening -> deadline extension) to expand the search space.
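The staged relaxation can be sketched as a retry loop that lifts one constraint at a time, in the order listed, until candidates appear or every relaxation has been applied. The relaxation names and the search callback are hypothetical.

```python
RELAXATIONS = [
    "allow_lunch_slots",          # lunch slot usage
    "allow_event_shuffle",        # event rescheduling
    "drop_optional_participants", # participant reduction
    "shorten_duration",           # duration shortening
    "extend_deadline",            # deadline extension
]

def search_with_relaxation(find_candidates, constraints):
    """Retry the candidate search, relaxing one constraint per round,
    and report which relaxations were needed."""
    applied = []
    candidates = find_candidates(constraints, applied)
    for relaxation in RELAXATIONS:
        if candidates:
            break
        applied.append(relaxation)
        candidates = find_candidates(constraints, applied)
    return candidates, applied
```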

Key Features

Preference Filter Engine: Implemented 8 rule-based filters including time slot sorting, lunch protection, hard block exclusion, capacity limits, movable event detection, work block rescheduling, external/internal priority control, and optional participant matching.
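The filter engine's shape can be sketched as a chain where each filter takes the candidate slots plus the user's preference profile and returns a reduced list. Only three of the eight filters are shown, and all names are illustrative.

```python
def protect_lunch(slots, prefs):
    """Drop lunch-hour slots when the user protects lunch."""
    if not prefs.get("protect_lunch"):
        return slots
    return [s for s in slots if not (12 <= s["hour"] < 13)]

def exclude_hard_blocks(slots, prefs):
    """Drop hours the user has marked as hard-blocked."""
    blocked = set(prefs.get("hard_blocks", []))
    return [s for s in slots if s["hour"] not in blocked]

def cap_per_day(slots, prefs):
    """Keep at most `daily_cap` candidate slots per day."""
    cap, seen, kept = prefs.get("daily_cap", 8), {}, []
    for s in slots:
        seen[s["day"]] = seen.get(s["day"], 0) + 1
        if seen[s["day"]] <= cap:
            kept.append(s)
    return kept

FILTERS = [protect_lunch, exclude_hard_blocks, cap_per_day]

def apply_filters(slots, prefs):
    for f in FILTERS:
        slots = f(slots, prefs)
    return slots
```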

Free/Busy Analysis Engine: Cross-analyzes all participants' calendars to calculate common free slots, including holiday exclusion, lunch time detection, and movable event extraction.
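The core common-free-slot calculation is a classic interval sweep: merge every participant's busy intervals, then take the gaps within the working day. A minimal sketch, assuming times are minutes from midnight and intervals fall inside the working day:

```python
def common_free_slots(busy_by_user, day_start, day_end):
    """Return (start, end) intervals free for every participant."""
    # Pool all busy intervals across participants and sort by start.
    busy = sorted(iv for user in busy_by_user for iv in user)
    free, cursor = [], day_start
    for start, end in busy:
        if start > cursor:
            free.append((cursor, start))  # gap before this busy block
        cursor = max(cursor, end)         # overlapping blocks coalesce
    if cursor < day_end:
        free.append((cursor, day_end))
    return free
```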

Frontend: SPA built with Next.js 16 / React 19. Features FullCalendar for calendar display, Zustand for agent communication state management, TanStack Query for server state management, and Tailwind CSS 4 for styling.

Development & Quality

Established unit tests and scenario tests with pytest, building a reproducible testing infrastructure using LLM mocks and calendar fixtures. Set up LLM tracing and monitoring with LangSmith, containerized deployment with Docker, and a graph visualization/debugging environment with LangGraph Studio. Enforced type validation for all LLM outputs by passing Pydantic v2 models to with_structured_output().
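The LLM-mock approach can be sketched as below: each node takes its model as a dependency, so tests swap in a stub that records calls and returns a canned response, making runs fully reproducible. All names are hypothetical.

```python
class StubLLM:
    """Test double that returns a fixed response and records prompts."""
    def __init__(self, response):
        self.response = response
        self.calls = []

    def invoke(self, prompt):
        self.calls.append(prompt)
        return self.response

def estimate_priority(llm, meeting):
    """Node under test: asks the model for a meeting's priority."""
    result = llm.invoke(f"Priority of: {meeting['title']}")
    return result["priority"]

def test_estimate_priority():
    llm = StubLLM({"priority": "high"})
    assert estimate_priority(llm, {"title": "Board review"}) == "high"
    assert len(llm.calls) == 1
```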

Technologies

Python · FastAPI · LangGraph · LangChain · LangGraph Store (Semantic Memory) · LangSmith · Pydantic · pytest · Next.js · React · TypeScript · Zustand · FullCalendar · TanStack Query · Tailwind CSS · Docker · Multi-Agent · Human-in-the-Loop