
Internal Chat System Development

European Consulting Firm | Internal Project

Overview

Designed, built, and operated an internal RAG chat system on Azure. Forked `Azure-Samples/azure-search-openai-demo` and trimmed it into a lighter deployment tuned for internal knowledge (policies, manuals, FAQs). The project was recognized with the firm's 2024 Corporate Award.

Architecture

A React + TypeScript frontend is paired with a Python Quart async backend; answers stream in progressively. Azure OpenAI Service (GPT-4) generation is chained with Azure AI Search's vector, hybrid, and semantic-ranker stages to handle the abbreviation / formal-name drift endemic to internal documents (a query using an in-house acronym must still match documents that spell out the formal name).
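Azure AI Search's hybrid mode fuses the keyword (BM25) and vector rankings with Reciprocal Rank Fusion before the semantic ranker re-scores the top results. As a toy illustration (document IDs are made up, not from the real index), the fusion step rewards documents that rank well in both lists:

```python
def rrf_merge(rankings, k=60):
    """Merge ranked result lists with Reciprocal Rank Fusion.

    Each ranking is a list of document IDs, best first. A document's
    fused score is the sum of 1 / (k + rank) over every list it
    appears in; k=60 is the conventional damping constant.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Keyword (BM25) results vs. vector results for the same query:
keyword_hits = ["policy-17", "faq-03", "manual-08"]
vector_hits = ["policy-17", "handbook-02", "faq-03"]

# "policy-17" tops both lists, so it wins; "faq-03" appears in both
# and therefore outranks documents found by only one retriever.
merged = rrf_merge([keyword_hits, vector_hits])
```

This is why hybrid retrieval copes with abbreviation drift: a document the keyword stage misses (because it uses the formal name) can still surface via its vector rank, and vice versa.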

`app.py` is kept to 584 lines — lighter than the ~784-line upstream — by trimming external integrations that internal use does not need, while adding user feedback capture (thumbs up / thumbs down) and explicit citation rendering. Azure Document Intelligence parses PDF / Word / PowerPoint structure and Integrated Vectorization refreshes the index automatically, so the upload-to-searchable path runs unattended.
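The citation rendering follows the upstream demo's convention of having the model emit source filenames in square brackets; a minimal, self-contained sketch of that post-processing step (hypothetical filenames, simplified marker format) might look like:

```python
import re

# Matches inline markers such as [expense-policy.pdf] emitted by the model.
CITATION = re.compile(r"\[([^\[\]]+?\.(?:pdf|docx|pptx))\]")

def render_citations(answer: str) -> tuple[str, list[str]]:
    """Replace inline [file.ext] markers with numbered references and
    return the deduplicated, ordered source list for display."""
    sources: list[str] = []

    def number(match: re.Match) -> str:
        name = match.group(1)
        if name not in sources:
            sources.append(name)
        return f"[{sources.index(name) + 1}]"

    return CITATION.sub(number, answer), sources

text = (
    "Travel costs are capped [expense-policy.pdf]; see the form "
    "[expense-policy.pdf] and the manual [travel-manual.docx]."
)
rendered, sources = render_citations(text)
```

Repeated references to the same file collapse onto one number, so the source list under each answer stays short even when the model cites a document several times.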

CI/CD & DevOps

Source, build, planning, and work tracking all live in Azure DevOps: `.azdo/pipelines/azure-dev.yml` drives build / test / deploy through Azure Pipelines. Choosing Azure Pipelines over GitHub Actions aligned the pipeline with the firm's existing security controls (identity, audit trails, network boundary).

Azure resources are managed declaratively with Bicep / ARM templates to minimize environment drift. 26+ shell and PowerShell scripts automate setup, credential rotation, and data preprocessing, and 31+ tests in CI block regressions. Git Flow is used as the branching strategy, with pull-request review mandatory to embed a code-review culture.

User Feedback & Monitoring

Every answer includes a thumbs-up / thumbs-down control, and each signal is recorded as a custom metric in Application Insights. An Azure Log Analytics + KQL dashboard tracks satisfaction, response time, and answer accuracy over time, feeding a continuous tuning loop for prompts and search parameters. Multi-turn conversation, citation rendering, and thought-process visualization were layered in alongside.
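The dashboard's headline satisfaction number is essentially a thumbs-up share grouped by day, which in KQL is a `summarize ... by bin(timestamp, 1d)`. As a rough illustration of the same aggregation (synthetic events, not real telemetry):

```python
from collections import defaultdict
from datetime import date

def daily_satisfaction(events):
    """Group feedback events by day and compute the thumbs-up share,
    mirroring a KQL summarize-by-day over the custom metric."""
    buckets = defaultdict(lambda: [0, 0])  # day -> [ups, downs]
    for event in events:
        counts = buckets[event["day"]]
        counts[0 if event["rating"] == "up" else 1] += 1
    return {
        day: ups / (ups + downs)
        for day, (ups, downs) in sorted(buckets.items())
    }

events = [
    {"day": date(2024, 5, 1), "rating": "up"},
    {"day": date(2024, 5, 1), "rating": "down"},
    {"day": date(2024, 5, 2), "rating": "up"},
    {"day": date(2024, 5, 2), "rating": "up"},
]
rates = daily_satisfaction(events)
```

A dip in this daily rate is the trigger for the tuning loop: inspect the low-rated conversations, then adjust prompts or search parameters and watch the next day's share.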

Achievement

Contributions to faster, higher-quality internal Q&A earned the 2024 Corporate Award. The system is now a standard internal tool and continues to be operated and improved.

Technologies

Python · Quart · React · TypeScript · Azure OpenAI Service · Azure AI Search · Azure Blob Storage · Azure Document Intelligence · Azure DevOps · Azure Pipelines · Azure Repos · Azure Boards · Application Insights · Log Analytics (KQL) · Bicep / ARM · .prompty · PowerShell · RAG · Vector Search · Semantic Ranker · Integrated Vectorization · Git Flow