Notes

Microsoft AI Decision Framework

A three-phase intake playbook for standardizing AI project selection, enforcing a "business value first" approach before technology selection.

Planted February 1, 2026
Last tended February 2, 2026
🌲
Evergreen

Mature and stable. Well-developed thinking.


Leverage This Note

  • Reference architecture ready
  • Solution component
  • Decision framework

What I'm Exploring

I am analyzing Microsoft's official methodology for AI project intake. The goal is to move away from "Shiny Object Syndrome"—where clients or internal teams ask for an "agent" without a defined use case—and toward a standardized decision matrix that validates business value, user experience, and technical feasibility before any code is written.

Initial Thoughts

The framework is distinct because it acts as a gatekeeper rather than just a technical guide. It introduces a "Three-Phase Decision Methodology" that I find particularly useful for consulting intake:

  1. The Intake Filter: It demands answers to three specific questions before proceeding:

    • Outcome: What is the precise ROI?
    • UX: Does this actually need a chatbot, or just a smarter search bar?
    • Evolution: Can a SaaS tool (like M365 Copilot) solve this with zero coding?
  2. The "Kitchen" Analogy (Spectrum of Control): This is a great mental model for explaining "Build vs. Buy" to stakeholders:

    • Dining Out (SaaS): Order off the menu (M365 Copilot).
    • Meal Kit (Low-Code): Assemble ingredients (Copilot Studio).
    • Scratch Cooking (Pro-Code): Full control, high effort (Foundry/Agent SDK).
  3. Orchestration ("The Coin"): It separates agents into "Side A" (Interactive/Front-Office) and "Side B" (Invisible/Back-Office/Triggers). This distinction is vital for architecture—avoiding the trap of trying to make a conversational bot handle heavy background processing.
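The three phases above can be sketched as a simple triage function. This is a minimal illustration of the gatekeeper logic, not any Microsoft API: the names `IntakeRequest`, `ControlLevel`, and `triage` are all hypothetical, and the rules are a simplified reading of the intake filter and spectrum of control.

```python
from dataclasses import dataclass
from enum import Enum

class ControlLevel(Enum):
    # Hypothetical mapping of the "Kitchen" analogy to delivery models
    SAAS = "dining out"        # e.g. M365 Copilot, zero coding
    LOW_CODE = "meal kit"      # e.g. Copilot Studio
    PRO_CODE = "scratch"       # e.g. Foundry / Agent SDK, full control

@dataclass
class IntakeRequest:
    roi_defined: bool          # Outcome: is there a precise, measurable ROI?
    needs_conversation: bool   # UX: does the user actually need a chat surface?
    saas_covers_it: bool       # Evolution: can an off-the-shelf SaaS tool solve it?

def triage(req: IntakeRequest) -> str:
    """Gate the request before any technology is chosen."""
    if not req.roi_defined:
        return "reject: no measurable outcome"        # fails the intake filter
    if req.saas_covers_it:
        return ControlLevel.SAAS.value                # buy, don't build
    if not req.needs_conversation:
        # Side B territory: an invisible, trigger-driven agent or smarter search
        return "consider: smarter search / invisible agent (Side B)"
    # Default to low-code; escalate to pro-code only if low-code falls short
    return ControlLevel.LOW_CODE.value
```

The ordering matters: business value is checked first, buy-over-build second, and the UX question routes non-conversational work away from chatbots entirely, mirroring the Side A / Side B distinction.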

Open Questions

  • Adoption: How rigidly should the "Decision Gate" metrics from the BXT (Business-Experience-Technology) framework be applied to rapid, low-risk internal pilots?
  • Integration: For hybrid scenarios, how seamless is the "Invisible Agent" (Side B) handoff between Azure Logic Apps and non-Azure event triggers?
  • Skills: Does the "Capability Envisioning" phase require a dedicated functional architect, or can this be run by technical leads?
