Build a secure, grounded helpdesk assistant that answers from company knowledge, performs real actions, and respects governance from day one.
What you will build
The Copilot runs in Teams and in the web chat canvas. It routes user messages to intents. Low-risk Q&A is answered from grounded knowledge. High-risk requests move into action topics that call flows. A resolver policy decides when to escalate to a human agent in a shared mailbox or queue. Every step writes telemetry so you can audit who asked what, which source was used, and which automation ran.
Intent design and topic hierarchy
Start from real tickets, not from a blank page. Cluster historical tickets from your helpdesk tool or Dataverse into three tiers of intent maturity.
Tier A intents answer or act with high confidence. Examples are request software access, request a distribution list change, reset MFA, and where to find a template. These get first-class topics with entity extraction for requester, system, urgency, device, and department. Each topic includes clarifying questions that fire only when a slot is missing. Keep prompts short and deterministic.
Tier B intents provide knowledge only. Examples are password policy, VPN troubleshooting, and how to book equipment. These map to Q&A topics that quote sources and apply a confidence floor. If confidence is low, the bot offers to raise a ticket instead.
Tier C is the long tail. Use a single fallback topic that tries search across approved sites and then offers a handoff if nothing is found. Capture unknown phrases into analytics so you can graduate new intents over time.
A simple hierarchy works well. The entry topic handles greeting and routing. Child topics handle Tier A actions. A shared confirmation topic standardizes consent messaging before any change is made. A shared escalate topic sends the full transcript and extracted entities to your human queue.
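Before building topics, agree on what the escalate topic hands over. A sketch of that payload, reusing the entity names above; all field names and values are illustrative:
{
  "conversationId": "00000000-0000-0000-0000-0000000000a1",
  "requesterUpn": "alex@contoso.com",
  "matchedIntent": "request_software_access",
  "entities": {
    "system": "Salesforce",
    "accessLevel": "read only",
    "urgency": "by Friday",
    "department": "Marketing"
  },
  "transcript": [
    {"role": "user", "text": "I need access to Salesforce as a read only user for the Marketing team by Friday."},
    {"role": "assistant", "text": "I could not finish this automatically, so I am handing it to an agent."}
  ]
}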
Grounding sources and retrieval constraints
Add your knowledge sources in Copilot Studio using the Knowledge section. Use two classes of sources.
SharePoint sites for documents, FAQs, SOPs, and policy PDFs. Restrict scope by site and library rather than tenant wide search. Limit file types to docx, pdf, pptx, html. Exclude folders that contain drafts. Enable citation snippets so answers show where the content came from. Apply a recency gate, such as only documents updated within the last 18 months for operational topics, and allow older content only for policy pages.
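Most of this is configured in the Knowledge pane, but it is worth recording the constraints as a reviewable manifest your team can keep in source control. A sketch, using an assumed site URL and library names; the JSON shape is for documentation, not a product schema:
{
  "source": "sharepoint",
  "siteUrl": "https://contoso.sharepoint.com/sites/ITKnowledge",
  "libraries": ["Published SOPs", "FAQs", "Policies"],
  "excludedFolders": ["Drafts", "Archive"],
  "allowedFileTypes": ["docx", "pdf", "pptx", "html"],
  "recency": {"operationalContentMaxAgeMonths": 18, "policyContentMaxAgeMonths": null},
  "citations": {"showSnippets": true}
}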
Dataverse tables for structured answers like service catalog items and known error database. Create a table with columns title, description, owner, SLA minutes, fulfillment flow id. Use a lookup so action topics can bind directly to the correct flow and target system. Keep this table as the single source of truth for what the assistant is allowed to do.
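A sample catalog row makes the binding concrete. The values are illustrative; the point is that the action topic looks up the fulfillment flow id instead of hard coding it:
{
  "title": "Salesforce read only access",
  "description": "Grants the Salesforce Viewer profile after manager approval.",
  "owner": "app-owners-salesforce@contoso.com",
  "slaMinutes": 240,
  "fulfillmentFlowId": "a1b2c3d4-0000-0000-0000-000000000000"
}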
Constrain retrieval in the topic settings. Set a minimum answer confidence such as 0.65. Require at least one citation before the bot can answer. Cap snippet count to reduce noise. Pin exact match keywords for critical terms like MFA, BitLocker, Salesforce. Deny answers that include pattern matches for personal data or secrets. If any deny rule triggers the topic routes to escalation.
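The same gates are easier to review when written down. A hypothetical policy object that captures them; Copilot Studio exposes these controls through topic and generative answers settings rather than this exact JSON, and the deny patterns shown are only examples:
{
  "minimumAnswerConfidence": 0.65,
  "requireCitation": true,
  "maxSnippets": 3,
  "pinnedKeywords": ["MFA", "BitLocker", "Salesforce"],
  "denyPatterns": [
    "\\b\\d{3}-\\d{2}-\\d{4}\\b",
    "(?i)(password|client secret|api key)\\s*[:=]"
  ],
  "onDeny": "route_to_escalation"
}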
Actions and automations with Power Automate
Actions turn the assistant into a resolver instead of a search box. Model each action as a topic that gathers slots and then calls a flow through the Power Automate connector; a sample request payload follows the list below.
Common actions:
Reset MFA for a user in Entra ID. The topic collects the UPN and the reason. The flow validates requester entitlement, writes an audit row in Dataverse, and calls the Graph API using an application connection.
Create a helpdesk ticket. The topic collects title, category, and impact. The flow creates a Dataverse case row, posts a Teams card to the user, and returns the case id to the chat.
Access request approvals. The topic collects app name and access level. The flow generates an approval and only after approval calls the provisioning API. The bot updates the user on each state change through proactive messages.
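Whichever action runs, keep the contract between topic and flow uniform. A sample request payload for the create ticket action, shaped to match the trigger schema in the flow below; the parameter names and values are illustrative:
{
  "intent": "create_helpdesk_ticket",
  "requesterUpn": "alex@contoso.com",
  "action": "createTicket",
  "parameters": {
    "title": "Laptop running hot and shutting down after 10 minutes",
    "category": "Hardware",
    "impact": 2,
    "requesterRole": "HelpdeskAgent"
  }
}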
Minimal flow definition JSON
Below is a simplified extract that shows the core shape. Replace placeholders with your environment and connector names.
{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "triggers": {
      "When_Called_From_Copilot": {
        "type": "Request",
        "kind": "Http",
        "inputs": {
          "schema": {
            "type": "object",
            "properties": {
              "intent": {"type": "string"},
              "requesterUpn": {"type": "string"},
              "action": {"type": "string"},
              "parameters": {"type": "object"}
            },
            "required": ["intent", "action", "parameters"]
          }
        }
      }
    },
    "actions": {
      "Authorize_Requester": {
        "type": "If",
        "expression": "@equals(triggerBody().parameters.requesterRole,'HelpdeskAgent')",
        "actions": {
          "Create_Ticket": {
            "runAfter": {},
            "type": "OpenApiConnection",
            "inputs": {
              "host": {"connectionName": "dataverse", "operationId": "CreateRecord", "apiId": "/providers/Microsoft.PowerApps/apis/shared_commondataservice"},
              "parameters": {"entityName": "incidents", "item": {"title": "@{triggerBody().parameters.title}", "severitycode": "@{triggerBody().parameters.impact}"}}
            }
          },
          "Start_Approval": {
            "runAfter": {"Create_Ticket": ["Succeeded"]},
            "type": "OpenApiConnection",
            "inputs": {
              "host": {"connectionName": "approvals", "operationId": "StartAndWaitApproval", "apiId": "/providers/Microsoft.PowerApps/apis/shared_approvals"},
              "parameters": {"approvalType": "ApproveReject", "title": "@{triggerBody().parameters.title}", "assignedTo": "@{triggerBody().parameters.approver}"}
            }
          }
        },
        "else": {
          "actions": {
            "Reject": {"type": "Terminate", "inputs": {"runStatus": "Failed", "runError": {"code": "AccessDenied", "message": "Requester not authorized"}}}
          }
        }
      },
      "Return_To_Copilot": {
        "runAfter": {"Authorize_Requester": ["Succeeded", "Failed"]},
        "type": "Response",
        "inputs": {"statusCode": 200, "body": {"ticketId": "@{outputs('Create_Ticket')?['body/id']}", "approval": "@{outputs('Start_Approval')?['body/outcome']}"}}
      }
    }
  }
}
Bind this flow to the action topic using a secured connection reference. Pass parameters as a JSON object so you can evolve fields without changing the topic. Return only what the bot needs to tell the user and update the case.
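The return path deserves the same discipline. A sketch of a response body the topic can render directly, assuming the fields returned by the flow above plus a ready-made user message; anything else stays in the flow run history:
{
  "ticketId": "CAS-01234",
  "approval": "Pending",
  "userMessage": "Your request is logged as CAS-01234 and is waiting for approval from the application owner."
}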
Security and privacy
Apply sensitivity labels in Microsoft Purview to your SharePoint libraries and enforce encryption at rest and in transit. Configure default labels on new document libraries so authors do not forget. Use label scoped search so the bot cannot read documents above its sensitivity. Pair this with DLP policies that prevent connectors from exfiltrating sensitive content to external locations.
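On the Power Platform side, DLP works by grouping connectors. A simplified illustration of the grouping to configure in the Power Platform admin center; the JSON shape is a review artifact, not an API contract, and the connector names are examples:
{
  "policyName": "Helpdesk Copilot production",
  "environments": ["contoso-helpdesk-prod"],
  "businessConnectors": ["shared_sharepointonline", "shared_commondataserviceforapps", "shared_approvals", "shared_teams"],
  "nonBusinessConnectors": [],
  "blockedConnectors": ["shared_dropbox"]
}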
For Dataverse, use custom security roles that restrict the assistant to read from knowledge tables and create on the Case table. Deny delete by default. Use field level security for personal data like phone numbers. Store all automations in a dedicated solution with a service account and least privilege. Rotate client secrets on a schedule and prefer managed identities where possible.
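A privilege summary kept next to the solution makes reviews faster. A sketch written as a simple matrix rather than raw role definitions; table names, column names, and access depths are illustrative:
{
  "securityRole": "Helpdesk Copilot Service",
  "privileges": {
    "contoso_servicecatalog": {"read": "Organization", "create": "None", "write": "None", "delete": "None"},
    "contoso_knownerror": {"read": "Organization", "create": "None", "write": "None", "delete": "None"},
    "incident": {"read": "BusinessUnit", "create": "BusinessUnit", "write": "BusinessUnit", "delete": "None"}
  },
  "fieldLevelSecurity": ["contoso_phonenumber"]
}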
Enable Copilot Studio audit logging. Retain chat transcripts for the period required by your policy. Log which knowledge source answered each turn and include citation URLs and document ids. For actions, log the flow run id, inputs, outputs, and the Dataverse row keys that were created.
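One audit row per turn keeps those questions answerable later. A sketch of the shape, with illustrative identifiers:
{
  "conversationId": "00000000-0000-0000-0000-000000000042",
  "turn": 3,
  "requesterUpn": "alex@contoso.com",
  "matchedTopic": "Create helpdesk ticket",
  "answerType": "action",
  "citations": [
    {"documentId": "SPO-4821", "url": "https://contoso.sharepoint.com/sites/ITKnowledge/SOPs/ticket-intake.docx"}
  ],
  "flowRunId": "08585012345678901234567890123CU00",
  "createdRows": [{"table": "incident", "id": "f0e1d2c3-0000-0000-0000-000000000000"}]
}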
Keep a human in the loop for risky actions. Build a shared confirmation topic that reads back the action and parameters, requires a final yes from the user, and shows the applicable policy. For elevated changes require manager or application owner approval. When the bot is not confident, or when a deny rule triggers, route to a human queue with everything an agent needs to finish the job.
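A confirmation that reads back exactly what will change earns more trust than a bare yes or no prompt. A minimal Adaptive Card sketch for the shared confirmation topic; the text, policy reference, and data values are placeholders:
{
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  "type": "AdaptiveCard",
  "version": "1.4",
  "body": [
    {"type": "TextBlock", "weight": "Bolder", "text": "Confirm: Reset MFA"},
    {"type": "TextBlock", "wrap": true, "text": "User: alex@contoso.com\nReason: Lost device\nPolicy: IT-SEC-014 applies to this change."}
  ],
  "actions": [
    {"type": "Action.Submit", "title": "Yes, proceed", "data": {"confirm": true}},
    {"type": "Action.Submit", "title": "Cancel", "data": {"confirm": false}}
  ]
}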
Example prompts
User: I need access to Salesforce as a read only user for the Marketing team by Friday.
User: Reset MFA for alex@contoso.com, I lost my device.
User: Where is the latest VPN client download for Windows 11
User: Create a helpdesk ticket for laptop running hot and shutting down after 10 minutes.
User: What is the email retention policy for deleted items
Admin: Add a new knowledge source for the IT playbook site and limit answers to that site only.
Governance checklist
- Owners named for Copilot, flows, and knowledge sources
- Purview labels applied on libraries used for grounding
- DLP rules tested for Teams, Exchange, SharePoint, Dataverse
- Service account and connection references stored in a solution
- Audit logging enabled and retention set
- Unknown intent capture and review cadence agreed
- Escalation path and SLAs documented
- Emergency stop and rollback plan ready
Final mile tips
Keep conversation controls simple. Prefer buttons and quick replies to free text for slot capture. Invest in the first confirmation screen so users trust the bot before it changes anything. Tune retrieval often, since knowledge drift is the largest source of wrong answers. Track resolution rate, time to resolution, percentage of grounded answers with citations, and escalation rate as your core KPIs.
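Pin down the KPI formulas before launch so the numbers are not argued about afterwards. One way to define them; adjust the denominators to how your team counts conversations:
{
  "resolutionRate": "conversations resolved without human handoff / total conversations",
  "timeToResolutionMinutes": "median of (resolved timestamp - first user message timestamp)",
  "groundedAnswerRate": "answers with at least one citation / total knowledge answers",
  "escalationRate": "conversations handed to an agent / total conversations"
}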