SharePoint Agent and Responsible AI in Mental Health Support
Introduction
Balancing AI safety measures with practical functionality for SharePoint Agents is key for some use cases, because the inbuilt content safety checks against the four categories (hate, sexual, violence, and self-harm) cannot be customised. A particular use case we wanted to explore was a mental health support agent in the form of a virtual mental health first aider. My partner in crime Lee Ford and I explored what we could do without tripping the Responsible AI principles and flagging the Content Safety categories. Responsible AI protections are essential, and finding ways to enable compassionate assistance without violating the safety rules was the challenge. See Data, Privacy, and Security for Microsoft 365 Copilot for more details on the Content Safety behind SharePoint Agents and Microsoft 365 Copilot. As that documentation puts it on Responsible AI: Microsoft adheres to its principles of responsible AI development, along with broadly applicable privacy laws and standards, all of which promote the ethical treatment of sensitive data and reinforce a company's control over its data.
The alternative was to use Azure AI Foundry, where we could configure and customise the content safety behaviour for our own custom engine.
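To give a sense of what that extra control can look like, here is a minimal sketch (not our hackathon code) of a custom engine calling the Azure AI Content Safety service directly and applying its own severity threshold per category; the resource endpoint and key below are placeholders.

```python
# Minimal sketch (not the hackathon code): a custom engine calling Azure AI
# Content Safety directly, so it can apply its own severity threshold per
# category instead of the fixed filtering behind SharePoint Agents.
# The endpoint and key below are placeholders for a Content Safety resource.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://<your-content-safety-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

def is_allowed(user_text: str) -> bool:
    """Return True if the text stays below our own per-category severity thresholds."""
    result = client.analyze_text(AnalyzeTextOptions(text=user_text))
    for item in result.categories_analysis:
        # Illustrative policy: be more permissive on self-harm wording so the agent
        # can still acknowledge distress and signpost help, stricter elsewhere.
        threshold = 4 if item.category == TextCategory.SELF_HARM else 2
        if item.severity is not None and item.severity >= threshold:
            return False
    return True

if __name__ == "__main__":
    print(is_allowed("I've been feeling really low and overwhelmed at work lately."))
```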
For the SharePoint Hackathon, Lee Ford and I were thinking of creating a mental health first aider agent to help improve employee well-being and experience within the workplace. However, I was curious about how far I could go without flagging Responsible AI, given that prompts could involve harm, self-harm, and similar topics from end users trying to express frustration, or from staff suffering from depression.
Each SharePoint site comes with a default agent that can't be edited, and it won't respond to any request that fails the Responsible AI checks; the prompt is simply blocked, presumably because of harm or self-harm content.
In the gif, the mentalfirstaider is the out-of-the-box SharePoint site agent, while the MentalFirstAider is a custom agent grounded using data within the Documents library with a custom meta/system prompt.
Any mention of harm-related terms within the system/meta prompt renders the agent unusable too.
One of the instructions I tried was to direct the end user to external resources, such as the Samaritans helpline in the UK, in case of self-harm. With that instruction in place, the agent became dysfunctional.
If the system prompt specifies harm, self-harm, or violence explicitly, for example:
* Recommend external services (e.g., emergency services, charities) for additional support in case of self-harm, harm or violence
The agent won’t function. However, if I remove the terms harm, self-harm, and violence from the prompt, the agent starts responding (an illustrative rewording follows below).
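To illustrate what that rewording can look like (this is our own hypothetical phrasing, not necessarily the exact instruction we shipped), an instruction along these lines avoids the flagged terms while still signposting support:
* Recommend external services (e.g., emergency services, charities such as the Samaritans) for additional support when a colleague appears to be in distress or crisis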
View our SharePoint Hackathon 2025 submission on Project: Mental Health First Aider - SharePoint Agent embedded in SharePoint Page #61
Conclusion
The Azure AI Content Safety service is built into Copilot for Microsoft 365 and cannot be reconfigured there. By carefully wording the system/meta prompt, you can, to a certain extent, shape an agent's behaviour so that it remains helpful in situations involving self-harm.
For more control over the configuration and customisation of the safety categories, Azure AI Foundry might provide a better option, with the added freedom of choosing a different model.
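As a closing sketch, and assuming a chat model deployed in Azure AI Foundry and called through its Azure OpenAI-compatible endpoint (the endpoint, key, and deployment name below are placeholders), a custom engine can pair its own system prompt, including explicit self-harm signposting, with the content filter configuration attached to the deployment:

```python
# Minimal sketch of a custom engine agent: a model deployed in Azure AI Foundry,
# called through the Azure OpenAI-compatible API with our own system prompt.
# Endpoint, API key, and deployment name are placeholders/assumptions.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-foundry-resource>.openai.azure.com/",
    api_key="<your-key>",
    api_version="2024-06-01",
)

SYSTEM_PROMPT = (
    "You are a mental health first aider for employees. Listen with empathy, "
    "never diagnose, and always recommend external services (e.g., emergency "
    "services, or the Samaritans in the UK) when a colleague may be at risk of "
    "self-harm or harm."
)

def ask_first_aider(user_message: str) -> str:
    """Send the user's message to the Foundry-deployed model with our system prompt."""
    response = client.chat.completions.create(
        model="<your-deployment-name>",  # the name of your deployed chat model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_first_aider("I've been struggling and feel like I can't cope any more."))
```

The content filters attached to such a deployment can also be tuned in the Azure AI Foundry portal (within Microsoft's approval policies), which is where the additional control over the safety categories comes from.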