Insights from Ryse Technologies and MSDW’s Copilot Chronicles
MSDynamicsWorld’s (MSDW) monthly Copilot Chronicles series brings leading experts together to explore how organizations can navigate the rapidly evolving world of AI. In the August 2025 session, Minimizing Data Risks and Enabling Secure AI, participants gained a deep dive into the foundations of building secure, responsible AI environments.
The conversation was hosted by Jason Gumpert of MSDW and featured two seasoned experts:
- Will Hawkins – Azure AI & Fabric Engineer, Data Scientist, Copilot Studio MVP, and Founder of RitewAI.
- Steven Settle – Managing Partner at Ryse Technologies, a Boston-based consulting firm specializing in bespoke AI solutions for Microsoft Dynamics 365 Finance and Supply Chain Management.
Together, they unpacked critical risks organizations face in their AI journey and shared actionable best practices to build safe, scalable, and value-driven AI initiatives.
Meet the Speakers
Will Hawkins
Will brings a unique perspective as both a data scientist and an Azure AI/Fabric engineer. Through his work at RitewAI, he helps ISVs, Microsoft partners, and enterprise customers move from “AI-curious” to “AI-ready.” His expertise in sandboxing, AI risk mitigation, and responsible deployment framed the session’s technical foundation.
Steven Settle
As Managing Partner of Ryse Technologies, Steven leads a team that implements, upgrades, and supports Dynamics 365 environments while also developing innovative ISV products. His company’s tools—Performance Scout (a watchdog for diagnosing performance issues) and Clone Commander (a sandbox preparation and data compliance tool)—directly address the kinds of risks discussed in the webinar.
Why Secure AI Matters
The webinar highlighted a fundamental truth: AI adoption is skyrocketing, but without secure foundations, most projects fail to deliver value.
- The cost of AI predictions has collapsed. Organizations can now generate outputs faster and cheaper, creating unprecedented demand for co-pilot and agentic AI solutions.
- Data, not models, is the bottleneck. Studies show that up to 95% of generative AI projects fail due to poor data infrastructure, governance gaps, or unaddressed security risks—not because the models themselves can’t perform.
- Governance frameworks must evolve. Traditional data governance focused on reporting isn’t enough. AI interacts with, manipulates, and changes records—requiring new safeguards for compliance and security.
Key Risks in AI Development
Hawkins and Settle outlined the most pressing risks they see when companies rush AI projects into production:
- Using production data in testing – leading to PII exposure, IP leaks, and regulatory violations.
- Shadow AI tools – employees using unsanctioned apps that may inadvertently share sensitive data.
- Prompt injections and adversarial attacks – malicious inputs that trick models into exposing restricted information.
- Unsecured backups and logs – overlooked areas that often contain sensitive information.
- Incomplete anonymization – false assumptions that masking a single field is enough to protect identities.
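The incomplete-anonymization risk is easy to underestimate, so here is a minimal sketch (not taken from the webinar, with entirely invented records) of how masking only a name field can still allow re-identification: remaining quasi-identifiers such as ZIP code and birth date can be joined against an external dataset.

```python
# "Anonymized" export: the name is masked, but quasi-identifiers remain.
masked_records = [
    {"name": "***", "zip": "02110", "birth": "1984-03-07", "diagnosis": "A"},
    {"name": "***", "zip": "02110", "birth": "1991-11-22", "diagnosis": "B"},
]

# External data an attacker might hold, e.g. a voter roll or a prior breach.
public_records = [
    {"name": "Jane Doe", "zip": "02110", "birth": "1984-03-07"},
]

# Join on the quasi-identifiers (zip, birth date) to re-link identities.
reidentified = [
    {**pub, "diagnosis": m["diagnosis"]}
    for m in masked_records
    for pub in public_records
    if (m["zip"], m["birth"]) == (pub["zip"], pub["birth"])
]

print(reidentified)  # the masked record is tied back to Jane Doe
```

The takeaway matches the speakers’ point: anonymization has to consider combinations of fields, not individual columns in isolation.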
A real-world example shared by Steven underscored the stakes: a client gave offshore developers raw production data, which was later exploited in a ransomware attack after the vendor was compromised. The lesson? Every copy of your data is an attack surface.
Best Practices for Secure AI
The speakers provided practical strategies to help organizations minimize risks while accelerating AI innovation:
- Anonymize and pseudonymize data – ensuring sensitive fields can’t be reconstructed.
- Mask intelligently – so test data looks valid (e.g., realistic email formats) without exposing real values.
- Automate sandbox preparation – use tools like Clone Commander to refresh environments quickly and securely.
- Design governance beyond reporting – include promotion frameworks, immutable logs, and need-to-know access policies.
- Monitor proactively – implement alerts and scanning tools to detect violations early.
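To make the “mask intelligently” and “pseudonymize” practices above concrete, here is a hedged sketch (not a tool from the webinar) of one common approach: deterministic keyed hashing that replaces real emails with realistic-looking fakes while keeping joins across test tables consistent. The key name and email format are illustrative assumptions.

```python
import hashlib
import hmac

# Placeholder secret for the example only; in practice this would be
# stored in a secrets vault, never hard-coded in source.
SECRET_KEY = b"rotate-me"

def pseudonymize_email(email: str) -> str:
    """Return a deterministic, valid-looking fake email for a real one."""
    digest = hmac.new(SECRET_KEY, email.strip().lower().encode(), hashlib.sha256)
    token = digest.hexdigest()[:10]
    return f"user_{token}@example.com"

masked = pseudonymize_email("jane.doe@contoso.com")
print(masked)

# Deterministic: the same input always yields the same masked value,
# so referential integrity across sandbox tables is preserved.
assert masked == pseudonymize_email("JANE.DOE@contoso.com")
```

Because the mapping is keyed, the original address cannot be reconstructed without the secret, yet test data still passes email-format validation in downstream systems.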
From Sandbox to Production
One of the session’s central messages was: Safe-by-design sandboxes are the fastest path to AI value.
By creating secure, anonymized environments for prototyping, companies can:
- Experiment freely without risking compliance.
- Promote successful pilots into production systematically.
- Continuously retrain and harden models as data evolves.
This structured approach not only reduces risk but also builds trust across the enterprise, leading to faster executive approvals and greater long-term ROI.
About Ryse Technologies
Ryse Technologies helps organizations tackle complex Dynamics 365 challenges with a focus on AI-driven solutions. In addition to consulting and managed services, their ISV products—Performance Scout and Clone Commander—equip enterprises with tools to diagnose, optimize, and secure their environments.
By aligning cutting-edge AI expertise with deep ERP knowledge, Ryse empowers customers to innovate with confidence.
Watch the Webinar On Demand
If you missed the live session, you can still watch the full discussion on demand, including the knowledge check at the end.