While there are many reasons AI projects fall short, data privacy and security concerns are often a major culprit. In many cases, a support team evaluates and pilots an AI solution based on features, but their security, compliance and legal teams shut the project down due to concerns about exposing private data to a public AI model. Gartner predicts that by the end of 2026, businesses will abandon 60% of AI projects unsupported by AI-ready data.
Data security must be front and center to protect AI investments. Support leaders who work with their security and compliance teams from pilot to production will maximize the value of AI, improving internal operations and external customer satisfaction.
Support teams are realizing they won’t see the full value of their AI investments unless they factor in data security from the beginning. In a recent Deskpro survey of over 220 support and IT leaders, 81% of respondents said security is “very important” or “critical” when selecting support technology, and 78% involve their IT or security teams in the final decision-making process.
Security, compliance, and legal teams help support teams understand all applicable data protection requirements, including how data must be stored, processed, and reused. These requirements may limit or prevent the support team’s use of software that relies on AI services hosted by the public cloud hyperscalers: AWS, Microsoft Azure, and Google Cloud. Data sent from a CX or help desk platform to a large language model (LLM) service must typically be decrypted for processing, exposing it to the model provider–raising data security concerns, especially for organizations in regulated industries and those with strict security and data privacy requirements.
Support teams in those organizations should choose an AI solution they can configure in a controlled environment, such as a private cloud, sovereign cloud, or their own on-premises enterprise data center.
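As an illustration of what "configuring in a controlled environment" can look like in practice, one common pattern is to point the support platform at an OpenAI-compatible chat endpoint hosted inside the organization's own network rather than a public API. A minimal sketch–the hostname, model name, and ticket reference below are hypothetical, and nothing is actually sent:

```python
import json

def build_chat_request(endpoint_base, model, user_message):
    """Assemble a chat-completion request for a self-hosted,
    OpenAI-compatible model server running inside a private cloud
    or on-premises. This only builds the payload; sending it is
    left to the caller's HTTP client."""
    url = f"{endpoint_base.rstrip('/')}/v1/chat/completions"
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return url, json.dumps(payload)

# Pointing at an internal host keeps ticket data inside the
# organization's network boundary (hostname is hypothetical).
url, body = build_chat_request(
    "https://llm.internal.example", "llama-3-8b", "Summarize ticket #123"
)
```

Because the request never leaves the organization's infrastructure, the same prompt that would worry a compliance team on a public API becomes an internal network call.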
Controlled environments enable organizations to meet the security and compliance requirements of their business, industry, and region. For example, sovereign clouds store, process, and back up data at data centers within specified geographic boundaries, with technical controls in place to prevent the data from being accessed or processed outside the region. Most sovereign cloud providers are now also investing in building out AI capabilities within these environments. This helps organizations align with local data protection and sovereignty requirements, such as NESA and ISR in the United Arab Emirates, while still benefiting from AI innovation.
Controlled environments also eliminate the risk of private data being exposed to a public AI model. They open up more opportunities for support teams to launch and scale AI solutions and use cases, including behind-the-scenes agent assistance and customer self-service.
When choosing a controlled environment for their AI solution, support teams also need to factor in cost. While some teams worry that running an AI model in a private environment will increase infrastructure and operational costs, there are ways to keep the cost down while still keeping data secure.
In regulated environments, the goal is predictable, defensible costs that align with security and compliance requirements.
Choosing a secure environment sets a strong foundation for any AI initiative–but support teams often still need to complete a pilot program to prove their chosen AI solution is safe, scalable, and valuable. This is where 95% of generative AI projects stall out and fail to deliver, according to a recent MIT study. To avoid getting stuck in the pilot phase, support leaders should lay the groundwork–starting with data infrastructure–before going live.
Setting up the proper data infrastructure before going live with AI is a huge advantage–because the organization controls where the data lives, the support team can connect the AI model with relevant internal data without worrying about exposing it to a public cloud.
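One concrete way to "connect the AI model with relevant internal data" is retrieval-augmented generation: look up the most relevant internal articles for each question and include them in the prompt, so private documents never leave the controlled environment. A minimal keyword-overlap sketch–the knowledge-base entries are invented, and a real deployment would use embeddings rather than word matching:

```python
import re

def tokens(text):
    """Lowercased word tokens with punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, docs, k=2):
    """Rank internal knowledge-base articles by naive keyword
    overlap with the query and return the top k."""
    q = tokens(query)
    return sorted(docs, key=lambda d: len(q & tokens(d)), reverse=True)[:k]

def build_prompt(query, docs):
    """Assemble the prompt for the in-house model: retrieved
    internal context first, then the customer's question."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this internal context:\n{context}\n\nQuestion: {query}"

kb = [
    "Password resets are handled via the self-service portal.",
    "Refunds are processed within 5 business days.",
    "VPN access requires manager approval.",
]
prompt = build_prompt("How do I reset my password?", kb)
```

Because retrieval and prompt assembly both happen inside the organization's infrastructure, the model sees only the context the team chooses to include.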
Some organizations go further and build their own custom models, typically starting with open source LLMs such as Meta’s Llama or DeepSeek. Training the AI model on organizational information and internal data lets support teams deliver more personalized service rather than relying on generic responses from a public model–something that’s especially important with AI chatbots. “If you can give an AI chatbot access to the appropriate corporate data, you can have that chatbot deliver very timely and accurate answers,” says Brad Murdoch, CEO of Deskpro. “And if you’re doing that, it frees up human agents to focus on more complex, challenging issues, dramatically increasing productivity.”
With any AI use case, the quality of the data used to fine-tune or train the model will impact the quality of the outputs. The better the outputs, the better the productivity, customer experience, and ROI.
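Since output quality tracks data quality, a lightweight cleanup pass over the knowledge base before fine-tuning or retrieval pays off. A sketch of the obvious first steps–the threshold and sample entries are illustrative only:

```python
def clean_kb(entries, min_words=4):
    """Normalize whitespace, drop near-empty entries, and remove
    exact duplicates while preserving order: basic hygiene that
    improves whatever model consumes the data downstream."""
    seen, cleaned = set(), []
    for entry in entries:
        text = " ".join(entry.split())       # collapse stray whitespace
        if len(text.split()) < min_words:    # too short to be useful
            continue
        key = text.lower()
        if key in seen:                      # exact duplicate
            continue
        seen.add(key)
        cleaned.append(text)
    return cleaned

raw = [
    "  Reset passwords via   the portal.  ",
    "reset passwords via the portal.",
    "N/A",
    "Refunds take 5 business days to process.",
]
cleaned = clean_kb(raw)
```

Even this small pass removes the duplicate and placeholder entries that would otherwise skew fine-tuning or retrieval results.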
From chatbots to agent assist tools to automated workflows, there is a wide range of AI solutions that have the potential to help support teams resolve issues more efficiently and increase customer satisfaction. But support teams will continue to be at risk of having their AI projects shut down–and failing to see a return on investment–if they don’t take data protection seriously from the beginning.
Support leaders must take a purposeful approach to any AI project, mapping out data requirements with their compliance and legal teams, establishing the necessary infrastructure for their AI, testing and learning from a pilot program, scaling the program, and fine-tuning the AI model. This approach reduces risk and increases the potential uses for AI, allowing support teams and their organizations to get the full benefit of the technology.