A glimpse into the MS-AI-P future…

MSPs must evolve into trusted AI advisors, argues N-able’s Nicole Reineke as she explores how AI expectations and opportunities are reshaping the managed services landscape

With the first wave of hype around ChatGPT and the rise of the first widely used LLMs, many were happy to experiment, accepting inaccurate output as part of the early adoption curve. Back then, it was all about the promise of the technology. Now, more than two years on and with 400 million weekly users according to a February report by Reuters, it's a different story. AI has transformed from a technology with potential into one with real-world, practical implementations. As it has matured, so have customers' expectations. They are now much savvier in their approach to AI, understanding the efficiencies it can bring but also demanding transparency about how it is used. Emerging governance is demanding the same transparency and accountability.

Customers are also looking externally for help implementing AI within their own organisation, which opens up new possibilities for MSPs equipped to offer it. MSPs, so often IT advisors to their clients, are increasingly relied upon to be ‘Chief AI Advisors’ too. But there are barriers to MSPs stepping into this role: AI has created the biggest tech skills gap in 15 years according to recruitment firm, Harvey Nash.

AI-specific skills scarce in talent pool

MSPs continue to struggle to find the right talent, let alone specialised AI talent, particularly for emerging roles such as prompt engineers and data scientists. HR specialists Deel have found that IT, tech, and AI positions are the hardest to recruit for in 2025, and that 43 percent of UK business leaders are considering hiring talent internationally instead. Demand outstripping supply also makes talent expensive: research from the Oxford Internet Institute shows that the shortage is driving salary premiums for AI roles.

MSPs can mitigate the talent shortage in a variety of ways, including upskilling current staff and leaning on vendors. Fortunately, MSPs already have a technically adept workforce, so they can look inwards, upskilling existing staff and giving them the opportunity to grow into AI roles, rather than competing for scarce external hires.

At the same time, AI can help alleviate the gaps that exist in a team. Automating compliance audits, resource optimisation, and people management within the MSP will free up engineers' valuable time.

Customers want transparent AI use

Across the industry, we’re seeing customers informing themselves on AI and its efficiencies while also seeing AI as a security risk. As a result, customers will increasingly demand transparency from MSPs, with clear explanations of how and where they use AI and what oversight it has.

MSPs and vendors must take an active role in implementing governance frameworks to help meet compliance and ethical standards. For example, organisations must make sure that their AI models, and whatever data and systems they are part of, comply with applicable privacy regulations such as GDPR. The EU AI Act makes clear that both AI vendors and their customers are responsible for compliance. Businesses will need to prioritise data privacy and security, and work with their vendors to ensure this is the case.

Of course, this isn't unique to AI; every new technology needs guardrails. We don't yet, for example, allow self-driving cars on the road without supervision. Vendors and MSPs must bring that same awareness to AI, making sure they use it efficiently and in a manner that doesn't introduce risk to themselves or to the organisation. Clear guidelines help reassure customers and provide transparency when questions arise.

AI’s role in incident response

AI is a potentially double-edged sword when it comes to incident response. Currently, security analysts carry a heavy workload, dealing with a barrage of alerts each day and risking fatigue. AI has already begun to transform this: our 2025 State of the SOC report revealed the potential to automate 70% of all incident investigations and threat remediation activity. This automation reduces the need for human intervention, freeing up the team to focus on other alerts, decreasing response times, and improving accuracy by reducing human error.

But we must also consider the security implications of making AI a staple of incident response planning. MSPs must make sure customers have a plan in place to respond to AI failures. Tabletop exercises are the best way to uncover flaws in any plan, and MSPs can run these exercises themselves before rolling the training out to customers.

Taking the leap to MS-AI-P

Customers are keen to benefit from the efficiencies AI can bring, and are worried about being left behind if they don't adopt it, but they need help with implementation and staying compliant. MSPs are in the perfect position to step into this advisory role, much as they often act as security and compliance advisors today.

Upskilling their already technically adept workforce will help MSPs mitigate the shortage of AI skills, and help bridge that gap for customers, too. Prioritising the transparent use of AI by aligning with a governance framework will make it easier to demonstrate compliance to customers, while using AI in incident response will improve the service they receive. MSPs can also offer advisory services on planning for potential AI failures, ensuring customers have a response plan in place.

MSPs need to upgrade their services so that they can deliver on their customers' needs and, in effect, become MS-AI-Ps…though we definitely need to work on a better name.

Nicole Reineke is Senior Distinguished Product Manager of AI Strategy at N-able