Posted 28th January 2025
The rise of AI apps for productivity in the workplace has been rapid and hugely transformative across almost every industry over the last 12 months. The support they offer every role and team member to automate, streamline and augment daily tasks is boosting output and uncovering new ways of working at such a rate that it is becoming difficult to imagine a future without this technology in place.
“Data security incidents due to the use of AI applications rose from 27% in 2023 to 40% in 2024.”
However, for those responsible for data security within organisations, this pace of change means that the safety checks and balances that normally accompany such widespread adoption may be failing to keep up. An independent research survey carried out in 2024 highlighted this as a growing concern, reporting a marked rise in data security incidents caused by the use of AI applications, from 27% in 2023 to 40% in 2024.
User training and regular updates across the whole company should be used to share AI practices and processes for every role.
“Businesses must be realistic – although they can try, the reality is it will be impossible to stop unauthorised use of AI altogether, so how do we mitigate any risks and impact?”
Simon Edwards, Head of Managed Security Services
Who is using company data, and why, should be clearly documented, with a reference to the AI tools they apply when doing their work. This way, the people responsible for the company’s data security can be made aware and combine that information with the alerts and notifications they already have set up across the network. It really is about avoiding unknowns and surprises wherever possible.
The advent and development of AI has built on a familiar theme of the last two decades in technology. Through the democratisation of innovative tech, and its widespread use across almost every commercial domain, we have all been presented with entirely new ways of operating that often blur the lines between work and personal applications. This means we are all intuitively aware of, and capable of identifying, improvements and shortcuts, with the devices we use often crossing over both parts of our lives. Business leaders are acutely aware of the benefits of this integrated way of working: a massive 85% of them understand that it is now critical for employees to use these tools to maintain and develop a competitive edge.
“85% of decision makers say it is now critical for employees to use these tools to maintain and develop a competitive edge.”
There can be little argument against the use of AI productivity tools such as Microsoft Copilot, which offer incredible opportunities to improve the company bottom line. The cost savings and time gains alone have become essential elements in emerging SME strategies across the board. However, there is a new tension between these productivity gains and security blind spots that could cripple a business in the event of a successful cyberattack routed through an unprotected AI app. A new balance is required, one that starts with establishing a comprehensive understanding of how, why, when and where AI apps are being used in the day-to-day running of a company.
Protecting IT environments is a challenge that already involves the oversight of hundreds, if not thousands, of vulnerable points in your network. Devices, internal systems, external tools and communications technology are all elements of your environment, covered by an integrated security solution managed by an IT team or a Managed Service Provider such as BCN. If new tools and apps are added to the workflow, using your company data in an unseen and unauthorised way, they immediately present a new vulnerability to threats and attacks.
This is a multi-faceted issue too, with the same research data showing alarming statistics for exactly how this unauthorised AI use is infiltrating company systems: 53% of people regularly log in with personal credentials for work purposes, and 48% use a personal device when using AI for work. All of this exposes company information, details and data to a vast unknown digital space.
The decades of knowledge and experience shared across the BCN teams have demonstrated that the most important piece of the cyber security puzzle is always people. Our whole partner philosophy is built on the foundation that digital technology should be viewed as a tool for people, helping their systems, processes and roles work in the best way. That means providing them with constant, comprehensive information and instruction on how these tools work and, increasingly, on the dangers involved when they are used incorrectly or for the wrong purpose. User awareness training has to remain regular, consistent and constantly updated.
Most company leaders and decision makers share this view for AI tools, with an almost unanimous 87% agreeing that they are willing to spend time and money training their employees in secure practices. And, as we are seeing, the constant development and widespread adoption of these tools means this has to be addressed immediately, with a long-term strategy put in place for the greatest security and stability.
“87% of decision makers are willing to spend time and money training their employees in secure practices.”
New technology always means new responsibilities and ownership for security policies too. These can be shared across key stakeholders in the company to a certain extent but aligning with a managed service provider such as BCN is the only way to ensure that the right guardrails are in place. Our experience has shown us that any AI App governance framework must be built around three foundation blocks.
Stay aware of the AI apps – and the data they use – that your company and team members are implementing in their roles. Data security tools should be deployed to identify where AI is being used, building a list of the apps in use and the risks they pose. Mapping a definitive data flow for AI apps also makes it easier to demonstrate compliance when requested.
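As a minimal sketch of what that discovery step can look like in practice, the snippet below counts requests to known AI services per user from web proxy logs. The domain list, log format and function name are all hypothetical assumptions for illustration; a real deployment would use a maintained CASB or threat-intelligence feed and your actual log schema.

```python
from collections import Counter

# Hypothetical set of domains associated with popular AI services.
# A real deployment would pull this from a maintained feed rather
# than a hard-coded list.
AI_DOMAINS = {
    "chat.openai.com",
    "gemini.google.com",
    "claude.ai",
    "copilot.microsoft.com",
}

def inventory_ai_usage(proxy_log_lines):
    """Count requests to known AI services per (user, domain) pair."""
    usage = Counter()
    for line in proxy_log_lines:
        # Assumed log format: "<timestamp> <user> <domain> <path>"
        parts = line.split()
        if len(parts) < 3:
            continue
        user, domain = parts[1], parts[2]
        if domain in AI_DOMAINS:
            usage[(user, domain)] += 1
    return usage

logs = [
    "2025-01-28T09:00:01 alice chat.openai.com /c/abc",
    "2025-01-28T09:00:05 bob intranet.example.com /home",
    "2025-01-28T09:01:12 alice chat.openai.com /c/def",
]
print(inventory_ai_usage(logs))
```

Even a simple inventory like this turns “unknown AI use” into a concrete list of users, apps and frequencies that the security team can review.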
Create and enforce strong guidelines on how your company authorises the use of AI apps and when they should be prohibited. This must include procedures for blocking and restricting use where appropriate. Even if an app is sanctioned, there must be guidelines and granular policies to prevent certain sensitive data from ever being accessed, while non-sensitive data is allowed through.
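The following sketch shows one way such a granular policy gate could work: a check that runs before a prompt reaches an AI app, blocking unsanctioned apps outright and blocking sensitive data even for sanctioned ones. The app names and detection patterns are illustrative assumptions; a real policy engine would rely on DLP classifiers and sensitivity labels rather than a pair of regexes.

```python
import re

# Illustrative deny patterns for data that should never reach an AI app,
# even a sanctioned one. Real policies would be far more extensive.
SENSITIVE_PATTERNS = {
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),
}

# Hypothetical allow-list of sanctioned AI apps.
SANCTIONED_APPS = {"copilot"}

def check_prompt(app_name, prompt):
    """Return (allowed, reason) for a prompt headed to an AI app."""
    if app_name not in SANCTIONED_APPS:
        return False, f"app '{app_name}' is not sanctioned"
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            return False, f"prompt contains {label} data"
    return True, "ok"
```

The key design point is the two-tier decision: first the app itself is authorised or blocked, and only then is the content inspected, so non-sensitive data flows through sanctioned tools without friction.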
The risks and threat landscape are evolving daily. Ensuring that your company policies and procedures for AI app security remain best practice requires regular and detailed assessment. This can also be combined with trend reporting and third-party industry information to always provide the big picture.
Vendor and partner choice will always be one of the most important pillars of any cyber security defence. For AI apps it is essential. With so many open-source and browser-based AI apps and tools offering little or no protection against the wider sharing and use of your data, compliance and sensitive data integrity have to be the priority.
Microsoft’s development and testing of AI tools for business applications was one of the most impressive success stories in the space last year, marking the point at which the advantages of AI moved beyond large corporations to SMEs.
Perhaps most importantly, it opened up the intuitive Copilot capability for wholesale use across existing Microsoft 365 apps. Crucially, the data handling and privacy policies of Copilot ensure that prompts and responses are treated as your own data and are therefore never used to train Large Language Models (LLMs). This keeps your data confidential and out of the public domain, something that most browser-based and open-source AI tools can never guarantee. It also leverages your investment in existing Microsoft subscriptions to offer the power of AI for productivity with the confidence that comes from industry-leading security.
Great security posture for IT environments comes from meticulous planning. And the best planning means gathering as much information as you can to align with specific knowledge and experience for implementation. Once again, people will be the decisive factor. Understanding the tools they need, or want, to use and integrating them with your processes and workflow in a secure and productive way is the goal to work towards.
BCN has been at the forefront of applied AI tools for our partner clients for many years, and we always conduct thorough research, testing and rollouts before integrating them into IT infrastructures. We are one of the very few MSPs in the country to hold all six Microsoft partner accreditations for the modern workplace, and our experience as a Microsoft partner for Data & AI is a fantastic resource for our clients to rely on.
Talk to our dedicated team today to get a real understanding of how vulnerable your AI apps and tools could be making your business.