By Fraser Dear, Head of AI and Data Innovation. Posted 24th February 2025
Generative AI has revolutionised the way we approach data, creativity, and automation. However, one of the challenges that comes with this powerful technology is the phenomenon known as AI hallucinations. As businesses increasingly look to generative AI to drive innovation and efficiency, understanding and mitigating AI hallucinations is essential for ensuring reliable and accurate outputs.
AI hallucinations occur when an artificial intelligence system generates outputs that are incorrect, nonsensical, or unsubstantiated by its training data. These hallucinations can manifest in various forms, such as fabricated facts, inaccurate translations, or content that does not serve the intended outcome of the solution. The root cause often lies in the model’s attempt to produce coherent and contextually appropriate outputs, even when it lacks sufficient or accurate data.
Generative AI models, while immensely powerful, are not infallible. They rely on patterns learned from large datasets, and when faced with ambiguous or incomplete information, they may ‘hallucinate’ plausible but incorrect responses. This can pose significant challenges, especially in business applications where accuracy and reliability are critical.
Understanding where AI sources its data is crucial in mitigating hallucinations. Generative AI models are trained on extensive datasets that encompass a wide range of information. However, not all data sources are of equal quality or reliability. Some may contain outdated, biased, or incorrect information, which can propagate into the AI’s outputs.
For business users, it is essential to scrutinise the data sources used in their AI solutions. Reliable and up-to-date data can significantly reduce the likelihood of hallucinations. Moreover, transparency in data sourcing allows users to trace the origin of information, providing a layer of accountability and trustworthiness.
Microsoft has been at the forefront of developing solutions that leverage generative AI, including Copilot Chat, Copilot Studio, and AI Foundry. Each of these tools offers unique capabilities that can enhance business operations, but they also necessitate robust measures to mitigate AI hallucinations.
Copilot Chat integrates AI-driven support within communication platforms, assisting users with tasks, information retrieval, and decision-making. To tackle hallucinations within Copilot Chat, Microsoft employs several strategies:
Copilot Studio enables the creation of customised AI models tailored to specific business needs. Mitigating hallucinations in Copilot Studio involves:
AI Foundry provides a platform for developing and deploying AI solutions at scale. Key measures to address hallucinations in AI Foundry include:
A simple example would be the creation of a generative AI experience using Copilot Studio to perform a Retrieval-Augmented Generation (RAG) scenario. This translates into an end-user chat experience grounded in a provided data set, such as an organisation’s policies, reference materials or transactional data. The user can ask questions about that data set, and the solution interprets the question, finds the relevant information in the data, and uses it to answer.
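To make that flow concrete, here is a minimal Python sketch of the retrieval and prompt-assembly steps. The policy snippets, file names and naive keyword-overlap scoring are illustrative assumptions for this sketch only; Copilot Studio and production RAG solutions use semantic indexing rather than this toy approach.

```python
# Toy RAG flow: retrieve the most relevant snippet from a provided data set,
# then assemble a prompt that combines the user's question with that context.
# The documents and keyword-overlap scoring are illustrative assumptions.

POLICY_SNIPPETS = {
    "annual-leave.md": "Employees accrue 25 days of annual leave per calendar year.",
    "expenses.md": "Expense claims must be submitted within 30 days with receipts attached.",
    "remote-work.md": "Staff may work remotely for up to three days per week.",
}

def retrieve(question: str, top_k: int = 1) -> list[tuple[str, str]]:
    """Rank snippets by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        ((len(q_words & set(text.lower().split())), name, text)
         for name, text in POLICY_SNIPPETS.items()),
        reverse=True,
    )
    return [(name, text) for _, name, text in scored[:top_k]]

def build_prompt(question: str) -> str:
    """Combine the user's question with only the retrieved excerpts."""
    context = "\n".join(f"[{name}] {text}" for name, text in retrieve(question))
    return f"Answer the question using only the excerpts below.\nExcerpts:\n{context}\n\nQuestion: {question}"

print(build_prompt("How many days of annual leave do I get?"))
```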
In the example of an organisation’s policies, the design intent should be that responses are factual and come only from the data provided. It is critical that the response is accurate and that the user can verify the answer against citations from the provided data set.
This is where the controls outlined previously in Copilot Studio enable the solution to return a response that is grounded in the data set, rather than a plausible-sounding answer that contains errors or is unsupported by the data.
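As a small illustration of that citation requirement, the sketch below rejects any generated answer that fails to cite one of the supplied documents. The bracketed-citation convention and document names are assumptions made for this example, not a built-in Copilot Studio feature.

```python
import re

# Documents supplied to the solution; any citation must point back to one of these.
KNOWN_SOURCES = {"annual-leave.md", "expenses.md", "remote-work.md"}

def is_grounded(answer: str) -> bool:
    """Accept an answer only if it cites at least one supplied document and nothing else."""
    citations = re.findall(r"\[([^\]]+)\]", answer)
    return bool(citations) and all(c in KNOWN_SOURCES for c in citations)

print(is_grounded("You accrue 25 days of annual leave per year. [annual-leave.md]"))  # True
print(is_grounded("You accrue 40 days of annual leave per year."))                    # False: no citation to verify
```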
For a Microsoft Azure AI Foundry implementation of the same use case, developers gain greater control through prompt engineering, along with direct control of two parameters that influence the behaviour and output quality of AI models:
The temperature setting controls the randomness of the AI-generated responses. A lower temperature results in more deterministic and focused outputs, while a higher temperature increases creativity and variability in the responses.
Top P (also known as nucleus sampling), on the other hand, limits the model’s choices to the smallest set of candidate tokens whose combined probability exceeds the chosen threshold: a lower Top P restricts the model to its most likely options, while a higher value allows less probable tokens into the response. Together, these controls enable developers to fine-tune the balance between accuracy, creativity, and reliability in the final solution.
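As a sketch of what this looks like in practice, the call below sets both parameters through the Azure OpenAI Python client commonly used with Azure AI Foundry deployments. The endpoint, API key, API version and deployment name are placeholders to substitute with your own values, and the specific parameter values shown are illustrative rather than recommendations.

```python
from openai import AzureOpenAI  # pip install openai

# Placeholder endpoint, key, API version and deployment name: substitute your own.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-api-key>",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # the model deployment created in Azure AI Foundry
    messages=[
        {"role": "system", "content": "Answer only from the supplied policy excerpts."},
        {"role": "user", "content": "How many days of annual leave do I get?"},
    ],
    temperature=0.2,  # low temperature: more deterministic, focused answers
    top_p=0.95,       # nucleus sampling threshold: restricts choices to the most probable tokens
)

print(response.choices[0].message.content)
```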
Generative AI has immense potential for transforming business operations, but it is not without its challenges. AI hallucinations can undermine the reliability and accuracy of AI-driven solutions, making it imperative for businesses to understand and address this phenomenon. By ensuring high-quality data sourcing, leveraging Microsoft’s robust AI tools, and continuously refining AI models, businesses can harness the power of generative AI while minimising the risks associated with hallucinations.
Staying informed and proactive is key to successful and responsible AI adoption. Microsoft’s suite of generative AI solutions, including Copilot Chat, Microsoft 365 Copilot, Copilot Studio, and AI Foundry, offers powerful tools to navigate this journey, enabling businesses to drive innovation with confidence and precision.
By partnering with BCN, you can leverage the power of AI to drive growth, enhance services, and stay ahead as AI becomes integral to our daily lives. Contact us today, and together, we can shape a future where technology acts as a force for good, enriching our understanding of the past and improving our prospects for the future.