Data governance: a major challenge for AI

As companies face the growing need to adopt artificial intelligence (AI) tools, one of the biggest challenges lies not only in security, but also in ensuring the reliability and compliance of the data on which AI depends.

The recent national survey conducted by Luxinnovation and FEDIL, in collaboration with the Luxembourg Digital Innovation Hub (L-DIH), reveals that 56% of organisations already have a data and AI governance policy in place — a clear sign of their commitment to responsible AI adoption. “This is a promising indicator of the growing recognition of the importance of governance in AI initiatives,” the study notes.

Limited confidence in data

But this also means that 44% of companies have not yet implemented such policies. “This represents a valuable opportunity for these organisations to enhance their digitalisation strategies by developing comprehensive data and AI governance frameworks, ensuring a responsible approach to adoption,” says Mickael Desloges, Senior Advisor at Luxinnovation. This is an essential step to ensure that the long-term sustainability and ethical implications of their AI initiatives are properly managed.

Companies must now see data governance not as a constraint, but as a strategic cornerstone of their AI initiatives. Mickael Desloges, Luxinnovation

According to an international study published in the autumn of 2024 by Precisely, a provider of data integrity solutions, and the LeBow College of Business at Drexel University, 62% of the 550 data and analytics professionals surveyed identified the lack of data governance as the main obstacle to AI initiatives. This is further aggravated by limited trust in data: 67% of companies still do not fully trust the data on which they base their decisions, up from 55% in 2023. “Companies must now see data governance not as a constraint, but as a strategic cornerstone of their AI initiatives,” Mr Desloges emphasises.

Assessing your cybersecurity maturity...

Some results from the Luxinnovation and FEDIL study are particularly striking. A very large majority (87%) of respondents admit to using public generative AI (GenAI) tools for professional tasks. “This widespread adoption raises significant concerns about the handling of sensitive data,” the authors warn.

It is well known that the most widely used GenAI applications (ChatGPT, Gemini, etc.) generally use the information provided by users to refine their models. This creates a potential risk of confidential data being exposed or inadvertently used. Cybersecurity risks don’t always stem from external attacks...
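To make this risk concrete, here is a minimal illustrative sketch (not from the study) of how an organisation might screen a prompt for sensitive patterns before it is pasted into a public GenAI tool. The patterns and the screen_prompt helper are hypothetical examples rather than a complete policy.

import re

# Hypothetical patterns an organisation might flag before text is sent
# to a public GenAI tool; a real policy would be far more complete.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}(?:\s?[A-Z0-9]{4}){2,7}\b"),
    "internal_ref": re.compile(r"\bCONFIDENTIAL[-_ ]\w+\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of the sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    prompt = "Summarise the CONFIDENTIAL-Q3 report and reply to j.doe@example.com"
    hits = screen_prompt(prompt)
    if hits:
        print(f"Blocked: prompt contains sensitive data ({', '.join(hits)})")
    else:
        print("Prompt cleared for use with a public GenAI tool")

In practice, such screening would complement, not replace, the formal guidelines and training discussed below.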

Help build a culture of security and encourage proactive risk management at every level of the company. Mickael Desloges, Luxinnovation

Yet, despite these challenges, cybersecurity often remains undervalued, and many organisations are unsure where to begin. This is where structured assessment tools prove essential. As part of its mission, L-DIH, in collaboration with the Luxembourg House of Cybersecurity, offers a comprehensive cybersecurity maturity assessment service, helping companies evaluate their current situation and plan meaningful improvements.

... and building a culture of security

“The goal is not just to identify vulnerabilities, but also to help build a culture of security, encouraging proactive risk management at every level of the company,” adds Desloges. He also points out that the survey highlights the urgent need for better knowledge sharing, targeted training initiatives, and robust security and governance frameworks to support the wider adoption of AI and GenAI technologies.

The study also underlines a potentially significant gap between existing data and AI governance policies and the actual use of public GenAI tools within organisations.

For instance, a third of companies that report having no data and AI governance policy still allow staff to use public GenAI tools without any formal guidelines. Surprisingly, this figure rises to over 50% among companies that do claim to have such a policy. “Are these policies truly robust, or are companies relying primarily on their employees’ assumed caution and digital literacy — possibly without specific training on the risks linked to public GenAI?” the study’s authors ask.

Do you have a project related to AI?

Contact Luxinnovation's Corporate R&D and Innovation Support department.