Safeguarding Ethical AI through a Culture of Global Compliance 

Interview by Marco Cozzi, QUALCO Country Manager, Italy 


How do you think existing and pending regulations on financial compliance, corporate compliance, data protection, and AI can be reconciled in the evolution of the European single market? (Ref. European Regulation 2167 and the AI Act)

The reconciliation of regulations on financial compliance, corporate compliance, data protection, and AI in the evolution of the European single market requires careful consideration and alignment of regulatory frameworks. While the European Union (EU) has taken proactive steps by introducing regulations like GDPR (General Data Protection Regulation) and the AI Act, there is still a need to harmonise existing ones further, ensuring consistency and avoiding duplication or conflicting requirements. This can be achieved by aligning definitions, principles, and obligations across regulatory frameworks. 

Collaboration between regulatory bodies responsible for financial compliance, corporate compliance, data protection, and AI is crucial. By sharing insights, best practices, and expertise, they can develop a unified approach to regulation, considering each sector's requirements and challenges. Also, a risk-based approach allows for effective allocation of resources and targeted regulation to address specific concerns in each domain. 

Engaging relevant stakeholders, including industry associations, businesses, and experts, in the regulatory process is also essential. Consultation can help identify practical challenges, provide insights into industry dynamics, and ensure that regulations are effective and feasible. Lastly, regulatory frameworks should be flexible and adaptable to ensure that regulations remain relevant and effective as new technologies emerge and the market evolves. 


What do you see as the main cultural drivers to foster international sharing of best practices in this area, so that compliance is seen not only as a regulatory obligation to be fulfilled but as an added value for companies in terms of market competitiveness? (Do you think that a project such as the European Credit Challenge can contribute in some way to fostering this cultural process?)

The main cultural drivers to foster international sharing of best practices in compliance, particularly concerning AI, include collaboration and networking, thought leadership and education, industry associations and standards, and regulatory initiatives. These promote a culture of learning, knowledge sharing, and cooperation among professionals, policymakers, and industry leaders. Creating platforms, conferences, and events where stakeholders can come together facilitates the exchange of experiences, insights, and best practices, whereas encouraging thought leaders to publish articles, research papers, and case studies on compliance and AI contributes to the dissemination of knowledge. Additionally, investing in training data practitioners and educating students tackles skill gaps from new regulations, equipping professionals and future generations to navigate the evolving compliance and AI landscape. 

Establishing industry associations and working groups focused on compliance and AI helps develop best practice guidelines, standards, and certifications. This common framework enables companies to follow recognised practices and promotes consistency across borders. Collaboration among regulatory bodies internationally encourages the sharing of regulatory approaches and experiences, fostering a consistent understanding of compliance requirements across different jurisdictions. 

In this context, projects like the European Credit Challenge (ECC) can contribute to fostering the cultural process of international sharing of best practices. By providing a platform for professionals and organisations to showcase innovative approaches, share success stories, and learn from one another, the ECC facilitates cross-border collaboration and the adoption of effective credit management and debt collection strategies. The success of such projects depends on participant engagement, the practicality of showcased solutions, and continued support and resources dedicated to the initiative. By leveraging these cultural drivers and encouraging collaboration, compliance can be seen not only as a regulatory obligation but also as an added value for companies in terms of competitiveness in the market. Investing in AI model explainability and transparency, as mandated by regulations, drives innovation, enhances decision-making processes, and reduces risks of biased outcomes. Embracing these advancements sets companies apart from competitors and positions them as industry frontrunners. Compliance acts as a catalyst for positive change, encouraging continual improvement to meet evolving standards, and unlocking growth opportunities in a dynamic business landscape. 


In the area of management platforms, what potential risks in operational processes arise from the use of AI and Machine Learning, alongside the opportunities they offer? What risks do operators face when they increasingly resort to excessive or improper use of Artificial Intelligence?

Using AI in operational processes presents significant opportunities: streamlined operations, automation of repetitive tasks, and enhanced efficiency. Furthermore, AI and Machine Learning models facilitate proactive decision-making and issue mitigation by uncovering patterns, trends, and potential risks, even in very large datasets.

However, incorporating AI in management platforms brings potential risks. Biased training data used to build AI models can result in discriminatory outcomes or perpetuate existing inequalities. Even in cases where some bias is inevitable, it is crucial to assess and address any unfairness, as mandated by the AI Act, to promote ethical AI practices. Striving for fairness and inclusivity creates technology that benefits all individuals and communities. Social and cultural consequences may arise, particularly in hiring, promotion, or resource allocation decisions. Also, certain algorithms can be challenging to interpret, leading to a lack of transparency in decision-making processes. These risks emphasise the need for a proactive approach to address biases, enhance transparency, and implement robust data protection measures to ensure responsible and trustworthy AI implementation in management platforms.
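Fairness assessments of the kind described here can start with simple group-level metrics. As an illustrative sketch (the decisions, group labels, and metric choice below are invented for the example, not drawn from any real system), the demographic parity difference compares the rate of positive outcomes between groups:

```python
# Illustrative fairness check: demographic parity difference.
# Compares the rate of positive model outcomes across groups;
# values near 0 suggest similar treatment, large gaps warrant review.

def demographic_parity_difference(outcomes, groups, positive=1):
    """outcomes: model decisions; groups: matching group labels."""
    rates = {}
    for g in set(groups):
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(1 for o in members if o == positive) / len(members)
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical loan-approval decisions for two applicant segments.
decisions = [1, 1, 0, 1, 0, 1, 1, 0]
segments  = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(decisions, segments)
print(f"Demographic parity difference: {gap:.2f}")  # 0.75 vs 0.50 -> 0.25
```

A single metric never proves fairness on its own; in practice such checks are run across several metrics and protected attributes, and large gaps trigger a deeper review of the training data.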

Moreover, significant effort is required to recalibrate models as their underlying data becomes out of date. This becomes more critical as the number of AI models that underpin business processes increases. Merely building the models is not sufficient. A dedicated strategy for AI and data maintenance that facilitates the periodic refresh of the models is crucial to ensure their continued accuracy and relevance.
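Detecting when a model's data has drifted out of date is often automated with a distribution-shift measure. As a hedged sketch (the scores and thresholds below are illustrative), the Population Stability Index (PSI) compares the score distribution at training time with the current one; a common heuristic reads PSI below 0.1 as stable, 0.1 to 0.25 as moderate shift, and above 0.25 as a shift that may warrant recalibration:

```python
# Illustrative drift check using the Population Stability Index (PSI).
# Bins two samples over a shared range and sums (actual - expected) *
# log(actual / expected) over the bins; larger values mean more drift.
import math

def psi(expected, actual, bins=10, eps=1e-6):
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    def frac(data):
        counts = [0] * bins
        for x in data:
            idx = min(int((x - lo) / width), bins - 1)
            counts[idx] += 1
        return [c / len(data) + eps for c in counts]
    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Hypothetical scores: training-time sample vs. a shifted live population.
train_scores = [0.1, 0.2, 0.25, 0.3, 0.35, 0.4, 0.5, 0.55, 0.6, 0.7]
live_scores  = [0.4, 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9]
print(f"PSI: {psi(train_scores, live_scores):.2f}")
```

Running such a check on a schedule, per model, is one concrete form the maintenance strategy mentioned above can take: a PSI breach becomes the trigger for a model refresh rather than waiting for performance to degrade visibly.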

To address these risks, operators should adopt responsible AI practices, prioritise robust data management, implement explainable algorithms that provide transparency, and stay informed about evolving regulations. By taking these measures, operators can mitigate risks and leverage the opportunities offered by AI in their operational processes. Emphasising these measures ensures that AI systems are not only compliant but also ethically sound, transparent, and dependable, fostering a more sustainable and inclusive AI ecosystem for the future.
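Explainability in practice can start with models whose decisions decompose into per-feature contributions. A minimal sketch (the feature names, weights, and applicant profile here are invented for illustration) for a linear scoring model, where every score traces back to the inputs that drove it:

```python
# Illustrative per-feature explanation for a linear scoring model.
# Each feature contributes weight * value, so any individual decision
# can be explained by listing the contributions, largest first.

WEIGHTS = {"payment_delays": -0.8, "account_age_years": 0.3,
           "open_balance_ratio": -0.5}
BIAS = 1.0

def score_with_explanation(features):
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    score = BIAS + sum(contributions.values())
    return score, contributions

# Hypothetical applicant profile.
applicant = {"payment_delays": 2, "account_age_years": 5,
             "open_balance_ratio": 0.6}
score, parts = score_with_explanation(applicant)
for name, value in sorted(parts.items(), key=lambda kv: abs(kv[1]),
                          reverse=True):
    print(f"{name}: {value:+.2f}")
print(f"score: {score:.2f}")
```

More complex models need dedicated attribution techniques, but the principle is the same: a decision an operator cannot decompose and justify is one that is hard to defend under transparency obligations.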

This interview was originally published in CVM Champs, a special issue of CV Magazine.

 
