Responsible Data Practices in High-Risk Industries Ahead of the AI Act | September Tech Leaders Roundtable Recap

Published 9th October 2024



By Anna Massey


Last month’s Tech Leaders Roundtable was an absolute eye-opener, and I’m still reflecting on the incredible discussions we had. The topic, “The Countdown to the AI Act: Ensuring Responsible Data Practices in High-risk Industries,” couldn’t have come at a better time. With the AI Act’s first obligations due to take effect in the coming months, it was great to see so many data scientists and industry experts eager to explore what’s coming and how we can prepare.

We were privileged to hear from a fantastic speaker: Carla Canino, CEO and CPO of Kindlee’s. She brought unique insights into responsible AI practices and the complexities of AI governance, particularly in high-risk industries like Fintech, Healthtech, and EdTech.


Responsible AI

The AI Act has been on everyone’s radar for a while, but as its enforcement date draws closer, the urgency to ensure compliance is becoming very real. The legislation will place new requirements on AI systems, especially those in industries where risks to human rights, safety, and well-being are most significant. Fintech, Healthtech, and EdTech are at the forefront of this, and our discussions were focused on how to ensure that AI algorithms are trustworthy, transparent, and human-centric while still allowing for innovation.

The key challenge is clear: how do organisations strike the right balance between fostering innovation and ensuring compliance with these regulations? Our roundtable aimed to provide some answers, offering a space for data scientists and tech leaders to share ideas, best practices, and concerns about the road ahead.


Carla Canino on Balancing Innovation with Responsibility

Carla Canino opened the evening with her unique perspective on blending innovation with responsibility. With over 15 years of global experience in sectors like payments, digital assets, and retail, Carla’s approach to AI is grounded in both technical excellence and a deep commitment to inclusivity.

What stood out in her talk was her emphasis on human-centric AI. Carla shared how her work with organisations like the US Federal Reserve and the W3C group on web payment accessibility has shaped her view on building AI systems that prioritise not just commercial success but also accessibility and user experience. She made it clear that compliance with regulations like the AI Act should not be seen as a barrier to innovation but rather as an opportunity to develop AI that is robust, inclusive, and effective.

One point Carla drove home was the need to avoid treating regulation as a box-ticking exercise. Instead, she encouraged organisations to view the upcoming AI Act as a framework that can drive more thoughtful product development. In her words, “Regulations like the AI Act aren’t here to limit us—they’re here to push us to innovate responsibly.” Her insights offered a roadmap for how companies can navigate the complex intersection of regulatory adherence and cutting-edge innovation.


The Group’s Diverse Reactions

One of my favourite moments of the evening was when we opened the floor for discussion. It was fascinating to see how different organisations are approaching the challenges of the AI Act. Opinions were divided, especially on how well a single governance approach can stretch across sectors. Some attendees pointed out that AI’s impact varies so much by industry that a one-size-fits-all approach to governance is hard to sustain.

Participants also delved into pressing issues like algorithmic bias and the importance of transparency. Many agreed that while AI systems have the power to automate decisions at an unprecedented scale, they can inadvertently perpetuate existing biases in data. This is particularly dangerous in high-risk industries, where biased algorithms can lead to unfair outcomes—whether that’s denying someone a loan or making incorrect healthcare recommendations.
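
To make that concrete, here’s a minimal Python sketch of the kind of quick check a team might run: comparing a lending model’s approval rates across demographic groups and flagging a large gap for investigation. This is my own illustration, not something shown at the roundtable, and the data and tolerance below are entirely hypothetical.

```python
from collections import defaultdict

# Hypothetical audit sample: (group, approved) pairs from a lending model.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved  # True counts as 1

# Approval rate per group, then the gap between the best- and worst-served groups.
rates = {g: approvals[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())

print("Approval rates by group:", rates)
print(f"Parity gap: {gap:.2f}")
if gap > 0.1:  # hypothetical tolerance; real thresholds are context-specific
    print("Warning: approval rates diverge - review features and training data.")
```

Of course, this demographic-parity gap is only one of several fairness definitions, and a real audit would look at far more than a single metric, but even a simple check like this can surface problems early.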

It was clear from the discussion that, while everyone is aware of the need to be compliant, no one has all the answers yet—and that’s okay. This is uncharted territory for many, and part of the solution will come from ongoing dialogue and knowledge sharing within the community.


My Key Takeaways

As I reflect on last month’s discussion, a few key themes stand out to me:

  • Start early: Don’t wait for the AI Act to be enforceable before taking action. Organisations need to start auditing their AI models now and ensure they’re transparent and compliant.
  • Collaboration is key: Successful AI governance requires input from across teams—from data scientists to compliance officers, everyone needs to be involved in the process.
  • Transparency matters: Whether it’s ensuring that algorithms are explainable or keeping track of how decisions are made, transparency will be crucial in meeting the demands of the AI Act (see the logging sketch after this list).
  • Inclusion drives innovation: Carla’s focus on accessibility and human-centric design was a powerful reminder that building AI systems that work for everyone isn’t just good ethics—it’s good business.
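
On the transparency and auditing points, here’s one illustrative sketch of what “keeping track of how decisions are made” can look like in practice: recording the inputs, model version, and outcome behind each automated decision so a human reviewer can later reconstruct it. All names, fields, and values here are hypothetical; it’s a starting point, not a compliance recipe.

```python
import json
from datetime import datetime, timezone

def log_decision(applicant_id: str, features: dict, score: float,
                 approved: bool, model_version: str) -> str:
    """Serialise one automated decision as an auditable JSON record."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_id": applicant_id,
        "model_version": model_version,  # ties the outcome to a specific model artefact
        "inputs": features,              # the exact features the model saw
        "score": score,
        "decision": "approved" if approved else "declined",
    }
    return json.dumps(record)

# Hypothetical usage:
print(log_decision("app-001", {"income": 42000, "tenure_years": 3},
                   score=0.71, approved=True, model_version="credit-risk-v1.4.2"))
```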


What’s Next?

We have plenty more events coming up, including our next session, “Metadata: From Boring Stuff for Data Folks to Core Business Enabler,” where we’ll explore how organisations can turn metadata from an overhead into a driver of innovation and better data governance by optimising the Information Supply Chain. Seats are limited, but if you’re interested in attending, reach out to me at anna.massey@interquestgroup.com!

Whether you’re already deep into AI or just starting to explore its potential, we’re here to help you navigate the complexities of AI governance. Let’s connect and keep the conversation going as we prepare for the new era of responsible AI.