
Spotlight feature on Katrina Ingram, Ethically Aligned AI

Did you know that the term ‘artificial intelligence’ has been around since the mid-1950s? It has historically been synonymous with words like progress and evolution. We are moving towards a time where AI drives so much in our lives, both on the personal and professional fronts.

Raise your hand if you use Alexa, Siri, or Google to set reminders, alarms or to simply turn on the music while you cook. Give a shout if your company is using a chatbot function on any web or mobile-based platform, or talking about automating that report that’s been eating up a major chunk of your work time. 

As AI continues to advance, concerns regarding privacy, surveillance, bias, and discrimination have evolved along with it, as have questions about the philosophical and ethical limits of human judgment. Meet Katrina Ingram – Founder and CEO of Ethically Aligned AI, a company focused on helping organizations drive better outcomes in the design, development, and deployment of Artificial Intelligence (AI) systems. Katrina is on a mission to help organizations build better AI systems and help people make sound choices in selecting trustworthy AI-enabled products or services.

“I often feel like Al Gore, circa 2006 – telling people an ‘Inconvenient Truth.’ So much of the work is education because people don’t know what you are talking about right away – you have to show them what the harms are and why this work matters. It takes time to do that.”

For over two decades of her professional life, Katrina held senior executive roles in not-for-profit and corporate organizations in the technology and media sectors. Then, in 2017, she took a leap and made the decision to go back to grad school. 

At the time, AI was on every startup’s mind, and the world seemed to be moving towards an ever more ‘intelligent’ future.

“During one of the talks I attended, a prominent AI researcher shared his concerns regarding how some artificial intelligence was being built and deployed in ways that are unethical and harmful. When I asked him about who was working on the solution, he said, ‘not enough people.’”

This was a turning point in Katrina’s life. True to her curious nature, she signed up to learn more about AI and ethics. 

Katrina Ingram’s journey to ‘saving the world, one algorithm at a time’

While startups around the world are transitioning and investing to offer more and better digital AI solutions, there remains a lack of urgency towards integrating ‘ethics’ into their foundational plans. This is the gap Katrina has set out to fill, and where she thrives. Very few people in the country have the knowledge or skills required to understand the range of issues surrounding AI ethics, let alone apply appropriate solutions in an organizational context.

“Everything takes longer than you think it will. Everyone starts somewhere, so just get started.”

Ethically Aligned AI originally envisioned auditing AI systems and providing a seal of approval to companies that passed the accreditation. Over time, Katrina and her team have learned a lot about audits versus assessments, and what ultimately drives an audit or regulation. So, as is the case with most startups, Ethically Aligned AI evolved and reframed its USP to help organizations build and deploy better AI solutions, with an emphasis on education and consulting. Katrina still believes audits are important but doesn’t think the market is quite there yet, though regulations, like Bill C-27, might change things in the not-too-distant future.

Katrina is backed by an Advisory team made up of two professors from the University of Alberta who were also her academic advisors during her graduate research days, as well as a long-time friend and trusted colleague.

Ethically Aligned AI meets the Investment Readiness Program

Katrina’s entrepreneurial journey coincided with the COVID-19 pandemic, which makes her achievements all the more inspirational. 

“A wonderful social entrepreneur consultant, Anna Bubel, alerted me to the Investment Readiness Program. Anna told me ‘it will be a lot of work, but if you buckle down and focus, I think you can do this!’”

Katrina had a month to draft her application, review it, and submit it. Explaining AI ethics, and describing in very succinct terms why it poses a challenge to social impact, was a difficult task then, and it still is, Katrina recalled. Submitting the application and successfully completing the steps necessary to receive funding wouldn’t have been possible without her support system. Katrina is quick to acknowledge that there are lots of people working behind the scenes to make things happen.

Moreover, in Katrina’s experience, the funding and feasibility work that went into the Investment Readiness Program proved valuable in clarifying the business idea and plans for Ethically Aligned AI. It is what ultimately led to their first contract and took the social enterprise in a direction that wasn’t in the original plan, yet served as a cornerstone in Ethically Aligned AI’s evolution. The growth from the IRP ultimately saw Ethically Aligned AI become the pioneering Social Purpose Organization behind Canada’s first AI Ethics Micro-credential, developed with Athabasca University in Alberta in 2021.

As an experienced IRP fundee, Ethically Aligned AI’s Katrina Ingram is hopeful that more mechanisms will be developed for sharing her journey with other IRP social enterprises and fostering relationships with them. One of Katrina’s biggest challenges has also been finding a community at the intersection of digital technology, social impact, and business. So, if you are, or know of, a social entrepreneur or Social Purpose Organization driven towards similar goals, help us bring you together, in the spirit of harmony between technology and humanity!

Keeping Up with Trends & Mental Health

“Grow at a sustainable pace. Don’t let someone else set the pace for you.”

From time to time, Katrina switches from working in the business to working on the business, so she can take a step back and reassess. While the first six months of this venture were spent solely on setting up the social enterprise and everything that comes along with it, nowadays she’s getting better at balancing wearing the boss hat and the entrepreneur hat. Katrina is also learning to give herself mental health breaks where needed.

“I am really bad at this! I do not take enough breaks. The first year I worked 7 days a week at all hours of the day to keep things going. In 2022, it wasn’t quite so bad but it’s still not a great balance. I really enjoy my work so it doesn’t feel like a burden, but long-term it’s not sustainable.” 

She’s working on it. Katrina consciously gets outside once a day for a walk and a little time in nature – at least 30 minutes. This has not only helped her feel more relaxed, but also increased her focus.

Katrina imagines a world where we are not all living in a dystopian version of the Metaverse in 2030! One of the things she is working on is a compelling vision for the future as it relates to technology and humanity. So much of her current work is “don’t do this, don’t do that,” says Katrina, as ethicists are often cleaning up messes that could have been prevented.

“We need new, inclusive, and less neoliberal capitalist versions of a collective future.” 

“Perhaps we could have an internet that isn’t driven by an advertising-based model that relies on selling our data and attention. That would be a start.”

Footnote: Katrina Ingram (she/her) is the Founder and CEO of Ethically Aligned AI. She is a member of IEEE and volunteers with several AI ethics organizations including Women in AI Ethics (WAIE), For Humanity, and All Tech is Human. She was named one of the 100 Brilliant Women in AI Ethics for 2022. Catch her on her podcast AI4Society Dialogues, where she combines her love of audio and interest in AI. Katrina is also a member of the Calgary Police Service Technology Ethics Committee.

Some of her projects and work highlights include: 

AI + U – a project in partnership with the Canadian Civil Liberties Association to educate youth about AI ethics and address topics like data privacy, surveillance capitalism, machine learning, cyber security and racism in technology.

The World That AI Built – a short video that uses a 21st century version of the children’s story, ‘The House That Jack Built,’ to explain how socio-technical systems work.