Future in Sound

Jaxson Khan: Trust in AI

Re:Co Episode 33

This month on Future in Sound, Jaxson Khan, former senior policy advisor to Canada's Minister of Innovation, Science, and Industry, explores the evolving role of AI in business. With a background in accessibility, AI policy, safety, innovation, and ethical frameworks, few people are better placed to help us think through the connections between tech and society.

Jenn and Jaxson discuss the delicate balance between innovation and caution when it comes to AI. How do companies manage the risks of AI whilst also pursuing the opportunities it offers? Jaxson highlights the potential for AI both to enhance accessibility and to create new barriers if not implemented thoughtfully. They also tackle global approaches to AI policy—from the flexible frameworks seen in the U.S. to the more structured EU AI Act—emphasising the need for practical, citizen-focused regulations that build public trust.

Useful links:

Book: Prediction Machines by Ajay Agrawal, Joshua Gans, and Avi Goldfarb

Click here for the episode web page.

For more insights straight to your inbox, subscribe to the Future in Sight newsletter, and follow us on LinkedIn and Instagram.

This podcast is brought to you by Re:Co, a tech-powered advisory company helping private market investors pursue sustainability objectives and value creation in tandem. 

Produced by Chris Attaway
Artwork by Harriet Richardson
Music by Cody Martin

JAXSON: I really sometimes think that accessibility kind of deserves its own category, but it is really, you know, in the context of people with disabilities, people with even sometimes different learning styles, different ways of engaging: are you giving those people an easy pathway to still access and do something with your brand?
Right. And sometimes artificial intelligence, I think, can help in those areas, but other times it can hurt, and we haven't fully tested that. And so conducting user testing, really, really thoughtful testing with a diverse group of people, ideally people who also have disabilities, I think that's really important.
JENN: Welcome to the Future in Sound podcast. I'm your host, Jenn Wilson. This is a podcast where we discuss people, planet, and profit. In each episode, we'll learn from world-leading experts who can help us see the future we want, and our role in it.
This is episode 33. Trust in AI.
Jaxson Khan is a former senior policy advisor to the Canadian Minister of Innovation, Science, and Industry. Prior to government, Jaxson worked in technology, including as director of Growth at Fable, an accessibility platform that helps major organizations build more inclusive digital products for people with disabilities.
He's also worked for companies like Nudge AI, which was acquired by Affinity; Influitive; Paddle, which was acquired by LinkedIn; and Microsoft. Jaxson is a published author and speaker on technology, education, and policy, including with the International Economic Development Council and also TEDx.
All right, Jaxson, it is great to be here with you today in person. I am absolutely delighted to be talking about a topic that has been front of mind for many who will be listening to this podcast: the implications of AI for their businesses, perhaps even in their day-to-day lives, and the role of business in managing risk related to AI.
I can't think of anybody who'd be better placed to help us think through some of the connections between technology and society. So welcome to the Future in Sound podcast.
JAXSON: Thanks so much, Jenn. It's really great to be here with you. Thanks for the very kind words. Always happy to chat about the topic.
I think I've just come off of a very interesting time working on the problem in federal politics in Canada. So happy to dive into it with you.
JENN: Sounds great. And for listeners who aren't familiar with your work, why don't we start with just a 60 to 90 second intro on who you are and the work that you've done.
JAXSON: Yeah, sounds good. I can keep it short, but basically I was a senior policy advisor to Canada's Minister of Innovation, Science and Industry. One of the key files we were working on over the last couple of years was definitely related to AI safety, building AI trust, and also accelerating AI innovation. So one of the big things that we thought about is, well, how do we actually create a system of laws that can be flexible enough to grow at the speed of innovation, but also make sure that Canadians, and consumers writ large, are protected in our society?
We were also very thoughtful about the concerns of creators, artists, intellectual property makers, whoever it is; we want to make sure that AI is developed safely and responsibly. And Canada historically is a huge place in AI, right? We invented some of the key technologies like natural language processing, and machine learning ethics is something we've worked on for a long time.
So I feel very happy and privileged to have come from that spot. And before I worked in government, I actually worked in tech startups for a number of years, including early AI startups, and I ran my own AI podcast. So happy to be joining forces today.
JENN: It's a little bit of a meta moment where a podcaster is interviewing a podcaster.
I think this is ideal. I wanted to pick up on your background. Obviously you've built businesses that are using AI, and you've also looked at the policy side. So why don't we start here: as far as I can see, when it comes to the landscape of those who are experts in AI or looking at policy, there's almost a scale.
At one end: let's move fast and break things, AI is always a good thing, let's develop the technology as quickly as we can because there's so much opportunity. Maybe there'll be some negative side effects, but that's a rounding error. At the other end of the scale: yes, there are opportunities with AI, but we need to proceed with caution; there are a variety of challenges, and the technology could be used to undermine democracies or to have other negative consequences. Where are you on that scale?
JAXSON: Yeah, I try not to plot myself too specifically, not just to remain mysterious, but to actually keep an open mind.
As a policymaker in the last couple of years, it has been moving so quickly. And I've had a chance to sit down with some of the most esteemed thought leaders in the world, everyone from some of the most foundational AI researchers, in some cases actually the godfathers of AI, to some of the top chip makers and the leaders of those firms.
And on the other side of the spectrum, I've also talked to business leaders. Some are very concerned about what AI can do, concerned about what it might do to their industries. Others are so bullish on artificial intelligence that they think any regulations, any frameworks in the way of it, are actually the worst thing we could do for society.
And so in the midst of all that, what I try and do is think about the average person: what do they care about? They probably care about their kids. They probably care about their job and the kind of life that they're going to have, right? And technological change can always be scary. It can also be very exciting.
We know there's an innovation adoption curve. Some people are more comfortable with it. Others aren't, right? And I think with AI, we have a more profound and potent technology than ever. I mean, it's so high potential. Some people have compared the emergence of AI to the emergence of the Internet.
I actually heard a more apt analogy, which was more like the invention of electricity, in that it's going to permeate every area of society and work and learning. And we're still trying to really understand what it is. So that's why I'm a little hesitant to say, yes, I'm an AI accelerationist or decelerationist.
I grew up with a dad who's a software architect, so I've always been a huge geek. I love new tech. But at the same time, I grew up with a mother who has worked in the education system with some of the most vulnerable. And so I draw a lot from my parents. I draw a lot from my ancestors.
I draw a lot from growing up in a multicultural society. And so I think we've got to do the best we can to end up in a good place that doesn't leave folks behind, but that also starts to paint a picture where we can end up with a better world. I don't want that to sound like platitudes, but the bottom line is I really think we have something great here. And I'm also very conscious of the national security element, where countries around the world, some of whom we are in alliance with, and others who are sometimes rivals we are in competition with, are trying to develop this as fast as they can.
And so, in the midst of all that, can we set up a responsible course? Can we set one that's safe? And how do we build trust over time? Those are really the questions that have been occupying my mind.
JENN: And how do we do those things? What are the most important tenets of the policy direction, before we get into the business level?
What's really important for us to consider if we're to develop AI for good, and for the great outcomes that meet its potential?
JAXSON: Yeah. I mean, if I was to respond in a quick and dirty way to that question, the first thing is you've got to talk to the people who are building this stuff.
You really have to understand the basic technical dimensions: how is this working? How fast can it grow? How are you training the models? Not just how big can they get, but how complex can they get? I still think we're trying to answer those questions, but talking to technologists is very important.
At the same time, I think we can sometimes fall into a fallacy where we paint technologists as almost philosopher kings and assume that they also know how to create public policy. That's not always the case. They can offer great insights, but if we just ascribed policy exactly to the parameters of the people who are designing it, in every case, I think we could be in trouble. There are also instances where some of the most prominent investors in tech, I think the name is Bill Gurley, gave this kind of famous talk recently about why innovation is done in Silicon Valley: because it's far away from Washington.
I would poke some holes in that. I also think that Washington is responsible for some of the most profound advances in technology through foundational R&D investments, right, including the Internet, including some of the earliest stages of defense research.
And so the most dramatic advances in society, I think, have often come through a fusion of government-sponsored efforts and private innovation. Ideally, government can then get out of the way to some extent, but if the technology gets very powerful, you do then have to ask, okay, how do we make sure it's adopted responsibly?
So I think you've got to talk to the technologists, but then you've got to make sure that throughout the whole process you're talking to citizens, you're talking to civil society groups who see different facets of how this actually plays out, and you have to talk to academics. And then what I also tried to do was talk to the people who were most critical, and sometimes that meant meeting with critics more than with people who were just slapping us on the back and saying, yep, let's get it going, let's pour all of our money into this. Critically evaluating criticism, I guess, to be clunky with my words, was very important, because it really made us think; it made us challenge some of the assumptions we may have held.
A lot of it was also educating people internally within the public service. Some of the public servants I worked with didn't necessarily come from a technical background. They didn't have a great understanding of what it's actually like to work in a startup. And so, as part of that innovation life cycle, it's important to educate people on how quickly this can actually come about, and to make sure you're facilitating those connections between policymakers and the innovation community.
JENN: With that context of ensuring we're talking to critics, ensuring we're keeping an open mind, so that there's solid discourse to help us steer in the right direction: I know it's very difficult to answer this question, but not specifically in Canada, let's say globally, what kinds of policies do you think we're likely to see in the coming, say, five to ten years related to AI, if any?
JAXSON: Yeah, I think there are kind of two big examples that we have right now, at least in the Western world. One is the U.S. approach, where they have the White House executive order on AI. Some examples there would be pieces around procuring artificial intelligence, and orders to federal agencies on how to handle that.
There's also the White House agreement, the Biden commitments on artificial intelligence, which were actually voluntary commitments from some of the top AI companies, including Anthropic and OpenAI, stipulating certain types of security requirements and then coordinating with the U.S. government on safety testing.
I think these are all great things, right? They're voluntary. They can move very quickly. They can move at the speed of industry, but particularly with the force of the U.S. government behind them, I think they carry a lot of weight. And these companies have a prerogative to try and build public trust and understanding.
I like that it's very flexible. Some advocates in civil society, and citizens, might say, well, yeah, but there don't seem to be any clear consequences. What are the penalties for doing wrong? And then at the further end, folks who are very worried about AI safety ask, well, how are we actually trying to prevent catastrophic risk?
Is safety testing alone enough? Maybe we really need to put the fear of God in some of these people, because they are creating something that could be very powerful. The other side of that, of course, is the EU AI Act, which is very prescriptive: it has an extensive amount of detail, unacceptable categories of risk, all sorts of different parameters there.
I'm fascinated by both models. I think in Canada, we tried to take a little bit of the best of both. In the interim, we had created our own voluntary code of conduct. We tried to model some of the best practices from what we saw from the U.S. perspective, but we also stipulated that we think some applications, business-to-business applications in the enterprise, for example, are actually quite different from the ones that are consumer-facing.
There are different levels of risk, different levels of application. And so we tried to differentiate there. And I would argue that we've done something pretty cool there, and we got all the top AI companies in Canada and the top technology companies signed up to it. On the other side, we had an AI law that we've been working on and pushing through our House of Commons and Parliament for the last couple of years.
It is still in an industry committee study, but it tried to set some high-level parameters and categories of risk, while leaving a lot of flexibility to regulations. It remains to be seen if that bill will pass, particularly given that the Canadian government is in a minority situation. But even if the bill doesn't pass, I think it's been a very useful tool for the government, which I've recently left, by the way, to collect feedback, to understand from civil society what's gone right and what's gone wrong, and the same thing from industry. And I'm hopeful that people will continue to experiment and try different models, even if they get negative feedback.
I think we're all trying to figure this out, right? This really just went, I feel, from more of a theoretical area into something that governments around the world are chasing after, trying to figure out how to appropriately regulate artificial intelligence. The last thing I would say is that countries like Singapore, for example, are doing a lot of innovative work in trying to figure out how this could be done better.
And I think it's worth looking at what China has even done, in some respects, in trying to regulate artificial intelligence. So I ultimately think there will also have to be some sort of international agreements on what comes out of all this. There's the Council of Europe work that's potentially being done on this, and the OECD is playing a significant role.
I also think organizations like Mozilla, for example, which are quite prominent in developing open source definitions for artificial intelligence, are playing a strong role in advancing new and interesting concepts of how we can define what we're actually working with here.
JENN: Hey, it's Jenn. I just wanted to take a quick moment to let you know a bit about Re:Co and what we do. We're a tech-enabled advisory firm that helps private market investors and companies measure sustainability metrics using our software platform. We also help you to set targets and focus your efforts on sustainability areas that really matter for your business.
And finally, we help clients to translate all of this work into your core value creation strategy or your business model. Check us out at re.co.com to get in touch. All right, now back to our conversation.
It's really interesting, because it's so competitive and there's so much opportunity for competitive advantage with AI. Often when we look at ESG metrics, there's this question of, okay, is there going to be a specific standard that comes to the fore, like GDPR, which then for international companies becomes the standard, because you're operating globally and therefore adhering to GDPR makes sense?
But in this scenario, it sounds like it's going to be a bit more bespoke. I'm interested in pivoting a little bit to the business perspective. So for example, at Re:Co we are developing tools all the time with AI. And of course we're advising clients on business ethics, the future of capitalism, the right metrics to have.
And many of our clients are technology companies. So what are some of the tips that you would have for business executives who are running technology companies and managing the risks related to AI while pursuing the opportunities? 
JAXSON: Yeah, I mean, I think you've got to start with what it is that you're actually doing right now, what it is that you want to do, and really understand the possible implications of that.
That might sound pretty simplistic, but I think sometimes it can be easy to get ahead of yourself when your board or your shareholders are pushing for you to take action and do an AI thing sooner rather than later. So I think it's really about taking a pause and being thoughtful about this.
I also think it's worth really engaging with your users and consumers on this front. Right now, because artificial intelligence in general is under high scrutiny, we've already seen a ton of screw-ups, even from some major big brands who've used artificial intelligence to generate images without credit, for example, and experienced pushback from artists, or communities of people on social media, about that.
And so something that was meant to be good is immediately turned back around. We're seeing political campaigns do this and having it backfire sometimes. So I think it's really, really important to operate with a much higher level of responsibility, accountability, and respect. There are global principles of transparency, explainability, et cetera,
that are often talked about in ethics communities. But before you even think about artificial intelligence ethics, and how we are actually operating this, I think it's worth doing some common-sense gut checks, and really just thinking through: okay, if someone took the most cynical view possible of our intended application, what is it going to look like? How could the media talk about it? How could our customers talk about it?
I don't know if there's a segue you want to take, Jenn, but I do think a lot of it comes down to trust. Edelman's Trust Barometer is an annual metrics-based survey that they share every year.
And one of the special reports, I believe, had been focused on artificial intelligence, and found that trust levels are below 50 percent in most major Western economies. I think Canada was one of the lower ones; even the U.S., I think, was not doing very well. And I think a Pew Research survey found that most Americans, unless there were some sort of rules on the use of AI, wouldn't trust it.
Even with rules they wouldn't necessarily trust it very much, but the rules would certainly help. And so I think we've got to overcome a trust gap that is prevalent, that does exist. As citizens, as technologists, as policymakers, and as business owners too, we've all got a job to do to try and build trust in anything that we want to do before it comes to our customers.
JENN: Really practically, if you're on the board, or you're the CEO, of a technology company that's not building large language models, you're just using models in your business: what are some of the questions that should be asked on a quarterly or annual basis to ensure that you're keeping your technology in check, and that you are managing some of the risks while also pursuing opportunities?
JAXSON: Yeah, I'd have to think about that for a second, but I think foundationally it's: at what speed are we delivering this? To what extent do we know what types of results the model is producing? If you're using a pre-existing model, or one of the consumer-based ones, you may not always be able to control the outcomes.
These models are updating very frequently. The data that they're capturing can change. Sometimes they're not necessarily getting the latest information from online, depending on what types of things you're using, whatever API; you might even run out of credits on your API. You want to make sure that you have some control over those variables, and some ways you might be able to control that better are certainly fine-tuning your model and using your own private data.
Those are the questions I would try and ask: do we know what kind of outcomes this is typically going to produce? Do we know where the data is coming from? Is this actually covered in our terms of service? Is it updated? What kind of data are we gathering from the people that we are asking to use this tool, if any?
And then, if you want to go even further, you could start to think about the safety implications. But I usually think more about the practical ones right now, less the catastrophic or existential ones, or where those could go. I think most people are not even close to there yet.
I think a lot more about: are artists going to be pissed off, depending on how someone's using this to create new content? Are people in my own company, my own employees, going to be pissed off if this model goes and spits out something that's incorrect or just doesn't make sense with our own processes?
In some cases, AI chatbots have given people refunds, and consumer tech companies have had to go and figure that out. And they've got pissed-off customers, because they thought they were talking to someone real, but that wasn't the case. So, you know, you really just got to check it. Yeah.
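To make that "check it" advice concrete, here is a minimal sketch of one way a team might guard a customer-facing chatbot's replies before they reach users. This is not something discussed in the episode: the callModel function, the guard rules, and the handoff message are all illustrative assumptions, not a real provider API or a recommended policy.

```typescript
// Minimal sketch: guarding a customer-facing chatbot's replies.
// `callModel` is a stand-in for whatever model API a team actually uses;
// the policy checks below are illustrative, not exhaustive.

type ModelReply = { text: string };

// Hypothetical stub -- in practice this would call your provider's SDK.
async function callModel(prompt: string): Promise<ModelReply> {
  return { text: `Echo: ${prompt}` }; // placeholder behaviour for the sketch
}

// Commitments the bot should never make on its own (cf. the refund anecdote).
const forbiddenCommitments: RegExp[] = [/refund/i, /we guarantee/i, /free of charge/i];

const HUMAN_HANDOFF = "Let me connect you with a member of our team for that.";

async function answerCustomer(question: string): Promise<string> {
  const reply = await callModel(question);
  const text = reply.text.trim();

  // Guard 1: escalate instead of letting the model promise refunds or guarantees.
  if (forbiddenCommitments.some((rule) => rule.test(text))) {
    return HUMAN_HANDOFF;
  }

  // Guard 2: empty or runaway outputs also go to a human.
  if (text.length === 0 || text.length > 2000) {
    return HUMAN_HANDOFF;
  }

  return text;
}

// Example: the stubbed model echoes "refund", so the guard hands off to a human.
answerCustomer("Can I get a refund on my order?").then(console.log);
```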
JENN: Check it rather than having a black box.
JAXSON: So I think that's a fair, reasonable expectation, right? Again, I am not an AI engineer. I'm not a research scientist. I don't even know, to the deepest technical extent, how some of these work, right? What I've tried to do, having some background in tech and growth and policy, is understand what this typically means for the average person.
How can we be more honest about what this actually means, and not just get excited about what this is going to look like on a spreadsheet in my next quarter's numbers? I think a lot of people are getting very excited. Policymakers and economists are excited to accelerate the productivity aspects.
I think businesses are excited about new channels of growth. That's awesome, right? I want those things for our economy, our society. But the average person is still like, well, I just have to do a thing and get it done. I have to buy a product, I have to get a service done, and unless this actually makes my life better, it could just drive more frustration, right?
Sometimes people encounter an AI chatbot and think, well, I just need to talk to a human right now about a thing. And so make sure there's still an ease-of-access path that is there. The last thing I would say quickly, and this is somewhat connected to ESG and to some extent DEI, I suppose, is that I really sometimes think accessibility deserves its own category. It is really, in the context of people with disabilities, people with even sometimes different learning styles, different ways of engaging: are you giving those people an easy pathway to still access and do something with your brand?
And sometimes artificial intelligence, I think, can help in those areas, but a lot of times it can hurt, and we haven't fully tested that. And so conducting user testing, really, really thoughtful testing with a diverse group of people, ideally people who also have disabilities, I think that's really important.
JENN: What's an example of AI not being great for somebody with disabilities, just so the audience can understand? 
JAXSON: I will give a quick educated guess, but my strong advice is to ask someone with a disability, which I am not. There are great companies out there who can provide insight there. I actually used to work for one called Fable.
So feel free to talk to them. But an example I can think of off the top of my head: a lot of people who are blind or who have visual impairments often use a screen reader to interact with a device. If, for example, the focus of your device, let's say a computer or a phone, just shifts, let's say to a chat box or somewhere else, that can be confusing, particularly if there's no actual audio cue to the person who is using the screen reader, which delivers voice messages to someone who, again, may have a visual impairment.
So it's like: are you, as a product designer, as an engineer or builder, actually making this an easier experience? At the same time, I can also see awesome benefits, right? There could be translation, there could be better audio, there could be a more thoughtful engagement of someone.
Maybe there can even be a recognition that, oh yeah, this is clearly someone with a disability who is browsing my website; based on the characteristics, maybe we should put up a prompt and inquire, or we should start to serve a different user interface. There are a lot of opportunities, but I think the most important thing is: don't make assumptions, especially if you're able-bodied or have little experience in that area. It's really, really important to test thoughtfully, because people with disabilities are something like 15 percent of the global population. I think it's a billion people. And it's one in two seniors.
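To make the screen reader example concrete, here is a minimal sketch of how a web chat widget might announce itself and move focus deliberately, so that screen reader users get the audio cue Jaxson describes. The element IDs and wording are illustrative assumptions, not from the episode.

```typescript
// Minimal sketch: giving screen reader users an audio cue when a chat
// widget opens. Assumes the page contains a chat input (#chat-input) and a
// visually hidden live region (#announcer) with aria-live="polite";
// both IDs are illustrative.

function openChatAccessibly(): void {
  const chatInput = document.getElementById("chat-input");
  const announcer = document.getElementById("announcer");
  if (!(chatInput instanceof HTMLElement) || !(announcer instanceof HTMLElement)) {
    return; // required elements are missing; do nothing rather than break focus
  }

  // 1. Announce what is about to happen. Screen readers monitoring the
  //    aria-live region will speak this text, so the focus jump has context.
  announcer.textContent = "Chat assistant opened. Type your question.";

  // 2. Label the input so the screen reader reads something meaningful
  //    when it lands there.
  chatInput.setAttribute("aria-label", "Chat message");

  // 3. Move keyboard focus deliberately instead of letting it jump silently.
  chatInput.focus();
}
```

The point is the one Jaxson makes: a focus shift that is invisible to a sighted user can be disorienting without an announcement, and only testing with screen reader users will confirm whether the cue actually helps.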
JENN: Thank you for that. My final question for you, Jaxson, is this: many people listening to this will be thinking, I just want to keep up to date with some of the developments that matter.
What resources should I be taking a look at? I'm wondering, are there any maybe newsletters, you know, books, is there anything that you read on a regular basis that helps you think more strategically about AI and technology that you'd be interested in sharing with the audience? 
JAXSON: Yeah, definitely. I mean, a lot of the stuff I read is really, really geeky.
If I can think of something that's more useful for, let's say, a business person, or just someone who's keen on understanding a bit better how machine learning works, and generative artificial intelligence too: I think of Avi Goldfarb and Ajay Agrawal, and I think I'm forgetting a third writer, I apologize.
They wrote a book a few years back called Prediction Machines, and they just wrote another book, which I think I've ordered and still have to read through. They do a pretty darn good job of just explaining, here's how this works. And I think that kind of accessible, useful content is really, really valuable.
I also think my advice would actually be to stay away from stuff that is too simplistic. There are a lot of people on Twitter saying, you know, here are the seven ways that ChatGPT is going to change your life. I think it's died down a little bit since the earliest hype cycle, but maybe try and go look for experts who are still human and can talk to you in a real way.
That's kind of my advice: try and read books about this stuff. Don't just read the latest tweets. I think it's really, really important to get reasonably deep into it before you decide to go and change your whole business model, or the way that you run your organization.
JENN: Jaxson, thank you so much.
JAXSON: Right on. No worries, Jenn. This was a lot of fun.
JENN: The Future in Sound podcast is written and hosted by Jenn Wilson and produced by Chris Attaway. This podcast is brought to you by Re:Co, a tech-powered advisory company helping private market investors pursue sustainability objectives and value creation in tandem. If you enjoyed this podcast, don't forget to tell a friend about it.
And if you have a moment to rate us in your podcast app, we'd really appreciate it. Until next time, thanks for listening.