Last updated: July 12th, 2022
Meet Toby Walsh. Scientia Professor of Artificial Intelligence at the Department of Computer Science and Engineering at the University of New South Wales, Toby is both a leader and an active voice in his field.
Nationally, he’s part of a 3-year project with the Australian Human Rights Commission to create a blueprint for the use of new technologies in society. Globally, his open letter to the UN asking for pledges from other AI researchers to stand up against killer robots drew attention (and signatures) from tech giants Elon Musk and Bill Gates.
Our editorial team had the unique opportunity to ask Toby 6 questions about AI, its future in the workplace and how corporate leaders can prepare.
Question #1: Has AI for hiring moved too fast, and with too little scrutiny?
TW: Yes, it surprises me how quickly and with how little scrutiny we are adopting AI in this domain.
We know humans are terrible at making hiring decisions. But equally, we know there are plenty of elephant traps when machines make decisions. We will eventually get it right. But I worry it is too soon to be handing such life-changing decisions to machines without careful oversight.
Corporate leaders and HR managers need to be on the lookout for bias—unintended or intended. Algorithms that encourage bias are examples of what I call “stupid AI”.
Question #2: Are there other forms of stupid AI influencing the world of work right now?
TW: Yes, there are many other examples of stupid AI having a negative impact in the workplace.
For instance, take the algorithms used by firms in the gig economy to hand out work. The problem is that we have to understand the whole complex system, including its unexpected feedback loops.
Case in point, it was recently found that Uber unintentionally penalises women drivers. Male drivers earn more than female drivers. In part, this is because male drivers speed more than women do.
And the pricing model for Uber encourages this. So we’ve created a system where algorithms act so that women get paid less, and men are encouraged to drive more dangerously.
Question #3: Some predict AI will displace workers while leaving businesses with talent gaps. How can this be solved?
TW: Easy. You need to invest in the most valuable asset in your business—your people.
Help them keep reinventing themselves. Focus your training on the cultivation of emotional and social intelligence. On creativity, problem-solving, adaptability. And, of course, on deep technical skills.
Along these lines, it’s critical to remember that automation will free up people’s time. As a leader, this gives you two choices: do the work with fewer people. Or use those people’s time to improve your product or service. Companies that last will be those that do the second.
Question #4: How can we “future-proof” ourselves in this changing world of work?
TW: I’ve got a theory on this that distills down to a triangle. The three parts of the triangle are:
- inventor of the future
- emotional intelligence
- artisan / artistic.
To future-proof yourself, invest your skills in one of those three areas. Most of the careers we’ll see in the future can be categorised that way.
To prepare, we need to reinvigorate our creativity and embrace our inner child. This won’t be easy for some, but it’s essential.
Question #5: How can the leader of a non-tech company foster a responsible approach to AI?
TW: People talk about AI and ethics as though AI introduces lots of new ethical dilemmas. It doesn’t. Most of it is old-fashioned values.
As an example, take Facebook and Cambridge Analytica. Stealing people’s private information and using this to manipulate elections is bad behaviour. It has been so for thousands of years, ever since the ancient Greeks got us voting.
We don’t need any new ethics to decide this. Technology just lets us behave badly faster, more cheaply and more easily. So first and foremost it is about the company having good values and acting to ensure them.
Question #6: When will the future of AI be here—is it now, 5 years away, 20 years away?
TW: My latest book is entitled 2062: The World that AI Made. I chose 2062 because that is when the majority of my colleagues expect AI to be as smart as humans.
Even though it’s 40 years away, this doesn’t mean that corporate leaders should wait or hesitate. AI is going to touch almost every part of every business. CEOs need an AI plan. Just like 10 years ago they needed a mobile plan. And before that an internet plan.
Right now, there’s a lot of doubt and uncertainty around how AI will take shape in the workplace, and in business. We need to build the institutional structures so we trust machines making decisions.
We trust doctors with our lives. But only because we have structures in place to let us trust them. We need our corporate leaders to invent similar structures with AI.