
Why artificial intelligence is ‘dumb’ and needs more ethical and legal standards


As the coronavirus pandemic continues to ravage countries around the world, the field of artificial intelligence (AI) is developing in tandem, with growing applications in health care and other industries.

While the potential of AI may not be realized for another 10 to 15 years, experts believe that this sector must be prepared to address ethical and legal concerns before this emerging technology becomes more widespread.

David Toman, an AI researcher at the University of Oxford who is also involved with the United Arab Emirates’ artificial intelligence program, spoke to Forkast.News about the dangers of rapid development and the need for greater oversight.

See related article: AI giants are exploiting your data. A new system can help you take back control

“The problem with AI is [that] it’s relatively dumb — it still can be very, very linear, it can still be very constrained not only by the technology of the programming side of it, but also the ethics side, and legal,” said Toman, in an interview with Forkast.News Editor-in-Chief Angie Lau. “Those are two areas that we really haven’t got into yet, and that is the constraint as we move forward.”


Full Transcript

Angie Lau: Welcome to Word on the Block, the series that takes a deeper dive into the emerging technology that shapes our world — AI, IoT, 5G, blockchain — at the intersection of business, politics and economy. I’m Angie Lau, Editor-in-Chief of Forkast.News. Welcome to the show.

I’d like to welcome to the show, right now, David Toman, who is one of the leaders in AI when it comes to the UAE’s artificial intelligence program, really helping to lead that on behalf of the University of Oxford. He’s a [former] visiting fellow at Kellogg College, University of Oxford. David, welcome to the show.

David Toman: Good morning. How are you, Angie?

Lau: Absolutely fine. Look, I wanted to get you on because AI has been a hot-button topic, certainly for the past couple of years, and it seems to have accelerated now in a post-Covid world. We have seen digital transformation and digital adoption by corporations really speed up in light of Covid-19. AI is one of the central themes when it comes to thinking about digital transformation.

AI, big data, analytics, these are all buzzwords that corporates are considering right now. If they’re not implementing it, they’re thinking about it. But really, I want to take a very candid look at this technology. It means a lot of things to a lot of people, but at the end of the day, what can it really do? So if we could just start off with asking, from my side, a very dumb question — what is AI, and what can corporates do with it?

Toman: I think AI is a lot of things to a lot of people. What it really is, is a technology that’s been around for about a hundred years. It’s blown hot, it’s blown cold, but it’s really a way to almost emulate the human brain. It’s a way for us to use it as a means to reach from our own history, our own past, our own thinking, and really try to recreate that from a computer point of view. Now, that’s a very, very simplified way of putting it. But what it is, is a sort of conglomeration of various technologies that come together, probably to make things different for our real world today. That’s what it’s trying to do.

Lau: A lot of us think about it as artificial intelligence. But really, I did an interview with an AI scientist who explained it like this: it really is the illusion of intelligence. It’s really just speeding up the functionalities in a very practical way so that it looks like it’s intelligence, but really it’s machine programming by man-, by woman-, by humankind. And therein lies a really critical issue for corporates: how do we use this technology to speed up the functionality and create efficiencies? But then also what happens next?

Toman: AI is one of a series of technologies that come through, and again, it’s used as a coverall — at the moment, the current state of affairs we have is, AI is relatively dumb. It’s a long, long way from this sort of nirvana that people reach out to [and think it] will solve everything. There are still significant steps to be taken. AI in the true sense may not happen for another 10, 15 years. And in fact, if you look at the past, the history of AI, it’s blown hot and cold. There have been AI dawns, there have been AI winters, and this is the latest AI dawn. Whether that then progresses into an AI winter is yet to be seen.

The two critical things that have happened, I think, in terms of AI, are the ability for us to process data and the ability for us to hold data. So those two things have actually enabled us to move into a different space entirely. The problem with it still is it’s relatively dumb. It still can be very, very linear, it can still be very constrained not only by the technology of the programming side of it, but also the ethics side, and legal. Those are two areas that we really haven’t got into yet, and that is the constraint as we move forward.
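To make “linear” concrete, here is a minimal sketch, not from the interview, of the constraint Toman describes: a single linear unit, trained with the classic perceptron rule, can never learn even the XOR function, because no straight line separates its classes. The data and training loop are illustrative only.

```python
import numpy as np

# XOR: the classic non-linearly-separable problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

w, b = np.zeros(2), 0.0
for _ in range(1000):                 # repeated perceptron updates
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)    # linear decision rule
        w += (yi - pred) * xi
        b += (yi - pred)

# No weight vector classifies all four points correctly,
# so this never prints [0, 1, 1, 0].
print([int(w @ xi + b > 0) for xi in X])
```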

Lau: Okay, let’s talk about ethics and legal. You know, we are in a world right now that is experiencing privacy issues and tracking when it comes to coronavirus. We have nations that, on the basis of public health, are really infringing on individual rights. And then we have protest movements around the world, and charges of discrimination, racism, and unconscious bias that truly exist. But if it’s people who are programming AI and the thinking behind this technology, what must we think about when it comes to ethics and legalities that we perhaps haven’t thought about yet?

Toman: I’m sitting here 20 miles north of Oxford, and yesterday we had a student protest about the Rhodes statue outside Oriel College, Oxford. That just shows you the depth of strength of feeling, and quite rightly so, about the Black Lives Matter movement and also the injustices of the colonial past of the UK in particular. I think AI falls into that same sort of context, where it’s being done [to] us, not with us.

AI is seen by many people as a black box. It’s where the very clever programmers sit. They create a computer program; they know what goes into it: your personal data. It might be where you’re walking. It may be who you’re speaking to. It may be recording what you’re [saying]. But at the end, we have what is seen to be some sort of great global good. And that creates a real challenge for governments around the world to be able to respond to that.

If you look at the level of trust in governments, it ranges from, on one side, places like the UAE, where trust in government is pretty strong and people are prepared to trust the government, and again rightly so, to actually deliver an AI product which is for the greater good. But if I walk out into the street here and ask someone about the coronavirus app the National Health Service is testing on the Isle of Wight in the UK, there’s a great deal of suspicion, there’s a great deal of angst.

Just to develop the point a bit further, that black box is inherently dangerous. Unless you know what’s happening within that black box, you are going to run into problems both from a corporate point of view and an individual point of view. So that’s the crux of the problem.

Lau: You know, we cover blockchain, decentralization, and distributed ledger technology a lot on Forkast.News, and that seems to be a technology that is often married to AI and big data. As we expand this conversation and thinking about ethics, privacy, and how as a community we can contribute to AI instead of a singular voice or a singular number of voices in a centralized way, could this be a potential part of the thinking that needs to be embraced from the AI community?

Toman: I think the AI community kind of tends to be slightly secretive about what’s going on, what’s happening within the community. The drive for technology is so strong and the desire to beat competition from a corporate point of view sometimes blurs the line. But where are we going to go? What happens when we crack this technology? We have this wonderful product that we produce, but we produced it in this black box environment, where no one trusts it. No one says, this is great, fantastic, well done, here’s the technology.

There’s a lovely anecdote that concerns a very similar situation, when the first fully automatic landing was made at Heathrow Airport. It was a British Airways flight; the pilot came in on a very foggy day at Heathrow, not particularly unusual, and landed the plane fully automatically. The captain turned around and just thanked all the passengers and crew: “This is a fantastic landing, fully automated.” And one of the first-class passengers said, “that’s not what I’m paying for.” And it’s a real PR problem because, again, the technology has leapt in front of what the customers and the clients and the person on the street actually want.

There’s a danger if we let AI run in that same direction. We’ll have that person, or that community, saying “what on earth are we doing? You’ve created a monster. You’ve created something which we don’t believe in, we don’t want, and we don’t trust.”

Lau: And so then how transparent do you have to be when you’re running your artificial intelligence program on behalf of the University of Oxford and the UAE and you’re trying to integrate this technology? Sure, you’re working with a government where people’s trust level is quite high, but at the same time, you still need to build in a foundation of communication. How are you integrating some of these things that we’re talking about, even practically, in your day-to-day?

Toman: I work with a great bunch of talented academics and thoughtful researchers at Oxford, people like Dr. Maddie, who is one of the go-to people in terms of research, in terms of real estate, through to Professor Howard at the Oxford Internet Institute. They’re the people who feed into the course, amongst many others within Kellogg College.

One of the things we do is actually make sure we bridge the gap between, on one side, the technology people, who are very [desperate to be] left alone to develop the AI, and to prove that they’ve developed some fantastic AI, which is great. And on the other side, you have the managers, people like what I was 10 years ago at BP, who just let the techies run away and do their thing, and all of a sudden they produce something that doesn’t work.

So what we do, with our colleagues and friends in the ministry, is bring those two factions, two wings, two tribes, as it were, together, and say that the managerial and executive team have to learn a little bit about the technology. You’re going to have to learn a little bit about Python programming… so you can get inside the minds of your programming staff and your external suppliers, ask the right questions, and have them deliver a solution to you and your business, or to the community at large, that actually will deliver something.

And then on the other side, an equally challenging situation where our technical folks are saying “we don’t care about the ethics, we don’t care about the legal, all we’re concerned about is delivering a fantastic AI product.”

Between the two, we bring them together and force them to think about ethics from one side and from the other side to think about and actually physically program. One of the key things His Excellency Omar al Olama pushed on us as a college is to say, “I want everyone to be able to come away with a little bit of programming knowledge. It might be dangerous, but nonetheless, it’s an insight into the other, electronic, world.”
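As a hypothetical illustration of the “little bit of programming knowledge” described above, here is the sort of toy model a manager could run to see which questions to ask a supplier: which features a model weights, and whether it should be allowed to weight them. The data, features, and library choice are assumptions, not part of the Oxford program.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented toy data: two features per customer and a yes/no label.
X = np.array([[25, 1], [40, 0], [35, 1], [50, 0], [23, 1], [60, 0]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# The coefficients are where the ethics and legal questions start:
# what is the model weighting, and should it be?
print(model.coef_, model.intercept_)
```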

Lau: You’re basically baking in empathy. You’re basically baking in walk-a-mile-in-my-shoes to create a team dynamic that hopefully can elevate a product. You touched upon legality, another thing that few people think about when it comes to AI. What are the red flags or red issues that we should be aware of as we saunter into the second half of this year, still reeling from and still dealing with Covid-19 realities?

Toman: It’s a saunter, it’s a drift… The red flag [analogy] is great. The analogy there is that when cars started to be produced in the UK, people were very frightened. They would drive down in their wonderful new shiny Fords or Buicks, or in the UK probably not really General Motors but Daimlers, and certainly some Fords in there somewhere as well. But people were very frightened of that new emerging technology, which is a great analogy for the AI world. And the government then legislated for someone to walk in front of those cars with a red flag, basically to make sure that you couldn’t go any faster than walking pace, to constrain the technology.

As we move into this AI world, the danger from a legal point of view is that governments will actually respond to their populations and say, “hang on a second, we’re going to have to have someone walking in front of this technology again with that physical red flag,” and slow the ability for it to be implemented. So unless we address those legal and legislative issues, that’s what’s going to happen: someone’s going to walk out with a red flag and say, “stop. You cannot progress any further.” And unless you understand that, you’re going to have problems.

The exact analogy is cars again: instead of someone walking in front of a Ford through the streets of Oxford, it’s the big IT companies, the big AI companies, developing driverless cars. You can say, OK, fine, it’s wonderful you’ve developed these great technologies, but there have been a couple of issues; sadly, people have died as a result. Again, there’s great tension and concern from the population, exactly as there was 100 years ago when cars were first invented.

But then, who’s liable if that driverless car kills someone? Is it the owner of the car? Is it the car company? Is it the people who developed the software? And until it’s actually solved, you can do as much technology as you’d like, but the technology will not progress. And people will put in a block — an artificial, political or emotional block — that may delay the implementation of AI for many, many, many years. That’s the danger.

Lau: Well, it’s just really interesting that we went from talking about the technology which we’ve been talking about for a while now, and it’s matured to a stage where now we’re realizing the social impact. So we’re talking about the ethical issues, the ideologies behind even the integration of AI, and then the legalities as you’ve rightfully pointed out.

As we wrap up this conversation for those corporates and professionals across industries: how should corporates and global firms, at the stage of digital transformation they’re in now, be thinking about AI? How has it changed business logic? And aside from the things we talked about, ethics and legality, what do they need to be thinking about that’s different, that they might not have thought about in a pre-Covid world?

Toman: I think Covid’s basically taken up the bandwidth of corporate boards. I worked for a large corporation, BP, and the Gulf of Mexico disaster is a classic example: it took away every piece of management’s additional thinking time for many, many years, really, until recovery. It took me 10 years from the incident through to a decent place to be in. And the same thing happens with Covid, I think, where my friends who are still sitting in the corporate chief executive’s office are sitting there just trying to survive.

And the danger is that, in the rush for faster results, boards just push any AI technology into the IT department and say, “look, just get on with it. We don’t want to know, we’re too busy, get on with it.” That will be fatal. You’ve got to spend the time understanding what you’re asking people to do. AI is a corporate bandwagon to a certain extent. It’s the latest buzzword; as you said at the start of this conversation, Angie, “it’s blockchain, it’s big data, it’s AI, let’s chuck another one in.” What does it mean to us as a business? What benefit are we going to get?

So what I would counsel people to do is, firstly, understand the business: get people involved who are business process people, and understand what your business process gap is, before you start throwing vast amounts of corporate dollars at a solution which may not work and is not fit for your business. It’s great to tick a corporate box; it’s better to make more money for your shareholders.

Lau: Well, you’re totally right. The bandwidth of thinking may be stretched thin, but this can’t be ignored. There’s too much at stake. It’s every one of our lives that are at stake, the impact that it will have on our privacy and all the rest. So this is a critical conversation, and I thank you for it, David. Thank you so much for joining us on this latest episode of Word on the Block. David, I bid you adieu, and thank you, everyone, for joining us as well. I’m Angie Lau, Forkast.News Editor-in-Chief. Until the next time.
