Responsible AI: selecting degrees of transparency and highlighting potential for bias

As the power of AI soars, the ethics of how we use it is becoming an increasingly pressing issue, one I speak about frequently as a futurist.

In working with the intelligent automation company Pega, I have learned about some of its interesting approaches to ‘Responsible AI’.

In a conversation with me, Jo Allen of Pega discusses two important concepts embedded in how Pega uses AI.

The first is the ability to select the degree of transparency for any particular application of AI. As Jo explains, this can vary across applications, and is simply selected from a five-point scale by the business user when implementing the system.

The second is the system’s ability to highlight the potential for bias in specific instances of algorithmic decision-making, allowing humans to test and check the integrity of the AI systems.

Watch the video for Jo’s insights on how these systems work, or you can read a transcript of the video below.


TRANSCRIPT
Ross:
Great to be speaking with you, Jo.

Jo:
Hi, and you.

Ross:
AI is very much the topic of the moment, with its extraordinary capabilities, but also all sorts of potential ethical and other challenges. So Pega has this concept of responsible AI, which sounds great. I’d love to hear more: what does responsible AI mean at Pega?

Jo:
We’ve seen such an increase in the use of AI over recent years, and it has great value. But with great power comes great responsibility; that old phrase, right?

Ross:
Yes.

Jo:
So we need to give our clients the ability to control that, really. There are instances where AI is used in really high-stakes situations, so it’s important to have some control over what’s happening with that. What we offer is the ability to scale, understanding that in some cases it’s okay for AI to be rather opaque, in situations where it’s not so important to be able to explain yourself. That might be thinking about the type of color that you want an advert to be, or perhaps some marketing communications. It’s less important to be able to explain yourself there.

But there are situations where it’s really high stakes, particularly in banking, credit risk, those types of situations. We have to be able to explain to a human, so you need your AI to be transparent. So what we are able to do with what we call the T-switch is to actually bring in that transparency: you can set your activity to be opaque, or, on a scale of five levels, up to being very transparent. Giving that control allows you to explain it as and when you need to, because it’s a balancing act. Sometimes you are constraining your AI if you are completely transparent. So we like to give our clients the choice.
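To make the idea concrete, here is a minimal sketch in Python of how a per-strategy transparency setting could constrain which model families a decision is allowed to use. This is not Pega’s API — the T-switch is configured within Pega’s own tools — and every name here (Strategy, ALLOWED_MODELS, model_permitted, the model family labels) is hypothetical.

```python
# Illustrative sketch only: a hypothetical analogue of a five-level
# transparency setting, not Pega's actual T-switch or API.
from dataclasses import dataclass

# Hypothetical mapping from transparency level (1 = opaque allowed,
# 5 = fully transparent) to the model families a strategy may use.
ALLOWED_MODELS = {
    1: {"neural_network", "gradient_boosting", "random_forest", "scorecard", "decision_rules"},
    2: {"gradient_boosting", "random_forest", "scorecard", "decision_rules"},
    3: {"random_forest", "scorecard", "decision_rules"},
    4: {"scorecard", "decision_rules"},
    5: {"decision_rules"},  # every outcome traceable to an explicit rule
}

@dataclass
class Strategy:
    name: str
    transparency_level: int  # the "T-switch" setting for this action

def model_permitted(strategy: Strategy, model_family: str) -> bool:
    """True if this model family may be used at the strategy's level."""
    return model_family in ALLOWED_MODELS[strategy.transparency_level]

# A low-stakes marketing action can stay opaque; a credit decision is
# forced to the transparent end of the scale.
birthday_offer = Strategy("birthday-treat", transparency_level=1)
credit_decision = Strategy("credit-card-eligibility", transparency_level=5)

assert model_permitted(birthday_offer, "neural_network")
assert not model_permitted(credit_decision, "neural_network")
```

This also illustrates the trade-off Jo mentions: the higher the transparency level, the more constrained the modeling choices become.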

Ross:
I’d love to dig into that a little bit more. Is it that the AI will perform much better if it is opaque, so that if you make it transparent, you get transparency but lower performance?

Jo:
Not necessarily. It’s about having that control, really, and being able to look at it on different levels, because sometimes it is constraining when you are making something very transparent. It’s not necessarily better or worse, but we know that people need to have those levels and look at things in different ways. That’s not the only way we’re able to be responsible, though. We also have what we call the ethical bias check, where we give our clients the ability to understand whether bias is creeping in across entire strategies, rather than just within the modeling aspect of what they’re undertaking.

Bias sometimes naturally creeps into your modeling activity, into your rules. Sometimes that’s okay, and sometimes it’s not a good thing. And with the increase in regulations in particular, you need to be able to monitor that. With the ethical bias check, we introduce that ability to monitor: you can simulate what’s going to happen beforehand and understand where you think bias might creep in, and you can configure thresholds to determine when it’s acceptable to go beyond them or not, so whether bias is okay, and trigger a notification, and you can set up when and how that notification comes through to you.

There are some instances as well where it’s okay to have some bias. I’ll give you an example of talking to someone about a credit card. We know that you have to be over 18 to have a credit card, and as such, you expect some bias to be prevalent within your strategy. So again, this is about giving clients the choice, giving them the control, giving them the tools to decide when and how they manage that bias within their strategies.
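As an illustration of the kind of check Jo describes, here is a minimal sketch in Python of per-attribute bias thresholds with notifications. It is not Pega’s ethical bias check itself, which is configured within the platform; the threshold values, attribute names, and functions (BIAS_THRESHOLDS, selection_rates, check_bias) are all hypothetical. Note how a loose threshold on age tolerates the intentional over-18 rule, while a tight threshold on ethnicity still fires an alert.

```python
# Illustrative sketch only: a hypothetical analogue of per-attribute bias
# thresholds that trigger notifications, not Pega's actual feature.
from collections import defaultdict

# Hypothetical thresholds: the maximum acceptable gap in selection rate
# between groups. Age gets a loose threshold because some age bias
# (over-18s only) is expected and acceptable for a credit card offer.
BIAS_THRESHOLDS = {"ethnicity": 0.10, "age_band": 0.75}

def selection_rates(decisions, attribute):
    """Selection rate per group; each decision is a dict holding the
    protected attributes and a boolean 'selected' outcome."""
    totals, chosen = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d[attribute]] += 1
        chosen[d[attribute]] += d["selected"]
    return {group: chosen[group] / totals[group] for group in totals}

def check_bias(decisions, notify):
    """Compare each attribute's largest selection-rate gap against its
    threshold, calling notify() when the threshold is exceeded."""
    for attribute, threshold in BIAS_THRESHOLDS.items():
        rates = selection_rates(decisions, attribute)
        gap = max(rates.values()) - min(rates.values())
        if gap > threshold:
            notify(f"Bias alert on {attribute}: gap {gap:.2f} exceeds {threshold:.2f}")

# Simulated decisions before go-live: under-18s are never selected, which
# the loose age threshold tolerates, but the gap between ethnicity groups
# breaches its threshold and fires an alert.
decisions = [
    {"age_band": "under_18", "ethnicity": "A", "selected": False},
    {"age_band": "18_plus",  "ethnicity": "A", "selected": True},
    {"age_band": "18_plus",  "ethnicity": "A", "selected": True},
    {"age_band": "18_plus",  "ethnicity": "B", "selected": True},
    {"age_band": "18_plus",  "ethnicity": "B", "selected": False},
]
check_bias(decisions, notify=print)
```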

Ross:
So in that case, essentially they’re able to say, “I am concerned that there might be bias in this particular way,” and can then test that and check that?

Jo:
Absolutely. You can set it up, let it run in the background, and be notified when bias is creeping in, whether that be around age, ethnicity, or whatever you’ve got the data to be able to drive. You can let that run and be notified when things are changing, which I think is great. So you’re able to monitor what’s going on.

Ross:
Yes. And as you say, it’s both for regulation and for sheer ethics: you want to make sure that you don’t have inappropriate bias.

Jo:
Absolutely.

Ross:
Just going back to the T-switch, as I understand it, you can set that scale from transparent to opaque for different types of decisions. Is that right? And how might you implement that? What sorts of things might you set to opaque or transparent?

Jo:
It’s a scale that you can set across your different activities, or your different strategies, rather. So it might be that for one action that you’re setting up within the next best action, you set it to be opaque. In another, you might set it to be completely transparent. That’s based on the situation and the strategy we are talking about, and you can set it individually.

Ross:
What might be an example of when you would choose to be transparent or opaque?

Jo:
I might choose something like a marketing offer. If I had an action around promoting an activity such as collecting a customer’s data, or giving them a goodwill gesture… If you’re trying to develop loyalty, you might want to give some customers a treat when it’s their birthday. Using that, versus talking to them about something higher stakes, like whether they will be eligible for a credit card or not, those are the two ends of the scale where you need to think differently.

Ross:
Right. That makes sense. That’s fantastic. Thanks so much for your insight and time, Jo. Really appreciate it.

Jo:
No problem, thanks Ross.