A conversation with Patrick Hall - Senior Director of Product at H2O.ai
It’s not something I’d ever thought about, but it sounds obvious when Patrick explains that “’trust’ and ‘understanding’ are different technical problems” in machine learning.
“Basically: explanations lead to understanding, which leads to the ability to appeal decisions – all slightly different from trust.”
Check out his paper: “Guidelines for Responsible and Human-Centered Use of Explainable Machine Learning.”
He’s working on “all sorts of interesting things in highly scaled machine learning; mostly ‘interpretable’ and ‘explainable’ ML.”
“I’m just more and more aware that these automated systems for making decisions are becoming more common. From decisions about which music to show you, to things much more serious (credit cards, prison).”
He talks about unravelling black boxes and torturing black box models, and advocates ‘white box’ or ‘grey box’ models “for any high stakes scenario”.
“I don’t think it’s a good idea to use black boxes for really high impact decisions that affect people’s lives.”
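To make the ‘grey box’ idea concrete, here’s a minimal sketch of one approach often associated with this school of thought (not necessarily the one Patrick has in mind): a boosted-tree model with monotonicity constraints, using XGBoost’s `monotone_constraints` parameter. The “credit” features and data are hypothetical, purely for illustration.

```python
# A minimal sketch of a "grey box": a boosted-tree model constrained so its
# behaviour stays directionally sensible. Assumes xgboost and numpy are
# installed; the features and data below are hypothetical.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
# Hypothetical applicant features: [income, debt_ratio, years_employed]
X = rng.normal(size=(1000, 3))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(size=1000) > 0).astype(int)

# Constrain the model: the predicted score must rise with income and
# employment history, and fall with debt ratio. That is a reviewable
# guarantee a pure black box cannot offer.
model = XGBClassifier(
    n_estimators=100,
    max_depth=3,
    monotone_constraints=(1, -1, 1),
)
model.fit(X, y)
```

The constraint doesn’t make the model fully transparent, but it does make its behaviour predictable in the direction that matters for an appeal: more income can never, by itself, lower your score.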
Documenting software is important, and we discuss the grave reality of unintended consequences.
“We don’t want to be caught. It’s a whole new domain for liability.” I foresee algorithm lawsuits and ask if that’s happened over there. Americans are notorious for suing over hot coffee, let alone having one’s career, freedom and financial capability unfairly damaged by an algorithm that a company doesn’t really understand. But he “can’t comment on whether that’s happened yet”.
He does, however, cite a “famous algorithm, COMPAS, that was used and is probably still used to give judges and parole boards a risk score about whether someone will commit a crime again in the future.”
I didn’t know about that. A black box is already in control of some people’s freedom. How do you feel about that?
“There’s a documented case in the New York Times where, because of a black box, a person was wrongly held in prison. It was very hard for them to appeal.”
Turns out it’s hard to sue a black box …
“That said, it hasn’t yet scared off innovation – there doesn’t appear to be friction from a legal perspective in many areas.”
Yet. I saw a chart of the growing investment in machine learning; it appears to be gaining momentum, most notably in the medical industry. I think the industry would benefit from communicating worst-case scenarios now, so it’s not such a shock when an algorithm kills the odd person. Expectation management 101.
“There’s an implied trade-off between accuracy and an algorithm’s comprehensibility, but this is being questioned – some people say it doesn’t exist.”
(He cites Cynthia Rudin’s dismissal of the accuracy/interpretability trade-off: “Please Stop Explaining Black Box Models for High-Stakes Decisions”.)
“Complex models may be more accurate on static data sets, but this may or may not be the case in the wild …”
He moves on to the gap between accuracy on static test data sets and ongoing accuracy in the real world. Personally, I think Tesla is on the money with their up-front goal of designing a system that can handle long-tail events. If the system doesn’t truly ‘understand’, as Elon puts it, it’s useless.
I ask about his vision for ML as a technology and what it might solve for the planet. For this question, I always reference the optimised creation and allocation of resources – economics 101 on supply and demand: increasingly scarce resources (happening) meeting growing demand (happening) will result in mass unaffordability for the 7 billion people on the planet. Oh wait, or is it 8 billion now?
“This is my take as a kind of cynical, medium-term player in the software and analytics market: these technologies have been deployed at the margins of businesses for decades. Marginal gains in terms of saving or increasing revenue. But a hospital is a hospital. Not a data science firm. They charge money for making people better. A power company charges people for power.”
“The longest-term, most successful applications of this tech are marginal increases in value. The potential for machine learning is largely to optimise things on the edges … and maybe optimise everything. So, yes, it could make life better for everyone. But as we draw close to the precipice, it could make it worse.”
How, I ask?
Hacked. Unfair. Biased. Wrong. “A combination of these problems.”
(In an email follow up, he clarifies a point on fairness: “I'm just not sure that what American academic elites think about fairness will translate to even middle America (/main St. America) and then how can these ideas work in other cultures with completely different notions of privacy, social responsibility etc.”)
My interpretation here is clear: the elite will be building the algorithms that will impact (optimise, control and allocate) an incredible number of people’s lives. Who’s to say that what they think should happen, should in fact happen? Why is your way better than ours? A theme of cultural enlightenment comes up: sometimes it’s obvious – like gender equality – but often issues are so blurry that it would be arrogant for one culture or nation to intercede by imposing its will on another.
I ask whether he’s familiar with the idea of an AI winter, and for his thoughts on it. He references the massive disappointment with the industry in the past and warns that if we’re not careful, it’s going to happen again – not because of a failure of the technology, but because of a misunderstanding of it.
As a communication professional, I tend to agree. Managing. Ex.Pec.Tations.
“What value does deep learning hold for, say, Facebook – maybe 25% of the business, or perhaps more marginal? My point about deep learning is: it is used by big tech in a way that makes it hard to know if they are making or losing money with it. Moreover, it may not matter if they are losing money on deep learning, because they have so much money to begin with. This is different from how I work. The tools I make have to create tangible value quickly."
"We should focus on the value of specific use cases versus the go fast and break things mindset that is good…sometimes."
"Maybe it’s time to slow down and really think through how we’re going to impact people.”
We move on to cultural relativism and global power struggles, but Patrick [probably sensibly] doesn’t like to talk outside his field, although there is “one comment I can go on the record there with: I do think Chinese government spending is outpacing America. They just jumped into hooking a whole city up!”
China and their AI systems for resource and logistics control. From a pure progress perspective, maybe our democratic ways are not always the fastest way to do things, dammit! Certainly not when it comes to hooking up an entire city to a system practically overnight. Imagine how much data that’s generating?!?
More data = better models = greater demand for computational power = greater pressure on innovation = faster technological advancement = national advantage.
In my opinion, we’re already in the next cold war and we’re almost certainly going to lose if something doesn’t change.
“It’s cheaper to build a machine learning model than it is to build a hospital or drill for oil or something, but people do underestimate the cost of commercial data science projects. There has been relatively little AI investment; now there’s a bit of a mad dash. Certainly, my own career has benefited from that.”
Where’s it all going, Patrick?
“I think we’re living in the age of weak artificial intelligence. It’s good at scanning luggage in airports or recommending music, but I suspect these systems will take years to proliferate – to become more common as single-purpose tools. But there are more and more of them in our lives.”
“We just need to develop them responsibly. As a practice, we could see extremely draconian regulations put in place if something really bad happens. Technology is ahead of the government and there’s very little preventing innovation now, but …"
"… I think there’s a chance that if we’re not responsible as practitioners (or there’s a serious hack, widespread discrimination, or we start losing luggage), it could really lead to public distrust of the technology, and the result could be draconian overrule.”
He makes a surprisingly simple comment that both intrigues and terrifies me.
“I think government could eventually be replaced by innovation.”
“In [Washington] DC, you don’t have to stop at a red light – there’s an implied ‘5 second rule’ on the red light.” I crack up when he says, “it would cause an accident if you stop!”
I think his point is that society kind of creates its own rules, with cultural differences from place to place, more ‘guided’ than ‘controlled’ by the government. With time, people will get harder to control. I believe governments will need a deft touch to sustain their existence.
He references conversations he’s had with friends in the field; “an attorney friend says it’s hard to say whether machine learning hacks are illegal.” Are there even laws on the books for this kind of thing, I wonder? “But other friends say there’s an obvious legal precedent that exists to prosecute these kinds of things. There are certainly questions and grey areas as to whether some machine learning hacking is illegal or just ‘competitive ingenuity’.”
What’s that expression about the law being an ass? To be fair, Patrick’s right in saying “they’re very, very complex things.” Our institutions have to be cautious when enshrining anything in law, but this caution is being outstripped by the ‘move fast and break things’ attitude of Silicon Valley.
Something’s gotta give, eh?
“There’s a power struggle between a traditional and an automated approach to ‘data driven’. It’s tempting to believe that the latter will win, but maybe it won’t.”
I’m personally beginning to see what’s happening as a new breed of economics.
“There’s an undeniable drive to a data-driven economy. Even if legislation does [install draconian rule] … the toothpaste can’t go back in the tube!!”
Black markets will have drug dealer efficiency. Now that’s a worrying thought: black market algorithms.
“If we as practitioners are lazy, don't test and explain our models, governments and the public may turn against AI, potentially resulting in draconian over-regulation of the field and a new AI winter. So let's get our act together and be responsible in our own practice.”
Ha! Finally, an AI winter that won't be marketing’s fault ;)
“What I’m working on now – I’m trying to finish up my work on explanations. The main thing about explanations is that they enable ‘appeal’."
"Transparent machine learning is not just needed - it’s crucial, think about college applications and prison.”
“I would argue no one actually wants a black box decision made about them. Firms should have to tell you why, based on information.” Oh my god, I agree.
He says the field is about 90% solved when it comes to explanations (and ‘appeal’).
“What I want to work on now is model debugging – this will make them trustworthy. ‘Trustworthy’ and ‘appeal’ are two different technical problems. The root of appeal is understanding. Understanding is ‘tell me why it made this decision’.”
“They’re overlapping concepts. I want to trust and understand a system. You want both. Well, ideally both, but you can do one without the other.”
It’s important to truly understand the contributions of an entity in a complex system, he says, citing ‘Shapley Values’.
(Shapley values in machine learning: a prediction can be explained by assuming that each feature value of the instance is a “player” in a game where the prediction is the payout. The Shapley value – a method from coalitional game theory – tells us how to fairly distribute the “payout” among the features.)
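To make that concrete, here’s a minimal sketch of computing Shapley-value explanations for a tree model with the open-source `shap` library and scikit-learn. The data and model are illustrative stand-ins, not anything from our conversation.

```python
# A minimal sketch: Shapley-value explanations for a gradient boosting model.
# Assumes the `shap` and `scikit-learn` packages are installed; the synthetic
# data here is purely illustrative.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Toy binary-decision data: 1,000 rows, 5 features.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# For one row, each feature's Shapley value is its share of the "payout":
# its contribution to the gap between this prediction and the average one.
print("Base (average) model output:", explainer.expected_value)
print("Per-feature contributions for row 0:", shap_values[0])
```

A row’s per-feature contributions, plus the base value, add up to that row’s raw model output, which is exactly what makes them useful for an appeal: “tell me why it made this decision”, feature by feature.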
To finish up this incredible conversation: tell me something I don’t know, Patrick.
“People are using these decision tree ensembles and the like, and I think they fail to grasp the complexity of the models they’re letting loose on the world. Torture one gradient boosting machine on one data set and the complexity is astounding. They’re just using them anyway! People talk about machine learning models online, but they don’t do any real testing of how complex these models are, even the simple ones …
… oh, and ‘50 Years of Data Science’ – you should definitely read this paper.”
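On that complexity point, it’s easy to check for yourself. Here’s a quick sketch (scikit-learn, synthetic data, nothing from the conversation itself) that just counts the leaf-level decision rules in a modest gradient boosting machine:

```python
# A quick sketch of how complex even a "simple" model is: count the leaves
# (terminal decision rules) in a modest gradient boosting machine.
# Synthetic data; scikit-learn assumed installed.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
model = GradientBoostingClassifier(
    n_estimators=100, max_depth=4, random_state=0
).fit(X, y)

# Each boosting stage is a regression tree; leaves are nodes with no children.
n_leaves = sum(
    (tree.tree_.children_left == -1).sum()
    for tree in model.estimators_.ravel()
)
print(f"{n_leaves} leaf rules across {model.n_estimators} trees")
```

Run it and you typically see over a thousand region-specific rules, before even considering how the hundred trees interact. That’s the “astounding” complexity hiding inside a fairly default model.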
----
Further resources sent:
Evidence of bad things happening in the U.S. in AI today:
- http://gendershades.org/
- https://www.nytimes.com/2017/06/13/opinion/how-computers-are-harming-criminal-justice.html
Most of his thoughts mentioned are also organized here.