Interviewee, Jason Katzer: https://www.linkedin.com/in/jasonkatzer/
Jason Katzer (Capital One) is fairly cynical about emerging tech (we laughed over the TV series Silicon Valley's ML hot-dog-not-hot-dog app). His cynicism is understandable given the number of buzzwords in the industry. Without naming names, I always grin whenever I see the word 'AI' (Artificial Intelligence), because equating computation to human-level 'intelligence' is, to say the least, a stretch.
Human cognition is incredibly complicated. Psychology spent decades attempting to understand what happens within the skull largely through observation and correlation, mapping the mind against people's answers or observations of their behaviour. Neuroscience seemed to open the door to the mind, and humanity was excited to finally get answers to long-asked questions, yet the staggering scale and complexity of our brains has kept any truly significant answers at bay.
For example, we haven't found 'consciousness'. I would recommend Annaka Harris' book 'Conscious', in which she resurfaces the fascinating idea of panpsychism. Personally, I think the futures of neuroscience and quantum mechanics are intertwined; together they may be the only way to determine whether free will even exists, and to shed light on why there is 'something it is like' to be a human mind (Sam Harris' practical definition of consciousness).
Our conversation begins with a story from 'Valley of the Boom', about a salesman conning major film studios with a video streaming product that boasted incredible speeds. The company sold those streaming speeds to eager production companies even though the technology to stream at such speeds literally didn't exist yet. They all bought it because they didn't have the knowledge to ask the right questions. He laments that so much of technological advancement has a 'fake it until you make it' attitude.
The details aren't important; new technology sold as a solution to all of our problems is a tired theme. In his opinion, it basically comes down to a lack of due diligence. I agree for the most part, but the truth is that technology products are often extremely difficult for the layman to understand. I've written about machine learning for months now and, whilst I can hold my own at a meta level with a technical expert, the second details come up I have very little chance of knowing whether what the other person says is accurate. The devil is in the detail, yet salespeople don't know it, and if buyers don't know what to ask, they'll be landed with products that don't serve their needs in the way they expected. I've learned that technical solutions have a million nuances that all have to fire just right to succeed.
“Engineers love working with the new and shiny”,
he says, and it's a perfectly forgivable bias that consultants do well to pay heed to. An engineer who's worked with one technology for decades will jump at the opportunity to work with new and exciting things, like anyone would. Even in data science, machine learning won't always be a smart implementation when, say, a regression analysis might suffice. (I'm reminded of that meme going around about blockchain: Do I need a blockchain? ——> No.) Human psychology is such that the ends often justify the means; people don't collect information and then come to an opinion, they come to an opinion and then collect information to support it. In the same way, people will pick the tools or approaches they want to use and then justify why they are necessary. Only if you're aware of these biases can you engage the counter-thoughts. And even with awareness of your biases, you're not guaranteed to avoid them; the grandfather of behavioural psychology himself (Kahneman) laughs in various interviews about how even he can't escape his System 1 thinking.
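The regression point is worth making concrete. Here's a toy sketch (the data and numbers are invented for illustration): a plain least-squares fit in a few lines of numpy recovers a simple trend for which a neural network would be overkill.

```python
import numpy as np

# Hypothetical data: a noisy linear trend - the kind of relationship
# that needs no deep learning at all.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
y = 3.0 * x + 2.0 + rng.normal(0, 0.5, size=x.size)

# Ordinary least squares via numpy: the whole "model" is two numbers.
A = np.vstack([x, np.ones_like(x)]).T
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]

print(f"y = {slope:.2f}x + {intercept:.2f}")  # close to the true 3x + 2
```

If a fit like this explains the data, reaching for anything heavier is the "new and shiny" bias at work.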
Jason simplifies what machine learning is for me: "ML is pretty much all about feature engineering - Garbage In, Garbage Out". For example, he references NLP: converting text into numbers, having those numbers predict other numbers, then converting the results back into text. Data engineers and scientists spend 80% of their job (maybe more) cleaning, sanitising and transforming data so that machine learning can produce a proper model.
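To illustrate the "text into numbers" step, here is a minimal bag-of-words vectoriser written from scratch (the function names and example sentences are my own; real pipelines use library tokenisers and do far more cleaning - hence the 80%).

```python
# Minimal sketch of "converting text into numbers": each document
# becomes a vector of token counts over a shared vocabulary.

def build_vocab(docs):
    """Map every distinct token to a column index."""
    vocab = {}
    for doc in docs:
        for token in doc.lower().split():
            vocab.setdefault(token, len(vocab))
    return vocab

def vectorise(doc, vocab):
    """Turn one document into a list of token counts."""
    counts = [0] * len(vocab)
    for token in doc.lower().split():
        if token in vocab:
            counts[vocab[token]] += 1
    return counts

docs = ["the model ate the data", "garbage in garbage out"]
vocab = build_vocab(docs)
print(vectorise("garbage data in garbage out", vocab))
# -> [0, 0, 0, 1, 2, 1, 1]
```

Notice that the vector is only as good as the vocabulary it was built from: feed it garbage features, and the downstream model predicts garbage.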
ML is a technology of the meta, which is exactly why I think it's such a groundbreaking technology: there is essentially nothing that you can't meta-analyse, so once features are mapped and ML is employed, there is very little that can't be optimised. A line from a later interview comes to mind: "machine learning's value is in the margins". Who knows exactly how much ML contributes to the ROI calculation, but we do know that even very small percentages of optimisation can have a colossal impact at scale.
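The margins point is easy to see with back-of-the-envelope arithmetic (the figures below are entirely hypothetical):

```python
# A 0.5% efficiency gain sounds negligible until applied at scale.
annual_cost = 2_000_000_000   # hypothetical $2B annual cost base
uplift = 0.005                # a "marginal" 0.5% optimisation
savings = annual_cost * uplift
print(f"${savings:,.0f} saved per year")  # prints "$10,000,000 saved per year"
```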
This echoes a recent chat I had with a Lead Enterprise Architect (at Nestlé): "we're at least a couple of years away…we have to change everything so that there's actually metadata for ML models to work with."
As is only natural in an ML conversation, I ask him to distinguish it from AI. He classifies ML as a "middle schooler taught to fish" and AI as "how do you get them to teach themselves to learn how to fish". Maybe I'll have to lose my grudge against the term 'AI', because the industry is moving forward with its own definitions irrespective of the psychology department's resistance (hey, give us a break, we spent decades agonising over the definition of intelligence!). Over time, I suspect 'AI' will come to define any computational model that is non-human-generated and improves itself over time so as to adapt to new situations, whereas 'machine learning' will define algorithms with narrower applications.
I’ve heard some professionals state that, for ML to be effective, the entire organisation needs to shift its culture - its very attitude towards data (becoming truly data-centric).
Then again, I've also talked to experts working in incredibly isolated environments (focusing on singular use cases) and they've had great success biting off small chunks of use cases, showcasing great results, and then being invited to replicate those results in other areas of the business. Jason shares this opinion: "ML, to pay off, it doesn't need the whole company to be using it".
Whilst I think a more total transformation will deliver better results, waiting for the technology firms to build plug-and-play models is a valid counter-argument - but we're not quite there yet. In a hilarious interview with Marijn Markus, he suggested that no company in their right mind would ever plug and play a solution "without a whole damn team of people asking where the f*** this number came from!". Maybe I'm more cynical; in digital media I saw clients do precisely that.
He raises the interesting point that "technology gets commoditised once the problem gets solved". He referenced how Amazon's original Dynamo became Cassandra and then MongoDB, and how massively scalable databases are now outsourced to cloud companies (referencing Amazon).
He believes it doesn't make sense for a company to jump to a new technology if innovation is not its core business. "The pioneer won't necessarily make all the money."
Jason succinctly concludes that companies which don’t have data as a core competitive advantage have zero incentive to pioneer ML technology. They’ll just wait for the pioneers to progress it to a usable point, then - and only then - will there be mass uptake.
There's actually an incentive not to progress technology internally and, if you think about it, there's no reason for the likes of Google to release the most advanced technology they have - they only need to release the most advanced technology on the market to remain ahead of the game. In a nutshell, "there's an immediate ROI from the current state of AI" - so why advance it?
An aside: that's why competition is so crucial and why, despite its weaknesses, capitalism has been so incredibly successful. I mean, look at what SpaceX has done to the space launch industry in only a few years! (Compare Amazon's and Virgin's space projects.)
So what? It comes down to 'core competencies': businesses should (yes, while keeping an eye out for new innovations) focus on their core areas of business. As merchants of innovation, machine learning professionals should sell products that serve the lowest-hanging fruit - don't try to change the world for business success (as you genius people like to do!); simply sell a simple solution to an existing problem.
Perhaps selling machine learning as ‘machine learning’ isn’t the way to go. Sell the result and simply hold ML as another – very powerful – tool in your pocket.