Updated: Mar 22
This is the seventh in a series of interviews with members of the Machine Commons Supplier Collective. Subscribe to the site to be alerted about future posts, or become a partner today!
Manish Patel runs Jiva.AI, a funded healthcare start-up in London creating a platform for data-driven healthcare.
This COVID thing, eh?
“Covid. It’s annoying. I have elderly parents and so it’s actually also quite scary. Societally, it is really difficult. Kids aren’t crazy yet, but the anxiety issues are definitely coming out.”
“Humans are networkers naturally; mental health is a big problem.”
Are you worried about it out-mutating our vaccine efforts?
“In general viruses tend to mutate to less virulent strains. If it’s killing off its hosts too quickly it doesn’t tend to spread.”
“I think the bigger problem will be the consequences of the last year.
"Especially in the UK. Possibly among the worst in the world. I’ve seen reports that reckon there are 600-800 people per week that didn’t have their prostate scan.”
“And that’s just one disease area. That’s a huge backlog. Think about kidney, heart, lungs… there will be this tsunami of diagnoses to overcome. It’ll bring the NHS to its knees trying to cope with the demand.”
How’s business in the pandemic era? Have you felt much impact?
“Not to sound callous – as obviously the widespread death isn’t great, but from a business perspective that’s a huge opportunity for AI companies.”
“AI in healthcare is a hot topic! Hospitals and universities are overwhelmed by COVID-19, so they haven’t been as engaged as they were previously; they’re just too busy. But they will be.”
“We count as a new venture, so VCs just haven’t been as interested; they want to invest in companies they’ve already put money into.”
“I understand it, it’s what I’d do. If my existing venture has cashflow issues, I would definitely approach the investors [who stand to lose money if it fails].”
“On the flipside, the platform product we’re working on has received a lot of traction. We’re trying to build all of our diagnostics from this platform.”
“Every year there are thousands of SMEs trying to create better diagnostics, but they don’t have AI or ML really embedded in what they do.”
“We’re trying to democratise the use of AI, building this platform that’s relatively easy for non-AI specialists to plug data into.”
“As a business model, we don’t want to create specific medical diagnostics, so we’re building a platform that allows for multi-modal AI processes.”
“Liver disease is a good example. There’s the modality based on proteomics (the study of proteins). Another modality of biomarkers. Or genetics. Or primary care indicators (lifestyle, age, that kind of thing).”
“All of these separately give you one kind of indicator for the disease. Different aspects of a complex system. So with the platform, we’ll build it iteratively and only improve it.”
“I explain it to clinicians like the parable of the blind men and the elephant.”
“It’s not ready yet; for example, when we’ve been working on liver or prostate disease diagnostics, we’ve had to build multiple toolsets.”
“In a perfect world, they would have been built on top of our platform, building the diagnostics using that base toolset.”
So, is this an unstructured-data approach, CNNs (convolutional neural networks)?
“We don’t actually use neural networks – this was before AI was a sexy thing! No perceptron-style approach.”
“We build entirely self-organizing software units.”
Having never heard this term, I mumble the word ‘layman?’ unintelligibly.
“I’ll explain what I mean! I got my PhD in 2001. I was trying to simulate the complex behaviour of a tumour. Growth and genetic mutation, etc. – lots of models to simulate component parts of the tumour.”
“It can all be stitched together: there’s a generic way of plugging in multiple different models with minimal input from the user.”
“This is a piece of software that has more to do with simulations than ML.”
“And this basically applies to the machine learning approach. So the same principles apply to the ML models.”
In truth, I’ve heard ‘one model to rule them all’ claims before and I’m sceptical. So I ask…
What’s the efficacy like?
"We validate with real world data. Obviously training it is supervised (i.e. labelled scans), so we’re able to validate the output of the new model to what we’re seeing in the clinic.”
“Sometimes modalities are not compatible in the way you might expect them to be.”
“So, in prostate detection, there are different scans. A 1.5-tesla scan, then a 3-tesla scan (different magnet strengths, which give different imaging resolutions).”
“If you were to train a [normal ML] model on 1.5 tesla data then get it to analyse 3 tesla data, you’d get completely different results.”
Sounds amazing. Is there a patent – tell me, what’s your secret sauce?!
“Actually a patent is going through right now.”
“The fundamental component of the model is based on cellular automata – an idea developed by John von Neumann. He proposed that you can have a one-dimensional grid of black and white squares and one set of rules, and a rule might be that you have two neighbour cells…” I cut him off.
…Wait, isn’t this the ‘Game of Life’?
“Yes, that totally came out of it.”
“Even with a 1D grid, from a simple rule, you can get completely random behaviour. The idea is that you don’t need to have complexity under the hood to get complex behaviour.”
“Like us. Cells behave in, well, I say ‘simple’, but they are quite simple for what they are.”
“This beautiful system comes out of it. We create a model with a certain number of interacting agents with simple rules, and you optimize the entire system. Instead of optimizing weights, you optimize the rules of the system.”
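The kind of rule-driven system Patel describes can be illustrated with a textbook one-dimensional cellular automaton. This minimal Python sketch uses Wolfram's Rule 30 (a standard example from the literature, not Jiva.AI's actual model) to show how one simple neighbour rule on a 1D grid produces surprisingly complex behaviour:

```python
# A minimal 1D elementary cellular automaton (Wolfram's Rule 30).
# Each cell's next state depends only on itself and its two neighbours,
# illustrating the von Neumann idea: complex behaviour from simple rules.
# This is a generic textbook example, not Jiva.AI's proprietary model.

RULE = 30  # the rule number encodes outcomes for all 8 neighbourhoods


def step(cells, rule=RULE):
    """Apply one update; each cell looks at (left, self, right)."""
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2)  # left neighbour (wraps around)
                  | (cells[i] << 1)          # the cell itself
                  | cells[(i + 1) % n])) & 1  # right neighbour
        for i in range(n)
    ]


def run(width=31, steps=15):
    """Evolve a single seed cell and collect each generation."""
    cells = [0] * width
    cells[width // 2] = 1  # single black cell in the middle
    rows = [cells]
    for _ in range(steps):
        cells = step(cells)
        rows.append(cells)
    return rows


if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))
```

In Patel's framing, training would mean optimizing the rule itself (here, the integer `RULE`) rather than continuous weights, which hints at why the search is slower than gradient descent but the trained system runs fast.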
“We then apply fuzzification. We attach semantic meaning to these sets of rules.”
But that’s a huge decision set. There must be something that guides the optimization methods?
“[With deep learning] you give up entropy and ease of use in exchange for a faster training methodology.”
“Here it’s the other way around; our training is much longer and harder to do.”
“What a deep learning model trains in an hour might take our model weeks to get to a solution. But the model runs as fast as a neural network once complete.”
Abstraction on abstraction, I confess I don’t really understand.
Does this then make it a black box?
“During training, yes, but at the end of it, no. It’s more of a grey box. Clinicians are reluctant to use AI as it’s often not explainable.”
“You can actually ask our model why it’s made a decision. You can back trace through the system to, say in a scan, point to the image that led it to decide one way or another.”
“Expert knowledge is attached as semantics to the sets of rules. There are a few base concepts we use and you build a network of logic that tells the machine what you want it to do.”
“Based on this network of knowledge, you can merge this up with the corresponding logical network or rules.”
“In an image, whether tumours, or the prostate, the prostate wall; you give the machine everything it needs to know about the problem.”
“That’s all I can really say without spilling IP!”
Thank goodness. Dig an inch deeper, I’ll definitely be lost.
What’s the most challenging aspect of what you do?
“There are several very difficult aspects.”
“We’re a new company, which makes it hard to get ahead, and hard for us to explain to prospective clients that we’re not an AI imaging company.”
“When they hear ‘AI’, everyone slots you into the imaging-and-radiology kind of slot. It’s hard to get away from that mindset.”
“Communicating our ideas is very hard and I find it really difficult to explain what we’re doing technically. But penetrating the NHS has always been hard.”
How did you get into this line of work?
“After my PhD I went straight into banking and hedge funds, algorithmic trading. Just wasn’t satisfactory in life!”
“You get up at half five then spend 12 hours at a desk. There’s no real upside. I mean, you get paid well, but it’s not satisfactory.”
“The other founder is an old friend of mine and he brought up that there’s this huge data issue in healthcare. Previously, I was CTO at another company called Cupris Health and talking to him really reminded me that I loved working in data and healthcare.”
What do you find interesting about machine learning?
“The mindset in AI and ML, from my interactions with data scientists and even my own CTO, is that we all tend to go along the deep learning curve and find ourselves dug into a hole.”
“Think about how long it took to make GPT-3 – how many inputs it took (practically the whole internet). If we’re going to create a genuine AGI, you need something that really stitches it all together.”
“You need fusion technology.”
“Fusion is the glue that will put the whole thing together. It’s a technology that’s missing in everything. That’s the piece. You can’t make more practical, useful AI tech without this concept.”
“Without it, you predicate your entire approach on having the entire dataset to begin with. But in reality that never happens!”
“Take the medical world. There’s a diagnostic, and then 6 months later suddenly there’s a new modality (a new factor not seen in the training set). Now they can’t integrate it, so they have to start again!”
So, your Game of Life approach is the route to fusion?
“It’s one route. If I was to put my marketing hat on, I would say YES!!”
What’s another route to fusion technology in ML?
“Ensemble learning is probably the more practical route. You have lots of different networks and then some kind of average or decision tree on the outputs.”
“Another route is to have one model that inspects the other models. Like a bullsh*t meter [for ML approaches]. And that’s the kind of fusion I mean.”
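The ensemble route Patel mentions can be sketched in a few lines. In this toy Python example the per-modality "models" are hypothetical stand-ins (one per data modality from the liver-disease discussion), and majority voting and score averaging are two of the simple combiners he alludes to:

```python
# Toy sketch of ensemble fusion: several independent models score the
# same case, and a simple combiner fuses their outputs. The per-modality
# "models" below are hypothetical stand-ins, not real diagnostics.

from statistics import mean


def majority_vote(predictions):
    """Fuse binary predictions (0/1) from several models by voting."""
    return int(sum(predictions) > len(predictions) / 2)


def average_score(scores):
    """Fuse continuous risk scores by simple averaging."""
    return mean(scores)


# Hypothetical per-modality models for one patient case.
def imaging_model(case):
    return 1  # pretend the scan looks suspicious


def biomarker_model(case):
    return 0  # pretend blood markers look normal


def genetics_model(case):
    return 1  # pretend a risk variant is present


case = {"patient_id": "demo"}
votes = [m(case) for m in (imaging_model, biomarker_model, genetics_model)]
print(majority_vote(votes))  # fused decision: 1 (two of three models agree)
```

A "model that inspects the other models", as Patel describes, would replace `majority_vote` with a learned combiner (often called stacking) that takes the individual outputs as its own inputs.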
What do you find most exciting about the future of machine learning?
“Well. The moment you make a prediction you’ll be wrong!”
“I think AGI is probably a lot closer than we think.”
“Well, it depends on your definition. I think we’re getting quite close, and I think a fusion approach to technology is the way.”
“I’m not talking about awareness and consciousness. I’m talking about a system that is able to learn and grow for itself.”
“The basic rule is that if I were to create a general AI, I would want to ask it a question that it knows nothing about. Then it goes off and tries to learn by itself. I think that qualifies as general AI. And that’s practically useful for us as a human race.”
“To understand that there’s something it doesn’t know. Not just reading a wiki page but understanding.”
“For example, my son has to do a project every half term. This one is a ball going down a track, and the objective is to get down the track as slowly as possible. So he put bumps in the road. Sounds simple, but it’s quite a cool piece of creativity.”
“So, like my son, ML would be generally intelligent if it doesn’t understand the physics but can intuit a solution.”
“I don’t think we need to get it to consciousness. I mean, am I even aware?”