Updated: Aug 22, 2019
“My favourite thing is accomplishing things [that were] but a dream only months ago.”
Seyed Sina, dreamer, ML expert, roboticist.
He works in the space where machine learning and robotics overlap.
“What I’m doing focuses on arms.”
Sina’s robots catch things. They take sensory information, understand the object, understand the motion of the object, then catch the ball.
I ask him about the hardest things in his work.
“It’s a different research problem to Boston Dynamics.”
He tells me the difficulty in the experimental setup is having no prior knowledge about how the object is going to move. The program has to determine the trajectory and location, accounting for the uncertainty of the directionality of the object.
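The prediction problem he describes can be sketched in miniature. The snippet below is a hedged illustration, not Sina’s actual pipeline: it least-squares fits a simple 2D ballistic model to noisy position observations, then extrapolates to a future catch time. All names and the toy model are assumptions for illustration.

```python
import numpy as np

G = 9.81  # gravity, m/s^2

def fit_trajectory(times, xs, ys):
    """Fit x(t) = x0 + vx*t and y(t) = y0 + vy*t - 0.5*G*t^2 to noisy samples."""
    A = np.column_stack([np.ones_like(times), times])
    x0, vx = np.linalg.lstsq(A, xs, rcond=None)[0]
    # Move the known gravity term to the left side so the y-fit is linear too.
    y0, vy = np.linalg.lstsq(A, ys + 0.5 * G * times**2, rcond=None)[0]
    return x0, vx, y0, vy

def predict(params, t):
    """Extrapolate the fitted trajectory to a future time t."""
    x0, vx, y0, vy = params
    return x0 + vx * t, y0 + vy * t - 0.5 * G * t**2

# Simulate noisy camera observations of a throw during the first 0.3 s.
rng = np.random.default_rng(0)
t_obs = np.linspace(0.0, 0.3, 10)
x0, vx, y0, vy = 0.0, 3.0, 1.0, 4.0  # "true" throw, unknown to the fitter
x_obs = x0 + vx * t_obs + rng.normal(0, 0.01, t_obs.size)
y_obs = y0 + vy * t_obs - 0.5 * G * t_obs**2 + rng.normal(0, 0.01, t_obs.size)

params = fit_trajectory(t_obs, x_obs, y_obs)
x_hat, y_hat = predict(params, 0.6)  # where will the ball be at t = 0.6 s?
```

Even this toy version shows the core difficulty: the fit is made from a short, noisy window of observations, and every bit of noise is amplified when extrapolating forward in time.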
"Leg robots have [to focus on] stability issues; they’re always loaded, which is important to not fall down.”
“With arms, a stable base means one can focus on making them more intelligent.”
A more stable base means he can make the catching more intelligent: he can program the arm to reason further into the future. When he said these two things, it became clear to me that ‘weight bearing’ and ‘catching’ can be thought of as two entirely distinct research problems, which is not immediately obvious to a layman. I realised that the lines of progression of these distinct fields would surely - and suddenly - reach an intersection.
Think about how a person catches a ball: they run, sometimes jumping, meanwhile sending their arms in the direction they know they will need them in the future - even though, at the moment they triggered that arm movement, it was not necessarily in the direction of the ball. It occurred to me that, for a future ‘full’ robot (i.e. one with legs and arms), the field of leg stability could progress to the point where it could feed a forecasted virtual fixed point in space to the program governing the arms (such as when running to a ball). The arms could then treat this forecasted fixed point as a virtually stable base, and so act more intelligently, catching the ball as if they were moving from a fixed point.
I think this is significant because, from a consumer perspective, robots will appear to have terrible motor skills one year and then, almost from one day to the next, transcend to superhuman motor skills. I mean that the sudden advancement will take us all a bit by surprise. In truth, the latest work is already surprising.
There are so many niche fields impacted by machine learning that it’s as though the technology is improving many independent ‘parts’ of some unknown future machine. One day many of these distinct fields will begin to merge, and we’ll see truly dramatic improvements. I think the products and services released later in my life will seem almost like magic. There is not much in my present environment whose workings I don’t at least vaguely understand; perhaps that won’t be so true of my future. Things like GPS are already a bit foggy.
“Physical constraints are observed, then machine learning predicts the motion of the ball and formulates a catch.”
The aim of the game is to make the fastest catch as smoothly as possible. He programs the constraints of the experimental setup - the dimensions of each mechanical part, rules like gravity, and so on - into the system; then he sets his models loose to mathematically arrive at the most perfect catch achievable.
It needs to observe the state of the system, predict, and then formulate the correct behaviours in response to the dynamic environment - all to achieve the programmed goals.
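That observe → predict → act cycle can be caricatured in a few lines. The sketch below is a deliberately crude one-dimensional stand-in, assuming an invented `predict_target` function and a bounded proportional controller in place of a real planner; none of these names come from Sina’s system.

```python
def control_step(arm_pos, target_pos, gain=0.5, max_step=0.1):
    """Move the arm a bounded step toward the predicted catch point
    (a crude proportional controller standing in for a real planner)."""
    error = target_pos - arm_pos
    step = max(-max_step, min(max_step, gain * error))
    return arm_pos + step

def run_loop(arm_pos, predict_target, n_steps):
    """Repeat the observe -> predict -> act cycle n_steps times."""
    for t in range(n_steps):
        target = predict_target(t)               # observe + predict
        arm_pos = control_step(arm_pos, target)  # act
    return arm_pos

# The arm converges toward a (here, fixed) predicted intercept point.
final = run_loop(arm_pos=0.0, predict_target=lambda t: 1.0, n_steps=30)
```

In a real system the predicted target would itself shift every cycle as new sensor data refines the trajectory estimate, which is what makes the loop hard.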
He loves his challenging field, especially how the rapid advancement of both machine learning and robotics means he regularly accomplishes newly dreamed-up feats.
He runs only a handful of real tests on the physical experimental setup to avoid damaging the arm - it costs an arm and a leg! The monumentally profound idea here is that, in the real world, time and expense are friction to testing. Yet in the mind of a machine, these factors fall away.
Thousands of revised catch attempts, done in a heartbeat. It trains on situations that never actually happened. Part of his work is to make the simulated and real situations as similar to one another as possible, so he tests in reality a few times to be sure, then “transcends the training into the simulated state”.
“Machine learning should be able to compensate for uncertain behaviours. Whilst you’re fitting your models, if it’s not learning or generalising it cannot react to uncertain environments.”
Waymo and other companies have already trained cars that can drive around test courses entirely on simulated data. In other words, not only are their environments simulated but the simulated environments themselves are made up. A program created them by taking real situations and then moving various elements around. Gulp.
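The idea of taking real situations and moving various elements around can be sketched very simply. The snippet below is an illustrative assumption, not Waymo’s or Sina’s method: it takes a few recorded scenarios and jitters their parameters to mint many virtual ones, a basic form of what the field calls domain randomization. The scenario fields and jitter ranges are invented.

```python
import random

def perturb(scenario, rng):
    """Return a new virtual scenario by jittering a real one."""
    return {
        "ball_speed": scenario["ball_speed"] * rng.uniform(0.8, 1.2),
        "launch_angle": scenario["launch_angle"] + rng.uniform(-5.0, 5.0),
        "release_height": scenario["release_height"] + rng.uniform(-0.1, 0.1),
    }

def generate_training_set(real_scenarios, n_virtual, seed=0):
    """Mint n_virtual made-up scenarios from a handful of real ones."""
    rng = random.Random(seed)
    return [perturb(rng.choice(real_scenarios), rng) for _ in range(n_virtual)]

# One real recorded throw becomes a thousand that never actually happened.
real = [{"ball_speed": 6.0, "launch_angle": 45.0, "release_height": 1.5}]
virtual = generate_training_set(real, n_virtual=1000)
```

A model trained on the thousand perturbed scenarios has, in effect, seen a far wider range of situations than was ever physically recorded.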
“You get the data and then generalise to many more data points.” A majority of the training data are situations that only ever happened virtually, which
“works 99% of the time in the real world. You just need to make sure that the 1% of the time isn’t going to kill anyone.”
(He qualifies he’s not an expert in safety!)
What’s the implication of all of this?
Well, machine learning is making a tangible and significant impact on research costs - in this case, in the field of robotics. It means the overall cost of developing Seyed’s robot arm is falling. Machine learning is also making the sophisticated programs that govern robotic movements easier to create. In turn, this lowers the ‘barrier to entry’ into the field of advanced robotics.
The result? “Progress is happening REALLY fast.” He thinks the field will explode soon.