Feeling stiff? Let the computer take a look
How computer vision could revolutionise the management of musculoskeletal disease
What if you could point your smartphone camera at your hand to see whether your joint swelling had got worse? Or set up your tablet on a tripod and have your range of movement assessed while you exercise in front of the camera?
Before you read this, sit up straight. You’ll thank me later.
🤷‍♂️ Problem
1.7 billion people (yes - billion) have a musculoskeletal condition. They’re the leading cause of disability worldwide and are costly to both health services and the wider economy (early retirement, absenteeism).
💡 Solution
Use computer vision, a type of AI, to provide monitoring and rehabilitation for musculoskeletal conditions. Computer vision solutions are scalable and can be used remotely.
📖 Terms
Computer vision. An interdisciplinary scientific field that deals with how computers can gain high-level understanding from digital images or videos. It has a wide range of applications across healthcare, from monitoring blood loss in the operating theatre to detecting abnormalities on medical imaging.
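To make that definition concrete, here's a minimal Python sketch (using NumPy and a toy 5×5 "image" rather than a real photo) of two ideas that come up throughout this issue: an image is just a grid of numbers, and the first step towards understanding it is detecting low-level features like edges.

```python
import numpy as np

# To a computer, an image is a grid of numbers (pixel intensities, 0-255).
# A toy 5x5 greyscale image with a bright vertical stripe down the middle:
image = np.array([
    [0, 0, 255, 0, 0],
    [0, 0, 255, 0, 0],
    [0, 0, 255, 0, 0],
    [0, 0, 255, 0, 0],
    [0, 0, 255, 0, 0],
], dtype=float)

# The simplest kind of "understanding" is low-level feature detection:
# an edge is just a large intensity jump between neighbouring pixels.
horizontal_gradient = np.abs(np.diff(image, axis=1))

# Edges appear where the gradient is large - here, either side of the stripe.
edges = horizontal_gradient > 128
print(edges[0])  # → [False  True  True False]
```

Real systems use far more sophisticated feature detectors, but the principle is the same: numbers in, structure out.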
📚 History
Developments in computer vision have broadly tracked our understanding of human vision…
1950s. In the name of science, no animal is safe. Scientists begin to understand the neurophysiology of visual perception by conducting experiments on cats. By observing how neurons react to certain stimuli, Hubel and Wiesel determine that vision is hierarchical: features like edges are detected first, then shapes, then more complex representations.
1959. Researchers develop a method to turn images into grids of numbers - a format which computers can read. The first image to be scanned? The inventor's son. An image so famous that it's now proudly stored in the Portland Art Museum.
1963. At a time when a PhD thesis could still be groundbreaking, Lawrence Roberts describes a process for deriving 3D information about solid objects from 2D photographs. His work is widely regarded as the precursor to computer vision.
1979. Japanese computer scientist Kunihiko Fukushima builds an artificial neural network called the ‘neocognitron’ that can detect patterns, including handwritten Japanese characters. It is arguably the first convolutional neural network.
1982. British neuroscientist David Marr establishes a framework for vision in which low-level algorithms that detect features such as edges and curves serve as stepping stones towards a high-level understanding of visual data.
2001. Paul Viola and Michael Jones develop the first face detection framework that can work in real time. Five years later, Fujitsu release a camera with face detection capabilities based on their work!
2010. The famous ImageNet competition is launched. The dataset contains over a million manually labelled images. Each year, teams submit their models to determine whose is best at image classification.
2012. A team from the University of Toronto submits AlexNet - a convolutional neural network (CNN) that wins the ImageNet competition with an error rate of 16.4%. Every winner since has been a CNN.
💼 Use cases
Providing physiotherapy and personal training
Improving athletic performance
Monitoring chronic conditions like rheumatoid arthritis
Monitoring degenerative conditions like Parkinson’s disease
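Most of these use cases boil down to the same pipeline: a pose estimation model extracts body keypoints from the camera feed, and downstream logic turns those keypoints into clinically meaningful measurements. Here's a minimal sketch of the second half of that pipeline - computing a joint angle (say, knee flexion) from three keypoints. The pixel coordinates are hypothetical stand-ins for what an off-the-shelf pose model might return.

```python
import numpy as np

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle at joint b (in degrees) formed by points a-b-c, e.g. hip-knee-ankle."""
    ba, bc = a - b, c - b
    cosine = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    # Clip to guard against floating-point drift outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0))))

# Hypothetical 2D keypoints (in pixels) for hip, knee and ankle,
# as they might come from a pose estimation model watching a squat:
hip, knee, ankle = np.array([320.0, 200.0]), np.array([320.0, 350.0]), np.array([320.0, 500.0])
print(joint_angle(hip, knee, ankle))  # straight leg → 180.0
```

Tracking an angle like this over repeated assessments is one plausible way a tool could quantify range of movement or flag deterioration, though production systems would need to handle camera angle, occlusion and keypoint noise.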
👥 Players
Kaia Health. A digital therapeutics company that have developed ‘Motion Coach’, a computer vision tool that can analyse movements in real time, evaluate performance and provide feedback.
Arthronica. A remote monitoring solution that uses computer vision to assess individuals suffering with arthritis. Can evaluate joint swelling, freedom of movement and other important biomarkers.
Machine Medicine. A UK based startup that uses computer vision to carry out motor assessments. Their current offering focuses on evaluating the symptoms of Parkinson’s disease.
CerebrumEdge. An Indian outfit that carries out computer vision ergonomics assessments to help prevent work-related musculoskeletal disorders.
Vitrue Health. A UK company providing computer vision-based motor assessments for use by doctors and physios to help treat musculoskeletal disease.
Physimax. Aimed at athletes, Physimax’s offering helps improve athletic performance. Their platform can both predict injury and help reduce it.
Lookinglass. An Australian startup supposedly developing a smart mirror that incorporates computer vision. Uses include movement monitoring, disease detection and medication adherence. No hard evidence of this mirror on their website however…
Huma. A UK health tech company with a wide range of products. They have a partnership with Tencent to develop a computer vision tool to monitor Parkinson’s disease.
🤔 Challenges
Health inequalities. Successful remote monitoring requires decent equipment and some level of digital literacy. Do those who would benefit most from computer vision solutions have the access or know-how to use them?
Fidelity. AI is only as good as the data it’s been trained on. Computer vision for healthcare needs to work on a range of devices, in a range of lighting conditions and in a variety of different settings. No mean feat.
Workflows. Regular assessments via AI are all well and good, but who’s going to review them? What constitutes disease progression or worsening symptoms? Clinicians don’t have time for further additions to their administrative burden.
Outcomes. Does a remote-first chronic disease management platform lead to better outcomes for patients with musculoskeletal conditions? We need the evidence before widespread deployment.
🌅 Opportunities
Remote. You don’t want to travel to the office and patients don’t want to travel to the clinic. Computer vision solutions open the door for remote monitoring from the comfort of your own home.
Triage. Waiting lists for elective surgeries and outpatient appointments are similar to those for Michelin-starred restaurants (thanks to Covid). Computer vision solutions could help prioritise patients who need to be seen sooner.
Standardisation. Clinicians are great, but the quality of their assessments can vary. Computer vision can perform consistent assessments, 24/7, and consistency can lead to better quality care.
Early detection. Degenerative diseases are miserable, and treatments to slow progression are key to maximising quality of life. Computer vision might pick up on subtle changes that clinicians would otherwise miss.
🔮 Predictions
Routine appointments. Remote monitoring will kill the paradigm of routine outpatient appointments. Clinicians will bring in only those patients who show signs of deterioration.
Cost savings. Continuous monitoring, early detection, timely intervention. All of this will ultimately stop small (cheap) problems turning into big (costly) problems.
Feedback. Computer vision for remote monitoring will ultimately be somewhat of a closed loop. The AI will observe, analyse then provide recommendations to the user. Any human-in-the-loop will be light touch.
Passive. Computer vision will be built into the devices we’ve already grown accustomed to in our home. Smart mirrors, video devices (Amazon Echo Show, Facebook Portal) etc. Our health will be analysed without us needing to initiate an assessment…
WFH. Computer vision will be deployed by large companies to ensure their employees have a home working set-up that won’t lead to crippling back issues down the line.
That’s it for this week - catch ya next time 👋