Why Zocdoc Can Understand My Insurance Card
by Matt Ferrari
Co-Founder & Former CTO
I recently had a chance to reflect on the mystery of my insurance card while sitting at the doctor's office trying to make sense of my benefits. I'm inspired by a company called Zocdoc that focuses on this common problem – understanding your insurance card. Who is in network? Who is not? What is a co-pay? How does it vary by provider? How is my doctor paid, and more importantly to me, how is my doctor incentivized to do a great job?
Zocdoc launched a deep learning feature called Insurance Checker, which uses machine learning to extract key pieces of information from an image of a patient's insurance card.
They recently partnered with Kelton Global to conduct a survey (n=1,000) and found:
- 28% of insured Americans are not always confident that the doctors they book appointments with will be in-network.
- 20% have been turned away when booking with a new doctor because the doctor didn’t accept their insurance.
- 16% made an appointment with a new doctor not knowing if their insurance covered the visit.
- 51% of insured Americans think it would be frustrating to make sure their preferred doctors were still in-network if their insurance was changing.
Add the extra layer of complexity that networks and providers change frequently and directories are often out of date, and trying to get help becomes a deeply frustrating experience – especially when you are probably already not feeling great and just need a doctor.
Their app pulls the carrier, plan, and member ID from the card image alone. It's fascinating to think about everything machine learning (ML) has to account for when working with images, from lighting to orientation. And the model considers not only what the card says but the overall user experience as well. You could think of it as a Yelp-like model for healthcare patients: it looks at how a doctor provides treatment and how long it takes to get an appointment, helps patients understand what they will pay before walking in, and helps with scheduling appointments – within your network, no less. Zocdoc had to think through not just the architecture of the ML model, but how it would keep learning over time, because that is the point of machine learning: it improves as it's fed more data.
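Zocdoc hasn't published its implementation, but the spirit of the field-extraction step can be illustrated with a toy sketch: once OCR has turned the card image into raw text, label-based patterns can pull out structured fields. Everything below – the function name, the field names, and the regex patterns – is an illustrative assumption, not Zocdoc's actual pipeline.

```python
import re

def extract_card_fields(ocr_text: str) -> dict:
    """Toy post-OCR parser: map common card labels to captured values.

    These patterns are illustrative guesses at typical card layouts,
    not a description of any real product's logic.
    """
    patterns = {
        # Member IDs often appear as an alphanumeric code after a "Member ID" label.
        "member_id": r"Member\s*ID[:\s]+([A-Z0-9]+)",
        # Group numbers commonly follow a "Group", "Group No", or "Group #" label.
        "group_number": r"Group(?:\s*(?:No|#|Number))?[:\s]+([A-Z0-9]+)",
        # Plan names vary widely; grab the rest of the line after "Plan".
        "plan": r"Plan[:\s]+(.+)",
    }
    fields = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, ocr_text, re.IGNORECASE)
        if match:
            fields[name] = match.group(1).strip()
    return fields

sample = """ACME HEALTH
Member ID: ABC123456
Group No: 98765
Plan: Gold PPO"""

print(extract_card_fields(sample))
# → {'member_id': 'ABC123456', 'group_number': '98765', 'plan': 'Gold PPO'}
```

In a real system, regexes over noisy OCR output would be far too brittle on their own – which is exactly why a learned model that improves with more card images, as described above, is the more robust approach.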
It’s an exciting use case for how machine learning can improve the patient experience, but it’s also beneficial to the provider who spends a considerable amount of staffing time answering calls about insurance coverage from current and potential patients.
I’ll be talking a lot on the road ahead about machine learning and the amazing work happening with genomic research, oncology, radiation, macular degeneration and many more use cases. But sometimes just reducing the time an anxious patient spends in the waiting room or trying to find a doctor has tremendous merit.