DarwinAI, which spun out of research at the University of Waterloo, normally works on AI explainability. The company makes a tool that can show why deep-learning models make the decisions they do, enabling users to correct the inputs that lead to wrong decisions, and to fix the architecture or retrain the system to prevent the same mistakes in future. The idea is that, by gaining insight into why an AI does what it does, companies can speed up the development of their AI products. A few weeks ago, however, DarwinAI decided to turn its attention to whether AI could have a role to play in fighting COVID-19. Within two weeks, the company’s cofounder and University of Waterloo professor Alexander Wong, together with his research student Linda Wang, had developed COVID-Net.
COVID-Net is a deep convolutional neural network designed to screen patients with suspected coronavirus infections by identifying tell-tale signs of the disease on chest X-rays. While testing for coronavirus infection is traditionally done with a cheek or nose swab, hospitals often lack testing kits and the staff to administer them, whereas chest X-rays are quick to perform and already commonplace. The bottleneck between taking an X-ray and getting it interpreted is typically finding a radiologist to report the scan; having an AI read it instead could mean results are turned around far faster.
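In practice, a screening tool like this boils down to feeding a preprocessed X-ray through a trained classifier and reading off class probabilities. The sketch below illustrates that idea only; the model file, input size and class labels are assumptions for illustration, not the actual COVID-Net release.

```python
# Minimal sketch of how a chest X-ray classifier like COVID-Net might be queried.
# The model file, input size and class names below are illustrative assumptions,
# not the actual COVID-Net release artefacts.
import numpy as np
import tensorflow as tf

CLASSES = ["normal", "pneumonia", "COVID-19"]   # the three-way screening described in the article

def screen_xray(image_path: str, model_path: str = "covid_net_demo.h5") -> dict:
    """Return class probabilities for a single chest X-ray."""
    model = tf.keras.models.load_model(model_path)

    # Load the X-ray, resize it to the network's expected input, and scale to [0, 1].
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    x = tf.keras.utils.img_to_array(img) / 255.0
    x = np.expand_dims(x, axis=0)               # add a batch dimension

    probs = model.predict(x)[0]                  # softmax output over the three classes
    return dict(zip(CLASSES, probs.round(3).tolist()))

# Example: print(screen_xray("patient_042_cxr.png"))
```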
While COVID-19 might be a relatively new infection, AI has been used to analyse images for any number of medical conditions, including cancer and eye diseases. “Where deep learning as a tool has traditionally excelled is with image analysis — looking at an image and then deducing some kind of conclusion from that image… [COVID-Net] was just a natural by-product of the existing work we’d already done and the strengths of deep learning as a tool for image analysis,” DarwinAI’s CEO, Sheldon Fernandez, told ZDNet.
After training the system with nearly 5,000 X-rays, COVID-Net was open-sourced in late March. “The hope is that the open-access COVID-Net, along with the description on constructing the open-source COVIDx dataset, will be leveraged and built upon by both researchers and citizen data scientists alike to accelerate the development of highly accurate yet practical deep-learning solutions for detecting COVID-19 cases and accelerate treatment of those who need it the most,” Wong and Wang wrote in their paper on COVID-Net.
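For readers curious what training such a classifier involves, the sketch below shows a generic transfer-learning setup over a labelled folder of X-rays. The directory layout, backbone and hyperparameters are illustrative assumptions, not the published COVIDx/COVID-Net recipe.

```python
# Rough sketch of training a three-class chest X-ray classifier from a labelled image
# folder (covidx_train/normal, covidx_train/pneumonia, covidx_train/COVID-19).
# Backbone, image size and hyperparameters are illustrative only.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "covidx_train/", image_size=(224, 224), batch_size=32, label_mode="categorical")

# Transfer learning from an ImageNet backbone keeps training feasible on a few thousand images.
base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                         input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                                    # freeze the backbone initially

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),    # MobileNetV2 expects inputs in [-1, 1]
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),       # normal / pneumonia / COVID-19
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
model.save("covid_net_demo.h5")
```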
After COVID-Net was open-sourced, “the response has just been overwhelming”, Fernandez said. “Our inboxes are inundated with people recommending improvements and building on top of what we’re doing,” he added. More datasets of X-rays were contributed to train the system, which has now learnt from over 17,000 images.
Since its release, researchers from countries including Yemen, Indonesia, Turkey, and Saudi Arabia have begun experimenting with COVID-Net. For areas where computing resources may be scarce — developing countries or rural environments with poor connectivity, for example — DarwinAI has also released a ‘lite’ version of COVID-Net, which can run on a laptop. According to Fernandez, the lite version suffers only a small drop in accuracy compared to the full model.
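The article doesn’t detail how DarwinAI built its lite variant, but the general idea of shrinking a trained network for modest hardware can be illustrated with standard post-training quantization, for example converting a Keras model to TensorFlow Lite:

```python
# Generic illustration of shrinking a trained network for laptop-class hardware.
# This is NOT DarwinAI's method, just a common post-training quantization workflow.
import tensorflow as tf

model = tf.keras.models.load_model("covid_net_demo.h5")    # hypothetical trained model

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]        # enable weight quantization
tflite_model = converter.convert()

with open("covid_net_demo_lite.tflite", "wb") as f:
    f.write(tflite_model)

# The quantized model is roughly 4x smaller and runs on CPU, typically with only a
# small accuracy drop, which is the same trade-off Fernandez describes.
```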
Alongside DarwinAI, a number of companies — including Alibaba — are working on parallel efforts, hoping to use AI to pick out COVID-19 infections on X-rays. Those products are all closed source, however, making it harder for clinicians to know why the systems reach the conclusions they do.
“The decision to [open source COVID-Net] was what caught the eyes of a lot of people because there are several companies that have said, ‘here’s a neural network that will do COVID detection’, and that’s all they say. They don’t give you the source code, the data and so forth. That’s been the kind of standard mechanism in deep learning, it’s your trade secret. But we didn’t want to do that, given what’s happening. We really wanted to just make everything available to anybody that wanted it,” Fernandez said.
COVID-Net isn’t yet production-ready, according to the CEO, but the company hopes it will be ready within a very short time. “Right now, it’s just a very technical kind of implementation that data scientists could leverage but certainly not a radiologist or healthcare worker, so it needs to be wrapped in a proper application UI that’s fairly easy to use and that somebody who’s not deeply technical can leverage. We’re working with a number of government organisations here in Canada to try to get some funding so that we can very quickly turn this around,” Fernandez said.
According to the CEO, COVID-Net could be deployed by healthcare organisations as a method of triaging patients — seeing who has the virus, who has a condition that might look similar but isn’t COVID-19, and who is healthy. “We think it can be a complementary tool with more conclusive testing,” Fernandez said. It could also potentially be used to detect people who have come into hospital for other reasons but who might be unwittingly carrying COVID-19. “It allows [clinicians] to then attach this test on to their X-ray and say, ‘not only did you break a rib, there’s a 95% chance that you have [coronavirus] and you don’t even know it, go home and rest and self-isolate’.”
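Framed as code, that triage step is just a thin layer of decision logic over the classifier’s probabilities. The cut-off below echoes the “95% chance” in Fernandez’s example and is purely illustrative, not a clinically validated threshold.

```python
# Illustrative triage logic only: map class probabilities to the three outcomes
# Fernandez describes (healthy, a similar-looking condition, probable COVID-19).
# The 0.95 threshold mirrors the quote and is not a clinically validated cut-off.
def triage(probs: dict, threshold: float = 0.95) -> str:
    label = max(probs, key=probs.get)
    if probs[label] < threshold:
        return "inconclusive: refer for swab testing / radiologist review"
    if label == "COVID-19":
        return "probable COVID-19: confirm with testing, advise self-isolation"
    if label == "pneumonia":
        return "non-COVID finding: route to standard care"
    return "no sign of infection on X-ray"

# Example: triage({"normal": 0.02, "pneumonia": 0.02, "COVID-19": 0.96})
```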
The next project the company is working on is risk stratification for COVID-19 cases. Medics often use simple algorithms to calculate a patient’s risk of having a particular condition, or of that condition worsening. While COVID-Net works to determine whether or not someone has coronavirus from their X-ray, COVID-RiskNet aims to predict how badly the disease could affect a person and what sort of treatment they might need — allowing healthcare workers to separate those who can self-isolate and recover at home from those who might need inpatient medical care. “It’s a tool for them to get a very quick snapshot around how serious a problem it is, and then start to triage accordingly,” Fernandez said. DarwinAI expects to open-source COVID-RiskNet in the coming weeks.
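COVID-RiskNet’s design isn’t described here, but risk stratification in general can be sketched as a simple model over clinical features that maps a predicted probability to a care tier. Everything in the toy example below (features, data and threshold) is a placeholder, not clinical guidance.

```python
# Toy sketch of risk stratification: a logistic model over a few clinical features
# turned into "home isolation" vs "inpatient care" tiers. All features, data and
# thresholds are placeholders, not clinical guidance and not COVID-RiskNet.
import numpy as np
from sklearn.linear_model import LogisticRegression

# toy training data: [age, oxygen saturation %, days since symptom onset]
X = np.array([[34, 98, 2], [71, 89, 6], [55, 93, 4], [28, 97, 1], [80, 86, 7]])
y = np.array([0, 1, 1, 0, 1])            # 0 = mild course, 1 = needed inpatient care

model = LogisticRegression().fit(X, y)

def stratify(patient_features) -> str:
    p = model.predict_proba([patient_features])[0, 1]
    if p >= 0.5:
        return f"high risk ({p:.0%}): consider inpatient care"
    return f"lower risk ({p:.0%}): self-isolate at home with follow-up"

# Example: stratify([62, 91, 5])
```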
DarwinAI’s main business of explainability is designed to allow users — including clinicians — to understand why AIs make certain decisions and diagnoses. But it’s not just one-way traffic: letting users peek inside the black box of machine learning can occasionally allow machines to change the way humans think. Fernandez cites the example of a neural network whose rationale for diagnosing a certain type of lung cancer was different to that of radiologists.
“The neural network was not only looking at the central part of the lungs, it was looking at the walls of the lungs. Apparently, that was new to the radiologists, at least from what we were told, who hadn’t really looked at the walls of the lungs for this particular type of cancer under these kinds of conditions. It actually changed the way they thought about diagnosing cancer in this domain. If you understand why the AI is doing what it’s doing, sometimes it’ll teach you something about your own subject matter,” Fernandez said.
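DarwinAI’s own explainability tooling isn’t shown in this piece. One widely documented way to get this kind of region-level insight from a CNN is Grad-CAM, sketched below; it assumes a Keras model whose convolutional layers are accessible by name.

```python
# Grad-CAM sketch: highlight which regions of an X-ray drive a CNN's decision
# (e.g. lung walls vs the central lung fields). This is a common public technique,
# not DarwinAI's explainability product; layer and model names are assumptions.
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index):
    """Return a normalised heatmap of the regions most responsible for the chosen class."""
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)              # d(class score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))              # importance of each feature map
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)
    cam = tf.nn.relu(cam)[0]                                  # keep only positive evidence
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()        # heatmap to overlay on the X-ray
```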