Canadian AI Healthtech
Jul 10, 2018 ● Alexandra Lozovschi
Scientists Are Teaching AI Systems To Read Medical X-Rays — Starting With Fake Ones

Artificially generated X-rays are being used to teach AI systems to identify medical conditions

Science has already begun to explore the potential of artificial intelligence (AI) in medical applications. One idea is that AI systems can help improve the process of diagnosing a host of medical conditions by providing a faster, more accurate way of reading X-rays.

But it’s going to take a while before AI systems are able to interpret X-rays all by themselves. Just like human physicians, these systems need to be trained before they can properly identify conditions by looking at an X-ray.

In order to learn, AI requires a tremendous amount of data — literally thousands of X-ray images detailing all sorts of conditions.

One of the biggest hurdles so far has been the lack of a sufficiently extensive database: for many rare pathologies, there simply aren’t enough X-ray images available for the AI systems to learn from.

But a team of Canadian researchers has come up with a brilliant idea to correct this deficiency, reports Phys.org.

To get the ball rolling, the scientists have turned to artificially generated X-rays, which fill the gap in the existing database.

Led by Professor Shahrokh Valaee from the University of Toronto in Ontario, Canada, the team has devised an AI technique that generates the desired content, which is then used to teach other AI systems how to read X-rays and identify conditions.

“In a sense, we are using machine learning to do machine learning,” Valaee said in a statement.

Known as a deep convolutional generative adversarial network (DCGAN), this AI-based method creates X-ray simulations and continues to refine them until they can no longer be told apart from real X-ray images.
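To make the adversarial idea concrete, here is a deliberately minimal sketch of a GAN training loop. It is not the Toronto team's DCGAN (which uses deep convolutional networks on full X-ray images); instead it pits a one-parameter-pair generator against a one-parameter-pair discriminator on 1-D toy data, so the refine-until-indistinguishable dynamic is visible in a few dozen lines.

```python
import numpy as np

# Illustrative GAN sketch: the generator learns to mimic "real" data
# (a Gaussian centred at 4, standing in for real X-rays) by fooling a
# discriminator. All architecture choices here are simplifications.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" samples the generator must imitate.
real_data = rng.normal(4.0, 0.5, size=(1000, 1))

# Generator g(z) = w_g * z + b_g; discriminator d(x) = sigmoid(w_d * x + b_d).
w_g, b_g = 1.0, 0.0
w_d, b_d = 0.0, 0.0

lr = 0.01
for step in range(2000):
    # --- Discriminator update: push d(real) toward 1 and d(fake) toward 0 ---
    x_real = real_data[rng.integers(0, 1000, 32)]
    z = rng.normal(size=(32, 1))
    x_fake = w_g * z + b_g
    d_real = sigmoid(w_d * x_real + b_d)
    d_fake = sigmoid(w_d * x_fake + b_d)
    # Gradients of the binary cross-entropy loss w.r.t. w_d and b_d.
    grad_w = np.mean((d_real - 1) * x_real) + np.mean(d_fake * x_fake)
    grad_b = np.mean(d_real - 1) + np.mean(d_fake)
    w_d -= lr * grad_w
    b_d -= lr * grad_b

    # --- Generator update: push d(fake) toward 1, i.e. fool the critic ---
    z = rng.normal(size=(32, 1))
    x_fake = w_g * z + b_g
    d_fake = sigmoid(w_d * x_fake + b_d)
    # Chain rule through the discriminator into the generator parameters.
    dx = (d_fake - 1) * w_d
    w_g -= lr * np.mean(dx * z)
    b_g -= lr * np.mean(dx)

# After training, generated samples drift toward the real distribution.
samples = w_g * rng.normal(size=(1000, 1)) + b_g
print(float(samples.mean()))
```

The same adversarial loop, scaled up to convolutional networks and pixel data, is what lets a DCGAN produce X-ray images a discriminator can no longer tell apart from real ones.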

After a sufficient number of images has been generated, the artificial X-rays are combined with real ones into one large database, which the researchers then use to train deep convolutional neural networks to read X-rays and spot abnormalities.
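The augmentation step itself is straightforward to sketch. Assuming arrays of labelled images (all names and shapes below are illustrative, not from the study), the real and generated scans are pooled and shuffled into a single training set:

```python
import numpy as np

# Sketch of the augmentation step: pool scarce real X-rays with abundant
# GAN-generated ones into one labelled, shuffled training database.

rng = np.random.default_rng(42)

# Stand-ins for 64x64 grayscale scans (illustrative shapes only).
real_images = rng.random((500, 64, 64))        # scarce real scans
synthetic_images = rng.random((1500, 64, 64))  # abundant generated scans
real_labels = rng.integers(0, 2, 500)          # e.g. 1 = pathology present
synthetic_labels = rng.integers(0, 2, 1500)

# Concatenate both sources, then shuffle so every training batch
# mixes real and synthetic examples.
images = np.concatenate([real_images, synthetic_images])
labels = np.concatenate([real_labels, synthetic_labels])
perm = rng.permutation(len(images))
images, labels = images[perm], labels[perm]

print(images.shape)  # prints (2000, 64, 64)
```

The resulting augmented array is what a classifier network would then be trained on in place of the real-only set.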

In a news release from the University of Toronto, Valaee explains the entire process.

“We are creating simulated X-rays that reflect certain rare conditions so that we can combine them with real X-rays to have a sufficiently large database to train the neural networks to identify these conditions in other X-rays.”

According to Phys.org, the team’s augmented database proved more effective than the original one (containing just the real X-rays), improving the AI’s accuracy at classifying common conditions by 20 percent.

The result was even more spectacular when it came to rare pathologies, where the AI experienced a 40 percent increase in detection accuracy.

As an added bonus, this vast database can now be shared with other researchers as well, since the artificial images don’t violate patient privacy.

Commenting on the results, Valaee points out the great advantage of the method developed by his team.

“Deep learning only works if the volume of training data is large enough and this is one way to ensure we have neural networks that can classify images with high precision.”

This article originally appeared in INQUISITR
