In a recent study, researchers trained an algorithm to differentiate between malignant and benign lesions in scans of breast tissue.

A new study asks whether artificial intelligence could streamline cancer diagnosis.

With cancer, the key to successful treatment is catching it early.

As it stands, doctors have access to high-quality imaging, and skilled radiologists can spot the telltale signs of abnormal growth.

Once identified, the next step is for doctors to ascertain whether the growth is benign or malignant.

The most reliable method is to take a biopsy, which is an invasive procedure.

Even then, errors can occur. Some people receive a cancer diagnosis when there is no disease, while others do not receive a diagnosis when cancer is present.

Both outcomes cause distress, and the latter situation may cause delays to treatment.

Researchers are keen to improve the diagnostic process to avoid these issues. Detecting whether a lesion is malignant or benign more reliably and without the need for a biopsy would be a game changer.

Some scientists are investigating the potential of artificial intelligence (AI). In a recent study, scientists trained an algorithm with encouraging results.

Ultrasound elastography is a relatively new diagnostic technique that tests the stiffness of breast tissue. It achieves this by vibrating the tissue, which creates a wave. This wave causes distortion in the ultrasound scan, highlighting areas of the breast where properties differ from the surrounding tissue.

From this information, a doctor can determine whether a lesion is malignant or benign.

Although this method has great potential, analyzing the results of elastography is time-consuming, involves several steps, and requires solving complex problems.

Recently, a group of researchers from the Viterbi School of Engineering at the University of Southern California in Los Angeles asked whether an algorithm could reduce the steps needed to draw information from these images. They published their results in the journal Computer Methods in Applied Mechanics and Engineering.

The researchers wanted to see whether they could train an algorithm to differentiate between malignant and benign lesions in breast scans. Interestingly, they attempted to achieve this by training the algorithm using synthetic data rather than genuine scans.

When asked why the team used synthetic data, lead author Prof. Assad Oberai says that it comes down to the availability of real-world data. He explains that “in the case of medical imaging, you’re lucky if you have 1,000 images. In situations like this, where data is scarce, these kinds of techniques become important.”

The researchers trained their machine learning algorithm, a deep convolutional neural network, on more than 12,000 synthetic images.
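The study's code is not reproduced here, but the general recipe the article describes, a convolutional network trained to label images as malignant or benign, can be sketched as follows. This Python/PyTorch example is illustrative only: the architecture, the 64x64 image size, and the random stand-in "synthetic" data are assumptions, not the authors' published model.

```python
# Illustrative sketch: a small binary CNN classifier trained on synthetic
# images, loosely mirroring the approach described in the article.
# The architecture and data pipeline are assumptions, not the authors' model.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class SmallCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),  # single logit: benign (0) vs. malignant (1)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Hypothetical synthetic dataset: 12,000 single-channel 64x64 images with
# binary labels. Random noise stands in for the physics-based simulations
# that the study actually used to generate its training data.
images = torch.randn(12_000, 1, 64, 64)
labels = torch.randint(0, 2, (12_000, 1)).float()
loader = DataLoader(TensorDataset(images, labels), batch_size=64, shuffle=True)

model = SmallCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(3):
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```

The appeal of synthetic data, as Prof. Oberai notes above, is that a simulator can produce tens of thousands of labeled examples in a domain where real, labeled scans may number only in the hundreds or low thousands.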

By the end of the process, the algorithm was 100% accurate on synthetic images. Next, they moved on to real-life scans. They had access to just 10 scans, half of which showed malignant lesions and half of which showed benign ones.

“We had about an 80% accuracy rate. Next, we continue to refine the algorithm by using more real-world images as inputs.”

Prof. Assad Oberai

Although 80% is good, it is not good enough. However, this is just the start of the process. The authors believe that if they had trained the algorithm on real data, it might have shown improved accuracy. The researchers also acknowledge that their test was too small in scale to predict the system’s future capabilities.
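To put that figure in context, the sketch below works through the arithmetic of an 80% score on a 10-scan test set. The predicted labels are invented for illustration; only the set size, the 5/5 split, and the roughly 80% result come from the article.

```python
# Minimal illustration of accuracy on a 10-scan test set (5 malignant, 5 benign).
# The predictions below are made up solely to reproduce a score of 8/10 = 80%.
true_labels      = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]  # 1 = malignant, 0 = benign
predicted_labels = [1, 1, 1, 1, 0, 0, 0, 0, 0, 1]  # two hypothetical errors

correct = sum(t == p for t, p in zip(true_labels, predicted_labels))
accuracy = correct / len(true_labels)
print(f"{correct}/{len(true_labels)} correct = {accuracy:.0%}")  # 8/10 = 80%
```

With only 10 scans, a single additional error moves the score by 10 percentage points, which is why such a small test says little about how the system would perform at scale.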

In recent years, there has been a growing interest in the use of AI in diagnostics. As one author writes:

“AI is being successfully applied for image analysis in radiology, pathology, and dermatology, with diagnostic speed exceeding, and accuracy paralleling, medical experts.”

However, Prof. Oberai does not believe that AI can ever replace a trained human operator. He explains that “[t]he general consensus is these types of algorithms have a significant role to play, including from imaging professionals whom it will impact the most. However, these algorithms will be most useful when they do not serve as black boxes. What did it see that led it to the final conclusion? The algorithm must be explainable for it to work as intended.”

The researchers hope that they can expand their new method to diagnose other types of cancer. Wherever a tumor grows, it changes how the tissue behaves physically. It should be possible to chart these differences and train an algorithm to spot them.

However, because each type of cancer interacts with its surroundings so differently, an algorithm will need to overcome a range of problems for each type. Already, Prof. Oberai is working on CT scans of renal cancer to find ways that AI could aid diagnosis there.

Although these are early days for the use of AI in cancer diagnosis, there are high hopes for the future.