AI holds promise in medicine

Alan Karthikesalingam

Artificial intelligence (AI) has the potential to be “transformative” in medicine, including vascular surgery. So says Alan Karthikesalingam, UK research manager at Google Health (London, UK) and lecturer in vascular surgery, who spoke on the topic at this year’s annual meeting of the British Society for Endovascular Therapy (BSET; June 24, online). The presenter noted in particular that developments in three-dimensional (3D) positioning and navigation are an “exciting” area in the field of vascular surgery, but stressed that “high-quality, prospective and randomized evidence” is necessary to prove clinical benefit and cost-effectiveness.

Karthikesalingam first gave an overview of deep learning, which he described as the field of AI that has seen “the most impressive advances in medicine in recent years.” He noted that deep learning is something most people use every day: it is behind the way a phone can help users find photos of themselves or of their pets, or the way email programs offer autocomplete suggestions.

“Most computer programs are limited by the instructions provided to them,” Karthikesalingam said, noting that the difference with deep learning is that it uses mathematical functions that learn from examples.
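
To make that distinction concrete, the minimal sketch below (illustrative only, not drawn from the presentation) fits the parameters of a simple mathematical function to example data by gradient descent, rather than following hand-written rules; deep learning applies the same principle at far greater scale.

```python
# Minimal sketch (not from the talk): a mathematical function that learns from examples.
# Instead of hand-coding a rule, the parameters w and b of y = w*x + b are fitted
# to example (x, y) pairs by gradient descent on the mean squared error.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)                 # example inputs
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, 200)      # example outputs (underlying rule + noise)

w, b = 0.0, 0.0      # parameters start uninformed
lr = 0.1             # learning rate
for _ in range(500):
    pred = w * x + b
    err = pred - y
    # gradients of the mean squared error with respect to w and b
    w -= lr * 2 * np.mean(err * x)
    b -= lr * 2 * np.mean(err)

print(f"learned w={w:.2f}, b={b:.2f}")   # approaches the rule used to generate the examples
```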

Karthikesalingam spoke specifically about the field of supervised learning and, in particular, computer vision applications. One of the “most famous” applications of supervised learning, he noted, is object recognition in computer vision, whereby scientists train a deep neural network to classify the presence of an object in an image, for example by detecting whether a photo contains a cat or a dog. “From hundreds of thousands, maybe even millions of examples,” the presenter detailed, “these networks get really good at the job, to the point where they can properly classify even really ambiguous images, sometimes even at the level of humans.”
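
A minimal sketch of what such a supervised image classifier might look like is shown below; the architecture, image size and dataset layout are illustrative assumptions, not details of any system described in the talk.

```python
# Hypothetical sketch of supervised image classification (e.g. cat vs. dog) with a
# small convolutional network in Keras; all sizes and paths are illustrative only.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(128, 128, 3)),        # RGB photos resized to 128x128
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # probability the image contains the object
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training would use labelled example images, for instance (hypothetical folder layout):
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "photos/train", image_size=(128, 128), label_mode="binary")
# model.fit(train_ds, epochs=10)
```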

AI medical applications

Speaking to viewers at BSET, the presenter detailed some of the work Google has done on the application of supervised learning to computed tomography (CT) scans, training the technology to identify cancerous nodules in lung cancer screening.

He also pointed out that deep learning can help healthcare professionals understand “something a little more nuanced” about an image than simply the presence or absence of a particular disease. “We might want to define, for example, the boundaries of a tissue, whether it is normal anatomy or pathology,” he said, citing the planning of endovascular aneurysm repair (EVAR) as a specific application.
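
The segmentation idea behind this can be sketched as a toy encoder-decoder that predicts a label for every pixel of a CT slice; the sketch below is an illustrative assumption, not the model Karthikesalingam described.

```python
# Hypothetical sketch of tissue-boundary segmentation: a toy encoder-decoder that
# outputs a per-pixel probability map for a structure of interest on a CT slice.
# Not Google's model and not clinically validated; shapes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers

inputs = layers.Input(shape=(256, 256, 1))                        # one greyscale CT slice
x = layers.Conv2D(16, 3, padding="same", activation="relu")(inputs)
x = layers.MaxPooling2D()(x)                                      # downsample to capture context
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
x = layers.UpSampling2D()(x)                                      # return to pixel resolution
outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)            # per-pixel probability map

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy")
# Training would pair CT slices with manually drawn masks of the target structure,
# e.g. model.fit(ct_slices, masks, epochs=...)
```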

The ability of this technology to identify new biomarkers and signals is another application, Karthikesalingam added. He explained, for example, that these systems could be used to identify markers of disease progression, to help predict response to treatment or to identify new predictors of disease.

The presenter noted that this technology “has the real potential to help our patients” by improving the accuracy and availability of care and reducing unwarranted variation, especially in the field of medical imaging.

Translation issues

One of the key issues in translating this technology into clinical practice concerns the interaction of deep learning systems with clinicians, including how scientists design user interfaces and tools for clinicians so that these systems are precise, well calibrated and easy to use. “This is an important area of research right now,” Karthikesalingam said.

The presenter also referred to various “pitfalls” in how this type of research can be conducted “meaningfully and appropriately”. He explained that many datasets routinely collected in clinical practice “may encode unspoken biases and representational issues in clinical practice today,” particularly around treatment and the equity of care delivery for disadvantaged groups and minorities.

In addition, he noted “significant” concerns regarding data quality, data security, confidentiality and consent, as well as regulatory considerations.

High quality data needed

Moderator Simon Neequaye (Liverpool University Hospitals NHS Foundation Trust, Liverpool, UK) was interested in hearing Karthikesalingam’s thoughts on the role of deep learning in 3D positioning and navigation with “systems like Cydar [Cydar Medical], FORS [Fiber Optic RealShape, Philips], and robotics”.

“This is an exciting area,” Karthikesalingam replied, noting that he suspects the availability of these tools will increase over the next five years. He stressed that “the evidence will be whether these tools are really useful in clinical practice”, concluding, however, that the technology is “very promising” in this area. “If the trends we are seeing in screening and diagnostic imaging continue in interventional settings, it should really make a difference this decade.”

Closing his presentation, Karthikesalingam was keen to stress that translating these types of systems into clinically effective and cost-effective tools will require “high-quality, prospective and randomized evidence in the years to come”.

