ARTIFICIAL INTELLIGENCE WILL IMPROVE MEDICAL TREATMENTS

It will not imminently put medical experts out of work

FOUR years ago a woman in her early 30s was hit by a car in London. She needed emergency surgery to reduce the pressure on her brain. Her surgeon, Chris Mansi, remembers the operation going well. But she died, and Mr Mansi wanted to know why. He discovered that the problem had been a four-hour delay in getting her from the accident and emergency unit of the hospital where she was first brought, to the operating theatre in his own hospital. That, in turn, was the result of a delay in identifying, from medical scans of her head, that she had a large blood clot in her brain and was in need of immediate treatment. It is to try to avoid repetitions of this sort of delay that Mr Mansi has helped set up a firm called Viz.ai. The firm’s purpose is to use machine learning, a form of artificial intelligence (AI), to tell those patients who need urgent attention from those who may safely wait, by analysing scans of their brains made on admission.

That idea is one among myriad projects now under way with the aim of using machine learning to transform how doctors deal with patients. Though diverse in detail, these projects have a common aim. This is to get the right patient to the right doctor at the right time.

In Viz.ai’s case that is now happening. In February the firm received approval from regulators in the United States to sell its software for the detection, from brain scans, of strokes caused by a blockage in a large blood vessel. The technology is being introduced into hospitals in America’s “stroke belt”—the south-eastern part, in which strokes are unusually common. Erlanger Health System, in Tennessee, will turn on its Viz.ai system next week.

The potential benefits are great. As Tom Devlin, a stroke neurologist at Erlanger, observes, “We know we lose 2m brain cells every minute the clot is there.” Yet the two therapies that can transform outcomes—clot-busting drugs and an operation called a thrombectomy—are rarely used because, by the time a stroke is diagnosed and a surgical team assembled, too much of a patient’s brain has died. Viz.ai’s technology should improve outcomes by identifying urgent cases, alerting on-call specialists and sending them the scans directly.

The AIs have it

Another area ripe for AI’s assistance is oncology. In February 2017 Andre Esteva of Stanford University and his colleagues used a set of almost 130,000 images to train some artificial-intelligence software to classify skin lesions. So trained, and tested against the opinions of 21 qualified dermatologists, the software could identify both the most common type of skin cancer (keratinocyte carcinoma), and the deadliest type (malignant melanoma), as successfully as the professionals. That was impressive. But now, as described last month in a paper in the Annals of Oncology, there is an AI skin-cancer-detection system that can do better than most dermatologists. Holger Haenssle of the University of Heidelberg, in Germany, pitted an AI system against 58 dermatologists. The humans were able to identify 86.6% of skin cancers. The computer found 95%. It also misdiagnosed fewer benign moles as malignancies.
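Systems of this sort are usually built by transfer learning: a convolutional neural network already trained on millions of everyday photographs is fine-tuned on labelled lesion images, so a dataset of a hundred thousand or so medical pictures can be enough. What follows is a minimal sketch of that recipe in Python with PyTorch; the folder layout, choice of network and training settings are illustrative assumptions, not the pipeline used in either study.

# Minimal sketch: fine-tuning an ImageNet-pretrained CNN to classify skin-lesion images.
# Dataset path, class labels and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),            # the input size the pretrained network expects
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumed folder layout: lesions/train/<class name>/*.jpg, e.g. "benign" and "malignant"
train_set = datasets.ImageFolder("lesions/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # new classification head

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                         # a few fine-tuning passes over the images
    for images, labels in loader:
        optimiser.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimiser.step()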

There has been progress in the detection of breast cancer, too. Last month Kheiron Medical Technologies, a firm in London, received news that a study it had commissioned had concluded that its software exceeded the officially required performance standard for radiologists screening for the disease. The firm says it will submit this study for publication when it has received European approval to use the AI—which it expects to happen soon.

This development looks important. Breast screening has saved many lives, but it leaves much to be desired. Overdiagnosis and overtreatment are common. Conversely, tumours are sometimes missed. In many countries such problems have led to scans being checked routinely by a second radiologist, which improves accuracy but adds to workloads. At a minimum Kheiron’s system looks useful for a second opinion. As it improves, it may be able to grade women according to their risks of breast cancer and decide the best time for their next mammogram.

Efforts to use AI to improve diagnosis are under way in other parts of medicine, too. In eye disease, DeepMind, a London-based subsidiary of Alphabet, Google’s parent company, has an AI that screens retinal scans for conditions such as glaucoma, diabetic retinopathy and age-related macular degeneration. The firm is also working on mammography.

Heart disease is yet another field of interest. Researchers at Oxford University have been developing AIs intended to interpret echocardiograms, which are ultrasonic scans of the heart. Cardiologists looking at these scans are searching for signs of heart disease, but can miss them 20% of the time, meaning patients are sent home and may then go on to have a heart attack. The AI, however, can detect changes invisible to the eye and improve the accuracy of diagnosis. Ultromics, a firm in Oxford, is trying to commercialise the technology, which could be rolled out in Britain later this year.

There are also efforts to detect cardiac arrhythmias, particularly atrial fibrillation, which increase the risk of heart failure and strokes. Researchers at Stanford University, led by Andrew Ng, have shown that AI software can identify arrhythmias from an electrocardiogram (ECG) better than an expert. The group has joined forces with a firm that makes portable ECG devices and is helping Apple with a study looking at whether arrhythmias can be detected in the heart-rate data picked up by its smart watches. Meanwhile, in Paris, a firm called Cardiologs is also trying to design an AI intended to read ECGs.
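Such classifiers typically learn their diagnosis straight from the raw ECG waveform rather than from hand-crafted features. Below is a minimal, illustrative sketch in Python with PyTorch of that general idea, a small one-dimensional convolutional network that maps a fixed-length strip of ECG samples to a rhythm label; the layer sizes, sampling rate and class list are assumptions for illustration, not the published Stanford architecture.

# Minimal sketch: a 1-D convolutional network that assigns a rhythm label
# (e.g. sinus rhythm vs atrial fibrillation) to a fixed-length ECG strip.
import torch
import torch.nn as nn

SAMPLE_RATE = 200                     # samples per second (assumed)
WINDOW_SECONDS = 30                   # length of the strip fed to the network
CLASSES = ["sinus", "afib", "other", "noise"]

class ECGClassifier(nn.Module):
    def __init__(self, n_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=16, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=16, stride=2), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=16, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis to one summary per channel
        )
        self.classify = nn.Linear(128, n_classes)

    def forward(self, x):             # x has shape (batch, 1, samples)
        return self.classify(self.features(x).squeeze(-1))

model = ECGClassifier()
strip = torch.randn(1, 1, SAMPLE_RATE * WINDOW_SECONDS)   # one 30-second single-lead strip
print(CLASSES[model(strip).argmax(dim=1).item()])          # predicted rhythm label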

Seeing ahead

Eric Topol, a cardiologist and digital-medicine researcher at the Scripps Research Institute, in San Diego, says that doctors and algorithms are comparable in accuracy in some areas, but computers have the advantage of speed. This combination of traits, he reckons, will lead to higher accuracy and productivity in health care.

Artificial intelligence might also make medicine more specific, by being able to draw distinctions that elude human observers. It may be able to grade cancers or instances of cardiac disease according to their risks—thus, for example, distinguishing those prostate cancers that will kill quickly, and therefore need treatment, from those that will not, and can probably be left untreated.

What medical AI will not do—at least not for a long time—is make human experts redundant in the fields it invades. Machine-learning systems work on a narrow range of tasks and will need close supervision for years to come. They are “black boxes”, in that doctors do not know exactly how they reach their decisions. And they are inclined to become biased if insufficient care is paid to what they are learning from. They will, though, take much of the drudgery and error out of diagnosis. And they will also help make sure that patients, whether being screened for cancer or taken from the scene of a car accident, are treated in time to be saved.

Source: The Economist


GOOGLE MAKES $550M STRATEGIC INVESTMENT IN CHINESE E-COMMERCE FIRM JD.COM


Google has been increasing its presence in China in recent times, and today it has continued that push by agreeing to a strategic partnership with e-commerce firm JD.com, which will see Google purchase $550 million worth of shares in the Chinese firm.

Google has made investments in China, released products there and opened offices that include an AI hub, but now it is working with JD.com largely outside of China. In a joint release, the companies said they would “collaborate on a range of strategic initiatives, including joint development of retail solutions” in Europe, the U.S. and Southeast Asia.

The goal here is to merge JD.com’s experience and technology in supply chain and logistics — in China, it has opened warehouses that use robots rather than workers — with Google’s customer reach, data and marketing to produce new kinds of online retail.

Initially, that will see the duo team up to offer JD.com products for sale on the Google Shopping platform across the world, but it seems clear that the companies have other collaborations in mind for the future.

JD.com is valued at around $60 billion, based on its NASDAQ share price. The company has partnerships with the likes of Walmart and has invested heavily in automated warehouse technology, drones and other “next-generation” retail and logistics.

The move for a distribution platform like Google to back a service provider like JD.com is interesting since the company, through search and advertising, has relationships with a range of e-commerce firms, including JD.com’s arch rival Alibaba.

But it is a sign of the times for Google, which has already developed relationships with JD.com and its biggest backer Tencent, the $500 billion Chinese internet giant. All three companies have backed Go-Jek, the ride-hailing challenger in Southeast Asia, while Tencent and Google previously inked a patent-sharing partnership and have co-invested in startups such as Chinese AI startup XtalPi.

Source: Tech Crunch.


GOOGLE LAUNCHES A PODCAST APP FOR ANDROID WITH PERSONALIZED RECOMMENDATIONS


Google today is introducing a new standalone podcast app for Android. The app, called simply Google Podcasts, will use Google’s recommendation algorithms in an effort to connect people with shows they might enjoy based on their listening habits. While podcasts have previously been available on Android through Google Play Music and third-party apps, Google says the company expects Podcasts to bring the form to hundreds of millions of new listeners around the world. (Google Listen, an early effort to build what was then called a “podcatcher” for Android, was killed off in 2012.)

“There’s still tons of room for growth when it comes to podcast listening,” said Zack Reneau-Wedeen, product manager on the app. Creating a native first-party Android app for podcasts “could as much as double worldwide listenership of podcasts overall,” he said.

Google Podcasts will look familiar to anyone who has used a podcast app before. It lets you search for new podcasts, download them, and play them at your convenience. More than 2 million podcasts will be available on the app on launch day, Google says, including “all of the ones you’ve heard of.”

Open the app, and a section called “For you” shows you new episodes of shows you’ve subscribed to, episodes you’ve been listening to but haven’t finished, and a list of your downloaded episodes. Scroll down, and you’ll see top and trending podcasts, both in general and by category. The podcast player has fewer fine-grained controls than you might be used to from apps like Overcast. You can’t customize the skip buttons or create playlists of podcasts to listen to, for example.

The Podcasts app is integrated with Google Assistant, meaning you can search for and play podcasts wherever you have Assistant enabled. The company will sync your place in a podcast across all Google products, so if you listen to half a podcast on your way home from work, you can resume it on your Google Home once you’re back at the house.

In the coming months, Google plans to add a suite of features to Podcasts that are powered by artificial intelligence. One feature will add closed captions to your podcast, so you can read along as you listen. It’s a feature that could be useful to people who are hard of hearing or for anyone who is listening in a noisy environment. (I usually miss a few minutes of the podcasts I listen to every day, thanks to a noisy subway ride.)

Closed captions also mean that you’ll be able to skip ahead to see what’s coming up later in a show. Eventually, you’ll be able to read real-time live transcriptions in the language of your choice, letting you “listen” to a podcast even if you don’t speak the same tongue as the host.

Google also wants to expand the number of people making podcasts. The company’s research showed that only one-quarter of podcast hosts are female, and even fewer are people of color. In an effort to diversify the field, Google formed an independent advisory board that will consider ways to promote podcast production outside of the handful of major metropolitan areas in the United States that currently dominate the field.

Google will not pay any creators to make podcasts directly, the company said, but it will likely explore ways of giving podcasts from underrepresented creators extra promotion. It’s also examining ways to make recording equipment more accessible to people who can’t afford it.

If you already listen to podcasts on Google Play Music, nothing will change today. But the company made it clear that it plans to focus its future efforts around podcasting in the standalone app.

The Android app is available to download from the Google Play store. There are currently no plans for an iOS app.


HUAWEI MATE 20 PRO TIPPED TO SPORT A 6.9-INCH SAMSUNG OLED DISPLAY


Earlier this month, Huawei introduced the Watch 2 smartwatch with an eSIM and voice call support. Now, a new development claims that the company is procuring OLED displays from Samsung. The South Korean giant is said to have already sent out samples to Huawei, and if all goes well, full-scale production is expected to start by Q3 2018. The smartphone to sport these 6.9-inch OLED panels is said to release sometime in the fourth quarter or even early 2019, and we largely expect to see them on the Huawei Mate 20 Pro.

South Korean publication The Bell reports that Samsung is in the process of finalising samples with Huawei for its order of 6.9-inch OLED displays. These large-sized displays are usually seen on Huawei’s P series or Mate series. While the P30 series is not expected to arrive before MWC 2019, the Mate series traditionally arrives sometime in Q4. Furthermore, with the screen size being so large, we expect the Pro version to sport the 6.9-inch display, while the Mate 20 could sport a smaller panel, perhaps around 6.1 inches.

If Huawei is indeed bringing a 6.9-inch display smartphone, it should easily win the screen-size battle, as the iPhone X Plus is expected to sport a 6.5-inch display, while the Samsung Galaxy Note 9 is expected to sport a 6.4-inch panel. These large-sized displays are very popular in the Chinese market, and Huawei wants to meet expectations in its home market. Bigger screens are also popular because Chinese text benefits from a larger display area, the report adds. Huawei wouldn’t want to lose momentum in its biggest market by failing to stay ahead of the game.

Of course, all of this is based on sheer speculation, and we expect you to take everything with a pinch of salt until Huawei makes things official.

Source: Gadget360
