Google Patents Technology to Serve Ads Based on Background Noise


A new Google patent could enable the search giant to base advertising on background noise during phone conversations, although the scope of the patent is much broader.

Google was awarded a patent Tuesday for advertising based on “environmental conditions,” as the search giant calls them in the patent documents. A sensor detects the temperature, humidity, sound, light or air composition near a device, and ads are served accordingly.

This could mean that if the Google technology detects the sound of the sea, advertisements for beach balls and towels could be served. The ad could be delivered as text, image or video, sent to the user’s device after the environmental conditions are detected. Google plans to connect those conditions with keywords that advertisers can buy.
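The condition-to-keyword idea described in the patent can be illustrated with a small sketch. Everything below — the condition labels, the mapping and the matching function — is hypothetical and invented purely for illustration; it is not Google’s actual system.

```python
# Hypothetical mapping from detected environmental conditions to
# keywords that advertisers could buy (illustration only).
CONDITION_KEYWORDS = {
    "sound:ocean_waves": ["beach towels", "beach balls", "sunscreen"],
    "temperature:hot": ["cold drinks", "air conditioners"],
    "light:low": ["lamps", "flashlights"],
}

def match_ads(detected_conditions, inventory):
    """Return the ads whose keyword matches any detected condition."""
    keywords = set()
    for condition in detected_conditions:
        keywords.update(CONDITION_KEYWORDS.get(condition, []))
    return [ad for ad in inventory if ad["keyword"] in keywords]

inventory = [
    {"keyword": "beach towels", "ad": "20% off beach towels"},
    {"keyword": "lamps", "ad": "Desk lamps on sale"},
]

# A device near the sea would trigger the beach-related ad.
print(match_ads(["sound:ocean_waves"], inventory))
```

The point of the sketch is only that sensed conditions become ordinary ad keywords, which is what lets advertisers bid on them like any other search term.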

The patented technology is meant for personal computers, digital billboards, digital kiosks, vending machines and mobile phones. This raises the question whether Google is planning to serve ads based on background noises picked up during phone conversations.

“On the face of it, it can certainly do that,” said Peggy Salz, chief analyst and founder of MobileGroove, a company specializing in mobile search and advertising technologies. “But so could Shazam,” she added, referring to an app that listens to music being played and matches the sound against a database of songs.

According to Salz, there could certainly be a privacy issue with the newly patented search and advertising technology. “But if you look at it that way there is a privacy issue with everything that is on your phone,” she said. “I wouldn’t imagine Google doing that without at least informing the users.”

Google wasn’t keen on responding to questions about the patent’s purpose. Google spokesman Mark Jansen said that the company was not willing to speculate about future purposes of newly acquired patents.

“We file patent requests for numerous ideas our employees dream up,” he explained, emphasizing that not all ideas turn into products. “Product announcements cannot simply be inferred from our patent applications.”

Google notes in the patent documents that users should be provided with privacy options when the technique is used, and have the option to turn it off.

Apart from the privacy issue, Salz said the new patent is interesting on several levels. “It is always interesting when a big player does something like this,” she said. If Google is looking into technology that serves ads based on sounds, light or air composition, that proves that Google is taking multimodal search seriously, she added.

Multimodal search uses different methods to get a relevant result. Instead of only using text search, it can also analyze images or detect sounds. Several companies have been developing these kinds of search algorithms. “But it only becomes very interesting when a big company does that,” Salz said. She pointed out that voice search has been around for some years, but only really took off when Apple introduced Siri, a voice search agent integrated with the iPhone 4S.

“This is recognition of the importance of multimodal search,” Salz added. Users will probably see implications of the new patent in the future. “They don’t do anything off-the-cuff,” she said. “When Google does something, then that is the thing to do.”

source: http://www.pcworld.com/


Sharing Options from WordPress.com to Facebook Are Changing


We wanted to update you about an upcoming change Facebook is introducing to their platform, and which affects how you may share posts from your WordPress.com website to your Facebook account.

Starting August 1, 2018, third-party tools can no longer share posts automatically to Facebook Profiles. This includes Publicize, the tool for WordPress.com and Jetpack-powered sites that connects your site to major social media platforms (like Twitter, LinkedIn, and Facebook).

Will this affect your ability to share content on Facebook? It depends. If you’ve connected a Facebook Profile to your site, then yes: Publicize will no longer be able to share your posts to Facebook. On the other hand, nothing will change if you keep a Facebook Page connected to your site — all your content should still appear directly on Facebook via Publicize. (Not sure what the difference is between a Page and a Profile? Here’s Facebook’s explanation.)

If you’ve previously connected a Facebook Profile to your WordPress.com site and still want your Facebook followers to see your posts, you have two options. First, you could go the manual route: once you publish a new post, copy its URL and share the link in a new Facebook post. (You can also share right from the WordPress mobile apps after a scheduled post goes live.)

The other option is to convert your Facebook Profile to a Page. This might not be the right solution for everyone, but it’s something to consider if your website focuses on your business, organization, or brand.

While Facebook is introducing this change to improve their platform and prevent the misuse of personal profiles, we know that this might cause a disruption in the way you and your Facebook followers interact. If you’d like to share your concerns with Facebook, head to their Help Community.

In the meantime, WordPress.com’s Publicize feature (and social media scheduling tools) will continue to be available to you for posting to Twitter, LinkedIn, and other social media platforms.


This Google-Funded Company Uses Artificial Intelligence to Fight Against Fake News


“Falsehood flies, and the truth comes limping after it,” wrote Jonathan Swift over 300 years ago.

If that was the case back then, before telephones and radio, let alone Twitter and Instagram, imagine how much bigger the problem is now. In fact, it’s so big that “fake news” has become a hot topic on both sides of the political spectrum. Gartner has gone as far as predicting that by 2022, we will consume more lies than truth.

But if technology has exacerbated the situation, there’s hope that it may also offer remedies. In particular, artificial intelligence – in its most useful current form, machine learning – can potentially be a powerful tool for sorting truth from fiction.

Machine learning is already being used by banks and financial institutions to comb through records of financial transactions, looking for tell-tale signs of errors or fraud, and then using that data to become more efficient – effectively “learning” without human input.

In the same way, algorithms can be trained to monitor media – across both social networks and news organizations – looking for tell-tale signs that any piece of output might be out of alignment with whatever objective truths are known regarding situations or events.

One exciting application of this technology comes from Belgium-based startup VeriFlix. They have developed a method of scanning user-submitted videos – which play an increasingly significant part in the output of most media organizations – and attempting to determine whether they actually are what they purport to be.

After winning funding through Google’s Digital News Initiative, the company’s technology is now being put to use by one of that country’s largest media outlets – Roularta – with promising results.

Founder Donald Staar talked to me about how the platform had evolved from its initial conception as a peer-to-peer crowdsourcing app for videos. Media organizations would make a request for video footage through the app, and any user within the correct geolocation could switch on their phone and start filming.

“Once the videos get sent to the platform we add a layer which first detects the content of every stream – so we can say what we see in the video, alongside the geolocation data and time stamp,” Staar tells me.

“And once the videos are tagged we can compare them to one another, so that if for example, one request results in 1,000 videos, we can compare the content of every video and if a majority of the videos show the same content, then it can verify the authenticity of what has been shot.

“If 800 videos out of 1,000 show the same thing then the probability that the video has been faked is very low.”

VeriFlix uses the YOLO (You Only Look Once) real-time object detection algorithm to classify and label the contents of videos, before passing that data through to proprietary algorithms designed in partnership with KU Leuven University. These algorithms analyze the data alongside the timestamp and geolocation information passed through the application’s secure interface.
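The majority-vote idea Staar describes can be sketched in a few lines. The function and data shapes below are hypothetical simplifications; VeriFlix’s real pipeline (YOLO plus the proprietary KU Leuven algorithms, combined with timestamps and geolocation) is far more involved.

```python
from collections import Counter

def authenticity_score(videos):
    """Fraction of videos whose detected labels match the majority content.

    Each video is represented as a dict like {"labels": {"crowd", "street"}},
    standing in for the tags an object-detection model would produce.
    """
    if not videos:
        return 0.0
    counts = Counter(frozenset(v["labels"]) for v in videos)
    _, majority_count = counts.most_common(1)[0]
    return majority_count / len(videos)

# Staar's example: 800 of 1,000 videos show the same thing.
videos = [{"labels": {"crowd", "street"}}] * 800 + [{"labels": {"beach"}}] * 200
print(authenticity_score(videos))  # 0.8
```

A high score means most independently submitted videos agree on what was filmed, which is the signal the platform treats as evidence that the footage is genuine.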

Staar says: “There are two main advantages – the first is that media companies can now make sure that videos they use are authentic and shot in the location where they say they are taken, and not modified or doctored.

“The other advantage is that they are able to bridge the gap between themselves and their audience – let their audience become a part of the story, and source exclusive and verified content very quickly. It can be for small things, too – it doesn’t have to just be big, breaking news.”

As is common with those working in today’s AI space, Staar is keen to point out that the idea isn’t to put journalists and human fact-checkers out of jobs.

“It will not replace the job of the journalist – we will always need journalists to put everything in perspective, but to get the raw data, this will be a great tool.”

Of course, as technology advances, the tools that fakers use to attempt to pull the wool over our eyes are likely to become increasingly sophisticated. It’s already possible to make highly realistic videos putting words in the mouths of people who would, in reality, be very unlikely to say such things. A doctored video of Obama being rather rude about Trump is a great example (warning: contains explicit language).

Over time it’s likely we will see a continuation of the arms race which has always existed in the technology sphere – with good guys racing against the bad guys to be the first to deploy the latest and most powerful tools.

Fake news is unlikely ever to be fully eradicated – there will always be someone willing to present a skewed version of the truth to push their own agenda. However, it could be the case that tools like VeriFlix, or whatever comes next, will raise the barrier regarding the tech and expertise needed to hoodwink us, going some way toward making the world a more truthful place.

Bernard Marr is a best-selling author and keynote speaker on business, technology and big data. His new book is Data Strategy.


Scammers Abuse Multilingual Domain Names


Cyber-criminals are abusing multilingual character sets to trick people into visiting phishing websites.

The non-English characters allow scammers to create “lookalike” sites with domain names almost indistinguishable from legitimate ones.

Farsight Security found scam sites posing as banks, loan advisers and children’s brands Lego and Haribo.

Smartphone users are at greater risk as small screens make lookalikes even harder to spot.

Targeted attack
The Farsight Security report looked at more than 100 million domain names that use non-English character sets – introduced to make the net more familiar and usable for non-English speaking nations – and found about 27% of them had been created by scammers.

It also uncovered more than 8,000 separate characters that could be abused to confuse people.

Farsight founder Paul Vixie, who wrote much of the software underpinning the net’s domain names, told the BBC: “Any lower case letter can be represented by as many as 40 different variations.”

And many internationalised versions added just a tiny fleck or mark that was not easy to see.

Eldar Tuvey, founder and head of security company Wandera, said it had also seen an upsurge in phishing domains using different ways of forming characters.

In particular, it had seen an almost doubling of the number of scam domains created using an encoding system called punycode over the past few months.
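The punycode encoding behind these lookalike domains can be seen with Python’s standard library. The domain below is a classic illustration (a Cyrillic “а” in place of the Latin “a”), and the mixed-script check is a crude illustrative heuristic, not a tool used by either company mentioned here.

```python
import unicodedata

# The first letter is CYRILLIC SMALL LETTER A (U+0430), not Latin "a".
lookalike = "аpple.com"
print(lookalike.encode("idna"))  # b'xn--pple-43d.com' — not apple.com

def mixed_script(label):
    """Crude heuristic: flag a domain label mixing Latin and Cyrillic letters."""
    scripts = {unicodedata.name(ch).split()[0] for ch in label if ch.isalpha()}
    return {"LATIN", "CYRILLIC"} <= scripts

print(mixed_script("аpple"))  # True  — one Cyrillic letter among Latin ones
print(mixed_script("apple"))  # False — pure Latin
```

Real defenses are more nuanced (many legitimate domains are entirely non-Latin), but the sketch shows why the raw `xn--` form, unlike the rendered Unicode form, is hard to mistake for the genuine domain.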

And phishing gangs were using messages sent via mobile apps to tempt people into clicking on the similar-looking links.

“They are targeting specific groups,” Mr Tuvey said.

And research had established people were three times more likely to fall for a phishing scam presented on their phone.

“To phish someone, you just have to fool them once,” Mr Tuvey said. “Tricking them into installing malware is much more work.”
