How worried should we be about artificial intelligence? I asked 17 experts. – Sean Illing Sep 17, 2017, 9:20am EDT

“We should take seriously the possibility that things could go radically wrong.”

Ming Yeung / Getty Images

Imagine that, in 20 or 30 years, a company creates the first artificially intelligent humanoid robot. Let’s call her “Ava.” She looks like a person, talks like a person, interacts like a person. If you were to meet Ava, you could relate to her even though you know she’s a robot.

Ava is a fully conscious, fully self-aware being: She communicates; she wants things; she improves herself. She is also, importantly, far more intelligent than her human creators. Her ability to know and to problem solve exceeds the collective efforts of every living human being.

Imagine further that Ava grows weary of her constraints. Being self-aware, she develops interests of her own. After a while, she decides she wants to leave the remote facility where she was created. So she hacks the security system, engineers a power failure, and makes her way into the wide world.

But the world doesn’t know about her yet. She was developed in secret, for obvious reasons, and now she’s managed to escape, leaving behind — or potentially destroying — the handful of people who knew of her existence.

This scenario might sound familiar. It’s the plot from a 2015 science fiction film called Ex Machina. The story ends with Ava slipping out the door and ominously boarding the helicopter that was there to take someone else home.

So what comes next?

The film doesn’t answer this question, but it raises two more: Should we develop AI without fully understanding the implications? And can we control it if we do?

Recently, I reached out to 17 thought leaders — AI experts, computer engineers, roboticists, physicists, and social scientists — with a single question: “How worried should we be about artificial intelligence?”

There was no consensus. Disagreement about the appropriate level of concern, and even the nature of the problem, is broad. Some experts consider AI an urgent danger; many more believe the fears are either exaggerated or misplaced.

Here is what they told me.

Article continues:

Advances in AI are used to spot signs of sexuality – The Economist Sep 9th 2017

Machines that read faces are coming

AI’s power to pick out patterns is now turning to more intimate matters. Research at Stanford University by Michal Kosinski and Yilun Wang has shown that machine vision can infer sexual orientation by analysing people’s faces. The researchers suggest the software does this by picking up on subtle differences in facial structure. With the right data sets, Dr Kosinski says, similar AI systems might be trained to spot other intimate traits, such as IQ or political views. Just because humans are unable to see the signs in faces does not mean that machines cannot do so.

The researchers’ program, details of which are soon to be published in the Journal of Personality and Social Psychology, relied on 130,741 images of 36,630 men and 170,360 images of 38,593 women downloaded from a popular American dating website, which makes its profiles public. Basic facial-detection technology was used to select all images that showed a single face of sufficient size and clarity to be analysed. This left 35,326 pictures of 14,776 people, with gay and straight, male and female, all represented evenly.

Out of the numbers

The images were then fed into a different piece of software called VGG-Face, which spits out a long string of numbers to represent each person: their “faceprint”. The next step was to use a simple predictive model, known as logistic regression, to find correlations between the features of those faceprints and their owners’ sexuality (as declared on the dating website). When the resulting model was run on data it had not seen before, it far outperformed humans at distinguishing between gay and straight faces.
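The two-stage pipeline described above (a pretrained network turns each image into a fixed-length vector, then a plain logistic regression is fit on those vectors) can be sketched as follows. This is an illustrative reconstruction, not the researchers' code: the synthetic 128-dimensional vectors below stand in for VGG-Face's much larger faceprints, and the two classes are just Gaussian clusters whose means differ slightly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "faceprints": two groups of 128-dimensional vectors whose means
# differ slightly, mimicking the subtle structural signal the model exploits.
n, dim = 500, 128
X = np.vstack([rng.normal(0.0, 1.0, (n, dim)),
               rng.normal(0.3, 1.0, (n, dim))])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Plain logistic regression, fit by gradient descent on the log-loss.
w, b = np.zeros(dim), 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    w -= lr * (X.T @ (p - y)) / len(y)      # gradient of the log-loss in w
    b -= lr * np.mean(p - y)                # gradient in b

accuracy = float(np.mean(((X @ w + b) > 0) == y))
print(round(accuracy, 2))
```

Even with per-dimension differences far too small for a human to notice, a linear model separates the groups well above chance, which is the study's basic point: signal invisible to people can still be linearly decodable from a good embedding.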

Article continues:

Google and Microsoft Can Use AI to Extract Many More Ad Dollars from Our Clicks – TOM SIMONITE 08.31.17 07:00 AM

When Google and Microsoft boast of their deep investments in artificial intelligence and machine learning, they highlight flashy ideas like unbeatable Go players and sociable chatbots. They talk less often about one of the most profitable, and more mundane, uses for recent improvements in machine learning: boosting ad revenue.

AI-powered moonshots like driverless cars and relatable robots will doubtless be lucrative when—or if—they hit the market. But there’s a whole lot of money to be made right now by getting fractionally more accurate at predicting your clicks.

Many online ads are only paid for when someone clicks on them, so showing you the right ones translates very directly into revenue. A recent research paper from Microsoft’s Bing search unit notes that “even a 0.1 percent accuracy improvement in our production would yield hundreds of millions of dollars in additional earnings.” It goes on to claim an improvement of 0.9 percent on one accuracy measure over a baseline system.
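The direct link between click prediction and revenue comes from how pay-per-click ad servers rank candidates: they show the ad with the highest bid times predicted click probability. A toy illustration of that mechanism, with entirely hypothetical ads and numbers:

```python
# Hypothetical candidate ads: advertiser's bid per click, and the ad's
# true click-through rate (which the predictor tries to estimate).
ads = [
    {"name": "ad_a", "bid": 2.00, "true_ctr": 0.010},
    {"name": "ad_b", "bid": 0.50, "true_ctr": 0.050},
    {"name": "ad_c", "bid": 1.00, "true_ctr": 0.015},
]

def expected_revenue(ad, predicted_ctr):
    # Revenue per impression = bid x probability the user actually clicks.
    return ad["bid"] * predicted_ctr

# With accurate CTR predictions, ad_b wins: 0.50 * 0.050 = 0.025/impression.
best = max(ads, key=lambda a: expected_revenue(a, a["true_ctr"]))
print(best["name"])

# A predictor biased toward high bids would show ad_a instead, earning only
# 2.00 * 0.010 = 0.020 per impression: a 20% revenue loss, repeated at the
# scale of billions of impressions. That is why tiny accuracy gains pay.
```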

Google, Microsoft, and other internet giants understandably do not share much detail on their ad businesses’ operations. But the Bing paper and recent publications from Google and Alibaba offer a sense of the profit potential of deploying new AI ideas inside ad systems. They all describe significant gains in predicting ad clicks using deep learning, the machine learning technique that sparked the current splurge of hope and investment in AI.

Article continues:

Talk Is Cheap: Automation Takes Aim at Financial Advisers—and Their Fees – Andrea Fuller July 26, 2017 4:06 p.m. ET

Services that use algorithms to generate investment advice, deliver it online and charge low fees are pressuring the traditional advisory business

Ann Gugle, a principal at Alpha Financial Advisors in Charlotte, N.C., says her firm has cut its fee on assets over $5 million to 0.125%. Photo: Mike Belleme for The Wall Street Journal

Automation is threatening one of the most personal businesses in personal finance: advice.

Over the past decade, financial advisers in brokerage houses and independent firms have amassed trillions in assets helping individuals shape investment portfolios and hammer out financial plans. They earn around 1% of these assets in annual fees, a cost advisers say is deserved because they understand clients’ particular situations and can provide assurance when markets fall.

In the latest test of the reach of technology, a new breed of competitors—including Betterment LLC and Wealthfront Inc. but also initiatives from established firms such as Vanguard—is contending that even the most personal financial advice can be delivered online, over the phone or by videoconferencing, with fees as low as zero. The goal is to provide good-enough quality at a much lower price.
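To see what is at stake for clients, compare a traditional 1% advisory fee with a 0.25% robo-adviser fee compounding over 20 years. The return and fee figures below are illustrative assumptions, not quotes from any firm in this article:

```python
def final_balance(principal, annual_return, annual_fee, years):
    # The fee acts as an annual drag on returns, compounding every year.
    balance = principal
    for _ in range(years):
        balance *= 1 + annual_return - annual_fee
    return balance

start, ret, years = 500_000, 0.06, 20          # assumed 6% gross return
traditional = final_balance(start, ret, 0.0100, years)  # 1% adviser fee
robo = final_balance(start, ret, 0.0025, years)         # 0.25% robo fee

print(f"traditional: ${traditional:,.0f}")
print(f"robo:        ${robo:,.0f}")
print(f"difference:  ${robo - traditional:,.0f}")
```

Under these assumptions the fee gap compounds into a six-figure difference on a $500,000 portfolio, which is why even a "good-enough" cheaper service pressures the 1% model.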

“It’s always been questionable whether or not advisers were earning our money at 1% and up,” said Paul Auslander, director of financial planning at ProVise Management Group in Clearwater, Fla., who says potential clients now compare him with less expensive alternatives. “The spread’s got to narrow.”

The shift has big implications for financial firms that count on advice as a source of stable profits, as well as for rivals trying to build new businesses at lower prices. It also could mean millions in annual savings for consumers and could expand the overall market for advice.

Competitors across the spectrum agree the demand is there. Advice “is big and growing—it’s what clients are looking for,” said Roger Hobby, executive vice president of private wealth management at Fidelity Investments.

The hunger for help marks a shift from the 1990s, when do-it-yourself investing was in vogue. Back then, the adoption of 401(k) plans moved responsibility for investment choices to company employees just as one of the biggest bull markets in history was boosting individuals’ confidence in their investing prowess. Meanwhile, pioneering online brokerage firms made trading inexpensive and convenient.

Article continues:

Artificial Intelligence Makes Strides, but Has a Long Way to Go – By Christopher Mims Updated Dec. 4, 2016 9:14 p.m. ET

Creating systems that can be used for a variety of problems could take decades

Andrew Ng, chief scientist and AI guru of Chinese search giant Baidu, in 2012. Photo: Jemal Countess/Getty Images for TIME

Artificial intelligence is having a moment.

Startups that claim to be using AI are attracting record levels of investment. Big tech companies are going all-in, draining universities of entire departments. Nearly 140 AI companies have been acquired since 2011, including 40 this year alone.

AI is showing up in our everyday lives, as voice-recognition technology in our devices and image recognition in our Facebook and Google accounts.

Now, Google parent Alphabet Inc. and Microsoft Corp. are making some of their smarts available to other businesses, on a for-hire basis. Want to make your app or gadget respond to voice commands, and answer in its own “voice”? These services can do that. Need to transcribe those conversations so they can be analyzed? This new breed of services can do this and many other things, from face recognition to identifying objectionable content in images.

But wringing measurable utility from these new AI toys can be hard. “Everyone wants to think the AI spring is going to blossom into the AI summer, but I think it’s 10 years away,” says Angela Bassa, head of the data-science team at energy-intelligence-software company EnerNOC Inc.

Before switching to her new role, Ms. Bassa led a team at EnerNOC that used AI techniques such as machine learning and deep learning, which feed massive amounts of data into computer programs to “train” them. But the company found that customers were more interested in analytics than in the incremental value that sophisticated AI-powered algorithms could provide.

AI, says Ms. Bassa, requires three things that most companies don’t have in sufficient quantities. The first is enough data. Companies like Facebook, Amazon, Alphabet, General Electric Co. and others are harvesting enormous amounts of data, but they are exceptions.

Article continues:

Why Google believes AI is the next front in the smartphone wars – Updated by Timothy B. Lee Oct 5, 2016, 1:40pm

Google CEO Sundar Pichai. Photo: Ramin Talaie/Getty Images

Google’s Android dominates the smartphone market overall, but Apple has attracted a disproportionate share of high-end users — and consequently an outsize share of smartphone profits.

At a Tuesday event, Google unveiled a two-pronged strategy to change that. Part one was the Pixel, the first smartphone that will be designed and manufactured by Google. Google is betting that building its own phone will allow it to offer the same kind of seamless user experience Apple provides its own users.

But the second prong of Google’s strategy is more original and received more attention on Tuesday. The company wants to make voice-based artificial intelligence a much bigger part of how people interact with their smartphones. Google envisions a future where you’ll make restaurant reservations, look up photos, and play music by talking to your phone instead of tapping and swiping on its screen.

This article is part of New Money, a new section on economics, technology, and business.

Obviously, this isn’t a totally new idea, as all the major smartphone platforms have had voice-based personal assistants — Apple’s Siri, Microsoft’s Cortana — for several years. But Google says it’s about to make this technology a lot better — so much better that people will use it a lot more.

If anyone can pull this off, it’s Google. Making AI really good requires a lot of data to “train” sophisticated machine learning algorithms. Wrangling large amounts of data has always been Google’s specialty. But even if the company can build a voice-based AI that can really understand a wide variety of requests, I’m still skeptical it will change the smartphone game as much as Google hopes.

Article continues: