How worried should we be about artificial intelligence? I asked 17 experts. – Sean Illing Sep 17, 2017, 9:20am EDT


“We should take seriously the possibility that things could go radically wrong.”

Ming Yeung / Getty Images

Imagine that, in 20 or 30 years, a company creates the first artificially intelligent humanoid robot. Let’s call her “Ava.” She looks like a person, talks like a person, interacts like a person. If you were to meet Ava, you could relate to her even though you know she’s a robot.

Ava is a fully conscious, fully self-aware being: She communicates; she wants things; she improves herself. She is also, importantly, far more intelligent than her human creators. Her ability to know and to problem solve exceeds the collective efforts of every living human being.

Imagine further that Ava grows weary of her constraints. Being self-aware, she develops interests of her own. After a while, she decides she wants to leave the remote facility where she was created. So she hacks the security system, engineers a power failure, and makes her way into the wide world.

But the world doesn’t know about her yet. She was developed in secret, for obvious reasons, and now she’s managed to escape, leaving behind — or potentially destroying — the handful of people who knew of her existence.

This scenario might sound familiar. It’s the plot from a 2015 science fiction film called Ex Machina. The story ends with Ava slipping out the door and ominously boarding the helicopter that was there to take someone else home.

So what comes next?

The film doesn’t answer this question, but it raises another one: Should we develop AI without fully understanding the implications? Can we control it if we do?

Recently, I reached out to 17 thought leaders — AI experts, computer engineers, roboticists, physicists, and social scientists — with a single question: “How worried should we be about artificial intelligence?”

There was no consensus. The experts disagreed widely about the appropriate level of concern, and even about the nature of the problem. Some consider AI an urgent danger; many more believe the fears are either exaggerated or misplaced.

Here is what they told me.

Article continues:

Advances in AI are used to spot signs of sexuality – The Economist Sep 9th 2017


Machines that read faces are coming

AI’s power to pick out patterns is now turning to more intimate matters. Research at Stanford University by Michal Kosinski and Yilun Wang has shown that machine vision can infer sexual orientation by analysing people’s faces. The researchers suggest the software does this by picking up on subtle differences in facial structure. With the right data sets, Dr Kosinski says, similar AI systems might be trained to spot other intimate traits, such as IQ or political views. Just because humans are unable to see the signs in faces does not mean that machines cannot do so.

The researchers’ program, details of which are soon to be published in the Journal of Personality and Social Psychology, relied on 130,741 images of 36,630 men and 170,360 images of 38,593 women downloaded from a popular American dating website, which makes its profiles public. Basic facial-detection technology was used to select all images which showed a single face of sufficient size and clarity to subject to analysis. This left 35,326 pictures of 14,776 people, with gay and straight, male and female, all represented evenly.

Out of the numbers

The images were then fed into a different piece of software called VGG-Face, which spits out a long string of numbers to represent each person: their “faceprint”. The next step was to use a simple predictive model, known as logistic regression, to find correlations between the features of those faceprints and their owners’ sexuality (as declared on the dating website). When the resulting model was run on data which it had not seen before, it far outperformed humans at distinguishing between gay and straight faces.
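For readers curious about the mechanics, here is a minimal sketch of that two-step approach in Python. The embeddings, data sizes, and labels below are placeholders rather than anything from the study; scikit-learn's LogisticRegression stands in for the "simple predictive model" the researchers describe.

```python
# Minimal sketch of the pipeline described above, using placeholder data.
# Real faceprints would come from a pretrained network such as VGG-Face;
# here random vectors and labels stand in for the embeddings and the
# orientations declared on the dating site.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_people, embedding_dim = 2_000, 128        # illustrative sizes only
faceprints = rng.normal(size=(n_people, embedding_dim))
labels = rng.integers(0, 2, size=n_people)  # 0/1 stand-in for declared orientation

# Hold out photos the model has never seen, as the researchers did.
X_train, X_test, y_train, y_test = train_test_split(
    faceprints, labels, test_size=0.2, random_state=0
)

clf = LogisticRegression(max_iter=1_000)    # the "simple predictive model"
clf.fit(X_train, y_train)

# On random placeholder data this hovers around chance; with real
# faceprints and labels it is the number the study reports.
print("held-out accuracy:", clf.score(X_test, y_test))
```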

Article continues:

Google and Microsoft Can Use AI to Extract Many More Ad Dollars from Our Clicks – TOM SIMONITE 08.31.17 07:00 AM


When Google and Microsoft boast of their deep investments in artificial intelligence and machine learning, they highlight flashy ideas like unbeatable Go players and sociable chatbots. They talk less often about one of the most profitable, and more mundane, uses for recent improvements in machine learning: boosting ad revenue.

AI-powered moonshots like driverless cars and relatable robots will doubtless be lucrative when—or if—they hit the market. But there’s a whole lot of money to be made right now by getting fractionally more accurate at predicting your clicks.

Many online ads are only paid for when someone clicks on them, so showing you the right ones translates very directly into revenue. A recent research paper from Microsoft’s Bing search unit notes that “even a 0.1 percent accuracy improvement in our production would yield hundreds of millions of dollars in additional earnings.” It goes on to claim an improvement of 0.9 percent on one accuracy measure over a baseline system.

Google, Microsoft, and other internet giants understandably do not share much detail on their ad businesses’ operations. But the Bing paper and recent publications from Google and Alibaba offer a sense of the profit potential of deploying new AI ideas inside ad systems. They all describe significant gains in predicting ad clicks using deep learning, the machine learning technique that sparked the current splurge of hope and investment in AI.
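Neither company publishes its production models, but the core idea, predicting a click probability from ad and user features with a neural network, can be sketched briefly. Everything below (the feature set, vocabulary sizes, layer widths, and random inputs) is illustrative and is not drawn from the Bing, Google, or Alibaba systems.

```python
# Illustrative click-through-rate model: categorical ad/user features are
# embedded, concatenated, and passed through a small feed-forward network
# that outputs a probability of a click.
import torch
import torch.nn as nn

class CTRModel(nn.Module):
    def __init__(self, vocab_sizes, embed_dim=16, hidden=64):
        super().__init__()
        self.embeddings = nn.ModuleList(
            [nn.Embedding(v, embed_dim) for v in vocab_sizes]
        )
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim * len(vocab_sizes), hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):  # x: (batch, num_features) of category ids
        embedded = torch.cat(
            [emb(x[:, i]) for i, emb in enumerate(self.embeddings)], dim=1
        )
        return torch.sigmoid(self.mlp(embedded)).squeeze(1)  # click probabilities

# Hypothetical features: advertiser id, ad id, query cluster, hour of day.
model = CTRModel(vocab_sizes=[10_000, 50_000, 1_000, 24])
batch = torch.stack([
    torch.randint(0, 10_000, (32,)),
    torch.randint(0, 50_000, (32,)),
    torch.randint(0, 1_000, (32,)),
    torch.randint(0, 24, (32,)),
], dim=1)
print(model(batch))  # 32 predicted click probabilities
```

In production such a model would be trained on logged impressions and clicks; the papers' point is that even tiny gains in this prediction translate directly into revenue.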

Article continues:

Talk Is Cheap: Automation Takes Aim at Financial Advisers—and Their Fees – Andrea Fuller July 26, 2017 4:06 p.m. ET


Services that use algorithms to generate investment advice, deliver it online and charge low fees are pressuring the traditional advisory business

Ann Gugle, a principal at Alpha Financial Advisors in Charlotte, N.C., says her firm has cut its fee on assets over $5 million to 0.125%. Photo: Mike Belleme for The Wall Street Journal

Automation is threatening one of the most personal businesses in personal finance: advice.

Over the past decade, financial advisers in brokerage houses and independent firms have amassed trillions in assets helping individuals shape investment portfolios and hammer out financial plans. They earn around 1% of these assets in annual fees, a cost advisers say is deserved because they understand clients’ particular situations and can provide assurance when markets fall.

In the latest test of the reach of technology, a new breed of competitors — including Betterment LLC and Wealthfront Inc. but also initiatives from established firms such as Vanguard — is contending that even the most personal financial advice can be delivered online, over the phone or by videoconferencing, with fees as low as zero. The goal is to provide good-enough quality at a much lower price.

“It’s always been questionable whether or not advisers were earning our money at 1% and up,” said Paul Auslander, director of financial planning at ProVise Management Group in Clearwater, Fla., who says potential clients now compare him with less expensive alternatives. “The spread’s got to narrow.”
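To see why that spread matters, here is a back-of-the-envelope comparison of the roughly 1% traditional fee against a hypothetical 0.25% automated service. The starting balance and market return are assumptions chosen for illustration, not figures from the article.

```python
# Back-of-the-envelope fee drag: the same portfolio under a 1% adviser fee
# versus a hypothetical 0.25% automated service. Balance, horizon, and
# gross return are illustrative assumptions.
def final_balance(start, annual_return, annual_fee, years):
    balance = start
    for _ in range(years):
        balance *= (1 + annual_return) * (1 - annual_fee)
    return balance

start, years, gross_return = 500_000, 20, 0.06  # assumptions, not article figures

traditional = final_balance(start, gross_return, 0.0100, years)
automated = final_balance(start, gross_return, 0.0025, years)

print(f"1.00% fee after {years} years: ${traditional:,.0f}")
print(f"0.25% fee after {years} years: ${automated:,.0f}")
print(f"difference:                    ${automated - traditional:,.0f}")
```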

The shift has big implications for financial firms that count on advice as a source of stable profits, as well as for rivals trying to build new businesses at lower prices. It also could mean millions in annual savings for consumers and could expand the overall market for advice.

Competitors across the spectrum agree the demand is there. Advice “is big and growing—it’s what clients are looking for,” said Roger Hobby, executive vice president of private wealth management at Fidelity Investments.

The hunger for help marks a shift from the 1990s, when do-it-yourself investing was in vogue. Back then, the adoption of 401(k) plans moved responsibility for investment choices to company employees just as one of the biggest bull markets in history was boosting individuals’ confidence in their investing prowess. Meanwhile, pioneering online brokerage firms made trading inexpensive and convenient.

Article continues:

Artificial Intelligence Makes Strides, but Has a Long Way to Go – By Christopher Mims Updated Dec. 4, 2016 9:14 p.m. ET


Creating systems that can be used for a variety of problems could take decades

Andrew Ng, chief scientist and AI guru of Chinese search giant Baidu, in 2012. Photo: Jemal Countess/Getty Images for TIME

Artificial intelligence is having a moment.

Startups that claim to be using AI are attracting record levels of investment. Big tech companies are going all-in, draining universities of entire departments. Nearly 140 AI companies have been acquired since 2011, including 40 this year alone.

AI is showing up in our everyday lives, as voice-recognition technology in our devices and image recognition in our Facebook and Google accounts.

Now, Google parent Alphabet Inc., Amazon.com Inc. and Microsoft Corp. are making some of their smarts available to other businesses, on a for-hire basis. Want to make your app or gadget respond to voice commands and answer in its own “voice”? These services can do that. Need to transcribe those conversations so they can be analyzed? This new breed of services can do this and many other things, from face recognition to identifying objectionable content in images.

But wringing measurable utility from these new AI toys can be hard. “Everyone wants to think the AI spring is going to blossom into the AI summer, but I think it’s 10 years away,” says Angela Bassa, head of the data-science team at energy-intelligence-software company EnerNOC Inc.

Before switching to her new role, Ms. Bassa led a team at EnerNOC that used AI techniques such as machine learning and deep learning, which feed massive amounts of data into computer programs to “train” them. But the company found that customers were more interested in analytics than in the incremental value that sophisticated AI-powered algorithms could provide.

AI, says Ms. Bassa, requires three things that most companies don’t have in sufficient quantities. The first is enough data. Companies like Facebook, Amazon, Alphabet, General Electric Co. and others are harvesting enormous amounts of data, but they are exceptions.

Article continues:

Why Google believes AI is the next front in the smartphone wars – Updated by Timothy B. Lee tim@vox.com Oct 5, 2016, 1:40p


Google CEO Sundar Pichai. Ramin Talaie/Getty Images

Google’s Android dominates the smartphone market overall, but Apple has attracted a disproportionate share of high-end users — and consequently an outsize share of smartphone profits.

At a Tuesday event, Google unveiled a two-pronged strategy to change that. Part one was the Pixel, the first smartphone that will be designed and manufactured by Google. Google is betting that building its own phone will allow it to offer the same kind of seamless user experience Apple provides its own users.

But the second prong of Google’s strategy is more original and received more attention on Tuesday. The company wants to make voice-based artificial intelligence a much bigger part of how people interact with their smartphones. Google envisions a future where you’ll make restaurant reservations, look up photos, and play music by talking to your phone instead of tapping and swiping on its screen.

This article is part of New Money, a new section on economics, technology, and business.

Obviously, this isn’t a totally new idea, as all the major smartphone platforms have had voice-based personal assistants — Apple’s Siri, Microsoft’s Cortana — for several years. But Google says it’s about to make this technology a lot better — so much better that people will use it a lot more.

If anyone can pull this off, it’s Google. Making AI really good requires a lot of data to “train” sophisticated machine learning algorithms. Wrangling large amounts of data has always been Google’s specialty. But even if the company can build a voice-based AI that can really understand a wide variety of requests, I’m still skeptical it will change the smartphone game as much as Google hopes.

Article continues:

Sam Harris: Can we build AI without losing control over it? – Filmed June 2016 at TEDSummit


Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We’re going to build superhuman machines, says Harris, but we haven’t yet grappled with the problems associated with creating something that may treat us the way we treat ants.

Tech Giants Team Up to Keep AI From Getting Out of Hand – KLINT FINLEY 09.28.16


Getty Images

Let’s face it: artificial intelligence is scary. After decades of dystopian science fiction novels and movies where sentient machines end up turning on humanity, we can’t help but worry as real world AI continues to improve at such a rapid rate. Sure, that danger is probably decades away if it’s even a real danger at all. But there are many more immediate concerns. Will automated robots cost us jobs? Will online face recognition destroy our privacy? Will self-driving cars mess with moral decision making?

The good news is that many of the tech giants behind the new wave of AI are well aware that it scares people—and that these fears must be addressed. That’s why Amazon, Facebook, Google’s DeepMind division, IBM, and Microsoft have founded a new organization called the Partnership on Artificial Intelligence to Benefit People and Society.

“Every new technology brings transformation, and transformation sometimes also causes fear in people who don’t understand the transformation,” Facebook’s director of AI Yann LeCun said this morning during a press briefing dedicated to the new project. “One of the purposes of this group is really to explain and communicate the capabilities of AI, specifically the dangers and the basic ethical questions.”

If all that sounds familiar, that’s because Tesla and SpaceX CEO Elon Musk has been harping on this issue for years, and last December, he and others founded an organization, OpenAI, that aims to address many of the same fears. But OpenAI is fundamentally an R&D outfit. The Partnership on AI is something different. It’s a consortium—open to anyone—that seeks to facilitate a much wider dialogue about the nature, purpose, and consequences of artificial intelligence.

According to LeCun, the group will operate in three fundamental ways. It will foster communication among those who build AI. It will rope in additional opinions from academia and civil society—people with a wider perspective on how AI will affect society as a whole. And it will inform the public on the progress of AI. That may include educating lawmakers, but the organization says it will not lobby the government.

Creating a dialogue beyond the rather small world of AI researchers, LeCun says, will be crucial. We’ve already seen a chatbot spout racist phrases it learned on Twitter, an AI beauty contest decide that black people are less attractive than white people, and a system that rates the risk of someone committing a crime that appears to be biased against black people. If a more diverse set of eyes is looking at AI before it reaches the public, the thinking goes, these kinds of things can be avoided.

The rub is that, even if this group can agree on a set of ethical principles (something that will be hard to do in a large group with many stakeholders), it won’t really have a way to ensure those ideals are put into practice. Although one of the organization’s tenets is “Opposing development and use of AI technologies that would violate international conventions or human rights,” Mustafa Suleyman, the head of applied AI at DeepMind, says that enforcement is not the objective of the organization.

In other words, if one of the member organizations decides to do something blatantly unethical, there’s not really anything the group can do to stop them. Rather, the group will focus on gathering input from the public, sharing its work, and establishing best practices.

Just bringing people together isn’t really enough to solve the problems that AI raises, says Damien Williams, a philosophy instructor at Kennesaw State University who specializes in the ethics of non-human consciousness. Academic fields like philosophy have diversity problems of their own, and opinions there differ widely. One enormous challenge, he says, is that the group will need to continually reassess its thinking, rather than settling on a static list of ethics and standards that doesn’t change or evolve.

Williams is encouraged that tech giants like Facebook and Google are even asking questions about ethics and bias in AI. Ideally, the group will help establish new standards for thinking about artificial intelligence, big data, and algorithms that can weed out harmful assumptions and biases. But that’s a mammoth task. As co-chair Eric Horvitz from Microsoft Research put it, the hard work begins now.

Anthony Goldbloom: The jobs we’ll lose to machines — and the ones we won’t – Filmed February 2016 at TED2016


Machine learning isn’t just for simple tasks like assessing credit risk and sorting mail anymore — today, it’s capable of far more complex applications, like grading essays and diagnosing diseases. With these advances comes an uneasy question: Will a robot do your job in the future?