Talk Is Cheap: Automation Takes Aim at Financial Advisers—and Their Fees – Andrea Fuller July 26, 2017 4:06 p.m. ET


Services that use algorithms to generate investment advice, deliver it online and charge low fees are pressuring the traditional advisory business

Ann Gugle, a principal at Alpha Financial Advisors in Charlotte, N.C., says her firm has cut its fee on assets over $5 million to 0.125%. Photo: Mike Belleme for The Wall Street Journal

 Automation is threatening one of the most personal businesses in personal finance: advice.

Over the past decade, financial advisers in brokerage houses and independent firms have amassed trillions in assets helping individuals shape investment portfolios and hammer out financial plans. They earn around 1% of these assets in annual fees, a cost advisers say is deserved because they understand clients’ particular situations and can provide assurance when markets fall.

In the latest test of the reach of technology, a new breed of competitors—including Betterment LLC and Wealthfront Inc., but also initiatives from established firms such as Vanguard—is contending that even the most personal financial advice can be delivered online, over the phone or by videoconferencing, with fees as low as zero. The goal is to provide good-enough quality at a much lower price.

“It’s always been questionable whether or not advisers were earning our money at 1% and up,” said Paul Auslander, director of financial planning at ProVise Management Group in Clearwater, Fla., who says potential clients now compare him with less expensive alternatives. “The spread’s got to narrow.”

The shift has big implications for financial firms that count on advice as a source of stable profits, as well as for rivals trying to build new businesses at lower prices. It also could mean millions in annual savings for consumers and could expand the overall market for advice.

Competitors across the spectrum agree the demand is there. Advice “is big and growing—it’s what clients are looking for,” said Roger Hobby, executive vice president of private wealth management at Fidelity Investments.

The hunger for help marks a shift from the 1990s, when do-it-yourself investing was in vogue. Back then, the adoption of 401(k) plans moved responsibility for investment choices to company employees just as one of the biggest bull markets in history was boosting individuals’ confidence in their investing prowess. Meanwhile, pioneering online brokerage firms made trading inexpensive and convenient.

Article continues:

How can we stop algorithms telling lies? – Cathy O’Neil Sunday 16 July 2017 09.59 BST


How might an algorithm sort your data? Photograph: MatejMo/Getty Images/iStockphoto

Lots of algorithms go bad unintentionally. Some of them, however, are made to be criminal. Algorithms are formal rules, usually written in computer code, that make predictions about future events based on historical patterns. To train an algorithm, you need to provide historical data as well as a definition of success.
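
To make that definition concrete, here is a minimal sketch, not drawn from the article, of what "historical data plus a definition of success" might look like in code. It uses scikit-learn's logistic regression; the loan scenario, feature names, and figures are all invented for illustration.

```python
# A minimal sketch of training an algorithm: historical data plus
# a definition of success. All data and feature names are invented.
from sklearn.linear_model import LogisticRegression

# Historical data: each row describes one past loan applicant
# as [income in $k, years employed, number of existing debts].
historical_data = [
    [45, 2, 1],
    [80, 10, 0],
    [30, 1, 3],
    [60, 5, 0],
]

# The "definition of success": 1 means the loan was repaid.
# Choosing this label is where human judgment enters the model.
success = [0, 1, 0, 1]

model = LogisticRegression()
model.fit(historical_data, success)

# The trained model now predicts the future from the past.
new_applicant = [[55, 3, 1]]
print(model.predict(new_applicant))
```

The success column is the crux: train the same historical data against a different definition of success and you get a different algorithm.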

We’ve seen finance get taken over by algorithms in the past few decades. Trading algorithms use historical data to predict movements in the market. Success for that algorithm is a predictable market move, and the algorithm is vigilant for patterns that have historically happened just before that move. Financial risk models also use historical market changes to predict cataclysmic events in a more global sense, so not for an individual stock but rather for an entire market. The risk model for mortgage-backed securities was famously bad – intentionally so – and the trust in those models can be blamed for much of the scale and subsequent damage wrought by the 2008 financial crisis.
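
As a toy illustration of that pattern-hunting, and emphatically not a real strategy: a hypothetical rule that counts how often a simple pattern, three consecutive down days, preceded a next-day gain in an invented price history.

```python
# Toy illustration of a pattern-hunting trading rule; prices are invented.
prices = [100, 99, 97, 96, 98, 97, 95, 94, 96, 97]

pattern_seen = 0
pattern_paid_off = 0
for i in range(3, len(prices) - 1):
    # The "pattern": three consecutive down days ending at day i.
    down_three = (prices[i - 2] < prices[i - 3]
                  and prices[i - 1] < prices[i - 2]
                  and prices[i] < prices[i - 1])
    if down_three:
        pattern_seen += 1
        # "Success" for this algorithm: the market rises the next day.
        if prices[i + 1] > prices[i]:
            pattern_paid_off += 1

if pattern_seen:
    print(f"Pattern preceded a gain {pattern_paid_off}/{pattern_seen} times")
```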

Since 2008, we’ve heard less about algorithms in finance and much more about big data algorithms. The target of this new generation of algorithms has shifted from abstract markets to individuals. But the underlying functionality is the same: collect historical data about people, profiling their online behaviour, their location, or their answers to questionnaires, and use that massive dataset to predict their future purchases, voting behaviour, or work ethic.
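
A hypothetical sketch of what one slice of that people-centric dataset might look like; every field, value, and label below is invented. The shape is the same as in the market case, historical rows plus a definition of success, only now each row is a person.

```python
# Hypothetical person-level records assembled from behavioural signals;
# all fields, values, and labels are invented for illustration.
people = [
    # features: [pages visited last week, minutes on site, questionnaire answer 1-5]
    {"features": [214, 37, 4], "bought_within_30_days": 1},
    {"features": [12, 2, 1], "bought_within_30_days": 0},
    {"features": [98, 15, 3], "bought_within_30_days": 1},
]

# Exactly what a model consumes: rows of historical behaviour (X) plus
# a chosen definition of success (y), here "bought within 30 days".
X = [p["features"] for p in people]
y = [p["bought_within_30_days"] for p in people]
print(X, y)
```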

The recent proliferation of big data models has gone largely unnoticed by the average person, but it’s safe to say that most important moments where people interact with large bureaucratic systems now involve an algorithm in the form of a scoring system. Getting into college, getting a job, being assessed as a worker, getting a credit card or insurance, voting, and even policing are in many cases done algorithmically. Moreover, the technology introduced into these systematic decisions is largely opaque, even to its creators, and has so far largely escaped meaningful regulation, even when it fails. That makes the question of which of these algorithms are working on our behalf even more important and urgent.

Article continues:

How algorithms rule our working lives – by Cathy O’Neil Thursday 1 September 2016 01.00 EDT



A few years ago, a young man named Kyle Behm took a leave from his studies at Vanderbilt University in Nashville, Tennessee. He was suffering from bipolar disorder and needed time to get treatment. A year and a half later, Kyle was healthy enough to return to his studies at a different university. Around that time, he learned from a friend about a part-time job. It was just a minimum-wage job at a Kroger supermarket, but it seemed like a sure thing. His friend, who was leaving the job, could vouch for him. For a high-achieving student like Kyle, the application looked like a formality.

But Kyle didn’t get called in for an interview. When he inquired, his friend explained that he had been “red-lighted” by the personality test he’d taken when he applied for the job. The test was part of an employee selection program developed by Kronos, a workforce management company based outside Boston. When Kyle told his father, Roland, an attorney, what had happened, his father asked him what kind of questions had appeared on the test. Kyle said that they were very much like the “five factor model” test, which he’d been given at the hospital. That test grades people on extraversion, agreeableness, conscientiousness, neuroticism, and openness to ideas.
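
Kronos’s actual scoring is proprietary, so the following is a purely hypothetical sketch of how answers to a five-factor questionnaire might be reduced to per-trait scores and a binary gate; the question mapping, answers, and thresholds are all invented.

```python
# Hypothetical sketch of five-factor scoring. Kronos's real test is
# proprietary; the question mapping, answers, and thresholds are invented.
answers = {  # 1-5 Likert responses, keyed by question id
    "q1": 4, "q2": 2, "q3": 5, "q4": 1, "q5": 3, "q6": 2,
}

# Which questions feed which trait (invented mapping).
trait_questions = {
    "extraversion": ["q1", "q2"],
    "agreeableness": ["q3"],
    "conscientiousness": ["q4"],
    "neuroticism": ["q5"],
    "openness": ["q6"],
}

# Average the answers behind each trait into a per-trait score.
scores = {
    trait: sum(answers[q] for q in qs) / len(qs)
    for trait, qs in trait_questions.items()
}

# An arbitrary cutoff turns the nuanced profile into a binary gate:
# this is the step that can "red-light" an applicant.
red_lighted = scores["neuroticism"] > 4 or scores["conscientiousness"] < 2
print(scores)
print("red-lighted:", red_lighted)
```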

At first, losing one minimum-wage job because of a questionable test didn’t seem like such a big deal. Roland Behm urged his son to apply elsewhere. But Kyle came back each time with the same news. The companies he was applying to were all using the same test, and he wasn’t getting offers.

Article continues:

Page Views Don’t Matter Anymore—But They Just Won’t Die | WIRED – JULIA GREENBERG 12.31.15 7:00 AM


The page view is a zombie. For years, everyone has been saying it is no longer a meaningful way to measure online popularity. But the publishers who make websites and the advertisers who pay for them swore throughout the year that they’re no longer fooled. The era where a mere click is the crown jewel of metrics is dead. But someone still needs to shoot this zombie in the head.

“We’ve talked about page views dying for ten years,” says Jason Kint, CEO of Digital Content Next, a digital publishing trade group that represents publishers on the web, including WIRED parent company Condé Nast. “They’re not dead, but they should be.”

Along with its corrupting effects, the page view itself has been corrupted.

The page view, much like the click-through, was once the key way websites understood their audiences. It was the way news organizations figured out who was reading their stories—how many, how often, which, from where—and the way advertisers were able to calculate the value of serving up ads on those sites.
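
Part of the page view’s grip was how simple the underlying arithmetic is: display ads are typically priced per thousand impressions (CPM), so revenue scales directly with views. A sketch with invented figures:

```python
# Why page views ruled: display-ad revenue is a direct function of them.
# CPM = cost per thousand ad impressions; all figures are invented.
page_views = 2_500_000   # monthly page views
ads_per_page = 3         # ad slots rendered per page view
cpm_dollars = 2.50       # price an advertiser pays per 1,000 impressions

impressions = page_views * ads_per_page
revenue = impressions / 1000 * cpm_dollars

# Every extra click adds revenue, regardless of whether anyone reads.
print(f"${revenue:,.2f} per month")
```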

But the page view notoriously spawned that most reviled of Internet aggravations: clickbait. Quality became less important than provocation; the curiosity gap supplanted craft. The page view also drove the primacy of “search engine optimization,” or the technique of selecting keywords in headlines, metadata, and text to push articles higher in Google’s search rankings. All of this served an online publishing economy propped up by display ads, which helped cement the assumption that news on the Internet should be free.

Article continues:

http://www.wired.com/2015/12/everyone-knows-page-views-dont-matter-but-they-just-wont-die/#slide-1

‘The issue formerly known as privacy’ – by Sara M. Watson November 4, 2014 5:00AM ET



Editor’s note: This is the eighth installment of the Living With Data series exploring how our online data is tracked, collected and used. Do you have questions about how your personal data is being used? Curious to learn more about your daily encounters with algorithms? Email the Decoder at thedecoder@aljazeera.net or submit your question via the form here. Screen shots and links are helpful clues! 

“For someone with an interest in privacy, there’s certainly a lot about you online.”

Someone once said that to me, and I laughed because I never said my research was about privacy. It’s a common assumption that because I’m writing about data and algorithms, I’m working on privacy.

I often share personal details in my stories to get my point across — when Netflix thinks I have children, when fitness trackers don’t match my personal fitness needs, or when Facebook asks me about my fiancé. I understand the cognitive dissonance that comes from sharing these details in an article when it seems the concern is about privacy. I rarely use the word in my work or in introducing myself, yet people still categorize the set of concerns I raise as falling under the umbrella term “privacy.”

As more of our lives are made legible as data and more of our experiences are processed by algorithms, I think privacy is an inexact term and doesn’t fully encapsulate the range of our concerns. So if not “privacy,” what could we call our concerns over data instead?

Privacy means a lot of things in a lot of contexts. For the most part, it comes out of a legal heritage. It’s everything from Justice Brandeis’ 1890 concept of the “right to be let alone,” to the ability to act autonomously, to control over the personal space of the home or the body, to control over information in different contexts. In the Information Age and now in the realm of Big Data, it often concerns personally identifiable information or sensitive information.

Aside from these legal contexts, I think the concept of privacy makes more sense when we apply it to relationships among humans rather than as a description of the concerns that surface in sociotechnical systems.

Julia Angwin agrees that the term “privacy” isn’t cutting it anymore. From The Wall Street Journal’s What They Know series and now at ProPublica, she has investigated the business and technology of data and the Internet.

Article continues:

http://america.aljazeera.com/articles/2014/11/4/data-privacy.html