Services that use algorithms to generate investment advice, deliver it online and charge low fees are pressuring the traditional advisory business.
Ann Gugle, a principal at Alpha Financial Advisors in Charlotte, N.C., says her firm has cut its fee on assets over $5 million to 0.125%.
Automation is threatening one of the most personal businesses in personal finance: advice.
Over the past decade, financial advisers in brokerage houses and independent firms have amassed trillions in assets helping individuals shape investment portfolios and hammer out financial plans. They earn around 1% of these assets in annual fees, a cost advisers say is deserved because they understand clients’ particular situations and can provide assurance when markets fall.
In the latest test of the reach of technology, a new breed of competitors—including Betterment LLC and Wealthfront Inc. but also initiatives from established firms such as Vanguard—is contending that even the most personal financial advice can be delivered online, over the phone or by videoconference, with fees as low as zero. The goal is to provide good-enough quality at a much lower price.
“It’s always been questionable whether or not advisers were earning our money at 1% and up,” said Paul Auslander, director of financial planning at ProVise Management Group in Clearwater, Fla., who says potential clients now compare him with less expensive alternatives. “The spread’s got to narrow.”
The shift has big implications for financial firms that count on advice as a source of stable profits, as well as for rivals trying to build new businesses at lower prices. It also could mean millions in annual savings for consumers and could expand the overall market for advice.
Competitors across the spectrum agree the demand is there. Advice “is big and growing—it’s what clients are looking for,” said Roger Hobby, executive vice president of private wealth management at Fidelity Investments.
The hunger for help marks a shift from the 1990s, when do-it-yourself investing was in vogue. Back then, the adoption of 401(k) plans moved responsibility for investment choices to company employees just as one of the biggest bull markets in history was boosting individuals’ confidence in their investing prowess. Meanwhile, pioneering online brokerage firms made trading inexpensive and convenient.
Lots of algorithms go bad unintentionally. Some of them, however, are made to be criminal. Algorithms are formal rules, usually written in computer code, that make predictions on future events based on historical patterns. To train an algorithm you need to provide historical data as well as a definition of success.
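The two ingredients named here – historical data and a definition of success – can be sketched in a few lines. Everything below (the data, the exam framing, the nearest-centroid rule) is an invented toy, not any real system:

```python
# Historical examples: (hours_of_sleep, cups_of_coffee) -> passed_exam.
# "Passed" is our definition of success; the records are made up.
history = [
    ((8, 1), True),
    ((4, 4), False),
    ((7, 2), True),
    ((3, 5), False),
]

def train(examples):
    """Learn the simplest possible rule: the average feature vector of
    the successful and unsuccessful cases, used as nearest-centroid."""
    wins = [x for x, ok in examples if ok]
    losses = [x for x, ok in examples if not ok]
    centroid = lambda rows: tuple(sum(c) / len(rows) for c in zip(*rows))
    return centroid(wins), centroid(losses)

def predict(model, x):
    """Predict success if x sits closer to the historical successes."""
    win_c, lose_c = model
    dist = lambda a, b: sum((i - j) ** 2 for i, j in zip(a, b))
    return dist(x, win_c) < dist(x, lose_c)

model = train(history)
print(predict(model, (7, 1)))  # a well-rested applicant resembles past successes
```

Changing the definition of success – pass the exam, stay in the job six months, repay the loan – changes what the same data teaches the algorithm, which is exactly why that definition matters.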
We’ve seen finance get taken over by algorithms in the past few decades. Trading algorithms use historical data to predict movements in the market: success for such an algorithm is a correctly predicted move, and the algorithm stays vigilant for patterns that have historically appeared just before that move. Financial risk models likewise use historical market changes to predict cataclysmic events, not for an individual stock but for an entire market. The risk model for mortgage-backed securities was famously bad – intentionally so – and misplaced trust in those models can be blamed for much of the scale and subsequent damage wrought by the 2008 financial crisis.
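The pattern-watching just described can be made concrete with a toy example on invented prices: how often has an up move historically followed two consecutive down moves?

```python
# Invented daily closing prices, purely for illustration.
prices = [100, 99, 97, 101, 100, 98, 103, 102, 101, 104]

# Turn prices into a sequence of up/down moves.
moves = ["down" if b < a else "up" for a, b in zip(prices, prices[1:])]

# Count how often "down, down" was followed by "up" in this history.
follows, total = 0, 0
for i in range(len(moves) - 2):
    if moves[i] == "down" and moves[i + 1] == "down":
        total += 1
        if moves[i + 2] == "up":
            follows += 1

print(f"up after two downs: {follows}/{total}")
```

A real trading system looks for far subtler patterns across far more data, but the logic is the same: a historical frequency becomes a prediction about the next move.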
Since 2008, we’ve heard less from algorithms in finance, and much more from big data algorithms. The target of this new generation of algorithms has shifted from abstract markets to individuals. But the underlying functionality is the same: collect historical data about people – their online behaviour, location, or answers to questionnaires – and use that massive dataset to predict their future purchases, voting behaviour, or work ethic.
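At their simplest, such individual-scoring systems reduce to a weighted sum and a threshold. The sketch below is hypothetical: every feature, weight, and cutoff is invented for illustration, not drawn from any real scoring product.

```python
# Invented weights over invented personal features.
WEIGHTS = {"pages_visited": 0.2, "late_payments": -1.5, "survey_score": 0.8}
THRESHOLD = 2.0  # invented approval cutoff

def score(person: dict) -> float:
    """Weighted sum of a person's features; missing features count as 0."""
    return sum(WEIGHTS[k] * person.get(k, 0) for k in WEIGHTS)

def approved(person: dict) -> bool:
    """The bureaucratic decision: a single number against a cutoff."""
    return score(person) >= THRESHOLD

applicant = {"pages_visited": 12, "late_payments": 1, "survey_score": 2}
print(score(applicant), approved(applicant))
```

The opacity the next paragraph describes comes from the fact that the applicant typically sees neither the features, the weights, nor the threshold.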
The recent proliferation in big data models has gone largely unnoticed by the average person, but it’s safe to say that most important moments where people interact with large bureaucratic systems now involve an algorithm in the form of a scoring system. Getting into college, getting a job, being assessed as a worker, getting a credit card or insurance, voting, and even policing are in many cases done algorithmically. Moreover, the technology introduced into these systematic decisions is largely opaque, even to their creators, and has so far largely escaped meaningful regulation, even when it fails. That makes the question of which of these algorithms are working on our behalf even more important and urgent.
A few years ago, a young man named Kyle Behm took a leave from his studies at Vanderbilt University in Nashville, Tennessee. He was suffering from bipolar disorder and needed time to get treatment. A year and a half later, Kyle was healthy enough to return to his studies at a different university. Around that time, he learned from a friend about a part-time job. It was just a minimum-wage job at a Kroger supermarket, but it seemed like a sure thing. His friend, who was leaving the job, could vouch for him. For a high-achieving student like Kyle, the application looked like a formality.
But Kyle didn’t get called in for an interview. When he inquired, his friend explained that he had been “red-lighted” by the personality test he’d taken when he applied for the job. The test was part of an employee selection program developed by Kronos, a workforce management company based outside Boston. When Kyle told his father, Roland, an attorney, what had happened, his father asked what kind of questions had appeared on the test. Kyle said they were very much like the “five factor model” test he’d been given at the hospital, which grades people on extraversion, agreeableness, conscientiousness, neuroticism, and openness to ideas.
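A five-factor questionnaire is typically scored by summing agreement ratings into the five trait totals, with some items reverse-keyed. This is a toy illustration of that mechanism only – the items, trait mapping, and scale below are invented, not Kronos’s actual test:

```python
TRAITS = ("extraversion", "agreeableness", "conscientiousness",
          "neuroticism", "openness")

# Each invented item feeds one trait; -1 marks a reverse-keyed item.
ITEMS = [
    ("I am the life of the party.",        "extraversion",      +1),
    ("I sympathize with others' feelings.", "agreeableness",     +1),
    ("I leave my belongings around.",       "conscientiousness", -1),
    ("I get stressed out easily.",          "neuroticism",       +1),
    ("I have a vivid imagination.",         "openness",          +1),
]

def grade(answers):
    """answers: agreement ratings 1-5, one per item, in ITEMS order."""
    scores = dict.fromkeys(TRAITS, 0)
    for (_, trait, key), rating in zip(ITEMS, answers):
        # Reverse-keyed items flip the scale: 5 becomes 1, 4 becomes 2, etc.
        scores[trait] += rating if key > 0 else 6 - rating
    return scores

print(grade([5, 4, 2, 1, 3]))
```

An employer’s screening system could then red-light any applicant whose trait totals fall outside a preset band, which is what makes the same cutoff follow a candidate from one company to the next.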
At first, losing one minimum-wage job because of a questionable test didn’t seem like such a big deal. Roland Behm urged his son to apply elsewhere. But Kyle came back each time with the same news. The companies he was applying to were all using the same test, and he wasn’t getting offers.
The page view is a zombie. For years, everyone has been saying it is no longer a meaningful way to measure online popularity, and the publishers who make websites and the advertisers who pay for them swore throughout the year that they’re no longer fooled. The era in which a mere click is the crown jewel of metrics is dead. But someone still needs to shoot this zombie in the head.
“We’ve talked about page views dying for ten years,” says Jason Kint, CEO of Digital Content Next, a digital publishing trade group that represents publishers on the web, including WIRED parent company Condé Nast. “They’re not dead, but they should be.”
Along with its corrupting effects, the page view itself has been corrupted.
The page view, much like the click-through, was once the key way websites understood their audiences. It was the way news organizations figured out who was reading their stories—how many, how often, which, from where—and the way advertisers were able to calculate the value of serving up ads on those sites.
But the page view notoriously spawned that most reviled of Internet aggravations: clickbait. Quality became less important than provocation; the curiosity gap supplanted craft. The page view also drove the primacy of “search engine optimization,” or the technique of selecting keywords in headlines, metadata, and text to push articles higher in Google’s page-ranking algorithms. All of this served an online publishing economy propped up by display ads, which helped cement the assumption that news on the Internet should be free.