When it Comes to Robo-Advisors...Who's REALLY Pulling the Strings?

June 06, 2023

In our last piece, I hope you started questioning why you allow others to influence major outcomes in your own life. Is it because you trust them? Is it because they’re the most knowledgeable in a certain field? You wouldn’t let an unqualified doctor perform surgery on you.

So, should you allow an algorithm that doesn’t know you to take control of your finances?

The basic premise of robo-advisor services is that they are, at minimum, a more efficient way to invest. The greater efficiency is said to come from cutting out a live person while offering automatic rebalancing, tax-loss harvesting, and the like. If we’re talking about investing, though, we’re obviously talking about investment selection as well, and that raises a natural question: how are the investment decisions of a robo-advisor made?

Recently, you might have heard quite a bit about artificial intelligence (AI). In fact, our Vice President, Kamala Harris, was dubbed the ‘AI czar’ at the beginning of May 2023. The significance of AI here is that it is what drives the decisions of robo-advisors. If AI is driving the decisions, however, that means a live person is not. Well, sort of.

For anyone who is unfamiliar, the most fundamental driving force behind AI is an algorithm. An algorithm is really nothing more than a decision tree that uses if-then logic to proceed through a series of decisions. A good example of if-then logic is the kind you apply when approaching an intersection: if nobody pulls out in front of me, then I will proceed through the intersection.
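
To make that concrete, here is a minimal sketch of that intersection logic in code (Python, purely illustrative; the function name and the actions are hypothetical stand-ins):

    # A minimal sketch of if-then logic at an intersection.
    # Purely illustrative; the names and actions are hypothetical.
    def approach_intersection(someone_pulls_out: bool) -> str:
        # The "if": a condition that has been observed and evaluated.
        if not someone_pulls_out:
            # The "then": the decision that follows from the condition.
            return "proceed through the intersection"
        # The only alternative this logic was given.
        return "brake and wait"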

The fact that if-then logic is being applied means two things. First, it means that conditions are being observed and evaluated—that’s the if part. Second, it means a decision is being made—that’s the then part—and if a decision is at hand, then that means there are options that must be evaluated.

Here’s the key point, though: if AI is taking stock of conditions, then it must be told the conditions for which it needs to account. Similarly, if certain conditions trigger certain decisions, then it must be told what options are applicable to the decision at hand.
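
To see what that means in practice, consider a toy rebalancing rule of the sort a robo-advisor might apply. This is a hypothetical sketch, not any real product’s logic; the point is that every condition it watches and every option it can choose from was typed in by a person:

    # A toy robo-advisor rule (all values are hypothetical).
    # A human decided which condition to watch (portfolio drift) and
    # which options exist (rebalance or hold); the algorithm merely
    # walks the decision tree it was handed.
    TARGET_STOCK_WEIGHT = 0.60  # target allocation, chosen by a person
    DRIFT_THRESHOLD = 0.05      # trigger level, chosen by a person

    def decide(current_stock_weight: float) -> str:
        drift = abs(current_stock_weight - TARGET_STOCK_WEIGHT)
        if drift > DRIFT_THRESHOLD:       # the "if" a person specified
            return "rebalance to target"  # an option a person provided
        return "hold"                     # the only other option it was given

The algorithm never decides that drift is worth watching or that rebalancing is the right menu of responses; those choices belong to whoever wrote it.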

Why does that matter? Well, if a live person is not informing the algorithm of those things, then who, or more accurately what, is? (Hint: it is, in fact, a live person.)

The fact that we’re relying upon a live person means that the person behind the algorithm matters. This would seem to bring us right back to the idea of choosing to trust someone with something, either because of the specialized skills or knowledge they possess or because of how trustworthy we have judged them to be in general.

After conducting what was admittedly a relatively quick search, I couldn’t find anyone holding themselves out as an expert investment-AI-builder-person-thing.

This would seem to create a problem: if we can’t identify who is building the AI, then how can we evaluate whether they are worthy of our trust? And if we can’t evaluate their trustworthiness yet choose to take advantage of whatever service is being offered anyway, then what are we basing that decision on?

The picture this leaves of robo-advisors is that they put an extra layer between you and the real “advisor,” assuming there is one who is ultimately calling the shots. In other words, they obscure our ability to conduct a proper evaluation of what is being offered and by whom.

To cement the potential gravity of this issue, allow me to leave you with another hypothetical.

Let’s say that healthcare goes 100% in the AI direction, so the way you get diagnosed and prescribed a solution is to sit down in front of a computer and answer a series of questions. Let’s say you’ve been experiencing a little pain in your leg, but nothing too substantial, and certainly nothing that has caused a change in your lifestyle. You decide to seek care, and in the course of answering questions on the computer, the AI throws in a couple that seem a little odd, as if it were trying to suggest that the problem is much more significant than something a simple remedy could resolve. Finally, let’s say that the culmination of this process is a diagnosis of gangrene, with the prescribed solution being an above-the-knee amputation of your leg.

How comfortable are you going to feel acting upon what a computer screen told you?

I can understand if some might dismiss that specific scenario and say, “Well, Matt, that’s extreme.” Fair enough, but what if it turned out to be right?

In other words, if the prescription was for some antibiotics, would we even bat an eye? I mean, all we’re being asked to do is take a pill. It would seem that it’s only when we start talking about hacking off body parts that the ground shifts. Who’s to say that an antibiotic is any less extreme, though? In the instance that amputation proves to be necessary, would we not say that the antibiotic prescription was extreme, as in extremely insufficient?

The real problem this highlights is that, in evaluating what is being offered to us, we have nothing to go on. We might point to performance history, but robo-advisors are new enough to the scene that there is little track record to point to; trusting one is akin to taking the financial advice an unlicensed friend might offer. In actuality, it might be worse, because at least we know something about our friend, whereas we are left to assume quite a bit about what is driving the advice a robo-advisor might give.

Having said all of that, the reality is that we probably interact with AI more than we know. What we are not doing, however, is completely turning over our finances or our healthcare to it…yet.