What is the impact of AI & Machine Learning in Finance & Portfolio Management, and how are they giving decision-makers the tools and data they need to make the right decisions?
Technology is a key component of asset management and a vital function across many aspects of the financial investment process, from trading and risk management to operations and client services. It has enabled financial companies to benefit from sub-second reactions and to combine a multitude of data sources when making informed decisions. These companies are now also investing more in AI and ML across the customer lifecycle, beyond the realm of algorithmic trading into customer service and risk management.
ML can help financial companies make better trading decisions by reducing the negative effects of human biases on the investment process, ultimately helping to reduce market volatility. The question is, how does ML help overcome these human biases, and are there negative effects of its own? Does this mean ML has the potential to replace human workers in this industry? After all, ML can analyse a wealth of data better and faster than humans can – what kind of impact is this likely to have on the financial sector?
Biased Decisions: Machine Learning VS Human Behaviour
As humans, we are prone to making irrational decisions. It’s in our nature, and in recent years behavioural economists and cognitive psychologists have shed light on the extensive range of irrational decisions most humans make (HBR). Among them are loss aversion and confirmation bias.
Confirmation bias is the tendency to search for, interpret, and remember information in a way that confirms our existing preconceptions, which may not be based on factual evidence (Psychology Today). Its negative consequence is that we miss findings, or ignore evidence, that could otherwise change our view.
This can be a major problem in trading: it leads traders to become overconfident in their positions and to stay in trades well after they should be abandoned, because they focus on the trading indicators and market behaviour that support staying in. Traders who cannot combat confirmation bias will end up taking massive trading losses (Investopedia). One safeguard is to automate the exit with an advance order to sell an asset when it reaches a particular price point (a stop loss).
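A stop loss of this kind can be sketched as a simple rule that triggers a sale once the price falls a fixed percentage below the entry price. The function name and the 5% threshold below are illustrative assumptions, not taken from any trading platform:

```python
def check_stop_loss(entry_price: float, current_price: float,
                    stop_pct: float = 0.05) -> bool:
    """Return True if the position should be sold.

    A stop loss exits the trade automatically once the price falls
    a fixed percentage below the entry price, removing the temptation
    to 'wait for a turnaround' that confirmation bias encourages.
    """
    stop_price = entry_price * (1 - stop_pct)
    return current_price <= stop_price

# Example: bought at 100 with a 5% stop -> sell once price is 95 or below
print(check_stop_loss(100.0, 94.0))  # True: stop triggered
print(check_stop_loss(100.0, 97.0))  # False: position held
```

Because the rule is fixed in advance, it executes regardless of how attached the trader has become to the position.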
Loss aversion is the tendency to prefer avoiding losses over acquiring gains. Its negative effect can manifest as resistance to change, whereby we focus more on what we might lose than on what we might gain. This is detrimental to the trading process. Say an investor buys stocks and the price goes up: loss aversion here is the fearful thinking that the stock may fall as fast as it rose, which makes the investor sell too soon and miss the potential profits of holding the stock for longer. Exiting early to protect gains severely limits upside potential (Synapsetrading).
On the other hand, loss aversion can also manifest as holding on to a stock that is below the price initially paid, strictly to avoid realising a loss. This can prevent a trader from cutting a losing trade even when there is no prospect of a turnaround, leading to further losses (Synapsetrading).
How can Machine Learning help in this case?
By implementing ML in the investment process, asset managers can now mitigate systematic biases by stitching together a broad set of data sources about an individual’s or team’s trading history, communication patterns, psychometric attributes, and time-management practices. Together these allow firms to identify drivers of performance and behavioural root causes at a more granular and individualised level than was previously possible (McKinsey) – see the summary graphic below from McKinsey.
ML can also be employed to interrogate the historical trading records of portfolio managers and analyst teams, searching for patterns that manifest these biases. Individuals can then double-check investment decisions that fit these patterns, using ML to check for bias at every level of the investment process and ensure best practice is met (HBR).
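As a minimal sketch of this kind of pattern search, the function below flags a possible loss-aversion (disposition) pattern in a trade history: losing trades held markedly longer than winning ones. The record format and the 1.5x ratio threshold are illustrative assumptions, not an industry standard:

```python
from statistics import mean

def disposition_flag(trades: list) -> bool:
    """Flag a possible loss-aversion ('disposition') pattern:
    losing trades held markedly longer than winning trades.

    Each trade is a dict with 'pnl' (realised profit/loss) and
    'days_held'. The 1.5x ratio threshold is an assumption chosen
    for illustration; a real system would calibrate it statistically.
    """
    losers = [t["days_held"] for t in trades if t["pnl"] < 0]
    winners = [t["days_held"] for t in trades if t["pnl"] >= 0]
    if not losers or not winners:
        return False  # not enough evidence either way
    return mean(losers) > 1.5 * mean(winners)

history = [
    {"pnl": 120, "days_held": 5},
    {"pnl": -80, "days_held": 30},
    {"pnl": 60, "days_held": 4},
    {"pnl": -40, "days_held": 25},
]
# Losers held ~27.5 days on average vs ~4.5 for winners -> flagged
print(disposition_flag(history))  # True
```

A flag like this is a prompt for the manager to review those decisions, not a verdict in itself.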
Nevertheless, there are two sides to the coin: ML algorithms may themselves exhibit significant biases, derived from the data used in training, from deficiencies in the algorithms themselves, or from the people who built those algorithms in the first place (HBR). Because the features and data used to train the algorithms are designed and gathered by humans, individual bias can get in the way of data preparation. A model may then fail to capture essential regularities in the data because the features and datasets used to train it were insufficient.
Moreover, research has uncovered that business and moral decisions are being made, unintentionally, on the basis of deeply ingrained biases obscured within ML models. Unlike humans, algorithms are ill-equipped to consciously counteract learned biases and cannot reverse them once decisions are made, meaning that a biased ML model can actually perpetuate bias in a self-fulfilling way (IBM). It is therefore crucial to detect bias in these models and eliminate it as far as possible.
The introduction of bias isn’t always obvious during a model’s construction, because the downstream impacts of data and design choices may not become apparent until much later, which makes retroactively identifying where the bias came from much harder (MIT). To minimise bias, we must be able to define and measure fairness, but given the multitude of definitions of fairness, where do we even start? It is also unclear what the absence of bias should look like – a question that is not unique to computer science, having a long history of debate in philosophy, social science, and law (McKinsey). This leads us to an important question:
Can ML replace humans in this case?
Although ML has huge potential to increase investors’ ability to find outperforming stocks, humans will still be needed to develop the right algorithms and exercise fair investment judgement (FT). ML comes with limitations: it may carry biases derived from its training data or statistical quirks in its methodologies, and to detect and limit these, companies need talented, trained data scientists to ensure that ML-supported decision-making is fair (McKinsey).
There are also cases where ML draws correlations between data points without any understanding of their underlying cause, meaning some correlations will be spurious (FT). Here a trained human is needed to judge whether a correlation is valid. Would you be willing to invest in ice cream stocks if the death rate suddenly shot up? Or would you prefer to understand the real cause? (PS)
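The ice-cream example can be demonstrated with synthetic data: a hidden confounder (temperature) drives both ice-cream sales and, say, drowning incidents, so the two correlate strongly with no causal link. The coefficients and noise levels below are arbitrary assumptions chosen only to make the effect visible:

```python
import random
random.seed(0)

# Synthetic data: temperature drives BOTH series, so they correlate
# with each other despite neither causing the other.
temps = [random.uniform(0, 35) for _ in range(200)]
ice_cream = [2.0 * t + random.gauss(0, 5) for t in temps]
drownings = [0.3 * t + random.gauss(0, 2) for t in temps]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Strong positive correlation, but buying ice cream does not cause
# drowning: temperature is the confounder a human analyst would spot.
print(round(pearson(ice_cream, drownings), 2))
```

A model trained naively on these two series would happily "predict" one from the other; only domain knowledge reveals the shared cause.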
According to an ML expert at a large US investment manager, his team spends days evaluating whether a pattern detected by ML meets four tests: is it sensible, predictive, consistent, and additive? Even when ML finds patterns that pass all four tests, these aren’t always easily convertible into profitable investment decisions and still ultimately require a professional’s judgement (HBR), illustrating the necessity of human supervision of these systems.
You can feed many ML models with inputs and observe the outputs, but how they map those inputs to outputs is concealed within the trained model. Explainable models can help bring to light how ML models reach their conclusions, but until these are commonplace, having a human in the loop is the alternative. Traditional ML models are therefore better, for the time being, when coupled with humans who monitor the model’s results and can observe when algorithmic or dataset biases come into play (IBM).
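One simple check a human in the loop might run is to compare a model's positive-prediction rates across subgroups, one of the many possible fairness definitions mentioned above (demographic parity). The function and data here are a hypothetical sketch, not a complete fairness audit:

```python
def parity_gap(predictions: list, groups: list) -> float:
    """Gap in positive-prediction rates between subgroups
    (a 'demographic parity' style check).

    'predictions' are the model's 0/1 outputs; 'groups' labels the
    subgroup each row belongs to. A large gap is a cue for the human
    reviewer to investigate, not proof of bias on its own.
    """
    rates = {}
    for g in set(groups):
        preds = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(preds) / len(preds)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

preds = [1, 0, 1, 1, 0, 0, 1, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
# Group 'a' gets a positive outcome 75% of the time, group 'b' 25%
print(parity_gap(preds, grps))  # 0.5
```

Crucially, choosing *which* fairness metric to monitor is itself one of the human judgement calls the article describes, since different definitions can conflict.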
The real question is not whether ML will replace humans within the realm of investment, but how ML and asset managers can work together to make better decisions quickly and consistently (McKinsey). Improving investment performance remains a primary goal for asset managers, and it is aided by ML’s ability to quickly analyse pertinent data. Through this automatic analysis, asset managers can reduce management costs by limiting manual data analysis, fundamentally improving the organisation’s processes through the elimination of manual tasks.
This makes it possible for asset managers to uncover new and complex insights and quickly make connections that would be impossible for a human to identify (FinTech Times). For instance, ML can obtain real-time inflation rates using the online prices of millions of items or estimate agricultural yields by analysing satellite images of specific locations, which asset managers can then use to better inform their business decisions and asset investment (FinTech Times).
While AI and ML offer huge advantages to investors in this regard, there is the chance that their decisions are based on correlation rather than causation, as covered earlier in this article. For this reason, ML and AI still require a degree of human oversight to contextualise their findings. AI will most likely not replace humans here, but will instead augment professionals in the asset management sector. Perhaps asset managers will take a leaf out of Kasparov’s book and learn to work with the ML algorithm, rather like Centaur Chess (Wikipedia). In this way, AI is enhancing the work done by humans, allowing asset managers to make better decisions and draw on more reliable data analysis than before.