
AI-powered investments: Who (if anyone) is liable when it goes wrong? Tyndaris v VWM

Who (if anyone) is liable when an artificial intelligence (AI)-powered trading / investment system causes substantial losses for an investor? The English courts are currently considering this question for the first time in the case of Tyndaris v VWM, which is due for trial next year.

In this blog, we explain the background to this ongoing case, including the key facts and issues in dispute. We also highlight the issues that investors and investment managers can expect to face, for example:

  • Is the AI system fit for purpose?
  • Can the investment manager explain how the system works?
  • How is the system marketed to customers and how is this reflected in the contract?
  • Has sufficient testing been undertaken?
  • Does the system comply with emerging ethical best practices and regulatory requirements?

Background

In 2017, the claimant, Tyndaris SAM (Tyndaris), a Monaco-based investment manager, signed an agreement with the defendant, MMWWVWM Limited (VWM), under which Tyndaris agreed to manage an account for VWM using an AI-powered system to make investment decisions (the managed account). VWM wanted a fund that would trade with no human intervention so as to remove any emotion and bias.

Investment decisions were to be based solely on trading signals created by an AI system run on a supercomputer, said to be capable of applying machine learning to real-time news, social media data and other sources, to predict sentiment in the financial markets (the K1 supercomputer). Algorithmically-determined stop losses and trailing profit stops would also be set by the K1 supercomputer to protect the risk-adjusted return of the strategy. Tyndaris stated that the K1 supercomputer had been subject to extensive backtesting and “live testing”.
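For readers unfamiliar with the terminology, the sketch below illustrates how a stop loss and a trailing profit stop might operate on a single long position. It is a deliberately simplified, hypothetical example; the thresholds and structure are our assumptions, not the K1 supercomputer’s actual logic.

```python
# Hypothetical illustration of a stop loss and a trailing profit stop
# on a long position. A simplified sketch, not the K1 logic.

def should_exit(entry_price: float, peak_price: float, current_price: float,
                stop_loss_pct: float = 0.02, trailing_stop_pct: float = 0.01) -> bool:
    """Return True if the position should be closed.

    Stop loss: exit if the price falls a fixed percentage below entry,
    capping the downside. Trailing profit stop: exit if the price falls
    a fixed percentage below the highest price seen since entry,
    locking in gains already made.
    """
    if current_price <= entry_price * (1 - stop_loss_pct):
        return True  # hard stop loss hit
    if current_price <= peak_price * (1 - trailing_stop_pct):
        return True  # price has retreated too far from its peak
    return False

# Entered at 100, price peaked at 110, now 108.5: more than 1% below the
# peak, so the trailing stop fires even though the trade is in profit.
print(should_exit(entry_price=100.0, peak_price=110.0, current_price=108.5))  # True
```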

Live trading started in December 2017. VWM’s notional investment amount reached US $2.5 billion at its peak. VWM quickly suffered losses (approximately US $22 million overall, including a loss of US $20 million on 14 February 2018). As a result, VWM wrote to Tyndaris demanding that trading be suspended until further notice.

In response, Tyndaris claimed approximately US $3 million from VWM in unpaid fees, eventually commencing proceedings in the English High Court. VWM counterclaimed, seeking to recover its losses on the basis that it had invested in the managed account in reliance on misrepresentations by Tyndaris regarding the capabilities of the K1 supercomputer.

Key issues

The litigation is ongoing. The parties’ pleadings and legal submissions (which we have reviewed) reveal the following issues:

  • How did the K1 supercomputer operate and what did Tyndaris say about how it would operate?
  • Did Tyndaris have sufficient expertise to operate the K1 supercomputer as marketed?
  • What was the nature of the testing that Tyndaris carried out on the K1 supercomputer before marketing it?
  • Did Tyndaris act as a “prudent professional discretionary manager” when using the K1 supercomputer?
  • What level of human intervention was appropriate when operating the K1 supercomputer?

Unless the parties settle, the High Court will hear the dispute in mid-2020.

Practical implications

Whilst the outcome of the dispute will principally depend on the facts leading up to the transaction and the relevant contract terms, the court’s judgment may include wider comments on the use of AI systems by funds or investment managers.

The use of AI in this context raises novel questions about the appropriate allocation of risk and liability, particularly given that AI systems necessarily act autonomously and will often make unforeseeable decisions. It may not arise in this case, but parties may in future argue that the independent decisions of an AI system break the chain of causation, meaning that the defendant is not liable for any harm caused. In other words: “the computer did it”. However, in the short term we think this is unlikely to impress a court.

Similar issues are beginning to arise in multiple jurisdictions. In the recent case B2C2 Ltd v Quoine Pte Ltd, the Singapore International Commercial Court considered how the law of unilateral mistake should apply when the entity said to have made the mistake was an algorithmic trading system. The judge held that because the system in question was “largely deterministic”, “regard should be had to the state of mind of the programmer of the software of that program at the time the relevant part of the program was written.” This leaves open the possibility of a different outcome for a system which operates autonomously of its programmers (that is, in a non-deterministic manner), as modern machine learning is capable of doing.
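The court’s distinction can be made concrete in code. In the hypothetical sketch below, the first rule’s behaviour was fixed entirely by its programmer when it was written, so the programmer’s state of mind maps directly onto the rule; the second depends on whatever data the system was trained on, so identical source code can behave differently, and the programmer’s intent reveals much less about any individual decision. Names, data and thresholds are illustrative only.

```python
# Hypothetical contrast between a deterministic trading rule and a
# learned one. Names, data and thresholds are illustrative only.

def deterministic_signal(price: float, moving_average: float) -> str:
    # Every outcome was fixed by the programmer when the rule was
    # written: the rule's "intent" can be read off the source code.
    return "buy" if price < moving_average else "sell"

class LearnedSignal:
    """A toy stand-in for a machine learning model.

    The decision boundary is not hand-coded; it is fitted from training
    data, so the same source code can behave differently depending on
    what the system was exposed to.
    """
    def __init__(self) -> None:
        self.threshold = 0.0  # learned, not hand-coded

    def fit(self, sentiment_scores: list[float], outcomes: list[int]) -> None:
        # Crude "learning": adopt the mean sentiment score of the
        # historically profitable trades as the buy threshold.
        profitable = [s for s, o in zip(sentiment_scores, outcomes) if o == 1]
        self.threshold = sum(profitable) / len(profitable)

    def predict(self, sentiment_score: float) -> str:
        return "buy" if sentiment_score > self.threshold else "sell"
```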

In general, we suggest that the topics below are crucial in understanding the legal issues that can arise (including the question of who, if anyone, is liable) when an investor suffers losses in similar circumstances to the Tyndaris case.

Fitness for purpose of the AI system

  • Given that use of the AI system will (subject to any future regulation) generally be governed by a contract between private parties, the system will need to perform to the standard that has been contractually bargained for.
  • However, a fund / investment manager is still likely to be subject to an express or implied duty to undertake its services with reasonable care and skill. Although the fund / investment manager may not have developed the system (for example, it may have been licensed or purchased from a third party), from both a legal and a commercial perspective, it could still be held accountable by its customers for any issues.
  • Accordingly, there is a threshold question: is the AI system capable of doing what it is intended to do? The fund / investment manager will need to be comfortable that, fundamentally, the AI system is able to make trades in a sufficiently sophisticated manner such that it does not put the fund / investment manager in breach of its principal obligations.

Marketing the AI system

  • Whilst funds / investment managers will inevitably seek to distinguish their services and technology from those of their competitors, it is vital that the system is explained accurately and that its capabilities are not over-promised, in order to mitigate the legal risks arising from these communications.
  • It would be prudent for investment managers to produce a document clearly explaining the AI system, including the nature of the AI model, the data, any testing that has been undertaken and the level of human oversight. This could be used as the principal marketing material in order to mitigate the risk of a misrepresentation claim, as in Tyndaris. This document should be produced with both technical and legal input.

Testing and accuracy of the AI system

  • The explanatory document described above should also include details of the AI system’s testing (which should emulate live trading conditions as closely as possible).
  • The crucial difference between AI systems and traditional computer programming is that not all of an AI system’s operations are pre-determined by programmers. This means that testing may need to be more extensive as compared with other algorithmic-trading platforms. Machine learning systems operate differently depending on the data to which they are exposed, so ensuring that a sufficiently rich and realistic data set is used in testing will be key.
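As a concrete illustration of testing under live-like conditions, the sketch below shows a simple walk-forward backtest, in which the model is only ever given data that would have been available at the time of each decision. The strategy and function names are hypothetical; a real test harness would also need to model transaction costs, slippage and data latency.

```python
# Hypothetical walk-forward backtest: at each step the model sees only
# historical data, mimicking the information available in live trading.

from statistics import mean

def fit_and_predict(history: list[float]) -> str:
    # Toy "model": go long if the latest price is above the 20-period
    # trailing mean, otherwise go short.
    return "long" if history[-1] > mean(history[-20:]) else "short"

def backtest(prices: list[float], warm_up: int = 50) -> list[float]:
    returns = []
    for t in range(warm_up, len(prices) - 1):
        history = prices[:t + 1]   # only data known at time t
        signal = fit_and_predict(history)
        move = (prices[t + 1] - prices[t]) / prices[t]
        returns.append(move if signal == "long" else -move)
    return returns
```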

Insurance

  • We predict that insurance of AI systems is likely to become increasingly sought after, as firms seek to pass on potential losses to the insurance market, and underwriters seek to profit from writing valuable policies.
  • Both sides of the market stand to benefit from this practice. In the short term, we envisage that standards for AI testing and reliability may well be driven by the contractual terms negotiated or imposed by insurers as “conditions of cover”, much as insurers already require minimum standards when writing policies in other industries.

Regulatory scrutiny on the use of AI in the financial services industry and ethical best practices

  • In addition to the commercial and legal risks outlined above, there is also increasing regulatory scrutiny of AI systems, particularly in the financial services industry. Recent policy announcements of the Financial Conduct Authority and Prudential Regulation Authority have emphasised the need for greater governance of AI.
  • Explainability is an increasingly important concept in relation to AI. It can be a regulatory requirement (for example, under the GDPR, in relation to personal data) and it is widely acknowledged to be an important requirement in the ethical use of AI, including by the European Commission’s recent Ethics Guidelines for Trustworthy AI and the CBI’s recent report AI: Ethics into Practice – steps to navigate emerging ethical issues.
  • Explainability can cause difficulty because some AI systems are said to operate in a “black box”, such that even those responsible for developing or managing the systems cannot explain how the AI reaches certain decisions.
  • Understanding the way that an AI system operates does not mean that the technology needs to be disclosed or made open-source. However, those responsible for operating the technology should themselves have a good idea about how it operates, and should be able to explain at least in high-level terms how it operates (one common technique for probing such systems is sketched after this list).
  • In the context of financial services, the FCA has emphasised that companies should take responsibility for their use of AI, which includes:
    • transparency in their use of AI, so that customers “know when and how machines are involved in decision-making”;
    • having “sufficient interpretability” of the company’s AI systems, so that the company is able to understand and explain how it is using AI; and
    • board level responsibility in understanding how their company is using AI. Board members are unlikely to be able to say “the machine knows best” when asked about their company’s use of AI by a regulator.
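One widely used, model-agnostic way of working towards the “sufficient interpretability” described above is permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy degrades, revealing which inputs the system actually relies on. The sketch below is a minimal, hypothetical illustration; the model interface and data are our assumptions, and real systems would call for more sophisticated techniques.

```python
# Minimal, hypothetical permutation-importance sketch. Features whose
# shuffling degrades accuracy the most are the ones the model relies
# on: a first step towards explaining how the system reaches decisions.

import random

def accuracy(model, X: list[list[float]], y: list[int]) -> float:
    # Assumes a hypothetical model object exposing .predict(row).
    predictions = [model.predict(row) for row in X]
    return sum(p == t for p, t in zip(predictions, y)) / len(y)

def permutation_importance(model, X: list[list[float]], y: list[int]) -> list[float]:
    baseline = accuracy(model, X, y)
    importances = []
    for j in range(len(X[0])):              # one feature at a time
        shuffled = [row[:] for row in X]    # copy the data set
        column = [row[j] for row in shuffled]
        random.shuffle(column)              # break the feature's link to the outcome
        for row, value in zip(shuffled, column):
            row[j] = value
        importances.append(baseline - accuracy(model, shuffled, y))
    return importances
```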

With thanks to William Dunning at Simmons & Simmons for contributing to this blog.
