Fraud is estimated to cost the insurance industry over £1.5 billion a year. The fraud itself can range from the exaggeration of a genuine claim to the falsifying of all or part of a claim, or the deliberate falsification or omission of information in order to obtain less expensive cover. The insurance industry invests heavily in identifying and preventing fraud, given the cost to insurers and to the honest policyholders who find their premiums increased due to this activity. One of the growing areas for counter-fraud investment is artificial intelligence (AI), but can AI really prevent insurance fraud?
Applying for insurance
When a customer submits their application for insurance, there is an expectation that the potential policyholder provides honest and truthful information. However, some applicants choose to falsify information to manipulate the quote they receive.
To prevent this, insurers could use AI to analyse an applicant’s social media profiles and activity for confirmation that the information provided is not fraudulent. For example, in life insurance policies, social media pictures and posts may confirm whether an applicant is a smoker, is highly active, drinks a lot or is prone to taking risks. Similarly, social media may be able to indicate whether “fronting” (where a high-risk driver is added to a policy as a named driver when they are in fact the main driver) is present in car insurance applications. This could be achieved by analysing posts to see if the named driver indicates that the car is solely used by them, or by assessing whether the various drivers on the policy live in a situation that would permit the declared sharing of the car.
However, convincing social media sites to allow insurers access to their data may prove difficult. Last year, Admiral Insurance attempted to launch an AI which would analyse a first-time driver’s Facebook page to determine their level of risk; had the AI deemed the driver “low risk”, they would have benefitted from a discount on the insurance. The project was suspended when Facebook indicated that this technology would breach its privacy rules.
Even if AI could access all the data contained within someone’s social media, it does not prevent people from falsifying information; it merely places another hurdle in the fraudster’s way. When the Admiral scheme was discussed, privacy groups were concerned that people would be encouraged to censor their online presence. It was thought that some may even present false pictures on social media. In theory, fraudsters could simply adjust what they upload and post on social media to convince the AI that the information they have provided in their insurance application is accurate.
The idea of “beating” AI in this way extends beyond social media. For example, life insurance policyholders who monitor their activity levels via a Fitbit could allow another, more active, person to wear it, so as to increase the discount they receive on their policy. Drivers benefiting from reductions via black box technology in their cars could allow other, more experienced drivers to drive their vehicles.
Insurers can also use AI to detect patterns of insurance fraud within claims. AI is able to spot these patterns using self-educating software that processes “big data” (extremely large amounts of data from varying sources that can be “mined” to provide information on patterns and trends) to flag claims considered to warrant further investigation.
For example, it can help to identify potential large-scale organised insurance fraud (such as “crash for cash”) and duplicate claims by sensing patterns between various claims. AI’s high sophistication level means that it will not do this just by considering the policyholder’s or third parties’ names and addresses (as these can change), but by assessing, for instance, whether similar incident circumstances are occurring in the same town, involving a particular type of car and comparable alleged injuries. The more claims AI reviews, the better it becomes at spotting those that are fraudulent.
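To illustrate the principle, the following is a minimal sketch of flagging claims that share incident circumstances rather than matching on names or addresses. The claim records, field names and threshold here are illustrative assumptions, not a real insurer's data model; production systems would apply statistical or machine-learning models to far larger datasets.

```python
from collections import defaultdict

# Hypothetical claim records; the field names are illustrative assumptions.
claims = [
    {"id": 1, "town": "Leeds", "car": "hatchback", "injury": "whiplash"},
    {"id": 2, "town": "Leeds", "car": "hatchback", "injury": "whiplash"},
    {"id": 3, "town": "York", "car": "saloon", "injury": "fracture"},
    {"id": 4, "town": "Leeds", "car": "hatchback", "injury": "whiplash"},
]

def flag_similar_claims(claims, threshold=3):
    """Group claims by incident circumstances (not names or addresses,
    which fraudsters can change) and flag any group large enough to
    suggest organised activity such as 'crash for cash'."""
    groups = defaultdict(list)
    for claim in claims:
        key = (claim["town"], claim["car"], claim["injury"])
        groups[key].append(claim["id"])
    return {key: ids for key, ids in groups.items() if len(ids) >= threshold}

print(flag_similar_claims(claims))
# {('Leeds', 'hatchback', 'whiplash'): [1, 2, 4]}
```

Claims 1, 2 and 4 share a town, car type and alleged injury, so they are flagged as a group warranting further investigation; a real system would weigh many more features and learn its thresholds from the data.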
AI can also assist insurers with individual claims by querying the alleged events of an incident. For example, if a driver involved in a motor accident indicates in their claim that it was raining at the time of the incident, AI can check weather reports to confirm if this was the case. If a policyholder’s alleged circumstances are disproven by the AI’s searches, this indicates that the claim could be fraudulent; the system can then flag that it requires further investigation.
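The cross-checking step above can be sketched as follows. The weather records, claim fields and return convention here are all hypothetical assumptions for illustration; a real system would query an external weather-data provider rather than a hard-coded dictionary.

```python
# Hypothetical weather records keyed by (location, date); in practice
# these would come from an external weather-data service.
weather_records = {
    ("Manchester", "2017-03-01"): "rain",
    ("Manchester", "2017-03-02"): "clear",
}

def check_claim_weather(claim, records):
    """Compare the weather a claimant alleges against recorded conditions.
    Returns True if they match, False if they conflict (flag for further
    investigation), and None if no record is available."""
    recorded = records.get((claim["location"], claim["date"]))
    if recorded is None:
        return None
    return recorded == claim["claimed_weather"]

claim = {"location": "Manchester", "date": "2017-03-02",
         "claimed_weather": "rain"}
print(check_claim_weather(claim, weather_records))
# False -> claimed rain, but records show clear skies: flag for review
```

A conflicting result does not prove fraud on its own; it simply routes the claim to a human investigator, which is how such flags are described in the paragraph above.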
As the AI involved in reviewing fraud in claims relies heavily on “big data” from a variety of sources, it is harder for fraudsters to manipulate this information to fit their requirements. AI’s assistance in preventing insurance fraud will only continue to grow, as more and more data is collected and the software increases its ability to spot fraudulent activity through self-education.
Those who wish to defraud insurance companies currently do so by finding ways to “beat” the system. For some uses of AI, fraudsters can simply modify their techniques to “beat” the AI as well. In these circumstances, whilst AI creates an extra barrier to prevent and deter fraud, it does not eradicate the ability to commit insurance fraud. With other uses of AI, however, the software’s reliance on “big data” erects far higher barriers, and it can therefore provide more preventative assistance. As AI continues to develop, this assistance will become of greater use to the insurance industry in its fight against fraud.