Meteoric Rise in Fraud Attacks by Human Farms


The end of last year saw a considerable spike in the use of human farms by cybercriminals to launch low-cost fraud attacks around the globe, a new report from fraud risk assessment firm Arkose Labs has revealed.

Titled ‘Q1 2020 Fraud and Abuse Report’, the report was released by Arkose Labs in February and draws on real-life user sessions and attack patterns from October to December 2019 to assess the risks of online fraud.

By analyzing more than 1.3 billion individual transactions, Arkose Labs discovered that a massive spike in online fraud attacks had occurred during this period—up a staggering 90% in only six months. It also discovered that these online fraud attacks tended to originate from human farms located primarily in developing nations.

The transactions analyzed by Arkose Labs primarily comprised user account registrations, account logins, and online payments. The research included cross-sector data, ranging from financial services, travel, and e-commerce to social media, gaming, and entertainment.

The shifting nature of fraud attacks

While the extent to which human farms are responsible for online fraud was the most noteworthy finding of the report, Arkose Labs also pointed out that a large number of the attacks still originate from automated bots rather than directly from human effort.

Another noteworthy finding of the report is the shifting geography of the bases of operations used by cybercriminals. According to Arkose Labs, the number of fraud attacks launched by human farms grew in Venezuela, Vietnam, Thailand, and India. However, the locations that saw the biggest increases since earlier in 2019 were the Philippines, Russia, and Ukraine, where attack volumes almost tripled during that period.

These human farms are essentially sweatshop-like organizations, overseen by fraudsters, whose primary objective is to launch fraud attacks across the globe. They tend to be highly effective due to the low-cost nature of their operations.

The newest findings come after Arkose Labs uncovered in 2019 that automated fraud attacks had been growing at a rate of 25%, and that they were becoming increasingly complex and better able to evade detection. One example of this growing complexity is a new multi-step technique in which malicious bots mimic trusted customer behavior to avoid detection.

According to Kevin Gosschalk, the CEO of Arkose Labs, “notable shifts are occurring in today’s threat landscape.” He adds that fraudsters are “no longer looking to make a quick buck and instead opting to play the long game, implementing multi-step attacks that don’t initially reveal their fraudulent intent.”

Social media and gaming as the targets of human farms

Fraud attacks have not targeted all online industries equally. The twin giants of online gaming and social media have proven ideal targets for attacks by human farms.

Previous research by Arkose Labs in 2019, for example, revealed a considerable increase in the volume of attacks against both social media account registrations and logins. According to the researchers, more than 40% of login attempts and 20% of new account registrations were found to be fraudulent, making social media one of the most heavily targeted industries for online fraud.

The ratio of human to automated attacks also made a considerable jump during this period, with more than 50% of social media login attacks being directed by humans.

In the world of online gaming, the picture is scarcely more optimistic. The industry has long been the victim of highly sophisticated attack patterns, more so than most online industries. Not only are the attacks against the online gaming industry complex, they are also becoming more frequent. The last quarter of 2019, for example, saw a 25% increase in the number of attacks, many of which were launched by human farms.

Attackers commonly use gaming applications to exploit stolen payment methods or to fraudulently obtain in-game assets. The most common technique is to use automation to surreptitiously create fake accounts in a gaming application of choice. The attackers then build up in-game assets on these accounts and sell the accounts on at a later stage.

Aside from social media and gaming, the worsening rate of sweatshop-driven attacks is doubtless affecting the online industry as a whole. With fraudulent activity rising so rapidly in such a short space of time, the scale of the response must rise to match the threat posed by online fraud.

“To identify the subtle, tell-tale signs that predict downstream fraud, organizations must prioritize in-depth profiling of activity across all customer touchpoints,” Gosschalk believes. “By combining digital intelligence with targeted friction, large-scale attacks will quickly become unsustainable for fraudsters.”

Read the original article from CPO Magazine here.
