What is New Account Origination (NAO) Fraud?
New account fraud occurs when fake accounts are created on a digital platform with fraudulent intent, outside the platform's intended use. This could include creating a fake profile on a dating app with the intention of sending phishing messages; setting up bogus online gaming accounts to accrue in-game assets using bots; or opening financial accounts in other people's names to obtain credit. New account creation powers a wide range of downstream fraud attacks, which is why it is such a prevalent problem for businesses.
How Do Fraudsters Monetize New Account Fraud?
New account registration attacks can be monetized in many ways depending on the industry and account type. Attacks range from those that inflict direct losses on the targeted business, to less direct attacks which are meant to lay the groundwork for downstream fraud.
The potential for direct and indirect losses, along with the implications for the wider digital ecosystem, is why it is imperative to stop account origination fraud at the front door. Ultimately, it is the business and its legitimate customers who suffer.
How Are Account Origination Attacks Carried Out?
The vast majority of new account registration fraud is accomplished using automated scripts or human click farms; sometimes the two are used in tandem. Each suits specific types of fraud attacks.
Automated Bots
Bot services are cheap to acquire and require relatively little technical knowledge to deploy. A basic internet search turns up dozens of bot marketplaces. Bots can create new accounts quickly and at a scale far beyond what any human could manage. These bot-powered accounts are ideal for attacks such as phishing or content scraping - essentially any attack that needs massive scale to be profitable.
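One common first-line defense against this kind of scaled signup is a velocity check: counting registrations from the same source (IP, device fingerprint, etc.) inside a sliding window. The sketch below is a minimal, hypothetical illustration of that idea; the window size, threshold, and `SignupVelocityCheck` name are invented for this example and are not a description of any vendor's product.

```python
from collections import defaultdict, deque

# Illustrative thresholds only - real systems tune these per signal.
WINDOW_SECONDS = 60
MAX_SIGNUPS_PER_WINDOW = 5

class SignupVelocityCheck:
    """Flag signup bursts from a single source key (IP, device fingerprint, ...)."""

    def __init__(self):
        self._events = defaultdict(deque)  # source key -> recent signup timestamps

    def record_signup(self, source_key, timestamp):
        """Record a signup; return True once the source exceeds the rate limit."""
        q = self._events[source_key]
        q.append(timestamp)
        # Drop timestamps that have fallen out of the sliding window.
        while q and timestamp - q[0] > WINDOW_SECONDS:
            q.popleft()
        return len(q) > MAX_SIGNUPS_PER_WINDOW

# A bot creating 10 accounts in 10 seconds from one fingerprint trips the check;
# a human signing up once does not.
checker = SignupVelocityCheck()
flags = [checker.record_signup("device-abc", t) for t in range(10)]
```

Simple velocity checks catch only the crudest bots, which is exactly the limitation the rest of this article discusses: distributed or slowed-down attacks stay under any single-source threshold.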
Human Click Farms
Fraudsters can also hire teams of human workers to create new accounts and perform actions that require more nuance than bots can achieve. Common attacks of this kind include writing fake reviews to inflate a product's or business's ratings, liking videos, or testing stolen credit card credentials.
Activating Dormant Accounts
Sometimes new accounts are created with the intention of lying dormant for weeks or even months, only to be reactivated later all at once to commit a series of attacks. These can include coordinated attacks such as DDoS or disrupting the usability of a website in some manner.
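One way defenders look for this pattern is to watch for many long-dormant accounts logging back in within the same short window. The function below is a hypothetical sketch of that heuristic; `DORMANT_DAYS`, the burst window, and the threshold are invented values for illustration.

```python
# Illustrative parameters - not real product settings.
DORMANT_DAYS = 30        # inactive at least this long counts as "dormant"
BURST_WINDOW_HOURS = 1   # reactivations are grouped into hourly buckets
BURST_THRESHOLD = 3      # more than this many dormant logins per bucket is suspicious

def dormant_reactivation_burst(logins):
    """logins: list of (account_id, days_inactive, login_hour) tuples.

    Returns True if more than BURST_THRESHOLD distinct dormant accounts
    log back in within the same BURST_WINDOW_HOURS bucket.
    """
    buckets = {}
    for account_id, days_inactive, login_hour in logins:
        if days_inactive < DORMANT_DAYS:
            continue  # normally active account - not interesting here
        bucket = login_hour // BURST_WINDOW_HOURS
        buckets.setdefault(bucket, set()).add(account_id)
    return any(len(ids) > BURST_THRESHOLD for ids in buckets.values())
```

A burst of five accounts, each dormant for 60 days, all logging in during the same hour would trip this check, while the same five logins spread across weeks would not.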
Why is New Account Fraud on the Rise?
Given the ease and low cost at which fraudsters can access stolen personal data and buy the tools to create bogus accounts at scale, new account creation fraud can be a profitable enterprise.
Fraudsters only need a low success rate for the ROI to work out in their favor. For example, bots can create new accounts on a social media site at large scale to send phishing messages en masse. Even if only a few of these messages trick unsuspecting users into divulging personal information, the attack can still be profitable. Fraudulent accounts are also used to scrape personal information from social networking sites, which can then be resold to third parties or used for social engineering to launch targeted spear-phishing attacks against individuals.
Additionally, new account fraud is used to create accounts that abuse promotional offers meant to entice new customers into signing up for a service, or to apply for loans and credit cards with no intention of repaying them. These are just a few of the nearly endless types of fraud that begin with bogus account registration, which is why this type of fraud is so prevalent and such a constant strain for companies to combat.
Limitations of Current Approaches
It's difficult for businesses to identify and stop new account fraud because a fake account can often be masked to look like a real one. Sure, dumb bots are easily spotted, but many of today's attacks use more sophisticated bots that convincingly mimic a good user. And human fraudsters performing multiple NAO attacks can hide or obfuscate their IP addresses, locations, and other identifiers. The more sophisticated the form of fraud, the more difficult it is to detect.
With the sophistication of fraud attacks today, much traffic falls into a "gray area" - traffic that is hard to classify as clearly good or clearly bad. Fraudsters count on this when they launch NAO attacks so they can "blend in" with good traffic, inflicting damage before the attacks are discovered, by which time it's usually too late. Because it is so easy to emulate good users, false negatives climb sharply. When faced with an overwhelming tide of these attacks, many businesses resort to blocking any user deemed somewhat suspicious, which in turn hurts new customer acquisition, ruins user experience, and erodes brand loyalty.
This is an issue because many companies rely on an identity-based authentication strategy, yet fraudsters already have the knowledge, tools, and data to evade detection by data-driven security checks. This type of mitigation is geared toward detecting extremes, but as we have seen, very little traffic falls cleanly into 'good' or 'bad' buckets.
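The gray-area problem described above is why many fraud teams replace a binary good/bad decision with a risk score and a three-way outcome: allow, challenge, or block. The sketch below is a toy illustration of that routing; the signal names, weights, and band thresholds are all invented for this example and do not describe any particular product's scoring model.

```python
# Hypothetical risk signals and weights - invented for illustration.
def risk_score(signals):
    """Combine boolean/categorical session signals into a score in [0, 1]."""
    score = 0.0
    score += 0.4 if signals.get("headless_browser") else 0.0
    score += 0.3 if signals.get("ip_reputation") == "bad" else 0.0
    score += 0.2 if signals.get("velocity_flag") else 0.0
    score += 0.1 if signals.get("mismatched_timezone") else 0.0
    return score

def triage(signals, challenge_band=(0.2, 0.7)):
    """Route a session three ways: low risk passes untouched, high risk is
    blocked, and the ambiguous middle band gets a step-up challenge."""
    low, high = challenge_band
    s = risk_score(signals)
    if s < low:
        return "allow"
    if s >= high:
        return "block"
    return "challenge"
```

The key design choice is the middle band: rather than forcing an immediate good/bad verdict on ambiguous traffic (and eating false positives or false negatives), the decision is deferred to a challenge that only the user's actual behavior can resolve.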
Smarter Authentication With Targeted Friction
Rather than taking extreme measures - letting too many suspicious users create new accounts for fear of hindering good customers, or mass-blocking traffic, which inevitably leads to false positives - Arkose Labs uses targeted friction for a superior authentication experience.
Many sophisticated bots can accurately mimic human traffic and go undetected by traditional solutions. While Arkose Labs' step-up enforcement can detect and stop most large-scale bots, fraudsters sometimes deploy bots trained to act like humans; these behave like humans but exhibit solve patterns closer to automated traffic. Upon detecting such bots, Arkose Labs deploys a proprietary 'acid test' to triage the traffic into humans versus bots. This starts with the platform switching out the visual puzzle for a completely new kind of puzzle, still easy for humans to solve. This effectively stops automated traffic, as the attack program cannot solve a puzzle its designer has never seen before.
The Arkose Labs Fraud and Abuse Platform does not just mitigate the effects of fraud; it provides powerful remediation that blocks 100% of malicious bot traffic and enables businesses to deflect attacks from bots, skilled cybercriminals, and sweatshop outfits. This allows good users to keep the seamless digital authentication experience they have grown accustomed to, while imposing friction and frustration on fraudsters.
With this approach, targeted authentication stymies automated attacks while slowing down and frustrating human attackers. Arkose Labs thus delivers targeted enforcement challenges that accurately distinguish among authentic users, malicious humans, and bots.
- Challenges gradually increase in difficulty based on the user's associated risk. This forces fraudsters to expend significant time and energy, making it inefficient to clear challenges at scale. As rising costs erode the profitability of attacks, fraudsters are compelled to stop. That is how Arkose Labs effectively bankrupts the business model of fraud.
- This means good users will be able to easily sign up for new accounts and take advantage of promotional deals and other incentives designed to attract new customers. Meanwhile, fraudsters who create fake new accounts for malicious purposes will be foiled.
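The economics behind "bankrupting the business model of fraud" can be made concrete with a back-of-the-envelope model: if harder challenges raise the attacker's solve time per account, the cost per fake account eventually exceeds its expected payoff. The sketch below is a hypothetical illustration of that logic; the solve-time formula, labor cost, and payoff figures are invented, not measured numbers.

```python
# Invented cost model for illustration only.
def solve_seconds(risk, base=5.0):
    """Higher-risk sessions get harder challenges that take longer to clear:
    5 seconds at risk 0, scaling up to 25 seconds at risk 1."""
    return base * (1 + 4 * risk)

def attack_is_profitable(risk, payoff_per_account, labor_cost_per_hour=3.0):
    """Compare the expected payoff of one fake account against the labor
    cost of clearing the challenge gating its creation."""
    cost = solve_seconds(risk) / 3600 * labor_cost_per_hour
    return payoff_per_account > cost
```

Under these toy numbers, an account worth one cent to the attacker is profitable to create at a 5-second challenge but not at a 25-second one - which is the mechanism the bullet above describes: escalating difficulty flips the attacker's ROI negative without affecting low-risk users.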
To find out more about how Arkose Labs can help you stop new account origination attacks, click here to request a demo.