Category Archives: Risk Management

Dealing with Account Take Over? Here are my top tips (O’Reilly post)

Online payments and eCommerce have been targets for fraud ever since their inception. The availability of real monetary value coupled with the ability to scale an attack online attracted many users to fraud in order to make a quick buck. At first, fraudsters used stolen credit card details to make purchases online. As services became more widely used, a newer, sometimes easier alternative emerged: account takeover.

Account takeover (ATO) occurs when one user guesses, or has been given, the credentials to another's value-storing account. This can be your online wallet, but also your social networking profile or gaming account. The perpetrator is often someone you don't know, but it can just as easily be your kid using an account you didn't log out of. All of these fall under various flavors of ATO, and they're easier than stealing someone's identity: all that's needed is guessing or phishing a user's credentials, and you're rewarded with all the value they've created through their activity.

Read more on O’Reilly’s programming blog here.

Working on risk and fraud prevention? Don’t dig your career into a hole

I give this talk about Risk Management called The Top 8 Reasons You Have a Fraud Problem. I learn a lot from the way audiences respond to it, mostly from objections. Most commonly, objections tell me how risk managers paint themselves into a corner in day-to-day work, effectively limiting their ability to drive change or participate in key business decisions.

How do they do that?

First, they make losses their one and only benchmark. It's easy to focus on reducing losses when the business is taking hits; it's your job and it's what's expected of you. But overcompensating, aggressively reducing losses wherever possible while rejecting troves of good customers, not only limits your business's growth prospects, it also turns the risk manager into a single-issue player. Revenue enablement must be a core KPI for the risk team or it will lose relevance.

Second, risk managers focus on maintaining the status quo. When one lacks tools and methods to control their environment, the first response is to try to make sure that nothing ever changes. It's not the risk team's job to say no to everything new; it's their job to find a way to say yes. That's where the technological and organizational edge is. Find ways to enable new business by shifting risk across your portfolio and finding detection and prevention solutions that support even the craziest marketing ideas. You may flail at first, but long term you're building an important muscle.

Last, they tend to distrust the customer. It makes sense: when faced mainly (and often solely) with the malfunctions of the operation, often caused by customers themselves, one tends to stop believing in people's good intentions. That becomes a problem when every product design process turns into a theoretical cat-and-mouse game where every possible abuse opportunity must be curbed in advance. You should let users be users, and that means there will be breakage and there will be losses. Zero losses can easily be achieved by stopping all activity in your system; instead, accept that some customers will be bad and find a way to detect them as they act in your system, rather than limit every customer's ability to use your product.

As I often write, risk teams are multidisciplinary and must think about operations, data science, product design and more. Whenever one focuses on limiting risk instead of trusting users, challenging the status quo and enabling new business, they are contributing to turning risk into a control function, a technocratic add-on that doesn’t deserve a seat at the decision makers’ table. Make sure that’s not you.

(If you want to read some concrete advice on how to do that, take a look at my free eBook here)

Payments: What is the best career play in payments today (Jan 2013)?

There are two viable high yield[1] career plays in payments (other than completely staying away from this highly commoditized and increasingly red-ocean market):

  1. Work for a short term lending company. Successful companies are popping up and new underwriting models using social, new data sources, and other feedback loops are the future. From Klarna to LendingClub to other smaller ones, if you’re extending some kind of credit or facilitating that, you’re learning something very valuable for the next 5-10 years.
  2. Be a modeler/good risk person for payments. Good risk people are worth their weight in gold and possibly more expensive metals. Extra points if you understand the data science aspect as well as the operations side. I cannot begin to explain how big the need is – supply is at least 2 years behind the demand (in the sense that it takes time to grow people into being strong domain experts) and it’s going to remain that way for at least a few more years. For this path I’d try to get hired into companies like Signifyd.

[1] High yield means not working for the man for a low 6 digit income for the rest of your life. If you want that, there are many other options.

Fraud in Digital Goods Sales 201 (Signifyd post)

The Signifyd blog has a post worth reading today:

Selling digital and virtual goods is a lucrative business, but one that also attracts a lot of fraud attempts. The logic is obvious: no shipping means no physical presence (or appearance of one) is required; fast delivery lets fraudsters quickly buy multiple items and extract much more value from every stolen card; that same speed leaves the seller with almost no recourse; and reselling stolen digital products is much easier than reselling tangible goods. After our blog was featured in Balanced's post about fraud, we saw multiple questions about fraud in digital goods. One of them was this comment on HN. One reason Signifyd gets a lot of retailer attention is our ability to provide quality fraud prevention decisions that reduce fraud in cases where there's little recourse. We wanted to share some insights.

Common wisdom about preventing fraud in digital goods abounds. We're not looking to repeat the regular tips: using IP-to-billing-address distance, purchase velocity, email domain type and device fingerprinting as indicators. What we'd like to do is add some detail on why these things often fail, and suggest a few best practices. Here are some:

  1. Digital goods purchases provide a quick feedback loop, allowing fraudsters to test, learn fast and adapt. Deploying rules with a single threshold or indicator (e.g. number of past purchases over 4, or IP country must match BIN country) and rejecting 100% of matching purchases immediately simply provides faster feedback. Either compose rules that combine multiple indicators, randomly reject less than 100% of flagged purchases, or implement a random delay in your response (a minimal sketch of this randomized-response idea follows the list).
  2. IP-to-billing-address location is a complex indicator. Simply measuring distance won't work when the network is mobile, and setting a single threshold won't work in most countries. Use sources like GeoIPOrg to understand what kind of connection the IP comes from, and bucket the distance into bins rather than applying one cutoff (a distance-binning sketch also follows the list).
  3. Email domain type is relevant but simplistic. After you weed out the free but rare domains (bad) and corporate emails (usually good), you're left with a ton of Gmail addresses. What then? Using online searches to determine that the email is actually tied to a person is an important next step.
  4. Customer browsing patterns are highly indicative. New customers, returning customers and fraudsters all navigate differently on your website. Count the number of clicks to initiating a purchase, as well as which types of pages new customers pass through. You’ll see obvious patterns emerging.
  5. Don’t wait for chargebacks to come. Have one person on staff reviewing purchases randomly to detect emerging trends and respond to them.
  6. Machine fingerprinting is helpful, but is often glorified JavaScript. Build basic matching in-house based on information you collect from consumer sessions, and watch for users who look similar to previous ones but always have new cookies. Fraudsters know how to flush cookies; it's not the linking that gives them away, but rather the attempt to avoid detection (a rough matching sketch follows the list).
  7. Don’t use 3DS. You will pay much more in lost business than prevent fraud.
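Here's a minimal Python sketch of the randomized-response idea from item 1. The indicator names, thresholds, rejection probability and delay range are all illustrative assumptions, not values taken from the post:

```python
import random
import time

def composite_rule(order):
    """Fire only when several weak indicators agree, not on a single threshold."""
    signals = [
        order["past_purchases_24h"] > 4,              # velocity
        order["ip_country"] != order["bin_country"],  # geo mismatch
        order["account_age_days"] < 2,                # brand-new account
    ]
    return sum(signals) >= 2  # require at least two indicators to fire

def respond(order, reject_probability=0.8):
    """Blur the feedback loop: reject only a fraction of flagged purchases,
    and answer after a random delay so the rule can't be probed quickly."""
    time.sleep(random.uniform(0.5, 3.0))  # random response delay
    if composite_rule(order) and random.random() < reject_probability:
        return "reject"
    return "accept"

order = {"past_purchases_24h": 6, "ip_country": "RO",
         "bin_country": "US", "account_age_days": 1}
print(respond(order))
```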
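And a sketch of the distance-binning idea from item 2. The bin edges and the mobile-connection check are assumptions for illustration; in practice the distance and connection type would come from a geolocation and organization lookup such as the GeoIP data mentioned above:

```python
def distance_bin(distance_km, connection_type):
    """Turn IP-to-billing distance into a coarse bucket instead of one cutoff;
    ignore distance entirely for mobile IPs, whose geolocation is unreliable."""
    if connection_type in ("cellular", "mobile"):
        return "mobile_unreliable"
    if distance_km <= 50:
        return "local"
    if distance_km <= 500:
        return "regional"
    if distance_km <= 3000:
        return "domestic_far"
    return "very_far"

# The bucket, not the raw number, feeds your rules or model:
print(distance_bin(1200, "dsl"))       # -> "domestic_far"
print(distance_bin(1200, "cellular"))  # -> "mobile_unreliable"
```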
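Finally, a rough sketch of the in-house matching described in item 6: flag sessions whose device attributes match a profile you've seen before but that always arrive with a fresh cookie. The attribute set and the matching rule are illustrative assumptions:

```python
SEEN_PROFILES = set()  # in practice, backed by your session store

def device_profile(session):
    """A crude fingerprint built from attributes you already collect."""
    return (session["user_agent"], session["screen_resolution"],
            session["timezone"], session["ip_prefix"])

def suspicious_new_cookie(session):
    """True when the device looks familiar but the cookie is new again;
    the attempt to avoid detection, not the match itself, is the signal."""
    profile = device_profile(session)
    seen_before = profile in SEEN_PROFILES
    SEEN_PROFILES.add(profile)
    return seen_before and session["cookie_is_new"]

s = {"user_agent": "UA1", "screen_resolution": "1280x800",
     "timezone": "UTC-8", "ip_prefix": "64.233", "cookie_is_new": True}
print(suspicious_new_cookie(s))        # False: first sighting of this device
print(suspicious_new_cookie(dict(s)))  # True: same device, cookie new again
```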

Fraud in digital goods is a real problem, but a solvable one. Don’t let the threat of lost money shut down your business and drive you to blocking whole countries from your system. And, give us a buzz. We’d love to see how we can help you.

PayPal, Lenovo and killing the password

I like this new initiative from PayPal and Lenovo. With a small software installation, it basically turns every device into a random password generator, providing another authentication factor. It's hard to know whether phishing and brute-force password hacking are still prevalent issues, since most of the data comes from solution providers' FUD campaigns; my view is that the problem is real, but not as big and complex as it's made out to be. Based on my experience at PayPal, most hacking activity can be detected through probabilistic means rather than by assigning the consumer more secrets. You can read more about that here.

Will this solution prove useful? Having an app automatically contribute an authentication factor removes part of the human factor in the equation, and that closes a lot of potential security breaches. No argument there. Still, the biggest problem in access control is the human factor, and that is what makes defending against it so complicated and turns additional authentication factors into a limited solution: people forget, and more often, they compromise themselves.

Whether simple or complex, secure or insecure (actually, more so when secure and complex): if there's a password, users will forget it, and you will have to offer some kind of password retrieval flow that may not require the secured device. Once you allow going around that requirement, fraudsters will use it to access accounts.

The bigger problem is that users compromise themselves. They give their credentials to others, they give their devices to their kids, they use shared devices to access confidential information. They do that because it's what they need to do day to day; it's how they need to use your product. Many times there's no alternative to sharing credentials, since the product itself doesn't allow shared use (multiple users with different permissions on a mobile device? Hard to imagine), but even when such solutions exist they are hard to use and aren't taken up by consumers. A good example is shared or linked prepaid child accounts that parents load with cash. While these solutions exist, their use is rudimentary unless the child already has an established, separate financial relationship. It's so much easier to just give the kid your card.

The bottom line is that usability trumps security, at least the type of security that adds barriers and authentication factors. The industry is long overdue in moving to behavioral and probabilistic measures for online security, but is definitely lagging. Until that knowledge gets properly dispersed, which may take years, I definitely like what PayPal and Lenovo are doing as a mid-way solution.

Using social network data in fraud prevention

Linking to a post I put up on Signifyd‘s blog:

Some of the most common questions we get asked are around social data. How do you use social data in fraud prevention? What’s the right way to leverage social network analysis in fraud investigations and real time decisions? We’ve had to deal with this issue with many of our customers, and found a few major obstacles and some very interesting use cases.

To be able to use social data, you first have to gather and understand it. In Signifyd‘s system, one of the first steps we take for each automated decision is “enrichment”, using a large number of online data sources to augment the consumer’s profile and understand the information we get from you to make the best decision.

The first challenge is getting the data. For many smaller retailers, using social data means using their personal (and sometimes fake) Facebook profile to look at a consumer’s profile and learn more about them, maybe run a few Google searches. Doing so at scale, however, is impossible. We went through dozens of online sources and integrated them through public and private APIs to allow collection of public information into a central repository. Doing that allows Signifyd to gather a lot of small pieces into a concrete mosaic of social data, since not every source will yield results at any given time.

When dealing with social data, one of the most important concerns is consumer privacy. When you use a fake profile to friend a consumer, you not only harm their privacy but also violate Facebook's terms of service. Being able to use social sources without violating privacy, collecting publicly available information only, respecting proper use, and only using it for highly targeted use cases, is what allows us to use social data while keeping consumers, and the businesses that use Signifyd to inspect those consumers, safe.

Once you cross that off, you're faced with integrating the data. The problem with social data is that it's highly fragmented; inferring relationships between different pieces (the consumer's workplace, whether their kid is using their details, or whether the provided phone number is indeed theirs) is a complex inference task. It requires normalizing the provided data into one common form, fuzzy comparison algorithms and other tricks.
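As a rough illustration of that normalization and fuzzy-comparison step, here is a small Python sketch using the standard library; the cleanup rules and the similarity threshold are assumptions for the example, not Signifyd's actual pipeline:

```python
import re
from difflib import SequenceMatcher

def normalize_name(raw):
    """Lowercase, strip punctuation and sort tokens so that
    'Samet, Ohad' and 'ohad samet' end up in one common form."""
    cleaned = re.sub(r"[^a-z\s]", " ", raw.lower())
    return " ".join(sorted(cleaned.split()))

def same_person(name_a, name_b, threshold=0.85):
    """Fuzzy comparison on the normalized forms."""
    ratio = SequenceMatcher(None, normalize_name(name_a),
                            normalize_name(name_b)).ratio()
    return ratio >= threshold

print(same_person("Samet, Ohad", "ohad samet"))        # True
print(same_person("Ohad Samet", "Someone Different"))  # False
```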

Once you have it, how can social data be used for fraud prevention? At Signifyd, we see it being handy for two main uses:

  1. Identity validation: when you accept payments online, stolen credit cards are common. Many times the fraudster doesn't have all of the cardholder's details, and they augment what they have with invented ones. Emails, phone numbers and occasionally names and parts of the billing address are invented. Using social data, different details can be tied to multiple people or identified as invalid, using, for example, complex white-pages searches. As a result, identity validation becomes a simpler task. Some of this can be used by your team very easily: using a consumer's social fingerprints, you can establish whether they've had any meaningful activity online and how far back that activity goes. Profiles that haven't existed for more than a few weeks or months are often connected to fake or stolen identities (a sketch of this profile-age check follows the list).
  2. Friendly fraud prevention: friendly fraud, or abuse, often happens when a relative or co-worker uses one's identity to make a purchase. These cases are more subtle in both detection and handling, since the offender is often highly informed: knowing passwords, knowing personal details, and having access to personal devices. By using social data on provided details and behaviors, you can infer that there are actually two different people involved in a certain purchase. One of the basic and common scenarios is when, using the provided email address, you learn that the alleged shopper is grossly underage. That immediately raises the suspicion of a kid using a parent's details. Tying an email address to a workplace, and through it to the IP address the consumer has connected from, can allow you to better validate their identity and make sure their information is not being used by a family member.
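As a small illustration of the profile-age heuristic from item 1, here is a sketch; the field names, the shape of the enrichment output and the 90-day cutoff are assumptions for the example only:

```python
from datetime import date

def footprint_risk(profiles, today=None, min_age_days=90):
    """profiles: list of dicts like {"source": "twitter", "created": date(...)}
    gathered during enrichment. Flags identities with no meaningful history."""
    today = today or date.today()
    if not profiles:
        return "no_footprint"        # nothing found online at all
    oldest = min(p["created"] for p in profiles)
    if (today - oldest).days < min_age_days:
        return "recent_footprint"    # profiles only a few weeks or months old
    return "established_footprint"

profiles = [{"source": "linkedin", "created": date(2012, 3, 1)},
            {"source": "twitter",  "created": date(2011, 7, 15)}]
print(footprint_risk(profiles, today=date(2013, 1, 10)))  # established_footprint
```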

Social data is complicated to use since it’s unstructured and often lacking. Building a strong portfolio of data sources, integrating them effectively and using the data to make fraud detection decisions is one of the important pillars of Signifyd‘s solutions. Try us out!


What are the risks of mobile POS systems?

I’m embedding another Quora answer, since this is a topic that gets debated quite a lot. I don’t view mPOS as inherently more vulnerable, and frankly, the limited scale is (as always) the reason why I believe fraudsters will go elsewhere. Online is almost always easier.

Read Ohad Samet's answer to "Online and Mobile Payments: What are the risks of mobile POS systems?" on Quora.

Forget Big Data

These are the slides from a talk I gave last week. The gist of it: “Big Data” in Fraud and Risk prevention for payments won’t suffice, and must be augmented by domain experts (including a few notes about reasons for that, a bit about domain experts, and some real life examples). Nothing new for readers of this blog, but you may find the slides or wording helpful.