
Forget Big Data

These are the slides from a talk I gave last week. The gist of it: “Big Data” in fraud and risk prevention for payments won’t suffice on its own and must be augmented by domain experts (the slides include a few notes on why, a bit about domain experts, and some real-life examples). Nothing new for readers of this blog, but you may find the slides or wording helpful.


Feature companies in risk and fraud (or: enterprise and consumer startups are different)

David Pakman quoted Tim Armstrong today: “I’ve seen too many feature companies get hot, raise too much $ and get way too overvalued.”

It is interesting to see that this holds true across markets. The trend is extremely obvious in payments/security as well, and is a by-product of the boom in seed funding for consumer startups. I wrote extensively about payments startups and how important it is to know where you are in the value chain. Yet I see the same in risk and fraud detection, where you’d expect the need for complete and complex products to be obvious.

Indeed, the concepts of consumer and lean startups have trickled into the enterprise; as a result, small 2-3 person teams are trying to build rudimentary detection mechanisms, mostly based on “social data” (a euphemism for opt-in Connect or scraping Facebook directly), and expecting to position themselves in the market as serious providers in a short time frame. This is far from a reasonable expectation. However, since money is abundant and looking for a way out of pure consumer plays, some of these teams get funded and end up overvalued, unable even to cut losses with an acqui-hire, the most likely scenario.

While I agree consumerization of the enterprise is real, this is not a sustainable approach. In the enterprise, the definition of an MVP (much more feature-complete), of an iteration (much longer), and of customer development is very different. Small merchants do increasingly think like consumers and are becoming more tech savvy, which leads to more usage of SaaS tools and more openness to outsourcing some non-core activities (in eCommerce, fraud prevention may well be considered non-core). That doesn’t mean they are open to testing any new tool that gets put out there; the time, as well as the expertise, needed to integrate it and evaluate its performance may be more than they can afford. You can’t trust your tool to just get picked up at random at a reasonable scale and learn from there unless you have a very big war chest; and then we are back to the funding issue.

Case in point: device fingerprinting (DFP) companies. A few years back DFP (often a glorified piece of JavaScript) was all the rage. Since it wasn’t a text- or Flash-based cookie, most fraudsters, themselves no more than script kiddies, did not have the knowledge or tools to properly resist being profiled. As a result, for a while it worked well, especially in reducing short-term, horizontally scaled attacks. Only there were a few problems: overfunded companies built teams that were too big, especially heavy on the Sales side, since sales cycles with financial institutions were long and required a lot of patience as well as multiple integration solutions. Since the teams were big and sales took time, each contract had to be big, so pricing went up as much as possible rather than adopting a freemium model that could boost adoption. Moreover, once fraudsters and engineers caught on, it was easy to circumvent or duplicate, whether internally at retailers and banks or by competitors. As a result, most of these companies are struggling and dealing mostly with litigation against competitors over some negligible IP.
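To see why DFP was so easy to duplicate and circumvent, consider what a naive fingerprint boils down to. The sketch below is my own illustration, not any vendor’s actual method, and the attribute names are hypothetical examples of what such a script might collect:

```python
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    """Hash a set of browser-reported attributes into a single device ID.

    In a real deployment, `attributes` would be collected client-side
    by a script (user agent, screen size, timezone, fonts, etc.).
    """
    # Sort keys so the same attributes always yield the same hash.
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

print(device_fingerprint({
    "user_agent": "Mozilla/5.0 ...",
    "screen": "1920x1080",
    "timezone": "Europe/Stockholm",
    "fonts": ["Arial", "Helvetica", "Verdana"],
}))
```

Change any single attribute and you get a brand new “device”, which is exactly why the technique stopped working once fraudsters understood what was being collected.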

In enterprise, and specifically in security, one feature isn’t enough, starting lean is more complicated, and a feature alone will not do, no matter how many patents you have pending. One option is to take your time and come up with a holistic solution, and that is tremendously harder to build (in fact, since FraudSciences was acquired, only Signifyd and Sift Science have tried building a standalone risk-as-a-service solution). The other is to start very slow and very lean, and raise very little capital. MaxMind is a good example of the latter. It’s a whole different world out there now, especially for enterprise startups. Make sure you build a real product that can sell. Don’t build a feature.

Vouch for me: social collateral and voter fraud

With the elections concluded and the winner decided, a few words about voter fraud and identification.

There has been a lot of talk about voter ID requirements and policing, and their sometimes adverse effects (such as this story about an elderly woman not allowed to vote because she doesn’t drive and therefore has no driver’s license). Voter fraud and voting machine malfunction have been contentious subjects; in previous campaigns, there have been multiple cases of identity fraud – as bad as dead people registering to vote.

How do you solve the problem of identification? Interestingly enough, the suggestions are very similar to attempts at stopping account takeover in online services. There are basically two common responses.

The first: get a 3rd-party certificate. The equivalent in the online world is using Facebook Connect – basically your online passport – or other types of OpenID solutions. That requires trust in that 3rd party. We know that FB is riddled with fake accounts, but with government-issued driver’s licenses our trust is a little more warranted, if only due to the physical, offline nature of issuing a new one.

The second: issue yet another authentication factor for citizens to identify themselves. I wrote about this topic in the past; issuing new secrets tends not to work, since fraudsters will make a point of obtaining those secrets, while legitimate users will be the ones less likely to have them handy when prompted. Using biometrics, such as DNA samples or fingerprints, as the blogger I linked to suggests, is the beginning of a slippery slope toward lack of privacy[1].

Let me offer a third option: social collateral. Let groups of pre-registered voters vouch for one another as being who they say they are, as long as at least one of them can be recognized with a government-issued ID, and register them jointly as a “voting batch”. Have those people sign a pledge to vote in good faith. The mechanics of committing to that pledge as a group, combined with a minimum amount of outside validation, will guard against voter fraud. This could be a cheap, reasonable and research-supported way to enforce identity and reduce the incentive for fraud.
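For concreteness, here is a minimal sketch of the batch rule just described. The field names and the one-ID threshold are my assumptions; this illustrates the mechanic, it is not a worked-out protocol:

```python
from dataclasses import dataclass

@dataclass
class Voter:
    name: str
    has_government_id: bool
    signed_pledge: bool

def valid_voting_batch(batch: list[Voter], min_ids: int = 1) -> bool:
    """A batch is valid if every member signed the good-faith pledge and
    at least `min_ids` members carry outside (government ID) validation."""
    if not batch:
        return False
    all_pledged = all(v.signed_pledge for v in batch)
    enough_ids = sum(v.has_government_id for v in batch) >= min_ids
    return all_pledged and enough_ids

batch = [
    Voter("Alice", has_government_id=True, signed_pledge=True),
    Voter("Bob", has_government_id=False, signed_pledge=True),
    Voter("Carol", has_government_id=False, signed_pledge=True),
]
print(valid_voting_batch(batch))  # True: one ID anchors the whole batch
```

The point of the group pledge is that each member stakes their own standing on the others, so recruiting a whole batch into a fraud scheme is much harder than forging one identity.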

As President Obama noted in his acceptance speech, the voting process must be fixed in the next 4 years. Using social mechanics is one way that must be considered.

[1] Edit: In the comments, Nir Alfasi presented a biometric solution that doesn’t require an actual database. I believe it still falls under the complexity of the first type of solutions, since it, too, requires pre-registration. It is, however, good to know that such solutions exist.

Smart risk management: why the “factory” approach could bring you down

This is a re-posting of a 2010 post.

If you deal with payments in any shape or form, you know you’re going to end up with a “risk management” team. A lot of times it creeps up on you: volume picks up, so you know you need someone to look at orders. If you’re running a small shop that someone is most probably you, but a lot of companies just hire one or two folks. These people use whatever tool you have to look at transactions – most times a customer service tool – and make up their technique as they go. With time, and sometimes with chargebacks coming in, you realize that your few analysts can’t review all transactions, so you set up a few rules to make queue and transaction-hold decisions. Since your analysts are not technology people, you resort to hard-coding some logic based on a product manager’s refinement of the analysts’ thoughts, again based on a few (or many) cases they’ve already seen. Before long, you realize that the analysts are caught in a cat-and-mouse game, trying to create a rule to stop the latest attack that found its way into the chargeback report and putting a lot of strain on the engineers who maintain the rule set. Even after coding a simple rule-writing interface, the situation isn’t much better, since the abundance of rules creates unpredictable results, especially if you allowed the rules to actually make automated decisions and place restrictions on transactions and accounts.
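The hard-coded stage usually looks something like this sketch; the rule names and thresholds are invented for illustration. Each rule fires independently, which is why the combined behavior becomes unpredictable as the list grows:

```python
def queue_for_review(txn: dict) -> list[str]:
    """Return the names of the rules a transaction triggers.

    Each rule is a hand-written heuristic, typically encoding one
    past fraud case. Thresholds here are illustrative only.
    """
    triggered = []
    if txn["amount"] > 500 and txn["account_age_days"] < 2:
        triggered.append("new_account_high_amount")
    if txn["shipping_country"] != txn["billing_country"]:
        triggered.append("country_mismatch")
    if txn["orders_last_hour"] > 3:
        triggered.append("velocity_spike")
    return triggered

txn = {"amount": 750, "account_age_days": 1,
       "shipping_country": "US", "billing_country": "RO",
       "orders_last_hour": 1}
print(queue_for_review(txn))  # ['new_account_high_amount', 'country_mismatch']
```

Each rule typically encodes one attack someone has already seen, which is exactly the cat-and-mouse dynamic described above.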

It’s at this point that you realize you need a statistician to run regressions, so you bring someone in. Hopefully you have enough of a data set for them to create a decent regression model, and you can just get anyone off the proverbial street, since regression is a very common tool. The statistician comes on board and creates a model with industry-standard false positives, let’s say 80%. Your review volume grows with transaction volume, and you have to hire even more analysts and customer service folks to deal with complaints by legitimate customers – makes sense, since placing restrictions with 80% false positives will get you a lot of incoming calls. Then you discover that the regression model’s effectiveness degrades pretty quickly, since it’s trying to predict which transactions will get a chargeback, but there are multiple reasons for getting a chargeback, making it harder to predict correctly. You then also discover that to create an updated regression model you need to wait for most of the chargebacks to come in so you have a good enough set of problematic transactions, meaning that you have at least 3 months’ lead time before a new modeling cycle can be kicked off. That’s, of course, given that you have engineers on board to code the model.
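A toy version of that workflow, using today’s tooling (scikit-learn); the feature set is invented, and the operational constraint that matters, labels only maturing months later, shows up as the cutoff on the training data:

```python
from datetime import date, timedelta
from sklearn.linear_model import LogisticRegression

# Chargebacks take roughly 90 days to arrive, so only transactions
# older than the maturity window have reliable labels.
LABEL_MATURITY = timedelta(days=90)
today = date(2012, 1, 1)

# (txn_date, [amount, account_age_days, country_mismatch], is_chargeback)
history = [
    (date(2011, 8, 1),  [750.0, 1,   1], 1),
    (date(2011, 8, 3),  [25.0,  400, 0], 0),
    (date(2011, 9, 10), [320.0, 30,  1], 1),
    (date(2011, 9, 12), [40.0,  900, 0], 0),
    (date(2011, 12, 20), [600.0, 2,  1], 0),  # label not mature yet, excluded
]

mature = [(x, label) for d, x, label in history if today - d >= LABEL_MATURITY]
X, y = zip(*mature)

model = LogisticRegression().fit(X, y)
print(model.predict_proba([[500.0, 3, 1]])[0][1])  # P(chargeback)
```

Everything trained today reflects the fraud of a quarter ago, which is one reason the model degrades so quickly.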

The next thing you do is buy fraud prevention tools to add to your modeling power: you start creating blacklists of IPs and emails to mark problematic transactions. This improves things a bit but leads to additional false positives, since people share resources. You consider buying a platform to manage rule and model deployment but decide the cost is prohibitive, and generally it looks like risk management is taking over your dev resources. So you hire more analysts to do manual reviews, and a product manager to decide on the rule and model roadmap, and a risk operations manager for the growing group of analysts, and a head of risk. The rules and models you already have in place are blocking so many transactions that you start to wonder whether they’re slowing down your growth instead of helping you protect your business. It looks like risk is managing you.
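Blacklisting is the simplest of these tools, which is also why it misfires. A sketch with made-up entries; the shared-resource problem is visible immediately, since one flagged IP (an office NAT, a campus network) condemns every legitimate user behind it:

```python
blacklisted_ips = {"203.0.113.7"}          # e.g. an IP seen in one fraud case
blacklisted_emails = {"mule@example.com"}  # e.g. a known mule account

def blacklist_hit(txn: dict) -> bool:
    # A shared IP puts every legitimate user behind it on this list
    # too, which is where the extra false positives come from.
    return txn["ip"] in blacklisted_ips or txn["email"] in blacklisted_emails

print(blacklist_hit({"ip": "203.0.113.7", "email": "student@example.edu"}))  # True
```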

There’s something wrong with this model. Sure, some of what I’m describing makes sense for a company that’s just starting out, but getting caught in the factory approach to risk management is a huge burden in later stages, one that can be prevented by realizing that risk management is just one of a series of classification and inference questions a company needs to deal with, and that those require a different kind of upfront investment in building a team.

I had two very interesting conversations about this very subject last week with two very inspiring people, but there’s one thing I remember a colleague telling me a few months back when they took a new job. The person insisted on being responsible not only for the new company’s risk management efforts, but also, more generally, for its data and business intelligence. That was based on the same understanding: with the abundance of data created by an organization’s activity, all attempts to organize that data and make sense of it should be bound together. It doesn’t matter whether you’re qualifying leads, improving conversion or reducing fraud – you’re dealing with users and their actions, and with how automated decisions impact them. It is the practice of making sense of data, and it transcends using data to control the experience of bad users. Once you realize that, you start demanding more of your analysts: that they be technical, know how to generalize on trends beyond targeted rules, and become sources of truth. You understand that an off-the-shelf regression cannot simply be carried between domains without adjustment. You build a system that can correct itself. And with that, you create a team that can win at risk, and do much more than that: develop data sources, identify trends in huge data sets, and reach actionable insights that transform the way you work with your users, both fraudulent and legitimate.

What’s missing? I think there first needs to be a critical mass of people dealing with data in a way that sees beyond “intuition” but doesn’t get lost in over-complicating inference on huge data sets. It takes time for these people to develop skill and keep wanting to solve difficult problems; my analysis group had fewer than 20 people in it, and a good part of them have had enough of payments risk and classification for the rest of their careers. I’m not even mentioning starting a company. But when you start a risk or data team, make sure you seed it well, or you’ll find that the bad start costs you a lot more money and effort than you had planned.

What’s Missing in Data Science Talks

On January 28th, 2008, the $169M sale of Israeli FraudSciences to eBay’s payments division, PayPal, was publicly announced. I was part of the 65-person crew and head of the analytics group at the time. FraudSciences became PayPal’s Israeli R&D center and is still a thriving team, spanning more than 100 people and providing great value to the company. Our story has even been mentioned in StartUp Nation, in an inspired-by-a-true-story style dramatization of events.

The sale and its ramifications are not what I want to talk about, though; what I do want to talk about is the chain of events that led to that sale, and more specifically the test that PayPal ran us through. You see, PayPal had to see whether our preposterous claims about how good our algorithms were held true, so they threw a good chunk of transactions at us to analyze and send back with our suggested decisions. Long story short, our results had an upside of up to 17% over PayPal’s own algorithms at the time, and the rest is history.

How did we do that, then? We must have had a ton of data. We must have used algorithm X or technique Y. We must have been masters of Hadoop. Wait – no. This was 2007. Nothing of the sort; everything took forever. To get to these results we didn’t even use the two famous patents FraudSciences viewed as huge assets, since they required some sort of real-time interaction with the buyer. What we did have were roughly 40,000 (indeed) well-tagged purchases, good segmentation, and great engineered features, all geared at very well-defined user behaviors. What we had, plain and simple, was strong domain expertise.
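To make “engineered features geared at well-defined user behaviors” concrete, here is the kind of hand-built feature I mean. These are invented examples in the same spirit, not FraudSciences’ actual variables:

```python
def engineered_features(txn: dict) -> dict:
    """Hand-crafted features, each encoding one behavioral hypothesis,
    e.g. 'fraudsters rush checkout' or 'mismatched contact details
    correlate with stolen identities'. All names are illustrative."""
    return {
        # Legitimate buyers browse; fraudsters with a stolen card rush.
        "seconds_per_page": txn["session_seconds"] / max(txn["pages_viewed"], 1),
        # Free-mail sender shipping to a business address is an odd pairing.
        "freemail_to_business_address": int(
            txn["email_domain"] in {"yahoo.com", "hotmail.com"}
            and txn["ship_to_type"] == "business"
        ),
        # Purchase made at night relative to the billing address timezone.
        "local_night_purchase": int(txn["local_hour"] < 6),
    }

print(engineered_features({
    "session_seconds": 45, "pages_viewed": 9,
    "email_domain": "yahoo.com", "ship_to_type": "business",
    "local_hour": 3,
}))
```

Each feature encodes a hypothesis a domain expert can articulate and defend, which is what makes a small, well-tagged data set go a long way.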

Domain expertise, or the lack thereof, is exactly my issue with the talk about Data Science today. Here’s an example: a friend of mine, a strong domain expert, was recently rejected by a fairly nascent startup filled with very smart engineers, because they didn’t really know where to place his non-developer profile on their team. Were they wrong not to hire him? Maybe, maybe not; I can’t judge. Were they wrong to make the decision based on coding skills? Most definitely. It’s very common for data and ML geeks such as ourselves to embark on the (in my opinion) hubris-driven task of building an artificial intelligence that will solve all problems, the generic SkyNet. We neglect to admit the need for specific knowledge. That is when discussions of the volume and structure of data sets replace a keen understanding of what people are trying to achieve, when complex tools replace user research. Unsurprisingly, these attempts either fail or scale down to tackling one domain at a time. They can still take over the world – just with a different strategy.

When I read people on Kaggle, in itself an amazing website and community, list the tools they threw at a dataset instead of how they led with a pure analysis of patterns and indicators, I cringe a little. This is a craft fueled by excess – in space, in memory, in computing power, even in data. While often highly useful, almost as often it makes us miss the heuristic just in front of our eyes. I think that analysis and Data Science need to incorporate this realization as well, to become a real expertise.

Fraud detection and prevention and credit issuance, the stuff we deal with on a daily basis at Klarna, are areas where this is an obvious issue. High fragmentation across geographies, payment instruments and products creates smaller training and validation sets than you’d ideally want. The need to wait for a default or a chargeback limits the time between iterations. Bad signals are scarce compared to other types of classification. Operational issues and fraudsters’ strong incentives to hide (as well as abuse or “friendly” fraud) cause “dirty” performance flags. And still we run a shop that makes accurate decisions with a number of instances per segment that Data Science teams would frown upon. How is that? The same way FraudSciences gave PayPal’s algorithms a run for their money – we use domain expertise to distill features that capture interaction in a way that automated feature engineering methods will find hard to imitate. We use bottom-up analysis of behavioral patterns. We add a sprinkle of behavioral economics (but building a purchase flow is a completely different story).

This aspect of what we do is available to any Data Scientist out there – I’ve written extensively about finding domain experts. They’re around you. Use them – and don’t get hooked on the big guns just because they’re there*.

*Well, only if you want to get better results quicker and are acting under market and product constraints. If you’re a contributor to an open source project – carry on with your great work!

In Defense of Knowledge Sharing in Fraud Prevention

Like any other field, fraud prevention specifically, and behavior management generally (be that abuse, spam or anything else), has its conventions and its (few) blog posts. There is, though, an air of secrecy around these topics that always leaves the discussion at a very high level: no numbers around KPIs, no discussion of best practices, and little sharing of information other than explicit blacklist data. This is a situation that has to change if online risk management is to become a real practice, and with the growing need for risk experts it is a must.

Secrecy in fraud prevention can have multiple motives, but it is mostly about competitive advantage – whether the company thinks that telling too much about its fraud prevention will tip off fraudsters and competitors, or risk staff feel that not sharing what they know gives them a career advantage over others. Both considerations are true to some extent, but not to the extent to which they currently justify secrecy.

The truth about fraud prevention is simple: there is a finite set of generic tools and indicators that you can deploy. The key to differentiating in risk is not the run-of-the-mill 80% but rather the remaining 20%, and those are the parts that should be kept secret – interestingly enough, they are usually the hardest to steal and transfer anyway.

What’s in the 80% and what’s in the 20%? Let’s look at a few examples.

Tools? When I tell you that every risk team must utilize linking, velocity, rules, modeling and manual review tools (the “Basic toolbox”), that’s the 80%. Even if we discuss particulars of implementation, such as the right type of edit distance for fuzzy-matching an address, this is still far from the 20%. Are you really giving away competitive advantage? The type of customer and data your colleague is working with may not be the same. Shopping cart design may even impact the types of typos good customers make, which then impacts normalization and matching.
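To illustrate how generic this 80% really is, here is a fuzzy address match in a dozen lines using Python’s standard library; a production system would use a proper Levenshtein implementation and locale-aware normalization, and the abbreviation map is a tiny invented sample:

```python
from difflib import SequenceMatcher

def normalize(address: str) -> str:
    """Collapse variation that isn't signal: case, punctuation, and
    common abbreviations. The abbreviation map is a small sample."""
    abbrev = {"st.": "street", "st": "street", "rd.": "road", "ave": "avenue"}
    tokens = address.lower().replace(",", " ").split()
    return " ".join(abbrev.get(t, t) for t in tokens)

def address_similarity(a: str, b: str) -> float:
    """0..1 similarity on normalized strings; typo-level differences
    score high, genuinely different addresses score low."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

print(address_similarity("12 Main St., Springfield", "12 Main Street Springfield"))
print(address_similarity("12 Main St., Springfield", "98 Oak Road, Shelbyville"))
```

Everyone in the field writes some variant of this; the 20% is in knowing, for your shop and your customers, which normalizations and thresholds actually separate typos from lies.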

Procedures and analytics? The way we develop and implement rules at Klarna may give us some competitive advantage, but that is not because of our procedures or the fact that we use control groups. It’s only in the mixture of hiring, training, and the day-to-day blend of detection, analysis and action that the competitive advantage is attained and maintained. Risk and fraud prevention are inherently operational practices – user behavior is constantly changing, if only because of changes in the product or macro-economic trends. In consulting assignments I sometimes found myself explaining both essence and implementation at a very detailed level, only to see them missed or misinterpreted in execution.
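The control-group mechanic mentioned above is a good example of how little the “secret” itself matters: the idea fits in a few lines. The holdout rate below is an assumed parameter, not anyone’s actual number:

```python
import random

CONTROL_RATE = 0.05  # fraction of rule hits deliberately left untouched

def decide(txn_id: str, rule_fires: bool) -> str:
    """Hold out a random slice of rule hits so the rule's real hit
    rate and false-positive cost can be measured against outcomes."""
    if not rule_fires:
        return "approve"
    if random.random() < CONTROL_RATE:
        # Logged, then later compared against the blocked group.
        return "approve_as_control"
    return "block"

print(decide("txn-123", rule_fires=True))
```

Knowing that control groups exist buys a competitor nothing; running the hiring, training and feedback loop that acts on their results is the hard part.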

These are two examples, but there are more. Of course some things must be confidential. Your training plan, your hiring methods, and the code for the variables used in your models are all good examples. Consider this, though: if your success depends on fraudsters not knowing that emails from yahoo.com are always suspected while emails from intel.com never are, you’re in trouble no matter WHAT is in your “secret sauce”.

Secrecy has its place, but the current fear of collaboration is hurting more than helping. I’m not suggesting open-sourcing everything, but I am suggesting that risk management become more similar to standard software development. This will allow us to focus on the hard problems of implementation, operation and real innovation.