When building an online marketplace, there are a few rites of passage that are pretty ubiquitous. First, you try to build traction and attract more users. Once you do, you’ll hit a milestone, be it one million users, one million transactions, or anything of the like, that is momentous and serves as a bit of a coming out; it celebrates your arrival. This momentum will attract more users and you will feel like you are on a rocket ship to growth. The only problem is that you will attract the attention of other people as well, people we call “bad actors.” You now have something of value, something they want to steal. Bad actors can create accounts quickly and en masse, working to spam, steal (information and/or money), and degrade the experience of your good users. How do you stop them? More importantly, how do you stop them without creating friction for the good users you are trying to protect, all while keeping the integrity of your platform intact?

These are some of the issues we tackled at our third Trust & Safety Meetup last week. Featuring a panel of top experts from some of the largest online communities in the world, we talked about the challenges of securing an online marketplace, what these companies are doing to combat fraud, and what still needs to be done.

Our panel included:

  • Adelin Cai – Head of Policy, Pinterest
  • Eliza Jacobs – Community Policy Manager, Airbnb
  • Michael Pezely – Sr. Manager, Trust Technology, OfferUp
  • Noam Naveh – Independent Consultant on Online Payment & Identity Fraud Prevention, FraudStrategy.com
  • Phillip Cardenas – Head of Global Trust and Safety, Uber
  • Sami Sharaiha – Head of Risk and Payments, TripAdvisor

One of the main takeaways of the discussion, and an ongoing theme throughout all of the responses, was prioritizing, and I mean that in a few ways.

[Photo: Warm up exercises – there’s no escaping them]

The first priority for everyone was, naturally, the users: the members who make up their respective communities. It was clear that our panelists were laser-focused on that above all else. Their job was to protect users and the user experience, through both technology and policy.

A new issue that many companies are dealing with now, introduced by the sharing and peer-to-peer economy, is physical safety. Physical safety takes precedence over ad fraud, financial fraud, spam, and the like. For those panelists whose communities interact with each other in the physical world, this is not a challenge they take lightly. One safety measure they discussed is understanding, revisiting, and reinforcing your safety promise to users. A number of new concerns will come up that you may not have anticipated, but by developing safety codes of conduct, you make users aware of what they should expect and what is not acceptable. You need to start there.

Another place to prioritize is what you are protecting from a technology standpoint. For example, a product with a quickly growing user base should be a larger focus than one that is being phased out or losing popularity. This is important not only from a resource perspective; it also addresses an issue a lot of security and fraud folks deal with: the fear of crying wolf. If you shut something down too many times on suspicion, people will stop reacting when there is a true security emergency. You have to be sure your engineers are available to help when it’s needed, and not burnt out from putting out small or inconsequential fires.

Another interesting subject, one that might seem more appropriate for a psychology panel than a security panel, was understanding how humans think and act. Oftentimes people do not fall neatly and squarely into “good” and “bad” camps. There is a lot of gray area, and people, frankly, act like people. In most of these communities, it is beneficial to learn as much as you can about your users, not only to know who they are, but also to make them feel accountable rather than anonymous. Additionally, it’s important to look at what may be perceived as “bad” behavior, or something against policy, and communicate with the user to see if he or she even knew it was wrong. Sometimes education is the quickest path to prevention, especially in international communities where some things may literally be lost in translation.

This is perhaps one of the biggest challenges – enforcing without alienating. One solution discussed was user reporting, which seems to work for a number of communities. It not only serves as an extension of your team, but also empowers users to help maintain the integrity of a community about which they feel so passionate. But for more advanced fraudsters, or bad actors who are still posing as legitimate users, this is not as easy to see or address.

It’s clear there is a lot of work still to be done, especially as people share more information online, interact more offline because of these communities, and the fraud landscape continues to evolve quickly and effectively.

There were lots of other points discussed, like reputation systems and trusted personas, as well as other marketplace challenges such as fraud on the supplier side, the secondary market for gift cards, enforcing policies and security without all the available information, and policing merchant quality. We look forward to expanding on some of these topics in future posts.

Be sure to join us for our next panel on User Acquisition Fraud and if you’re interested in learning more about our Trust and Safety community, feel free to reach out to me directly at Julian (dot) Wong (at) datavisor.com.