I recently had the opportunity to attend MoneyLive in London, an event focused on the future of banking innovation. Given the rise of open banking and the regulatory changes associated with PSD2, it was no surprise to find that, as banks head into the next phase of digital transformation, there is increasing emphasis on customer experience and the technologies that enable it. Unquestionably, data remains the epicenter of this transformation.

Creating a Data-Centric Organization

While in attendance, I participated in a panel titled “Creating a Data-Centric Organization.” My discussion with peer panelists Elena Alfaro Martinez (Global Head of Data & Open Innovation, BBVA) and Roshan Awatar (Director of Data Strategy, Lloyds Banking Group) highlighted the many challenges of extracting value from data, and focused on how technology, particularly AI and machine learning, can address a long-standing issue that has plagued banking organizations for decades.

The most vexing dilemma in creating a data-centric organization is how to move beyond treating the centralization of data sources as merely an infrastructure problem. There are many reasons why this approach continues to stall, one of them being the possessiveness associated with data ownership. Individual business units within banks own the data they collect, and creating a collective repository raises questions of security as well as data reconciliation, both daunting tasks to undertake. But contrary to popular belief, being data-centric isn't about data being operationally centralized; it's quite a bit more than that.

Centralizing Intelligence

If we could go back in time a few years, we'd find discussions around big data infrastructure and digital signals focusing solely on creating data lakes and data repositories to make data easily accessible and readily usable.
Regrettably, we haven't moved far from those conversations, and those limited goals still dominate much of our current dialogue. This must change. It is time for us to evolve from centralized data to centralized intelligence. This centralized intelligence should be accessible and adaptable to any business construct, whether new product lines or new use cases. It should draw on and leverage the domain expertise it's been built on, and it should operate at big data scale.

In this era of big data, it's often said that data is the new gold. It is actually quite the contrary. Data is merely a utility, and a commoditized one. The real gold is the intelligence within that data.

The End of “More is Better”

Too often today, we treat commoditized data as something readily available that can be used without concern for data privacy. It's also common to proceed on the assumption that more is better when it comes to data acquisition. However, especially as it relates to GDPR, this “more is better” maxim is not necessarily true. On the contrary, if we are smart about the data we collect and use, we don't need anywhere near the amount of data we think we do. In fact, we can create more meaningful intelligence from derived or aggregated data, in ways that mitigate many of the issues arising from GDPR. As an example, at DataVisor, instead of looking at individual IP addresses, we use IP prefixes and process the data at a group level. By proceeding in this fashion, we ultimately need to know less about individual users, even as we gain superior levels of actionable intelligence.

Open Banking and Unknown Fraud Threats

As we think about this in the context of open banking, it's essential to understand that, for all its opportunities, open banking has its share of associated threats as well. One of them is the worrisome fact that most of these threats are currently unknown.
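To make the IP-prefix idea described earlier concrete, here is a minimal sketch of aggregating events at the network-prefix level rather than the individual-IP level. This is my own illustration, not DataVisor's production code; the function names and the choice of a /24 prefix are assumptions for the example.

```python
from collections import defaultdict
import ipaddress


def ip_prefix(ip: str, prefix_len: int = 24) -> str:
    """Map an individual IPv4 address to its network prefix (e.g. a /24)."""
    net = ipaddress.ip_network(f"{ip}/{prefix_len}", strict=False)
    return str(net)


def group_by_prefix(events, prefix_len: int = 24):
    """Aggregate per-user events at the IP-prefix (group) level.

    Signals are then computed per prefix, so individual IP addresses
    never need to be retained as features.
    """
    groups = defaultdict(list)
    for user_id, ip in events:
        groups[ip_prefix(ip, prefix_len)].append(user_id)
    return groups


events = [
    ("u1", "203.0.113.7"),
    ("u2", "203.0.113.42"),
    ("u3", "198.51.100.5"),
]
groups = group_by_prefix(events)
for prefix, users in groups.items():
    print(prefix, len(users))
```

Here "u1" and "u2" fall into the same /24 group, so any downstream signal (velocity, registration volume, and so on) is computed over the prefix rather than over either user's exact address.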
New and emerging threats come fast and furious, and existing attack strategies modulate, evolve, and adapt. So our solutions cannot depend solely on addressing known types of attacks. They must address unknown attack types, and they must do so in real time.

The Power of Unsupervised Machine Learning

At DataVisor, our machine learning approach focuses on unsupervised learning capabilities. By detecting attacks at the group level, using clustering and graph analysis instead of relying on simple anomaly detection, DataVisor is able to deliver high accuracy with very low negative impact on good users. Plus, because our solutions don't require labels, they can adapt to newer threats without frequent model retraining and retuning.

DataVisor's Feature Platform

DataVisor is committed to empowering its clients with advanced centralized intelligence capabilities. One way we do this is through our Feature Platform, a centralized hub that allows users to create signals and features from big data and manage those features across teams. The Feature Platform also enables businesses to take advantage of proprietary out-of-the-box features tailored to specific use cases, which can deliver highly useful signals almost immediately. The platform also offers flexible, highly scalable operators for customization.

Conclusion

In addressing the past and current challenges of leveraging intelligence from data, I've done so largely through the lens of fraud management and frictionless customer experience, as that's the arena DataVisor operates within. However, I've tried to draw my conclusions in such a way as to make clear how agnostic a centralized intelligence approach truly is. Ultimately, all companies are big data companies now, and as such, we must all make the leap from the centralization of data to the centralization of intelligence. If we fail to do so, we do so at our own peril.
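As a rough illustration of the group-level, unsupervised detection described above, the core idea of clustering via graph analysis can be sketched with a simple union-find over shared signals: users that share an attribute (a device fingerprint, an IP prefix) get linked, and unusually large linked groups surface as candidates for review, with no fraud labels required. This is a toy sketch of the general technique, not DataVisor's actual algorithms; the signal names and the size threshold are assumptions.

```python
from collections import defaultdict


def cluster_users(user_attrs, min_cluster_size=3):
    """Link users that share any attribute value and return the
    clusters of linked users (unsupervised: no labels needed)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Connect each user node to a node for every attribute it exhibits.
    for user, attrs in user_attrs.items():
        for value in attrs:
            union(("user", user), ("attr", value))

    clusters = defaultdict(set)
    for user in user_attrs:
        clusters[find(("user", user))].add(user)

    # Unusually large, tightly linked groups are candidates for review.
    return [c for c in clusters.values() if len(c) >= min_cluster_size]


signals = {
    "u1": {"dev-A", "203.0.113.0/24"},
    "u2": {"dev-A"},
    "u3": {"203.0.113.0/24"},
    "u4": {"dev-B"},
}
print(cluster_users(signals))  # u1, u2, u3 form one linked cluster; u4 does not
```

Because the flagged unit is the correlated group rather than any single anomalous user, a brand-new attack pattern can still surface the moment its accounts start sharing infrastructure, which is the intuition behind detecting unknown attack types without retraining on labels.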