Biometrics 2.0: More Data, More Risks
Biometric authentication has come a long way since the early days of fingerprint scanners and basic facial recognition. Once considered futuristic, these technologies have now become routine, letting users unlock phones, verify bank logins, and sign legal documents with just a glance or a touch. Yet as biometric methods become more accurate and widespread—especially when paired with behavioral biometrics—serious questions arise about data usage, security, and accountability. Regulators around the globe, including the European Banking Authority (EBA), are signaling a heightened focus on how these systems work, what data they collect, and how that data is stored, shared, and potentially misused.
The Maturation of Biometric Technology
Initially, biometrics revolved around physical traits like fingerprints or facial geometry. Over the years, the spectrum has widened to include iris scans, voice recognition, and vein pattern analysis. Even more interestingly, financial institutions and fintechs have ventured into behavioral biometrics, which uses a person’s unique patterns—such as typing speed, mouse movements, or touchscreen gestures—to confirm identity.
In an earlier blog post we noted that many banks employ behavioral signals to gauge post-login activity, hoping to flag suspicious behavior without bothering customers with extra prompts. In that sense, behavioral biometrics was seen as a frictionless enhancement to Strong Customer Authentication (SCA). But frictionless or not, the data it gathers can be expansive: continuous monitoring can capture everything from swiping gestures to subtle movement patterns that might inadvertently reveal health conditions, disabilities, or other sensitive attributes. The very richness of data that makes behavioral biometrics so powerful also renders it a privacy and security minefield.
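To make that concrete, here is a minimal sketch of the kind of keystroke-dynamics signal such systems build on: inter-key latencies summarized into a rhythm profile and compared against an enrolled baseline. The function names and the 25% tolerance are hypothetical choices for illustration, not any vendor's actual model.

```python
from statistics import mean, stdev

def keystroke_features(key_press_times: list[float]) -> dict[str, float]:
    """Summarize typing rhythm from key-press timestamps (in seconds)."""
    if len(key_press_times) < 3:
        raise ValueError("need at least three key presses for a rhythm sample")
    # Latency between consecutive key presses is the classic base signal.
    latencies = [b - a for a, b in zip(key_press_times, key_press_times[1:])]
    return {
        "mean_latency": mean(latencies),
        "latency_stdev": stdev(latencies),
    }

def matches_profile(session: dict[str, float], enrolled: dict[str, float],
                    tolerance: float = 0.25) -> bool:
    """Flag a session whose rhythm drifts too far from the user's enrolled
    baseline. The 25% tolerance is an arbitrary illustrative threshold."""
    drift = abs(session["mean_latency"] - enrolled["mean_latency"])
    return drift <= tolerance * enrolled["mean_latency"]
```

Even this toy example hints at why the data is sensitive: typing rhythm can shift with injury, medication, or fatigue, which is exactly the kind of inadvertent disclosure described above.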
Deepfake Risks, KYC, and the New Breed of Attacks
The expansion of biometrics into know-your-customer (KYC) flows has fueled both innovation and vulnerability. Imagine signing up for a new service: you scan your passport, and an app compares the embedded passport photo to a live face capture from your mobile camera. It sounds secure, until you realize a sufficiently advanced attacker could replace the camera feed with the output of a deepfake engine that replicates your facial features. If that attacker has also stolen your physical passport, the biometric check may be fooled into confirming a “live” match based on nothing more than a stolen document and a synthetic face. This becomes a chilling scenario if the KYC process is tied to re-enrolling accounts with stored balances or sensitive data.
Traditional biometric checks were often lauded for convenience and speed, but these new deepfake tactics demonstrate that bigger data sets and more sophisticated AI can also make systems more vulnerable. The difficulty lies in verifying not only the physical or behavioral trait, but also the authenticity of the channel delivering that data. Is the user’s face really coming from a phone camera in real time, or is it a generated image piped into the feed? Or what if the camera itself has been replaced? This is why regulators worldwide are urging heightened scrutiny: more robust liveness checks, cryptographic safeguards, and stricter controls on how biometric samples are stored and validated.
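One of those cryptographic safeguards can be made concrete. Assuming the capturing device holds a hardware-bound Ed25519 key enrolled with the server, the verifier can require every frame to be signed together with a server-issued nonce and a capture timestamp, so that an injected or replayed image fails the check. This is an illustrative sketch using Python's `cryptography` package; the payload format and the 60-second freshness window are assumptions, not a standard.

```python
from datetime import datetime, timedelta, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

MAX_CAPTURE_AGE = timedelta(seconds=60)  # assumed freshness window

def verify_capture(image_bytes: bytes, nonce: bytes, captured_at: datetime,
                   signature: bytes, device_public_key: Ed25519PublicKey,
                   expected_nonce: bytes) -> bool:
    """Accept a frame only if it binds this session's nonce, is fresh,
    and was signed by the device key enrolled for this user."""
    if nonce != expected_nonce:
        return False  # capture was not made for this session
    if datetime.now(timezone.utc) - captured_at > MAX_CAPTURE_AGE:
        return False  # stale frame suggests a replay
    # The signed payload layout here is a made-up convention for the sketch.
    payload = image_bytes + nonce + captured_at.isoformat().encode()
    try:
        device_public_key.verify(signature, payload)
    except InvalidSignature:
        return False  # injected or tampered feed
    return True
```

A deepfake piped into the video stream in software would still fail this check unless the attacker also controls the device's hardware-bound key, which is precisely the point of anchoring the channel cryptographically.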
Regulatory Spotlight and Ethical Accountability
As biometric data becomes a linchpin of modern authentication, data protection authorities and financial regulators are spelling out their expectations more clearly. Some new guidelines propose that banks and fintechs must encrypt all biometric samples end-to-end, regularly audit the algorithms that match faces or fingerprints, and ensure minimal data retention. In parallel, user consent is no longer a “check the box” exercise—it must be explicit, informed, and freely given, especially for behavioral biometrics that operate in the background.
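As a rough illustration of what “encrypt at rest with minimal retention” can look like in practice, the sketch below wraps a biometric template in symmetric encryption and refuses to decrypt it once an assumed 90-day retention window has passed. Key management details (HSMs, rotation, audit logging) are deliberately out of scope.

```python
import time

from cryptography.fernet import Fernet

RETENTION_SECONDS = 90 * 24 * 3600  # assumed 90-day retention policy

key = Fernet.generate_key()  # in practice, held in an HSM/KMS, never in code
vault = Fernet(key)

def store_template(template: bytes) -> tuple[bytes, float]:
    """Encrypt a biometric template at rest and record its expiry time."""
    return vault.encrypt(template), time.time() + RETENTION_SECONDS

def load_template(ciphertext: bytes, expires_at: float) -> bytes | None:
    """Refuse to decrypt templates past their retention deadline."""
    if time.time() > expires_at:
        return None  # retention window elapsed: treat as deleted
    return vault.decrypt(ciphertext)
```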
In the EU, the Revised Payment Services Directive (PSD2) already set the stage for SCA, but upcoming directives, whether PSD3 or beyond, are likely to tighten rules on how biometric and behavioral data may be used. This includes clarifying liability if a biometric vector is compromised or if a deepfake sidesteps liveness checks. If new frameworks follow the course of the General Data Protection Regulation (GDPR), companies that misuse or inadequately protect biometric data could face substantial fines and reputational damage.
Striking a Balance: The Path Forward
Biometric technology is advancing so rapidly that even a well-designed system can become outdated in a few years. The real challenge is future-proofing security and data governance from the outset. This means doing more than just adopting the hottest new biometrics. It involves:
- Implementing Advanced Liveness Detection: Providers must invest in techniques that differentiate between a live person and an AI-manipulated feed (see the challenge-response sketch after this list).
- Integrating Cryptographic Anchors: Much like how we handle certificates and keys in digital authentication, biometric references can be protected with hardware-bound tokens and encrypted data vaults.
- Conducting Ongoing Risk Assessments: Once a system is deployed, it must be regularly tested against emerging threats—such as new deepfake algorithms—to ensure that “biometrics 2.0” doesn’t become stale technology overnight.
- Respecting User Choice and Privacy: It’s one thing to capture behavioral data for security. It’s another to store it or combine it with third-party services that may track user habits across platforms. Transparent data policies, opt-in consent, and easy revocation are essential to maintain public trust.
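On the liveness point, one widely used pattern is challenge-response: the server issues an unpredictable sequence of actions and accepts only a fresh, single-use reply, which raises the cost of pre-rendered deepfake replays. A minimal sketch of the protocol skeleton follows; the video-analysis step that classifies each captured segment into an action is assumed, not implemented.

```python
import secrets
import time

ACTIONS = ["turn_left", "turn_right", "blink", "smile"]
CHALLENGE_TTL_SECONDS = 30  # assumed response window

_pending: dict[str, tuple[list[str], float]] = {}  # challenge_id -> (actions, issued_at)

def issue_challenge() -> tuple[str, list[str]]:
    """Create an unpredictable action sequence the client must perform live."""
    challenge_id = secrets.token_urlsafe(16)
    actions = [secrets.choice(ACTIONS) for _ in range(3)]
    _pending[challenge_id] = (actions, time.time())
    return challenge_id, actions

def verify_response(challenge_id: str, observed_actions: list[str]) -> bool:
    """Accept only a fresh, single-use response matching the sequence."""
    record = _pending.pop(challenge_id, None)  # single use: replays fail
    if record is None:
        return False
    actions, issued_at = record
    if time.time() - issued_at > CHALLENGE_TTL_SECONDS:
        return False  # too slow: suggests offline generation
    # In a real system, observed_actions would come from a video-analysis
    # model classifying each captured segment; that step is out of scope here.
    return observed_actions == actions
```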
While biometrics present remarkable convenience and security potential, they also carry the weight of our most personal information. Strengthening SCA with facial recognition or behavioral checks can cut down fraud, but it must be done with eyes wide open to the privacy, compliance, and technological pitfalls. If organizations rush ahead without addressing these risks, deepfakes and other AI-fueled attacks will seize the advantage. On the other hand, those who embrace strict security protocols and robust user protections will be poised to benefit from Biometrics 2.0—earning the trust of regulators and end-users alike in a market that increasingly values accountability as much as innovation.