The Power of PushAuth™

Welcome to the first post in our series on PushAuth™, UnifyID’s push authentication service. This series, outlined below, will help you leverage push authentication to increase the security of your authorization flow.

PushAuth™ Blog Series Content

  1. Comprehensive guide to push authentication: what it is, how it works, and pros and cons of using it. (This post)
  2. An end-to-end tutorial of implementing PushAuth™ in a simple user sign-up flow. (Coming soon)
  3. Technical deep-dive into the details of building and using our open-source project. (Coming soon)
  4. A more secure extension of the simple user sign-up flow that includes user registration and trusted device pairing. (Coming soon)
  5. Integration with AWS Cognito, a commonly used authentication platform. (Coming soon)

Push authentication is one of the most seamless and secure methods of multi-factor authentication. We want to help you understand its importance and the power it has to upgrade your security!

Table Stakes

Let’s clear up two terms that are critical to understanding what PushAuth is solving:

Authorization = allowing you access to a system

Authentication = verifying your identity

There is a great Medium article that describes those differences in more depth. You could have an authorization flow that requires no authentication, letting all users in without proving anything (please don’t do this). On the other hand, you could have an authorization flow that makes users jump through countless hoops to try to make 100% sure the true user is accessing their account. The magic is in striking a balance between these extremes, ensuring a secure authorization process without excessive user annoyance.

One more term to define before moving forward:

Multi-factor authentication = an authentication method that requires two or more pieces of proof before granting a user access to resources

Multi-factor authentication (MFA) is sometimes referred to as two-factor authentication (2FA). Here is a page on MFA basics provided by NIST.

Why use multi-factor authentication?

Before jumping into push authentication, let’s explain MFA. The three basic authentication mechanisms are:

  1. Knowledge: something you know (password, security question answers, PIN)
  2. Possession: something you have (access badge, ATM card, smartphone, hard token)
  3. Biometric: something you are (fingerprint, face, gait, voice)

As defined above, MFA uses two or more of these authentication mechanisms. There are many security benefits to using MFA. The primary benefit is increased confidence in an authorization flow. This stronger assertion on the user’s identity increases the application’s security. Using MFA mitigates the dangers of poor security habits many users have with passwords, such as writing them down, making them simple or easily-guessed, and reusing them across platforms.

Push authentication is one method of accomplishing MFA. It usually requires the user to download an app on their phone and register it with their account; the app then receives push notifications during the login flow. In registering the device, the user establishes two authentication mechanisms: knowledge of their username/password combination and possession of their registered device. Here is an example of using push authentication in a login flow:

A user logs in to a web application by entering their username and password. Upon clicking the button to log in, the server sends a push notification to the app (which the user has already downloaded and registered). The user unlocks their phone and authorizes the request by tapping “Accept” on the notification. The server receives this approved response and proceeds to log the user in to the web app. If the user instead taps “Deny,” the authentication step fails and the user does not gain access to the web app.
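To make that flow concrete, here is a minimal server-side sketch. Everything in it is illustrative: the base URL, endpoint names, payload fields, and polling scheme are assumptions rather than the actual PushAuth™ API (the real integration is covered in the next post of this series).

```python
import time
import requests

API_BASE = "https://pushauth.example.com"  # hypothetical provider endpoint
API_KEY = "server-api-key"                 # hypothetical server credential

def check_password(username: str, password: str) -> bool:
    """Placeholder for your existing first factor (knowledge)."""
    raise NotImplementedError

def login(username: str, password: str) -> bool:
    if not check_password(username, password):
        return False
    # Second factor (possession): ask the provider to send a push
    # notification to the user's registered device.
    resp = requests.post(
        f"{API_BASE}/push",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"user": username, "message": "Log in to the web app?"},
    )
    request_id = resp.json()["id"]
    # Poll until the user taps Accept or Deny, or the request expires.
    for _ in range(30):
        status = requests.get(f"{API_BASE}/push/{request_id}").json()["status"]
        if status == "accepted":
            return True
        if status == "denied":
            return False
        time.sleep(1)
    return False  # treat a timeout as a failed authentication
```

In a real deployment the polling loop would typically be replaced by a webhook or server-sent event from the provider, but the shape of the flow is the same.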

Other types of MFA include SMS codes, phone calls, email codes or magic links, authenticator app verification codes, and hardware tokens. This PCMag article does a good job of aggregating different ways well-known companies offer MFA. In the next section we discuss the advantages push authentication has over other methods.

Why choose push authentication?

In a nutshell, push authentication increases security in a cost-effective manner without sacrificing usability. This section lists some of the aspects specific to push authentication that make it superior to other methods of MFA, starting with the cost of delivering each authentication message:

MFA method                | Free tier | Cost per million
SMS                       | 100       | $6,540
Email                     | 1,000     | $20
Mobile Push Notifications | 1 million | $0.50
  • Convenient and easy to use. Users who own smartphones are already comfortable with apps and notifications, so there is nothing new for them to learn. Push notifications are also much easier to respond to than other methods: no typing, no clicking links, no copying codes while racing a countdown timer. Authentication happens with a single, speedy tap on the screen.
  • Secure. The push authentication mobile app is installed securely on the user’s device, so there is no reliance on the user’s accounts with external services. This out-of-band communication can’t be intercepted at the point of password entry, and it is encrypted end to end between the application and the push authentication provider. The many insecurities of the SMS method led to the decline of its prominence in this space.
  • Fast. Since authentication requests are sent in real time via notifications, a user becomes aware of fraudulent requests the moment they happen and can deny them and promptly take action. Unlike some other MFA methods, push authentication does require the user to unlock their device before responding to a request; however, this is friction users already accept on a day-to-day (or, more realistically, minute-to-minute) basis. On devices with TouchID or FaceID, the process is nearly frictionless, and the unlock step actually adds security by preventing an attacker with physical access from approving requests unhindered.
  • No hardware to manage. Unlike with hardware tokens, push authentication utilizes users’ existing phones. Since most people consider their phone an extension of their body, this also means the hardware is unlikely to be easily or frequently misplaced. When phones are lost or broken, the responsibility is on the user, rather than the service provider, to obtain a replacement.
  • No over-the-shoulder copy-ability. Codes and magic links sent via SMS/email, and codes generated in an authenticator app, can all be observed and reused by an attacker to log in. Push notifications contain nothing that can be copied or replayed.

Is push authentication perfect?

Unsurprisingly, like most things, no. On a basic level, push authentication requires that every user has a fully functional smartphone. Without a smartphone, they can’t run the mobile app (and the provider must offer one). Without an internet connection, the device can’t receive push notifications. If a user’s phone is lost, stolen, or otherwise out of their possession, push authentication fails outright; the same is true if the device is in hand but out of battery, waterlogged, or otherwise non-functional. For the sake of argument, let’s assume none of these scenarios apply. Push authentication is still not “perfect.”

There are some valid security concerns surrounding the use of push authentication as a method for MFA. Smartphones themselves, where the authentication step happens, are vulnerable to attacks and viruses. There’s also the issue of trusting a device to be associated with its true user during registration, and of preventing attackers from registering devices under the true user’s account. Even a trusted, pre-registered device is a potential point of entry for an attacker. You can’t ensure that users protect their devices with a locking mechanism; if a phone isn’t locked, it’s easy for an attacker to accept a notification on the user’s behalf. Finally, it’s entirely possible for a user to accidentally approve a fraudulent request on their device without realizing it, or to realize it too late.

However, there are solutions to these problems. If a device is temporarily out of possession, you can provide fallback resources (one-time passcodes, email or SMS delivery, security questions, etc.) as a workaround for that authentication attempt. If the device is permanently out of possession, as with a ruined or stolen phone, you can provide a way to revoke access from the old device and let users self-provision the app on a new one. If an attacker gains physical access to a device before its access is revoked, the user’s passcode, TouchID, or FaceID presents yet another authentication barrier; while that can be attacked as well, it still raises the bar for entry in the meantime. As for users accepting fraudulent requests, the push notification prompt can be extended to include a unique identifier that must match a value shown on the web login page, setting the expectation that the user confirms the value before accepting the request (see the sketch below).
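As a rough sketch of that last mitigation (the function and payload shape here are ours, not any particular provider’s API): the server generates a short code, renders it on the login page, and embeds the same code in the push prompt so the user can confirm the two match before tapping Accept.

```python
import secrets

def new_login_challenge() -> str:
    """A short code shown on the login page and echoed in the push prompt."""
    return f"{secrets.randbelow(100):02d}"  # e.g. "07" or "42"

challenge = new_login_challenge()

# Rendered on the web login page:
print(f"Confirm that your phone shows the code {challenge}")

# Included in the push notification payload, so a fraudulent request
# (triggered from an attacker's session) shows a non-matching code:
push_payload = {
    "title": "Login request",
    "body": f"Approve only if the login page shows code {challenge}",
}
```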

There’s only so much you can do as a developer, though. People will still make mistakes, and attackers will still find ways in. The main point is that these are issues you, as a developer, should be aware of and address. All things considered, push authentication sits in a great place security-wise, is remarkably low-friction, and (perhaps most importantly) is incredibly cost-effective. Keep an eye out for our next post in the series, which will walk you through using our open-source project to implement PushAuth™ yourself!

Vulnerability of deep learning-based gait biometric recognition to adversarial perturbations

PDF of full paper: Vulnerability of deep learning-based gait biometric recognition to adversarial perturbations
Full-size poster image: Vulnerability of deep learning-based gait biometric recognition to adversarial perturbations

[This paper was presented on July 21, 2017 at The First International Workshop on The Bright and Dark Sides of Computer Vision: Challenges and Opportunities for Privacy and Security (CV-COPS 2017), in conjunction with the 2017 IEEE Conference on Computer Vision and Pattern Recognition.]

Vinay Uday Prabhu and John Whaley, UnifyID, San Francisco, CA 94107

Abstract

In this paper, we would like to draw attention towards the vulnerability of the motion sensor-based gait biometric in deep learning-based implicit authentication solutions when attacked with adversarial perturbations obtained via the simple fast-gradient sign method. We also showcase the improvement expected from incorporating these synthetically generated adversarial samples into the training data.

Introduction

In recent times, password entry-based user-authentication methods have increasingly drawn the ire of the security community [1], especially given their prevalence in the world of mobile telephony. Researchers [1] recently showcased that creating passwords on mobile devices not only takes significantly more time but is also more error-prone and frustrating, and, worst of all, the created passwords are inherently weaker. One of the promising solutions that has emerged entails implicit authentication [2] of users based on behavioral patterns that are sensed without the active participation of the user. In this domain of implicit authentication, measurement of gait-cycle [3] signatures, mined using the on-phone Inertial Measurement Unit – MicroElectroMechanical Systems (IMU-MEMS) sensors, such as accelerometers and gyroscopes, has emerged as an extremely promising passive biometric [4, 5, 6]. As stated in [7, 5], gait patterns can not only be collected passively, at a distance, and unobtrusively (unlike iris, face, fingerprint, or palm veins), they are also extremely difficult to replicate due to their dynamic nature.

Inspired by the immense success that Deep Learning (DL) has enjoyed in recent times across disparate domains, such as speech recognition, visual object recognition, and object detection [8], researchers in the field of gait-based implicit authentication are increasingly embracing DL-based machine-learning solutions [4, 5, 6, 9], thus replacing the more traditional hand-crafted-feature-engineering-driven shallow machine-learning approaches [10]. Besides circumventing the oft-contentious process of hand-engineering the features, these DL-based approaches are also more robust to noise [8], which bodes well for the implicit-authentication solutions that will be deployed on mainstream commercial hardware. As evinced in [4, 5], these classifiers have already attained extremely high accuracy (∼96%) when trained under the k-class supervised classification framework (where k pertains to the number of individuals).

While these impressive numbers give the impression that gait-based deep implicit authentication is ripe for immediate commercial implementation, we would like to draw the attention of the community towards a crucial shortcoming. In 2014, Szegedy et al. [11] discovered that, quite like shallow machine-learning models, state-of-the-art deep neural networks are vulnerable to adversarial examples: inputs synthetically generated by strategically introducing small perturbations, such that the resultant adversarial example is only slightly different from a correctly classified example drawn from the data distribution, yet results in a potentially controlled misclassification. To make things worse, a plethora of models with disparate architectures, trained on different subsets of the training data, have been found to misclassify the same adversarial example, uncovering the presence of fundamental blind spots in our DL frameworks. Since this discovery, several works have emerged ([12, 13]) addressing both means of defence against adversarial examples and novel attacks. Recently, the cleverhans software library [13] was released; it provides standardized reference implementations of adversarial example-construction techniques and adversarial training, thereby facilitating rapid development of machine-learning models robust to adversarial attacks, as well as standardized benchmarks of model performance in the adversarial setting described above.

In this paper, we focus on harnessing the simplest of all adversarial attack methods, the fast gradient sign method (FGSM), to attack the IDNet deep convolutional neural network (DCNN)-based gait classifier introduced in [4]. Our main contributions are as follows:

  1. This is, to the best of our knowledge, the first paper to introduce deep adversarial attacks into this non-computer-vision setting, specifically the gait-driven implicit-authentication domain. In doing so, we hope to draw the attention of the community towards this crucial issue, in the hope that future publications will incorporate adversarial training as a default part of their training pipelines.
  2. One of the enduring images widely circulated in the adversarial-training literature is the panda + nematode = gibbon adversarial-attack example on GoogleNet in [14], which vividly showcased the potency of the blind spot. In this paper, we do the same with accelerometric data, illustrating how a small and seemingly imperceptible perturbation to the original signal can cause the DCNN to make a completely wrong inference with high probability.
  3. We empirically characterize the degradation of classification accuracy under an FGSM attack, and also highlight the improvement upon introducing adversarial training.
  4. Lastly, we have open-sourced the code here.

Figure 1. Variation in the probability of correct classification (37 classes) with and without adversarial training for varying ε.

Figure 2. The true accelerometer amplitude signal and its adversarial counterpart for ε = 0.4.

2. Methodology and Results

In this paper, we focus on the DCNN-based IDNet [4] framework, which entails harnessing low-pass-filtered tri-axial accelerometer and gyroscope readings (plus the sensor-specific magnitude signals) to first extract the gait template, of dimension 8 × 200, which is then used to train a DCNN in a supervised-classification setting. In the original paper, the model identified users in real time by using the DCNN as a deep-feature extractor and further training an outlier detector (a one-class support vector machine, SVM), whose individual gait-wise outputs were finally combined in a Wald’s probability-ratio-test-based framework. Here, we focus on the trained IDNet-DCNN and characterize its performance in the adversarial-training regime. To this end, we harness the FGSM introduced in [14], where the adversarial example x̃ for a given input sample x is generated as x̃ = x + ε · sign(∇ₓJ(θ, x)), where θ represents the parameter vector of the DCNN, J(θ, x) is the cost function used to train the DCNN, and ∇ₓ(·) denotes the gradient with respect to x.
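In code, the FGSM update is a one-liner in any autodiff framework. The following PyTorch sketch is our own illustration of the update above, not the implementation used for the experiments in this paper:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Return x_adv = x + eps * sign(grad_x J(theta, x)) for true labels y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)  # J(theta, x)
    loss.backward()                          # populates x_adv.grad with grad_x J
    return (x_adv + eps * x_adv.grad.sign()).detach()
```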

As seen, this method is parametrized by ε, which controls the magnitude of the inflicted perturbation. Fig. 2 showcases the true and adversarial gait-cycle signals for the accelerometer magnitude signal, given by a_mag(t) = √(a_x²(t) + a_y²(t) + a_z²(t)), for ε = 0.4. Fig. 1 captures the drop in the probability of correct classification (37 classes) with increasing ε. First, we see that in the absence of any adversarial examples we obtain about 96% accuracy on the 37-class classification problem, which is very close to what is claimed in [4]. However, even mild perturbations (ε = 0.4) produce a sharp decrease of nearly 40% in accuracy. Fig. 1 also captures the effect of including the synthetically generated adversarial examples in the training data. We see that, for ε = 0.4, we manage to achieve about 82% accuracy, a vast improvement of ∼25%.
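For reference, the magnitude signal defined above is straightforward to compute from raw tri-axial samples; a minimal numpy sketch:

```python
import numpy as np

def accel_magnitude(a: np.ndarray) -> np.ndarray:
    """a has shape (N, 3) with columns (a_x, a_y, a_z); returns a_mag per sample."""
    return np.linalg.norm(a, axis=1)  # sqrt(a_x^2 + a_y^2 + a_z^2)
```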

3. Future Work

This brief paper is part of an ongoing research endeavor. We are currently extending this work to other adversarial-attack approaches, such as the Jacobian-based Saliency-Map Approach (JSMA) and the Black-Box Attack (BBA) approach [15]. We are also investigating the effect of these attacks within the deep-feature-extraction+SVM approach of [4], and comparing other architectures, such as [6] and [5].

References
[1] W. Melicher, D. Kurilova, S. M. Segreti, P. Kalvani, R. Shay, B. Ur, L. Bauer, N. Christin, L. F. Cranor, and M. L. Mazurek, “Usability and security of text passwords on mobile devices,” in Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 527–539, ACM, 2016.
[2] E. Shi, Y. Niu, M. Jakobsson, and R. Chow, “Implicit authentication through learning user behavior,” in International Conference on Information Security, pp. 99–113, Springer, 2010.
[3] J. Perry, J. R. Davids, et al., “Gait analysis: normal and pathological function,” Journal of Pediatric Orthopaedics, vol. 12, no. 6, p. 815, 1992.
[4] M. Gadaleta and M. Rossi, “IDNet: Smartphone-based gait recognition with convolutional neural networks,” arXiv preprint arXiv:1606.03238, 2016.
[5] Y. Zhao and S. Zhou, “Wearable device-based gait recognition using angle embedded gait dynamic images and a convolutional neural network,” Sensors, vol. 17, no. 3, p. 478, 2017.
[6] S. Yao, S. Hu, Y. Zhao, A. Zhang, and T. Abdelzaher, “DeepSense: A unified deep learning framework for time-series mobile sensing data processing,” arXiv preprint arXiv:1611.01942, 2016.
[7] S. Wang and J. Liu, Biometrics on Mobile Phone. INTECH Open Access Publisher, 2011.
[8] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
[9] N. Neverova, C. Wolf, G. Lacey, L. Fridman, D. Chandra, B. Barbello, and G. Taylor, “Learning human identity from motion patterns,” IEEE Access, vol. 4, pp. 1810–1820, 2016.
[10] C. Nickel, C. Busch, S. Rangarajan, and M. Möbius, “Using hidden Markov models for accelerometer-based biometric gait recognition,” in Signal Processing and its Applications (CSPA), 2011 IEEE 7th International Colloquium on, pp. 58–63, IEEE, 2011.
[11] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” arXiv preprint arXiv:1312.6199, 2013.
[12] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015.
[13] N. Papernot, I. Goodfellow, R. Sheatsley, R. Feinman, and P. McDaniel, “cleverhans v1.0.0: an adversarial machine learning library,” arXiv preprint arXiv:1610.00768, 2016.
[14] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv:1412.6572, 2014.
[15] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, “Practical black-box attacks against deep learning systems using adversarial examples,” arXiv preprint arXiv:1602.02697, 2016.

Smile in the face of adversity much? A print based spoofing attack

PDF of full paper: Smile in the face of adversity much? A print based spoofing attack
Full-size poster image: Smile in the face of adversity much? A print based spoofing attack

[This paper was presented on July 21, 2017 at The First International Workshop on The Bright and Dark Sides of Computer Vision: Challenges and Opportunities for Privacy and Security (CV-COPS 2017), in conjunction with the 2017 IEEE Conference on Computer Vision and Pattern Recognition.]

Vinay Uday Prabhu and John Whaley, UnifyID, San Francisco, CA 94107

Abstract

In this paper, we demonstrate a simple face spoof attack targeting the face recognition system of a widely available commercial smart-phone. The goal of this paper is not to proclaim a new spoof attack but rather to draw the attention of anti-spoofing researchers towards a very specific shortcoming shared by one-shot face recognition systems: enhanced vulnerability when a smiling reference image is used.

Introduction

One-shot face recognition (OSFR), or single sample per person (SSPP) face recognition, is a well-studied research topic in computer vision (CV) [8]. Solutions such as Local Binary Pattern (LBP) based detectors [1], Deep Lambertian Networks (DLN) [9], and Deep Supervised Autoencoders (DSA) [4] have been proposed in recent times to make OSFR systems more robust to the changes in illumination, pose, facial expression, and occlusion that they encounter when deployed in the wild. One very interesting application of face recognition that has gathered traction lately is mobile device unlocking [6]. One of the highlights of Android 4.0 (Ice Cream Sandwich) was the Face Unlock screen-lock option, which allowed users to unlock their devices with their faces. It is rather imperative that we mention here that this option is always presented to the user with a cautioning clause that typically reads like “Face recognition is less secure than pattern, PIN, or password.”

The reasoning behind this is that there exists a plethora of face spoof attacks, such as print attacks, malicious identical-twin attacks, sleeping-user attacks, replay attacks, and 3D mask attacks, all of which are fairly successful against most commercial off-the-shelf face recognizers [7]. This ease of spoofing has attracted the attention of CV researchers and led to considerable effort on liveness-detection anti-spoofing frameworks such as Secure-face [6] (see [3] for a survey).

Recently, a major smartphone manufacturer introduced a face recognition-based phone-unlocking feature. The announcement was promptly followed by media reports of users demonstrating several types of spoof attacks.

In this paper, we explore a simple print attack on this smart-phone. The goal is not to proclaim a new spoof attack but rather to draw the attention of the anti-spoofing community towards a very specific shortcoming of face recognition systems that we uncovered in this investigation.

2. Methodology and Results

Figure 1. Example of two neutral expression faces that failed to spoof the smart-phone’s face recognition system.

Figure 2. Example of two smiling registering faces that successfully spoofed the smart-phone’s face recognition system.

The methodology we used entailed taking a low-quality printout of the target user’s face on plain white US letter paper (8.5 × 11.0 inches) and then attempting to unlock the device by simply holding the printed page in front of the camera. Given the poor quality of the printed images, we observed that this simple print attack was duly repulsed by the detection system as long as the subject sported a neutral facial expression during the registration phase. However, when we repeated the attack such that the subject had an overtly smiling face when (s)he registered, we were able to break in successfully with high regularity.

In Figure 1, we see two examples of neutral expression faces that failed to spoof the smart-phone’s face recognition system when the registering image had a neutral facial expression. A video containing the failed spoofing attempt with a neutral facial expression can be viewed here.

In Figure 2, we see the same two subjects’ images that successfully spoofed the phone’s face recognition system when the registering (enrollment) image was overtly smiling. The face training demo videos are available here. The video of the successful spoof can be viewed here.

2.1. Motivation for the attack and discussion

It has long been known in the computer vision community that faces displaying expressions, especially smiles, result in stronger recall and discrimination power [10]. In fact, the authors in [2] termed this the happy-face advantage and showcased the variation in detection performance across facial expressions. It was our experimentation with this specific one-shot classification scenario, in which the enrollment face bears a strong smile, that led to the discovery of this attack. As for defense against this attack, there are two straightforward recommendations. The first is to simply display a message urging the user to maintain a passport-style neutral facial expression. The second is to use a smile detector such as [5] as a pre-filter that only allows smile-free images as the reference image (see the sketch below).
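To illustrate the second recommendation, here is a minimal pre-filter sketch using OpenCV’s stock Haar cascades as a stand-in smile detector (the detector in [5] is a deep CNN; the cascade here is simply a readily available approximation):

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def acceptable_reference_image(path: str) -> bool:
    """Accept an enrollment image only if it shows one face with no smile."""
    img = cv2.imread(path)
    if img is None:
        return False
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) != 1:
        return False  # require exactly one clearly detected face
    x, y, w, h = faces[0]
    smiles = smile_cascade.detectMultiScale(
        gray[y:y + h, x:x + w], scaleFactor=1.7, minNeighbors=20)
    return len(smiles) == 0  # reject overtly smiling reference images
```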

References
[1] T. Ahonen, A. Hadid, and M. Pietikainen. Face description with local binary patterns: Application to face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(12):2037–2041, 2006.
[2] W. Chen, K. Lander, and C. H. Liu. Matching faces with emotional expressions. Frontiers in Psychology, 2:206, 2011.
[3] J. Galbally, S. Marcel, and J. Fierrez. Biometric antispoofing methods: A survey in face recognition. IEEE Access, 2:1530–1552, 2014.
[4] S. Gao, Y. Zhang, K. Jia, J. Lu, and Y. Zhang. Single sample face recognition via learning deep supervised autoencoders. IEEE Transactions on Information Forensics and Security, 10(10):2108–2118, 2015.
[5] P. O. Glauner. Deep convolutional neural networks for smile recognition. arXiv preprint arXiv:1508.06535, 2015.
[6] K. Patel, H. Han, and A. K. Jain. Secure face unlock: Spoof detection on smartphones. IEEE Transactions on Information Forensics and Security, 11(10):2268–2283, 2016.
[7] D. F. Smith, A. Wiliem, and B. C. Lovell. Face recognition on consumer devices: Reflections on replay attacks. IEEE Transactions on Information Forensics and Security, 10(4):736–745, 2015.
[8] X. Tan, S. Chen, Z.-H. Zhou, and F. Zhang. Face recognition from a single image per person: A survey. Pattern Recognition, 39(9):1725–1745, 2006.
[9] Y. Tang, R. Salakhutdinov, and G. Hinton. Deep Lambertian networks. arXiv preprint arXiv:1206.6445, 2012.
[10] Y. Yacoob and L. Davis. Smiling faces are better for face recognition. In Automatic Face and Gesture Recognition, 2002. Proceedings. Fifth IEEE International Conference on, pages 59–64. IEEE, 2002.

UnifyID Anoints 16 Distinguished Scientists for the AI Fellowship

Fast Growing Startup Uses Machine Learning to Solve Passwordless Authentication

Today, UnifyID, a service that can authenticate you based on unique factors like the way you walk, type, and sit, announced the final 16 fellows selected for its inaugural Artificial Intelligence Fellowship for Fall 2016. Each of the fellows has shown exemplary leadership and curiosity in making a meaningful difference in our society, and each clearly has an aptitude for making sweeping changes in this rapidly growing area of AI.

Speaking of the company’s recent launch and success at TechCrunch Disrupt, where it claimed SF Battlefield Runner-Up (2nd of 1,000 applicants worldwide), UnifyID CEO John Whaley said, “We were indeed overwhelmed by the amazing response to our first edition of the AI Fellowship and the sheer quality of applicants we received. We also take immense pride in the fact that more than 40% of our chosen cohort will be women, which further reinforces our commitment as one of the original 33 signees of the U.S. White House Tech Inclusion Pledge.”

The final 16 fellows hail from Israel, Paris, Kyoto, Bangalore, and cities across the U.S., with Ph.D., M.S., M.B.A., and B.S. degrees from MIT, Stanford, Berkeley, Harvard, Columbia, NYU-CIMS, UCLA, and Wharton, among other top institutions.

  • Aidan Clark: triple major in Math, Classical Languages, and CS at UC Berkeley
  • Anna Venancio-Marques: Data Scientist in Residence, PhD, École normale supérieure
  • Arik Sosman: Software Engineer at BitGo, 2x Apple WWDC scholar, CeBIT speaker
  • Baiyu Chen: Convolutional Neural Network Researcher, Masters in CS at UC Berkeley
  • Fuxiao Xin: Lead Machine Learning Scientist at GE Global Research, PhD Bioinformatics
  • Kathy Sohrabi: VP Engineering, IoT and sensors, MBA at Wharton, PhD EE at UCLA
  • Kazu Komoto: Chief Robotics Engineer, CNET Writer, Masters in ME at Kyoto University
  • Laura Florescu: Co-authored Asymptopia, Mathematical Reviewer, PhD CS at NYU
  • Lorraine Lin: Managing Director, MFE Berkeley, PhD Oxford, Masters Design Harvard
  • Morgan Lai: AI Scientist, MIT Media Lab, Co-founder/CTO, M.Eng. CS at MIT
  • Pushpa Raghani: Post Doc Researcher at Stanford and IBM, PhD Physics at JNCASR
  • Raul Puri: Machine Learning Development at Berkeley, BS EE/CS/Bioeng at Berkeley
  • Sara Hooker: Data Scientist, founder of a non-profit for educational access in rural Africa
  • Siraj Raval: Data Scientist, the Bill Nye of Computer Science on YouTube
  • Wentao Wang: Senior New Tech Integration Engineer at Tesla, PhD ME at MIT
  • Will Grathwohl: Computer Vision Specialist, Founder/Chief Scientist, BS at MIT (CSAIL)


This highly selective, cross-disciplinary program covers the following areas:

  • Deep Learning
  • Signal Processing
  • Optimization Theory
  • Sensor Technology
  • Mobile Development
  • Statistical Machine Learning
  • Security and Identity
  • Human Behavior

Our UnifyID AI Fellows will choose one of 16 well-defined projects in the broad area of applied artificial intelligence, in the context of solving the problem of seamless personal authentication. The Fellows will be led by our esteemed Fellowship Advisors, renowned experts in machine learning with PhDs from CMU, Stanford, and the University of Vienna, Austria.

Please welcome our incoming class! ✨


Read the original UnifyID AI Fellowship Announcement:

https://unify.id/2016/10/10/announcing-the-unifyid-ai-fellowship/


Initial Release:

http://www.prweb.com/releases/2016/unifyid/prweb13804371.htm#!