In my previous blog post, “How I stopped worrying and embraced docker microservices”, I talked about why microservices are the bee's knees for scaling Machine Learning in production. A fair amount of time has passed since then (almost a year, whoa), and it has become clear that building Deep Learning pipelines in production is a more complex, multi-faceted problem. Yes, microservices are an amazing tool, both for software reuse, distributed systems design, quick failure and recovery, yada yada. But what seems very obvious now is that Machine Learning services are very stateful, and statefulness is a problem for horizontal scaling.
Context switching latency
An easy way to deal with this issue is to understand that ML models are large, and thus should not be context switched. If a model is started on instance A, you should try to keep it on instance A as long as possible. Nginx Plus comes with support for sticky sessions, which means that requests from a given client can always be load balanced to the same upstream: a super useful feature (a minimal config sketch follows below). That was 30% of the message of my Nginxconf 2017 talk.
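For illustration, here is a minimal sketch of what that looks like in an Nginx Plus config. The upstream group, instance names, and port are hypothetical; the `sticky cookie` directive is the Nginx Plus mechanism that pins a client to one upstream via a session cookie:

```nginx
# Hypothetical upstream group of model-serving instances.
upstream ml_models {
    # Nginx Plus only: pin each client to the instance that
    # already holds its model in memory, via a session cookie.
    sticky cookie srv_id expires=1h path=/;
    server instance-a:8000;
    server instance-b:8000;
}

server {
    listen 80;
    location /predict {
        proxy_pass http://ml_models;
    }
}
```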
The other 70% of my message was urging people to move AWAY from microservices for Machine Learning. In an extreme example, we announced WebTorch, a full-on Deep Learning stack on top of an HTTP server, running as a single program. For your reference, a Deep Learning stack looks like this.
What is this data, why is it so dirty, alright now it’s clean but my Neural net still doesn’t get it, finally it gets it!
Now consider the two extremes in implementing this pipeline:
Every stage is a microservice.
The whole thing is one service.
Both seem equally terrible, for different reasons, and here I will explain why designing an ML pipeline is a zero-sum problem.
Communication latency
If every stage of the pipeline is a microservice, we introduce a huge communication overhead between microservices. This is because the very large dataframes that need to be passed between services also need to be:
Serialized
Compressed (+ Encrypted)
Queued
Transferred
Dequeued
Decompressed (+ Decrypted)
Deserialized
What a pain, what a terrible thing to spend cycles on. All of these actions need to be repeated every time a microservice boundary is crossed. The horror, the terrible end-to-end performance horror!
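To make that cost concrete, here is a back-of-the-envelope Python sketch of just the serialize/compress/decompress/deserialize round trip. The 80 MB payload size is an assumption for illustration, timings will vary with hardware, and queueing, encryption, and network transfer are not even included:

```python
import pickle
import time
import zlib

import numpy as np

# A hypothetical inter-service payload: a large float64 dataframe-like
# array, roughly 80 MB (1M rows x 10 columns).
frame = np.random.rand(1_000_000, 10)

start = time.perf_counter()
blob = pickle.dumps(frame)          # serialize
packed = zlib.compress(blob)        # compress (encryption would add more)
unpacked = zlib.decompress(packed)  # decompress on the receiving side
restored = pickle.loads(unpacked)   # deserialize
elapsed = time.perf_counter() - start

print(f"round trip (no queue, no network): {elapsed:.2f}s")
```

And that cost is paid on every hop, so a five-microservice pipeline pays it four times per request.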
In the opposite case, you're writing a monolith, which is hard to maintain: you're probably stuck with uncomfortable semantics for either the HTTP server or the ML part, you can't monitor the in-between stages, etc. Like I said, writing an ML pipeline for production is a zero-sum problem.
An extreme example; All-in-one deep learning
Torch and Nginx have one thing in common, the amazing LuaJIT
That's right: you'll need to look at your use case and decide where you draw the line. Where does the HTTP server stop and where does the ML back-end start? If only there were a tool that made this decision easy and allowed you to go all the way to the extreme case of writing a monolith, without sacrificing either HTTP performance (and pretty HTTP server semantics) or ML performance and relevance in the rapidly growing Deep Learning market. Now such a tool is here (in alpha), and it's called WebTorch.
WebTorch is the freak child of the fastest, most stable HTTP server, nginx, and the fastest, most relevant Deep Learning framework, Torch.
Now of course that doesn't mean WebTorch is either the best-performing HTTP server or the best-performing Deep Learning framework, but it's at least worth a look, right? So I ran some benchmarks: I loaded the XOR neural network found on the Torch training page, and used another popular Lua tool, wrk, to benchmark my server, sending serialized Torch 2D DoubleTensors to it in POST requests to train on. Here are the results:
Huzzah! Over 1000 req/sec on my MacBook Air, with no CUDA support and 2 Intel cores!
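For reference, here is a minimal Python sketch of the kind of request the benchmark sends. Treat the URL, the /train route, and the NumPy serialization as illustrative stand-ins: the actual benchmark used a wrk Lua script and Torch's own serialized DoubleTensor format.

```python
import io

import numpy as np
import requests

# Hypothetical endpoint; the real benchmark drove the server with wrk
# and Torch-serialized 2D DoubleTensors.
URL = "http://localhost:8080/train"

# A small 2D double tensor of XOR training rows: [x1, x2, label].
batch = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]],
                 dtype=np.float64)

buf = io.BytesIO()
np.save(buf, batch)  # stand-in serialization for Torch's own format
resp = requests.post(
    URL,
    data=buf.getvalue(),
    headers={"Content-Type": "application/octet-stream"},
)
print(resp.status_code)
```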
So there: plug that into a CUDA machine and see how much performance you can squeeze out of that bad baby. I hope I have convinced you that sometimes, mixing two great things CAN lead to something great, and that WebTorch is an ambitious and interesting open source project!
And hopefully, in due time, it will become a fast, production-level server which makes it easy for Data Scientists to deploy their models in the cloud (do people still say cloud?) and for DevOps people to deploy and scale.
Possible applications of such a tool include, but are not limited to:
Vinay Uday Prabhu and John Whaley, UnifyID, San Francisco, CA 94107
Abstract
In this paper, we would like to draw attention to the vulnerability of the motion sensor-based gait biometric in deep learning-based implicit authentication solutions when attacked with adversarial perturbations obtained via the simple fast-gradient sign method. We also showcase the improvement expected by incorporating these synthetically generated adversarial samples into the training data.
Introduction
In recent times, password entry-based user-authentication methods have increasingly drawn the ire of the security community [1], especially when it comes to their prevalence in the world of mobile telephony. Researchers [1] recently showcased that creating passwords on mobile devices not only takes significantly more time, but is also more error-prone, frustrating, and, worst of all, yields inherently weaker passwords. One of the promising solutions that has emerged entails implicit authentication [2] of users based on behavioral patterns that are sensed without the active participation of the user. In this domain of implicit authentication, measurement of gait-cycle [3] signatures, mined using the on-phone Inertial Measurement Unit – MicroElectroMechanical Systems (IMU-MEMS) sensors, such as accelerometers and gyroscopes, has emerged as an extremely promising passive biometric [4, 5, 6]. As stated in [7, 5], gait patterns can not only be collected passively, at a distance, and unobtrusively (unlike iris, face, fingerprint, or palm veins), they are also extremely difficult to replicate due to their dynamic nature.
Inspired by the immense success that Deep Learning (DL) has enjoyed in recent times across disparate domains, such as speech recognition, visual object recognition, and object detection [8], researchers in the field of gait-based implicit authentication are increasingly embracing DL-based machine-learning solutions [4, 5, 6, 9], thus replacing the more traditional hand-crafted-feature-engineering-driven shallow machine-learning approaches [10]. Besides circumventing the oft-contentious process of hand-engineering the features, these DL-based approaches are also more robust to noise [8], which bodes well for the implicit-authentication solutions that will be deployed on mainstream commercial hardware. As evinced in [4, 5], these classifiers have already attained extremely high accuracy (∼96%) when trained under the k-class supervised classification framework (where k pertains to the number of individuals). While these impressive numbers give the impression that gait-based deep implicit authentication is ripe for immediate commercial implementation, we would like to draw the attention of the community towards a crucial shortcoming. In 2014, Szegedy et al. [11] discovered that, quite like shallow machine-learning models, state-of-the-art deep neural networks are vulnerable to adversarial examples, which can be synthetically generated by strategically introducing small perturbations that make the resultant adversarial input only slightly different from correctly classified examples drawn from the data distribution, yet result in a potentially controlled misclassification. To make things worse, a plethora of models with disparate architectures, trained on different subsets of the training data, have been found to misclassify the same adversarial example, uncovering the presence of fundamental blind spots in our DL frameworks. After this discovery, several works have emerged ([12, 13]) addressing both means of defense against adversarial examples and novel attacks. Recently, the cleverhans software library [13] was released. It provides standardized reference implementations of adversarial example-construction techniques and adversarial training, thereby facilitating rapid development of machine-learning models robust to adversarial attacks, as well as providing standardized benchmarks of model performance in the adversarial setting explained above. In this paper, we focus on harnessing the simplest of all adversarial attack methods, i.e., the fast gradient sign method (FGSM), to attack the IDNet deep convolutional neural network (DCNN)-based gait classifier introduced in [4]. Our main contributions are as follows:
1. This is, to the best of our knowledge, the first paper that introduces deep adversarial attacks into this non-computer-vision setting, specifically the gait-driven implicit-authentication domain. In doing so, we hope to draw the attention of the community towards this crucial issue, in the hope that further publications will incorporate adversarial training as a default part of their training pipelines.
2. One of the enduring images widely circulated in the adversarial-training literature is the panda + nematode = gibbon adversarial-attack example on GoogleNet in [14], which was instrumental in vividly showcasing the potency of the blind spot. In this paper, we do the same with accelerometric data, to illustrate how a small and seemingly imperceptible perturbation to the original signal can cause the DCNN to make a completely wrong inference with high probability.
3. We empirically characterize the degradation of classification accuracy when subjected to an FGSM attack, and also highlight the improvement in the same upon introducing adversarial training.
4. Lastly, we have open-sourced the code here.
Figure 1. Variation in the probability of correct classification (37 classes), with and without adversarial training, for varying ε.
Figure 2. The true accelerometer amplitude signal and its adversarial counterpart for ε = 0.4.
2. Methodology and Results
In this paper, we focus on the DCNN-based IDNet [4] framework, which entails harnessing low-pass-filtered tri-axial accelerometer and gyroscope readings (plus the sensor-specific magnitude signals) to, firstly, extract the gait template, of dimension 8 × 200, which is then used to train a DCNN in a supervised-classification setting. In the original paper, the model identified users in real time by using the DCNN as a deep-feature extractor and further training an outlier detector (a one-class support vector machine, SVM), whose individual gait-wise outputs were finally combined in a Wald's probability-ratio-test-based framework. Here, we focus on the trained IDNet-DCNN and characterize its performance in the adversarial-training regime. To this end, we harness the FGSM introduced in [14], where the adversarial example $\tilde{x}$ for a given input sample $x$ is generated by $\tilde{x} = x + \varepsilon \, \mathrm{sign}\left(\nabla_x J(\theta, x)\right)$, where $\theta$ represents the parameter vector of the DCNN, $J(\theta, x)$ is the cost function used to train the DCNN, and $\nabla_x(\cdot)$ is the gradient function with respect to the input.
As seen, this method is parametrized by ε, which controls the magnitude of the inflicted perturbations. Fig. 2 showcases the true and adversarial gait-cycle signals for the accelerometer magnitude signal, given by $a_{\mathrm{mag}}(t) = \sqrt{a_x^2(t) + a_y^2(t) + a_z^2(t)}$, for ε = 0.4. Fig. 1 captures the drop in the probability of correct classification (37 classes) with increasing ε. First, we see that in the absence of any adversarial example, we were able to get about 96% accuracy on a 37-class classification problem, which is very close to what is claimed in [4]. However, with even mild perturbations (ε = 0.4), we see a sharp decrease of nearly 40% in accuracy. Fig. 1 also captures the effect of including the synthetically generated adversarial examples in this scenario. We see that, for ε = 0.4, we manage to achieve about 82% accuracy, which is a vast improvement of ∼25%.
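As a concrete illustration, here is a minimal NumPy sketch of the FGSM update and the magnitude signal above. The `grad_of_cost` callable is a hypothetical helper standing in for the gradient of the trained DCNN's cost with respect to its input; the paper's actual pipeline uses cleverhans [13]:

```python
import numpy as np

def accel_magnitude(ax, ay, az):
    """Accelerometer magnitude: a_mag(t) = sqrt(ax^2 + ay^2 + az^2)."""
    return np.sqrt(ax**2 + ay**2 + az**2)

def fgsm_perturb(x, grad_of_cost, epsilon=0.4):
    """FGSM: x_adv = x + eps * sign(grad_x J(theta, x)).

    x            -- input gait template, e.g. an 8 x 200 array
    grad_of_cost -- hypothetical callable returning grad_x J(theta, x)
                    from the trained DCNN
    epsilon      -- perturbation magnitude (eps = 0.4 in Fig. 2)
    """
    return x + epsilon * np.sign(grad_of_cost(x))
```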
3. Future Work
This brief paper is part of an ongoing research endeavor. We are currently extending this work to other adversarial-attack approaches, such as the Jacobian-based Saliency Map Approach (JSMA) and the Black-Box Attack (BBA) approach [15]. We are also investigating the effect of these attacks within the deep-feature-extraction + SVM approach of [4], and we are comparing other architectures, such as [6] and [5].
References
[1] W. Melicher, D. Kurilova, S. M. Segreti, P. Kalvani, R. Shay, B. Ur, L. Bauer, N. Christin, L. F. Cranor, and M. L. Mazurek, “Usability and security of text passwords on mobile devices,” in Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 527–539, ACM, 2016.
[2] E. Shi, Y. Niu, M. Jakobsson, and R. Chow, “Implicit authentication through learning user behavior,” in International Conference on Information Security, pp. 99–113, Springer, 2010.
[3] J. Perry, J. R. Davids, et al., “Gait analysis: normal and pathological function,” Journal of Pediatric Orthopaedics, vol. 12, no. 6, p. 815, 1992.
[4] M. Gadaleta and M. Rossi, “IDNet: Smartphone-based gait recognition with convolutional neural networks,” arXiv preprint arXiv:1606.03238, 2016.
[5] Y. Zhao and S. Zhou, “Wearable device-based gait recognition using angle embedded gait dynamic images and a convolutional neural network,” Sensors, vol. 17, no. 3, p. 478, 2017.
[6] S. Yao, S. Hu, Y. Zhao, A. Zhang, and T. Abdelzaher, “DeepSense: A unified deep learning framework for time-series mobile sensing data processing,” arXiv preprint arXiv:1611.01942, 2016.
[7] S. Wang and J. Liu, Biometrics on Mobile Phone. INTECH Open Access Publisher, 2011.
[8] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
[9] N. Neverova, C. Wolf, G. Lacey, L. Fridman, D. Chandra, B. Barbello, and G. Taylor, “Learning human identity from motion patterns,” IEEE Access, vol. 4, pp. 1810–1820, 2016.
[10] C. Nickel, C. Busch, S. Rangarajan, and M. Möbius, “Using hidden Markov models for accelerometer-based biometric gait recognition,” in Signal Processing and its Applications (CSPA), 2011 IEEE 7th International Colloquium on, pp. 58–63, IEEE, 2011.
[11] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” arXiv preprint arXiv:1312.6199, 2013.
[12] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015.
[13] N. Papernot, I. Goodfellow, R. Sheatsley, R. Feinman, and P. McDaniel, “cleverhans v1.0.0: an adversarial machine learning library,” arXiv preprint arXiv:1610.00768, 2016.
[14] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv:1412.6572, 2014.
[15] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, “Practical black-box attacks against deep learning systems using adversarial examples,” arXiv preprint arXiv:1602.02697, 2016.
Vinay Uday Prabhu and John Whaley, UnifyID, San Francisco, CA 94107
Abstract
In this paper, we demonstrate a simple face spoof attack targeting the face recognition system of a widely available commercial smartphone. The goal of this paper is not to proclaim a new spoof attack, but rather to draw the attention of anti-spoofing researchers towards a very specific shortcoming shared by one-shot face recognition systems: enhanced vulnerability when a smiling reference image is used.
Introduction
One-shot face recognition (OSFR), or single sample per person (SSPP) face recognition, is a well-studied research topic in computer vision (CV) [8]. Solutions such as Local Binary Pattern (LBP)-based detectors [1], Deep Lambertian Networks (DLN) [9], and Deep Supervised Autoencoders (DSA) [4] have been proposed in recent times to make OSFR systems more robust to the changes in illumination, pose, facial expression, and occlusion that they encounter when deployed in the wild. One very interesting application of face recognition that has gathered traction lately is mobile device unlocking [6]. One of the highlights of Android 4.0 (Ice Cream Sandwich) was the Face Unlock screen-lock option that allowed users to unlock their devices with their faces. It is rather imperative that we mention here that this option is always presented to the user with a cautioning clause that typically reads like “Face recognition is less secure than pattern, PIN, or password.”
The reasoning behind this is that there exists a plethora of face spoof attacks, such as print attacks, malicious identical-twin attacks, sleeping-user attacks, replay attacks, and 3D mask attacks. These attacks are all fairly successful against most commercial off-the-shelf face recognizers [7]. This ease of spoofing has also attracted the attention of CV researchers, leading to many efforts to develop liveness-detection anti-spoofing frameworks, such as Secure-face [6]. (See [3] for a survey.)
Recently, a large-scale smartphone manufacturer introduced a face recognition-based phone unlocking feature. This announcement was promptly followed by media reports of users demonstrating several types of spoof attacks.
In this paper, we explore a simple print attack on this smartphone. The goal of this paper is not to proclaim a new spoof attack, but rather to draw the attention of the anti-spoofing community towards a very specific shortcoming shared by face recognition systems that we uncovered in this investigation.
2. Methodology and Results
Figure 1. Example of two neutral-expression faces that failed to spoof the smartphone's face recognition system.
Figure 2. Example of two smiling registering faces that successfully spoofed the smartphone's face recognition system.
The methodology we used entailed taking a low-quality printout of the target user's face on plain white US-letter-size paper (8.5 by 11.0 inches) and then attempting to unlock the device by simply holding this printed paper in front of the camera. Given the poor quality of the printed images, we observed that this simple print attack was duly repulsed by the detector system as long as the attacker sported a neutral facial expression during the registration phase. However, when we repeated the attack in such a way that the attacker had an overtly smiling face when (s)he registered, we were able to break in successfully with high regularity.
In Figure 1, we see two examples of neutral expression faces that failed to spoof the smart-phone’s face recognition system when the registering image had a neutral facial expression.
In Figure 2, we see the same two subjects’ images that successfully spoofed the phone’s face recognition system when the registering (enrollment) image was overtly smiling.
2.1. Motivation for the attack and discussion
It has been well known for a long time in the computer vision community that faces displaying expressions, especially smiles, result in stronger recall and discrimination power [10]. In fact, the authors in [2] termed this the happy-face advantage and showcased the variation in detection performance for varying facial expressions. It was our experimental investigation of the specific one-shot classification scenario, in which the registering enrollment face bears a strong smile, that resulted in the discovery of this attack. As for defense against this attack, there are two straightforward recommendations. The first would be to simply display a message urging the user to maintain a passport-style neutral facial expression. The second would entail having a smile detector, such as [5], as a pre-filter that only allows smile-free images as the reference image.
References
[1] T. Ahonen, A. Hadid, and M. Pietikainen, “Face description with local binary patterns: Application to face recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 12, pp. 2037–2041, 2006.
[2] W. Chen, K. Lander, and C. H. Liu, “Matching faces with emotional expressions,” Frontiers in Psychology, vol. 2, p. 206, 2011.
[3] J. Galbally, S. Marcel, and J. Fierrez, “Biometric antispoofing methods: A survey in face recognition,” IEEE Access, vol. 2, pp. 1530–1552, 2014.
[4] S. Gao, Y. Zhang, K. Jia, J. Lu, and Y. Zhang, “Single sample face recognition via learning deep supervised autoencoders,” IEEE Transactions on Information Forensics and Security, vol. 10, no. 10, pp. 2108–2118, 2015.
[5] P. O. Glauner, “Deep convolutional neural networks for smile recognition,” arXiv preprint arXiv:1508.06535, 2015.
[6] K. Patel, H. Han, and A. K. Jain, “Secure face unlock: Spoof detection on smartphones,” IEEE Transactions on Information Forensics and Security, vol. 11, no. 10, pp. 2268–2283, 2016.
[7] D. F. Smith, A. Wiliem, and B. C. Lovell, “Face recognition on consumer devices: Reflections on replay attacks,” IEEE Transactions on Information Forensics and Security, vol. 10, no. 4, pp. 736–745, 2015.
[8] X. Tan, S. Chen, Z.-H. Zhou, and F. Zhang, “Face recognition from a single image per person: A survey,” Pattern Recognition, vol. 39, no. 9, pp. 1725–1745, 2006.
[9] Y. Tang, R. Salakhutdinov, and G. Hinton, “Deep Lambertian networks,” arXiv preprint arXiv:1206.6445, 2012.
[10] Y. Yacoob and L. Davis, “Smiling faces are better for face recognition,” in Automatic Face and Gesture Recognition, 2002. Proceedings. Fifth IEEE International Conference on, pp. 59–64, IEEE, 2002.
Today, we would like to announce the UnifyID AI Fellowship program for Spring 2017. This is the second edition of the fellowship (following the Fall 2016 cohort), and it is expected to run for 12 weeks, February 23 through May 18. This selective, cross-disciplinary program covers the following areas:
Deep Learning
Signal Processing
Optimization Theory
Sensor Technology
Mobile Development
Statistical Machine Learning
Security and Identity
Human Behavior
UX/UI Development for the above areas
Tech Journalism for the above areas
Special Focus:
We will be assigning one fellow to work on fakenewschallenge.org in collaboration with Dr. Dean Pomerleau of the Carnegie Mellon University Robotics Institute. If interested, please add a note to your application. We expect this applicant to have substantial experience handling textual data, as well as NLP expertise; the application should include links to previous work in this domain.
FELLOWSHIP DETAILS
Our UnifyID AI Fellows will initially be allocated to a well-defined project matched to their area of interest and expertise, and will also be paired with a fellowship mentor. The fellows then have a week to collaborate with the mentor and come up with an 11-week timeline roughly detailing the path they plan to take to achieve the project's end goals.
During the fellowship, the fellows are expected to convene in person and present weekly updates every Thursday evening at our office in SoMa, San Francisco. In exceptional cases, individuals will be allowed to present via video chat. Absence from these update sessions for two consecutive weeks will result in automatic ejection from the fellowship.
All selected fellows will be awarded:
Life-long designation as a UnifyID AI Fellow.
A fellowship stipend.
Access to state-of-the-art GPU hardware and $360,000 in Microsoft Azure cloud service credits.
Access to our office space in SoMa.
Prepaid Clipper card to help with commuting to/from the office.
A chance to collaborate and publish with top-tier security experts from MIT, Stanford, CMU, Berkeley, Dartmouth, etc.
Conference registration fees for all of the publications that emanate from the fellowship.
Travel expenses for one flagship top-tier conference, in case the fellow's work gets accepted as a publication.
A citation and certificate commemorating your achievement.
Exclusive UnifyID Fellow swag.
A chance to present at the UnifyID Tech-expo Day in May 2017.
DELIVERABLES
A short paper describing the project.
A detailed, well-commented code submission on either ai-on.org or http://www.gitxiv.com (in case you have an arXiv-worthy submission).
A one-page blog post providing a less technical version of the project details. ($ ipython nbconvert --to markdown notebook.ipynb --stdout will do!)
A final presentation in .ppt or .pdf format during the UnifyID Tech-expo Day.
We also expect that, for some of the projects, we may be able to munge certain openly available datasets and upload them, with associated open problems, to ai-on.org if the fellow is limited by the timeline of the fellowship.
REQUIREMENTS
We welcome applications from practitioners and tech enthusiasts, as well as students at both the undergraduate and graduate levels, preferably from the SF Bay Area.
Tracks:

Machine Learning
Languages: Python, Lua, Julia, R, Scala, Java
Libraries/Platforms/Frameworks: Scikit-learn, Torch/Autograd, Caffe, Keras with Theano/TensorFlow, Chainer
A personal statement (no longer than 250 words) explaining what you expect to achieve with this fellowship.
A 5-slide presentation (ppt or pdf) detailing your most cherished accomplishment in the area you are applying to (with links to publication(s), GitHub code-base, live-project link, etc.).
Now, imagine seamless authentication everywhere. Software so powerful that, using the sensors you already have on your phone, wearables, and devices at home or the office, it knows it's you. No more 6-digit PIN or string of upper- and lowercase letters and numbers to signify that it is really you making a purchase, logging in, or entering a key swipe. Anywhere online or offline where you need to identify yourself, UnifyID promises that, based on your everyday actions and factors like how you sit, walk, and type (i.e., passive factors, also known as implicit authentication in academia), your “you-ness” can be determined with 99.999% accuracy. At times when the machine learning algorithms are unsure, an active challenge will be triggered on your nearest phone or device (e.g., fingerprint verification, among a dozen others in development).
The UnifyID iOS active challenge is triggered when the machine learning algorithm requires additional verification to learn that it is really you.
UnifyID has been called the holy grail of authentication because the degree of security and sophistication of its machine learning efforts is unparalleled, and its convenience and focus on usability make trying the product unbelievably easy.
Between now and then, we're in the private beta stage: ensuring that the flows are easy and work as expected. UnifyID launched out of stealth at TechCrunch Disrupt in September. The initial sign-on and the flows for logging out and logging back into sites have gone through more than 25 iterations in a few weeks (thanks to the onsite testers!). We're ready to move forward to a remote private beta and test outside the bounds of our four walls.
Join us on this journey to disrupt passwords. While “The Oracle” (our machine learning algorithms) is still under development, we are moving full speed ahead on making sure that, at this stage, the UnifyID user flows are easy for everyone to use, many times, every day, across all sites.
***
Sign up for the UnifyID Private Beta: go to https://unify.id, click “Apply for Private Beta,” and enter “Imagination,” along with why you are interested in participating in the beta, in the secret-handshake field.
Fast Growing Startup Uses Machine Learning to Solve Passwordless Authentication
Today, UnifyID, a service that can authenticate you based on unique factors like the way you walk, type, and sit, announced the final 16 fellows selected for its inaugural Artificial Intelligence Fellowship for Fall 2016. Each of the fellows has shown exemplary leadership and curiosity in making a meaningful difference in our society, and each clearly has an aptitude for making sweeping changes in this rapidly growing area of AI.
Speaking of the company's recent launch and success at TechCrunch Disrupt, where it claimed SF Battlefield Runner-Up (2nd of 1,000 applicants worldwide), UnifyID CEO John Whaley said, “We were indeed overwhelmed by the amazing response to our first edition of the AI Fellowship and the sheer quality of applicants we received. We also take immense pride in the fact that more than 40% of our chosen cohort will be women, which further reinforces our commitment as one of the original 33 signees of the U.S. White House Tech Inclusion Pledge.”
The final 16 fellows hail from Israel, Paris, Kyoto, Bangalore, and cities across the U.S. with Ph.D., M.S., M.B.A., and B.S. degrees from MIT, Stanford, Berkeley, Harvard, Columbia, NYU-CIMS, UCLA, Wharton, among other top institutions.
Aidan Clark, triple major in Math, Classical Languages, and CS at UC Berkeley
Anna Venancio-Marques, Data Scientist in Residence, PhD, École normale supérieure
Arik Sosman, Software Engineer at BitGo, 2x Apple WWDC scholar, CeBIT speaker
Baiyu Chen, Convolutional Neural Network Researcher, Masters in CS at UC Berkeley
Fuxiao Xin, Lead Machine Learning Scientist at GE Global Research, PhD Bioinformatics
Kathy Sohrabi, VP Engineering, IoT and sensors, MBA at Wharton, PhD EE at UCLA
Kazu Komoto, Chief Robotics Engineer, CNET Writer, Masters in ME at Kyoto University
Laura Florescu, co-author of Asymptopia, Mathematical Reviewer, PhD CS at NYU
Morgan Lai, AI Scientist, MIT Media Lab, Co-founder/CTO, M.Eng. CS at MIT
Pushpa Raghani, Postdoc Researcher at Stanford and IBM, PhD Physics at JNCASR
Raul Puri, Machine Learning Development at Berkeley, BS EE/CS/Bioeng at Berkeley
Sara Hooker, Data Scientist, founder of a non-profit for educational access in rural Africa
Siraj Raval, Data Scientist, the Bill Nye of Computer Science on YouTube
Wentao Wang, Senior New Tech Integration Engineer at Tesla, PhD ME at MIT
Will Grathwohl, Computer Vision Specialist, Founder/Chief Scientist, BS at MIT CSAIL
This highly selective, cross-disciplinary program covers the following areas:
Deep Learning
Signal Processing
Optimization Theory
Sensor Technology
Mobile Development
Statistical Machine Learning
Security and Identity
Human Behavior
Our UnifyID AI Fellows will get to choose from one of 16 well-defined projects in the broad area of applied artificial intelligence, in the context of solving the problem of seamless personal authentication. The Fellows will be led by our esteemed Fellowship Advisors, renowned experts in machine learning with PhDs from CMU, Stanford, and the University of Vienna, Austria.
Please welcome our incoming class! ✨
Read the original UnifyID AI Fellowship Announcement:
Today, we would like to announce the UnifyID AI Fellowship program for Fall 2016. The fellowship runs for six weeks, from October 28 through December 4, 2016. This selective, cross-disciplinary program covers the following areas:
Deep Learning
Signal Processing
Optimization Theory
Sensor Technology
Mobile Development
Statistical Machine Learning
Security and Identity
Human Behavior
Our UnifyID AI Fellows will get to choose from one of 16 well-defined projects in the broad area of applied artificial intelligence in the context of solving the problem of seamless personal authentication.
All selected fellows will be awarded:
A fellowship stipend.
Access to state-of-the-art GPU hardware and $360,000 in Microsoft Azure cloud service credits.
Weekend access to our office space in SoMa, as well as as-needed access on weekdays.
Prepaid Clipper card to help with commuting to/from the office.
Chance to collaborate and publish with top-tier security experts from MIT, Stanford, CMU, Berkeley, Dartmouth, etc.
A citation, certificate, and plaque commemorating your achievement.
Exclusive UnifyID Fellow signature bags and sweatshirts for the Fall 2016 inaugural class.
A chance to present at the UnifyID Tech-expo Day in December 2016.
We expect the work from your Fellowship to result in either a publication (with fully open-sourced code and data repository on GitHub for reproducible research) or a patent filing.
REQUIREMENTS
We welcome applications from practitioners, hackers, and tech enthusiasts, as well as students in full-time accredited academic programs at both the undergraduate and graduate levels, preferably from the SF Bay Area. An ideal candidate has both math and coding chops, but more importantly, this individual is an engineer, signal-processor, hacker, and self-proclaimed guru who is comfortable crafting, hacking, implementing, re-implementing, and breaking Machine Learning algorithms, deep, shallow, or otherwise.
Tracks:

Machine Learning
Languages: Python, Lua, Julia, R, Scala, Java
Libraries/Platforms/Frameworks: Scikit-learn, Torch/Autograd, Caffe, Keras with Theano/TensorFlow, Chainer

Mobile Dev.
Languages: Swift, Objective C, Java
Libraries/Platforms/Frameworks: Ubuntu, OS X, RHEL / CentOS / Fedora, iOS, Android
Please apply here and include, in the open form field, a personal statement (no longer than 250 words) explaining what you expect to achieve with this fellowship, along with your favorite moment in the sun (publication, GitHub code-base, live-project link).