Building an Object Recognition App and Protecting It From Bots

Introduction

I love building tech for other people to use.

Unfortunately, I learned early on that if your application is accessible to users, it’s also vulnerable to cyberattacks. This is a problem that developers everywhere face, from the person adding a form to their blog to the programmer building applications used by millions.

That’s why I was so excited to join UnifyID, a startup building passwordless authentication solutions, as an intern this summer. When I started, a particular product called HumanDetect really caught my eye. Quoting the documentation,

“UnifyID HumanDetect helps you determine whether a user of your app is a human or a bot. While the purpose is similar to a CAPTCHA, HumanDetect is completely passive, creating a frictionless user experience.”

I imagine most people have come across CAPTCHAs in one form or another, so I can’t be the only person who’s super annoyed each time I’m asked to click a bunch of small boxes.

reCAPTCHA

From a developer’s perspective, CAPTCHAs aren’t ideal either. They can be time-consuming, which hurts the user experience by interrupting the flow of an application. Additionally, they may be difficult to complete (especially on the smaller screens of mobile devices), making it harder for some users to access features. However, for a long time CAPTCHAs have been one of the only reliable ways to verify humans, reduce spam, and prevent bot attacks. Despite their downsides, they’ve become a fixture of software as we know it.

UnifyID HumanDetect is different: machine learning algorithms passively determine whether a user is human, replacing CAPTCHAs while not interrupting the user flow. Additionally, while CAPTCHAs work best for web apps, HumanDetect is designed for mobile applications, which don’t have many reliable human authentication methods. To me, this is exciting—HumanDetect completely eliminates the need for explicit user actions.

In this blog post, I’ll outline how I built a simple object recognition app for iOS. It lets users take a picture with the phone’s camera, and the image is sent to the Flask backend. There, the image is run through a neural network and the inference results are returned to the app.

After finishing the basic functionality of the app, I added HumanDetect to protect my app from bot attacks, which should give you a good idea of how developers can take advantage of this tool. Finally, I’ve linked all my code so that you can run everything yourself and see how you can use HumanDetect to protect your own apps.

Building the Flask Server

The first part of this project involved setting up a Flask server to act as the backend of the app. Functionally, it will accept a POST request that contains an image, use a machine learning model to generate predictions from the picture, and return the five most likely results.

I chose to use Python for the server side of the project because it’s the language I’m most comfortable with, and it’s extremely easy to code and debug. Plus, it’s widely used for machine learning, so adding object classification should be a piece of cake. I decided to use Flask over a heavier framework like Django for similar reasons: I’ve used Flask for a couple of projects before, and it’s lightweight, so it’s super simple to get up and running.

To start off, I needed to set up my environment. Isolating the packages I was using for this project was crucial since I’d need to replicate everything when I deployed my app to a cloud platform. I chose to use Conda simply because it’s what I’m most comfortable with (there’s a theme here, in case you haven’t noticed), although virtualenv would have been fine, too.

Next, I installed Flask and created a simple app that was just a webpage with “HumanDetect Example” on it. After running it locally and verifying that everything was set up correctly, I created a project in Heroku and prepared to deploy my app.
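That starter app was only a few lines. Here’s a minimal sketch of it (the exact route and run invocation are assumptions, not the original code):

# Minimal starter app: a single page that says "HumanDetect Example"
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "HumanDetect Example"

if __name__ == "__main__":
    app.run()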

HumanDetect Webpage

To do this, I had to set up a custom CI/CD pipeline for GitLab that would trigger a fresh deployment each time I made a commit, which ended up taking quite a bit of time. Things are a lot simpler if you’re using GitHub (which is where the example code for this project is hosted, fortunately).

With most of the setup out of the way, I could finally begin building the functionality. First, I needed a way to accept an image via a POST request. Although I tried encoding the file as a string, I ended up accepting the file as part of a multipart form POST body and saving it to an ./uploads folder.

import os
from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=['GET', 'POST'])
def home():
    if request.method == 'POST':
        # Save the uploaded image from the multipart form body
        file = request.files['file']
        filename = os.path.join('./uploads', 'image.png')
        file.save(filename)

Arguably the most important part of this whole project is the machine learning object recognition code. Although I could have made it quite complex, I made a couple of decisions that simplified this part as much as possible. I decided to use Keras because it is incredibly easy to use, and includes several common pre-trained models that only take a few lines of code to implement. Plus, I’m not too concerned about performance, so there isn’t really a particular reason to use TensorFlow or PyTorch in this case.

Keras provides a number of Convolutional Neural Networks (CNNs) covering the most common and high-performing architectures for image classification. Because free Heroku dynos have a memory constraint, I wanted to minimize the size of the model while keeping accuracy high. I ultimately decided to go with the MobileNet architecture, a highly efficient network that performs within a few percentage points of the VGG, ResNet, and Inception models. Since Keras provides weights pre-trained on the ImageNet dataset, I used them instead of training my own model.
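Loading the pre-trained model takes just a couple of lines. Here’s a minimal sketch using the Keras applications API (preprocess_input and decode_predictions are imported alongside the model, since the inference code below uses them):

# Load MobileNet with weights pre-trained on ImageNet
from keras.applications.mobilenet import MobileNet, preprocess_input, decode_predictions

model = MobileNet(weights="imagenet")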

Before feeding an image into the model, I needed to preprocess it so I would get the most accurate classification results. The CNN expects RGB images at a 224×224 resolution, but the images coming from the iOS app won’t have these dimensions, so I resized each image using OpenCV. I chose resizing over cropping the image down to a perfect square because cropping could cut out important elements, and the trained model should be robust enough to ignore minor changes in aspect ratio.

Once the preprocessed image is fed into the model, the results need to be returned via a response to the POST request. I decided to get the five classes with the highest probabilities, reformat them into a single clean string, and return this string.

import cv2
import numpy as np
from keras.preprocessing import image

# Inside the POST handler: resize, preprocess, and classify the saved image
img = cv2.imread(filename)
img = cv2.resize(img, (224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
preds = decode_predictions(model.predict(x), top=5)[0]

preds_formatted = ", ".join([
    f"{class_description}: {score*100:.2f}%"
    for (_, class_description, score) in preds
])

print("predictions: ", preds_formatted, "\n")
return preds_formatted

To test that everything was working, I wrote a simple Python script that submits this image of a taxi via a POST request.

Taxi
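The test script boils down to something like this (the server URL and image filename here are assumptions):

# Submit an image to the Flask server and print the predictions
import requests

with open("taxi.jpg", "rb") as f:
    response = requests.post("http://localhost:5000/", files={"file": f})

print(response.text)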

Here’s the returned response:

cab: 87.69%, police_van: 5.23%, racer: 1.45%, sports_car: 1.33%, car_wheel: 1.23%

Success! With the Flask app complete, I moved on to the next part of the project.

Building the iOS App

Let me make something clear: I’m not an iOS developer. I’ve built several apps for Android and the web, but I’ve never really tried Swift or Xcode—in fact, I haven’t even owned an Apple device in the last 7 years. Therefore, everything about this iOS thing was new for me, and I had to lean pretty heavily on Google and Stack Overflow.

Luckily, the Apple developer environment seemed relatively intuitive, and building apps with it was in many ways simpler than on Android. It took me some time to go through a few basic iOS development guides online, but before long I was up and running with my first app in Xcode.

The most important function of the app is letting the user take a picture with the phone’s camera. To accomplish this, I used a view controller called UIImagePickerController, which adds the ability to capture images in just a few lines of code. I just followed the instructions from this article that I found on Google, and got this part working pretty quickly.

iOS Screenshot 1

Now that the user can take a picture, it needs to be sent via a POST request to the Flask server. Because of the way the backend expects the request to be made, I ended up having to manually add some metadata and body content. Although it looks a bit messy (and there might be a cleaner way to do it), I eventually did get it working, which is what counts.

// Build a multipart/form-data POST request by hand
let filename = "image.png"
let boundary = UUID().uuidString
let config = URLSessionConfiguration.default
let session = URLSession(configuration: config)
var urlRequest = URLRequest(url: URL(string: flaskURL)!)
urlRequest.httpMethod = "POST"
urlRequest.setValue("multipart/form-data; boundary=\(boundary)", forHTTPHeaderField: "Content-Type")

// Assemble the body: opening boundary, part headers, PNG bytes, closing boundary
var data = Data()
data.append("\r\n--\(boundary)\r\n".data(using: .utf8)!)
data.append("Content-Disposition: form-data; name=\"file\"; filename=\"\(filename)\"\r\n".data(using: .utf8)!)
data.append("Content-Type: image/png\r\n\r\n".data(using: .utf8)!)
data.append(image.pngData()!)
data.append("\r\n--\(boundary)--\r\n".data(using: .utf8)!)

Finally, I added a few UI elements to finish up the iOS app. I set up a loading spinner that is activated just after the picture is taken and deactivated once the response to the POST request is received. I also added a pop-up alert that displays the object recognition results to the user.

iOS Results Screenshots

And that’s it! The main functionality of the object recognition app is now complete.

Protecting the App From Bots

This project is a great example of a possible use case for HumanDetect. Since the object recognition functionality involves quite a bit of machine learning and heavy processing, it’s important to ensure that each request to the backend is made by legitimate users of the app. An attack involving many unauthorized requests could become very costly (both computationally and financially) or even cause the app to become overwhelmed and crash. Implementing a verification step with HumanDetect before each POST request is processed can protect apps like this from attacks.

Adding HumanDetect to the app was surprisingly easy, as the documentation provides step-by-step instructions for adding it to the frontend and backend. Before I wrote any additional code, I created a new developer account at developer.unify.id. After setting up a new project in the dashboard, I came across a page with a bunch of technical jargon.

UnifyID Dashboard

For HumanDetect, the only things that matter are API keys and SDK keys. An API key gives access to the Server APIs used to verify whether a request to the backend came from a human or a bot, while an SDK key is used to initialize the iOS SDK and allows the app to generate a unique token that encodes information about the human (or bot) user. For this project, I went ahead and created one of each.

A few things needed to happen on the iOS side. After adding the HumanDetect pod, I initialized the SDK in AppDelegate.swift using the SDK key generated from the dashboard.

import UnifyID

let unify : UnifyID = { try! UnifyID(
    sdkKey: "<YOUR SDK KEY>"
)}()

Next, I set up an instance of HumanDetect to utilize its functionality.

import HumanDetect
let humanDetect = unify.humanDetect

Data capture needs to be manually started right when the app first loads. This allows the app to begin recording data that will later be used to determine if the user is a human or bot. Maximizing the time when data capture is active will generally result in higher accuracy.

override func viewDidLoad() {
    super.viewDidLoad()
    humanDetect.startPassiveCapture()
}

Data capture continues until a token is generated, which happens right after the picture is taken; the token is then added to the same POST request as the picture and sent to the backend.

switch humanDetect.getPassive() {
    case .success(let humanDetectToken):
                
        // Creating POST request
        let fieldName = "token"
        let fieldValue = humanDetectToken.token
        
        …

        data.append("\r\n--\(boundary)\r\n".data(using: .utf8)!)
        data.append("Content-Disposition: form-data; name=\"\(fieldName)\"\r\n\r\n".data(using: .utf8)!)
        data.append("\(fieldValue)".data(using: .utf8)!)
                
        …

        // POST request to Flask server
        session.uploadTask(with: urlRequest, from: data, completionHandler: { responseData, response, error in
                    
            …

        }).resume()

    …

}

I also modified the Flask server to accept the token generated by the iOS app. Right after receiving the POST request from the app, the server makes its own POST request to https://api.unify.id/v1/humandetect/verify, containing the generated token and authenticated with the API key from the developer dashboard.

import json

import requests
from flask import request

HEADERS = {
    'Content-Type': 'application/json',
    'X-API-Key': '<YOUR-API-KEY>',
}

@app.route("/", methods=['GET', 'POST'])
def home():
    if request.method == 'POST':
        file = request.files['file']
        token = request.form['token']

        if not file:
            return "Error: file not found in request"

        if not token:
            return "Error: token not found in request"

        print("token:", token)

        # Ask the HumanDetect verification API whether this token came from a human
        hd_response = requests.post(
            'https://api.unify.id/v1/humandetect/verify',
            headers=HEADERS,
            data=json.dumps({"token": token}),
        )

        if hd_response.status_code == 400:
            return "Error: invalid HumanDetect token"

        hd_json = hd_response.json()

        if "valid" not in hd_json or not hd_json["valid"]:
            return "Error: HumanDetect verification failed"

If the verification response indicates that the user is a valid human, the image is run through the Convolutional Neural Network as before. If HumanDetect determines that the request was made by a bot, however, the server immediately returns an error message without running the machine learning code. This ensures that bots won’t overwhelm server resources, and helps protect the integrity of the application’s infrastructure.
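Putting the pieces together, the tail of the handler might look something like this (a sketch that reuses the inference code from earlier, not the verbatim project code):

        # Verification passed: save the upload and classify it as before
        filename = os.path.join('./uploads', 'image.png')
        file.save(filename)
        img = cv2.resize(cv2.imread(filename), (224, 224))
        x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
        preds = decode_predictions(model.predict(x), top=5)[0]
        return ", ".join(f"{desc}: {score*100:.2f}%" for (_, desc, score) in preds)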

Next Steps

The code for this HumanDetect example is available at https://github.com/UnifyID/humandetect-sample-flask and https://github.com/UnifyID/humandetect-sample-ios. Instructions for setting everything up are included in the README files. If you run into any issues or have questions about HumanDetect, feel free to contact us.

If you want to learn more about how to counter bot attacks, I’d highly suggest reading this Medium article, which goes into more detail about various solutions including HumanDetect.

Thanks for reading! I hope that this has been helpful.

A Unique Experience – Interning at UnifyID

I get mixed up with my friend Eric a lot. In the picture above, I’m on the left and Eric is on the right. We have similar builds, wear glasses, and although Eric will tell you he’s incomparably more handsome than me, even our close friends will accidentally call me Eric and Eric Isaac on campus at UCSD. I thought the peak of our similarities was when we both accepted full-stack internships at UnifyID in San Francisco this summer, but I realized I was mistaken. On Day 1, Eric and I had gone and picked out the exact same outfit for our internship debut. We had black t-shirts, tan chinos, blue shoes, and even opposite desks to really sell the mirror illusion. At a company built upon faith in each individual’s uniqueness, initially, I could not have felt more out of place.

Despite our many similarities, Eric and I do have our differences, and they showed in how we dealt with our first-day jitters. I smiled politely and tried not to get in anyone’s way; Eric dropped the f-bomb before lunch. Having prior experience at a company where that sort of thing wouldn’t fly, I took it upon myself to pull him aside and tell him to rein it in. I thought that I had done him a favor until later that day when a full-time engineer casually slung a string of curses at his monitor with even more gusto than Eric had. It was then that I started to realize that working at UnifyID would be unlike anything I had experienced before.

Me, excelling.
Looking back, I shouldn’t have been surprised that UnifyID gives its employees the space to be themselves. Our mission is to identify people by what makes them unique; to squash those qualities would be sacrilege. As a result, the atmosphere is lighter and the conversations more genuine.

In the three months that I spent at UnifyID, I came to realize that it is this freedom that makes the team work as well as it does. I never felt like I had to put energy into trying to fill the role of the intern I thought I should be. Instead, I could just go in every day as myself. Once I realized this and started to embrace it, my productivity and sense of fulfillment soared. I went on to make significant contributions to our Android SDK, from redesigning our service architecture to developing a full suite of end-to-end tests. Now, at the end of my internship, I find myself a far better engineer than when I entered, lost trying to find where the time has gone, and sad to say goodbye to the friends I’ve made.

It’s difficult to describe a summer of my experiences at UnifyID in a few short paragraphs. But in a word? I would say, authentic.

63 Days of Summer at UnifyID

60 hours after I wrapped up my second year at UC Berkeley, I walked into the UnifyID office for my first day as a software engineering intern. I was not sure what to expect, but I definitely did not think that before I left that day, I would have already contributed to the codebase! The decision to work at UnifyID was an easy one. This team was working on technology that I believed was the future of security, using implicit authentication to determine what makes you unique, ultimately eliminating passwords.

Pushing an MR to master, Day 1 done!

Throughout the summer, I worked on various projects ranging from Android development to DevOps to server backend work. One project I particularly enjoyed working on was the continuous integration for our Android project. It was interesting to understand how the code that was written was built, tested, and deployed through the pipeline, and how it all tied together with Docker and Amazon Web Services. I had never worked in any of these areas before arriving at UnifyID, but with guidance from my mentor, CEO John Whaley, and the incredible support of the other engineers, I was able to directly contribute to the product. I learned something new every day and noticed my growth as a software engineer as the summer progressed.

As a female engineer, I have always noticed the underrepresentation of women in engineering, and I constantly wonder what I can do to lessen this gap. From this experience, I have learned that as long as you are passionate about your work and genuinely care about what you are doing, not much can stand in your way. To all my aspiring engineering peers: be inquisitive, be supportive, and a caring community will form.

Impromptu team outing at a SoMa neighborhood cafe!

The team really makes the office feel like a comfortable and enjoyable space to be in. The whole team is so passionate about their work and willing to take time out of their day to share and explain their projects to me. Everyone comes from such different backgrounds and each person is so interesting to talk to and learn from.

As the summer comes to an end, I would like to thank the team at UnifyID for this wonderful learning experience. Nowhere else would I have been able to discuss ideas, designs, and implementations with such qualified people while working on a groundbreaking solution to such a pervasive problem.

Recapping our Summer 2017 Internship Program

This summer we ran our largest internship program yet at UnifyID. We hosted an immensely talented group of 16 interns who joined us for 3 months, and there was never a dull day! While bringing in interns for the summer does create an energetic cadence, fresh viewpoints challenge us to grow as a company, too. 12 weeks can feel like both a sprint and a marathon, but in start-up days, even the hour can be precious.

Almost all our interns mentioned a desire to contribute to the technology of the future when asked why they chose to work at UnifyID, and we think this is a testament to the quality of our internship program—interns are able to contribute their talents in a meaningful way, whether on our machine learning, software engineering, or product teams.

Our machine learning interns focused on research, under the guidance of Vinay Prabhu. Much of their work has been on figuring out how to integrate new factors into our algorithms or develop datasets of human activity for future use. Three of our paper submissions were accepted to ICML workshops to be held in Sydney this year. This brings the total number of peer-reviewed research papers accepted or published by UnifyID in the last few weeks to seven! What is especially exciting is the fact that these were the first peer-reviewed papers for our undergraduate interns, in what we hope will be long and fruitful research careers.

Our software engineering interns have been integral in supporting our product sprints, which have been centered around deploying initial versions of our technology to our partners quickly. As one of our interns, Joy, said: “From mobile development to server work to DevOps, I learned an insane amount from this incredible team.”

Our product interns were involved across teams and worked on projects varying from product backlog grooming and retrospectives to beta community management to content marketing to analyst relations to technical recruiting to team-building efforts. Having worked across multiple facets of the business, they were able to wear many hats and learn a great deal about product development and operations.

Aside from work, there’s no shortage of events to attend in the Bay Area, from informal ones like Corgi Con or After Dark Thursday Nights at the Exploratorium, to events focused on professional development like Internpalooza or a Q&A with Ben Horowitz of a16z, who provided his advice on how to succeed in the tech world. Our interns were also able to take part in shaping our team culture: designing custom t-shirts, going on team picnics, and participating in interoffice competitions and hackathons.

A serendipitous meet up at Norcal Corgi Con!

Though we are sad to see them go, we know that they all have a bright future ahead of them and are so grateful for the time they were able to spend at our company this summer. Thank you to the Summer 2017 class of UnifyID interns!

  • Mohannad Abu Nassar, senior, MIT, Electrical Engineering and Computer Science
  • Divyansh Agarwal, junior, UC Berkeley, Computer Science and Statistics
  • Michael Chien, sophomore, UC Berkeley, Environmental Economics and Policy
  • Pascal Gendron, 4th year, Université de Sherbrooke, Electrical Engineering
  • Peter Griggs, junior, MIT, Computer Science
  • Aditya Kotak, sophomore, UC Berkeley, Computer Science and Economics
  • Francesca Ledesma, junior, UC Berkeley, Industrial Engineering and Operations Research
  • Nikhil Mehta, senior, Purdue, Computer Science
  • Edgar Minasyan, senior, MIT, Computer Science and Math
  • Vasilis Oikonomou, junior, UC Berkeley, Computer Science and Statistics
  • Joy Tang, junior, UC Berkeley, Computer Science
  • Isaac Wang, junior, UC San Diego, Computer Science
  • Eric Zhang, junior, UC San Diego, Computer Engineering
Bay Area feels

UnifyID at a16z’s Battle of the Hacks

Last weekend, UnifyID was invited to attend Andreessen Horowitz’s 4th annual Battle of the Hacks at their headquarters in Menlo Park—an exclusive hackathon for the organizers of the 14 top university hackathons in North America, with teams ultimately competing for a $25,000 sponsorship from a16z. Grace served as a judge alongside others from companies like Slack, Lyft, and GitHub, while Andres was a mentor for the event, advising teams on how best to complete their projects!

We’ve sent people to hackathons before (see our CEO John’s post from HackMIT here) and we continue to do so for a few reasons. First, we’re strong believers in supporting innovation, particularly through mentorship, because it’s the same thing we do at UnifyID. Second, we’re able to meet students and hackers working on incredible projects, which is not only inspiring but shows us the depth and breadth of knowledge in the talent pool we can hire from. Finally, no matter the hackathon, we have enjoyed ourselves without fail. In fact, Andres even stayed the night at the event (which students said they’d never seen a mentor do!).

The winner of the hackathon was HackMIT, which built a Chrome extension called Cubic that leverages NLP to provide a timeline (topic history across sites) and proper context (detailed or general views on related topics) for any news story. The judging panel was incredibly impressed by the difficulty of the project, which adds dimensionality to the content we consume on a daily basis.

Hack the North from UWaterloo was the runner-up: they made a creative visual system called Fable to augment live storytelling. By breaking down voice inputs and pairing them with relevant web images, they constructed a useful supplement to traditional stories.

In third place was the Bitcamp team from the University of Maryland, College Park. They created Alexagram, an interactive hologram using Alexa. One of the coolest demos by far, their project gave Alexa some personality as well as some visual interaction with the user.

You can check out all the submissions here!

All the projects were incredible and the teams were all very impressive. We’re already looking forward to the next one!

Docker and Beanstalk: Welcome to the Gaps

At UnifyID we’re big fans of microservices à la Docker and Elastic Beanstalk. And for good reason. Containerization simplifies environment generation, and Beanstalk makes it easy to deploy and scale.

Both promise an easier life for developers, and both deliver, mostly. But as with all simple ideas, things get less, well, simple, as the idea becomes more widely adopted, and then adapted into other tools and services with different goals.

Soon there are overlaps in functionality, and gaps in the knowledge base (the Internet) quickly follow. Let’s take an example.

When you first jump into Docker, it makes total sense. You have this utility docker and you write a Dockerfile that describes a system. You then tell docker to read this file and magically, a full-blown programming environment is born. Bliss.

But what about running multiple containers? You’ll never be able to do it all with just a single service. Enter docker-compose, a great utility for handling just this. But suddenly, what was so clear before is now less clear:

  • Is the docker-compose.yml supposed to replace the Dockerfile? Complement it?
  • If they’re complementary, do options overlap? (Yes.)
  • If options overlap, which should go where?
  • How do the containers address each other given a specific service? Still localhost? (Not necessarily.)

Add in something like Elastic Beanstalk, its Dockerrun.aws.json file, doing eb local run, and things get even more fun to sort out.

In this post I want to highlight a few places where the answers weren’t so obvious when trying to implement a Flask service with MongoDB.

To start off, it’s a pretty straightforward setup. One container runs Flask and serves HTTP, and a second container serves MongoDB. Both are externally accessible. The MongoDB is password-protected, naturally, and in no way am I going to write my passwords down in a config file. They must come from the environment.
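On the Flask side, the connection settings come straight from the environment. Here’s a minimal sketch using pymongo (the variable names match the compose files below; the exact connection code is an assumption):

# Sketch: build the Mongo connection from environment variables.
# MONGO_SERVER defaults to "mongodb", the compose service name used later on.
import os
from pymongo import MongoClient

client = MongoClient(
    host=os.environ.get("MONGO_SERVER", "mongodb"),
    username=os.environ["MONGO_USER"],
    password=os.environ["MONGO_PASS"],
)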

Use the Dockerfile just for provisioning

The project began its life with a single Dockerfile containing an ENTRYPOINT to start the app. This was fine while I was still in the early stages of development — I was still mocking out parts of external functionality, or not even handling it yet.

But then I needed the same setup to provide a development environment with actual external services running, and the ENTRYPOINT in the Dockerfile became problematic. And then I realized — you don’t need it in the Dockerfile, so ditch it. Let the Dockerfile do all the provisioning, and specify your entrypoint in one of the other ways. From the command line:

docker run --entrypoint make myserver run-tests

Or, from your docker-compose.yml, you can do it like this:

version: '2'
services:
  myserver:
    ...
    entrypoint: make dev-env

This handily solved the problem of having a single environment oriented to different needs, i.e. test runs and a live development environment.

Don’t be afraid of multiple Dockerfiles

The docker command looks locally for a file named Dockerfile. But this is just the default behavior, and it’s pretty common to have slightly different configs per environment. For example, our dev and production environments are very similar, but we have some extra stuff in dev that we want to weed out for production.

You can easily specify the Dockerfile you want by using docker build -f Dockerfile.dev ..., or by simply using a link: ln -s Dockerfile.dev Dockerfile && docker build ...

If your docker-compose.yml specifies multiple containers you may find yourself in the situation where you not only have multiple Dockerfiles for a given service, but Dockerfile(s) for each service. To demonstrate, let’s say we have the following docker-compose.yml

version: '2'
services:
  flask:
    build: .
    image: myserver:prod
    volumes:
      - .:/app
    links:
      - mongodb
    environment:
      - MONGO_USER=${MONGO_USER}
      - MONGO_PASS=${MONGO_PASS}
    ports:
      - '80:5000'
    entrypoint: make run-server
  mongodb:
    build: ./docker/mongo
    image: myserver:mongo
    environment:
      - MONGO_USER=${MONGO_USER}
      - MONGO_PASS=${MONGO_PASS}
    ports:
      - '27017:27017'
    volumes:
      - ./mongo-data:/data/mongo
    entrypoint: bash /tmp/init.sh

In the source tree for the above, we have Dockerfiles in the following locations:

Dockerfile.dev
Dockerfile.prod
docker/mongo/Dockerfile

The docker-compose command uses the build option to tell it where to find the Dockerfile for a given service. The top two files are for the Flask service, and the appropriate Dockerfile is chosen using the linking strategy mentioned above. The mongodb service keeps its own Dockerfile in a separate folder. The line

build: ./docker/mongo

tells docker where to look for it.

Dockerrun.aws.json, the same, but different

Enter Elastic Beanstalk and Dockerrun.aws.json. Now you have yet another file, and it pretty much duplicates docker-compose.yml — but of course with its own personality.

You use Dockerrun.aws.json v2 to deploy multiple containers to Elastic Beanstalk. Also, when you do eb local run, the file .elasticbeanstalk/docker-compose.yml is generated from it.

Here’s what the Dockerrun.aws.json corollary of the above docker-compose.yml file looks like:

{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "mongo-data",
      "host": {
        "sourcePath": "/var/app/mongo-data"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "myserver",
      "image": "SOME-ECS-REPOSITORY.amazonaws.com/myserver:latest",
      "environment": [
          {
            "name": "MONGO_USER",
            "value": "changemeuser"
          },
          {
            "name": "MONGO_PASS",
            "value": "changemepass"
          },
          {
            "name": "MONGO_SERVER",
            "value": "mongo-server"
          }
      ],
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 5000
        }
      ],
      "links": [
        "mongo-server"
      ],
      "command": [
        "make", "run-server-prod"
      ]
    },
    {
      "name": "mongo-server",
      "image": "SOME-ECS-REPOSITORY.amazonaws.com/mongo-server:latest",
      "environment": [
          {
            "name": "MONGO_USER",
            "value": "changemeuser"
          },
          {
            "name": "MONGO_PASS",
            "value": "changemepass"
          }
      ],
      "mountPoints": [
        {
          "sourceVolume": "mongo-data",
          "containerPath": "/data/mongo"
        }
      ],
      "portMappings": [
        {
          "hostPort": 27017,
          "containerPort": 27017
        }
      ],
      "command": [
        "/bin/bash", "/tmp/init.sh"
      ]
    }
  ]
}

Let’s highlight a few things. First, you’ll see that the image option is different, i.e.

      "image": "SOME-ECS-REPOSITORY.amazonaws.com/myserver:latest",

This is because we build our docker images and push them to a private repository on Amazon ECS. On deploy, Beanstalk looks for the one tagged latest, pulls, and launches.

Next, you may have noticed that in docker-compose.yml we have the entrypoint option to start the servers. However, in Dockerrun.aws.json we’re using "command".

There are some subtle differences between ENTRYPOINT and CMD. But in this case, it’s even simpler. Even though Dockerrun.aws.json has an "entryPoint" option, the server commands wouldn’t run. I had to switch to "command" before I could get eb local run to work. Shrug.

Another thing to notice is that in docker-compose.yml we’re getting variables from the host environment and setting them into the container environment:

    environment:
      - MONGO_USER=${MONGO_USER}
      - MONGO_PASS=${MONGO_PASS}

Very convenient. However, you can’t do this with Dockerrun.aws.json. You’ll have to rewrite the file with the appropriate values, then reset it. The next bit will demonstrate this.

We’re setting a local volume for MongoDB with the following block:

  "volumes": [
    {
      "name": "mongo-data",
      "host": {
        "sourcePath": "/var/app/mongo-data"
      }
    }
  ]

The above path is production specific. This causes a problem with eb local run, mainly because of permissions on your host machine. If you set a relative path, i.e.

        "sourcePath": "mongo-data"

the volume is created under .elasticbeanstalk/mongo-data, and everything works fine. On a system with Bash, you can solve this pretty easily by doing something along the following lines:

cp Dockerrun.aws.json Dockerrun.aws.json.BAK
sed -i '' "s/\/var\/app\///g" Dockerrun.aws.json
eb local run ; mv Dockerrun.aws.json.BAK Dockerrun.aws.json

We just delete the /var/app/ part, run the container locally, and return the file back to how it’s supposed to be for deploys. This is also how we set the password — changemepass — from the environment on deploy.
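If the sed escapes get unwieldy, the same kind of rewrite can be done in a few lines of Python. This is a sketch of the idea (filling the env placeholders from the host environment), not the exact script we used:

# Sketch: substitute host env vars into Dockerrun.aws.json before deploying
import json
import os

with open("Dockerrun.aws.json") as f:
    spec = json.load(f)

for container in spec["containerDefinitions"]:
    for var in container.get("environment", []):
        # Replace each placeholder value with the host env var of the same name
        var["value"] = os.environ.get(var["name"], var["value"])

with open("Dockerrun.aws.json", "w") as f:
    json.dump(spec, f, indent=2)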

Last, you’d think that eb local run, which is designed to simulate an Elastic Beanstalk environment locally via Docker, would execute pretty much the same way as docker-compose up.

However, I discovered one frustrating gotcha. In our Flask configuration, we are addressing the MongoDB server with mongodb://mongodb (instead of mongodb://localhost) in order to make the connection work between containers.

This simply did not work in eb local run. Neither did using localhost. It turns out the solution is to use another environment variable, MONGO_SERVER. In our Flask config, we do the following, which defaults to mongodb://mongodb:

    'MONGO_SERVER': os.environ.get('MONGO_SERVER', 'mongodb'),

In Dockerrun.aws.json, we specify this value as

          {
            "name": "MONGO_SERVER",
            "value": "mongo-server"
          }

Why? Because the "name" of our container is mongo-server and eb generates an entry in /etc/hosts based on that.

So now everything works between docker-compose up, which uses mongodb://mongodb, and eb local run, which uses mongodb://mongo-server.

These are just a few of the things that might confound you when trying to do more than just the basics with Docker and Elastic Beanstalk. Both have a lot to offer, and you should definitely jump in if you haven’t already. Just watch out for the gaps!

UnifyID Takes Home the Win at SXSW!

Our first trip to SXSW didn’t disappoint! Among the legions of Interactive and Film conference-goers were scores of locals partaking in the immersive spectacle of Austin; in all, attendees were 50k strong in the first week. For a San Francisco startup, weird is relative.

Left: at SXSW, Bravo promoting its new show, Stripped, on 6th Street in Austin. Right: San Francisco, from my Facebook feed the same day. #stayweird. Photo credit: thanks, Shannon!

Companies, marketers, creatives, tastemakers, brands, and bands are all vying for visibility, reach, and engagement. The hype game was strong; however, if you could but for a moment suspend a cynic’s disbelief, those ice-cold Lone Stars and live music erupting in every drizzling corner of Austin became magical.

UnifyID was incredibly honored to place #1 in the Security and Privacy track at SXSW’s annual Interactive Accelerator, but part of what made that win so sweet was the goodwill of the SXSW attendees. In a surprising moment on stage, a room of about 400 people became an intimate family affair for a few minutes. Together, we all sang happy birthday to Sophie, John’s newly turned four-year-old daughter.

Thank you to everyone at SXSW for making our exclusive Silent Disco Brunch a success and our Accelerator Pitch one for the books!

UnifyID Scores a Unanimous Win at RSA Innovation Sandbox!

Behind every great idea, there lies a kernel of unequivocal human truth and a long road of execution to realize those intentions. On Monday, February 13th, the UnifyID team delivered and unanimously won RSA’s 2017 Innovation Sandbox competition.

“UnifyID demonstrated they were the most innovative by proving there is a way to actually leverage the individuality of humans to improve security.”
– Linda Gray Martin, Director & General Manager of RSA Conference. 

UnifyID Founder and CEO John Whaley kept a 1,200-person standing-room-only audience on its toes with a 3-minute pitch followed by a 3-minute rapid-fire line of questioning from a panel of venture capitalists, entrepreneurs, and judges from large security companies.

Watch the 3-minute pitch below!

Many thanks to RSA and all our supporters who also saw that unequivocal human truth: there is only one you in the world.

We are on a mission to change the world and build a revolutionary identity platform based on implicit authentication to make your security seamless.

Global Security Survey Across 700+ Organizations

In this brave new world of emerging protectionism and continued globalization, privacy and security seem at odds. In our market survey of 730 individuals across a similar number of organizations, a startling 50% of respondents rated their security concern a full 10 on a 0-to-10 scale (0 being not at all concerned, 10 very concerned).

Scale of Security Concern
UnifyID survey question on security (0 not at all concerned to 10 very important).

UnifyID, a service that authenticates you based on unique factors like the way you walk, type, and sit, is a revolutionary new identity platform for seamless security. Understanding the need for data privacy and ownership, product ease of use, and multifactor security, UnifyID has crafted a solution to address the pain of remembering passwords for authorized access in online and offline use cases.

In a deeper dive across 70 organizations and 40 hours of interviews, we discovered that people care a lot about easier access not only at work but also at home. “It would be great if you can take stuff off my plate: several cell phones for different countries, computers, iPads, smart software in my car and home that can all actually talk to each other so that I don’t use the same password or long passwords every time I do a software update–this would save me several days every year I take to manage the access to these independent tools,” says Marco, an enterprise software COO.

Global Security Interest - Interviewed
70 organizations and 40 hours of interviews across various backgrounds.

In another interview, John, an undergraduate aerospace engineer, said: “Personally, I’m just excited to make new technology a part of my life. UnifyID complements my life with easier access to all of the sites I visit and makes my life easier and exciting to see the technology of the future as part of my life. Other than protecting my identity, it’s really cool to use this technology to make a big difference in people’s lives.”

These interviews were a special treat to meet people from different cultures, backgrounds, and walks of life. We had an incredibly unique chance to hear more on what specifically about security is most important to our users’ day-to-day lives. “As a small company, you have the opportunity to touch more people than Coca-Cola! A guy living in Istanbul is really interested in what you’re doing right now, 10k kilometers away. I’m sure more than 100k people are very excited about what 6 people in San Francisco are doing,” remarked a manager at Coca-Cola at the end of our call.

Our challenge is unique in that we’re not just addressing large corporations but real people including our friends and family. Though the political tides may be changing, taking back your rights to security and privacy is a paramount task we don’t take lightly. If you’d like to join us in this journey to taking down passwords, please sign up for beta or feel free to drop us a note anytime.

Survey Demographics

Survey Demographics: Age

Survey Demographics: Gender

Survey Demographics: Ethnicity

Survey Demographics: Education

UnifyID Anoints 16 Distinguished Scientists for the AI Fellowship

Fast Growing Startup Uses Machine Learning to Solve Passwordless Authentication

Today, UnifyID, a service that can authenticate you based on unique factors like the way you walk, type, and sit, announced the final 16 fellows selected for its inaugural Artificial Intelligence Fellowship for the Fall of 2016. Each of the fellows has shown exemplary leadership and curiosity in making a meaningful difference in our society and clearly has an aptitude for making sweeping changes in this rapidly growing area of AI.

Speaking after the company’s recent launch and success at TechCrunch Disrupt, where it claimed SF Battlefield Runner-Up (2nd of 1,000 applicants worldwide), UnifyID CEO John Whaley said, “We were indeed overwhelmed by the amazing response to our first edition of the AI Fellowship and the sheer quality of applicants we received. We also take immense pride in the fact that more than 40% of our chosen cohort will be women, which further reinforces our commitment as one of the original 33 signees of the U.S. White House Tech Inclusion Pledge.”

The final 16 fellows hail from Israel, Paris, Kyoto, Bangalore, and cities across the U.S., and hold Ph.D., M.S., M.B.A., and B.S. degrees from MIT, Stanford, Berkeley, Harvard, Columbia, NYU-CIMS, UCLA, and Wharton, among other top institutions.

  • Aidan Clark, triple major in Math, Classical Languages, and CS at UC Berkeley
  • Anna Venancio-Marques, Data Scientist in Residence, PhD, École normale supérieure
  • Arik Sosman, Software Engineer at BitGo, 2x Apple WWDC scholar, CeBIT speaker
  • Baiyu Chen, Convolutional Neural Network Researcher, Masters in CS at UC Berkeley
  • Fuxiao Xin, Lead Machine Learning Scientist at GE Global Research, PhD Bioinformatics
  • Kathy Sohrabi, VP Engineering, IoT and sensors, MBA at Wharton, PhD EE at UCLA
  • Kazu Komoto, Chief Robotics Engineer, CNET Writer, Masters in ME at Kyoto University
  • Laura Florescu, co-authored Asymptopia, Mathematical Reviewer, PhD CS at NYU
  • Lorraine Lin, Managing Director, MFE Berkeley, PhD Oxford, Masters Design Harvard
  • Morgan Lai, AI Scientist, MIT Media Lab, Co-founder/CTO, M.Eng. CS at MIT
  • Pushpa Raghani, Post Doc Researcher at Stanford and IBM, PhD Physics at JNCASR
  • Raul Puri, Machine Learning Development at Berkeley, BS EE/CS/Bioeng at Berkeley
  • Sara Hooker, Data Scientist, founder of a non-profit for educational access in rural Africa
  • Siraj Raval, Data Scientist, the Bill Nye of Computer Science on YouTube
  • Wentao Wang, Senior New Tech Integration Engineer at Tesla, PhD ME at MIT
  • Will Grathwohl, Computer Vision Specialist, Founder/Chief Scientist, BS at MIT CSAIL

This highly selective, cross-disciplinary program covers the following areas:

  • Deep Learning
  • Signal Processing
  • Optimization Theory
  • Sensor Technology
  • Mobile Development
  • Statistical Machine Learning
  • Security and Identity
  • Human Behavior

Our UnifyID AI Fellows will get to choose from one of 16 well-defined projects in the broad area of applied artificial intelligence, in the context of solving the problem of seamless personal authentication. The Fellows will be led by our esteemed Fellowship Advisors, renowned experts in machine learning with PhDs from CMU, Stanford, and the University of Vienna.

Please welcome our incoming class! ✨

 

Read the original UnifyID AI Fellowship Announcement:

https://unify.id/2016/10/10/announcing-the-unifyid-ai-fellowship/

 

Initial Release:

http://www.prweb.com/releases/2016/unifyid/prweb13804371.htm#!