About CloudCV

CloudCV began in the summer of 2013 as a research project within the Machine Learning and Perception lab at Virginia Tech (now at Georgia Tech), with the ambitious goal of building platforms that make AI research more reproducible. We’re a young community working towards enabling developers, researchers, and fellow students to build, compare, and share state-of-the-art Artificial Intelligence algorithms. We believe that one shouldn’t have to be an AI expert to have access to cutting-edge vision algorithms. Likewise, researchers shouldn’t have to worry about building a service around their deep learning models to showcase and share them with others.

We have participated in the past eight installments of Google Summer of Code, over the course of which our students built several excellent tools and features. If you are interested in participating as a student or mentor, scroll down to check out our projects and get involved! We are more than happy to answer any questions you may have regarding CloudCV, so feel free to reach out to us on our Slack workspace or on our mailing list.


Evaluating submission code in Docker

Python Django DRF Docker AWS

The rise of reinforcement learning, and of any problem in which an agent must interact with an environment, introduces additional challenges for benchmarking. In contrast to the supervised learning setting, where performance is measured by evaluating on a static test set, it is less straightforward to measure the generalization performance of these agents through their interactions with the environment. Evaluating such agents involves running the associated code on a collection of unseen environments that constitutes a hidden test set for this scenario. The goal of this project is to set up a robust pipeline for uploading prediction code in the form of Docker containers (as opposed to a test prediction file) that is evaluated on remote machines, with the results displayed on the leaderboard.
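To make the idea concrete, here is a minimal sketch of how such a pipeline might launch an untrusted submission container with the hidden environments mounted read-only. All names (image tag, mount paths, limits) are hypothetical illustrations, not the actual EvalAI implementation:

```python
import shlex

def build_run_command(image, test_env_dir, cpu_limit="2", mem_limit="4g",
                      timeout_secs=3600):
    """Construct a `docker run` invocation for an untrusted submission image.

    The hidden test environments are mounted read-only, the network is
    disabled, and CPU/memory are capped so one submission cannot starve
    the evaluation machine.
    """
    return [
        "timeout", str(timeout_secs),      # hard wall-clock limit
        "docker", "run", "--rm",
        "--network", "none",               # no internet access for the agent
        "--cpus", cpu_limit,
        "--memory", mem_limit,
        "-v", f"{test_env_dir}:/envs:ro",  # hidden test environments, read-only
        image,
    ]

# The worker would pass this command to subprocess.run() on the remote machine.
print(shlex.join(build_run_command("participant/agent:latest", "/data/unseen_envs")))
```

Isolating the container from the network and mounting the environments read-only is what keeps the test set hidden even though the participant's code runs against it.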



Enhance UI/UX of EvalAI

AngularJS HTML CSS Javascript

This project will focus on improving the existing UI of EvalAI to enhance the experience of both challenge organizers and participants. We also want to improve the discoverability of the features EvalAI supports. With the increase in the number of users on EvalAI, a frictionless and intuitive user experience is critical. The goals of this project are to streamline the challenge-creation pipeline, enhance the overall user experience of the platform, and add plots displaying the progress of state-of-the-art algorithms and of a participant team in a challenge over the years, among several other features.



Robust Evaluation Pipeline

Python Django DRF Docker AWS

Currently, the submission worker that evaluates challenge submissions requires manual scaling. Moreover, real-time logging and metrics monitoring for the submission worker are not available to challenge hosts. An often-requested feature from challenge organizers is the ability to test their competition package (evaluation scripts, etc.) locally before uploading it to EvalAI; this capability would also reduce the assistance required from platform maintainers. The goal of this project is to write a robust test suite for the submission worker and port it to AWS Fargate to set up auto-scaling and logging. The tasks also include giving challenge hosts control over the submission worker from the UI: starting, stopping, and restarting it.
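As a rough illustration of the "test locally before uploading" idea, a host could sanity-check their evaluation script against a small sample submission. The `evaluate()` signature and file format below are hypothetical, chosen only for illustration:

```python
def evaluate(annotation_file, user_submission_file, phase_codename="test"):
    """Toy evaluation script: accuracy of predicted labels vs. ground truth."""
    with open(annotation_file) as f:
        truth = [line.strip() for line in f]
    with open(user_submission_file) as f:
        preds = [line.strip() for line in f]
    if len(truth) != len(preds):
        raise ValueError("submission length does not match annotations")
    correct = sum(t == p for t, p in zip(truth, preds))
    return {"result": [{phase_codename: {"Accuracy": correct / len(truth)}}]}

if __name__ == "__main__":
    # Run the script against a tiny sample locally, the way a host would
    # verify their competition package before uploading it to the platform.
    import os, tempfile
    tmp = tempfile.mkdtemp()
    ann = os.path.join(tmp, "annotations.txt")
    sub = os.path.join(tmp, "submission.txt")
    with open(ann, "w") as f:
        f.write("cat\ndog\ncat\n")
    with open(sub, "w") as f:
        f.write("cat\ndog\ndog\n")
    print(evaluate(ann, sub))  # two of three predictions are correct
```

Catching a mismatched file format or a crashing script at this stage is exactly the kind of issue that otherwise ends up as a support request to the platform maintainers.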



New Frontend for EvalAI using Angular 5

Angular Typescript HTML CSS

EvalAI’s current frontend is built with Angular 1 (AngularJS), which is no longer actively maintained by the community. Later versions of Angular support useful features such as server-side rendering for better SEO, improved performance, etc. We want to migrate the current codebase to Angular 5 with a new design and achieve feature parity. The first half of the summer will focus on porting the existing features from the older version with a new UI, while the latter half will focus on building an exhaustive analytics platform for challenge hosts and participants. The tasks also include adding UI support for hosts and participants in reinforcement-learning-based challenges.


Become a Mentor!

Mentoring is very important to the future of CloudCV. It introduces new people to the world of open source software who will enrich our community with their ideas and talents.

Apart from technical skills, being a mentor requires your time, a clear roadmap for your project, and good organizational skills. If you think you would be a good fit to mentor one of our projects, do reach out to us!


Okay, let's mentor!

Have your own idea? Add an issue to our GSoC-Ideas repository.

In case of queries, you can contact us.

Email: team@cloudcv.org