About CloudCV

CloudCV began in the summer of 2013 as a research project within the Machine Learning and Perception lab at Virginia Tech (now at Georgia Tech), with the ambitious goal of building platforms that make AI research more reproducible. We’re a young community working towards enabling developers, researchers, and fellow students to build, compare, and share state-of-the-art Artificial Intelligence algorithms. We believe that one shouldn’t have to be an AI expert to have access to cutting-edge vision algorithms. Likewise, researchers shouldn’t have to worry about building a service around their deep learning models to showcase and share them with others.

We have participated in the past eight installments of Google Summer of Code, over the course of which our students built several excellent tools and features. If you are interested in participating as a student or mentor, scroll down to check out our projects and get involved! We are more than happy to answer any questions you may have regarding CloudCV, so feel free to reach out to us on our Slack workspace or on our mailing list.


Analytics dashboards for challenge hosts and participants

Angular 7, Django, Django Rest Framework, D3.js

This project will involve writing REST APIs, plotting relevant graphs, and building analytics dashboards for challenge hosts and participants. The analytics will help challenge hosts track the progress of participants in their challenge, for instance by comparing accuracy trends across participant submissions over time. Participants will be able to visualize the performance of all of their submissions over time, along with their corresponding rank on the leaderboard. The final goal is to provide users with rich analytics to track their progress on the platform.
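
As a rough illustration only (not EvalAI's actual code), one such analytics endpoint built with Django REST Framework might aggregate submission scores per day so the frontend can plot them with D3.js; the model and field names below are hypothetical placeholders:

```python
# Hypothetical sketch of a challenge-analytics endpoint.
# Model, field, and view names are illustrative, not EvalAI's actual ones.
from django.db.models import Avg
from django.db.models.functions import TruncDate
from rest_framework.response import Response
from rest_framework.views import APIView

from .models import Submission  # assumed model with challenge_id, score, submitted_at


class SubmissionTrendView(APIView):
    """Return the average submission score per day for a challenge,
    in a shape ready to be plotted as a D3.js time series."""

    def get(self, request, challenge_id):
        trend = (
            Submission.objects.filter(challenge_id=challenge_id)
            .annotate(day=TruncDate("submitted_at"))
            .values("day")
            .annotate(avg_score=Avg("score"))
            .order_by("day")
        )
        return Response(list(trend))
```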



Improvements in EvalAI frontend

Angular 7, HTML, CSS, TypeScript

After last year’s GSoC, EvalAI-ngx has reached feature parity with the existing UI, so this project will involve fixing the last remaining kinks. The goal is to polish the new UI as it replaces the existing one before GSoC, incorporating the feedback we receive from challenge hosts and participants of the AI challenges organized this year.



Monitoring setup for EvalAI admins

Python, Django, Django Rest Framework, Docker

As the number of challenges on EvalAI increases, we want to focus on improving the performance of our services. As a first step, we will focus on monitoring and measuring all the key metrics of our services. Insights from these metrics will allow us to utilize our infrastructure efficiently, improve uptime, and reduce costs. The project will concentrate on setting up metric-reporting and alerting infrastructure, writing REST APIs, plotting relevant graphs, and building analytics dashboards to help EvalAI admins maintain and monitor the services.
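
As a hedged sketch of what such metric reporting could look like, a Django middleware can record request counts and latencies and expose them for a scraper and alerting rules to consume. The choice of the prometheus_client library and all metric names here are assumptions for illustration, not tooling EvalAI has committed to:

```python
# Hypothetical metrics setup using prometheus_client; metric names and the
# choice of Prometheus itself are illustrative assumptions.
import time

from django.http import HttpResponse
from prometheus_client import CONTENT_TYPE_LATEST, Counter, Histogram, generate_latest

REQUEST_COUNT = Counter(
    "evalai_http_requests_total", "Total HTTP requests", ["method", "path"]
)
REQUEST_LATENCY = Histogram(
    "evalai_http_request_latency_seconds", "Request latency in seconds", ["path"]
)


class MetricsMiddleware:
    """Record a request counter and a latency histogram for every request."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        start = time.time()
        response = self.get_response(request)
        REQUEST_COUNT.labels(request.method, request.path).inc()
        REQUEST_LATENCY.labels(request.path).observe(time.time() - start)
        return response


def metrics_view(request):
    """Expose the collected metrics in the Prometheus text format."""
    return HttpResponse(generate_latest(), content_type=CONTENT_TYPE_LATEST)
```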



Static code upload challenge evaluation

Python, Django, Kubernetes, Docker

EvalAI is a platform to host and participate in AI challenges around the globe. For a challenge host, reproducibility of submission results and privacy of the test data are the main concerns. Towards this, the idea is to allow users to submit a Docker image for their models and have it evaluated on static datasets. To achieve this, we want to build a pipeline that runs the dockerized models on Kubernetes-based infrastructure with stored test annotations and reports the results on the EvalAI leaderboard (a rough sketch of such a pipeline is shown after this description). Another part of the project is to streamline our challenge creation pipeline. Last year we added support for GitHub-based challenge creation, which allows challenge hosts to use a private GitHub repository to create and manage updates to a challenge. The goal for this year is to support bi-directional updates for challenges created using GitHub: this feature will allow hosts to sync changes from the EvalAI UI back to their challenge’s GitHub repository. Overall, the aim is to enhance the challenge creation experience for hosts while requiring minimal support from the EvalAI team.
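
As a minimal sketch of the evaluation side, assuming the official kubernetes Python client and a cluster where test annotations are mounted from a persistent volume (none of this reflects EvalAI's actual evaluation code, and the image, namespace, volume, and env names are hypothetical), launching a participant's image as a one-off Job could look like this:

```python
# Hypothetical sketch: launch a participant's Docker image as a Kubernetes Job.
# Image, namespace, volume, and env-var names are illustrative assumptions.
from kubernetes import client, config


def launch_evaluation_job(submission_id: int, participant_image: str) -> None:
    config.load_incluster_config()  # use config.load_kube_config() outside the cluster

    container = client.V1Container(
        name=f"evaluation-{submission_id}",
        image=participant_image,
        env=[client.V1EnvVar(name="SUBMISSION_ID", value=str(submission_id))],
        volume_mounts=[
            client.V1VolumeMount(name="test-annotations", mount_path="/data/annotations")
        ],
    )
    pod_spec = client.V1PodSpec(
        restart_policy="Never",
        containers=[container],
        volumes=[
            client.V1Volume(
                name="test-annotations",
                persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                    claim_name="test-annotations-pvc"
                ),
            )
        ],
    )
    job = client.V1Job(
        metadata=client.V1ObjectMeta(name=f"evaluation-{submission_id}"),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(spec=pod_spec),
            backoff_limit=0,
        ),
    )
    client.BatchV1Api().create_namespaced_job(namespace="evalai-jobs", body=job)
```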


Become a Mentor!

Mentoring is very important to the future of CloudCV. It brings new people into the world of open-source software, people who will enrich our community with their ideas and talents.

Apart from technical skills, being a mentor requires your time, a clear roadmap for your project, and good organizational skills. If you think you would be a good fit to mentor one of our projects, do reach out to us!


Okay, let's mentor!

Have your own idea? Add an issue to our GSoC-Ideas repository.

In case of queries, you can contact us.

Email: team@cloudcv.org