About CloudCV

CloudCV began in the summer of 2013 as a research project within the Machine Learning and Perception lab at Virginia Tech (now at Georgia Tech), with the ambitious goal of building platforms that make AI research more reproducible. We’re a young community working towards enabling developers, researchers, and fellow students to build, compare, and share state-of-the-art Artificial Intelligence algorithms. We believe that one shouldn’t have to be an AI expert to have access to cutting-edge vision algorithms. Likewise, researchers shouldn’t have to worry about building a service around their deep learning models to showcase and share them with others.

We have participated in the past eight installments of Google Summer of Code, over the course of which our students built several excellent tools and features. If you are interested in participating as a student or mentor, scroll down to check out our projects and get involved! We are more than happy to answer any questions you may have regarding CloudCV, so feel free to reach out to us on our Slack workspace or on our mailing list.


Adversarial Data using Gradio and EvalAI

Django AWS DevOps

The aim of this project is to develop an infrastructure that enables the collection of adversarial data for models submitted to EvalAI. This will be achieved by integrating Gradio with EvalAI’s code upload challenge pipeline and deploying the models as web services. The web services will record all user interactions, providing a dataset for each submission that can be used to evaluate the robustness of the model.
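
To make the idea concrete, here is a minimal sketch (not EvalAI’s actual pipeline) of how a submitted model could be wrapped in a Gradio app that logs every user interaction; the stand-in model, log path, and file format are assumptions for illustration only.

```python
import json
import time
import gradio as gr

def submitted_model(text: str) -> str:
    # stand-in for the participant's model loaded from their code-upload submission
    return "positive" if "good" in text.lower() else "negative"

LOG_PATH = "interactions_submission_42.jsonl"  # hypothetical per-submission log

def predict(text: str) -> str:
    label = submitted_model(text)
    # record the interaction so it can later be aggregated into an adversarial dataset
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps({"input": text, "output": label, "timestamp": time.time()}) + "\n")
    return label

# expose the model as an interactive web service
demo = gr.Interface(fn=predict, inputs=gr.Textbox(), outputs=gr.Textbox())
demo.launch()
```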



Analytics Dashboards for EvalAI Users

Angular 7 Django Django Rest Framework D3.js

The goal of this project is to provide challenge hosts and participants with insightful analytics to track their progress on the platform. This project will involve writing REST APIs, plotting relevant graphs, and building analytics dashboards for both challenge hosts and participants. The analytics will help challenge hosts view the progress of participants in their challenge (changes in performance and ranking over time), and participants will be able to visualize the performance of all their submissions over time and their corresponding rank on the leaderboard.
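
As a rough illustration of the kind of REST API this would involve (model and field names here are assumptions, not EvalAI’s actual schema), a Django Rest Framework endpoint could return a participant team’s submission scores over time for the Angular/D3.js frontend to plot:

```python
from rest_framework.decorators import api_view
from rest_framework.response import Response

from .models import Submission  # hypothetical submission model

@api_view(["GET"])
def submission_progress(request, challenge_pk, participant_team_pk):
    # all submissions by one team to one challenge, oldest first
    submissions = Submission.objects.filter(
        challenge_phase__challenge=challenge_pk,
        participant_team=participant_team_pk,
    ).order_by("submitted_at")
    data = [
        {"submitted_at": s.submitted_at, "score": s.score, "rank": s.rank}
        for s in submissions
    ]
    return Response({"count": len(data), "results": data})
```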



Evaluation Infrastructure Optimization

Angular Django AWS

This project aims to enhance EvalAI by automating large worker deployments on AWS, adding features for efficient challenge management, and writing a robust, efficient test suite. The focus of the project is two-fold:

  • To automate large worker deployment on EvalAI using AWS EC2 on-demand or spot instances, making challenge management seamless and reducing challenge hosts’ dependency on EvalAI admins (see the sketch after this list).

  • To make EvalAI more reliable and error-free by adding tests for the different frontend and backend components. A robust test suite prevents code-breaking changes from landing in the codebase. This task will include adding unit tests for the API suite, prediction upload evaluation workers, and code upload evaluation workers (on EKS), plus integration tests for end-to-end testing of all components.
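
For the first point, a rough boto3 sketch of the automation idea looks like the following; the AMI ID, instance type, tags, and helper name are placeholders, not the actual EvalAI deployment code.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def launch_workers(challenge_id: int, count: int = 2, use_spot: bool = True):
    # launch a batch of EC2 instances to run evaluation workers for one challenge
    params = {
        "ImageId": "ami-0123456789abcdef0",  # placeholder worker AMI
        "InstanceType": "t3.medium",
        "MinCount": count,
        "MaxCount": count,
        "TagSpecifications": [{
            "ResourceType": "instance",
            "Tags": [{"Key": "Challenge", "Value": str(challenge_id)}],
        }],
    }
    if use_spot:
        # request spot capacity to cut evaluation costs
        params["InstanceMarketOptions"] = {"MarketType": "spot"}
    response = ec2.run_instances(**params)
    return [instance["InstanceId"] for instance in response["Instances"]]
```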



Improvements in EvalAI User Interface

Angular Django

The goal of this project is to improve the overall user experience for challenge hosts and participants on EvalAI by letting them tag and filter challenges and by creating an intuitive, informative leaderboard. This project will involve building a comprehensive search feature for finding challenges and a tagging system for different challenge types (e.g., Computer Vision, NLP) for categorization, so that participants and challenge hosts can discover challenges and find related ones based on tags. An improved leaderboard, together with the search feature, will streamline organizing challenges, participating in them, and ranking participants. In addition, we will add support for relevant metadata for each challenge, such as prize money and sponsors.
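
An illustrative Django sketch of what the tagging piece might look like (model and field names are assumptions, not EvalAI’s existing schema): a simple many-to-many relation lets challenges be filtered by domain tags.

```python
from django.db import models

class ChallengeTag(models.Model):
    # e.g. "Computer Vision", "NLP"
    name = models.CharField(max_length=64, unique=True)

class Challenge(models.Model):
    title = models.CharField(max_length=256)
    tags = models.ManyToManyField(ChallengeTag, related_name="challenges", blank=True)

# Filtering behind the search/tag API could then be a simple queryset:
# Challenge.objects.filter(tags__name__in=["Computer Vision", "NLP"]).distinct()
```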


Become a Mentor!

Mentoring is very important to the future of CloudCV. It introduces new people to the world of open source software who will enrich our community with their ideas and talents.

Apart from technical skills, being a mentor requires your time, a clear roadmap for your project, and good organizational skills. If you think you would be a good fit to mentor one of our projects, do reach out to us!


Okay, let's mentor!

Have your own idea? Add an issue to our GSoC-Ideas repository.

In case of queries, you can contact us.

Email: team@cloudcv.org