Project 1: Personalized Learning in Mixed Reality
Mentor: Dr. Lap-Fai (Craig) Yu
Mixed reality technologies (e.g., virtual reality headsets, augmented reality glasses, holographic displays) are becoming increasingly popular as a new medium for entertainment, communication, and education. In this project, we will explore using such technologies to drive personalized learning applications. For example, can we create a virtual coach to teach people how to dance in augmented reality? Can we devise a multi-player holographic board game with adaptive visual aids to teach people how to play? Beyond rich visualizations, a unique strength of mixed reality is the ability to track a user's actions. We will explore how to leverage such tracked user data to personalize the learning process.
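To make the idea of leveraging tracked user data concrete, here is a minimal sketch (in Python, with an invented data layout, not the project's actual pipeline) of how a virtual dance coach might score a learner's tracked joint positions against a reference motion and surface the frames that need the most work:

    # Hypothetical sketch: compare a learner's tracked motion against a
    # reference motion, frame by frame. The (frames x joints x 3) layout
    # is an assumption for illustration.
    import numpy as np

    def pose_error(learner, reference):
        """Mean per-joint Euclidean distance for each frame."""
        return np.linalg.norm(learner - reference, axis=2).mean(axis=1)

    rng = np.random.default_rng(0)
    reference = rng.normal(size=(120, 17, 3))   # e.g., 120 frames, 17 joints
    learner = reference + rng.normal(scale=0.05, size=reference.shape)

    errors = pose_error(learner, reference)
    worst = int(errors.argmax())
    print(f"worst frame: {worst}, error: {errors[worst]:.3f}")
    # A virtual coach could replay the worst-matching frames as feedback.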
Project 2: Building a Virtual Assistant to Help End-Users Report Software Problems
Mentor: Dr. Kevin Moran
As software continues to pervade different aspects of society, prompt, detailed reporting of issues or bugs “in the wild” becomes increasingly important. Well-constructed bug reports alert developers to issues in their applications and ultimately lead to improved software quality. However, the quality of end-user bug reports often suffers due to the lexical gap that exists between developers and users of an application. In other words, while developers have intimate, low-level knowledge of how a program works, end-users typically do not, making it difficult for them to provide detailed information about software issues. This project will focus on the development and evaluation of a “virtual assistant” to help users report bugs in mobile applications. The project will make use of machine learning techniques, natural language processing, and data mined at large scale from software repositories like GitHub to build a “smart” bug reporting system for end users. The developed system will be evaluated through user studies.
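As one illustration of how NLP might fit in (a sketch under assumed data, not the project's actual design), an assistant could match a user's free-text description against textual descriptions of app screens to decide which targeted follow-up questions to ask:

    # Hypothetical sketch: match a bug description to candidate app screens
    # via TF-IDF similarity. The screen descriptions are made-up examples.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    screens = {
        "login": "login screen username password sign in button",
        "settings": "settings screen notifications toggle account preferences",
        "checkout": "checkout screen cart payment card total purchase",
    }
    report = "the app crashes when I tap the purchase button after adding items"

    vec = TfidfVectorizer()
    matrix = vec.fit_transform(list(screens.values()) + [report])
    sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    best = list(screens)[sims.argmax()]
    print(f"likely screen: {best}")  # ask checkout-specific follow-ups next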
Project 3: Teaching GUI-based Programming Skills via Automated Feedback
Mentor: Dr. Kevin Moran
User Interfaces (UIs) are an integral part of end-user software, often comprising large portions of an application's code base. As such, teaching effective UI programming skills to novice developers is an important component of software engineering education. However, while many program analysis tools operate directly on UI-related code, it can be difficult for novice students to understand whether or not they are following “best practices” for UI design. This project will involve building a “visual linter” for UI programming to help novice developers learn proper UI coding conventions. Students will be exposed to aspects of machine learning, computer vision, static and dynamic code analysis, as well as mining data from software repositories such as GitHub. The system will be evaluated in a user study with students.
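For a flavor of what one lint rule might look like (a hypothetical sketch; the widget records stand in for data a real tool would extract from a UI hierarchy or screenshot), consider flagging touch targets below the commonly recommended 48x48 dp minimum:

    # Hypothetical "visual lint" rule: flag undersized touch targets.
    MIN_DP = 48  # widely cited touch-target guideline

    widgets = [
        {"id": "btn_submit", "width_dp": 88, "height_dp": 36},
        {"id": "btn_help",   "width_dp": 24, "height_dp": 24},
    ]

    def lint_touch_targets(widgets, min_dp=MIN_DP):
        warnings = []
        for w in widgets:
            if w["width_dp"] < min_dp or w["height_dp"] < min_dp:
                warnings.append(
                    f"{w['id']}: {w['width_dp']}x{w['height_dp']} dp is below "
                    f"the {min_dp} dp touch-target guideline"
                )
        return warnings

    for msg in lint_touch_targets(widgets):
        print("WARN", msg)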
Project 4: Adaptive Educational Video Games
Mentor: Dr. Michael Eagle
This project involves the development and evaluation of an educational game that uses techniques from cognitive tutors to enhance the player experience. The goal is a game that can be shared through a web browser. Evaluation will be done through log data, with the potential for pilot user studies. Topic areas will include introductory computing concepts. (Game design, development, AI, UX, human subjects.)
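One technique cognitive tutors commonly use is Bayesian Knowledge Tracing (BKT), which maintains a running estimate of whether the player has mastered a skill. A minimal sketch with placeholder (uncalibrated) parameters:

    # Standard BKT update of P(skill known) after one observed attempt.
    def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
        if correct:
            num = p_know * (1 - p_slip)
            den = num + (1 - p_know) * p_guess
        else:
            num = p_know * p_slip
            den = num + (1 - p_know) * (1 - p_guess)
        posterior = num / den
        # Chance the skill was learned on this practice opportunity.
        return posterior + (1 - posterior) * p_learn

    p = 0.3  # prior mastery estimate
    for outcome in [True, False, True, True]:
        p = bkt_update(p, outcome)
        print(f"P(known) = {p:.3f}")
    # The game could adapt difficulty or hints once P(known) crosses a threshold.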
Project 5: Teaching Hyperparameter Tuning Strategies
Mentor: Dr. Thomas LaToza
Building machine learning systems today requires not only high-level conceptual knowledge but also practical knowledge of using ML frameworks, choosing among model architectures, pre-processing raw data, and tuning hyperparameters. While experts eventually acquire this knowledge through extensive trial and error, novices face many barriers in even getting started building machine learning systems.
This project will explore methods for making explicit the ways in which experts create and tune machine learning models. The project will engage with experts and existing resources to codify explicit strategies as sequences of steps, written in the Roboto strategy description language. To enable learners everywhere to make use of these strategies, the project will also explore creating a StackOverflow-style platform for solving machine learning problems.
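To illustrate what one codified strategy might look like (Roboto itself is a strategy description language, not Python; this is just an executable analogue of a plausible expert strategy, "tune one hyperparameter at a time, coarse to fine"):

    # Sketch of a coarse-to-fine sweep over one hyperparameter.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    # Step 1: coarse sweep over regularization strength C.
    coarse = [0.001, 0.01, 0.1, 1, 10, 100]
    scores = {C: cross_val_score(LogisticRegression(C=C, max_iter=1000), X, y).mean()
              for C in coarse}
    best_C = max(scores, key=scores.get)

    # Step 2: fine sweep around the coarse winner.
    fine = [best_C * f for f in (0.25, 0.5, 1, 2, 4)]
    scores = {C: cross_val_score(LogisticRegression(C=C, max_iter=1000), X, y).mean()
              for C in fine}
    best_C = max(scores, key=scores.get)
    print(f"chosen C = {best_C:g}, CV accuracy = {scores[best_C]:.3f}")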
Project 6: Fairness and Equity in Education
Mentors: Dr. Huzefa Rangwala, Dr. Mark Snyder
In this project, we will explore how issues of fairness and equity arise in an educational setting and how such problems become evident. We will then develop an analytics dashboard that highlights inclusivity metrics to assist in course correction and student support.
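As a sketch of the kind of metric such a dashboard might surface (synthetic records here; a real dashboard would pull from course data), the gap in pass rates across student groups is one simple demographic-parity-style check:

    import pandas as pd

    df = pd.DataFrame({
        "group":  ["A", "A", "A", "B", "B", "B", "B"],
        "passed": [1,   1,   0,   1,   0,   0,   1],
    })

    rates = df.groupby("group")["passed"].mean()
    print(rates)
    print(f"pass-rate gap: {rates.max() - rates.min():.2f}")
    # A large gap could trigger course correction or targeted support.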
Project 7: Analyzing Digital Museum Artifacts
Mentors: Dr. Huzefa Rangwala and Dr. Monaco Pino (Smithsonian Institution)
In today’s digital age, museums across the world have embarked on the mass digitization of their collections. The Smithsonian Institution, with its large and rich collection of artifacts across various museums, is at the forefront of this effort and also provides a tremendous opportunity for learning via the products made available by the Smithsonian Learning Lab. Teachers across the world can incorporate artifacts into lesson plans and use them across the curriculum. The key objective of this project is to develop machine learning methods that automatically curate missing metadata (and learning information) for various artifacts, analyze patterns within the lesson-plan creation process, and inform recommender systems that automatically curate lesson plans.
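For the metadata-curation step, one plausible (purely illustrative) approach is to train a text classifier on artifacts that already carry a label and use it to suggest labels for artifacts missing one; all records below are invented placeholders:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    labeled_text = [
        "apollo 11 command module spacecraft moon mission",
        "wright flyer first powered airplane",
        "hope diamond blue gemstone mineral",
        "triceratops fossil skeleton dinosaur",
    ]
    labels = ["space", "aviation", "geology", "paleontology"]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(labeled_text, labels)

    unlabeled = ["lunar rover vehicle used on the moon"]
    print(model.predict(unlabeled))  # suggested metadata, pending curator review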
Project 8: Protein Modeling with Deep Learning
Mentors: Dr. Amarda Shehu and Dr. Huzefa Rangwala
Deep neural networks have significantly advanced our ability to determine three-dimensional structures of proteins in silico. However, the objective of existing models is limited to determining one biologically-active structure. In contrast, complex organisms reuse proteins for a variety of tasks in vivo and exquisitely harness the ability of metamorphic protein molecules to change structures to regulate interactions with diverse molecular partners. In this project we will conceptualize, design, and test deep neural networks that model such proteins, expanding the objective from predicting one structure to computing the possibly diverse set of structures relevant for biological activity.
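One speculative way to expand the objective to a set of structures (not the project's settled architecture) is multiple-choice learning: give the network K output heads and penalize only the head closest to the observed structure, so the heads are free to specialize to different conformations. A PyTorch sketch with synthetic shapes:

    import torch
    import torch.nn as nn

    N_RES, K = 50, 4  # residues per protein, candidate structures per input

    class MultiStructureNet(nn.Module):
        def __init__(self, in_dim=64, hidden=128):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            # Each head emits 3D coordinates for every residue.
            self.heads = nn.ModuleList(
                [nn.Linear(hidden, N_RES * 3) for _ in range(K)]
            )

        def forward(self, x):
            h = self.body(x)
            # (batch, K, N_RES, 3): K candidate structures per input
            return torch.stack([head(h) for head in self.heads], dim=1).view(
                x.size(0), K, N_RES, 3
            )

    net = MultiStructureNet()
    x = torch.randn(8, 64)             # placeholder sequence features
    target = torch.randn(8, N_RES, 3)  # one observed structure per input
    pred = net(x)
    # Winner-takes-all loss: only the closest head is penalized.
    per_head = ((pred - target.unsqueeze(1)) ** 2).mean(dim=(2, 3))
    loss = per_head.min(dim=1).values.mean()
    loss.backward()
    print(float(loss))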
Project 9: Supporting NLP for Under-served Languages via Cross-Lingual Transfer
Mentor: Dr. Antonios Anastasopoulos
Progress in natural language processing has been swift, with large pre-trained neural language models (often multilingual) leading to large improvements in performance. Unfortunately, these improvements have been confined to a few languages for which we have a lot of annotated data. For hundreds of languages, though, as well as for smaller regional varieties of dominant languages, we have no annotated data, and as a result we cannot use these technologies. In this project, we will investigate various ways of making these models work for under-served languages, using e.g. cross-lingual transfer, cross-lingual annotation projection, automatic translation for data augmentation, and other techniques.
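Cross-lingual annotation projection, one of the techniques listed above, can be illustrated in a few lines: copy labels from an annotated source sentence onto an unannotated target sentence through word alignments (toy sentences and alignments here; real alignments would come from an automatic word aligner):

    src_tokens = ["Maria", "lives", "in", "Berlin"]
    src_tags   = ["B-PER", "O",     "O",  "B-LOC"]
    tgt_tokens = ["Maria", "wohnt", "in", "Berlin"]

    # (source index, target index) alignment pairs
    alignment = [(0, 0), (1, 1), (2, 2), (3, 3)]

    tgt_tags = ["O"] * len(tgt_tokens)
    for s, t in alignment:
        if src_tags[s] != "O":
            tgt_tags[t] = src_tags[s]

    print(list(zip(tgt_tokens, tgt_tags)))
    # Projected tags can bootstrap training data for an under-served language.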
Project 10: Multi-modal Sign Language Detection for Training Deaf and Hard of Hearing Students
Mentors: Dr. Parth Pathak, Dr. Huzefa Rangwala, Dr. Jana Kosecka
The broader goal of this project is to design, implement, and evaluate a sign language recognition system that can be used by deaf and hard-of-hearing (DHH) signers to interact with others (those who are not familiar with sign language) as well as with voice-controlled personal digital assistants. The recognition system identifies individual signs and contextual grammar signatures using multiple sensing modalities (video camera, motion sensors, and wireless signals) and translates them to text or speech, enabling interaction with other users and computing devices.
As part of this project, a team of undergraduate students will design and implement sign language recognition systems using different sensors. The project will focus on three types of sensors: (1) camera (RGB and depth), (2) wearable IMU motion sensors, and (3) WiFi signals. Each group of students will develop a sign language recognition system using a different type of sensor. This will include collecting sensor data for a few pre-determined ASL signs, profiling body motion through data mining, and developing machine learning models to uniquely recognize the signs from their sensor signatures.
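As a sketch of the IMU branch alone (synthetic data standing in for real accelerometer/gyroscope capture; the feature set and classifier are illustrative choices, not the project's design), a windowed-feature pipeline might look like:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    def featurize(window):
        # window: (n_samples, 6) accel+gyro; mean and std per channel.
        return np.concatenate([window.mean(axis=0), window.std(axis=0)])

    # Synthetic capture: 100 windows of 50 samples for each of 3 signs.
    X, y = [], []
    for sign in range(3):
        for _ in range(100):
            window = rng.normal(loc=sign, scale=1.0, size=(50, 6))
            X.append(featurize(window))
            y.append(sign)

    clf = RandomForestClassifier(random_state=0).fit(X[::2], y[::2])
    print("held-out accuracy:", clf.score(X[1::2], y[1::2]))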