Statement of Work: Group 1

We will use the Kinect to emulate the shadows of the Hero and the Snakeman during their first battle using full-body tracking. The Hero's shadow will look like a normal human, while the Snakeman's shadow will look snake-like. We will also explore using two (or more) Kinects at the same time to extend the lateral range of the sensor and give the actors more flexibility in their movements across the stage.
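One way to merge two side-by-side sensors is to shift each sensor's skeleton into a shared stage coordinate frame and use whichever sensor currently tracks the actor best. The sketch below illustrates that idea; the sensor names, offsets, and joint format (dicts of (x, y, z) in metres) are our assumptions, not a real Kinect SDK API.

```python
# Hypothetical two-Kinect merge: each sensor reports joints in its own
# frame; we shift them by the sensor's lateral offset on the stage.

SENSOR_OFFSETS = {
    "kinect_left": -1.0,   # assumed: sensor centred 1 m left of stage centre
    "kinect_right": 1.0,   # assumed: sensor centred 1 m right of stage centre
}

def to_stage_frame(sensor_id, joints):
    """Shift a sensor-local skeleton into stage coordinates."""
    dx = SENSOR_OFFSETS[sensor_id]
    return {name: (x + dx, y, z) for name, (x, y, z) in joints.items()}

def merge_skeletons(per_sensor):
    """Use the skeleton from whichever sensor tracks the most joints."""
    sensor_id, joints = max(per_sensor.items(), key=lambda kv: len(kv[1]))
    return to_stage_frame(sensor_id, joints)
```

A real system would also need to calibrate the offsets and blend skeletons in the overlap region, but picking the better-tracked sensor is a simple starting point.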

Statement of Work: Group 2

Bradlee Krupa and Chaitanya Sathi


For our semester project for CS1635, we plan to design a system for tracking the motion of a thrown object. The object will be thrown in real space and translated into virtual space while maintaining its trajectory and speed. The motion tracking will be done with the Kinect.
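Maintaining trajectory and speed across the real-to-virtual hand-off amounts to estimating the object's velocity from its last tracked positions and then extrapolating ballistically. A minimal sketch, assuming 30 Hz tracking and a y-up coordinate frame in metres (both assumptions, not part of the Kinect spec):

```python
# Hypothetical hand-off: finite-difference velocity from two tracked
# frames, then ballistic extrapolation into virtual space.

G = -9.81  # m/s^2, gravitational acceleration along y

def estimate_velocity(p0, p1, dt):
    """Finite-difference velocity between two tracked 3-D positions."""
    return tuple((b - a) / dt for a, b in zip(p0, p1))

def extrapolate(p, v, t):
    """Ballistic position t seconds after the last real observation."""
    x, y, z = p
    vx, vy, vz = v
    return (x + vx * t, y + vy * t + 0.5 * G * t * t, z + vz * t)

# Two frames at 30 Hz: the object rose 0.05 m and came 0.1 m closer.
dt = 1.0 / 30.0
v = estimate_velocity((0.0, 1.0, 2.0), (0.0, 1.05, 1.9), dt)
```

In practice one would fit the velocity over several frames to smooth sensor noise, but the two-frame difference shows the principle.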

Milestone 1: Group 3

Samantha Kuhn, Zachary Parker

a) Statement of Work -

Scenario 6: Second fight between our hero and the Snakeman (Act 7). The hero tries to activate a safety valve (?) on the wall (screen), while the Snakeman does its best to prevent that from happening, with visual effects.

In this project, a software component will be developed which will use a Microsoft Kinect to control the obfuscation of a displayed image through gestures of the hands and/or body. This will be applied to the above scenario, in which a safety valve must be hidden. By motioning toward the screen, smoke, etc. can be generated to hide the displayed object.

b) Conceptual Pattern -

Problem: The safety valve must be obscured by using gestures to generate smoke.

Context: The hero is trying to activate a safety valve on the wall while the Snakeman is trying to prevent that from happening.

Solution: By using gestures (by the Snakeman), smoke can be generated interactively, in accordance with the Snakeman's movement. By rendering smoke on top of the safety valve, the device can be obscured in order to keep the hero from finding and reaching it.
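The gesture-to-smoke coupling described above can be sketched as a small particle system: particles are spawned at the tracked hand position, with faster hand motion emitting denser smoke over the valve. The particle fields, emission rate, and drift speed below are assumptions for illustration only.

```python
# Hypothetical gesture-driven smoke: spawn at the hand, drift upward.
import random

def spawn_particles(hand_x, hand_y, hand_speed, rate=20):
    """Emit more particles the faster the hand is moving."""
    n = int(rate * hand_speed)
    return [{"x": hand_x + random.uniform(-0.05, 0.05),
             "y": hand_y,
             "age": 0.0}
            for _ in range(n)]

def step(particles, dt=1 / 30):
    """Drift smoke upward and retire particles older than 2 s."""
    for p in particles:
        p["y"] += 0.3 * dt   # buoyant rise
        p["age"] += dt
    return [p for p in particles if p["age"] < 2.0]
```

Rendering the live particles on top of the valve's screen position would then obscure it in proportion to the Snakeman's movement.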

c) Design Pattern -


Statement of Work: Group 4

Brandon Spires and Lawrence West


We are interested in the two scenes which depict our hero fighting on a rooftop of the burning city. However, our group will focus on developing scenario six, namely the second fight between our hero and the snakemen. In this scene, the snakemen will be on-screen, while the actor plays the hero in the foreground. As the snakemen advance towards the hero, he will repel them by using various attacks. These different attacks will register different on-screen reactions (possibly in the snakemen themselves or in the scenery). The scenario ends once the hero reaches the safety valve.

Statement of Work: Group 5
Kinect gesture control for robots

Authors: Yang Hu, Qizhang Dai

Introduction

We want to develop a system that uses the Kinect as an input device to recognize human body movements, then sends specific commands to a remote device through the SIS Server.

Idea

Our idea is that, unlike others who use something like a gamepad to control the robot, we can use our natural body movements to do it. The architecture will look like this:

Kinect component: responsible for recognizing and classifying human body movements, generating a standard control message, and sending that message to the SIS server.

Device component: connects to the SIS server and waits for messages from the "Kinect component". Different device components will react differently according to different message sequences.

Device

The remote device will be something like a tiny robot car. It can:

Gesture

In order to control the device easily, we are going to design an algorithm that can distinguish between the gestures below:
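A simple way to classify directional gestures is to compare the hand's position to a body reference joint such as the shoulder. In the sketch below, only the "Left" label is taken from the sample message in the Message Format section; the other labels, the shoulder reference, and the 0.3 m threshold are assumptions.

```python
# Hypothetical gesture classifier: label a gesture from the hand's
# offset relative to the shoulder, in metres in the skeleton frame.

def classify(hand, shoulder, threshold=0.3):
    """Return a gesture label, or None when no command gesture is held."""
    dx = hand[0] - shoulder[0]
    dy = hand[1] - shoulder[1]
    if dx < -threshold:
        return "Left"
    if dx > threshold:
        return "Right"
    if dy > threshold:
        return "Forward"   # assumed: hand raised above the shoulder
    return None
```

A production classifier would also debounce over several frames so a gesture is only reported once it has been held briefly.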

Message Format


Sample message for "Left" gesture:

<?xml version="1.0" standalone="yes"?>
<!--
Author : Yang Hu
Email : yah14@pitt.edu
-->
<Msg>
	<Head>
		<MsgID>410</MsgID>
		<Description>Gesture Command</Description>
	</Head>
	<Body>
		<Item>
			<Key>Gesture</Key>
			<Value>Left</Value>
		</Item>
	</Body>
</Msg>
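The gesture message above can be produced with a standard XML library rather than string concatenation, which keeps the output well-formed. In this sketch the MsgID 410 and field names come from the sample message; the builder function itself is an assumption about how the Kinect component might be written.

```python
# Hypothetical builder for the SIS gesture message, using the
# Python standard library for illustration.
import xml.etree.ElementTree as ET

def gesture_message(value, msg_id="410"):
    """Build the <Msg> document carrying one gesture key/value pair."""
    msg = ET.Element("Msg")
    head = ET.SubElement(msg, "Head")
    ET.SubElement(head, "MsgID").text = msg_id
    ET.SubElement(head, "Description").text = "Gesture Command"
    body = ET.SubElement(msg, "Body")
    item = ET.SubElement(body, "Item")
    ET.SubElement(item, "Key").text = "Gesture"
    ET.SubElement(item, "Value").text = value
    return ET.tostring(msg, encoding="unicode")

xml_text = gesture_message("Left")
```

The device component can parse the same structure back and dispatch on the Key/Value pair.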

Statement of Work: Group 6

Andrew Powell and CJ McAllister

We would like our project to use sound to change the state of the machine and perform different actions on-screen. We think this would go well with Scenario 6: the Snakeman, while trying to stop the hero, could have some dialog that sets the state so that the valve can be turned.
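Dialog-driven scene control is naturally modelled as a small state machine: each recognized phrase either advances the state or is ignored. The states, phrase labels, and transitions below are made-up placeholders (a real version would hook the phrases up to a speech recognizer).

```python
# Hypothetical dialog-driven state machine for the valve scene.

TRANSITIONS = {
    ("locked", "snakeman taunt"): "unlocked",
    ("unlocked", "hero reaches valve"): "valve_turning",
}

class SceneState:
    def __init__(self):
        self.state = "locked"

    def on_phrase(self, phrase):
        """Advance the scene state if the phrase is valid here."""
        self.state = TRANSITIONS.get((self.state, phrase), self.state)
        return self.state
```

Because invalid phrases leave the state unchanged, the valve can only be turned after the required dialog has occurred in order.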


Statement of Work: Group 7

Group 7 will do sound playback.  We envision controlling sound output with two hands, where hand depth and height will control such factors as volume and pitch.  Gestures will select between different sound types, similar to an electronic keyboard's sound presets.  Other controls and gestures may be created as the project develops.
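The depth/height-to-volume/pitch mapping described above can be sketched as two clamped linear maps. The working ranges (1–3 m of depth, 0–2 m of hand height, 220–880 Hz of pitch) are assumptions chosen only to illustrate the idea.

```python
# Hypothetical hand-to-sound mapping: nearer hand -> louder,
# higher hand -> higher pitch.

def clamp01(t):
    """Clamp a value into the [0, 1] control range."""
    return max(0.0, min(1.0, t))

def hand_to_sound(depth_m, height_m):
    """Map hand depth and height to (volume, pitch_hz)."""
    volume = clamp01((3.0 - depth_m) / 2.0)              # 3 m..1 m -> 0..1
    pitch_hz = 220.0 + 660.0 * clamp01(height_m / 2.0)   # 220..880 Hz
    return volume, pitch_hz
```

With two tracked hands, one hand could drive volume and the other pitch, leaving free gestures for switching between sound presets.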

Statement of Work: Group 8

The primary objective of the project will be to implement human motions that control Google TV via the Kinect. We shall implement recognition of a set of simple gestures for this control. The gestures will activate and deactivate Kinect control as the user desires, along with controlling television operation. Furthermore, we will use depth perception via the Kinect to determine the interactions, possibly using a reference point/area on the user's body. We plan to start with power and channel-surfing control, possibly expanding to other areas if time allows.
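The activate/deactivate idea plus a body reference point can be sketched as a small controller: control is armed only after a deliberate toggle gesture, and swipes are measured relative to the reference point so they work anywhere in the room. The toggle gesture, command names, and 0.4 m threshold below are all assumptions for illustration.

```python
# Hypothetical Google TV gesture controller: an explicit toggle gesture
# arms/disarms control; swipes relative to a body reference point
# (e.g. the spine x-coordinate) map to channel commands.

class TvController:
    def __init__(self):
        self.active = False

    def on_gesture(self, hand_x, ref_x, hands_raised):
        """Return a TV command string, or None when nothing should happen."""
        if hands_raised:          # assumed toggle: both hands raised
            self.active = not self.active
            return "toggle_control"
        if not self.active:
            return None           # ignore motion while control is off
        dx = hand_x - ref_x
        if dx > 0.4:
            return "channel_up"
        if dx < -0.4:
            return "channel_down"
        return None
```

Gating every command behind the activation state keeps ordinary living-room movement from changing the channel accidentally.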