Project 2

Project 2 has two phases.

Phase 1: Stereo
Phase 2: Segmentation

Phase 1: Stereo

Deadline: Wednesday, March 30 at Noon

Phase 1 Description
sample raw IR image: irData.mat

Note: If you are struggling to pull IR images from your Kinect, you are permitted to work with irData.mat instead. Just remember that we have the correct disparity values, so we have a point of comparison for your work.

Deliverables:

A zipped directory that should contain:

  • Code
  • a README file that includes
    • Feedback on Phase 1
      • Do you understand everything you did?
      • Was it a useful exercise in understanding the Kinect's inner workings?
      • How can we make the project better next year?
  • Visualization of disparity. Be creative. Here are some examples:
    • 2D image
    • point cloud
    • interesting visualizations will get you extra credit
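
A minimal MATLAB sketch of the two suggested visualizations. This assumes the disparity map is stored in a variable named disparity (the actual variable names inside irData.mat are not specified here, so adjust to match your own code):

```matlab
% Hedged sketch: 'disparity' is an assumed variable name for your
% H-by-W disparity map, loaded from irData.mat or computed by your code.
load('irData.mat');

% 2D image of disparity
figure; imagesc(disparity);
colormap(jet); colorbar; axis image;
title('Disparity map');

% Simple point-cloud view: plot (u, v, disparity) for valid pixels
[U, V] = meshgrid(1:size(disparity, 2), 1:size(disparity, 1));
valid = disparity > 0;               % assumes 0 marks invalid pixels
figure;
plot3(U(valid), V(valid), disparity(valid), '.', 'MarkerSize', 1);
xlabel('u'); ylabel('v'); zlabel('disparity');
```

For the extra credit, anything beyond these two baselines (coloring the point cloud by IR intensity, animating the view, etc.) is fair game.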

Note: No writeup :)

The zip file (as well as the directory) should be called:

FirstnameLastname-Project2-Phase1.zip


Email it to Ben at bcohen-a-t-seas with the subject line:
[meam620] Project2-Phase1

Phase 2: Segmentation

Deadline: Wednesday, April 13 at Noon

Phase 2 Description

You will need getNormals.m for this assignment.

Andrew's explanation of the getNormals() function:

getNormals() expects the disparity map output from the Kinect (the other parameters are optional and probably don't need to be adjusted). It returns [normals, visuals, depth].

Normals is a 3D array where normals(:,:,1) contains the X coordinates of the vectors, (:,:,2) the Y, and (:,:,3) the Z. These can be viewed using quiver3(), but it is very slow and it can be hard to tell what is going on.

Visuals is a visual representation of the normals, scaled and shifted so that each dimension varies between 0 and 1. As a result, imagesc(visuals) will display the normal map. This is similar to the format used when creating normal maps for 3D graphics applications. Changes in color represent changes in normal direction.

Finally, depth is a smoothed version of the input that was used in calculating the normals.
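
The description above can be sketched as the following MATLAB usage example. The variable name disparity is an assumption standing in for your own Kinect disparity map; everything else follows the return values described above:

```matlab
% Hedged sketch: 'disparity' is an assumed variable holding your
% Kinect disparity map. getNormals() is provided with the assignment.
[normals, visuals, depth] = getNormals(disparity);

% Color-coded normal map: each channel is scaled to [0, 1]
figure; imagesc(visuals); axis image;
title('Surface normals (visuals)');

% Smoothed depth that getNormals() used internally
figure; imagesc(depth); axis image; colorbar;
title('Smoothed depth');

% Subsampled quiver3 view of the normals (full resolution is very slow)
step = 10;
[U, V] = meshgrid(1:step:size(depth, 2), 1:step:size(depth, 1));
idx = sub2ind(size(depth), V, U);
Nx = normals(:, :, 1); Ny = normals(:, :, 2); Nz = normals(:, :, 3);
figure;
quiver3(U, V, depth(idx), Nx(idx), Ny(idx), Nz(idx));
title('Subsampled surface normals');
```

Subsampling before quiver3() is the easy workaround for the slowness mentioned above; imagesc(visuals) is usually the more readable view.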

Deliverables:

A zipped directory that should contain:

  • all of your code
  • a README file that states
    • Feedback on Phase 2 (same as in Phase 1)
  • a short one-page report
  • results:
    • image of a person segmented from the background
    • image of an object segmented on a tabletop

Extra Credit:

  • an image of a person segmented into linear pieces (segment the body limbs).
    • tip: use the x, y, z coordinates and surface normals, as well as the image norm on the boundary of the body parts.
  • post process an RGBD video, segmenting out the object of interest in each frame.
    • e.g., the video should display the RGB camera feed with the object of interest circled or highlighted in some way.

The zip file (as well as the directory) should be called:

FirstnameLastname-Project2-Phase2.zip


Email it to Ben at bcohen-a-t-seas with the subject line:
[meam620] Project2-Phase2