README

Our data is composed of two files: shoes_faces.zip and scenes.zip. Each data set follows the layout NAME_DATASET/USER_ID_I/IMAGE_J.mat.

To read IMAGE_J.mat properly, we provide the class eye_data.m, with an example below. The eye_data class contains the following fields:

* gx: x coordinates provided by the gaze-tracker device
* gy: y coordinates provided by the gaze-tracker device
* valid: validity flags for the coordinates provided by the gaze-tracker device
* image: full-screen image
* im_name: image name from the original data set
* id_user: user identifier (for future versions)
* att: attribute-based question
* answer: user answer to the question "att": 0 for No and 1 for Yes (initialized to -1)
* fix_map: fixation map, updated after calling create_heatmap
* NUM_FIX_PER_SEC: number of fixations per second (default = 60)
* VAL_SIGMA: sigma value used to create the heatmap (default = 1)

EXAMPLE

load('./shoes_faces_data/2/image_data2.mat')
data.create_heatmap('test.png')
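The NAME_DATASET/USER_ID_I/IMAGE_J.mat layout can also be enumerated outside MATLAB. As a minimal sketch (the function name `list_recordings` is ours, not part of the data set; it only lists files and does not parse the eye_data objects):

```python
import os
import glob
from collections import defaultdict

def list_recordings(dataset_root):
    """Group every IMAGE_J.mat file under NAME_DATASET/USER_ID_I/ by user id."""
    recordings = defaultdict(list)
    # Each immediate subdirectory of the data set root is one user id.
    pattern = os.path.join(dataset_root, "*", "*.mat")
    for path in sorted(glob.glob(pattern)):
        user_id = os.path.basename(os.path.dirname(path))
        recordings[user_id].append(path)
    return dict(recordings)
```

For example, called on an unpacked shoes_faces.zip it would map each user-id directory name to the list of its .mat recordings.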
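To illustrate how a fixation map of the fix_map kind is typically built from gx, gy, and valid with a Gaussian of width VAL_SIGMA, here is a hedged Python sketch; the actual implementation is the one in eye_data.m (create_heatmap), and the function below is only an illustration of the technique:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_heatmap(gx, gy, valid, height, width, sigma=1.0):
    """Accumulate valid gaze samples into a 2-D count map, then blur it
    with a Gaussian of the given sigma (cf. VAL_SIGMA, default 1)."""
    fix_map = np.zeros((height, width))
    for x, y, v in zip(gx, gy, valid):
        # Skip samples the gaze tracker flagged as invalid or off-screen.
        if v and 0 <= int(y) < height and 0 <= int(x) < width:
            fix_map[int(y), int(x)] += 1
    return gaussian_filter(fix_map, sigma=sigma)
```

The Gaussian blur spreads each gaze sample over its neighborhood, so the result can be overlaid on the full-screen image as a heatmap.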