PKU HumanID Dataset

The PKU HumanID dataset was constructed by the National Engineering Laboratory for Video Technology (NELVT) at Peking University, sponsored by the National Basic Research Program of China and the Chinese National Natural Science Foundation.

This dataset is composed of videos of subjects crossing 13 cameras on a campus: 6 high-definition network cameras (Camera HD01, Camera HD02, Camera HD03, Camera HD04, Camera HD05, Camera HD06) and 7 normal network cameras (Camera BWBQ, Camera DCM, Camera WMHD, Camera XDMN, Camera YGLN, Camera YGLQ, Camera YTX). Some samples of the labeled results are shown below:

[Image: pku-humanid-dataset-1]

HD Cameras (HD 01, HD 02, HD 04, HD 06)

[Image: pku-humanid-dataset-2]

Normal Cameras (WMHD, DCM, YTX, YGLN)

The PKU HumanID dataset is now partly made available, for academic purposes only, on a case-by-case basis. The NELVT at Peking University serves as the technical agent for distribution of the dataset and reserves the copyright of all videos in the dataset. Any researcher who requests the PKU HumanID dataset must sign the agreement and thereby agrees to observe the restrictions listed in this document.

LICENSE

  • The videos and the corresponding annotation results for download are part of PKU HumanID.
  • The videos and the corresponding annotation results can only be used for ACADEMIC PURPOSES. NO COMMERCIAL USE is allowed.
  • Copyright © National Engineering Laboratory for Video Technology (NELVT) and Institute of Digital Media, Peking University (PKU-IDM). All rights reserved.

All publications using the PKU HumanID dataset should cite the paper below:

  • Lan Wei, Yonghong Tian, Yaowei Wang, and Tiejun Huang, "Swiss-System Based Cascade Ranking for Gait-based Person Re-identification," in Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI), Austin, Texas, USA, January 2015.

DOWNLOAD

You can download the agreement (PDF) by clicking the DOWNLOAD link.

After filling it out, please send the electronic version to our email: pkuml at pku.edu.cn (Subject: PKU-HumanID-Agreement).

After we confirm your information, we will send the download link and password to you via email. You must comply with the agreement.

Image saliency: From intrinsic to extrinsic context

An implementation of “Wang M, Konrad J, Ishwar P, Jing K, Rowley H (2011) Image saliency: From intrinsic to extrinsic context. CVPR, 2011.”

Re-implemented by Jia Li (jiali@buaa.edu.cn) and Shu Fang (sfang@pku.edu.cn).

Code folder: contains our implementation of (Wang et al. 2011) and the metrics for computing AUC, EOF, and FS. More details can be found in our paper submitted to IJCV (J. Li et al., "Measuring Visual Surprise Jointly from Intrinsic and Extrinsic Contexts for Image Saliency Estimation").

MIT1003 folder: the data used for testing (Wang et al. 2011). The subfolder "image" contains all the images from the MIT1003 dataset, and the subfolder "refImages" contains the 20 most similar images retrieved from a large database of 31.2 million images.

Result folder: three subfolders, IES_intSal, IES_extSal, and IES, containing the saliency maps computed from the intrinsic context, the extrinsic context, and both contexts, respectively.
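For reference, the AUC metric commonly used to score such saliency maps against human fixations treats saliency values at fixated pixels as positives and all other pixels as negatives, then computes the area under the ROC curve. The sketch below is a minimal, illustrative version of that standard metric (via the rank-based Mann-Whitney formulation); it is an assumption for clarity, not the exact code shipped in the Code folder, and the function name is hypothetical.

```python
import numpy as np

def saliency_auc(sal_map, fix_map):
    """Standard saliency AUC: fixated pixels are positives,
    all remaining pixels are negatives (illustrative sketch;
    ties in saliency values are not mid-ranked here)."""
    sal = sal_map.ravel().astype(float)
    pos = sal[fix_map.ravel() > 0]   # saliency at fixated pixels
    neg = sal[fix_map.ravel() <= 0]  # saliency elsewhere
    # Ranks of all values (1-based), positives listed first
    ranks = np.concatenate([pos, neg]).argsort().argsort() + 1
    pos_ranks = ranks[:len(pos)]
    # Mann-Whitney U statistic normalized to [0, 1]
    u = pos_ranks.sum() - len(pos) * (len(pos) + 1) / 2
    return u / (len(pos) * len(neg))
```

A perfect saliency map, whose highest values coincide exactly with the fixated pixels, scores 1.0; chance-level maps score about 0.5.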


Code:
/mlg/download/code/wang11.zip
/mlg/download/code/wang11-codeResult.zip