== Cameras

Video cameras are particularly useful for collecting data in smart cities. They facilitate detection and tracking of vehicles, pedestrians, and other objects by feeding videos into AI-enabled cloud compute servers. COSMOS nodes in New York City are equipped with cameras to facilitate research on a variety of smart city applications. The COSMOS site at the intersection of 120th Street and Amsterdam Avenue contains four permanently installed cameras.

Installation and management of cameras at the COSMOS pilot site, and processing of the collected data, have been supported by a number of students at Columbia University who worked in Prof. Kostic's lab: Zhengye Yang, Alex Angus, Emily Bailey, Zhuoxu Duan, Jeswanth Yadagani, Vedaant Dave, Dwiref Oza, Zihao Xiong, Hongzhe Ye, Richard Samoilenko, Mingfei Sun, Shiyun Yang, and Tianyao Hua, as well as by Mahshid Ghasemi Dehkordi from Prof. Zussman's group.

=== Installed Camera Information

1. Mudd 1st floor [https://wiki.cosmos-lab.org/wiki/Architecture/Deployment/Measurements/md1 (part of the medium 1 node)], Mech E lab, towards 120th St. [https://www.google.com/maps/@40.8093023,-73.9601931,19.29z/data=!4m3!11m2!2sxHa1QduyCtt1iXpkY2bNQH8bA-EvEA!3e3 (map)]
2. Mudd 2nd-floor balcony [https://wiki.cosmos-lab.org/wiki/Architecture/Deployment/Measurements/md2 (part of the medium 2 node)], towards Amsterdam Ave. [https://www.google.com/maps/@40.8093023,-73.9601931,19.29z/data=!4m3!11m2!2sxHa1QduyCtt1iXpkY2bNQH8bA-EvEA!3e3 (map)]
3. Mudd 12th floor, Botwinick lab mudd1224, towards 120th St. [https://www.google.com/maps/@40.8093023,-73.9601931,19.29z/data=!4m3!11m2!2sxHa1QduyCtt1iXpkY2bNQH8bA-EvEA!3e3 (map)]
4. Mudd 12th-floor balcony, towards Amsterdam Ave. [https://www.google.com/maps/@40.8093023,-73.9601931,19.29z/data=!4m3!11m2!2sxHa1QduyCtt1iXpkY2bNQH8bA-EvEA!3e3 (map)]

=== COSMOS Cameras Dataset

* 1st-floor videos (anonymized): [https://drive.google.com/drive/u/0/folders/1QXrfsLXEKfRfQyc6qzvtg37A0Z1i0io5]
* 2nd-floor videos (anonymized): [https://drive.google.com/drive/u/0/folders/1LR7H4theRazz2_uYHvCFGVVewQmKbWSF]
* 12th-floor videos (120th St.): [https://drive.google.com/drive/u/0/folders/1SEsocAAIReepdjE4XyVyT4kiqrunv7BU]
* 12th-floor videos (Amsterdam Ave.): [https://drive.google.com/drive/u/0/folders/1qC-62s8ohTGg-odyzo7BNw2GDv1OIeoK]

The 12th-floor cameras capture cars and pedestrians at a distance at which neither faces nor license plates can be recognized, so the 12th-floor videos require no post-processing for privacy protection.

=== Anonymization Workflow for 1st- and 2nd-Floor Cameras

The anonymization workflow is described in the following paper and summarized briefly below. We would appreciate it if you cite this paper when publishing results obtained using the datasets above.

A. Angus, Z. Duan, G. Zussman, and Z. Kostic, “Real-Time Video Anonymization in Smart City Intersections,” in Proc. IEEE MASS’22 (invited), Oct. 2022. [https://wimnet.ee.columbia.edu/wp-content/uploads/2022/08/RealTimeVideoAnonymization_MASS2022.pdf (download)] [https://www.cosmos-lab.org/wp-content/uploads/2022/10/video_blurring_Mass2022.pdf (presentation)]

The 1st- and 2nd-floor videos linked above are the outputs of the COSMOS YOLOv4 blurring pipeline. Faces and license plates are anonymized by applying Gaussian blur to the areas defined by the bounding box detection coordinates. A high-level overview of the blurring (see the sketch after this list) is as follows:

1. Frames are read individually from a video file.
2. Each frame is:
 2.1. Resized to the input size of the specific YOLOv4 model (960x960 or 1440x1440).
 2.2. Processed by the YOLOv4 model, which outputs bounding box predictions for pedestrians, pedestrian faces, vehicles, vehicle license plates (and other objects).
 2.3. Blurred within the bounding boxes corresponding to faces and license plates.
3. Blurred frames are written to the output video file.
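The following is a minimal Python/OpenCV sketch of steps 1–3, intended only to make the flow concrete; it is not the COSMOS implementation (which runs YOLOv4 under TensorRT, as noted below). The detect() stub, the class names in BLUR_CLASSES, and the blur kernel size are illustrative assumptions.

{{{#!python
import cv2

# Placeholder detector: must return a list of (class_name, x1, y1, x2, y2)
# in original-frame pixel coordinates. In the actual pipeline this role is
# played by a YOLOv4 model (960x960 or 1440x1440 input) running under
# TensorRT; any resizing/letterboxing is assumed to happen inside here.
def detect(frame):
    raise NotImplementedError("plug in a detector here")

BLUR_CLASSES = {"face", "license_plate"}  # assumed class names

def blur_regions(frame, detections, kernel=(51, 51)):
    """Gaussian-blur each detected face/license-plate region in place."""
    h, w = frame.shape[:2]
    for cls, x1, y1, x2, y2 in detections:
        if cls not in BLUR_CLASSES:
            continue
        # Clamp the box to the frame and skip degenerate boxes.
        x1, y1 = max(0, int(x1)), max(0, int(y1))
        x2, y2 = min(w, int(x2)), min(h, int(y2))
        if x2 <= x1 or y2 <= y1:
            continue
        frame[y1:y2, x1:x2] = cv2.GaussianBlur(frame[y1:y2, x1:x2], kernel, 0)
    return frame

def anonymize_video(src_path, dst_path):
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)),
            int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    while True:
        ok, frame = cap.read()                       # step 1: read one frame
        if not ok:
            break
        detections = detect(frame)                   # step 2: detect objects
        out.write(blur_regions(frame, detections))   # steps 2.3 and 3
    cap.release()
    out.release()
}}}

A larger Gaussian kernel blurs more aggressively; the real pipeline likewise derives the blurred area directly from the detector's bounding boxes, similar in spirit to blur_regions() above.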
The YOLOv4 blurring model was trained in Darknet on the Mudd 1st-floor video dataset annotated in Summer 2021. The deep learning models for detection and tracking have been converted from Darknet to NVIDIA TensorRT for integration into the current implementation of the blurring pipeline.

=== Additional Papers

The following papers describe the COSMOS project vision and key technologies relevant to the use of video cameras.

M. Ghasemi, S. Kleisarchaki, T. Calmant, J. Lu, S. Ojha, Z. Kostic, L. Gurgen, J. Ghaderi, and G. Zussman, “Real-time Multi-Camera Analytics for Traffic Information Extraction and Visualization,” in Proc. IEEE PerCom’23. [https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10150224 (download)]

M. Ghasemi, Z. Yang, M. Sun, H. Ye, Z. Xiong, J. Ghaderi, Z. Kostic, and G. Zussman, “Video-based Social Distancing: Evaluation in the COSMOS Testbed,” IEEE IoT Journal, 2023. [https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10220207 (download)]

D. Raychaudhuri, I. Seskar, G. Zussman, T. Korakis, D. Kilper, T. Chen, J. Kolodziejski, M. Sherman, Z. Kostic, X. Gu, H. Krishnaswamy, S. Maheshwari, P. Skrimponis, and C. Gutterman, “Challenge: COSMOS: A city-scale programmable testbed for experimentation with advanced wireless,” in Proc. ACM MobiCom’20, 2020. [https://wimnet.ee.columbia.edu/wp-content/uploads/2020/02/MobiCom2020_COSMOS.pdf (download)] [https://doi.org/10.1145/3372224.3380891 (DOI)]

S. Yang, E. Bailey, Z. Yang, J. Ostrometzky, G. Zussman, I. Seskar, and Z. Kostic, “COSMOS Smart Intersection: Edge Compute and Communications for Bird’s Eye Object Tracking,” in Proc. IEEE PerCom Workshops (Smart Edge 2020: 4th International Workshop on Smart Edge Computing and Networking), Mar. 2020. [https://wimnet.ee.columbia.edu/wp-content/uploads/2020/02/Smart_Intersection_COSMOS_SmartEdge2020.pdf (download)]

Additional publications are listed at [https://www.cosmos-lab.org/experimentation/smart-city-intersections/].