Video cameras are particularly useful for collecting data in smart cities. They facilitate the detection and tracking of vehicles, pedestrians, and other objects by feeding video into AI-enabled cloud compute servers. COSMOS nodes in New York City are equipped with cameras to support research on a variety of smart city applications. The COSMOS pilot site contains four permanently installed cameras that record scenes at the intersection of 120th Street and Amsterdam Avenue.
Installation and management of the cameras at the COSMOS pilot site, and processing of the collected data, have been supported by a number of students at Columbia University who worked in the lab of Professor Kostic. Contributors: Zhengye Yang, Alex Angus, Mahshid Ghasemi Dehkordi, Emily Bailey, Zhuoxu Duan, Jeswanth Yadagani, Vedaant Dave, Dwiref Oza, Zihao Xiong, Hongzhe Ye, Richard Samoilenko, Mingfei Sun, Shiyun Yang, Tianyao Hua.
Installed Camera Information
- Mudd 12th floor, Botwinick lab mudd1224, towards 120th St. (map)
- Mudd 12th-floor balcony, towards Amsterdam Ave. (map)
COSMOS Cameras Dataset
- 1st-floor videos (anonymized): https://drive.google.com/drive/u/0/folders/1QXrfsLXEKfRfQyc6qzvtg37A0Z1i0io5
- 2nd-floor videos (anonymized): https://drive.google.com/drive/u/0/folders/1LR7H4theRazz2_uYHvCFGVVewQmKbWSF
- 12th-floor videos (120th St.): https://drive.google.com/drive/u/0/folders/1SEsocAAIReepdjE4XyVyT4kiqrunv7BU
- 12th-floor videos (Amsterdam Ave.): https://drive.google.com/drive/u/0/folders/1qC-62s8ohTGg-odyzo7BNw2GDv1OIeoK
Anonymization Workflow for 1st- and 2nd-Floor Cameras
The 12th-floor cameras capture cars and pedestrians from a distance at which neither faces nor license plates can be recognized; therefore, no post-processing is needed for privacy protection.
Videos from the 1st and 2nd floors, saved in this directory, are the outputs of the COSMOS YOLOv4 blurring pipeline.
Faces and license plates are anonymized by applying Gaussian blur within the areas defined by bounding box detection coordinates. A high-level overview of the blurring is as follows:
1. Frames are read individually from a video file.
2. Each frame is:
   2.1 Resized to the input size of the specific YOLOv4 model (960x960 or 1440x1440).
   2.2 Processed by the YOLOv4 model, which outputs bounding box predictions for pedestrians, pedestrian faces, vehicles, vehicle license plates, and other objects.
   2.3 Blurred within the bounding boxes corresponding to license plates and faces.
3. Blurred frames are written to the output video file.
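The per-box blurring step above can be sketched as follows. This is a minimal illustration rather than the COSMOS pipeline code: it assumes bounding boxes arrive as pixel coordinates (x1, y1, x2, y2) and uses a NumPy-only separable Gaussian blur; an actual deployment would typically use an optimized routine such as OpenCV's `cv2.GaussianBlur`.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=2.0):
    """Normalized 1-D Gaussian kernel for separable filtering."""
    ax = np.arange(size) - size // 2
    k = np.exp(-ax**2 / (2 * sigma**2))
    return k / k.sum()

def blur_regions(frame, boxes, size=5, sigma=2.0):
    """Return a copy of `frame` with each (x1, y1, x2, y2) box Gaussian-blurred.

    Works on grayscale (H, W) or color (H, W, C) arrays. The kernel `size`
    must not exceed the smaller box dimension, or np.convolve(mode="same")
    would return a longer array than the region it replaces.
    """
    k = gaussian_kernel(size, sigma)
    out = frame.astype(float).copy()
    for x1, y1, x2, y2 in boxes:
        roi = out[y1:y2, x1:x2]
        # Separable convolution: blur rows, then columns (zero-padded edges).
        roi = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, roi)
        roi = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, roi)
        out[y1:y2, x1:x2] = roi
    return out.astype(frame.dtype)
```

Only pixels inside the detected boxes are modified, so the rest of the frame passes through the pipeline untouched.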
The YOLOv4 blurring model was trained in Darknet on the Mudd 1st-floor video dataset annotated in Summer 2021. The deep learning models for detection and tracking have been converted from Darknet to NVIDIA TensorRT for integration into the current implementation of the blurring pipeline.
Reference Papers and DOI
The following papers describe the COSMOS project vision and key technologies relevant to the use of video cameras. We would appreciate it if you cite these papers when publishing results obtained using the datasets above.
- D. Raychaudhuri, I. Seskar, G. Zussman, T. Korakis, D. Kilper, T. Chen, J. Kolodziejski, M. Sherman, Z. Kostic, X. Gu, H. Krishnaswamy, S. Maheshwari, P. Skrimponis, and C. Gutterman, “Challenge: COSMOS: A city-scale programmable testbed for experimentation with advanced wireless,” in Proc. ACM MOBICOM’20, 2020. (Download) https://doi.org/10.1145/3372224.3380891
- S. Yang, E. Bailey, Z. Yang, J. Ostrometzky, G. Zussman, I. Seskar, and Z. Kostic, “COSMOS Smart Intersection: Edge Compute and Communications for Bird’s Eye Object Tracking,” in Proc. IEEE PerCom Workshops (SmartEdge’20), 4th International Workshop on Smart Edge Computing and Networking, Mar. 2020. (Download) https://wimnet.ee.columbia.edu/wp-content/uploads/2020/02/Smart_Intersection_COSMOS_SmartEdge2020.pdf
Additional publications are listed at https://www.cosmos-lab.org/experimentation/smart-city-intersections/.