== Cameras

Video cameras are particularly useful for collecting data in smart cities. They facilitate detection and tracking of vehicles, pedestrians, and other objects by feeding videos into AI-enabled cloud compute servers. COSMOS nodes in New York City are equipped with cameras to facilitate research on a variety of smart city applications. The COSMOS pilot site contains four permanently installed cameras which record scenes at the intersection of 120th Street and Amsterdam Avenue.

Installation and management of the cameras at the COSMOS pilot site, and processing of the collected data, have been supported by a number of students at Columbia University.

'''Contributors:''' Zhengye Yang, Alex Angus, Mahshid Ghasemi Dehkordi, Emily Bailey, Zhuoxu Duan, Jeswanth Yadagani, Vedaant Dave, Dwiref Oza, Zihao Xiong, Hongzhe Ye, Richard Samoilenko, Mingfei Sun, Shiyun Yang, Tianyao Hua.

=== Installed Camera Information

   1. Mudd 1st floor [https://wiki.cosmos-lab.org/wiki/Architecture/Deployment/Measurements/md1 (part of the medium 1 node)], Mech E lab, towards 120th St. [https://www.google.com/maps/@40.8093023,-73.9601931,19.29z/data=!4m3!11m2!2sxHa1QduyCtt1iXpkY2bNQH8bA-EvEA!3e3 (map)]

   3. Mudd 12th floor, Botwinick lab (Mudd 1224), towards 120th St. [https://www.google.com/maps/@40.8093023,-73.9601931,19.29z/data=!4m3!11m2!2sxHa1QduyCtt1iXpkY2bNQH8bA-EvEA!3e3 (map)]

   4. Mudd 12th-floor balcony, towards Amsterdam Ave. [https://www.google.com/maps/@40.8093023,-73.9601931,19.29z/data=!4m3!11m2!2sxHa1QduyCtt1iXpkY2bNQH8bA-EvEA!3e3 (map)]

=== Anonymization Workflow for 1st and 2nd-Floor Cameras

The 12th-floor cameras capture images of cars and pedestrians such that neither faces nor license plates can be recognized; therefore, no post-processing is needed for privacy protection.

Videos from the 1st and 2nd-floor cameras, saved in this directory, are the outputs of the COSMOS YOLOv4 blurring pipeline.

Faces and license plates are anonymized with Gaussian-blurred areas defined by the bounding-box detection coordinates. A high-level overview of the blurring is as follows (a minimal code sketch is given after the list):

1. Frames are read individually from a video file.

2. Each frame is:

    2.1 Resized to the input size of the specific YOLOv4 model (960x960 or 1440x1440).

    2.2 Processed by the YOLOv4 model, which outputs bounding-box predictions for pedestrians, pedestrian faces, vehicles, vehicle license plates, and other objects.

    2.3 Blurred within the bounding boxes corresponding to faces and license plates.

3. Blurred frames are written to the output video file.
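
To make the steps concrete, here is a minimal sketch of the per-frame loop, assuming OpenCV for video I/O and Gaussian blurring. The `detect()` stub, class names, kernel size, and file names are illustrative assumptions, not the actual COSMOS pipeline code, which runs a PyTorch YOLOv4 model in place of the stub.

{{{#!python
import cv2

BLUR_CLASSES = {"face", "license_plate"}  # assumed detector class names


def detect(frame):
    """Stand-in for the YOLOv4 forward pass: resize to the model input
    (e.g., 960x960), run the network, and map the predicted boxes back to
    original-frame coordinates. Returns (class_name, x1, y1, x2, y2)."""
    return []  # placeholder; the real pipeline returns model detections


def blur_regions(frame, detections, kernel=(51, 51)):
    """Gaussian-blur each face / license-plate bounding box in place."""
    h, w = frame.shape[:2]
    for cls, x1, y1, x2, y2 in detections:
        if cls not in BLUR_CLASSES:
            continue
        # Clamp the box to the frame, then blur only that region.
        x1, y1 = max(0, int(x1)), max(0, int(y1))
        x2, y2 = min(w, int(x2)), min(h, int(y2))
        if x2 > x1 and y2 > y1:
            frame[y1:y2, x1:x2] = cv2.GaussianBlur(frame[y1:y2, x1:x2],
                                                   kernel, 0)
    return frame


reader = cv2.VideoCapture("input.mp4")
fps = reader.get(cv2.CAP_PROP_FPS)
size = (int(reader.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(reader.get(cv2.CAP_PROP_FRAME_HEIGHT)))
writer = cv2.VideoWriter("blurred.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                         fps, size)

while True:
    ok, frame = reader.read()          # step 1: read frames one at a time
    if not ok:
        break
    detections = detect(frame)         # steps 2.1-2.2: resize + YOLOv4
    writer.write(blur_regions(frame, detections))  # steps 2.3 and 3

reader.release()
writer.release()
}}}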

The YOLOv4 blurring model is trained in Darknet on the Mudd 1st-floor video dataset annotated in Summer 2021. Deep learning models for detection and tracking have been converted from Darknet to PyTorch for integration into the current implementation of the blurring pipeline.
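
For a sense of what the Darknet-to-PyTorch conversion involves, the sketch below transfers the weights of a single convolutional block (the batch-normalized variant) from a Darknet `.weights` file into equivalent PyTorch modules. The layer sizes and file name are illustrative assumptions; a full converter walks the entire YOLOv4 `.cfg` and repeats this transfer for every layer, so this is only a sketch of the idea.

{{{#!python
import numpy as np
import torch
import torch.nn as nn


def load_conv_bn(weights, offset, conv, bn):
    """Copy Darknet-ordered parameters into PyTorch modules. Darknet stores
    a conv+BN block as: BN bias, BN scale, BN running mean, BN running
    variance, then the convolution kernel."""
    for param in (bn.bias, bn.weight, bn.running_mean, bn.running_var,
                  conv.weight):
        n = param.numel()
        chunk = torch.from_numpy(weights[offset:offset + n].copy())
        param.data.copy_(chunk.view_as(param))
        offset += n
    return offset


# First YOLOv4 layer: 3 input channels, 32 filters, 3x3 kernel, no conv bias.
conv = nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1, bias=False)
bn = nn.BatchNorm2d(32)

with open("yolov4.weights", "rb") as f:       # illustrative file name
    np.fromfile(f, dtype=np.int32, count=5)   # skip the Darknet header
    weights = np.fromfile(f, dtype=np.float32)

offset = load_conv_bn(weights, 0, conv, bn)
print(f"consumed {offset} floats for the first conv+BN block")
}}}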

=== Reference Papers and DOI
The following papers describe the project vision and key technologies, as well as the development, deployment, and outreach efforts. We would appreciate it if you cite these papers when publishing results obtained using the COSMOS testbed.

D. Raychaudhuri, I. Seskar, G. Zussman, T. Korakis, D. Kilper, T. Chen, J. Kolodziejski, M. Sherman, Z. Kostic, X. Gu, H. Krishnaswamy, S. Maheshwari, P. Skrimponis, and C. Gutterman, “Challenge: COSMOS: A city-scale programmable testbed for experimentation with advanced wireless,” in Proc. ACM MOBICOM’20, 2020. [https://wimnet.ee.columbia.edu/wp-content/uploads/2020/02/MobiCom2020_COSMOS.pdf (Download)] https://doi.org/10.1145/3372224.3380891

P. Skrimponis, N. Makris, S. B. Rajguru, K. Cheng, J. Ostrometzky, E. Ford, Z. Kostic, G. Zussman, and T. Korakis, “COSMOS educational toolkit: Using experimental wireless networking to enhance middle/high school STEM education,” SIGCOMM Comput. Commun. Rev., vol. 50, pp. 58–65, Oct. 2020.

S. Yang, E. Bailey, Z. Yang, J. Ostrometzky, G. Zussman, I. Seskar, and Z. Kostic, “COSMOS smart intersection: Edge compute and communications for bird’s eye object tracking,” in Proc. SmartEdge, 2020.

Z. Duan, Z. Yang, R. Samoilenko, D. S. Oza, A. Jagadeesan, M. Sun, H. Ye, Z. Xiong, G. Zussman, and Z. Kostic, “Smart city traffic intersection: Impact of video quality and scene complexity on precision and inference,” in Proc. IEEE Smart City’21, 2021.

Z. Yang, M. Sun, H. Ye, Z. Xiong, G. Zussman, and Z. Kostic, “Bird’s eye view social distancing analysis system,” arXiv preprint arXiv:2112.07159, 2021.

A. Angus et al., “Real-Time Video Anonymization in Smart City Intersections,” in preparation.

Z. Kostic, A. Angus, Z. Yang, Z. Duan, I. Seskar, G. Zussman, and D. Raychaudhuri, “Intelligence Nodes for Future Metropolises,” in preparation.