== Cameras

Video cameras are particularly useful for collecting data in smart cities. They facilitate detection and tracking of vehicles, pedestrians, and other objects by feeding videos into AI-enabled cloud compute servers. COSMOS nodes in New York City are equipped with cameras to facilitate research on a variety of smart city applications. The COSMOS pilot site contains four permanently installed cameras that record scenes at the intersection of 120th Street and Amsterdam Avenue.

Installation and management of the cameras at the COSMOS pilot site, and processing of the collected data, have been supported by a number of students at Columbia University.

'''''Contributors''': Zhengye Yang, Alex Angus, Mahshid Ghasemi Dehkordi, Emily Bailey, Zhuoxu Duan, Jeswanth Yadagani, Vedaant Dave, Dwiref Oza, Zihao Xiong, Hongzhe Ye, Richard Samoilenko, Mingfei Sun, Shiyun Yang, Tianyao Hua.''

=== Installed Camera Information

1. Mudd 1st floor [https://wiki.cosmos-lab.org/wiki/Architecture/Deployment/Measurements/md1 (part of the medium 1 node)], Mech E lab, towards 120th St. [https://www.google.com/maps/@40.8093023,-73.9601931,19.29z/data=!4m3!11m2!2sxHa1QduyCtt1iXpkY2bNQH8bA-EvEA!3e3 (map)]

…

3. Mudd 12th floor, Botwinick lab mudd1224, towards 120th St. [https://www.google.com/maps/@40.8093023,-73.9601931,19.29z/data=!4m3!11m2!2sxHa1QduyCtt1iXpkY2bNQH8bA-EvEA!3e3 (map)]

4. Mudd 12th-floor balcony, towards Amsterdam Ave. [https://www.google.com/maps/@40.8093023,-73.9601931,19.29z/data=!4m3!11m2!2sxHa1QduyCtt1iXpkY2bNQH8bA-EvEA!3e3 (map)]

…

=== Anonymization Workflow for 1st and 2nd-floor Cameras

The 12th-floor cameras capture cars and pedestrians such that neither faces nor license plates can be recognized; their videos therefore need no post-processing for privacy protection.

Videos from the 1st- and 2nd-floor cameras, saved in this directory, are the outputs of the COSMOS YOLOv4 blurring pipeline.

Faces and license plates are anonymized by Gaussian-blurring the areas defined by the detected bounding box coordinates. A high-level overview of the blurring is as follows (a code sketch is given after the list):

1. Frames are read individually from a video file.

2. Each frame is:

2.1 Resized to the input size of the specific YOLOv4 model (960x960 or 1440x1440)

2.2 Processed by the YOLOv4 model, which outputs bounding box predictions for pedestrians, pedestrian faces, vehicles, vehicle license plates, and other objects

2.3 Blurred within the bounding boxes predicted for license plates and faces

3. Blurred frames are written to the output video file.

The YOLOv4 blurring model is trained in Darknet on the Mudd 1st floor video dataset annotated in Summer 2021. The deep learning models for detection and tracking have been converted from Darknet to PyTorch for integration into the current implementation of the blurring pipeline.
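The loop below is a minimal sketch of this workflow, not the actual COSMOS implementation. The video I/O, resizing, and Gaussian blur use standard OpenCV calls; the `model` object, its `detect()` helper, and the class-label names are hypothetical stand-ins for the converted PyTorch YOLOv4 model and its post-processing.

{{{#!python
import cv2

INPUT_SIZE = 960                           # or 1440, depending on the model variant
BLUR_CLASSES = {"face", "license_plate"}   # hypothetical label names

def blur_video(in_path, out_path, model):
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    out = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

    while True:
        ok, frame = cap.read()                                # 1. read frames one by one
        if not ok:
            break
        resized = cv2.resize(frame, (INPUT_SIZE, INPUT_SIZE))  # 2.1 resize to model input
        # 2.2 run YOLOv4 inference; detect() is assumed to return
        # (label, x1, y1, x2, y2) tuples in resized-image coordinates
        detections = model.detect(resized)
        sx, sy = w / INPUT_SIZE, h / INPUT_SIZE
        for label, x1, y1, x2, y2 in detections:
            if label not in BLUR_CLASSES:
                continue
            # map the box back to the original resolution
            x1, x2 = int(x1 * sx), int(x2 * sx)
            y1, y2 = int(y1 * sy), int(y2 * sy)
            roi = frame[y1:y2, x1:x2]
            if roi.size:                                      # 2.3 blur inside the box only
                frame[y1:y2, x1:x2] = cv2.GaussianBlur(roi, (51, 51), 0)
        out.write(frame)                                      # 3. write the blurred frame

    cap.release()
    out.release()
}}}

Running detection at the model's native input size while blurring at the original resolution keeps the output video at full quality; only the pixels inside face and license plate boxes are modified.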
=== Reference Papers and DOI

The following papers describe the project vision and key technologies, as well as the development, deployment, and outreach efforts. We would appreciate it if you cite these papers when publishing results obtained using the COSMOS testbed.

D. Raychaudhuri, I. Seskar, G. Zussman, T. Korakis, D. Kilper, T. Chen, J. Kolodziejski, M. Sherman, Z. Kostic, X. Gu, H. Krishnaswamy, S. Maheshwari, P. Skrimponis, and C. Gutterman, “Challenge: COSMOS: A city-scale programmable testbed for experimentation with advanced wireless,” in Proc. ACM MobiCom’20, 2020. [https://wimnet.ee.columbia.edu/wp-content/uploads/2020/02/MobiCom2020_COSMOS.pdf (Download)] https://doi.org/10.1145/3372224.3380891

P. Skrimponis, N. Makris, S. B. Rajguru, K. Cheng, J. Ostrometzky, E. Ford, Z. Kostic, G. Zussman, and T. Korakis, “COSMOS educational toolkit: Using experimental wireless networking to enhance middle/high school STEM education,” SIGCOMM Comput. Commun. Rev., vol. 50, pp. 58–65, Oct. 2020.

S. Yang, E. Bailey, Z. Yang, J. Ostrometzky, G. Zussman, I. Seskar, and Z. Kostic, “COSMOS smart intersection: Edge compute and communications for bird’s eye object tracking,” in Proc. SmartEdge, 2020.

Z. Duan, Z. Yang, R. Samoilenko, D. S. Oza, A. Jagadeesan, M. Sun, H. Ye, Z. Xiong, G. Zussman, and Z. Kostic, “Smart city traffic intersection: Impact of video quality and scene complexity on precision and inference,” in Proc. IEEE Smart City’21, 2021.

Z. Yang, M. Sun, H. Ye, Z. Xiong, G. Zussman, and Z. Kostic, “Bird’s eye view social distancing analysis system,” arXiv preprint arXiv:2112.07159, 2021.

A. Angus et al., “Real-Time Video Anonymization in Smart City Intersections,” in preparation.

Z. Kostic, A. Angus, Z. Yang, Z. Duan, I. Seskar, G. Zussman, and D. Raychaudhuri, “Intelligence Nodes for Future Metropolises,” in preparation.