Changes between Version 30 and Version 31 of Tutorials/Edge Computing/Alveo Getting Started
Timestamp: May 25, 2020, 11:35:56 AM
Tutorials/Edge Computing/Alveo Getting Started
=== Description ===
Compute servers in COSMOS, and cloud computing nodes in ORBIT Sandbox 9, are equipped with Alveo U200 accelerator cards (with a Virtex UltraScale+ XCU200-2FSGD2104E FPGA). These cards can be used to accelerate compute-intensive applications such as machine learning and video processing. They are connected to the Intel Xeon host CPU over a PCI Express® (PCIe) Gen3 x16 bus.

This tutorial demonstrates how to run an accelerated FPGA kernel on the above-mentioned platform. The Vitis unified software platform 2019.2 is used for developing and deploying the application.

=== Prerequisites ===
…

=== Resources required ===
1 COSMOS compute server or 1 node in ORBIT SB9. This tutorial uses a compute server in the COSMOS [wiki:/Architecture/Domains/cosmos_bed bed] domain.

=== Tutorial Setup ===

Follow the steps below to gain access to the [wiki:/Architecture/Domains/cosmos_bed COSMOS bed console] and set up the node with the appropriate image.
 1. If you don't have one already, sign up for a [https://www.cosmos-lab.org/portal-2/ COSMOS account].
 1. [wiki:/GettingStarted#MakeaReservation Create a resource reservation] on COSMOS bed.
 1. [Documentation/Short/Login Log in] to the COSMOS bed console (console.bed.cosmos-lab.org) with an SSH session.
 1. Make sure the server/node used in the experiment is turned off:
{{{#!shell
…
}}}
…
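Once the node is up and you are logged in, the verification steps below rely on the Xilinx Runtime (XRT) card-management tools (xbmgmt, and later xbutil). The rest of this tutorial invokes them by their full path; as a convenience, the XRT environment can instead be loaded into the current shell. This is a minimal sketch, assuming the default XRT install location under /opt/xilinx/xrt (the same prefix used by the xbmgmt commands below):
{{{#!shell
# Load the XRT environment (XILINX_XRT, PATH, LD_LIBRARY_PATH) into
# the current shell.  The path assumes the default XRT install location.
source /opt/xilinx/xrt/setup.sh

# The card-management utilities should now resolve without full paths.
which xbmgmt xbutil
}}}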
 * The reserved resource (the COSMOS compute server, or the node in ORBIT SB9) has an Alveo U200 card attached over the PCIe bus. Check that the card is successfully installed and that its firmware matches the shell installed on the host by running the lspci command as shown below. If the card is successfully installed, two physical functions should be found per card: one for management and one for user.
{{{#!shell-session
root@srv1-lg1:~# sudo lspci -vd 10ee:
d8:00.0 Processing accelerators: Xilinx Corporation Device d000
        Subsystem: Xilinx Corporation Device 000e
        Flags: bus master, fast devsel, latency 0, IRQ 267, NUMA node 1
        Memory at f0000000 (32-bit, non-prefetchable) [size=32M]
        Memory at f2000000 (32-bit, non-prefetchable) [size=64K]
        …
}}}
 * If the above output shows only the management function, the firmware on the FPGA needs to be updated as follows. xilinx_u200_xdma_201830_2 is the deployment shell installed on the alveo-runtime.ndz image. This takes a few minutes to complete.
{{{#!shell-session
root@srv1-lg1:~# sudo /opt/xilinx/xrt/bin/xbmgmt flash --update --shell xilinx_u200_xdma_201830_2
Status: shell needs updating
Current shell: xilinx_u200_GOLDEN_3
Shell to be flashed: xilinx_u200_xdma_201830_2
Are you sure you wish to proceed? [y/n]: y
…
}}}
 * Run the lspci command again to verify the updated firmware. The output now shows both the management and user functions.
{{{#!shell-session
root@srv1-lg1:~# lspci -vd 10ee:
d8:00.0 Processing accelerators: Xilinx Corporation Device 5000
        Subsystem: Xilinx Corporation Device 000e
        …
d8:00.1 Processing accelerators: Xilinx Corporation Device 5001
        Subsystem: Xilinx Corporation Device 000e
        Flags: bus master, fast devsel, latency 0, IRQ 289, NUMA node 1
        Memory at 387ff0000000 (64-bit, prefetchable) [size=32M]
        Memory at 387ff4020000 (64-bit, prefetchable) [size=64K]
        …
        Kernel driver in use: xocl
        Kernel modules: xocl

}}}
 * Use the xbmgmt flash --scan command to view and validate the card's current firmware version, and to display the installed card details.
{{{#!shell-session
root@srv1-lg1:~# /opt/xilinx/xrt/bin/xbmgmt flash --scan
Card [0000:d8:00.0]
    Card type:                   u200
    …
    Flashable partitions installed in system:
        xilinx_u200_xdma_201830_2,[ID=0x000000005d1211e8],[SC=4.2.0]

}}}
Note that the shell version running on the FPGA (Flashable partition running on FPGA) matches the one installed on the host (Flashable partitions installed in system).
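As an optional final check beyond matching shell versions, XRT ships a built-in validation utility that exercises the card end to end (PCIe link, DMA transfers, and similar tests; the exact test set varies with the XRT version). A minimal sketch, assuming the same XRT install path used above:
{{{#!shell
# Run XRT's built-in validation tests against the freshly flashed card.
# Test names and output vary by XRT version.
/opt/xilinx/xrt/bin/xbutil validate
}}}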