Changes between Version 30 and Version 31 of Tutorials/Edge Computing/Alveo Getting Started


Timestamp: May 25, 2020, 11:35:56 AM
Author: prasanthi

Legend: unchanged context lines are shown as-is; lines removed in v31 are prefixed with "-", lines added in v31 with "+", and "…" marks elided unchanged lines.
  • Tutorials/Edge Computing/Alveo Getting Started

=== Description ===
-Compute servers in COSMOS, and cloud computing nodes in ORBIT Sandbox 9 are equipped with Alveo U200 accelerator cards (with Virtex UltraScale+ XCU200-2FSGD2104E FPGA). These cards can be used to accelerate compute-intensive applications such as d machine learning, and video processing. They are connected to the Intel Xeon host CPU over PCI Express® (PCIe) Gen3x16 bus.
-
-This tutorial demonstrates how to run an accelerated FPGA kernel on the above platform. Vitis unified software platform 2019.2 is used for developing and deploying the application.
+Compute servers in COSMOS and cloud computing nodes in ORBIT Sandbox 9 are equipped with Alveo U200 accelerator cards (with the Virtex UltraScale+ XCU200-2FSGD2104E FPGA). These cards can be used to accelerate compute-intensive applications such as machine learning and video processing. They are connected to the Intel Xeon host CPU over a PCI Express® (PCIe) Gen3 x16 bus.
+
+This tutorial demonstrates how to run an accelerated FPGA kernel on the above-mentioned platform. The Vitis unified software platform 2019.2 is used for developing and deploying the application.

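For orientation only (not part of the v30→v31 change): with Vitis 2019.2, an accelerated kernel is compiled and linked against the U200 platform with v++. The sketch below is illustrative; the kernel name vadd, the source file vadd.cpp, and the Vitis install path are assumptions, not files or paths provided by this page.
{{{#!shell-session
root@srv1-lg1:~# source /tools/Xilinx/Vitis/2019.2/settings64.sh
root@srv1-lg1:~# v++ -t hw --platform xilinx_u200_xdma_201830_2 -c -k vadd -o vadd.xo vadd.cpp
root@srv1-lg1:~# v++ -t hw --platform xilinx_u200_xdma_201830_2 -l -o vadd.xclbin vadd.xo
}}}
A full hardware build (-t hw) can take hours; the -t sw_emu and -t hw_emu targets are available for faster emulation runs.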
=== Prerequisites ===
…

=== Resources required ===
-1 COSMOS compute server or 1 node in ORBIT SB9. This tutorial uses a compute server on COSMOS SandBox1.
+1 COSMOS compute server or 1 node in ORBIT SB9. This tutorial uses a compute server in the COSMOS [wiki:/Architecture/Domains/cosmos_bed bed] domain.

=== Tutorial Setup ===

-Follow the steps below to gain access to the [wiki:/Architecture/Domains/cosmos_sb1 sandbox 1 console] and set up the node with appropriate image.
+Follow the steps below to gain access to the [wiki:/Architecture/Domains/cosmos_bed COSMOS bed console] and set up the node with the appropriate image.
 1. If you don't have one already, sign up for a [https://www.cosmos-lab.org/portal-2/ COSMOS account]
- 1. [wiki:/GettingStarted#MakeaReservation Create a resource reservation] on sandbox 1
- 1. [Documentation/Short/Login Login] into sandbox 1 console (console.sb1.cosmos-lab.org) with an SSH session.
+ 1. [wiki:/GettingStarted#MakeaReservation Create a resource reservation] on the COSMOS bed domain.
+ 1. [Documentation/Short/Login Log in] to the COSMOS bed console (console.bed.cosmos-lab.org) with an SSH session.
 1. Make sure the server/node used in the experiment is turned off:
{{{#!shell
…
* The reserved resource (a COSMOS compute server or an ORBIT SB9 node) has an Alveo U200 card attached over the PCIe bus. Check whether the card is successfully installed and whether its firmware matches the shell installed on the host by running the lspci command as shown below. If the card is successfully installed, two physical functions should be found per card: one for management and one for user.
{{{#!shell-session
-root@node1-6:~# sudo lspci -vd 10ee:
+root@srv1-lg1:~# sudo lspci -vd 10ee:
d8:00.0 Processing accelerators: Xilinx Corporation Device d000
        Subsystem: Xilinx Corporation Device 000e
-        Flags: bus master, fast devsel, latency 0, IRQ 200, NUMA node 1
+        Flags: bus master, fast devsel, latency 0, IRQ 267, NUMA node 1
        Memory at f0000000 (32-bit, non-prefetchable) [size=32M]
        Memory at f2000000 (32-bit, non-prefetchable) [size=64K]
…

}}}
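For reference (not part of the v30→v31 change), the XRT runtime installed on the image can also list the card from the host side; a minimal check, assuming the default /opt/xilinx/xrt install path:
{{{#!shell-session
root@srv1-lg1:~# source /opt/xilinx/xrt/setup.sh
root@srv1-lg1:~# xbutil scan
}}}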
-* The above output shows only the management function. In that case, the firmware on the FPGA needs to be updated as follows. xilinx_u200_xdma_201830_2 is the deployment shell installed on the alveo_runtime.ndz image.
-{{{#!shell-session
-root@node1-6:~# sudo /opt/xilinx/xrt/bin/xbmgmt flash --update --shell xilinx_u200_xdma_201830_2
+* If the lspci output above shows only the management function, the firmware on the FPGA needs to be updated as follows. xilinx_u200_xdma_201830_2 is the deployment shell installed on the alveo-runtime.ndz image. This takes a few minutes to complete.
+{{{#!shell-session
+root@srv1-lg1:~# sudo /opt/xilinx/xrt/bin/xbmgmt flash --update --shell xilinx_u200_xdma_201830_2
         Status: shell needs updating
-         Current shell: xilinx_u200_GOLDEN_5
+         Current shell: xilinx_u200_GOLDEN_3
         Shell to be flashed: xilinx_u200_xdma_201830_2
Are you sure you wish to proceed? [y/n]: y
…
* Run the lspci command to verify the updated firmware. The output now shows both the management and user functions.
{{{
-root@node1-6:~# sudo lspci -vd 10ee:
+root@srv1-lg1:~# lspci -vd 10ee:
d8:00.0 Processing accelerators: Xilinx Corporation Device 5000
        Subsystem: Xilinx Corporation Device 000e
…
d8:00.1 Processing accelerators: Xilinx Corporation Device 5001
        Subsystem: Xilinx Corporation Device 000e
-        Flags: bus master, fast devsel, latency 0, IRQ 220, NUMA node 1
+        Flags: bus master, fast devsel, latency 0, IRQ 289, NUMA node 1
        Memory at 387ff0000000 (64-bit, prefetchable) [size=32M]
        Memory at 387ff4020000 (64-bit, prefetchable) [size=64K]
…
        Kernel driver in use: xocl
        Kernel modules: xocl
+
}}}
* Use the xbmgmt flash --scan command to view and validate the card's current firmware version, as well as to display the installed card details.
{{{#!shell-session
-root@node1-6:~# /opt/xilinx/xrt/bin/xbmgmt flash --scan
+root@srv1-lg1:~# /opt/xilinx/xrt/bin/xbmgmt flash --scan
Card [0000:d8:00.0]
    Card type:          u200
…
    Flashable partitions installed in system:
        xilinx_u200_xdma_201830_2,[ID=0x000000005d1211e8],[SC=4.2.0]
+
}}}
Note that the shell version installed on the FPGA ("Flashable partition running on FPGA") matches the one installed on the host ("Flashable partitions installed in system").
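As an optional final sanity check (not part of the v30→v31 change), XRT's xbutil can exercise the flashed card; a minimal example, assuming the same node and the default XRT install path:
{{{#!shell-session
root@srv1-lg1:~# /opt/xilinx/xrt/bin/xbutil validate
}}}
This runs XRT's built-in card tests against the installed shell.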