Galapagos Hardware Stack

Welcome to the Galapagos Hardware Stack.

Prerequisites

Both the Docker container and the native install require Xilinx Vivado to be installed. The currently supported versions are 2017.4, 2018.1, 2018.2, and 2018.3.

Docker Jupyter Tutorial

To run the tutorial, refer to the instructions in this README.

Initial Setup for Native Install

First, initialize all environment variables by sourcing the build script: source build.sh
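
For a quick sanity check that the environment was initialized, a minimal Python sketch such as the one below can help. The variable name it checks (GALAPAGOS_PATH) is an assumption for illustration, not a definitive list; consult build.sh for the variables it actually exports.

import os
import shutil

# Hypothetical variable name -- substitute whatever build.sh actually exports.
expected_vars = ["GALAPAGOS_PATH"]

missing = [v for v in expected_vars if v not in os.environ]
if missing:
    print("Missing variables (re-run 'source build.sh'):", ", ".join(missing))

# Vivado (2017.4 - 2018.3) must also be on the PATH for the hardware flow.
if shutil.which("vivado") is None:
    print("vivado not found on PATH; source the Vivado settings script first.")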

The layers of the stack that we introduce are as follows:

  • Middleware Layer
  • Hypervisor Layer
  • Physical Hardware/Network Setup

For more details on our automation process, please refer to the Makefile.

Physical Hardware/Network Setup Layer

Our setup has all FPGAs connected directly to a network switch. We have the following FPGA boards:

  • Alphadata 7v3
  • Alphadata 8k5
  • Alphadata 8v3
  • Fidus Sidewinder (also has a hardened ARM CPU)

Boards without a hardened ARM are connected to an x86 CPU via PCIe.

Hypervisor Layer

We plan to have hypervisors set up for various boards. Currently, the hypervisor abstracts away the network and PCIe interfaces: every device with a hypervisor exposes an AXI-Stream in and out through the network interface, an S_AXI control interface driven from PCIe or the ARM, and an M_AXI interface to off-chip memory.
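
To make that abstraction concrete, the sketch below simply records, in Python, the four interface roles a hypervisor-wrapped kernel sees; it is a descriptive illustration only and not part of the Galapagos code base.

from dataclasses import dataclass

@dataclass
class HypervisorInterfaces:
    """Interface roles the hypervisor exposes to a user kernel (descriptive only)."""
    s_axis: str  # AXI-Stream in, arriving through the network interface
    m_axis: str  # AXI-Stream out, leaving through the network interface
    s_axi: str   # control interface, driven from PCIe or the ARM (board dependent)
    m_axi: str   # memory-mapped master tied to off-chip memory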

Middleware Layer

The middleware takes two files (refer to LOGICALFILE and MAPFILE, defined in the Makefile) and partitions the large logical cluster described by the user across multiple separate FPGAs.

LOGICALFILE

The cluster is described in a LOGICALFILE with no notion of the mappings. The following is an example kernel from the logical file:

<kernel> kernelName
        <num> 1 </num>
        <rep> 1 </rep>
        <clk> nameOfClockPort </clk>
        <id_port> nameOfIDport </id_port>
        <aresetn> nameOfResetPort </aresetn>
        <s_axis>
            <name> nameOfInputStreamInterface </name>
            <scope> scope </scope>
        </s_axis>
        <m_axis>
            <name> nameOfOutputStreamInterface </name>
            <scope> scope </scope>
            <debug/>
        </m_axis>
        <s_axi>
            <name> nameofControlInterface </name>
            <scope> scope </scope>
        </s_axi>
        <m_axi>
            <name> nameOfMemoryInterface </name>
            <scope> scope </scope>
        </m_axi>
</kernel>

  • The <num> tag refers to the unique ID of a kernel.
  • The <rep> tag refers to the number of times to repeat a kernel. The IDs of repeated kernels are assigned sequentially.
  • The <clk> tag refers to the name of the clock interface; this will be tied to the clock in the Hypervisor.
  • The <aresetn> tag refers to the name of the reset interface; this will be tied to the reset in the Hypervisor (active-low).
  • The <id_port> tag (optional) refers to the port in the kernel that will be tied to a constant with the value of the unique kernel ID.
  • The <s_axi> tag refers to a port that is an s_axi interface. If the scope is global, it connects to the control interface (either PCIe or ARM, depending on the board). For a local scope, you can specify the master, which is another m_axi interface of local scope.
  • The <m_axi> tag refers to a port that is an m_axi interface. If it is of global scope it ties to off-chip memory; otherwise it connects to an s_axi interface of local scope.
  • The <s_axis> and <m_axis> tags are similar to the interfaces above, except that they are AXI-Stream interfaces: global scope ties to the network port, while local-scope streams can connect to each other.
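
A minimal sketch of reading these kernel descriptions with Python's standard xml.etree.ElementTree is shown below. It assumes the logical file wraps its <kernel> entries in a single root element and is saved as logical.xml (both assumptions for illustration); the actual Galapagos middleware under galapagos/middleware/python may parse the file differently.

import xml.etree.ElementTree as ET

# Assumption: all <kernel> entries sit under one root element in logical.xml.
root = ET.parse("logical.xml").getroot()

for kernel in root.findall("kernel"):
    name = (kernel.text or "").strip()     # kernel name appears before the child tags
    num = int(kernel.find("num").text)     # unique base ID
    rep = int(kernel.find("rep").text)     # number of repetitions (IDs assigned sequentially)
    print(f"kernel {name}: base id {num}, {rep} instance(s)")

    # Stream, control, and memory interfaces with their scopes (global or local).
    for tag in ("s_axis", "m_axis", "s_axi", "m_axi"):
        for iface in kernel.findall(tag):
            iface_name = iface.find("name").text.strip()
            scope = iface.find("scope").text.strip()
            print(f"  {tag}: {iface_name} (scope: {scope})")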

MAPFILE

The mapping of kernels onto physical nodes is described in a MAPFILE.
The following is an example node from the map file:

<node>
        <board> adm-8k5-debug </board>
        <comm> eth </comm>
        <type> hw </type>
        <kernel> 1 </kernel>
        <kernel> 2 </kernel>
        <kernel> 3 </kernel>
        <mac_addr>  fa:16:3e:55:ca:02 </mac_addr>
        <ip_addr> 10.1.2.102 </ip_addr>
</node>

  • The <board> tag refers to the FPGA board you wish to use for this particular node.
  • The <kernel> tag refers to the unique kernel ID that you wish to put on this node.
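
A similarly hedged sketch for the map file follows, again using xml.etree.ElementTree and assuming the <node> entries sit under a single root element in a file named map.xml (illustrative assumptions only).

import xml.etree.ElementTree as ET

# Assumption: all <node> entries sit under one root element in map.xml.
root = ET.parse("map.xml").getroot()

for node in root.findall("node"):
    board = node.find("board").text.strip()
    comm = node.find("comm").text.strip()
    node_type = node.find("type").text.strip()
    kernel_ids = [int(k.text) for k in node.findall("kernel")]
    mac = node.find("mac_addr").text.strip()
    ip = node.find("ip_addr").text.strip()
    print(f"{board} ({node_type}, {comm}) at {ip} / {mac} hosts kernels {kernel_ids}")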

For an example, refer to galapagos/middleware/python/tests/conf0/configuration_files/*

Publications

  • N. Tarafdar, N. Eskandari, V. Sharma, C. Lo, P. Chow, Galapagos: A Full Stack Approach to FPGA Integration in the Cloud, in IEEE Micro, 38(6), 2018.

  • N. Eskandari, N. Tarafdar, D. Ly-Ma, P. Chow, A Modular Heterogeneous Stack for Deploying FPGAs and CPUs in the Data Center, in FPGA Symposium 2019.

  • N. Tarafdar, T. Lin, D. Ly-Ma, D. Rozhko, A. Leon-Garcia, P. Chow, Building the Infrastructure for Deploying FPGAs in the Cloud, in Hardware Accelerators in Data Centers, Springer.

  • N. Tarafdar, N. Eskandari, V. Sharma, C. Lo, P. Chow, Heterogeneous Virtualized Network Function Framework for the Data Center, in FPL 2017.

  • N. Tarafdar, T. Lin, E. Fukuda, H. Bannazadeh, A. Leon-Garcia, P. Chow, Enabling Flexible Network FPGA Clusters in a Heterogeneous Cloud Data Center, in FPGA Symposium 2017.

Citation

If you use the Galapagos Hardware Stack in your project, please cite the following paper and/or link to the GitHub project:

@article{tarafdar2018galapagos,
  title={Galapagos: A Full Stack Approach to FPGA Integration in the Cloud},
  author={Tarafdar, Naif and Eskandari, Nariman and Sharma, Varun and Lo, Charles and Chow, Paul},
  journal={IEEE Micro},
  volume={38},
  number={6},
  pages={18--24},
  year={2018},
  publisher={IEEE}
}