# Linaro's Automated Validation Architecture (LAVA) Docker Container

## Introduction
The goal of lava-docker is to simplify the installation and maintenance of a LAVA lab in order to participate in distributed test efforts such as kernelCI.org.
With lava-docker, you describe the devices under test (DUT) in a simple YAML file, and a custom script then generates the necessary LAVA configuration files automatically.
Similarly, LAVA users and authentication tokens are described in a(nother) YAML file, and the corresponding LAVA configuration is generated automatically.
This enables the setup of a LAVA lab with minimal knowledge of the underlying LAVA configuration steps.

## Prerequisites
lava-docker has currently been tested primarily on Debian stable (stretch).
The following packages are necessary on the host machine:
* docker
* docker-compose
* pyyaml

## Quickstart
Example of using lava-docker with only one QEMU device:
* Check out the lava-docker repository
* Generate the configuration files for LAVA, udev, serial ports, etc. from boards.yaml via
```
./lavalab-gen.py
```
* Go to the output/local directory
* Build the docker images via
```
docker-compose build
```
* Start all images via
```
docker-compose up -d
```
* Once launched, you can access the LAVA web interface via http://localhost:10080/. With the default users, you can log in with admin:admin.
* By default, a LAVA healthcheck job will be run on the qemu device. You will see it in the "All Jobs" list: http://localhost:10080/scheduler/alljobs
* You can also see the full job output by clicking the blue eye icon ("View job details") (or via http://localhost:10080/scheduler/job/1, since it is the first job run)
* For more details, see https://validation.linaro.org/static/docs/v2/first-job.html

### Adding your first board

#### device-type
To add a board, you first need to find its device-type. The standard naming is to use the official kernel device-tree (DT) name, although a few DUTs differ from that.
You can check in https://github.com/Linaro/lava-server/tree/release/lava_scheduler_app/tests/device-types whether yours is already listed.

Example: for a beagleboneblack, the device-type is beaglebone-black (even though the official DT name is am335x-boneblack).

So you need to add in the boards section:
```
- name: beagleboneblack-01
  type: beaglebone-black
```

#### UART
The next step is to gather information on the UART wired to the DUT.
If you have an FTDI adapter, simply get its serial number (visible in lsusb -v or, on most distributions, in dmesg).
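As an additional sketch (assuming the adapter appears as /dev/ttyUSB0; adjust the device node to match your setup), the serial can also be read from the udev attributes, mirroring the devpath command used below:
```
udevadm info -a -n /dev/ttyUSB0 | grep 'ATTRS{serial}' | head -n1
```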

For other UART types (or for an old FTDI without a serial number) you need to get the devpath attribute via:
```
udevadm info -a -n /dev/ttyUSBx |grep ATTR|grep devpath | head -n1
```
Example with an FTDI UART:
```
[    6.616707] usb 4-1.4.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[    6.704305] usb 4-1.4.2: SerialNumber: AK04TU1X
```
The serial is AK04TU1X.

So you now have:
```
- name: beagleboneblack-01
  type: beaglebone-black
  uart:
    idvendor: 0x0403
    idproduct: 0x6001
    serial: AK04TU1X
```
Example with an FTDI without a serial number:
```
[2428401.256860] ftdi_sio 1-1.4:1.0: FTDI USB Serial Device converter detected
[2428401.256916] usb 1-1.4: Detected FT232BM
[2428401.257752] usb 1-1.4: FTDI USB Serial Device converter now attached to ttyUSB1

udevadm info -a -n /dev/ttyUSB1 |grep devpath | head -n1
ATTRS{devpath}=="1.5"
```
So you now have:
```
- name: beagleboneblack-01
  type: beaglebone-black
  uart:
    idvendor: 0x0403
    idproduct: 0x6001
    devpath: "1.5"
```

#### PDU (Power Distribution Unit)
The final step is to manage the powering of the board.
Many PDU switches can be handled by a command-line tool which controls the PDU.
You need to fill boards.yaml with the command lines to be run.
Example with an ACME board: if the beagleboneblack is wired to port 3 and the ACME board has IP 192.168.66.2:
```
pdu_generic:
  hard_reset_command: /usr/local/bin/acme-cli -s 192.168.66.2 reset 3
  power_off_command: /usr/local/bin/acme-cli -s 192.168.66.2 power_off 3
  power_on_command: /usr/local/bin/acme-cli -s 192.168.66.2 power_on 3
```

#### Example: beagleboneblack with an FTDI (serial 1234567), connected to port 5 of an ACME
```
- name: beagleboneblack-01
  type: beaglebone-black
  pdu_generic:
    hard_reset_command: /usr/local/bin/acme-cli -s 192.168.66.2 reset 5
    power_off_command: /usr/local/bin/acme-cli -s 192.168.66.2 power_off 5
    power_on_command: /usr/local/bin/acme-cli -s 192.168.66.2 power_on 5
  uart:
    idvendor: 0x0403
    idproduct: 0x6001
    serial: 1234567
```

## Architecture
The basic setup is composed of a host which runs the following docker images, plus the DUTs to be tested.
* lava-master: runs lava-server along with the web interface
* lava-slave: runs lava-dispatcher, the component which sends jobs to the DUTs
* squid: an HTTP proxy for caching downloaded content (kernel/dtb/rootfs) (work in progress)

The host and DUTs must share a common LAN.
The host IP on this LAN must be set as dispatcher_ip in boards.yaml.
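For example (a minimal sketch; 192.168.66.1 is just a placeholder on the same example network as the ACME board above):
```
dispatcher_ip: 192.168.66.1
```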
Since most DUTs are booted via TFTP, they need DHCP to gain network connectivity.
So a running DHCP server is necessary on the LAN shared with the DUTs. (See DHCPD below.)
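As a minimal sketch of such a setup (assuming ISC dhcpd and the placeholder 192.168.66.0/24 network from the examples above; adapt the addresses to your LAN):
```
subnet 192.168.66.0 netmask 255.255.255.0 {
    # Addresses handed out to the DUTs
    range 192.168.66.100 192.168.66.200;
    # Placeholder: the lava-docker host, which serves the boot files over TFTP
    next-server 192.168.66.1;
}
```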
![lava-docker diagram](doc/lava-docker.png)

## Multi-host architectures
Lava-docker supports multi-host architectures: the master and the slaves can run on different hosts.
Lava-docker supports multiple slaves, but with a maximum of one slave per host.
This is because each slave needs its TFTP port to be accessible from outside.

### Power supply
You need to have a PDU for powering your DUTs.
Managing PDUs is done via pdu_generic.

### Network ports
The following ports are used by lava-docker and must be accessible from outside the host, at a minimum:
* 10080/TCP: the LAVA web interface
* 69/UDP: TFTP, so that DUTs can fetch boot files from the slave
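As an illustrative sketch only (assuming a host firewall managed with ufw; the ports follow the list above), making them accessible could look like:
```
ufw allow 10080/tcp
ufw allow 69/udp
```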