Ubuntu Core (UC) is Canonical’s take on the IoT space. There are pre-built images for officially supported devices, like the Raspberry Pi or Intel NUCs, but if we have something else and there is no community port, we need to create the UC image ourselves. High-level instructions on how to do this can be found in the official docs. The process is straightforward once we have two critical components: the kernel snap and the gadget snap.
Creating these snaps is not necessarily complex, but there can be bumps in the road if you are new to the task. In this post I explain how I created them for the Jetson TX1 developer kit board, and how they were used to create a UC image for that device, in the hope that this will provide new tricks to hackers working on ports for other devices. All the sources for the snaps and the build scripts are available on GitHub:
So, let’s start with…
The kernel snap
The Linux kernel that we will use needs some kernel configuration options to be activated, and it is also especially important that it has a modern version of apparmor so snaps can be properly confined. The official Jetson kernel is the 4.4 release, which is quite old, but fortunately Canonical has a reference 4.4 kernel with all the needed patches for snaps backported. Knowing this, we are a git format-patch command away from obtaining the patches we will apply on top of the nvidia kernel. The patches also include files with the configuration options that we need for snaps, plus some changes so the snap can be successfully compiled on an Ubuntu 18.04 desktop.
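To illustrate the idea, here is a toy repository standing in for the kernel tree: every commit made on top of the nvidia base tag gets exported as a numbered .patch file, ready to be applied later with git am. Tag, file, and commit names here are illustrative, not the real ones.

```shell
set -e
# Toy repository standing in for the kernel tree (names are illustrative):
repo=$(mktemp -d)
cd "$repo"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "nvidia base"
git tag tegra-l4t-r28.2.1            # stand-in for the nvidia base tag
echo "apparmor backport" > fix.c
git add fix.c
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "apparmor backport"
# Every commit on top of the base tag becomes a numbered .patch file:
git format-patch -o patches/ tegra-l4t-r28.2.1..HEAD
```

The same command shape, run against the reference tree with the real nvidia tag as the base, produces the patch series used below.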
Once we have the sources, we need, of course, to create a snapcraft.yaml file that will describe how to build the kernel snap. We will walk through it, highlighting the parts more specific to the Jetson device.
Starting with the kernel part, it turns out that we cannot easily use the kernel plugin, due to the special way in which the kernel needs to be built: nvidia distributes part of the needed drivers in repositories separate from the one used for the main kernel tree. Therefore, I resorted to the nil plugin so I could hand-write the commands that do the build.
The pull stage that resulted is
# Get kernel sources, which are distributed across different repos
./source_sync.sh -k tegra-l4t-r28.2.1
# Apply canonical patches - apparmor stuff essentially
git am ../../../patch-display/*
git am ../../../patch/*
which runs a script to retrieve the sources (I pulled this script from nvidia’s Linux for Tegra -L4T- distribution) and then applies the Canonical patches.
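Put together, the kernel part in snapcraft.yaml looks roughly like this (a sketch only; the build script name is illustrative, and the real recipe in the repository has more detail):

```yaml
parts:
  kernel:
    plugin: nil
    override-pull: |
      # Get kernel sources, which are distributed across different repos
      ./source_sync.sh -k tegra-l4t-r28.2.1
      # Apply canonical patches - apparmor stuff essentially
      git am ../../../patch-display/*
      git am ../../../patch/*
    override-build: |
      # Hand-written build, see next section (script name is illustrative)
      ./build-kernel.sh
```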
The build stage is a few more lines, so I decided to implement it in an external script. Let us now analyze parts of it. For the kernel configuration we add all the necessary Ubuntu bits:
make "$JETSON_KERNEL_CONFIG" \
Then, to do the build we run
make -j"$num_cpu" Image modules dtbs
An interesting catch here is that zImage files are not supported, due to the lack of a decompressor implementation in the arm64 kernel, so we have to build an uncompressed Image instead.
After some code that stages the built files so they are included in the snap later, we retrieve the initramfs from the core snap. This step is usually hidden from us by the kernel plugin, but this time we have to code it ourselves:
# Get initramfs from core snap, which we need to download
core_url=$(curl -s -H "X-Ubuntu-Series: 16" -H "X-Ubuntu-Architecture: arm64" \
| jq -r ".anon_download_url")
curl -L "$core_url" > core.snap
# Glob so we get both link and regular file
unsquashfs core.snap "boot/initrd.img-core*"
cp squashfs-root/boot/initrd.img-core "$SNAPCRAFT_PART_INSTALL"/initrd.img
ln "$SNAPCRAFT_PART_INSTALL"/initrd.img "$SNAPCRAFT_PART_INSTALL"/initrd-"$KERNEL_RELEASE".img
Moving back to the snapcraft recipe we also have an initramfs part, which takes care of doing some changes to the default initramfs shipped by UC:
after: [ kernel ]
find . | cpio --quiet -o -H newc | lzma >> "$SNAPCRAFT_STAGE"/initrd.img
Here we take advantage of the fact that the initramfs can be built as a concatenation of compressed cpio archives. When the kernel decompresses it, files included in the later archives overwrite those from the earlier ones, which lets us easily modify files in the initramfs without having to change the archive shipped with core. The change we make here is a modification to the resize script that lets UC claim all the free space on the disk on first boot. The modification makes sure this also happens when the partition has already taken all the available space but the filesystem has not. We will be able to remove this modification once these changes reach the core snap, which will happen eventually.
The last part of this snap is the firmware part:
wget https://developer.nvidia.com/embedded/dlc/l4t-jetson-tx1-driver-package-28-2-ga -O Tegra210_Linux_R28.2.0_aarch64.tbz2
tar xf Tegra210_Linux_R28.2.0_aarch64.tbz2 Linux_for_Tegra/nv_tegra/nvidia_drivers.tbz2
tar xf Linux_for_Tegra/nv_tegra/nvidia_drivers.tbz2 lib/firmware/
cd lib; cp -r firmware/ "$SNAPCRAFT_PART_INSTALL"
mkdir -p "$SNAPCRAFT_PART_INSTALL"/firmware/gm20b
cd "$SNAPCRAFT_PART_INSTALL"/firmware/gm20b
ln -sf "../tegra21x/acr_ucode.bin" "acr_ucode.bin"
ln -sf "../tegra21x/gpmu_ucode.bin" "gpmu_ucode.bin"
ln -sf "../tegra21x/gpmu_ucode_desc.bin" "gpmu_ucode_desc.bin"
ln -sf "../tegra21x/gpmu_ucode_image.bin" "gpmu_ucode_image.bin"
ln -sf "../tegra21x/gpu2cde.bin" "gpu2cde.bin"
ln -sf "../tegra21x/NETB_img.bin" "NETB_img.bin"
ln -sf "../tegra21x/fecs_sig.bin" "fecs_sig.bin"
ln -sf "../tegra21x/pmu_sig.bin" "pmu_sig.bin"
ln -sf "../tegra21x/pmu_bl.bin" "pmu_bl.bin"
ln -sf "../tegra21x/fecs.bin" "fecs.bin"
ln -sf "../tegra21x/gpccs.bin" "gpccs.bin"
Here we download some files so we can add the firmware blobs to the snap. These files are distributed separately from the nvidia kernel sources.
So this is it for the kernel snap; now you just need to follow the instructions to get it built.
The gadget snap
Time now to take a look at the gadget snap. First, I recommend starting with ogra’s great post on gadget snaps for devices with the u-boot bootloader before going through this section. Now, as with the kernel, we will go through the different parts defined in the snapcraft.yaml file. The first one builds the u-boot binary:
# Apply UC patches + bug fixes
git am ../../../uboot-patch/*.patch
export ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu-
nice make -j$(nproc)
cp "$SNAPCRAFT_PART_BUILD"/u-boot.bin "$SNAPCRAFT_PART_INSTALL"/
We decided again to use the nil plugin, as we need to perform some special quirks. The sources are pulled from nvidia’s u-boot repository, but we apply some patches on top. These patches, along with the u-boot environment, provide:
- Support for loading the UC kernel and initramfs from disk
- Support for the revert functionality in case a core or kernel snap installation goes wrong
- Bug fixes for u-boot’s ext4 subsystem, required because the just-mentioned revert functionality needs to call u-boot’s saveenv command, which happened to be broken for ext4 filesystems in tegra’s u-boot
More information on the specifics of u-boot patches for UC can be found in this great blog post.
The only other part that the snap has is uboot-env:
mkenvimage -r -s 131072 -o uboot.env uboot.env.in
cp "$SNAPCRAFT_PART_BUILD"/uboot.env "$SNAPCRAFT_PART_INSTALL"/
# Link needed for ubuntu-image to work properly
cd "$SNAPCRAFT_PART_INSTALL"/; ln -s uboot.env uboot.conf
This simply encodes the uboot.env.in file into a format readable by u-boot. The resulting file, uboot.env, is included in the snap.
This environment is where most of the UC support is encoded. I will not delve too much into the details, but I do want to mention the variables that usually need to be edited for new devices:
- devtype sets the device holding the system boot partition, from which we load the kernel and initramfs
- fdt_high limits where in memory the device tree can be relocated
- initrd_high limits the loading location for the initramfs
- kernel_addr_r sets where the kernel needs to be loaded
- args contains the kernel command line arguments and needs to be adapted to the device specifics
- Finally, for this device, snappy_boot was changed so it uses booti instead of bootz, as we could not use a compressed kernel, as explained above
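As an illustration, the relevant lines in uboot.env.in have roughly this shape (the values below are made up for the example; the real ones are in the repository):

```
devtype=mmc
fdt_high=0xffffffffffffffff
initrd_high=0xffffffffffffffff
kernel_addr_r=0x83000000
args=setenv bootargs console=ttyS0,115200n8 ${snappy_cmdline}
```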
Besides the snapcraft recipe, the other mandatory file when defining a gadget snap is the gadget.yaml file. This file defines, among other things, the image partitioning layout. There is more to it, but in this case, partitioning is the only thing we have defined:
- name: system-boot
- name: TBC
- name: EBT
- name: BPF
- name: WB0
- name: RP1
- name: TOS
- name: EKS
- name: FX
- name: BMP
- name: SOS
- name: EXI
- name: LNX
  content:
    - image: u-boot.bin
- name: DTB
- name: NXT
- name: MXB
- name: MXP
- name: USP
The Jetson TX1 has a complex partitioning layout, with many partitions allocated to the first-stage bootloader and many others that are undocumented. So, to minimize the risk of touching a critical partition, I preferred to leave most of them untouched and make only the minimal changes needed to fit UC onto the device. Therefore, the gadget.yaml volumes entry mostly describes the TX1 defaults, the main differences from the original being:
- The APP partition is renamed to system-boot and reduced to only 64MB. It will contain the u-boot environment file plus the kernel and initramfs, as is usual in UC systems with the u-boot bootloader.
- The LNX partition will contain our u-boot binary
- If a partition with role: system-data is not defined explicitly (which is the case here), a partition with that role and with the label “writable” is implicitly defined at the end of the volume. It will take all the space freed by shrinking the APP partition and will contain the UC root filesystem, replacing the UDA partition that comes last in nvidia’s partitioning scheme.
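For reference, the two modified entries then look something like the following sketch (sizes, type GUIDs and the ext4 choice are illustrative here; the repository has the authoritative file):

```yaml
volumes:
  jetson:
    bootloader: u-boot
    structure:
      - name: system-boot
        type: 0FC63DAF-8483-4772-8E79-3D69D8477DE4
        filesystem: ext4
        size: 64M
      - name: LNX
        type: 0FC63DAF-8483-4772-8E79-3D69D8477DE4
        size: 4M
        content:
          - image: u-boot.bin
```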
Now, it is time to build the gadget snap by following the repository instructions.
Building & flashing the image
Now that we have the snaps, it is time to build the image. There is not much to it: you just need an Ubuntu One account and to follow the instructions to create a key so you can sign a model assertion. With that in place, just follow the README.md file in the jetson-ubuntu-core repository. You can also download the latest tarball from the repository if you prefer.
The build script generates not only a full image file, but also a tarball containing separate files for each partition that needs to be flashed to the device. This is needed because, unfortunately, there is no way to fully flash the Jetson device with a GPT image; instead, we can only flash individual partitions with the tools nvidia provides.
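Conceptually, each per-partition file is just a slice of the full image, carved out with dd at the offsets recorded in the GPT (which can be read with, e.g., sfdisk --json). A toy example of the idea, with made-up offsets:

```shell
set -e
# Build a small fake disk image standing in for the full UC image:
work=$(mktemp -d)
truncate -s 1M "$work"/disk.img
# Pretend a partition starts at sector 100 (offsets are made up):
printf 'BOOTDATA' | dd of="$work"/disk.img bs=512 seek=100 conv=notrunc status=none
# Carve that one-sector "partition" out into its own file, the same way
# the build script produces the per-partition files:
dd if="$work"/disk.img of="$work"/part.img bs=512 skip=100 count=1 status=none
```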
Once the build finishes, we can take the resulting tarball and follow the instructions to flash the necessary partitions. As explained there, we have to download the nvidia L4T package. Also, note that to be able to change the partition sizes and the files to flash, a couple of patches have to be applied on top of the L4T scripts.
After this, you should have a working Ubuntu Core 18 device. You can use the serial port or an external monitor to configure it with your Launchpad account so you can ssh into it. Enjoy!