If, like me, you just can’t stand your boot loader showing warnings and errors that are “known bugs” or “just warnings, nothing critical”, and you are running Antergos on ZFS, chances are you are seeing at least one of the following messages (Grub):

For starters

ERROR: resume: no device specified for hibernation

Caused by hibernation support that is still missing in ZFS. Although Antergos sets up a zvol for you as a virtual swap partition, it cannot be used for hibernation/resume, as explained here. We can still get rid of the error message, though, by telling Grub that our ZFS root is our swap volume: we pass the UUID of our root partition (most probably /dev/sda3; if it isn’t, you probably know which one it is) as a kernel parameter in our grub config. To get the UUIDs of your partitions, issue blkid. Once you have the UUID of your ZFS-on-root partition, sudo nano /etc/default/grub and add the following kernel parameter (in case you don’t already know: it goes into GRUB_CMDLINE_LINUX_DEFAULT=“quiet …”):
resume=UUID=theUUIDofyourzfsonrootpartition
Don’t forget to sudo grub-mkconfig -o /boot/grub/grub.cfg after editing the grub config file.
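Put together, the whole procedure looks roughly like this (the UUID below is obviously just a placeholder, and /dev/sda3 is only the partition from my example):

# find the UUID of your ZFS root partition
blkid

# /etc/default/grub -- append the resume parameter to the existing line
GRUB_CMDLINE_LINUX_DEFAULT="quiet resume=UUID=0a1b2c3d-4e5f-6789-abcd-ef0123456789"

# regenerate the grub config
sudo grub-mkconfig -o /boot/grub/grub.cfg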

Note: This will NOT enable hibernation, it will just get rid of the error message. If you check dmesg you will notice that the kernel now finds a hibernation partition, but it will claim that “PM: Hibernation image not present or could not be loaded”. There is no workaround for this unless you have a real, separate swap partition (which you probably don’t).
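You can check this quickly after a reboot; it is just a grep over the kernel log, nothing specific to this setup:

dmesg | grep -i hibernation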

The most common one

ash: 1: unknown operand
cannot open 'yourRootPoolName': no such pool

This one is due to the way variables are quoted and tested in the mkinitcpio zfs hook. A fix has already been merged in ZOL git, but it is not yet in Antergos (@karasu @developers). You can edit the hook yourself, though, by applying the patch discussed here and rebuilding afterwards. To do so: sudo nano /usr/lib/initcpio/hooks/zfs. I’ll leave the important bits to you (search for the corresponding lines):

# Inside of zfs_mount_handler ()
-    if ! "/usr/bin/zpool" list -H $pool 2>&1 > /dev/null ; then
+    if ! "/usr/bin/zpool" list -H "${pool}" > /dev/null 2>&1 ; then
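     # quote the variable and redirect stderr too, so the "no such pool" message stays off the console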

# The following all inside run_hook()
-    [[ $zfs_force == 1 ]] && ZPOOL_FORCE='-f'
-    [[ "$zfs_import_dir" != "" ]] ...
+    [[ "${zfs_force}" = 1 ]] && ZPOOL_FORCE='-f'
+    [[ "${zfs_import_dir}" != "" ]] ...
     # Double quotes and curly brackets !

-    if [ "$root" = 'zfs' ]; then
+    if [ "${root}" = 'zfs' ]; then

-    ZFS_DATASET=$zfs
+    ZFS_DATASET=${zfs}

After editing this you need to rebuild the initramfs with sudo mkinitcpio -p linux.
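If you want to double-check that the edited hook actually ended up in the new image, you can list the image contents (lsinitcpio comes with mkinitcpio; the image path assumes the default linux preset):

lsinitcpio /boot/initramfs-linux.img | grep hooks/zfs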

Not so common one

ZFS: No hostid found on kernel command line or /etc/hostid. ZFS pools may not import correctly

Can be remedied by adding the spl hostid to /etc/hostid. I didn’t even have this file, so I created it. Get your hostid with $ hostid and add it as the only entry.
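In command form that boils down to something like this (the id in the comment is just an example; newer zfs versions also ship a zgenhostid utility that writes the file in the binary format the module expects, so if you have it you may prefer that):

hostid                          # shows your current hostid, e.g. 007f0101
hostid | sudo tee /etc/hostid   # write it as the only entry in /etc/hostid

# alternatively, on newer zfs versions:
# sudo zgenhostid "$(hostid)"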

Even less common

[FAILED] Failed to start ZFS file system shares
See 'systemctl status zfs-share.service' for details

I got this because I had some ZFS datasets mixed up. It happens when the mountpoints of the failing datasets are not actually empty, because at some point the system (or you) put files in there that don’t belong to the dataset.
I’ll try to explain my case here. I had a dataset pool/home which mounted to /home, and several datasets with a structure of pool/userdata/documents, pool/userdata/downloads etc. The latter ones all mounted to /home/username/documents, /home/username/downloads and so on.

This hadn’t been an issue for quite some time, but I recently started getting this error and reviewed my dataset structure. Apparently the system had written all my user files and directories like .bash_profile, .cache etc. to a user directory inside of /home (naturally). When I exported pool/home, though, all of these files remained, which means they had actually been written to the directory underneath the mountpoint rather than to the dataset. I really don’t know when I messed this up by not getting my datasets set up right, but this is what happened. To remedy the situation I exported the pool, copied everything that remained (.cache, .local, .xinitrc and so on) to a temporary directory on my root pool with cp -arv, made sure everything was backed up and deleted the /home/username directory completely.
After that I set the mountpoint of pool/userdata to /home/username instead, and its children simply mount below that.
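That last step is a one-liner (pool, userdata and username are of course just the names from my example, and the children only follow along if they don’t have an explicit mountpoint of their own):

sudo zfs set mountpoint=/home/username pool/userdata
zfs list -o name,mountpoint -r pool/userdata   # pool/userdata/documents now shows up at /home/username/documents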

This is very specific and individual, and it can happen every time files and directories get written that don’t export together with the respective dataset. In that case review your dataset structure, then export, import and review again until you see where you have to move or remove stuff so your datasets mount fine (the loop is sketched below).
I referred to this comment on GitHub to get an idea and start reviewing my sets in rescue mode.
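The loop itself is nothing fancy. Assuming your pool is simply called pool and you are doing this as root from a rescue/live environment (on a running system the export will fail because /home is busy), it is roughly:

zpool export pool
ls -la /home                  # whatever is still listed here lives outside your datasets
zpool import pool
zfs list -o name,mountpoint,mounted
# move or remove the leftovers, then export and check again until nothing is left behind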

Good luck!