• Resolving slow boot times

    Hi @all

    I have had this ongoing problem for about 3-4 weeks now. I have had a quick look around the Antergos forums and searched the net for slow boot times on Arch-based systems. No luck yet, though there are some threads on similar issues to mine, but with different root causes.

    systemd-analyze critical-chain
    The time after the unit is active or started is printed after the "@" character.
    The time the unit takes to start is printed after the "+" character.
    graphical.target @18.731s
    └─multi-user.target @18.731s
      └─ModemManager.service @13.621s +5.110s
        └─basic.target @13.620s
          └─sockets.target @13.620s
            └─avahi-daemon.socket @13.620s
              └─sysinit.target @12.937s
                └─systemd-update-utmp.service @12.876s +61ms
                  └─systemd-tmpfiles-setup.service @12.705s +170ms
                    └─systemd-journal-flush.service @9.927s +2.777s
                      └─systemd-journald.service @2.611s +7.315s
                        └─systemd-journald-dev-log.socket @2.515s
                          └─-.slice @2.102s
    systemd-analyze blame
             18.014s user@1000.service
              7.405s dev-sda1.device
              7.315s systemd-journald.service
              5.110s ModemManager.service
              3.166s ufw.service
              2.777s systemd-journal-flush.service
              2.669s systemd-logind.service
              2.455s avahi-daemon.service
              2.449s systemd-user-sessions.service
              2.431s systemd-udevd.service
              2.348s bluetooth.service
              2.346s NetworkManager.service
              1.819s ntpd.service
              1.710s systemd-vconsole-setup.service
              1.596s lightdm.service
               927ms polkit.service
               859ms systemd-tmpfiles-setup-dev.service
               599ms systemd-localed.service
               575ms colord.service
               542ms systemd-random-seed.service
               520ms accounts-daemon.service
               429ms systemd-rfkill.service
               338ms systemd-sysctl.service
               334ms dev-hugepages.mount
               324ms dev-mqueue.mount
               269ms systemd-udev-trigger.service
               233ms kmod-static-nodes.service
               206ms sys-kernel-debug.mount
               206ms sys-kernel-config.mount
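    (Side note for comparing entries: blame mixes seconds and milliseconds. A quick awk normaliser puts everything in ms — the two sample lines below are made up for illustration, not my real output:)

```shell
# Two made-up sample lines standing in for `systemd-analyze blame` output.
sample='18.014s some.service
927ms other.service'

# awk parses "18.014s" numerically as 18.014 and "927ms" as 927,
# so every entry can be printed in milliseconds for easy comparison.
echo "$sample" | awk '{ms = ($1 ~ /ms$/) ? $1 + 0 : $1 * 1000; printf "%8d ms  %s\n", ms, $2}'
```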

    From the output above, the main culprit is user@1000.service, which I have looked into before without finding a good solution.
    Here is some additional info:

    sudo systemctl status user@1000.service
    user@1000.service - User Manager for UID 1000
       Loaded: loaded (/usr/lib/systemd/system/user@.service; static; vendor preset: disabled)
       Active: active (running) since Wed 2015-11-18 12:06:14 MST; 15min ago
     Main PID: 709 (systemd)
       Status: "Startup finished in 18.009s."
       CGroup: /user.slice/user-1000.slice/user@1000.service
               ├─ 844 /usr/lib/gvfs/gvfsd
               ├─ 849 /usr/lib/gvfs/gvfsd-fuse /run/user/1000/gvfs -f -o big_writes
               ├─1073 /usr/lib/gvfs/gvfsd-trash --spawner :1.11 /org/gtk/gvfs/exec_s...
               ├─ 799 /usr/bin/dbus-daemon --session --address=systemd: --nofork --n...
               ├─ 805 /usr/lib/at-spi2-core/at-spi-bus-launcher
               ├─ 810 /usr/bin/dbus-daemon --config-file=/etc/at-spi2/accessibility....
               ├─ 812 /usr/lib/at-spi2-core/at-spi2-registryd --use-gnome-session
               ├─ 939 /usr/lib/gnome-shell/gnome-shell-calendar-server
               ├─ 944 /usr/lib/evolution-data-server/evolution-source-registry
               ├─ 952 /usr/lib/gnome-online-accounts/goa-daemon
               ├─ 959 /usr/lib/gnome-online-accounts/goa-identity-service
               ├─ 961 /usr/lib/telepathy/mission-control-5
               ├─1017 /usr/bin/zeitgeist-daemon
               ├─1027 /usr/lib/tracker/tracker-store
               ├─1078 /usr/lib/zeitgeist/zeitgeist-fts
               ├─1110 /usr/lib/evolution-data-server/evolution-calendar-factory
               ├─1135 /usr/lib/dconf/dconf-service
               ├─1137 /usr/lib/GConf/gconfd-2
               ├─1149 /usr/lib/evolution-data-server/evolution-calendar-factory-subp...
               ├─1163 /usr/lib/evolution-data-server/evolution-addressbook-factory
               ├─1165 /usr/lib/evolution-data-server/evolution-calendar-factory-subp...
               ├─1176 /usr/lib/evolution-data-server/evolution-calendar-factory-subp...
               ├─1195 /usr/lib/evolution-data-server/evolution-addressbook-factory-s...
               ├─1439 /usr/lib/gnome-terminal/gnome-terminal-server
               ├─1443 bash
               ├─1664 dbus-launch --autolaunch=57ab45711920499cba4e7424c492a2d7 --bi...
               ├─1665 /usr/bin/dbus-daemon --fork --print-pid 5 --print-address 7 --...
               ├─1668 /usr/lib/dconf/dconf-service
               ├─2707 sudo systemctl status user@1000.service
               ├─2708 systemctl status user@1000.service
               ├─ 837 /usr/bin/pulseaudio --daemonize=no
               ├─ 975 /usr/lib/gvfs/gvfs-goa-volume-monitor
               ├─1401 /usr/lib/gvfs/gvfsd-metadata
               ├─ 709 /usr/lib/systemd/systemd --user
               ├─ 710 (sd-pam)
               ├─ 971 /usr/lib/gvfs/gvfs-udisks2-volume-monitor
               ├─ 979 /usr/lib/gvfs/gvfs-gphoto2-volume-monitor
               └─ 983 /usr/lib/gvfs/gvfs-mtp-volume-monitor

    user@1000.service is linked to a lot of services. How can I fix this for better boot times/performance?
    And why is it taking so long to get going?

    In case you are interested, here is some hardware information. This is not an old computer; the XPS 15 is still fairly new.

    Architecture:          x86_64
    CPU op-mode(s):        32-bit, 64-bit
    Byte Order:            Little Endian
    CPU(s):                8
    On-line CPU(s) list:   0-7
    Thread(s) per core:    2
    Core(s) per socket:    4
    Socket(s):             1
    NUMA node(s):          1
    Vendor ID:             GenuineIntel
    CPU family:            6
    Model:                 60
    Model name:            Intel(R) Core(TM) i7-4712HQ CPU @ 2.30GHz
    Stepping:              3
    CPU MHz:               2301.078
    CPU max MHz:           3300.0000
    CPU min MHz:           800.0000
    BogoMIPS:              4589.66
    Virtualization:        VT-x
    L1d cache:             32K
    L1i cache:             32K
    L2 cache:              256K
    L3 cache:              6144K
    NUMA node0 CPU(s):     0-7
    Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm ida arat epb pln pts dtherm tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt
    free -m
                  total        used        free      shared  buff/cache   available
    Mem:          15956        1892       11171        1175        2892       12783
    Swap:          7976           0        7976

    So you can see from the above that I am not short on resources.

    No failed units either:

    systemctl --failed
    0 loaded units listed. Pass --all to see loaded but inactive units, too.
    To show all installed unit files use 'systemctl list-unit-files'.

    The other thing that crossed my mind: could broken symlinks be one of the issues behind the poor boot time?
    For example, find ./ -type l -exec file {} \; | grep broken turns up a bunch of broken symlinks. But even if they were the culprits, how would I get rid of them to find out?
    And would broken symlinks even be culprits? If I deleted the wrong ones, I might break the system.

    EDIT: I don’t think broken symlinks are even a partial cause of the longer boot times, but I did find a resource for removing them.
    Still looking for a solution to the long boot time.
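    (For reference, the safer way I found to even list them, sketched below with GNU find — nothing here deletes anything:)

```shell
# GNU find: -xtype l matches symlinks whose targets no longer exist.
# This only lists them; review the list before deleting anything.
find "$HOME" -xtype l 2>/dev/null || true   # || true: ignore unreadable dirs

# Once reviewed, remove them interactively, one at a time:
#   find "$HOME" -xtype l -exec rm -i {} \;
```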

  • @Modisc said:

    sudo systemctl status user@1000.service
    user@1000.service - User Manager

    Are you using an SSD or HDD? What’s the output of: sudo systemd-analyze critical-chain user@1000.service?

  • @lots-0-logs

    I have an SSD. The system, of course, is installed on the HDD.

    systemd-analyze critical-chain user@1000.service
    The time after the unit is active or started is printed after the "@" character.
    The time the unit takes to start is printed after the "+" character.
    user@1000.service +16.824s
    └─user-1000.slice @36.985s
      └─user.slice @2.405s
        └─-.slice @2.079s

    some extra info. Might not be needed but thought that I post it anyways:

    systemctl list-dependencies --after user@1000.service
    user@1000.service
    ● ├─systemd-journald.socket
    ● ├─systemd-user-sessions.service
    ● ├─user-1000.slice
    ● └─basic.target
    ●   ├─-.mount
    ●   ├─tmp.mount
    ●   ├─paths.target
    ●   │ ├─org.cups.cupsd.path
    ●   │ ├─systemd-ask-password-console.path
    ●   │ └─systemd-ask-password-wall.path
    ●   ├─slices.target
    ●   │ ├─-.slice
    ●   │ ├─machine.slice
    ●   │ ├─system.slice
    ●   │ └─user.slice
    ●   ├─sockets.target
    ●   │ ├─avahi-daemon.socket
    ●   │ ├─dbus.socket
    ●   │ ├─org.cups.cupsd.socket
    ●   │ ├─syslog.socket
    ●   │ ├─systemd-initctl.socket
    ●   │ ├─systemd-journald-audit.socket
    ●   │ ├─systemd-journald-dev-log.socket
    ●   │ ├─systemd-journald.socket
    ●   │ ├─systemd-udevd-control.socket
    ●   │ └─systemd-udevd-kernel.socket
    ●   └─sysinit.target
    ●     ├─dev-hugepages.mount
    ●     ├─dev-mqueue.mount
    ●     ├─emergency.service
    ●     ├─kmod-static-nodes.service
    ●     ├─ldconfig.service
    ●     ├─proc-sys-fs-binfmt_misc.automount
    ●     ├─sys-fs-fuse-connections.mount
    ●     ├─sys-kernel-config.mount
    ●     ├─sys-kernel-debug.mount
    ●     ├─systemd-backlight@backlight:intel_backlight.service
    ●     ├─systemd-backlight@leds:dell::kbd_backlight.service
    ●     ├─systemd-binfmt.service
    ●     ├─systemd-firstboot.service
    ●     ├─systemd-hwdb-update.service
    ●     ├─systemd-journal-catalog-update.service
    ●     ├─systemd-journald.service
    ●     ├─systemd-machine-id-commit.service
    ●     ├─systemd-modules-load.service
    ●     ├─systemd-random-seed.service
    ●     ├─systemd-sysctl.service
    ●     ├─systemd-sysusers.service
    ●     ├─systemd-timesyncd.service
    ●     ├─systemd-tmpfiles-setup-dev.service
    ●     ├─systemd-tmpfiles-setup.service
    ●     ├─systemd-udev-trigger.service
    ●     ├─systemd-udevd.service
    ●     ├─systemd-update-done.service
    ●     ├─systemd-update-utmp.service
    ●     ├─systemd-vconsole-setup.service
    ●     ├─ufw.service
    ●     ├─cryptsetup.target
    ●     ├─emergency.target
    ●     │ └─emergency.service
    ●     ├─local-fs.target
    ●     │ ├─-.mount
    ●     │ ├─home-frankenstein-.cache.mount
    ●     │ ├─run-user-1000-gvfs.mount
    ●     │ ├─run-user-1000.mount
    ●     │ ├─systemd-fsck-root.service
    ●     │ ├─systemd-remount-fs.service
    ●     │ ├─tmp.mount
    ●     │ └─local-fs-pre.target
    ●     │   ├─dm-event.service
    ●     │   ├─systemd-remount-fs.service
    ●     │   └─systemd-tmpfiles-setup-dev.service
    ●     └─swap.target
    ●       └─dev-disk-by\x2duuid-49e89d3d\x2d6164\x2d4957\x2da3be\x2df306674da2d3.swap

    Are all these services, sockets, etc. waiting on user@1000.service before they themselves can load? Is that why it takes so long? There must be an explanation for this.


    systemctl list-dependencies --before user@1000.service
    user@1000.service
    ● └─shutdown.target
  • I am just wondering: since this is a daemon, can I stop it from running at startup? Maybe with something written in /etc/rc.conf.
    The Arch Wiki states:

    Initscripts uses rc.d scripts to control the starting, stopping and restarting of daemons

    from the SysVinit section.
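    (Or, since this is a systemd system, would the equivalent be something like the sketch below? Echo-guarded so it only prints the commands, and ModemManager is just a stand-in example of a unit from the blame list:)

```shell
# Hypothetical target unit -- substitute whichever unit is actually slow.
unit="ModemManager.service"

# Echo-guarded: these only print; drop the leading echo to really run them.
echo sudo systemctl disable "$unit"   # stop it being started at boot
echo sudo systemctl mask "$unit"      # stronger: block any start attempt
```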

  • Is this boot time a recent change? I mean, was it once noticeably faster than it is now?

  • It’s been like this for about 3-4 weeks now, so yes, it was much quicker before.
    I have been searching the net since then for how to resolve it. Initially I thought the journal files had grown too large or too numerous, but that is definitely not the reason:
    journalctl --disk-usage followed by sudo journalctl --vacuum-size=XM is already a common routine of mine, so a large journal is not the cause.
    Is the user@1000.service unit a necessity?
    Do you know offhand?
    What would happen if I disabled it?
    Looking at the outputs above, a lot seems to depend on this service.
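    (Concretely, the routine I mean looks roughly like this; the 200M cap is only an example figure, and the vacuum line is echo-guarded here so nothing is actually deleted:)

```shell
# How much space do the archived journal files currently use?
journalctl --disk-usage 2>/dev/null || echo "journalctl not available here"

# Example cap (200M is illustrative, not my real value). Echo-guarded:
# remove the leading echo to actually trim the archived journals.
echo sudo journalctl --vacuum-size=200M
```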

  • That service is a wrapper unit under which your entire session runs (from systemd’s point of view). I don’t have time to explain it in detail at the moment, so you’ll just have to trust me when I say you shouldn’t focus on user@1000.service specifically while trying to identify the problem. It’s not the problem. If you want to learn why issues like this are almost impossible to debug, you have to understand how systemd works internally. Here is a great article if you are interested:


    I will see what I can dig up related to your issue later tonight and get back to you. Cheers!

  • @lots-0-logs
    Ok. Thanks for the link.
    I see it is quite lengthy, but it might prove educational.
