Added file and container image caching (#4828)

* File and container image downloads are now cached locally, so that repeated vagrant up/down runs do not trigger downloading of those files. This is especially useful on laptops with kubernetes running locally on VMs. The total size of the cache, after an ansible run, is currently around 800MB, so bandwidth (= time) savings can be quite significant.

* When download_run_once is false, the default is still not to cache, but setting download_force_cache will enable caching anyway.
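Within the role, this rule reduces to a single fact (as set in prep_download.yml):

```yaml
- name: prep_download | Set a few facts
  set_fact:
    # Caching is forced whenever download_run_once is true; otherwise the
    # user-supplied download_force_cache value is kept as-is.
    download_force_cache: "{{ true if download_run_once else download_force_cache }}"
```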

* The local cache location can be set with download_cache_dir and defaults to /tmp/kubespray_cache

* A local docker instance is no longer required to cache docker images; images are cached to file. A local docker instance is still required, though, if you wish to download images on localhost.
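Caching images to file boils down to a docker save/load round-trip. Slightly simplified from the tasks in download_container.yml (delegation, privilege escalation and conditions omitted here):

```yaml
# On the download delegate: serialize the pulled image into the cache, compressed.
- name: download_container | Save and compress image
  shell: "{{ docker_bin_dir }}/docker save {{ image_reponame }} | gzip -{{ download_compress }} > {{ image_path_cached }}"

# On each node, on later runs: restore the image without touching a registry.
- name: download_container | Load image into docker
  shell: "{{ docker_bin_dir }}/docker load < {{ image_path_final }}"
```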

* Fixed a FIXME, where the argument was that delegate_to doesn't play nice with omit. That is a correct observation, and the fix is to use default(inventory_hostname) instead of default(omit). See ansible/ansible#26009
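The resulting pattern (shown here trimmed down to its download_file.yml form) delegates to the download delegate only when download_run_once is set; otherwise "delegating" to inventory_hostname is equivalent to not delegating at all:

```yaml
- name: download_file | Download item
  get_url:
    url: "{{ download.url }}"
    dest: "{{ download.dest }}"
  # default(omit) misbehaves here (see ansible/ansible#26009); delegating to
  # inventory_hostname when download_run_once is false is a harmless no-op.
  delegate_to: "{{ download_delegate if download_run_once else inventory_hostname }}"
  run_once: "{{ download_run_once }}"
```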

* Removed "Register docker images info" task from download_container and set_docker_image_facts because it was faulty and unused.

* Removed redundant when: download.{container,enabled,run_once} conditions from {sync,download}_container.yml

* All features of commit d6fd0d2aca by Timoses <timosesu@gmail.com>, merged May 1st 2019, are included in this patch. Not all code was included verbatim, but each feature of that commit was checked to be working in this patch. One notable change: the actual downloading of the kubeadm images was moved to {download,sync}_container, to enable caching.

Note 1: I considered splitting this patch, but most changes that are not directly related to caching are a pleasant by-product of implementing the caching code, so splitting would be impractical.

Note 2: I have my doubts about the usefulness of the upload, download and upgrade tags in the download role. Must they remain, or can they be removed? If anybody knows, please speak up.
Johnny Halfmoon 2019-06-10 20:21:07 +02:00 committed by Kubernetes Prow Robot
parent 14141ec137
commit 23c9071c30
15 changed files with 531 additions and 424 deletions

Vagrantfile vendored

@@ -21,7 +21,7 @@ SUPPORTED_OS = {
   "ubuntu1604" => {box: "generic/ubuntu1604", user: "vagrant"},
   "ubuntu1804" => {box: "generic/ubuntu1804", user: "vagrant"},
   "centos" => {box: "centos/7", user: "vagrant"},
-  "centos-bento" => {box: "bento/centos-7.5", user: "vagrant"},
+  "centos-bento" => {box: "bento/centos-7.6", user: "vagrant"},
   "fedora" => {box: "fedora/28-cloud-base", user: "vagrant"},
   "opensuse" => {box: "opensuse/openSUSE-15.0-x86_64", user: "vagrant"},
   "opensuse-tumbleweed" => {box: "opensuse/openSUSE-Tumbleweed-x86_64", user: "vagrant"},
@@ -180,9 +180,17 @@ Vagrant.configure("2") do |config|
   "flannel_interface": "eth1",
   "kube_network_plugin": $network_plugin,
   "kube_network_plugin_multus": $multi_networking,
-  "docker_keepcache": "1",
-  "download_run_once": "False",
+  "download_run_once": "True",
   "download_localhost": "False",
+  "download_cache_dir": ENV['HOME'] + "/kubespray_cache",
+  # Make kubespray cache even when download_run_once is false
+  "download_force_cache": "True",
+  # Keeping the cache on the nodes can improve provisioning speed while debugging kubespray
+  "download_keep_remote_cache": "False",
+  "docker_keepcache": "1",
+  # These two settings will put kubectl and admin.config in $inventory/artifacts
+  "kubeconfig_localhost": "True",
+  "kubectl_localhost": "True",
   "local_path_provisioner_enabled": "#{$local_path_provisioner_enabled}",
   "local_path_provisioner_claim_root": "#{$local_path_provisioner_claim_root}",
   "ansible_ssh_user": SUPPORTED_OS[$os][:user]


@@ -3,23 +3,22 @@ Downloading binaries and containers
 Kubespray supports several download/upload modes. The default is:

-* Each node downloads binaries and container images on its own, which is
-  ``download_run_once: False``.
+* Each node downloads binaries and container images on its own, which is ``download_run_once: False``.
 * For K8s apps, pull policy is ``k8s_image_pull_policy: IfNotPresent``.
-* For system managed containers, like kubelet or etcd, pull policy is
-  ``download_always_pull: False``, which is pull if only the wanted repo and
-  tag/sha256 digest differs from that the host has.
+* For system managed containers, like kubelet or etcd, pull policy is ``download_always_pull: False``, which pulls only if the wanted repo and tag/sha256 digest differ from what the host has.

 There is also a "pull once, push many" mode as well:

-* Override the ``download_run_once: True`` to download container images and binaries only once
-  then push to cluster nodes in batches. The default delegate node
-  for pushing is the first `kube-master`.
-* If your ansible runner node (aka the admin node) have password-less sudo and
-  docker enabled, you may want to define the ``download_localhost: True``, which
-  makes that node a delegate for pushing while running the deployment with
-  ansible. This may be the case if cluster nodes cannot access each other via ssh
-  or you want to use local docker images and binaries as a cache for multiple clusters.
+* Setting ``download_run_once: True`` will make kubespray download container images and binaries only once and then push them to the cluster nodes. The default download delegate node is the first `kube-master`.
+* Set ``download_localhost: True`` to make localhost the download delegate. This can be useful if cluster nodes cannot access external addresses. Using this requires that docker is installed and running on the ansible master and that the current user is either in the docker group or can do passwordless sudo, to be able to access docker.
+
+NOTE: When download_run_once is true and download_localhost is false, all downloads will be done on the delegate node, including downloads for container images that are not required on that node. As a consequence, the storage required on that node will probably be more than if download_run_once were false, because all images will be loaded into the docker instance on that node, instead of just the images required for that node.
+
+On caching:
+
+* When download_run_once is true, all downloaded files will be cached locally in $download_cache_dir, which defaults to /tmp/kubespray_cache. On subsequent provisioning runs, this local cache will be used to provision the nodes, minimizing bandwidth usage and improving provisioning time. Expect about 800MB of disk space to be used on the ansible node for the cache. Disk space required for the image cache on the kubernetes nodes is as much as is needed for the largest image, which is currently slightly less than 150MB.
+* By default, if download_run_once is false, kubespray will not retrieve the downloaded images and files from the remote node to the local cache, or use that cache to pre-provision those nodes. To force the use of the cache, set download_force_cache to true.
+* By default, cached images that are used to pre-provision the remote nodes will be deleted from the remote nodes after use, to save disk space. Setting download_keep_remote_cache will prevent the files from being deleted. This can be useful while developing kubespray, as it can decrease provisioning times. As a consequence, the required storage for images on the remote nodes will increase from 150MB to about 550MB, which is currently the combined size of all required container images.

 Container images and binary files are described by the vars like ``foo_version``,
 ``foo_download_url``, ``foo_checksum`` for binaries and ``foo_image_repo``,
@@ -36,8 +35,7 @@ dnsmasq_digest_checksum: 7c883354f6ea9876d176fe1d30132515478b2859d6fc0cbf9223ffd
 dnsmasq_image_repo: andyshinn/dnsmasq
 dnsmasq_image_tag: '2.72'
 ```
-The full list of available vars may be found in the download's ansible role defaults.
-Those also allow to specify custom urls and local repositories for binaries and container
+The full list of available vars may be found in the download's ansible role defaults. Those also allow to specify custom urls and local repositories for binaries and container
 images as well. See also the DNS stack docs for the related intranet configuration,
 so the hosts can resolve those urls and repos.

@@ -46,7 +44,7 @@ so the hosts can resolve those urls and repos.
 In case your servers don't have access to internet (for example when deploying on premises with security constraints), you'll have, first, to setup the appropriate proxies/caches/mirrors and/or internal repositories and registries and, then, adapt the following variables to fit your environment before deploying:

 * At least `foo_image_repo` and `foo_download_url` as described before (i.e. in case of use of proxies to registries and binaries repositories, checksums and versions do not necessarily need to be changed).
-NB: Regarding `foo_image_repo`, when using insecure registries/proxies, you will certainly have to append them to the `docker_insecure_registries` variable in group_vars/all/docker.yml
+NOTE: Regarding `foo_image_repo`, when using insecure registries/proxies, you will certainly have to append them to the `docker_insecure_registries` variable in group_vars/all/docker.yml
 * `pyrepo_index` (and optionally `pyrepo_cert`)
 * Depending on the `container_manager`
   * When `container_manager=docker`, `docker_foo_repo_base_url`, `docker_foo_repo_gpgkey`, `dockerproject_bar_repo_base_url` and `dockerproject_bar_repo_gpgkey` (where `foo` is the distribution and `bar` is system package manager)


@@ -1,5 +1,15 @@
 ---
 local_release_dir: /tmp/releases
+download_cache_dir: /tmp/kubespray_cache
+
+# do not delete remote cache files after using them
+# NOTE: Setting this parameter to TRUE is only really useful when developing kubespray
+download_keep_remote_cache: false
+
+# Only useful when download_run_once is false: Locally cached files and images are
+# uploaded to kubernetes nodes. Also, images downloaded on those nodes are copied
+# back to the ansible runner's cache, if they are not yet present.
+download_force_cache: false

 # Used to only evaluate vars from download role
 skip_downloads: false


@@ -0,0 +1,33 @@
---
# NOTE: The brace hell in this block is needed because docker inspect uses Go templates,
# which use double curly braces as delimiters, just like Jinja does. If you want to understand
# the template, just replace all instances of {{ '{{' }} with {{ and {{ '}}' }} with }}.
# It will output something like the following:
# nginx:1.15,gcr.io/google-containers/kube-proxy:v1.14.1,gcr.io/google-containers/kube-proxy@sha256:44af2833c6cbd9a7fc2e9d2f5244a39dfd2e31ad91bf9d4b7d810678db738ee9,gcr.io/google-containers/kube-apiserver:v1.14.1,etc...
- name: check_pull_required | Generate a list of information about the images on a node
shell: >-
{{ docker_bin_dir }}/docker images -q | xargs -r {{ docker_bin_dir }}/docker inspect -f "{{ '{{' }} if .RepoTags {{ '}}' }}{{ '{{' }} (index .RepoTags) {{ '}}' }}{{ '{{' }} end {{ '}}' }}{{ '{{' }} if .RepoDigests {{ '}}' }},{{ '{{' }} (index .RepoDigests) {{ '}}' }}{{ '{{' }} end {{ '}}' }}" | sed -e 's/^ *\[//g' -e 's/\] *$//g' -e 's/ /\n/g' | tr '\n' ','
  delegate_to: "{{ download_delegate if download_run_once else inventory_hostname }}"
no_log: true
register: docker_images
failed_when: false
changed_when: false
check_mode: no
become: "{{ not download_localhost }}"
when: not download_always_pull
- name: check_pull_required | Set pull_required if the desired image is not yet loaded
set_fact:
pull_required: >-
{%- if image_reponame in docker_images.stdout.split(',') %}false{%- else -%}true{%- endif -%}
when: not download_always_pull
- name: check_pull_required | Check that the local digest sha256 corresponds to the given image tag
assert:
that: "{{ download.repo }}:{{ download.tag }} in docker_images.stdout.split(',')"
when:
- not download_always_pull
- not pull_required
- pull_by_digest
tags:
- asserts


@@ -1,40 +1,129 @@
 ---
-- name: container_download | Make download decision if pull is required by tag or sha256
-  include_tasks: set_docker_image_facts.yml
-  when:
-    - download.enabled
-    - download.container
-  tags:
-    - facts
-
-# FIXME(mattymo): In Ansible 2.4 omitting download delegate is broken. Move back
-# to one task in the future.
-- name: container_download | Download containers if pull is required or told to always pull (delegate)
-  command: "{{ docker_bin_dir }}/docker pull {{ pull_args }}"
-  register: pull_task_result
-  until: pull_task_result is succeeded
-  retries: 4
-  delay: "{{ retry_stagger | random + 3 }}"
-  changed_when: not 'up to date' in pull_task_result.stdout
-  when:
-    - download_run_once
-    - download.enabled
-    - download.container
-    - any_pull_required | default(download_always_pull)
-  delegate_to: "{{ download_delegate }}"
-  delegate_facts: yes
-  run_once: yes
-
-- name: container_download | Download containers if pull is required or told to always pull (all nodes)
-  command: "{{ docker_bin_dir }}/docker pull {{ pull_args }}"
-  register: pull_task_result
-  until: pull_task_result is succeeded
-  retries: 4
-  delay: "{{ retry_stagger | random + 3 }}"
-  changed_when: not 'up to date' in pull_task_result.stdout
-  when:
-    - not download_run_once
-    - download.enabled
-    - download.container
-    - pull_required|default(download_always_pull)
-    - group_names | intersect(download.groups) | length
+- block:
+    - name: download_container | Set a few facts
+      import_tasks: set_container_facts.yml
+      run_once: "{{ download_run_once }}"
+      tags:
+        - facts
+
+    - name: download_container | Determine if image is in cache
+      stat:
+        path: "{{ image_path_cached }}"
+      delegate_to: localhost
+      delegate_facts: no
+      register: cache_image
+      changed_when: false
+      become: false
+      when:
+        - download_force_cache
+
+    - name: download_container | Set fact indicating if image is in cache
+      set_fact:
+        image_is_cached: "{{ cache_image.stat.exists | default(false) }}"
+      tags:
+        - facts
+      when:
+        - download_force_cache
+
+    - name: download_container | Upload image to node if it is cached
+      synchronize:
+        src: "{{ image_path_cached }}"
+        dest: "{{ image_path_final }}"
+        use_ssh_args: "{{ has_bastion | default(false) }}"
+        mode: push
+      delegate_facts: no
+      register: upload_image
+      failed_when: not upload_image
+      run_once: "{{ download_run_once }}"
+      until: upload_image is succeeded
+      retries: 4
+      delay: "{{ retry_stagger | random + 3 }}"
+      when:
+        - download_force_cache
+        - image_is_cached
+        - not download_localhost
+        - ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"]
+
+    - name: download_container | Load image into docker
+      shell: "{{ docker_bin_dir }}/docker load < {{ image_path_cached if download_localhost else image_path_final }}"
+      delegate_to: "{{ download_delegate if download_run_once else inventory_hostname }}"
+      run_once: "{{ download_run_once }}"
+      register: container_load_status
+      failed_when: container_load_status | failed
+      become: "{{ user_can_become_root | default(false) or not (download_run_once and download_localhost) }}"
+      when:
+        - download_force_cache
+        - image_is_cached
+        - ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"]
+
+    - name: download_container | Prepare container download
+      import_tasks: check_pull_required.yml
+      run_once: "{{ download_run_once }}"
+      when:
+        - not download_always_pull
+
+    - debug:
+        msg: "XXX Pull required is: {{ pull_required }}"
+
+    # NOTE: Pre-loading docker images will not prevent 'docker pull' from re-downloading the layers in that image
+    # if a pull is forced. This is a known issue with docker. See https://github.com/moby/moby/issues/23684
+    - name: download_container | Download image if required
+      command: "{{ docker_bin_dir }}/docker pull {{ image_reponame }}"
+      delegate_to: "{{ download_delegate if download_run_once else inventory_hostname }}"
+      delegate_facts: yes
+      run_once: "{{ download_run_once }}"
+      register: pull_task_result
+      until: pull_task_result is succeeded
+      delay: "{{ retry_stagger | random + 3 }}"
+      retries: 4
+      become: "{{ user_can_become_root | default(false) or not download_localhost }}"
+      when:
+        - pull_required | default(download_always_pull)
+
+    # NOTE: image_changed is only valid if a pull was needed or forced.
+    - name: download_container | Check if image changed
+      set_fact:
+        image_changed: "{{ true if pull_task_result.stdout is defined and not 'up to date' in pull_task_result.stdout else false }}"
+      run_once: true
+      when:
+        - download_force_cache
+      tags:
+        - facts
+
+    - name: download_container | Save and compress image
+      shell: "{{ docker_bin_dir }}/docker save {{ image_reponame }} | gzip -{{ download_compress }} > {{ image_path_cached if download_localhost else image_path_final }}"
+      delegate_to: "{{ download_delegate if download_run_once else inventory_hostname }}"
+      delegate_facts: no
+      register: container_save_status
+      failed_when: container_save_status.stderr
+      run_once: true
+      become: "{{ user_can_become_root | default(false) or not download_localhost }}"
+      when:
+        - download_force_cache
+        - not image_is_cached or (image_changed | default(true))
+        - ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"]
+
+    - name: download_container | Copy image to ansible host cache
+      synchronize:
+        src: "{{ image_path_final }}"
+        dest: "{{ image_path_cached }}"
+        use_ssh_args: "{{ has_bastion | default(false) }}"
+        mode: pull
+      delegate_facts: no
+      run_once: true
+      when:
+        - download_force_cache
+        - not download_localhost
+        - not image_is_cached or (image_changed | default(true))
+        - ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"]
+
+    - name: download_container | Remove container image from cache
+      file:
+        state: absent
+        path: "{{ image_path_final }}"
+      when:
+        - not download_keep_remote_cache
+        - ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"]
+
+  tags:
+    - download


@@ -1,76 +1,123 @@
 ---
-- name: file_download | Downloading...
-  debug:
-    msg:
-      - "URL: {{ download.url }}"
-      - "Dest: {{ download.dest }}"
-
-- name: file_download | Create dest directory
-  file:
-    path: "{{ download.dest | dirname }}"
-    state: directory
-    recurse: yes
-  when:
-    - download.enabled
-    - download.file
-    - group_names | intersect(download.groups) | length
-
-# As in 'download_container.yml':
-# In Ansible 2.4 omitting download delegate is broken. Move back
-# to one task in the future.
-- name: file_download | Download item (delegate)
-  get_url:
-    url: "{{ download.url }}"
-    dest: "{{ download.dest }}"
-    sha256sum: "{{ download.sha256|default(omit) }}"
-    owner: "{{ download.owner|default(omit) }}"
-    mode: "{{ download.mode|default(omit) }}"
-    validate_certs: "{{ download_validate_certs }}"
-    url_username: "{{ download.username|default(omit) }}"
-    url_password: "{{ download.password|default(omit) }}"
-    force_basic_auth: "{{ download.force_basic_auth|default(omit) }}"
-  register: get_url_result
-  until: "'OK' in get_url_result.msg or 'file already exists' in get_url_result.msg"
-  retries: 4
-  delay: "{{ retry_stagger | default(5) }}"
-  delegate_to: "{{ download_delegate }}"
-  when:
-    - download_run_once
-    - download.enabled
-    - download.file
-    - group_names | intersect(download.groups) | length
-  run_once: yes
-
-- name: file_download | Download item (all)
-  get_url:
-    url: "{{ download.url }}"
-    dest: "{{ download.dest }}"
-    sha256sum: "{{ download.sha256|default(omit) }}"
-    owner: "{{ download.owner|default(omit) }}"
-    mode: "{{ download.mode|default(omit) }}"
-    validate_certs: "{{ download_validate_certs }}"
-    url_username: "{{ download.username|default(omit) }}"
-    url_password: "{{ download.password|default(omit) }}"
-    force_basic_auth: "{{ download.force_basic_auth|default(omit) }}"
-  register: get_url_result
-  until: "'OK' in get_url_result.msg or 'file already exists' in get_url_result.msg"
-  retries: 4
-  delay: "{{ retry_stagger | default(5) }}"
-  when:
-    - not download_run_once
-    - download.enabled
-    - download.file
-    - group_names | intersect(download.groups) | length
-
-- name: file_download | Extract archives
-  unarchive:
-    src: "{{ download.dest }}"
-    dest: "{{ download.dest |dirname }}"
-    owner: "{{ download.owner|default(omit) }}"
-    mode: "{{ download.mode|default(omit) }}"
-    copy: no
-  when:
-    - download.enabled
-    - download.file
-    - download.unarchive|default(False)
-    - group_names | intersect(download.groups) | length
+- block:
+    - name: download_file | Starting download of file
+      debug:
+        msg: "{{ download.url }}"
+      run_once: "{{ download_run_once }}"
+
+    - name: download_file | Set pathname of cached file
+      set_fact:
+        file_path_cached: "{{ download_cache_dir }}/{{ download.dest | regex_replace('^\\/', '') }}"
+      tags:
+        - facts
+
+    - name: download_file | Create dest directory on node
+      file:
+        path: "{{ download.dest | dirname }}"
+        owner: "{{ download.owner | default(omit) }}"
+        mode: 0755
+        state: directory
+        recurse: yes
+
+    - name: download_file | Create local cache directory
+      file:
+        path: "{{ file_path_cached | dirname }}"
+        state: directory
+        recurse: yes
+      delegate_to: localhost
+      delegate_facts: false
+      run_once: true
+      become: false
+      tags:
+        - localhost
+
+    - name: download_file | Check if file is available in cache
+      stat:
+        path: "{{ file_path_cached }}"
+      register: cache_file
+      run_once: true
+      changed_when: false
+      delegate_to: localhost
+      delegate_facts: no
+      become: false
+      when:
+        - download_force_cache
+      tags:
+        - facts
+
+    - name: download_file | Set file_is_cached fact based on previous task
+      set_fact:
+        file_is_cached: "{{ cache_file.stat.exists | default(false) }}"
+      when:
+        - download_force_cache
+      tags:
+        - facts
+
+    - name: download_file | Copy file from cache to nodes, if it is available
+      synchronize:
+        src: "{{ file_path_cached }}"
+        dest: "{{ download.dest }}"
+        use_ssh_args: "{{ has_bastion | default(false) }}"
+        mode: push
+      run_once: "{{ download_run_once }}"
+      register: get_task
+      until: get_task is succeeded
+      delay: "{{ retry_stagger | random + 3 }}"
+      retries: 4
+      when:
+        - download_force_cache
+        - file_is_cached
+        - ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"]
+
+    - name: download_file | Set mode and owner
+      file:
+        path: "{{ download.dest }}"
+        mode: "{{ download.mode | default(omit) }}"
+        owner: "{{ download.owner | default(omit) }}"
+      run_once: "{{ download_run_once }}"
+      when:
+        - download_force_cache
+        - file_is_cached
+        - ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"]
+
+    # This must always be called, to check if the checksum matches. On no-match the file is re-downloaded.
+    - name: download_file | Download item
+      get_url:
+        url: "{{ download.url }}"
+        dest: "{{ file_path_cached if download_localhost else download.dest }}"
+        owner: "{{ omit if download_localhost else (download.owner | default(omit)) }}"
+        mode: "{{ omit if download_localhost else (download.mode | default(omit)) }}"
+        checksum: "{{ 'sha256:' + download.sha256 if download.sha256 else omit }}"
+        validate_certs: "{{ download_validate_certs }}"
+        url_username: "{{ download.username | default(omit) }}"
+        url_password: "{{ download.password | default(omit) }}"
+        force_basic_auth: "{{ download.force_basic_auth | default(omit) }}"
+      delegate_to: "{{ download_delegate if download_run_once else inventory_hostname }}"
+      run_once: "{{ download_run_once }}"
+      register: get_url_result
+      become: "{{ not download_localhost }}"
+      until: "'OK' in get_url_result.msg or 'file already exists' in get_url_result.msg"
+      retries: 4
+      delay: "{{ retry_stagger | default(5) }}"
+
+    - name: "download_file | Extract file archives"
+      include_tasks: "extract_file.yml"
+      when:
+        - not download_localhost
+
+    - name: download_file | Copy file back to ansible host file cache
+      synchronize:
+        src: "{{ download.dest }}"
+        dest: "{{ file_path_cached }}"
+        use_ssh_args: "{{ has_bastion | default(false) }}"
+        mode: pull
+      run_once: true
+      when:
+        - download_force_cache
+        - not file_is_cached or get_url_result.changed
+        - download_delegate == inventory_hostname
+        - not (download_run_once and download_delegate == 'localhost')
+        - ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"]
+
+  tags:
+    - download


@@ -1,35 +0,0 @@
---
- name: Register docker images info
shell: >-
{{ docker_bin_dir }}/docker images -q | xargs {{ docker_bin_dir }}/docker inspect -f "{{ '{{' }} (index .RepoTags 0) {{ '}}' }},{{ '{{' }} (index .RepoDigests 0) {{ '}}' }}" | tr '\n' ','
no_log: true
register: docker_images
failed_when: false
changed_when: false
check_mode: no
when: download_container
- name: container_download | Create dest directory for saved/loaded container images
file:
path: "{{ local_release_dir }}/containers"
state: directory
recurse: yes
mode: 0755
owner: "{{ ansible_ssh_user|default(ansible_user_id) }}"
when: download_container
- name: container_download | create local directory for saved/loaded container images
file:
path: "{{ local_release_dir }}/containers"
state: directory
recurse: yes
delegate_to: localhost
delegate_facts: false
become: false
run_once: true
when:
- download_run_once
- download_delegate == 'localhost'
- download_container
tags:
- localhost


@@ -0,0 +1,10 @@
---
- name: extract_file | Unpacking archive
unarchive:
src: "{{ download.dest }}"
dest: "{{ download.dest | dirname }}"
owner: "{{ download.owner | default(omit) }}"
mode: "{{ download.mode | default(omit) }}"
copy: no
when:
- download.unarchive | default(false)


@@ -1,40 +1,56 @@
 ---
-- include_tasks: download_prep.yml
-  when:
-    - not skip_downloads|default(false)
-
-- include_tasks: kubeadm_images.yml
-  when:
-    - kube_version is version('v1.11.0', '>=')
-    - not skip_downloads|default(false)
-    - not skip_kubeadm_images|default(false)
-    - inventory_hostname in groups['kube-master']
-
-- name: Set kubeadm_images
-  set_fact:
-    kubeadm_images: {}
-  when:
-    - kubeadm_images is not defined
-
-- name: "Download items"
-  include_tasks: "{{ include_file }}"
-  with_dict: "{{ downloads | combine(kubeadm_images) }}"
-  vars:
-    download: "{{ download_defaults | combine(item.value) }}"
-    include_file: "download_{% if download.container %}container{% else %}file{% endif %}.yml"
-  when:
-    - not skip_downloads|default(false)
-    - download.enabled
-    - item.value.enabled
-    - (not (item.value.container|default(False))) or (item.value.container and download_container)
-
-- name: "Sync items"
-  include_tasks: "{{ include_file }}"
-  with_dict: "{{ downloads | combine(kubeadm_images) }}"
-  vars:
-    download: "{{ download_defaults | combine(item.value) }}"
-    include_file: "sync_{% if download.container %}container{% else %}file{% endif %}.yml"
-  when:
-    - not skip_downloads|default(false)
-    - download.enabled
-    - item.value.enabled
-    - download_run_once
-    - group_names | intersect(download.groups) | length
+- name: download | Prepare working directories and variables
+  import_tasks: prep_download.yml
+  when:
+    - not skip_downloads|default(false)
+  tags:
+    - download
+    - upload
+
+- name: download | Get kubeadm binary and list of required images
+  import_tasks: prep_kubeadm_images.yml
+  when:
+    - kube_version is version('v1.11.0', '>=')
+    - not skip_downloads|default(false)
+    - not skip_kubeadm_images|default(false)
+    - inventory_hostname in groups['kube-master']
+  tags:
+    - download
+    - upload
+
+- name: download | Create kubeadm_images variable if it is absent
+  set_fact:
+    kubeadm_images: {}
+  when:
+    - kubeadm_images is not defined
+  tags:
+    - download
+    - upload
+    - facts
+
+- name: download | Download files / images
+  include_tasks: "{{ include_file }}"
+  vars:
+    download: "{{ download_defaults | combine(item.value) }}"
+    include_file: "download_{% if download.container %}container{% else %}file{% endif %}.yml"
+  with_dict: "{{ downloads | combine(kubeadm_images) }}"
+  when:
+    - not skip_downloads | default(false)
+    - item.value.enabled
+    - (not (item.value.container | default(false))) or (item.value.container and download_container)
+    - (download_run_once and inventory_hostname == download_delegate) or (group_names | intersect(download.groups) | length)
+
+- name: download | Sync files / images from ansible host to nodes
+  include_tasks: "{{ include_file }}"
+  vars:
+    download: "{{ download_defaults | combine(item.value) }}"
+    include_file: "sync_{% if download.container %}container{% else %}file{% endif %}.yml"
+  with_dict: "{{ downloads | combine(kubeadm_images) }}"
+  when:
+    - not skip_downloads | default(false)
+    - item.value.enabled
+    - download_run_once
+    - group_names | intersect(download.groups) | length
+    - not (inventory_hostname == download_delegate)

@@ -0,0 +1,78 @@
---
- name: prep_download | Set a few facts
set_fact:
download_force_cache: "{{ true if download_run_once else download_force_cache }}"
tags:
- facts
- name: prep_download | Create staging directory on remote node
file:
path: "{{ local_release_dir }}/images"
state: directory
recurse: yes
mode: 0755
owner: "{{ ansible_ssh_user | default(ansible_user_id) }}"
when:
- ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"]
- name: prep_download | Create local cache for files and images
file:
path: "{{ download_cache_dir }}/images"
state: directory
recurse: yes
mode: 0755
delegate_to: localhost
delegate_facts: no
run_once: true
become: false
tags:
- localhost
- name: prep_download | On localhost, check if passwordless root is possible
command: "true"
delegate_to: localhost
run_once: true
register: test_become
changed_when: false
ignore_errors: true
become: true
when:
- download_localhost
tags:
- localhost
- asserts
- name: prep_download | On localhost, check if user has access to docker without using sudo
shell: "{{ docker_bin_dir }}/docker images"
delegate_to: localhost
run_once: true
register: test_docker
changed_when: false
ignore_errors: true
become: false
when:
- download_localhost
tags:
- localhost
- asserts
- name: prep_download | Parse the outputs of the previous commands
set_fact:
user_in_docker_group: "{{ not test_docker.failed }}"
user_can_become_root: "{{ not test_become.failed }}"
when:
- download_localhost
tags:
- localhost
- asserts
- name: prep_download | Check that local user is in group or can become root
assert:
that: "user_in_docker_group or user_can_become_root"
msg: >-
Error: User is not in docker group and cannot become root. When download_localhost is true, at least one of these two conditions must be met.
when:
- download_localhost
tags:
- localhost
- asserts


@@ -1,28 +1,28 @@
 ---
-- name: kubeadm | Download kubeadm
+- name: prep_kubeadm_images | Download kubeadm binary
   include_tasks: "download_file.yml"
   vars:
     download: "{{ download_defaults | combine(downloads.kubeadm) }}"
   when:
-  - not skip_downloads|default(false)
+  - not skip_downloads | default(false)
   - downloads.kubeadm.enabled
 
-- name: kubeadm | Sync kubeadm
+- name: prep_kubeadm_images | Sync kubeadm binary to nodes
   include_tasks: "sync_file.yml"
   vars:
     download: "{{ download_defaults | combine(downloads.kubeadm) }}"
   when:
-  - not skip_downloads|default(false)
+  - not skip_downloads | default(false)
   - downloads.kubeadm.enabled
   - download_run_once
   - group_names | intersect(download.groups) | length
 
-- name: kubeadm | Create kubeadm config
+- name: prep_kubeadm_images | Create kubeadm config
   template:
     src: "kubeadm-images.yaml.j2"
     dest: "{{ kube_config_dir }}/kubeadm-images.yaml"
 
-- name: kubeadm | Copy kubeadm binary from download dir
+- name: prep_kubeadm_images | Copy kubeadm binary from download dir to system path
   synchronize:
     src: "{{ local_release_dir }}/kubeadm-{{ kubeadm_version }}-{{ image_arch }}"
     dest: "{{ bin_dir }}/kubeadm"
@@ -32,26 +32,21 @@
     group: no
   delegate_to: "{{ inventory_hostname }}"
 
-- name: kubeadm | Set kubeadm binary permissions
+- name: prep_kubeadm_images | Set kubeadm binary permissions
   file:
     path: "{{ bin_dir }}/kubeadm"
     mode: "0755"
     state: file
 
-- name: container_download | download images for kubeadm config images
-  command: "{{ bin_dir }}/kubeadm config images pull --config={{ kube_config_dir }}/kubeadm-images.yaml"
-  when: not download_run_once
-
-- name: container_download | fetch list of kubeadm config images
+- name: prep_kubeadm_images | Generate list of required images
   command: "{{ bin_dir }}/kubeadm config images list --config={{ kube_config_dir }}/kubeadm-images.yaml"
-  register: result
+  register: kubeadm_images_raw
   run_once: true
-  when: download_run_once
   changed_when: false
 
-- name: container_download | extract container names from list of kubeadm config images
+- name: prep_kubeadm_images | Parse list of images
   vars:
-    kubeadm_images_list: "{{ result.stdout_lines }}"
+    kubeadm_images_list: "{{ kubeadm_images_raw.stdout_lines }}"
   set_fact:
     kubeadm_image:
       key: "kubeadm_{{ (item | regex_replace('^(?:.*\\/)*','')).split(':')[0] }}"
@@ -60,15 +55,12 @@
       container: true
       repo: "{{ item.split(':')[0] }}"
       tag: "{{ item.split(':')[1] }}"
-      groups:
-      - k8s-cluster
+      groups: k8s-cluster
   loop: "{{ kubeadm_images_list | flatten(levels=1) }}"
+  register: kubeadm_images_cooked
   run_once: true
-  when: download_run_once
-  register: result_images
 
-- name: container_download | set kubeadm_images
+- name: prep_kubeadm_images | Convert list of images to dict for later use
   set_fact:
-    kubeadm_images: "{{ result_images.results | map(attribute='ansible_facts.kubeadm_image') | list | items2dict }}"
+    kubeadm_images: "{{ kubeadm_images_cooked.results | map(attribute='ansible_facts.kubeadm_image') | list | items2dict }}"
   run_once: true
-  when: download_run_once
@@ -0,0 +1,23 @@
---
- name: set_container_facts | Display the name of the image being processed
  debug:
    msg: "{{ download.repo }}"

- name: set_container_facts | Set if containers should be pulled by digest
  set_fact:
    pull_by_digest: >-
      {%- if download.sha256 is defined and download.sha256 -%}true{%- else -%}false{%- endif -%}

- name: set_container_facts | Define by what name to pull the image
  set_fact:
    image_reponame: >-
      {%- if pull_by_digest %}{{ download.repo }}@sha256:{{ download.sha256 }}{%- else -%}{{ download.repo }}:{{ download.tag }}{%- endif -%}

- name: set_container_facts | Define file name of image
  set_fact:
    image_filename: "{{ image_reponame | regex_replace('/|\0|:', '_') }}.tar"

- name: set_container_facts | Define path of image
  set_fact:
    image_path_cached: "{{ download_cache_dir }}/images/{{ image_filename }}"
    image_path_final: "{{ local_release_dir }}/images/{{ image_filename }}"
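To make the filename sanitisation concrete, the cached and final paths these facts produce can be sketched as follows (Python, illustrative; the directory defaults are assumptions, and the role's regex escapes a couple more characters than shown here):

```python
import re

def image_paths(image_reponame: str,
                download_cache_dir: str = "/tmp/kubernetes_cache",
                local_release_dir: str = "/tmp/releases") -> tuple:
    """Turn an image reference into a filesystem-safe tar name, then
    build the local-cache path and the on-node path from it."""
    image_filename = re.sub(r'[/:]', '_', image_reponame) + ".tar"
    return (f"{download_cache_dir}/images/{image_filename}",
            f"{local_release_dir}/images/{image_filename}")

cached, final = image_paths("gcr.io/google-containers/pause:3.1")
print(cached)  # /tmp/kubernetes_cache/images/gcr.io_google-containers_pause_3.1.tar
```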
@@ -1,50 +0,0 @@
---
- name: Set if containers should be pulled by digest
  set_fact:
    pull_by_digest: >-
      {%- if download.sha256 is defined and download.sha256 -%}true{%- else -%}false{%- endif -%}

- name: Set pull_args
  set_fact:
    pull_args: >-
      {%- if pull_by_digest %}{{ download.repo }}@sha256:{{ download.sha256 }}{%- else -%}{{ download.repo }}:{{ download.tag }}{%- endif -%}

- name: Register docker images info
  shell: >-
    {{ docker_bin_dir }}/docker images -q | xargs -r {{ docker_bin_dir }}/docker inspect -f "{{ '{{' }} if .RepoTags {{ '}}' }}{{ '{{' }} (index .RepoTags) {{ '}}' }}{{ '{{' }} end {{ '}}' }}{{ '{{' }} if .RepoDigests {{ '}}' }},{{ '{{' }} (index .RepoDigests) {{ '}}' }}{{ '{{' }} end {{ '}}' }}" | sed -e 's/^ *\[//g' -e 's/\] *$//g' -e 's/ /\n/g' | tr '\n' ','
  no_log: true
  register: docker_images
  failed_when: false
  changed_when: false
  check_mode: no
  when:
  - not download_always_pull
  - group_names | intersect(download.groups) | length

- name: Set if pull is required per container
  set_fact:
    pull_required: >-
      {%- if pull_args in docker_images.stdout.split(',') %}false{%- else -%}true{%- endif -%}
  when:
  - not download_always_pull
  - group_names | intersect(download.groups) | length

- name: Does any host require container pull?
  vars:
    hosts_pull_required: "{{ hostvars.values() | map(attribute='pull_required') | select('defined') | list }}"
  set_fact:
    any_pull_required: "{{ True in hosts_pull_required }}"
  run_once: true
  changed_when: false
  when: not download_always_pull

- name: Check the local digest sha256 corresponds to the given image tag
  assert:
    that: "{{ download.repo }}:{{ download.tag }} in docker_images.stdout.split(',')"
  when:
  - group_names | intersect(download.groups) | length
  - not download_always_pull
  - not pull_required
  - pull_by_digest
  tags:
  - asserts
@@ -1,141 +1,37 @@
 ---
-- name: container_download | Make download decision if pull is required by tag or sha256
-  include: set_docker_image_facts.yml
-  when:
-  - download.enabled
-  - download.container
-  tags:
-  - facts
-
-- name: container_download | Set file name of container tarballs
-  set_fact:
-    fname: "{{ local_release_dir }}/containers/{{ download.repo|regex_replace('/|\0|:', '_') }}:{{ download.tag|default(download.sha256)|regex_replace('/|\0|:', '_') }}.tar"
-  run_once: true
-  when:
-  - download.enabled
-  - download.container
-  - download_run_once
-  tags:
-  - facts
-
-- name: "container_download | Set default value for 'container_changed' to false"
-  set_fact:
-    container_changed: "{{ pull_required|default(false) }}"
-  when:
-  - download.enabled
-  - download.container
-  - download_run_once
-
-- name: "container_download | Update the 'container_changed' fact"
-  set_fact:
-    container_changed: "{{ pull_required|default(false) or not 'up to date' in pull_task_result.stdout }}"
-  when:
-  - download.enabled
-  - download.container
-  - download_run_once
-  - pull_required|default(download_always_pull)
-  run_once: "{{ download_run_once }}"
-  tags:
-  - facts
-
-- name: container_download | Stat saved container image
-  stat:
-    path: "{{ fname }}"
-  register: img
-  changed_when: false
-  delegate_to: "{{ download_delegate }}"
-  delegate_facts: no
-  become: false
-  run_once: true
-  when:
-  - download.enabled
-  - download.container
-  - download_run_once
-  - any_pull_required | default(download_always_pull)
-  tags:
-  - facts
-
-- name: container_download | save container images
-  shell: "{{ docker_bin_dir }}/docker save {{ pull_args }} | gzip -{{ download_compress }} > {{ fname }}"
-  delegate_to: "{{ download_delegate }}"
-  delegate_facts: no
-  register: saved
-  failed_when: saved.stderr
-  when:
-  - download.enabled
-  - download.container
-  - download_run_once
-  - any_pull_required | default(download_always_pull)
-  - (ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"] or download_delegate == "localhost")
-  - (container_changed or not img.stat.exists)
-
-- name: container_download | create container images directory on ansible host
-  file:
-    state: directory
-    path: "{{ fname | dirname }}"
-  delegate_to: localhost
-  delegate_facts: no
-  run_once: true
-  become: false
-  when:
-  - download.enabled
-  - download.container
-  - download_run_once
-  - any_pull_required | default(download_always_pull)
-  - ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"]
-  - inventory_hostname == download_delegate
-  - download_delegate != "localhost"
-  - saved.changed
-
-- name: container_download | copy container images to ansible host
-  synchronize:
-    src: "{{ fname }}"
-    dest: "{{ fname }}"
-    use_ssh_args: "{{ has_bastion | default(false) }}"
-    mode: pull
-    private_key: "{{ ansible_ssh_private_key_file }}"
-  become: false
-  when:
-  - download.enabled
-  - download.container
-  - download_run_once
-  - ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"]
-  - inventory_hostname == download_delegate
-  - download_delegate != "localhost"
-  - saved.changed
-
-- name: container_download | upload container images to nodes
-  synchronize:
-    src: "{{ fname }}"
-    dest: "{{ fname }}"
-    use_ssh_args: "{{ has_bastion | default(false) }}"
-    mode: push
-  become: true
-  register: get_task
-  until: get_task is succeeded
-  retries: 4
-  delay: "{{ retry_stagger | random + 3 }}"
-  when:
-  - download.enabled
-  - download.container
-  - download_run_once
-  - pull_required|default(download_always_pull)
-  - (ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"] and
-     inventory_hostname != download_delegate or
-     download_delegate == "localhost")
-  tags:
-  - upload
-  - upgrade
-
-- name: container_download | load container images
-  shell: "{{ docker_bin_dir }}/docker load < {{ fname }}"
-  when:
-  - download.enabled
-  - download.container
-  - download_run_once
-  - pull_required|default(download_always_pull)
-  - (ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"] and
-     inventory_hostname != download_delegate or download_delegate == "localhost")
-  tags:
-  - upload
-  - upgrade
+- block:
+  - name: sync_container | Gather information about the current image (how to download, is it cached etc.)
+    import_tasks: set_container_facts.yml
+    tags:
+    - facts
+
+  - name: sync_container | Upload container image to node
+    synchronize:
+      src: "{{ image_path_cached }}"
+      dest: "{{ image_path_final }}"
+      use_ssh_args: "{{ has_bastion | default(false) }}"
+      mode: push
+    delegate_facts: no
+    register: get_task
+    become: true
+    until: get_task is succeeded
+    retries: 4
+    delay: "{{ retry_stagger | random + 3 }}"
+    when:
+    - ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"]
+
+  - name: sync_container | Load container image into docker
+    shell: "{{ docker_bin_dir }}/docker load < {{ image_path_final }}"
+    when:
+    - ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"]
+
+  - name: sync_container | Remove container image from cache
+    file:
+      state: absent
+      path: "{{ image_path_final }}"
+    when:
+    - not download_keep_remote_cache
+    - ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"]
+
+  tags:
+  - upload
@@ -1,38 +1,26 @@
 ---
-- name: file_download | create local download destination directory
-  file:
-    path: "{{ download.dest|dirname }}"
-    state: directory
-    recurse: yes
-    mode: 0755
-  delegate_to: localhost
-  become: false
-  run_once: true
-  when:
-  - download_delegate != "localhost"
-  - download_run_once
-  - download.enabled
-  - download.file
-
-- name: file_download | copy file to ansible host
-  synchronize:
-    src: "{{ download.dest }}"
-    dest: "{{ download.dest }}"
-    use_ssh_args: "{{ has_bastion | default(false) }}"
-    mode: pull
-  run_once: true
-  become: false
-  when:
-  - download.enabled
-  - download.file
-  - download_run_once
-  - ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"]
-  - inventory_hostname == download_delegate
-  - download_delegate != "localhost"
-
-- name: file_download | upload file to nodes
-  synchronize:
-    src: "{{ download.dest }}"
-    dest: "{{ download.dest }}"
-    use_ssh_args: "{{ has_bastion | default(false) }}"
-    mode: push
+- block:
+  - name: sync_file | Starting file sync of file
+    debug:
+      msg: "Starting file sync of file: {{ download.dest }}"
+
+  - name: download_file | Set pathname of cached file
+    set_fact:
+      file_path_cached: "{{ download_cache_dir }}/{{ download.dest | regex_replace('^\\/', '') }}"
+    tags:
+    - facts
+
+  - name: sync_file | Create dest directory on node
+    file:
+      path: "{{ download.dest | dirname }}"
+      owner: "{{ download.owner | default(omit) }}"
+      mode: 0755
+      state: directory
+      recurse: yes
+
+  - name: sync_file | Upload file images to node
+    synchronize:
+      src: "{{ file_path_cached }}"
+      dest: "{{ download.dest }}"
+      use_ssh_args: "{{ has_bastion | default(false) }}"
+      mode: push
@@ -42,12 +30,16 @@
-  retries: 4
-  delay: "{{ retry_stagger | random + 3 }}"
-  when:
-  - download.enabled
-  - download.file
-  - download_run_once
-  - (ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"] and
-     inventory_hostname != download_delegate or
-     download_delegate == "localhost")
-  tags:
-  - upload
-  - upgrade
+    retries: 4
+    delay: "{{ retry_stagger | random + 3 }}"
+    when:
+    - ansible_os_family not in ["CoreOS", "Container Linux by CoreOS"]
+
+  - name: sync_file | Set mode and owner
+    file:
+      path: "{{ download.dest }}"
+      mode: "{{ download.mode | default(omit) }}"
+      owner: "{{ download.owner | default(omit) }}"
+
+  - name: sync_file | Extract file archives
+    include_tasks: "extract_file.yml"
+
+  tags:
+  - upload