Refactor download role (#5697)

* download file

* download containers

* fix push image to nodes

* pull if no image on host

* fix

* improve docker image tag checks.
do not pull already cached images

* rebase: fix merge conflict

* add support for download_run_once when upgrading and scaling the cluster
add some tests with download_run_once

* set default values for temp flags for every download cycle

* add save/load ability for containerd and crio when download_run_once=true

* return redefined image save/load commands to set_docker_image_facts.yml

* move set command to set_container_facts

* use ctr from containerd_bin_dir

* fix order of ctr image export arguments

* temporarily disable download_run_once for containerd and crio
due to https://github.com/containerd/containerd/issues/4075

* remove unused files

* fix strict yaml linter warnings and errors

* refactor logical conditions to pull and cache container images

* remove comment due to lint check

* document role

* remove image_load_on_localhost, because cached images are always loaded into docker on the remote hosts

* remove XXX from debug output

Kubernetes Prow Robot 2020-03-05 07:31:39 -08:00 committed by GitHub
parent 62b418cd16
commit 66408a87ee
21 changed files with 282 additions and 343 deletions

@@ -68,6 +68,11 @@ packet_ubuntu18-flannel-containerd:
extends: .packet
when: manual
packet_ubuntu18-flannel-containerd-once:
stage: deploy-part2
extends: .packet
when: manual
packet_debian9-macvlan-sep:
stage: deploy-part2
extends: .packet
@@ -80,6 +85,13 @@ packet_debian9-calico-upgrade:
variables:
UPGRADE_TEST: graceful
packet_debian9-calico-upgrade-once:
stage: deploy-part2
extends: .packet
when: on_success
variables:
UPGRADE_TEST: graceful
packet_debian10-containerd:
stage: deploy-part2
extends: .packet
@@ -90,6 +102,11 @@ packet_centos7-calico-ha:
extends: .packet
when: manual
packet_centos7-calico-ha-once-localhost:
stage: deploy-part2
extends: .packet
when: manual
packet_centos7-kube-ovn:
stage: deploy-part2
extends: .packet

@@ -13,11 +13,13 @@ There is also a "pull once, push many" mode as well:
NOTE: When `download_run_once` is true and `download_localhost` is false, all downloads will be done on the delegate node, including downloads for container images that are not required on that node. As a consequence, the storage required on that node will probably be more than if download_run_once was false, because all images will be loaded into the docker instance on that node, instead of just the images required for that node.
:warning: [`download_run_once: true` is supported only for `container_manager: docker`](https://github.com/containerd/containerd/issues/4075) :warning:
On caching:
* When `download_run_once` is `True`, all downloaded files will be cached locally in `download_cache_dir`, which defaults to `/tmp/kubespray_cache`. On subsequent provisioning runs, this local cache will be used to provision the nodes, minimizing bandwidth usage and improving provisioning time. Expect about 800MB of disk space to be used on the ansible node for the cache. Disk space required for the image cache on the kubernetes nodes is as much as is needed for the largest image, which is currently slightly less than 150MB.
* By default, if `download_run_once` is false, kubespray will not retrieve the downloaded images and files from the remote node to the local cache, or use that cache to pre-provision those nodes. To force the use of the cache, set `download_force_cache` to `True`.
* By default, cached images that are used to pre-provision the remote nodes will be deleted from the remote nodes after use, to save disk space. Setting download_keep_remote_cache will prevent the files from being deleted. This can be useful while developing kubespray, as it can decrease provisioning times. As a consequence, the required storage for images on the remote nodes will increase from 150MB to about 550MB, which is currently the combined size of all required container images.
* By default, if `download_run_once` is false, kubespray will not retrieve the downloaded images and files from the download delegate node to the local cache, or use that cache to pre-provision those nodes. If you have a full cache with container images and files and you don't need to download anything but want to use the cache, set `download_force_cache` to `True`.
* By default, cached images that are used to pre-provision the remote nodes will be deleted from the remote nodes after use, to save disk space. Setting `download_keep_remote_cache` will prevent the files from being deleted. This can be useful while developing kubespray, as it can decrease provisioning times. As a consequence, the required storage for images on the remote nodes will increase from 150MB to about 550MB, which is currently the combined size of all required container images.
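For reference, here is a minimal sketch of the caching-related settings discussed above, using only variable names that appear in this document; the values and the group_vars path are illustrative assumptions:

```yaml
# group_vars/all/download.yml (hypothetical location) -- illustrative values only
download_run_once: true            # pull once on the delegate, push to all other nodes
download_localhost: false          # set true to use the ansible host itself as the delegate (docker only)
download_force_cache: true         # reuse the local cache instead of re-downloading
download_keep_remote_cache: false  # keep synced files/images on the nodes after loading
download_cache_dir: /tmp/kubespray_cache
```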
Container images and binary files are described by the vars like ``foo_version``,
``foo_download_url``, ``foo_checksum`` for binaries and ``foo_image_repo``,

@@ -476,7 +476,13 @@ dashboard_image_repo: "{{ gcr_image_repo }}/google_containers/kubernetes-dashboa
dashboard_image_tag: "v1.10.1"
image_pull_command: "{{ docker_bin_dir }}/docker pull"
image_info_command: "{{ docker_bin_dir }}/docker images -q | xargs {{ docker_bin_dir }}/docker inspect -f \"{{ '{{' }} if .RepoTags {{ '}}' }}{{ '{{' }} (index .RepoTags 0) {{ '}}' }}{{ '{{' }} end {{ '}}' }}{{ '{{' }} if .RepoDigests {{ '}}' }},{{ '{{' }} (index .RepoDigests 0) {{ '}}' }}{{ '{{' }} end {{ '}}' }}\" | tr '\n' ','"
image_save_command: "{{ docker_bin_dir }}/docker save {{ image_reponame }} | gzip -{{ download_compress }} > {{ image_path_final }}"
image_load_command: "{{ docker_bin_dir }}/docker load < {{ image_path_final }}"
image_info_command: "{{ docker_bin_dir }}/docker images -q | xargs {{ docker_bin_dir }}/docker inspect -f \"{{ '{{' }} if .RepoTags {{ '}}' }}{{ '{{' }} (join .RepoTags \\\",\\\") {{ '}}' }}{{ '{{' }} end {{ '}}' }}{{ '{{' }} if .RepoDigests {{ '}}' }},{{ '{{' }} (join .RepoDigests \\\",\\\") {{ '}}' }}{{ '{{' }} end {{ '}}' }}\" | tr '\n' ','"
image_pull_command_on_localhost: "{{ docker_bin_dir }}/docker pull"
image_save_command_on_localhost: "{{ docker_bin_dir }}/docker save {{ image_reponame }} | gzip -{{ download_compress }} > {{ image_path_cached }}"
image_info_command_on_localhost: "{{ docker_bin_dir }}/docker images"
downloads:
netcheck_server:
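To illustrate what the docker-based templates above expand to, here is a rough rendering with assumed values (`docker_bin_dir: /usr/bin`, `download_compress: 1`, the `nginx:1.15` example image, and a made-up archive path); this is a sketch, not output captured from the role:

```yaml
# Hypothetical rendered values -- binary path, compression level and archive name are assumptions
image_pull_command: "/usr/bin/docker pull"
image_save_command: "/usr/bin/docker save nginx:1.15 | gzip -1 > /tmp/releases/images/nginx_1.15.tar.gz"
image_load_command: "/usr/bin/docker load < /tmp/releases/images/nginx_1.15.tar.gz"
# image_info_command produces one comma-separated line of every RepoTag/RepoDigest known to docker,
# e.g. "nginx:1.15,gcr.io/google-containers/kube-proxy:v1.14.1,..."
```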

@@ -6,19 +6,17 @@
# nginx:1.15,gcr.io/google-containers/kube-proxy:v1.14.1,gcr.io/google-containers/kube-proxy@sha256:44af2833c6cbd9a7fc2e9d2f5244a39dfd2e31ad91bf9d4b7d810678db738ee9,gcr.io/google-containers/kube-apiserver:v1.14.1,etc...
- name: check_pull_required | Generate a list of information about the images on a node
shell: "{{ image_info_command }}"
delegate_to: "{{ download_delegate if download_run_once else inventory_hostname }}"
no_log: true
register: docker_images
failed_when: false
changed_when: false
check_mode: no
become: "{{ not download_localhost }}"
when: not download_always_pull
- name: check_pull_required | Set pull_required if the desired image is not yet loaded
set_fact:
pull_required: >-
{%- if image_reponame in docker_images.stdout.split(',') %}false{%- else -%}true{%- endif -%}
{%- if image_reponame | regex_replace('^docker\.io/(library/)?','') in docker_images.stdout.split(',') %}false{%- else -%}true{%- endif -%}
when: not download_always_pull
- name: check_pull_required | Check that the local digest sha256 corresponds to the given image tag
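As a worked example of the membership test above (assuming the image list shown in the comment at the top of this file), the `docker.io/library` prefix is stripped before comparing against docker's reported tags:

```yaml
# Illustration only -- not part of the role
- name: Example | Normalise the repo name the same way check_pull_required does
  set_fact:
    pull_required: >-
      {%- if 'docker.io/library/nginx:1.15' | regex_replace('^docker\.io/(library/)?','')
          in 'nginx:1.15,gcr.io/google-containers/kube-proxy:v1.14.1'.split(',') %}false{%- else -%}true{%- endif -%}
# Result: pull_required == "false", because the name normalises to nginx:1.15, which is in the list
```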

@@ -1,19 +1,26 @@
---
- name: container_download | Make download decision if pull is required by tag or sha256
include_tasks: set_docker_image_facts.yml
when:
- download.enabled
- download.container
tags:
- facts
- block:
- name: download_container | Set a few facts
import_tasks: set_container_facts.yml
run_once: "{{ download_run_once }}"
- name: set default values for flag variables
set_fact:
image_is_cached: false
image_changed: false
pull_required: "{{ download_always_pull }}"
tags:
- facts
- name: download_container | Set a few facts
import_tasks: set_container_facts.yml
tags:
- facts
- name: download_container | Prepare container download
include_tasks: check_pull_required.yml
when:
- not download_always_pull
- debug:
msg: "Pull {{ image_reponame }} required is: {{ pull_required }}"
- name: download_container | Determine if image is in cache
stat:
path: "{{ image_path_cached }}"
@@ -27,12 +34,58 @@
- name: download_container | Set fact indicating if image is in cache
set_fact:
image_is_cached: "{{ cache_image.stat.exists | default(false) }}"
image_is_cached: "{{ cache_image.stat.exists }}"
tags:
- facts
when:
- download_force_cache
- name: Stop if image not in cache on ansible host when download_force_cache=true
assert:
that: image_is_cached
msg: "Image cache file {{ image_path_cached }} not found for {{ image_reponame }} on localhost"
when:
- download_force_cache
- not download_run_once
- name: download_container | Download image if required
command: "{{ image_pull_command_on_localhost if download_localhost else image_pull_command }} {{ image_reponame }}"
delegate_to: "{{ download_delegate if download_run_once else inventory_hostname }}"
delegate_facts: yes
run_once: "{{ download_run_once }}"
register: pull_task_result
until: pull_task_result is succeeded
delay: "{{ retry_stagger | random + 3 }}"
retries: 4
become: "{{ user_can_become_root | default(false) or not download_localhost }}"
when:
- pull_required
- not image_is_cached
- name: download_container | Save and compress image
shell: "{{ image_save_command_on_localhost if download_localhost else image_save_command }}"
delegate_to: "{{ download_delegate }}"
delegate_facts: no
register: container_save_status
failed_when: container_save_status.stderr
run_once: true
become: "{{ user_can_become_root | default(false) or not download_localhost }}"
when:
- not image_is_cached
- download_run_once
- name: download_container | Copy image to ansible host cache
synchronize:
src: "{{ image_path_final }}"
dest: "{{ image_path_cached }}"
use_ssh_args: "{{ has_bastion | default(false) }}"
mode: pull
when:
- not image_is_cached
- download_run_once
- not download_localhost
- download_delegate == inventory_hostname
- name: download_container | Upload image to node if it is cached
synchronize:
src: "{{ image_path_cached }}"
@@ -42,88 +95,20 @@
delegate_facts: no
register: upload_image
failed_when: not upload_image
run_once: "{{ download_run_once }}"
until: upload_image is succeeded
retries: 4
delay: "{{ retry_stagger | random + 3 }}"
when:
- pull_required
- download_force_cache
- image_is_cached
- not download_localhost
- ansible_os_family not in ["CoreOS", "Coreos", "Container Linux by CoreOS", "Flatcar", "Flatcar Container Linux by Kinvolk"]
- name: download_container | Load image into docker
shell: "{{ docker_bin_dir }}/docker load < {{ image_path_cached if download_localhost else image_path_final }}"
delegate_to: "{{ download_delegate if download_run_once else inventory_hostname }}"
run_once: "{{ download_run_once }}"
shell: "{{ image_load_command }}"
register: container_load_status
failed_when: container_load_status is failed
become: "{{ user_can_become_root | default(false) or not (download_run_once and download_localhost) }}"
when:
- pull_required
- download_force_cache
- image_is_cached
- ansible_os_family not in ["CoreOS", "Coreos", "Container Linux by CoreOS", "Flatcar", "Flatcar Container Linux by Kinvolk"]
- name: download_container | Prepare container download
include_tasks: check_pull_required.yml
run_once: "{{ download_run_once }}"
when:
- not download_always_pull
- debug:
msg: "XXX Pull required is: {{ pull_required }}"
# NOTE: Pre-loading docker images will not prevent 'docker pull' from re-downloading the layers in that image
# if a pull is forced. This is a known issue with docker. See https://github.com/moby/moby/issues/23684
- name: download_container | Download image if required
command: "{{ image_pull_command }} {{ image_reponame }}"
delegate_to: "{{ download_delegate if download_run_once else inventory_hostname }}"
delegate_facts: yes
run_once: "{{ download_run_once }}"
register: pull_task_result
until: pull_task_result is succeeded
delay: "{{ retry_stagger | random + 3 }}"
retries: 4
become: "{{ user_can_become_root | default(false) or not download_localhost }}"
when:
- pull_required | default(download_always_pull)
# NOTE: image_changed is only valid if a pull was needed or forced.
- name: download_container | Check if image changed
set_fact:
image_changed: "{{ true if pull_task_result.stdout is defined and not 'up to date' in pull_task_result.stdout else false }}"
run_once: true
when:
- download_force_cache
tags:
- facts
- name: download_container | Save and compress image
shell: "{{ docker_bin_dir }}/docker save {{ image_reponame }} | gzip -{{ download_compress }} > {{ image_path_cached if download_localhost else image_path_final }}"
delegate_to: "{{ download_delegate if download_run_once else inventory_hostname }}"
delegate_facts: no
register: container_save_status
failed_when: container_save_status.stderr
run_once: true
become: "{{ user_can_become_root | default(false) or not download_localhost }}"
when:
- download_force_cache
- not image_is_cached or (image_changed | default(true))
- ansible_os_family not in ["CoreOS", "Coreos", "Container Linux by CoreOS", "Flatcar", "Flatcar Container Linux by Kinvolk"]
- name: download_container | Copy image to ansible host cache
synchronize:
src: "{{ image_path_final }}"
dest: "{{ image_path_cached }}"
use_ssh_args: "{{ has_bastion | default(false) }}"
mode: pull
delegate_facts: no
when:
- download_force_cache
- not download_localhost
- download_delegate == inventory_hostname
- not image_is_cached or (image_changed | default(true))
- ansible_os_family not in ["CoreOS", "Coreos", "Container Linux by CoreOS", "Flatcar", "Flatcar Container Linux by Kinvolk"]
- name: download_container | Remove container image from cache
file:
@@ -131,7 +116,5 @@
path: "{{ image_path_final }}"
when:
- not download_keep_remote_cache
- ansible_os_family not in ["CoreOS", "Coreos", "Container Linux by CoreOS", "Flatcar", "Flatcar Container Linux by Kinvolk"]
tags:
- download
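To make the two synchronize hops above concrete: with the hypothetical nginx archive, `download_cache_dir` at its documented default of `/tmp/kubespray_cache`, and `local_release_dir` assumed to be `/tmp/releases`, the image file moves roughly like this (file names are made up):

```yaml
# Sketch of the run-once flow for one image
image_path_final: "/tmp/releases/images/nginx_1.15.tar.gz"          # on the delegate and, later, on every node
image_path_cached: "/tmp/kubespray_cache/images/nginx_1.15.tar.gz"  # on the ansible host
# 1. delegate: image_pull_command, then image_save_command -> image_path_final
# 2. ansible host: synchronize mode=pull, image_path_final -> image_path_cached
# 3. every node: synchronize mode=push, image_path_cached -> image_path_final, then image_load_command
# 4. image_path_final is removed from the node unless download_keep_remote_cache is set
```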

@@ -28,30 +28,54 @@
delegate_facts: false
run_once: true
become: false
when:
- download_force_cache
- download_localhost
tags:
- localhost
- name: download_file | Check if file is available in cache
stat:
path: "{{ file_path_cached }}"
register: cache_file
- name: download_file | Create cache directory on download_delegate host
file:
path: "{{ file_path_cached | dirname }}"
state: directory
recurse: yes
delegate_to: "{{ download_delegate }}"
delegate_facts: false
run_once: true
changed_when: false
delegate_to: localhost
delegate_facts: no
become: false
when:
- download_force_cache
tags:
- facts
- not download_localhost
- name: download_file | Set file_is_cached fact based on previous task
set_fact:
file_is_cached: "{{ cache_file.stat.exists | default(false) }}"
# This must always be called, to check if the checksum matches. On no-match the file is re-downloaded.
- name: download_file | Download item
get_url:
url: "{{ download.url }}"
dest: "{{ file_path_cached if download_force_cache else download.dest }}"
owner: "{{ omit if download_localhost else (download.owner | default(omit)) }}"
mode: "{{ omit if download_localhost else (download.mode | default(omit)) }}"
checksum: "{{ 'sha256:' + download.sha256 if download.sha256 else omit }}"
validate_certs: "{{ download_validate_certs }}"
url_username: "{{ download.username | default(omit) }}"
url_password: "{{ download.password | default(omit) }}"
force_basic_auth: "{{ download.force_basic_auth | default(omit) }}"
delegate_to: "{{ download_delegate if download_force_cache else inventory_hostname }}"
run_once: "{{ download_force_cache }}"
register: get_url_result
become: "{{ not download_localhost }}"
until: "'OK' in get_url_result.msg or 'file already exists' in get_url_result.msg"
retries: 4
delay: "{{ retry_stagger | default(5) }}"
- name: download_file | Copy file back to ansible host file cache
synchronize:
src: "{{ file_path_cached }}"
dest: "{{ file_path_cached }}"
use_ssh_args: "{{ has_bastion | default(false) }}"
mode: pull
when:
- download_force_cache
tags:
- facts
- not download_localhost
- download_delegate == inventory_hostname
- name: download_file | Copy file from cache to nodes, if it is available
synchronize:
@@ -59,64 +83,23 @@
dest: "{{ download.dest }}"
use_ssh_args: "{{ has_bastion | default(false) }}"
mode: push
run_once: "{{ download_run_once }}"
register: get_task
until: get_task is succeeded
delay: "{{ retry_stagger | random + 3 }}"
retries: 4
when:
- download_force_cache
- file_is_cached
- ansible_os_family not in ["CoreOS", "Coreos", "Container Linux by CoreOS", "Flatcar", "Flatcar Container Linux by Kinvolk"]
- name: download_file | Set mode and owner
file:
path: "{{ download.dest }}"
mode: "{{ download.mode | default(omit) }}"
owner: "{{ download.owner | default(omit) }}"
run_once: "{{ download_run_once }}"
when:
- download_force_cache
- file_is_cached
- ansible_os_family not in ["CoreOS", "Coreos", "Container Linux by CoreOS", "Flatcar", "Flatcar Container Linux by Kinvolk"]
# This must always be called, to check if the checksum matches. On no-match the file is re-downloaded.
- name: download_file | Download item
get_url:
url: "{{ download.url }}"
dest: "{{ file_path_cached if download_localhost else download.dest }}"
owner: "{{ omit if download_localhost else (download.owner | default(omit)) }}"
mode: "{{ omit if download_localhost else (download.mode | default(omit)) }}"
checksum: "{{ 'sha256:' + download.sha256 if download.sha256 or omit }}"
validate_certs: "{{ download_validate_certs }}"
url_username: "{{ download.username | default(omit) }}"
url_password: "{{ download.password | default(omit) }}"
force_basic_auth: "{{ download.force_basic_auth | default(omit) }}"
delegate_to: "{{ download_delegate if download_run_once else inventory_hostname }}"
run_once: "{{ download_run_once }}"
register: get_url_result
become: "{{ not download_localhost }}"
until: "'OK' in get_url_result.msg or 'file already exists' in get_url_result.msg"
retries: 4
delay: "{{ retry_stagger | default(5) }}"
- name: "download_file | Extract file archives"
include_tasks: "extract_file.yml"
when:
- not download_localhost
- name: download_file | Copy file back to ansible host file cache
synchronize:
src: "{{ download.dest }}"
dest: "{{ file_path_cached }}"
use_ssh_args: "{{ has_bastion | default(false) }}"
mode: pull
when:
- download_force_cache
- not file_is_cached or get_url_result.changed
- download_delegate == inventory_hostname
- not (download_run_once and download_delegate == 'localhost')
- ansible_os_family not in ["CoreOS", "Coreos", "Container Linux by CoreOS", "Flatcar", "Flatcar Container Linux by Kinvolk"]
tags:
- download
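The get_url/synchronize logic above consumes items shaped like the entries of the `downloads` dict; here is a hypothetical file entry using only the fields referenced in these tasks (URL and checksum are placeholders):

```yaml
# Hypothetical download item -- field names from the tasks above, values made up
downloads:
  kubeadm:
    enabled: true
    container: false
    url: "https://example.invalid/kubeadm"  # placeholder URL
    dest: "{{ local_release_dir }}/kubeadm"
    sha256: "0000000000000000000000000000000000000000000000000000000000000000"  # placeholder checksum
    owner: "root"
    mode: "0755"
    groups:
      - k8s-cluster
```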

@@ -33,17 +33,3 @@
- item.value.enabled
- (not (item.value.container | default(false))) or (item.value.container and download_container)
- (download_run_once and inventory_hostname == download_delegate) or (group_names | intersect(download.groups) | length)
- name: download | Sync files / images from ansible host to nodes
include_tasks: "{{ include_file }}"
with_dict: "{{ downloads | combine(kubeadm_images) }}"
vars:
download: "{{ download_defaults | combine(item.value) }}"
include_file: "sync_{% if download.container %}container{% else %}file{% endif %}.yml"
when:
- not skip_downloads | default(false)
- download.enabled
- item.value.enabled
- download_run_once
- group_names | intersect(download.groups) | length
- not (inventory_hostname == download_delegate)

@@ -5,42 +5,17 @@
tags:
- facts
- name: Set image info command for containerd
- name: prep_download | Set image info command for containerd and crio
set_fact:
image_info_command: "{{ bin_dir }}/crictl images --verbose | awk -F ': ' '/RepoTags|RepoDigests/ {print $2}' | tr '\n' ','"
when: container_manager == 'containerd'
image_pull_command: "{{ bin_dir }}/crictl pull"
when: container_manager in ['crio' ,'containerd']
- name: Register docker images info
shell: "{{ image_info_command }}"
no_log: true
register: docker_images
failed_when: false
changed_when: false
check_mode: no
when: download_container
- name: prep_download | Create staging directory on remote node
file:
path: "{{ local_release_dir }}/images"
state: directory
recurse: yes
mode: 0755
owner: "{{ ansible_ssh_user | default(ansible_user_id) }}"
when:
- ansible_os_family not in ["CoreOS", "Coreos", "Container Linux by CoreOS", "Flatcar", "Flatcar Container Linux by Kinvolk"]
- name: prep_download | Create local cache for files and images
file:
path: "{{ download_cache_dir }}/images"
state: directory
recurse: yes
mode: 0755
delegate_to: localhost
delegate_facts: no
run_once: true
become: false
tags:
- localhost
- name: prep_download | Set image info command for containerd and crio on localhost
set_fact:
image_info_command_on_localhost: "{{ bin_dir }}/crictl images --verbose | awk -F ': ' '/RepoTags|RepoDigests/ {print $2}' | tr '\n' ','"
image_pull_command_on_localhost: "{{ bin_dir }}/crictl pull"
when: container_manager_on_localhost in ['crio' ,'containerd']
- name: prep_download | On localhost, check if passwordless root is possible
command: "true"
@@ -57,13 +32,12 @@
- asserts
- name: prep_download | On localhost, check if user has access to docker without using sudo
shell: "{{ docker_bin_dir }}/docker images"
shell: "{{ image_info_command_on_localhost }}"
delegate_to: localhost
run_once: true
register: test_docker
changed_when: false
ignore_errors: true
become: false
when:
- download_localhost
tags:
@@ -90,3 +64,37 @@
tags:
- localhost
- asserts
- name: prep_download | Register docker images info
shell: "{{ image_info_command }}"
no_log: true
register: docker_images
failed_when: false
changed_when: false
check_mode: no
when: download_container
- name: prep_download | Create staging directory on remote node
file:
path: "{{ local_release_dir }}/images"
state: directory
recurse: yes
mode: 0755
owner: "{{ ansible_ssh_user | default(ansible_user_id) }}"
when:
- ansible_os_family not in ["CoreOS", "Coreos", "Container Linux by CoreOS", "Flatcar", "Flatcar Container Linux by Kinvolk"]
- name: prep_download | Create local cache for files and images on control node
file:
path: "{{ download_cache_dir }}/images"
state: directory
recurse: yes
mode: 0755
delegate_to: localhost
delegate_facts: no
run_once: true
become: false
when:
- download_force_cache
tags:
- localhost
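For containerd/CRI-O, the crictl-based commands set above would render roughly as follows, assuming `bin_dir: /usr/local/bin` (an assumption); the info command's output has the same comma-separated shape as the docker variant:

```yaml
# Hypothetical rendered values for container_manager in ['crio', 'containerd']
image_info_command: "/usr/local/bin/crictl images --verbose | awk -F ': ' '/RepoTags|RepoDigests/ {print $2}' | tr '\n' ','"
image_pull_command: "/usr/local/bin/crictl pull"
```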

@@ -7,16 +7,6 @@
- not skip_downloads | default(false)
- downloads.kubeadm.enabled
- name: prep_kubeadm_images | Sync kubeadm binary to nodes
include_tasks: "sync_file.yml"
vars:
download: "{{ download_defaults | combine(downloads.kubeadm) }}"
when:
- not skip_downloads | default(false)
- downloads.kubeadm.enabled
- download_run_once
- group_names | intersect(download.groups) | length
- name: prep_kubeadm_images | Create kubeadm config
template:
src: "kubeadm-images.yaml.j2"

@@ -21,3 +21,14 @@
set_fact:
image_path_cached: "{{ download_cache_dir }}/images/{{ image_filename }}"
image_path_final: "{{ local_release_dir }}/images/{{ image_filename }}"
- name: Set image save/load command for containerd and crio
set_fact:
image_save_command: "{{ containerd_bin_dir }}/ctr -n k8s.io image export {{ image_path_final }} {{ image_reponame }}"
image_load_command: "{{ containerd_bin_dir }}/ctr -n k8s.io image import --base-name {{ download.repo }} {{ image_path_final }}"
when: container_manager in ['crio' ,'containerd']
- name: Set image save/load command for containerd and crio on localhost
set_fact:
image_save_command_on_localhost: "{{ containerd_bin_dir }}/ctr -n k8s.io image export {{ image_path_cached }} {{ image_reponame }}"
when: container_manager_on_localhost in ['crio' ,'containerd']
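A sketch of how the ctr-based save/load commands above would render, assuming `containerd_bin_dir: /usr/bin` and the nginx example image (paths, repo and tag are illustrative; per the commit message this path is temporarily disabled for containerd/crio):

```yaml
# Hypothetical rendering -- only the command shape comes from the tasks above
image_save_command: "/usr/bin/ctr -n k8s.io image export /tmp/releases/images/nginx_1.15.tar nginx:1.15"
image_load_command: "/usr/bin/ctr -n k8s.io image import --base-name nginx /tmp/releases/images/nginx_1.15.tar"
```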

@@ -1,54 +0,0 @@
---
- name: Set if containers should be pulled by digest
set_fact:
pull_by_digest: >-
{%- if download.sha256 is defined and download.sha256 -%}true{%- else -%}false{%- endif -%}
- name: Set pull_args
set_fact:
pull_args: >-
{%- if pull_by_digest %}{{ download.repo }}@sha256:{{ download.sha256 }}{%- else -%}{{ download.repo }}:{{ download.tag }}{%- endif -%}
- name: Set image pull command for containerd
set_fact:
image_pull_command: "{{ bin_dir }}/crictl pull"
when: container_manager in ['crio' ,'containerd']
- name: Register docker images info
shell: "{{ image_info_command }}"
no_log: true
register: docker_images
failed_when: false
changed_when: false
check_mode: no
when:
- not download_always_pull
- group_names | intersect(download.groups) | length
- name: Set if pull is required per container
set_fact:
pull_required: >-
{%- if pull_args in docker_images.stdout.split(',') %}false{%- else -%}true{%- endif -%}
when:
- not download_always_pull
- group_names | intersect(download.groups) | length
- name: Does any host require container pull?
vars:
hosts_pull_required: "{{ hostvars.values() | map(attribute='pull_required') | select('defined') | list }}"
set_fact:
any_pull_required: "{{ True in hosts_pull_required }}"
run_once: true
changed_when: false
when: not download_always_pull
- name: Check the local digest sha256 corresponds to the given image tag
assert:
that: "{{ download.repo }}:{{ download.tag }} in docker_images.stdout.split(',')"
when:
- group_names | intersect(download.groups) | length
- not download_always_pull
- not pull_required
- pull_by_digest
tags:
- asserts

@@ -1,37 +0,0 @@
---
- block:
- name: sync_container | Gather information about the current image (how to download, is it cached etc.)
import_tasks: set_container_facts.yml
tags:
- facts
- name: sync_container | Upload container image to node
synchronize:
src: "{{ image_path_cached }}"
dest: "{{ image_path_final }}"
use_ssh_args: "{{ has_bastion | default(false) }}"
mode: push
delegate_facts: no
register: get_task
become: true
until: get_task is succeeded
retries: 4
delay: "{{ retry_stagger | random + 3 }}"
when:
- ansible_os_family not in ["CoreOS", "Coreos", "Container Linux by CoreOS", "Flatcar", "Flatcar Container Linux by Kinvolk"]
- name: sync_container | Load container image into docker
shell: "{{ docker_bin_dir }}/docker load < {{ image_path_final }}"
when:
- ansible_os_family not in ["CoreOS", "Coreos", "Container Linux by CoreOS", "Flatcar", "Flatcar Container Linux by Kinvolk"]
- name: sync_container | Remove container image from cache
file:
state: absent
path: "{{ image_path_final }}"
when:
- not download_keep_remote_cache
- ansible_os_family not in ["CoreOS", "Coreos", "Container Linux by CoreOS", "Flatcar", "Flatcar Container Linux by Kinvolk"]
tags:
- upload

@@ -1,45 +0,0 @@
---
- block:
- name: sync_file | Starting file sync of file
debug:
msg: "Starting file sync of file: {{ download.dest }}"
- name: download_file | Set pathname of cached file
set_fact:
file_path_cached: "{{ download_cache_dir }}/{{ download.dest | basename }}"
tags:
- facts
- name: sync_file | Create dest directory on node
file:
path: "{{ download.dest | dirname }}"
owner: "{{ download.owner | default(omit) }}"
mode: 0755
state: directory
recurse: yes
- name: sync_file | Upload file images to node
synchronize:
src: "{{ file_path_cached }}"
dest: "{{ download.dest }}"
use_ssh_args: "{{ has_bastion | default(false) }}"
mode: push
become: true
register: get_task
until: get_task is succeeded
retries: 4
delay: "{{ retry_stagger | random + 3 }}"
when:
- ansible_os_family not in ["CoreOS", "Coreos", "Container Linux by CoreOS", "Flatcar", "Flatcar Container Linux by Kinvolk"]
- name: sync_file | Set mode and owner
file:
path: "{{ download.dest }}"
mode: "{{ download.mode | default(omit) }}"
owner: "{{ download.owner | default(omit) }}"
- name: sync_file | Extract file archives
include_tasks: "extract_file.yml"
tags:
- upload

@@ -249,3 +249,21 @@
that: kubeadm_control_plane
msg: "kubeadm etcd mode requires experimental control plane"
when: etcd_kubeadm_enabled
- name: Stop if download_localhost is enabled but download_run_once is not
assert:
that: download_run_once
msg: "download_localhost requires enable download_run_once"
when: download_localhost
- name: Stop if download_localhost is enabled when container_manager not docker
assert:
that: container_manager == 'docker'
msg: "download_run_once support only for docker. See https://github.com/containerd/containerd/issues/4075 for details"
when: download_run_once or download_force_cache
- name: Stop if download_localhost is enabled for CoreOS or Flatcar
assert:
that: ansible_os_family not in ["CoreOS", "Coreos", "Container Linux by CoreOS", "Flatcar", "Flatcar Container Linux by Kinvolk"]
msg: "download_run_once not support for CoreOS or Flatcar"
when: download_run_once or download_force_cache
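Taken together, the new preflight asserts only admit combinations along these lines when run-once or cache forcing is enabled (a sketch; the variable names come from the asserts above, the values are illustrative):

```yaml
# Passes the asserts: docker, run-once enabled, not CoreOS/Flatcar
download_run_once: true
download_localhost: true    # allowed only because download_run_once is true
download_force_cache: true
container_manager: docker   # containerd/crio would trip the second assert
```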

@@ -201,6 +201,9 @@ kube_profiling: false
# Container for runtime
container_manager: docker
# Container on localhost (download images when download_localhost is true)
container_manager_on_localhost: "{{ container_manager }}"
# CRI socket path
cri_socket: >-
{%- if container_manager == 'crio' -%}

@@ -34,6 +34,14 @@
- { role: kubespray-defaults}
- { role: etcd, tags: etcd, etcd_cluster_setup: false }
- name: Download images to ansible host cache via first kube-master node
hosts: kube-master[0]
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults, when: "not skip_downloads and download_run_once and not download_localhost"}
- { role: kubernetes/preinstall, tags: preinstall, when: "not skip_downloads and download_run_once and not download_localhost" }
- { role: download, tags: download, when: "not skip_downloads and download_run_once and not download_localhost" }
- name: Target only workers to get kubelet installed and checking in on any new nodes
hosts: kube-node
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"

@@ -0,0 +1,14 @@
---
# Instance settings
cloud_image: centos-7
mode: ha
# Kubespray settings
kube_network_plugin: calico
download_localhost: true
download_run_once: true
deploy_netchecker: true
dns_min_replicas: 1
typha_enabled: true
calico_backend: kdd
typha_secure: true

@@ -5,7 +5,7 @@ mode: ha
# Kubespray settings
kube_network_plugin: calico
download_localhost: true
download_localhost: false
download_run_once: true
deploy_netchecker: true
dns_min_replicas: 1

@@ -0,0 +1,10 @@
---
# Instance settings
cloud_image: debian-9
mode: default
# Kubespray settings
kube_network_plugin: calico
deploy_netchecker: true
dns_min_replicas: 1
download_run_once: true

@@ -0,0 +1,29 @@
---
# Instance settings
cloud_image: ubuntu-1804
mode: ha
vm_memory: 1600Mi
# Kubespray settings
kubeadm_control_plane: true
kubeadm_certificate_key: 3998c58db6497dd17d909394e62d515368c06ec617710d02edea31c06d741085
kube_proxy_mode: iptables
kube_network_plugin: flannel
helm_enabled: true
kubernetes_audit: true
container_manager: containerd
etcd_events_cluster_enabled: true
local_volume_provisioner_enabled: true
etcd_deployment_type: host
deploy_netchecker: true
dns_min_replicas: 1
kube_encrypt_secret_data: true
ingress_nginx_enabled: true
cert_manager_enabled: true
# Disable as health checks are still unstable and slow to respond.
metrics_server_enabled: false
metrics_server_kubelet_insecure_tls: true
kube_token_auth: true
kube_basic_auth: true
enable_nodelocaldns: false
download_run_once: true

@@ -30,6 +30,15 @@
- { role: kubespray-defaults}
- { role: bootstrap-os, tags: bootstrap-os}
- name: Download images to ansible host cache via first kube-master node
hosts: kube-master[0]
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"
roles:
- { role: kubespray-defaults, when: "not skip_downloads and download_run_once and not download_localhost"}
- { role: kubernetes/preinstall, tags: preinstall, when: "not skip_downloads and download_run_once and not download_localhost" }
- { role: download, tags: download, when: "not skip_downloads and download_run_once and not download_localhost" }
environment: "{{ proxy_env }}"
- name: Prepare nodes for upgrade
hosts: k8s-cluster:etcd:calico-rr
any_errors_fatal: "{{ any_errors_fatal | default(true) }}"