Search
You can find the results of your search below.
Matching pagenames:
Fulltext results:
- universal_multiboot_grub_bios_uefi @linux_faq
- z } menuentry "SystemRescueCd 5.3.1 Live (64bit, cache all files in memory and startX)" { loopback loop
- kubernetes_using_single_node_as_master_and_worker @linux_faq
- ubeadm** are available to us: sudo apt-get update apt-cache madison kubeadm We allow updating **kubernetes
- wiki_backup @linux_faq
- -R -e --exclude=data/tmp/captcha/ --exclude=data/cache/ $dir_to_backup $remote_www_dir; bye;' ftp://$us
- npm_codeartifacts_auth_proxy @devops
- ommand: > /bin/bash -c " apk add --no-cache aws-cli && export CODEARTIFACTS_OWNER=`aws ... imedout_connection on; server_tokens off; # Cache 10G worth of packages for up to 1 month proxy_cache_path /var/lib/nginx/npm levels=1:2 keys_zone=npm:1... e npm.example.com; root /var/www; proxy_cache npm; proxy_cache_key $uri; proxy_cache_lo
- мультизагрузочная_флешка_с_помощью_grub @linux_faq
- z } menuentry "SystemRescueCd 4.6.1 Live (64bit, cache all files in memory and startX)" { loopback loop
- aws_certified_cloud_practitioner @devops
- available via **iSCSI**. It can operate in **Cache mode**, where only the "hot d
- proxmox_storage_optimization @proxmox
- r.com/questions/952016/how-can-i-use-one-lvmcache-cache-pool-lv-for-multiple-origin-lvs Example configura... vdc Create the cache volume on the SSD: lvcreate --type cache-pool -L1G -n cache vgname /dev/vdb Create the data volume: lvcreate -L9G -n data vgname /dev/vdc And now assemble the construct: lvconvert --type cache --cachepool vgname/cache vgname/data Do you wan
- boot_linux_on_amlogic_tv_box @android
- 0000 1 14: product 0000000008000000 1 15: cache 0000000046000000 2 16: data ffffffffffff... 5449479 active slot = 0 wipe_data=successful wipe_cache=successful upgrade_step=2 reboot_mode:::: normal ... ff_protect=echo wipe_data=${wipe_data}; echo wipe_cache=${wipe_cache};if test ${wipe_data} = failed; then run init_display; run storeargs;if mmcinfo ; then run
- armbian_install_xfce_desktop @android
- gstreamer1.0-tools gstreamer1.0-x gtk-update-icon-cache gtk2-engines gtk2-engines-murrine gtk2-engines-pi... gstreamer1.0-tools gstreamer1.0-x gtk-update-icon-cache gtk2-engines gtk2-engines-murrine gtk2-engines-pi
- kde_on_ubuntu_server_minimal @linux_faq
- = /bin/bash krb5_store_password_if_offline = True cache_credentials = True krb5_realm = VOXIMPLANT.LOCAL
- zfs_zil_l2arc @linux_faq
- e> zpool add pve-data log /dev/pve/zfs-zil zpool add pve-data cache /dev/pve/zfs-l2arc swapon -a </code>
- deploy_elk_using_helm @devops
- [kubernetes] kubernetes/watcher.go:184 cache sync done 2021-10-24T11:11:24.788Z DEBUG [kubernetes] kubernetes/watcher.go:184 cache sync done 2021-10-24T11:11:24.889Z DEBUG [kubernetes] kubernetes/watcher.go:184 cache sync done ... </code> However, during normal opera... AR_PREFIX=/var/nginx RUN set -ex \ && apk --no-cache add \ libgcc \ libpcrecpp \ libpcre16
- transparent_squid_proxy_with_ssl_bumping @linux_faq
- resh_pattern . 0 20% 4320 cache_dir aufs /var/spool/squid 20000 49 256 maximum_object_size 61440 KB minimum_object_size 3 KB cache_swap_low 90 cache_swap_high 95 maximum_object_size_in_memory 512 KB memory_replacement_policy lru logfile_rotate 4 cache_peer 10.77.70.7 parent 3128 0 no-query default lo
- настройка_аутентификации_kerberos_для_сервиса_systemd @linux_faq
- onfig of squid you need to specify the **parent proxy**: cache_peer parent-proxy.domain.local parent 3128 0 no-q... the proxy does not support **digest** and **netdb-exchange** cache_peer srv-proxy.rdleas.ru parent 3128 0 default no... : debug_options ALL,2 And then read the log. Logs of interaction with the **parent proxy** will be in **/var/log/squid/cache.log**
- kde_kioexec_cache_is_a_folder_but_file_was_expected @linux_faq
- smb://server/share I get the error: <code>.../.cache/kioexec/krun/346_0/ is a folder, but a file was e