- Easy: a backup or restore takes a single command; no complex configuration or incantations are needed.
- Fast: backup and restore speed is limited only by network bandwidth and disk throughput; the tool itself should not be the bottleneck.
- Verifiable: users can inspect and retrieve every file in any backup point at any time, confirming the backup is intact.
- Secure: backups are stored with strong encryption, so even if the remote repository leaks to an attacker, the real plaintext data cannot be recovered.
- Efficient: file-based incremental backup with automatic deduplication saves storage space.
restic generate --bash-completion restic.bash_completion
source restic.bash_completion
export AWS_ACCESS_KEY_ID=93E0...2MV4K
export AWS_SECRET_ACCESS_KEY=wulg1N...rXgGR
# restic -r s3:http://int32bit-minio-server/local-backup init
enter password for new repository:
enter password again:
created restic repository 94c40f5300 at s3:http://int32bit-minio-server/local-backup
# restic -r s3:http://int32bit-minio-server/local-backup stats
enter password for repository:
repository 94c40f53 opened successfully, password is correct
scanning...
Stats in restore-size mode:
Snapshots processed: 0
Total Size: 0 B
export RESTIC_PASSWORD=*********
export RESTIC_REPOSITORY=s3:http://int32bit-minio-server/local-backup
# restic stats
scanning...
Stats in restore-size mode:
Snapshots processed: 0
Total Size: 0 B
# mkdir -p backup-demo
# echo "hello" >backup-demo/hello.txt
# restic backup backup-demo/
no parent snapshot found, will read all files
Files: 1 new, 0 changed, 0 unmodified
Dirs: 1 new, 0 changed, 0 unmodified
Added to the repo: 754 B
processed 1 files, 6 B in 0:00
snapshot 55572d0c saved
# echo "new_file" >backup-demo/new_file.txt
# echo "helloworld!" >backup-demo/hello.txt
# restic backup backup-demo/
using parent snapshot 55572d0c
Files: 1 new, 1 changed, 0 unmodified
Dirs: 0 new, 1 changed, 0 unmodified
Added to the repo: 1.107 KiB
processed 2 files, 21 B in 0:00
snapshot f7d5b7c5 saved
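The "using parent snapshot" shortcut can be modelled simply: record (size, mtime) per file in each snapshot, and on the next run only re-read files whose metadata changed. A hedged sketch of this change detection (restic's real index also tracks inodes and content IDs):

```python
import os
import tempfile

def scan(root):
    """Map relative path -> (size, mtime), like a snapshot's file index."""
    index = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            p = os.path.join(dirpath, name)
            st = os.stat(p)
            index[os.path.relpath(p, root)] = (st.st_size, st.st_mtime_ns)
    return index

def diff(parent, current):
    """Classify files against the parent snapshot's index."""
    new = [p for p in current if p not in parent]
    changed = [p for p in current if p in parent and parent[p] != current[p]]
    unmodified = [p for p in current if p in parent and parent[p] == current[p]]
    return new, changed, unmodified

root = tempfile.mkdtemp()
with open(os.path.join(root, "hello.txt"), "w") as f:
    f.write("hello\n")
parent = scan(root)  # first backup: no parent snapshot, read everything

with open(os.path.join(root, "new_file.txt"), "w") as f:
    f.write("new_file\n")
with open(os.path.join(root, "hello.txt"), "w") as f:
    f.write("helloworld!\n")
new, changed, unmodified = diff(parent, scan(root))
print(new, changed, unmodified)  # ['new_file.txt'] ['hello.txt'] []
```

This matches the transcript above: one new file, one changed file, nothing unmodified.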
# restic snapshots
ID Time Host Paths
--------------------------------------------------------------------
55572d0c 2021-10-11 14:09:11 int32bit-test-1 /root/backup-demo
f7d5b7c5 2021-10-11 14:12:39 int32bit-test-1 /root/backup-demo
--------------------------------------------------------------------
# restic diff 55572d0c f7d5b7c5
comparing snapshot 55572d0c to f7d5b7c5:
M /backup-demo/hello.txt
+ /backup-demo/new_file.txt
Files: 1 new, 0 removed, 1 changed
Dirs: 0 new, 0 removed
Others: 0 new, 0 removed
Data Blobs: 2 new, 1 removed
Tree Blobs: 2 new, 2 removed
Added: 1.107 KiB
Removed: 754 B
# restic ls f18cccc5
snapshot f18cccc5:
/backup-demo
/backup-demo/hello.txt
/backup-demo/hello2.txt
/backup-demo/new_file.txt
# restic ls -l f18cccc5
snapshot f18cccc5:
drwxr-xr-x 0 0 0 2021-10-11 14:40:30 /backup-demo
-rw-r--r-- 0 0 12 2021-10-11 14:12:20 /backup-demo/hello.txt
-rw-r--r-- 0 0 7 2021-10-11 14:40:30 /backup-demo/hello2.txt
-rw-r--r-- 0 0 9 2021-10-11 14:11:38 /backup-demo/new_file.txt
# restic find 'hello*'
Found matching entries in snapshot f18cccc5 from 2021-10-11 14:40:36
/backup-demo/hello.txt
/backup-demo/hello2.txt
Found matching entries in snapshot 55572d0c from 2021-10-11 14:09:11
/backup-demo/hello.txt
Found matching entries in snapshot 7728a603 from 2021-10-11 14:23:47
/backup-demo/hello.txt
Found matching entries in snapshot f7d5b7c5 from 2021-10-11 14:12:39
/backup-demo/hello.txt
# restic dump f18cccc5 /backup-demo/hello2.txt
hello2
# restic mount /mnt
Now serving the repository at /mnt
When finished, quit with Ctrl-c or umount the mountpoint.
# mount | grep /mnt
restic on /mnt type fuse (ro,nosuid,nodev,relatime,user_id=0,group_id=0)
# cat /mnt/snapshots/latest/backup-demo/hello2.txt
hello2
# umount /mnt
# restic restore f18cccc5 -t /tmp/restore_data
restoring <Snapshot f18cccc5 of [/root/backup-demo] at 2021-10-11 14:40:36.459899498 +0800 CST by root@k8s-master-1> to /tmp/restore_data
# find /tmp/restore_data/
/tmp/restore_data/
/tmp/restore_data/backup-demo
/tmp/restore_data/backup-demo/new_file.txt
/tmp/restore_data/backup-demo/hello.txt
/tmp/restore_data/backup-demo/hello2.txt
restic dump f18cccc5 /backup-demo/hello2.txt >hello2.txt
# restic forget --keep-last=3 --dry-run
Applying Policy: keep 3 latest snapshots
keep 3 snapshots:
ID Time Host Tags Reasons Paths
--------------------------------------------------------------------------------------------------------
7728a603 2021-10-11 14:23:47 int32bit-test-1 last snapshot /root/backup-demo
f18cccc5 2021-10-11 14:40:36 int32bit-test-1 last snapshot /root/backup-demo
56e7b24f 2021-10-11 15:37:50 int32bit-test-1 app_name=test last snapshot /root/backup-demo
--------------------------------------------------------------------------------------------------------
3 snapshots
remove 2 snapshots:
ID Time Host Tags Paths
--------------------------------------------------------------------------------------
55572d0c 2021-10-11 14:09:11 int32bit-test-1 /root/backup-demo
f7d5b7c5 2021-10-11 14:12:39 int32bit-test-1 /root/backup-demo
--------------------------------------------------------------------------------------
2 snapshots
keep 1 snapshots:
ID Time Host Tags Reasons Paths
--------------------------------------------------------------------------------------------
112668f0 2021-10-11 15:28:22 int32bit-test-1 last snapshot /recover
--------------------------------------------------------------------------------------------
1 snapshots
Would have removed the following snapshots:
{55572d0c f7d5b7c5}
# restic forget --dry-run --keep-hourly 2
Applying Policy: keep 2 hourly snapshots
keep 2 snapshots:
ID Time Host Tags Reasons Paths
----------------------------------------------------------------------------------------------------------
56e7b24f 2021-10-11 15:37:50 int32bit-test-1 app_name=test hourly snapshot /root/backup-demo
f949e14b 2021-10-11 16:11:44 int32bit-test-1 hourly snapshot /root/backup-demo
----------------------------------------------------------------------------------------------------------
2 snapshots
remove 5 snapshots:
ID Time Host Tags Paths
--------------------------------------------------------------------------------------
55572d0c 2021-10-11 14:09:11 int32bit-test-1 /root/backup-demo
f7d5b7c5 2021-10-11 14:12:39 int32bit-test-1 /root/backup-demo
7728a603 2021-10-11 14:23:47 int32bit-test-1 /root/backup-demo
f18cccc5 2021-10-11 14:40:36 int32bit-test-1 /root/backup-demo
ab268923 2021-10-11 16:01:04 int32bit-test-1 /root/backup-demo
--------------------------------------------------------------------------------------
5 snapshots
Would have removed the following snapshots:
{55572d0c 7728a603 ab268923 f18cccc5 f7d5b7c5}
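The `--keep-hourly` selection in the dry-run above can be modelled as bucketing (a simplified sketch, not restic's actual code): walk snapshots newest-first, keep the first snapshot seen in each of the N most recent distinct hours, and mark everything else for removal.

```python
from datetime import datetime

def keep_hourly(snapshots, n):
    """snapshots: list of (id, datetime). Returns (keep, remove) id lists.

    Simplified model of `restic forget --keep-hourly N`: newest-first,
    keep one snapshot per distinct hour, at most n hours.
    """
    keep, remove, seen_hours = [], [], set()
    for sid, ts in sorted(snapshots, key=lambda s: s[1], reverse=True):
        bucket = ts.strftime("%Y%m%d%H")
        if bucket not in seen_hours and len(seen_hours) < n:
            seen_hours.add(bucket)
            keep.append(sid)
        else:
            remove.append(sid)
    return keep, remove

# The snapshots from the transcript above
snaps = [
    ("55572d0c", datetime(2021, 10, 11, 14, 9, 11)),
    ("f7d5b7c5", datetime(2021, 10, 11, 14, 12, 39)),
    ("7728a603", datetime(2021, 10, 11, 14, 23, 47)),
    ("f18cccc5", datetime(2021, 10, 11, 14, 40, 36)),
    ("56e7b24f", datetime(2021, 10, 11, 15, 37, 50)),
    ("f949e14b", datetime(2021, 10, 11, 16, 11, 44)),
]
keep, remove = keep_hourly(snaps, 2)
print(keep)    # ['f949e14b', '56e7b24f']
print(remove)  # ['f18cccc5', '7728a603', 'f7d5b7c5', '55572d0c']
```

The kept set matches the dry-run output: the newest snapshot in each of the two most recent hours survives.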
./velero install \
    --provider aws \
    --plugins xxx/velero-plugin-for-aws:v1.0.0 \
    --bucket velero \
    --secret-file ./aws-iam-creds \
    --backup-location-config region=test,s3Url=http://192.168.0.1,s3ForcePathStyle="true" \
    --snapshot-location-config region=test \
    --image xxx/velero:v1.6.3 \
    --features=EnableCSI \
    --use-restic \
    --dry-run -o yaml
- The --plugins and --image flags point to an image registry and only need to be set when using a private registry.
- The --use-restic flag enables backing up PV volume data with restic.
- Early Kubernetes volumes did not support snapshots, so backing up PV volumes required a backend-specific storage plugin. Since v1.12, CSI has introduced Snapshot support, which can be leveraged for backups; enable it with --features=EnableCSI. In this mode the underlying storage must support snapshots, and the snapshot CRDs plus a VolumeSnapshotClass (analogous to a StorageClass) must be configured.
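For illustration, a minimal VolumeSnapshotClass might look like the sketch below. The `rbd.csi.ceph.com` driver and the class name are assumptions chosen to match the Ceph RBD StorageClass used later; substitute your own CSI driver.

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: ceph-rbd-snapclass
  labels:
    # Velero selects the VolumeSnapshotClass carrying this label for the driver
    velero.io/csi-volumesnapshot-class: "true"
driver: rbd.csi.ceph.com
deletionPolicy: Retain
```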
apiVersion: v1
kind: ConfigMap
metadata:
  name: restic-config
  namespace: velero
  labels:
    velero.io/plugin-config: ""
    velero.io/restic: RestoreItemAction
data:
  image: xxx/velero-restic-restore-helper:v1.6.3
# nginx-app-demo.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-app
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo
  namespace: nginx-app
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-rbd-sata
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: nginx-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        backup.velero.io/backup-volumes: mypvc
    spec:
      containers:
        - image: nginx
          name: nginx
          volumeMounts:
            - name: mypvc
              mountPath: /usr/share/nginx/html
      volumes:
        - name: mypvc
          persistentVolumeClaim:
            claimName: pvc-demo
            readOnly: false
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: nginx-app
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
- A PVC is declared and mounted into the Nginx Pod at /usr/share/nginx/html.
- The Pod carries the annotation backup.velero.io/backup-volumes: mypvc to mark which Volume needs backing up. Not every Volume has to be backed up; in production, set a backup policy matching the importance of the data. For this reason, enabling --default-volumes-to-restic, which backs up every Volume by default, is not recommended.
# kubectl exec -t -i nginx-86f99c968-sj8ds -- /bin/bash
cd /usr/share/nginx/html/
echo "HelloWorld" >index.html
echo "hello1" >hello1.html
echo "hello2" >hello2.html
velero backup create nginx-backup-1 --include-namespaces nginx-app
# velero describe backups nginx-backup-1
Name: nginx-backup-1
Namespace: velero
Labels: velero.io/storage-location=default
Phase: Completed
Namespaces:
  Included:  nginx-app
Storage Location: default
Velero-Native Snapshot PVs: auto
TTL: 720h0m0s
Backup Format Version: 1.1.0
Started: 2021-12-18 09:35:35 +0800 CST
Completed: 2021-12-18 09:35:47 +0800 CST
Expiration: 2022-01-17 09:35:35 +0800 CST
Total items to be backed up: 22
Items backed up: 22
Restic Backups:
  Completed:  1
- A Phase of Completed means the backup finished; the record includes the backup start and completion times.
- The number of resources to back up and the number already backed up.
- The number of Volumes backed up (Restic Backups).
# aws s3 ls velero/backups/nginx-backup-1/
2021-12-18 09:35:47 29 nginx-backup-1-csi-volumesnapshotcontents.json.gz
2021-12-18 09:35:47 29 nginx-backup-1-csi-volumesnapshots.json.gz
2021-12-18 09:35:47 4730 nginx-backup-1-logs.gz
2021-12-18 09:35:47 936 nginx-backup-1-podvolumebackups.json.gz
2021-12-18 09:35:47 372 nginx-backup-1-resource-list.json.gz
2021-12-18 09:35:47 29 nginx-backup-1-volumesnapshots.json.gz
2021-12-18 09:35:47 10391 nginx-backup-1.tar.gz
2021-12-18 09:35:47 2171 velero-backup.json
# velero backup download nginx-backup-1
Backup nginx-backup-1 has been successfully downloaded to /tmp/nginx-backup-1-data.tar.gz
# mkdir -p nginx-backup-1
# tar xvzf nginx-backup-1-data.tar.gz -C nginx-backup-1/
# ls -l nginx-backup-1/resources/
total 48
drwxr-xr-x 4 root root 4096 Dec 18 09:46 deployments.apps
drwxr-xr-x 4 root root 4096 Dec 18 09:46 endpoints
drwxr-xr-x 4 root root 4096 Dec 18 09:46 endpointslices.discovery.k8s.io
drwxr-xr-x 4 root root 4096 Dec 18 09:46 events
drwxr-xr-x 4 root root 4096 Dec 18 09:46 namespaces
drwxr-xr-x 4 root root 4096 Dec 18 09:46 persistentvolumeclaims
drwxr-xr-x 4 root root 4096 Dec 18 09:46 persistentvolumes
drwxr-xr-x 4 root root 4096 Dec 18 09:46 pods
drwxr-xr-x 4 root root 4096 Dec 18 09:46 replicasets.apps
drwxr-xr-x 4 root root 4096 Dec 18 09:46 secrets
drwxr-xr-x 4 root root 4096 Dec 18 09:46 serviceaccounts
drwxr-xr-x 4 root root 4096 Dec 18 09:46 services
# kubectl get secrets velero-restic-credentials \
    -o jsonpath='{.data.repository-password}' | base64 -d
# restic -r s3:http://192.168.0.1/velero/restic/nginx-app snapshots
14fc2081 2021-12-18 09:35:45 ... # output truncated
# restic -r s3:http://192.168.0.1/velero/restic/nginx-app ls 14fc2081
snapshot 14fc2081:
/hello1.html
/hello2.html
/index.html
# restic -r s3:http://192.168.0.1/velero/restic/nginx-app dump 14fc2081 /hello2.html
hello2
kubectl delete -f nginx-app-demo.yaml
# kubectl get all -n nginx-app
No resources found in nginx-app namespace.
# kubectl get ns nginx-app
Error from server (NotFound): namespaces "nginx-app" not found
# velero restore create --from-backup nginx-backup-1
Restore request "nginx-backup-1-20211218102506" submitted successfully.
# velero restore get
NAME BACKUP STATUS
nginx-backup-1-20211218102506 nginx-backup-1 InProgress
# velero restore get
NAME BACKUP STATUS
nginx-backup-1-20211218102506 nginx-backup-1 Completed
# kubectl get pod -n nginx-app
NAME READY STATUS RESTARTS AGE
nginx-86f99c968-8zh6m 1/1 Running 0 103s
# kubectl exec -t -i -n nginx-app nginx-86f99c968-8zh6m -- ls /usr/share/nginx/html/
hello1.html hello2.html index.html lost+found
# kubectl get svc -n nginx-app
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx ClusterIP 10.106.140.195 <none> 80/TCP 2m37s
# curl 10.106.140.195
HelloWorld
apiVersion: v1
kind: Pod
metadata:
  annotations:
    backup.velero.io/backup-volumes: mypvc
  labels:
    app: nginx
    pod-template-hash: 86f99c968
    velero.io/backup-name: nginx-backup-1
    velero.io/restore-name: nginx-backup-1-20211218102506
  name: nginx-86f99c968-8zh6m
  namespace: nginx-app
spec:
  containers:
    - image: nginx
      name: nginx
      volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: mypvc
  initContainers:
    - args:
        - ead72033-f495-4223-9358-6f97c920e9ae
      command:
        - /velero-restic-restore-helper
      env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
      image: velero-restic-restore-helper:v1.6.3
      imagePullPolicy: IfNotPresent
      name: restic-wait
      volumeMounts:
        - mountPath: /restores/mypvc
          name: mypvc
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: rbd-pvc-demo
- The Pod gains Velero backup- and restore-related labels.
- An initContainer is injected that restores the volume data via velero-restic-restore-helper, which is essentially a wrapper around the restic command.
- The backup schedule, in crontab syntax.
- The backup retention period, specified via ttl; the default is 30 days.
- The backup scope: a namespace, or specific resources selected by label.
# Create a backup every 6 hours.
velero create schedule NAME --schedule="0 */6 * * *"
# Create a backup every 6 hours with the @every notation.
velero create schedule NAME --schedule="@every 6h"
# Create a daily backup of the web namespace.
velero create schedule NAME --schedule="@every 24h" --include-namespaces web
# Create a weekly backup, each living for 90 days (2160 hours).
velero create schedule NAME --schedule="@every 168h" --ttl 2160h0m0s
- MinIO is an open source object store serving as the backup storage backend for Velero/restic; in production, switch to an enterprise object storage system.
- The remote storage is an off-site system, for example an off-site tape library, NBU, or a cross-region object store.
- All Kubernetes resources, including Pods, Deployments, ConfigMaps, Secrets, and PV volume data, are backed up to object storage via Velero.
- minio-sync replicates the data in real time to the remote off-site storage system.
ETCDCTL_API=3 etcdctl --endpoints $ENDPOINT snapshot save snapshotdb
#!/bin/sh
# bootstrap.sh
export ETCDCTL_API=3

# Find the current etcd leader endpoint (field 5 of `endpoint status` is IS LEADER)
MASTER_ENDPOINT=$(etcdctl --endpoints=$ETCD_ENDPOINTS \
    --cacert=/etc/ssl/etcd/ca.crt \
    --cert=/etc/ssl/etcd/etcd.crt \
    --key=/etc/ssl/etcd/etcd.key \
    endpoint status \
    | awk -F ',' '{printf("%s %s\n", $1, $5)}' \
    | tr -s ' ' | awk '/true/{print $1}')
echo "etcd master endpoint is ${MASTER_ENDPOINT}"

# Take a snapshot from the leader and upload it to S3
BACKUP_FILE=etcd-backup-$(date +%Y%m%d%H%M%S).db
etcdctl --endpoints=$MASTER_ENDPOINT \
    --cacert=/etc/ssl/etcd/ca.crt \
    --cert=/etc/ssl/etcd/etcd.crt \
    --key=/etc/ssl/etcd/etcd.key \
    snapshot save $BACKUP_FILE
aws --endpoint $S3_ENDPOINT s3 cp $BACKUP_FILE s3://$BUCKET_NAME

# Prune: keep only the most recent $KEEP_LAST_BACKUP_COUNT backups
for f in $(aws --endpoint $S3_ENDPOINT \
    s3 ls $BUCKET_NAME | head -n "-${KEEP_LAST_BACKUP_COUNT}" \
    | awk '{print $4}'); do
    aws --endpoint $S3_ENDPOINT s3 rm s3://$BUCKET_NAME/$f
done
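The pruning loop relies on `aws s3 ls` returning keys in lexical order, which the timestamped file names guarantee; `head -n "-N"` then drops everything except the newest N. The same keep-last logic as a hedged Python sketch:

```python
def prune_keep_last(keys, keep):
    """Return the keys to delete, keeping the `keep` lexically-newest ones.

    Works because etcd-backup-%Y%m%d%H%M%S.db names sort chronologically.
    """
    ordered = sorted(keys)
    if keep <= 0:
        return ordered          # keep nothing: delete everything
    return ordered[:-keep] if keep < len(ordered) else []

keys = [
    "etcd-backup-20211216000001.db",
    "etcd-backup-20211217000001.db",
    "etcd-backup-20211218000001.db",
]
print(prune_keep_last(keys, 2))  # ['etcd-backup-20211216000001.db']
```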
FROM python:alpine
ARG ETCD_VERSION=v3.4.3
RUN apk add --update --no-cache ca-certificates tzdata openssl
RUN wget https://github.com/etcd-io/etcd/releases/download/${ETCD_VERSION}/etcd-${ETCD_VERSION}-linux-amd64.tar.gz \
    && tar xzf etcd-${ETCD_VERSION}-linux-amd64.tar.gz \
    && mv etcd-${ETCD_VERSION}-linux-amd64/etcdctl /usr/local/bin/etcdctl \
    && rm -rf etcd-${ETCD_VERSION}-linux-amd64*
RUN pip3 install awscli
ENV ETCDCTL_API=3
ADD bootstrap.sh /
RUN chmod +x /bootstrap.sh
CMD ["/bootstrap.sh"]
#!/bin/bash
kubectl create secret generic etcd-tls -o yaml \
    --from-file /etc/kubernetes/pki/etcd/ca.crt \
    --from-file /etc/kubernetes/pki/etcd/server.crt \
    --from-file /etc/kubernetes/pki/etcd/server.key \
    | sed 's/server/etcd/g'
kubectl create secret generic s3-credentials \
    -o yaml --from-file ~/.aws/credentials
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: etcd-backup
  namespace: etcd-backup
spec:
  jobTemplate:
    metadata:
      name: etcd-backup
    spec:
      template:
        spec:
          containers:
            - image: etcd-backup:v3.4.3
              imagePullPolicy: IfNotPresent
              name: etcd-backup
              volumeMounts:
                - name: s3-credentials
                  mountPath: /root/.aws
                - name: etcd-tls
                  mountPath: /etc/ssl/etcd
                - name: localtime
                  mountPath: /etc/localtime
                  readOnly: true
              env:
                - name: ETCD_ENDPOINTS
                  value: "192.168.1.1:2379,192.168.1.2:2379,192.168.1.3:2379"
                - name: BUCKET_NAME
                  value: etcd-backup
                - name: S3_ENDPOINT
                  value: "http://192.168.1.53"
                - name: KEEP_LAST_BACKUP_COUNT
                  value: "7"
          volumes:
            - name: s3-credentials
              secret:
                secretName: s3-credentials
            - name: etcd-tls
              secret:
                secretName: etcd-tls
            - name: localtime
              hostPath:
                path: /etc/localtime
          restartPolicy: OnFailure
  schedule: '0 0 * * *'
- https://restic.readthedocs.io/en/latest/010_introduction.html
- https://github.com/restic/restic/releases/tag/v0.12.1
- https://velero.io/docs/v1.7/
- https://stash.run/
- https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#backing-up-an-etcd-cluster