FreeBSD Z File System (ZFS)
Work-in-progress document -- 2018.03.16
official site: https://www.freebsd.org/doc/handbook/zfs.html
ZFS tuning : https://www.freebsd.org/doc/handbook/zfs-advanced.html
ZFS features and terminology: https://www.freebsd.org/doc/handbook/zfs-term.html#zfs-term-vdev
Test environment: KVM guest running FreeBSD 11, with 3 VirtIO disks.
ZFS tuning will be tested separately.
This post focuses only on the ZFS file system on FreeBSD 11.
Three disks were attached to the VM for ZFS testing.
ZFS Configuration
To use ZFS, edit /boot/loader.conf.
root@bsd11:~ # vi /boot/loader.conf
vfs.zfs.min_auto_ashift=12
zfs_load="YES"
To start the ZFS service, add zfs_enable to rc.conf.
root@bsd11:~ # sysrc zfs_enable=YES
zfs_enable: NO -> YES
root@bsd11:~ #
Reboot the system.
root@bsd11:~ # init 6
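A quick check after the reboot (not part of the original session) confirms that the ZFS module is loaded and that the loader tunable took effect:

# kldstat | grep zfs
# sysctl vfs.zfs.min_auto_ashift

The first command should list zfs.ko, and the second should report the value 12 set in /boot/loader.conf.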
Checking the disks
Because the disks are attached via VirtIO under KVM, they are recognized as /dev/vtbdX rather than /dev/adaX.
root@bsd11:~ # ls -al /dev/vtb*
crw-r----- 1 root operator 0x3d Mar 13 20:30 /dev/vtbd0
crw-r----- 1 root operator 0x48 Mar 13 20:30 /dev/vtbd1
crw-r----- 1 root operator 0x49 Mar 13 20:30 /dev/vtbd2
root@bsd11:~ #
Adding a single-disk pool
root@bsd11:~ # zpool create test /dev/vtbd0
root@bsd11:~ # df -h
Filesystem      Size    Used   Avail Capacity  Mounted on
/dev/ada0s1a     18G    7.0G    9.9G    41%    /
devfs           1.0K    1.0K      0B   100%    /dev
test             19G     23K     19G     0%    /test
root@bsd11:~ #
Checking the mount
The test pool created with zpool create is mounted directly on a directory.
Looking at it, no special ZFS features are involved: zpool create simply creates the test pool and mounts it on the /test directory.
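If a different mount point is preferred, it can be set at creation time or afterwards. This is an optional sketch; the /data path is only an example and is not used in this post:

# zpool create -m /data test /dev/vtbd0
# zfs set mountpoint=/data test

The first form sets the mount point when the pool is created; the second changes it on an existing pool.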
root@bsd11:/test # mount
/dev/ada0s1a on / (ufs, local, journaled soft-updates)
devfs on /dev (devfs, local, multilabel)
test on /test (zfs, local, nfsv4acls)
Example: ZFS compressed file system
root@bsd11:~ # zfs create test/compressed
root@bsd11:~ # zfs set compression=gzip test/compressed
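To see how effective the compression is, the compressratio property can be checked; this is an optional verification step, not part of the original session:

# zfs get compression,compressratio test/compressed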
Disabling ZFS compression
root@bsd11:~ # zfs set compression=off test/compressed
Checking the directories
root@bsd11:~ # df -h
Filesystem         Size    Used   Avail Capacity  Mounted on
/dev/ada0s1a        18G    7.0G    9.9G    41%    /
devfs              1.0K    1.0K      0B   100%    /dev
test                19G     23K     19G     0%    /test
test/compressed     19G     23K     19G     0%    /test/compressed
root@bsd11:~ #
Create a data file system and keep two copies of each data block
root@bsd11:~ # zfs create test/data
root@bsd11:~ # zfs set copies=2 test/data
The available space is the same for every file system in the test pool.
root@bsd11:~ # df -h
Filesystem         Size    Used   Avail Capacity  Mounted on
/dev/ada0s1a        18G    7.0G    9.9G    41%    /
devfs              1.0K    1.0K      0B   100%    /dev
test                19G     23K     19G     0%    /test
test/compressed     19G     23K     19G     0%    /test/compressed
test/data           19G     23K     19G     0%    /test/data
root@bsd11:~ #
Destroying the ZFS pool
root@bsd11:~ # zfs destroy test/compressed
root@bsd11:~ # zfs destroy test/data
root@bsd11:~ # zpool destroy test
RAID-Z
RAID-Z can be used to avoid data loss caused by disk failure; a RAID-Z configuration requires at least three disks.
Sun™ recommends that the number of devices used in a RAID-Z configuration be between three and nine. For environments requiring a single pool consisting of 10 disks or more, consider breaking it up into smaller RAID-Z groups. If only two disks are available and redundancy is a requirement, consider using a ZFS mirror. Refer to zpool(8) for more details.
In short, using between three and nine disks per RAID-Z group is recommended.
Checking the disks
root@bsd11:~ # ls -al /dev/vtb*
crw-r----- 1 root operator 0x3d Mar 13 20:30 /dev/vtbd0
crw-r----- 1 root operator 0x48 Mar 13 20:30 /dev/vtbd1
crw-r----- 1 root operator 0x49 Mar 13 20:30 /dev/vtbd2
root@bsd11:~ #
RAID-Z configuration
Create the storage pool with the zpool create command.
root@bsd11:~ # zpool create storage raidz vtbd0 vtbd1 vtbd2
Creating the home file system
root@bsd11:~ # zfs create storage/home
Keep two copies of the data and enable gzip compression
root@bsd11:~ # zfs set copies=2 storage/home
root@bsd11:~ # zfs set compression=gzip storage/home
Migrate the existing /home directory (user home directories).
root@bsd11:~ # cp -rp /home/* /storage/home
root@bsd11:~ # rm -rf /home /usr/home
root@bsd11:~ # ln -s /storage/home /home
root@bsd11:~ # ln -s /storage/home /usr/home
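As an alternative to the symlinks above (an assumption, not the approach used in this post), the dataset's mount point could be set directly so that it is mounted at /usr/home:

# zfs set mountpoint=/usr/home storage/home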
Test an SSH login as an existing user.
The login succeeds.
[root@test ~]# ssh test@bsd11 Last login: Sat Mar 3 21:49:33 2018 FreeBSD 11.1-RELEASE (GENERIC) #0 r321309: Fri Jul 21 02:08:28 UTC 2017 Welcome to FreeBSD! Release Notes, Errata: https://www.FreeBSD.org/releases/ Security Advisories: https://www.FreeBSD.org/security/ FreeBSD Handbook: https://www.FreeBSD.org/handbook/ FreeBSD FAQ: https://www.FreeBSD.org/faq/ Questions List: https://lists.FreeBSD.org/mailman/listinfo/freebsd-questions/ FreeBSD Forums: https://forums.FreeBSD.org/ Documents installed with the system are in the /usr/local/share/doc/freebsd/ directory, or can be installed later with: pkg install en-freebsd-doc For other languages, replace "en" with a language code like de or fr. Show the version of FreeBSD installed: freebsd-version ; uname -a Please include that output and any error messages when posting questions. Introduction to manual pages: man man FreeBSD directory layout: man hier Edit /etc/motd to change this login announcement. You can change the video mode on all consoles by adding something like the following to /etc/rc.conf: allscreens="80x30" You can use "vidcontrol -i mode | grep T" for a list of supported text modes. -- Konstantinos Konstantinidis <kkonstan@duth.gr> $ ls -al total 22 drwxr-xr-x 3 test wheel 11 Mar 3 23:42 . drwxr-xr-x 3 root wheel 3 Mar 13 21:04 .. -rw-r--r-- 1 test wheel 1055 Mar 3 21:43 .cshrc -rw-r--r-- 1 test wheel 254 Mar 3 21:43 .login -rw-r--r-- 1 test wheel 163 Mar 3 21:43 .login_conf -rw------- 1 test wheel 379 Mar 3 21:43 .mail_aliases -rw-r--r-- 1 test wheel 336 Mar 3 21:43 .mailrc -rw-r--r-- 1 test wheel 802 Mar 3 21:43 .profile -rw------- 1 test wheel 281 Mar 3 21:43 .rhosts -rw-r--r-- 1 test wheel 849 Mar 3 21:43 .shrc drwxr-xr-x 2 test wheel 3 Mar 3 23:42 public_html $ whoami test
Creating a ZFS snapshot
root@bsd11:~ # zfs snapshot storage/home@2018-03-13
Testing ZFS snapshot rollback
Delete public_html for the test.
root@bsd11:~ # ls -al /home/test/ |grep -i public_html
drwxr-xr-x  2 test  wheel  3 Mar  3 23:42 public_html
root@bsd11:~ #
root@bsd11:~ # rm -rf /home/test/public_html/
root@bsd11:~ # ls -al /home/test | grep -i publ
Test the ZFS rollback.
Use zfs list -t snapshot to confirm the snapshot, then perform the rollback.
root@bsd11:~ # zfs list -t snapshot
NAME                      USED  AVAIL  REFER  MOUNTPOINT
storage/home@2018-03-13  32.0K      -  57.3K  -
root@bsd11:~ #
root@bsd11:~ # zfs rollback storage/home@2018-03-13
root@bsd11:~ # ls -al /home/test/ |grep -i public_html
drwxr-xr-x  2 test  wheel  3 Mar  3 23:42 public_html
root@bsd11:~ #
Removing a ZFS snapshot
Remove the snapshot and confirm it is gone.
root@bsd11:~ # zfs destroy storage/home@2018-03-13
root@bsd11:~ # zfs list -t snapshot
no datasets available
root@bsd11:~ #
ZFS recovery
Check all RAID-Z pools.
root@bsd11:~ # zpool status -x
all pools are healthy
root@bsd11:~ #
All pools are online and healthy.
Checking the disks
root@bsd11:~ # zpool status pool: storage state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM storage ONLINE 0 0 0 raidz1-0 ONLINE 0 0 0 vtbd0 ONLINE 0 0 0 vtbd1 ONLINE 0 0 0 vtbd2 ONLINE 0 0 0 errors: No known data errors root@bsd11:~ #
When a pool is unhealthy, the output looks like the following.
pool: storage state: DEGRADED status: One or more devices has been taken offline by the administrator. Sufficient replicas exist for the pool to continue functioning in a degraded state. action: Online the device using 'zpool online' or replace the device with 'zpool replace'. scrub: none requested config: NAME STATE READ WRITE CKSUM storage DEGRADED 0 0 0 raidz1 DEGRADED 0 0 0 da0 ONLINE 0 0 0 da1 OFFLINE 0 0 0 da2 ONLINE 0 0 0 errors: No known data errors
Change the state of the vtbd0 disk to offline.
Even if a disk has actually failed due to a hardware problem, a message like the one below is printed; shut down the system, replace the disk, and then replace it into the storage pool.
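The offline command itself is not captured in the transcript; based on the status output that follows, it would have been:

# zpool offline storage vtbd0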
root@bsd11:~ # zpool status pool: storage state: DEGRADED status: One or more devices has been taken offline by the administrator. Sufficient replicas exist for the pool to continue functioning in a degraded state. action: Online the device using 'zpool online' or replace the device with 'zpool replace'. scan: none requested config: NAME STATE READ WRITE CKSUM storage DEGRADED 0 0 0 raidz1-0 DEGRADED 0 0 0 4767353646844092173 OFFLINE 0 0 0 was /dev/vtbd0 vtbd1 ONLINE 0 0 0 vtbd2 ONLINE 0 0 0 errors: No known data errors root@bsd11:~ #
The replace command has the following form:
zpool replace $zfs_pool_name $disk_name
root@bsd11:~ # zpool replace storage vtbd0
invalid vdev specification
use '-f' to override the following errors:
/dev/vtbd0 is part of active pool 'storage'
root@bsd11:~ #
If the disk that is already attached is going to be reused, simply bring it back with zpool online.
Use zpool replace storage only when a separate disk has been attached as a replacement.
root@bsd11:~ # zpool replace storage vtbd0 invalid vdev specification use '-f' to override the following errors: /dev/vtbd0 is part of active pool 'storage' root@bsd11:~ # zpool online storage vtbd0 root@bsd11:~ # zpool status pool: storage state: ONLINE scan: resilvered 17.5K in 0h0m with 0 errors on Tue Mar 13 22:16:57 2018 config: NAME STATE READ WRITE CKSUM storage ONLINE 0 0 0 raidz1-0 ONLINE 0 0 0 vtbd0 ONLINE 0 0 0 vtbd1 ONLINE 0 0 0 vtbd2 ONLINE 0 0 0 errors: No known data errors root@bsd11:~ #
Testing with a replacement disk
root@bsd11:~ # zpool status pool: storage state: DEGRADED status: One or more devices has been taken offline by the administrator. Sufficient replicas exist for the pool to continue functioning in a degraded state. action: Online the device using 'zpool online' or replace the device with 'zpool replace'. scan: resilvered 17.5K in 0h0m with 0 errors on Tue Mar 13 22:16:57 2018 config: NAME STATE READ WRITE CKSUM storage DEGRADED 0 0 0 raidz1-0 DEGRADED 0 0 0 4767353646844092173 OFFLINE 0 0 0 was /dev/vtbd0 vtbd1 ONLINE 0 0 0 vtbd2 ONLINE 0 0 0 errors: No known data errors root@bsd11:~ #
After shutting down the VM, a 10 GB volume was added.
root@bsd11:~ # ls -al /dev/vtbd*
crw-r----- 1 root operator 0x3d Mar 13 22:20 /dev/vtbd0
crw-r----- 1 root operator 0x48 Mar 13 22:20 /dev/vtbd1
crw-r----- 1 root operator 0x49 Mar 13 22:20 /dev/vtbd2
crw-r----- 1 root operator 0x4a Mar 13 22:20 /dev/vtbd3
root@bsd11:~ #
Zpool Replace
Use the form zpool replace $pool_name $old_disk $new_disk.
root@bsd11:~ # zpool replace storage vtbd0 vtbd3 root@bsd11:~ # zpool status pool: storage state: ONLINE scan: resilvered 178K in 0h0m with 0 errors on Tue Mar 13 22:29:36 2018 config: NAME STATE READ WRITE CKSUM storage ONLINE 0 0 0 raidz1-0 ONLINE 0 0 0 vtbd3 ONLINE 0 0 0 vtbd1 ONLINE 0 0 0 vtbd2 ONLINE 0 0 0 errors: No known data errors root@bsd11:~ #
Data Verification
Checksums can be disabled, but it is not recommended! Checksums take very little storage space and provide data integrity. Many ZFS features will not work properly with checksums disabled. There is no noticeable performance gain from disabling these checksums.
Summary:
ZFS uses checksums for data integrity, and the feature can be disabled.
The performance difference is negligible, so it is best not to disable it.
Checksum verification is called scrubbing; the following command verifies the integrity of the ZFS pool.
root@bsd11:~ # zpool scrub storage
Before the scrub
scan: resilvered 178K in 0h0m with 0 errors on Tue Mar 13 22:29:36 2018
root@bsd11:~ # zpool status pool: storage state: ONLINE scan: resilvered 178K in 0h0m with 0 errors on Tue Mar 13 22:29:36 2018 config: NAME STATE READ WRITE CKSUM storage ONLINE 0 0 0 raidz1-0 ONLINE 0 0 0 vtbd3 ONLINE 0 0 0 vtbd1 ONLINE 0 0 0 vtbd2 ONLINE 0 0 0 errors: No known data errors root@bsd11:~ #
After the scrub
scan: scrub repaired 0 in 0h0m with 0 errors on Tue Mar 13 22:35:57 2018
root@bsd11:~ # zpool status storage pool: storage state: ONLINE scan: scrub repaired 0 in 0h0m with 0 errors on Tue Mar 13 22:35:57 2018 config: NAME STATE READ WRITE CKSUM storage ONLINE 0 0 0 raidz1-0 ONLINE 0 0 0 vtbd3 ONLINE 0 0 0 vtbd1 ONLINE 0 0 0 vtbd2 ONLINE 0 0 0 errors: No known data errors root@bsd11:~ #
How long a scrub takes depends on the amount of stored data; with a large amount of data it will take a long time. Only one scrub can run on a pool at a time.
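If a running scrub needs to be cancelled, for example because it is hurting performance, it can be stopped with the -s flag (an optional step, not part of the original session):

# zpool scrub -s storage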
Zpool administration
https://www.freebsd.org/doc/handbook/zfs-zpool.html
ZFS administration is split between two utilities. The zpool utility controls the operation of the pool
and handles adding, removing, replacing, and managing disks.
The zfs utility handles creating, destroying, and managing datasets, both file systems and volumes.
Creating and destroying pools
Create a mirror pool.
root@bsd11:~ # zpool create testpool mirror /dev/vtbd0 /dev/vtbd1
invalid vdev specification
use '-f' to override the following errors:
/dev/vtbd0 is part of potentially active pool 'storage'
root@bsd11:~ #
Creation fails because the disks were previously used in the storage pool.
Clear the old ZFS labels with zpool labelclear.
root@bsd11:~ # zpool labelclear -f /dev/vtbd0
root@bsd11:~ # zpool labelclear -f /dev/vtbd1
root@bsd11:~ # zpool labelclear -f /dev/vtbd2
root@bsd11:~ # zpool labelclear -f /dev/vtbd3
Create testpool as a mirror with zpool create.
root@bsd11:~ # zpool create testpool mirror /dev/vtbd0 /dev/vtbd1 root@bsd11:~ # zpool status pool: testpool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM testpool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 vtbd0 ONLINE 0 0 0 vtbd1 ONLINE 0 0 0 errors: No known data errors root@bsd11:~ #
To create multiple vdevs, use the form below.
For ZFS terminology, see the "ZFS features and terminology" link at the top of this post.
root@bsd11:~ # zpool create testpool mirror /dev/vtbd0 /dev/vtbd1 mirror /dev/vtbd2 /dev/vtbd3 root@bsd11:~ # zpool status pool: testpool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM testpool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 vtbd0 ONLINE 0 0 0 vtbd1 ONLINE 0 0 0 mirror-1 ONLINE 0 0 0 vtbd2 ONLINE 0 0 0 vtbd3 ONLINE 0 0 0 errors: No known data errors root@bsd11:~ #
Creating a RAID-Z2 pool
root@bsd11:~ # zpool create testpool raidz2 /dev/vtbd0p1 /dev/vtbd0p2 /dev/vtbd0p3 /dev/vtbd0p4 /dev/vtbd0p5 /dev/vtbd0p6 root@bsd11:~ # zpool status pool: testpool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM testpool ONLINE 0 0 0 raidz2-0 ONLINE 0 0 0 vtbd0p1 ONLINE 0 0 0 vtbd0p2 ONLINE 0 0 0 vtbd0p3 ONLINE 0 0 0 vtbd0p4 ONLINE 0 0 0 vtbd0p5 ONLINE 0 0 0 vtbd0p6 ONLINE 0 0 0 errors: No known data errors root@bsd11:~ #
Adding and removing devices
Use zpool attach to add a disk to an existing vdev, or zpool add to add a new vdev to the pool.
A single disk has no redundancy, so if corruption is detected the data cannot be recovered.
Adding a disk to a vdev with zpool attach creates a mirror, which improves both redundancy and read performance.
root@bsd11:~ # zpool create testpool /dev/vtbd0p1 root@bsd11:~ # zpool status pool: testpool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM testpool ONLINE 0 0 0 vtbd0p1 ONLINE 0 0 0 errors: No known data errors root@bsd11:~ #
Build a mirror with the zpool attach command.
root@bsd11:~ # zpool attach testpool vtbd0p1 vtbd0p2 root@bsd11:~ # zpool status pool: testpool state: ONLINE scan: resilvered 78.5K in 0h0m with 0 errors on Thu Mar 15 21:22:36 2018 config: NAME STATE READ WRITE CKSUM testpool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 vtbd0p1 ONLINE 0 0 0 vtbd0p2 ONLINE 0 0 0 errors: No known data errors root@bsd11:~ #
For this test the VM was installed on ZFS, and two identical 40 GB disks were attached:
40 GB for the OS and 40 GB for testing.
Check the size of ada0p3 on the existing disk.
root@bsd11:~ # gpart list Geom name: ada0 modified: false state: OK fwheads: 16 fwsectors: 63 last: 83886039 first: 40 entries: 152 scheme: GPT Providers: 1. Name: ada0p1 Mediasize: 524288 (512K) Sectorsize: 512 Stripesize: 0 Stripeoffset: 20480 Mode: r0w0e0 rawuuid: edce08d3-2127-11e8-a62a-8fd7aec5b81f rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f label: gptboot0 length: 524288 offset: 20480 type: freebsd-boot index: 1 end: 1063 start: 40 2. Name: ada0p2 Mediasize: 2147483648 (2.0G) Sectorsize: 512 Stripesize: 0 Stripeoffset: 1048576 Mode: r1w1e0 rawuuid: edd96866-2127-11e8-a62a-8fd7aec5b81f rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b label: swap0 length: 2147483648 offset: 1048576 type: freebsd-swap index: 2 end: 4196351 start: 2048 3. Name: ada0p3 Mediasize: 40800092160 (38G) Sectorsize: 512 Stripesize: 0 Stripeoffset: 2148532224 Mode: r1w1e1 rawuuid: ede26582-2127-11e8-a62a-8fd7aec5b81f rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b label: zfs0 length: 40800092160 offset: 2148532224 type: freebsd-zfs index: 3 end: 83884031 start: 4196352 Consumers: 1. Name: ada0 Mediasize: 42949672960 (40G) Sectorsize: 512 Mode: r2w2e3
Partition the new disk with matching sizes.
root@bsd11:~ # gpart create -s GPT ada1 ada1 created root@bsd11:~ # gpart add -t freebsd-zfs -s 512K ada1 ada1p1 added root@bsd11:~ # gpart add -t freebsd-zfs -s 2G ada1 ada1p2 added root@bsd11:~ # gpart add -t freebsd-zfs ada1 ada1p3 added root@bsd11:~ # root@bsd11:~ # gpart show => 40 83886000 ada0 GPT (40G) 40 1024 1 freebsd-boot (512K) 1064 984 - free - (492K) 2048 4194304 2 freebsd-swap (2.0G) 4196352 79687680 3 freebsd-zfs (38G) 83884032 2008 - free - (1.0M) => 40 83886000 ada1 GPT (40G) 40 1024 1 freebsd-zfs (512K) 1064 4194304 2 freebsd-zfs (2.0G) 4195368 79690672 3 freebsd-zfs (38G) root@bsd11:~ #
Attach ada1p3 to the zroot vdev with zpool attach.
root@bsd11:~ # zpool status pool: zroot state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM zroot ONLINE 0 0 0 ada0p3 ONLINE 0 0 0 errors: No known data errors root@bsd11:~ # zpool attach zroot ada0p3 ada1p3 Make sure to wait until resilver is done before rebooting. If you boot from pool 'zroot', you may need to update boot code on newly attached disk 'ada1p3'. Assuming you use GPT partitioning and 'da0' is your new boot disk you may use the following command: gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0 root@bsd11:~ #
Make the new disk bootable with gpart.
root@bsd11:~ # gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1 partcode written to ada1p1 bootcode written to ada1 root@bsd11:~ # zpool status pool: zroot state: ONLINE status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state. action: Wait for the resilver to complete. scan: resilver in progress since Thu Mar 15 22:35:38 2018 4.70G scanned out of 6.65G at 50.2M/s, 0h0m to go 4.70G resilvered, 70.75% done config: NAME STATE READ WRITE CKSUM zroot ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 ada0p3 ONLINE 0 0 0 ada1p3 ONLINE 0 0 0 (resilvering) errors: No known data errors root@bsd11:~ #
Resilvering mirrors the contents of ada0p3 onto ada1p3 and takes some time
(depending on the amount of data).
Status after completion
root@bsd11:~ # zpool status pool: zroot state: ONLINE scan: resilvered 6.65G in 0h2m with 0 errors on Thu Mar 15 22:37:50 2018 config: NAME STATE READ WRITE CKSUM zroot ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 ada0p3 ONLINE 0 0 0 ada1p3 ONLINE 0 0 0 errors: No known data errors root@bsd11:~ #
Create testpool from the vtbd0 and vtbd1 disks.
root@bsd11:~ # zpool create testpool mirror /dev/vtbd0 /dev/vtbd1 root@bsd11:~ # zpool status pool: testpool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM testpool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 vtbd0 ONLINE 0 0 0 vtbd1 ONLINE 0 0 0 errors: No known data errors root@bsd11:~ #
In this state the vtbdX disks cannot be removed from the pool.
Add more disks.
root@bsd11:~ # zpool add testpool mirror vtbd2 vtbd3 root@bsd11:~ # zpool status pool: testpool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM testpool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 vtbd0 ONLINE 0 0 0 vtbd1 ONLINE 0 0 0 mirror-1 ONLINE 0 0 0 vtbd2 ONLINE 0 0 0 vtbd3 ONLINE 0 0 0 errors: No known data errors root@bsd11:~ #
Remove the vtbd2 disk for the test.
A disk can only be removed when enough redundancy remains.
The single disk left from a mirror group then operates as a stripe.
root@bsd11:~ # zpool detach testpool vtbd2 root@bsd11:~ # zpool status pool: testpool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM testpool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 vtbd0 ONLINE 0 0 0 vtbd1 ONLINE 0 0 0 vtbd3 ONLINE 0 0 0 errors: No known data errors root@bsd11:~ #
Checking pool status
Replacing a disk
zpool replace copies all data from the old disk to the new disk.
When the operation completes, the old disk is disconnected from the vdev.
root@bsd11:~ # zpool create testpool mirror /dev/vtbd0 /dev/vtbd1 root@bsd11:~ # zpool status pool: testpool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM testpool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 vtbd0 ONLINE 0 0 0 vtbd1 ONLINE 0 0 0 errors: No known data errors root@bsd11:~ #
Replace the vtbd1 disk with vtbd2.
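The replace command for this step does not appear above; the zpool history output later in this post records it as:

# zpool replace testpool vtbd1 vtbd2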
root@bsd11:~ # zpool status pool: testpool state: ONLINE scan: resilvered 80K in 0h0m with 0 errors on Thu Mar 15 23:07:04 2018 config: NAME STATE READ WRITE CKSUM testpool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 vtbd0 ONLINE 0 0 0 vtbd2 ONLINE 0 0 0 errors: No known data errors root@bsd11:~ #
Scrubbing pool
It is a good idea to scrub each ZFS pool regularly, at least once a month.
Running a scrub while the disks are under heavy use will degrade performance.
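One way to schedule regular scrubs on FreeBSD is through periodic(8); this is a sketch, not part of the original session, and the threshold value is only an example:

# sysrc -f /etc/periodic.conf daily_scrub_zfs_enable="YES"
# sysrc -f /etc/periodic.conf daily_scrub_zfs_default_threshold="30"

The daily periodic job then starts a scrub on any pool whose last scrub is older than the threshold (in days).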
root@bsd11:~ # zpool scrub testpool root@bsd11:~ # zpool status pool: testpool state: ONLINE scan: scrub repaired 0 in 0h0m with 0 errors on Thu Mar 15 23:10:20 2018 config: NAME STATE READ WRITE CKSUM testpool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 vtbd0 ONLINE 0 0 0 vtbd2 ONLINE 0 0 0 errors: No known data errors root@bsd11:~ #
Self-Healing Test
The checksums stored with each data block allow the file system to heal itself.
Data whose checksum does not match is automatically repaired from the copy recorded on another disk in the storage pool.
root@bsd11:/usr/local/etc # zpool status pool: testpool state: ONLINE scan: scrub repaired 5K in 0h0m with 0 errors on Thu Mar 15 23:17:43 2018 config: NAME STATE READ WRITE CKSUM testpool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 vtbd0 ONLINE 0 0 0 vtbd1 ONLINE 0 0 0 errors: No known data errors root@bsd11:/usr/local/etc #
For the self-healing test, copy some arbitrary files into the pool and generate a checksum.
root@bsd11:~ # cd /usr/local/etc/
root@bsd11:/usr/local/etc # cp * /testpool/
cp: apache24 is a directory (not copied).
cp: bash_completion.d is a directory (not copied).
cp: man.d is a directory (not copied).
cp: newsyslog.conf.d is a directory (not copied).
cp: periodic is a directory (not copied).
cp: php is a directory (not copied).
cp: php-fpm.d is a directory (not copied).
cp: rc.d is a directory (not copied).
cp: ssl is a directory (not copied).
root@bsd11:/usr/local/etc # cd
root@bsd11:~ # sha1 /testpool > checksum.txt
root@bsd11:~ # cat checksum.txt
SHA1 (/testpool) = 34d4723284883bf65b788e6674c7e475dc4102e9
root@bsd11:~ #
Wipe part of the vtbd0 disk with dd.
root@bsd11:~ # zpool export testpool
root@bsd11:~ # dd if=/dev/random of=/dev/vtbd0 bs=1m count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 2.680833 secs (78227611 bytes/sec)
root@bsd11:~ #
root@bsd11:~ # zpool import testpool
Check the CKSUM column for testpool.
root@bsd11:~ # zpool status testpool pool: testpool state: ONLINE status: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected. action: Determine if the device needs to be replaced, and clear the errors using 'zpool clear' or replace the device with 'zpool replace'. see: http://illumos.org/msg/ZFS-8000-9P scan: scrub repaired 5K in 0h0m with 0 errors on Thu Mar 15 23:17:43 2018 config: NAME STATE READ WRITE CKSUM testpool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 vtbd0 ONLINE 0 0 0 vtbd1 ONLINE 0 0 1 errors: No known data errors root@bsd11:~ #
ZFS detected the error and handled it by using the redundancy present in the unaffected mirror disk.
Comparing the checksum with the original shows whether the data is still identical.
root@bsd11:~ # sha1 /testpool >> checksum.txt
root@bsd11:~ # cat checksum.txt
SHA1 (/testpool) = 34d4723284883bf65b788e6674c7e475dc4102e9
SHA1 (/testpool) = 34d4723284883bf65b788e6674c7e475dc4102e9
root@bsd11:~ #
The checksums taken before and after intentionally damaging the pool data are identical.
ZFS automatically detects and corrects errors whenever checksums differ.
This is only possible when the pool has sufficient redundancy; a pool consisting of a single device has no self-healing capability.
A scrub then reads the data from vtbd0 and rewrites any data with bad checksums on vtbd1.
root@bsd11:~ # zpool scrub testpool root@bsd11:~ # zpool status pool: testpool state: ONLINE status: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected. action: Determine if the device needs to be replaced, and clear the errors using 'zpool clear' or replace the device with 'zpool replace'. see: http://illumos.org/msg/ZFS-8000-9P scan: scrub repaired 253K in 0h0m with 0 errors on Thu Mar 15 23:30:15 2018 config: NAME STATE READ WRITE CKSUM testpool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 vtbd0 ONLINE 0 0 0 vtbd1 ONLINE 0 0 29 errors: No known data errors root@bsd11:~ #
Once the scrub completes, run zpool clear to reset the error counters.
root@bsd11:~ # zpool clear testpool root@bsd11:~ # zpool status pool: testpool state: ONLINE scan: scrub repaired 253K in 0h0m with 0 errors on Thu Mar 15 23:30:15 2018 config: NAME STATE READ WRITE CKSUM testpool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 vtbd0 ONLINE 0 0 0 vtbd1 ONLINE 0 0 0 errors: No known data errors root@bsd11:~ #
Importing and exporting pools
root@bsd11:~ # zpool export testpool root@bsd11:~ # zpool import pool: testpool id: 4252914017303616931 state: ONLINE action: The pool can be imported using its name or numeric identifier. config: testpool ONLINE mirror-0 ONLINE vtbd0 ONLINE vtbd1 ONLINE root@bsd11:~ # df -h Filesystem Size Used Avail Capacity Mounted on /dev/ada0s1a 18G 7.8G 9.1G 46% / devfs 1.0K 1.0K 0B 100% /dev root@bsd11:~ #
The import -o option can be used to specify an alternate root path.
When a path is given, the pool is mounted at $path/$pool_name.
root@bsd11:~ # zpool import -o altroot=/mnt testpool
root@bsd11:~ # df -h
Filesystem      Size    Used   Avail Capacity  Mounted on
/dev/ada0s1a     18G    7.8G    9.1G    46%    /
devfs           1.0K    1.0K      0B   100%    /dev
testpool        9.6G    265K    9.6G     0%    /mnt/testpool
root@bsd11:~ #
Storage pool upgrade
Upgrading FreeBSD also upgrades the ZFS version, which can add support for new features. Pools can be upgraded, but they cannot be downgraded.
# Whether pools created under the older version also need to be upgraded still needs to be confirmed. 🙂
root@bsd11:~ # zpool upgrade
Upgrading an existing pool
root@bsd11:~ # zpool upgrade testpool
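To check whether a pool still needs upgrading (an optional verification, not part of the original session), its version can be inspected; pools that already use feature flags report "-" here:

# zpool get version testpool
# zpool upgrade

Running zpool upgrade with no arguments lists any pools that can still be upgraded.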
zpool history
root@bsd11:~ # zpool history History for 'testpool': 2018-03-15.23:06:26 zpool create testpool mirror /dev/vtbd0 /dev/vtbd1 2018-03-15.23:07:09 zpool replace testpool vtbd1 vtbd2 2018-03-15.23:10:25 zpool scrub testpool 2018-03-15.23:13:10 zpool replace testpool vtbd2 vtbd1 2018-03-15.23:14:25 zpool export testpool 2018-03-15.23:16:48 zpool import testpool 2018-03-15.23:17:48 zpool scrub testpool 2018-03-15.23:18:08 zpool clear testpool 2018-03-15.23:24:30 zpool export testpool 2018-03-15.23:25:49 zpool import testpool 2018-03-15.23:30:20 zpool scrub testpool 2018-03-15.23:32:46 zpool clear testpool 2018-03-15.23:37:30 zpool export testpool 2018-03-15.23:39:04 zpool import -o altroot=/mnt testpool root@bsd11:~ #
The -i option also shows internal ZFS events.
root@bsd11:~ # zpool history -i History for 'testpool': 2018-03-15.23:06:26 [txg:5] create pool version 5000; software version 5000/5; uts bsd11 11.1-RELEASE 1101001 amd64 2018-03-15.23:06:26 zpool create testpool mirror /dev/vtbd0 /dev/vtbd1 2018-03-15.23:07:04 [txg:16] scan setup func=2 mintxg=3 maxtxg=16 2018-03-15.23:07:04 [txg:17] scan done errors=0 2018-03-15.23:07:04 [txg:18] vdev attach replace vdev=/dev/vtbd2 for vdev=/dev/vtbd1 2018-03-15.23:07:09 [txg:19] detach vdev=/dev/vtbd1 2018-03-15.23:07:09 zpool replace testpool vtbd1 vtbd2 2018-03-15.23:10:20 [txg:57] scan setup func=1 mintxg=0 maxtxg=57 2018-03-15.23:10:20 [txg:58] scan done errors=0 2018-03-15.23:10:25 zpool scrub testpool 2018-03-15.23:13:05 [txg:94] scan setup func=2 mintxg=3 maxtxg=94 2018-03-15.23:13:05 [txg:95] scan done errors=0 2018-03-15.23:13:05 [txg:96] vdev attach replace vdev=/dev/vtbd1 for vdev=/dev/vtbd2 2018-03-15.23:13:10 [txg:97] detach vdev=/dev/vtbd2 2018-03-15.23:13:10 zpool replace testpool vtbd2 vtbd1 2018-03-15.23:14:25 zpool export testpool 2018-03-15.23:16:43 [txg:117] open pool version 5000; software version 5000/5; uts bsd11 11.1-RELEASE 1101001 amd64 2018-03-15.23:16:43 [txg:119] import pool version 5000; software version 5000/5; uts bsd11 11.1-RELEASE 1101001 amd64 2018-03-15.23:16:48 zpool import testpool 2018-03-15.23:17:43 [txg:132] scan setup func=1 mintxg=0 maxtxg=132 2018-03-15.23:17:43 [txg:133] scan done errors=0 2018-03-15.23:17:48 zpool scrub testpool 2018-03-15.23:18:08 zpool clear testpool 2018-03-15.23:24:30 zpool export testpool 2018-03-15.23:25:44 [txg:219] open pool version 5000; software version 5000/5; uts bsd11 11.1-RELEASE 1101001 amd64 2018-03-15.23:25:44 [txg:221] import pool version 5000; software version 5000/5; uts bsd11 11.1-RELEASE 1101001 amd64 2018-03-15.23:25:49 zpool import testpool 2018-03-15.23:30:15 [txg:276] scan setup func=1 mintxg=0 maxtxg=276 2018-03-15.23:30:15 [txg:277] scan done errors=0 2018-03-15.23:30:20 zpool scrub testpool 2018-03-15.23:32:46 zpool clear testpool 2018-03-15.23:37:30 zpool export testpool 2018-03-15.23:38:58 [txg:369] open pool version 5000; software version 5000/5; uts bsd11 11.1-RELEASE 1101001 amd64 2018-03-15.23:38:58 [txg:371] import pool version 5000; software version 5000/5; uts bsd11 11.1-RELEASE 1101001 amd64 2018-03-15.23:39:04 zpool import -o altroot=/mnt testpool root@bsd11:~ #
The -l option shows the user name and hostname.
root@bsd11:~ # zpool history -l History for 'testpool': 2018-03-15.23:06:26 zpool create testpool mirror /dev/vtbd0 /dev/vtbd1 [user 0 (root) on bsd11] 2018-03-15.23:07:09 zpool replace testpool vtbd1 vtbd2 [user 0 (root) on bsd11] 2018-03-15.23:10:25 zpool scrub testpool [user 0 (root) on bsd11] 2018-03-15.23:13:10 zpool replace testpool vtbd2 vtbd1 [user 0 (root) on bsd11] 2018-03-15.23:14:25 zpool export testpool [user 0 (root) on bsd11] 2018-03-15.23:16:48 zpool import testpool [user 0 (root) on bsd11] 2018-03-15.23:17:48 zpool scrub testpool [user 0 (root) on bsd11] 2018-03-15.23:18:08 zpool clear testpool [user 0 (root) on bsd11] 2018-03-15.23:24:30 zpool export testpool [user 0 (root) on bsd11] 2018-03-15.23:25:49 zpool import testpool [user 0 (root) on bsd11] 2018-03-15.23:30:20 zpool scrub testpool [user 0 (root) on bsd11] 2018-03-15.23:32:46 zpool clear testpool [user 0 (root) on bsd11] 2018-03-15.23:37:30 zpool export testpool [user 0 (root) on bsd11] 2018-03-15.23:39:04 zpool import -o altroot=/mnt testpool [user 0 (root) on bsd11] root@bsd11:~ #
Monitoring a zpool
root@bsd11:~ # zpool iostat
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
testpool     372K  9.94G      0      0     96    447
root@bsd11:~ #
zpool iostat -v is the verbose option.
It also shows per-disk read/write statistics.
root@bsd11:~ # zpool iostat -v
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
testpool     372K  9.94G      0      0     82    382
  mirror     372K  9.94G      0      0     82    382
    vtbd0       -      -      0      0  2.02K  1.65K
    vtbd1       -      -      0      0  1.38K  1.65K
----------  -----  -----  -----  -----  -----  -----
root@bsd11:~ #
ZFS administration
https://www.freebsd.org/doc/handbook/zfs-zfs.html
The zfs utility creates, destroys, and manages ZFS datasets within a pool.
Creating and destroying datasets
root@bsd11:~ # zfs list NAME USED AVAIL REFER MOUNTPOINT zroot 6.65G 29.9G 88K /zroot zroot/ROOT 627M 29.9G 88K none zroot/ROOT/default 627M 29.9G 627M / zroot/jails 4.76G 29.9G 112K /usr/jails zroot/jails/basejail 1003M 29.9G 979M /usr/jails/basejail zroot/jails/database 1.95G 29.9G 1.95G /usr/jails/database zroot/jails/httpd 1.83G 29.9G 1.83G /usr/jails/httpd zroot/jails/newjail 4.66M 29.9G 4.66M /usr/jails/newjail zroot/jails/www 4.75M 29.9G 4.75M /usr/jails/www zroot/tmp 88K 29.9G 88K /tmp zroot/usr 1.27G 29.9G 88K /usr zroot/usr/home 88K 29.9G 88K /usr/home zroot/usr/ports 665M 29.9G 665M /usr/ports zroot/usr/src 633M 29.9G 633M /usr/src zroot/var 604K 29.9G 88K /var zroot/var/audit 88K 29.9G 88K /var/audit zroot/var/crash 88K 29.9G 88K /var/crash zroot/var/log 164K 29.9G 164K /var/log zroot/var/mail 88K 29.9G 88K /var/mail zroot/var/tmp 88K 29.9G 88K /var/tmp root@bsd11:~ #
Create a new dataset and enable LZ4 compression on it.
root@bsd11:~ # zfs create -o compress=lz4 zroot/zroottest root@bsd11:~ # zfs list NAME USED AVAIL REFER MOUNTPOINT zroot 6.65G 29.9G 88K /zroot zroot/ROOT 627M 29.9G 88K none zroot/ROOT/default 627M 29.9G 627M / zroot/jails 4.76G 29.9G 112K /usr/jails zroot/jails/basejail 1003M 29.9G 979M /usr/jails/basejail zroot/jails/database 1.95G 29.9G 1.95G /usr/jails/database zroot/jails/httpd 1.83G 29.9G 1.83G /usr/jails/httpd zroot/jails/newjail 4.66M 29.9G 4.66M /usr/jails/newjail zroot/jails/www 4.75M 29.9G 4.75M /usr/jails/www zroot/tmp 88K 29.9G 88K /tmp zroot/usr 1.27G 29.9G 88K /usr zroot/usr/home 88K 29.9G 88K /usr/home zroot/usr/ports 665M 29.9G 665M /usr/ports zroot/usr/src 633M 29.9G 633M /usr/src zroot/var 604K 29.9G 88K /var zroot/var/audit 88K 29.9G 88K /var/audit zroot/var/crash 88K 29.9G 88K /var/crash zroot/var/log 164K 29.9G 164K /var/log zroot/var/mail 88K 29.9G 88K /var/mail zroot/var/tmp 88K 29.9G 88K /var/tmp zroot/zroottest 88K 29.9G 88K /zroot/zroottest root@bsd11:~ #
Destroy the dataset created earlier.
root@bsd11:~ # zfs list NAME USED AVAIL REFER MOUNTPOINT zroot 6.65G 29.9G 88K /zroot zroot/ROOT 627M 29.9G 88K none zroot/ROOT/default 627M 29.9G 627M / zroot/jails 4.76G 29.9G 112K /usr/jails zroot/jails/basejail 1003M 29.9G 979M /usr/jails/basejail zroot/jails/database 1.95G 29.9G 1.95G /usr/jails/database zroot/jails/httpd 1.83G 29.9G 1.83G /usr/jails/httpd zroot/jails/newjail 4.66M 29.9G 4.66M /usr/jails/newjail zroot/jails/www 4.75M 29.9G 4.75M /usr/jails/www zroot/tmp 88K 29.9G 88K /tmp zroot/usr 1.27G 29.9G 88K /usr zroot/usr/home 88K 29.9G 88K /usr/home zroot/usr/ports 665M 29.9G 665M /usr/ports zroot/usr/src 633M 29.9G 633M /usr/src zroot/var 604K 29.9G 88K /var zroot/var/audit 88K 29.9G 88K /var/audit zroot/var/crash 88K 29.9G 88K /var/crash zroot/var/log 164K 29.9G 164K /var/log zroot/var/mail 88K 29.9G 88K /var/mail zroot/var/tmp 88K 29.9G 88K /var/tmp zroot/zroottest 88K 29.9G 88K /zroot/zroottest root@bsd11:~ # zfs destroy zroot/zroottest root@bsd11:~ # zfs list NAME USED AVAIL REFER MOUNTPOINT zroot 6.65G 29.9G 88K /zroot zroot/ROOT 627M 29.9G 88K none zroot/ROOT/default 627M 29.9G 627M / zroot/jails 4.76G 29.9G 112K /usr/jails zroot/jails/basejail 1003M 29.9G 979M /usr/jails/basejail zroot/jails/database 1.95G 29.9G 1.95G /usr/jails/database zroot/jails/httpd 1.83G 29.9G 1.83G /usr/jails/httpd zroot/jails/newjail 4.66M 29.9G 4.66M /usr/jails/newjail zroot/jails/www 4.75M 29.9G 4.75M /usr/jails/www zroot/tmp 88K 29.9G 88K /tmp zroot/usr 1.27G 29.9G 88K /usr zroot/usr/home 88K 29.9G 88K /usr/home zroot/usr/ports 665M 29.9G 665M /usr/ports zroot/usr/src 633M 29.9G 633M /usr/src zroot/var 604K 29.9G 88K /var zroot/var/audit 88K 29.9G 88K /var/audit zroot/var/crash 88K 29.9G 88K /var/crash zroot/var/log 164K 29.9G 164K /var/log zroot/var/mail 88K 29.9G 88K /var/mail zroot/var/tmp 88K 29.9G 88K /var/tmp root@bsd11:~ #
In recent ZFS versions, zfs destroy runs asynchronously.
It can take a few minutes for the freed space to show up in the pool.
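The amount of space still waiting to be reclaimed by an asynchronous destroy can be watched through the pool's freeing property (an assumption, not shown in the original session):

# zpool get freeing zroot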
Creating and destroying volumes
A ZFS volume can be formatted with any file system, or used without one to store raw data.
To the user, a volume looks like a regular disk.
root@bsd11:~ # zfs create -V 250m -o compression=on zroot/fat32 root@bsd11:~ # zfs list zroot NAME USED AVAIL REFER MOUNTPOINT zroot 6.90G 29.7G 88K /zroot root@bsd11:~ # zfs list NAME USED AVAIL REFER MOUNTPOINT zroot 6.90G 29.7G 88K /zroot zroot/ROOT 627M 29.7G 88K none zroot/ROOT/default 627M 29.7G 627M / zroot/fat32 260M 29.9G 56K - zroot/jails 4.76G 29.7G 112K /usr/jails zroot/jails/basejail 1003M 29.7G 979M /usr/jails/basejail zroot/jails/database 1.95G 29.7G 1.95G /usr/jails/database zroot/jails/httpd 1.83G 29.7G 1.83G /usr/jails/httpd zroot/jails/newjail 4.66M 29.7G 4.66M /usr/jails/newjail zroot/jails/www 4.75M 29.7G 4.75M /usr/jails/www zroot/tmp 88K 29.7G 88K /tmp zroot/usr 1.27G 29.7G 88K /usr zroot/usr/home 88K 29.7G 88K /usr/home zroot/usr/ports 665M 29.7G 665M /usr/ports zroot/usr/src 633M 29.7G 633M /usr/src zroot/var 604K 29.7G 88K /var zroot/var/audit 88K 29.7G 88K /var/audit zroot/var/crash 88K 29.7G 88K /var/crash zroot/var/log 164K 29.7G 164K /var/log zroot/var/mail 88K 29.7G 88K /var/mail zroot/var/tmp 88K 29.7G 88K /var/tmp root@bsd11:~ #
Format the fat32 volume on zroot with an msdos file system, then mount it on /mnt.
Specifying -F32 was rejected as an invalid argument, so the test was run without the -F option.
root@bsd11:~ # newfs_msdos /dev/zvol/zroot/fat32
newfs_msdos: cannot get number of sectors per track: Operation not supported
newfs_msdos: cannot get number of heads: Operation not supported
newfs_msdos: trim 62 sectors to adjust to a multiple of 63
/dev/zvol/zroot/fat32: 511648 sectors in 31978 FAT16 clusters (8192 bytes/cluster)
BytesPerSec=512 SecPerClust=16 ResSectors=1 FATs=2 RootDirEnts=512 Media=0xf0 FATsecs=125 SecPerTrack=63 Heads=16 HiddenSecs=0 HugeSectors=511938
root@bsd11:~ # mount -t msdosfs /dev/zvol/zroot/fat32 /mnt
mount_msdosfs: /dev/zvol/zroot/fat32: Invalid argument
root@bsd11:~ # mount -t msdosfs /dev/zvol/zroot/fat32 /mnt
root@bsd11:~ # df -h |grep -i mnt
/dev/zvol/zroot/fat32    250M     16K    250M     0%    /mnt
root@bsd11:~ # mount |grep -i mnt
/dev/zvol/zroot/fat32 on /mnt (msdosfs, local)
root@bsd11:~ #
Renaming a dataset
The name of a dataset can be changed with zfs rename, and its parent can be changed as well. Moving a dataset under a different parent changes the property values it inherits from that parent.
Renaming a dataset unmounts it and then remounts it at the new location.
The -u option prevents this remount.
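A minimal sketch of the -u option, assuming a hypothetical dataset zroot/data that is safe to rename:

# zfs rename -u zroot/data zroot/data_new

The dataset keeps its current mount point until it is remounted manually or at the next reboot.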
root@bsd11:~ # zfs list NAME USED AVAIL REFER MOUNTPOINT zroot 6.90G 29.7G 88K /zroot zroot/ROOT 627M 29.7G 88K none zroot/ROOT/default 627M 29.7G 627M / zroot/fat32 260M 29.9G 68K - zroot/jails 4.76G 29.7G 112K /usr/jails zroot/jails/basejail 1003M 29.7G 979M /usr/jails/basejail zroot/jails/database 1.95G 29.7G 1.95G /usr/jails/database zroot/jails/httpd 1.83G 29.7G 1.83G /usr/jails/httpd zroot/jails/newjail 4.66M 29.7G 4.66M /usr/jails/newjail zroot/jails/www 4.75M 29.7G 4.75M /usr/jails/www zroot/tmp 88K 29.7G 88K /tmp zroot/usr 1.27G 29.7G 88K /usr zroot/usr/home 88K 29.7G 88K /usr/home zroot/usr/ports 665M 29.7G 665M /usr/ports zroot/usr/src 633M 29.7G 633M /usr/src zroot/var 604K 29.7G 88K /var zroot/var/audit 88K 29.7G 88K /var/audit zroot/var/crash 88K 29.7G 88K /var/crash zroot/var/log 164K 29.7G 164K /var/log zroot/var/mail 88K 29.7G 88K /var/mail zroot/var/tmp 88K 29.7G 88K /var/tmp root@bsd11:~ # zfs rename zroot/fat32 zroot/fat16 root@bsd11:~ # zfs list NAME USED AVAIL REFER MOUNTPOINT zroot 6.90G 29.7G 88K /zroot zroot/ROOT 627M 29.7G 88K none zroot/ROOT/default 627M 29.7G 627M / zroot/fat16 260M 29.9G 68K - zroot/jails 4.76G 29.7G 112K /usr/jails zroot/jails/basejail 1003M 29.7G 979M /usr/jails/basejail zroot/jails/database 1.95G 29.7G 1.95G /usr/jails/database zroot/jails/httpd 1.83G 29.7G 1.83G /usr/jails/httpd zroot/jails/newjail 4.66M 29.7G 4.66M /usr/jails/newjail zroot/jails/www 4.75M 29.7G 4.75M /usr/jails/www zroot/tmp 88K 29.7G 88K /tmp zroot/usr 1.27G 29.7G 88K /usr zroot/usr/home 88K 29.7G 88K /usr/home zroot/usr/ports 665M 29.7G 665M /usr/ports zroot/usr/src 633M 29.7G 633M /usr/src zroot/var 604K 29.7G 88K /var zroot/var/audit 88K 29.7G 88K /var/audit zroot/var/crash 88K 29.7G 88K /var/crash zroot/var/log 164K 29.7G 164K /var/log zroot/var/mail 88K 29.7G 88K /var/mail zroot/var/tmp 88K 29.7G 88K /var/tmp root@bsd11:~ #
Snapshots cannot be renamed this way; because of how snapshots work, they cannot be renamed into a different parent dataset.
To rename snapshots recursively, specify the -r option; all snapshots with the same name in child datasets are renamed as well.
root@bsd11:~ # zfs rename zroot/var/test@2018-03-15 new_test@2018-03-16
root@bsd11:~ # zfs list -t snapshot
Setting dataset properties
root@bsd11:~ # zfs set custom:costcenter=1234 zroot
root@bsd11:~ # zfs get custom:costcenter zroot
NAME   PROPERTY           VALUE  SOURCE
zroot  custom:costcenter  1234   local
root@bsd11:~ #
To remove a custom property, use zfs inherit with the -r option.
root@bsd11:~ # zfs get custom:costcenter zroot NAME PROPERTY VALUE SOURCE zroot custom:costcenter 1234 local root@bsd11:~ # zfs inherit -r custom:costconter zroot root@bsd11:~ # zfs get custom:costconter NAME PROPERTY VALUE SOURCE zroot custom:costconter - - zroot/ROOT custom:costconter - - zroot/ROOT/default custom:costconter - - zroot/fat16 custom:costconter - - zroot/jails custom:costconter - - zroot/jails/basejail custom:costconter - - zroot/jails/basejail@20180306_19:50:50 custom:costconter - - zroot/jails/basejail@20180306_20:02:39 custom:costconter - - zroot/jails/database custom:costconter - - zroot/jails/httpd custom:costconter - - zroot/jails/newjail custom:costconter - - zroot/jails/www custom:costconter - - zroot/test custom:costconter - - zroot/tmp custom:costconter - - zroot/usr custom:costconter - - zroot/usr/home custom:costconter - - zroot/usr/ports custom:costconter - - zroot/usr/src custom:costconter - - zroot/var custom:costconter - - zroot/var/audit custom:costconter - - zroot/var/crash custom:costconter - - zroot/var/log custom:costconter - - zroot/var/mail custom:costconter - - zroot/var/tmp custom:costconter - - root@bsd11:~ # zfs get custom:costconter zroot NAME PROPERTY VALUE SOURCE zroot custom:costconter - - root@bsd11:~ #
Share properties and settings
These are the NFS / SMB sharing options.
They define how a ZFS dataset can be shared on the network.
Both are currently off.
root@bsd11:~ # zfs get sharenfs zroot/usr/home
NAME            PROPERTY  VALUE  SOURCE
zroot/usr/home  sharenfs  off    default
root@bsd11:~ # zfs get sharesmb zroot/usr/home
NAME            PROPERTY  VALUE  SOURCE
zroot/usr/home  sharesmb  off    default
root@bsd11:~ #
Enabling NFS sharing for /usr/home
root@bsd11:~ # zfs set sharenfs=on zroot/usr/home
root@bsd11:~ # zfs get sharenfs zroot/usr/home
NAME            PROPERTY  VALUE  SOURCE
zroot/usr/home  sharenfs  on     local
root@bsd11:~ #
Example of zfs set sharenfs:
NFS export options can be set as follows.
root@bsd11:~ # zfs set sharenfs="-alldirs,=maproot=root,-network=192.168.0.0/24" zroot/usr/home
root@bsd11:~ # zfs get sharenfs zroot/usr/home
NAME            PROPERTY  VALUE                                           SOURCE
zroot/usr/home  sharenfs  -alldirs,=maproot=root,-network=192.168.0.0/24  local
root@bsd11:~ #
Snapshot management
- Work in progress