Categories
ceph storage virtualization

Output of `glance image-create` when OpenStack uses Ceph as the image store

[root@node2 ~(keystone_admin)]# glance image-create --name bc_win2012 --disk-format qcow2 --container-format ovf --file /meta/iso/bc_win2012.qcow2
+------------------+----------------------------------------------------------------------------------+
| Property         | Value                                                                            |
+------------------+----------------------------------------------------------------------------------+
| checksum         | 13422230096bef83fade0418d64e9890                                                 |
| container_format | ovf                                                                              |
| created_at       | 2020-02-26T12:03:58Z                                                             |
| direct_url       | rbd://d484bdf2-c9ba-4e1f-a69f-86586e0dc8ad/images/b9168a42-244f-4642-b08f-       |
|                  | 3e6fdc05645e/snap                                                                |
| disk_format      | qcow2                                                                            |
| id               | b9168a42-244f-4642-b08f-3e6fdc05645e                                             |
| min_disk         | 0                                                                                |
| min_ram          | 0                                                                                |
| name             | bc_win2012                                                                       |
| os_hash_algo     | sha512                                                                           |
| os_hash_value    | 30025e558627c60e7cd88aab01f193c2ca38fb73454c4fcd8c5bfc9b38cd23e963fa33aac3089f55 |
|                  | 8de746524a290d3c9e2dd87a1e919c60928fa51b9646e18d                                 |
| os_hidden        | False                                                                            |
| owner            | 504fb6fa98c443899288ec9e35b487a8                                                 |
| protected        | False                                                                            |
| size             | 8959033344                                                                       |
| status           | active                                                                           |
| tags             | []                                                                               |
| updated_at       | 2020-02-26T12:10:15Z                                                             |
| virtual_size     | Not available                                                                    |
| visibility       | shared                                                                           |
+------------------+----------------------------------------------------------------------------------+
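With a Ceph backend, the `direct_url` field shows where the image landed: an RBD image named after the glance image id, inside the `images` pool, with a `snap` snapshot. A small sketch that splits that URL into its components (the URL is copied from the output above; the `rbd` commands in the comments assume admin access to the cluster):

```shell
# direct_url reported by glance above: rbd://<fsid>/<pool>/<image-id>/<snapshot>
direct_url="rbd://d484bdf2-c9ba-4e1f-a69f-86586e0dc8ad/images/b9168a42-244f-4642-b08f-3e6fdc05645e/snap"

# Strip the scheme and split the path on "/"
path="${direct_url#rbd://}"
fsid=$(echo "$path" | cut -d/ -f1)
pool=$(echo "$path" | cut -d/ -f2)
image=$(echo "$path" | cut -d/ -f3)
snap=$(echo "$path" | cut -d/ -f4)

echo "pool=$pool image=$image snap=$snap"

# Against the live cluster you could then inspect the backing RBD image:
# rbd info "$pool/$image"
# rbd snap ls "$pool/$image"
```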

Categories
ceph storage

Installing Ceph with Docker

# create a dedicated bridge network with its own subnet
docker network create --driver bridge --subnet 172.20.0.0/16 ceph-network
docker network ls
docker network inspect ceph-network

# directory for the Ceph configuration
mkdir -p /myceph/etc/ceph
# create one directory per OSD
mkdir -p /myceph/osd/0 /myceph/osd/1 /myceph/osd/2

# monitor node
docker run -itd --name monnode --network ceph-network --ip 172.20.0.10 -e MON_NAME=monnode -e MON_IP=172.20.0.10 -v /myceph/etc/ceph:/etc/ceph ceph/mon
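Once the monitor container starts, its entrypoint generates the cluster configuration into the mounted `/myceph/etc/ceph` directory, which is why every later container mounts the same path. A sketch of what the generated `ceph.conf` typically contains (the values are illustrative assumptions; the real file is written by the `ceph/mon` image):

```ini
[global]
fsid = <generated-uuid>          ; cluster id, generated at first mon start
mon initial members = monnode    ; from MON_NAME
mon host = 172.20.0.10           ; from MON_IP
```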

# allocate three OSD IDs on the monitor, one per OSD container below
docker exec monnode ceph osd create
docker exec monnode ceph osd create
docker exec monnode ceph osd create

# OSD containers, one per directory created above
docker run -itd --name osdnode0 --network ceph-network -e CLUSTER=ceph -e WEIGHT=1.0 -e MON_NAME=monnode -e MON_IP=172.20.0.10 -v /myceph/etc/ceph:/etc/ceph -v /myceph/osd/0:/var/lib/ceph/osd/ceph-0 ceph/osd
docker run -itd --name osdnode1 --network ceph-network -e CLUSTER=ceph -e WEIGHT=1.0 -e MON_NAME=monnode -e MON_IP=172.20.0.10 -v /myceph/etc/ceph:/etc/ceph -v /myceph/osd/1:/var/lib/ceph/osd/ceph-1 ceph/osd
docker run -itd --name osdnode2 --network ceph-network -e CLUSTER=ceph -e WEIGHT=1.0 -e MON_NAME=monnode -e MON_IP=172.20.0.10 -v /myceph/etc/ceph:/etc/ceph -v /myceph/osd/2:/var/lib/ceph/osd/ceph-2 ceph/osd

# object storage gateway (RGW)
docker run -itd --name gwnode --network ceph-network --ip 172.20.0.9 -p 9080:80 -e RGW_NAME=gwnode -v /myceph/etc/ceph:/etc/ceph ceph/radosgw

docker exec monnode ceph -s
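The final `ceph -s` call can be turned into a scripted health check by extracting the `health` field from the status output. A sketch (the sample text below is a fabricated stand-in for the usual `ceph -s` layout; on the live cluster you would feed it `docker exec monnode ceph -s` instead):

```shell
# Fabricated sample in the usual `ceph -s` layout (for illustration only)
status="  cluster:
    id:     11111111-2222-3333-4444-555555555555
    health: HEALTH_OK"

# Extract the health field
health=$(printf '%s\n' "$status" | awk '/health:/ {print $2}')
echo "$health"

# On the running cluster:
# health=$(docker exec monnode ceph -s | awk '/health:/ {print $2}')
```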

Reference: Deploying the Ceph distributed file system with Docker (Luminous release)

Categories
linux storage tool

Analyzing iostat output in combination with vmstat

man vmstat

FIELD DESCRIPTION FOR VM MODE

Procs

  • r: The number of processes waiting for run time.

    r is the run queue, i.e. the number of runnable processes competing for CPU time. When this value persistently exceeds the number of CPUs, the CPU is a bottleneck.

  • b: The number of processes in uninterruptible sleep.
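vmstat reads these counters from `/proc/stat` (`procs_running` maps to r, `procs_blocked` to b), so the run-queue check can be scripted without vmstat at all. A sketch, assuming a standard Linux `/proc` layout:

```shell
# Current run-queue length (vmstat's r column) and blocked count (b)
r=$(awk '/^procs_running/ {print $2}' /proc/stat)
b=$(awk '/^procs_blocked/ {print $2}' /proc/stat)

# Number of online CPUs
cpus=$(grep -c '^processor' /proc/cpuinfo)

echo "r=$r b=$b cpus=$cpus"

# Rule of thumb from above: r persistently above the CPU count means a CPU bottleneck
if [ "$r" -gt "$cpus" ]; then
  echo "possible CPU bottleneck"
fi
```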

Memory

  • swpd: the amount of virtual memory used.
  • free: the amount of idle memory.
  • buff: the amount of memory used as buffers.
  • cache: the amount of memory used as cache.
  • inact: the amount of inactive memory. (-a option)
  • active: the amount of active memory. (-a option)

Swap

  • si: Amount of memory swapped in from disk (/s).
  • so: Amount of memory swapped to disk (/s).

IO

  • bi: Blocks received from a block device (blocks/s).
  • bo: Blocks sent to a block device (blocks/s).

System

  • in: The number of interrupts per second, including the clock.
  • cs: The number of context switches per second.

CPU

  • These are percentages of total CPU time.
  • us: Time spent running non-kernel code. (user time, including nice time)
  • sy: Time spent running kernel code. (system time)
  • id: Time spent idle. Prior to Linux 2.5.41, this includes IO-wait time.
  • wa: Time spent waiting for IO. Prior to Linux 2.5.41, included in idle.

    The wa column shows the percentage of CPU time spent waiting for IO. A common reference value is 30%: if wa stays above 30%, IO wait is severe. This may be caused by heavy random disk access, or by a bandwidth bottleneck on the disk or the disk controller (mainly block operations).

  • st: Time stolen from a virtual machine. Prior to Linux 2.6.11, unknown.
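The field descriptions above map one-to-one onto a raw vmstat data line. A sketch that splits a sample line into named fields and applies the 30% wa rule of thumb (the sample line is fabricated for illustration; the field order follows the man page: r b swpd free buff cache si so bi bo in cs us sy id wa st):

```shell
# One vmstat data line (fabricated sample; 17 fields in man-page order)
line="2 0 0 812344 123456 2345678 0 0 120 340 250 480 5 3 60 32 0"

# Intentional word splitting into positional parameters
set -- $line
r=$1; b=$2; wa=${16}

echo "r=$r b=$b wa=$wa"

# Flag heavy IO wait using the 30% reference value from above
if [ "$wa" -gt 30 ]; then
  echo "high IO wait"
fi
```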