Deploying SDS Ceph

Author: mpv86


Hello, Habr!

Block and file storage served over a SAN is familiar territory, but what about Object Storage?

Why Object Storage?


At some point we were handed a task that did not map well onto the classic block/file Storage:

- the data had to be stored in WORM (write once read many) mode;
- it had to be organized into buckets, with access controlled per bucket;
- and there was going to be a lot of it, with the volume only expected to grow.

The requirements for the future storage:

it had to run on CentOS;
it had to be built from components we already knew how to operate;
it had to speak an object protocol (S3/Swift) to the applications;
it had to keep working when individual nodes fail.

The candidates we looked at:

  • ScaleIO: a commercial block SDS from EMC; it solves the block problem rather than the object one, so it was set aside;
  • OpenIO: an object storage project that looked interesting, but it did not win us over;
  • Ceph: open source, well documented, and able to serve object, block and file clients from the same cluster.

In the end we settled on Ceph: it is free, actively developed, and has a large community behind it.

Next, the release. Luminous had only just come out, and running something that fresh in production did not sound appealing. :) Hammer was already too old, so the then-current stable release, Jewel, was chosen. The packages are mirrored locally from ceph.com and served by nginx, together with EPEL:

# Ceph-Jewel
/usr/bin/rsync -avz --delete --exclude='repo*' rsync://download.ceph.com/ceph/rpm-jewel/el7/SRPMS/ /var/www/html/repos/ceph/ceph-jewel/el7/SRPMS/
/usr/bin/rsync -avz --delete --exclude='repo*' rsync://download.ceph.com/ceph/rpm-jewel/el7/noarch/ /var/www/html/repos/ceph/ceph-jewel/el7/noarch/
/usr/bin/rsync -avz --delete --exclude='repo*' rsync://download.ceph.com/ceph/rpm-jewel/el7/x86_64/ /var/www/html/repos/ceph/ceph-jewel/el7/x86_64/
# EPEL7
/usr/bin/rsync -avz --delete --exclude='repo*' rsync://mirror.yandex.ru/fedora-epel/7/x86_64/ /var/www/html/repos/epel/7/x86_64/
/usr/bin/rsync -avz --delete --exclude='repo*' rsync://mirror.yandex.ru/fedora-epel/7/SRPMS/ /var/www/html/repos/epel/7/SRPMS/
# Ceph-Jewel
/usr/bin/createrepo --update /var/www/html/repos/ceph/ceph-jewel/el7/x86_64/
/usr/bin/createrepo --update /var/www/html/repos/ceph/ceph-jewel/el7/SRPMS/
/usr/bin/createrepo --update /var/www/html/repos/ceph/ceph-jewel/el7/noarch/
# EPEL7
/usr/bin/createrepo --update /var/www/html/repos/epel/7/x86_64/
/usr/bin/createrepo --update /var/www/html/repos/epel/7/SRPMS/
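On the cluster nodes the local mirror is then plugged in with an ordinary .repo file. A minimal sketch, assuming the mirror is reachable as repo.local (the hostname is an assumption; the paths follow the rsync layout above, relative to the nginx document root):

# /etc/yum.repos.d/ceph-local.repo
[ceph-jewel-local]
name=Ceph Jewel (local mirror)
baseurl=http://repo.local/repos/ceph/ceph-jewel/el7/$basearch/
enabled=1
gpgcheck=0

[epel-local]
name=EPEL 7 (local mirror)
baseurl=http://repo.local/repos/epel/7/$basearch/
enabled=1
gpgcheck=0

This is what the --no-adjust-repos flag in the install step below relies on: ceph-deploy leaves these files alone.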


Now a few words about what the cluster is made of. The roles are:

OSD nodes, which store the actual data;
MON nodes (monitors), which keep the map and state of the Ceph cluster (in our case virtual machines on VMware);
RGW nodes (RADOS Gateway), which expose the Object Storage API over S3 and Swift (the RadosGW instances also run as VMs on VMware);
a deployment/admin node, from which the cluster is managed (again a VM on VMware);
a balancer in front of the RGW nodes (a separate story, outside the scope of this post).

Ceph also likes to have its traffic separated. Besides the network through which clients reach the RGW, there are:

a cluster network, over which the OSDs replicate data to each other;
a public network, through which clients and the monitors talk to the cluster;
heartbeats between OSDs travel over both of these networks, so latency on either of them matters.
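In ceph.conf the split is declared in the [global] section. The config shown later in this post does not include these lines, so the subnets below are placeholders rather than values from the article:

[global]
# example subnets only
public network = 192.168.1.0/24
cluster network = 192.168.2.0/24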

Preparing the nodes (all of them run CentOS):
install the base packages and updates; anything that is missing comes from the EPEL mirror;
fill in /etc/hosts on every node: there is no DHCP or bind here, name resolution relies on /etc/hosts;
set up ntp and make sure the clocks are in sync, because clock skew is something Ceph really dislikes (a minimal example follows this list);
create a user for deploying Ceph on every node; it must not be called ceph, since that name is used by the Ceph daemons themselves.
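The time sync item might look like this on CentOS 7, assuming plain ntpd is used (the article does not show the exact commands):

sudo yum install -y ntp
sudo systemctl enable ntpd
sudo systemctl start ntpd
ntpq -p    # peers should be reachable and the offset should stay small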

The user is created like this:

sudo useradd -d /home/cephadmin -m cephadmin
sudo passwd cephadmin
echo "cephadmin ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephadmin
sudo chmod 0440 /etc/sudoers.d/cephadmin


generate an ssh key on the deploy node and copy it to every node for the cephadmin user, which has passwordless sudo; ceph-deploy will use it to log in everywhere (a sketch of this and of the firewall step follows the list below);
create a config in ~/.ssh with permissions 600, listing all the nodes:

[cephadmin@ceph-deploy .ssh]$ cat config
Host ceph-cod1-osd-n1
Hostname ceph-cod1-osd-n1
User cephadmin
...................
Host ceph-cod2-osd-n3
Hostname ceph-cod2-osd-n3
User cephadmin


open ports 6789/tcp and 6800-7100/tcp in firewalld on the cluster nodes;
disable SELinux (even though Ceph Jewel already ships its own SELinux policy);
install the deployment tool on the admin node: yum install ceph-deploy.
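A sketch of the ssh key distribution and firewall items from the list above (node names are the ones used later in this post; repeat per node):

# on the deploy node, as cephadmin
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id cephadmin@ceph-cod1-osd-n1
# ...and so on for the rest of the nodes

# on every cluster node
sudo firewall-cmd --zone=public --permanent --add-port=6789/tcp
sudo firewall-cmd --zone=public --permanent --add-port=6800-7100/tcp
sudo firewall-cmd --reload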

That is it, the preparation is done!


On the deploy node, create a working directory (here it is /home/cephadmin/ceph-cluster) and run all ceph-deploy commands from it: ceph-deploy keeps the generated configs and keyrings in the current directory and expects to find them there at every subsequent step.

The cluster is born by pointing ceph-deploy at the future monitors: ceph-deploy new #_MON_node_names_. This generates the initial ceph.conf, which we will be editing.

Ceph Jewel is then installed on all the nodes: ceph-deploy install --release=jewel --no-adjust-repos #node1 #node2 #nodeN. The --no-adjust-repos flag tells ceph-deploy not to rewrite /etc/yum.repos.d/*.repo and to use the repositories already configured on the nodes, i.e. our local mirror; without it the nodes would be pointed at ceph.com and get whatever ceph-deploy currently considers stable rather than Jewel.

ceph-deploy mon create-initial
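Put together, the bootstrap looks roughly like this; the monitor names are taken from the config below, and in practice the install line also lists every OSD and RGW host:

# on the deploy node, as cephadmin
mkdir ~/ceph-cluster && cd ~/ceph-cluster
ceph-deploy new ceph-cod1-mon-n1 ceph-cod1-mon-n2 ceph-cod2-mon-n1
ceph-deploy install --release=jewel --no-adjust-repos ceph-cod1-mon-n1 ceph-cod1-mon-n2 ceph-cod2-mon-n1
ceph-deploy mon create-initial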

After ceph-deploy new you get a ceph.conf containing, among other things, the fsid. The fsid is the unique identifier of the cluster; losing or changing it is a very bad idea! Keep a backup of ceph.conf, edit it only in the working directory, and push the edited copy to the nodes with ceph-deploy config push and --overwrite-conf, otherwise ceph-deploy refuses to overwrite an existing config. Here is what ours looks like:

[root@ceph-deploy ceph-cluster]# cat /home/cephadmin/ceph-cluster/ceph.conf
[global]
fsid = #-_
mon_initial_members = ceph-cod1-mon-n1, ceph-cod1-mon-n2, ceph-cod2-mon-n1
mon_host = ip-address1,ip-address2,ip-address3
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

#Choose reasonable numbers for number of replicas and placement groups.
osd pool default size = 2 # Write an object 2 times
osd pool default min size = 1 # Allow writing 1 copy in a degraded state
osd pool default pg num = 256
osd pool default pgp num = 256

#Choose a reasonable crush leaf type
#0 for a 1-node cluster.
#1 for a multi node cluster in a single rack
#2 for a multi node, multi chassis cluster with multiple hosts in a chassis
#3 for a multi node cluster with hosts across racks, etc.
osd crush chooseleaf type = 1

[client.rgw.ceph-cod1-rgw-n1]
host = ceph-cod1-rgw-n1
keyring = /var/lib/ceph/radosgw/ceph-rgw.ceph-cod1-rgw-n1/keyring
rgw socket path = /var/run/ceph/ceph.radosgw.ceph-cod1-rgw-n1.fastcgi.sock
log file = /var/log/ceph/client.radosgw.ceph-cod1-rgw-n1.log
rgw dns name = ceph-cod1-rgw-n1.**.*****.ru
rgw print continue = false
rgw frontends = civetweb port=8888

[client.rgw.ceph-cod2-rgw-n1]
host = ceph-cod2-rgw-n1
keyring = /var/lib/ceph/radosgw/ceph-rgw.ceph-cod2-rgw-n1/keyring
rgw socket path = /var/run/ceph/ceph.radosgw.ceph-cod2-rgw-n1.fastcgi.sock
log file = /var/log/ceph/client.radosgw.ceph-cod2-rgw-n1.log
rgw dns name = ceph-cod2-rgw-n1.**.*****.ru
rgw print continue = false
rgw frontends = civetweb port=8888
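Whenever this file is edited, it has to be pushed out to the nodes again; a minimal sketch, with the node list shortened:

ceph-deploy --overwrite-conf config push ceph-cod1-mon-n1 ceph-cod1-mon-n2 ceph-cod2-mon-n1
# then restart the affected daemons on their hosts, e.g.:
# sudo systemctl restart ceph.target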


:

the ceph-radosgw.* daemons must be able to write their log files; in our case that came down to permissions 0666 on them;
the object API is served by civetweb, embedded into the RGW daemon, instead of the old apache + fastcgi combo; as the documentation puts it:
As of firefly (v0.80), Ceph Object Gateway is running on Civetweb (embedded into the ceph-radosgw daemon) instead of Apache and FastCGI. Using Civetweb simplifies the Ceph Object Gateway installation and configuration.
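A quick way to check that civetweb answers on the configured port (hostname and port are taken from the config above); an anonymous request simply returns an empty bucket listing:

curl -s http://ceph-cod1-rgw-n1:8888/
# expected: an XML ListAllMyBucketsResult document with an anonymous owner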

The part that took the most thought was the number of placement groups for the pools that RGW creates. On osd pool default pg num and osd pool default pgp num, here is a quote that matches my own experience:
And for example a count of 64 total PGs. Honestly, placement group calculations are something that still does not convince me totally; I don't get the reason why it should be left to the Ceph admin to be manually configured, who then often complains that it is wrong. Anyway, as long as it cannot be configured automatically, the rule of thumb I've found to get rid of the error is that Ceph seems to be expecting between 20 and 32 PGs per OSD. A value below 20 gives you this error, and a value above 32 gives another error.
So, since in my case there are 9 OSDs, the minimum value would be 9*20=180, and the maximum value 9*32=288. I chose 256 and configured it dynamically.

For reference: placement groups (PGs) are the buckets into which Ceph aggregates objects before mapping them onto OSDs; recovery and rebalancing happen at PG granularity, not per object. The Ceph documentation suggests a simple rule of thumb:

Total PGs = (number of OSDs * 100) / number of replicas

with the result rounded to the nearest power of two (for example, a result of 700 becomes 512).
PGP (Placement Group for Placement purposes) should simply be kept equal to the PG count.
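Applied to the 24 OSDs shown at the end of this post with a replica count of 2 (this is a total across all pools, not the per-pool default from the config above):

echo $(( 24 * 100 / 2 ))    # 1200; the nearest power of two is 1024 PGs in total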

To be able to run ceph commands from the nodes without specifying the monitor address and keyring every time, push the admin keyring to them and make it readable: sudo chmod +r /etc/ceph/ceph.client.admin.keyring
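A sketch of that step, assuming the keyring is distributed with ceph-deploy admin (node names are examples from this post):

ceph-deploy admin ceph-cod1-mon-n1 ceph-cod1-osd-n1
ssh ceph-cod1-mon-n1 'sudo chmod +r /etc/ceph/ceph.client.admin.keyring && ceph -s'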

Before the OSDs can be created, the disks they will live on need to be prepared:

The OSD journals go to an SSD in each node, which is cut into four partitions, one journal per data disk:

parted /dev/SSD
mkpart journal-1 1 15G
mkpart journal-2 15G 30G
mkpart journal-3 30G 45G
mkpart journal-4 45G 60G


The data disks are formatted with xfs. The journal partitions have to be handed over to the ceph user that the OSD daemons run as, otherwise the OSDs will not be able to write to them:

chown ceph:ceph /dev/sdb1
chown ceph:ceph /dev/sdb2
chown ceph:ceph /dev/sdb3


That chown alone does not survive a reboot: udev recreates the device nodes as root:root, and after a restart the affected OSDs come up as failed. The cure is to set the Ceph journal partition type GUID on those partitions, so that the standard Ceph udev rules recognize them and fix the ownership themselves:

sgdisk -t 1:45B0969E-9B03-4F30-B4C6-B4B80CEFF106 /dev/sdb
where 45B0969E-9B03-4F30-B4C6-B4B80CEFF106 is the partition type GUID of a Ceph journal, and 1: is the number of the partition (repeat for the remaining journal partitions).
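Applied to all four journal partitions from the example above, that can be a simple loop (device name as in the article):

for i in 1 2 3 4; do
  sgdisk -t ${i}:45B0969E-9B03-4F30-B4C6-B4B80CEFF106 /dev/sdb
done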


The OSDs themselves are created from the deploy node with ceph-deploy disk zap and ceph-deploy osd create; the process and the result can be watched with ceph -w and ceph osd tree.
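For a single data disk with its journal partition on the SSD it might look like this (host, data disk and journal device are illustrative, in the spirit of the names used above):

ceph-deploy disk zap ceph-cod1-osd-n1:sdc
ceph-deploy osd create ceph-cod1-osd-n1:sdc:/dev/sdb1
ceph osd tree    # the new OSD should show up as "up"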

What about the two data centers?

Data placement in Ceph is driven by the crushmap. We described two rack buckets in it, one per data center, and moved the OSD hosts under the corresponding rack so that the copies of each object end up in different racks. Ceph itself neither knows nor cares that our "racks" are in fact two separate server rooms. :)
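The rack buckets and host placement can be built with the standard crush commands; a sketch (the article does not show the exact commands that were used):

ceph osd crush add-bucket ceph-cod1 rack
ceph osd crush add-bucket ceph-cod2 rack
ceph osd crush move ceph-cod1 root=default
ceph osd crush move ceph-cod2 root=default
ceph osd crush move ceph-cod1-osd-n1 rack=ceph-cod1
# ...and so on for the remaining OSD hosts

The resulting tree: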

[cephadmin@ceph-deploy ceph-cluster]$ ceph osd tree
ID WEIGHT  TYPE NAME                      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 1.17200 root default
-8 0.58600     rack ceph-cod1
-2 0.19499         host ceph-cod1-osd-n1
 0 0.04900             osd.0                   up  1.00000          1.00000
 1 0.04900             osd.1                   up  1.00000          1.00000
 2 0.04900             osd.2                   up  1.00000          1.00000
 3 0.04900             osd.3                   up  1.00000          1.00000
-3 0.19499         host ceph-cod1-osd-n2
 4 0.04900             osd.4                   up  1.00000          1.00000
 5 0.04900             osd.5                   up  1.00000          1.00000
 6 0.04900             osd.6                   up  1.00000          1.00000
 7 0.04900             osd.7                   up  1.00000          1.00000
-4 0.19499         host ceph-cod1-osd-n3
 8 0.04900             osd.8                   up  1.00000          1.00000
 9 0.04900             osd.9                   up  1.00000          1.00000
10 0.04900             osd.10                  up  1.00000          1.00000
11 0.04900             osd.11                  up  1.00000          1.00000
-9 0.58600     rack ceph-cod2
-5 0.19499         host ceph-cod2-osd-n1
12 0.04900             osd.12                  up  1.00000          1.00000
13 0.04900             osd.13                  up  1.00000          1.00000
14 0.04900             osd.14                  up  1.00000          1.00000
15 0.04900             osd.15                  up  1.00000          1.00000
-6 0.19499         host ceph-cod2-osd-n2
16 0.04900             osd.16                  up  1.00000          1.00000
17 0.04900             osd.17                  up  1.00000          1.00000
18 0.04900             osd.18                  up  1.00000          1.00000
19 0.04900             osd.19                  up  1.00000          1.00000
-7 0.19499         host ceph-cod2-osd-n3
20 0.04900             osd.20                  up  1.00000          1.00000
21 0.04900             osd.21                  up  1.00000          1.00000
22 0.04900             osd.22                  up  1.00000          1.00000
23 0.04900             osd.23                  up  1.00000          1.00000


P.S. What next?

Ceph is a big topic, and this post only covers getting the cluster up; there is plenty left to tune, monitor and learn.

A separate story is presenting Ceph to classic SAN consumers. There is no native FC in Ceph, and iSCSI on top of it is not exactly a true happy way either, especially once MPIO enters the picture.



Original source: https://habrahabr.ru/post/338782/
