One: preface
GlusterFS is the core of the Gluster scale-out storage solution. It is an open-source distributed file system with strong horizontal scalability, able to grow to several petabytes of capacity and serve thousands of clients. GlusterFS aggregates physically distributed storage resources over TCP/IP or InfiniBand RDMA networks and manages the data under a single global namespace. Its stackable user-space design delivers good performance for a wide variety of workloads.
Servers:
10.116.137.196 k8s_master
10.116.82.28 k8s_node1
10.116.36.57 k8s_node2
Two: install glusterfs
We install with yum directly on the physical machines, running the following commands on each of the three servers.
1. Install the Gluster repository first
yum install centos-release-gluster -y
2. Install the glusterfs components
yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma glusterfs-geo-replication glusterfs-devel
3. Create the glusterd working directory
mkdir /mnt/glusterd
4. Point glusterd at the new working directory by editing its volfile (by default /etc/glusterfs/glusterd.vol)
volume management
    type mgmt/glusterd
    option working-directory /mnt/glusterd
    option transport-type socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off
    option ping-timeout 0
    option event-threads 1
#   option lock-timer 180
#   option transport.address-family inet6
#   option base-port 49152
end-volume
5. Start glusterfs
systemctl enable glusterd.service
systemctl start glusterd.service
systemctl status glusterd.service
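As a quick sanity check (not part of the original steps), you can also confirm the installed version and that glusterd is listening on its management port; ss comes from the iproute package on CentOS:
# print the GlusterFS version
gluster --version
# glusterd should be listening on TCP 24007
ss -lntp | grep 24007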
Three: configure glusterfs
1. Open the glusterd management port
iptables -I INPUT -p tcp --dport 24007 -j ACCEPT
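Port 24007 is only the glusterd management port; each brick process listens on a port of its own (49152 and up on recent GlusterFS releases). If a firewall sits between the nodes, a rule along the following lines may also be needed; the exact range depends on your GlusterFS version and the number of bricks:
# allow a range of brick ports (adjust the range to your setup)
iptables -I INPUT -p tcp --dport 49152:49251 -j ACCEPT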
2. Create a storage directory for the brick
mkdir /mnt/gfs_data
3. Add nodes to the cluster
Do the following on 10.116.137.196:
gluster peer probe 10.116.82.28
gluster peer probe 10.116.36.57
4. View cluster status
gluster peer status
Number of Peers: 2
Hostname: 10.116.82.28
Uuid: f73138ca-e32e-4d87-a99d-cf842fc29447
State: Peer in Cluster (Connected)
Hostname: 10.116.36.57
Uuid: 18e22d2c-049b-4b0c-8cc7-2560319e6c05
State: Peer in Cluster (Connected)
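A more compact view of the same information (including the local node) is available via gluster pool list, which can be convenient for scripting:
gluster pool list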
Four: configure volume
1. Volume modes
a. Distributed volume (default mode): DHT. Each file is placed on a single server node chosen by a hash algorithm.
b. Replicated volume: AFR. Creating the volume with replica x keeps a copy of every file on x nodes.
c. Striped volume: Striped. Creating the volume with stripe x splits each file into blocks spread across x nodes (similar to RAID 0).
d. Distributed striped volume: requires at least 4 servers. Creating the volume with stripe 2 across 4 nodes combines DHT and Striped.
e. Distributed replicated volume: requires at least 4 servers. Creating the volume with replica 2 across 4 nodes combines DHT and AFR (see the example after this list).
f. Striped replicated volume: requires at least 4 servers. Creating the volume with stripe 2 replica 2 across 4 nodes combines Striped and AFR.
g. A mix of all three modes: requires at least 8 servers. With stripe 2 replica 2, every 4 nodes form one group.
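For example, a distributed replicated volume (mode e) could be created roughly as follows; the host names and brick paths here are placeholders rather than part of this setup:
gluster volume create test-volume replica 2 transport tcp \
  server1:/data/brick1 server2:/data/brick1 \
  server3:/data/brick1 server4:/data/brick1 force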
2. Create a distributed volume
gluster volume create k8s-volume transport tcp 10.116.137.196:/mnt/gfs_data 10.116.82.28:/mnt/gfs_data 10.116.36.57:/mnt/gfs_data force
3. View volume status
gluster volume info
Volume Name: k8s-volume
Type: Distribute
Volume ID: 62900029-02c9-4870-951c-38fafd5f5d9b
Status: Created
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 10.116.137.196:/mnt/gfs_data
Brick2: 10.116.82.28:/mnt/gfs_data
Brick3: 10.116.36.57:/mnt/gfs_data
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
4. Start the distributed volume
gluster volume start k8s-volume
Five: Glusterfs tuning
1. Enable quota on the volume
gluster volume quota k8s-volume enable
2. Set a usage limit on the volume (/ means the whole volume)
gluster volume quota k8s-volume limit-usage / 5GB
3. Set the cache size (default is 32MB)
gluster volume set k8s-volume performance.cache-size 64MB
4. Set the number of I/O threads (too many can cause the process to crash)
gluster volume set k8s-volume performance.io-thread-count 16
5. Set the network ping timeout (default is 42s)
gluster volume set k8s-volume network.ping-timeout 10
6. Set the write-behind buffer size (default is 1MB)
gluster volume set k8s-volume performance.write-behind-window-size 512MB
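To check what was actually applied, gluster volume info lists the changed options under "Options Reconfigured", and on recent GlusterFS releases a single option can be queried directly:
gluster volume get k8s-volume performance.cache-size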
Six: configure glusterfs in Kubernetes
1. Configure endpoints (glusterfs-endpoints.json)
{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "subsets": [
    {
      "addresses": [
        {
          "ip": "10.116.137.196"
        }
      ],
      "ports": [
        {
          "port": 1990
        }
      ]
    }
  ]
}
kubectl apply -f glusterfs-endpoints.json
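The object can then be checked with kubectl. Note that only one GlusterFS node is listed here, so mounting depends on that single IP; adding the other node addresses (10.116.82.28 and 10.116.36.57) to the same subset is a common way to gain redundancy:
kubectl get endpoints glusterfs-cluster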
2. Configure the service (glusterfs-service.json)
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "spec": {
    "ports": [
      {"port": 1990}
    ]
  }
}
kubectl apply -f glusterfs-service.json
3. Create a test pod
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "glusterfs"
  },
  "spec": {
    "containers": [
      {
        "name": "glusterfs",
        "image": "registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0",
        "volumeMounts": [
          {
            "mountPath": "/mnt/glusterfs",
            "name": "glusterfsvol"
          }
        ]
      }
    ],
    "volumes": [
      {
        "name": "glusterfsvol",
        "glusterfs": {
          "endpoints": "glusterfs-cluster",
          "path": "k8s-volume",
          "readOnly": true
        }
      }
    ]
  }
}
kubectl apply -f glusterfs-pod.json
Then run df -h on the node where the Pod is scheduled to see the k8s-volume mount.
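To find out which node the test Pod landed on before running df -h there, kubectl can report the node name:
kubectl get pod glusterfs -o wide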
4. Configure PersistentVolume (glusterfs-pv.yaml)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-dev-volume
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "k8s-volume"
    readOnly: false
kubectl apply -f glusterfs-pv.yaml
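After applying the manifest, the PV should show up as Available, and as Bound once a matching claim exists:
kubectl get pv gluster-dev-volume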
5. Configure PVC (glusterfs-pvc.yaml)
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-nginx
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
kubectl apply -f glusterfs-pvc.yaml
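The claim should bind to gluster-dev-volume almost immediately, since the requested size and access mode match the PV:
kubectl get pvc glusterfs-nginx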
6. Create an nginx Deployment that mounts the volume (nginx-deployment.yaml)
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-dm
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
          volumeMounts:
            - name: gluster-dev-volume
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: gluster-dev-volume
          persistentVolumeClaim:
            claimName: glusterfs-nginx
kubectl apply -f nginx-deployment.yaml
a. View deployment
kubectl get pods | grep nginx-dm
nginx-dm-64dcbb8d55-pp9vn 1/1 Running 0 1h
nginx-dm-64dcbb8d55-sclj6 1/1 Running 0 1h
b. View the mount
kubectl exec -it nginx-dm-64dcbb8d55-pp9vn -- df -h
Filesystem                 Size  Used Avail Use% Mounted on
overlay                     40G   23G   16G  60% /
tmpfs                      3.7G     0  3.7G   0% /dev
tmpfs                      3.7G     0  3.7G   0% /sys/fs/cgroup
/dev/xvda1                  40G   23G   16G  60% /etc/hosts
shm                         64M     0   64M   0% /dev/shm
10.116.137.196:k8s-volume  5.0G     0  5.0G   0% /usr/share/nginx/html
tmpfs                      3.7G   12K  3.7G   1% /run/secrets/kubernetes.io/serviceaccount
c. Create a file to test
kubectl exec -it nginx-dm-64dcbb8d55-pp9vn -- touch /usr/share/nginx/html/index.html
kubectl exec -it nginx-dm-64dcbb8d55-sclj6 -- ls -lt /usr/share/nginx/html/index.html
d. Verify glusterfs
Because this is a distributed volume, the file is stored on only one node's brick:
ls /mnt/gfs_data/
index.html
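As a further check, the volume can also be mounted directly on any machine with glusterfs-fuse installed; the same index.html should be visible through the FUSE mount (the mount point below is arbitrary):
mkdir -p /mnt/test
mount -t glusterfs 10.116.137.196:/k8s-volume /mnt/test
ls /mnt/test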