1. FastDFS introduction
FastDFS is an open-source, lightweight distributed file system written in C. It manages files and provides file storage, file synchronization and file access (upload and download), solving the problems of mass storage and load balancing. It is especially suitable for file-based online services such as photo-album and video websites.
FastDFS is tailored for the Internet: it takes full account of redundant backup, load balancing and linear scaling, and it emphasizes high availability and high performance. With FastDFS it is easy to build a high-performance file server cluster that provides file upload and download services.
2. FastDFS architecture
The FastDFS architecture consists of Tracker servers and Storage servers.
The client asks a Tracker server to upload or download a file; through the Tracker server's scheduling, a Storage server ultimately completes the upload or download.
The Tracker server handles load balancing and scheduling: when a file is uploaded through the Tracker server, it selects a Storage server to provide the upload service according to certain policies. In this sense the tracker acts as a tracking or scheduling server.
The Storage server handles file storage: files uploaded by clients are ultimately stored on Storage servers. A Storage server does not implement its own file system; it manages files with the operating system's file system. In this sense storage acts as a storage server.
3. The FastDFS system has three roles: tracker server (Tracker Server), storage server (Storage Server) and client (Client).
Tracker Server: the tracker server mainly performs scheduling and plays a load-balancing role. It manages all storage servers and groups; each storage server connects to the tracker after startup, reports its group and other information, and then maintains a periodic heartbeat.
Storage Server: the storage server provides capacity and backup services and is organized in groups; each group can contain multiple storage servers, whose data are backed up to one another.
Client: the client is the machine that uploads and downloads data, i.e. the server on which our own application is deployed.
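To make the client role concrete, the FastDFS command-line tools read a small client configuration file that essentially only needs a working directory and the tracker addresses. A minimal sketch, using the addresses from the deployment later in this article (the full client.conf sample shipped with FastDFS contains more options):
base_path=/storage/fastdfs              # local working directory for client logs
tracker_server=192.168.1.20:22122       # first tracker
tracker_server=192.168.1.30:22122       # second tracker; clients can list every tracker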
4. FastDFS principle
About the modules: both the tracker and the storage tier can consist of one or more servers, and tracker or storage servers can be added or taken offline at any time without affecting the online service. All tracker servers are peers, so their number can be increased or decreased at any time according to server load.
5. File upload process
Each storage server connects to every tracker server in the cluster and reports its status to them periodically, including statistics such as free disk space, file synchronization status, and the number of file uploads and downloads.
About uploading:
The file index (file ID) returned to the client after an upload consists of four parts: group name, virtual disk path, two-level data directory, and file name.
Group name: the name of the storage group the file was uploaded to. It is returned by the storage server after a successful upload and must be saved by the client.
Virtual disk path: the virtual path configured on the storage server, corresponding to the store_path options: M00 for store_path0, M01 for store_path1, and so on.
Two-level data directory: a two-level directory that the storage server creates under each virtual disk path to store data files.
File name: different from the name the file was uploaded with. It is generated by the storage server and encodes the source storage server's IP address, the file creation timestamp, the file size, a random number and the file extension.
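For example, the file ID produced by the upload test later in this article breaks down as follows:
group2/M00/00/00/wKgBMl3cpCWAZGQIADuP0gQAyTs723.png
group2                               -> group name
M00                                  -> virtual disk path (store_path0)
00/00                                -> two-level data directory
wKgBMl3cpCWAZGQIADuP0gQAyTs723.png   -> file name generated by the storage server (encodes source IP, creation timestamp, size, random number) plus the original .png extension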
When a client sends a download request to a tracker, it must include the file name (file ID). The tracker parses the group, file size, creation time and other information out of the file name and then selects a storage server in that group to serve the read request. Because files within a group are synchronized in the background, a read may arrive before the file has been synchronized to some of the storage servers. To avoid such storage servers as far as possible, the tracker selects a readable storage server within the group according to rules such as the following:
the file creation timestamp equals the timestamp up to which the storage server has been synchronized, and (current time - file creation timestamp) > the maximum file synchronization time (for example 5 minutes); this indicates that, once the maximum synchronization time has passed after creation, the file must have been synchronized to the other storage servers.
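A minimal shell sketch of just this rule (the variable names create_ts and sync_ts are illustrative, not FastDFS internals; 300 seconds stands in for the 5-minute maximum synchronization time):
# succeed if this storage server can safely serve a read for the file
is_readable() {
    local create_ts=$1 sync_ts=$2
    local now max_sync=300
    now=$(date +%s)
    # the storage has synchronized at least up to the file's creation time,
    # and enough time has passed that synchronization must have completed
    [ "$sync_ts" -ge "$create_ts" ] && [ $(( now - create_ts )) -gt "$max_sync" ]
}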
6. Install FastDFS
192.168.1.10 nginx proxy
192.168.1.20 tracker server1
192.168.1.30 tracker server2
192.168.1.40 storage server group1
192.168.1.50 storage server group2
Data storage location: /storage/fastdfs
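For readability, a hypothetical /etc/hosts mapping for these machines could look like the following (the names are made up for illustration; all of the steps below use the raw IP addresses and do not depend on it):
192.168.1.10   nginx-proxy
192.168.1.20   tracker1
192.168.1.30   tracker2
192.168.1.40   storage-group1
192.168.1.50   storage-group2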
I. Deploy and install the tracker server
1. Install the build environment, then libfastcommon and FastDFS (on all nodes).
libfastcommon provides the basic libraries that FastDFS needs to run.
mkdir -p /storage/fastdfs                        # create the data storage location
tar zxf libfastcommon.tar.gz                     # unpack libfastcommon
cd libfastcommon/
./make.sh && ./make.sh install                   # compile and install libfastcommon
cd
tar zxf fastdfs.tar.gz                           # unpack fastdfs
cd fastdfs/
./make.sh && ./make.sh install                   # compile and install fastdfs
ls /etc/init.d/
cp conf/mime.types conf/http.conf /etc/fdfs/     # copy the files needed by the nginx extension module to /etc/fdfs
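At this point the FastDFS programs should be on the system; with a default ./make.sh install they are placed under /usr/bin (the exact path can vary between FastDFS versions), so a quick sanity check is:
ls /usr/bin/fdfs_*                               # should list fdfs_trackerd, fdfs_storaged, fdfs_upload_file, ...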
2. On the first host, write a deployment script (to make deploying the other nodes easier).
vim install-fastdfs.sh            # the script contains the same steps as above:
#!/bin/bash
mkdir -p /storage/fastdfs
tar zxf libfastcommon.tar.gz
tar zxf fastdfs.tar.gz
cd /root/libfastcommon/
./make.sh && ./make.sh install
cd /root/fastdfs/
./make.sh && ./make.sh install
cd /root/fastdfs/
cp conf/mime.types conf/http.conf /etc/fdfs/
ls /etc/fdfs                      # /etc/fdfs/ is the path to the configuration files
3. Copy scripts and software packages to the other four hosts
scp fastdfs.tar.gz libfastcommon.tar.gz install-fastdfs.sh root@192.168.1.20:/root    # repeat for each of the other hosts
4. Execute the script on each host and check that it completes successfully.
sh install-fastdfs.sh
5. Configure the tracker (on both tracker servers)
cd /etc/fdfs/
ls
cp tracker.conf.sample tracker.conf
vim tracker.conf                       # modify the tracker configuration file
bind_addr=192.168.1.20                 # line 8: listen on the local address (192.168.1.30 on the second tracker)
base_path=/storage/fastdfs             # line 22: the data storage path created earlier
/etc/init.d/fdfs_trackerd start        # start trackerd
netstat -anpt | grep 22122
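If port 22122 does not show up, the tracker log under the base_path configured above usually explains why; for example:
tail /storage/fastdfs/logs/trackerd.log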
6. Install nginx (on the first host)
yum -y install pcre-devel openssl-devel zlib-devel     # install nginx dependency packages
tar zxf nginx-1.14.0.tar.gz                            # unpack the nginx tarball
cd nginx-1.14.0/
./configure && make && make install                    # compile and install
II. Configure the storage servers (on both)
1. Install nginx (on both storage servers)
yum -y install pcre-devel openssl-devel                # install dependency packages
tar zxf nginx-1.14.0.tar.gz                            # unpack the nginx tarball
tar zxf fastdfs-nginx-module.tar.gz                    # unpack the nginx extension module
cd nginx-1.14.0/
./configure --add-module=../fastdfs-nginx-module/src && make && make install    # compile and install
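To confirm that the fastdfs-nginx-module was actually compiled in, nginx can print its version and configure arguments (this assumes the default install prefix /usr/local/nginx used throughout this article):
[root@localhost ~]# /usr/local/nginx/sbin/nginx -V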
2. Configure storage
cd /etc/fdfs/
cp storage.conf.sample storage.conf
vim storage.conf                           # modify the storage configuration file
group_name=group1                          # line 11: group1 on the first storage server, group2 on the second
bind_addr=192.168.1.40
port=23000                                 # default port, do not modify
base_path=/storage/fastdfs/                # line 41: data and log storage directory
store_path0=/storage/fastdfs/              # line 110: the first storage directory, same as base_path
tracker_server=192.168.1.20:22122          # line 119
tracker_server=192.168.1.30:22122          # line 120
http.server_port=8888                      # port for HTTP access to files
3. Configure the nginx extension module's configuration file
cd /root/fastdfs-nginx-module/src/
cp mod_fastdfs.conf /etc/fdfs/
vim /etc/fdfs/mod_fastdfs.conf
base_path=/storage/fastdfs/                # line 10: directory for data and logs
tracker_server=192.168.1.20:22122          # line 40
tracker_server=192.168.1.30:22122          # line 41
storage_server_port=23000                  # line 44
group_name=group1                          # line 47: group2 on the second storage server
url_have_group_name = true                 # line 53: must be set to true when there is more than one group
store_path0=/storage/fastdfs               # line 62: the first storage directory, same as base_path
From about line 119, uncomment and modify (or simply append) the group sections:
[group1]
group_name=group1
storage_server_port=23000
store_path_count=2
store_path0=/storage/fastdfs
[group2]
group_name=group2
storage_server_port=23000
store_path_count=2
store_path0=/storage/fastdfs
4. Copy the nginx extension module's configuration file, the storage configuration file, the nginx source package and the nginx extension package to the second storage host (after copying, repeat steps 2 and 3 above there).
cd /etc/fdfs/
scp mod_fastdfs.conf storage.conf root@192.168.1.50:/etc/fdfs/
cd
scp fastdfs-nginx-module.tar.gz nginx-1.14.0.tar.gz root@192.168.1.50:/root/
5. Modify the nginx main configuration file (on both storage servers)
[root@localhost ~]# vim /usr/local/nginx/conf/nginx.conf
Add the following server block alongside the original one:
server {
    listen       8888;
    server_name  localhost;
    location ~ /group[0-9]/M00/ {
        ngx_fastdfs_module;        # nginx extension module
    }
}
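Before starting nginx it is worth validating the edited configuration file (again assuming the default prefix /usr/local/nginx):
[root@localhost ~]# /usr/local/nginx/sbin/nginx -t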
6. Start the service
Start the storaged service (on both storage servers)
[root@localhost fdfs]# /etc/init.d/fdfs_storaged start
Start nginx (on both storage servers)
[root@localhost ~]# /usr/local/nginx/sbin/nginx
Check the log on the tracker:
[root@localhost fdfs]# cd /storage/fastdfs/logs/
[root@localhost logs]# cat trackerd.log
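Another way to confirm that both groups have registered with the trackers is fdfs_monitor, run on a storage server against its storage.conf (the output format varies between FastDFS versions, but it lists each group and the state of its storage servers):
[root@localhost ~]# fdfs_monitor /etc/fdfs/storage.conf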
Test it
Modify the client.conf.sample configuration file on the first host
[root@localhost ~]# cd /etc/fdfs/
[root@localhost fdfs]# vim client.conf.sample
base_path=/storage/fastdfs              # line 10
tracker_server=192.168.1.20:22122       # line 14
tracker_server=192.168.1.30:22122       # line 15
Upload an image to storage (from the first host):
[root@localhost ~]# fdfs_upload_file /etc/fdfs/client.conf.sample whale.png
(the returned file ID is best saved in a file so it is easy to find later)
Check it on storage
[root@localhost data]# cd /storage/fastdfs/data/00/00
[root@localhost 00]# ls
Access it in a browser:
http://192.168.1.50:8888/group2/M00/00/00/wKgBMl3cpCWAZGQIADuP0gQAyTs723.png
Download and rename the picture you just uploaded
[root@localhost ~]# fdfs_download_file /etc/fdfs/client.conf.sample group2/M00/00/00/wKgBMl3coGKACR1RADuP0gQAyTs874.png xgp.png
Delete the picture you just uploaded
[root@localhost ~]# fdfs_delete_file /etc/fdfs/client.conf.sample group2/M00/00/00/wKgBMl3coGKACR1RADuP0gQAyTs874.png
III. Configure the nginx reverse proxy server (on the proxy host, 192.168.1.10)
[root@localhost nginx-1.15.4]# vim /usr/local/nginx/conf/nginx.conf
Add to the http module (around line 20):
upstream fdfs_group1 {
    server 192.168.1.40:8888 weight=1 max_fails=2 fail_timeout=30s;
}
upstream fdfs_group2 {
    server 192.168.1.50:8888 weight=1 max_fails=2 fail_timeout=30s;
}
Add two location blocks below (around line 48):
location ~ /group1(/.*) {
    proxy_pass http://fdfs_group1;
}
location ~ /group2(/.*) {
    proxy_pass http://fdfs_group2;
}
Start nginx
[root@localhost ~]# /usr/local/nginx/sbin/nginx
[root@localhost ~]# netstat -anpt | grep 80
Check in the browser, still using a file ID (the image deleted above needs to be uploaded again first):
[root@localhost ~]# fdfs_upload_file /etc/fdfs/client.conf.sample whale.png
group2/M00/00/00/wKgBMl3cpCWAZGQIADuP0gQAyTs723.png
Browser access: http://192.168.1.10/group2/M00/00/00/wKgBMl3cpCWAZGQIADuP0gQAyTs723.png
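The same check can be done from the command line with curl; an HTTP 200 response means the reverse proxy is routing the group correctly:
[root@localhost ~]# curl -I http://192.168.1.10/group2/M00/00/00/wKgBMl3cpCWAZGQIADuP0gQAyTs723.png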
The experiment is over