What is FastDFS? FastDFS distributed file storage
FastDFS is an open source lightweight distributed file system. It solves the problems of massive data storage and load balancing, and is especially suitable for online services built on small and medium-sized files (4KB < FileSize < 500MB), such as video, audio, and image sites.

FastDFS is implemented in pure C and supports Linux, FreeBSD, and other UNIX-like systems. It is not a general-purpose file system and can only be accessed through its proprietary API; C, Java, and PHP APIs are currently provided. Tailor-made for Internet applications, it solves the problem of large-capacity file storage and pursues high performance and high scalability. FastDFS can be viewed as a file-based key-value store, so "distributed file storage service" is the more accurate description.

Features of FastDFS:

- Files are not split into blocks; each uploaded file corresponds one-to-one to a file in the OS file system
- Files with identical content can be saved only once, saving disk space (when a group is configured with a single storage)
- Downloads support HTTP; the built-in web server can be used, or FastDFS can work together with another web server
- Online capacity expansion is supported
- Master/slave files are supported
- File attributes (meta-data) can be saved on the storage server
- Since V2.0, network communication uses libevent and supports concurrent access, giving better overall performance

FastDFS architecture

Tracker Server (tracking server)
The tracking server mainly does scheduling and plays a load-balancing role. It records the status information of all groups and storage servers in the cluster in memory, and is the hub of the interaction between clients and data servers. Compared with the master in GFS, it is much simpler: it does not record file access information and uses very little memory.
Tracker is the coordinator of FastDFS and is responsible for managing all storage servers and groups. After startup, each storage connects to the tracker, informs it of its group and other information, and maintains a periodic heartbeat. Based on the storage heartbeats, tracker builds a mapping table of group -> list of storage servers.
Tracker manages very little meta-information and keeps all of it in memory. Moreover, the meta-information on tracker is generated from the information reported by storage, so tracker does not need to persist any data. This makes tracker very easy to scale: simply adding tracker machines extends it into a tracker cluster. Every tracker in the cluster is completely equivalent; all trackers accept heartbeat information from storage and generate metadata to serve read and write requests.
Storage Server (storage server)
The storage server, also known as the storage node or data server, is where files and file attributes (meta-data) are stored. It manages files directly through the OS's file system calls.
Storage is organized in groups: a group contains multiple storage machines whose data are mutual backups. The usable space of a group is that of its smallest storage, so it is recommended that all storage servers in a group have the same capacity, to avoid wasting resources.
Organizing storage into groups makes application isolation, load balancing, and replica-count management convenient (the number of storage servers in a group is that group's replica count). For example, writing the data of different services to different groups achieves resource isolation, while writing one service's data to multiple groups achieves load balancing.
The capacity of a group is limited by the storage capacity of a single machine. When a storage in a group fails, data recovery can only rely on the other machines in the group, so recovery can take a long time.
Each storage in a group relies on the local file system, and a storage can be configured with multiple data storage directories.
When storage receives a write request for a file, it selects one of the storage directories to store the file according to the configured rules. To avoid having too many files in a single directory, when storage starts for the first time it creates two levels of subdirectories (256 × 256 = 65,536 in total) in each data storage directory. Newly written files are routed to one of the subdirectories by hashing, and the file data is then stored in that directory as an ordinary local file.
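As a rough sketch (not the FastDFS source, just an illustration of the layout it produces, assuming the store_path used throughout this article), the two-level directory tree looks like this:

```bash
#!/bin/bash
# Sketch of the two-level layout storage creates on first start:
# 256 x 256 = 65536 subdirectories under each data storage directory.
store_path=/fastdfs/store_path   # assumed storage directory from this article
for i in $(seq 0 255); do
  for j in $(seq 0 255); do
    mkdir -p "$store_path/data/$(printf '%02X' "$i")/$(printf '%02X' "$j")"
  done
done
```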
Client
As the initiator of the business request, the client uses the TCP/IP protocol to interact with the tracking server or storage node through the proprietary interface.
Group (volume)
A group, also known as a volume. The files on the servers within the same group are exactly the same, and the storage servers in a group are peers: file upload, deletion, and other operations can be performed on any storage server in the group. A storage server can host multiple groups, and each group can correspond to a device on the storage server.
Meta data (file-related attributes)
Stored as key-value pairs, for example width=1024,height=768.
FastDFS file upload mechanism
First, the client asks the tracker service for the IP address and port of a storage server in the target group. The client then sends the upload request to that IP address and port. On receiving the request, the storage server creates the file, writes the file content to disk, and returns the file_id, path information, file name, and other details to the client, which saves them to complete the upload.
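The bundled CLI client performs both steps in one command (the same tool is used in the test section later in this article; it assumes client.conf points tracker_server at a running tracker):

```bash
# Upload a file; the tool asks the tracker, then uploads to the storage.
fdfs_upload_file /etc/fdfs/client.conf /etc/hosts
# prints the file ID on success, e.g.
# group1/M00/00/00/rBABql33W5CAK7yFAAAAnrLoM8Y9254622
```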
FastDFS internal storage mechanism:
1. Select tracker server
When there is more than one tracker server in the cluster, the tracker servers are completely peer-to-peer, so the client can choose any tracker when uploading a file. When a tracker receives an upload file request, it assigns a group that can store the file. The following group-selection rules are supported:
- Round Robin: poll among all groups
- Specified Group: a specific group is designated
- Load Balance: the group with more free storage space is preferred
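These rules map to the store_lookup setting in tracker.conf (the default values below match the tracker variable list later in this article):

```bash
# Group-selection knobs in tracker.conf
store_lookup=2      # 0: round robin, 1: specified group, 2: load balance
store_group=group1  # only consulted when store_lookup=1
```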
2. Select storage server
When the group has been selected, tracker selects one storage server in the group for the client. The following storage-selection rules are supported:
- Round Robin: poll among all storage servers in the group
- First server ordered by ip: sort by IP and take the first
- First server ordered by priority: sort by priority and take the first (the priority is configured on the storage)
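The corresponding tracker.conf knob (default from the variable list later in this article; the value-to-rule mapping is the standard FastDFS one):

```bash
# Storage-selection knob in tracker.conf
store_server=0      # 0: round robin, 1: first by IP, 2: first by priority
```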
3. Select storage path
When the storage server has been allocated, the client sends the write request to storage, which assigns a data storage directory to the file. The following path-selection rules are supported:
- Round Robin: poll among the multiple storage directories
- The directory with the most remaining storage space is preferred
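Again there is a matching tracker.conf knob (default from the variable list later on; the value mapping is the standard one):

```bash
# Storage-path selection knob in tracker.conf
store_path=0        # 0: round robin, 2: the path with the most free space
```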
4. Generate FileID
After the directory is selected, storage generates a FileID for the file, concatenated from: storage server IP + file creation time + file size + file CRC32 + a random number. The binary string is then base64-encoded into a printable string. Each storage directory contains two levels of 256 × 256 subdirectories; storage hashes the fileid (reportedly two hash passes) to route the file to one of the subdirectories, and then stores the file in that subdirectory under the fileid as its name.
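An illustrative sketch of that recipe (the real FastDFS code packs the fields in binary before base64-encoding them; here cksum stands in for the CRC32, so the output format is approximate):

```bash
# Build a FileID-like string: ip + timestamp + size + crc + random, base64'd
f=/etc/hosts
ip=$(printf '%02x' 172 16 1 170)              # storage server IP (example)
ts=$(printf '%08x' "$(date +%s)")             # file creation time
sz=$(printf '%08x' "$(wc -c < "$f")")         # file size
crc=$(cksum "$f" | awk '{printf "%08x", $1}') # checksum of file content
rnd=$(printf '%04x' "$RANDOM")                # random number
printf '%s%s%s%s%s' "$ip" "$ts" "$sz" "$crc" "$rnd" | base64
```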
5. Generate the file name
When the file has been stored in a subdirectory, it is considered stored successfully, and a file name is then generated for it. The file name is composed of: group, storage directory, two-level subdirectory, fileid, and file suffix (specified by the client, mainly used to distinguish file types).
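Putting the pieces together, the anatomy of a complete file name (using the example file ID from the test section later in this article):

```
group1/M00/00/00/rBABql33W5CAK7yFAAAAnrLoM8Y9254622
^      ^   ^     ^
group  |   two-level subdirectory (hashed from the fileid)
       |                base64-encoded fileid (plus optional client suffix)
       storage directory (M00 = store_path0)
```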
6. Write to disk and record the binlog
After each storage writes a file, it also writes a binlog entry. The binlog does not contain file data, only meta-information such as the file name. The binlog is used for background synchronization: storage records its synchronization progress with the other storage servers in the group, so that after a restart it can resume from the last recorded position. Progress is recorded as a timestamp.
7. Report information to tracker regularly.
Storage reports its synchronization progress to all trackers as part of the metadata, and tracker uses it as a reference when selecting a storage for reads (see the download mechanism below).
The file name generated by storage contains the source storage's ID/IP address and the file creation timestamp. Storage periodically reports its file synchronization status to tracker, including the timestamps it has synchronized to each of the other storage servers in the same group. On receiving such a report, tracker finds the minimum timestamp each storage in the group has been synchronized to, and keeps it in memory as an attribute of that storage.
Download mechanism
Given a file name, the client asks the tracker service for the IP and port of a storage server, then requests the file download from the returned IP address and port. On receiving the request, the storage server returns the file to the client.
As with upload, the client can choose any tracker server when downloading a file. The download request sent to the tracker must carry the file name. Tracker parses the file's group, size, creation time, and other information out of the file name, and then selects one storage to serve the read request. Because files within a group are synchronized asynchronously in the background, a file may not yet have been synchronized to every storage server at read time. To avoid routing reads to such a storage, tracker selects a readable storage in the group according to the following rules:
1. The storage is the source storage of the upload (determined from the storage ID/IP encoded in the file name).
2. (current time - file creation timestamp) > file synchronization delay threshold (e.g. one day), so the file is assumed to have been synchronized everywhere.
3. file creation timestamp < the timestamp the storage has been synchronized to (for example, if the storage has been synchronized to timestamp 10 and the file's creation timestamp is 8, the file has already been synchronized).
4. file creation timestamp == the timestamp the storage has been synchronized to, and (current time - file creation timestamp) > the maximum time to synchronize one file (e.g. 5 minutes).
The two parameters above, the file synchronization delay threshold and the maximum time to synchronize one file, are configured in tracker.conf as storage_sync_file_max_delay and storage_sync_file_max_time respectively.
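For reference, the two settings as they appear in tracker.conf (the default values below match the tracker variable list later in this article):

```bash
# Thresholds used by tracker when picking a readable storage
storage_sync_file_max_delay = 86400   # file sync delay threshold, in seconds
storage_sync_file_max_time = 300      # max time to sync one file, in seconds
```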
Because FastDFS uses timestamps to work around file access problems caused by synchronization delay, the clocks of the servers in the cluster must be consistent, with an error of no more than 1s. Using an NTP time server to keep time consistent is therefore recommended.
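One common way to do this (assuming ntpdate is installed; running ntpd or chronyd as a service works just as well):

```bash
# One-shot clock sync against a public NTP pool
ntpdate -u pool.ntp.org
```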
FastDFS file synchronization mechanism
We have mentioned file synchronization several times above, but what does the actual synchronization mechanism look like?
FastDFS synchronizes files asynchronously by replicating the binlog. The storage server uses binlog files (which record file metadata) to log file uploads, deletions, and other operations, and synchronizes files according to the binlog. Here is an example of binlog content:
The binlog files are located at $base_path/data/sync/binlog.*
```
1574850031 C M00/4C/2D/rBCo3V3eTe-AP6SRAABk7o3hUY4681.wav
1574850630 C M00/4C/2D/rBCo3V3eUEaAPMwRAABnbqEmTEs918.wav
1574851230 C M00/4C/2D/rBCo3V3eUp6ARGlEAABhzomSJJo079.wav
1574851230 C M00/4C/2D/rBCo3V3eUp6ABSZWAABoDiunCqc737.wav
1574851830 C M00/4C/2D/rBCo3V3eVPaAYKlIAABormd65Ds168.wav
1574851830 C M00/4C/2D/rBCo3V3eVPaAPs-CAABljrrCwyI452.wav
1574851830 C M00/4C/2D/rBCo3V3eVPaAdSeKAABrLhlwnkU907.wav
1574852429 C M00/4C/2D/rBCo3V3eV02Ab4yKAABmLjpCyas766.wav
1574852429 C M00/4C/2D/rBCo3V3eV02AASzFAABorpw6oJw030.wav
1574852429 C M00/4C/2D/rBCo3V3eV02AHSM7AAB0jpYtHQA019.wav
```
As you can see, each binlog line has three columns: the timestamp, the operation type, and the file ID (without the group name).
The operation type is encoded as a single letter: a source operation is recorded with an uppercase letter, and the corresponding replayed (synchronized) operation with the lowercase letter. (A one-liner for tallying these codes follows the list below.)
C: upload files (upload)
D: delete files (delete)
A: append files (append)
M: partial file update (modify)
U: entire file update (set metadata)
T: truncate the file (truncate)
L: create a symbolic link (file deduplication function, only one copy of the same content is saved)
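A quick way to see these codes in a real binlog is to tally the operation-type column (assuming the storage base_path used throughout this article):

```bash
# Count binlog entries per operation type across all binlog files
awk '{print $2}' /fastdfs/storage/data/sync/binlog.* | sort | uniq -c
```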
The storage servers within a group are peer-to-peer; file upload, deletion, and other operations can be performed on any of them. File synchronization happens only between storage servers in the same group and uses a push model: the source server pushes to the other storage servers in the group. For each of the other storage servers in the group, the storage starts a dedicated thread to do the synchronization.
File synchronization is incremental, and the synchronized position is recorded in a mark file. The mark files are located under $base_path/data/sync/. Example mark file content:
```
binlog_index=3
binlog_offset=382
need_sync_old=1
sync_old_done=1
until_timestamp=1571976211
scan_row_count=2033425
sync_row_count=2033417
```
Because binlog replication is asynchronous, synchronization delay is inevitable, just like MySQL master-slave replication.
Data recovery: single-disk data recovery
When one of the storage disks in a group is damaged and we replace it, the data is recovered automatically after the disk has been replaced.
How storage decides whether single-disk data recovery is needed: it checks whether the two subdirectories 00/00 and FF/FF exist under the $store_path/data directory. If either does not exist, it automatically creates the required subdirectories and starts automatic single-disk data recovery.
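A sketch of that startup check (assuming $store_path is set to the data directory used in this article):

```bash
# If either boundary subdirectory is missing, recovery would be triggered
store_path=/fastdfs/store_path
if [ ! -d "$store_path/data/00/00" ] || [ ! -d "$store_path/data/FF/FF" ]; then
  echo "subdirectories missing: recreate them and start single-disk recovery"
fi
```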
Single disk data recovery logic:
1. Obtain an available storage server from the tracker server as the source server;
2. Pull the binlog of the corresponding storage path (matched by store_path order) from the source storage server and save it locally;
3. Replay the local binlog, copying each file from the source storage server into the corresponding directory under $store_path/data/;
4. The node can serve requests only after single-disk data recovery is complete.

Docker installation of FastDFS: startup analysis
Here we use season/fastdfs:1.2 as the image. Let's pull out its Dockerfile and see how it starts.
```dockerfile
RUN echo '/usr/local/libevent-2.0.14/lib' > /etc/ld.so.conf
RUN ldconfig
WORKDIR /FastDFS_v4.08
RUN ./make.sh C_INCLUDE_PATH=/usr/local/libevent-2.0.14/include LIBRARY_PATH=/usr/local/libevent-2.0.14/lib && ./make.sh install && ./make.sh clean
WORKDIR /nginx-1.8.0
RUN ./configure --user=root --group=root --prefix=/etc/nginx --with-http_stub_status_module --with-zlib=/zlib-1.2.8 --without-http_rewrite_module --add-module=/fastdfs-nginx-module/src
RUN make
RUN make install
RUN make clean
RUN ln -sf /etc/nginx/sbin/nginx /sbin/nginx
RUN mkdir /fastdfs
RUN mkdir /fastdfs/tracker
RUN mkdir /fastdfs/store_path
RUN mkdir /fastdfs/client
RUN mkdir /fastdfs/storage
RUN mkdir /fdfs_conf
RUN cp /FastDFS_v4.08/conf/* /fdfs_conf
RUN cp /fastdfs-nginx-module/src/mod_fastdfs.conf /fdfs_conf
WORKDIR /
RUN chmod a+x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
```
You can see that startup is handled by the /entrypoint.sh script, so let's look at its contents:
```bash
#!/bin/bash
#set -e
TRACKER_BASE_PATH="/fastdfs/tracker"
TRACKER_LOG_FILE="$TRACKER_BASE_PATH/logs/trackerd.log"
STORAGE_BASE_PATH="/fastdfs/storage"
STORAGE_LOG_FILE="$STORAGE_BASE_PATH/logs/storaged.log"
TRACKER_CONF_FILE="/etc/fdfs/tracker.conf"
STORAGE_CONF_FILE="/etc/fdfs/storage.conf"
NGINX_ACCESS_LOG_FILE="/etc/nginx/logs/access.log"
NGINX_ERROR_LOG_FILE="/etc/nginx/logs/error.log"
MOD_FASTDFS_CONF_FILE="/etc/fdfs/mod_fastdfs.conf"

# remove stale log files
if [ -f "/fastdfs/tracker/logs/trackerd.log" ]; then
    rm -rf "$TRACKER_LOG_FILE"
fi
if [ -f "/fastdfs/storage/logs/storaged.log" ]; then
    rm -rf "$STORAGE_LOG_FILE"
fi
if [ -f "$NGINX_ACCESS_LOG_FILE" ]; then
    rm -rf "$NGINX_ACCESS_LOG_FILE"
fi
if [ -f "$NGINX_ERROR_LOG_FILE" ]; then
    rm -rf "$NGINX_ERROR_LOG_FILE"
fi

if [ "$1" = 'shell' ]; then
    /bin/bash
fi

if [ "$1" = 'tracker' ]; then
    echo "start fdfs_trackerd..."
    if [ ! -d "/fastdfs/tracker/logs" ]; then
        mkdir "/fastdfs/tracker/logs"
    fi
    n=0
    array=()
    # read the template configuration file line by line
    while read line; do
        array[$n]="${line}"
        ((n++))
    done < /fdfs_conf/tracker.conf
    rm "$TRACKER_CONF_FILE"
    # ${!array[@]} expands to the array indexes
    for i in "${!array[@]}"; do
        # if STORE_GROUP was passed in, rewrite the store_group= line
        if [ ${STORE_GROUP} ]; then
            [[ "${array[$i]}" =~ "store_group=" ]] && array[$i]="store_group=${STORE_GROUP}"
        fi
        # append each (possibly rewritten) line to the final config
        echo "${array[$i]}" >> "$TRACKER_CONF_FILE"
    done
    touch "$TRACKER_LOG_FILE"
    ln -sf /dev/stdout "$TRACKER_LOG_FILE"
    fdfs_trackerd $TRACKER_CONF_FILE
    sleep 3s  # delay to wait for the pid file
    # tail -F --pid=`cat /fastdfs/tracker/data/fdfs_trackerd.pid` /fastdfs/tracker/logs/trackerd.log
    # wait `cat /fastdfs/tracker/data/fdfs_trackerd.pid`
    tail -F --pid=`cat /fastdfs/tracker/data/fdfs_trackerd.pid` /dev/null
fi

if [ "$1" = 'storage' ]; then
    echo "start fdfs_storaged..."
    n=0
    array=()
    while read line; do
        array[$n]="${line}"
        ((n++))
    done < /fdfs_conf/storage.conf
    rm "$STORAGE_CONF_FILE"
    for i in "${!array[@]}"; do
        if [ ${GROUP_NAME} ]; then
            [[ "${array[$i]}" =~ "group_name=" ]] && array[$i]="group_name=${GROUP_NAME}"
        fi
        if [ ${TRACKER_SERVER} ]; then
            [[ "${array[$i]}" =~ "tracker_server=" ]] && array[$i]="tracker_server=${TRACKER_SERVER}"
        fi
        echo "${array[$i]}" >> "$STORAGE_CONF_FILE"
    done
    if [ ! -d "/fastdfs/storage/logs" ]; then
        mkdir "/fastdfs/storage/logs"
    fi
    touch "$STORAGE_LOG_FILE"
    ln -sf /dev/stdout "$STORAGE_LOG_FILE"
    fdfs_storaged "$STORAGE_CONF_FILE"
    sleep 3s  # delay to wait for the pid file
    # tail -F --pid=`cat /fastdfs/storage/data/fdfs_storaged.pid` /fastdfs/storage/logs/storaged.log
    # wait -n `cat /fastdfs/storage/data/fdfs_storaged.pid`
    tail -F --pid=`cat /fastdfs/storage/data/fdfs_storaged.pid` /dev/null
fi

if [ "$1" = 'nginx' ]; then
    echo "starting nginx..."
    # ln log files to stdout/stderr
    touch "$NGINX_ACCESS_LOG_FILE"
    ln -sf /dev/stdout "$NGINX_ACCESS_LOG_FILE"
    touch "$NGINX_ERROR_LOG_FILE"
    ln -sf /dev/stderr "$NGINX_ERROR_LOG_FILE"
    # rewrite mod_fastdfs.conf from environment variables
    n=0
    array=()
    while read line; do
        array[$n]="${line}"
        ((n++))
    done < /fdfs_conf/mod_fastdfs.conf
    if [ -f "$MOD_FASTDFS_CONF_FILE" ]; then
        rm -rf "$MOD_FASTDFS_CONF_FILE"
    fi
    for i in "${!array[@]}"; do
        if [ ${GROUP_NAME} ]; then
            [[ "${array[$i]}" =~ "group_name=" ]] && array[$i]="group_name=${GROUP_NAME}"
        fi
        if [ ${TRACKER_SERVER} ]; then
            [[ "${array[$i]}" =~ "tracker_server=" ]] && array[$i]="tracker_server=${TRACKER_SERVER}"
        fi
        if [ ${URL_HAVE_GROUP_NAME} ]; then
            [[ "${array[$i]}" =~ "url_have_group_name=" ]] && array[$i]="url_have_group_name=${URL_HAVE_GROUP_NAME}"
        fi
        if [ ${STORAGE_SERVER_PORT} ]; then
            [[ "${array[$i]}" =~ "storage_server_port=" ]] && array[$i]="storage_server_port=${STORAGE_SERVER_PORT}"
        fi
        echo "${array[$i]}" >> "$MOD_FASTDFS_CONF_FILE"
    done
    nginx -g "daemon off;"
fi
```
A quick read of the script shows that it decides whether to start tracker, storage, or nginx according to the argument passed at startup. Now that we have seen the script, let's start testing.
Start tracker:

```bash
docker run -ti -d --name tracker \
    -v /etc/localtime:/etc/localtime \
    -v /tracker_data:/fastdfs/tracker/data \
    --net=host \
    --restart=always \
    season/fastdfs tracker
```
After startup, tracker listens on port 22122. The port can also be changed by passing an environment variable, e.g. `-e port=22222`.
All the configurations in the configuration file can be passed in as environment variables.
```bash
# Environment variables and default values that can be passed to tracker
disabled=false
bind_addr=
port=22122
connect_timeout=30
network_timeout=60
base_path=/fastdfs/tracker
max_connections=256
accept_threads=1
work_threads=4
store_lookup=2
store_group=group1
store_server=0
store_path=0
download_server=0
reserved_storage_space = 10%
log_level=info
run_by_group=
run_by_user=
allow_hosts=*
sync_log_buff_interval = 10
check_active_interval = 120
thread_stack_size = 64KB
storage_ip_changed_auto_adjust = true
storage_sync_file_max_delay = 86400
storage_sync_file_max_time = 300
use_trunk_file = false
slot_min_size = 256
slot_max_size = 16MB
trunk_file_size = 64MB
trunk_create_file_advance = false
trunk_create_file_time_base = 02:00
trunk_create_file_interval = 86400
trunk_create_file_space_threshold = 20G
trunk_init_check_occupying = false
trunk_init_reload_from_binlog = false
use_storage_id = false
storage_ids_filename = storage_ids.conf
id_type_in_filename = ip
store_slave_file_use_link = false
rotate_error_log = false
error_log_rotate_time=00:00
rotate_error_log_size = 0
use_connection_pool = false
connection_pool_max_idle_time = 3600
http.server_port=8080
http.check_alive_interval=30
http.check_alive_type=tcp
http.check_alive_uri=/status.html
```

Launch storage:

```bash
docker run -di --name storage \
    --restart=always \
    -v /storage_data:/fastdfs/storage/data \
    -v /store_path:/fastdfs/store_path \
    --net=host \
    -e TRACKER_SERVER=172.16.1.170:22122 \
    season/fastdfs:1.2 storage
```
Environment variables that can be passed to storage:
```bash
disabled=false
group_name=group1
bind_addr=
client_bind=true
port=23000
connect_timeout=30
network_timeout=60
heart_beat_interval=30
stat_report_interval=60
base_path=/fastdfs/storage
max_connections=256
buff_size = 256KB
accept_threads=1
work_threads=4
disk_rw_separated = true
disk_reader_threads = 1
disk_writer_threads = 1
sync_wait_msec=50
sync_interval=0
sync_start_time=00:00
sync_end_time=23:59
write_mark_file_freq=500
store_path_count=1
store_path0=/fastdfs/store_path
subdir_count_per_path=256
tracker_server=172.16.1.170:22122
log_level=info
run_by_group=
run_by_user=
allow_hosts=*
file_distribute_path_mode=0
file_distribute_rotate_count=100
fsync_after_written_bytes=0
sync_log_buff_interval=10
sync_binlog_buff_interval=10
sync_stat_file_interval=300
thread_stack_size=512KB
upload_priority=10
if_alias_prefix=
check_file_duplicate=0
file_signature_method=hash
key_namespace=FastDFS
keep_alive=0
use_access_log = false
rotate_access_log = false
access_log_rotate_time=00:00
rotate_error_log = false
error_log_rotate_time=00:00
rotate_access_log_size = 0
rotate_error_log_size = 0
file_sync_skip_invalid_record=false
use_connection_pool = false
connection_pool_max_idle_time = 3600
http.domain_name=
http.server_port=8888
```

Test FastDFS
Enter the tracker container:
```bash
docker exec -it tracker bash
grep 22122 /home/fdfs/client.conf
sed -i "s#`grep 22122 /home/fdfs/client.conf`#tracker_server=172.16.1.170:22122#g" /home/fdfs/client.conf
# use the following command to view fastdfs cluster status
fdfs_monitor /etc/fdfs/client.conf
# upload a file to test
fdfs_upload_file /etc/fdfs/client.conf /etc/hosts
group1/M00/00/00/rBABql33W5CAK7yFAAAAnrLoM8Y9254622
# download the file
root@test01:/etc/fdfs# fdfs_download_file /etc/fdfs/client.conf group1/M00/00/00/rBABql33W5CAK7yFAAAAnrLoM8Y9254622
root@test01:/etc/fdfs# ls -l rBABql33W5CAK7yFAAAAnrLoM8Y9254622
-rw-r--r-- 1 root root 158 Dec 16 10:26 rBABql33W5CAK7yFAAAAnrLoM8Y9254622
root@test01:/etc/fdfs# cat rBABql33W5CAK7yFAAAAnrLoM8Y9254622
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
# delete the file
fdfs_delete_file /etc/fdfs/client.conf group1/M00/00/00/rBABql33W5CAK7yFAAAAnrLoM8Y9254622
```

Start nginx so that other programs can access the stored files:

```bash
docker run -id --name fastdfs_nginx \
    --restart=always \
    -v /store_path:/fastdfs/store_path \
    -p 8888:80 \
    -e GROUP_NAME=group1 \
    -e TRACKER_SERVER=172.16.1.170:22122 \
    -e STORAGE_SERVER_PORT=23000 \
    season/fastdfs:1.2 nginx
```
Note:
```nginx.conf
server {
    listen 8888;
    server_name localhost;

    location ~ /group([0-9])/M00 {
        root /fastdfs/store_path/data;
        ngx_fastdfs_module;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}
```
Configuration items that can be changed:

```bash
grep -Ev "^#|^$" mod_fastdfs.conf
connect_timeout=2
network_timeout=30
base_path=/tmp
load_fdfs_parameters_from_tracker=true
storage_sync_file_max_delay = 86400
use_storage_id = false
storage_ids_filename = storage_ids.conf
tracker_server=tracker:22122
storage_server_port=23000
group_name=group1
url_have_group_name=true
store_path_count=1
# store_path0 must match the storage's store_path0 to build the same cluster
store_path0=/fastdfs/store_path
log_level=info
log_filename=
response_mode=proxy
if_alias_prefix=
flv_support = true
flv_extension = flv
group_count = 0
```
To scale out, we just need to start storage and nginx in the same way on other machines, and then put a load balancer in front of all the nginx instances.
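As a hedged illustration, the commands on an additional machine would simply mirror the ones above (the new node joins the same group and points at the same tracker; nothing here is new beyond running it on another host):

```bash
# On the new machine: start a storage that joins group1 via the tracker
docker run -di --name storage \
    --restart=always \
    -v /storage_data:/fastdfs/storage/data \
    -v /store_path:/fastdfs/store_path \
    --net=host \
    -e TRACKER_SERVER=172.16.1.170:22122 \
    season/fastdfs:1.2 storage

# ... and an nginx in front of its store_path
docker run -id --name fastdfs_nginx \
    --restart=always \
    -v /store_path:/fastdfs/store_path \
    -p 8888:80 \
    -e GROUP_NAME=group1 \
    -e TRACKER_SERVER=172.16.1.170:22122 \
    -e STORAGE_SERVER_PORT=23000 \
    season/fastdfs:1.2 nginx
```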