How to use docker compose to build a fastDFS file server

This article shows how to use docker compose to build a fastDFS file server. The approach is simple, fast, and practical; the details are as follows:
Platform: Mac M1
Note: about the IP address
Regarding Docker's network model, Docker's host mode was described in a previous article:
If a container is started in host mode, it does not get its own Network Namespace; it shares one with the host. The container does not virtualize its own network card or configure its own IP; instead it uses the host's IP and ports. Other aspects of the container, such as the file system and process list, remain isolated from the host.
The catch is that even though host mode means the container uses the host's IP and ports, putting localhost as the IP in the configuration files does not actually reach the services. My understanding of how to choose the IP address (corrections welcome) is as follows:
Console output when starting the tracker (screenshot not included in this copy):
The network that 192.168.64.2 belongs to (screenshot not included):
The network that 192.168.65.4 belongs to (screenshot not included):
In the configuration below, 192.168.64.2 is the address used for the tracker.
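If it is unclear which host address to use, the small standalone Java snippet below (my own addition for illustration, not part of the article's project) lists the host's IPv4 addresses so you can pick the one to put into TRACKER_SERVER instead of localhost:

import java.net.InetAddress;
import java.net.NetworkInterface;
import java.net.SocketException;
import java.util.Collections;

// Lists every IPv4 address the host exposes, as a quick aid for choosing
// the address to use for TRACKER_SERVER instead of localhost.
public class ListHostAddresses {
    public static void main(String[] args) throws SocketException {
        for (NetworkInterface nif : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            for (InetAddress addr : Collections.list(nif.getInetAddresses())) {
                if (addr.getAddress().length == 4) { // keep IPv4 only
                    System.out.println(nif.getName() + " -> " + addr.getHostAddress());
                }
            }
        }
    }
}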
File directory
├── docker-compose.yaml
├── nginx
│   └── nginx.conf
├── storage
│   └── data
├── tracker
│   └── conf
│       └── client.conf
└── store_path
./docker-compose.yaml
Version: "2" services: fastdfs-tracker: hostname: fastdfs-tracker container_name: fastdfs-tracker image: season/fastdfs:1.2 network_mode: "host" command: tracker volumes: -. / tracker/data:/fastdfs/tracker/data -. / tracker/conf:/etc/fdfs fastdfs-storage: hostname: fastdfs-storage container_name: fastdfs-storage image: season/fastdfs:1.2 network_mode: "host" volumes: -. / storage/data:/fastdfs/storage/data -. / store_path:/fastdfs/store_path environment:-TRACKER_SERVER=192.168.64.2:22122 command: storage depends_on:-fastdfs-tracker fastdfs-nginx: hostname: fastdfs-nginx container_name: fastdfs-nginx Image: season/fastdfs:1.2 network_mode: "host" volumes: -. / nginx/nginx.conf:/etc/nginx/conf/nginx.conf -. / store_path:/fastdfs/store_path environment:-TRACKER_SERVER=192.168.64.2:22122 command: nginx
./tracker/conf/client.conf
# connect timeout in seconds
# default value is 30s
connect_timeout=30

# network timeout in seconds
# default value is 30s
network_timeout=60

# the base path to store log files
base_path=/fastdfs/client

# tracker_server can occur more than once, and tracker_server format is
# "host:port", host can be hostname or ip address
# needs to modify the ip
tracker_server=192.168.64.2:22122

# standard log level as syslog, case insensitive, value list:
### emerg for emergency
### alert
### crit for critical
### error
### warn for warning
### notice
### info
### debug
log_level=info

# if use connection pool
# default value is false
# since V4.05
use_connection_pool = false

# connections whose the idle time exceeds this time will be closed
# unit: second
# default value is 3600
# since V4.05
connection_pool_max_idle_time = 3600

# if load FastDFS parameters from tracker server
# since V4.05
# default value is false
load_fdfs_parameters_from_tracker=false

# if use storage ID instead of IP address
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# default value is false
# since V4.05
use_storage_id = false

# specify storage ids filename, can use relative or absolute path
# same as tracker.conf
# valid only when load_fdfs_parameters_from_tracker is false
# since V4.05
storage_ids_filename = storage_ids.conf

# HTTP settings
http.tracker_server_port=80

# use "#include" directive to include HTTP other settings
##include http.conf
./nginx/nginx.conf
#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       9800;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        # modified part
        location / {
            root   /fastdfs/store_path/data;
            ngx_fastdfs_module;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}
SpringBoot integration with fastDFS
Add dependency
<dependency>
    <groupId>com.github.tobato</groupId>
    <artifactId>fastdfs-client</artifactId>
    <version>1.27.2</version>
</dependency>
application.yaml
# distributed file system configuration
fdfs:
  # change according to your ip
  ip: 192.168.64.2
  # socket connection timeout
  soTimeout: 1500
  connectTimeout: 600
  # multiple trackers are supported
  trackerList:
    - ${fdfs.ip}:22122
  # nginx's ip and port
  # IDEA may prompt to use https; for configuring SSL in nginx, see the linked article (link not included here)
  web-server-url: http://${fdfs.ip}:9800/
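For reference, here is a minimal sketch (my own illustration, not part of the original article) of how the custom web-server-url property could be injected into one of your own beans. The tobato starter consumes the fdfs.* settings itself, so this class is optional and its name is made up:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

// Illustration only: binds the custom fdfs.web-server-url property so it can be
// used outside of the tobato client's own FdfsWebServer bean if needed.
@Component
public class FdfsWebServerProperties {

    @Value("${fdfs.web-server-url}")
    private String webServerUrl;

    public String getWebServerUrl() {
        return webServerUrl;
    }
}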
FastDFSConfig.java
@Configuration
// import the FastDFS-Client component
@Import(FdfsClientConfig.class)
// avoid the JMX bean re-registration problem
@EnableMBeanExport(registration = RegistrationPolicy.IGNORE_EXISTING)
public class FastDFSConfig {
}
FastDFSUtil.java
@Component
public class FastDFSUtil {

    @Resource
    private FastFileStorageClient fastFileStorageClient;

    @Resource
    private FdfsWebServer fdfsWebServer;

    public String uploadFile(MultipartFile file) throws IOException {
        StorePath storePath = fastFileStorageClient.uploadFile(
                file.getInputStream(),
                file.getSize(),
                FilenameUtils.getExtension(file.getOriginalFilename()),
                null);
        String fullPath = storePath.getFullPath();
        getResAccessUrl(fullPath);
        return fullPath;
    }

    public String uploadFile(File file) {
        try {
            FileInputStream inputStream = new FileInputStream(file);
            StorePath storePath = fastFileStorageClient.uploadFile(
                    inputStream,
                    file.length(),
                    FilenameUtils.getExtension(file.getName()),
                    null);
            return storePath.getFullPath();
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }

    public byte[] downloadFile(String filePath) {
        StorePath storePath = StorePath.parseFromUrl(filePath);
        return fastFileStorageClient.downloadFile(
                storePath.getGroup(), storePath.getPath(), new DownloadByteArray());
    }

    public Boolean deleteFile(String filePath) {
        if (StringUtils.isEmpty(filePath)) {
            return false;
        }
        try {
            StorePath storePath = StorePath.parseFromUrl(filePath);
            fastFileStorageClient.deleteFile(storePath.getGroup(), storePath.getPath());
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
        return true;
    }

    /**
     * Build the full access URL of an uploaded file.
     *
     * @param path the file path returned by FastDFS
     * @return the full URL
     */
    public String getResAccessUrl(String path) {
        return fdfsWebServer.getWebServerUrl() + path;
    }
}
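As a quick sanity check, here is a minimal sketch (assuming the FastDFSUtil bean above; the class name and the sample file path are placeholders of mine) that uploads a local file at application startup, prints its access URL, and deletes it again:

import java.io.File;
import javax.annotation.Resource; // use jakarta.annotation.Resource on Spring Boot 3+
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;

// Smoke test: upload a local file at startup, print its full access URL, then delete it.
// The file path is a placeholder; point it at a file that exists on your machine.
@Component
public class FastDFSSmokeTest implements CommandLineRunner {

    @Resource
    private FastDFSUtil fastDfsUtil;

    @Override
    public void run(String... args) {
        String fullPath = fastDfsUtil.uploadFile(new File("/tmp/demo.txt"));
        if (fullPath != null) {
            // e.g. http://192.168.64.2:9800/group1/M00/00/00/xxxxx.txt
            System.out.println(fastDfsUtil.getResAccessUrl(fullPath));
            fastDfsUtil.deleteFile(fullPath);
        }
    }
}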
FastDFSController.java
@RestController
@RequestMapping("/fast-dfs")
public class FastDFSController {

    @Resource
    private FastDFSUtil fastDfsUtil;

    /**
     * @param file the file to upload
     */
    @PostMapping("")
    @Transactional
    public void uploadFile(MultipartFile file, String cuisineId) throws IOException {
        String s = fastDfsUtil.uploadFile(file);
        String resAccessUrl = fastDfsUtil.getResAccessUrl(s);
    }

    /**
     * @param filePath the FastDFS file path
     * @param response the HTTP response
     * @throws IOException on write failure
     */
    @GetMapping("")
    public void downloadFile(String filePath, HttpServletResponse response) throws IOException {
        byte[] bytes = fastDfsUtil.downloadFile(filePath);
        String[] split = filePath.split("/");
        String fileName = split[split.length - 1];
        // force a download instead of opening the file in the browser
        response.setContentType("application/force-download");
        fileName = URLEncoder.encode(fileName, StandardCharsets.UTF_8);
        response.setHeader("Content-Disposition", "attachment;filename=" + fileName);
        IOUtils.write(bytes, response.getOutputStream());
    }

    /**
     * Streaming mode: the video can only be watched from beginning to end;
     * you cannot manually seek back to content that has already played.
     *
     * @param filePath the FastDFS file path
     * @param response the HTTP response
     * @throws IOException on write failure
     */
    @GetMapping("/play")
    public void streamMedia(String filePath, HttpServletResponse response) throws IOException {
        byte[] bytes = fastDfsUtil.downloadFile(filePath);
        IOUtils.copy(new ByteArrayInputStream(bytes), response.getOutputStream());
        response.flushBuffer();
    }
}

At this point, I believe you have a better understanding of how to use docker compose to build a fastDFS file server. Give it a try in practice!