In this article, the editor shares how to switch TensorFlow-Serving's model storage engine between HDFS and S3. I hope you will get something out of it; let's go through it together.
1. Using S3 as the model storage for TensorFlow-Serving
Note: S3 here refers not only to Amazon S3 but also to MinIO and any other object storage that implements the S3 protocol. In testing, Tencent Cloud COS worked with TensorFlow-Serving 2.2.0 and earlier but had compatibility problems with 2.3.0, while MinIO had no problems.
Set the following environment variables:

AWS_ACCESS_KEY_ID=XXXXX                   # access key ID credential required for S3 access
AWS_SECRET_ACCESS_KEY=XXXXX               # secret access key credential required for S3 access
AWS_REGION=us-east-1                      # region of the S3 bucket (optional)
S3_ENDPOINT=s3.us-east-1.amazonaws.com    # S3 access endpoint
S3_USE_HTTPS=1                            # use the HTTPS protocol; set to 0 to disable
S3_VERIFY_SSL=1                           # when HTTPS is used, controls whether SSL verification is enabled; set to 0 to disable
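For example, these variables can be passed with -e when starting the stock tensorflow/serving image. This is a minimal sketch, not taken from the original article: the bucket name my-bucket and the model name resnet are placeholders, and the stock 2.3.0 image is assumed to have S3 support built in.

# serve a model stored in S3 with the stock image (gRPC on 8500, REST on 8501, the image's defaults)
docker run -p 8500:8500 -p 8501:8501 --name tensorflow-serving-s3 \
    -e MODEL_NAME=resnet \
    -e MODEL_BASE_PATH=s3://my-bucket/tensorflow \
    -e AWS_ACCESS_KEY_ID=XXXXX \
    -e AWS_SECRET_ACCESS_KEY=XXXXX \
    -e AWS_REGION=us-east-1 \
    -e S3_ENDPOINT=s3.us-east-1.amazonaws.com \
    -e S3_USE_HTTPS=1 \
    -e S3_VERIFY_SSL=1 \
    -t tensorflow/serving:2.3.0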
2. Using HDFS as the model storage for TensorFlow-Serving

Currently, if you want to use HDFS as the storage engine, you need to rebuild the official Docker image to include a Hadoop client.
1. Download the corresponding Hadoop package and decompress it
https://archive.apache.org/dist/hadoop/core/hadoop-2.6.0/hadoop-2.6.0.tar.gz
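For example (a minimal sketch using wget and tar; any download tool works):

# download and extract Hadoop 2.6.0; this produces the hadoop-2.6.0 directory
# that the Dockerfile below copies into the image
wget https://archive.apache.org/dist/hadoop/core/hadoop-2.6.0/hadoop-2.6.0.tar.gz
tar -xzf hadoop-2.6.0.tar.gz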
2. The Dockerfile is as follows:
FROM tensorflow/serving:2.3.0

RUN apt update && apt install -y openjdk-8-jre

COPY hadoop-2.6.0 /root/hadoop

ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/
ENV HADOOP_HDFS_HOME /root/hadoop
ENV LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:${JAVA_HOME}/jre/lib/amd64/server

RUN echo '#!/bin/bash \n\n\
CLASSPATH=$(${HADOOP_HDFS_HOME}/bin/hadoop classpath --glob) tensorflow_model_server --port=8500 --rest_api_port=9000 \
--model_name=${MODEL_NAME} --model_base_path=${MODEL_BASE_PATH}/${MODEL_NAME} \
"$@"' > /usr/bin/tf_serving_entrypoint.sh \
&& chmod +x /usr/bin/tf_serving_entrypoint.sh

EXPOSE 8500
EXPOSE 9000

ENTRYPOINT ["/usr/bin/tf_serving_entrypoint.sh"]
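For readability, the entrypoint script generated by the RUN echo step above is roughly equivalent to the following (a reconstruction for illustration, not a separate file in the original):

#!/bin/bash

# build the Hadoop classpath so tensorflow_model_server can read hdfs:// paths,
# then start the server with gRPC on port 8500 and the REST API on port 9000
CLASSPATH=$(${HADOOP_HDFS_HOME}/bin/hadoop classpath --glob) \
tensorflow_model_server --port=8500 --rest_api_port=9000 \
  --model_name=${MODEL_NAME} --model_base_path=${MODEL_BASE_PATH}/${MODEL_NAME} \
  "$@"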
3. Build the Docker image
docker build -t tensorflow_serving:2.3.0-hadoop-2.6.0 .

Run the build from the directory that contains the Dockerfile and the extracted hadoop-2.6.0 directory, since the COPY instruction expects hadoop-2.6.0 to be in the build context.
4. Run TensorFlow-Serving
docker run -p 9001:9000 --name tensorflow-serving \
    -e MODEL_NAME=resnet \
    -e MODEL_BASE_PATH=hdfs://172.16.36.234:8020/user/model/tensorflow/ \
    -t tensorflow_serving:2.3.0-hadoop-2.6.0
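To check that the model loaded from HDFS, you can query TensorFlow-Serving's standard model status REST endpoint. A hedged example: the container's REST port 9000 is mapped to 9001 on the host above, and resnet is the model name used in the run command.

# returns the version and state (e.g. AVAILABLE) of the served model
curl http://localhost:9001/v1/models/resnet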
After reading this article, you should have a basic understanding of how TensorFlow-Serving switches its model storage engine between HDFS and S3. Thank you for reading!