2025-01-16 Update From: SLTechnology News&Howtos
Shulou(Shulou.com)06/01 Report--
What is the k8s user interface k8s-manager, and how do you deploy it? Many people without hands-on experience are unsure where to start, so this article walks through the deployment step by step; hopefully it helps you get k8s-manager running.
k8s user interface: k8s-manager
1. Introduction to k8s-manager:
①. k8s-manager is a browser-based interface to the Kubernetes API, offering functionality similar to kubectl.
②. The current version serves the same purpose as the official kube-dashboard, but is not at the same level of maturity.
③. The k8s-manager image is about 200 MB, which is somewhat large. Inside the container an nginx service is running; its config file is /etc/nginx/sites-enabled/default and the document root is /var/www.
④. The web UI can only display simple information such as namespaces, events, and nodes.
⑤. The Resource Quotas feature is not yet available; hopefully it will be added.
⑥. Suitable for learning and extension only.
2. Deployment:
①. Pull the image:
docker pull mlamina/k8s-manager:latest
Watching the pull, you can see that the image was built with many RUN instructions, one layer per step:
[root@localhost kube-1.2]# docker pull mlamina/k8s-manager:latest
Trying to pull repository docker.io/mlamina/k8s-manager ... latest: Pulling from mlamina/k8s-manager
cacc99976415: Pull complete
5b66679e02f4: Pull complete
1f8c9c887b89: Pull complete
f0dc6e5bff03: Pull complete
9dcf5abc367e: Pull complete
9fffafe11022: Pull complete
ca36392ac8a5: Pull complete
d26280472f7d: Pull complete
32e3fac5657a: Pull complete
3972aa49a3e3: Pull complete
4caa999e314e: Pull complete
1a2d1ead0644: Pull complete
2a06190131f5: Pull complete
104b4e83cb1b: Pull complete
2db0330dd6ee: Pull complete
af5842bf4830: Pull complete
d47dfdd163f1: Pull complete
d84ad8831181: Pull complete
dd66db5193b6: Pull complete
67fc7feb5c34: Pull complete
3cead704ab86: Pull complete
22b28de9f034: Already exists
Digest: sha256:53e80dcc60a6b23169025f323bcf85a6f5518ab1a973a8e3d294cc81e5ee0017
Status: Downloaded newer image for docker.io/mlamina/k8s-manager:latest
②. Create the namespace:
# create the namespace
[1065][root@www : kube-1.2]# cat kube-system.json
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "kube-system"
  }
}
[1066][root@www : kube-1.2]#
[1066][root@www : kube-1.2]# kubectl create -f kube-system.json
namespace "kube-system" created
[1067][root@www : kube-1.2]#
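For reference, the same manifest can also be generated programmatically and fed to `kubectl create -f -`. A minimal Python sketch, using the same field values as kube-system.json above:

```python
import json

def namespace_manifest(name):
    """Build a v1 Namespace manifest equivalent to kube-system.json."""
    return {
        "kind": "Namespace",
        "apiVersion": "v1",
        "metadata": {"name": name},
    }

# Serialize to JSON; kubectl accepts this on stdin via `kubectl create -f -`
manifest = namespace_manifest("kube-system")
print(json.dumps(manifest, indent=2))
```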
③. Create the k8s-manager ReplicationController and its pod:
[1069][root@www: kube-1.2]# cat k8s-manager-rc.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: k8s-manager
  namespace: kube-system
  labels:
    app: k8s-manager
spec:
  replicas: 1
  selector:
    app: k8s-manager
  template:
    metadata:
      labels:
        app: k8s-manager
    spec:
      containers:
      - image: docker.io/mlamina/k8s-manager:latest
        name: k8s-manager
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 80
          name: http
[1070][root@www: kube-1.2]#
[1070][root@www: kube-1.2]# kubectl create -f k8s-manager-rc.yml
replicationcontroller "k8s-manager" created
[1071][root@www: kube-1.2]#
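One detail worth checking in any ReplicationController manifest: the `spec.selector` must match the pod template's labels, otherwise the controller will not manage the pods it creates. A small sanity-check sketch (not part of kubectl, just an illustration of the matching rule):

```python
def selector_matches(selector, labels):
    """True if every key/value pair in the RC selector appears in the pod labels."""
    return all(labels.get(k) == v for k, v in selector.items())

# Values from the k8s-manager-rc.yml manifest above
rc_selector = {"app": "k8s-manager"}
pod_labels = {"app": "k8s-manager"}
assert selector_matches(rc_selector, pod_labels)
```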
[1072][root@www: kube-1.2]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system k8s-manager-3ry8b 0/1 ContainerCreating 0 15s
[1072][root@www: kube-1.2]#
[1073][root@www: kube-1.2]# /usr/bin/kubectl describe pod k8s-manager-3ry8b --namespace=kube-system
Name: k8s-manager-3ry8b
Namespace: kube-system
Node: 192.168.16.234/192.168.16.234
Start Time: Wed, 29 Jun 2016 14:16:30 +0800
Labels: app=k8s-manager
Status: Running
IP: 172.22.3.2
Controllers: ReplicationController/k8s-manager
Containers:
k8s-manager:
Container ID: docker://54cdc0e3727ef91c9d32560fa8c16ec06b4371d33e0a38bfe36a2938c2324195
Image: docker.io/mlamina/k8s-manager:latest
Image ID: docker://22b28de9f0345b9add9bae73ea191a7441783ecfc3a4f0a68abec8dc3ee803ef
Port: 80/TCP
QoS Tier:
memory: Guaranteed
cpu: Guaranteed
Limits:
cpu: 100m
memory: 50Mi
Requests:
cpu: 100m
memory: 50Mi
State: Running
Started: Wed, 29 Jun 2016 14:16:50 +0800
Ready: True
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready True
Volumes:
default-token-pu9pf:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-pu9pf
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
52s 52s 1 {kubelet 192.168.16.234} spec.containers{k8s-manager} Normal Pulling pulling image "docker.io/mlamina/k8s-manager:latest"
47s 47s 1 {default-scheduler } Normal Scheduled Successfully assigned k8s-manager-3ry8b to 192.168.16.234
37s 37s 1 {kubelet 192.168.16.234} spec.containers{k8s-manager} Normal Pulled Successfully pulled image "docker.io/mlamina/k8s-manager:latest"
36s 36s 1 {kubelet 192.168.16.234} spec.containers{k8s-manager} Normal Created Created container with docker id 54cdc0e3727e
35s 35s 1 {kubelet 192.168.16.234} spec.containers{k8s-manager} Normal Started Started container with docker id 54cdc0e3727e
[1074][root@www: kube-1.2]#
[1074][root@www: kube-1.2]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system k8s-manager-3ry8b 1/1 Running 0 53s
[1075][root@www: kube-1.2]#
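In the `describe` output above the pod's QoS tier is Guaranteed: when only `limits` are set, Kubernetes defaults `requests` to the same values, and equal requests and limits yield the Guaranteed class. A simplified sketch of that classification (the real kubelet logic also handles multi-container pods and per-resource edge cases):

```python
def qos_class(requests, limits):
    """Approximate Kubernetes QoS classification for a single container."""
    if limits and requests == limits:
        return "Guaranteed"
    if requests or limits:
        return "Burstable"
    return "BestEffort"

# Values from the k8s-manager container spec above
res = {"cpu": "100m", "memory": "50Mi"}
print(qos_class(res, res))  # "Guaranteed"
```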
④. Create the service:
[1075][root@www: kube-1.2]# cat k8s-manager-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: k8s-manager
  namespace: kube-system
  labels:
    app: k8s-manager
spec:
  ports:
  - port: 80
    targetPort: http
  selector:
    app: k8s-manager
[1076][root@www: kube-1.2]#
[1076][root@www: kube-1.2]# kubectl create -f k8s-manager-svc.yml
service "k8s-manager" created
[1077][root@www: kube-1.2]#
[1077][root@www: kube-1.2]#
[1077][root@www: kube-1.2]# kubectl get services --all-namespaces
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes 10.1.0.1 443/TCP 8m
kube-system k8s-manager 10.1.65.137 80/TCP 4s
[1078][root@www: kube-1.2]#
3. The k8s-manager web interface:
Access the web UI:
http://192.168.16.100:8080/api/v1/proxy/namespaces/kube-system/services/k8s-manager/
The official docs use https, but that did not work here; plain http is fine for this setup.
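The URL above follows the apiserver's service-proxy path pattern used in this Kubernetes version (`/api/v1/proxy/namespaces/<ns>/services/<svc>/`; newer releases moved to a `/proxy` suffix form). A small helper to build it; the apiserver address is this cluster's, substitute your own:

```python
def service_proxy_url(apiserver, namespace, service, path=""):
    """Build the apiserver proxy URL for a ClusterIP service (k8s 1.2-era v1 API)."""
    return "{}/api/v1/proxy/namespaces/{}/services/{}/{}".format(
        apiserver, namespace, service, path)

url = service_proxy_url("http://192.168.16.100:8080", "kube-system", "k8s-manager")
print(url)  # the same URL used to open the web UI above
```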
That covers what k8s-manager is and how to deploy it on Kubernetes.