2025-01-15 Update From: SLTechnology News&Howtos
This article explains how cgroups are used in practice, by walking through the relevant code in runC. The implementation is compact and practical, so reading it is a fast way to learn how cgroups work.
In the runC project, the cgroups-related code lives under the directory runc/libcontainer/cgroups/.

The parts we focus on are apply_raw.go, which implements the cgroup manager, and the files that implement the operations of each cgroups subsystem (cpu, memory, and so on). apply_raw.go implements the Manager interface defined in cgroups.go:
type Manager interface {
	// Applies cgroup configuration to the process with the specified pid
	Apply(pid int) error

	// Returns the PIDs inside the cgroup set
	GetPids() ([]int, error)

	// Returns the PIDs inside the cgroup set & all sub-cgroups
	GetAllPids() ([]int, error)

	// Returns statistics for the cgroup set
	GetStats() (*Stats, error)

	// Toggles the freezer cgroup according with specified state
	Freeze(state configs.FreezerState) error

	// Destroys the cgroup set
	Destroy() error

	// NewCgroupManager() and LoadCgroupManager() require following attributes:
	//	Paths   map[string]string
	//	Cgroups *cgroups.Cgroup
	// Paths maps cgroup subsystem to path at which it is mounted.
	// Cgroups specifies specific cgroup settings for the various subsystems

	// Returns cgroup paths to save in a state file and to be able to
	// restore the object later.
	GetPaths() map[string]string

	// Sets the cgroup as configured.
	Set(container *configs.Config) error
}
apply_raw.go implements each of the eight methods of the Manager interface defined above:
type Manager struct {
	mu      sync.Mutex
	Cgroups *configs.Cgroup
	Paths   map[string]string
}

func (m *Manager) Apply(pid int) (err error) {
	if m.Cgroups == nil {
		return nil
	}
	m.mu.Lock()
	defer m.mu.Unlock()

	var c = m.Cgroups

	d, err := getCgroupData(m.Cgroups, pid)
	if err != nil {
		return err
	}

	if c.Paths != nil {
		paths := make(map[string]string)
		for name, path := range c.Paths {
			_, err := d.path(name)
			if err != nil {
				if cgroups.IsNotFound(err) {
					continue
				}
				return err
			}
			paths[name] = path
		}
		m.Paths = paths
		return cgroups.EnterPid(m.Paths, pid)
	}

	paths := make(map[string]string)
	for _, sys := range subsystems {
		if err := sys.Apply(d); err != nil {
			return err
		}
		// TODO: Apply should, ideally, be reentrant or be broken up into a separate
		// create and join phase so that the cgroup hierarchy for a container can be
		// created then join consists of writing the process pids to cgroup.procs
		p, err := d.path(sys.Name())
		if err != nil {
			// The non-presence of the devices subsystem is
			// considered fatal for security reasons.
			if cgroups.IsNotFound(err) && sys.Name() != "devices" {
				continue
			}
			return err
		}
		paths[sys.Name()] = p
	}
	m.Paths = paths
	return nil
}

func (m *Manager) Destroy() error {
	if m.Cgroups.Paths != nil {
		return nil
	}
	m.mu.Lock()
	defer m.mu.Unlock()
	if err := cgroups.RemovePaths(m.Paths); err != nil {
		return err
	}
	m.Paths = make(map[string]string)
	return nil
}

func (m *Manager) GetPaths() map[string]string {
	m.mu.Lock()
	paths := m.Paths
	m.mu.Unlock()
	return paths
}

func (m *Manager) GetStats() (*cgroups.Stats, error) {
	m.mu.Lock()
	defer m.mu.Unlock()
	stats := cgroups.NewStats()
	for name, path := range m.Paths {
		sys, err := subsystems.Get(name)
		if err == errSubsystemDoesNotExist || !cgroups.PathExists(path) {
			continue
		}
		if err := sys.GetStats(path, stats); err != nil {
			return nil, err
		}
	}
	return stats, nil
}

func (m *Manager) Set(container *configs.Config) error {
	// If Paths are set, then we are just joining cgroups paths
	// and there is no need to set any values.
	if m.Cgroups.Paths != nil {
		return nil
	}

	paths := m.GetPaths()
	for _, sys := range subsystems {
		path := paths[sys.Name()]
		if err := sys.Set(path, container.Cgroups); err != nil {
			return err
		}
	}

	if m.Paths["cpu"] != "" {
		if err := CheckCpushares(m.Paths["cpu"], container.Cgroups.Resources.CpuShares); err != nil {
			return err
		}
	}
	return nil
}

// Freeze toggles the container's freezer cgroup depending on the state
// provided
func (m *Manager) Freeze(state configs.FreezerState) error {
	paths := m.GetPaths()
	dir := paths["freezer"]
	prevState := m.Cgroups.Resources.Freezer
	m.Cgroups.Resources.Freezer = state
	freezer, err := subsystems.Get("freezer")
	if err != nil {
		return err
	}
	err = freezer.Set(dir, m.Cgroups)
	if err != nil {
		m.Cgroups.Resources.Freezer = prevState
		return err
	}
	return nil
}

func (m *Manager) GetPids() ([]int, error) {
	paths := m.GetPaths()
	return cgroups.GetPids(paths["devices"])
}

func (m *Manager) GetAllPids() ([]int, error) {
	paths := m.GetPaths()
	return cgroups.GetAllPids(paths["devices"])
}
Taking the cpu subsystem as an example, here are the operation methods a subsystem defines:
type CpuGroup struct{}

func (s *CpuGroup) Name() string {
	return "cpu"
}

// Apply writes the cgroup configuration and the given pid into the cpu subsystem
func (s *CpuGroup) Apply(d *cgroupData) error {
	// We always want to join the cpu group, to allow fair cpu scheduling
	// on a container basis
	path, err := d.path("cpu")
	if err != nil && !cgroups.IsNotFound(err) {
		return err
	}
	return s.ApplyDir(path, d.config, d.pid)
}

func (s *CpuGroup) ApplyDir(path string, cgroup *configs.Cgroup, pid int) error {
	// This might happen if we have no cpu cgroup mounted.
	// Just do nothing and don't fail.
	if path == "" {
		return nil
	}
	if err := os.MkdirAll(path, 0755); err != nil {
		return err
	}
	// We should set the real-time group scheduling settings before moving
	// in the process because if the process is already in SCHED_RR mode
	// and no RT bandwidth is set, adding it will fail.
	if err := s.SetRtSched(path, cgroup); err != nil {
		return err
	}
	// because we are not using d.join we need to place the pid into the procs file
	// unlike the other subsystems
	if err := cgroups.WriteCgroupProc(path, pid); err != nil {
		return err
	}
	return nil
}

func (s *CpuGroup) SetRtSched(path string, cgroup *configs.Cgroup) error {
	if cgroup.Resources.CpuRtPeriod != 0 {
		if err := writeFile(path, "cpu.rt_period_us", strconv.FormatInt(cgroup.Resources.CpuRtPeriod, 10)); err != nil {
			return err
		}
	}
	if cgroup.Resources.CpuRtRuntime != 0 {
		if err := writeFile(path, "cpu.rt_runtime_us", strconv.FormatInt(cgroup.Resources.CpuRtRuntime, 10)); err != nil {
			return err
		}
	}
	return nil
}

func (s *CpuGroup) Set(path string, cgroup *configs.Cgroup) error {
	if cgroup.Resources.CpuShares != 0 {
		if err := writeFile(path, "cpu.shares", strconv.FormatInt(cgroup.Resources.CpuShares, 10)); err != nil {
			return err
		}
	}
	if cgroup.Resources.CpuPeriod != 0 {
		if err := writeFile(path, "cpu.cfs_period_us", strconv.FormatInt(cgroup.Resources.CpuPeriod, 10)); err != nil {
			return err
		}
	}
	if cgroup.Resources.CpuQuota != 0 {
		if err := writeFile(path, "cpu.cfs_quota_us", strconv.FormatInt(cgroup.Resources.CpuQuota, 10)); err != nil {
			return err
		}
	}
	if err := s.SetRtSched(path, cgroup); err != nil {
		return err
	}
	return nil
}

func (s *CpuGroup) Remove(d *cgroupData) error {
	return removePath(d.path("cpu"))
}

func (s *CpuGroup) GetStats(path string, stats *cgroups.Stats) error {
	f, err := os.Open(filepath.Join(path, "cpu.stat"))
	if err != nil {
		if os.IsNotExist(err) {
			return nil
		}
		return err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		t, v, err := getCgroupParamKeyValue(sc.Text())
		if err != nil {
			return err
		}
		switch t {
		case "nr_periods":
			stats.CpuStats.ThrottlingData.Periods = v
		case "nr_throttled":
			stats.CpuStats.ThrottlingData.ThrottledPeriods = v
		case "throttled_time":
			stats.CpuStats.ThrottlingData.ThrottledTime = v
		}
	}
	return nil
}
Looking at the state.json file of a container launched by runC, you can see the cgroup and namespace path information for that container:

$ cat /var/run/runc/$containerName/state.json | jq .
"namespace_paths": {"NEWUTS": "/ proc/30097/ns/uts", "NEWUSER": "/ proc/30097/ns/user", "NEWPID": "/ proc/30097/ns/pid", "NEWNS": "/ proc/30097/ns/mnt", "NEWNET": "/ proc/30097/ns/net", "NEWIPC": "/ proc/30097/ns/ipc"} "cgroup_paths": {"perf_event": "/ sys/fs/cgroup/perf_event/user.slice/container1", "net_cls": "/ sys/fs/cgroup/net_cls/user.slice/container1", "name=systemd": "/ sys/fs/cgroup/systemd/user.slice/container1", "blkio": "/ sys/fs/cgroup/blkio/user.slice/container1", "cpu": "/ sys/fs/cgroup/cpu" Cpuacct/user.slice/container1 "," cpuacct ":" / sys/fs/cgroup/cpu,cpuacct/user.slice/container1 "," cpuset ":" / sys/fs/cgroup/cpuset/user.slice/container1 "," devices ":" / sys/fs/cgroup/devices/user.slice/container1 "," freezer ":" / sys/fs/cgroup/freezer/user.slice/container1 "," hugetlb ":" / sys/fs/cgroup/hugetlb/user.slice/container1 " "memory": "/ sys/fs/cgroup/memory/user.slice/container1"}, so far I believe you have a deeper understanding of "how to use cgroup". You might as well do it in practice. Here is the website, more related content can enter the relevant channels to inquire, follow us, continue to learn!