This article explains how to integrate asynchronous tasks and use logging in a go-zero project. The explanation is kept simple and practical; follow along step by step.
Delay Job
In day-to-day development we have many asynchronous, batch, scheduled, and delayed tasks to handle. The go-zero ecosystem includes go-queue, which is the recommended way to deal with them. go-queue itself is built on top of go-zero and offers two modes:
dq: based on beanstalkd; distributed, persistent, supports delayed and scheduled delivery, and jobs can be re-executed after a shutdown and restart, so messages are not lost. It is very easy to use. go-queue uses Redis setnx internally to ensure each message is consumed only once. Its main use case is everyday delayed and scheduled tasks.
kq: based on kafka, which needs no further introduction here; the well-known kafka. Its main use case is log processing.
Here we focus on dq; kq is used the same way, only the underlying dependency differs. If you have never used beanstalkd, you can look it up first; it is easy to get started with.
I used goctl to create a new message-job.api service under jobs:
info(
	title: // message task
	desc: // message task
	author: "Mikael"
	email: "13247629622@163.com"
)

type BatchSendMessageReq {}

type BatchSendMessageResp {}

service message-job-api {
	@handler batchSendMessageHandler // bulk SMS
	post /batchSendMessage (BatchSendMessageReq) returns (BatchSendMessageResp)
}
Since routing is not needed here, I deleted routes.go under handler and created a new jobRun.go there, as follows:
package handler

import (
	"fishtwo/app/jobs/message/internal/svc"
	"fishtwo/lib/xgo"
)

/**
 * @Description Start jobs
 * @Author Mikael
 * @Date 2021/1/18 12:05
 * @Version 1.0
 **/
func JobRun(serverCtx *svc.ServiceContext) {
	xgo.Go(func() {
		batchSendMessageHandler(serverCtx)
		// ... many more jobs
	})
}
In fact, xgo.Go is essentially go batchSendMessageHandler(serverCtx): it wraps the goroutine and recovers from panics, so a "wild" goroutine cannot bring the process down.
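For readers who have not seen such a wrapper before, here is a minimal sketch of what a helper like xgo.Go might look like. This is purely illustrative; the actual fishtwo/lib/xgo code is not shown in this article:

package xgo

import "github.com/tal-tech/go-zero/core/logx"

// Go runs fn in a new goroutine and recovers from any panic,
// so a stray goroutine cannot crash the whole job process.
func Go(fn func()) {
	go func() {
		defer func() {
			if p := recover(); p != nil {
				logx.Errorf("goroutine panic: %v", p)
			}
		}()
		fn()
	}()
}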
Then modify the startup file message-job.go
package main

import (
	"flag"
	"fmt"

	"fishtwo/app/jobs/message/internal/config"
	"fishtwo/app/jobs/message/internal/handler"
	"fishtwo/app/jobs/message/internal/svc"

	"github.com/tal-tech/go-zero/core/conf"
	"github.com/tal-tech/go-zero/rest"
)

var configFile = flag.String("f", "etc/message-job-api.yaml", "the config file")

func main() {
	flag.Parse()

	var c config.Config
	conf.MustLoad(*configFile, &c)

	ctx := svc.NewServiceContext(c)
	server := rest.MustNewServer(c.RestConf)
	defer server.Stop()

	handler.JobRun(ctx)

	fmt.Printf("Starting server at %s:%d...\n", c.Host, c.Port)
	server.Start()
}
The main change is that handler.RegisterHandlers(server, ctx) is replaced with handler.JobRun(ctx).
Next we can introduce dq. First, add DqConf to etc/xxx.yaml:
...
DqConf:
  Beanstalks:
    - Endpoint: 127.0.0.1:7771
      Tube: tube1
    - Endpoint: 127.0.0.1:7772
      Tube: tube2
  Redis:
    Host: 127.0.0.1:6379
    Type: node
I use different ports locally to simulate two nodes, 7771 and 7772.
Add the corresponding configuration field in internal/config/config.go:
type Config struct {
	...
	DqConf dq.DqConf
}
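The logic further down uses l.svcCtx.Consumer, so the ServiceContext needs a dq consumer built from this config. The article does not show that file; a minimal sketch of internal/svc/servicecontext.go, assuming go-queue's dq.NewConsumer is used, might look like this:

package svc

import (
	"fishtwo/app/jobs/message/internal/config"

	"github.com/tal-tech/go-queue/dq"
)

type ServiceContext struct {
	Config   config.Config
	Consumer dq.Consumer
}

func NewServiceContext(c config.Config) *ServiceContext {
	return &ServiceContext{
		Config: c,
		// dq.NewConsumer builds a beanstalkd consumer from the DqConf above
		Consumer: dq.NewConsumer(c.DqConf),
	}
}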
Modify handler/batchsendmessagehandler.go
package handler

import (
	"context"

	"fishtwo/app/jobs/message/internal/logic"
	"fishtwo/app/jobs/message/internal/svc"

	"github.com/tal-tech/go-zero/core/logx"
)

func batchSendMessageHandler(ctx *svc.ServiceContext) {
	rootCxt := context.Background()
	l := logic.NewBatchSendMessageLogic(rootCxt, ctx)
	err := l.BatchSendMessage()
	if err != nil {
		logx.WithContext(rootCxt).Errorf("[JOB-ERR]: %+v", err)
	}
}
Modify batchsendmessagelogic.go under logic to write our consumer logic:
package logic

import (
	"context"
	"fmt"

	"fishtwo/app/jobs/message/internal/svc"

	"github.com/tal-tech/go-zero/core/logx"
)

type BatchSendMessageLogic struct {
	logx.Logger
	ctx    context.Context
	svcCtx *svc.ServiceContext
}

func NewBatchSendMessageLogic(ctx context.Context, svcCtx *svc.ServiceContext) BatchSendMessageLogic {
	return BatchSendMessageLogic{
		Logger: logx.WithContext(ctx),
		ctx:    ctx,
		svcCtx: svcCtx,
	}
}

func (l *BatchSendMessageLogic) BatchSendMessage() error {
	fmt.Println("job BatchSendMessage start")
	l.svcCtx.Consumer.Consume(func(body []byte) {
		fmt.Printf("job BatchSendMessage %s\n", string(body))
	})
	fmt.Printf("job BatchSendMessage finish\n")
	return nil
}
With that, it's done. Start message-job.go:
go run message-job.go
Then we can add tasks to dq in the business code, and it can be consumed automatically.
producer.Delay delivers 5 delayed tasks to dq:
producer := dq.NewProducer([]dq.Beanstalk{
	{
		Endpoint: "localhost:7771",
		Tube:     "tube1",
	},
	{
		Endpoint: "localhost:7772",
		Tube:     "tube2",
	},
})

for i := 1000; i < 1005; i++ {
	_, err := producer.Delay([]byte(strconv.Itoa(i)), time.Second*1)
	if err != nil {
		fmt.Println(err)
	}
}

producer.At lets you schedule a task to run at a specific point in time; it is very handy, and interested readers can explore it on their own.
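For reference, a small sketch of producer.At using the same producer as above (the exact signature should be checked against your go-queue version); it delivers the message at a given wall-clock time rather than after a delay:

// deliver this message roughly 10 minutes from now
_, err := producer.At([]byte("hello"), time.Now().Add(10*time.Minute))
if err != nil {
	fmt.Println(err)
}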
Error Log

When I talked about the gateway modification earlier, sharp-eyed readers will already have spotted the log calls in the httpresult.go shown there. Now let's look at how errors are handled on the rpc side. Yes, I added a grpc interceptor (https://www.yuque.com/tal-tech/go-zero/ttzlo1) to the main of every rpc service, so let's see what goes on inside that interceptor.
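The interceptor code itself is not reproduced in this article, but the idea is simple: let the handler run, and if it returns an error, log it with the request context so the trace-id is attached. A minimal sketch of such a unary server interceptor (my own illustration, not the exact code from the linked doc) could look like this:

package interceptor

import (
	"context"

	"github.com/tal-tech/go-zero/core/logx"
	"google.golang.org/grpc"
)

// LoggerInterceptor logs any error returned by the rpc handler,
// keeping the request context so the trace-id shows up in the log entry.
func LoggerInterceptor(ctx context.Context, req interface{},
	info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
	resp, err := handler(ctx, req)
	if err != nil {
		// %+v prints the full error, including any attached stack trace
		logx.WithContext(ctx).Errorf("[RPC-SRV-ERR] %+v", err)
	}
	return resp, err
}

With go-zero's zrpc server, an interceptor like this can typically be registered in main via the server's AddUnaryInterceptors method.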
In my code I use the github.com/pkg/errors package to handle errors; it is very convenient.
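As an illustration of why the %+v verb in the log lines below is useful, here is a tiny, self-contained example (the function name is made up for illustration) of wrapping an error with github.com/pkg/errors so the stack trace is captured:

package main

import (
	"fmt"

	"github.com/pkg/errors"
)

// findUser is a hypothetical function that wraps a low-level error,
// attaching context and a stack trace at the wrap site.
func findUser(id int64) error {
	err := fmt.Errorf("user %d not found", id)
	return errors.Wrap(err, "findUser failed")
}

func main() {
	if err := findUser(42); err != nil {
		// %+v prints the message chain plus the captured stack trace
		fmt.Printf("%+v\n", err)
	}
}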
So:
In grpc we print the log with logx.WithContext(ctx).Errorf("[RPC-SRV-ERR] %+v", err)
In the api gateway we print the log with logx.WithContext(r.Context()).Errorf("[GATEWAY-SRV-ERR]: %+v", err)
When go-zero prints logs, logx.WithContext carries the trace-id along. So for a request such as

user-api --> user-srv --> message-srv

if message-srv reports an error, all three services log with the same trace-id, which means you can paste that one trace-id into ELK and find the error stack for the whole request in a single search. Of course, you can also integrate jaeger, zipkin, skywalking, and so on; I have not hooked those up yet.
Thank you for reading. That covers how to integrate asynchronous tasks and use logging; I hope it gives you a clearer picture, and the details are best verified in practice.