This article introduces how to use Fizz Gateway to build a super-high-performance API gateway. Many people run into these problems in real projects, so let the editor walk you through how to handle them. I hope you read it carefully and get something out of it!
# Development and debugging problems
The middle tier of a web site sits close to the front: it is usually deployed behind the firewall and Nginx and serves C-end users directly, so it has higher requirements for performance and concurrency, and most teams choose an asynchronous framework when selecting the technology. Because it faces the C end directly and changes often, most of the code that needs frequent changes or configuration ends up in this layer, and it is released very frequently. In addition, many teams write it in a compiled language rather than an interpreted one. The combination of these three factors makes development and debugging very painful. For example, we used to use the Play2 framework, an asynchronous Java framework that requires developers to write async code fluently, yet few colleagues were familiar with the debugging skills it needs. Configuring the various request parameters and result handling in code looks simple, but joint debugging, unit testing, or simply waiting for Java compilation after a configuration change costs a lot of time and effort. And if the asynchronous coding conventions are violated, it becomes torture for developers.
```java
public F.Promise getGoodsByCondi(final StringBuilder searchParams, final GoodsQueryParam param) {
    final Map params = new TreeMap();
    final OutboundApiKey apiKey = OutboundApiKeyUtils.getApiKey("search.api");
    params.put("apiKey", apiKey.getApiKey());
    params.put("service", "Search.getMerchandiseBy");
    if (StringUtils.isNotBlank(param.getSizeName())) {
        try {
            searchParams.append("sizes:" + URLEncoder.encode(param.getSizeName(), "utf-8") + ";");
        } catch (UnsupportedEncodingException e) {
            e.printStackTrace();
        }
    }
    if (param.getStock() != null) {
        searchParams.append("hasStock:" + param.getStock() + ";");
    }
    if (param.getSort() != null && !param.getSort().isEmpty()) {
        searchParams.append("orderBy:" + param.getSort() + ";");
    }
    searchParams.append("limit:" + param.getLimit() + ";page:" + param.getStart());
    params.put("traceId", "open.api.vip.com");
    ApiKeySignUtil.getApiSignMap(params, apiKey.getApiSecret(), "apiSign");
    String url = RemoteServiceUrl.SEARCH_API_URL;
    Promise promise = HttpInvoker.get(url, params);
    final GoodListBaseDto retVal = new GoodListBaseDto();
    Promise goodListPromise = promise.map(new Function() {
        @Override
        public BaseDto apply(HttpResponse httpResponse) throws Throwable {
            JsonNode json = JsonUtil.toJsonNode(httpResponse.getBody());
            if (json.get("code").asInt() != 200) {
                Logger.error("Error:" + httpResponse.getBody());
                return new BaseDto(CommonError.SYS_ERROR);
            }
            JsonNode result = json.get("items");
            Iterator iterator = result.elements();
            final List goods = new ArrayList();
            while (iterator.hasNext()) {
                final Good good = new Good();
                JsonNode goodJson = iterator.next();
                good.setGid(goodJson.get("id").asText());
                good.setDiscount(String.format("%.2f", goodJson.get("discount").asDouble()));
                good.setAgio(goodJson.get("setAgio").asText());
                if (goodJson.get("brandStoreSn") != null) {
                    good.setBrandStoreSn(goodJson.get("brandStoreSn").asText());
                }
                Iterator whIter = goodJson.get("warehouses").elements();
                while (whIter.hasNext()) {
                    good.getWarehouses().add(whIter.next().asText());
                }
                if (goodJson.get("saleOut").asInt() == 1) {
                    good.setSaleOut(true);
                }
                good.setVipPrice(goodJson.get("vipPrice").asText());
                goods.add(good);
            }
            retVal.setData(goods);
            return retVal;
        }
    });
    if (param.getBrandId() != null && !param.getBrandId().isEmpty()) {
        final Promise pmsPromise = service.getActiveTipsByBrand(param.getBrandId());
        return goodListPromise.flatMap(new Function() {
            @Override
            public Promise apply(BaseDto listBaseDto) throws Throwable {
                return pmsPromise.flatMap(new Function() {
                    @Override
                    public Promise apply(List activeTips) throws Throwable {
                        retVal.setPmsList(activeTips);
                        BaseDto baseDto = (BaseDto) retVal;
                        return Promise.pure(baseDto);
                    }
                });
            }
        });
    }
    return goodListPromise;
}
```
The code above is just an excerpt of one of these procedures. If we make the middle-tier scenario even more complex, the problems to solve go well beyond coding performance, coding quality and coding time.
## "Complex" scenarios
Microservices are fine-grained. To keep the front-end logic concise and the number of service calls low, most of the output on the C side is an aggregated result. For example, we have a search middle-tier whose service flow looks like this:

1. Get the membership information, membership card list and membership points balance, because members of different levels see different prices.
2. Get the user's coupon information, which affects the calculated price.
3. Get the search results, which come from several sources: the inventory and prices of business-travel goods, of "guess you like" goods, of recommended goods, and of overseas goods.
The services involved are: middle-tier service (aggregation service), member service, coupon service, recommendation service, enterprise service, overseas search service, search service. In addition, there are various types of caching facilities and database configuration services.
```java
public List searchProduct(String traceId, ExtenalProductQueryParam param, MemberAssetVO memberAssetVO,
                          ProductInfoResultVO resultVO, boolean needAddPrice) {
    // configIds of the coupons available to the user
    String configIds = memberAssetVO == null ? null : memberAssetVO.getConfigIds();
    // special channels where the coupon feature is restricted
    if (customProperties.getIgnoreChannel().contains(param.getChannelCode())) {
        configIds = null;
    }
    final String configIdConstant = configIds;
    // main search list information
    Mono innInfos = this.search(traceId, param, configIds, resultVO);
    return innInfos.flatMap(inns -> {
        // business-travel product recommendation
        Mono busiProduct = this.recommendProductService.getBusiProduct(traceId, param, configIdConstant);
        // member product recommendation ("guess you like")
        Mono guessPref = this.recommendProductService.getGuessPref(traceId, param, configIdConstant);
        // enterprise-related query
        String registChainId = memberAssetVO == null || memberAssetVO.getMember() == null
                ? null : memberAssetVO.getMember().getRegistChainId();
        Mono registChain = this.recommendProductService.registChain(traceId, param, configIdConstant, registChainId);
        // hot products
        Mono advert = this.recommendProductService.advert(traceId, param, configIdConstant);
        return Mono.zip(busiProduct, guessPref, registChain, advert).flatMap(product -> {
            // assemble the recommendation slots
            List products = recommendProductService.setRecommend(inns, product.getT1(), product.getT2(),
                    product.getT3(), product.getT4(), param);
            // set other parameters
            return this.setOtherParam(traceId, param, products, memberAssetVO);
        });
    }).block();
}
```
The Service layer of this service is adjusted frequently as the product requirements and the underlying microservice interfaces change, and because of these changes the sequence diagrams of interface calls written by the team no longer correspond to the code.
On top of that, the aggregation of multiple asynchronous microservice calls inside the service is not handled properly: the Spring MVC coding style it uses is synchronous, while the Service layer uses an asynchronous Mono, so it can only call block(), which is hardly appropriate. These code changes, missing documents and coding-quality issues together make up the code-management problems of the middle tier.
## Barbaric development
I was involved in building the technical team of a start-up. At first, because we needed to move fast, we tended to build one fat service; but as the team grew we had to gradually split the fat service into microservices, and middle-tier teams appeared whose main purpose was to aggregate the underlying services.
For a while, however, our hiring could not keep up with the growth in the number of services, so colleagues on the lower layers had to constantly switch coding mindsets: besides writing the underlying microservices after the split, they also had to write the aggregating middle-tier services.
When I stopped some projects and began to reorganize the staff, I discovered another cruel fact: everyone owned dozens of middle-tier services, so nobody could be replaced. After the services had changed hands many times, colleagues could no longer figure out how the middle-tier services were connected.
On top of that there were all kinds of authorization methods. Because the team had grown wildly, the various authorization schemes were mixed together: simple ones, complex ones, reasonable ones and unreasonable ones. In short, nobody on the team could make sense of them.
After a while, by sorting out the online services, we found that a lot of resources were being wasted; sometimes a microservice was used by only one interface. In the early days these microservices were called on a large scale, but later the project was abandoned and there was no traffic at all, yet the interface was still running online. As the team manager, I did not even have a written inventory of the interfaces.
When my boss asked me to suspend the integration service of a partner company, I had no clean way to stop it and return a business exception. As an upstream inventory supplier developing multiple channels, we have many integration channels, and the interfaces provided to customers carry many customized requirements. These requirements generally live in the control code of the middle-tier logic, so taking a channel offline cannot be done by configuration alone: developers have to update the code for every such request.
Joint debugging with the middle-tier teams had also long been a problem. Front-end colleagues often complained to me that back-end colleagues refused to add data-processing logic, so as front-end developers they had to write a lot of code to convert data to fit the interface. In an environment like Mini Programs, where the package size is limited, where this code should live became a perennial argument in the later stages of development.
# A failed gateway selection
At that time, there were two types of solutions on the market:
Middle-tier solutions. These generally provide a bare asynchronous service plus plug-ins and functions customized to requirements; some middle-tier services, after modification, also take on part of a gateway's functions.
Gateway solutions. These are generally built around a full microservice suite, or provide universal functions (such as routing) in their own way. Of course, some gateways can also take on middle-tier business functions after custom modification.
Our business changes very fast. If an existing gateway on the market could have met the demand and we had the ability to do secondary development on it, we would have been happy to use it.
At that time Eolinker was our provider of API automated testing and offered a corresponding managed gateway, but it was written in Go. Our team's technology stack is mainly Java, and our operations and deployment plans have always revolved around Java, so this choice would have been too narrow and we had to give up the idea.
We had also looked at the Kong gateway before, but introducing a new and complex technology stack is not cheap; for example, recruiting for and doing secondary development in Lua is an unavoidable pain.
In addition, Gravitee, Zuul and Vert.x were all gateways used by different small teams. The features talked about most were:
1. Support circuit breaker, flow control and overload protection
2. Support high concurrency
3. Flash-sale (seckill) scenarios
However, for a business, circuit breaking, flow control and overload protection should be the last line of defense. Moreover, for a growing team, reaching the point where services collapse under overload takes a long period of business accumulation.
Besides, the traffic of our flash-sale business mostly stays at a normal level, and its occasional bursts of high concurrency are within our team's capacity. In other words, model selection should be grounded in reality rather than planning for Alibaba-scale traffic; I only needed something above average with cluster scalability.
Previously, the most widely used gateway on our team was Vert.x. Its coding style looks like this, flashy and cool:
```java
private void dispatchRequests(RoutingContext context) {
    int initialOffset = 5; // length of `/api/`
    // run with circuit breaker in order to deal with failure
    circuitBreaker.execute(future -> { // (1)
        getAllEndpoints().setHandler(ar -> { // (2)
            if (ar.succeeded()) {
                List<Record> recordList = ar.result();
                // get relative path and retrieve prefix to dispatch client
                String path = context.request().uri();
                if (path.length() <= initialOffset) {
                    notFound(context);
                    future.complete();
                    return;
                }
                String prefix = (path.substring(initialOffset).split("/"))[0];
                // generate new relative path
                String newPath = path.substring(initialOffset + prefix.length());
                // get one relevant HTTP client, may not exist
                Optional<Record> client = recordList.stream()
                        .filter(record -> record.getMetadata().getString("api.name") != null)
                        .filter(record -> record.getMetadata().getString("api.name").equals(prefix)) // (3)
                        .findAny(); // (4) simple load balance
                if (client.isPresent()) {
                    doDispatch(context, newPath, discovery.getReference(client.get()).get(), future); // (5)
                } else {
                    notFound(context); // (6)
                    future.complete();
                }
            } else {
                future.fail(ar.cause());
            }
        });
    }).setHandler(ar -> {
        if (ar.failed()) {
            badGateway(ar.cause(), context); // (7)
        }
    });
}
```
However, the lack of community support for Vert.x and its high learning curve remained, and the team could not even find suitable colleagues to maintain the code.
The failure of the gateways above made us realize that there is no Swiss Army knife on the market that fully fits our company's situation, so we set out to build our own and began to design the Fizz gateway.
# The road to a self-developed gateway
Do we need a gateway? What problems does the gateway layer solve? The answers are self-evident. We need a gateway because it helps us with load balancing, aggregation, authorization, monitoring, rate limiting, logging, permission control and a series of other concerns. At the same time, we also need the middle tier: once the service granularity is refined into microservices, we have no choice but to aggregate them through a middle tier.
What we don't need is complex coding, redundant glue code, and a lengthy release process.
# Design considerations of Fizz
To solve these problems, we needed to blur the boundary between the gateway and the middle tier, erase the gap between them, let the gateway support dynamic coding of the middle tier, and release and deploy as little as possible. To achieve this, all that is needed is a concise gateway model plus low-code capabilities that cover as much of the middle tier's functionality as possible.
## Requirements from first principles
Looking back at this decision, I want to restate the requirements we started from:
1. Java technology stack, supporting the Spring suite
2. Easy to use; can be picked up with zero training
3. Dynamic routing capability, so new APIs can be enabled anytime, anywhere
4. High performance and scalable clusters
5. Strong hot service orchestration capability, supporting front-end and back-end coding, so APIs can be updated anytime, anywhere
6. Support for online coding logic
7. Extensible security authentication capability and convenient logging
8. API audit function to keep all services under control
9. Extensibility, with a powerful plug-in development mechanism
## Technology selection for Fizz
After we selected Spring WebFlux, a colleague suggested naming the project Fizz because of its strong stand-alone burst (Fizz is a hero in the competitive game League of Legends, a melee mage with one of the best single-target AP bursts, able to counter most mages and serve as a good counter-pick).
WebFlux is a typical non-blocking asynchronous framework whose core is built on the Reactor APIs. Compared with traditional web frameworks, it can run on containers such as Netty, Undertow and Servlet 3.1+, so it offers far more choices of runtime environment.
Spring WebFlux is an asynchronous, non-blocking web framework that can make full use of multi-core CPUs to handle a large number of concurrent requests. It builds on the Spring technology stack, and its code style looks like this:
```java
public Mono getAll(ServerRequest serverRequest) {
    printlnThread("get all users");
    Flux userFlux = Flux.fromStream(userRepository.getUsers().entrySet().stream().map(Map.Entry::getValue));
    return ServerResponse.ok().body(userFlux, User.class);
}
```
## Core implementation of Fizz
For us this was a project from scratch, and many colleagues were not confident at first. I wrote the first core package of Fizz's service-orchestration code myself and labeled the commit "get started".
My intention was that the definition of every service aggregation should be solved by a single configuration file. That leads to a model like this: if the user request is the input, then the response is naturally the output, i.e. a pipeline, Pipe. Inside a Pipe there are different Steps corresponding to the serial stages; inside a Step there is at least one Input, which receives the output of the previous step, and all the Inputs of a Step can execute in parallel. A single Context holds the intermediate state throughout the Pipe's life cycle.
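To make this model easier to picture, here is a minimal sketch in plain Java of how Pipe, Step, Input and Context could fit together. The class shapes and field names are illustrative assumptions for this article, not Fizz's actual implementation, and the parallel execution of Inputs is shown sequentially for brevity.

```java
// Illustrative sketch only: names and fields are assumptions, not Fizz's real classes.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class Context {                                  // single context held for the whole life cycle of a Pipe
    final Map<String, Object> data = new HashMap<>(); // intermediate results, keyed e.g. by "step1.request1"
}

class Input {                                    // one call inside a Step; all Inputs of a Step may run in parallel
    String name;
    Object run(Context ctx) { return null; }     // reads the previous step's output from ctx, returns its own output
}

class Step {                                     // one serial stage of the pipeline
    String name;
    List<Input> inputs = new ArrayList<>();
    void run(Context ctx) {
        for (Input in : inputs) {                // conceptually parallel; shown sequentially here
            ctx.data.put(name + "." + in.name, in.run(ctx));
        }
    }
}

class Pipe {                                     // request in, response out
    List<Step> steps = new ArrayList<>();
    Map<String, Object> run(Object request) {
        Context ctx = new Context();
        ctx.data.put("input.request", request);
        for (Step s : steps) {
            s.run(ctx);                          // Steps execute serially
        }
        return ctx.data;                         // the last step's mapped result becomes the response
    }
}
```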
In the input and output of each Input, I added extensibility through dynamic scripts. So far both JavaScript and Groovy are supported, and the front-end logic written in JavaScript can be extended on the back end as needed. All the configuration file needs is a script like this:
```javascript
// aggregation interface configuration
var aggrAPIConfig = {
  name: "input name", // custom aggregation interface name
  debug: false, // whether it is debug mode, default false
  type: "REQUEST", // type, REQUEST/MYSQL
  method: "GET/POST",
  path: "/proxy/aggr-hotel/hotel/rates", // format: /proxy/ + service name + path; a service name starting with aggr- indicates an aggregation interface
  langDef: { // optional, prompt language definition; when input validation fails, the message is returned in the configured language (currently Chinese and English)
    langParam: "input.request.body.languageCode", // input language field
    langMapping: { // mapping between field value and language
      zh: "0", // Chinese
      en: "1"  // English
    }
  },
  headersDef: { // optional, JSON Schema definition of the aggregation interface headers (see http://json-schema.org/specification.html), used for validation and API doc generation
    type: "object",
    properties: {
      appId: { type: "string", title: "Application ID", description: "description" }
    },
    required: ["appId"]
  },
  paramsDef: { // optional, JSON Schema definition of the query parameters, used for validation and API doc generation
    type: "object",
    properties: {
      lang: { type: "string", title: "language", description: "description" }
    }
  },
  bodyDef: { // optional, JSON Schema definition of the request body, used for validation
    type: "object",
    properties: {
      userId: { type: "string", title: "user name", description: "description" }
    },
    required: ["userId"]
  },
  scriptValidate: { // optional, for validation scenarios not covered by headersDef, paramsDef and bodyDef
    type: "",  // groovy
    source: "" // the script returns a List; null: validation passed, List: list of error messages
  },
  validateResponse: { // response when input validation fails, handled the same way as dataMapping.response
    fixedBody: { "code": -411 }, // fixed body
    fixedHeaders: { "a": "b" },  // fixed header
    headers: {},                 // referenced header
    body: { "msg": "validateMsg" }, // referenced body
    script: {
      type: "",  // groovy
      source: ""
    }
  },
  dataMapping: { // data conversion rules of the aggregation interface
    response: {
      fixedBody: { "code": "b" }, // fixed body
      fixedHeaders: { "a": "b" }, // fixed header
      headers: {
        // referenced header; defaults to the source data type; to convert the type, start with the target type + space,
        // e.g. "abc": "int step1.requests.request1.headers.xyz"
      },
      body: {
        // referenced body; defaults to the source data type; to convert the type, start with the target type + space,
        // e.g. "abc": "int step1.requests.request1.response.id"
        "inn.innName": "step1.requests.request2.response.hotelName",
        "ddd": { // script; when the object returned by the script contains a _stopAndResponse field whose value is true, the result is sent to the client as the final response
          "type": "groovy",
          "source": ""
        }
      },
      script: { // script that computes the value of body
        type: "",  // groovy
        source: ""
      }
    }
  },
  stepConfigs: [ // step configuration
    {
      name: "step1", // step name
      stop: false,   // whether to return after executing the current step
      dataMapping: { // data conversion rules of the step response
        response: {
          fixedBody: { "a": "b" }, // fixed body
          body: { // step result
            "abc": "step1.requests.request1.response.id",
            "inn.innName": "step1.requests.request2.response.hotelName"
          },
          script: { // script that computes the value of body
            type: "",  // groovy
            source: ""
          }
        }
      },
      requests: [ // each step can call multiple interfaces
        {
          name: "request1", // interface name, format request+N
          type: "REQUEST",  // type, REQUEST/MYSQL
          url: "",          // default url, used when the url of the current environment is empty
          devUrl: "http://baidu.com",
          testUrl: "http://baidu.com",
          preUrl: "http://baidu.com",
          prodUrl: "http://baidu.com",
          method: "GET",    // GET/POST, default GET
          timeout: 3000,    // timeout in milliseconds, values from 1 to 10000 allowed; defaults to 3 seconds if empty or less than 1
          condition: {
            type: "",       // groovy
            source: "return \"ABC\".equals(variables.get(\"param1\")) && variables.get(\"param2\") >= 10" // the API is called when the script returns TRUE, skipped when it returns FALSE
          },
          fallback: {
            mode: "stop | continue", // whether to continue when the request fails
            defaultResult: ""        // when mode=continue, the default response message (json string)
          },
          dataMapping: { // data conversion rules
            request: {
              fixedBody: {},
              fixedHeaders: {},
              fixedParams: {},
              headers: { // defaults to the source data type; to convert the type, start with the target type + space, e.g. "int"
                "abc": "step1.requests.request1.headers.xyz"
              },
              body: {
                "*": "input.request.body.*", // * transparently passes through a json object
                "inn.innId": "int step1.requests.request1.response.id" // defaults to the source data type; to convert the type, start with the target type + space
              },
              params: { // defaults to the source data type; to convert the type, start with the target type + space, e.g. "int"
                "userId": "input.requestBody.userId"
              },
              script: { // script that computes the value of body
                type: "",  // groovy
                source: ""
              }
            },
            response: {
              fixedBody: {},
              fixedHeaders: {},
              headers: {
                "abc": "step1.requests.request1.headers.xyz"
              },
              body: {
                "inn.innId": "step1.requests.request1.response.id"
              },
              script: { // script that computes the value of body
                // type: "",  // groovy
                // source: ""
              }
            }
          }
        }
      ]
    }
  ]
}
```
The format of the runtime context is:
```javascript
// runtime context, used to save the client input and the input/output of each step
var stepContext = {
  // whether DEBUG mode
  debug: false,
  // elapsed time
  elapsedTimes: [
    { [actionName]: 123 } // operation name: time spent
  ],
  // input data
  input: {
    request: {
      path: "",
      method: "GET/POST",
      headers: {},
      body: {},
      params: {}
    },
    response: { // response of the aggregation interface
      headers: {},
      body: {}
    }
  },
  // step name
  stepName: {
    // request data of the step
    requests: {
      request1: {
        request: {
          url: "",
          method: "GET/POST",
          headers: {},
          body: {}
        },
        response: {
          headers: {},
          body: {}
        }
      },
      request2: {
        request: {
          url: "",
          method: "GET/POST",
          headers: {},
          body: {}
        },
        response: {
          headers: {},
          body: {}
        }
      }
      // ...
    },
    // step result
    result: {}
  }
}
```
If I look at an Input as nothing more than an input and an output plus an intermediate data-processing stage, then it has great potential for extension. For example, in code we can even write a MySQLInput class that extends Input:
```java
public class MySQLInput extends Input {
}
```
It only needs to override a small number of Input's methods to support MySQL as an input source, even to dynamically parse MySQL scripts and transform the resulting data.
```java
public class Input {

    protected String name;
    protected InputConfig config;
    protected InputContext inputContext;
    protected StepResponse lastStepResponse = null;
    protected StepResponse stepResponse;

    public void setConfig(InputConfig inputConfig) {
        config = inputConfig;
    }

    public InputConfig getConfig() {
        return config;
    }

    public void beforeRun(InputContext context) {
        this.inputContext = context;
    }

    public String getName() {
        if (name == null) {
            return name = "input" + (int) (Math.random() * 100);
        }
        return name;
    }

    /**
     * Checks whether this Input needs to run; runs by default.
     * @param stepContext step context
     * @return TRUE: run
     */
    public boolean needRun(StepContext stepContext) {
        return Boolean.TRUE;
    }

    public Mono run() {
        return null;
    }

    public void setName(String configName) {
        this.name = configName;
    }

    public StepResponse getStepResponse() {
        return stepResponse;
    }

    public void setStepResponse(StepResponse stepResponse) {
        this.stepResponse = stepResponse;
    }
}
```
The extension code does not have to deal with asynchronous processing itself; in this way Fizz handles the asynchronous logic on the developer's behalf.
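To illustrate how small such an extension can be, here is a hedged sketch of what a MySQLInput might look like. The JDBC handling and the hard-coded jdbcUrl/sql fields are assumptions made to keep the example self-contained; Fizz's real MySQL support may look quite different.

```java
// Sketch only: in a real implementation jdbcUrl and sql would come from the Input's config;
// they are plain fields here so the example stands on its own.
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class MySQLInput extends Input {

    private String jdbcUrl;
    private String sql;

    @Override
    public Mono run() {
        // run the blocking JDBC call on an elastic scheduler so the event loop is never blocked;
        // the resulting rows could then be reshaped by the usual dataMapping rules
        return Mono.fromCallable(() -> query(jdbcUrl, sql))
                   .subscribeOn(Schedulers.boundedElastic());
    }

    private List<Map<String, Object>> query(String jdbcUrl, String sql) throws SQLException {
        List<Map<String, Object>> rows = new ArrayList<>();
        try (Connection conn = DriverManager.getConnection(jdbcUrl);
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            ResultSetMetaData meta = rs.getMetaData();
            while (rs.next()) {
                Map<String, Object> row = new LinkedHashMap<>();
                for (int i = 1; i <= meta.getColumnCount(); i++) {
                    row.put(meta.getColumnLabel(i), rs.getObject(i));
                }
                rows.add(row);
            }
        }
        return rows;
    }
}
```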
## Service orchestration in Fizz
Service orchestration in Fizz is driven from a visual back office. The core code above is not very complex, but it is enough to abstract our whole flow. The visual interface of fizz-manager only needs to generate the corresponding configuration file and have it loaded and updated quickly. Complex validation logic is implemented by defining the request header, request body and query parameters of the Request Input, together with validation rules or custom scripts, and each Request Input can also define its Fallback. By assembling a few Steps, a service orchestrated online can be put into use in real time. For a read-only interface we even recommend testing it online directly; of course you can also isolate the test interface from the production interface, return the context, and inspect the input and output of every step and request across the whole execution.
## Script validation in Fizz
When the built-in validation is not enough to cover a scenario, Fizz also provides more flexible scripting.
```javascript
// javascript script; the function name cannot be modified
function dyFunc(paramsJsonStr) {
  // context; for the data structure, refer to context.js
  var context = JSON.parse(paramsJsonStr)['context'];
  // common is a built-in context utility class; for details see common.js, e.g.:
  // var data = common.getStepRespBody(context, 'step2', 'request1', 'data');

  // do something

  // custom return result; if the returned object contains a _stopAndResponse=true field,
  // the request is terminated and the script result is sent to the client
  // (mainly used to terminate the request when something abnormal happens)
  var result = { _stopAndResponse: true, msgCode: '0', msg: 'stopAndResponse message', data: null };
  // if the result is an Array or Object, it must first be converted to a json string
  return JSON.stringify(result);
}
```
## Data processing in Fizz
Fizz can transform the input and output of a request. It makes full use of JSON path expressions: by loading the definitions in the configuration file it reshapes the input and output of an Input to produce the desired result.
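As a rough illustration of what this path-based mapping means (a toy sketch, not Fizz's implementation), the following snippet resolves a reference such as step1.requests.request1.response.id from the runtime context and writes it to a new position such as inn.innId in the aggregated body:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PathMappingDemo {

    // Resolve a dotted reference like "step1.requests.request1.response.id" against nested maps.
    @SuppressWarnings("unchecked")
    static Object get(Map<String, Object> ctx, String path) {
        Object cur = ctx;
        for (String key : path.split("\\.")) {
            if (!(cur instanceof Map)) {
                return null;
            }
            cur = ((Map<String, Object>) cur).get(key);
        }
        return cur;
    }

    // Write a value at a dotted target path such as "inn.innId", creating intermediate objects as needed.
    @SuppressWarnings("unchecked")
    static void put(Map<String, Object> body, String path, Object value) {
        String[] keys = path.split("\\.");
        Map<String, Object> cur = body;
        for (int i = 0; i < keys.length - 1; i++) {
            cur = (Map<String, Object>) cur.computeIfAbsent(keys[i], k -> new LinkedHashMap<String, Object>());
        }
        cur.put(keys[keys.length - 1], value);
    }

    public static void main(String[] args) {
        // Pretend this is the runtime stepContext after step1 has called request1.
        Map<String, Object> response = new LinkedHashMap<>();
        response.put("id", 42);
        Map<String, Object> request1 = new LinkedHashMap<>();
        request1.put("response", response);
        Map<String, Object> requests = new LinkedHashMap<>();
        requests.put("request1", request1);
        Map<String, Object> step1 = new LinkedHashMap<>();
        step1.put("requests", requests);
        Map<String, Object> ctx = new LinkedHashMap<>();
        ctx.put("step1", step1);

        // Apply a single mapping rule: "inn.innId": "step1.requests.request1.response.id"
        Map<String, Object> body = new LinkedHashMap<>();
        put(body, "inn.innId", get(ctx, "step1.requests.request1.response.id"));
        System.out.println(body); // prints {inn={innId=42}}
    }
}
```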
## Powerful routing in Fizz
Fizz's dynamic routing is also designed to be practical and includes a scheme for smoothly replacing existing gateways. Initially Fizz can coexist with other gateways, such as the Vert.x-based gateway mentioned earlier, because Fizz ships a reverse-proxy mode similar to Nginx that is purely routing-based. So at the start of the project, traffic coming through Nginx was forwarded to Fizz and then on to Vert.x, with Fizz proxying all of the Vert.x traffic. After that, traffic was gradually routed straight to the back-end microservices, the specially customized common code on Vert.x was pushed down into the underlying microservices, Vert.x and the old middle-tier services were abandoned completely, and the number of servers dropped by 50%. Once we had made these adjustments, the middle-tier and server problems that had plagued me were finally solved; we could shorten the list of services in each colleague's hands and put the work into more valuable projects. When all of this became clear, the project naturally showed its value.
For channels, the routing function has another very practical use. Because Fizz has the concept of service groups, different groups can be configured for different channels, which solves the problem of channel differences. In fact, multiple groups with different API versions can be online at the same time, which also solves API version management in a roundabout way.
## Extensible authentication in Fizz
Fizz also has a dedicated solution for authorization. Our company was founded relatively early and the team carries old code written over many years, so a variety of authentication methods coexist in the code. There are also external platforms to support, such as the app and WeChat clients, each of which requires different authentication.
Signature verification is configured through the console. Fizz provides two approaches: a common built-in check and a custom plug-in check, and users can conveniently choose between them from a drop-down menu.
## Plug-in design of Fizz
When designing Fizz we considered the importance of plug-ins from the very beginning, so we designed a plug-in standard that is easy to implement. Of course, this requires developers to have a solid understanding of asynchronous programming, so it suits teams with customization needs. A plug-in only needs to extend PluginFilter and implement two functions:
```java
public abstract class PluginFilter {

    private static final Logger log = LoggerFactory.getLogger(PluginFilter.class);

    public Mono filter(ServerWebExchange exchange, Map config, String fixedConfig) {
        return Mono.empty();
    }

    public abstract Mono doFilter(ServerWebExchange exchange, Map config, String fixedConfig);
}
```
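For instance, a sketch of a custom plug-in built on this contract might look like the following. The X-App-Token header, the 401 rejection, and the convention that returning an empty Mono lets the chain continue are assumptions made for the example, not the documented Fizz behaviour:

```java
// Illustrative plug-in sketch: checks a (made-up) X-App-Token header and rejects the
// request when it is missing; a real plug-in would put its own logic in doFilter.
import org.springframework.http.HttpStatus;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;

import java.util.Map;

public class TokenCheckPluginFilter extends PluginFilter {

    @Override
    public Mono doFilter(ServerWebExchange exchange, Map config, String fixedConfig) {
        String token = exchange.getRequest().getHeaders().getFirst("X-App-Token");
        if (token == null || token.isEmpty()) {
            // short-circuit: answer 401 and complete the response without calling the backend
            exchange.getResponse().setStatusCode(HttpStatus.UNAUTHORIZED);
            return exchange.getResponse().setComplete();
        }
        // token present: let the remaining filters / routing continue
        return Mono.empty();
    }
}
```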
## Management features of Fizz
Resource protection matters a great deal to medium and large enterprises. Once all traffic passes through Fizz, the corresponding routes have to be created in Fizz, and the accompanying API audit system is one of its major features: all of the company's API resources are protected, and a strict audit mechanism ensures every API is reviewed by the team's managers. Fizz can also take an API offline quickly and return a degraded response.
## Other features of Fizz
Of course, Fizz fits into the Spring suite and uses the Apollo configuration center, and it provides load balancing, access logs, blacklists and whitelists, and a series of other features we believe a gateway should have.
# Performance of Fizz
Although performance is not its selling point, that does not mean Fizz performs poorly. Thanks to WebFlux, we compared Fizz with the official spring-cloud-gateway under the same environment and conditions, testing single nodes in both cases. The results show that our QPS is slightly higher than spring-cloud-gateway's, and of course there is still room for optimization.
Test environment: Intel Xeon X5675 @ 3.07GHz, Linux 3.10.0-327.el7.x86_64
| Condition | QPS (req/s) | 90% Latency (ms) |
| --- | --- | --- |
| Direct access to the backend | 9087.46 | 10.76 |
| fizz-gateway | 5927.13 | 19.86 |
| spring-cloud-gateway | 5044.04 | 22.91 |
# Application and performance of Fizz
When designing Fizz we took the complex middle-tier situation inside the enterprise into account: it can intercept all traffic and replace existing gateways gradually while running alongside them, so the internal rollout of Fizz was very smooth. For the initial development we chose a C-end business as the target and, at launch, replaced only some of its complex scenarios. After a quarter of trial operation we had solved the various performance and memory issues, and once the version was stable Fizz was extended to the whole BU's business lines to replace the many existing application gateways; after that, every suitable business in the company started to use it. It turned out that the developers of our C-end and B-end middle-tier teams could free their hands for lower-level business development: although the middle-tier headcount was reduced, development efficiency improved greatly. For example, the development time for a group of similar services that used to take many days was cut to one seventh of what it was. With the help of Fizz we also consolidated services: the number of middle-tier servers fell by 50% while service capacity increased.
# Community development of Fizz
In the early days Fizz was used on a large scale on the strength of configuration alone, but as the number of users grew, writing and managing configuration files forced us to start extending the project. Today Fizz consists of two main back-end projects, fizz-gateway and fizz-manager, plus fizz-admin, the front-end configuration UI; fizz-manager and fizz-admin together provide a graphical configuration interface in which every Pipe can be written and brought online.
To let more fast-growing medium and large teams use this management-oriented gateway to solve real problems, Fizz offers a community edition, fizz-gateway-community, and as a technical exchange its core implementation is open under the GNU v3 license. All APIs of fizz-gateway-community are published for secondary development. The professional edition, fizz-gateway-professional, is tied to the team's business and remains commercially closed. The corresponding management platform, fizz-manager-professional, is available as a free binary download of the commercial version and can be used free of charge by projects under the GNU v3 open-source license (if your project is commercial in nature, please contact us for authorization). In addition, we will pick the right time to share the rich set of plug-ins available in Fizz.
Whether or not our project and this exchange help you, we sincerely hope to hear your feedback. And however powerful or complete the technology turns out to be, we will not forget our original goal: Fizz, a management-oriented gateway for large and medium-sized enterprises.
That is all for "how to use Fizz Gateway for a super-high-performance API gateway". Thank you for reading. If you want to learn more about the industry, follow this site; the editor will keep publishing practical, high-quality articles for you!