This article walks through a set of common front-end interview questions based on Node.js. The questions are covered in detail and have good reference value; interested readers should find it worth reading to the end.
1. Basic concepts of Node
1.1 What is Node
Node.js is an open-source, cross-platform JavaScript runtime environment. It runs the V8 JavaScript engine (the core of Google Chrome) outside the browser and uses an event-driven, non-blocking, asynchronous I/O model to improve performance. In short, Node.js is a server-side, non-blocking, event-driven JavaScript runtime environment.
There are two basic concepts needed to understand Node: non-blocking asynchronous I/O and event-driven execution.
Non-blocking asynchronous I/O: Node.js uses a non-blocking I/O mechanism, so an I/O operation does not block the thread; when it completes, Node is notified in the form of an event. For example, after the code that issues a database query has executed, the code after it runs immediately, while the code that handles the database result is placed in a callback function, which improves the throughput of the program.
Event-driven: when a new request comes in, it is pushed into an event queue, and a loop detects state changes of the events in the queue. When an event whose state has changed is detected, the handler registered for that event, usually a callback function, is executed. For example, when a file has been read, the corresponding event fires and is handled by the registered callback.
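The following minimal sketch (assuming a local file named 1.txt exists) shows both ideas at once: the read is non-blocking, so the last line prints before the file contents arrive in the callback.

const fs = require('fs');

// Non-blocking: this call returns immediately; the callback
// runs later, once the I/O has completed.
fs.readFile('1.txt', 'utf8', (err, data) => {
  if (err) throw err;
  console.log('file contents:', data); // printed second
});

console.log('this line runs first'); // printed first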
1.2 Application scenarios of Node and its shortcomings
1.2.1 Advantages and disadvantages
Node.js is well suited to I/O-intensive applications: even when such an application is running at full load, its CPU usage stays relatively low, because most of the time is spent reading from and writing to disk and memory. Its disadvantages are as follows:
Not suitable for CPU-intensive applications
A single Node process runs on a single core, so a multi-core CPU cannot be fully utilized.
Low reliability: once one part of the code crashes, the whole process crashes.
For the third point, a common solution is to use an Nginx reverse proxy and start multiple processes bound to multiple ports, or start multiple processes listening on the same port, as sketched below.
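As a hedged sketch (not from the original article), Node's built-in cluster module can fork one worker per CPU core, so that the CPU is fully used and a crash in one worker does not take down the whole service:

const cluster = require('cluster');
const http = require('http');
const os = require('os');

// isMaster is called isPrimary in newer Node versions.
if (cluster.isMaster) {
  // Fork one worker per CPU core.
  os.cpus().forEach(() => cluster.fork());
  // Restart a worker if it crashes, improving reliability.
  cluster.on('exit', (worker) => {
    console.log(`worker ${worker.process.pid} died, restarting`);
    cluster.fork();
  });
} else {
  // Workers share the same port; the master distributes connections.
  http.createServer((req, res) => res.end('ok')).listen(8000);
}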
1.2.2 Application scenarios
Having looked at the advantages and disadvantages of Node.js, we can see that it is suitable for the following application scenarios:
Good at I/O, not good at computation. Because Node.js runs JavaScript on a single thread, too much (synchronous) computation blocks the thread.
Scenarios with a large amount of concurrent I/O but no very complex processing inside the application.
Working with WebSocket to develop long-connection, real-time interactive applications.
Specific usage scenarios are as follows:
User form collection systems, back-office management systems, real-time interaction systems, examination systems, networking software, and high-concurrency web applications.
Multiplayer online games based on the web, canvas, and so on.
Web-based multi-user real-time chat clients, chat rooms, and live image-and-text broadcasting.
Single-page browser applications.
Operating databases and providing JSON-based APIs for web and mobile front ends.
2. Node global objects
In browser JavaScript, window is the global object, while the global object in Node.js is global.
In Node.js it is not really possible to define a variable in the outermost scope, because all user code belongs to the current module and is only available within that module; it can, however, be exposed outside the module through the exports object. Therefore, in Node.js, a variable declared with var is not a global variable and only takes effect in the current module. The global object mentioned above sits in the global scope, and any global variable, function, or object is a property of it.
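A minimal sketch of the difference, in a single hypothetical module file:

// module.js
var x = 1;      // module-scoped: not attached to the global object
global.y = 2;   // explicitly global: visible in every module

console.log(global.x); // undefined - var did not create a global
console.log(global.y); // 2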
2.1 Common global objects
Some common global objects in Node are as follows:
Buffer class
process
console
setInterval, clearInterval
setTimeout, clearTimeout
global
Buffer: the Buffer class can be used to handle binary and non-Unicode-encoded data; raw data is stored in Buffer instances. A Buffer is similar to an array of integers, with raw memory allocated for it outside the V8 heap, and a Buffer instance cannot be resized once created.
process: process represents the current process and provides information about and control over it. For example, when executing a node program, if we need to pass arguments, we can read them from the built-in process object. Suppose we have the following file:
process.argv.forEach((val, index) => {
  console.log(`${index}: ${val}`);
});
When we need to start a process, we can use the following command:
node index.js someArgument
console: console is mainly used to print to stdout and stderr; the most common use is log output via console.log. The console can be cleared with console.clear, and the call stack of a function can be printed with console.trace.
setInterval and clearInterval: setInterval is used to set a repeating timer. Its syntax is as follows:
setInterval(callback, delay[, ...args])
callback is called repeatedly every delay milliseconds, and clearInterval is used to cancel the timer.
setTimeout and clearTimeout
Similar to setInterval, setTimeout is used to set a one-shot delayed timer, while clearTimeout cancels a timer set this way.
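A minimal sketch of both timer pairs:

// Fires once after 1000 ms.
const t = setTimeout(() => console.log('one second passed'), 1000);
clearTimeout(t); // cancelled: the callback never runs

// Fires every 500 ms until cleared.
const i = setInterval(() => console.log('tick'), 500);
setTimeout(() => clearInterval(i), 2600); // stop after about 5 ticks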
global: global is the global namespace object. The process, console, setTimeout, and so on mentioned earlier are all properties of global, for example:
console.log(process === global.process); // outputs true
2.2 Module-level globals
Besides the global objects provided by the system, there are some objects that only exist inside modules but look like global variables, as follows:
__dirname
__filename
exports
module
require
__dirname: __dirname is used to get the path of the directory containing the current file, excluding the file name. For example, running node example.js in /Users/mjr prints the following:
console.log(__dirname); // prints: /Users/mjr
__filename: __filename is used to get the full path of the current file, including the file name. For example, running node example.js in /Users/mjr prints the following:
console.log(__filename); // prints: /Users/mjr/example.js
exports: module.exports is used to export the contents of a module, which can then be accessed elsewhere with require().
exports.name = name;
exports.age = age;
exports.sayHello = sayHello;
require: require is used to import modules, JSON, or local files; modules can be imported from node_modules. Local modules and JSON files can be imported with relative paths, which are resolved against the directory given by __dirname or the current working directory.
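For example, assuming the exports shown above live in a hypothetical file named user.js next to the consumer, a usage sketch looks like this:

// index.js - consumes the user.js module above
const user = require('./user');

console.log(user.name, user.age);
user.sayHello();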
3. Understanding process
3.1 Basic concepts
We know that a process is the basic unit of resource allocation and scheduling in a computer system; it is the foundation of the operating system's structure and the container of threads. When we start a js file, we actually start a service process. Each process has its own independent address space and data stack, so one process cannot access the variables and data structures of another; data can only be shared between processes through inter-process communication.
The process object is a global variable in Node that provides information about, and control over, the current Node.js process. Because JavaScript is a single-threaded language, starting a file with node xxx starts only one main thread.
3.2 Common properties and methods
The common properties of process are as follows:
process.env: environment variables; for example, configuration for different environments can be read from process.env.NODE_ENV
process.nextTick: often mentioned when discussing the EventLoop
process.pid: gets the current process id
process.ppid: gets the parent process id of the current process
process.cwd(): gets the working directory of the current process
process.platform: gets the operating system platform the current process runs on
process.uptime(): the running time of the current process, for example the uptime value of a pm2 daemon process
process events: process.on('uncaughtException', cb) captures exception information; process.on('exit', cb) listens for process exit
Three standard streams: process.stdout standard output, process.stdin standard input, process.stderr standard error
process.title: used to specify the process name; sometimes it is necessary to give a process a specific name
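A short sketch that prints a few of these properties:

// Prints some commonly used process information.
console.log('pid:', process.pid);
console.log('platform:', process.platform);
console.log('cwd:', process.cwd());
console.log('uptime (s):', process.uptime());
console.log('NODE_ENV:', process.env.NODE_ENV); // undefined unless set

process.on('exit', (code) => {
  console.log('process exiting with code', code);
});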
4. Understanding the fs module
4.1 What is fs
fs (filesystem) is the file system module. It provides the ability to read and write local files and is basically a thin wrapper over POSIX file operations. It is fair to say that all file operations are performed through the fs core module.
Before using it, you need to import the fs module, as follows:
const fs = require('fs');
4.2 Basic knowledge of files
In a computer system, the following basics about files are relevant:
Permission bits (mode)
Flags (flag)
File descriptors (fd)
4.2.1 Permission bits (mode)
Permissions are assigned to the file owner, the file's group, and other users; each category has read, write, and execute permission, with permission bits 4, 2, and 1 respectively, and 0 for no permission. For example, in Linux, file permission bits can be viewed as follows:
drwxr-xr-x 1 PandaShen 197121   0 Jun 28 14:41 core
-rw-r--r-- 1 PandaShen 197121 293 Jun 23 17:44 index.md
In the first ten characters, d means a directory and - means a file; the remaining nine characters represent the permission bits of the owner, the group, and other users, in groups of three, standing for read (r), write (w), and execute (x); a - means the permission for that position is absent.
4.2.2 Flags
Flags represent how the file is operated on, such as readable, writable, or both readable and writable. The common flags are r (read, fails if the file does not exist), r+ (read and write, fails if the file does not exist), w (write, creating or truncating the file), w+ (read and write, creating or truncating), a (append, creating the file if needed), and a+ (read and append, creating the file if needed).
4.2.3 File descriptors (fd)
The operating system assigns a numeric identifier called a file descriptor to each open file, and file operations use these descriptors to identify and track each specific file.
Windows uses a different but conceptually similar mechanism to track resources. To make things easy for users, Node.js abstracts away the differences between operating systems and assigns numeric file descriptors to all open files.
In Node.js, each time a file is opened the file descriptor is incremented, and descriptors usually start at 3, because 0, 1, and 2 are three special descriptors representing process.stdin (standard input), process.stdout (standard output), and process.stderr (standard error), respectively.
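A hedged sketch (assuming a local file 1.txt exists) that makes the descriptor visible:

const fs = require('fs');

fs.open('1.txt', 'r', (err, fd) => {
  if (err) throw err;
  console.log(fd); // typically 3: 0, 1 and 2 are already taken
  fs.close(fd, () => {});
});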
4.3 Common methods
Since the fs module mainly manipulates files, the common file operations are as follows:
File reading
File writing
File appending
File copying
Directory creation
4.3.1 File read
There are two common methods for reading files: readFileSync and readFile. readFileSync performs a synchronous read, as follows:
Const fs = require ("fs"); let buf = fs.readFileSync ("1.txt"); let data = fs.readFileSync ("1.txt", "utf8"); console.log (buf); / / console.log (data); / / Hello
The first parameter is the path or file descriptor of the file to read.
The second parameter is options, whose default value is null; it contains encoding (default null) and flag (default r), or an encoding string can be passed directly.
readFile is the asynchronous read method. Its first two parameters are the same as readFileSync's, and the last parameter is a callback function with two parameters, err (error) and data (data). The method has no return value, and the callback is executed after the file has been read successfully.
Const fs = require ("fs"); fs.readFile ("1.txt", "utf8", (err, data) = > {if (! err) {console.log (data); / / Hello}}); 4.3.2 File write
Two methods, writeFileSync and writeFile, are used to write files. writeFileSync performs a synchronous write, as shown below.
Const fs = require ("fs"); fs.writeFileSync ("2.txt", "Hello world"); let data = fs.readFileSync ("2.txt", "utf8"); console.log (data); / / Hello world
The first parameter is the path or file descriptor of the file to write.
The second parameter is the data to write, of type String or Buffer.
The third parameter is options, whose default value is null; it contains encoding (default utf8), flag (default w), and mode (permission bits, default 0o666), or an encoding string can be passed directly.
writeFile performs an asynchronous write. Its first three parameters are the same as writeFileSync's, and the last parameter is a callback with one parameter, err (error). The callback is executed after the data has been written successfully.
Const fs = require ("fs"); fs.writeFile ("2.txt", "Hello world", err = > {if (! err) {fs.readFile ("2.txt", "utf8", (err, data) = > {console.log (data); / / Hello world});}}); 4.3.3 additional write to the file
Two methods, appendFileSync and appendFile, are used to append to a file. appendFileSync performs a synchronous append, as follows.
Const fs = require ("fs"); fs.appendFileSync ("3.txt", "world"); let data = fs.readFileSync ("3.txt", "utf8")
The first parameter is the path or file descriptor of the file to append to.
The second parameter is the data to write, of type String or Buffer.
The third parameter is options, whose default value is null; it contains encoding (default utf8), flag (default a), and mode (permission bits, default 0o666), or an encoding string can be passed directly.
appendFile performs an asynchronous append. Its first three parameters are the same as appendFileSync's, and the last parameter is a callback with one parameter, err (error). The callback is executed after the data has been appended successfully, as shown below.
Const fs = require ("fs"); fs.appendFile ("3.txt", "world", err = > {if (! err) {fs.readFile ("3.txt", "utf8", (err, data) = > {console.log (data); / / Hello world});}}); 4.3.4 create a directory
There are two main methods for creating a directory: mkdirSync and mkdir. mkdirSync creates synchronously; its parameter is the path of the directory and it has no return value. When creating a directory, every parent directory in the given path must already exist, otherwise an exception is thrown.
// assuming folders a and a/b already exist
fs.mkdirSync("a/b/c");
mkdir creates asynchronously, and its second parameter is a callback function, as shown below.
Fs.mkdir ("a/b/c", err = > {if (! err) console.log ("created successfully"); 5. Talk about your understanding of the basic concepts of Stream
A stream (Stream) is a means of transferring data and a way of exchanging information end to end, and it is sequential: it reads data and processes content block by block, and is used for reading input or writing output sequentially. In Node, a stream pipeline has three parts: source, dest, and pipe.
source and dest are connected by pipe, whose basic syntax is source.pipe(dest); through pipe, data flows from source to dest.
5.2 Classification of streams
In Node, streams can be divided into four categories:
Writable stream: a stream data can be written to, such as fs.createWriteStream(), which can be used to write data to a file.
Readable stream: a stream data can be read from, such as fs.createReadStream(), which can read content from a file.
Duplex stream: a stream that is both readable and writable, such as net.Socket.
Transform stream: a stream that can modify or transform data as it is written and read; for example, in file compression, compressed data can be written to a file and decompressed data read back from it (a short sketch follows the next paragraph).
In Node's HTTP server module, request is a readable stream and response is a writable stream. For the fs module, readable and writable file streams are both one-way, which is easier to understand, while a socket is bidirectional, both readable and writable.
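As a hedged sketch of a transform stream (assuming a local data.txt exists), Node's built-in zlib module exposes compression as a transform that can sit in the middle of a pipeline:

const fs = require('fs');
const zlib = require('zlib');

// Readable -> Transform (gzip) -> Writable
fs.createReadStream('data.txt')
  .pipe(zlib.createGzip())                  // transforms chunks as they pass through
  .pipe(fs.createWriteStream('data.txt.gz'))
  .on('finish', () => console.log('compressed'));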
5.2.1 Duplex streams
In Node, the most common example of full-duplex communication is a websocket, because the sender and receiver are independent: sending and receiving have no relationship to each other.
The basic usage is as follows:
const { Duplex } = require('stream');
const myDuplex = new Duplex({
  read(size) {
    // ...
  },
  write(chunk, encoding, callback) {
    // ...
  }
});
5.3 Usage scenarios
Common usage scenarios for streams are:
GET requests that return a file to the client
File operations
The low-level operations of some packaging tools
5.3.1 Network request
A common usage scenario for streams is a network request, such as returning a file with a stream: res is itself a stream object, and the file data is returned through a pipe.
const server = http.createServer(function (req, res) {
  const method = req.method;
  // GET request
  if (method === 'GET') {
    const fileName = path.resolve(__dirname, 'data.txt');
    let stream = fs.createReadStream(fileName);
    stream.pipe(res);
  }
});
server.listen(8080);
5.3.2 File operations
Copying a file is also a stream operation: create a readable stream readStream and a writable stream writeStream, and transfer the data through pipe.
const fs = require('fs')
const path = require('path')

// two file names
const fileName1 = path.resolve(__dirname, 'data.txt')
const fileName2 = path.resolve(__dirname, 'data-bak.txt')
// readable stream object
const readStream = fs.createReadStream(fileName1)
// writable stream object for the target file
const writeStream = fs.createWriteStream(fileName2)
// the copy is performed through pipe, i.e. data flow
readStream.pipe(writeStream)
// listen for the end of reading, i.e. copy complete
readStream.on('end', function () {
  console.log('copy complete')
})
In addition, some packaging tools, such as Webpack and Vite, involve a lot of streaming operations.
6. The event loop mechanism
6.1 What is the event loop
Node.js maintains an event queue in the main thread. When it receives a request, it puts the request into this queue as an event and goes on to receive other requests. When the main thread is idle (no request being handled), it loops over the event queue to check whether there are events to process. There are two cases: for a non-I/O task, the main thread handles it directly and returns the result to the caller through the callback function; for an I/O task, a thread is taken from the thread pool to handle the event, a callback function is specified, and the loop continues with other events in the queue.
When the I/O task in the worker thread completes, the specified callback is executed and the completed event is placed at the end of the event queue, waiting for the event loop; when the main thread reaches this event again, it is processed directly and the result is returned to the caller. This process is called the event loop (Event Loop).
The architecture of Node.js can be divided into four layers: the application layer, the V8 engine layer, the Node API layer, and the libuv layer.
Application layer: the JavaScript interaction layer; typical examples are Node.js modules such as http and fs.
V8 engine layer: uses the V8 engine to parse JavaScript syntax and interact with the lower-level APIs.
Node API layer: provides system calls for the upper-layer modules, usually implemented in C/C++, and interacts with the operating system.
libuv layer: the cross-platform low-level encapsulation that implements the event loop, file operations, and so on; it is the core of Node.js asynchrony.
In Node, the event loop is implemented on top of libuv, a multi-platform library focused on asynchronous I/O. The EVENT_QUEUE above looks like a single queue, but the EventLoop actually has six phases, each with its own first-in-first-out callback queue.
6.2 The six phases of the event loop
The event loop is divided into six phases, as follows.
timers phase: executes the callbacks of timers (setTimeout, setInterval).
pending callbacks phase: executes I/O callbacks deferred to the next loop iteration, i.e. I/O callbacks that were not executed in the previous loop.
idle, prepare phase: used only internally.
poll phase: retrieves new I/O events and executes their callbacks (almost all of them, except close callbacks, callbacks scheduled by timers, and setImmediate()); otherwise the loop will block here when appropriate.
check phase: setImmediate() callbacks are executed here.
close callbacks phase: some close callbacks, such as socket.on('close', ...).
Each phase corresponds to a queue. When the event loop enters a phase, the callbacks of that phase are executed until the queue is exhausted or the maximum number of callbacks has been executed, and then the loop moves on to the next phase.
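A minimal sketch of how the phases order callbacks. Note that in the main module the relative order of setTimeout(..., 0) and setImmediate is not deterministic, but inside an I/O callback setImmediate always fires first, because the poll phase is immediately followed by the check phase:

const fs = require('fs');

fs.readFile(__filename, () => {
  // We are now inside an I/O (poll-phase) callback.
  setTimeout(() => console.log('timeout'), 0);   // runs in the next timers phase
  setImmediate(() => console.log('immediate'));  // runs in the upcoming check phase
  // prints: immediate, then timeout
});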
7. EventEmitter
7.1 Basic concepts
As mentioned earlier, Node uses an event-driven mechanism, and EventEmitter is the foundation of Node's event-driven design: almost all Node modules inherit from this class. These modules have their own events, can bind and trigger listeners, and thereby implement asynchronous behavior.
Many objects in Node.js emit events; for example, an fs.ReadStream object triggers an event when the file is opened. Objects that emit events are instances of events.EventEmitter and are used to bind one or more functions to named events.
7.2 basic use
Node's events module provides a single EventEmitter class, which implements the basic pattern behind Node's asynchronous event-driven architecture: the observer pattern.
In this pattern, the observed object (the subject) maintains a group of observers registered by other objects: any object interested in the subject registers as an observer, unsubscribes when it is no longer interested, and is notified in turn whenever the subject is updated, as follows.
const EventEmitter = require('events')

class MyEmitter extends EventEmitter {}
const myEmitter = new MyEmitter()

function callback() {
  console.log('event triggered!')
}
myEmitter.on('event', callback)
myEmitter.emit('event')
myEmitter.removeListener('event', callback)
In the code above, we register an event named event through the instance's on method, trigger it with the emit method, and use removeListener to stop listening for the event.
In addition to some of the methods described above, other commonly used methods are as follows:
emitter.addListener/on(eventName, listener): adds a listener of type eventName to the end of the listener array.
emitter.prependListener(eventName, listener): adds a listener of type eventName to the head of the listener array.
emitter.emit(eventName[, ...args]): triggers the listeners of type eventName.
emitter.removeListener/off(eventName, listener): removes a listener of type eventName.
emitter.once(eventName, listener): adds a listener of type eventName that is executed at most once and then removed.
emitter.removeAllListeners([eventName]): removes all listeners of type eventName.
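A short sketch of once and prependListener:

const EventEmitter = require('events')
const emitter = new EventEmitter()

emitter.on('greet', () => console.log('second'))
emitter.prependListener('greet', () => console.log('first')) // jumps to the head
emitter.once('greet', () => console.log('only once'))        // removed after one call

emitter.emit('greet') // first, second, only once
emitter.emit('greet') // first, second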
7.3 Implementation principle
EventEmitter is essentially a constructor holding an object that contains all registered events:
class EventEmitter {
  constructor() {
    this.events = {};
  }
}
The listener functions stored in events are structured as follows:
{"event1": [F1 force f2jue f3], "event2": [f4meme f5],...}
Then the instance methods are implemented step by step. The first is emit: its first parameter is the event type and the remaining parameters are the arguments for the listener functions. It is implemented as follows:
emit(type, ...args) {
  this.events[type].forEach((item) => {
    Reflect.apply(item, this, args);
  });
}
After emit, the three instance methods on, addListener, and prependListener are implemented in turn; all of them add event listener functions.
on(type, handler) {
  if (!this.events[type]) {
    this.events[type] = [];
  }
  this.events[type].push(handler);
}

addListener(type, handler) {
  this.on(type, handler)
}

prependListener(type, handler) {
  if (!this.events[type]) {
    this.events[type] = [];
  }
  this.events[type].unshift(handler);
}
To remove an event listener, the methods removeListener/off can be used.
removeListener(type, handler) {
  if (!this.events[type]) {
    return;
  }
  this.events[type] = this.events[type].filter(item => item !== handler);
}

off(type, handler) {
  this.removeListener(type, handler)
}
Finally, the once method is implemented by wrapping the incoming listener function: a closure maintains the current state, and the value of the fired attribute determines whether the listener has already been executed.
once(type, handler) {
  this.on(type, this._onceWrap(type, handler, this));
}

_onceWrap(type, handler, target) {
  const state = { fired: false, handler, type, target };
  const wrapFn = this._onceWrapper.bind(state);
  state.wrapFn = wrapFn;
  return wrapFn;
}

_onceWrapper(...args) {
  if (!this.fired) {
    this.fired = true;
    Reflect.apply(this.handler, this.target, args);
    this.target.off(this.type, this.wrapFn);
  }
}
Here is the completed test code:
class EventEmitter {
  constructor() {
    this.events = {};
  }

  on(type, handler) {
    if (!this.events[type]) {
      this.events[type] = [];
    }
    this.events[type].push(handler);
  }

  addListener(type, handler) {
    this.on(type, handler)
  }

  prependListener(type, handler) {
    if (!this.events[type]) {
      this.events[type] = [];
    }
    this.events[type].unshift(handler);
  }

  removeListener(type, handler) {
    if (!this.events[type]) {
      return;
    }
    this.events[type] = this.events[type].filter(item => item !== handler);
  }

  off(type, handler) {
    this.removeListener(type, handler)
  }

  emit(type, ...args) {
    this.events[type].forEach((item) => {
      Reflect.apply(item, this, args);
    });
  }

  once(type, handler) {
    this.on(type, this._onceWrap(type, handler, this));
  }

  _onceWrap(type, handler, target) {
    const state = { fired: false, handler, type, target };
    const wrapFn = this._onceWrapper.bind(state);
    state.wrapFn = wrapFn;
    return wrapFn;
  }

  _onceWrapper(...args) {
    if (!this.fired) {
      this.fired = true;
      Reflect.apply(this.handler, this.target, args);
      this.target.off(this.type, this.wrapFn);
    }
  }
}
8. Middleware
8.1 Basic concepts
Middleware (Middleware) is software that sits between an application system and system software. It uses the basic services (functions) provided by the system software to connect different parts of an application, or different applications, across a network, achieving resource sharing and function sharing. In Node, middleware mainly refers to methods that encapsulate the details of handling HTTP requests; for example, in web frameworks such as express and koa, a middleware is essentially a callback function whose parameters include the request object, the response object, and a function that executes the next middleware.
Usually, in these middleware functions, we can execute business logic code, modify request and response objects, return response data, and so on.
8.2 Koa
Koa is a popular web framework built on Node. It does not ship with many features itself: all functionality is extended through middleware. Instead of bundling any middleware, Koa provides an elegant way to help developers write server-side applications quickly and happily.
Koa middleware uses the onion model. Each middleware receives two parameters:
ctx: the context object that encapsulates request and response
next: a function that passes control to the next middleware to be executed
From the introduction above, we know that a Koa middleware is essentially a function, which can be an async function or an ordinary function. Here is how koa middleware is written:
// async function
app.use(async (ctx, next) => {
  const start = Date.now();
  await next();
  const ms = Date.now() - start;
  console.log(`${ctx.method} ${ctx.url} - ${ms}ms`);
});

// ordinary function
app.use((ctx, next) => {
  const start = Date.now();
  return next().then(() => {
    const ms = Date.now() - start;
    console.log(`${ctx.method} ${ctx.url} - ${ms}ms`);
  });
});
Of course, we can also encapsulate several functions commonly used in the http request process through middleware:
Token check
module.exports = (options) => async (ctx, next) => {
  try {
    // get the token
    const token = ctx.header.authorization
    if (token) {
      try {
        // the verify function validates the token and obtains the related user information
        await verify(token)
      } catch (err) {
        console.log(err)
      }
    }
    // move on to the next middleware
    await next()
  } catch (err) {
    console.log(err)
  }
}
Log module
const fs = require('fs')
module.exports = (options) => async (ctx, next) => {
  const startTime = Date.now()
  const requestTime = new Date()
  await next()
  const ms = Date.now() - startTime;
  let logout = `${ctx.request.ip} -- ${requestTime} -- ${ctx.method} -- ${ctx.url} -- ${ms}ms`;
  // write the log to a file
  fs.appendFileSync('./log.txt', logout + '\n')
}
There are many third-party middleware in Koa, such as koa-bodyparser, koa-static and so on.
8.3 Koa middleware
koa-bodyparser: the koa-bodyparser middleware converts a POST request body or a form-submitted query string into an object and mounts it on ctx.request.body, making it convenient to read the values in other middleware or interfaces.
// File: my-koa-bodyparser.js
const querystring = require("querystring");

module.exports = function bodyParser() {
  return async (ctx, next) => {
    await new Promise((resolve, reject) => {
      // array used to store the data chunks
      let dataArr = [];

      // receive the data
      ctx.req.on("data", data => dataArr.push(data));

      // assemble the data and resolve the Promise on success
      ctx.req.on("end", () => {
        // get the type of the request data, json or form
        let contentType = ctx.get("Content-Type");

        // get the data in Buffer format and convert to a string
        let data = Buffer.concat(dataArr).toString();

        if (contentType === "application/x-www-form-urlencoded") {
          // for a form submission, convert the query string to an object assigned to ctx.request.body
          ctx.request.body = querystring.parse(data);
        } else if (contentType === "application/json") {
          // for json, convert the string to an object assigned to ctx.request.body
          ctx.request.body = JSON.parse(data);
        }

        // resolve on success
        resolve();
      });
    });

    // continue with the next middleware
    await next();
  };
};
koa-static: the koa-static middleware helps the server handle static files when a request is received, for example:
Const fs = require ("fs"); const path = require ("path"); const mime = require ("mime"); const {promisify} = require ("util"); / / convert stat and access to Promiseconst stat = promisify (fs.stat) Const access = promisify (fs.access) module.exports = function (dir) {return async (ctx, next) = > {/ / treat the access route as an absolute path, here use join because it may be / let realPath = path.join (dir, ctx.path); try {/ / get stat object let statObj = await stat (realPath) / / if it is a file, set the file type and respond to the content directly, otherwise look for index.html if (statObj.isFile ()) {ctx.set ("Content-Type", `${mime.getType ()}; charset= utf8`) as a folder; ctx.body = fs.createReadStream (realPath) } else {let filename = path.join (realPath, "index.html"); / / if the file does not exist, execute the next in catch to other middleware to process await access (filename); / / set the file type and respond to the content ctx.set ("Content-Type", "text/html") Charset=utf8 "); ctx.body = fs.createReadStream (filename);}} catch (e) {await next ();}
In general, when implementing middleware, a single middleware should be simple enough and have a single responsibility, and middleware code should be efficient; where necessary, data that is fetched repeatedly should be obtained from a cache.
9. How to design and implement JWT authentication
9.1 What is JWT
JWT (JSON Web Token) is essentially a string format specification, used to pass information between the user and the server securely and reliably.
In today's front-end/back-end separated development process, token-based authentication is the most common solution. The flow is as follows:
When the server verifies that the user's account and password are correct, it issues a token to the user, which serves as the credential for the user's subsequent access to certain interfaces.
On subsequent access, the server determines from this token whether the user has permission to access the resource.
A token is divided into three parts: the header (Header), the payload (Payload), and the signature (Signature), joined with dots (.). The header and payload are stored in JSON format, but encoded, as shown below.
9.1.1 header
Every JWT carries header information, which mainly declares the algorithm used. The field declaring the algorithm is named alg; there is also a typ field, whose default is JWT. In the following example the algorithm is HS256:
{"alg": "HS256", "typ": "JWT"}
Because a JWT is a string, the content above also needs to be Base64-encoded (strictly, the URL-safe Base64URL variant). The encoded string is as follows:
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9
9.1.2 Payload
The payload is the message body, which stores the actual content, that is, the token's data claims, such as the user's id and name. By default it also carries iat, the token's issue time, and an expiration time can be set as well, as shown below:
{"sub": "1234567890", "name": "John Doe", "iat": 1516239022}
After the same Base64URL encoding, the string is as follows:
eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ
9.1.3 Signature
The signature signs the header and payload content. In general, a secretKey is set and the HMAC-SHA256 algorithm is applied to the first two parts. The formula is as follows:
signature = HMACSHA256(base64Url(header) + "." + base64Url(payload), secretKey)
Therefore, even if the first two parts of the token are tampered with, as long as the key used by the server for signing is not disclosed, the resulting signature will not match the original one.
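As a hedged illustration (not the jsonwebtoken internals, just the formula above expressed with Node's built-in crypto module; assumes a Node version that supports the base64url encoding, and test_token is a hypothetical secret):

const crypto = require('crypto');

const base64Url = (obj) =>
  Buffer.from(JSON.stringify(obj)).toString('base64url');

const header = { alg: 'HS256', typ: 'JWT' };
const payload = { sub: '1234567890', name: 'John Doe', iat: 1516239022 };
const secretKey = 'test_token'; // hypothetical secret

// signature = HMACSHA256(base64Url(header) + "." + base64Url(payload), secretKey)
const data = base64Url(header) + '.' + base64Url(payload);
const signature = crypto
  .createHmac('sha256', secretKey)
  .update(data)
  .digest('base64url');

console.log(`${data}.${signature}`); // a complete JWT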
9.2 Design and implementation
In general, using a token involves two parts: generating the token and verifying the token.
Generating the token: issue a token when login succeeds.
Verifying the token: verify the token when certain resources or interfaces are accessed.
9.2.1 Generating the token
With the help of the third-party library jsonwebtoken, a token is generated through its sign method. sign takes three parameters:
The first parameter is the payload.
The second is the secret key, which is private to the server.
The third parameter is options, which can define the token's expiration time.
Here is an example of the server generating a token:
Const crypto = require ("crypto"), jwt = require ("jsonwebtoken"); / / TODO: use the database / / here should be stored in the database, here is just a demonstration of using let userList = []; class UserController {/ / user login static async login (ctx) {const data = ctx.request.body If (! data.name | |! data.password) {return ctx.body = {code: "000002" Message: "invalid parameters"} const result = userList.find (item = > item.name = = data.name & & item.password = crypto.createHash ('md5') .update (data.password) .digest (' hex')) if (result) {/ / generate token const token = jwt.sign ({name: result.name}, "test_token") / / secret {expiresIn: 60 * 60} / / Expiration time: 60 * 60 s) Return ctx.body = {code: "0", message: "login succeeded", data: {token}};} else {return ctx.body = {code: "000002", message: "wrong username or password"};} module.exports = UserController
After the front end receives the token, it generally caches it in localStorage and then puts it in the Authorization HTTP request header. When setting Authorization, Bearer must be added in front of the token, followed by a space, as shown below.
axios.interceptors.request.use(config => {
  const token = localStorage.getItem('token');
  config.headers.common['Authorization'] = 'Bearer ' + token; // note the Authorization header here
  return config;
})
9.2.2 Verifying the token
The koa-jwt middleware can be used for verification; it is relatively simple and performs the check before routing, as shown below.
app.use(koajwt({
  secret: 'test_token'
}).unless({
  // configure the whitelist
  path: [/\/api\/register/, /\/api\/login/]
}))
When using koa-jwt middleware for verification, you should pay attention to the following points:
secret must be consistent with the one used by sign.
A whitelist of interfaces can be configured through unless, i.e. which URLs can be accessed without verification, such as login and registration.
The verification middleware must be placed before the routes that need verification; routes registered before it are not verified.
The method to obtain the user's token information is as follows:
router.get('/api/userInfo', async (ctx, next) => {
  // get the jwt from the header
  const authorization = ctx.header.authorization
  const token = authorization.replace('Bearer ', '')
  const result = jwt.verify(token, 'test_token')
  ctx.body = result
})
Note: the HS256 algorithm above uses a single secret key, and the consequences of its disclosure are very dangerous.
In a distributed system, if every subsystem obtains the secret key, each of them can both issue and verify tokens, but some servers only need to verify tokens. In that case asymmetric encryption can be used: tokens are issued with the private key and verified with the public key, using an asymmetric algorithm such as RS256.
In addition, JWT authentication needs to pay attention to the following points:
The payload part is only encoded, not encrypted, so it should only store non-sensitive information needed by the logic.
The signing key must be protected; the consequences of its disclosure would be unimaginable.
To prevent token hijacking, it is best to use the HTTPS protocol.
10. Node performance monitoring and optimization
10.1 Node optimization points
As a server-side language, performance is particularly important for Node. The metrics generally considered are as follows:
CPU
Memory
I/O
Network
10.1.1 CPU
For the indicators of CPU, we mainly focus on the following two points:
CPU load: the total number of processes occupying or waiting for the CPU over a period of time.
CPU usage: the proportion of time the CPU is busy, equal to 1 - idle CPU time / total CPU time.
Both indicators quantify how busy the system's CPU currently is. A Node application generally does not consume much CPU; if CPU usage is high, it indicates that the application has many synchronous operations that are blocking the callbacks of asynchronous tasks.
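A hedged sketch of reading these metrics from a running Node process:

const os = require('os');

// 1, 5 and 15 minute system load averages (returns zeros on Windows)
console.log('load average:', os.loadavg());

// user/system CPU time consumed by this process, in microseconds
const usage = process.cpuUsage();
console.log('cpu usage (us):', usage.user, usage.system);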
10.1.2 Memory metrics
Memory is a very easy metric to quantify, and memory occupancy is a common indicator of a system's memory bottleneck. For Node, the usage of the internal heap is also quantifiable. The following code obtains the memory-related data:
// app/lib/memory.js
const os = require('os');
// get the current Node memory usage
const { rss, heapUsed, heapTotal } = process.memoryUsage();
// get the free system memory
const sysFree = os.freemem();
// get the total system memory
const sysTotal = os.totalmem();

module.exports = {
  memory: () => {
    return {
      sys: 1 - sysFree / sysTotal,  // system memory occupancy
      heap: heapUsed / heapTotal,   // Node heap memory occupancy
      node: rss / sysTotal,         // percentage of system memory occupied by Node
    }
  }
}
rss: the total amount of memory occupied by the Node process.
heapTotal: the total size of the heap.
heapUsed: the heap memory actually in use.
external: memory used by external resources, including the memory used by the C++ objects at the core of Node.
In Node, the default maximum heap size of a process is about 1.5 GB (on 64-bit systems, for older versions of Node/V8), so memory should be used carefully in practice.
10.1.3 Disk I/O
Disk I/O is very expensive: a disk I/O is commonly cited as taking on the order of 164,000 times as many CPU clock cycles as a memory access. Memory I/O is much faster than disk I/O, so caching data in memory is an effective optimization; commonly used tools include redis, memcached, and so on.
Moreover, not all data needs to be cached: only data with high access frequency and high generation cost should be cached. In other words, cache what affects your performance bottleneck, and remember that problems such as cache avalanche and cache penetration then have to be addressed.
10.2 How to monitor
Generally speaking, performance monitoring requires tools such as Easy-Monitor, Alibaba's Node performance platform, and so on.
Easy-Monitor 2.0 is used here. It is a lightweight kernel-level performance monitoring and analysis tool for Node.js projects; in the default mode, you only need to require it once in the project entry file, without changing any business code, to enable kernel-level performance monitoring and analysis.
Easy-Monitor is also relatively simple to use; it is introduced in the project entry file as follows.
const easyMonitor = require('easy-monitor');
easyMonitor('project name');
Open a browser and visit http://localhost:12333 to see the process interface. For more details, refer to the official website.
10.3 Node performance optimization
There are several ways to optimize the performance of Node:
Use the latest version of Node.js
Use streams (Stream) correctly
Code-level optimization
Memory management optimization
10.3.1 Use the latest version of Node.js
The performance improvement of each new version comes mainly from two aspects:
V8 version updates
Updates and optimizations of Node.js internal code
10.3.2 Proper use of streams
In Node, many objects implement streams; a large file can be sent as a stream without being read into memory in full.
const http = require('http');
const fs = require('fs');

// wrong: reads the whole file into memory first
http.createServer(function (req, res) {
  fs.readFile(__dirname + '/data.txt', function (err, data) {
    res.end(data);
  });
});

// correct: streams the file to the response
http.createServer(function (req, res) {
  const stream = fs.createReadStream(__dirname + '/data.txt');
  stream.pipe(res);
});
10.3.3 Code-level optimization
Merge queries: combine multiple queries into one to reduce the number of database queries.
// wrong way: one query per user
for (const userId of userIds) {
  const account = user_account.findOne(userId)
}

// correct way: a single batched query
const userAccountMap = {} // note: this object will consume a lot of memory
user_account.find(/* user_id in userIds */).forEach(account => {
  userAccountMap[account.userId] = account
})
for (const userId of userIds) {
  const account = userAccountMap[userId]
}
10.3.4 Memory management optimization
In V8, memory is mainly divided into two generations: the young generation and the old generation:
Young generation: objects with short lifetimes, such as new objects or objects that have been garbage-collected only once.
Old generation: objects with long lifetimes, i.e. objects that have survived one or more garbage collections.
If the young generation's memory space is insufficient, objects are allocated directly to the old generation. Reducing memory footprint improves server performance. If there is a memory leak, large numbers of objects will also be retained in the old generation, which severely degrades performance, as in the following example.
const leak = [];
const buffer = fs.readFileSync(__dirname + '/source/index.htm');

app.use(mount('/', async (ctx) => {
  ctx.status = 200;
  ctx.type = 'html';
  ctx.body = buffer;
  // every request pushes another copy that can never be collected
  leak.push(fs.readFileSync(__dirname + '/source/index.htm'));
}));
When leak grows very large it can cause a memory leak; such operations should be avoided.
Reducing memory usage can significantly improve service performance. A good way to save memory is to use object pooling, which keeps frequently used, reusable objects around, reducing creation and destruction operations. For example, suppose an image-processing interface needs certain helper objects on every request: if these objects are created with new on every request, frequent creation and destruction under heavy load causes memory churn. With an object pool, the objects that would otherwise be frequently created and destroyed are kept in a pool, avoiding repeated initialization and improving performance. A minimal sketch follows.
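A minimal, hypothetical object pool sketch (the names are illustrative, not from any specific library):

// A tiny generic object pool: acquire() reuses a released object when
// possible, release() returns an object to the pool for later reuse.
class ObjectPool {
  constructor(factory, reset) {
    this.factory = factory; // creates a new object
    this.reset = reset;     // clears an object before reuse
    this.pool = [];
  }
  acquire() {
    return this.pool.length > 0 ? this.pool.pop() : this.factory();
  }
  release(obj) {
    this.reset(obj);
    this.pool.push(obj);
  }
}

// usage: a pool of reusable buffers for an image-processing handler
const bufferPool = new ObjectPool(
  () => Buffer.alloc(1024 * 1024),
  (buf) => buf.fill(0)
);
const buf = bufferPool.acquire();
// ... process an image using buf ...
bufferPool.release(buf);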
That is all of the content of this article on Node.js-based front-end interview questions. Thank you for reading, and I hope it has been helpful.