How to Implement a Simple npm install
This article explains how to implement a simple npm install. I found it quite practical, so I am sharing it here; I hope you get something out of it. Without further ado, let's take a look.
These days we rarely write all of the code ourselves; most development builds on third-party packages, which shows up in the project layout as the src and node_modules directories.
The ratio of src to node_modules (third-party packages) varies from project to project.
Different runtimes look up third-party packages in different ways:
In the Node environment, the runtime supports node_modules lookup, so you only need to deploy the src part and then install the relevant dependencies.
In the browser environment, node_modules lookup is not supported, so dependencies need to be bundled into a form the browser can load.
Which of these two applies in a cross-end environment?
Neither, exactly: different cross-end engines implement this differently. A cross-end engine implements require and can look up modules (built-in and third-party) at run time, but not the way Node does; each engine has its own scheme.
It is similar to module lookup in the Node environment, but the directory structure is different, so you need to implement your own xxx install.
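For intuition, here is a minimal sketch of Node-style node_modules lookup, which climbs the directory tree one level at a time (the real algorithm also handles core modules, package.json fields, file extensions, and more):

const fs = require('fs');
const path = require('path');

function resolvePackage(fromDir, name) {
  let dir = fromDir;
  while (true) {
    const candidate = path.join(dir, 'node_modules', name);
    if (fs.existsSync(candidate)) return candidate; // found in this level's node_modules
    const parent = path.dirname(dir);
    if (parent === dir) throw new Error(`Cannot find module '${name}'`); // reached the filesystem root
    dir = parent; // climb one directory and try again
  }
}

A cross-end engine's require would follow the same spirit, just against its own directory layout.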
Approach
npm has its own registry server that hosts published packages; packages are downloaded from the registry. If we implement this ourselves, there is no need to build a registry: we can download the source directly from GitLab with git clone.
Dependency analysis
To implement downloading, you first need to determine what to download, and here an install tool differs from a bundler:
A bundler parses file contents into an AST to determine dependencies and then packages them (sketched below).
A dependency-installation tool determines dependencies from the dependency file the user declares (package.json / bundle.json) and installs them.
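As a rough illustration of the AST approach, here is a minimal sketch that collects import sources from a module, assuming @babel/parser is available (real bundlers also handle require(), dynamic import(), re-exports, and so on):

const { parse } = require('@babel/parser');

function findImports(code) {
  const ast = parse(code, { sourceType: 'module' });
  return ast.program.body
    .filter((node) => node.type === 'ImportDeclaration') // static import statements
    .map((node) => node.source.value);                   // the module specifier strings
}

console.log(findImports(`import React from 'react';`)); // [ 'react' ]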
Here we call the package description file bundle.json; it declares the packages being depended on:

{
  "name": "xxx",
  "dependencies": {
    "yyyy": "aaaa/bbbb#release/1111"
  }
}
Taking the bundle.json at the project root as the entry, we download each dependency, analyze its bundle.json, then download each of those dependencies, and recurse. That is the dependency-analysis process.
This way packages are downloaded during dependency analysis, and when the analysis finishes, the downloads are finished too. It is a feasible approach.
But it has problems. For example: what about version conflicts? What about circular dependencies?
Resolve version conflicts
A version conflict means multiple packages depend on the same package but on different versions, so we have to choose one version to install. We can simply adopt the rule of using the higher version, as in the sketch below.
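A minimal sketch of that rule, assuming versions are dotted numeric strings like "1.2.3" (real tools use full semver ranges):

function compareVersions(a, b) {
  const pa = String(a).split('.').map(Number);
  const pb = String(b).split('.').map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const diff = (pa[i] || 0) - (pb[i] || 0);
    if (diff !== 0) return diff; // > 0 means a is higher, < 0 means b is higher
  }
  return 0; // equal versions
}

console.log(compareVersions('1.10.0', '1.9.3') > 0); // true: keep 1.10.0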
Solve circular dependencies
There may be circular dependencies between packages (which is why it is called a dependency graph, not a dependency tree). The solution is to record the packages that have been processed: if the same version of a package has already been analyzed, don't analyze it again; just take the cached result.
Recording what has been visited like this is a general way to break dependency cycles, as the sketch below shows.
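A minimal sketch of the cycle-breaking cache, assuming each package is identified by name@version:

const analyzed = new Set();

function analyze(pkg) {
  const key = `${pkg.name}@${pkg.version}`;
  if (analyzed.has(key)) return; // this exact version was already processed: stop here
  analyzed.add(key);
  (pkg.dependencies || []).forEach(analyze); // safe even when A depends on B and B on A
}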
We have now solved version conflicts and circular dependencies. Are there any other problems?
When versions conflict, the highest version wins, but by that point the lower versions have already been downloaded, so there are unnecessary downloads. Can we eliminate this redundancy?
Separate dependency analysis from download
The redundant low-version downloads happen because we download during dependency analysis. So during analysis, can we download only bundle.json, and batch-download the dependencies after the analysis has determined the dependency graph?
Downloading only the bundle.json file from GitLab would have to go through the ssh protocol, which is slightly more involved, so we can do it in a simpler way:

git clone --depth=1 --branch=bb xxx

With --depth=1, git clone downloads only a single commit, which is very fast. It is not as fast as downloading only bundle.json, but it is good enough (in my test, downloading all commits took 20 seconds, while downloading a single commit took only 1 second).
So during dependency analysis we only shallow-clone a single commit of each package into a temporary directory, analyze the dependencies, resolve conflicts, and determine the dependency graph; then we batch-download, using git clone to fetch the full history. Finally, we delete the temporary directory.
By separating dependency analysis from downloading, we removed the unnecessary downloads of lower-version packages, which improves download speed somewhat.
Global caching
When there are multiple projects on one machine and each downloads its dependency packages independently, common packages get downloaded repeatedly. The solution is a global cache.
After the analysis, when downloading each dependency package, first check whether the package already exists globally. If it does, copy it directly and pull the latest code; if not, download it into the global cache, then copy it into the local directory.
By adding this extra layer of global cache, we achieve cross-project reuse of dependency packages.
Code implementation
To keep the train of thought clear, the code below is simplified and omits error handling.
Dependency analysis
Dependency analysis recursively processes bundle.json files: it analyzes the dependencies, downloads them into a temporary directory, and records what has already been analyzed. This handles both version conflicts and circular dependencies.
const fs = require('fs');
const os = require('os');
const path = require('path');
const childProcess = require('child_process');

const TMP_DIR = path.resolve(os.tmpdir(), 'xxx-install'); // temporary analysis directory
const allDeps = {};

function installDeps(projectDir) {
  const bundleJsonPath = path.resolve(projectDir, 'bundle.json');
  const bundleInfo = JSON.parse(fs.readFileSync(bundleJsonPath));
  const bundleDeps = bundleInfo.dependencies || {};
  for (const depName in bundleDeps) {
    // "aaaa/bbbb#release/1111" -> repo url + branch; the branch ref doubles as the version here
    const [url, branch] = bundleDeps[depName].split('#');
    const version = branch;
    let conflict = false;
    if (allDeps[depName]) {
      if (allDeps[depName].branch === branch && allDeps[depName].version === version) {
        continue; // same version already analyzed: skip (this also breaks circular dependencies)
      }
      if (compareVersions(version, allDeps[depName].version) < 0) {
        continue; // a higher version is already recorded: skip this lower one
      }
      conflict = true; // record the version conflict; the higher version wins
    }
    const depDir = path.resolve(TMP_DIR, depName);
    // analysis phase: shallow-clone a single commit into the temporary directory
    childProcess.execSync(`git clone --depth=1 --branch=${branch} ${url} ${depDir}`);
    allDeps[depName] = { name: depName, url, branch, version, conflict };
    installDeps(depDir); // recurse into the dependency's own bundle.json
  }
}

Download

The download phase batch-downloads the dependencies based on the allDeps produced by the analysis above: each package is first downloaded into the global cache directory, then copied into the local directory.

function batchInstall(allDeps) {
  const globalDir = path.resolve(os.homedir(), '.xxx'); // global cache directory
  const localDir = path.resolve(process.cwd(), 'xxx_modules'); // local install directory (placeholder name)
  Object.values(allDeps).forEach((dep) => {
    const cachedDir = path.resolve(globalDir, dep.name);
    if (fs.existsSync(cachedDir)) {
      // cache hit: pull the latest code
      childProcess.execSync('git pull', { cwd: cachedDir });
    } else {
      // cache miss: download the full history into the global cache
      childProcess.execSync(`git clone --branch=${dep.branch} ${dep.url} ${cachedDir}`);
    }
    // copy from the global cache into the local directory
    childProcess.execSync(`cp -r ${cachedDir} ${path.resolve(localDir, dep.name)}`);
  });
}
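A sketch of how the two phases might be wired together (all names here follow the placeholders above):

installDeps(process.cwd()); // phase 1: analysis, shallow clones into TMP_DIR
batchInstall(allDeps);      // phase 2: batch download through the global cache
fs.rmSync(TMP_DIR, { recursive: true, force: true }); // finally, delete the temporary directory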
In this way, we have completed the dependency analysis and download, and implemented the global cache.
We first sorted out how third-party packages are handled in different environments (browser, Node, cross-end engine): browsers need bundling, Node does run-time lookup, and cross-end engines also do run-time lookup, but with mechanisms of their own.
Then we clarified that a bundler determines dependencies through AST analysis, while a dependency-download tool works from the package description file bundle.json (package.json). We implemented recursive dependency analysis and solved version conflicts and circular dependencies.
To reduce unnecessary downloads, we separated dependency analysis from downloading: the analysis phase downloads only a single commit, and the subsequent batch download fetches everything. For downloading we did not implement a registry; we git clone directly from GitLab.
To avoid repeated downloads of common dependencies across projects, we implemented a global cache: download into the global directory first, then copy locally.
The real implementations of npm install and yarn install are more detailed, but the overall process is similar. I hope this article helps you sort out how third-party packages are handled in different environments, and what the dependency analysis and download process of an xxx install looks like.