
How to use Serverless to elegantly realize the artistic application of pictures

2025-01-16 Update From: SLTechnology News&Howtos


Shulou(Shulou.com)05/31 Report--

In this issue, the editor shows you how to use Serverless to elegantly build an artistic image application. The article is rich in content and approaches the topic from a professional angle; I hope you get something out of it.

How to build an artistic image application based on Tencent Cloud Serverless from scratch.

Overview of project highlights:

Front end: React (Next.js); back end: Node (Koa2)

Developed entirely in TypeScript for the best development experience (back-end runtime ts performance is poor, but skipping the compile step makes it well suited to a demo)

Breaks through the 500 MB cloud function code limit (a solution is provided)

TensorFlow 2 + Serverless expands the boundaries of imagination

High performance: easily handles tens of thousands of concurrent requests with high availability (said with confidence; it is the platform's work anyway)

Deploys in seconds: online in about ten seconds

Short development cycle (this article walks you through the entire build)

This project is deployed with the Serverless component, so the development environment needs the Serverless command line tool installed globally:

npm install -g serverless

Requirements and architecture

The overall requirement of this application is simple: picture upload and display.

Module Overview

Upload pictures

Browse pictures

Provide storage service with object storage

Before developing, let's create an object storage bucket to provide image storage (you can use an existing one):

mkdir oss

Add serverless.yml under the newly created oss directory

component: cos
name: xart-oss
app: xart
stage: dev
inputs:
  src:
    src: ./
    exclude:
      - .env # prevent keys from being uploaded
  bucket: ${name} # bucket name; if the AppId suffix is not added, the system appends it automatically (xart-oss-)
  website: false
  targetDir: /
  protocol: https
  region: ap-guangzhou # configure the same region as the service where possible, for speed
  acl:
    permissions: public-read # permission configuration: private write, public read

After executing sls deploy, within a few seconds you should see a prompt indicating that the new object storage was created successfully.

Here we see the url https://art-oss-.cos.ap-guangzhou.myqcloud.com, so the default naming convention is https://{bucket}.cos.{region}.myqcloud.com

Make a brief note of it; it will be used by later services. It doesn't matter if you forget: check the TENCENT_APP_ID field in .env (.env is generated automatically after deployment).
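As a quick illustration, that naming convention can be captured in a tiny helper (a sketch only; `cosUrl` and the sample bucket name are our own, not part of cos-nodejs-sdk-v5):

```typescript
// Illustrative sketch of the default COS URL convention described above.
// `cosUrl` is our own helper name, not an SDK function; the bucket below is hypothetical.
function cosUrl(bucket: string, region: string, key = ""): string {
  // https://{bucket}.cos.{region}.myqcloud.com/{key}
  return `https://${bucket}.cos.${region}.myqcloud.com/${key}`;
}

console.log(cosUrl("art-oss-125000000", "ap-guangzhou", "result/demo.jpg"));
// → https://art-oss-125000000.cos.ap-guangzhou.myqcloud.com/result/demo.jpg
```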

Implement back-end services

Create a new directory and initialize it

mkdir art-api && cd art-api && npm init

Install dependencies (for ts type hints, install the @types packages yourself):

npm i koa @koa/router @koa/cors koa-body typescript ts-node cos-nodejs-sdk-v5 axios dotenv

Configure tsconfig.json

{
  "compilerOptions": {
    "target": "es2018",
    "module": "commonjs",
    "lib": ["es2018", "esnext.asynciterable"],
    "experimentalDecorators": true,
    "emitDecoratorMetadata": true,
    "esModuleInterop": true
  }
}

Entry file sls.js

require("ts-node").register({ transpileOnly: true }); // load the ts runtime, configured to ignore type errors
module.exports = require("./app.ts"); // bring in the business logic, which I will implement with you below

Add two practical points of knowledge:

node -r

Introducing require("ts-node").register({ transpileOnly: true }) in the entry file is equivalent to node -r ts-node/register/transpile-only.

So node -r preloads specific modules before execution, an ability you can use to quickly enable certain features.

For example, with the esm module, node -r esm main.js can load modules that use import and export, without babel or webpack.

ts load paths

If you don't want to load modules with ../ relative paths, then:

Configure baseUrl: "." in tsconfig.json

Run ts-node -r tsconfig-paths/register main.ts, or call require("tsconfig-paths").register()

Now import utils from 'src/utils' can happily load the module from the project root.

The following is to implement the specific logic:

app.ts

require("dotenv").config(); // load the .env environment variables, so keys stay out of the repository
import Koa from "koa";
import Router from "@koa/router";
import koaBody from "koa-body";
import cors from "@koa/cors";
import util from "util";
import COS from "cos-nodejs-sdk-v5";
import axios from "axios";

const app = new Koa();
const router = new Router();

const cos = new COS({
  SecretId: process.env.SecretId, // your id
  SecretKey: process.env.SecretKey, // your key
});
const cosInfo = {
  Bucket: "xart-oss-", // fill in the bucket obtained after deploying oss
  Region: "ap-guangzhou",
};
const cosURL = `https://${cosInfo.Bucket}.cos.${cosInfo.Region}.myqcloud.com`;
const putObjectSync = util.promisify(cos.putObject.bind(cos));
const getBucketSync = util.promisify(cos.getBucket.bind(cos));

// list uploaded images (the route wrapper was lost during extraction and is reconstructed here;
// the Prefix filter is an assumption matching the result/ keys written below)
router.get("/api/images", async (ctx) => {
  const files = await getBucketSync({ ...cosInfo, Prefix: "result/" });
  ctx.body = files.Contents.map((it) => {
    // keys look like result/<timestamp>_<width>_<height>.jpg
    const [timestamp, width, height] = it.Key.replace("result/", "").split(".jpg")[0].split("_");
    return {
      url: `${cosURL}/${it.Key}`,
      width,
      height,
      timestamp: Number(timestamp),
      name: it.Key,
    };
  })
    .filter(Boolean)
    .sort((a, b) => b.timestamp - a.timestamp);
});

router.post("/api/images/upload", async (ctx) => {
  const { imgBase64, style } = JSON.parse(ctx.request.body);
  const buf = Buffer.from(imgBase64.replace(/^data:image\/\w+;base64,/, ""), "base64");
  // call the tensorflow service prepared in advance to process the picture;
  // replace it with your own service
  const { data } = await axios.post(
    "https://service-edtflvxk-1254074572.gz.apigw.tencentcs.com/release/",
    { imgBase64: buf.toString("base64"), style }
  );
  if (data.success) {
    const afterImg = await putObjectSync({
      ...cosInfo,
      Key: `result/${Date.now()}_400_200.jpg`,
      Body: Buffer.from(data.data, "base64"),
    });
    ctx.body = { success: true, data: "https://" + afterImg.Location };
  }
});

app.use(cors());
app.use(koaBody({ formLimit: "10mb", jsonLimit: "10mb", textLimit: "10mb" }));
app.use(router.routes()).use(router.allowedMethods());
const port = 8080;
app.listen(port, () => {
  console.log("listen in http://localhost:%s", port);
});
module.exports = app;
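The object keys follow a result/&lt;timestamp&gt;_&lt;width&gt;_&lt;height&gt;.jpg convention, and the listing route has to parse that back out. A hedged sketch of the parsing step (`parseResultKey` is our own helper name, not from the project source; it strips the result/ prefix before splitting):

```typescript
// Sketch: parse keys shaped like "result/1600000000000_400_200.jpg".
// `parseResultKey` is an illustrative helper, not part of the article's code.
function parseResultKey(key: string) {
  const base = (key.split("/").pop() || "").replace(/\.jpg$/, ""); // "1600000000000_400_200"
  const [timestamp, width, height] = base.split("_").map(Number);
  return { timestamp, width, height };
}

const meta = parseResultKey("result/1600000000000_400_200.jpg");
// meta is { timestamp: 1600000000000, width: 400, height: 200 }
```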

As you can see in the code, images are uploaded as base64. Note that when scf is triggered through the api gateway, the gateway cannot pass binary through transparently. For the upload rules, see the official documentation.
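Since the gateway cannot carry binary, the client sends a data URL and the server strips its prefix, as app.ts does above. A hedged sketch of the two pieces involved (helper names are our own):

```typescript
// Sketch of the base64 handling implied above; helper names are our own.
function dataUrlToBase64(dataUrl: string): string {
  // strip e.g. "data:image/jpeg;base64," exactly as app.ts does
  return dataUrl.replace(/^data:image\/\w+;base64,/, "");
}

function base64ByteLength(b64: string): number {
  // decoded size = 3/4 of the encoded length, minus padding
  const padding = (b64.match(/=+$/) || [""])[0].length;
  return (b64.length * 3) / 4 - padding;
}

console.log(base64ByteLength(dataUrlToBase64("data:image/png;base64,aGVsbG8="))); // → 5
```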

One additional point of knowledge: we actually visit the api gateway, which then triggers the cloud function and returns the result of the request, so pay attention to the full link when debugging.

Back to the point, then configure the environment variable .env

NODE_ENV=development
# keys required for oss upload; configure them yourself, and don't tell me :)
# keys can be viewed at: https://console.cloud.tencent.com/cam/capi
SecretId=xxxx
SecretKey=xxxx

With that, the server-side development is complete. Verify it by running node sls.js locally; you should see the startup prompt:

Listen in http://localhost:8080

Let's simply configure serverless.yml, deploy the service online, and then optimize further using a layer.

component: koa # fill in the corresponding component
app: art
name: art-api
stage: dev
inputs:
  src:
    src: ./
    exclude:
      - .env
  functionName: ${name}
  region: ap-guangzhou
  runtime: Nodejs10.15
  functionConf:
    timeout: 60 # configure a slightly longer timeout
    environment:
      variables: # environment variables; these can also be configured directly in the scf console
        NODE_ENV: production
  apigatewayConf:
    enableCORS: true
    protocols:
      - https
      - http
    environment: release

Then execute the deployment command sls deploy

Wait for tens of seconds and you should get the following output (if it is executed for the first time, platform authorization is required)

The url in the output is the address where the service is deployed online; try visiting it to see the default hello world.

At this point the server side is basically deployed. If the code changes, just edit it and run sls deploy again. Officially, online editing is provided for projects whose code is under 10 MB.

However, as the project's complexity grows, deploy uploads slow down. So let's optimize further.

Create a new layer directory

mkdir layer

Add serverless.yml under the layer directory

component: layer
app: art
name: art-api-layer
stage: dev
inputs:
  region: ap-guangzhou
  name: ${name}
  src: ../node_modules # package and upload node_modules
  runtimes:
    - Nodejs10.15 # note: configure the same runtime environment

Go back to the project root directory and adjust the serverless.yml of the root directory

component: koa # fill in the corresponding component
app: art
name: art-api
stage: dev
inputs:
  src:
    src: ./
    exclude:
      - .env
      - node_modules/** # exclude node_modules from deployment
  functionName: ${name}
  region: ap-guangzhou
  runtime: Nodejs10.15
  functionConf:
    timeout: 60 # configure a slightly longer timeout
    environment:
      variables:
        NODE_ENV: production
  apigatewayConf:
    enableCORS: true
    protocols:
      - https
      - http
    environment: release
  layers:
    - name: ${output:${stage}:${app}:${name}-layer.name} # reference the corresponding layer
      version: ${output:${stage}:${app}:${name}-layer.version}

Then execute sls deploy --target=./layer to deploy the layer; afterwards, each deployment of the service itself should take only about 10 s:

sls deploy

Add two additional knowledge points about layer and cloud functions:

Loading and accessing of layer

When the function runs, the layer's contents are extracted to the /opt directory; if there is more than one layer, they are unpacked in time order. If you need to access files in a layer, read them directly via /opt/xxx. For node_modules you can import directly, because scf's NODE_PATH environment variable already includes the /opt/node_modules path by default.
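That lookup order can be sketched as a small function (an assumption-laden illustration: the /var/user code path and the helper itself are our own, not an official scf API):

```typescript
// Sketch of where a dependency resolves when a layer is attached.
// Assumptions: function code unpacks under /var/user, layers under /opt,
// and NODE_PATH includes /opt/node_modules (per the text above).
function whereResolved(dep: string, localDeps: string[], layerDeps: string[]): string | null {
  if (localDeps.includes(dep)) return `/var/user/node_modules/${dep}`; // bundled with the function
  if (layerDeps.includes(dep)) return `/opt/node_modules/${dep}`; // provided by the layer
  return null; // not found anywhere
}

console.log(whereResolved("koa", [], ["koa"])); // resolved from the layer
```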

Quota

The cloud function service scf has quota limits for each user account:

The most important one to watch is the 500 MB cap on a single function's code. In practice, although 500 MB is provided, there is also an upper limit on the unpacked size at deploy time.

On bypassing the quota:

If you are only slightly over, npm install --production can solve the problem.

If you are far over, you can work around it by mounting a CFS file system. I will cover how to deploy the ~800 MB tensorflow package + model to SCF in the tensorflow algorithm model service section below.
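The thresholds discussed above can be summarized as a small decision sketch (our own framing, not an official SCF API; the numbers come from the text: ~10 MB for console online editing, the 500 MB code quota, and CFS beyond that):

```typescript
// Decision sketch for packaging a function, using the limits mentioned above.
// This summary is our own, not an official SCF API.
type Strategy = "inline" | "layer-or-prune" | "cfs";

function packagingStrategy(sizeMb: number): Strategy {
  if (sizeMb <= 10) return "inline"; // small enough for console online editing
  if (sizeMb <= 500) return "layer-or-prune"; // fits the quota: use a layer / npm install --production
  return "cfs"; // e.g. the ~800 MB tensorflow package: mount CFS
}

console.log(packagingStrategy(800)); // → cfs
```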

Implement front-end SSR services

Next you will use next.js to build a front-end SSR service.

Create a new directory and initialize the project:

mkdir art-front && cd art-front && npm init

Install dependencies:

npm install next react react-dom typescript @types/node swr antd @ant-design/icons dayjs

Add ts support (next.js will configure it automatically when run):

touch tsconfig.json

Open the package.json file and add a scripts configuration segment:

"scripts": {
  "dev": "next",
  "build": "next build",
  "start": "next start"
}

Write the front-end business logic (only the main logic is shown in this article; the full source code is available on GitHub)

pages/_app.tsx

import React from "react";
import "antd/dist/antd.css";
import { SWRConfig } from "swr";

// the JSX tags were stripped during extraction; the SWRConfig wrapper is reconstructed here
export default function MyApp({ Component, pageProps }) {
  return (
    <SWRConfig
      value={{
        fetcher: (...args) => fetch(args[0], args[1]).then((res) => res.json()),
      }}
    >
      <Component {...pageProps} />
    </SWRConfig>
  );
}

pages/index.tsx complete code

import React from "react";
import { Card, Upload, message, Radio, Spin, Divider } from "antd";
import { InboxOutlined } from "@ant-design/icons";
import dayjs from "dayjs";
import useSWR from "swr";

let origin = "http://localhost:8080";
if (process.env.NODE_ENV === "production") {
  // use your own deployed art-api service address
  origin = "https://service-5yyo7qco-1254074572.gz.apigw.tencentcs.com/release";
}

// omitted...

export default function Index() {
  const { data } = useSWR(`${origin}/api/images`);
  const [img, setImg] = React.useState("");
  const [loading, setLoading] = React.useState(false);
  const uploadImg = React.useCallback((file, style) => {
    const reader = new FileReader();
    reader.readAsDataURL(file);
    reader.onload = async () => {
      const res = await fetch(`${origin}/api/images/upload`, {
        method: "POST",
        body: JSON.stringify({ imgBase64: reader.result, style }),
        mode: "cors",
      }).then((res) => res.json());
      if (res.success) {
        setImg(res.data);
      } else {
        message.error(res.message);
      }
      setLoading(false);
    };
  }, []);
  const [artStyle, setStyle] = React.useState(STYLE_MODE.cube);
  // the surrounding JSX was stripped during extraction; the upload handlers are
  // reconstructed here attached to an antd Upload.Dragger
  return (
    <Upload.Dragger
      onChange={(info) => {
        const { status } = info.file;
        if (status !== "uploading") {
          console.log(info.file, info.fileList);
        }
        if (status === "done") {
          setImg(info.file.response);
          message.success(`${info.file.name} uploaded successfully`);
          setLoading(false);
        } else if (status === "error") {
          message.error(`${info.file.name} upload failed`);
          setLoading(false);
        }
      }}
      beforeUpload={(file) => {
        if (!["image/png", "image/jpg", "image/jpeg"].includes(file.type)) {
          message.error("picture format must be png, jpg, jpeg");
          return false;
        }
        const isLt10M = file.size / 1024 / 1024 < 10;
        if (!isLt10M) {
          message.error("file size exceeds 10M");
          return false;
        }
        setLoading(true);
        uploadImg(file, artStyle);
        return false;
      }}
    />
  );
}
// omitted...

Run npm run dev to start the front end; seeing the following prompt means success:

ready - started server on http://localhost:3000

Next, configure serverless.yml (if needed, refer to the earlier sections and use a layer to optimize the deployment experience):

component: nextjs
app: art
name: art-front
stage: dev
inputs:
  src:
    dist: ./
    hook: npm run build
    exclude:
      - .env
  region: ap-guangzhou
  functionName: ${name}
  runtime: Nodejs12.16
  staticConf:
    cosConf:
      bucket: art-front # deploy front-end static assets to oss to reduce scf invocations
  apigatewayConf:
    enableCORS: true
    protocols:
      - https
      - http
    environment: release
    # customDomains: # configure a custom domain if you need one
    #   - domain: xxxxx
    #     certificateId: xxxxx # certificate ID
    #     # map the API gateway release environment to the root path
    #     isDefaultMapping: false
    #     pathMappingSet:
    #       - path: /
    #         environment: release
    #     protocols:
    #       - https
  functionConf:
    timeout: 60
    memorySize: 128
    environment:
      variables:
        apiUrl: ${output:${stage}:${app}:art-api.apigw.url} # inject the api via an environment variable

Since we additionally configured oss, next.config.js needs some extra configuration:

const isProd = process.env.NODE_ENV === "production";
const STATIC_URL = "https://art-front-.cos.ap-guangzhou.myqcloud.com/";
module.exports = {
  assetPrefix: isProd ? STATIC_URL : "",
};

Provide a TensorFlow 2.x algorithm model service

In the example above, the TensorFlow calls still go to the interface I provided in advance. Next, let's replace it with our own service.

Basic information: tensorflow 2.3, the model, scf.

In the Python environment, scf provides the tensorflow 1.9 dependency package by default, so with Python you can get started directly at low cost.

The problem

But if you want to use the 2.x version, or are unfamiliar with Python and want to run tensorflow with node, you will run into the code package size limit:

TensorFlow 2.3 for Python is about 800 MB.

tfjs-node 2.3 also exceeds 400 MB after installation (the tfjs core version is very small, but far too slow).

The solution: the file storage service!

First look at the introduction in the CFS documentation. After mounting, it can be used normally; Tencent Cloud provides a simple example:

var fs = require("fs");
exports.main_handler = async (event, context) => {
  await fs.promises.writeFile("/mnt/myfolder/file1.txt", JSON.stringify(event));
  return event;
};

Once reads and writes work normally, npm packages can be loaded normally too. You can see that I load the packages directly from the /mnt directory, and the model is also placed under /mnt.

tf = require("/mnt/nodelib/node_modules/@tensorflow/tfjs-node");
jpeg = require("/mnt/nodelib/node_modules/jpeg-js");
images = require("/mnt/nodelib/node_modules/images");
loadModel = async () => tf.node.loadSavedModel("/mnt/model");
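To avoid hard-coding the /mnt prefix everywhere, the environment switch used later can be factored into one helper (a sketch; `resolveDep` is our own name, and the /mnt layout simply mirrors the snippet above):

```typescript
// Sketch: pick local vs CFS-mounted dependency paths by environment.
// `resolveDep` is our own illustrative helper.
function resolveDep(name: string, production: boolean): string {
  return production ? `/mnt/nodelib/node_modules/${name}` : name;
}

// e.g. require(resolveDep("@tensorflow/tfjs-node", process.env.NODE_ENV === "production"))
console.log(resolveDep("jpeg-js", true)); // → /mnt/nodelib/node_modules/jpeg-js
```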

If you use Python, you may hit a problem: scf provides the tensorflow 1.9 dependency package by default, so you need sys.path.insert to raise the priority of the packages under the /mnt directory:

sys.path.insert(0, "/mnt/xxx")

Even with the solution above, day-to-day development can be troublesome, because cfs must be configured in the same subnet as scf and cannot be mounted locally.

Therefore, during actual deployment you can purchase an on-demand CVM instance in the corresponding network, mount the file system on it, and operate on it directly. Finally, once the cloud function is deployed successfully, terminate the instance :)

sudo yum install nfs-utils
mkdir <mount-dir>
sudo mount -t nfs -o vers=4.0,noresvport <cfs-ip>:/ <mount-dir>

The specific business code is as follows:

const fs = require("fs");
let tf, jpeg, loadModel, images;
if (process.env.NODE_ENV !== "production") {
  tf = require("@tensorflow/tfjs-node");
  jpeg = require("jpeg-js");
  images = require("images");
  loadModel = async () => tf.node.loadSavedModel("./model");
} else {
  tf = require("/mnt/nodelib/node_modules/@tensorflow/tfjs-node");
  jpeg = require("/mnt/nodelib/node_modules/jpeg-js");
  images = require("/mnt/nodelib/node_modules/images");
  loadModel = async () => tf.node.loadSavedModel("/mnt/model");
}
exports.main_handler = async (event) => {
  const { imgBase64, style } = JSON.parse(event.body);
  if (!imgBase64 || !style) {
    return { success: false, message: "complete parameters imgBase64, style" };
  }
  let time = Date.now();
  console.log("parse the picture--");
  const styleImg = tf.node.decodeJpeg(fs.readFileSync(`./imgs/style_${style}.jpeg`));
  const contentImg = tf.node.decodeJpeg(
    images(Buffer.from(imgBase64, "base64")).size(400).encode("jpg", { operation: 50 }) // compress the image
  );
  const a = styleImg.toFloat().div(tf.scalar(255)).expandDims();
  const b = contentImg.toFloat().div(tf.scalar(255)).expandDims();
  console.log("--parse picture %s ms", Date.now() - time);
  time = Date.now();
  console.log("load model--");
  const model = await loadModel();
  console.log("--load model %s ms", Date.now() - time);
  time = Date.now();
  console.log("execute model--");
  const stylized = tf.tidy(() => {
    const x = model.predict([b, a])[0];
    return x.squeeze();
  });
  console.log("--execute model %s ms", Date.now() - time);
  time = Date.now();
  const imgData = await tf.browser.toPixels(stylized);
  const rawImageData = {
    data: Buffer.from(imgData),
    width: stylized.shape[1],
    height: stylized.shape[0],
  };
  const result = images(jpeg.encode(rawImageData, 50).data)
    .draw(
      images("./imgs/logo.png"),
      Math.random() * rawImageData.width * 0.9,
      Math.random() * rawImageData.height * 0.9
    )
    .encode("jpg", { operation: 50 });
  return { success: true, data: result.toString("base64") };
};

This is how to use Serverless to elegantly build an artistic image application. If you have similar questions, the analysis above should help you understand. If you want to learn more, you are welcome to follow the industry information channel.
