
Introduction to Analysis of Hive (2)


5 Hive parameters

hive.exec.max.created.files

Description: the maximum total number of files that all map and reduce tasks of a job may create

Default value: 100000

hive.exec.dynamic.partition

Description: whether dynamic partitioning is enabled

Default value: false
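As a quick sketch of how the dynamic-partition settings are used together in a session (the raw_logs/logs tables and the extra mode setting shown here are illustrative, not from this article):

set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
-- the dynamic partition column (dt) must come last in the SELECT list
INSERT OVERWRITE TABLE logs PARTITION (dt)
SELECT userid, url, dt FROM raw_logs;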

hive.mapred.reduce.tasks.speculative.execution

Description: whether speculative execution is enabled for reduce tasks

Default value: true

hive.input.format

Description: Hive's default input format

Default value: org.apache.hadoop.hive.ql.io.CombineHiveInputFormat

If you run into problems with it, you can fall back to org.apache.hadoop.hive.ql.io.HiveInputFormat.

hive.exec.counters.pull.interval

Description: the interval at which Hive polls the JobTracker for counter information

Default value: 1000 ms

hive.script.recordreader

Description: the default record reader class when using scripts

Default value: org.apache.hadoop.hive.ql.exec.TextRecordReader

hive.script.recordwriter

Description: the default record writer class when using scripts

Default value: org.apache.hadoop.hive.ql.exec.TextRecordWriter

hive.mapjoin.check.memory.rows

Description: the number of rows after which memory usage is checked during a map join

Default value: 100000

hive.mapjoin.smalltable.filesize

Description: the file-size threshold for the small table; if the small table is smaller than this value, the common join can be converted into a map join

Default value: 25000000

hive.auto.convert.join

Description: whether to convert a common join into a map join automatically, based on the input file size

Default value: false
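For example, a session that turns on automatic map-join conversion might look like this (the orders/customers tables are hypothetical):

set hive.auto.convert.join=true;
set hive.mapjoin.smalltable.filesize=25000000;
-- if customers is under ~25 MB, this common join is converted into a map join
SELECT o.orderid, c.name FROM orders o JOIN customers c ON (o.custid = c.custid);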

hive.mapjoin.followby.gby.localtask.max.memory.usage

Description: when a map join is followed by a group by, the fraction of memory the local task may use to hold data; if the data grows beyond this, it is not kept in memory

Default value: 0.55

hive.mapjoin.localtask.max.memory.usage

Description: the fraction of memory a local task may use

Default value: 0.90

hive.heartbeat.interval

Description: the interval at which heartbeats are sent during map join and filter operations

Default value: 1000

hive.merge.size.per.task

Description: the target size of merged files

Default value: 256000000

hive.mergejob.maponly

Description: merge output with a map-only job when possible

Default value: true

hive.merge.mapredfiles

Description: whether to merge small files at the end of a map-reduce job

Default value: false

hive.merge.mapfiles

Description: whether to merge small files at the end of a map-only job

Default value: true

hive.hwi.listen.host

Description: the default host for the Hive web UI

Default value: 0.0.0.0

hive.hwi.listen.port

Description: the listening port for the web UI

Default value: 9999

hive.exec.parallel.thread.number

Description: the number of jobs Hive can run in parallel

Default value: 8

hive.exec.parallel

Description: whether to submit jobs in parallel

Default value: false
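A minimal sketch of enabling parallel execution, using the two settings above:

set hive.exec.parallel=true;
set hive.exec.parallel.thread.number=8;
-- independent stages of one query, e.g. the branches of a UNION ALL, can now run concurrently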

hive.exec.compress.output

Description: whether output is compressed

Default value: false

hive.mapred.mode

Description: the mode in which queries run; in nonstrict mode no restrictions are imposed, while strict mode blocks risky operations (for example, scanning a partitioned table without a partition filter, or ORDER BY without LIMIT)

Default value: nonstrict

hive.join.cache.size

Description: the number of rows from the joined tables that can be cached in memory during a join

Default value: 25000

hive.mapjoin.cache.numrows

Description: the number of rows cached in memory during a map join

Default value: 25000

hive.join.emit.interval

Description: how many rows of the right-most join operand Hive buffers before emitting join results

Default value: 1000

hive.optimize.groupby

Description: whether to use the bucketing of bucketed tables when computing group-by aggregations

Default value: true

hive.fileformat.check

Description: whether to check the input file format

Default value: true

hive.metastore.client.connect.retry.delay

Description: the delay between client retries when a connection attempt fails

Default value: 1 second

hive.metastore.client.socket.timeout

Description: the client socket timeout

Default value: 20 seconds

mapred.reduce.tasks

Default value: -1

Description: the default number of reduce tasks per job

-1 means Hive determines the number of reducers automatically based on the job.

hive.exec.reducers.bytes.per.reducer

Default value: 1000000000 (1 GB)

Description: the amount of data handled by each reducer

If 10 GB of data is sent to the reduce side, about 10 reduce tasks will be generated.
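A rough sketch of the arithmetic (the numbers are illustrative):

-- halve the data per reducer: 500 MB instead of the default 1 GB
set hive.exec.reducers.bytes.per.reducer=500000000;
-- a query reading 10 GB now plans about 10,000,000,000 / 500,000,000 = 20 reducers,
-- capped by hive.exec.reducers.max, unless mapred.reduce.tasks sets an explicit count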

hive.exec.reducers.max

Default value: 999

Description: the maximum number of reducers

hive.metastore.warehouse.dir

Default value: /user/hive/warehouse

Description: the default database location

hive.default.fileformat

Default value: TextFile

Description: the default file format

hive.map.aggr

Default value: true

Description: map-side aggregation, equivalent to a combiner

hive.exec.max.dynamic.partitions.pernode

Default value: 100

Description: the maximum number of dynamic partitions each task node may create

hive.exec.max.dynamic.partitions

Default value: 1000

Description: the maximum number of dynamic partitions that may be created in total

hive.metastore.server.max.threads

Default value: 100000

Description: the maximum number of metastore handler threads

hive.metastore.server.min.threads

Default value: 200

Description: the minimum number of metastore handler threads

6 Hive Advanced programming

6.1 Background

To meet users' individual needs, Hive is designed as a very open system in which much can be customized, including:

File formats: Text File, Sequence File

In-memory data formats: Java Integer/String, Hadoop IntWritable/Text

User-provided map/reduce scripts: written in any language, using stdin/stdout to transfer data

1. User-defined function

Although Hive provides many built-in functions, they cannot cover every need, so Hive supports the development of custom functions.

Custom functions come in three types: UDF, UDAF, and UDTF

UDF (User-Defined Function)

UDAF (User-Defined Aggregation Function)

UDTF (User-Defined Table-Generating Function), used when one input row should produce multiple output rows (one-to-many mapping).

2. Three ways to use defined functions in HIVE

Within a HIVE session, add the jar file containing the custom function, create the function, and then use it

Create the function automatically before entering the HIVE session, so users do not have to create it manually

Compile the custom function into the system functions, making it a default HIVE function, so that no CREATE TEMPORARY FUNCTION is needed.

6.2 UDF

UDF (User-Defined Function): a UDF can be applied directly in a SELECT statement; it formats the queried value and outputs the result.

There are a few points to pay attention to when writing UDF functions

A. A custom UDF must extend org.apache.hadoop.hive.ql.exec.UDF

B. It must implement the evaluate function

C. The evaluate function supports overloading

D. A UDF can only implement one-in-one-out operation; if you need many-in-one-out, implement a UDAF instead.

UDF usage code example

import org.apache.hadoop.hive.ql.exec.UDF;

public class Helloword extends UDF {
    public String evaluate() {
        return "hello world!";
    }

    public String evaluate(String str) {
        return "hello world: " + str;
    }
}

Development steps

Write the code

Package the program and put it on the target machine

Enter the hive client

Add the jar package: hive> add jar /run/jar/udf_test.jar;

Create a temporary function: hive> CREATE TEMPORARY FUNCTION my_add AS 'com.hive.udf.Add';

Run HQL queries:

SELECT my_add(8, 9) FROM scores

SELECT my_add(scores.math, scores.art) FROM scores

Drop the temporary function: hive> DROP TEMPORARY FUNCTION my_add

Details

Type conversions are performed automatically when using a UDF, for example:

SELECT my_add(8, 9.1) FROM scores

The result is 17.1: the UDF converts the INT parameter to DOUBLE. This implicit type conversion is controlled by UDFResolver.

6.3 UDAF

UDAF

Some aggregate functions needed when querying data in Hive are not included in HQL and must be implemented by the user.

User-defined aggregate functions: Sum, Average, etc.

UDAF (User-Defined Aggregation Function)

Usage

The following two classes must be imported: org.apache.hadoop.hive.ql.exec.UDAF and org.apache.hadoop.hive.ql.exec.UDAFEvaluator

Development steps

The function class must extend the UDAF class, and its inner class Evaluator must implement the UDAFEvaluator interface

Evaluator must implement the init, iterate, terminatePartial, merge, and terminate functions.

A) init implements the init method of the UDAFEvaluator interface and resets the aggregation state.

B) iterate receives the incoming arguments and performs the iteration internally. Its return type is boolean.

C) terminatePartial takes no parameters; it returns the partial aggregation state once iterate has finished. terminatePartial is similar to Hadoop's Combiner.

D) merge receives the result of terminatePartial and merges it into the current state. Its return type is boolean.

E) terminate returns the final result of the aggregate function.

Execution steps

Steps to run an average function:

A) compile the java file into Avg_test.jar.

B) enter the hive client and add the jar package:

hive> add jar /run/jar/Avg_test.jar;

C) create a temporary function:

hive> create temporary function avg_test as 'hive.udaf.Avg';

D) run the query:

hive> select avg_test(scores.math) from scores;

E) drop the temporary function:

hive> drop temporary function avg_test;

UDAF code example

import org.apache.hadoop.hive.ql.exec.UDAF;
import org.apache.hadoop.hive.ql.exec.UDAFEvaluator;

public class MyAvg extends UDAF {

    public static class AvgState {
        private long count;
        private double sum;
    }

    public static class AvgEvaluator implements UDAFEvaluator {
        public void init() { /* reset the aggregation state */ }
        public boolean iterate(Double o) { /* accumulate one value */ }
        public AvgState terminatePartial() { /* return the partial state */ }
        public boolean merge(AvgState o) { /* combine a partial state */ }
        public Double terminate() { /* return the final average */ }
    }
}

6.4 UDTF

UDTF (User-Defined Table-Generating Function) is used when one input row should produce multiple output rows (one-to-many mapping).

Development steps

Must extend org.apache.hadoop.hive.ql.udf.generic.GenericUDTF

Implement three methods: initialize, process, and close

UDTF first calls the initialize method, which returns information about the rows the UDTF will produce (their number and types). After initialization, the process method is called to handle the incoming arguments, and results are emitted through the forward() method.

Finally, the close() method is called to clean up anything that needs cleanup.

Usage

There are two ways to use a UDTF: directly after SELECT, or together with lateral view

Used directly in SELECT: select explode_map(properties) as (col1, col2) from src

No other fields may be added: select a, explode_map(properties) as (col1, col2) from src is not allowed

It cannot be nested: select explode_map(explode_map(properties)) from src is not allowed

It cannot be combined with group by / cluster by / distribute by / sort by: select explode_map(properties) as (col1, col2) from src group by col1, col2 is not allowed

Used with lateral view: select src.id, mytable.col1, mytable.col2 from src lateral view explode_map(properties) mytable as col1, col2

This second form is more convenient in daily use. Its execution is equivalent to performing two separate extractions and then UNIONing them into one table.

Lateral view

Syntax:

lateralView: LATERAL VIEW udtf(expression) tableAlias AS columnAlias (',' columnAlias)*

fromClause: FROM baseTable (lateralView)*

Lateral view is used with a UDTF (user-defined table-generating function) such as explode() to turn one row into multiple rows.

Currently, lateral view does not support predicate pushdown. If you use a WHERE clause, the query may fail to compile.

Workaround: execute set hive.optimize.ppd=false before the query.

Examples

Suppose a table pageAds has two columns:

string pageid          array<int> adid_list

"front_page"           [1, 2, 3]

"contact_page"         [3, 4, 5]

SELECT pageid, adid FROM pageAds LATERAL VIEW explode(adid_list) adTable AS adid

The following results will be output

string pageid          int adid

"front_page"           1

......

"contact_page"         3

Code example

public class MyUDTF extends GenericUDTF {
    public StructObjectInspector initialize(ObjectInspector[] args) { /* describe the output rows */ }
    public void process(Object[] args) throws HiveException { /* emit rows via forward() */ }
    public void close() throws HiveException { /* clean up */ }
}

7 HiveQL

7.1 DDL

1. DDL function

Build a table

Delete table

Modify table structure

Create / delete views

Create a database

Show command

Add partition, delete partition

Rename table

Modify column name, type, location, comment

Add / update columns

Add table metadata

2. Build a table

CREATE [EXTERNAL] TABLE [IF NOT EXISTS] table_name
  [(col_name data_type [COMMENT col_comment], ...)]
  [COMMENT table_comment]
  [PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]
  [CLUSTERED BY (col_name, col_name, ...)
    [SORTED BY (col_name [ASC | DESC], ...)] INTO num_buckets BUCKETS]
  [ROW FORMAT row_format]
  [STORED AS file_format]
  [LOCATION hdfs_path]

CREATE TABLE creates a table with the given name. If a table with the same name already exists, an exception is thrown; the user can use the IF NOT EXISTS option to ignore the exception

The EXTERNAL keyword allows the user to create an external table and specify a path to the actual data (LOCATION) while creating the table.

LIKE allows users to copy existing table structures, but not data

COMMENT can add descriptions to tables and fields

ROW FORMAT

DELIMITED [FIELDS TERMINATED BY char]
  [COLLECTION ITEMS TERMINATED BY char]
  [MAP KEYS TERMINATED BY char]
  [LINES TERMINATED BY char]
| SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, property_name=property_value, ...)]

Users can customize the SerDe or use the built-in SerDe when creating a table. If neither ROW FORMAT nor ROW FORMAT DELIMITED is specified, the built-in SerDe is used. When creating a table, the user also specifies its columns; if a custom SerDe is specified, Hive determines the table's column data through that SerDe.

STORED AS SEQUENCEFILE | TEXTFILE | RCFILE | INPUTFORMAT input_format_classname OUTPUTFORMAT output_format_classname

If the file data is plain text, use STORED AS TEXTFILE. If the data needs to be compressed, use STORED AS SEQUENCEFILE.

Create an external table

CREATE EXTERNAL TABLE page_view (
    viewTime INT,
    userid BIGINT,
    page_url STRING,
    referrer_url STRING,
    ip STRING COMMENT 'IP Address of the User',
    country STRING COMMENT 'country of origination')
COMMENT 'This is the staging page view table'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\054'
STORED AS TEXTFILE
LOCATION ''

Build partition table

CREATE TABLE par_table (
    viewTime INT,
    userid BIGINT,
    page_url STRING,
    referrer_url STRING,
    ip STRING COMMENT 'IP Address of the User')
COMMENT 'This is the page view table'
PARTITIONED BY (date STRING, pos STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
  LINES TERMINATED BY '\n'
STORED AS SEQUENCEFILE

Build Bucket table

CREATE TABLE par_table (
    viewTime INT,
    userid BIGINT,
    page_url STRING,
    referrer_url STRING,
    ip STRING COMMENT 'IP Address of the User')
COMMENT 'This is the page view table'
PARTITIONED BY (date STRING, pos STRING)
CLUSTERED BY (userid) SORTED BY (viewTime) INTO 32 BUCKETS
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
  LINES TERMINATED BY '\n'
STORED AS SEQUENCEFILE

Copy an empty table

CREATE TABLE empty_key_value_store LIKE key_value_store

Delete table

DROP TABLE table_name

Add or delete partitions

Increase

ALTER TABLE table_name ADD [IF NOT EXISTS]
  partition_spec [LOCATION 'location1']
  partition_spec [LOCATION 'location2'] ...

partition_spec: PARTITION (partition_col = partition_col_value, partition_col = partition_col_value, ...)

Delete

ALTER TABLE table_name DROP partition_spec, partition_spec, ...
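For example, against the partitioned par_table defined earlier (the partition values and location are hypothetical):

ALTER TABLE par_table ADD IF NOT EXISTS PARTITION (date='2010-07-07', pos='top') LOCATION '/data/pv/2010-07-07/top';
ALTER TABLE par_table DROP PARTITION (date='2010-07-07', pos='top');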

Rename table

ALTER TABLE table_name RENAME TO new_table_name

Modify column name, type, location, comment

ALTER TABLE table_name CHANGE [COLUMN] col_old_name col_new_name column_type [COMMENT col_comment] [FIRST | AFTER column_name]

This command allows you to change column names, data types, comments, column locations, or any combination of them
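For example, renaming and repositioning the ip column of the earlier par_table (a hypothetical illustration):

ALTER TABLE par_table CHANGE COLUMN ip ip_address STRING COMMENT 'IP Address of the User' AFTER referrer_url;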

Add / update columns

ALTER TABLE table_name ADD | REPLACE COLUMNS (col_name data_type [COMMENT col_comment], ...)

ADD adds new fields after all existing columns (but before the partition columns)

REPLACE replaces all fields in the table.

Add table metadata

ALTER TABLE table_name SET TBLPROPERTIES table_properties

table_properties: (property_name = property_value, ...)

Users can use this command to add metadata to the table

Change the format and organization of table files

ALTER TABLE table_name SET FILEFORMAT file_format;

ALTER TABLE table_name CLUSTERED BY (userid) SORTED BY (viewTime) INTO num_buckets BUCKETS

This command modifies the physical storage properties of the table

Create / delete views

CREATE VIEW [IF NOT EXISTS] view_name [(column_name [COMMENT column_comment], ...)] [COMMENT view_comment] [TBLPROPERTIES (property_name = property_value, ...)] AS SELECT ...

Add view

If no column names are given, the view's column names are generated automatically from the SELECT expression

If the underlying base table's properties are later modified, the change is not reflected in the view, and queries against an invalid view will fail.

Views are read-only and cannot be the target of LOAD/INSERT/ALTER

DROP VIEW view_name

Delete View
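A small sketch using the page_view table from earlier (the view name and filter are hypothetical):

CREATE VIEW IF NOT EXISTS us_page_views (url, ts) COMMENT 'US page views only'
AS SELECT page_url, viewTime FROM page_view WHERE country = 'US';

DROP VIEW us_page_views;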

Create a database

CREATE DATABASE name

Show command

show tables;
show databases;
show partitions table_name;
show functions;
describe extended table_name.col_name;

7.2 DML

1. DML function

Load files into the data table

Insert the query results into the Hive table

INSERT INTO (new in version 0.8)

2. Load files into the data table

LOAD DATA [LOCAL] INPATH 'filepath' [OVERWRITE] INTO TABLE tablename [PARTITION (partcol1=val1, partcol2=val2 ...)]

The Load operation is simply a copy / move operation, moving the data file to the location corresponding to the Hive table.

Filepath

Relative path, for example: project/data1

Absolute path, for example: / user/hive/project/data1

Contains the full URI of the schema, for example:

hdfs://namenode:9000/user/hive/project/data1

3. Load files into the data table

The target of the load can be a table or partition. If the table contains partitions, you must specify the partition name for each partition

Filepath can reference a file (in this case, Hive will move the file to the directory corresponding to the table) or a directory (in this case, Hive will move all the files in the directory to the directory corresponding to the table)

4. LOCAL keyword

LOCAL specified

The load command looks for filepath in the local file system. A relative path is interpreted relative to the current user's working directory. Users can also specify a complete URI for a local file, such as file:///user/hive/project/data1.

The load command copies the files at filepath into the destination file system, which is determined by the table's location attribute. The copied data files are then moved into the table's data location.

No LOCAL specified

If filepath is a complete URI, Hive uses that URI directly. Otherwise:

If no schema or authority is given, Hive uses the schema and authority from fs.default.name in the Hadoop configuration, which specifies the Namenode URI

If the path is not absolute, Hive interprets it relative to /user/<username>. Hive moves the contents of the files specified by filepath into the path specified by the table (or partition)

5. OVERWRITE

OVERWRITE specified

The contents of the target table (or partition), if any, are deleted, and then the contents of the file / directory pointed to by filepath are added to the table / partition.

If the target table (partition) already has a file and the file name conflicts with the file name in filepath, the existing file will be replaced by the new file.
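Putting the pieces together, two illustrative LOAD statements against the partitioned par_table (paths and partition values are hypothetical):

-- local file, replacing any existing data in the partition
LOAD DATA LOCAL INPATH '/tmp/pv_2010-07-07.txt' OVERWRITE INTO TABLE par_table PARTITION (date='2010-07-07', pos='top');

-- HDFS path, appending to the partition
LOAD DATA INPATH 'hdfs://namenode:9000/user/hive/staging/pv' INTO TABLE par_table PARTITION (date='2010-07-07', pos='top');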

6. Insert the query results into the Hive table

Insert query results into the Hive table

Write query results to the HDFS file system

Basic mode

INSERT OVERWRITE TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...)] select_statement1 FROM from_statement

Multi-insert mode

FROM from_statement
INSERT OVERWRITE TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...)] select_statement1
[INSERT OVERWRITE TABLE tablename2 [PARTITION ...] select_statement2] ...

Automatic partition mode

INSERT OVERWRITE TABLE tablename PARTITION (partcol1[=val1], partcol2[=val2] ...) select_statement FROM from_statement
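A concrete multi-insert sketch reading the page_view table from earlier (the pv_users and pv_ips target tables are hypothetical):

FROM page_view pv
INSERT OVERWRITE TABLE pv_users PARTITION (date='2010-07-07')
  SELECT pv.userid, pv.page_url WHERE pv.userid IS NOT NULL
INSERT OVERWRITE TABLE pv_ips PARTITION (date='2010-07-07')
  SELECT pv.ip, pv.page_url;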

7. Write the query results to the HDFS file system

INSERT OVERWRITE [LOCAL] DIRECTORY directory1 SELECT ... FROM ...

FROM from_statement
INSERT OVERWRITE [LOCAL] DIRECTORY directory1 select_statement1
[INSERT OVERWRITE [LOCAL] DIRECTORY directory2 select_statement2]

When data is written to the file system, it is serialized as text, with columns separated by ^A and rows separated by newlines.

8. INSERT INTO

INSERT INTO TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...)] select_statement1 FROM from_statement

7.3 HiveQL query operation

1. SQL operation

Basic Select operation

Query based on Partition

Join

2. Basic Select operation

SELECT [ALL | DISTINCT] select_expr, select_expr, ...
FROM table_reference
[WHERE where_condition]
[GROUP BY col_list [HAVING condition]]
[CLUSTER BY col_list | [DISTRIBUTE BY col_list] [SORT BY | ORDER BY col_list]]
[LIMIT number]

Use the ALL and DISTINCT options to control the handling of duplicate records. The default is ALL, meaning all records are returned; DISTINCT removes duplicate records.

Where condition

Similar to our traditional SQL where condition

Currently supports AND and OR; version 0.9 adds BETWEEN, IN, and NOT IN

EXISTS and NOT EXISTS are not supported

The difference between ORDER BY and SORT BY

ORDER BY global sort, with only one Reduce task

SORT BY only sorts on the local machine
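The difference is easy to see side by side (the sales table is hypothetical):

SELECT * FROM sales ORDER BY amount DESC;  -- one reducer, globally ordered output
SELECT * FROM sales SORT BY amount DESC;   -- each reducer orders only its own slice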

3. LIMIT

Limit can limit the number of records in a query

SELECT * FROM T1 LIMIT 5

Implement Top k query

The following query returns the five sales records with the largest amounts.

SET mapred.reduce.tasks = 1

SELECT * FROM test SORT BY amount DESC LIMIT 5

REGEX Column Specification

A SELECT statement can use regular expressions for column selection. The following statement queries all columns except ds and hr:

SELECT `(ds|hr)?+.+` FROM test

Query based on Partition

In general, a SELECT query scans the entire table. If a table was created with the PARTITIONED BY clause, a query can use the input-pruning feature (partition pruning) and scan only the partitions it needs

Hive's current implementation enables partition pruning only if the partition predicate appears in the WHERE clause closest to the FROM clause
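For instance, with the par_table defined earlier (partition values illustrative), this predicate placement lets Hive scan only the July partitions:

SELECT page_url FROM par_table WHERE date >= '2010-07-01' AND date <= '2010-07-31';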

4. Join

Syntax:

join_table:
    table_reference JOIN table_factor [join_condition]
  | table_reference {LEFT | RIGHT | FULL} [OUTER] JOIN table_reference join_condition
  | table_reference LEFT SEMI JOIN table_reference join_condition

table_reference:
    table_factor
  | join_table

table_factor:
    tbl_name [alias]
  | table_subquery alias
  | (table_references)

join_condition:
    ON equality_expression (AND equality_expression)*

equality_expression:
    expression = expression

Hive supports only equality joins, outer joins, and left semi joins. Hive does not support non-equality join conditions, because they are very difficult to express as map/reduce jobs

The LEFT, RIGHT, and FULL OUTER keywords control how unmatched (NULL) records are handled in a join

LEFT SEMI JOIN is a more efficient implementation of IN/EXISTS subquery

In a join, each map/reduce task works like this: the reducer caches the records of every table in the join sequence except the last one, and streams the last table through, serializing the results to the file system

In practice, the largest table should be written at the end.
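If the largest table cannot be written last, Hive's STREAMTABLE hint can name the table to stream instead of cache; a sketch with hypothetical tables a, b, c:

SELECT /*+ STREAMTABLE(a) */ a.val, b.val, c.val
FROM a JOIN b ON (a.key = b.key1) JOIN c ON (c.key = b.key2)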

5. Key points to note when using join

Only equality joins are supported

SELECT a.* FROM a JOIN b ON (a.id = b.id)

SELECT a.* FROM a JOIN b ON (a.id = b.id AND a.department = b.department)

More than two tables can be joined, for example:

SELECT a.val, b.val, c.val FROM a JOIN b ON (a.key = b.key1) JOIN c ON (c.key = b.key2)

If the join key of multiple tables in join is the same, the join is converted to a single map/reduce task

LEFT,RIGHT and FULL OUTER

Example: SELECT a.val, b.val FROM a LEFT OUTER JOIN b ON (a.key=b.key)

If you want to restrict the output of a join, write the filter condition in the WHERE clause or in the JOIN clause.

The confusing case involves partitioned tables:

SELECT c.val, d.val FROM c LEFT OUTER JOIN d ON (c.key=d.key)

WHERE c.ds='2010-07-07' AND d.ds='2010-07-07'

If no matching record of table c is found in table d, all columns of d come back NULL in the result, including the ds column. The WHERE condition on d.ds then filters out exactly those rows, discarding every c record that has no match in d. In this case, the WHERE clause effectively cancels the LEFT OUTER semantics

Solution.

SELECT c.val, d.val FROM c LEFT OUTER JOIN d ON (c.key=d.key AND d.ds='2010-07-07' AND c.ds='2010-07-07')

LEFT SEMI JOIN

The restriction of LEFT SEMI JOIN is that the right-hand table may appear only in the ON clause as a filter condition; it may not be referenced in the WHERE clause, the SELECT clause, or anywhere else.

SELECT a.key, a.value FROM a WHERE a.key IN (SELECT b.key FROM b);

can be rewritten as:

SELECT a.key, a.val FROM a LEFT SEMI JOIN b ON (a.key = b.key)

UNION ALL

To merge the results of multiple SELECTs, ensure that their column lists are consistent

select_statement UNION ALL select_statement UNION ALL select_statement ...
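For example (web_logs and mobile_logs are hypothetical; in the Hive versions this article covers, UNION ALL must sit inside a subquery):

SELECT t.id, t.source FROM (
  SELECT u.id, 'web' AS source FROM web_logs u
  UNION ALL
  SELECT m.id, 'mobile' AS source FROM mobile_logs m
) t;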

7.4 several habits that should be changed from SQL to HiveQL

1. Hive does not support the implicit (comma-separated) join syntax.

An inner join of two tables in SQL can be written as:

select * from dual a, dual b where a.key = b.key

In Hive it should be:

select * from dual a join dual b on a.key = b.key

2. Semicolon character

A semicolon ends a SQL statement, and the same is true in HiveQL; but HiveQL's semicolon recognition is less intelligent. For example:

select concat(key, concat(';', key)) from dual

When parsing this statement, HiveQL reports:

FAILED: Parse Error: line 0 mismatched input '' expecting ) in function specification

The solution is to escape the semicolon with its octal ASCII code, so the statement above should be written as:

select concat(key, concat('\073', key)) from dual

3. IS [NOT] NULL

NULL represents a null value in SQL. Note that in HiveQL, if a STRING field holds an empty string (length 0), then IS NULL evaluates to false for it.
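A short illustration (table t and column col are hypothetical):

SELECT * FROM t WHERE col IS NULL;  -- matches only true NULLs
SELECT * FROM t WHERE col = '';     -- matches zero-length strings, which are not NULL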
