
What are the basics covered in PHP interview questions?

2025-01-16 Update From: SLTechnology News&Howtos


This article walks through the basics behind common PHP interview questions. Many people have doubts about these fundamentals, so the editor has gathered material from various sources and organized it into simple, practical notes. Hopefully it helps you answer "What are the basics covered in PHP interview questions?" — read on and follow along!

I. The underlying implementation of PHP arrays

1. The underlying implementation is a hash table plus a doubly linked list (to resolve hash collisions)

Hash table: a mapping function computes the hash value (Bucket->h) of each key, which indexes directly to the corresponding Bucket

The hash table keeps an internal pointer to the current iteration position, which is why foreach is faster than for

Bucket: holds the array element's key and value, as well as the hash value h

2. How order is preserved

1. A mapping table is added between the hash function and the element-storage array (of Buckets); it has the same size as the storage array.

2. It stores each element's subscript in the actual storage array.

3. Elements are inserted into the actual storage array in the order given by the mapping table.

4. The mapping table is only a conceptual device. No separate table actually exists: when Bucket memory is allocated at initialization, the same number of uint32_t-sized slots is allocated alongside it, and arData is then offset to point at the element-storage area.

3. Resolving hash collisions (PHP uses the chaining method):

1. Chaining: when different keys map to the same slot, a linked list stores them (the list is traversed to match the key)

2. Open addressing: when a key maps to a slot that already holds data, probing continues until a free slot is found (this occupies other keys' slots, makes further collisions more likely, and degrades performance)

4. Basics of linked lists

Linked-list structures: queues, stacks, doubly linked lists

Singly linked list: element + pointer to the next element

Doubly linked list: pointer to the previous element + element + pointer to the next element
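To see the "ordered hash table" behavior from point 2 in action, here is a small self-contained demo (plain PHP, no extensions assumed):

<?php
// PHP arrays preserve insertion order regardless of key type or key order,
// because arData stores elements in insertion order and the hash table
// only maps key -> slot.
$map = [];
$map['banana'] = 2;
$map[42]       = 'answer';
$map['apple']  = 1;

foreach ($map as $key => $value) {
    echo "$key => $value\n"; // prints banana, 42, apple — insertion order
}

foreach walks arData front to back, which is also why it outruns a for loop that looks keys up one by one.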

II. The time complexity and space complexity of bubble sort

1. Code implementation

$arr = [2, 4, 1, 5, 3, 6];
for ($i = 0; $i < count($arr); $i++) {
    for ($j = $i + 1; $j < count($arr); $j++) {
        if ($arr[$i] > $arr[$j]) {
            $tmp = $arr[$i];
            $arr[$i] = $arr[$j];
            $arr[$j] = $tmp;
        }
    }
}

2. Complexity

Time complexity: O(n²) on average and in the worst case; the early-exit variant, which stops when a pass makes no swaps, is O(n) in the best case

Space complexity: O(1) — the sort is in place

Asynchronous order processing:

Add a message send-record table and a retry mechanism to prevent loss of asynchronous messages

Create the order; the front end opens a websocket connection, or polls, to watch the order status

Consumers verify the record status to prevent duplicate consumption

Stock return: after the order is created, send a delayed message that verifies the order's payment status and whether stock needs to be returned

XII. Preventing SQL injection

1. Filter special characters

2. Filter database keywords

3. Validate data types and formats

4. Use prepared statements and bind variables (a PDO sketch follows the sharding process in section XV below)

XIII. Transaction isolation levels

1. How the standard SQL isolation levels are implemented

Read uncommitted: other transactions can read data that has not been committed — dirty reads

The transaction places no lock on the data it reads

At the moment of an update, a row-level shared lock is taken and held until the transaction ends

Read committed: data read between the start and end of a transaction may be inconsistent, because other transactions modified it — non-repeatable reads

A row-level shared lock is taken on data while it is being read and released when the read completes

At the moment of an update, a row-level exclusive lock is taken and held until the transaction ends

Repeatable read: data read between the start and end of the transaction stays consistent; other transactions cannot modify it

A row-level shared lock is taken on the data read, from the very start of the transaction

At the moment of an update, a row-level exclusive lock is taken and held until the transaction ends

Other transactions may still insert new rows during the transaction, causing phantom reads

Serializable

Reads take a table-level shared lock

Updates take a table-level exclusive lock

2. InnoDB's transaction isolation levels and how they are implemented (note: not the same thing as the above — distinguish the standard levels from InnoDB's implementation)

1) Basic concepts

MVCC, multi-version concurrency control: relies on the undo log and read views; reads take no locks at all, which raises the database's concurrency, and only writes take locks

A row has multiple versions: every transactional update creates a new version, and old versions are kept in the undo log

A transaction, once started, sees only the results of transactions that had already committed

Current read: reads the latest version

Snapshot read: reads a historical version

Gap lock: locks a whole range of the index; update ... where id between 10 and 20 locks the entire range whether or not any rows exist in it, so an insert with id = 15 is blocked

Gap locks exist only at the repeatable-read isolation level

Next-key lock: the record lock on the index record plus the gap lock back to the previous index value (open at the front, closed at the back); prevents phantom reads

2) The isolation levels

Read uncommitted

Reads place no locks; all reads are current reads

At the moment of an update, a row-level shared lock is taken and held until the transaction ends

Read committed

Reads place no locks; all reads are snapshot reads

At the moment of an update, a row-level exclusive lock is taken and held until the transaction ends

Repeatable read

Under master-slave replication, without gap locks: process A on the master runs delete where id < 6 but has not yet committed; process B inserts id = 3 and commits; A then commits. The master keeps a row with id = 3, but the binlog (ordered by commit) replays the insert and then the delete, so the slave ends up without the row — master and slave diverge.

Reads place no locks; all reads are snapshot reads

At the moment a transaction updates data it must take row-level exclusive locks (Record lock, GAP lock, next-key lock), released when the transaction ends

Gap locks solve the phantom-read problem; MVCC snapshots solve the non-repeatable-read problem

Serializable

Reads take a table-level shared lock; all reads are current reads

Updates take a table-level exclusive lock

XIV. Index principles

An index is a storage structure that helps the database look up data efficiently. It lives on disk, so using it costs disk IO.

1. Storage engines

MyISAM supports table locks; index and data are stored separately, which suits cross-server migration

InnoDB supports row locks; index and data are stored in a single file

2. Index types

Hash index

Well suited to exact-match queries, and efficient at them

Cannot sort; unsuited to range queries

On hash collisions the chain must be traversed (PHP arrays and Redis zsets are built on similar principles)

B-tree vs. B+tree

A B+tree stores all data in the leaf nodes; internal nodes hold only keys, so a single disk IO fetches more nodes

A B-tree stores keys and data in internal and leaf nodes alike; a lookup need not reach a leaf, because an internal node can return the data directly

A B+tree adds pointers from each leaf node to its neighbor, which makes range traversal convenient

Clustered vs. non-clustered indexes

Clustered index: index and data are stored together in the same node

Non-clustered index: index and data are stored separately; the index leads to the address where the data actually lives

In detail:

InnoDB uses a clustered index, and by default the primary key index is the clustered one (with no primary key, a non-null unique index is chosen; failing that, an implicit primary key is generated); secondary indexes point into the clustered index, which then yields the actual storage location

MyISAM uses non-clustered indexes; every index finds the data in a single lookup

Strengths and weaknesses of the clustered index:

1. Index and data live together, so a whole page of rows is cached in the buffer pool, and rereading the same page only touches memory

2. After a data update only the primary key index needs maintenance; secondary indexes are unaffected

3. Secondary indexes store the primary key's value, so they take more physical space

4. Random UUIDs distribute data unevenly and can degrade the clustered index toward full scans, so prefer an auto-increment primary key id

XV. Strategies for splitting tables (and databases)

1. Process

Estimate capacity and the number of subtables →

Choose the sharding key based on the business → choose the sharding rule (hash, balance, range) → execute → plan for the expansion problem
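Backing up to section XII for a moment — a minimal sketch of point 4 there (prepared statements with bound variables) using PDO; the DSN, credentials, and table are illustrative assumptions:

<?php
// Hypothetical connection details — substitute your own.
$pdo = new PDO('mysql:host=127.0.0.1;dbname=app;charset=utf8mb4', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// The statement is compiled with placeholders; user input travels separately
// as data, so it can never be parsed as SQL.
$stmt = $pdo->prepare('SELECT id, name FROM users WHERE email = :email');
$stmt->execute([':email' => $_GET['email'] ?? '']);
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

Filtering and type validation (points 1–3) still apply, but binding is the structural defense.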

2. Horizontal split

Rows are split across multiple tables according to the sharding key

Every table has the same structure

The union of all subtables is the full data set

3. Vertical split

Columns are split across tables

The table structures differ; the associated rows across subtables together form one complete record

Extension-table splits: hot fields vs. non-hot fields (list vs. detail split)

When fetching data, avoid join where possible; run two queries and combine the results in code

4. Problems

Cross-database join problem

Global tables: for system tables that many shards need to join against

Redundancy method: duplicate commonly used fields

Assembly method: combine the results of multiple queries in code

Cross-node paging, sorting, and aggregate-function problems

Transaction consistency

Global primary key id

Using UUIDs → degrades clustered-index efficiency

Use a distributed auto-increment id instead

Capacity expansion problem

Double-write migration:

Write new data into both the old and new databases at the same time

Copy the historical data over to the new database

Using the old database as the reference, verify data consistency, then delete the redundant data

Upgrading a slave library:

Promote a slave to master; the data is already consistent, so only the redundant data needs deleting

Doubling capacity this way requires doubling the slave libraries first
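To make the hash rule from the process above concrete, a minimal routing sketch; the table prefix and the shard count of 16 are illustrative assumptions:

<?php
// Pick the physical table for a given user id: user_0 .. user_15.
// An integer key can be taken modulo directly; crc32() would handle string keys.
function userTable(int $userId, int $shards = 16): string
{
    return 'user_' . ($userId % $shards);
}

echo userTable(100042); // user_10

Note that changing $shards remaps almost every key — which is exactly the expansion problem the process tells you to plan for.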

XVI. The execution process of select and update

1. MySQL components

Server layer: connector → query cache → analyzer (with preprocessor) → optimizer → executor

Engine layer: stores and retrieves data

2. Select execution process

The client sends a request to establish a connection

The server layer checks the query cache and returns immediately on a hit; otherwise it continues

The analyzer parses the SQL statement and the preprocessor validates it (existence and types of fields, etc.)

The optimizer generates an execution plan

The executor calls the engine-layer API to fetch the results

The query result is returned

3. Update execution process

Basic concept

The redo log has a fixed size and is written circularly

Think of the redo log as a ring: the checkpoint marks the point from which old records may be overwritten, and the write point marks where writing currently stands

When the write point catches up with the checkpoint, the redo log is full and its contents must start being synced to disk

Writing the log is sequential IO

Writing data pages straight to disk (flushing) is random IO, because the rows are scattered across different sectors

Sequential IO is more efficient, so writing the change log first defers the flush and improves throughput

Redo log: an InnoDB-specific physical log that records page changes

The redo log cycles: its space is fixed and runs out, at which point old records are overwritten

Binlog: a server-layer logical log, shared by all engines, that records the original logic of statements

The binlog appends until a file reaches a set size, then switches to the next file; it never overwrites earlier logs

The redo log mainly serves crash recovery; the binlog records an archived binary history

The redo log can only recover a short window of data; the binlog, with suitable retention settings, can restore far more

Buffer pool: an in-memory cache of data pages (including InnoDB's clustered index); the next read of the same page is served straight from the buffer pool

Updates modify the buffer pool first and reach disk later

Dirty pages: the in-memory cache has been updated but the disk has not

Flushing: a dedicated InnoDB process writes buffer pool pages to disk, batching multiple modifications at intervals

Redo log and binlog

WAL (write-ahead logging): write the log before the data

Redo log flushing mechanism, check point

Execution steps (two-phase commit — a distributed-transaction technique that keeps the two logs consistent)

Analyze the update conditions and locate the data to update (the cache is used here)

The server calls the engine-layer API; InnoDB updates the data in memory, writes the redo log, and enters the prepare state

The engine notifies the server layer to start committing

The server layer writes the binlog, then calls the InnoDB interface to issue a commit request

The engine layer commits once it receives the request

Crash-recovery rules after a failure:

If the redo log record is in the commit state, commit it directly

If it is in the prepare state, check whether the transaction is complete in the binlog: if so, commit; otherwise roll back

Failure cases without two-phase commit (update table_x set value = 10 where value = 9):

Writing the redo log before the binlog

1. The redo log is written, the binlog is not, and the machine crashes at that moment

2. After restart the redo log is intact, so crash recovery restores value = 10

3. The binlog has no record of the change, so a later restore from the binlog yields value = 9 — the logs disagree

Writing the binlog before the redo log

1. The binlog write completes, the redo log write does not

2. After restart there is no redo log record, so value is still 9

3. When data is restored from the (complete) binlog, value becomes 10 — again inconsistent

Undo log

Records the pre-update value before the change is written to the buffer pool

If the update fails, roll back directly to the state saved in the undo log

The function and three formats of binlog

Function:

1. Data recovery

2. Master-slave replication

Format (binary file):

1) statement

1. Records the original text of each SQL statement

2. Deleting a whole table records just one SQL statement; there is no need to record every row change, which saves IO, improves performance, and shrinks the log

3. May cause master-slave inconsistencies (stored procedures, functions, etc.)

4. Under the RC (read committed) isolation level, the binlog records in transaction-commit order, which can make master-slave replication inconsistent; repeatable read fixes this through the gap locks it introduces

2) row

1. Records how each row is modified, without the SQL statement's context

2. Produces a large volume of binlog

3. Deleting a table records how every single row was deleted

3) mixed

1. A mixed version of the first two formats

2. Automatically chooses the format per statement:

Ordinary SQL data changes use statement

Table-structure changes, functions, stored procedures and similar operations select row

Update and delete still record the changes to every affected row

The principle and problems of master-slave synchronization (replication) and read-write splitting

1. Problems to be solved

Data distribution

Load balancing

Data backup and high availability, avoiding a single point of failure

Read-write splitting, easing the pressure on the database

Upgrade testing (run a newer MySQL version on a slave first)

2. Supported replication types (three formats of binlog)

SQL-statement-based replication

Row-based replication

Hybrid replication

3. Principle

1) Basic concepts

The slave library spawns two threads:

IO thread

SQL thread

The master library spawns one thread:

log dump thread

2) Process (binlog must be enabled on the master node)

1. When start slave is run on the slave node, it creates an IO thread that connects to the master node

2. Once the connection succeeds, the master creates a log dump thread (one per slave node)

3. When the binlog changes, the master's log dump thread reads the binlog contents and sends them to the slave node

4. While reading the binlog contents, the master's dump thread locks the binlog, releasing the lock before the contents are sent to the slave

5. The slave's IO thread receives the binlog contents and writes them to the local relay log file

6. Master and slave locate their synchronization point by binlog file + position offset; the slave saves the offset it has received, and after a crash and restart it resumes synchronization from that position automatically

7. The slave's SQL thread reads the local relay log, parses it into concrete operations, and executes them, keeping master and slave consistent

4. Master-slave replication mode

1) Asynchronous mode (default)

1. Can leave master and slave inconsistent (replication lag)

2. On receiving a transaction from the client, the master commits it immediately and returns to the client

3. If the master crashes after committing but before the log dump thread has sent the events, master and slave diverge

4. The master never waits on synchronization, so performance is best

2) full synchronization mode

1. More reliable, but it hurts the master's response time

2. After receiving a client transaction, the master must wait for the binlog to be sent to the slaves and for every slave to complete the transaction before returning to the client

3) semi-synchronous mode

1. Adds some reliability at the cost of some master response time

2. After receiving a client transaction, the master waits until the binlog has reached at least one slave and been saved to its local relay log; the master then commits and returns to the client

4) server-id configuration and server-uuid

1. server-id identifies a database instance, preventing SQL from looping endlessly in chained master-slave, multi-master, and multi-slave topologies

2. The default server-id is 0: the host still records binary logs, but rejects all slave connections

3. For a slave, server-id = 0 means it refuses to connect to other instances

4. server-id is a global variable; a change only takes effect after a service restart

5. When master and slave share the same server-id:

With the default replicate-same-server-id = 0, the slave skips all replicated data, so master and slave diverge

With replicate-same-server-id = 1, SQL may execute in an infinite loop

When two slaves (B, C) share a server-id, the replication connections misbehave:

The master (A), on seeing the same server-id again, drops the previous connection and registers the new one

So the connections from B and C reconnect over and over

MySQL creates and records the server-uuid configuration automatically

If master and slave share the same server-uuid, replication errors out and stops; setting replicate-same-server-id = 1 avoids the error (not recommended)

5. Read-write splitting

1) Code-based implementation: saves hardware cost (a sketch follows at the end of this section)

2) Middleware/proxy-based implementation

3) Master-slave delay

The slave's hardware performs worse than the master's

Heavy query load strains the slave, eats CPU, and slows synchronization: use one master with several slaves

Large transactions: the binlog is only written after the transaction finishes executing, so slave reads lag

DDL on the master (alter, drop, create)
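A minimal sketch of the code-based approach from 1) above; hostnames and credentials are placeholders, and a real project would pool connections and handle failover:

<?php
// Route writes to the master and reads to a slave. Placeholder DSNs.
class DbRouter
{
    private PDO $master;
    private PDO $slave;

    public function __construct()
    {
        $this->master = new PDO('mysql:host=master.db;dbname=app', 'user', 'pass');
        $this->slave  = new PDO('mysql:host=slave1.db;dbname=app', 'user', 'pass');
    }

    // SELECTs go to the slave; everything else (INSERT/UPDATE/DELETE/DDL) to the master.
    public function query(string $sql, array $params = []): PDOStatement
    {
        $isRead = stripos(ltrim($sql), 'SELECT') === 0;
        $pdo = $isRead ? $this->slave : $this->master;
        $stmt = $pdo->prepare($sql);
        $stmt->execute($params);
        return $stmt;
    }
}

Because of the replication lag described above, reading your own just-written data from a slave can return stale values; a common refinement is to pin reads to the master for a short window after a write.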

Deadlock

1. The four necessary conditions

1. Mutual exclusion

2. Hold and wait: a process holds resources while requesting more (prevented by allocating all resources at once, or none at all)

3. No preemption: resources a process has acquired cannot be taken away while it waits for others

4. Circular wait

In short: a resource can be held by only one process at a time; a process may hold resources while requesting new ones; held resources cannot be forcibly reclaimed; and several processes each wait on resources the others hold.

2. Breaking a deadlock

1. Terminate all the processes involved (kill all)

2. Kill them one at a time, checking after each whether the deadlock has cleared
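The circular-wait condition is the one most often broken in application code. A sketch in PHP using MySQL row locks (the accounts table and columns are illustrative): locking rows in one global order means two concurrent transfers can never wait on each other in a cycle.

<?php
// Assumes $pdo is a PDO connection with ERRMODE_EXCEPTION enabled.
// Always lock rows in ascending-id order, breaking circular wait.
function transfer(PDO $pdo, int $fromId, int $toId, int $amount): void
{
    [$first, $second] = $fromId < $toId ? [$fromId, $toId] : [$toId, $fromId];
    $pdo->beginTransaction();
    try {
        $lock = $pdo->prepare('SELECT balance FROM accounts WHERE id = ? FOR UPDATE');
        $lock->execute([$first]);   // smaller id locked first, always
        $lock->execute([$second]);
        $pdo->prepare('UPDATE accounts SET balance = balance - ? WHERE id = ?')
            ->execute([$amount, $fromId]);
        $pdo->prepare('UPDATE accounts SET balance = balance + ? WHERE id = ?')
            ->execute([$amount, $toId]);
        $pdo->commit();
    } catch (Throwable $e) {
        $pdo->rollBack();
        throw $e;
    }
}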

20. MySQL deep-pagination optimization: limit 100000 (offset), 10 (page_size)

1. Reason

When querying a page, MySQL does not skip the offset (100000) directly; it fetches offset + page_size = 100000 + 10 = 100010 rows and then discards the first 100000, which is why deep pages are inefficient.

2. Optimization schemes

Deferred join: use a covering index to fetch ids first, then join back for the full rows

Primary-key threshold method: with an auto-increment primary key, compute the minimum and maximum primary key values satisfying the condition (again using a covering index)

Seek method: record where the previous page ended and avoid OFFSET entirely (see the sketch below)
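A sketch of the deferred-join and seek approaches via PDO; the items table and its auto-increment id column are illustrative:

<?php
// Assumes $pdo is a connected PDO instance.

// Deferred join: the covering-index subquery scans only `id`,
// then the outer join fetches full rows for just 10 ids.
$sql = 'SELECT t.* FROM items t
        INNER JOIN (SELECT id FROM items ORDER BY id LIMIT 100000, 10) AS tmp USING (id)';
$rows = $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC);

// Seek method: carry the last id of the previous page instead of an OFFSET.
$stmt = $pdo->prepare('SELECT * FROM items WHERE id > :last_id ORDER BY id LIMIT 10');
$stmt->execute([':last_id' => 100000]);
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);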

21. Redis cache and MySQL data consistency

Method:

1. Update redis first and then update the database

Scenario: update set value = 10 where value = 9

1) redis updated successfully: redis value = 10

2) failed to update the database: mysql value = 9

3) data inconsistency

2. Update the database first, then update redis

Scenario: process A update set value = 10 where value = 9; process B update set value = 11 where value = 9

1) Process A updates the database first but has not yet written the cache: mysql value = 10, redis value = 9

2) Process B updates the database, commits its transaction, and writes the cache: mysql value = 11, redis value = 11

3) Process A's transaction commits and A then writes the cache: redis value = 10

4) Final state: mysql value = 11, redis value = 10 — inconsistent

3. Delete the cache before updating the database

Scenario: process A runs update set value = 10 where value = 9; process B queries value

1) Process A deletes the cache but has not yet modified the data, or its transaction has not committed

2) Process B queries, misses the cache, reads the database, and writes the cache: redis value = 9

3) Process A finishes updating the database: mysql value = 10

4) Final state: mysql value = 10, redis value = 9 — inconsistent

Solution:

1. Delayed double deletion

Scenario: process A runs update set value = 10 where value = 9; process B queries value

1) Process A deletes the cache but has not yet modified the data, or its transaction has not committed

2) Process B queries, misses the cache, reads the database, and writes the cache: redis value = 9

3) Process A finishes updating the database: mysql value = 10

4) Process A waits out an estimated delay, then deletes the cache again

5) Final state: mysql value = 10, redis value empty (the next query goes straight to the database)

6) The delay is there so the second delete lands after any stale cache write by process B (a sketch follows the serialization scheme below)

2. Request serialization

1) Create two queues: an update queue and a query queue

2) When the cache is missing, put the key into the update queue

3) If a new request arrives before the query completes and the key is still in the update queue, put the key into the query queue and wait; if it is not there, repeat step 2

4) If the key is already present in the query queue, there is no need to enqueue it again

5) Once the data update completes, rpop the update queue, then rpop the query queue to release the waiting query requests

6) Waiting queries can poll with while + sleep under a configured maximum delay; if the wait times out, return null
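A minimal sketch of the delayed double deletion above, using phpredis and PDO; the 500 ms delay, key, and table names are assumptions — the delay just has to exceed one read-plus-cache-write round trip:

<?php
// Assumes $redis is a connected \Redis instance and $pdo a PDO connection.
function updateValue(Redis $redis, PDO $pdo, int $old, int $new): void
{
    $redis->del('value');                              // first delete
    $stmt = $pdo->prepare('UPDATE t SET value = :new WHERE value = :old');
    $stmt->execute([':new' => $new, ':old' => $old]);
    usleep(500000);                                    // assumed delay: 500 ms
    $redis->del('value');                              // second delete catches any stale write
}

In production the second delete is often pushed to a queue so the request is not blocked for the sleep.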

22. connect and pconnect in Redis

1. connect: the connection is released when the script ends

1. close(): releases the connection

2. pconnect (persistent connection): the connection is not released when the script ends; it stays inside the php-fpm process, and its life cycle follows that of the php-fpm process.

1. close() does not release the connection;

it only stops the current php-cgi process from issuing further Redis requests on it

Later requests in the same php-cgi can still reuse the connection, until the php-fpm life cycle ends

2. Reduces the cost of establishing Redis connections

3. Keeps one php-fpm worker from reconnecting many times

4. Consumes more memory, and the connection count keeps rising

5. Within the same php-fpm worker (php-cgi child process), a previous request may affect the next one

3. The problem of connection reuse in pconnect

Connection A runs select db 1; connection B runs select db 2 — the shared persistent connection switches, changing connection A's db as well

Solution: create a connection instance for each db
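A sketch of the select-db pitfall under pconnect (phpredis; host and port are placeholders):

<?php
// Both handles reuse the same persistent connection inside one php-fpm worker.
$a = new Redis();
$a->pconnect('127.0.0.1', 6379);
$a->select(1);                 // $a now points at db 1

$b = new Redis();
$b->pconnect('127.0.0.1', 6379);
$b->select(2);                 // the shared underlying connection switches to db 2

$a->get('key');                // reads from db 2, not db 1!
// Workaround: dedicate one connection instance per db, or pass a distinct
// persistent_id to pconnect() for each db so they map to separate connections.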

23. Why Redis zset sorted sets use a skiplist

1. Basic concepts

1. A skiplist is a randomized data structure that stores elements in ordered, layered linked lists (it requires the elements to be ordered)

2. It evolved from the ordered linked list and the multi-level linked list

3. Duplicate scores are allowed, so comparisons check not only the score but also the member value

4. Each node has a level-1 backward pointer, used to iterate from the tail of the list back to the head

5. Time complexity O(log n); space complexity O(n)

2. Skiplist vs. balanced tree

1) Range-query efficiency

Skiplist range queries are more efficient: after locating the minimum, you just walk the bottom-level linked list until you exceed the maximum

A balanced tree, after finding the minimum, must continue an in-order traversal to collect the other nodes not exceeding the maximum

2) Memory footprint

A skiplist averages 1/(1-p) pointers per node (p is the level-promotion probability)

A balanced tree has 2 pointers per node

3) Insert and delete operations

A skiplist only needs to adjust the pointers of neighboring nodes

A balanced-tree change can force subtree rebalancing
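For reference, the zset operations that ride on this structure, via phpredis (the key name is illustrative):

<?php
// Assumes $redis is a connected \Redis instance.
$redis->zAdd('leaderboard', 100, 'alice');
$redis->zAdd('leaderboard', 250, 'bob');
$redis->zAdd('leaderboard', 180, 'carol');

// Range by rank (ascending): a walk along the bottom-level list.
$all = $redis->zRange('leaderboard', 0, -1, true);   // true = with scores

// Range by score: find the minimum via the index levels, then walk neighbors.
$mid = $redis->zRangeByScore('leaderboard', 150, 200);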

24. Redis expiration deletion and eviction mechanisms

1. Expiration deletion policies

1) Timed deletion

A timer deletes the key the moment it expires

Memory is reclaimed promptly but more CPU is burned; under heavy concurrency this eats CPU and slows request handling

Memory-friendly, CPU-unfriendly

2) Lazy deletion

Expired keys are left alone; expiry is only checked, and the key deleted, the next time it is fetched

Many expired keys may never be touched again, so memory can overflow

Memory-unfriendly, CPU-friendly

3) Periodic deletion

Check at fixed intervals and delete the expired keys found

The algorithm decides how many keys to check and how many to delete

2. Redis uses lazy deletion + periodic deletion

Periodically sample random keys that have a TTL set and delete the expired ones

Each cleanup round is capped at 25% of CPU; once the cap is reached, the scan exits

Keys the periodic scan misses and that are never fetched again remain in memory, so an eviction policy is needed on top

3. Eviction policies (run when there is not enough memory to write new data)

volatile-lru: among keys with a TTL set, evict the least recently used first

volatile-ttl: among keys with a TTL set, evict those expiring soonest first

volatile-random: evict at random among keys with a TTL set

allkeys-lru: evict the least recently used among all keys

allkeys-random: evict at random among all keys

noeviction: never evict; report an error when memory runs out

25. Common Redis problems and their solutions

1. Cache avalanche: many cache entries expire at the same moment, so requests hit the database directly; database memory and CPU pressure spikes, possibly to the point of downtime

Solutions:

Never expire hot data, or spread it across different instances to contain single-machine failures

Add a random offset to cache TTLs so large batches of entries cannot expire together

Use a two-level (double) cache: A is the primary cache with a short TTL, B the backup cache with a long TTL; write both on every update

2. Cache penetration: the data exists in neither the cache nor the database; under heavy traffic every request punches straight through to the database, which can bring it down

Solutions:

Bloom filter: a bit vector of length m (a list whose entries are only 0 or 1)

Several different hash functions each produce an index, and the corresponding bit positions are set to 1

A Bloom filter answers either "possibly in the set" or "definitely not in the set"

It can give false positives, but its filtering efficiency is high

In the extreme, when the filter has no free bits left, every query returns true

Cache empty results (with a short TTL)

Filter parameters at the business layer

3. Cache breakdown: the data is in the database, but its cache entry suddenly expires under a flood of requests, raising database pressure and possibly causing downtime

Solutions:

Never expire hot data

Mutex: whoever acquires the lock rebuilds the cache, releasing the lock whether the rebuild succeeds or fails (see the sketch below)
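A sketch combining two of the fixes above — TTL jitter against avalanche and a mutex rebuild against breakdown — with phpredis; the key names, TTLs, and loadFromDb() are illustrative assumptions:

<?php
// Assumes $redis is a connected \Redis instance; loadFromDb() stands in
// for the real database read.
function getWithMutex(Redis $redis, string $key): ?string
{
    $val = $redis->get($key);
    if ($val !== false) {
        return $val;                                  // cache hit
    }
    // Cache miss: only one request rebuilds. NX+EX is an atomic lock with a timeout.
    if ($redis->set("lock:$key", 1, ['nx', 'ex' => 3])) {
        try {
            $val = loadFromDb($key);                  // hypothetical DB read
            $ttl = 3600 + random_int(0, 300);         // jitter: keys don't all expire together
            $redis->set($key, $val, ['ex' => $ttl]);
        } finally {
            $redis->del("lock:$key");                 // release on success or failure
        }
        return $val;
    }
    usleep(50000);                                    // others back off and retry
    return getWithMutex($redis, $key);                // bounded retries omitted for brevity
}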

26. php-fpm in detail, and its life cycle

1. Basic knowledge

1) CGI protocol

A dynamic language's code files must be handed by the server to the matching parser to be understood

The CGI protocol exists so that the server and the interpreter can talk to each other

To parse PHP files, the server needs the PHP interpreter plus the CGI protocol

2) CGI program = php-cgi

Php-cgi is a CGI program that complies with the CGI protocol.

Also known as the PHP interpreter

Standard CGI parses php.ini, initializes the execution environment, and so on for every request, which hurts performance

After every configuration change, php-cgi must be restarted for php.ini to take effect

Workers cannot be scheduled dynamically; only a fixed number can be specified up front

3) FastCGI protocol

It is also a protocol / specification like CGI, but it is optimized on the basis of CGI and is more efficient.

Used to improve the performance of CGI programs

Realize the management of CGI process

4) FastCGI program = php-fpm

Php-fpm is a FastCGI program that complies with the FastCGI protocol.

How the FastCGI program manages the CGI processes:

Start a master process, parse the configuration file, and initialize the environment

Start multiple worker child processes

When a request arrives, it is handed to a worker process for execution

Solves the problem of restarting smoothly after php.ini changes

process_control_timeout: how long child processes wait to react to signals multiplexed from the master (a request that finishes within the window completes; one that cannot is abandoned)

Sets the time php-fpm gives the fastcgi processes to respond to the restart signal

process_control_timeout = 0 means the setting is inactive, and a smooth restart cannot be guaranteed

Setting process_control_timeout too high can congest system requests

With process_control_timeout = 10, code that needs 11 s may have its execution cut short by the restart

Suggested value: request_terminate_timeout

Restart type

Elegant restart

Forced restart

2. php-fpm life cycle: to be updated

27. Communication between Nginx and PHP

1. Communication method: fastcgi_pass

1) tcp socket

The only option across servers, when nginx and php are not on the same machine

Connection-oriented protocol to better ensure the correctness and integrity of communication

2) unix socket

There is no need for network protocol stack, packing and unpacking, etc.

Reduce tcp overhead and be more efficient than tcp socket

Less stable under high concurrency: a sudden surge of connections creates heavy buffering, and large packets may simply come back as errors

That concludes the study of "What are the basics covered in PHP interview questions?". Hopefully it resolves your doubts — theory works best alongside practice, so go and try these out!
