From: SLTechnology News & Howtos (shulou.com), Database section. Updated 2025-01-18.
Shulou (Shulou.com) 05/31 Report --
This article explains the implementation logic of the PortalRunMulti and PortalRun functions in PostgreSQL. The material is straightforward and practical; we begin with the data structures these functions rely on.
I. Basic Information
This section covers the data structures, macro definitions, and dependent functions used by the PortalRunMulti function.
Data Structures / Macro Definitions

1. Portal
typedef struct PortalData *Portal;

typedef struct PortalData
{
    /* Bookkeeping data */
    const char *name;           /* portal's name */
    const char *prepStmtName;   /* source prepared statement (NULL if none) */
    MemoryContext portalContext;    /* subsidiary memory for portal */
    ResourceOwner resowner;     /* resources owned by portal */
    void        (*cleanup) (Portal portal); /* cleanup hook */

    /*
     * State data for remembering which subtransaction(s) the portal was
     * created or used in.  If the portal is held over from a previous
     * transaction, both subxids are InvalidSubTransactionId.  Otherwise,
     * createSubid is the creating subxact and activeSubid is the last
     * subxact in which we ran the portal.
     */
    SubTransactionId createSubid;   /* the creating subxact */
    SubTransactionId activeSubid;   /* the last subxact with activity */

    /* The query or queries the portal will execute */
    const char *sourceText;     /* text of query (as of 8.4, never NULL) */
    const char *commandTag;     /* command tag for original query */
    List       *stmts;          /* list of PlannedStmts */
    CachedPlan *cplan;          /* CachedPlan, if stmts are from one */
    ParamListInfo portalParams; /* params to pass to query */
    QueryEnvironment *queryEnv; /* environment for query */

    /* Features/options */
    PortalStrategy strategy;    /* see above */
    int         cursorOptions;  /* DECLARE CURSOR option bits */
    bool        run_once;       /* portal will only be run once */

    /* Status data */
    PortalStatus status;        /* see above */
    bool        portalPinned;   /* a pinned portal can't be dropped */
    bool        autoHeld;       /* was automatically converted from pinned to
                                 * held (see HoldPinnedPortals()) */

    /* If not NULL, Executor is active; call ExecutorEnd eventually: */
    QueryDesc  *queryDesc;      /* info needed for executor invocation */

    /* If portal returns tuples, this is their tupdesc: */
    TupleDesc   tupDesc;        /* descriptor for result tuples */
    /* and these are the format codes to use for the columns: */
    int16      *formats;        /* a format code for each column */

    /*
     * Where we store tuples for a held cursor or a PORTAL_ONE_RETURNING or
     * PORTAL_UTIL_SELECT query.  (A cursor held past the end of its
     * transaction no longer has any active executor state.)
     */
    Tuplestorestate *holdStore; /* store for holdable cursors */
    MemoryContext holdContext;  /* memory containing holdStore */

    /*
     * Snapshot under which tuples in the holdStore were read.  We must keep
     * a reference to this snapshot if there is any possibility that the
     * tuples contain TOAST references, because releasing the snapshot could
     * allow recently-dead rows to be vacuumed away, along with any toast
     * data belonging to them.  In the case of a held cursor, we avoid
     * needing to keep such a snapshot by forcibly detoasting the data.
     */
    Snapshot    holdSnapshot;   /* registered snapshot, or NULL if none */

    /*
     * atStart, atEnd and portalPos indicate the current cursor position.
     * portalPos is zero before the first row, N after fetching N'th row of
     * query.  After we run off the end, portalPos = # of rows in query, and
     * atEnd is true.  Note that atStart implies portalPos = 0, but not the
     * reverse: we might have backed up only as far as the first row, not to
     * the start.  Also note that various code inspects atStart and atEnd,
     * but only the portal movement routines should touch portalPos.
     */
    bool        atStart;
    bool        atEnd;
    uint64      portalPos;

    /* Presentation data, primarily used by the pg_cursors system view */
    TimestampTz creation_time;  /* time at which this portal was defined */
    bool        visible;        /* include this portal in pg_cursors? */
} PortalData;
2. List

typedef struct ListCell ListCell;

typedef struct List
{
    NodeTag     type;           /* T_List, T_IntList, or T_OidList */
    int         length;
    ListCell   *head;
    ListCell   *tail;
} List;

struct ListCell
{
    union
    {
        void       *ptr_value;
        int         int_value;
        Oid         oid_value;
    }           data;
    ListCell   *next;
};
3. Snapshot

typedef struct SnapshotData *Snapshot;

/*
 * Struct representing all kind of possible snapshots.
 *
 * There are several different kinds of snapshots:
 * * Normal MVCC snapshots
 * * MVCC snapshots taken during recovery (in Hot-Standby mode)
 * * Historic MVCC snapshots used during logical decoding
 * * snapshots passed to HeapTupleSatisfiesDirty()
 * * snapshots passed to HeapTupleSatisfiesNonVacuumable()
 * * snapshots used for SatisfiesAny, Toast, Self where no members are
 *   accessed.
 *
 * TODO: It's probably a good idea to split this struct using a NodeTag
 * similar to how parser and executor nodes are handled, with one type for
 * each different kind of snapshot to avoid overloading the meaning of
 * individual fields.
 */
typedef struct SnapshotData
{
    SnapshotSatisfiesFunc satisfies;    /* tuple test function */

    /*
     * The remaining fields are used only for MVCC snapshots, and are
     * normally just zeroes in special snapshots.  (But xmin and xmax are
     * used specially by HeapTupleSatisfiesDirty, and xmin is used specially
     * by HeapTupleSatisfiesNonVacuumable.)
     *
     * An MVCC snapshot can never see the effects of XIDs >= xmax.  It can
     * see the effects of all older XIDs except those listed in the
     * snapshot.  xmin is stored as an optimization to avoid needing to
     * search the XID arrays for most tuples.
     */
    TransactionId xmin;         /* all XID < xmin are visible to me */
    TransactionId xmax;         /* all XID >= xmax are invisible to me */

    /*
     * For normal MVCC snapshot this contains the all xact IDs that are in
     * progress, unless the snapshot was taken during recovery in which case
     * it's empty.  For historic MVCC snapshots, the meaning is inverted,
     * i.e. it contains *committed* transactions between xmin and xmax.
     *
     * note: all ids in xip[] satisfy xmin <= xip[i] < xmax
     */
    TransactionId *xip;
    uint32      xcnt;           /* # of xact ids in xip[] */

    /*
     * note: all ids in subxip[] are >= xmin, but we don't bother filtering
     * out any that are >= xmax
     */
    TransactionId *subxip;
    int32       subxcnt;        /* # of xact ids in subxip[] */
    bool        suboverflowed;  /* has the subxip array overflowed? */

    bool        takenDuringRecovery;    /* recovery-shaped snapshot? */
    bool        copied;         /* false if it's a static snapshot */

    CommandId   curcid;         /* in my xact, CID < curcid are visible */

    /*
     * An extra return value for HeapTupleSatisfiesDirty, not used in MVCC
     * snapshots.
     */
    uint32      speculativeToken;

    /*
     * Book-keeping information, used by the snapshot manager
     */
    uint32      active_count;   /* refcount on ActiveSnapshot stack */
    uint32      regd_count;     /* refcount on RegisteredSnapshots */
    pairingheap_node ph_node;   /* link in the RegisteredSnapshots heap */

    TimestampTz whenTaken;      /* timestamp when snapshot was taken */
    XLogRecPtr  lsn;            /* position in the WAL stream when taken */
} SnapshotData;

Dependent Functions

1. lfirst_*

/*
 * NB: There is an unfortunate legacy from a previous incarnation of
 * the List API: the macro lfirst() was used to mean "the data in this
 * cons cell".  To avoid changing every usage of lfirst(), that meaning
 * has been kept.  As a result, lfirst() takes a ListCell and returns
 * the data it contains; to get the data in the first cell of a
 * List, use linitial().  Worse, lsecond() is more closely related to
 * linitial() than lfirst(): given a List, lsecond() returns the data
 * in the second cons cell.
 */
#define lnext(lc)               ((lc)->next)
#define lfirst(lc)              ((lc)->data.ptr_value)
#define lfirst_int(lc)          ((lc)->data.int_value)
#define lfirst_oid(lc)          ((lc)->data.oid_value)
© 2024 shulou.com SLNews company. All rights reserved.