What are the common problems in Meta analysis application

2025-03-26 Update From: SLTechnology News&Howtos

Shulou(Shulou.com)06/01 Report--

This article explains in detail the common problems in the application of Meta analysis. It is intended as a reference; after reading it, you should have a working understanding of the relevant issues.

Meta analysis is a research method for the systematic, quantitative statistical synthesis and evaluation of multiple independent studies (usually 5 or more) that share the same research purpose. Its value lies in reconciling contradictory results from similar studies, evaluating the degree of variation between studies, increasing statistical power, improving the generalizability of research results, and providing a basis and reference for more in-depth research.

Data preparation for Meta Analysis

Meta analysis is essentially an observational study and requires rigorous design from the start. Two important tasks generally precede the statistical processing of a Meta analysis: the research design, and the collection and evaluation of the relevant literature. The research design includes defining the research purpose, the source and scope of the literature data and related criteria, and the choice of statistical analysis methods. Once the criteria for literature collection are defined, the completeness of the literature retrieval directly affects the reliability of the Meta analysis results. In literature retrieval, it is best to find all relevant studies (including unpublished ones) in order to reduce the impact of publication bias on the results.

The quality of the literature depends mainly on two aspects: one is the research design (results from randomized allocation are more reliable than those from observational comparisons); the other is the sample size (large-sample studies are more reliable than small-sample ones).

Literature quality is generally evaluated with the relevant scales and tools. For example, the Cochrane risk of bias assessment tool is commonly used for randomized trials, while the Newcastle-Ottawa Scale (NOS) and the CASP checklist for case-control studies are used for observational studies. In a Meta analysis, highly reliable studies should be given a larger weight, while studies of poor reliability should be given a small weight or excluded.

Example 1: to explore the role of Slian (a folic acid preparation) in the prevention of neural tube defects, a search was made in the full-text journal database of a Chinese medical digital library, and articles published in various journals from 1994 to 2004 were selected using the keywords "folic acid" and "neural tube defects". As a result, 5 articles were found to meet the requirements.

Example 1 does not give a complete retrieval strategy in its design; it only defines the source of the data (a single retrieval system), the keywords, the search method and the search scope. It does not specify the inclusion or exclusion criteria for individual studies, the study type, the selection conditions for subjects in each study, the measurement of variables, or the statistics that should be recorded from each study (such as mortality, OR values, etc.). To obtain the most comprehensive data, literature should be retrieved from multiple electronic databases or retrieval systems. Example 1 collects relevant research from only one database, so the completeness of the retrieval is poor, which directly undermines the reliability of the results. In addition, the original author did not provide a flow diagram in accordance with the PRISMA statement, so the number of articles included or excluded at each stage, and the reasons, cannot be clearly determined.

Example 2: a Meta analysis was used to evaluate the efficacy and safety of gangliosides in children with cerebral palsy. PubMed, Embase, the Cochrane Library, the China Biomedical Literature Database, the China Journal Full-text Database and the Wanfang Digital Full-text Journal Database were searched for randomized controlled trials of gangliosides in the treatment of children with cerebral palsy. A total of 10 articles were included according to the retrieval strategy, and each was given a quality score of 2 points.

In example 2, when evaluating the quality of the literature, the original author found that the retrieved studies generally did not report the specific randomization method, allocation concealment (or blinding), or losses to follow-up and withdrawals. All 10 randomized controlled trials were therefore given a quality score of 2 points (the same quality). Although the quality of the literature was judged to be low, the Meta analysis was still carried out, so the original author's results need further verification.

Statistical processing of Meta Analysis

In the past, simple methods for combining P values, such as Fisher's method and Stouffer's method, were often used for the comprehensive analysis of several independent research results. Combining P values can only yield a qualitative conclusion that the overall effect is "significant" or "not significant"; it lacks a quantitative summary result, and the information provided by each study is combined mechanically regardless of its quality. It ignores the fact that studies differ in reliability because of differences in author expertise, test conditions and sample size. In practice, however, doctors want to know, for example, by how many percentage points the efficacy of drug A exceeds that of drug B, or, when comparing the blood-pressure-lowering effects of two drugs, what the average difference between them is. To obtain quantitative estimates of such differences, Meta analysis is needed.
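As an illustration, Fisher's method for combining P values can be sketched in a few lines of Python. The three P values are hypothetical; the closed-form chi-square tail probability below is valid because the degrees of freedom (2k) are always even:

```python
import math

def fisher_combined_p(p_values):
    """Combine independent P values with Fisher's method.

    Test statistic: X^2 = -2 * sum(ln p_i), which follows a
    chi-square distribution with df = 2k under the joint null.
    """
    k = len(p_values)
    x2 = -2.0 * sum(math.log(p) for p in p_values)
    df = 2 * k
    # Chi-square survival function for even df = 2k has a closed form:
    # P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    half = x2 / 2.0
    p_combined = math.exp(-half) * sum(half ** i / math.factorial(i)
                                       for i in range(k))
    return x2, df, p_combined

# Three hypothetical studies, none individually significant at 0.05,
# yet the combined test is significant (X^2 ~ 15.28, P ~ 0.018)
x2, df, p = fisher_combined_p([0.08, 0.10, 0.06])
```

Note how this illustrates the limitation discussed above: the combined result says only "significant overall", with no estimate of how large the effect is.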

Meta analysis emphasizes combining the corresponding effect sizes (effect size, ES) to obtain a quantitative result. The so-called effect size, also known as effect magnitude, refers to a dimensionless statistic that reflects the association between the treatment factor (level) and the response variable in each study, such as the logarithm of the OR (or of the relative risk, RR), the difference between two rates (rate difference, RD), the mean difference between the test group and the control group (mean difference, MD) or the standardized mean difference (SMD), the correlation coefficient, and so on.
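Two of these effect sizes, the log OR and the rate difference, can be computed from a 2x2 table as sketched below; the counts are hypothetical, and the standard-error formulas are the usual large-sample approximations:

```python
import math

def log_odds_ratio(a, b, c, d):
    """ln(OR) and its standard error from a 2x2 table:
    a, b = events / non-events in the treatment group,
    c, d = events / non-events in the control group."""
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return log_or, se

def rate_difference(a, n1, c, n2):
    """Rate difference (RD) between two groups and its standard error.
    a / n1 = events over total in treatment, c / n2 in control."""
    p1, p2 = a / n1, c / n2
    rd = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return rd, se

# Hypothetical study: 20/100 events on treatment vs 35/100 on control
log_or, se_or = log_odds_ratio(20, 80, 35, 65)
rd, se_rd = rate_difference(20, 100, 35, 100)
```

The log OR (rather than the OR itself) is used in pooling because its sampling distribution is approximately normal, which is what the weighted-combination step that follows assumes.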

The statistical processing of Meta analysis is mainly divided into two steps: first, test the homogeneity (or heterogeneity) of the statistics; second, combine the statistics (effects) reported by each study. The homogeneity test is an important part of Meta analysis; its purpose is to check for bias and to determine whether the research results are consistent, in order to find and eliminate obviously unreasonable results. Therefore, before the weighted pooling of effects, the heterogeneity among studies and its sources should be identified and investigated. The homogeneity test is generally carried out with the Q statistic (a χ² test), and the I² statistic is used to quantify the heterogeneity. If the results of the studies are consistent (homogeneous), the fixed effect model can be used for weighted pooling. Otherwise, the causes of the inconsistency should be analyzed: some extreme or contradictory studies can be eliminated (elimination should be cautious! If the inconsistency is caused by some special factor, for example if a study has too many losses to follow-up, its results should not be included in the Meta analysis), or the effects can be pooled with a random effect model. If the homogeneity test confirms heterogeneity, its source can be explored by subgroup analysis: if a confounding factor explains the heterogeneity well, the studies can be divided into subgroups for weighted pooling of effects.
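These two steps can be sketched together in Python. The study data are hypothetical; the fixed effect model uses standard inverse-variance weights, and the random effect model uses the DerSimonian-Laird estimate of the between-study variance τ²:

```python
def meta_analyze(effects, ses):
    """Inverse-variance meta-analysis with a heterogeneity check.

    effects : per-study effect sizes (e.g. ln OR or mean differences)
    ses     : their standard errors
    Returns Q, I2 (%), the fixed-effect pooled estimate, and a
    DerSimonian-Laird random-effects pooled estimate.
    """
    w = [1.0 / se ** 2 for se in ses]                  # fixed-effect weights
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    # Cochran's Q: weighted squared deviations from the pooled estimate
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
    k = len(effects)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    # DerSimonian-Laird between-study variance tau^2
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    w_star = [1.0 / (se ** 2 + tau2) for se in ses]    # random-effects weights
    random_eff = sum(wi * ei for wi, ei in zip(w_star, effects)) / sum(w_star)
    return q, i2, fixed, random_eff

# Three hypothetical studies (effect = ln OR, with standard errors)
q, i2, fixed, rand = meta_analyze([0.10, 0.30, 0.85], [0.12, 0.15, 0.20])
```

With these inputs I² exceeds 50%, so (as the text says) the random-effects estimate is the one to report; note that it sits further from the most precise study than the fixed-effect estimate, because τ² flattens the weights.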

Meta analysis covers a wide range of application fields, and many statistical methods are available. The most common in pediatric health and clinical research are the pooling of ORs (or relative risks, RR), the pooling of the difference between two rates, and the pooling of the difference between two means.

In example 1 above, the original author's homogeneity test is described only as follows: "through analysis with the Meta-analysis (fixed effect) model, each group of data is homogeneous". The original author should first judge whether the research results are consistent (that is, perform a homogeneity test), and only then decide whether to use the fixed effect model for weighted pooling of the effects.

Example 3: a comprehensive quantitative Meta analysis was made of 21 case-control studies on the risk factors for simple obesity in Chinese children. The results showed that 11 risk factors, such as being overweight at birth, were significantly associated with childhood obesity. In example 3, the original author described the consistency as follows: "the results for the three factors of overweight at birth, frequent consumption of greasy food, and partial/picky eating are in good agreement, and the fixed effect model was used in the pooled analysis; the remaining statistics (other risk factors) show obvious heterogeneity among the studies, and the random effect model was used in the pooled analysis." However, the original author did not give the specific statistics and test results of the homogeneity test.

Example 4: a Meta analysis was carried out on randomized controlled trials published at home and abroad since 2000 on probiotics for the prevention of allergic eczema in children, with subgroup analyses by intervention strain and by follow-up time point. The results showed no significant effect in the intervention groups that used only Lactobacillus or only Bifidobacterium to prevent allergic eczema in children.

The Meta analysis in example 4 finally included 23 randomized controlled trials, of which 10 used mixed strains, 12 used Lactobacillus, and only one used Bifidobacterium alone. Therefore, the subgroup conclusion that "Bifidobacterium alone is not effective in preventing allergic eczema in children" is open to question: the number of studies is too small (only one article), so the reliability and stability of the conclusion are in doubt. Similarly, in a reported Meta analysis [9] of the relationship between cesarean section and the incidence of cerebral palsy, a total of 9 articles were retrieved and the original author conducted a subgroup analysis by mode of cesarean section. Only two articles each were included in the subgroups on preterm cesarean section and emergency cesarean section, and it was concluded that there was no statistically significant difference between the control and intervention groups (OR preterm: 0.84, 95% CI: 0.63 to 1.13; OR emergency: 9.77, 95% CI: 7.37 to 12.96). Because so few articles (only 2) were included in each subgroup, this conclusion still needs further verification.

The simplest and most intuitive way to present the statistical results of a Meta analysis is the forest plot, which is a necessary part of a Meta analysis report. A forest plot is drawn in a plane Cartesian coordinate system centered on a vertical line of no effect (at an abscissa of 1 for ratio measures or 0 for difference measures): multiple line segments parallel to the horizontal axis describe the effect and confidence interval of each included study, and a diamond (or other figure) describes the pooled effect and its confidence interval.
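The anatomy of a forest plot can be illustrated with a minimal text rendering; the study names, effects and confidence intervals below are hypothetical (on a difference scale, so the line of no effect is at 0):

```python
def forest_plot_text(studies, lo=-1.0, hi=1.0, width=41):
    """Render a minimal text forest plot.

    studies: list of (name, effect, ci_low, ci_high) tuples.
    '+' marks the point estimate, '-' spans the confidence
    interval, and '|' marks the line of no effect at 0.
    """
    def col(x):
        x = min(max(x, lo), hi)                 # clip to the axis range
        return round((x - lo) / (hi - lo) * (width - 1))
    null_col = col(0.0)
    lines = []
    for name, eff, lo_ci, hi_ci in studies:
        row = [" "] * width
        for c in range(col(lo_ci), col(hi_ci) + 1):
            row[c] = "-"                        # confidence interval
        row[col(eff)] = "+"                     # point estimate
        if row[null_col] == " ":
            row[null_col] = "|"                 # line of no effect
        lines.append(f"{name:<10}{''.join(row)}")
    return "\n".join(lines)

plot = forest_plot_text([
    ("Study A", -0.20, -0.55, 0.15),
    ("Study B",  0.30,  0.05, 0.55),
    ("Pooled",   0.10, -0.08, 0.28),
])
print(plot)
```

A reader can tell at a glance whether each interval crosses the line of no effect, which is exactly the information the article says figure 2 of example 5 fails to convey.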

Example 5: to explore the relationship between vitamin D and childhood asthma, the vitamin D levels of children with asthma and of normal children were compared by Meta analysis, with the forest plot shown in figure 2. The studies in figure 2 are heterogeneous (I² > 50%), so the random effect model should have been used to pool the effects, rather than the fixed effect model shown in figure 2. In addition, figure 2 does not specifically and accurately present the effect and confidence interval of each study and of the pooled result; that is, the forest plot is not intuitive enough.

Publication bias and sensitivity Analysis of Meta Analysis

The most prominent problem in Meta analysis is publication bias: for example, medical journals tend to publish studies with statistically significant (P < 0.05) results, so positive findings are over-represented in the retrievable literature.
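One common way to screen for this asymmetry is Egger's regression test, sketched below with hypothetical data: the standardized effect (effect / SE) is regressed on precision (1 / SE), and an intercept far from zero suggests funnel-plot asymmetry. Note that with only a handful of studies, as in the examples above, the test has very little power (at least ~10 studies are usually recommended):

```python
import math

def egger_test(effects, ses):
    """Egger's regression asymmetry test (sketch).

    Regress standardized effect (effect / SE) on precision (1 / SE).
    Returns (intercept, t statistic); |t| large => possible
    publication bias (compare against a t distribution, df = k - 2).
    """
    y = [e / s for e, s in zip(effects, ses)]
    x = [1.0 / s for s in ses]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    s2 = sum(r ** 2 for r in resid) / (n - 2)          # residual variance
    se_int = math.sqrt(s2 * (1.0 / n + mx ** 2 / sxx)) # SE of the intercept
    return intercept, intercept / se_int

# Four hypothetical studies (effects with standard errors)
intercept, t = egger_test([0.5, 0.4, 0.6, 0.3], [0.1, 0.2, 0.25, 0.5])
```

Here the intercept is small relative to its standard error, so these hypothetical data show no evidence of asymmetry; a funnel plot of effect against precision is the usual graphical companion to this test.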
