This article introduces how to do spatial correlation analysis of NetCDF data with Python. If you have had doubts about how to do this, the simple, practical walkthrough below should help resolve them. Let's get started!
Introduction: I have long wanted to understand the computational thinking behind spatial correlation analysis, so today I picked up a Python script and some data to practice with. First, a note: the data and script for this exercise come from an expert friend; after I asked him, he allowed me to write this public-account post to share them, and I thank him for his guidance. The script can be found by searching the Meteorological Home forum or its app (if I remember correctly), or you can message me on WeChat or in the backend to get it. Back then I did not feel I had truly grasped the computational thinking of this method, and the real test is whether I can quickly apply it elsewhere. So today I came back to review it and to share my understanding with you.
First, the data format is NetCDF (.nc). There are two files: the Hadley Centre SST data (the sst variable), and the PC data, which is the principal-component time series obtained from an EOF analysis of the East Pacific SSTA. Once we understand the data, we are ready to run the program. The original script includes both regression analysis and correlation analysis, but today I only do the spatial correlation analysis; if you are interested, you can read the expert's long post on the Meteorological Home forum. If you haven't installed the Cartopy package, contact me in the backend.
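Before running anything, it can help to confirm the variable names and dimensions yourself. Here is a minimal sketch, assuming the same local file paths as the script below (adjust them to wherever your copies live); it prints roughly the same structural information you would see in Panoply:

import xarray as xr

with xr.open_dataset(r'D:\inuyasha\codeX\codeLEARN\sst.DJF.mean.anom.nc') as ds:
    print(ds)                      # dimensions, coordinates, and variables
    print(ds['sst_anom'].shape)    # expect (time, lat, lon)

with xr.open_dataset(r'D:\inuyasha\codeX\codeLEARN\pc.DJF.sst.nc') as ds:
    print(ds['pc'].shape)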
To make each step easy to follow, I chose to run it in Jupyter, which makes it convenient to execute the program cell by cell. The plotting part is not very difficult; the key lies in the data preprocessing.
The script for spatial correlation analysis is as follows:
import numpy as np    # numerical computation, e.g. the correlation coefficients
import xarray as xr   # read the .nc files
from sklearn.feature_selection import f_regression   # significance test
import matplotlib.pyplot as plt      # draw and display the figures
import cartopy.crs as ccrs           # draw maps; if not installed, contact me in the backend
import cartopy.feature as cfeature   # add vector features; not used here because I lack the data
from cartopy.mpl.ticker import LongitudeFormatter, LatitudeFormatter   # lon/lat tick formats
import cmaps   # NCL colormaps; if missing, contact me, or find it on the Meteorological Home forum
# Read the .nc data with a context manager and extract the variables.
# You can use NASA's Panoply software to inspect .nc metadata.
with xr.open_dataset(r'D:\inuyasha\codeX\codeLEARN\sst.DJF.mean.anom.nc') as f1:
    pre = f1['sst_anom'][:-1, :]      # take the 3-D data: time, latitude, longitude
    lat, lon = f1['lat'], f1['lon']   # extract lat/lon, needed later for gridding
    pre2d = np.array(pre).reshape(pre.shape[0], pre.shape[1] * pre.shape[2])
    # axis 0 is the time-step count, axis 1 the latitude count, axis 2 the longitude count

with xr.open_dataset(r'D:\inuyasha\codeX\codeLEARN\pc.DJF.sst.nc') as f2:
    pc = f2['pc'][0, :]
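If the reshape step feels opaque, here is a tiny sketch with made-up numbers showing what it does: a (time, lat, lon) cube becomes a (time, lat*lon) matrix, one column per grid point.

import numpy as np

demo = np.arange(2 * 3 * 4).reshape(2, 3, 4)   # 2 time steps, 3 lats, 4 lons
demo2d = demo.reshape(demo.shape[0], demo.shape[1] * demo.shape[2])
print(demo2d.shape)   # (2, 12): each row is one time step, each column one grid point
print(np.array_equal(demo2d[0], demo[0].ravel()))   # True: a row is the flattened lat-lon field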
# Compute the correlation coefficient of every grid point with the PC series
pre_cor = np.corrcoef(pre2d.T, pc)[:-1, -1].reshape(len(lat), len(lon))
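Why does [:-1, -1] give the map of correlations? np.corrcoef(x, y) appends y as one extra variable, so the last column of the resulting matrix holds each grid point's correlation with pc. A small sketch with synthetic data confirms it matches a plain per-point loop:

import numpy as np

rng = np.random.default_rng(0)
grid = rng.standard_normal((5, 10))   # 5 grid points, 10 time steps
series = rng.standard_normal(10)      # stands in for the PC time series

r_fast = np.corrcoef(grid, series)[:-1, -1]
r_loop = np.array([np.corrcoef(g, series)[0, 1] for g in grid])
print(np.allclose(r_fast, r_loop))    # True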
# Significance test: f_regression returns (F statistics, p-values); take the p-values
pre_cor_sig = f_regression(np.nan_to_num(pre2d), pc)[1].reshape(len(lat), len(lon))   # replace NaN with 0
area = np.where(pre_cor_sig < 0.05)   # numpy to the rescue again

nx, ny = np.meshgrid(lon, lat)   # grid the lon/lat; print them and you will see why this is needed

plt.figure(figsize=(16, 8))   # create an empty canvas
# set the colorbar font to Times New Roman
plt.rcParams['font.family'] = 'Times New Roman'
plt.rcParams['font.size'] = 16

ax2 = plt.subplot(projection=ccrs.PlateCarree(central_longitude=180))   # draw on the canvas; this is an axes, not an axis
ax2.coastlines(lw=0.4)
ax2.set_global()
c2 = ax2.contourf(nx, ny, pre_cor, extend='both', cmap=cmaps.nrl_sirkes, transform=ccrs.PlateCarree())
plt.colorbar(c2, fraction=0.05, orientation='horizontal', shrink=0.4, pad=0.06)
# extend sets the colorbar shape ('both' = pointed at both ends); pad is the gap from the main plot; search the web for the other parameters

# stipple the significant points to highlight the significant regions
sig2 = ax2.scatter(nx[area], ny[area], marker='+', s=1, c='k', alpha=0.6, transform=ccrs.PlateCarree())

plt.title('Correlation Analysis', fontdict={'family': 'Times New Roman', 'size': 16})   # title in Times New Roman too; it is recommended for both numbers and text
ax2.set_xticks(np.arange(0, 361, 30), crs=ccrs.PlateCarree())   # longitude tick range; numpy again
plt.xticks(fontproperties='Times New Roman', size=16)   # x/y tick labels in Times New Roman
plt.yticks(fontproperties='Times New Roman', size=16)
ax2.set_yticks(np.arange(-90, 90, 15), crs=ccrs.PlateCarree())   # latitude tick range
ax2.xaxis.set_major_formatter(LongitudeFormatter(zero_direction_label=False))   # no E/W suffix at 0 degrees
ax2.yaxis.set_major_formatter(LatitudeFormatter())   # format ticks as degrees rather than bare numbers
ax2.set_extent([-178, 178, -70, 70], crs=ccrs.PlateCarree())   # set the spatial extent
plt.grid(color='k')   # draw a grid
plt.show()   # display the figure

Now run it and see what you get. If you don't like this color scheme, swap in another NCL colormap; NCL has many beautiful ones, so change it to whatever you like.
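A side note, not part of the original script: for a single predictor, the p-values from f_regression are equivalent to the two-sided t-test on the Pearson correlation itself, so you can cross-check them directly from pre_cor. A sketch, assuming SciPy is available:

import numpy as np
from scipy import stats

def corr_pvalue(r, n):
    """Two-sided p-value for a Pearson correlation r over n paired samples (assumes |r| < 1)."""
    t = r * np.sqrt((n - 2) / (1.0 - r**2))
    return 2 * stats.t.sf(np.abs(t), df=n - 2)

# e.g. p = corr_pvalue(pre_cor, len(pc)); the points with p < 0.05 should match
# the f_regression mask up to floating-point error (NaN handling aside).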
To understand the computational thinking behind this method, we need to look at both the raw data and its shape after each preprocessing step. Understanding the data layout makes the whole program much easier to follow.
import numpy as np
import xarray as xr
from sklearn.feature_selection import f_regression
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature
from cartopy.mpl.ticker import LongitudeFormatter, LatitudeFormatter
import cmaps
with xr.open_dataset(r'D:\inuyasha\codeX\codeLEARN\sst.DJF.mean.anom.nc') as f1:
    pre = f1['sst_anom'][:-1, :]      # take the 3-D data: time, latitude, longitude
    lat, lon = f1['lat'], f1['lon']
    pre2d = np.array(pre).reshape(pre.shape[0], pre.shape[1] * pre.shape[2])
    # axis 0 is time, axis 1 latitude, axis 2 longitude
    # pre2d.shape is (39, 16020); after .T it becomes (16020, 39)
with xr.open_dataset(r'D:\inuyasha\codeX\codeLEARN\pc.DJF.sst.nc') as f2:
    pc = f2['pc'][0, :]   # pc is an array of length 39
# Correlation coefficient
pre_cor = np.corrcoef(pre2d.T, pc)[:-1, -1].reshape(len(lat), len(lon))
# pre_cor starts as (16020,) and is reshaped to (89, 180)
# Significance test
# pre_cor_sig = f_regression(np.nan_to_num(pre2d), pc)[1].reshape(len(lat), len(lon))   # replace NaN with 0
# area = np.where(pre_cor_sig < 0.05)
nx, ny = np.meshgrid(lon, lat)   # grid the lon/lat into nx, ny
Look at how the longitudes and latitudes are laid out after gridding; plotting them might make it more intuitive.
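Here is a quick sketch with tiny made-up coordinates showing what meshgrid produces; note that the output shape is (len(lat), len(lon)), matching pre_cor:

import numpy as np

lon_demo = np.array([0, 90, 180])
lat_demo = np.array([-30, 30])
nx_demo, ny_demo = np.meshgrid(lon_demo, lat_demo)
print(nx_demo)        # [[0 90 180], [0 90 180]] -- longitude repeats down the rows
print(ny_demo)        # [[-30 -30 -30], [30 30 30]] -- latitude repeats across the columns
print(nx_demo.shape)  # (2, 3), i.e. (len(lat_demo), len(lon_demo))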
That concludes this study of how to use Python to do spatial correlation analysis of NetCDF data. I hope it has resolved your doubts; pairing theory with practice is the best way to learn, so go and try it yourself!