This article introduces how to implement search-by-image with Python artificial intelligence: given one picture, retrieve visually similar pictures from a data set. In practice many people get stuck on the details, so the editor will walk you through them step by step. I hope you read carefully and get something out of it!
I. Experimental requirements
Given a query image, find images similar to it (at least 5) in the whole data set (at least 100 samples), and display them in order of similarity.
II. Environment configuration
Interpreter: Python 3.10
IDE: PyCharm
Required packages: numpy, h5py, matplotlib, keras, pillow
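If any of these are missing, they can usually be installed from the command line with pip install numpy h5py matplotlib keras pillow. Note that in recent releases Keras ships as part of TensorFlow, so depending on your setup pip install tensorflow may be needed as well.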
III. Code files
1. vgg.py

# -*- coding: utf-8 -*-
import numpy as np
from numpy import linalg as LA
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input as preprocess_input_vgg

class VGGNet:
    def __init__(self):
        self.input_shape = (224, 224, 3)
        self.weight = 'imagenet'
        self.pooling = 'max'
        self.model_vgg = VGG16(weights=self.weight,
                               input_shape=(self.input_shape[0], self.input_shape[1], self.input_shape[2]),
                               pooling=self.pooling,
                               include_top=False)
        self.model_vgg.predict(np.zeros((1, 224, 224, 3)))  # warm-up prediction

    # extract the last convolution feature of VGG16
    def vgg_extract_feat(self, img_path):
        img = image.load_img(img_path, target_size=(self.input_shape[0], self.input_shape[1]))
        img = image.img_to_array(img)
        img = np.expand_dims(img, axis=0)
        img = preprocess_input_vgg(img)
        feat = self.model_vgg.predict(img)
        # L2-normalize so that a dot product equals cosine similarity
        norm_feat = feat[0] / LA.norm(feat[0])
        return norm_feat

2. index.py

# -*- coding: utf-8 -*-
import os
import h5py
import numpy as np
from vgg import VGGNet

def get_imlist(path):
    return [os.path.join(path, f) for f in os.listdir(path) if f.endswith('.jpg')]

if __name__ == "__main__":
    database = r'D:\pythonProject5\flower_roses'
    index = 'vgg_featureCNN.h5'
    img_list = get_imlist(database)

    print("feature extraction starts")
    feats = []
    names = []
    model = VGGNet()
    for i, img_path in enumerate(img_list):
        norm_feat = model.vgg_extract_feat(img_path)  # modify the network here if needed
        img_name = os.path.split(img_path)[1]
        feats.append(norm_feat)
        names.append(img_name)
        print("extracting feature from image No. %d, %d images in total" % ((i + 1), len(img_list)))

    feats = np.array(feats)
    output = index
    print("writing feature extraction results ...")
    h5f = h5py.File(output, 'w')
    h5f.create_dataset('dataset_1', data=feats)
    h5f.create_dataset('dataset_2', data=np.string_(names))
    h5f.close()
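After index.py finishes, it is worth sanity-checking the file it wrote before moving on to retrieval. A minimal sketch, assuming the file name vgg_featureCNN.h5 and the dataset keys used above:

# quick sanity check of the feature index written by index.py
# (assumes the file name and dataset keys used above)
import h5py

h5f = h5py.File('vgg_featureCNN.h5', 'r')
feats = h5f['dataset_1'][:]   # shape: (num_images, feature_dim), L2-normalized rows
names = h5f['dataset_2'][:]   # image file names stored as byte strings
h5f.close()

print("indexed images:", feats.shape[0])
print("feature dimension:", feats.shape[1])
print("first few names:", [str(n, 'utf-8') for n in names[:5]])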
") H6f = h6py.File (output,'w') h6f.create_dataset ('dataset_1', data=feats) # h6f.create_dataset (' dataset_2', data= names) h6f.create_dataset ('dataset_2' Data=np.string_ (names) h6f.close () 3, test.py#-*-coding: utf-8-*-from vgg import VGGNetimport numpy as npimport h6pyimport matplotlib.pyplot as pltimport matplotlib.image as mpimgimport argparse query = rudder:\ pythonProject5\ rose\ red_rose.jpg'index = 'vgg_featureCNN.h6'result = rudder:\ pythonProject5\ flower_roses'# read in indexed images' feature vectors and corresponding image namesh6f = h6py.File (index) 'r') # feats = h6f ['dataset_1'] [:] feats = h6f [' dataset_1'] [:] print (feats) imgNames = h6f ['dataset_2'] [:] print (imgNames) h6f.close () print ("searching starts") queryImg = mpimg.imread (query) plt.title ("Query Image") plt.imshow (queryImg) plt.show () # init VGGNet16 modelmodel = VGGNet () # extract query image's feature Compute simlarity score and sortqueryVec = model.vgg_extract_feat (query) # modify the network print (queryVec.shape) print (feats.shape) scores = np.dot (queryVec, feats.T) rank_ID = np.argsort (scores) [::-1] rank_score = score [rank _ ID] # print (rank_ID) print (rank_score) # number of top retrieved images to showmaxres = 6 # six images with the highest similarity imlist = [] for I were retrieved Index in enumerate (rank_ ID [0: maxres]): imlist.append (imgNams [index]) print (type (imgNams [index]) print ("image names:" + str (imgNams [index]) + "scores:% f"% rank_ score [I]) print ("top% d images in order are:"% maxres, imlist) # show top # maxres retrieved result one by onefor i, im in enumerate (imlist): image = mpimg.imread (result + "/" + str (im) 'utf-8')) plt.title ("search output% d"% (I + 1)) plt.imshow (np.uint8 (image)) f = plt.gcf () # get the current image f.savefig (rusted D:\ pythonProject5\ result\ {} .jpg' .format (I), dpi=100) # f.clear () # Free memory plt.show () IV.
IV. Running results
1. Project folder
Data set
Result folder (before running)
Query image
2. Similarity ranking output
3. Save the results
V. Conclusion
Finally, here is a practical and simple crawler script for downloading images from Baidu image search, which comes in handy for building the data set.
import os
import time
import requests
import re

def imgdata_set(save_path, word, epoch):
    q = 0  # page offset, also the stop-crawling condition
    a = 0  # counter used as the picture file name
    while True:
        time.sleep(1)
        # word is the keyword to search for, pn is the page offset
        url = "https://image.baidu.com/search/flip?tn=baiduimage&ie=utf-8&word={}&pn={}&ct=&ic=0&lm=-1&width=0&height=0".format(word, q)
        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.96 Safari/537.36 Edg/88.0.705.56'}
        response = requests.get(url, headers=headers)
        html = response.text
        urls = re.findall('"objURL":"(.*?)"', html)
        for url in urls:
            print(a)  # the name of the picture
            response = requests.get(url, headers=headers)
            image = response.content
            with open(os.path.join(save_path, "{}.jpg".format(a)), 'wb') as f:
                f.write(image)
            a = a + 1
        q = q + 20
        if (q / 20) >= int(epoch):
            break

if __name__ == "__main__":
    save_path = input('the path you want to save to: ')
    word = input('what picture do you want to download? Please enter: ')
    epoch = input('how many rounds of pictures do you want to download? Please enter (about 60 pictures per round): ')
    imgdata_set(save_path, word, epoch)
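In practice some of the scraped objURL links are dead or return non-image content, which would make the loop above crash halfway through. A hedged variant of the inner download loop (my addition, not part of the original script) that skips bad links:

# defensive version of the download loop: skip URLs that fail or time out
for url in urls:
    try:
        response = requests.get(url, headers=headers, timeout=10)
        response.raise_for_status()  # raise on HTTP errors such as 404
        with open(os.path.join(save_path, "{}.jpg".format(a)), 'wb') as f:
            f.write(response.content)
        a = a + 1
    except requests.RequestException:
        print("skipping bad url:", url)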
That is all for how to implement search-by-image with Python artificial intelligence. Thank you for reading. If you want to learn more, you can keep following this site, where the editor will publish more high-quality practical articles!