What are the commonly used machine learning and deep learning libraries in Python


This article introduces the machine learning and deep learning libraries commonly used in Python. These are questions many people run into in real projects, so I hope that after reading carefully you will come away with something useful.

Preface

With the current popularity of artificial intelligence, many industries have turned their attention to it, and wave after wave of AI learning has followed. Although the principles behind artificial intelligence cannot be covered in detail in a short article, as in every discipline we do not need to reinvent the wheel: we can quickly build AI models using the rich ecosystem of AI frameworks, and so keep up with the AI trend.

Artificial intelligence refers to a set of technologies that enable machines to process information the way humans do. Machine learning uses computer programs to learn from historical data and make predictions on new data. A neural network is a machine learning computational model based on the structure and characteristics of the biological brain. Deep learning is a subset of machine learning that processes large amounts of unstructured data, such as speech, text and images. These concepts are therefore nested: artificial intelligence is the broadest term, and deep learning is the most specific.

To give everyone a preliminary understanding of the Python libraries commonly used in artificial intelligence, and to help you choose the ones that meet your own needs, this article gives a brief, comprehensive introduction to the more common artificial intelligence libraries.

Introduction to Python's common machine learning and deep learning libraries

1 、 NumPy

NumPy (Numerical Python) is an extension library for Python that supports a large number of multi-dimensional array and matrix operations, and also provides a large library of mathematical functions for working with arrays. NumPy's core is written in C, and it stores objects directly in arrays rather than storing pointers to objects, so its computational efficiency is far higher than that of pure Python code.

In the following example, we compare the speed of computing the sine of a list of values in pure Python and with the NumPy library:

import numpy as np
import math
import time

# Pure Python: loop over a list and apply math.sin element by element
start = time.time()
for i in range(10):
    list_1 = list(range(1, 10000))
    for j in range(len(list_1)):
        list_1[j] = math.sin(list_1[j])
print("Time with pure Python: {}s".format(time.time() - start))

# NumPy: vectorized sin over the whole array at once
start = time.time()
for i in range(10):
    list_1 = np.array(np.arange(1, 10000))
    list_1 = np.sin(list_1)
print("Time with NumPy: {}s".format(time.time() - start))

The running results below show that the NumPy version is significantly faster than the pure Python code:

Time with pure Python: 0.017444372177124023s
Time with NumPy: 0.001619577407836914s

2 、 OpenCV

OpenCV is a cross-platform computer vision library that runs on Linux, Windows and Mac OS. It is lightweight and efficient: it consists of a series of C functions and a small number of C++ classes, and it also provides a Python interface, implementing many general-purpose algorithms for image processing and computer vision.

The following code applies some simple filters, including mean smoothing, Gaussian blur and bilateral filtering:

import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt

img = cv.imread('h89817032p0.png')
# Mean filter (image smoothing)
kernel = np.ones((5, 5), np.float32) / 25
dst = cv.filter2D(img, -1, kernel)
# Gaussian blur
blur_1 = cv.GaussianBlur(img, (5, 5), 0)
# Bilateral filter
blur_2 = cv.bilateralFilter(img, 9, 75, 75)
# The [:, :, ::-1] slice converts OpenCV's BGR channel order to RGB for Matplotlib
plt.figure(figsize=(10, 10))
plt.subplot(221), plt.imshow(img[:, :, ::-1]), plt.title('Original')
plt.xticks([]), plt.yticks([])
plt.subplot(222), plt.imshow(dst[:, :, ::-1]), plt.title('Averaging')
plt.xticks([]), plt.yticks([])
plt.subplot(223), plt.imshow(blur_1[:, :, ::-1]), plt.title('Gaussian')
plt.xticks([]), plt.yticks([])
plt.subplot(224), plt.imshow(blur_2[:, :, ::-1]), plt.title('Bilateral')
plt.xticks([]), plt.yticks([])
plt.show()

For more OpenCV image-processing operations, you can refer to the earlier post on the basics of OpenCV image processing (transformations and denoising).

3 、 Scikit-image

Scikit-image is a SciPy-based image processing library that treats images as NumPy arrays.

For example, scikit-image can be used to change the scale of an image; it provides the rescale, resize and downscale_local_mean functions, among others.

from skimage import data, color, io
from skimage.transform import rescale, resize, downscale_local_mean
import matplotlib.pyplot as plt

image = color.rgb2gray(io.imread('h89817032p0.png'))
image_rescaled = rescale(image, 0.25, anti_aliasing=False)
image_resized = resize(image, (image.shape[0] // 4, image.shape[1] // 4),
                       anti_aliasing=True)
image_downscaled = downscale_local_mean(image, (4, 3))
plt.figure(figsize=(20, 20))
plt.subplot(221), plt.imshow(image, cmap='gray'), plt.title('Original')
plt.xticks([]), plt.yticks([])
plt.subplot(222), plt.imshow(image_rescaled, cmap='gray'), plt.title('Rescaled')
plt.xticks([]), plt.yticks([])
plt.subplot(223), plt.imshow(image_resized, cmap='gray'), plt.title('Resized')
plt.xticks([]), plt.yticks([])
plt.subplot(224), plt.imshow(image_downscaled, cmap='gray'), plt.title('Downscaled')
plt.xticks([]), plt.yticks([])
plt.show()

4 、 Python Imaging Library (PIL)

The Python Imaging Library (PIL) has become the de facto standard image-processing library for Python, because it is very powerful while its API is very easy to use.

But because PIL supports only Python 2.7 and has fallen into disrepair, a group of volunteers created a compatible fork of PIL called Pillow, which supports the latest Python 3.x and adds many new features. We can therefore skip PIL and install Pillow directly.
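Pillow is published on PyPI under the name Pillow, so a standard pip installation (shown here for convenience) is all that is needed:

$ pip install Pillow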

5 、 Pillow

Use Pillow to generate an alphabetic CAPTCHA image:

from PIL import Image, ImageDraw, ImageFont, ImageFilter
import random

# Random letter:
def rndChar():
    return chr(random.randint(65, 90))

# Random color 1 (background noise):
def rndColor():
    return (random.randint(64, 255), random.randint(64, 255), random.randint(64, 255))

# Random color 2 (text):
def rndColor2():
    return (random.randint(32, 127), random.randint(32, 127), random.randint(32, 127))

# Image size: 360 x 360
width = 60 * 6
height = 60 * 6
image = Image.new('RGB', (width, height), (255, 255, 255))
# Create Font object:
font = ImageFont.truetype('/usr/share/fonts/wps-office/simhei.ttf', 60)
# Create Draw object:
draw = ImageDraw.Draw(image)
# Fill each pixel with random noise:
for x in range(width):
    for y in range(height):
        draw.point((x, y), fill=rndColor())
# Draw the text:
for t in range(6):
    draw.text((60 * t + 10, 150), rndChar(), font=font, fill=rndColor2())
# Blur:
image = image.filter(ImageFilter.BLUR)
image.save('code.jpg', 'jpeg')

6 、 SimpleCV

SimpleCV is an open source framework for building computer vision applications. With it, you can access high-performance computer vision libraries such as OpenCV without first having to understand terms like bit depth, file format, color space, buffer management, eigenvalues or matrices. However, its support for Python 3 is very poor; running the following code under Python 3.7:

from SimpleCV import Image, Color, Display

# load an image from imgur
img = Image('http://i.imgur.com/lfAeZ4n.png')
# use a keypoint detector to find areas of interest
feats = img.findKeypoints()
# draw the list of keypoints
feats.draw(color=Color.RED)
# show the resulting image
img.show()
# apply the stuff we found to the image
output = img.applyLayers()
# save the results
output.save('juniperfeats.png')

produces the following error, so SimpleCV is not recommended under Python 3:

SyntaxError: Missing parentheses in call to 'print'. Did you mean print('unit test')?

7 、 Mahotas

Mahotas is a fast computer vision algorithm library built on top of NumPy. It currently has more than 100 image processing and computer vision functions, and is still growing.

Use Mahotas to load an image and operate on its pixels:

import numpy as np
import mahotas
import mahotas.demos
from mahotas.thresholding import soft_threshold
from matplotlib import pyplot as plt

# Load a demo image as a greyscale pixel array and crop it
f = mahotas.demos.load('lena', as_grey=True)
f = f[128:, 128:]
plt.gray()
# Show the data:
print("Fraction of zeros in original image: {0}".format(np.mean(f == 0)))
plt.imshow(f)
plt.show()
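The snippet above imports soft_threshold but never calls it. As a small follow-on sketch (continuing the variables from the code above, with an arbitrarily chosen threshold of 20), soft thresholding can be applied to the pixel array directly:

# Apply soft thresholding to the pixel values (threshold chosen arbitrarily)
fs = soft_threshold(f, 20)
print("Fraction of zeros after soft thresholding: {0}".format(np.mean(fs == 0)))
plt.imshow(fs)
plt.show()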

8 、 Ilastik

Ilastik provides users with good machine-learning-based bioimage analysis. Using machine learning algorithms, it can easily segment, classify, track and count cells or other experimental data. Most operations are interactive and require no machine learning expertise. See https://www.ilastik.org/documentation/basics/installation.html for installation instructions.

9 、 Scikit-learn

Scikit-learn is a free machine learning library for the Python programming language. It offers a variety of classification, regression and clustering algorithms, including support vector machines, random forests, gradient boosting, k-means and DBSCAN.

Use Scikit-learn to implement the KMeans algorithm:

import time
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import MiniBatchKMeans, KMeans
from sklearn.metrics.pairwise import pairwise_distances_argmin
from sklearn.datasets import make_blobs

# Generate sample data
np.random.seed(0)
batch_size = 45
centers = [[1, 1], [-1, -1], [1, -1]]
n_clusters = len(centers)
X, labels_true = make_blobs(n_samples=3000, centers=centers, cluster_std=0.7)

# Compute clustering with KMeans
k_means = KMeans(init='k-means++', n_clusters=3, n_init=10)
t0 = time.time()
k_means.fit(X)
t_batch = time.time() - t0

# Compute clustering with MiniBatchKMeans
mbk = MiniBatchKMeans(init='k-means++', n_clusters=3, batch_size=batch_size,
                      n_init=10, max_no_improvement=10, verbose=0)
t0 = time.time()
mbk.fit(X)
t_mini_batch = time.time() - t0

# Plot result
fig = plt.figure(figsize=(8, 3))
fig.subplots_adjust(left=0.02, right=0.98, bottom=0.05, top=0.9)
colors = ['#4EACC5', '#FF9C34', '#4E9A06']

# We want to have the same colors for the same cluster from the
# MiniBatchKMeans and the KMeans algorithm. Let's pair the cluster centers per
# closest one.
k_means_cluster_centers = k_means.cluster_centers_
order = pairwise_distances_argmin(k_means.cluster_centers_, mbk.cluster_centers_)
mbk_means_cluster_centers = mbk.cluster_centers_[order]
k_means_labels = pairwise_distances_argmin(X, k_means_cluster_centers)
mbk_means_labels = pairwise_distances_argmin(X, mbk_means_cluster_centers)

# KMeans
for k, col in zip(range(n_clusters), colors):
    my_members = k_means_labels == k
    cluster_center = k_means_cluster_centers[k]
    plt.plot(X[my_members, 0], X[my_members, 1], 'w', markerfacecolor=col, marker='.')
    plt.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col,
             markeredgecolor='k', markersize=6)
plt.title('KMeans')
plt.xticks(())
plt.yticks(())
plt.show()

10 、 SciPy

The SciPy library provides many user-friendly and efficient numerical routines, such as numerical integration, interpolation, optimization and linear algebra.

SciPy defines many special functions of mathematical physics, including elliptic functions, Bessel functions, the gamma function, the beta function, hypergeometric functions and parabolic cylinder functions.

from scipy import special
import matplotlib.pyplot as plt
import numpy as np

def drumhead_height(n, k, distance, angle, t):
    # Height of a vibrating drumhead mode, built from Bessel functions
    kth_zero = special.jn_zeros(n, k)[-1]
    return np.cos(t) * np.cos(n * angle) * special.jn(n, distance * kth_zero)

theta = np.r_[0:2 * np.pi:50j]
radius = np.r_[0:1:50j]
x = np.array([r * np.cos(theta) for r in radius])
y = np.array([r * np.sin(theta) for r in radius])
z = np.array([drumhead_height(1, 1, r, theta, 0.5) for r in radius])

fig = plt.figure()
ax = fig.add_axes(rect=(0, 0.05, 0.95, 0.95), projection='3d')
ax.plot_surface(x, y, z, rstride=1, cstride=1, cmap='RdBu_r', vmin=-0.5, vmax=0.5)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_xticks(np.arange(-1, 1.1, 0.5))
ax.set_yticks(np.arange(-1, 1.1, 0.5))
ax.set_zlabel('Z')
plt.show()

11 、 NLTK

NLTK is a library for building Python programs that work with natural language. It provides easy-to-use interfaces to more than 50 corpora and lexical resources such as WordNet, along with a suite of text processing libraries for classification, tokenization, stemming, tagging, parsing and semantic reasoning, plus wrappers for industrial-strength natural language processing (Natural Language Processing, NLP) libraries.

NLTK is called "a wonderful tool for teaching, and working in, computational linguistics using Python".

import nltk
from nltk.corpus import treebank

# The first use requires downloading these resources
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('maxent_ne_chunker')
nltk.download('words')
nltk.download('treebank')

sentence = "At eight o'clock on Thursday morning Arthur didn't feel very good."

# Tokenize
tokens = nltk.word_tokenize(sentence)
tagged = nltk.pos_tag(tokens)

# Identify named entities
entities = nltk.chunk.ne_chunk(tagged)

# Display a parse tree
t = treebank.parsed_sents('wsj_0001.mrg')[0]
t.draw()

12 、 spaCy

spaCy is a free open source library for advanced NLP in Python. It can be used to build applications that process large amounts of text, to build information extraction or natural language understanding systems, or to preprocess text for deep learning.

import spacy

texts = [
    "Net income was $9.4 million compared to the prior year of $2.7 million.",
    "Revenue exceeded twelve billion dollars, with a loss of $1b.",
]

nlp = spacy.load("en_core_web_sm")
for doc in nlp.pipe(texts, disable=["tok2vec", "tagger", "parser",
                                    "attribute_ruler", "lemmatizer"]):
    # Do something with the doc here
    print([(ent.text, ent.label_) for ent in doc.ents])

nlp.pipe yields Doc objects, so we can iterate over them and access the named-entity predictions:

[('$9.4 million', 'MONEY'), ('the prior year', 'DATE'), ('$2.7 million', 'MONEY')]
[('twelve billion dollars', 'MONEY'), ('1b', 'MONEY')]

13 、 LibROSA

Librosa is a Python library for music and audio analysis; it provides the building blocks needed to create music information retrieval systems.

# Beat tracking example
import librosa

# 1. Get the file path to an included audio example
filename = librosa.example('nutcracker')
# 2. Load the audio as a waveform `y`
#    Store the sampling rate as `sr`
y, sr = librosa.load(filename)
# 3. Run the default beat tracker
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
print('Estimated tempo: {:.2f} beats per minute'.format(tempo))
# 4. Convert the frame indices of beat events into timestamps
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

14 、 Pandas

Pandas is a fast, powerful, flexible and easy-to-use open source data analysis and manipulation tool. Pandas can import data from various file formats such as CSV, JSON, SQL and Microsoft Excel, and can perform a variety of operations on the data, such as merging, reshaping and selection, as well as data cleaning and processing. Pandas is widely used in academia, finance, statistics and other data analysis fields.

import matplotlib.pyplot as plt
import pandas as pd
import numpy as np

# A random-walk time series indexed by dates
ts = pd.Series(np.random.randn(1000), index=pd.date_range("1/1/2000", periods=1000))
ts = ts.cumsum()
df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index, columns=list("ABCD"))
df = df.cumsum()
df.plot()
plt.show()
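The file-import, selection and aggregation features mentioned above are just as compact. Here is a minimal sketch; the file name sales.csv and the columns month and revenue are hypothetical, used only for illustration:

import pandas as pd

# Hypothetical CSV file with "month" and "revenue" columns
df = pd.read_csv("sales.csv")
# Column selection plus a group-wise aggregation
monthly = df.groupby("month")["revenue"].sum()
print(monthly.head())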

15 、 Matplotlib

Matplotlib is a Python plotting library that provides a set of matlab-like commands and APIs and can generate publication-quality figures. Matplotlib makes plotting very simple and strikes an excellent balance between ease of use and performance.

Use Matplotlib to draw multiple graphs:

# plot_multi_curve.py
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.1, 2 * np.pi, 100)
y_1 = x
y_2 = np.square(x)
y_3 = np.log(x)
y_4 = np.sin(x)
plt.plot(x, y_1)
plt.plot(x, y_2)
plt.plot(x, y_3)
plt.plot(x, y_4)
plt.show()

For more on plotting with Matplotlib, please refer to the previous blog post, Python-Matplotlib Visualization.

16 、 Seaborn

Seaborn is a Python data visualization library that wraps Matplotlib in a higher-level API, making plotting easier. Seaborn should be seen as a complement to Matplotlib rather than a replacement.

import seaborn as sns
import matplotlib.pyplot as plt

sns.set_theme(style="ticks")
df = sns.load_dataset("penguins")
sns.pairplot(df, hue="species")
plt.show()

17 、 Orange

Orange is an open source data mining and machine learning tool that provides a series of components for data exploration, visualization, preprocessing and modeling. Orange has a beautiful, intuitive interactive user interface, well suited to exploratory data analysis and visualization for beginners; meanwhile, advanced users can also use it as a Python programming module for data manipulation and component development (a minimal scripting sketch appears below).

Orange can be installed with pip:

$ pip install orange3

After the installation is complete, run the orange-canvas command on the command line to start the Orange graphical interface:

$ orange-canvas

Once startup is complete, you can see the Orange graphical interface and perform various operations there.
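As for using Orange as a programming module, here is a minimal scripting sketch; it assumes the iris dataset bundled with Orange, and the exact evaluation API may differ slightly between Orange3 versions:

import Orange

# Load a dataset bundled with Orange
data = Orange.data.Table("iris")
# Cross-validate a logistic regression learner
learner = Orange.classification.LogisticRegressionLearner()
results = Orange.evaluation.CrossValidation(data, [learner], k=5)
print("Classification accuracy: {:.3f}".format(Orange.evaluation.CA(results)[0]))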

18 、 PyBrain

PyBrain is the modular machine learning library of Python. Its goal is to provide flexible, easy-to-use and powerful algorithms for machine learning tasks and various predefined environments to test and compare algorithms. PyBrain is the abbreviation of Python-Based Reinforcement Learning, Artificial Intelligence and Neural Network Library.

We will use a simple example to demonstrate the use of PyBrain to build a multi-layer perceptron (Multi Layer Perceptron, MLP).

First, we create a new feedforward network object:

from pybrain.structure import FeedForwardNetwork

n = FeedForwardNetwork()

Next, build the input, hidden and output layers:

from pybrain.structure import LinearLayer, SigmoidLayer

inLayer = LinearLayer(2)
hiddenLayer = SigmoidLayer(3)
outLayer = LinearLayer(1)

In order to use the layers you build, you must add them to the network:

n.addInputModule(inLayer)
n.addModule(hiddenLayer)
n.addOutputModule(outLayer)

Multiple input and output modules can be added. In order to compute forward propagation and backward error propagation, the network must know which layers are inputs and which are outputs.

This requires specifying explicitly how they should be connected. To do so, we use the most common connection type, the fully connected layer, implemented by the FullConnection class:

from pybrain.structure import FullConnection

in_to_hidden = FullConnection(inLayer, hiddenLayer)
hidden_to_out = FullConnection(hiddenLayer, outLayer)

Like layers, we must explicitly add them to the network:

n.addConnection(in_to_hidden)
n.addConnection(hidden_to_out)

All the pieces are now in place; finally, we need to call the .sortModules() method to make the MLP usable:

n.sortModules()

This call performs some internal initialization, which is necessary before using the network.
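The network can now be used. For a quick check, activating it on a two-dimensional input propagates the values forward through the (still randomly initialized) weights and returns the output of the single output neuron:

# Forward-propagate an input through the untrained network
print(n.activate([1, 2]))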

19 、 Milk

MILK (MACHINE LEARNING TOOLKIT) is a machine learning toolkit for Python. It focuses on supervised classification and includes many classifiers, such as SVMs, k-NN, random forests and decision trees. It can also perform feature selection and combine these pieces into complete classification systems. MILK supports unsupervised learning as well, including affinity propagation and K-means clustering.

Train a classifier using MILK:

import numpy as np
import milk

features = np.random.rand(100, 10)
labels = np.zeros(100)
features[50:] += .5
labels[50:] = 1
learner = milk.defaultclassifier()
model = learner.train(features, labels)

# Now you can use the model on new examples:
example = np.random.rand(10)
print(model.apply(example))
example2 = np.random.rand(10)
example2 += .5
print(model.apply(example2))

20 、 TensorFlow

TensorFlow is an end-to-end open source machine learning platform with a comprehensive and flexible ecosystem. It can broadly be divided into TensorFlow 1.x and TensorFlow 2.x; the main difference between them is that TF1.x builds static computation graphs, while TF2.x uses Eager Mode dynamic graphs.
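As a minimal illustration of eager execution (standard TensorFlow 2.x behavior, not part of the original example), operations run immediately and return concrete values instead of building a graph to execute later in a session:

import tensorflow as tf

# In TF2.x, this runs eagerly: the result is available immediately
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
print(tf.matmul(a, b))  # prints a concrete tf.Tensor with its computed values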

Here we take TensorFlow 2.x as the example and show how to build a convolutional neural network (Convolutional Neural Network, CNN) in TensorFlow 2.x.

import tensorflow as tf
from tensorflow.keras import datasets, layers, models

# Data loading
(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
# Data preprocessing
train_images, test_images = train_images / 255.0, test_images / 255.0

# Model construction
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))

# Model compilation and training
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
history = model.fit(train_images, train_labels, epochs=10,
                    validation_data=(test_images, test_labels))
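After training, the accuracy on the held-out test split can be checked with a standard Keras call (a small addition not in the original snippet):

# Evaluate the trained model on the test data
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print("Test accuracy: {:.3f}".format(test_acc))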

For more examples of Tensorflow2.x, refer to the column Tensorflow.

21 、 PyTorch

PyTorch's predecessor is Torch; its underlying layer is the same as the Torch framework, but much of it has been rewritten in Python. It is not only more flexible, supporting dynamic graphs, but also provides a Python interface.

# Import libraries
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor, Lambda, Compose
import matplotlib.pyplot as plt

# Device selection
device = "cuda" if torch.cuda.is_available() else "cpu"
print("Using {} device".format(device))

# Define model
class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28 * 28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10),
            nn.ReLU()
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

model = NeuralNetwork().to(device)

# Loss function and optimizer
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# Model training
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    for batch, (X, y) in enumerate(dataloader):
        X, y = X.to(device), y.to(device)
        # Compute prediction error
        pred = model(X)
        loss = loss_fn(pred, y)
        # Backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if batch % 100 == 0:
            loss, current = loss.item(), batch * len(X)
            print(f"loss: {loss:>7f}  [{current:>5d}/{size:>5d}]")

22 、 Theano

Theano is a Python library, built on top of NumPy, that allows you to define, optimize and efficiently evaluate mathematical expressions involving multidimensional arrays.

Calculate the Jacobian matrix in Theano:

import theano
import theano.tensor as T

x = T.dvector('x')
y = x ** 2
# Build the Jacobian row by row with scan
J, updates = theano.scan(lambda i, y, x: T.grad(y[i], x),
                         sequences=T.arange(y.shape[0]),
                         non_sequences=[y, x])
f = theano.function([x], J, updates=updates)
f([4, 4])

For the input [4, 4] this returns the Jacobian [[8., 0.], [0., 8.]], since the derivative of x ** 2 is 2x.

23 、 Keras

Keras is a high-level neural network API written in Python that can run with TensorFlow, CNTK or Theano as its backend. Keras's development focuses on supporting rapid experimentation: turning ideas into results with minimal delay.

from keras.models import Sequential
from keras.layers import Dense

# Model construction
model = Sequential()
model.add(Dense(units=64, activation='relu', input_dim=100))
model.add(Dense(units=10, activation='softmax'))

# Model compilation and training
# (x_train and y_train are assumed to be prepared NumPy arrays)
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, batch_size=32)

24 、 Caffe

The official Caffe2 website states that Caffe2 is now part of PyTorch: while the Caffe2 APIs will continue to work, users are encouraged to use the PyTorch APIs instead.

25 、 MXNet

MXNet is a deep learning framework designed for efficiency and flexibility. It allows mixed symbolic programming and imperative programming to maximize efficiency and productivity.

Use MXNet to build a handwritten digit recognition model:

import mxnet as mx
from mxnet import gluon
from mxnet.gluon import nn
from mxnet import autograd as ag
import mxnet.ndarray as F

# Data loading
mnist = mx.test_utils.get_mnist()
batch_size = 100
train_data = mx.io.NDArrayIter(mnist['train_data'], mnist['train_label'],
                               batch_size, shuffle=True)
val_data = mx.io.NDArrayIter(mnist['test_data'], mnist['test_label'], batch_size)

# CNN model
class Net(gluon.Block):
    def __init__(self, **kwargs):
        super(Net, self).__init__(**kwargs)
        self.conv1 = nn.Conv2D(20, kernel_size=(5, 5))
        self.pool1 = nn.MaxPool2D(pool_size=(2, 2), strides=(2, 2))
        self.conv2 = nn.Conv2D(50, kernel_size=(5, 5))
        self.pool2 = nn.MaxPool2D(pool_size=(2, 2), strides=(2, 2))
        self.fc1 = nn.Dense(500)
        self.fc2 = nn.Dense(10)

    def forward(self, x):
        x = self.pool1(F.tanh(self.conv1(x)))
        x = self.pool2(F.tanh(self.conv2(x)))
        # 0 means copy over size from corresponding dimension.
        # -1 means infer size from the rest of dimensions.
        x = x.reshape((0, -1))
        x = F.tanh(self.fc1(x))
        x = F.tanh(self.fc2(x))
        return x

net = Net()

# Initialization and optimizer definition
# Set the context on GPU if available, otherwise CPU
ctx = [mx.gpu() if mx.test_utils.list_gpus() else mx.cpu()]
net.initialize(mx.init.Xavier(magnitude=2.24), ctx=ctx)
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.03})

# Model training
# Use Accuracy as the evaluation metric.
metric = mx.metric.Accuracy()
softmax_cross_entropy_loss = gluon.loss.SoftmaxCrossEntropyLoss()
epoch = 10
for i in range(epoch):
    # Reset the train data iterator.
    train_data.reset()
    for batch in train_data:
        data = gluon.utils.split_and_load(batch.data[0], ctx_list=ctx, batch_axis=0)
        label = gluon.utils.split_and_load(batch.label[0], ctx_list=ctx, batch_axis=0)
        outputs = []
        # Inside training scope
        with ag.record():
            for x, y in zip(data, label):
                z = net(x)
                # Computes softmax cross entropy loss.
                loss = softmax_cross_entropy_loss(z, y)
                # Backpropagate the error for one iteration.
                loss.backward()
                outputs.append(z)
        metric.update(label, outputs)
        trainer.step(batch.data[0].shape[0])
    # Gets the evaluation result.
    name, acc = metric.get()
    # Reset evaluation result to initial state.
    metric.reset()
    print('training acc at epoch %d: %s=%f' % (i, name, acc))

26 、 PaddlePaddle

PaddlePaddle is built on Baidu's years of deep learning research and business applications. It integrates deep learning training and inference frameworks, a basic model library, end-to-end development kits and rich tool components. It is China's first self-developed, fully functional, open source industrial deep learning platform.

Use PaddlePaddle to implement LeNet-5:

# Import required packages
import paddle
import numpy as np
from paddle.nn import Conv2D, MaxPool2D, Linear
import paddle.nn.functional as F

# Define the LeNet network structure
class LeNet(paddle.nn.Layer):
    def __init__(self, num_classes=1):
        super(LeNet, self).__init__()
        # Create the convolution and pooling layers
        # First convolution layer
        self.conv1 = Conv2D(in_channels=1, out_channels=6, kernel_size=5)
        self.max_pool1 = MaxPool2D(kernel_size=2, stride=2)
        # Size logic: pooling does not change the number of channels, which stays 6
        # Second convolution layer
        self.conv2 = Conv2D(in_channels=6, out_channels=16, kernel_size=5)
        self.max_pool2 = MaxPool2D(kernel_size=2, stride=2)
        # Third convolution layer
        self.conv3 = Conv2D(in_channels=16, out_channels=120, kernel_size=4)
        # Size logic: the input is flattened from [B, C, H, W] -> [B, C*H*W];
        # for a [28, 28] input, C*H*W equals 120 after three convolutions
        # and two poolings
        self.fc1 = Linear(in_features=120, out_features=64)
        # The second fully connected layer outputs one neuron per label class
        self.fc2 = Linear(in_features=64, out_features=num_classes)

    # Forward pass of the network
    def forward(self, x):
        # Each convolution layer uses the Sigmoid activation function,
        # followed by 2x2 pooling
        x = self.conv1(x)
        x = F.sigmoid(x)
        x = self.max_pool1(x)
        x = F.sigmoid(x)
        x = self.conv2(x)
        x = self.max_pool2(x)
        x = self.conv3(x)
        # Size logic: flatten the data from [B, C, H, W] -> [B, C*H*W]
        x = paddle.reshape(x, [x.shape[0], -1])
        x = self.fc1(x)
        x = F.sigmoid(x)
        x = self.fc2(x)
        return x

27 、 CNTK

CNTK (Cognitive Toolkit) is a deep learning toolkit that describes neural networks as a series of computational steps via a directed graph. In this directed graph, leaf nodes represent input values or network parameters, while other nodes represent matrix operations on their inputs. CNTK makes it easy to implement and combine popular model types such as CNNs.

CNTK uses a network description language (network description language, NDL) to describe a neural network. Simply put, NDL describes the input features, the input labels, some parameters, the computational relationships between the parameters and the inputs, and what the target nodes are.

NDLNetworkBuilder = [
    run = ndlLR

    ndlLR = [
        # sample and label dimensions
        SDim = $dimension$
        LDim = 1

        features = Input(SDim, 1)
        labels = Input(LDim, 1)

        # parameters to learn
        B0 = Parameter(4)
        W0 = Parameter(4, SDim)
        B = Parameter(LDim)
        W = Parameter(LDim, 4)

        # operations
        t0 = Times(W0, features)
        Z0 = Plus(t0, B0)
        S0 = Sigmoid(Z0)

        t = Times(W, S0)
        z = Plus(t, B)
        s = Sigmoid(z)

        LR = Logistic(labels, s)
        EP = SquareError(labels, s)

        # root nodes
        FeatureNodes = (features)
        LabelNodes = (labels)
        CriteriaNodes = (LR)
        EvalNodes = (EP)
        OutputNodes = (s, t, z, S0, W0)
    ]
]

Summary and classification

A summary of Python's common machine learning and deep learning libraries:

NumPy (http://www.numpy.org/): provides support for large multi-dimensional arrays. NumPy is a key library for computer vision, because images can be represented as multi-dimensional arrays, and representing images as NumPy arrays has many advantages.
OpenCV (https://opencv.org/): open source computer vision library.
Scikit-image (https://scikit-image.org/): a collection of image processing algorithms; images operated on by scikit-image must be NumPy arrays.
Python Imaging Library (PIL) (http://www.pythonware.com/products/pil/): image processing library providing powerful image processing and graphics capabilities.
SimpleCV (http://simplecv.org/): computer vision framework providing key functionality for image processing.
Pillow (https://pillow.readthedocs.io/): a fork of PIL.
Mahotas (https://mahotas.readthedocs.io/): provides a set of functions for image processing and computer vision; originally designed for bioimage informatics, it now plays an important role in other fields as well, and is built entirely on NumPy arrays as its data type.
Ilastik (http://ilastik.org/): user-friendly, simple tool for interactive image segmentation, classification and analysis.
Scikit-learn (http://scikit-learn.org/): machine learning library with various classification, regression and clustering algorithms.
SciPy (https://www.scipy.org/): library for scientific and technical computing.
NLTK (https://www.nltk.org/): libraries and programs for processing natural language data.
spaCy (https://spacy.io/): open source library for advanced natural language processing in Python.
LibROSA (https://librosa.github.io/librosa/): library for music and audio processing.
Pandas (https://pandas.pydata.org/): library built on NumPy providing high-level data computation tools and easy-to-use data structures.
Matplotlib (https://matplotlib.org): plotting library providing a complete set of matlab-like commands and APIs; can generate publication-quality figures.
Seaborn (https://seaborn.pydata.org/): plotting library based on Matplotlib.
Orange (https://orange.biolab.si/): open source machine learning and data visualization toolkit.
PyBrain (http://pybrain.org/): machine learning library for beginners and experts alike, providing up-to-date, easy-to-use machine learning algorithms.
Milk (http://luispedro.org/software/milk/): machine learning toolkit, mainly for multi-class classification problems in supervised learning.
TensorFlow (https://www.tensorflow.org/): open source machine learning and deep learning library.
PyTorch (https://pytorch.org/): open source machine learning and deep learning library.
Theano (http://deeplearning.net/software/theano/): library for fast definition, evaluation and computation of mathematical expressions; compiles to run on both CPU and GPU architectures.
Keras (https://keras.io/): high-level deep learning library that runs on top of TensorFlow, CNTK, Theano or the Microsoft Cognitive Toolkit.
Caffe2 (https://caffe2.ai/): deep learning framework offering expressiveness, speed and modularity; an experimental refactoring of Caffe that allows computation to be organized in a more flexible way.
MXNet (https://mxnet.apache.org/): efficient and flexible deep learning framework that allows mixing symbolic and imperative programming.
PaddlePaddle (https://www.paddlepaddle.org.cn): built on Baidu's years of deep learning research and business applications; integrates deep learning training and inference frameworks, a basic model library, end-to-end development kits and rich tool components.
CNTK (https://cntk.ai/): deep learning toolkit that describes neural networks as a series of computational steps via a directed graph, in which leaf nodes represent input values or network parameters and other nodes represent matrix operations on their inputs.

These libraries can be classified according to their primary purpose:

Image processing: NumPy, OpenCV, scikit-image, PIL, Pillow, SimpleCV, Mahotas, Ilastik
Text processing: NLTK, spaCy, NumPy, scikit-learn, PyTorch
Audio processing: LibROSA
Machine learning: pandas, scikit-learn, Orange, PyBrain, Milk
Data visualization: Matplotlib, Seaborn, scikit-learn, Orange
Deep learning: TensorFlow, PyTorch, Theano, Keras, Caffe2, MXNet, PaddlePaddle, CNTK
Scientific computing: SciPy

This concludes the introduction to the common machine learning and deep learning libraries of Python. Thank you for reading. If you want to learn more about the industry, you can follow the site, where the editors will keep producing high-quality, practical articles.
