CN110837885B  Sigmoid function fitting method based on probability distribution  Google Patents
 Publication number: CN110837885B (application CN201910957062.4A)
 Authority
 CN
 China
 Prior art keywords
 function
 fitting
 linear
 sigmoid function
 sigmoid
 Prior art date
 Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
 Active
Classifications

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
 G06N3/00—Computer systems based on biological models
 G06N3/02—Computer systems based on biological models using neural network models
 G06N3/04—Architectures, e.g. interconnection topology
 G06N3/0481—Nonlinear activation functions, e.g. sigmoids, thresholds

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
 G06N3/00—Computer systems based on biological models
 G06N3/02—Computer systems based on biological models using neural network models
 G06N3/04—Architectures, e.g. interconnection topology
 G06N3/0472—Architectures, e.g. interconnection topology using probabilistic elements, e.g. prams, stochastic processors
Abstract
The invention relates to the field of artificial-intelligence neural networks, in particular to a Sigmoid function fitting method based on probability distribution. The Sigmoid function is applied as an activation function of a neural network and divided into three fixed regions according to its second derivative; the Sigmoid function value of the approximately constant region is fixed to 0 or 1, and the other two fixed regions are divided into several fitting-function subsegments, where the smaller the second-order derivative at a point, the larger the interval containing it. The invention divides the Sigmoid function into three fixed regions, and each layer of neurons adopts a different piecewise linear function according to the probability distribution of that layer's output values over the three regions, so that more of the limited hardware resources can be spent on the regions with higher probability.
Description
Technical Field
The invention relates to the field of artificial-intelligence neural networks, in particular to a probability-distribution-based Sigmoid function fitting method and its hardware implementation.
Background
Artificial neural networks are computational models that work like biological neural networks. Because of their inherently parallel structure, artificial neural networks are increasingly implemented in hardware to increase execution speed.
The basic unit of an artificial neural network is a neuron. A neuron comprises two operations: a multiply-accumulate operation and an activation function. The Sigmoid function is an activation function widely used in neural networks. Since the Sigmoid function involves division and exponentiation, it is difficult to implement directly in hardware. To solve this problem, many fitting methods have been proposed to implement the Sigmoid function efficiently in hardware. These methods take the error between the fitting function and the Sigmoid function as their measure, and reduce hardware complexity subject to that error. However, the recognition rate of the network does not increase as this error decreases, so it is meaningful to increase the network recognition rate while also reducing hardware complexity.
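To make the hardware cost concrete, here is a minimal sketch contrasting the exact Sigmoid (a division plus an exponential) with a piecewise-linear stand-in whose slope is a power of two; the breakpoints and the slope are hypothetical illustration values, not the patent's fitted parameters.

```python
import math

def sigmoid(x: float) -> float:
    """Exact Sigmoid: needs a division and an exponential, costly in hardware."""
    return 1.0 / (1.0 + math.exp(-x))

def pwl_sigmoid(x: float) -> float:
    """Illustrative 3-piece linear stand-in (breakpoints are hypothetical):
    hardware needs only a comparison, a shift and an add."""
    if x < -4.0:
        return 0.0
    if x > 4.0:
        return 1.0
    return 0.125 * x + 0.5  # slope 2^-3: a single right shift in fixed point

for x in (-6.0, 0.0, 1.0, 6.0):
    print(round(sigmoid(x), 3), round(pwl_sigmoid(x), 3))
```

The closer the linear pieces track the curve where inputs actually land, the smaller the error that matters, which is the observation the fitting method below builds on.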
Disclosure of Invention
In order to solve the defects in the prior art, the invention provides a Sigmoid function fitting method based on probability distribution, which reduces the hardware complexity of a Sigmoid function and improves the recognition rate of a neural network.
In order to achieve the above purpose, the invention adopts the following technical scheme: a Sigmoid function fitting method based on probability distribution applies a Sigmoid function to an activation function of a neural network, and comprises the following steps:
the method comprises the following steps: according to the second derivative of the Sigmoid function, dividing the Sigmoid function into three fixed areas, namely an approximate linear area, a saturated area and an approximate constant area;
Step two: the Sigmoid function value of the approximately constant region is fixed to 0 or 1; the other two fixed regions are divided into several fitting-function subsegments, whose number varies with the statistical probability of each layer's neuron output values; the subsegment intervals differ in size according to the absolute value of the second derivative at points within each subinterval: the larger the second derivative, the smaller the interval containing the point, and the smaller the second derivative, the larger the interval;
step three: setting the slope of the linear function of the subsegments of the fitting function to 2^{n}Changing n and linear function bias b, and determining a linear function of the subsegments of the fitting function according to the minimum value of the maximum absolute error of the obtained new function and the original function to obtain a fitted piecewise linear function;
Step four: obtaining, from the fitted piecewise linear function, the hardware circuit for probability-distribution-based Sigmoid function fitting.
In step one, according to the second derivative of the Sigmoid function f(x) = 1/(1 + e^(-x)), the Sigmoid function is divided into three fixed regions, namely an approximately linear region, a saturation region and an approximately constant region, satisfying |f''(x_1)| > |f''(x_2)| > |f''(x_3)|, where x is the abscissa of the Sigmoid function, e is the natural constant, x_1, x_2, x_3 ∈ x, x_1 belonging to the approximately linear region, x_2 to the saturation region and x_3 to the approximately constant region.
In step two, the statistical probability of each layer's neuron output values refers to the distribution of those output values after neural-network training, and the number of subsegments of each fixed region satisfies N_i = P_i × N_total, where P_i is the statistical probability of the neuron output values in a given fixed region, N_i is the number of subsegments of the region with probability P_i, and N_total is the total number of segments.
In step three, the slope of the linear function of each fitting-function subsegment is set to 2^n, giving the new function g_i(x) = 2^n·x + b_i on [a_i, c_i]; n and b_i are determined simultaneously by

D = { min over b_i of ( max for a_i ≤ x ≤ c_i of |2^n·x + b_i − f(x)| ) : for each candidate n },  T = min D.

As n varies, the set D records, for each n, the b_i minimizing the maximum absolute error between the new function and the original function, and the set T takes the minimum of the maximum absolute errors in D to determine the optimal n and b_i, where b_i is the bias of the corresponding sub-piecewise function, a_i is the minimum abscissa of the corresponding sub-piecewise function, and c_i is the maximum abscissa of the corresponding sub-piecewise function.
According to the probability distribution over the different fixed regions, the approximately linear fit is divided into three different piecewise linear functions: when the probability of the neuron output values in the approximately linear region is 0-30%, the piecewise linear function is F_0; when it is 30-70%, F_1; and when it is 70-100%, F_2.
Each layer of neurons adopts one of the three different piecewise linear functions according to the probability distribution of that layer's neuron output values.
The approximately constant region is fixed to 0 or 1 and does not change with the probability of neuron output values in that region; N_total is 12 for the approximately linear region and the saturation region.
The fitting function performs only the inference process of the neural network in hardware; the training process is implemented in software, with the Sigmoid function as the activation function.
The hardware implementation comprises only an adder and a shifter, reducing the complexity of implementing the Sigmoid function in hardware.
The invention achieves the following beneficial effects: it divides the Sigmoid function into three fixed regions, and each layer of neurons adopts a different piecewise linear function according to the probability distribution of that layer's output values over the three regions, so that more of the limited hardware resources can be spent on the regions with higher probability. The fitting functions are used for recognition of the handwritten digit dataset MNIST; they achieve a higher recognition rate in deep neural networks (DNN) than the Sigmoid function, and a higher recognition rate in convolutional neural networks (CNN) than existing fitting functions.
Drawings
FIG. 1 is a schematic diagram of the Sigmoid function, its first four derivatives (except the third), and the three fixed regions;
FIG. 2 is a hardware schematic of the fitting function.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and the protection scope of the present invention is not limited thereby.
A Sigmoid function fitting method based on probability distribution comprises the following steps:
the method comprises the following steps: dividing the Sigmoid function into three fixed areas (an approximate linear area, a saturated area and an approximate constant area) on the basis of a second derivative of the Sigmoid function;
as shown in fig. 1, the Sigmoid curve is divided into three fixed regions according to the degree of change of the second derivative of the Sigmoid function: approximately linear region, saturated region, approximately constant region
The second derivatives of the points in the three fixed regions satisfy the following formula:
f″(x_{1})>f″(x_{2})>f″(x_{3})
x_1 belongs to the approximately linear region, x_2 to the saturation region, and x_3 to the approximately constant region. The rate of change of the slope (the second derivative) is greatest for points in the approximately linear region and drops markedly in the saturation region, so the boundary between the two is the inflection point of the second derivative, i.e. a zero of the fourth derivative of the Sigmoid function. Writing s = f(x), the fourth derivative factors as

f''''(x) = s(1 − s)(1 − 2s)(1 − 12s + 12s²)

Setting this to 0 (for s ≠ 0, 1/2, 1) gives x ≈ 2.2. So the approximately linear region is 0 ≤ x < 2.2.
The boundary between the approximately constant region and the saturation region depends on the maximum error δ allowed between the approximately constant region and '1'; the boundary point x_d satisfies 1 − f(x_d) = δ, i.e. x_d = ln((1 − δ)/δ).
In the invention, δ = 0.005, and the demarcation point x_d is taken as 5. Therefore the approximately constant region is x ≥ 5, and the saturation region is 2.2 ≤ x < 5.
Step two: the approximately constant region is fixed to 0 or 1, and the number of subsegments of the other two fixed regions varies with the statistical probability of each layer's neuron output values;
The neuron output values of layer l are arranged in ascending order, y_1 ≤ y_2 ≤ … ≤ y_{N^l}, where N^l is the number of neurons in layer l.
On the output axis the fixed regions are subdivided as follows: the approximately linear region is [0.1, 0.9], and the saturation region is (0.005, 0.1) ∪ (0.9, 0.995). The probability of each fixed region is the fraction of the layer's output values falling in that region, P_i = (number of outputs in region i) / N^l.
based on the statistical probability, the number of subsections of each fixed region satisfies the following formula:
N_{i}＝P_{i}×N_{total}
P_{i}representing the statistical probability of the output value of the neuron in a certain fixed region; n is a radical of_{i}The representation corresponds to a probability of P_{i}The number of subsections of a region; n is a radical of_{total}Representing the total number of segments.
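A sketch of the allocation rule N_i = P_i × N_total: since the products are generally not integers, this assumes largest-remainder rounding so that the counts sum exactly to N_total (the rounding scheme is my assumption; the patent only states the proportionality).

```python
def allocate_segments(probs, n_total=12):
    """Split n_total subsegments across regions proportionally to their
    statistical probabilities (largest-remainder rounding is an assumption;
    the patent only states N_i = P_i * N_total)."""
    raw = [p * n_total for p in probs]
    counts = [int(r) for r in raw]
    # hand out the leftover segments to the largest fractional parts
    for i in sorted(range(len(raw)), key=lambda i: raw[i] - counts[i], reverse=True):
        if sum(counts) == n_total:
            break
        counts[i] += 1
    return counts

# e.g. 65% of outputs in the approximately linear region, 35% saturated
print(allocate_segments([0.65, 0.35]))  # -> [8, 4]
```

With N_total = 12 as in the invention, a layer whose outputs fall 65% in the approximately linear region and 35% in the saturation region would thus get 8 and 4 subsegments respectively.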
Step three: the fitting-function subsegments differ in interval size depending on the absolute value of the second derivative at points within the subinterval. Where the second derivative is larger, the interval containing the point is relatively smaller; where the second derivative is smaller, the interval containing the point is relatively larger.
Step four: setting the subpiecewise linear function slope to 2^{n}Wherein n is a positive integer; and changing n and the linear function bias b, and determining the subpiecewise linear function according to the minimum value of the maximum absolute error of the obtained function and the original function to obtain the fitted piecewise linear function.
The slope of the subpiecewise linear function of the approximate linear function is 2^{n}The following formula is satisfied:
n and b_{i}Determined simultaneously by the following formula:
As n varies, the set D determines, for each n, the b_i minimizing the maximum absolute error between the fitting function and the original function. The set T takes the minimum of the maximum absolute errors in D, determining the optimal n and b_i. According to the probability distribution over the fixed regions, the approximately linear fit is divided into three different piecewise linear functions: when the probability of the neuron output values in the approximately linear region is 0-30%, the piecewise linear function is F_0; when it is 30-70%, F_1; and when it is 70-100%, F_2. Each layer selects one of the three according to the probability distribution of that layer's output values.
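The search over n and b_i can be sketched as a small grid search. The patent writes the slope as 2^n with n a positive integer; since the Sigmoid's slope never exceeds 1/4, this sketch assumes the intended power-of-two slope is 2^(-n), i.e. a right shift. For a fixed slope k, the bias minimizing the maximum absolute error of k·x + b − f(x) over a sampled interval is the Chebyshev center −(min + max)/2 of the residual k·x − f(x); that step plays the role of the set D, and picking the best n plays the role of the set T.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fit_segment(a, c, n_candidates=range(1, 8), samples=2001):
    """On [a, c], pick slope 2^-n and bias b minimizing the maximum
    absolute error against Sigmoid (sampled grid search)."""
    xs = [a + (c - a) * i / (samples - 1) for i in range(samples)]
    best = None
    for n in n_candidates:
        k = 2.0 ** -n
        resid = [k * x - sigmoid(x) for x in xs]
        b = -(min(resid) + max(resid)) / 2.0       # set D: best b for this n
        err = max(abs(k * x + b - sigmoid(x)) for x in xs)
        if best is None or err < best[0]:          # set T: best n overall
            best = (err, n, b)
    return best  # (max_abs_error, n, b)

err, n, b = fit_segment(0.0, 1.0)
print(n, round(b, 4), round(err, 4))
```

On [0, 1] this selects n = 2 (slope 1/4) with a maximum absolute error below 0.01, consistent with using a single right shift for that subsegment.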
the approximate constant area is fixed to be 0 or 1 and does not change according to the probability change of the neuron output value in the area; n is a radical of_{total}The value is 12 when it relates to an approximately linear region and a saturation region. The values of the parameters of the piecewise function are shown in table 1:
Table 1. Parameter values of the piecewise fitting function over the different intervals
Step five: a hardware circuit for probability-distribution-based Sigmoid function fitting is provided.
The hardware circuit for probability-distribution-based Sigmoid function fitting is shown in fig. 2. In the first stage, the input-function encoder generates the addresses of the corresponding n and b_i from the input value and the layer's probability value; in the second stage, a multiplexer outputs the value shifted by n; finally, the multiplexer output is added to the corresponding b_i to obtain the neuron output value.
Using the fitting function for recognition of the handwritten digit dataset MNIST, its recognition rate in DNN is higher than that of the Sigmoid function. Table 2 compares accuracy for different DNN structures:
Table 2. Accuracy comparison of different network architectures
Because the slope of the fitting function is 2^n, the probability-distribution-based Sigmoid fit reduces the computation of the Sigmoid function in hardware compared with a general piecewise-linear fitting function; the hardware circuit needs only an adder and a shifter to realize the function.
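A fixed-point model of the datapath in fig. 2, encoder (here a table lookup), multiplexer-selected shift, then one add, shows that evaluation involves no multiplier. The segment table, the Q10 format and the bias values below are illustrative assumptions, not the patent's Table 1 parameters.

```python
Q = 10  # Q10 fixed point: value = integer / 2**Q

# (x_lo, x_hi, n, bias) per subsegment -- illustrative values, not Table 1
SEGMENTS = [
    (0.0, 2.2, 3, round(0.50 * 2**Q)),  # approximately linear region, slope 2^-3
    (2.2, 5.0, 5, round(0.84 * 2**Q)),  # saturation region, slope 2^-5
]

def fit_fixed(x: float) -> float:
    """Evaluate the fitted Sigmoid for x >= 0 with shifts and adds only."""
    if x >= 5.0:
        return 1.0                      # approximately constant region
    xi = round(x * 2**Q)                # quantize the input
    for lo, hi, n, bias in SEGMENTS:
        if lo <= x < hi:
            return ((xi >> n) + bias) / 2**Q  # one shift, one add
    raise ValueError("x must be non-negative")

print(fit_fixed(0.0))  # 0.5
print(fit_fixed(6.0))  # 1.0
```

Negative inputs can reuse the same table through the symmetry f(−x) = 1 − f(x), which is another add rather than extra stored segments.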
The above description is only a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, several modifications and variations can be made without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as the protection scope of the present invention.
Claims (8)
1. A Sigmoid function fitting method based on probability distribution, which applies a Sigmoid function as an activation function of a neural network, characterized in that it comprises the following steps:
Step one: according to the second derivative of the Sigmoid function, dividing the Sigmoid function into three fixed regions, namely an approximately linear region, a saturation region and an approximately constant region;
Step two: the Sigmoid function value of the approximately constant region is fixed to 0 or 1; the other two fixed regions are divided into several fitting-function subsegments, whose number varies with the statistical probability of each layer's neuron output values; the subsegment intervals differ in size according to the absolute value of the second derivative at points within each subinterval: the larger the second derivative, the smaller the interval containing the point, and the smaller the second derivative, the larger the interval;
step three: setting the slope of the linear function of the subsegments of the fitting function to 2^{n}Wherein n is a positive integer, changing n and the linear function bias b, determining the linear function of the fitting function subsegment according to the minimum value of the maximum absolute error of the obtained new function and the original function to obtain a fitted piecewise linear function, and setting the slope of the linear function of the fitting function subsegment to be 2^{n}New functionn and b_{i}Determined simultaneously by the following formula:
as n varies, the set D determines b for different n according to the new function and the minimum of the maximum absolute error of the original function_{i}And the set T obtains the minimum value of the maximum absolute error in the set D to determine the optimal n and b_{i}Wherein b is_{i}Function bias for corresponding subpiecewise functions, a_{i}For the minimum of the abscissa of the corresponding subpiecewise function, c_{i}Is the maximum value of the abscissa of the corresponding subpiecewise function;
Step four: obtaining, from the fitted piecewise linear function, the hardware circuit for probability-distribution-based Sigmoid function fitting.
2. The probability-distribution-based Sigmoid function fitting method of claim 1, wherein: in step one, according to the second derivative of the Sigmoid function f(x) = 1/(1 + e^(-x)), the Sigmoid function is divided into three fixed regions, namely an approximately linear region, a saturation region and an approximately constant region, satisfying |f''(x_1)| > |f''(x_2)| > |f''(x_3)|, where x is the abscissa of the Sigmoid function, e is the natural constant, x_1, x_2, x_3 ∈ x, x_1 belonging to the approximately linear region, x_2 to the saturation region and x_3 to the approximately constant region.
3. The probability-distribution-based Sigmoid function fitting method of claim 1, wherein: in step two, the statistical probability of each layer's neuron output values refers to the distribution of those output values after neural-network training, and the number of subsegments of each fixed region satisfies N_i = P_i × N_total, where P_i is the statistical probability of the neuron output values in a given fixed region, N_i is the number of subsegments of the region with probability P_i, and N_total is the total number of segments.
4. The probability-distribution-based Sigmoid function fitting method of claim 1, wherein: according to the probability distribution over the different fixed regions, the approximately linear fit is divided into three different piecewise linear functions: when the probability of the neuron output values in the approximately linear region is 0-30%, the piecewise linear function is F_0; when it is 30-70%, F_1; and when it is 70-100%, F_2.
5. The probability-distribution-based Sigmoid function fitting method of claim 1, wherein: each layer of neurons adopts one of the three different piecewise linear functions according to the probability distribution of that layer's neuron output values.
6. The probability-distribution-based Sigmoid function fitting method of claim 1, wherein: the approximately constant region is fixed to 0 or 1 and does not change with the probability of neuron output values in that region; N_total is 12 for the approximately linear region and the saturation region.
7. The probability-distribution-based Sigmoid function fitting method of claim 1, wherein: the fitting function performs only the inference process of the neural network in hardware; the training process is implemented in software, with the Sigmoid function as the activation function.
8. The probability-distribution-based Sigmoid function fitting method of claim 1, wherein: the hardware implementation comprises only an adder and a shifter, reducing the complexity of implementing the Sigmoid function in hardware.
Priority Applications (1)
Application Number  Priority Date  Filing Date  Title 

CN201910957062.4A CN110837885B (en)  2019-10-11  2019-10-11  Sigmoid function fitting method based on probability distribution
Publications (2)
Publication Number  Publication Date 

CN110837885A CN110837885A (en)  20200225 
CN110837885B true CN110837885B (en)  20210302 
Family
ID=69575364
Family Applications (1)
Application Number  Title  Priority Date  Filing Date 

CN201910957062.4A Active CN110837885B (en)  2019-10-11  2019-10-11  Sigmoid function fitting method based on probability distribution
Country Status (1)
Country  Link 

CN (1)  CN110837885B (en) 
Family Cites Families (5)
Publication number  Priority date  Publication date  Assignee  Title 

US5535148A (en) *  1993-09-23  1996-07-09  Motorola Inc.  Method and apparatus for approximating a sigmoidal response using digital circuitry
US8572144B2 (en) *  2009-03-02  2013-10-29  Analog Devices, Inc.  Signal mapping
CN104484703B (en) *  2014-12-30  2017-06-30  Hefei University of Technology  A Sigmoid function fitting hardware circuit based on a row-maze approximation algorithm
CN108537332A (en) *  2018-04-12  2018-09-14  Hefei University of Technology  A hardware-efficient Sigmoid function implementation method based on the Remez algorithm
CN108710944A (en) *  2018-04-30  2018-10-26  Nanjing University  A trainable piecewise-linear activation function generation method

2019-10-11  CN  application CN201910957062.4A, patent CN110837885B (en), status: active
Legal Events
Date  Code  Title  Description 

PB01  Publication
SE01  Entry into force of request for substantive examination
GR01  Patent grant