US4965443A - Focus detection apparatus using neural network means - Google Patents

Focus detection apparatus using neural network means

Info

Publication number
US4965443A
US4965443A (application US07/414,943)
Authority
US
United States
Prior art keywords
neural network
output
units
light
main part
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US07/414,943
Inventor
Masafumi Yamasaki
Toshiyuki Toyofuku
Junichi Itoh
Shinichi Kodama
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Olympus Corp
Original Assignee
Olympus Optical Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Olympus Optical Co Ltd
Assigned to OLYMPUS OPTICAL CO., LTD., 43-2, 2-CHOME, HATAGAYA, SHIBUYA-KU, TOKYO, JAPAN, A CORP. OF JAPAN. Assignors: ITOH, JUNICHI; KODAMA, SHINICHI; TOYOFUKU, TOSHIYUKI; YAMASAKI, MASAFUMI
Application granted
Publication of US4965443A
Anticipated expiration
Legal status: Expired - Lifetime

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28Systems for automatic generation of focusing signals
    • G02B7/36Systems for automatic generation of focusing signals using image sharpness techniques, e.g. image processing techniques for generating autofocus signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals


Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Focusing (AREA)
  • Automatic Focus Adjustment (AREA)
  • Image Analysis (AREA)

Abstract

An optical image transmitted through a photographing lens is incident on a light-receiving unit formed of photoelectric transducer elements arranged in a two-dimensional matrix. An output from the light-receiving unit is input to a first arithmetic logic unit, which calculates actual object brightness values, taking into consideration the aperture value of an aperture. An output from the first arithmetic logic unit is supplied to a multiplexer and a neural network. The neural network determines the main part of the object from the pattern of brightness values of the respective photoelectric transducer elements and outputs a position signal of the main part. The multiplexer selectively passes, from the outputs generated by the first arithmetic logic unit, the brightness value of the photoelectric transducer element corresponding to the main part of the object. An output from the multiplexer is supplied to a second arithmetic logic unit, which performs a focus detection calculation based only on the brightness of the main part. The photographing lens is moved along the optical axis, thereby performing a focusing operation.

Description

BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a focus detection apparatus capable of detecting the focal point of any object with reasonable accuracy.
2. Description of the Related Art
In most cameras having conventional focus detection apparatuses, the object area to be focused is fixed to approximately the center of the viewfinder, so framing cannot be chosen freely. In order to solve this problem, Japanese Patent Disclosure No. 54-59964 discloses a means capable of arbitrarily changing the direction of distance measurement, wherein the operation for changing the direction of distance measurement is synchronized with the displacement of a display member for displaying that direction. Japanese Patent Disclosure Nos. 59-107685 and 60-254968 disclose techniques for arbitrarily selecting the sampling interval of a brightness signal so as to focus on any object.
These prior arts require a cumbersome operation for selecting the focus area and are therefore not suitable for high-speed photographing of a moving object. A mechanical focus detection technique results in a complicated mechanism and in degradation of accuracy due to that mechanism.
SUMMARY OF THE INVENTION
It is an object of the present invention to provide a focus detection apparatus in which a neural network, which generates desired outputs in response to various inputs, learns the main parts of objects by using a large number of model patterns, thereby automatically focusing on the desired area for every object pattern.
The focus detection apparatus of the present invention comprises a light-receiving unit, having a plurality of photoelectric transducers arranged in a two-dimensional matrix, for outputting analog signals representing brightnesses of respective parts of an object; a converter for converting the analog signals output from the light-receiving unit into digital signals; a neural network for receiving the digital signals from the converter and determining a main part of the object; a multiplexer for selecting a signal representing the brightness value of the determined main part from the digital signals; and a device for performing focus detection on the basis of an output from the multiplexer.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of a focus detection apparatus according to a first embodiment of the present invention;
FIG. 2 is a view showing a model of a neural network used in the present invention;
FIG. 3 is a view showing a model of each unit constituting the neural network;
FIG. 4 is a view showing a logistic function;
FIG. 5 is a block diagram of a back propagation model type neural network;
FIGS. 6 to 9 are flow charts showing a simulation of the neural network of FIG. 5 on a von Neumann type computer, in which
FIG. 6 is a flow chart for calculating an output of each unit,
FIG. 7 is a flow chart for obtaining an amount of back propagation of an error,
FIG. 8 is a flow chart for calculating a coupling strength coefficient, and
FIG. 9 is a flow chart for judging a learning level;
FIG. 10 is a block diagram of a parallel processing system for realizing the neural network shown in FIG. 5;
FIG. 11 is a block diagram of an apparatus which applies a learning result;
FIG. 12 is a block diagram of a system for causing the apparatus which applies a learning result to learn;
FIG. 13 is a block diagram of an apparatus for causing the neural network of this embodiment to learn;
FIG. 14 is a view showing the layout of photoelectric transducer elements of this embodiment;
FIG. 15 is a view showing a neural network of the first embodiment;
FIG. 16 is a view showing the detailed layout of photoelectric transducer elements; and
FIGS. 17A to 17C are views showing examples of an object to be learnt by the neural network.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
A focus detection apparatus according to an embodiment of the present invention will be described with reference to the accompanying drawings. FIG. 1 is a block diagram showing the arrangement of a first embodiment. As is apparent from FIG. 1, this embodiment uses a neural network to perform focus detection. The neural network will therefore be described first, with reference to FIGS. 2 to 12, prior to the description of the embodiment.
FIG. 2 shows a model of a neural network. This model was proposed by Rumelhart et al. and is called a back propagation model (to be referred to as a BP model hereinafter). The neural network is formed of a large number of units (neurons). The units are classified into an input layer, an intermediate layer, and an output layer. The input and output layers are single layers, but the intermediate layer may consist of one or more layers. The units are connected from the input layer through the intermediate layer to the output layer, in that order, to constitute a network; there are no connections between the units within each layer. The coupling strength between the units is determined by learning. A model of each unit is shown in FIG. 3.
The principle of the learning algorithm of the BP model will be described below. The error $E_{pj}$ between an actual output value and a desirable output value is given as follows:

$$E_{pj} = 0.5\,(t_{pj} - O_{pj}^{0})^{2} \tag{1}$$

where $O_{pj}^{0}$ is the actual output value appearing at the output layer when a given pattern P is input to the input layer, and $t_{pj}$ is the desired output value (to be referred to as a teacher signal hereinafter) at that time.

In order to cause the network to learn, all the coupling strengths are changed to reduce the error $E_{pj}$.
The change in the coupling strength coefficient $W_{ji}^{k}$, for the jth unit of the Kth layer from the ith unit of the (K+1)th layer, upon input of the pattern P is defined as follows. Here K designates the layer and increases from the output layer side, which is given "0", toward the input layer side:

$$\Delta_{p} W_{ji}^{k} \propto -\,\frac{\partial E_{pj}}{\partial W_{ji}^{k}} \tag{2}$$

$$\frac{\partial E_{pj}}{\partial W_{ji}^{k}} = \frac{\partial E_{pj}}{\partial \mathrm{net}_{pj}^{k}} \cdot \frac{\partial\,\mathrm{net}_{pj}^{k}}{\partial W_{ji}^{k}}, \qquad \mathrm{net}_{pj}^{k} = \sum_{i} W_{ji}^{k}\, O_{pi}^{k+1} \tag{3}$$

When the logistic function is given as f and $O_{pj}^{k} = f(\mathrm{net}_{pj}^{k})$, equation (3) can be rearranged as follows (the logistic (sigmoid) function f is shown in FIG. 4):

$$\frac{\partial E_{pj}}{\partial W_{ji}^{k}} = -\,\delta_{pj}^{k}\, O_{pi}^{k+1} \tag{4}$$

where $\delta_{pj}^{k}$ is the amount of back propagation of the error in the Kth layer, i.e., $\delta_{pj}^{k} = -\,\partial E_{pj}/\partial \mathrm{net}_{pj}^{k}$. Therefore, equation (2) can be rearranged as follows:

$$\Delta_{p} W_{ji}^{k} = \eta\,\delta_{pj}^{k}\, O_{pi}^{k+1} \tag{5}$$

where $\eta$ is a constant.
Since the equations $E_{pj} = 0.5(t_{pj} - O_{pj}^{0})^{2}$ and $O_{pj}^{0} = f(\mathrm{net}_{pj}^{0})$ are established for the output layer, the amount of back propagation $\delta_{pj}^{0}$ of the output layer is defined as follows:

$$\delta_{pj}^{0} = (t_{pj} - O_{pj}^{0})\, f'\!\Bigl(\sum_{k} W_{jk}^{0}\, O_{pk}^{1}\Bigr) \tag{6}$$

Since there are no connections between the units within each layer, the amount of back propagation of the error of the intermediate layer is given as follows:

$$\delta_{pj}^{k} = f'(\mathrm{net}_{pj}^{k}) \sum_{m} \delta_{pm}^{k-1}\, W_{mj}^{k-1} \tag{7}$$

The $\delta$ in equation (7) is thus a recursive function. $\Delta_{p} W_{ji}^{k}$ is given by a general mathematical expression as follows:

$$\Delta_{p} W_{ji}^{k}(n+1) = \eta\,\delta_{pj}^{k}\, O_{pi}^{k+1} + \alpha\,\Delta_{p} W_{ji}^{k}(n), \qquad \Delta_{p} W_{ji}^{k}(0) = 0 \tag{8}$$

where n is the number of learning cycles. The second term on the right side of equation (8) is added to reduce variations in the error and accelerate convergence. The coupling strength coefficient can be updated from equation (8) as follows:

$$W_{ji}^{k}(n+1) = W_{ji}^{k}(n) + \Delta_{p} W_{ji}^{k}(n+1) \qquad (k = 0, 1, 2, \ldots) \tag{9}$$
If the logistic function f is defined as follows:

$$f(\mathrm{net}_{i}) = \frac{1}{1 + e^{-\mathrm{net}_{i}}} \tag{10}$$

then $f' = f(1 - f)$, so that the amounts of back propagation of the errors can be simplified as follows:

for an output unit:

$$\delta_{pj}^{0} = O_{pj}^{0}\,(1 - O_{pj}^{0})\,(t_{pj} - O_{pj}^{0}) \tag{11}$$

for an intermediate unit:

$$\delta_{pj}^{k} = O_{pj}^{k}\,(1 - O_{pj}^{k}) \sum_{m} \delta_{pm}^{k-1}\, W_{mj}^{k-1}(n+1) \tag{12}$$
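As a check (the patent states the identity without derivation), $f' = f(1 - f)$ follows directly from equation (10):

$$f'(\mathrm{net}) = \frac{e^{-\mathrm{net}}}{(1 + e^{-\mathrm{net}})^{2}} = f(\mathrm{net})\,\bigl(1 - f(\mathrm{net})\bigr)$$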
As is apparent from the above calculations, the $\Delta_{p} W_{ji}^{k}$ calculations start from the units of the output layer and shift to the units of the intermediate layer. In this manner, learning proceeds in the direction opposite to the input data processing direction.
Learning by the BP model can be performed as follows. Learning data is input, and a calculation result is output. The coupling strength coefficients are changed to reduce the error of the result (i.e., the difference between the actual output and the teacher signal). Another set of learning data is then input. This operation is repeated until $\Delta_{p} W_{ji}^{k}$ converges to zero.
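As a concrete illustration of this loop, the following is a minimal Python sketch of one possible reading of the learning cycle, using equations (8), (9), (11), and (12) and including the learning-level judgment of FIG. 9. It is not part of the patent: NumPy, all names, and the conventional input-to-output layer indexing (the patent counts layers from the output side) are our assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))           # equation (10)

def train_bp(patterns, teachers, sizes, eta=0.5, alpha=0.9, s=1e-3, max_cycles=10000):
    """BP learning sketch. sizes lists unit counts from input to output
    layer, e.g. [28, 16, 28]; W[l] couples layer l to layer l+1."""
    rng = np.random.default_rng(0)
    W = [rng.uniform(-0.5, 0.5, (sizes[l + 1], sizes[l])) for l in range(len(sizes) - 1)]
    dW = [np.zeros_like(w) for w in W]        # Delta_p W(0) = 0, as in equation (8)
    for _ in range(max_cycles):
        E = 0.0
        for x, t in zip(patterns, teachers):
            O = [np.asarray(x, dtype=float)]  # forward pass, input layer first
            for w in W:
                O.append(sigmoid(w @ O[-1]))
            E += 0.5 * np.sum((t - O[-1]) ** 2)              # equation (1)
            delta = O[-1] * (1 - O[-1]) * (t - O[-1])        # equation (11)
            for l in range(len(W) - 1, -1, -1):              # output layer first
                dW[l] = eta * np.outer(delta, O[l]) + alpha * dW[l]  # equation (8)
                W[l] = W[l] + dW[l]                                  # equation (9)
                if l > 0:   # equation (12), using the just-updated W(n+1)
                    delta = O[l] * (1 - O[l]) * (W[l].T @ delta)
        if E / len(patterns) < s:             # learning-level judgment of FIG. 9
            return W
    return W
```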
FIG. 5 shows a basic circuit arrangement of a BP model neural network. A random access memory (to be referred to as a RAM hereinafter) 1 stores the coupling strength coefficients $W_{ji}^{k}$ and has N pages for the K (1 to N) layers. A RAM 2 stores the changes $\Delta_{p} W_{ji}^{k}$ in the coupling strength coefficients when the pattern P is input to the neural network and has N pages, i.e., K = 1 to N. A RAM 3 stores the amounts of back propagation $\delta_{pj}^{k}$ of the errors and has (N+1) pages, i.e., K = 0 to N. A RAM 4 stores the output values $O_{pj}^{k}$ of all units and has (N+1) pages, i.e., K = 0 to N. A page 4a stores the output values of the output layer, a page 4b stores the output values of the intermediate layer, and a page 4c stores the input values of the input layer. An $O_{pj}^{k}$-arithmetic logic unit 5, a $\delta_{pj}^{k}$-arithmetic logic unit 6, a $\Delta_{p} W_{ji}^{k}$-arithmetic logic unit 7, and a $W_{ji}^{k}$-arithmetic logic unit 8 are connected to the RAMs 4, 3, 2, and 1, respectively. Reference numeral 9 denotes a sequence controller for controlling the overall sequence.
The learning process of the BP model shown in FIG. 5 will be described below, as a simulation of the BP model on a von Neumann type computer, with reference to the flow charts of FIGS. 6 to 9. More specifically, FIG. 6 is a flow chart for calculating the output values $O_{pj}^{k}$, FIG. 7 is a flow chart for calculating the amounts of back propagation $\delta_{pj}^{k}$ of the errors, FIG. 8 is a flow chart for calculating the coupling strength coefficients $W_{ji}^{k}$, and FIG. 9 is a flow chart for judging the learning level.
A random value is initially set in each coupling strength coefficient $W_{ji}^{k}$ in the RAM 1 in step S1. In step S2, the input values $O_{pj}^{N+1}$ (j = 1 to m) are set in the RAM 4. The output values $O_{pj}^{k}$ of the respective units are calculated by the arithmetic logic unit 5 from the input layer to the output layer in steps S3 to S9.
In steps S11 to S20 of FIG. 7, the amounts of back propagation $\delta_{pj}^{0}$ of the errors are calculated in accordance with equation (11), using the output values $O_{pj}^{0}$ and the teacher signals $t_{pj}$, by the arithmetic logic unit 6.
In steps S21 to S24 of FIG. 8, the changes $\Delta_{p} W_{ji}^{0}(1)$ in the coupling strength coefficients are calculated by the arithmetic logic unit 7 in accordance with equation (8). All initial values $\Delta_{p} W_{ji}^{k}(0)$ of the changes are zero. The coupling strength coefficients $W_{ji}^{0}(1)$ are calculated by the arithmetic logic unit 8 in accordance with equation (9) in step S25. Therefore, all the $O_{pj}^{0}$, $\delta_{pj}^{0}$, $\Delta_{p} W_{ji}^{0}(1)$, and $W_{ji}^{0}(1)$ of the output layer are obtained. These data are stored to update the initial data in the RAMs 1 to 4.
Learning of the intermediate layer is then started. Referring back to the flow chart of FIG. 7, the data $\delta_{pj}^{0}$ and $W_{ji}^{0}(1)$ obtained by the arithmetic logic unit 6 and the data $O_{pj}^{0}$ stored in the RAM 4 are used to obtain the amounts of back propagation $\delta_{pj}^{k}$ of the errors. As shown in the flow chart of FIG. 8, the changes $\Delta_{p} W_{ji}^{k}(1)$ of the coupling strength coefficients are calculated by the arithmetic logic unit 7 in accordance with equation (8), and the coupling strength coefficients $W_{ji}^{k}(1)$ are calculated by the arithmetic logic unit 8 in accordance with equation (9). The calculated data are stored to update the previous data in the RAMs 1 to 4, in the same manner as for the output layer. The above flow is sequentially repeated toward the input layer (K = N+1) to complete the first learning cycle.
The above learning cycle is repeated a plurality of times to determine the coupling strength coefficients $W_{ji}$ between the units. A network that produces the desired output value upon input of the value $O_{pj}$ representing a given pattern P can thereby be formed automatically.
FIG. 9 is a flow chart for calculating the mean square error $E_{p}$ between the actual output values $O_{pj}$ and the teacher signals $t_{pj}$. As the mean square error $E_{p}$ decreases, the actual output value approaches the desirable output value. If the error $E_{p}$ is smaller than a threshold value s relating to the degree of learning, or learning level, learning is completed. Otherwise, learning is repeated.
Learning for one input pattern P has been described above. In the focus detection apparatus, however, learning is performed to output a plurality of output patterns corresponding to a plurality of input patterns. Alternatively, learning may be performed to output one specific output pattern in response to a plurality of input patterns.
The above BP model can be realized by the von Neumann type computers used in existing industrial equipment. With this arrangement, however, high-speed operation by parallel processing, which is one of the advantages of a neural network, cannot be achieved. For this reason, the operations of FIGS. 6 to 9 are preferably parallel-processed by a plurality of computers.
FIG. 10 shows the configuration of a parallel processing system. A plurality of microprocessors P1 to Pn are connected to a host processor 11. The neural network shown in FIG. 2 is divided into local networks which are respectively handled by the microprocessors P1 to Pn. The host processor 11 controls the operation timings of the microprocessors P1 to Pn and combines the data dispersed among them to perform processing such as pattern recognition. Each of the microprocessors P1 to Pn performs the calculations for a plurality of consecutive units of the output values $O_{pj}$ shown in FIG. 5, such as $O_{p1}, O_{p2}, \ldots$ The microprocessors P1 to Pn comprise the RAMs and arithmetic logic units required to calculate the corresponding output values and to store the necessary data $\delta_{pj}$, $\Delta W_{ji}$, and $W_{ji}$. When the calculations of all the units shared by the microprocessors are completed, all the microprocessors P1 to Pn synchronously communicate with each other to update the data. The host processor 11 determines the achieved learning level and controls the operation timings of the microprocessors P1 to Pn.
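The following is a rough sketch of this unit-partitioned scheme, with a thread pool standing in for the microprocessors P1 to Pn and for the host's dispatch-and-synchronize role; the function name and the ThreadPoolExecutor choice are our assumptions, not the patent's.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_layer_outputs(W, O_prev, n_workers=4):
    """Compute one layer's outputs O_j = f(sum_i W_ji * O_i), with consecutive
    runs of units shared out among workers, mirroring how P1 to Pn each
    handle O_p1, O_p2, ... while the host dispatches and synchronizes."""
    n_units = W.shape[0]
    O = np.empty(n_units)
    chunks = np.array_split(np.arange(n_units), n_workers)

    def work(idx):
        # each "microprocessor" computes only its own slice of units
        O[idx] = 1.0 / (1.0 + np.exp(-(W[idx] @ O_prev)))

    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        list(pool.map(work, chunks))          # host: dispatch, then synchronize
    return O
```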
When processing such as pattern recognition is performed on the basis of the learning result, the calculation $O_{pj}^{k} = f(\sum_{m} W_{jm}^{k}\, O_{pm}^{k+1})$ is performed from the input layer to the output layer in FIG. 2, thereby obtaining the final output value $O_{pj}^{0}$. In this case, as shown in FIG. 11, distributed processing by a plurality of microprocessors achieves high-speed operation by parallel processing of the neural network.
The circuit shown in FIG. 5 is basically required only during learning. When the learning result is simply applied, the arrangement can be greatly simplified.
FIG. 11 shows the basic circuit arrangement in this case. Input data is supplied to an arithmetic logic unit 13 through an input device 12 (e.g., an A/D converter), and the arithmetic logic unit 13 sequentially performs the calculations $O_{pj}^{k} = f(\sum_{m} W_{jm}^{k}\, O_{pm}^{k+1})$ to obtain the output data $O_{pj}^{0}$. A coefficient memory 14 for storing the coupling strength coefficients $W_{ji}^{k}$ comprises a ROM or a programmable ROM.
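In software terms, this application-side arrangement reduces to a forward pass over fixed coefficients. A minimal sketch, with a frozen list of weight arrays standing in for the coefficient memory 14 (all names are ours):

```python
import numpy as np

def apply_learning_result(rom_W, x):
    """Forward pass only: rom_W (one weight array per layer, input to output)
    stands in for the coefficient memory 14, and the loop repeats
    O = f(W @ O) layer by layer, as the arithmetic logic unit 13 does."""
    O = np.asarray(x, dtype=float)
    for W in rom_W:
        O = 1.0 / (1.0 + np.exp(-(W @ O)))
    return O                                  # final output values O_pj^0
```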
FIG. 12 is a schematic block diagram of a learning system for manufacturing a product to which the learning result is applied. A product 16 incorporates a coefficient memory (ROM) 17 which stores the coupling strength coefficients $W_{ji}^{k}$. Reference numeral 18 denotes a learning device. The combination of the ROM 17 and the learning device 18 is basically the same as the circuit shown in FIG. 5. Once the data $W_{ji}^{k}$ have been written into the ROM 17, the product 16 is disconnected from the learning device 18.
The same learning process need not be repeated for identical products; the ROM 17 may simply be copied and used for the other identical products.
In the above description, learning by the BP model and application of the learning result are realized by simulation on an existing von Neumann type computer, because a complicated algorithm is required during learning and it is very difficult for hardware to self-organize the coupling strength coefficients of the connections between the units. However, if the coupling strength coefficients $W_{ij}$ are known and the application is limited to a machine which merely employs the learning result, the BP model shown in FIG. 2 can be constituted by hardware. This approach is useful when a high-speed operation by parallel processing is required and when the BP model is applied to low-end industrial products. It can be achieved by constituting each unit in FIG. 2 by an inverter and replacing the coupling strength coefficients $W_{ij}$ with a resistor network $R_{ij}$, which is easily done with recent LSI techniques.
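A single hardware unit of this kind can be idealized in a few lines; this is only our illustration of the resistor-network idea (conductance $1/R_{ij}$ playing the role of $W_{ij}$), not a circuit taken from the patent:

```python
def unit_output(V_in, R_row, threshold=0.0):
    """One hardwired unit: input voltages V_i summed through conductances
    1/R_ij (playing the role of the fixed W_ij), followed by an inverter
    stage as the nonlinearity. An idealization, not a circuit design."""
    net = sum(v / r for v, r in zip(V_in, R_row))
    return 1.0 if net > threshold else 0.0    # hard-limiting inverter output
```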
A focus detection apparatus which employs the neural network described above, according to a first embodiment of the present invention, will now be described with reference to FIG. 1. An aperture 19 is located in front of a photographing lens 20. An object image transmitted through the aperture 19 is incident on a light-receiving unit 21, which is formed of photoelectric transducer elements $P_{mn}$ arranged in the two-dimensional matrix shown in FIG. 14. Brightness data of the object for a given closed state of the aperture 19 are thus output from the light-receiving unit 21 in units of photoelectric transducer elements. These outputs are amplified by an amplifier 22, and the amplified signals are converted by an A/D converter 23 into digital signals as BV' values. The BV' values are supplied to an arithmetic logic unit (ALU) 24. The arithmetic logic unit 24 is a circuit for calculating the actual BV values (BV = BV' - AVo). For this purpose, the fully open aperture value AVo of the aperture 19 is input to the arithmetic logic unit 24.
The BV values output from the arithmetic logic unit 24 in units of photoelectric transducer elements are supplied to a multiplexer 27 and a neural network 25. A coefficient memory 26 is connected to the neural network 25. The multiplexer 27 selectively passes, from all the BV values, the output BV value of the photoelectric transducer element corresponding to the main part of the object, in accordance with a control signal $P_{xy}$ from the neural network 25.
An output from the multiplexer 27 is supplied to an arithmetic logic unit 28. The arithmetic logic unit 28 performs the calculations for focus detection in accordance with the hill-climbing servo scheme, using the brightness signal of the selected main part. An output from the arithmetic logic unit 28 is supplied to a driver 29. The driver 29 drives a focusing mechanism 20a on the basis of the output from the arithmetic logic unit 28, thereby moving the position of the photographing lens 20. A sequence controller 30 performs the overall control.
In this embodiment, the focal point is detected on the basis of the brightness data of only the main part of the object, not of the entire object. Selection of the main part is determined by the neural network in correspondence with the brightness pattern of the object. For this reason, satisfactory focus detection can be performed for all objects. The main part of the object may cross a plurality of photoelectric transducer elements; in this case, the average of the output BV values of the plurality of photoelectric transducer elements corresponding to the main part is calculated by the arithmetic logic unit 28, as in the sketch below.
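The signal path just described can be summarized in a short sketch under our assumptions (the network output is taken as one activation per transducer element, thresholded at 0.5 to form the multiplexer gating, and apply_learning_result is reused from the sketch above):

```python
import numpy as np

def detect_focus_brightness(bv_prime, av0, rom_W):
    """Sketch of the FIG. 1 data flow: A/D output -> ALU 24 -> network 25 ->
    multiplexer 27 -> input of ALU 28. All names here are illustrative."""
    bv = bv_prime - av0                              # ALU 24: BV = BV' - AVo
    p_xy = apply_learning_result(rom_W, bv.ravel())  # network 25: position signal
    mask = p_xy.reshape(bv.shape) > 0.5              # multiplexer 27 gating
    # ALU 28: average when the main part crosses several transducer elements
    return bv[mask].mean() if mask.any() else bv.ravel().mean()
```

The returned main-part brightness is what the arithmetic logic unit 28 would feed into its hill-climbing focus calculation to drive the focusing mechanism 20a via the driver 29.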
The basic block arrangement of the neural network 25 may be that shown in FIG. 5. However, in order to achieve high-speed learning, the parallel processing system shown in FIG. 10 should be used. When an object pattern is supplied to the network as the input, the calculation $O_{pj}^{k} = f(\sum_{m} W_{jm}^{k}\, O_{pm}^{k+1})$ is performed, and finally a signal $P_{xy}$ representing the position of the main part of the object is output. To this end, the coupling strength coefficients $W_{ji}$ of the units are learnt by the network in advance, and the coupling strength coefficients $W_{ji}$ obtained by learning are stored in the coefficient memory 26.
This learning system is shown in FIG. 13. A model pattern input unit 32 inputs various object patterns $O_{pj}^{N+1}$ to a neural network 33 and includes an A/D converter and the like. The position signals $P_{xy}$ (target values) of the main parts corresponding to the various object patterns are input to the neural network 33 as the teacher signals $t_{pj}$. The neural network 33 obtains, by learning, the coupling strength coefficients $W_{ji}$ that bring the actual outputs $O_{pj}^{0}$ into coincidence with the corresponding teacher signals $t_{pj}$. The obtained coupling strength coefficients $W_{ji}$ are stored in a coefficient memory 34. This coefficient memory 34 is incorporated in a camera as the coefficient memory 26 of FIG. 1.
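On the factory side this amounts to running the earlier train_bp sketch on (pattern, teacher) pairs and persisting the result; the one-hot encoding of $P_{xy}$, the file name, and the toy patterns below are all our assumptions:

```python
import numpy as np

def one_hot_position(m, n, rows=4, cols=7):
    """Teacher signal t_pj: the element P_mn of the 4x7 matrix (0-based here)
    marked as the main part."""
    t = np.zeros(rows * cols)
    t[m * cols + n] = 1.0
    return t

# toy stand-ins for the model patterns of FIGS. 17A to 17C
patterns = [np.random.rand(28) for _ in range(3)]
teachers = [one_hot_position(2, 2),   # P33: the eyes in the portrait
            one_hot_position(2, 4),   # P35: the bird
            one_hot_position(1, 3)]   # the tower (position assumed)
rom_W = train_bp(patterns, teachers, sizes=[28, 16, 28])  # learning device 33
np.savez("coefficient_memory.npz", *rom_W)                # "writing" ROM 34
```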
The neural network 25 therefore automatically determines the main part of an object from the object brightness distribution pattern output by the arithmetic logic unit 24 and outputs its position $P_{xy}$.
Once learning has been performed to some extent, the neural network has the property of generating an accurate output even for a pattern that was not presented during learning. It is therefore very effective for problems (e.g., designation of the main part of an object) which involve human senses and are difficult to standardize. Learning by the neural network allows self-organization of the relationship between the main parts of objects and the various types of object patterns, which could not be programmed on existing von Neumann type computers. Desirable focus control can therefore be performed. In addition, high-speed calculation can be performed by parallel processing in the neural network, so a neural network is well suited to a camera, which requires high-speed operation.
In order to achieve effective learning, learning in the neural network 25 is performed in units of rows of the photoelectric transducer elements $P_{mn}$ by independent neural networks S11, . . . , and the independent learning results are synthesized by an output layer So, as shown in FIG. 15. The neural network comprises an input layer 37, an intermediate layer 38, and an output layer 39; the principle of learning is as described above.
For illustrative convenience, the photoelectric transducer elements $P_{mn}$ form a matrix consisting of four rows and seven columns, as shown in FIG. 16. The main part position $P_{xy}$ denotes one of the photoelectric transducer element positions $P_{mn}$.
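A minimal sketch of this row-wise arrangement, under our assumptions about the layer sizes (each of the four rows of seven elements feeds its own small network S_m, and the output stage So combines the per-row results into one 28-way position signal):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def row_wise_forward(bv, row_W, out_W):
    """bv: 4x7 matrix of BV values. row_W[m]: weights of the independent
    network S_m for row m (7 inputs -> 5 hidden units here). out_W: the
    output stage So combining all per-row results into a 28-way P_xy."""
    hidden = np.concatenate([sigmoid(W @ bv[m]) for m, W in enumerate(row_W)])
    return sigmoid(out_W @ hidden)            # one activation per element

# illustrative shapes: 4 row networks with 5 hidden units each, 28 outputs
rng = np.random.default_rng(0)
row_W = [rng.uniform(-0.5, 0.5, (5, 7)) for _ in range(4)]
out_W = rng.uniform(-0.5, 0.5, (28, 4 * 5))
p_xy = row_wise_forward(rng.random((4, 7)), row_W, out_W)
```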
FIGS. 17A to 17C show different objects. In the portrait of FIG. 17A, since the eyes are to be focused, the main part of the object is represented by the photoelectric transducer element P33. In the scene of FIG. 17B, since the bird is to be focused, the main part of the object is represented by the photoelectric transducer element P35. In the scene of FIG. 17C, the tower serves as the main part of the object. These object patterns are input to the neural network 25, which learns coupling strength coefficients $W_{ji}$ such that the actual outputs coincide with the corresponding teacher signals. An accurate main part can therefore always be output automatically, regardless of the type of object. Only three object models are exemplified here; in practice, several hundred model patterns are learnt repeatedly.
According to this embodiment, as described above, the main parts of objects are learnt by the neural network from a large number of model patterns. There is therefore provided a focus detection apparatus capable of automatically focusing on the desired area for any object pattern.
The present invention is not limited to the particular embodiment described above, and various changes and modifications may be made. For example, other factors such as temperature and humidity may be input as input parameters of the neural network, in addition to the brightness values of the object.

Claims (10)

What is claimed is:
1. A focus detection apparatus comprising:
light-receiving means having a plurality of photoelectric transducer elements arranged in a two dimensional matrix;
an optical system for focusing an object image on said light-receiving means and outputting analog signals representing brightnesses of respective parts of an object;
means for converting the analog signals output from said light-receiving means into digital signals;
neural network means for receiving the digital signals output from said converting means and determining a main part of the object;
means for selecting a signal representing the brightness of the main part determined by said neural network means from the digital signals output from said converting means; and
focus detecting means for performing focus detection on the basis of an output from said selecting means.
2. An apparatus according to claim 1, in which said neural network means comprises:
an input layer having a plurality of units, connected to an output of said converting means, for respectively receiving outputs from said photoelectric transducer elements;
one or a plurality of intermediate layers having a plurality of units coupled to individual units of said input layer at predetermined coupling strengths; and
an output layer having a plurality of units coupled to individual units of said intermediate layer at predetermined coupling strengths.
3. An apparatus according to claim 2, in which said neural network means further comprises a coefficient memory for storing coefficients representing the coupling strengths of the units.
4. An apparatus according to claim 1, in which said neural network means comprises a plurality of microprocessors connected in parallel with each other to perform distributed processing.
5. An apparatus according to claim 1, further comprising means for moving said optical system along an optical axis in accordance with an output of said focus detecting means.
6. A focus detection apparatus comprising:
light-receiving means having a plurality of photoelectric transducer elements arranged in a two-dimensional matrix;
an optical system for focusing an object image on said light-receiving means;
neural network means for receiving brightnesses output from said light-receiving means and determining a position of a main part of an object, said neural network means comprising an input layer having a plurality of units respectively connected to groups of said photoelectric transducer elements, one or a plurality of intermediate layers having a plurality of units coupled to individual units of said input layer at predetermined coupling strengths, an output layer having a plurality of units coupled to individual units of said intermediate layer at predetermined coupling strengths, and
means for causing learning of each unit of said input layer; and
means for performing a focus detection calculation of the object main part.
7. A focus detection apparatus comprising:
light-receiving means having a plurality of photoelectric transducer elements;
an optical system for focusing an object image on said light-receiving means;
neural network means, connected to an output of said light-receiving means, for outputting a signal representing a position of a main part of an object; and
means for performing a focus detection on the basis of a signal selected on the basis of an output of said neural network means from outputs of said light-receiving means.
8. An apparatus according to claim 7, in which said neural network means comprises:
an input layer having a plurality of units connected to the output of said light-receiving means;
one or a plurality of intermediate layers having a plurality of units coupled to the individual units of said input layer at predetermined coupling strengths; and
an output layer having a plurality of units coupled to the individual units of said intermediate layer at predetermined coupling strengths.
9. An apparatus according to claim 8, in which said neural network means further comprises:
means for learning the coupling strengths; and
a memory for storing learning results.
10. A focus detection apparatus comprising:
a plurality of photoelectric transducer elements, arranged in a two-dimensional matrix, for detecting a brightness of an object; and
a back propagation model type neural network, having a large number of units coupled to each other at predetermined coupling strengths determined by prelearning, for receiving outputs from said plurality of photoelectric transducer elements and performing a focus calculation.
US07/414,943 1988-10-04 1989-09-29 Focus detection apparatus using neural network means Expired - Lifetime US4965443A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP63-250466 1988-10-04
JP63250466A JP2761391B2 (en) 1988-10-04 1988-10-04 Focus detection device

Publications (1)

Publication Number Publication Date
US4965443A (en) 1990-10-23

Family

ID=17208292

Family Applications (1)

Application Number Title Priority Date Filing Date
US07/414,943 Expired - Lifetime US4965443A (en) 1988-10-04 1989-09-29 Focus detection apparatus using neural network means

Country Status (2)

Country Link
US (1) US4965443A (en)
JP (1) JP2761391B2 (en)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4536248B2 (en) * 2000-11-24 2010-09-01 オリンパス株式会社 Imaging device
JP2008046221A (en) * 2006-08-11 2008-02-28 Fujifilm Corp Focus adjustment method and apparatus
JP7570806B2 (en) * 2019-12-06 2024-10-22 キヤノン株式会社 Imaging device, information processing device, control method and program thereof, and trained model selection system


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4331864A (en) * 1978-10-30 1982-05-25 Olympus Optical Company Limited Apparatus for detecting an in-focused condition of optical systems
US4314150A (en) * 1978-11-01 1982-02-02 Olympus Optical Company Limited Apparatus for detecting the in-focusing conditions
US4289959A (en) * 1978-11-15 1981-09-15 Olympus Optical Company Limited Apparatus for detecting the in-focusing conditions
US4618238A (en) * 1982-04-21 1986-10-21 Olympus Optical, Co. Ltd. Camera
US4748321A (en) * 1985-08-05 1988-05-31 Minolta Camera Kabushiki Kaisha Focus detection device with wavefront aberration correction
US4882601A (en) * 1986-05-16 1989-11-21 Minolta Camera Kabushiki Kaisha Camera with an automatic focusing device

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5666562A (en) * 1989-09-20 1997-09-09 Canon Kabushiki Kaisha Automatic focusing system
US5227830A (en) * 1989-11-27 1993-07-13 Olympus Optical Co., Ltd. Automatic camera
US5359385A (en) * 1990-03-01 1994-10-25 Minolta Camera Kabushiki Kaisha Camera having learning function
US5333125A (en) * 1990-05-25 1994-07-26 Canon Kabushiki Kaisha Optical information processing apparatus having a neural network for inducing an error signal
EP0458656A3 (en) * 1990-05-25 1992-08-05 Canon Kabushiki Kaisha Optical information processing apparatus having a "neuro network" for inducing an error signal
EP0458656A2 (en) * 1990-05-25 1991-11-27 Canon Kabushiki Kaisha Optical information processing apparatus having a "neuro network" for inducing an error signal
US5285231A (en) * 1990-11-29 1994-02-08 Minolta Camera Kabushiki Kaisha Camera having learning function
US5572278A (en) * 1990-11-29 1996-11-05 Minolta Camera Kabushiki Kaisha Camera having learning function
US5634140A (en) * 1990-11-29 1997-05-27 Minolta Camera Kabushiki Kaisha Camera having learning function
US5227835A (en) * 1990-12-21 1993-07-13 Eastman Kodak Company Teachable camera
US5331422A (en) * 1991-03-15 1994-07-19 Sharp Kabushiki Kaisha Video camera having an adaptive automatic iris control circuit
US5497196A (en) * 1991-03-15 1996-03-05 Sharp Kabushiki Kaisha Video camera having an adaptive automatic iris control circuit
US5408588A (en) * 1991-06-06 1995-04-18 Ulug; Mehmet E. Artificial neural network method and architecture
US5467428A (en) * 1991-06-06 1995-11-14 Ulug; Mehmet E. Artificial neural network method and architecture adaptive signal filtering
US5295227A (en) * 1991-07-09 1994-03-15 Fujitsu Limited Neural network learning system
US5475429A (en) * 1991-07-25 1995-12-12 Olympus Optical Co., Ltd. In-focus sensing device for sensing an in-focus condition using a ratio of frequency components at different positions
US5491776A (en) * 1991-08-05 1996-02-13 Kawasaki Steel Corporation Signal processing apparatus and learning method therefor
US5627586A (en) * 1992-04-09 1997-05-06 Olympus Optical Co., Ltd. Moving body detection device of camera
US5331176A (en) * 1992-04-10 1994-07-19 Veritec Inc. Hand held two dimensional symbol reader with a symbol illumination window
US5365302A (en) * 1992-05-01 1994-11-15 Olympus Optical Company, Ltd. Focus area setting apparatus of camera
US5608819A (en) * 1993-07-19 1997-03-04 Matsushita Electric Industrial Co., Ltd. Image processing system utilizing neural network for discrimination between text data and other image data
US5396057A (en) * 1993-09-08 1995-03-07 Northrop Grumman Corporation Method for optimum focusing of electro-optical sensors for testing purposes with a haar matrix transform
US5604529A (en) * 1994-02-02 1997-02-18 Rohm Co., Ltd. Three-dimensional vision camera
US6078410A (en) * 1995-12-28 2000-06-20 Sharp Kabushiki Kaisha Image processing apparatus
US6614480B1 (en) * 1997-11-28 2003-09-02 Oki Electric Industry Co., Ltd. Apparatus and a method for automatically focusing on a subject
US20060092307A1 (en) * 2004-10-26 2006-05-04 Canon Kabushiki Kaisha Image pickup apparatus for photographing desired area in image with high image quality and control method for controlling the apparatus
US7864228B2 (en) 2004-10-26 2011-01-04 Canon Kabushiki Kaisha Image pickup apparatus for photographing desired area in image with high image quality and control method for controlling the apparatus
US20060093205A1 (en) * 2004-10-29 2006-05-04 Bryll Robert K System and method for automatically recovering video tools in a vision system
US7454053B2 (en) 2004-10-29 2008-11-18 Mitutoyo Corporation System and method for automatically recovering video tools in a vision system
US7668388B2 (en) 2005-03-03 2010-02-23 Mitutoyo Corporation System and method for single image focus assessment
US20060204121A1 (en) * 2005-03-03 2006-09-14 Bryll Robert K System and method for single image focus assessment
US7936987B2 (en) * 2007-09-14 2011-05-03 Samsung Electronics Co., Ltd. Method and apparatus for auto focusing
US20090074393A1 (en) * 2007-09-14 2009-03-19 Samsung Electronics Co., Ltd. Method and apparatus for auto focusing
WO2020104521A3 (en) * 2018-11-20 2020-08-13 Leica Microsystems Cms Gmbh Learning autofocus
DE102018219867B4 (en) * 2018-11-20 2020-10-29 Leica Microsystems Cms Gmbh Learning autofocus
US11971529B2 (en) 2018-11-20 2024-04-30 Leica Microsystems Cms Gmbh Learning autofocus
US11659275B2 (en) 2019-07-29 2023-05-23 Canon Kabushiki Kaisha Information processing apparatus that performs arithmetic processing of neural network, and image pickup apparatus, control method, and storage medium
US11496682B2 (en) * 2019-07-29 2022-11-08 Canon Kabushiki Kaisha Information processing apparatus that performs arithmetic processing of neural network, and image pickup apparatus, control method, and storage medium
CN114424518A (en) * 2019-09-27 2022-04-29 索尼集团公司 Information processing device, electronic apparatus, terminal device, information processing system, information processing method, and program
WO2021059679A1 (en) * 2019-09-27 2021-04-01 Sony Corporation Information processing apparatus, electronic device, terminal apparatus, information processing system, information processing method, and program
US12058441B2 (en) 2019-09-27 2024-08-06 Sony Group Corporation Information processing apparatus with automatic adjustment of focal point
US20230360473A1 (en) * 2022-05-06 2023-11-09 Northernvue Corporation Game Monitoring Device
US11948425B2 (en) * 2022-05-06 2024-04-02 Northernvue Corporation Game monitoring device

Also Published As

Publication number Publication date
JPH0296707A (en) 1990-04-09
JP2761391B2 (en) 1998-06-04

Similar Documents

Publication Publication Date Title
US4965443A (en) Focus detection apparatus using neural network means
US4978990A (en) Exposure control apparatus for camera
JP4208485B2 (en) Pulse signal processing circuit, parallel processing circuit, pattern recognition device, and image input device
US5617490A (en) Camera system with neural network compensator for measuring 3-D position
US11450017B1 (en) Method and apparatus for intelligent light field 3D perception with optoelectronic computing
EP0591415A1 (en) Sparse comparison neural network
EP0475659B1 (en) Optical pattern recognition apparatus
US7991719B2 (en) Information processing method and apparatus, and image pickup device
US5592589A (en) Tree-like perceptron and a method for parallel distributed training of such perceptrons
JPH05127706A (en) Neural network type simulator
KR20190048279A (en) Image processing method and apparatus using convolution neural network
JP2629291B2 (en) Manipulator learning control method
JP2793816B2 (en) Camera with learning function
Grossman et al. Composite damage assessment employing an optical neural network processor and an embedded fiber-optic sensor array
Solgi et al. WWN-8: incremental online stereo with shape-from-X using life-long big data from multiple modalities
JP2793817B2 (en) Exposure control device
JPH05128082A (en) Data processor constituting hierarchical network and its learning processing method
Webster Artificial neural networks and their application to weapons
KR0139572B1 (en) Image distortion calibration method using nervous network
Nikravesh et al. Process control of nonlinear time variant processes via artificial neural network
JP2793815B2 (en) Exposure control device
Wen et al. Modified two-dimensional Hamming neural network and its optical implementation using Dammann gratings
JP2635443B2 (en) How to train neural networks for multi-source data integration
JP2000048187A (en) Method for image transform
JPH02173732A (en) Controller for flashing device

Legal Events

Date Code Title Description
AS Assignment

Owner name: OLYMPUS OPTICAL CO., LTD., 43-2, 2-CHOME, HATAGAYA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST.;ASSIGNORS:YAMASAKI, MASAFUMI;TOYOFUKU, TOSHIYUKI;ITOH, JUNICHI;AND OTHERS;REEL/FRAME:005147/0613

Effective date: 19890921

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FPAY Fee payment

Year of fee payment: 12