JP2003057531A - Method and device for phase difference detection, range finder, and image pickup device - Google Patents
- Publication number: JP2003057531A (application JP2001243917A)
- Authority
- JP
- Japan
- Prior art keywords
- pair
- phase difference
- video data
- image
- correction
- Prior art date
- Legal status
- Granted
Landscapes
- Automatic Focus Adjustment (AREA)
- Measurement Of Optical Distance (AREA)
- Focusing (AREA)
Abstract
Description
[0001]
TECHNICAL FIELD OF THE INVENTION: The present invention relates to a phase difference detection method, a phase difference detection device, a range finder, and an image pickup device.
[0002]
DESCRIPTION OF THE RELATED ART: Conventionally, when an autofocus camera focuses on a subject by the so-called passive method, a non-TTL camera first detects the subject distance using a subject image that has not passed through the taking lens and then controls the position of the taking lens according to the detected distance. A TTL camera, on the other hand, detects the amount of deviation from the in-focus state using a subject image obtained through the taking lens and then controls the position of the taking lens according to the detected deviation. The operating principle is described below with reference to FIG. 3(a).
[0003]
In the figure, a pair of lenses 1a and 1b are spaced apart by a predetermined base length b, and they form images of the object 2, via mutually different optical paths A and B, on a pair of optical sensor arrays 3a and 3b placed at the focal length f behind the lenses. The object 2 is assumed to lie at a distance L directly in front of the pair of lenses 1a and 1b.
[0004]
When the object 2 is at infinity, the centers of the images formed on the sensor arrays 3a and 3b fall on the reference positions (3a1, 3b1) corresponding to the optical axes of the lenses 1a and 1b on the arrays. When the object 2 comes closer than infinity, the images are formed at positions shifted from these reference positions by α. From the principle of triangulation, the distance L to the object 2 is L = b·f/α. Since the base length b and the focal length f are constants, the distance L can be measured by detecting the shift amount α. This is the principle of the passive ranging (so-called external-light triangulation) used in non-TTL cameras. Note that a non-TTL camera may use the shift amount α itself as the output of the range finder instead of the distance L. In the present application, this shift amount α is referred to as the phase difference.
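As a minimal numerical sketch of the relation L = b·f/α above (the numeric values chosen here are hypothetical, not taken from the patent):

```python
def distance_from_phase(b_mm: float, f_mm: float, alpha_mm: float) -> float:
    """Triangulation: distance L = b * f / alpha.

    b_mm: base length between the two lenses, f_mm: focal length,
    alpha_mm: detected image shift (phase difference) on the sensor array.
    """
    if alpha_mm <= 0.0:
        raise ValueError("alpha must be positive (alpha -> 0 means infinity)")
    return b_mm * f_mm / alpha_mm

# Example with made-up values: b = 20 mm, f = 10 mm, alpha = 0.1 mm
L = distance_from_phase(20.0, 10.0, 0.1)  # -> 2000.0 mm, i.e. 2 m
```

As the formula shows, halving the detected shift α doubles the computed distance, which is why errors in α matter most for distant objects.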
[0005]
In the case of a TTL camera, light that has passed through the taking lens (not shown) is fed to the pair of lenses 1a and 1b, and the phase difference α between the left and right image pair is detected in the same manner as above. In this case, however, the center of the image in the in-focus state is taken as the reference position on each of the sensor arrays 3a and 3b. The sign of the phase difference α therefore indicates whether the lens is in a front-focus or rear-focus state, and its absolute value indicates the degree of deviation from the in-focus state.
[0006]
In both cases, an optical system forms images of the object on a pair of optical sensor arrays, and the relative positional shift between the pair of video signals output by the arrays, that is, the phase difference, is detected by performing a correlation calculation on partial video data groups extracted from the pair of video signals (see FIG. 3(b)). Such phase difference detection is not limited to autofocus cameras; it can also be used in various distance measuring devices, focus detection devices, and the like.
[0007]
For this kind of phase difference detection, a method for reducing the deterioration of detection accuracy caused by stray light such as flare, which occurs when a strong light source such as the sun is behind the subject, is disclosed in, for example, Japanese Patent Laid-Open No. 8-166237. Specifically, three techniques are disclosed. The first corrects the pair of sensor data output by the pair of optical sensors so that there is no difference between the average values of the two sensor data. The second corrects each combination of partial video data groups extracted from the pair of sensor data so that there is no difference between the representative values of the two partial video data groups. The third corrects each data value of the pair of sensor data by replacing it with a differential approximation with respect to the data index.
[0008]
PROBLEMS TO BE SOLVED BY THE INVENTION: However, with the first technique, that is, correcting so that there is no difference between the average values of the pair of sensor data, the partial video data groups used in the correlation calculation are affected by errors in sensor data that are not included in those groups. For example, if the combination of partial video data groups that shows the highest correlation before correction is not affected by flare while other parts are, the phase difference detection accuracy after correction can become worse than before correction.
[0009]
With the second technique, that is, correcting each combination of partial video data groups extracted from the pair of sensor data so that there is no difference between their representative values, a correction amount must be calculated for each combination of partial video data groups, so the amount of processing grows as the number of combinations increases.
[0010]
With the third technique, that is, replacing each data value of the pair of sensor data with a differential approximation with respect to the data index, a complicated differential-approximation replacement process is required for every individual data value.
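The patent does not give the exact form of this differential approximation; a central-difference approximation is one common choice and is sketched here, as an assumption, to illustrate why per-datum replacement is comparatively heavy:

```python
def central_difference(data: list[float]) -> list[float]:
    """Replace each value with a finite-difference approximation of the
    derivative with respect to the data index (central difference inside,
    one-sided at the ends). This removes any uniform offset between the
    left and right data, at the cost of touching every sample."""
    n = len(data)
    out = []
    for i in range(n):
        if i == 0:
            out.append(data[1] - data[0])
        elif i == n - 1:
            out.append(data[-1] - data[-2])
        else:
            out.append((data[i + 1] - data[i - 1]) / 2.0)
    return out
```

Because every sample is rewritten, the cost scales with the full sensor length rather than with the single correction value used by the present invention.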
[0011]
An object of the present invention is to provide a phase difference detection method, a phase difference detection device, a range finder, and an image pickup device capable of suppressing, as much as possible, both complicated processing and the deterioration of detection accuracy.
[0012]
MEANS FOR SOLVING THE PROBLEMS: A first aspect of the invention is a phase difference detection method comprising: a correlation degree acquisition step of extracting, from a pair of video data strings each consisting of a plurality of video data corresponding to the outputs of a pair of optical sensor arrays on which images from a measurement object are formed, partial video data groups at different relative positions to form a plurality of combinations of a pair of partial video data groups, and obtaining a correlation degree for each combination; a correction value acquisition step of obtaining a correction value according to the data difference between the pair of partial video data groups that shows the maximum correlation degree; a correction step of correcting, based on the correction value, the data difference between at least respective parts of the pair of video data strings; and a phase difference detection step of detecting, based on the corrected video data, the phase difference between the images formed on the pair of sensor arrays. With a method comprising these steps, the data difference between the video data used for detecting the phase difference is corrected by a correction value derived from the difference between the pair of partial video data groups that shows the maximum correlation. The correction is therefore based on the difference between a pair of partial video data groups that already have a high correlation, which reduces the influence of data with an originally low correlation, avoids the cumbersome process of obtaining a correction amount for every combination of partial video data groups, and eliminates the complicated differential-approximation replacement of individual data values.
[0013]
A second aspect of the invention is a phase difference detection device comprising: a pair of sensor arrays on which images from a measurement object are formed; a correlation degree acquisition unit that extracts, from a pair of video data strings each consisting of a plurality of video data corresponding to the outputs of the pair of sensor arrays, partial video data groups at different relative positions to form a plurality of combinations of a pair of partial video data groups, and obtains a correlation degree for each combination; a correction value acquisition unit that obtains a correction value according to the data difference between the pair of partial video data groups for which the correlation degree acquisition unit finds the maximum correlation degree; a correction unit that corrects, based on the correction value, the data difference between at least respective parts of the pair of video data strings; and a phase difference detection unit that detects, based on the video data corrected by the correction unit, the phase difference between the images formed on the pair of sensor arrays. With this configuration, as with the first aspect, the correction is based on the difference between a pair of partial video data groups that already have a high correlation, which reduces the influence of data with an originally low correlation, avoids obtaining a correction amount for every combination of partial video data groups, and eliminates the complicated differential-approximation replacement of individual data values.
[0014]
A third aspect of the invention is a range finder comprising the above phase difference detection device and a distance detection unit that obtains distance data corresponding to the distance to the measurement object based on the phase difference detected by the phase difference detection device. With this configuration, in addition to the above effects, the accuracy of the distance data improves, so a decrease in ranging accuracy can be prevented.
[0015]
A fourth aspect of the invention is an image pickup device comprising the above phase difference detection device, an objective lens, an image forming unit on which the subject image passing through the objective lens is formed, and a focus control unit that performs a focusing operation between the objective lens and the image forming unit according to the phase difference obtained by the phase difference detection device. With this configuration, in addition to the above effects, the focusing accuracy of the objective lens improves, so the quality of the captured image can be improved.
[0016]
DESCRIPTION OF THE PREFERRED EMBODIMENTS: An embodiment of the present invention will now be described with reference to the example shown in the drawings. FIG. 1 shows an example in which the present invention is applied to an image pickup device. In FIG. 1, components identical to those in FIG. 3 are given the same reference numerals.
[0017]
In FIG. 1, the pair of lenses 1a and 1b form images of the object 2 on the pair of optical sensor arrays 3a and 3b, respectively, as described above. Each of the sensor arrays 3a and 3b has 162 pixels (photoelectric conversion elements) arranged in a line, and each pixel outputs an electric signal corresponding to the light amount of the image of the object 2 formed on it. The number of pixels in the pair of sensor arrays 3a and 3b can be changed as appropriate. The output unit 4 passes the outputs of the pair of sensor arrays 3a and 3b to the CPU 5. The CPU 5 processes these inputs as described below, based on various operation programs and data stored in the memory unit 6. The focus control unit 7, controlled by the CPU 5, moves the objective lens 8 in the direction of the double-headed arrow X and performs the focusing operation between the objective lens 8 and the image forming unit 9. The image forming unit 9 may be a silver halide film, or a solid-state image sensor with photoelectric conversion elements such as a so-called CCD or CMOS sensor.
[0018]
Next, the operation will be described with reference to FIGS. 1 and 2, focusing on the functions of the CPU 5. In FIG. 1, a functional block diagram is used for the CPU 5 in order to explain its functions.
[0019]
When a release switch (not shown) is operated, the pair of sensor arrays 3a and 3b start operating (step 3a). As described above, the pair of lenses 1a and 1b form images of the object 2 on the pair of sensor arrays 3a and 3b via the mutually different optical paths A and B, and the arrays output electric signals corresponding to the light amounts of the formed images.
[0020]
The A/D conversion unit 5a A/D-converts the outputs of the pair of sensor arrays 3a and 3b received through the output unit 4. The memory unit 5b stores the A/D-converted outputs as a pair of video data strings (IL, IR) in a pair of memory areas 5bL and 5bR. In this example, the A/D-converted output of the sensor array 3a is stored in the memory area 5bL and that of the sensor array 3b in the memory area 5bR. Since each of the sensor arrays 3a and 3b has 162 pixels, each video data string consists of 162 data values (IL(1) to IL(162), IR(1) to IR(162)).
[0021]
The left-right difference determination unit 5c reads the pair of video data strings (IL, IR) stored in the memory areas 5bL and 5bR (step 3b), calculates the difference between their average values or between their maximum values, and determines whether the calculated value is equal to or greater than a predetermined value A (step 3c). That is, the difference between the pair of video data strings (IL, IR) is taken as the left-right difference, and it is determined whether this left-right difference is equal to or greater than the predetermined value A.
[0022]
When the left-right difference determination unit 5c determines that the difference between the average values or the maximum values of the pair of video data strings (IL, IR) is equal to or greater than the predetermined value A (step 3c), the left-right difference correction unit 5d corrects all the data of the video data strings by a uniform correction amount so that the difference between the average values or the maximum values becomes smaller than the predetermined value A, stores the corrected data in the memory areas 5bL and 5bR, and thereby updates the pair of video data strings (IL, IR) (step 3d). The correction by the left-right difference correction unit 5d is substantially the same as the first technique already described in the related art.
[0023]
When the difference between the pair of video data strings (IL, IR) is smaller than the predetermined value A in step 3c, or after the processing of step 3d is completed, the correlation calculation unit 5e, serving as the correlation degree acquisition unit, performs the correlation calculation (step 3e). Specifically, partial video data groups (iL, iR) are extracted from the pair of video data strings (IL, IR) so that their relative positions on the sensor arrays differ, and a correlation degree is obtained for each combination of the extracted partial video data groups (iL, iR). In this example, each partial video data group contains 26 data values, and, as shown in FIG. 3(b), the partial video data group (iL) extracted from the video data string (IL) is fixed while the partial video data group (iR) extracted from the video data string (IR) is shifted one position at a time. Specifically, the correlation calculation is performed based on the following equation (1).
[0024]
[Equation 1] (the equation image is not reproduced here; S(l), l = 1 to 137, denotes the correlation value for shift l, with smaller S(l) meaning higher correlation)
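Equation (1) itself is not reproduced in this text. A sum of absolute differences is a common correlation measure that is consistent with the surrounding description (a 26-value window, shifts l = 1 to 137 over 162-value strings, minimum S(l) = maximum correlation), so the sketch below assumes that form; the position of the fixed iL window is likewise an assumption:

```python
def correlation_curve(IL, IR, window=26):
    """S(l) for l = 1 .. len(IR) - window + 1, assuming a sum-of-absolute-
    differences measure: iL is fixed, iR is shifted one position at a time.
    With 162-value strings and a 26-value window this gives l = 1..137."""
    iL = IL[:window]  # fixed partial video data group (assumed position)
    S = {}
    for l in range(1, len(IR) - window + 2):
        iR = IR[l - 1 : l - 1 + window]
        S[l] = sum(abs(a - b) for a, b in zip(iL, iR))
    return S

# The maximum correlation corresponds to the minimum of S(l) (cf. step 3f):
# x = min(S, key=S.get)
```

Note that 162 - 26 + 1 = 137, which matches the range of l stated for the correlation function S(l).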
[0025]
When the correlation calculation of step 3e is completed, the maximum correlation degree detection unit 5f detects, based on the results of equation (1) computed by the correlation calculation unit 5e, the minimum value of S(l) (hereinafter S(x), as shown in FIG. 3(b)), that is, the maximum correlation degree, and stores the detected S(x) together with the correlation function S(l) (l = an integer from 1 to 137) in the memory unit 5g (step 3f).
[0026]
When the storage in the memory unit 5g is completed, the maximum correlation degree determination unit 5h determines whether the minimum value S(x) obtained by the maximum correlation degree detection unit 5f lies between a predetermined value B and a predetermined value C inclusive, that is, whether the pair of partial video data groups is free from the influence of stray light such as flare (step 3g).
[0027]
If, in step 3g, the minimum value S(x) does not lie between the predetermined values B and C and the pair of video data strings (IL, IR) is therefore likely to be affected by stray light such as flare, the left-right difference correction amount acquisition unit 5i, serving as the correction value acquisition unit, divides the minimum value S(x) by the number of data values in a partial video data group (26) to obtain the error amount per data value, and uses this as the left-right difference correction value (step 3h).
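Steps 3g and 3h can be sketched as follows; the patent does not give the values of the thresholds B and C, so the defaults below are purely illustrative:

```python
def left_right_correction_value(S_x: float, window: int = 26,
                                B: float = 10.0, C: float = 100.0):
    """Return the per-datum left-right difference correction value, or None
    when S(x) lies within [B, C] and no stray-light correction is judged
    necessary. B and C are hypothetical threshold values."""
    if B <= S_x <= C:
        return None  # likely unaffected by flare; skip the correction
    return S_x / window  # error amount per data value
```

Dividing by the window size converts the residual of the best-matching pair of windows into an average error per data value, which is then applied uniformly in step 3i.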
[0028]
Once the left-right difference correction value is obtained, the correlation calculation unit 5j, serving as the correction unit, again extracts the partial video data groups (iL, iR) at different relative positions from the pair of video data strings (IL, IR) and corrects them with the left-right difference correction value obtained by the left-right difference correction amount acquisition unit 5i (step 3i). As an example of the correction, the sums of the data in the partial video data groups (iLx, iRx) that gave the minimum value S(x) are calculated and compared. If the sum for (iLx) is the larger, the left-right difference correction value is subtracted from each data value of the partial video data group (iL) when it is extracted; if the sum for (iLx) is smaller than the sum for (iRx), the left-right difference correction value is added to each data value of the partial video data group (iL) when it is extracted. Using the partial video data groups (iL) corrected in this way, the correlation calculation unit 5j performs the correlation calculation of equation (1) again (step 3j).
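A sketch of steps 3i and 3j, under the assumption (equation (1) is not reproduced in this text) that the correlation is a sum of absolute differences and that the fixed iL window sits at the start of IL; the correction direction follows the comparison of the sums of the best-matching windows iLx and iRx:

```python
def corrected_curve(IL, IR, x, corr, window=26):
    """Re-run the correlation after shifting every value of the fixed iL
    window by the per-datum correction value `corr`, in the direction that
    moves it toward the iR window that gave the minimum S(x)."""
    iLx = IL[:window]                    # fixed window (assumed position)
    iRx = IR[x - 1 : x - 1 + window]     # window that gave S(x)
    sign = -1.0 if sum(iLx) > sum(iRx) else 1.0
    iL = [v + sign * corr for v in iLx]  # uniform per-datum correction
    S = {}
    for l in range(1, len(IR) - window + 2):
        iR = IR[l - 1 : l - 1 + window]
        S[l] = sum(abs(a - b) for a, b in zip(iL, iR))
    return S
```

If the left-right difference really was a uniform offset caused by flare, the recomputed minimum S(x') will be smaller than S(x); the comparison in step 3k checks exactly this.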
[0029]
In this way, the pair of partial video data groups in each combination is corrected by a correction value derived from the difference between the pair of partial video data groups showing the maximum correlation, and the correlation calculation is then performed again on each corrected combination. The correction is therefore based on the difference between a pair of partial video data groups that already showed a high correlation before correction, the influence of data with an originally low correlation is reduced, the cumbersome process of obtaining a correction amount for every combination of partial video data groups is avoided, and the complicated differential-approximation replacement of individual data values is eliminated.
[0030]
When the correlation calculation of step 3j is completed, the maximum correlation degree detection unit 5k detects the minimum value of the recomputed S(l) (denoted S(x')), that is, the new maximum correlation degree, based on the results computed by the correlation calculation unit 5j. When S(x') is detected, the comparison determination unit 5l compares the magnitudes of S(x) stored in the memory unit 5g and S(x') detected by the maximum correlation degree detection unit 5k (step 3k). To explain this operation: stray light can vary in complicated ways depending on the shape of the strong light source causing it, the positional relationship between that light source and the object 2, and so on, so depending on the stray-light conditions the above correction may or may not reduce its influence. Step 3k is executed to confirm whether the correction actually reduced the influence of the stray light.
[0031]
The comparison determination unit 5l adopts the smaller of S(x) and S(x'), that is, the one with the higher correlation, as the valid data (steps 3l, 3m). Specifically, if S(x) is smaller than S(x'), the pre-correction S(x) of step 3i and its correlation function S(l) (l = an integer from 1 to 137) are read from the memory unit 5g and output to the interpolation calculation unit 5m; if S(x') is smaller than S(x), the post-correction S(x') used by the maximum correlation degree detection unit 5k and its correlation function S(l) (l = an integer from 1 to 137) are read and output to the interpolation calculation unit 5m.
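The keep-the-better-result logic of steps 3k to 3m reduces to a comparison of the two minima; a minimal sketch (how ties are broken is not spelled out in the patent, so the tie here favors the pre-correction curve as an assumption):

```python
def select_valid(S_before: dict, S_after: dict):
    """Adopt whichever correlation curve has the smaller minimum, i.e. the
    higher maximum correlation. Returns the chosen curve and the shift at
    which its minimum occurs."""
    x = min(S_before, key=S_before.get)
    x2 = min(S_after, key=S_after.get)
    if S_after[x2] < S_before[x]:
        return S_after, x2   # correction helped; use the corrected curve
    return S_before, x       # correction did not help; discard it
```

This guard is what prevents an ineffective correction from making the final phase difference worse than the uncorrected result.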
[0032]
In this way, the maximum correlation degree before correction is compared with that after correction, and if the correction has made the maximum correlation degree worse, the pre-correction value is adopted as the valid data. If the influence of complicated stray light could not be removed by the correction, the correction can thus be invalidated, preventing the phase difference detection accuracy from being degraded by an ineffective correction.
[0033]
The interpolation calculation unit 5m corrects x' or x by an interpolation method using the input minimum value (S(x') or S(x)) and the correlation function values before and after it ((S(x'-1) and S(x'+1)) or (S(x-1) and S(x+1))) (step 3n). Since this interpolation is a known technique, a detailed description is omitted.
[0034] When x′ or x has been corrected by the interpolation calculation, the phase difference detection unit 5n detects the amount of deviation of the corrected x′ or x from a reference position set on the optical sensor 3b side, that is, the phase difference (step 3o). In the case of external-light triangulation as in a non-TTL camera, the reference position corresponds to the center position of the target image at infinity in the measurement direction; in the case of a focus detection device used in a TTL camera or the like, it corresponds to the center position of the target image when the taking lens is in focus.
[0035] The focus control unit 7 controls the position of the objective lens 8 based on the phase difference detected by the phase difference detection unit 5n, and performs the focusing operation between the objective lens 8 and the image forming unit 9. In the case of a non-TTL camera, alternatively, the distance detection unit 5o may obtain distance data to the target 2 from the phase difference detected by the phase difference detection unit 5n, and the focus control unit 7 may control the position of the objective lens 8 based on that distance data to perform the focusing operation between the objective lens 8 and the image forming unit 9.
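For the non-TTL (external-light triangulation) case, converting the detected phase difference into the distance data mentioned here follows the usual triangulation relation. A minimal sketch; the baseline, focal length, and sensor pitch values are illustrative assumptions, not taken from the patent:

```python
def distance_from_phase(phase_px, baseline_mm=50.0, focal_mm=10.0, pitch_mm=0.01):
    """Distance to the target by external-light triangulation.

    phase_px: phase difference in sensor elements, measured from the
              infinity reference position (larger shift => nearer target).
    Returns the distance in mm, or None when there is no measurable
    parallax (target effectively at infinity).
    """
    shift_mm = phase_px * pitch_mm          # shift on the sensor array
    if shift_mm <= 0.0:
        return None
    return baseline_mm * focal_mm / shift_mm
```

With these example parameters, a 10-element phase difference corresponds to a target roughly 5 m away.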
[0036] In the above description, the correlation calculation is performed by fixing one partial video data group (iL) and shifting the other partial video data group (iR) one element at a time; however, the correlation calculation method is not limited to this and may be changed as appropriate. For example, as disclosed in Japanese Patent Application Laid-Open No. 8-166237 cited as prior art, both partial video data groups may be shifted sequentially so that their relative positions differ.
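The scheme described above, fixing one partial video data group (iL) and shifting the other (iR) one element at a time, can be sketched with a sum-of-absolute-differences measure. The SAD form is an assumption for illustration; the patent only specifies a correlation function S(l) whose smaller values indicate higher correlation:

```python
def correlation_function(i_l, i_r, window=26):
    """S(l) for each shift l: one partial group (taken from i_l) is
    fixed, the other is cut from i_r and shifted one element at a time.
    With 162-element data strings and a 26-element window this yields
    162 - 26 + 1 = 137 shifts, matching S(l), l = 1 to 137 in the text.
    """
    fixed = i_l[:window]                       # fixed partial video data group
    s = []
    for l in range(len(i_r) - window + 1):
        moving = i_r[l:l + window]             # shifted partial group
        s.append(sum(abs(a - b) for a, b in zip(fixed, moving)))
    return s
```

The position of the minimum of the returned list plays the role of x (or x′ after correction) in the description above.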
[0037] Also, in the above description, the number of data items in each video data string (IL, IR) is 162 and the number of data items in each partial video data group is 26, but these values may likewise be changed as appropriate.
[0038] Furthermore, although the above description shows an example in which the present invention is applied to an image pickup apparatus, the invention is not limited to image pickup apparatuses; it can also be used, for example, in various distance measuring devices, focus detection devices, and the like.
[0039]

[Effects of the Invention] According to the present invention, the data difference between the video data used to detect the phase difference between the images formed on the pair of sensor arrays is corrected by a correction value determined from the difference between the pair of partial video data groups showing the maximum degree of correlation. The correction is therefore based on the difference between a pair of partial video data groups that already have a high degree of correlation, so the influence of data whose correlation is not high to begin with is reduced, the troublesome process of obtaining a correction amount for every combination of partial video data groups is avoided, and complicated differential-approximation replacement processing for individual data items can be eliminated.
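The benefit summarized here comes from deriving the correction value solely from the pair of partial video data groups that showed the maximum degree of correlation. A minimal sketch of that idea; using the mean level difference as the correction value is an assumption for illustration, since the patent's step 3i may define the correction differently:

```python
def correction_value(best_left, best_right):
    """Correction value from the pair of partial video data groups that
    showed the maximum degree of correlation: here, their mean level
    difference. Data outside this best-matching pair does not affect it."""
    n = len(best_left)
    return (sum(best_left) - sum(best_right)) / n

def apply_correction(i_r, c):
    """Correct one video data string (or part of it) by the offset c,
    before re-running the correlation calculation on the corrected data."""
    return [v + c for v in i_r]
```

Because only the best-matching pair contributes, a single offset is computed once rather than once per combination of partial groups, which is the simplification the paragraph above describes.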
FIG. 1 is a block circuit diagram showing an embodiment of the present invention.
FIG. 2 is a flowchart for explaining the operation of FIG. 1.
FIG. 3 is an explanatory diagram for the operation of FIG. 1.
3a Optical sensor array; 3b Optical sensor array; 5e Correlation degree acquisition unit; 5i Correction value acquisition unit; 5j Correction unit; 5n Phase difference detection unit; 5o Distance detection unit; 7 Focus control unit; 8 Objective lens; 9 Image forming unit
Claims (4)
1. A phase difference detection method comprising: a correlation degree acquisition step of extracting partial video data groups at different relative positions from a pair of video data strings, each consisting of a plurality of video data corresponding to the outputs of a pair of optical sensor arrays on which images from a measurement target are formed, to create a plurality of combinations of a pair of partial video data groups, and obtaining a degree of correlation for each combination of the pair of partial video data groups; a correction value acquisition step of obtaining a correction value according to the data difference between the pair of partial video data groups showing the maximum degree of correlation among the obtained degrees of correlation; a correction step of correcting, based on the correction value, the data difference between at least part of each of the pair of video data strings; and a phase difference detection step of detecting, based on the corrected video data, the phase difference between the images formed on the pair of sensor arrays.
2. A phase difference detection device comprising: a pair of sensor arrays on which images from a measurement target are formed; a correlation degree acquisition unit that extracts partial video data groups at different relative positions from a pair of video data strings, each consisting of a plurality of video data corresponding to the outputs of the pair of sensor arrays, creates a plurality of combinations of a pair of partial video data groups, and obtains a degree of correlation for each combination of the pair of partial video data groups; a correction value acquisition unit that obtains a correction value according to the data difference between the pair of partial video data groups for which the correlation degree acquisition unit shows the maximum degree of correlation; a correction unit that corrects, based on the correction value, the data difference between at least part of each of the pair of video data strings; and a phase difference detection unit that detects, based on the video data corrected by the correction unit, the phase difference between the images formed on the pair of sensor arrays.
3. A distance measuring device comprising: the phase difference detection device according to claim 2; and a distance detection unit that obtains distance data corresponding to the distance to the measurement target based on the phase difference detected by the phase difference detection device.
4. An image pickup apparatus comprising: the phase difference detection device according to claim 2; an objective lens; an image forming unit on which a subject image having passed through the objective lens is formed; and a focus control unit that performs a focusing operation between the objective lens and the image forming unit according to the phase difference obtained by the phase difference detection device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2001243917A JP4817552B2 (en) | 2001-08-10 | 2001-08-10 | Phase difference detection method, phase difference detection device, distance measuring device, and imaging device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2001243917A JP4817552B2 (en) | 2001-08-10 | 2001-08-10 | Phase difference detection method, phase difference detection device, distance measuring device, and imaging device |
Publications (2)
Publication Number | Publication Date |
---|---|
JP2003057531A true JP2003057531A (en) | 2003-02-26 |
JP4817552B2 JP4817552B2 (en) | 2011-11-16 |
Family
ID=19073933
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
JP2001243917A Expired - Fee Related JP4817552B2 (en) | 2001-08-10 | 2001-08-10 | Phase difference detection method, phase difference detection device, distance measuring device, and imaging device |
Country Status (1)
Country | Link |
---|---|
JP (1) | JP4817552B2 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7156524B2 (en) | 2003-07-17 | 2007-01-02 | Sanyo Electric Co., Ltd. | Projection type video display and method of adjusting the same at factory shipping |
US7209225B2 (en) | 2003-08-08 | 2007-04-24 | Casio Computer Co., Ltd. | Inclination angle detection device and inclination angle detection method |
US7370980B2 (en) | 2003-07-17 | 2008-05-13 | Sanyo Electric Co., Ltd. | Projection type video display |
CN100395654C (en) * | 2004-01-16 | 2008-06-18 | 三洋电机株式会社 | Projection type video display |
JP2012128278A (en) * | 2010-12-16 | 2012-07-05 | Canon Inc | Focus detector and control method for the same |
JP2012173309A (en) * | 2011-02-17 | 2012-09-10 | Canon Inc | Focus detection device and control method therefor |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS62163008A (en) * | 1986-01-13 | 1987-07-18 | Minolta Camera Co Ltd | Focus detector |
JPH04230715A (en) * | 1991-06-07 | 1992-08-19 | Nikon Corp | Focus detector |
JPH05264892A (en) * | 1992-03-17 | 1993-10-15 | Olympus Optical Co Ltd | Automatic focusing device |
JPH08159755A (en) * | 1994-12-12 | 1996-06-21 | Fuji Electric Co Ltd | Method for detecting phase difference between image pair |
JPH08166237A (en) * | 1994-12-15 | 1996-06-25 | Fuji Electric Co Ltd | Method of detecting phase difference between a pair of images |
JPH10122855A (en) * | 1996-10-17 | 1998-05-15 | Olympus Optical Co Ltd | Rangefinder |
JP2000193879A (en) * | 1998-12-25 | 2000-07-14 | Olympus Optical Co Ltd | Distance measuring equipment |
Also Published As
Publication number | Publication date |
---|---|
JP4817552B2 (en) | 2011-11-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5012236B2 (en) | Digital camera | |
JP5430795B2 (en) | Imaging apparatus and program | |
US9681037B2 (en) | Imaging apparatus and its control method and program | |
JP2008122835A (en) | Correlation calculation method, correlation calculation device, focus detector and imaging apparatus | |
JP2003247823A (en) | Method and device for detecting phase difference, range finder, and image pickup device | |
JP2013037166A (en) | Focus detector, and lens device and imaging device having the same | |
JP2007310043A (en) | Correlation calculation method, correlation calculation device, focus detector and imaging apparatus | |
US10999491B2 (en) | Control apparatus, image capturing apparatus, control method, and storage medium | |
JP4817552B2 (en) | Phase difference detection method, phase difference detection device, distance measuring device, and imaging device | |
WO2016080157A1 (en) | Focus control device, focus control method, focus control program, lens device, and imaging device | |
JP2003279348A (en) | Method and apparatus for detection of phase difference, distance measuring device and imaging device | |
JP5338112B2 (en) | Correlation calculation device, focus detection device, and imaging device | |
JP2009258451A (en) | Focus detection device | |
JP5338113B2 (en) | Correlation calculation device, focus detection device, and imaging device | |
JP6974599B2 (en) | Image pickup device, distance measurement method, distance measurement program and recording medium | |
JP4085720B2 (en) | Digital camera | |
JP2003090953A (en) | Method and device for phase difference detection, range finder, and imaging device | |
JP6532411B2 (en) | IMAGE PROCESSING DEVICE, IMAGING DEVICE, AND IMAGE PROCESSING PROGRAM | |
JP2001165622A (en) | Optical device | |
JP2010287986A (en) | Imaging system and imaging method | |
JP5338118B2 (en) | Correlation calculation device, focus detection device, and imaging device | |
JP2005010353A (en) | Projector | |
JP2001154225A (en) | Deflection detecting device and device with deflection detecting function | |
JP2017215500A (en) | Image processing apparatus, imaging device, image processing system, method for controlling image processing apparatus, and program | |
JP2004117296A (en) | Distance measuring apparatus and camera provided therewith |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A621 | Written request for application examination |
Free format text: JAPANESE INTERMEDIATE CODE: A621 Effective date: 20080222 |
|
A977 | Report on retrieval |
Free format text: JAPANESE INTERMEDIATE CODE: A971007 Effective date: 20100916 |
|
A131 | Notification of reasons for refusal |
Free format text: JAPANESE INTERMEDIATE CODE: A131 Effective date: 20101005 |
|
A521 | Written amendment |
Free format text: JAPANESE INTERMEDIATE CODE: A523 Effective date: 20101206 |
|
TRDD | Decision of grant or rejection written | ||
A01 | Written decision to grant a patent or to grant a registration (utility model) |
Free format text: JAPANESE INTERMEDIATE CODE: A01 Effective date: 20110823 |
|
A61 | First payment of annual fees (during grant procedure) |
Free format text: JAPANESE INTERMEDIATE CODE: A61 Effective date: 20110830 |
|
FPAY | Renewal fee payment (event date is renewal date of database) |
Free format text: PAYMENT UNTIL: 20140909 Year of fee payment: 3 |
|
R150 | Certificate of patent or registration of utility model |
Free format text: JAPANESE INTERMEDIATE CODE: R150 |
|
LAPS | Cancellation because of no payment of annual fees |