CN102814006B - Image contrast device, patient positioning device and image contrast method - Google Patents


Info

Publication number
CN102814006B
CN102814006B (application CN201210022145.2A)
Authority
CN
China
Prior art keywords
image
dimension
benchmark
posture
pattern match
Prior art date
Legal status
Active
Application number
CN201210022145.2A
Other languages
Chinese (zh)
Other versions
CN102814006A (en)
Inventor
平泽宏祐
Current Assignee
Hitachi Ltd
Original Assignee
Mitsubishi Electric Corp
Priority date
Filing date
Publication date
Application filed by Mitsubishi Electric Corp filed Critical Mitsubishi Electric Corp
Publication of CN102814006A publication Critical patent/CN102814006A/en
Application granted granted Critical
Publication of CN102814006B publication Critical patent/CN102814006B/en


Landscapes

  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)
  • Radiation-Therapy Devices (AREA)

Abstract

The present invention relates to an image contrast device, a patient positioning device, and an image contrast method. The image contrast device comprises a matching processing unit that compares a three-dimensional reference image with a three-dimensional current image and calculates a position correction amount so that the position and posture of the patient's affected part in the current image agree with the position and posture of the affected part in the reference image. The matching processing unit comprises: a first matching unit, which performs first-stage matching of the current image against the reference image; and a second matching unit, which performs second-stage matching of a predetermined template region against a predetermined search target region, wherein the template region is generated from one of the reference image and the current image based on the first-stage matching result, and the search target region is generated from the other of the two images based on the first-stage matching result.

Description

Image contrast device, patient positioning device and image contrast method
Technical field
The present invention relates to an image contrast (matching) device and a patient positioning device for a radiotherapy apparatus that treats cancer by irradiating a patient's affected part with ionizing radiation such as X-rays, gamma rays, or particle beams. The image contrast device uses CT image data and the like, and the patient positioning device uses the image contrast device to position the patient at the irradiation position where the radiation is delivered.
Background art
In recent years, among radiotherapy apparatuses intended for cancer treatment, apparatuses that use particle beams such as proton or heavy-ion beams (particularly called particle beam therapy apparatuses) have been developed and built. As is well known, compared with conventional radiotherapy using X-rays or gamma rays, particle beam therapy can irradiate a cancer lesion in a concentrated manner; that is, the particle beam can be delivered accurately in conformity with the shape of the affected part, so that treatment is possible without affecting normal cells.
In particle-beam therapeutic, particle ray is irradiated to accurately the affected parts such as cancer very important.Therefore, when carrying out particle-beam therapeutic, utilizing fixture etc. to fix patient and making to misplace relative to the treatment table of therapeutic room (exposure cell).In order to accurately the affected parts such as cancer are positioned in radiation exposure scope, utilize laser designator etc. to carry out the rough fixing setting that waits to patient, then, utilizing radioscopic image etc. accurately to locate patient's affected part.
Patent document 1 proposes a bed positioning device and a positioning method in which two-stage pattern matching is performed between a reference image of an X-ray fluoroscopic image and a current image captured by an X-ray receptor, without specifying identical multiple markers (landmarks) at the same positions in either image, and positioning information for driving the treatment table is generated. In the first-stage pattern matching, a second set region is set in the 2-D current image, the second set region being roughly the same size as a first set region that contains the isocenter (beam irradiation center) of the 2-D reference image. The second set region is moved successively within the 2-D current image; at each position of the second set region, the 2-D reference image in the first set region is compared with the 2-D current image in the second set region, and the second set region whose 2-D current image is most similar to the 2-D reference image of the first set region is extracted. In the second-stage pattern matching, the 2-D current image in the extracted second set region is compared with the 2-D reference image in the first set region, and pattern matching is performed so that the two images become most consistent.
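The first-stage raster scan described above amounts to exhaustive 2-D template matching under a similarity measure. The following is a minimal Python/NumPy sketch; the function names, the brute-force scan, and normalized cross-correlation as the measure are illustrative assumptions, not the patent's actual implementation:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-shaped patches (0 if flat)."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def match_template_2d(image, template):
    """Raster-scan `template` over `image`; return (row, col) of the best match."""
    ih, iw = image.shape
    th, tw = template.shape
    best_score, best_pos = -2.0, (0, 0)
    for y in range(ih - th + 1):          # move the set region successively
        for x in range(iw - tw + 1):
            s = ncc(image[y:y + th, x:x + tw], template)
            if s > best_score:
                best_score, best_pos = s, (y, x)
    return best_pos, best_score
```

The second stage would then repeat the comparison only in the neighbourhood of `best_pos`, with finer matching parameters.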
Prior art documents
Patent documents
Patent document 1: Japanese Patent No. 3748433 (paragraphs 0007 to 0009, paragraph 0049, Fig. 8, Fig. 9)
Technical problem to be solved by the invention
Since the shape of an affected part is a 3-D solid shape, positioning the affected part at the position determined at treatment planning can be done with higher accuracy using 3-D images than using 2-D images. Generally, when treatment planning data are created, X-ray CT (Computed Tomography) images are used to determine the 3-D shape of the affected part. In recent years, there has been a demand to install an X-ray CT apparatus in the treatment room and to perform positioning using the X-ray CT current image captured by that apparatus at treatment time and the X-ray CT image from treatment planning. An affected part that is soft tissue is not depicted well in X-ray fluoroscopic images, so positioning with such images essentially relies on matching the skeleton; positioning with X-ray CT images, on the other hand, allows the affected parts, which are depicted in the CT images, to be matched directly.
Therefore, extending the existing two-stage pattern matching so that the reference image and the current image become 3-D images is considered. The 3-D reference image and the 3-D current image each consist of multiple tomographic images (slice images) captured by the X-ray CT apparatus. The 3-D current image is expected to have fewer slices, from the viewpoint of reducing X-ray exposure; therefore, a 3-D reference image with dense image information must be compared with a 3-D current image whose image information is sparser than that of the 3-D reference image. The existing two-stage pattern matching can compare a 2-D reference image and a 2-D current image that have image information of equal density, but when a 3-D reference image and a 3-D current image of different information densities are to be compared, two-stage pattern matching cannot be realized merely by raising the image dimension of the prior art from 2-D to 3-D. That is, one cannot, as in the prior art, simply perform the first-stage pattern matching between the 3-D reference image in a first set region and the 3-D current image in a second set region, and then simply compare the 3-D current image in the extracted second set region with the 3-D reference image in the first set region, to realize second-stage matching that makes the two 3-D images most consistent.
Summary of the invention
The object of the present invention is to realize high-accuracy two-stage pattern matching (two-stage comparison) when positioning a patient for radiotherapy, even when the 3-D current image has fewer tomographic images than the 3-D reference image.
Technical solution to the problem
The image contrast device according to the present invention comprises: a 3-D image input unit that reads a 3-D reference image captured at treatment planning for radiotherapy and a 3-D current image captured at treatment time; and a matching processing unit that compares the 3-D reference image with the 3-D current image and calculates a position correction amount so that the position and posture of the affected part in the 3-D current image agree with those in the 3-D reference image. The matching processing unit has: a first matching unit that performs first-stage pattern matching of the 3-D current image against the 3-D reference image; and a second matching unit that performs second-stage pattern matching of a predetermined template region against a predetermined search target region, wherein the template region is generated from one of the 3-D reference image and the 3-D current image based on the first-stage matching result, and the search target region is generated from the other of the two images based on the first-stage matching result.
Effect of the invention
The image contrast device according to the present invention performs first-stage pattern matching of the 3-D current image against the 3-D reference image, then generates a predetermined template region and a predetermined search target region based on the first-stage matching result, and performs second-stage pattern matching of the search target region against the template region. Therefore, even when the 3-D current image has fewer tomographic images than the 3-D reference image, high-accuracy two-stage pattern matching can be realized.
Brief description of the drawings
Fig. 1 is a diagram showing the structure of the image contrast device and the patient positioning device according to Embodiment 1 of the present invention.
Fig. 2 is a diagram showing the overall apparatus structure related to the image contrast device and the patient positioning device of the present invention.
Fig. 3 is a diagram showing the 3-D reference image and the reference image template region according to Embodiment 1 of the present invention.
Fig. 4 is a diagram showing the 3-D current image according to Embodiment 1 of the present invention.
Fig. 5 is a diagram explaining the first-stage pattern matching method according to Embodiment 1 of the present invention.
Fig. 6 is a diagram explaining the relation between the reference image template region and the slice images in the first-stage pattern matching method of Fig. 5.
Fig. 7 is a diagram showing the first-stage extraction regions of the slice images extracted by the first-stage pattern matching method according to Embodiment 1 of the present invention.
Fig. 8 is a diagram explaining the second-stage pattern matching method according to Embodiment 1 of the present invention.
Fig. 9 is a diagram explaining the relation between the reference image template region and the slice images in the second-stage pattern matching method of Fig. 8.
Fig. 10 is a diagram explaining the first-stage pattern matching method according to Embodiment 2 of the present invention.
Fig. 11 is a diagram explaining the relation between the reference image template region and the slice images in the first-stage pattern matching method of Fig. 10.
Fig. 12 is a diagram showing the 3-D reference image after posture transformation according to Embodiment 2 of the present invention.
Fig. 13 is a diagram explaining the second-stage pattern matching method according to Embodiment 2 of the present invention.
Fig. 14 is a diagram showing the structure of the image contrast device and the patient positioning device according to Embodiment 3 of the present invention.
Detailed description of the invention
Embodiment 1
Fig. 1 is a diagram showing the structure of the image contrast device and the patient positioning device according to Embodiment 1 of the present invention, and Fig. 2 is a diagram showing the overall apparatus structure related to the image contrast device and the patient positioning device of the present invention. In Fig. 2, reference numeral 1 denotes a CT simulator room for carrying out the treatment planning performed before radiotherapy. The CT simulator room contains a CT gantry 2 and a top board 3 of a CT imaging couch; the patient 4 lies on the top board 3, and treatment planning CT image data are captured so as to include the affected part 5. On the other hand, 6 denotes a treatment room where radiotherapy is performed. The treatment room contains a CT gantry 7 and a rotating treatment table 8 with a top board 9 on its upper part; the patient 10 lies on the top board 9, and positioning CT image data are captured so as to include the affected part 11 at treatment time.
Here, positioning means the following: the position of the patient 10 and the affected part 11 at treatment time is calculated against the treatment planning CT image data, a position correction amount is calculated so as to agree with the treatment plan, and position matching is performed so that the affected part 11 at treatment time reaches the beam irradiation center 12 of the radiotherapy. Position matching is realized by drive-controlling the rotating treatment table 8, with the patient 10 carried on the top board 9, so as to move the position of the top board 9. The rotating treatment table 8 can perform drive corrections in six degrees of freedom of translation and rotation, and by rotating the top board 9 of the rotating treatment table 8 by 180 degrees, the patient can be moved from the CT imaging position (shown by the solid line in Fig. 2) to a treatment position of the irradiation bed 13 where irradiation is performed (shown by the dotted line in Fig. 2). Although the CT imaging position and the treatment position shown in Fig. 2 are in an opposed positional relation of 180 degrees, the arrangement is not limited to this; the two may be in another angular relation, such as 90 degrees.
The treatment planning CT image data and the positioning CT image data are transferred to a positioning computer 14. The treatment planning CT image data become the 3-D reference image, and the positioning CT image data become the 3-D current image. The image contrast device 29 and the patient positioning device 30 of the present invention are both realized as computer software residing in this positioning computer 14; the image contrast device 29 calculates the above-described position correction amount (translation amount, rotation amount), and the patient positioning device 30 includes the image contrast device 29 and additionally has a function of calculating, based on this position correction amount, the parameters for controlling each drive axis of the rotating treatment table 8 (simply called the treatment table 8 where appropriate). The patient positioning device 30 controls the treatment table 8 according to the matching result (comparison result) obtained by the image contrast device 29, thereby guiding the target affected part of particle beam therapy to the beam irradiation center 12 provided in the therapy apparatus.
In conventional positioning for radiotherapy, the position offset is calculated by comparing a DRR (Digitally Reconstructed Radiography) image generated from the treatment planning CT image data, or an X-ray fluoroscopic image captured at the same time, against the X-ray fluoroscopic image captured in the treatment room at treatment time. Since an affected part that is soft tissue is not depicted well in X-ray fluoroscopic images, position matching essentially uses the skeleton. The positioning using CT image data described in the present embodiment has the following features: a CT gantry 7 is installed in the treatment room 6, and since the CT image data captured immediately before treatment are matched against the treatment planning CT image data, the affected part can be depicted directly and position matching of the affected part itself can be performed.
Next, the procedure by which the image contrast device 29 and the patient positioning device 30 of the present embodiment calculate the above-described position correction amount is explained. Fig. 1 shows the relation between the data processing units constituting the image contrast device and the patient positioning device. Here, the image contrast device 29 comprises: a 3-D image input unit 21 that reads the CT image data; a matching processing unit 22; a matching-result display unit 23; and a matching-result output unit 24. The device obtained by adding a treatment table control parameter calculation unit 26 to the image contrast device 29 is the patient positioning device 30.
As described above, the 3-D reference image is the data captured for treatment planning at planning time, and is characterized in that the affected part information (affected part shape, etc.) of the affected part targeted by particle beam therapy is indicated by manual input. The 3-D current image is the data captured for patient positioning at treatment time, and is characterized in that the number of tomographic images (also called slice images) is small from the viewpoint of suppressing X-ray exposure.
The present invention adopts a structure that performs two-stage pattern matching: first-stage pattern matching of the 3-D current image against the 3-D reference image is performed; then, based on the first-stage matching result, a predetermined template region and a predetermined search target region are generated, and second-stage pattern matching is performed in the same or the opposite direction using the template region. In two-stage pattern matching, by making the matching parameters of the first stage differ from those of the second stage, fast and highly accurate processing can be realized. For example, there is a method of performing the first-stage matching over a wide range at low resolution, and then performing the second-stage matching at high resolution over the narrowed-down range, using the template region or search target region found in the first stage.
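The "wide range at low resolution, then narrowed range at high resolution" scheme above can be sketched as follows. This is a minimal 2-D illustration assuming block-average pooling, a sum-of-squared-differences score, and a fixed fine-search margin; none of these specifics come from the patent:

```python
import numpy as np

def ssd_match(image, template, y0=0, x0=0, y1=None, x1=None):
    """Exhaustive SSD search of `template` over a sub-window of `image`."""
    ih, iw = image.shape
    th, tw = template.shape
    y1 = ih - th if y1 is None else y1
    x1 = iw - tw if x1 is None else x1
    best, pos = np.inf, (y0, x0)
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            d = image[y:y + th, x:x + tw] - template
            s = float((d * d).sum())
            if s < best:
                best, pos = s, (y, x)
    return pos

def two_stage_match(image, template, factor=2, margin=2):
    """Stage 1: coarse search on block-averaged images; stage 2: fine search
    only in a small window around the scaled-up coarse result."""
    def pool(a):  # block-average downsample by `factor`
        h, w = a.shape
        return a[:h - h % factor, :w - w % factor].reshape(
            h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    cy, cx = ssd_match(pool(image), pool(template))
    gy, gx = cy * factor, cx * factor      # back to full resolution
    ih, iw = image.shape
    th, tw = template.shape
    return ssd_match(image, template,
                     max(0, gy - margin), max(0, gx - margin),
                     min(ih - th, gy + margin), min(iw - tw, gx + margin))
```

The coarse stage visits far fewer positions, and the fine stage visits only `(2*margin + 1)**2` of them, which is where the speed-up comes from.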
The 3-D image input unit 21 is explained. The 3-D image input unit 21 reads, as 3-D volume data, image data in DICOM (Digital Imaging and Communications in Medicine) format (slice image groups) consisting of the multiple tomographic images captured by the X-ray CT apparatus. The treatment planning CT image data are the 3-D volume data at planning time, i.e., the 3-D reference image. The positioning CT image data are the 3-D volume data at treatment time, i.e., the 3-D current image. The CT image data are not limited to the DICOM format and may be data in other formats.
The matching processing unit 22 compares (pattern-matches) the 3-D reference image with the 3-D current image, and calculates the position correction amount so that the position and posture of the affected part in the 3-D current image agree with those in the 3-D reference image. The matching-result display unit 23 displays the result of the matching by the matching processing unit 22 (the position correction amount described below, or an image in which the 3-D current image moved by this correction amount is superimposed on the 3-D reference image, etc.) on the display screen of the positioning computer 14. The matching-result output unit 24 outputs the position correction amount (translation amount, rotation amount) calculated by the matching processing unit 22 when the 3-D reference image and the 3-D current image are matched. The treatment table control parameter calculation unit 26 converts the output values of the matching-result output unit 24 (three translation axes [ΔX, ΔY, ΔZ] and three rotation axes [ΔA, ΔB, ΔC], six degrees of freedom in total) into the parameters for controlling each axis of the treatment table 8, i.e., calculates those parameters. The treatment table 8 drives the drive mechanism of each of its axes based on the treatment table control parameters calculated by the treatment table control parameter calculation unit 26. In this way, the position correction amount can be calculated so as to agree with the treatment plan, and position matching can be performed so that the affected part 11 at treatment time reaches the beam irradiation center 12 of the radiotherapy.
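The six-degree-of-freedom output (translation 3 axes [ΔX, ΔY, ΔZ], rotation 3 axes [ΔA, ΔB, ΔC]) can be packed into a homogeneous transform before being mapped to the table axes. A sketch under the assumption of a Z·Y·X rotation composition; an actual treatment table defines its own axis and rotation conventions:

```python
import numpy as np

def correction_matrix(dx, dy, dz, da, db, dc):
    """4x4 homogeneous transform for a 6-DOF correction:
    [dx, dy, dz] translation plus [da, db, dc] rotations in radians.
    The Z*Y*X composition order is an illustrative assumption."""
    ca, sa = np.cos(da), np.sin(da)
    cb, sb = np.cos(db), np.sin(db)
    cc, sc = np.cos(dc), np.sin(dc)
    rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])   # about X
    ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])   # about Y
    rz = np.array([[cc, -sc, 0], [sc, cc, 0], [0, 0, 1]])   # about Z
    m = np.eye(4)
    m[:3, :3] = rz @ ry @ rx
    m[:3, 3] = [dx, dy, dz]
    return m
```

A per-axis controller would then decompose this matrix (or consume the six values directly) according to the table's own kinematics.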
The matching processing unit 22 has: a posture transformation unit 25; a first matching unit 16; a second matching unit 17; and a reference template region generation unit 18. When the first-stage or second-stage pattern matching is performed, the posture transformation unit 25 transforms the position and posture of the target data. The first matching unit 16 performs first-stage pattern matching of the 3-D current image against the 3-D reference image. The second matching unit 17 performs second-stage pattern matching of a predetermined template region against a predetermined search target region, wherein the template region is generated from one of the 3-D reference image and the 3-D current image based on the first-stage matching result, and the search target region is generated from the other of the two images based on the first-stage matching result.
The matching processing unit 22 is described in detail with reference to Fig. 3 to Fig. 9. Fig. 3 is a diagram showing the 3-D reference image and the reference image template region according to Embodiment 1 of the present invention. Fig. 4 is a diagram showing the 3-D current image according to Embodiment 1. Fig. 5 is a diagram explaining the first-stage pattern matching method according to Embodiment 1. Fig. 6 is a diagram explaining the relation between the reference image template region and the slice images in the first-stage pattern matching method of Fig. 5. Fig. 7 is a diagram showing the first-stage extraction regions of the slice images extracted by the first-stage pattern matching method according to Embodiment 1. Fig. 8 is a diagram explaining the second-stage pattern matching method according to Embodiment 1. Fig. 9 is a diagram explaining the relation between the reference image template region and the slice images in the second-stage pattern matching method of Fig. 8.
The reference template region generation unit 18 of the matching processing unit 22 uses the affected part shape (affected part information) input at treatment planning to generate a reference image template region 33 from the 3-D reference image 31. The 3-D reference image 31 consists of multiple slice images 32. Fig. 3 shows, for convenience, an example consisting of five slice images 32a, 32b, 32c, 32d, 32e. The affected part shape is input as an ROI (Region of Interest) 35, as a closed contour surrounding the affected part in each slice image. A region containing the closed contour, such as a circumscribed quadrangle 34, can be taken, and the rectangular cuboid region containing every circumscribed quadrangle 34 is taken as the template region. This template region is the reference image template region 33. The first matching unit 16 of the matching processing unit 22 performs first-stage pattern matching to match the reference image template region 33 to the 3-D current image 36. The 3-D current image 36 shown in Fig. 4 represents an example consisting of three slice images 37a, 37b, 37c. The current image region 38 shown in Fig. 5 represents the rectangular cuboid containing the three slice images 37a, 37b, 37c. As shown in Fig. 5, within the current image region 38, the reference image template region 33 (33a, 33b, 33c) is moved in a raster scanning manner, and the correlation value with the 3-D current image 36 is calculated. As the correlation value, any of the correlation values used in image matching (image comparison), such as the normalized cross-correlation value, may be used.
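The construction of the template region from the ROIs (a circumscribed rectangle per slice, then the cuboid containing all of them) can be sketched as follows, assuming the per-slice ROIs are supplied as filled boolean masks; the patent's input is closed contours, so the mask representation is an illustrative assumption:

```python
import numpy as np

def roi_bounding_cuboid(roi_masks):
    """Given per-slice boolean ROI masks, return the (slice, row, col)
    lower/upper bounds of the cuboid enclosing every slice's bounding box."""
    vol = np.stack(roi_masks)               # (n_slices, H, W) boolean volume
    idx = np.argwhere(vol)                  # coordinates of all ROI voxels
    lo = idx.min(axis=0)
    hi = idx.max(axis=0) + 1                # exclusive upper bounds
    return tuple(int(v) for v in lo), tuple(int(v) for v in hi)

def extract_template(volume, lo, hi):
    """Cut the reference image template region out of the 3-D reference volume."""
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
```

The returned cuboid plays the role of the reference image template region 33 that is then raster-scanned over the current image region.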
The reference image template region 33a moves along the scanning path 39a in a raster scanning manner within the slice image 37a. Likewise, the reference image template region 33b moves along the scanning path 39b within the slice image 37b, and the reference image template region 33c moves along the scanning path 39c within the slice image 37c. To keep the drawing simple, the scanning paths 39b and 39c are shown only briefly.
In the first-stage pattern matching, as shown in Fig. 6, each slice image 53 constituting the reference image template region 33 is compared with a slice image 37 constituting the current image region 38. A slice image 53 is the portion of a slice image 32 of the 3-D reference image 31 cut out by the reference image template region 33. The reference image template region 33 is formed of five slice images corresponding to the five slice images 32a, 32b, 32c, 32d, 32e of the 3-D reference image. Thus, in the first-stage pattern matching, each of the five slice images of the reference image template region 33 is compared with the slice image 37a of the 3-D current image 36. The slice images 37b and 37c of the 3-D current image 36 are compared in the same way.
The first matching unit 16 extracts a first-stage extraction region 43 from each slice image 37 of the 3-D current image 36 so as to contain the region of the current image region 38 that has the highest correlation value with the reference image template region 33. As shown in Fig. 7, a first-stage extraction region 43a is extracted from the slice image 37a of the 3-D current image 36. Likewise, first-stage extraction regions 43b and 43c are extracted from the slice images 37b and 37c. A first-stage extraction current image region 42 is generated so as to contain the first-stage extraction regions 43a, 43b, 43c, and serves as the search target region for the second-stage pattern matching. In this way, the first matching unit 16 generates the first-stage extraction current image region 42 as the search target region for the second-stage pattern matching.
Here, in the state before positioning, the postures (three rotation axes) of the 3-D reference image 31 and the 3-D current image 36 do not agree, so a simple raster scan such as that of Fig. 5 cannot, when the number of slices of the 3-D current image 36 is small, perform high-accuracy matching that also detects angular deviation; however, this is not a problem for extracting the first-stage extraction regions 43 used for the second-stage pattern matching. Therefore, the first-stage pattern matching calculates correlation values without detecting angular deviation, and the subsequent second-stage pattern matching performs high-accuracy matching that also detects angular deviation.
The second-stage pattern matching is explained. In the second-stage pattern matching, the posture transformation unit 25 of the matching processing unit 22 generates a posture-transformed template region 40 by transforming the posture of the reference image template region 33 generated from the 3-D reference image 31. In the second-stage pattern matching, as shown in Fig. 8 and Fig. 9, the posture change amount (three rotation axes) of the reference image template region 33 is added as a matching parameter. The second matching unit 17 performs high-accuracy matching, including angular deviation, between the posture-transformed template region 40 produced by the posture transformation unit 25 and the first-stage extraction current image region 42 of the 3-D current image 36, which has the smaller number of slice images. In this way, high-accuracy two-stage pattern matching including angular deviation can be realized. By restricting the search range of the second-stage pattern matching to the narrow range containing the region obtained by the first-stage pattern matching, the first-stage matching can be performed over a wide range at low resolution to find the first-stage extraction current image region 42 containing the first-stage extraction regions 43, the second-stage matching can be performed at high resolution, and the time required for pattern matching can be shortened.
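Adding the posture change amount as a matching parameter can be illustrated, for a single in-plane rotation axis, by scoring rotated copies of the template over candidate angles. Nearest-neighbour resampling and normalized cross-correlation are illustrative simplifications of the full three-rotation-axis case:

```python
import numpy as np

def rotate_nn(img, theta):
    """Rotate a 2-D slice about its centre by `theta` radians,
    nearest-neighbour resampling with edge clamping (a simplification)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    c, s = np.cos(theta), np.sin(theta)
    sy = cy + (ys - cy) * c - (xs - cx) * s      # source coordinates
    sx = cx + (ys - cy) * s + (xs - cx) * c
    sy = np.clip(np.rint(sy).astype(int), 0, h - 1)
    sx = np.clip(np.rint(sx).astype(int), 0, w - 1)
    return img[sy, sx]

def best_angle(region, template, angles):
    """Score each candidate rotation of the template against the
    extraction region; return the best-scoring angle."""
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        d = np.sqrt((a * a).sum() * (b * b).sum())
        return 0.0 if d == 0 else float((a * b).sum() / d)
    scores = [ncc(region, rotate_nn(template, t)) for t in angles]
    return angles[int(np.argmax(scores))]
```

In the full method, this angle search is nested with the translational raster scan so that both the translation and rotation offsets are recovered.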
The first-stage extraction current image region 42 shown in Fig. 8 is represented as the rectangular cuboid containing the three first-stage extraction regions 43a, 43b, 43c. The posture-transformed template region 40a, which is the reference image template region after posture transformation, moves along the scanning path 39a in a raster scanning manner within the first-stage extraction region 43a of the slice image 37a. Likewise, the posture-transformed template region 40b moves along the scanning path 39b within the first-stage extraction region 43b of the slice image 37b, and the posture-transformed template region 40c moves along the scanning path 39c within the first-stage extraction region 43c of the slice image 37c. To keep the drawing simple, the scanning paths 39b and 39c are shown only briefly.
In the second-stage pattern matching, as shown in Fig. 9, the second matching unit 17 compares a cross-section 41 of the posture-transformed template region 40 with the first-stage extraction region 43 of a slice image 37 constituting the first-stage extraction current image region 42. Alternatively, the comparison may be performed between the cross-section 41 and a slice image 55, which is the portion of a slice image 37 of the 3-D current image 36 cut out by the first-stage extraction current image region 42. The cross-section 41 of the posture-transformed template region 40 is generated from the multiple slice images 32 of the 3-D reference image 31; for example, the data of the cross-section 41 are cut from the slice images 32 constituting the 3-D reference image 31. In general, the data density of the cross-section 41 of the posture-transformed template region 40 and the data density of the first-stage extraction region 43 of the 3-D current image 36 are not identical, but the correlation value is calculated for each pixel of the cross-section 41. Alternatively, the cross-section 41 of the posture-transformed template region 40 may contain data interpolated (completed) so that the data density of the cross-section 41 becomes identical to that of the first-stage extraction region 43 of the 3-D current image 36.
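The completion mentioned above, where the dense reference data are resampled so that their density matches the sparse current image, can be sketched as linear interpolation along the slice axis at the current image's slice positions. The slice-position arrays and the linear weighting are assumptions for illustration:

```python
import numpy as np

def resample_slices(volume, src_z, dst_z):
    """Linearly interpolate a dense volume (slices at positions `src_z`)
    at the sparse slice positions `dst_z` of the current image."""
    src_z = np.asarray(src_z, float)
    out = []
    for z in dst_z:
        # bracket z between two source slices (clamped at the ends)
        i = int(np.clip(np.searchsorted(src_z, z), 1, len(src_z) - 1))
        z0, z1 = src_z[i - 1], src_z[i]
        w = (z - z0) / (z1 - z0)
        out.append((1.0 - w) * volume[i - 1] + w * volume[i])
    return np.stack(out)
```

After resampling, each interpolated reference slice can be compared pixel-for-pixel with the corresponding current image slice.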
Here, the two-stage pattern matching method of Embodiment 1 is summarized. First, the reference template region generation unit 18 of the matching processing unit 22 generates the reference image template region 33 from the 3-D reference image 31 (reference image template region generation step). The first matching unit 16 performs first-stage pattern matching of the 3-D current image 36 against the reference image template region 33 (first-stage pattern matching step). The first-stage pattern matching compares each slice image 53 constituting the reference image template region 33 with a slice image 37 constituting the current image region 38. Each time the reference image template region 33 is scanned, the first matching unit 16 calculates the correlation value between the current image region 38 and the reference image template region 33 (correlation value calculation step), and, through the first-stage pattern matching, extracts the first-stage extraction regions 43 so as to contain the regions where the correlation value between the current image region 38 and the reference image template region 33 is highest (first-stage extraction region extraction step). The first matching unit 16 generates the first-stage extraction current image region 42, which serves as the search target region for the second-stage pattern matching, so as to contain the first-stage extraction region 43 of each slice image 37 constituting the current image region 38 (search target generation step). The two-stage pattern matching method of Embodiment 1 comprises: the reference image template region generation step; the first-stage pattern matching step; and the following second-stage pattern matching step. The first-stage pattern matching step comprises: the correlation value calculation step; the first-stage extraction region extraction step; and the search target generation step.
Next, the secondary comparing unit 17 of the contrast processing unit 22 performs secondary pattern matching of the primary-extracted current image region 42 of the three-dimensional current image 36 against the pose-transformed template region 40, which is obtained by the pose transformation unit 25 applying a pose transformation to the reference image template region 33 (secondary pattern matching step). In the secondary pattern matching, a plurality of slices 41 of the pose-transformed template region 40 in a given pose are generated (slice generating step), and each slice 41 is compared with the primary extraction region 43 of a slice image 37 forming the primary-extracted current image region 42, or with a slice image 55. Each time the pose-transformed template region 40 is scanned, the secondary comparing unit 17 calculates the correlation value between the primary-extracted current image region 42 and the plurality of slices 41 of the pose-transformed template region 40 (correlation value calculating step). The pose transformation unit 25 then transforms the template into a pose different from the previous one (pose transforming step), the secondary comparing unit 17 generates the plurality of slices 41 of the pose-transformed template region 40 in this new pose (slice generating step), and, at each scan position, again calculates the correlation value between the primary-extracted current image region 42 and the slices 41 (correlation value calculating step). The secondary comparing unit 17 of the contrast processing unit 22 selects, as the optimum solution, the pose relation (pose information) between the three-dimensional reference image and the three-dimensional current image that yields the highest of the calculated correlation values (optimum solution selecting step). Pattern matching is thereby realized so that the two three-dimensional images, the three-dimensional reference image and the three-dimensional current image, agree most closely. The secondary pattern matching step comprises: the slice generating step; the correlation value calculating step; the pose transforming step; and the optimum solution selecting step.
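The second-stage matching just described loops over candidate poses of the template, scores each against the extracted search region, and keeps the pose with the highest correlation as the optimum solution. A toy sketch, using 90-degree rotations as a stand-in for the arbitrary pose transformations of the pose transformation unit (25); all names are illustrative:

```python
import numpy as np

def ncc(t, r):
    # zero-mean normalized cross-correlation of two equally sized patches
    t = t - t.mean()
    r = r - r.mean()
    d = np.sqrt((t * t).sum() * (r * r).sum())
    return float((t * r).sum() / d) if d > 0 else 0.0

def secondary_match(search_region, template, poses=(0, 1, 2, 3)):
    # Score each candidate pose of the template against the extracted search
    # region and keep the pose with the highest correlation (optimum solution).
    best_corr, best_pose = -2.0, None
    for k in poses:
        posed = np.rot90(template, k)           # pose-transformed template
        if posed.shape != search_region.shape:  # pose must fit the region
            continue
        c = ncc(posed, search_region)
        if c > best_corr:
            best_corr, best_pose = c, k
    return best_pose, best_corr

template = np.arange(9, dtype=float).reshape(3, 3)
region = np.rot90(template, 1)  # current image seen rotated by 90 degrees
pose, corr = secondary_match(region, template)
```

A real pose search would sample three rotation axes at a chosen step size and slice the rotated 3-D template, but the select-the-highest-correlation logic is the same.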
After the pattern matching ends, the contrast processing unit 22 calculates, from the pose of the pose-transformed template region 40 whose correlation value is the highest among the calculated correlation values, the position correction amounts (translation amount, rotation amount) obtained by comparing the three-dimensional reference image 31 with the three-dimensional current image 36 (position correction calculating step). The comparison result display unit 23 displays, on the display screen of the computer 14, the position correction amounts, an image in which the three-dimensional current image moved by these amounts is superimposed on the three-dimensional reference image, and the like. The comparison result output unit 24 outputs the position correction amounts (translation amount, rotation amount) calculated by the contrast processing unit 22 from the comparison of the three-dimensional reference image 31 and the three-dimensional current image 36 (position correction output step). The treatment table control parameter calculating unit 26 converts the output values of the comparison result output unit 24 (three translation axes [ΔX, ΔY, ΔZ] and three rotation axes [ΔA, ΔB, ΔC], six degrees of freedom in total) into the parameters for controlling each axis of the treatment table 8, i.e. calculates those parameters (treatment table control parameter calculating step). Based on the treatment table control parameters calculated by the treatment table control parameter calculating unit 26, the treatment table 8 drives the drive devices of its axes (treatment table driving step).
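The conversion from the six-degree-of-freedom output ([ΔX, ΔY, ΔZ], [ΔA, ΔB, ΔC]) to per-axis table commands is machine-specific and not detailed in the text; the sketch below only illustrates the shape of such a mapping, with an invented sign convention and illustrative names:

```python
from dataclasses import dataclass

@dataclass
class PositionCorrection:
    dx: float  # translation along X, mm
    dy: float  # translation along Y, mm
    dz: float  # translation along Z, mm
    da: float  # rotation about X, deg
    db: float  # rotation about Y, deg
    dc: float  # rotation about Z, deg

def to_table_axes(corr, signs=(1, 1, -1, 1, -1, 1)):
    # Map the 6-DoF comparison output onto per-axis drive commands for the
    # treatment table; axis order and sign convention depend on the machine,
    # and the `signs` tuple used here is purely illustrative.
    vals = (corr.dx, corr.dy, corr.dz, corr.da, corr.db, corr.dc)
    return [s * v for s, v in zip(signs, vals)]

cmd = to_table_axes(PositionCorrection(1.0, 2.0, 3.0, 0.4, 0.5, 0.6))
```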
The image contrast device 29 according to Embodiment 1 performs primary pattern matching of the three-dimensional current image 36 against the three-dimensional reference image 31 and then, based on the primary pattern matching result, generates the pose-transformed template region 40, which serves as the prescribed template region for the secondary pattern matching, from the three-dimensional reference image 31, and generates the primary-extracted current image region 42, which serves as the prescribed search target region for the secondary pattern matching, from the three-dimensional current image 36 so that it contains the primary extraction regions 43. High-precision two-stage pattern matching can therefore be realized even when the number of tomographic images (slice images) of the three-dimensional current image 36 is smaller than that of the three-dimensional reference image 31.
Since the image contrast device 29 according to Embodiment 1 realizes high-precision two-stage pattern matching even when the number of tomographic images (slice images) of the three-dimensional current image 36 is smaller than that of the three-dimensional reference image 31, the number of tomographic images of the three-dimensional current image 36 taken by the X-ray CT apparatus at the time of position matching can be reduced, and the radiation exposure of the patient due to the X-ray CT apparatus during matching can be reduced.
The image contrast device 29 according to Embodiment 1 generates the primary-extracted current image region 42 based on the result of performing primary pattern matching of the three-dimensional current image 36 against the three-dimensional reference image 31. By using the primary-extracted current image region 42, which is narrower than the current image region 38, as the search target, high-resolution secondary pattern matching can be performed on the primary-extracted current image region 42 containing the primary extraction regions 43 found by the low-resolution, wide-area primary pattern matching, and the time required for pattern matching can thereby be shortened.
Based on the position correction amounts calculated by the image contrast device 29, the patient positioning device 30 according to Embodiment 1 can match the patient's position and posture to those at the time of treatment planning. Since the position and posture can be matched to those at the time of treatment planning, position matching can be performed so that the affected part 11 at the time of treatment is brought to the beam irradiation center 12 of the radiation therapy.
The patient positioning device 30 according to Embodiment 1 can use the pose transformation unit 25 to generate, from the reference image template region 33 obtained from the three-dimensional reference image 31, a pose-transformed template region 40 suited to a three-dimensional current image 36 whose number of tomographic images (slice images) is smaller than that of the three-dimensional reference image 31, so that high-precision two-stage pattern matching that also covers angular deviation can be realized.
The image contrast device 29 according to Embodiment 1 comprises: the three-dimensional image input unit 21, which reads the three-dimensional reference image 31 taken at the time of treatment planning of the radiation therapy and the three-dimensional current image 36 taken at the time of treatment; and the contrast processing unit 22, which compares the three-dimensional reference image 31 with the three-dimensional current image 36 and calculates position correction amounts so that the position and posture of the affected part in the three-dimensional current image 36 agree with those of the affected part in the three-dimensional reference image 31. The contrast processing unit 22 has: the primary comparing unit 16, which performs primary pattern matching of the three-dimensional current image 36 against the three-dimensional reference image 31; and the secondary comparing unit 17, which performs secondary pattern matching of a prescribed search target region 42 against a prescribed template region (pose-transformed template region 40), the prescribed template region being generated, based on the primary pattern matching result, from one of the three-dimensional reference image 31 and the three-dimensional current image 36, and the prescribed search target region 42 being generated, based on the primary pattern matching result, from the other of the three-dimensional reference image 31 and the three-dimensional current image 36, i.e. the one that is not the generation basis of the prescribed template region. Therefore, high-precision two-stage pattern matching can be realized even when the number of tomographic images of the three-dimensional current image 36 is smaller than that of the three-dimensional reference image 31.
The patient positioning device 30 according to Embodiment 1 comprises: the image contrast device 29; and the treatment table control parameter calculating unit 26, which controls each axis of the treatment table 8 based on the position correction amounts calculated by the image contrast device 29. The image contrast device 29 comprises: the three-dimensional image input unit 21, which reads the three-dimensional reference image 31 taken at the time of treatment planning of the radiation therapy and the three-dimensional current image 36 taken at the time of treatment; and the contrast processing unit 22, which compares the three-dimensional reference image 31 with the three-dimensional current image 36 and calculates position correction amounts so that the position and posture of the affected part in the three-dimensional current image 36 agree with those of the affected part in the three-dimensional reference image 31. The contrast processing unit 22 has: the primary comparing unit 16, which performs primary pattern matching of the three-dimensional current image 36 against the three-dimensional reference image 31; and the secondary comparing unit 17, which performs secondary pattern matching of a prescribed search target region 42 against a prescribed template region (pose-transformed template region 40), the prescribed template region being generated, based on the primary pattern matching result, from one of the three-dimensional reference image 31 and the three-dimensional current image 36, and the prescribed search target region 42 being generated, based on the primary pattern matching result, from the other of these images, i.e. the one that is not the generation basis of the prescribed template region. Therefore, high-precision positioning can be performed even when the number of tomographic images of the three-dimensional current image 36 is smaller than that of the three-dimensional reference image 31.
Embodiment 1 relates to an image contrast method that compares the three-dimensional reference image 31 taken at the time of treatment planning of the radiation therapy with the three-dimensional current image 36 taken at the time of treatment. The image contrast method comprises: a primary pattern matching step of performing primary pattern matching of the three-dimensional current image 36 against the three-dimensional reference image 31; and a secondary pattern matching step of performing secondary pattern matching of a prescribed search target region 42 against a prescribed template region (pose-transformed template region 40), the prescribed template region being generated, based on the primary pattern matching result, from one of the three-dimensional reference image 31 and the three-dimensional current image 36, and the prescribed search target region 42 being generated, based on the primary pattern matching result, from the other of these images, i.e. the one that is not the generation basis of the prescribed template region. Therefore, high-precision two-stage pattern matching can be realized even when the number of tomographic images of the three-dimensional current image 36 is smaller than that of the three-dimensional reference image 31.
Embodiment 2
In the two-stage pattern matching of Embodiment 2, primary pattern matching of the three-dimensional current image 36 against the three-dimensional reference image 31 is performed first; then, based on the result of the primary pattern matching, the current image template region 44, which serves as the prescribed template region for the secondary pattern matching, is generated from the three-dimensional current image 36, the pose-transformed reference image region 47 obtained by transforming the pose of the three-dimensional reference image 31 is used as the search target, and secondary pattern matching of the pose-transformed reference image region 47 against the current image template region 44 is performed. The secondary pattern matching is thus the reverse of the primary pattern matching.
Fig. 10 is a diagram explaining the primary pattern matching method according to Embodiment 2 of the present invention, and Fig. 11 is a diagram explaining the relation between the reference image template region and the slice images in the primary pattern matching method of Fig. 10. In Embodiment 2, through the primary pattern matching, the primary comparing unit 16 performs a search that also covers the three rotation axes and obtains the pose change amount.
The current image region 38 shown in Fig. 10 is represented as a rectangular parallelepiped containing three slice images 37a, 37b, 37c. The pose-transformed template regions 40a, 40b, 40c, which serve as the reference image template regions of Embodiment 2, are regions whose pose has been transformed by the pose transformation unit 25. The initial pose is the default state; for example, the parameters of the three rotation axes are 0. The pose-transformed template region 40a, as a pose-transformed reference image template region, moves in a raster scan pattern within the slice image 37a along the scan path 39a. Likewise, the pose-transformed template region 40b moves in a raster scan pattern within the slice image 37b along the scan path 39b, and the pose-transformed template region 40c moves in a raster scan pattern within the slice image 37c along the scan path 39c. To keep the drawing simple, the scan paths 39b and 39c are shown only schematically.
While the pose is being transformed, the correlation between the slice images 37a, 37b, 37c of the three-dimensional current image 36 and the pose-transformed template region 40 is calculated. For example, each of the three rotation axes is changed by a prescribed change amount or change rate and the correlation value is calculated; the scan then moves to the next position and the correlation value is calculated again. As shown in Fig. 11, the primary comparing unit 16 performs image comparison between a slice 41 of the pose-transformed template region 40 and a slice image 37 forming the current image region 38. Each slice 41 of the pose-transformed template region 40 is a plane obtained by cutting the pose-transformed template region 40 along a plane parallel to the slice images 32 of the three-dimensional reference image 31 in the initial pose, and is generated from the plurality of slice images 32 of the three-dimensional reference image 31 (slice generating step). For example, the method described in Embodiment 1 can be used; that is, the data of a slice 41 can be cut out of the slice images 32 forming the three-dimensional reference image 31. The slices 41 of the pose-transformed template region 40 may also contain interpolated (completed) data so that the data density of the slices 41 matches the data density of the three-dimensional current image 36.
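The "completion" of slice data so that both images share the same data density amounts to resampling one image onto the pixel grid of the other. A deliberately simple nearest-neighbour version as a sketch (a real system would more likely interpolate linearly or tri-linearly; all names are illustrative):

```python
import numpy as np

def resample_to(img, shape):
    # Nearest-neighbour resampling so that a template slice and the current
    # image share the same pixel density before correlation is computed.
    ys = (np.arange(shape[0]) * img.shape[0] / shape[0]).astype(int)
    xs = (np.arange(shape[1]) * img.shape[1] / shape[1]).astype(int)
    return img[np.ix_(ys, xs)]

coarse = np.array([[1.0, 2.0],
                   [3.0, 4.0]])
fine = resample_to(coarse, (4, 4))  # each source pixel now covers a 2x2 block
```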
Next, the primary comparing unit 16 generates the current image template region 44, which is used for the secondary matching. For example, from the search results covering the three rotation axes in each of the slice images 37a, 37b, 37c, the primary comparing unit 16 obtains the slice 41 of the pose-transformed template region 40 with the highest correlation value, the pose change amount of the pose-transformed template region 40 at that time, and the extraction region of the slice image 37 corresponding to that slice 41. From the extraction regions obtained for the slice images, the primary comparing unit 16 generates the current image template region 44 so that it contains the extraction region of the three-dimensional current image with the highest correlation value. The current image template region 44 is a two-dimensional image.
Next, as shown in Fig. 12, the pose transformation unit 25 of the contrast processing unit 22 changes the pose of the entire three-dimensional reference image 31 by the pose change amount calculated when the current image template region 44 was generated, and generates the pose-transformed three-dimensional image, namely the three-dimensional pose-transformed reference image 45, i.e. generates the pose-transformed reference image region 47. Fig. 12 is a diagram showing the three-dimensional reference image after the pose transformation according to Embodiment 2 of the present invention. The slice images 46a, 46b, 46c, 46d, 46e are the slice images obtained by applying the above pose change amount to the slice images 32a, 32b, 32c, 32d, 32e, respectively.
Next, as shown in Fig. 13, the secondary comparing unit 17 matches the current image template region 44, in a raster scan pattern along the scan path 49, against the pose-transformed reference image region 47 of the three-dimensional pose-transformed reference image 45, so that only the translational offset needs to be detected, which can be done at high speed. Fig. 13 is a diagram explaining the secondary pattern matching method according to Embodiment 2 of the present invention. The pose-transformed reference image region 47 is represented as a rectangular parallelepiped containing the five slice images 46a, 46b, 46c, 46d, 46e. The comparison execution plane 48 is the image plane corresponding to the pose that, in the primary pattern matching, gave the highest correlation with the slice images 37 of the three-dimensional current image 36; that is, it is the plane within the pose-transformed reference image region 47 whose pose is equal to the pose corresponding to the slice images 37 of the three-dimensional current image 36. The secondary comparing unit 17 generates the prescribed comparison execution plane 48 from the pose-transformed reference image region 47, generating it from the plurality of slice images 46 of the three-dimensional pose-transformed reference image 45 (comparison execution plane generating step). For example, the method described in Embodiment 1 can be used; that is, the data of the comparison execution plane 48 can be cut out of the slice images forming the three-dimensional pose-transformed reference image 45. The comparison execution plane 48 may also contain interpolated (completed) data so that its data density matches the data density of the current image template region 44.
The two-stage pattern matching method of Embodiment 2 is summarized as follows. First, the contrast processing unit 22 uses the pose transformation unit 25 to generate the pose-transformed template region 40 from the three-dimensional reference image 31 (pose-transformed template region generating step). The primary comparing unit 16 of the contrast processing unit 22 performs primary pattern matching of the three-dimensional current image 36 against the pose-transformed template region 40 (primary pattern matching step). Each time the pose of the pose-transformed template region 40 is changed (each time the pose transforming step is executed), the primary pattern matching generates a slice 41 of the pose-transformed template region 40 for each slice image 37 forming the current image region 38 (slice generating step), and performs image comparison between that slice 41 and the slice image 37 forming the current image region 38.
Each time the pose of the pose-transformed template region 40 is changed, the primary comparing unit 16 calculates the correlation value between the current image region 38 and the pose-transformed template region 40 (correlation value calculating step). In addition, each time the pose-transformed template region 40 is scanned, the primary comparing unit 16 calculates the correlation value between the current image region 38 and the pose-transformed template region 40, and, through the primary pattern matching, generates the current image template region 44 so that it contains the extraction region of the pose-transformed template region 40 for which the correlation between the current image region 38 and the pose-transformed template region 40 is highest (current image template region generating step).
Next, the contrast processing unit 22 uses the pose transformation unit 25 to change the pose of the entire three-dimensional reference image 31 by the pose change amount calculated when the current image template region 44 was generated, and generates the three-dimensional pose-transformed reference image 45, i.e. generates the pose-transformed reference image region 47 (pose-transformed reference image region generating step). The secondary comparing unit 17 performs secondary pattern matching of the pose-transformed reference image region 47 against the current image template region 44 (secondary pattern matching step). The secondary pattern matching generates the comparison execution plane 48 through the comparison execution plane generating step, and performs image comparison between the generated comparison execution plane 48 and the current image template region 44. In this image comparison, the current image template region 44 is translated without being rotated, while the correlation value between the comparison execution plane 48 and the current image template region 44 is calculated (correlation value calculating step).
In the secondary pattern matching, the secondary comparing unit 17 of the contrast processing unit 22 selects, as the optimum solution, the pose relation (pose information) between the three-dimensional pose-transformed reference image 45 and the current image template region 44 that yields the highest of the calculated correlation values (optimum solution selecting step). Pattern matching is thus realized through the two-stage matching so that the two three-dimensional images, the three-dimensional reference image 31 and the three-dimensional current image 36, agree most closely. The two-stage pattern matching method of Embodiment 2 comprises: the pose-transformed template region generating step; the primary pattern matching step; the pose-transformed reference image region generating step; and the secondary pattern matching step. The primary pattern matching step comprises: the slice generating step; the correlation value calculating step; the pose transforming step; and the current image template region generating step. The secondary pattern matching step comprises: the comparison execution plane generating step; the correlation value calculating step; and the optimum solution selecting step.
After the pattern matching ends, the contrast processing unit 22 calculates, from the pose of the high-correlation region of the three-dimensional pose-transformed reference image 45 whose correlation value is the highest among the calculated correlation values, the position correction amounts (translation amount, rotation amount) obtained by comparing the three-dimensional reference image 31 with the three-dimensional current image 36 (position correction calculating step). The comparison result display unit 23 displays, on the display screen of the computer 14, the position correction amounts or an image in which the three-dimensional current image moved by these amounts is superimposed on the three-dimensional reference image, and the like. The comparison result output unit 24 outputs the position correction amounts (translation amount, rotation amount) calculated by the contrast processing unit 22 from the comparison of the three-dimensional reference image 31 and the three-dimensional current image 36 (position correction output step). The treatment table control parameter calculating unit 26 converts the output values of the comparison result output unit 24 (three translation axes [ΔX, ΔY, ΔZ] and three rotation axes [ΔA, ΔB, ΔC], six degrees of freedom in total) into the parameters for controlling each axis of the treatment table 8, i.e. calculates those parameters (treatment table control parameter calculating step). Based on the treatment table control parameters calculated by the treatment table control parameter calculating unit 26, the treatment table 8 drives the drive devices of its axes (treatment table driving step).
The image contrast device 29 according to Embodiment 2 performs primary pattern matching of the three-dimensional current image 36 against the pose-transformed template region 40 of the three-dimensional reference image 31, the image comparison also covering the three rotation axes, and then, based on the primary pattern matching result, generates the current image template region 44, which serves as the template region for the secondary pattern matching, from the three-dimensional current image 36. Therefore, high-precision two-stage pattern matching can be realized even when the number of tomographic images (slice images) of the three-dimensional current image 36 is smaller than that of the three-dimensional reference image 31.
By generating the three-dimensional pose-transformed reference image 45, i.e. the pose-transformed reference image region 47, from the three-dimensional reference image 31, the image contrast device 29 according to Embodiment 2 can use the two-dimensional current image template region 44 and realize direct pattern matching against the pose-transformed reference image region 47 with translational movement only, without rotational movement. Since only the correlation value of each translational movement is calculated in the secondary pattern matching, the secondary pattern matching is faster than when the correlation value is calculated for every combination of rotational and translational movement.
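The speed advantage can be seen from a simplified operation count (illustrative numbers only, not figures from the patent): a joint search evaluates every rotation at every translation, whereas decoupling the search, as in Embodiment 2, evaluates each set once:

```python
def correlation_evaluations(translations, rotations, decoupled):
    # Simplified cost model: a joint search computes a correlation value for
    # every (rotation, translation) pair; a decoupled two-stage search handles
    # the rotations in the primary stage and the translations in the secondary
    # stage, so the counts add instead of multiply.
    return translations + rotations if decoupled else translations * rotations

joint = correlation_evaluations(10_000, 360, decoupled=False)
two_stage = correlation_evaluations(10_000, 360, decoupled=True)
```

The model ignores the per-pose scanning inside the primary stage, so it only bounds the saving; the multiplicative versus additive behaviour is the point.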
Embodiment 3
Embodiment 3 differs from Embodiments 1 and 2 in that a human body database (atlas model) is used to generate the reference image template region 33 used for the primary pattern matching of Embodiment 1, or the reference image template region 33 on which the pose-transformed template region 40 of Embodiment 2 is based. Fig. 14 is a diagram showing the structure of the image contrast device and the patient positioning device according to Embodiment 3 of the present invention. The image contrast device 29 according to Embodiment 3 differs from the image contrast device 29 according to Embodiments 1 and 2 in that it has: a human body database input unit 50; and an average template region generating unit 51. The patient positioning device 30 according to Embodiment 3 has the image contrast device 29 and the treatment table control parameter calculating unit 26.
The human body database input unit 50 acquires the human body database (atlas model) from a storage device such as a database device. The average template region generating unit 51 cuts out an average template region 54 from the organ part of the human body database corresponding to the affected part 5, 10, 11 of the patient 4. By pattern-matching this average template region 54 against the three-dimensional reference image 31, the reference template region generating unit 18 of the contrast processing unit 22 automatically generates the reference image template region 33 (reference image template region generating step).
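The automatic template generation can be pictured as locating the atlas patch in the reference image and cropping the reference image there. A 2-D sketch, again assuming zero-mean normalized cross-correlation as the similarity measure and with all names illustrative:

```python
import numpy as np

def ncc(t, r):
    # zero-mean normalized cross-correlation of two equally sized patches
    t = t - t.mean()
    r = r - r.mean()
    d = np.sqrt((t * t).sum() * (r * r).sum())
    return float((t * r).sum() / d) if d > 0 else 0.0

def generate_reference_template(reference_slice, atlas_template):
    # Locate the atlas-derived average template (cf. region 54) in the
    # reference image by pattern matching, then cut the reference image
    # template (cf. region 33) out of the reference image at that position.
    th, tw = atlas_template.shape
    best, pos = -2.0, (0, 0)
    for y in range(reference_slice.shape[0] - th + 1):
        for x in range(reference_slice.shape[1] - tw + 1):
            c = ncc(atlas_template, reference_slice[y:y + th, x:x + tw])
            if c > best:
                best, pos = c, (y, x)
    y, x = pos
    return reference_slice[y:y + th, x:x + tw]

rng = np.random.default_rng(1)
ref = rng.random((12, 12))
organ = np.arange(9, dtype=float).reshape(3, 3)  # stand-in organ pattern
ref[5:8, 6:9] = organ                            # "organ" inside the reference
tmpl = generate_reference_template(ref, organ)
```

The returned patch is cut from the reference image itself, so the later matching stages work with patient data rather than atlas data, which is the point of this step.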
Using this reference image template region 33, the two-stage pattern matching of Embodiment 1 or the two-stage pattern matching of Embodiment 2 is performed. In this way, two-stage pattern matching can be realized even when no information indicating the affected part (affected part shape and the like) has been prepared in advance on the three-dimensional reference image.
A case is also conceivable in which the average template region generating unit 51 cuts out two-dimensional average template regions from the organ part of the human body database corresponding to the affected part 5, 10, 11 of the patient 4. In the case of two-dimensional average template regions 54, a plurality of two-dimensional average template regions are cut out, collected, and output to the contrast processing unit 22. By pattern-matching this plurality of two-dimensional average template regions against the three-dimensional reference image 31, the reference template region generating unit 18 of the contrast processing unit 22 automatically generates the reference image template region 33.
Reference Signs List
16 ... primary comparing unit; 17 ... secondary comparing unit; 18 ... reference template region generating unit; 21 ... three-dimensional image input unit; 22 ... contrast processing unit; 25 ... pose transformation unit; 26 ... treatment table control parameter calculating unit; 29 ... image contrast device; 30 ... patient positioning device; 31 ... three-dimensional reference image; 33 ... reference image template region; 36 ... three-dimensional current image; 40, 40a, 40b, 40c ... pose-transformed template region; 41 ... slice; 42 ... primary-extracted current image region; 44 ... current image template region; 45 ... three-dimensional pose-transformed reference image; 48 ... comparison execution plane; 50 ... human body database input unit; 51 ... average template region generating unit.

Claims (16)

1. An image contrast device, characterized by comprising:
a three-dimensional image input unit which reads a three-dimensional reference image taken at the time of treatment planning of radiation therapy and a three-dimensional current image taken at the time of treatment; and
a contrast processing unit which compares said three-dimensional reference image with said three-dimensional current image and calculates position correction amounts so that the position and posture of an affected part in said three-dimensional current image agree with the position and posture of the affected part in said three-dimensional reference image,
said contrast processing unit having:
a primary comparing unit which performs primary pattern matching of said three-dimensional current image against said three-dimensional reference image; and
a secondary comparing unit which performs secondary pattern matching of a prescribed search target region against a prescribed template region, wherein the prescribed template region is generated, based on the primary pattern matching result, from one of said three-dimensional reference image and said three-dimensional current image, and the prescribed search target region is generated, based on the primary pattern matching result, from the other of said three-dimensional reference image and said three-dimensional current image, i.e. the one that is not the generation basis of the prescribed template region.
2. The image contrast device as claimed in claim 1, characterized in that
said contrast processing unit comprises a reference template region generating unit which, based on affected part information prepared in said three-dimensional reference image, generates a reference image template region of a three-dimensional region from said three-dimensional reference image.
3. The image contrast device as claimed in claim 1, characterized by comprising:
a human body database input unit which acquires a human body database from a database device; and
an average template region generating unit which generates an average template region from an organ part of said human body database corresponding to the affected part of the patient,
said contrast processing unit having a reference template region generating unit which performs pattern matching of said three-dimensional reference image against said average template region and, based on the result of said pattern matching, generates a reference image template region of a three-dimensional region from said three-dimensional reference image.
4. The image contrast device as claimed in claim 2 or 3, characterized in that
in said primary pattern matching, said primary comparing unit performs pattern matching of said three-dimensional current image against said reference image template region.
5. The image contrast device as claimed in claim 4, characterized in that
said primary comparing unit generates, from said three-dimensional current image, a primary-extracted current image region serving as said search target region so that it contains the region whose correlation with said reference image template region is highest.
6. The image contrast device as claimed in claim 5, characterized in that
said contrast processing unit comprises a pose transformation unit which transforms the pose of a three-dimensional image,
said pose transformation unit generates a pose-transformed template region by transforming the pose of said reference image template region into a prescribed pose, and
in said secondary pattern matching, said secondary comparing unit performs pattern matching of said primary-extracted current image region against said pose-transformed template region serving as said prescribed template region.
7. The image contrast device as claimed in claim 6, characterized in that
said secondary comparing unit generates slices of said pose-transformed template region and performs pattern matching between said primary-extracted current image region and said slices.
8. image contrast device as claimed in claim 2 or claim 3, is characterized in that,
Described control treatment portion comprises posture transformation component, and the posture of this posture transformation component to 3 d image converts,
Described posture transformation component generates and the posture of described benchmark image template area is for conversion into the conversion of the posture after the posture of regulation template area,
When described 1 pattern match, described 1 comparing part carries out pattern match according to described posture conversion template area to described 3 dimension present images.
9. The image contrast device as claimed in claim 8, characterized in that
the primary contrast unit generates cross sections of the posture transformation template region, performs pattern matching between the three-dimensional current image and those cross sections, and performs the following operations: determining, among the plurality of cross sections, the highly correlated cross section, i.e. the cross section with the highest correlation; computing the posture change amount of the posture transformation template region; and extracting, from the three-dimensional current image, the extraction region corresponding to the highly correlated cross section.
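The section-based matching of claim 9 — try posture candidates of a template, keep the candidate and placement with the highest correlation, and record the posture change amount — can be sketched in two dimensions. This sketch restricts posture candidates to 90-degree rotations purely for simplicity; `best_posture` and the angle grid are illustrative assumptions, not the patent's method:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-shape arrays (max 1.0)."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def best_posture(image, template):
    """Try 90-degree posture candidates of a 2-D template slice at every
    placement in a 2-D image; return (angle_deg, row, col, score)."""
    best = (-np.inf, 0, 0, 0)  # (score, angle, row, col)
    for k in range(4):
        t = np.rot90(template, k)          # posture candidate
        th, tw = t.shape
        H, W = image.shape
        for r in range(H - th + 1):
            for c in range(W - tw + 1):
                s = ncc(image[r:r+th, c:c+tw], t)
                if s > best[0]:
                    best = (s, k * 90, r, c)
    score, angle, r, c = best
    return angle, r, c, score
```

The winning `angle` plays the role of the posture change amount, and the window at `(r, c)` plays the role of the extraction region.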
10. The image contrast device as claimed in claim 9, characterized in that
the primary contrast unit generates a current image template region as the predetermined template region such that it contains the extraction region extracted in the primary pattern matching,
the posture transformation unit generates a three-dimensional posture-transformed reference image region, formed by transforming the posture of the three-dimensional reference image by the posture change amount of the extraction region corresponding to the current image template region, and
in the secondary pattern matching, the secondary contrast unit performs pattern matching on the three-dimensional posture-transformed reference image region, as the retrieval target region, using the current image template region.
11. The image contrast device as claimed in claim 10, characterized in that
the secondary contrast unit generates contrast execution planes as cross sections of the posture transformation template region and performs pattern matching between the current image template region and those contrast execution planes.
12. A patient positioning device, characterized by comprising:
the image contrast device as claimed in any one of claims 1 to 3, 5 to 7, and 9 to 11; and
a treatment table control parameter calculation unit that, based on the positional correction calculated by the image contrast device, calculates the parameters for controlling each axis of the treatment table.
13. An image contrast method for contrasting a three-dimensional reference image captured during the treatment planning of radiation therapy with a three-dimensional current image captured during treatment, the image contrast method characterized by comprising:
a primary pattern matching step of performing primary pattern matching on the three-dimensional current image using the three-dimensional reference image; and
a secondary pattern matching step of performing secondary pattern matching on a predetermined retrieval target region using a predetermined template region, wherein the predetermined template region is generated from one of the three-dimensional reference image and the three-dimensional current image based on the result of the primary pattern matching, and the predetermined retrieval target region is generated, based on the result of the primary pattern matching, from whichever of the three-dimensional reference image and the three-dimensional current image was not used to generate the predetermined template region.
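The two-stage scheme of claim 13 — a coarse primary match over the whole current image, then a secondary match confined to a retrieval region generated from the primary result — might be sketched as follows. All names, the margin parameter, and the NCC matcher are illustrative assumptions, not the claimed implementation:

```python
import numpy as np

def _best_corner(volume, template):
    """Exhaustive normalized-cross-correlation search (sketch only)."""
    t = (template - template.mean()) / (template.std() + 1e-12)
    tz, ty, tx = template.shape
    best, pos = -np.inf, (0, 0, 0)
    for z in range(volume.shape[0] - tz + 1):
        for y in range(volume.shape[1] - ty + 1):
            for x in range(volume.shape[2] - tx + 1):
                w = volume[z:z+tz, y:y+ty, x:x+tx]
                w = (w - w.mean()) / (w.std() + 1e-12)
                s = float((t * w).mean())
                if s > best:
                    best, pos = s, (z, y, x)
    return pos, best

def two_stage_match(reference, current, corner, shape, margin=1):
    """Stage 1: coarse NCC search of a reference-derived template over the
    whole current image.  Stage 2: repeat the search restricted to a
    retrieval region cropped around the coarse hit."""
    tz, ty, tx = shape
    z0, y0, x0 = corner
    template = reference[z0:z0+tz, y0:y0+ty, x0:x0+tx]
    coarse, _ = _best_corner(current, template)           # primary matching
    lo = [max(0, c - margin) for c in coarse]
    hi = [min(s, c + d + margin)
          for c, d, s in zip(coarse, shape, current.shape)]
    search = current[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    fine, score = _best_corner(search, template)          # secondary matching
    return tuple(l + f for l, f in zip(lo, fine)), score
```

Restricting the second stage to the cropped retrieval region is what makes the refinement cheap relative to a second full-volume search.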
14. The image contrast method as claimed in claim 13, characterized by
comprising a reference image template region generation step of generating a reference image template region, which is a three-dimensional region, from the three-dimensional reference image,
wherein the primary pattern matching step performs primary pattern matching on the three-dimensional current image using the reference image template region.
15. A patient positioning device, characterized by comprising:
the image contrast device as claimed in claim 4; and
a treatment table control parameter calculation unit that, based on the positional correction calculated by the image contrast device, calculates the parameters for controlling each axis of the treatment table.
16. A patient positioning device, characterized by comprising:
the image contrast device as claimed in claim 8; and
a treatment table control parameter calculation unit that, based on the positional correction calculated by the image contrast device, calculates the parameters for controlling each axis of the treatment table.
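The treatment table control parameter calculation unit recited in claims 12, 15, and 16 turns the computed positional correction into per-axis couch commands. A minimal sketch, assuming the correction arrives as a 4x4 homogeneous transform and the table accepts three translations plus ZYX (yaw-pitch-roll) Euler rotations — a hypothetical convention; real couches differ in axis order, sign, and isocenter handling:

```python
import math
import numpy as np

def table_parameters(correction):
    """Decompose a 4x4 homogeneous positional correction into six
    per-axis treatment-table parameters (translations plus rotations
    in degrees, ZYX convention).  Hypothetical convention: assumes the
    rotation avoids gimbal lock, i.e. |R[2,0]| < 1."""
    R = correction[:3, :3]
    t = correction[:3, 3]
    pitch = math.asin(-R[2, 0])           # rotation about Y
    roll = math.atan2(R[2, 1], R[2, 2])   # rotation about X
    yaw = math.atan2(R[1, 0], R[0, 0])    # rotation about Z
    return {
        "tx": float(t[0]), "ty": float(t[1]), "tz": float(t[2]),
        "roll_deg": math.degrees(roll),
        "pitch_deg": math.degrees(pitch),
        "yaw_deg": math.degrees(yaw),
    }
```

Each returned value would then be mapped to one axis of the treatment table by the table controller.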
CN201210022145.2A 2011-06-10 2012-01-13 Image contrast device, patient positioning device and image contrast method Active CN102814006B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011-130074 2011-06-10
JP2011130074A JP5693388B2 (en) 2011-06-10 2011-06-10 Image collation device, patient positioning device, and image collation method

Publications (2)

Publication Number Publication Date
CN102814006A CN102814006A (en) 2012-12-12
CN102814006B true CN102814006B (en) 2015-05-06

Family

ID=47298678

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210022145.2A Active CN102814006B (en) 2011-06-10 2012-01-13 Image contrast device, patient positioning device and image contrast method

Country Status (3)

Country Link
JP (1) JP5693388B2 (en)
CN (1) CN102814006B (en)
TW (1) TWI425963B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI573565B (en) * 2013-01-04 shu-long Wang Cone-type beam tomography equipment and its positioning method
US10092251B2 (en) * 2013-03-15 2018-10-09 Varian Medical Systems, Inc. Prospective evaluation of tumor visibility for IGRT using templates generated from planning CT and contours
JP6192107B2 (en) * 2013-12-10 2017-09-06 Kddi株式会社 Video instruction method, system, terminal, and program capable of superimposing instruction image on photographing moving image
CN104135609B (en) * 2014-06-27 2018-02-23 小米科技有限责任公司 Auxiliary photo-taking method, apparatus and terminal
JP6338965B2 (en) * 2014-08-08 2018-06-06 キヤノンメディカルシステムズ株式会社 Medical apparatus and ultrasonic diagnostic apparatus
JP6452987B2 (en) * 2014-08-13 2019-01-16 キヤノンメディカルシステムズ株式会社 Radiation therapy system
US9878177B2 (en) 2015-01-28 2018-01-30 Elekta Ab (Publ) Three dimensional localization and tracking for adaptive radiation therapy
JP6164662B2 (en) * 2015-11-18 2017-07-19 みずほ情報総研株式会社 Treatment support system, operation method of treatment support system, and treatment support program
JP2018042831A (en) 2016-09-15 2018-03-22 株式会社東芝 Medical image processor, care system and medical image processing program
JP6869086B2 (en) * 2017-04-20 2021-05-12 富士フイルム株式会社 Alignment device, alignment method and alignment program
CN109859213B (en) * 2019-01-28 2021-10-12 艾瑞迈迪科技石家庄有限公司 Method and device for detecting bone key points in joint replacement surgery
JP7513980B2 (en) 2020-08-04 2024-07-10 東芝エネルギーシステムズ株式会社 Medical image processing device, treatment system, medical image processing method, and program
CN114073827B (en) * 2020-08-15 2023-08-04 中硼(厦门)医疗器械有限公司 Radiation irradiation system and control method thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1235684A (en) * 1996-10-29 1999-11-17 University of Pittsburgh of the Commonwealth System of Higher Education Device for matching X-ray images with reference images
CN101032650A (en) * 2006-03-10 2007-09-12 三菱重工业株式会社 Radiotherapy device control apparatus and radiation irradiation method
JP2009189461A (en) * 2008-02-13 2009-08-27 Mitsubishi Electric Corp Patient positioning apparatus and its method
CN101708126A (en) * 2008-09-19 2010-05-19 株式会社东芝 Image processing apparatus and x-ray computer tomography apparatus
WO2010133982A2 (en) * 2009-05-18 2010-11-25 Koninklijke Philips Electronics, N.V. Marker-free tracking registration and calibration for em-tracked endoscopic system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3748433B2 (en) * 2003-03-05 2006-02-22 株式会社日立製作所 Bed positioning device and positioning method thereof
JP2007014435A (en) * 2005-07-06 2007-01-25 Fujifilm Holdings Corp Image processing device, method and program
JP4425879B2 (en) * 2006-05-01 2010-03-03 株式会社日立製作所 Bed positioning apparatus, positioning method therefor, and particle beam therapy apparatus
JP5233374B2 (en) * 2008-04-04 2013-07-10 大日本印刷株式会社 Medical image processing system
TWI381828B (en) * 2009-09-01 2013-01-11 Univ Chang Gung Method of making artificial implants

Also Published As

Publication number Publication date
JP5693388B2 (en) 2015-04-01
TWI425963B (en) 2014-02-11
TW201249496A (en) 2012-12-16
JP2012254243A (en) 2012-12-27
CN102814006A (en) 2012-12-12

Similar Documents

Publication Publication Date Title
CN102814006B (en) Image contrast device, patient positioning device and image contrast method
CA2339497C (en) Delivery modification system for radiation therapy
EP2032039B1 (en) Parallel stereovision geometry in image-guided radiosurgery
JP6886565B2 (en) Methods and devices for tracking surface movements
EP2175931B1 (en) Systems for compensating for changes in anatomy of radiotherapy patients
CN101553281B (en) Use the target following of direct target registration
US20110176723A1 (en) Motion Correction in Cone-Beam CT by Tracking Internal and External Markers Using Cone-Beam Projection From a kV On-Board Imager: Four-Dimensional Cone-Beam CT and Tumor Tracking Implications
CN111386555B (en) Image guidance method and device, medical equipment and computer readable storage medium
CN102049106B (en) Precise image positioning system and method of radiotherapy system of interfractionated radiotherapy
JP2008043567A (en) Positioning system
JP2017035314A (en) Apparatus, method and program for radiotherapy
JP2009000369A (en) Radiotherapy equipment and positioning method of treated part
KR102619994B1 (en) Biomedical image processing devices, storage media, biomedical devices, and treatment systems
US20100195890A1 (en) Method for completing a medical image data set
JP7513980B2 (en) Medical image processing device, treatment system, medical image processing method, and program
JP2019098057A (en) Radiation therapy equipment
Talbot et al. A method for patient set-up guidance in radiotherapy using augmented reality
WO2022181663A1 (en) Radiation therapy device, medical image processing device, radiation therapy method, and program
Liu et al. Pre-treatment and real-time image guidance for a fixed-beam radiotherapy system
De Molina et al. Calibration of a C-arm X-ray System for Its Use in Tomography
Graham et al. Dynamic surface matching for patient positioning in radiotherapy
KR20230117404A (en) Medical image processing apparatus, medical image processing method, computer readable storage medium, and radiation therapy apparatus
Talbot et al. A patient position guidance system in radiation therapy using augmented reality
JP2021168824A (en) Positioning device, radiation therapy equipment, positioning method and computer program
Zhou et al. Comparing Setup Errors of CBCT Guidance System and Optical Positioning System Using Phantom Experiments

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20190123

Address after: Tokyo, Japan

Patentee after: Hitachi Ltd.

Address before: Tokyo, Japan

Patentee before: Mitsubishi Electric Co., Ltd.

TR01 Transfer of patent right