CN109934798A - Internal object information labeling method and device, electronic equipment, storage medium - Google Patents
Internal object information labeling method and device, electronic equipment, storage medium
- Publication number: CN109934798A
- Application number: CN201910068317.1A
- Authority: CN (China)
- Prior art keywords: information, image, internal object, feature information, three-dimensional reconstruction
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Landscapes
- Image Analysis (AREA)
Abstract
The present invention provides an internal object information labeling method comprising the following steps: acquiring an image of the scene within the live field of view of a target person; extracting shape feature information of an internal object from the image; matching the shape feature information against an established three-dimensional feature information library of the target person; generating, according to the matching result, a rendered image containing preset prompt information; and outputting the image data of the rendered image to the display screen of an endoscope or to a preset projection device. The present invention also provides an internal object information labeling device, electronic equipment, and a storage medium. The present invention solves the technical problem that existing body-surface projection technology lacks sufficient precision and accuracy, so that the projected image cannot be accurately fused with the corresponding position on the patient's body.
Description
Technical field
The present invention relates to the field of positioning technology, and more particularly to an internal object information labeling method and device, electronic equipment, and a storage medium.
Background technique
In traditional surgery, the surgeon generally selects the operative approach according to anatomical landmarks on the body surface or between tissues and organs, combined with the surgeon's own experience. Under such an approach the organs in the surgical field carry no prompts of any kind, so recognizability is low. Existing body-surface projection has a wide range of medical applications, but the computation of individualized body-surface projection refined to specific in-vivo objects still lacks sufficient precision, and three-dimensional imaging of the human body cannot be accurately fused with the corresponding position on the patient's body. For doctors with limited surgical experience, because body-surface projection technology is not yet perfected, real organs often differ to a greater or lesser extent from the teaching pictures and cadaver-dissection experience acquired during training; at the surgical site, organs are generally recognized under the guidance of a supervising instructor, which leads to inaccurate organ recognition and localization in the surgical field and to high learning and practice costs for junior doctors.
The above content is only intended to assist in understanding the technical solution of the present invention and does not constitute an admission that it is prior art.
Summary of the invention
The main purpose of the present invention is to provide an internal object information labeling method and device, electronic equipment, and a storage medium, aiming to solve the technical problem that existing body-surface projection technology lacks sufficient precision and accuracy, so that the projected image cannot be accurately fused with the corresponding position on the patient's body.
To achieve the above object, the present invention provides an internal object information labeling method comprising the following steps:
acquiring an image of the scene within the live field of view of a target person;
extracting shape feature information of an internal object from the image;
matching the shape feature information against an established three-dimensional feature information library of the target person;
generating, according to the matching result, a rendered image containing preset prompt information;
outputting the image data of the rendered image to the display screen of an endoscope or to a preset projection device.
Preferably, before the step of acquiring the image of the scene within the live field of view of the target person, the method further comprises:
acquiring a preoperative examination image of the target person;
performing three-dimensional reconstruction on the preoperative examination image;
based on the three-dimensional reconstruction result, establishing a three-dimensional feature information library storing three-dimensional reconstruction image feature information.
Preferably, the step of extracting the shape feature information of the internal object from the image specifically comprises:
performing feature extraction and/or image processing on the live captured image to identify the internal object in the live captured image;
obtaining the shape feature information of the internal object based on the recognition result.
Preferably, the step of matching the shape feature information against the established three-dimensional feature information library of the target person specifically comprises:
comparing the feature point position information of the internal object with the three-dimensional reconstruction image feature information, and calculating the similarity between the feature point position information and the three-dimensional reconstruction image feature information;
judging whether the similarity is greater than a preset threshold;
if so, determining that the shape feature information matches the three-dimensional reconstruction image feature information.
Preferably, the shape feature information includes feature point position information of the internal object and/or contour feature information of the internal object.
Preferably, after the step of outputting the image data of the rendered image to the preset projection device, the method further comprises:
reacquiring, at a preset period, the image of the scene within the live field of view of the target person to update the feature point position information of the internal object;
comparing the feature point position information of the internal object before and after the update to judge whether the feature point positions of the internal object have changed;
if so, re-executing the step of extracting the shape feature information of the internal object from the image.
Preferably, the preset prompt information includes one or more of the following: the name of the internal object, status information of the internal object, and surgical procedure precautions.
Preferably, the step of generating, according to the matching result, the rendered image containing the preset prompt information specifically comprises:
obtaining, from the three-dimensional feature information library, the preset prompt information and the three-dimensional reconstruction image feature information matching the shape feature information;
processing the three-dimensional reconstruction image feature information and the preset prompt information to generate the rendered image containing the preset prompt information.
In addition, to achieve the above object, the present invention also provides an internal object information labeling device comprising a memory, a processor, and an internal object information labeling program stored on the memory and executable on the processor, wherein the internal object information labeling program, when executed by the processor, implements the steps of the internal object information labeling method described above.
In addition, to achieve the above object, the present invention also provides electronic equipment comprising the internal object information labeling device described above.
In addition, to achieve the above object, the present invention also provides a readable storage medium on which an internal object information labeling program is stored, wherein the internal object information labeling program, when executed by a processor, implements the steps of the internal object information labeling method described above.
The embodiments of the present invention propose an internal object information labeling method and device, electronic equipment, and a storage medium. An image of the scene within the live field of view of the target person is acquired in real time, shape feature information of the internal object is extracted from the live captured image, and the shape feature information of the internal object is then matched against the three-dimensional feature information library of the target person, thereby determining the three-dimensional feature information that matches the actual conditions at the surgical site. A rendered image containing preset prompt information is then generated and output to the display screen of an endoscope or to a preset projection device. The solution is reliable and stable, achieves high-precision, accurately positioned body-surface projection, and accurately fuses the projected image with the corresponding position on the patient's body, helping to reduce the learning and practice cost of live surgical teaching and improving the effect of on-site teaching.
Brief description of the drawings
Fig. 1 is a block diagram of the components of the internal object information labeling device of the present invention;
Fig. 2 is a flow diagram of the first embodiment of the internal object information labeling method of the present invention;
Fig. 3 is a flow diagram of the second embodiment of the internal object information labeling method of the present invention;
Fig. 4 is a flow diagram of the third embodiment of the internal object information labeling method of the present invention;
Fig. 5 is a flow diagram of the fourth embodiment of the internal object information labeling method of the present invention.
The realization of the object of the present invention, its functional characteristics and its advantages will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Specific embodiment
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit the present invention.
As shown in Fig. 1, the internal object information labeling device (i.e., the central control machine) involved in the embodiments of the present invention can be any of various devices used for central control, such as a single-chip microcomputer or an MCU (Microcontroller Unit). As shown in Fig. 1, the running environment of the internal object information labeling device may specifically include: a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to implement connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include standard wired and wireless interfaces. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a stable non-volatile memory such as a magnetic disk memory. Optionally, the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
It will be understood by those skilled in the art that the structure of the running environment shown in Fig. 1 does not constitute a limitation on the internal object information labeling device; more or fewer components than illustrated may be included, certain components may be combined, or a different arrangement of components may be used.
As shown in Fig. 1, the memory 1005, as a computer-readable storage medium, may include an operating system, a network communication module, a user interface module, and an internal object information labeling program.
In the terminal shown in Fig. 1, the network interface 1004 is mainly used to connect to a background server and perform data communication with it; the user interface 1003 is mainly used to connect to a client (user terminal) and perform data communication with it; and the processor 1001 may be used to call the internal object information labeling program stored in the memory 1005 and perform the following operations:
acquiring an image of the scene within the live field of view of a target person;
extracting shape feature information of an internal object from the image;
matching the shape feature information against an established three-dimensional feature information library of the target person;
generating, according to the matching result, a rendered image containing preset prompt information;
outputting the image data of the rendered image to the display screen of an endoscope or to a preset projection device.
Further, the processor 1001 may call the internal object information labeling program stored in the memory 1005 and also perform the following operations:
acquiring a preoperative examination image of the target person;
performing three-dimensional reconstruction on the preoperative examination image;
based on the three-dimensional reconstruction result, establishing a three-dimensional feature information library storing three-dimensional reconstruction image feature information.
Further, the processor 1001 may call the internal object information labeling program stored in the memory 1005 and also perform the following operations:
performing feature extraction and/or image processing on the live captured image to identify the internal object in the live captured image;
obtaining the shape feature information of the internal object based on the recognition result.
Further, the processor 1001 may call the internal object information labeling program stored in the memory 1005 and also perform the following operations:
comparing the feature point position information of the internal object with the three-dimensional reconstruction image feature information, and calculating the similarity between the feature point position information and the three-dimensional reconstruction image feature information;
judging whether the similarity is greater than a preset threshold;
if so, determining that the shape feature information matches the three-dimensional reconstruction image feature information.
Preferably, the shape feature information includes feature point position information of the internal object and/or contour feature information of the internal object.
Further, the processor 1001 may call the internal object information labeling program stored in the memory 1005 and also perform the following operations:
reacquiring, at a preset period, the image of the scene within the live field of view of the target person to update the feature point position information of the internal object;
comparing the feature point position information of the internal object before and after the update to judge whether the feature point positions of the internal object have changed;
if so, re-executing the step of extracting the shape feature information of the internal object from the image.
Preferably, the preset prompt information includes one or more of the following: the name of the internal object, status information of the internal object, and surgical procedure precautions.
Further, the processor 1001 may call the internal object information labeling program stored in the memory 1005 and also perform the following operations:
obtaining, from the three-dimensional feature information library, the preset prompt information and the three-dimensional reconstruction image feature information matching the shape feature information;
processing the three-dimensional reconstruction image feature information and the preset prompt information to generate the rendered image containing the preset prompt information.
Based on the above hardware structure, the various embodiments of the internal object information labeling method of the present invention are proposed.
The present invention provides a kind of internal object information labeling method.
Referring to Fig. 2, Fig. 2 is a flow diagram of the first embodiment of the internal object information labeling method of the present invention.
In the present embodiment, the method comprises the following steps:
Step S10: acquiring an image of the scene within the live field of view of the target person;
The target person here may specifically be a patient on whom surgery is to be performed. Specifically, an imaging device such as a camera is used at the surgical site to capture the image of the scene within the live field of view of the target person; the imaging device is preferably a high-resolution device with good imaging capability.
When shooting, the camera of the imaging device is aimed at the live field of view of the target person. The live field of view refers to the range that can be seen during the live operation. Specifically, the imaging device captures the whole body of the target person, or the specific human organ on which surgery is to be performed together with the other organs in its surrounding area. For example, if the organ to be operated on is the patient's heart, the live captured image should include the patient's heart and the other organs in its surrounding area (such as the left lung, the right lung, the liver, and the arteries/veins around the heart). The live captured image of the target person acquired by the imaging device is uploaded to a background or cloud server for subsequent processing and analysis.
Step S20: extracting the shape feature information of the internal object from the image;
One specific implementation of step S20 includes: Step S21, performing feature extraction and/or image processing on the live captured image to identify the internal object in the live captured image;
Feature extraction and/or image processing are performed on the live captured image (frame image), and existing conventional techniques may be used, for example feature extraction by edge detection, corner detection, or curvature estimation. After feature extraction, image processing may also be performed. Feature extraction and image processing make it possible to conveniently and reliably identify the shape and feature points of the internal object in the live captured image. It should be noted that the identification referred to here does not necessarily require determining which specific organ the internal object is.
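By way of illustration only (not part of the original disclosure), the following Python sketch shows the kind of edge-plus-corner feature extraction described for step S21, using the open-source OpenCV library; the function name extract_feature_points and all threshold values are assumptions introduced for this example.

```python
import cv2
import numpy as np

def extract_feature_points(live_frame_bgr):
    """Extract candidate feature points of the internal object from a live frame.

    A simplified sketch of step S21: edge detection plus corner detection.
    Thresholds are illustrative, not values taken from the patent.
    """
    gray = cv2.cvtColor(live_frame_bgr, cv2.COLOR_BGR2GRAY)

    # Edge detection (Canny) to outline the shape of the internal object.
    edges = cv2.Canny(gray, 50, 150)

    # Corner detection (Shi-Tomasi) to obtain point features.
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=5)
    points = corners.reshape(-1, 2) if corners is not None else np.empty((0, 2))
    return edges, points
```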
Step S22: obtaining the shape feature information of the internal object based on the recognition result.
After the internal object is identified in the live captured image, the shape feature information of the internal object is further obtained. The shape feature information of the internal object is used to characterize the specific shape of the internal object, such as the shape contour features of the internal object and the relative position of the internal object within the target person's body.
Preferably, the shape feature information includes feature point position information of the internal object and/or contour feature information of the internal object. The feature point position information of the internal object may specifically be the coordinates of the feature points of the internal object. The feature points of the internal object may be edges, focal points, blobs, or corner points in the live captured image, and may specifically be determined based on a geometric feature extraction method (for example, according to the contour curvature of the internal object) or other extraction methods; after the feature points of the internal object are determined, the coordinates of the feature points are further determined. The coordinates of the feature points can characterize the relative position of the internal object within the target person's body, as well as the relative positional relationships of the organs within the internal object.
The contour feature information of the internal object can be determined from the determined feature point position information of the internal object, for example by determining, from the coordinates of the feature points, a profile vector of the internal object, where the profile vector characterizes the contour of the internal object.
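As an illustration only, the following sketch derives a simple profile vector from the feature point coordinates; approximating the contour by the convex hull of the feature points is an assumption introduced for this example, since the text does not fix a specific construction.

```python
import numpy as np
from scipy.spatial import ConvexHull

def contour_feature(points_xy):
    """Build a simple profile vector from feature point coordinates.

    Illustrative only: the contour is approximated by the convex hull of the
    feature points and encoded as hull-vertex offsets from the centroid.
    """
    pts = np.asarray(points_xy, dtype=float)
    centroid = pts.mean(axis=0)
    hull = ConvexHull(pts)               # needs at least 3 non-collinear points
    hull_points = pts[hull.vertices]
    # Offsets of the hull vertices from the centroid serve as the profile vector.
    return (hull_points - centroid).flatten()
```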
Step S30: matching the shape feature information against the established three-dimensional feature information library of the target person;
The three-dimensional feature information library of the target person is an information library established in advance, based on the three-dimensional reconstruction result of the preoperative examination image of the target person. The preoperative examination image may specifically be a computed tomography (CT) scan image. Before the target person undergoes surgery, a CT scan is performed on the target person to obtain the corresponding CT image. Three-dimensional reconstruction is then performed on the CT image to generate a corresponding three-dimensional reconstruction image. Based on the generated three-dimensional reconstruction image, three-dimensional reconstruction image feature information is extracted and the three-dimensional feature information library is constructed. The three-dimensional reconstruction image feature information may be the feature information of the image feature points of the three-dimensional reconstruction image, such as the feature point position information and/or the three-dimensional contour feature information of the internal object.
In step S30, the shape feature information of the internal object extracted from the live captured image is compared with the three-dimensional reconstruction image feature information in the three-dimensional feature information library of the target person, so as to match the three-dimensional reconstruction image feature information that meets a preset condition. The preset condition may be that the similarity between the three-dimensional reconstruction image feature information and the shape feature information is greater than a preset threshold.
For the establishment of the three-dimensional feature information library and the specific implementation of step S30, see the other embodiments below.
Step S40: generating, according to the matching result, the rendered image containing the preset prompt information;
After the three-dimensional reconstruction image feature information meeting the preset condition is matched, step S40 is executed.
One specific implementation of step S40 includes:
Step S41: obtaining, from the three-dimensional feature information library, the preset prompt information and the three-dimensional reconstruction image feature information matching the shape feature information;
The preset prompt information is prompt information set in advance, used to remind or prompt the operator performing the surgery (such as the operating doctor) of matters requiring attention during the operation. It can be set as needed and may specifically include preset target object prompt information and surgical planning prompt information. The preset target object prompt information may include, but is not limited to, one or more of the following: the name of the internal object and the status information of the internal object. The name of the internal object may be the name of a certain organ, such as the heart, the left lung, an artery, or a vein; the status information of the internal object may be the normal physiological indices of the internal object, such as the patient's normal heart rate or the patient's normal arterial pressure range.
The surgical planning prompt information is related to the plan for the current operation and may include, but is not limited to, one or more of the following: surgical procedure precautions and the surgical procedure plan. The surgical procedure precautions are used to remind the operator of specific operating items, such as "take care to avoid nerve locations". The surgical procedure plan is used to remind the operator of the specific surgical flow/steps.
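For illustration only, the preset prompt information described above could be organized as a simple structure like the following sketch; the field names, organ names, and numeric ranges are assumptions chosen for the example, not data from the patent.

```python
# Illustrative structure for preset prompt information (all values are examples).
preset_prompt_info = {
    "target_object_prompts": {
        "heart": {
            "name": "heart",
            "status": {"normal_heart_rate_bpm": (60, 100),
                       "normal_arterial_pressure_mmHg": (90, 140)},
        },
        "left_lung": {"name": "left lung", "status": {}},
    },
    "surgical_plan_prompts": {
        "precautions": ["take care to avoid nerve locations"],
        "procedure_steps": ["step 1: ...", "step 2: ..."],
    },
}
```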
In addition, the three-dimensional reconstruction image feature information matching the shape feature information needs to be obtained from the three-dimensional feature information library. For the matching process, refer to the specific implementation of step S30 above.
Step S42: processing the three-dimensional reconstruction image feature information and the preset prompt information to generate the rendered image containing the preset prompt information.
The specific way of processing the three-dimensional reconstruction image feature information and the preset prompt information is not restricted; for example, a background GPU may be called to perform rendering, and, based on the three-dimensional reconstruction image feature information and the preset prompt information, a rendered image containing the preset prompt information is produced in which each internal object can be distinguished. The type of the rendered image is preferably a two-dimensional image.
Step S50: outputting the image data of the rendered image to the display screen of the endoscope or to the preset projection device.
Optionally, after the rendered image is generated, its image data is output to the display screen of the endoscope so that the rendered image is displayed on that screen. Outputting the image data of the rendered image to the display screen of the endoscope makes full use of the existing endoscopic medical equipment at the surgical site and improves the utilization rate of medical equipment.
Optionally, after the rendered image is generated, its image data is output to the preset projection device, which performs image projection after data conversion. The specific device type of the preset projection device is not limited; a projector may be selected. The preset projection device may establish a communication connection with the internal object information labeling device of the present invention in a wired or wireless manner in order to receive the image data of the rendered image. The preset projection device projects the image onto the internal object of the target person's body; in this way, the preset prompt information is projected onto the internal object at the surgical site.
It should be noted that, when generating the rendered image, the visual configuration of the rendered image can be set as needed. For example, the interior display colors of the contours of different organs can be set to different colors, so that different organs are visually distinguished by their rendered colors; for instance, the display color of the liver can be set to brown and the display color of the kidney to green. The contour lines of different organs may also use thick dashed lines, etc.
Through the visual configuration, the visualization level of the image projected by the projector onto the internal object of the target person's body can be improved.
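As an illustration only, the following sketch shows a simplified CPU-side version of the kind of visual configuration described above (the text itself mentions GPU rendering): each organ contour is drawn in its own color and the preset prompt text is overlaid with OpenCV. The color values other than brown for the liver and green for the kidney, and the helper name render_prompt_overlay, are assumptions for this example.

```python
import cv2
import numpy as np

# Illustrative per-organ display colours (BGR); brown for liver and green for
# kidney follow the examples in the text, the remaining values are assumptions.
ORGAN_COLORS = {"liver": (42, 42, 165), "kidney": (0, 200, 0), "heart": (0, 0, 200)}

def render_prompt_overlay(canvas_hw, organ_contours, prompt_texts):
    """Draw each organ contour in its own colour and overlay preset prompt text.

    canvas_hw:      (height, width) of the two-dimensional rendered image.
    organ_contours: dict mapping organ name -> Nx2 array of contour points.
    prompt_texts:   dict mapping organ name -> prompt string to display.
    """
    image = np.zeros((canvas_hw[0], canvas_hw[1], 3), dtype=np.uint8)
    for organ, contour in organ_contours.items():
        color = ORGAN_COLORS.get(organ, (255, 255, 255))
        pts = np.asarray(contour, dtype=np.int32).reshape(-1, 1, 2)
        cv2.polylines(image, [pts], isClosed=True, color=color, thickness=2)
        x, y = pts[0, 0]
        cv2.putText(image, prompt_texts.get(organ, organ), (int(x), int(y) - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 1)
    return image
```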
In the present embodiment, the image of the scene within the live field of view of the target person is acquired in real time, the shape feature information of the internal object is extracted from the live captured image, and the shape feature information of the internal object is then matched against the three-dimensional feature information library of the target person, thereby determining the three-dimensional feature information that matches the actual conditions at the surgical site. A rendered image containing the preset prompt information is then generated, and the image, which matches the actual internal object at the surgical site and contains the preset prompt information, is projected onto the internal object of the target person at the surgical site. The solution is reliable and stable, achieves high-precision, accurately positioned body-surface projection, and accurately fuses the projected image with the corresponding position on the patient's body, helping to reduce the learning and practice cost of live surgical teaching and improving the effect of on-site teaching.
Further, based on the first embodiment of the internal object information labeling method of the present invention, a second embodiment of the internal object information labeling method of the present invention is proposed. As shown in Fig. 3, in the present embodiment, before step S10, the method further comprises:
Step S60: acquiring the preoperative examination image of the target person;
Before the target person undergoes surgery, an imaging examination is performed on the target person with imaging equipment (such as a CT machine). The internal object information labeling device of the present invention is connected to the imaging equipment and obtains the preoperative examination image of the target person by data import; alternatively, the data of the preoperative examination image is manually entered into the internal object information labeling device of the present invention.
The preoperative examination image is preferably a CT scan image. CT scanning uses precisely collimated X-ray beams, gamma rays, ultrasonic waves, etc., together with highly sensitive detectors, to perform cross-sectional scans around a certain part of the human body one section after another; it has the characteristics of fast scanning time and clear images and can be used for the examination of a variety of diseases.
Step S61: performing three-dimensional reconstruction on the preoperative examination image;
Three-dimensional reconstruction is the process of reconstructing three-dimensional information from single-view or multi-view images. In the embodiments of the present invention, three-dimensional reconstruction is performed based on the preoperative examination image and may specifically include operations such as camera calibration, feature extraction, stereo matching, and three-dimensional reconstruction. The technique used for the three-dimensional reconstruction is not specifically limited here.
Step S62: based on the three-dimensional reconstruction result, establishing the three-dimensional feature information library storing the three-dimensional reconstruction image feature information.
After three-dimensional reconstruction is performed on the preoperative examination image, a three-dimensional reconstruction image (or three-dimensional reconstruction model) is generated. Based on the generated three-dimensional reconstruction image (or three-dimensional reconstruction model), the three-dimensional reconstruction image feature information is extracted. The three-dimensional reconstruction image feature information may be the feature information of the image feature points of the three-dimensional reconstruction image, such as the feature point position information and/or the three-dimensional contour feature information of the internal object. Then, the three-dimensional feature information library storing the three-dimensional reconstruction image feature information is established. It should be noted that different target persons may register different personal accounts, and a corresponding three-dimensional feature information library is established for each personal account.
In this way, by performing three-dimensional reconstruction on the preoperative examination image of the target person, extracting the three-dimensional reconstruction image feature information, and establishing the corresponding three-dimensional feature information library, the subsequent information matching and rendered-image generation are facilitated. The three-dimensional feature information library stores three-dimensional reconstruction image feature information consistent with the preoperative internal objects of the target person, which helps to generate a rendered image with high precision.
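By way of illustration only, the per-patient library described above could be organized as in the following minimal sketch; the class name ThreeDFeatureLibrary, the storage layout, and the field names are assumptions introduced for this example.

```python
import numpy as np

class ThreeDFeatureLibrary:
    """Minimal sketch of a per-patient three-dimensional feature information library.

    Each registered personal account maps to the feature information extracted
    from that patient's preoperative three-dimensional reconstruction.
    """

    def __init__(self):
        self._store = {}  # account id -> {"points_3d": ..., "contour_3d": ..., "prompts": ...}

    def register(self, account_id, reconstruction_points_3d, contour_3d, prompts):
        self._store[account_id] = {
            "points_3d": np.asarray(reconstruction_points_3d, dtype=float),
            "contour_3d": np.asarray(contour_3d, dtype=float),
            "prompts": prompts,
        }

    def lookup(self, account_id):
        return self._store[account_id]
```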
Further, based on the second embodiment of the internal object information labeling method of the present invention, a third embodiment of the internal object information labeling method of the present invention is proposed. As shown in Fig. 4, in the present embodiment, the step of matching the shape feature information against the established three-dimensional feature information library of the target person specifically comprises:
Step S31: comparing the feature point position information of the internal object with the three-dimensional reconstruction image feature information, and calculating the similarity between the feature point position information and the three-dimensional reconstruction image feature information;
The feature point position information of the internal object is extracted from a two-dimensional image, while the three-dimensional reconstruction image feature information is extracted based on the three-dimensional reconstruction result; therefore, dimensionality reduction or spatial scaling may first be applied to the three-dimensional reconstruction image feature information to maintain the correspondence with the feature point position information. Similarity calculation is then performed between the processed three-dimensional reconstruction image feature information and the feature point position information of the internal object to determine the degree of similarity between the two.
A specific way of calculating the similarity is to first calculate, for each feature point position in the feature point position information of the internal object, its degree of proximity to the corresponding feature point position in the processed three-dimensional reconstruction image feature information, and then take a weighted sum of the individual proximities; the result is the similarity obtained by this calculation.
Step S32: judging whether the similarity is greater than a preset threshold;
The preset threshold can be set according to actual needs, for example 90%.
Step S33: if so, determining that the shape feature information matches the three-dimensional reconstruction image feature information.
If the similarity is greater than the preset threshold, the shape feature information extracted from the live captured image has a high degree of matching with the three-dimensional reconstruction image feature information, and the corresponding three-dimensional reconstruction image feature information can then be extracted in order to generate the rendered image. If the similarity is less than or equal to the preset threshold, the shape feature information extracted from the live captured image does not match the three-dimensional reconstruction image feature information well; in this case, the processing parameters of the three-dimensional reconstruction image feature information can be adjusted so that the processed three-dimensional reconstruction image feature information has a higher degree of matching with the feature point position information of the internal object, thereby achieving the matching of the shape feature information with the three-dimensional reconstruction image feature information.
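As an illustration only, the following sketch implements the matching of steps S31 to S33 under stated assumptions: the dimension reduction is done by discarding the depth axis and applying a scale factor, and each per-point proximity is an exponential function of the point distance, neither of which is fixed by the text. The function name match_similarity and all parameter values are assumptions.

```python
import numpy as np

def match_similarity(points_2d, points_3d, weights=None,
                     scale=1.0, sigma=20.0, threshold=0.9):
    """Weighted per-point similarity between live 2-D feature points and the
    dimension-reduced 3-D reconstruction feature points.

    Returns (matched, similarity): matched is True when the similarity exceeds
    the preset threshold.
    """
    projected = np.asarray(points_3d, dtype=float)[:, :2] * scale  # simple dimension reduction
    live = np.asarray(points_2d, dtype=float)
    n = min(len(live), len(projected))
    if n == 0:
        return False, 0.0

    # Proximity of each live feature point to its corresponding projected point.
    dists = np.linalg.norm(live[:n] - projected[:n], axis=1)
    proximities = np.exp(-dists / sigma)          # 1.0 when the points coincide

    weights = np.ones(n) / n if weights is None else np.asarray(weights[:n], dtype=float)
    similarity = float(np.dot(weights, proximities) / weights.sum())
    return similarity > threshold, similarity
```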
Further, based on the first embodiment of the internal object information labeling method of the present invention, a fourth embodiment of the internal object information labeling method of the present invention is proposed. As shown in Fig. 5, in the present embodiment, after the step of outputting the image data of the rendered image to the preset projection device, the method further comprises:
Step S70: reacquiring, at a preset period, the image of the scene within the live field of view of the target person to update the feature point position information of the internal object;
Specifically, a camera is arranged at the surgical site for shooting. After the internal object information labeling device of the present invention outputs the image data of the rendered image to the preset projection device, it reacquires, at the preset period, the image of the scene within the live field of view of the target person. The preset period can be set as needed, for example 200 milliseconds. Image analysis is performed on the reacquired live captured image of the target person, and the specific positions of the feature points of the internal object of the target person in the live captured image, which may specifically be the coordinate information of the feature points, are redetermined. Based on the determined specific positions of the feature points, the feature point position information of the internal object is updated.
Step S71: comparing the feature point position information of the internal object before and after the update to judge whether the feature point positions of the internal object have changed;
Specifically, the feature point positions of the internal object before and after the update may be compared and the degree of difference between the two calculated; if the degree of difference is greater than a preset difference threshold, it is determined that the feature point positions of the internal object have changed. The preset difference threshold can be set as needed. Alternatively, the feature points of the internal object are tracked, and when the number of points tracked affects the accuracy of calculating the pose matrix, recognition and detection are performed again and the image is re-rendered, repeating in this way.
Step S72: if so, re-executing the step of extracting the shape feature information of the internal object from the image.
When it is judged that the feature point positions of the internal object have changed, the projected position of the projected image of the internal object no longer matches the actual position of the internal object. In this case, the shape feature information of the internal object needs to be extracted again from the live captured image updated in real time, and the subsequent steps are executed, so that the projected position of the projected image of the internal object matches the actual position of the internal object in real time. This prevents the projected image of the internal object from deviating from the actual internal object and causing inaccurate projection, so that the body-surface projection accurately fits the internal object of the target person and the validity and accuracy of the projection are guaranteed.
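For illustration only, the periodic update of steps S70 to S72 could take the form of the following loop; the callables capture_frame, extract_points, and rerun_pipeline are placeholders for the acquisition, extraction, and re-labeling stages, and the 200 ms period and pixel difference threshold are example values.

```python
import time
import numpy as np

def run_update_loop(capture_frame, extract_points, rerun_pipeline,
                    period_s=0.2, diff_threshold=10.0):
    """Periodically reacquire the live image and re-run labeling when the
    internal object's feature points have moved.

    capture_frame:  callable returning the current live frame.
    extract_points: callable mapping a frame to an Nx2 array of feature points.
    rerun_pipeline: callable that re-executes extraction, matching and rendering.
    diff_threshold: preset difference threshold in pixels (illustrative value).
    """
    previous = extract_points(capture_frame())
    while True:
        time.sleep(period_s)                      # preset period, e.g. 200 ms
        current = extract_points(capture_frame())
        n = min(len(previous), len(current))
        if n == 0:
            difference = float("inf")
        else:
            difference = float(np.mean(np.linalg.norm(current[:n] - previous[:n], axis=1)))
        if difference > diff_threshold:
            # Feature point positions changed: re-extract, re-match, re-render.
            rerun_pipeline()
        previous = current
```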
In addition, the present invention also provides electronic equipment comprising the internal object information labeling device described above. The electronic equipment may specifically be any of various devices, such as a computer, a smartphone, a tablet computer, or a notebook computer.
In addition, the present invention also provides a readable storage medium. An internal object information labeling program is stored on the storage medium, and the internal object information labeling program, when executed by a processor, implements the steps of any of the internal object information labeling methods described above.
The specific implementations of the internal object information labeling device and the storage medium of the present invention are essentially the same as the embodiments of the internal object information labeling method described above and are therefore not repeated here.
It should be noted that, in this document, the terms "include", "comprise" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to such a process, method, article or device. In the absence of further restrictions, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article or device that includes that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above specific embodiments. The above specific embodiments are merely illustrative rather than restrictive; under the inspiration of the present invention, those skilled in the art can devise many further forms without departing from the purpose of the present invention and the scope protected by the claims, and all of these fall within the protection of the present invention.
Claims (10)
1. An internal object information labeling method, characterized in that the method comprises the following steps:
acquiring an image of the scene within the live field of view of a target person;
extracting shape feature information of an internal object from the image;
matching the shape feature information against an established three-dimensional feature information library of the target person;
generating, according to the matching result, a rendered image containing preset prompt information;
outputting the image data of the rendered image to the display screen of an endoscope or to a preset projection device.
2. The internal object information labeling method according to claim 1, characterized in that, before the step of acquiring the image of the scene within the live field of view of the target person, the method further comprises:
acquiring a preoperative examination image of the target person;
performing three-dimensional reconstruction on the preoperative examination image;
based on the three-dimensional reconstruction result, establishing a three-dimensional feature information library storing three-dimensional reconstruction image feature information.
3. The internal object information labeling method according to claim 1, characterized in that the step of extracting the shape feature information of the internal object from the image specifically comprises:
performing feature extraction and/or image processing on the live captured image to identify the internal object in the live captured image;
obtaining the shape feature information of the internal object based on the recognition result.
4. The internal object information labeling method according to claim 2, characterized in that the step of matching the shape feature information against the established three-dimensional feature information library of the target person specifically comprises:
comparing the feature point position information of the internal object with the three-dimensional reconstruction image feature information, and calculating the similarity between the feature point position information and the three-dimensional reconstruction image feature information;
judging whether the similarity is greater than a preset threshold;
if so, determining that the shape feature information matches the three-dimensional reconstruction image feature information.
5. The internal object information labeling method according to any one of claims 1, 3 and 4, characterized in that the shape feature information includes feature point position information of the internal object and/or contour feature information of the internal object.
6. The internal object information labeling method according to claim 5, characterized in that, after the step of outputting the image data of the rendered image to the preset projection device, the method further comprises:
reacquiring, at a preset period, the image of the scene within the live field of view of the target person to update the feature point position information of the internal object;
comparing the feature point position information of the internal object before and after the update to judge whether the feature point positions of the internal object have changed;
if so, re-executing the step of extracting the shape feature information of the internal object from the image.
7. The internal object information labeling method according to claim 1, characterized in that the step of generating, according to the matching result, the rendered image containing the preset prompt information specifically comprises:
obtaining, from the three-dimensional feature information library, the preset prompt information and the three-dimensional reconstruction image feature information matching the shape feature information;
processing the three-dimensional reconstruction image feature information and the preset prompt information to generate the rendered image containing the preset prompt information.
8. An internal object information labeling device, characterized in that the internal object information labeling device comprises: a memory, a processor, and an internal object information labeling program stored on the memory and executable on the processor, wherein the internal object information labeling program, when executed by the processor, implements the steps of the internal object information labeling method according to any one of claims 1 to 7.
9. Electronic equipment, characterized in that the electronic equipment comprises the internal object information labeling device according to claim 8.
10. A readable storage medium, characterized in that an internal object information labeling program is stored on the readable storage medium, and the internal object information labeling program, when executed by a processor, implements the steps of the internal object information labeling method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910068317.1A | 2019-01-24 | 2019-01-24 | Internal object information labeling method and device, electronic equipment, storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109934798A true CN109934798A (en) | 2019-06-25 |
Family
ID=66985161
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130257910A1 (en) * | 2012-03-28 | 2013-10-03 | Samsung Electronics Co., Ltd. | Apparatus and method for lesion diagnosis |
CN104000655A (en) * | 2013-02-25 | 2014-08-27 | 西门子公司 | Combined surface reconstruction and registration for laparoscopic surgery |
CN103246350A (en) * | 2013-05-14 | 2013-08-14 | 中国人民解放军海军航空工程学院 | Man-machine interface device and method for achieving auxiliary information prompting based on regions of interest |
CN106491216A (en) * | 2016-10-28 | 2017-03-15 | 苏州朗开医疗技术有限公司 | The internal destination object alignment system of one kind diagnosis and medical treatment alignment system |
CN108280523A (en) * | 2018-03-20 | 2018-07-13 | 中国电子科技集团公司电子科学研究院 | Overhaul of the equipments based on augmented reality and maintaining method, device and storage medium |
CN109166625A (en) * | 2018-10-10 | 2019-01-08 | 欧阳聪星 | A kind of virtual edit methods of tooth and system |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113643226A (en) * | 2020-04-27 | 2021-11-12 | 成都术通科技有限公司 | Labeling method, device, equipment and medium |
CN113643226B (en) * | 2020-04-27 | 2024-01-19 | 成都术通科技有限公司 | Labeling method, labeling device, labeling equipment and labeling medium |
CN112057107A (en) * | 2020-09-14 | 2020-12-11 | 无锡祥生医疗科技股份有限公司 | Ultrasonic scanning method, ultrasonic equipment and system |
CN115713664A (en) * | 2022-12-06 | 2023-02-24 | 浙江中测新图地理信息技术有限公司 | Intelligent marking method and device for fire-fighting acceptance check |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |