CN117138244A - Self-adaptive near amblyopia therapeutic instrument - Google Patents

Self-adaptive near amblyopia therapeutic instrument

Info

Publication number
CN117138244A
Authority
CN
China
Prior art keywords
laser
model
convolution
image
pupil
Prior art date
Legal status
Pending
Application number
CN202311192741.XA
Other languages
Chinese (zh)
Inventor
倪冰冰 (Ni Bingbing)
林垠昕 (Lin Yinxin)
Current Assignee
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date
Filing date
Publication date
Application filed by Shanghai Jiaotong University
Priority to CN202311192741.XA
Publication of CN117138244A
Legal status: Pending

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61NELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N5/00Radiation therapy
    • A61N5/06Radiation therapy using light
    • A61N5/0613Apparatus adapted for a specific treatment
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61NELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N5/00Radiation therapy
    • A61N5/06Radiation therapy using light
    • A61N5/067Radiation therapy using light using laser light
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61NELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N5/00Radiation therapy
    • A61N5/06Radiation therapy using light
    • A61N2005/0635Radiation therapy using light characterised by the body area to be irradiated
    • A61N2005/0643Applicators, probes irradiating specific body areas in close proximity
    • A61N2005/0645Applicators worn by the patient
    • A61N2005/0647Applicators worn by the patient the applicator adapted to be worn on the head
    • A61N2005/0648Applicators worn by the patient the applicator adapted to be worn on the head the light being directed to the eyes

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

An adaptive near-amblyopia treatment device comprising: a pair of symmetrically and sequentially arranged optical lens groups, shielding assemblies, laser assemblies, and optical sensing assemblies, together with a control module connected to the shielding assembly, the laser assembly, and the optical sensing assembly, wherein: the control module computes the irradiation region from the images captured by the optical sensing assembly and outputs control instructions to the shielding assembly and the laser assembly, so as to regulate illumination intensity, frequency, irradiation angle, and irradiation region. The invention uses image recognition to compute the laser light intensity, light frequency, and irradiation region required for treatment, adaptively adjusting these parameters while shielding the macular region from irradiation.

Description

Self-adaptive near amblyopia therapeutic instrument
Technical Field
The invention relates to a technology in the field of ophthalmic laser equipment, and in particular to an adaptive near-amblyopia therapeutic instrument that automatically adjusts illumination intensity, frequency, irradiation angle, and irradiation region.
Background
Existing near-amblyopia treatments irradiate the eye with low-intensity laser light to improve fundus blood circulation, increase retinal blood flow, and restore choroidal thickness. However, prior-art devices cannot adaptively adjust their output power: irradiation that is too weak produces no therapeutic effect, while excessive irradiation risks retinal damage.
Disclosure of Invention
To address the inability of prior low-intensity laser treatments for near amblyopia to adapt their output, the invention provides an adaptive near-amblyopia therapeutic instrument that computes the required laser light intensity, light frequency, and irradiation region through image recognition, adaptively adjusting illumination intensity, frequency, irradiation angle, and irradiation region while shielding the macular region.
The invention is realized by the following technical scheme:
the invention relates to a self-adaptive near amblyopia therapeutic instrument, which comprises: a pair of optical lens group, shielding subassembly, laser subassembly, optical perception subassembly that symmetry set gradually, and the control module who links to each other with shielding subassembly, laser subassembly and optical perception subassembly respectively, wherein: the control module calculates an irradiation area according to the image acquired by the optical sensing assembly and outputs control instructions to the shielding assembly and the laser assembly respectively so as to control the illumination intensity, the frequency, the irradiation angle and the irradiation area.
The images collected by the optical sensing assembly comprise a pupil image and a retinal image.
The optical sensing assembly comprises an optical camera for photographing the eye and a fundus camera for photographing the retina, wherein: the optical camera outputs the captured eye image to the control module, which locates the pupil position and adjusts the orientation of the laser generator accordingly; the fundus camera outputs the retinal image to the control module, which identifies and segments the macular region to determine the area the laser mask must block, and optimizes light intensity and light frequency in combination with the eye image.
The laser assembly comprises two laser generators, one for each eye, that perform the stimulation treatment. Each laser generator is connected to the control module and, according to its instructions, emits stable, single-frequency, collimated laser light with adjustable intensity and frequency.
The shielding assembly is a liquid-crystal mask or a mechanical mask connected to the control module; under the module's control it regulates the region through which the laser passes, thereby shielding the macular region.
The control module comprises: pupil image segmentation unit, retina image segmentation unit, mask control unit and laser control unit, wherein: the pupil image segmentation unit obtains the pupil position in the human eye image through a pupil segmentation AI model according to the human eye image; the retinal image segmentation unit comprehensively segments the AI model through the retinal image according to the retinal image to obtain the positions of blood vessels, optic discs and macular areas in the retinal image; the mask control unit controls a shading mask of the shielding assembly according to the macular region position information obtained by the retina image segmentation unit so that laser irradiation avoids a macular region; the laser control unit calculates the corresponding irradiation angle according to the pupil position information and generates a control instruction, and then outputs the control instruction to the shielding component, and calculates the corresponding laser intensity and frequency according to the retinal information and generates a control instruction and then outputs the control instruction to the laser component, so that the laser irradiation angle, the light intensity and the light frequency are adaptively adjusted.
Technical effects
The invention uses AI models to select the region for low-intensity laser irradiation treatment, effectively avoiding irradiation of the macular region and the lesions laser irradiation could cause there, making treatment safer; it can also intelligently select the light intensity and frequency best suited to the user, making treatment more effective.
Drawings
FIG. 1 is a schematic diagram of the structure of the present invention;
FIG. 2 is a schematic diagram of the device workflow;
FIG. 3 is a schematic diagram of a laser generator;
FIG. 4 is a schematic diagram of the shielding assembly (liquid crystal);
FIG. 5 is a schematic diagram of the shielding assembly (mechanical);
FIG. 6 and FIG. 7 are schematic diagrams of segmentation results according to the embodiments.
Detailed Description
As shown in FIG. 1, this embodiment relates to an adaptive near-amblyopia therapeutic instrument comprising a housing and, inside it, a laser assembly, an optical lens group, an optical sensing assembly, a shielding assembly, and a control module, wherein: the housing contains two cavities in which the sensing assembly for photographing the eye and retina is mounted; the stowable laser assembly for generating the laser is placed in front of the sensing assembly, and the shielding assembly for blocking the laser is set in the middle of each cavity. The remaining space in the housing holds the control module, which computes and controls illumination intensity, frequency, irradiation angle, and irradiation region.
The images collected by the optical sensing assembly comprise a pupil image and a retinal image.
The control module comprises: pupil image segmentation unit, retina image segmentation unit, mask control unit and laser control unit, wherein: the pupil image segmentation unit is internally provided with a pupil segmentation AI model, the retina image segmentation unit is internally provided with a retina image comprehensive segmentation AI model, the mask control unit is internally provided with a control circuit connected to the shielding component, and the laser control unit is internally provided with a regression model and a control circuit connected to the laser component.
The pupil and retina images are preferably resized to 256×256 using the torchvision package.
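As a sketch of this preprocessing step, the resize can be illustrated with a dependency-free nearest-neighbour implementation (torchvision's `Resize` transform uses interpolation; the function name and list-of-lists image format below are illustrative only):

```python
def resize_nearest(img, out_h=256, out_w=256):
    """Nearest-neighbour resize of a 2-D list-of-lists image.

    A minimal stand-in for torchvision.transforms.Resize((256, 256)),
    which the patent uses to normalise pupil and retina images.
    """
    in_h, in_w = len(img), len(img[0])
    return [
        [img[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
        for r in range(out_h)
    ]
```

In the real pipeline the same call would be applied to both camera outputs before they enter the segmentation models.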
The pupil-segmentation AI model comprises an encoder, a feature processor, and a decoder, wherein: the encoder extracts five feature maps from the input pupil image, the final one of size 16×16×1024; the feature processor generates a soft attention vector from these features and fuses it to obtain a 16×16×1024 feature vector; the decoder generates the pupil segmentation mask map from this feature vector.
The encoder comprises five convolution modules and max-pooling layers, wherein each convolution module consists of convolution layer - batch-normalization layer - ReLU - convolution layer - batch-normalization layer - ReLU; a max-pooling layer after each of the first four modules halves the height and width of the feature map, finally yielding a 16×16×1024 feature vector.
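The stated shapes can be verified with the standard convolution output-size formula. The trace below is pure Python bookkeeping, not an implementation; the channel counts (64 doubling to 1024) are assumptions consistent with the stated 16×16×1024 output:

```python
def conv2d_out(size, kernel, stride=1, padding=0, dilation=1):
    """Spatial output size of a 2-D convolution (standard formula)."""
    return (size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

def encoder_shapes(size=256, channels=64, stages=5):
    """Trace (spatial size, channels) after each encoder convolution module.

    Each module applies two 3x3 convolutions with padding 1 (size-preserving);
    a 2x2 max pool after the first four modules halves the spatial size while
    the channel count doubles.
    """
    shapes = []
    for stage in range(stages):
        size = conv2d_out(conv2d_out(size, 3, padding=1), 3, padding=1)
        shapes.append((size, channels))
        if stage < stages - 1:
            size //= 2       # 2x2 max pooling halves height and width
            channels *= 2
    return shapes
```

Starting from a 256×256 input this yields 256, 128, 64, 32, 16 spatially, ending at the 16×16×1024 feature vector the text describes.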
The feature processor adopts an Atrous Spatial Pyramid Pooling (ASPP) module with an attention mechanism, which applies five parallel branches to the input vector: (1) global average pooling followed by upsampling back to the original size; (2) a 1×1 convolution with stride 1 and no padding or dilation; (3) a 3×3 convolution with stride 1, padding 6, dilation 6; (4) a 3×3 convolution with stride 1, padding 12, dilation 12; (5) a 3×3 convolution with stride 1, padding 18, dilation 18. The five resulting vectors are concatenated along the channel dimension and passed through a 3×3 convolution layer with padding 1, stride 1, 5120 input channels, and 1024 output channels, yielding a 16×16×1024 soft attention vector; multiplying this with the original feature vector gives the 16×16×1024 output feature vector.
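A quick dimensional check confirms that each convolutional ASPP branch preserves the 16×16 spatial size, so the five 1024-channel outputs can be concatenated into 16×16×5120 (the global-pooling branch is upsampled back to 16×16 by construction). This is bookkeeping only, not an implementation of the module:

```python
def conv2d_out(size, kernel, stride=1, padding=0, dilation=1):
    """Spatial output size of a 2-D convolution (standard formula)."""
    return (size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1

# The four convolutional ASPP branches as (name, kernel, stride, padding, dilation).
branches = [
    ("1x1 conv",      1, 1, 0,  1),
    ("3x3 conv d=6",  3, 1, 6,  6),
    ("3x3 conv d=12", 3, 1, 12, 12),
    ("3x3 conv d=18", 3, 1, 18, 18),
]
sizes = [conv2d_out(16, k, s, p, d) for _, k, s, p, d in branches]

# The fusion layer: 3x3 conv, padding 1, stride 1, 5120 -> 1024 channels,
# which also preserves the 16x16 spatial size.
fused = conv2d_out(16, 3, 1, 1, 1)
```

Setting padding equal to dilation for a 3×3 kernel is exactly what keeps the output size unchanged here.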
The decoder comprises four up-convolution modules, four residual convolution layers, and a final convolution layer with a 1×1 kernel, wherein: each up-convolution module consists of upsampling layer - convolution layer - batch-normalization layer - ReLU. After each upsampling step, the feature vector from the corresponding encoder stage is concatenated with the up-convolved vector along the channel dimension, and a residual convolution layer then restores the channel count. After four such steps, a 256×256×64 vector is obtained, which the 1×1 convolution layer reduces to a 256×256×1 mask image. A threshold of 0.5 is applied: values greater than 0.5 become 1 and values less than or equal to 0.5 become 0, yielding the final pupil segmentation mask of the eye image.
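The decoder's spatial doubling and the final 0.5 binarisation can be sketched directly (pure Python; function names are illustrative):

```python
def decoder_size(size=16, stages=4):
    """Each of the four up-convolution modules doubles the spatial size,
    taking the 16x16 bottleneck back to the 256x256 input resolution."""
    for _ in range(stages):
        size *= 2
    return size

def binarize(mask, threshold=0.5):
    """Apply the decoder's final threshold: values > 0.5 become 1, else 0."""
    return [[1 if v > threshold else 0 for v in row] for row in mask]
```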
The pupil-segmentation AI model is trained as follows: 3000 eye images with pupil masks serve as the sample set and are randomly split 9:1 into a training set and a test set; a dataset is built with the Dataset class from the python pytorch package and used for optimization training. Training data are passed through the model in batches, and the outputs and training labels are used to compute accuracy and cross entropy; the cross entropy serves as the training loss, and the model is optimized with pytorch's built-in Adam optimizer, finally yielding a high-precision pupil segmentation model. As shown in FIG. 6, the first column is the labeled pupil mask, the second column the predicted pupil mask, and the third column the corresponding eye image.
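The 9:1 split and the cross-entropy loss named here can be sketched without pytorch; in a real pipeline one would instead use `torch.utils.data.random_split`, `torch.nn.BCELoss`, and `torch.optim.Adam` (the helper names below are illustrative):

```python
import math
import random

def train_val_split(samples, ratio=0.9, seed=0):
    """Random 9:1 split of the sample set, per the patent's protocol."""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    cut = int(len(idx) * ratio)
    return [samples[i] for i in idx[:cut]], [samples[i] for i in idx[cut:]]

def binary_cross_entropy(pred, label, eps=1e-7):
    """Per-pixel binary cross entropy, the training loss named in the text."""
    p = min(max(pred, eps), 1 - eps)  # clamp to avoid log(0)
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))
```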
For the sample set, the pupil contour of each image is annotated with the labelme tool and then processed with python's cv2 package to obtain the pupil mask of each image.
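Rasterising an annotated contour into a mask is typically done with `cv2.fillPoly`; a dependency-free ray-casting sketch of the same operation is shown below (function names are illustrative):

```python
def point_in_polygon(x, y, poly):
    """Ray-casting point-in-polygon test over a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray at y
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def polygon_to_mask(poly, h, w):
    """Rasterise a labelme pupil contour into an h x w binary mask,
    testing each pixel centre; a stand-in for cv2.fillPoly."""
    return [[1 if point_in_polygon(c + 0.5, r + 0.5, poly) else 0
             for c in range(w)] for r in range(h)]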
The comprehensive retinal-image segmentation AI model comprises an encoder, a feature processor, and a decoder, wherein: the encoder extracts five feature maps from the input retinal image, the final one of size 16×16×1024; the feature processor generates a soft attention vector from these features and fuses it to obtain a 16×16×1024 feature vector; the decoder generates the retinal segmentation mask map from this feature vector.
The encoder comprises five convolution modules and max-pooling layers, wherein: each convolution module consists of convolution layer - batch-normalization layer - ReLU - convolution layer - batch-normalization layer - ReLU, and a max-pooling layer after each of the first four modules halves the height and width of the feature map, finally yielding a 16×16×1024 feature vector.
The feature processor adopts an Atrous Spatial Pyramid Pooling (ASPP) module with an attention mechanism, which applies five parallel branches to the input vector: (1) global average pooling followed by upsampling back to the original size; (2) a 1×1 convolution with stride 1 and no padding or dilation; (3) a 3×3 convolution with stride 1, padding 6, dilation 6; (4) a 3×3 convolution with stride 1, padding 12, dilation 12; (5) a 3×3 convolution with stride 1, padding 18, dilation 18. The five resulting vectors are concatenated along the channel dimension and passed through a 3×3 convolution layer with padding 1, stride 1, 5120 input channels, and 1024 output channels, yielding a 16×16×1024 soft attention vector; multiplying this with the original feature vector gives the 16×16×1024 output feature vector.
The decoder comprises four up-convolution modules, four residual convolution layers, and a final convolution layer with a 1×1 kernel. Each up-convolution module consists of upsampling layer - convolution layer - batch-normalization layer - ReLU; after each upsampling step, the feature vector from the corresponding encoder stage is concatenated with the up-convolved vector along the channel dimension, and a residual convolution layer restores the channel count. After four such steps a 256×256×64 vector is obtained, which the 1×1 convolution layer reduces to a 256×256×1 mask image. Three thresholds of 0.5, 1.5, and 2.5 are applied: values less than or equal to 0.5 become 0, values greater than 0.5 and less than or equal to 1.5 become 1, values greater than 1.5 and less than or equal to 2.5 become 2, and values greater than 2.5 become 3, where 0 denotes background (no segmentation result), 1 denotes blood-vessel pixels, 2 denotes optic-disc pixels, and 3 denotes macular-region pixels, yielding the final comprehensive retinal segmentation mask.
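The three-threshold mapping to the four classes can be written directly (a minimal sketch; in the pipeline this would be applied element-wise to the decoder's 256×256×1 output):

```python
def classify_pixel(v):
    """Map the decoder's continuous output to the four classes:
    0 background, 1 blood vessel, 2 optic disc, 3 macular region."""
    if v <= 0.5:
        return 0
    if v <= 1.5:
        return 1
    if v <= 2.5:
        return 2
    return 3

def classify_mask(mask):
    """Apply the thresholds over a whole 2-D mask."""
    return [[classify_pixel(v) for v in row] for row in mask]
```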
The comprehensive retinal-image segmentation AI model is trained as follows: 3000 retinal images with segmentation masks serve as the sample set and are randomly split 9:1 into a training set and a test set; a dataset is built with the Dataset class from the python pytorch package and used for optimization training. Training data are passed through the model in batches, and the outputs and training labels are used to compute accuracy and the L2 loss; the L2 loss serves as the training loss, and the model is optimized with pytorch's built-in Adam optimizer, finally yielding a high-precision comprehensive retinal segmentation model. As shown in FIG. 7, the first column is the original retinal image, the second column the predicted blood-vessel mask, the third column the optic-disc mask, and the fourth column the macular-region mask.
For this sample set, the blood-vessel, optic-disc, and macular-region contours of each of the 3000 images are annotated with the labelme tool and then processed with python's cv2 package to obtain the corresponding blood-vessel, optic-disc, and macular-region masks.
The regression model includes: three convolutional layers, two averaging pooling layers, and a fully-connected layer, wherein: the convolution layer extracts 256×256×64 feature vectors according to the input 256×256×1 retina comprehensive segmentation mask, sequentially changes the 256×256×64 feature vectors into 64×64×256 feature vectors through the average pooling layer, changes the 64×64×256 feature vectors through the convolution layer, downsamples the 1×256 feature vectors through the global average pooling layer, and obtains two values of light intensity and light frequency through the full connection layer.
The regression model is trained as follows: 3000 images labeled with light intensity and light frequency serve as the sample set and are randomly split 9:1 into a training set and a test set; a dataset is built with the Dataset class from the python pytorch package and used for optimization training. Training data are passed through the model in batches, and the outputs and training labels are used to compute the L1 loss, which serves as the training loss; the model is optimized with pytorch's built-in Adam optimizer, finally yielding a high-precision regression model.
The sample set comprises retinal images and eye images of 3000 near-amblyopia patients. An AI model estimates the approximate content of the various ocular pigments (melanin, lipofuscin, lutein, etc.) for each patient and, based on the differing absorption wavelength ranges of these pigments, assigns the laser frequency that maximizes the total energy absorbed by all pigments. An AI model also produces a blood-vessel segmentation mask for each patient's retinal image and assigns one of 10 vascular-density ratings based on the mask; the corresponding laser intensities are set to 1.1 mW, 1.2 mW, 1.3 mW, and so on, increasing by 0.1 mW per rating, so that lower vascular density receives lower irradiation intensity and vice versa. The assigned laser frequency and laser intensity serve as labels for the corresponding 3000 retinal images in the sample set.
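The rating-to-intensity rule (1.1 mW at the lowest rating, +0.1 mW per step across the 10 ratings) can be expressed as a one-line mapping; the closed form below is an assumption implied by the stated sequence:

```python
def laser_intensity_mw(density_rating):
    """Map a vascular-density rating (1..10, low to high density)
    to irradiation intensity in mW: 1.1 mW + 0.1 mW per rating step."""
    if not 1 <= density_rating <= 10:
        raise ValueError("rating must be in 1..10")
    return round(1.1 + 0.1 * (density_rating - 1), 1)
```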
When the device is started, the sensing assembly first collects the user's eye information: the ordinary camera captures an eye image i1, and the fundus camera then captures a retinal image i2. Once acquisition is complete, the sensing assembly outputs all images to the control module.
After receiving the images, the control module runs its built-in pupil-segmentation model f1 on the eye image to obtain the segmentation mask O1 = f1(i1). The intelligent chip then performs edge detection on O1 with python's cv2.Canny method to obtain a list of edge points, and averages their coordinates to obtain the ellipse center O, i.e., the pupil center point. This point is projected onto the plane of the observation port, and a line is constructed between it and the base of the laser generator; the direction of this line is taken as the laser irradiation angle, and the control circuit adjusts the orientation of the base to set that angle. The circle P of laser light emitted by the laser generator is translated along this line until its center coincides with the intersection point; the projection of P onto the mask plane is the region P' where the laser strikes the laser mask.
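The centre-finding step, edge detection on the mask followed by averaging the edge coordinates, can be sketched without OpenCV; `cv2.Canny` would return a comparable edge map, while the 4-neighbour boundary test below is a simplified stand-in:

```python
def edge_points(mask):
    """Boundary pixels of a binary mask: set pixels with at least one
    unset (or out-of-bounds) 4-neighbour. A stand-in for cv2.Canny."""
    h, w = len(mask), len(mask[0])
    pts = []
    for r in range(h):
        for c in range(w):
            if mask[r][c] and any(
                not (0 <= rr < h and 0 <= cc < w and mask[rr][cc])
                for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
            ):
                pts.append((r, c))
    return pts

def pupil_center(mask):
    """Mean of the edge coordinates, taken as the pupil centre O."""
    pts = edge_points(mask)
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))
```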
Next, the retinal image i2 is processed by the control module's built-in comprehensive retinal segmentation model f2 to obtain the comprehensive segmentation mask O2 = f2(i2), from which the macular-region mask I2_blo is located; the part of P' corresponding to I2_blo is computed as P_blo = P' × I2_blo. The built-in regression model f3 then processes O2 to obtain the laser light intensity and frequency suited to the user, (I, f) = f3(O2). The control module drives the shielding assembly according to P_blo, and drives the laser assembly according to I, f, and the irradiation angle.
As shown in FIG. 3, the base that holds the laser generator in the laser assembly comprises: a concave hemispherical seat containing movable magnets controlled by an electric field, and a smooth hemisphere with a built-in magnet mounted at the bottom of the laser emitter, wherein: the movable magnets in the seat are connected to the control module, which outputs signals carrying the irradiation-angle information, so that the orientation of the laser generator can be finely adjusted in two degrees of freedom (excluding rotation about its own axis) to reach the proper irradiation angle; the laser emitter (which can be regarded as a surface light source) emits stable single-frequency collimated laser light whose intensity and frequency are regulated by the control module.
As shown in FIG. 4, when the shielding assembly uses a liquid-crystal mask, it comprises a control circuit and a plurality of subunits, each consisting of a pair of polarizing shielding sheets with a liquid-crystal cell between them, wherein: the polarization directions of the pair of sheets are mutually perpendicular, and each subunit is connected to the control circuit; by switching the power to each subunit on and off, the control circuit sets the liquid-crystal orientation inside it and hence the liquid crystal's ability to rotate the polarization of light, changing the light transmission of the mask and thereby controlling the laser irradiation region.
As shown in FIG. 5, when the shielding assembly is a mechanical mask, it comprises a control circuit, a plate with a plurality of small holes, and a movable shutter mounted over each hole, wherein: each shutter is connected to the control circuit and can be raised or lowered, making its hole transparent or opaque and thereby controlling the laser irradiation region.
Compared with the prior art, the invention applies AI models to the low-intensity laser irradiation treatment of near amblyopia, avoiding irradiation of the macular region and the lesions laser irradiation could cause there, making treatment safer; it also intelligently selects the light intensity and frequency best suited to the user, making treatment more effective.
The foregoing embodiments may be modified in numerous ways by those skilled in the art without departing from the principles and spirit of the invention; the scope of the invention is defined by the claims and not by the foregoing embodiments, and all such implementations fall within that scope.

Claims (9)

1. An adaptive near-amblyopia treatment device, comprising: a pair of symmetrically and sequentially arranged optical lens groups, shielding assemblies, laser assemblies, and optical sensing assemblies, and a control module connected to the shielding assembly, the laser assembly, and the optical sensing assembly, wherein: the control module computes the irradiation region from the images captured by the optical sensing assembly and outputs control instructions to the shielding assembly and the laser assembly, so as to regulate illumination intensity, frequency, irradiation angle, and irradiation region;
the images collected by the optical sensing assembly comprise a pupil image and a retinal image;
the shielding assembly is a liquid-crystal mask or a mechanical mask connected to the control module; under the module's control it regulates the region through which the laser passes, thereby shielding the macular region.
2. The adaptive near-amblyopia treatment device of claim 1, wherein said optical sensing assembly comprises an optical camera for photographing the eye and a fundus camera for photographing the retina, wherein: the optical camera outputs the captured eye image to the control module, which locates the pupil position and adjusts the orientation of the laser generator accordingly; the fundus camera outputs the retinal image to the control module, which identifies and segments the macular region to determine the area the laser mask must block, and optimizes light intensity and light frequency in combination with the eye image.
3. The adaptive near-amblyopia treatment device of claim 1, wherein said control module comprises a pupil image segmentation unit, a retinal image segmentation unit, a mask control unit, and a laser control unit, wherein: the pupil image segmentation unit obtains the pupil position from the eye image via a pupil-segmentation AI model; the retinal image segmentation unit obtains the positions of the blood vessels, optic disc, and macular region from the retinal image via a comprehensive retinal-image segmentation AI model; the mask control unit drives the light-blocking mask of the shielding assembly according to the macular-region position obtained by the retinal image segmentation unit, so that laser irradiation avoids the macular region; the laser control unit computes the corresponding irradiation angle from the pupil position information and the corresponding laser intensity and frequency from the retinal information, generating control instructions that it outputs to the laser assembly, so that the laser irradiation angle, light intensity, and light frequency are adjusted adaptively.
4. The adaptive near-amblyopia treatment device of claim 3, wherein the pupil-segmentation AI model comprises an encoder, a feature processor, and a decoder, wherein: the encoder extracts five feature maps from the input pupil image, the final one of size 16×16×1024; the feature processor generates a soft attention vector from these features and fuses it to obtain a 16×16×1024 feature vector; the decoder generates the pupil segmentation mask map from this feature vector;
the retinal image comprehensive segmentation AI model comprises: encoder, feature processor and decoder, wherein: the encoder extracts five characteristic vectors of 16×16×1024 from the input retina image; the feature processor generates soft attention vectors according to the five feature vectors and fuses the soft attention vectors to obtain a feature vector of 16 multiplied by 1024; the decoder generates a retina segmentation mask map from the feature vectors.
5. The adaptive near-amblyopia treatment device of claim 4, wherein the encoder comprises five convolution modules and max-pooling layers, wherein each convolution module consists of convolution layer - batch-normalization layer - ReLU - convolution layer - batch-normalization layer - ReLU;
the decoder comprises four up-convolution modules, four residual convolution layers, and a convolution layer with a 1×1 kernel, wherein: each up-convolution module consists of upsampling layer - convolution layer - batch-normalization layer - ReLU;
the feature processor adopts an Atrous Spatial Pyramid Pooling (ASPP) module with an attention mechanism, which applies five parallel branches to the input vector: (1) global average pooling followed by upsampling back to the original size; (2) a 1×1 convolution with stride 1 and no padding or dilation; (3) a 3×3 convolution with stride 1, padding 6, dilation 6; (4) a 3×3 convolution with stride 1, padding 12, dilation 12; (5) a 3×3 convolution with stride 1, padding 18, dilation 18; the five resulting vectors are concatenated along the channel dimension and passed through a 3×3 convolution layer with padding 1, stride 1, 5120 input channels, and 1024 output channels, yielding a 16×16×1024 soft attention vector, which is multiplied with the original feature vector to give the 16×16×1024 output feature vector.
6. The adaptive near-amblyopia therapeutic instrument of claim 4, wherein the pupil segmentation AI model is trained as follows: 3000 human-eye images with pupil masks are used as the sample set and randomly split into a training set and a test set in a 9:1 ratio; a dataset is built with the Dataset class of python's pytorch package and used for optimization training; training-set data are passed through the model in batches, and the outputs and training-set labels are used to compute the accuracy and the cross entropy; the cross entropy serves as the training loss function in the optimization process, and the model is optimized with the torch.optim.Adam class of the Adam algorithm built into pytorch, finally yielding a high-precision pupil segmentation model.
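The training procedure of claim 6 can be sketched as a standard PyTorch loop. Toy random tensors stand in for the 3000 eye images and pupil masks, and the two-layer model, batch size, and learning rate are illustrative placeholders, not values from the patent.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins for the eye images and their pupil masks.
images = torch.randn(20, 1, 32, 32)
masks = torch.randint(0, 2, (20, 32, 32))          # per-pixel class: pupil / background
split = int(0.9 * len(images))                     # 9:1 train/test split
train_set = TensorDataset(images[:split], masks[:split])
loader = DataLoader(train_set, batch_size=4, shuffle=True)

# Tiny placeholder segmentation model with 2 output classes.
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 2, 3, padding=1))
criterion = nn.CrossEntropyLoss()                  # cross entropy named in claim 6
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam built into pytorch

for epoch in range(2):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)              # batch outputs vs. training labels
        loss.backward()
        optimizer.step()
```

Swapping `nn.CrossEntropyLoss` for `nn.MSELoss` gives the L2-loss variant that claim 7 uses for the retinal comprehensive segmentation model.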
7. The adaptive near-amblyopia therapeutic instrument of claim 4, wherein the retinal image comprehensive segmentation AI model is trained as follows: 3000 human-eye images with segmentation masks are used as the sample set and randomly split into a training set and a test set in a 9:1 ratio; a dataset is built with the Dataset class of python's pytorch package and used for optimization training; training-set data are passed through the model in batches, and the outputs and training-set labels are used to compute the accuracy and the L2 loss; the L2 loss serves as the training loss function in the optimization process, and the model is optimized with the torch.optim.Adam class of the Adam algorithm built into pytorch, finally yielding a high-precision retinal comprehensive segmentation model.
8. The adaptive near-amblyopia therapeutic instrument of claim 3, wherein the laser control unit comprises a regression model and a control circuit connected to the laser assembly;
the regression model comprises three convolution layers, two average pooling layers, and a fully connected layer, wherein: the convolution layer extracts a 256×256×64 feature vector from the input 256×256×1 retinal comprehensive segmentation mask; the average pooling and convolution layers transform it in turn into a 64×64×256 feature vector; global average pooling downsamples it to a 1×256 vector; and the fully connected layer outputs two values, the light intensity and the light frequency.
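The regression head of claim 8 can be sketched as follows. The kernel sizes and the 4×4 pooling window are assumptions chosen so the stated shapes (256×256×1 → 256×256×64 → 64×64×256 → 1×256 → 2) line up; the claim does not specify them.

```python
import torch
import torch.nn as nn

class IntensityFrequencyRegressor(nn.Module):
    """Sketch of the claim-8 regression model: 3 convs, 2 average poolings, 1 FC."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),    # 256x256x1 -> 256x256x64
            nn.AvgPool2d(4),                              # -> 64x64x64 (assumed 4x4 window)
            nn.Conv2d(64, 256, 3, padding=1), nn.ReLU(),  # -> 64x64x256
            nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(), # third convolution layer
            nn.AdaptiveAvgPool2d(1),                      # global average pool -> 1x1x256
        )
        self.fc = nn.Linear(256, 2)  # two outputs: light intensity, light frequency

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))
```

Because the final pooling is adaptive, the module accepts any input resolution, although the claim fixes it at 256×256.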
9. The adaptive near-amblyopia therapeutic instrument of claim 8, wherein the regression model is trained as follows: 3000 images labeled with light intensity and light frequency are used as the sample set and randomly split into a training set and a test set in a 9:1 ratio; a dataset is built with the Dataset class of python's pytorch package and used for optimization training; training-set data are passed through the model in batches, and the outputs and training-set labels are used to compute the L1 loss; the L1 loss serves as the training loss function in the optimization process, and the model is optimized with the torch.optim.Adam class of the Adam algorithm built into pytorch, finally yielding a high-precision regression model.
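Unlike the segmentation losses of claims 6 and 7, claim 9 regresses two continuous targets per image, so the loop pairs each image with a length-2 label vector and uses `nn.L1Loss`. A minimal sketch with toy data and a stand-in model (all shapes and hyperparameters are illustrative):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins for the mask images labeled with (intensity, frequency).
images = torch.randn(20, 1, 32, 32)
labels = torch.rand(20, 2)                 # two regression targets per image
split = int(0.9 * len(images))             # 9:1 train/test split
loader = DataLoader(TensorDataset(images[:split], labels[:split]),
                    batch_size=4, shuffle=True)

# Tiny placeholder for the claim-8 regression model.
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                      nn.Linear(8, 2))
criterion = nn.L1Loss()                    # L1 loss named in claim 9
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(2):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)      # mean absolute error on the two outputs
        loss.backward()
        optimizer.step()
```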
CN202311192741.XA 2023-09-15 2023-09-15 Self-adaptive near amblyopia therapeutic instrument Pending CN117138244A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311192741.XA CN117138244A (en) 2023-09-15 2023-09-15 Self-adaptive near amblyopia therapeutic instrument


Publications (1)

Publication Number Publication Date
CN117138244A true CN117138244A (en) 2023-12-01

Family

ID=88911923


Country Status (1)

Country Link
CN (1) CN117138244A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination