WO2021255779A1 - Information display method, information display device, and program - Google Patents


Info

Publication number
WO2021255779A1
WO2021255779A1 (PCT/JP2020/023372)
Authority
WO
WIPO (PCT)
Prior art keywords
eye
fundus image
acquired
image
fundus
Prior art date
Application number
PCT/JP2020/023372
Other languages
French (fr)
Japanese (ja)
Inventor
Shunsuke Furuya
Original Assignee
Nikon Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nikon Corporation
Priority to PCT/JP2020/023372
Publication of WO2021255779A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/12: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions, for looking at the eye fundus, e.g. ophthalmoscopes

Definitions

  • the technology of this disclosure relates to an information display method, an information display device, and a program.
  • U.S. Pat. No. 8414122 discloses a fundus camera that captures the fundus. An interface that is easy for users to use is desired.
  • the information display method is an information display method performed by a processor, which includes determining whether a first fundus image of the right eye of the subject and a second fundus image of the left eye of the subject have been acquired, and generating a display screen including a first field for displaying the first fundus image when the first fundus image has been acquired and for indicating unacquired when it has not been acquired, and a second field for displaying the second fundus image when the second fundus image has been acquired and for indicating unacquired when it has not been acquired.
  • the information display device of the second aspect of the technique of the present disclosure includes a memory and a processor connected to the memory. The processor determines whether a first fundus image of the right eye of the subject and a second fundus image of the left eye of the subject have been acquired, and generates a display screen including a first field that displays the first fundus image when it has been acquired and indicates unacquired when it has not, and a second field that displays the second fundus image when it has been acquired and indicates unacquired when it has not.
  • the program of the third aspect of the technique of the present disclosure causes a computer to determine whether a first fundus image of the right eye of the subject and a second fundus image of the left eye of the subject have been acquired, and to generate a display screen including a first field that displays the first fundus image when it has been acquired and indicates unacquired when it has not, and a second field that displays the second fundus image when it has been acquired and indicates unacquired when it has not.
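  • the claimed screen-generation logic above can be pictured with the minimal Python sketch below; the function names (`render_field`, `generate_display_screen`) and the dictionary representation of the screen are illustrative assumptions, not taken from the patent.

```python
# Minimal sketch of the claimed display logic (names are illustrative,
# not from the patent): each field shows its fundus image when that
# image has been acquired, and indicates "unacquired" otherwise.

def render_field(fundus_image):
    """Return the content for one field of the display screen."""
    if fundus_image is not None:
        return fundus_image      # the acquired fundus image is displayed
    return "unacquired"          # indication that the image is not acquired

def generate_display_screen(first_fundus_image, second_fundus_image):
    """Build a screen with a first field (right eye) and a second field (left eye)."""
    return {
        "first_field": render_field(first_fundus_image),
        "second_field": render_field(second_fundus_image),
    }
```

  • for example, `generate_display_screen("right_fundus.png", None)` yields a screen whose first field carries the right-eye image and whose second field reads "unacquired".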
  • FIG. 1 is a block diagram of an ophthalmology system 100. There follow a block diagram of an ophthalmic apparatus 110, a functional block diagram of the CPU of the control unit 36 of the ophthalmic apparatus 110, a flowchart (FIG. 4) of the information display process executed by that CPU, a flowchart (FIG. 5A) of the alignment process of step 202 of FIG. 4, and a flowchart (FIG. 5B) of the display process of the shooting preparation screen 300 in step 250 of FIG. 5A. Further figures show the shooting preparation screen 300 (FIG. 6) and the anterior ocular segment imaging section 310 displaying an eye object 320.
  • additional diagrams show the anterior ocular segment imaging section 310 displaying the eye object 320 in other alignment states.
  • further figures include a flowchart of the position adjustment process of the ophthalmic apparatus 110 when an image of the anterior segment of the eye to be inspected 12 is used instead of the eye object, and a diagram showing the (diagonal) image G0 of a pupil based on the image signal from the stereo camera 15B.
  • the ophthalmology system 100 includes an ophthalmology device 110, an AI (artificial intelligence) analysis server 120, a management server device (hereinafter referred to as "management server") 140, and an image display device (hereinafter referred to as "viewer") 150.
  • the ophthalmic apparatus 110 acquires a fundus image.
  • the management server 140 stores the fundus image obtained by photographing the fundus of the patient by the ophthalmologic apparatus 110, corresponding to the information of the right eye or the left eye, the patient ID of the patient, and the patient name.
  • the viewer 150 displays medical information such as a fundus image acquired from the management server 140. There may be a plurality of viewers 150.
  • the fundus image is transmitted from the ophthalmic apparatus 110 or the management server 140 connected to the network 130 to the AI analysis server 120.
  • the AI analysis server 120 uses an AI algorithm trained by machine learning to perform a process of generating information about the symptoms of the eye to be inspected. For example, the algorithm performs AI screening for diabetic retinopathy by image-processing the fundus image of a subject suspected of having, or predicted to have, diabetic retinopathy.
  • the AI analysis server 120 determines the degree of progression of diabetic retinopathy with respect to the left and right fundus images of the subject transmitted from the requester, and transmits the determination result to the requester.
  • in order for the AI analysis server 120 to execute a process of generating information on the symptoms of the eye to be inspected, such as diabetic retinopathy, fundus images of both the left and right eyes to be inspected must be present as the target of the process.
  • the image quality needs to be a certain level or higher for processing using the AI algorithm.
  • the ophthalmic apparatus 110, the AI analysis server 120, the management server 140, and the viewer 150 are connected to each other via the network 130.
  • the network 130 may be, for example, a local area network (LAN: Local Area Network) in the hospital.
  • the AI analysis server 120 may exist outside the LAN and may be connected to the ophthalmic apparatus 110, the management server 140, and the viewer 150 via the Internet, for example.
  • the AI analysis server 120, the management server 140, and the viewer 150 each include a computer equipped with a CPU, RAM, ROM, a storage device, and the like, and a display.
  • the ophthalmology device 110 is a device for photographing the fundus.
  • any specific method may be used as long as the fundus can be photographed.
  • a fundus camera or a scanning laser ophthalmoscope may be used.
  • the case where the ophthalmologic device 110 is a fundus camera will be described.
  • the horizontal direction is the "X direction"
  • the direction perpendicular to the horizontal plane is the "Y direction"
  • the direction perpendicular to the horizontal direction (X direction) and the vertical direction (Y direction), that is, the direction connecting the center of the pupil of the anterior segment of the eye to be inspected 12 and the center of the eyeball, is the "Z direction".
  • the direction from the correct position in the Z direction toward the ophthalmologic device 110 side is defined as the negative direction in the Z direction, and the direction away from the correct position is defined as the positive direction in the Z direction.
  • the position where the luminous flux becomes the thinnest is an example of the "reference position" of the technique of the present disclosure.
  • the ophthalmic apparatus 110 includes an optical system for photographing the pupil of the eye to be inspected 12.
  • the ophthalmic apparatus 110 includes an objective lens 20, a perforated mirror 22, a relay lens 24, a dichroic mirror 26, a photographing light source 28, an observation light source 30, a focus lens 32, and an image pickup element 34.
  • the ophthalmology device 110 includes a communication unit 38 for communicating with the other devices (120, 140, 150), stereo cameras 15A and 15B for measuring the pupil center of the eye to be inspected 12, and an input / display unit 40 for inputting data into the ophthalmology device 110 and for displaying an image or the like of the eye to be inspected 12 obtained by the ophthalmic apparatus 110.
  • the ophthalmic apparatus 110 includes a position adjusting device 42 for adjusting the three-dimensional position of the ophthalmic apparatus 110.
  • the ophthalmologic device 110 includes a control unit 36 that is connected to the stereo cameras 15A and 15B, the photographing light source 28, the observation light source 30, a drive unit (not shown) of the focus lens 32, the image pickup element 34, the communication unit 38, and the position adjustment device 42, and controls these elements.
  • the control unit 36 includes a computer having a CPU, a ROM, a RAM, an input / output port, a storage device, and the like (not shown).
  • the perforated mirror 22 branches the optical path of the eye to be inspected 12 into the optical path of the photographing light source 28 and the observation light source 30 and the optical path of the image pickup element 34.
  • the dichroic mirror 26 branches the optical path of the photographing light source 28 and the observation light source 30 into an optical path of the photographing light source 28 and an optical path of the observation light source 30.
  • the photographing light source 28 is a light source for photographing the fundus of the eye, and is, for example, a light source of visible light (photographing light) including a halogen lamp or an LED (Light Emitting Diode).
  • the photographing light source 28 irradiates visible light toward the dichroic mirror 26.
  • the observation light source 30 is a light source for continuously observing the fundus until the fundus is photographed; it is an infrared light source that emits infrared rays (for example, near-infrared rays) as observation light so that the subject does not feel dazzled.
  • the observation light source 30 irradiates infrared rays toward the dichroic mirror 26.
  • alternatively, a halogen lamp or LED light source and a visible-cut filter 14 that removes visible light from the light of the light source may be provided, with the infrared rays emitted toward the dichroic mirror 26.
  • the dichroic mirror 26 reflects infrared rays and transmits visible light.
  • the visible light from the photographing light source 28 passes through the dichroic mirror 26, the infrared rays from the observation light source 30 are reflected by the dichroic mirror 26, and these lights pass through the relay lens 24, the perforated mirror 22, and the objective lens 20, and are irradiated onto the fundus of the eye to be inspected 12 through the pupil of the eye to be inspected 12.
  • the light reflected from the fundus of the eye to be inspected 12 is incident on the image sensor 34 via the objective lens 20, the hole of the perforated mirror 22, and the focus lens 32.
  • the focus lens 32 focuses so that the light reflected from the fundus of the eye 12 to be inspected forms an image on the image pickup device 34. Therefore, the fundus of the eye to be inspected 12 is imaged on the image sensor 34.
  • the image sensor 34 forms an image of the fundus of the eye 12 to be inspected.
  • the image pickup device 34 is an image pickup device having sensitivity to visible light and near infrared light.
  • the image sensor for visible light and the image sensor for near-infrared light may be provided separately.
  • in that case, a dichroic mirror or the like that separates visible light and near-infrared light is arranged in front of the image sensors.
  • the ophthalmic apparatus 110 includes a fixation lamp (visible light source, not shown) for guiding the line of sight of the eye to be inspected 12, and an optical system that guides the fixation light from the fixation lamp into the optical path of the eye to be inspected 12.
  • the stereo cameras 15A and 15B will be explained.
  • the photographing light needs to pass through the pupil. It is therefore necessary to detect whether the pupil is located at the position where the photographing light passes (that is, on the optical axis of the photographing light), which in turn requires measuring the position of the center of the pupil. For this purpose, the ophthalmic apparatus 110 includes stereo cameras 15A and 15B that capture a predetermined region including the pupil of the eye to be inspected 12 from different directions.
  • the stereo cameras 15A and 15B are cameras for visualizing infrared rays emitted from the pupil of the eye to be inspected 12.
  • the stereo cameras 15A and 15B are composed of a light source that irradiates infrared rays and an image sensor that detects infrared rays.
  • the stereo cameras 15A and 15B are arranged at a predetermined distance in the Y direction (vertical direction), and the anterior eye portion including the pupil of the eye to be inspected 12 is photographed from different directions from the upper side and the lower side.
  • the stereo cameras 15A and 15B may be arranged at a predetermined distance in the X direction (horizontal direction) instead of the vertical direction.
  • the stereo cameras 15A and 15B output the image signal obtained by photographing the anterior eye portion to the control unit 36.
  • the control unit 36 forms an image of the anterior eye portion including the pupil by image-processing the image signals from the stereo cameras 15A and 15B, and measures the three-dimensional (X, Y, Z) position of the center of the pupil.
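  • as a rough illustration of how a depth coordinate could be recovered from two views, the parallel-camera disparity model below is one possibility; the patent does not specify the measurement model, so the formula, the function name, and the parameters (`baseline_mm`, `focal_px`) are assumptions for illustration only.

```python
def triangulate_pupil_depth(y_a, y_b, baseline_mm, focal_px):
    """Estimate the depth (Z) of the pupil centre from its vertical pixel
    coordinates y_a and y_b in the two stereo images, assuming an idealised
    pair of parallel cameras separated vertically by baseline_mm."""
    disparity = y_a - y_b                  # pixel disparity between the two views
    if disparity == 0:
        raise ValueError("zero disparity: point at infinity")
    # standard pinhole stereo relation: Z = f * b / d
    return focal_px * baseline_mm / disparity
```

  • with a 10 mm baseline, a 500-pixel focal length, and a 20-pixel disparity, this model places the pupil centre at a depth of 250 mm.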
  • a laser range finder may be provided, and the control unit 36 may measure the three-dimensional position of the center of the pupil.
  • the stereo cameras 15A, 15B, and the laser range finder are examples of the pupil position measuring unit of the technique of the present disclosure.
  • the position adjusting device 42, under the control of the control unit 36, adjusts the three-dimensional position of the ophthalmic apparatus 110 so that the center of the pupil is located at the position where the photographing light passes.
  • the technique is not limited to the position adjusting device 42 adjusting the three-dimensional position of the ophthalmic device 110 automatically under the control of the control unit 36; the three-dimensional position of the ophthalmic apparatus 110 may instead be adjusted manually.
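  • one way to picture the automatic adjustment is as a simple proportional correction of the device position toward the measured pupil centre; the sketch below is purely illustrative (the control law, the function name, and the `gain` parameter are assumptions, not from the patent).

```python
def alignment_correction(pupil_center, reference_position, gain=1.0):
    """Return the (dX, dY, dZ) move to command to the position adjusting
    device so as to reduce the deviation between the measured pupil centre
    and the reference position on the optical axis."""
    return tuple(gain * (p - r) for p, r in zip(pupil_center, reference_position))
```

  • repeating this measure-and-move step until the deviation falls below a threshold is the usual closed-loop form of such an automatic alignment.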
  • the information display program has a lighting function, an alignment function, a focus adjustment function, a shooting function, a display function, a judgment function, and a transmission function.
  • when the CPU of the control unit 36 of the ophthalmic apparatus 110 executes an information display program having each of these functions, the CPU functions as a lighting unit 50, an alignment unit 52, a focus adjustment unit 54, a photographing unit 56, a display unit 58, a determination unit 60, and a transmission unit 62, as shown in FIG.
  • the execution of the information display program by the CPU of the control unit 36 of the ophthalmic apparatus 110 will be described in detail with reference to FIG. 4.
  • the CPU of the control unit 36 of the ophthalmic apparatus 110 executes the information display program, the information display process shown in the flowchart of FIG. 4 is realized.
  • the user (operator) of the ophthalmologic apparatus 110 inputs the patient ID and the patient name via the input / display unit 40.
  • the subject's eye 12 to be inspected is positioned at the imaging position of the ophthalmic apparatus 110.
  • when the user operates a start button (not shown) displayed on the input / display unit 40, the information display program starts.
  • the observation light source 30 is turned on according to the instruction of the control unit 36, and the image sensor 34 continues to photograph the fundus of the eye to be inspected 12. Therefore, the fundus image of the eye 12 to be inspected continues to be output from the image sensor 34 to the control unit 36. Therefore, the control unit 36 constantly monitors the fundus from the start.
  • step 200 the lighting unit 50 lights the fixation lamp (not shown).
  • the line of sight of the eye 12 to be inspected is guided in the front direction (optical axis direction (Z direction)) of the ophthalmic apparatus 110.
  • This direction is for photographing the central part of the fundus (the posterior pole, where the macula and the optic nerve head are present).
  • the lighting unit 50 may also turn on the fixation lamp at a lighting position different from the front direction.
  • step 202 the alignment unit 52 and the display unit 58 execute the alignment process of the ophthalmic apparatus 110 (see also FIG. 5A), which adjusts the positional relationship between the eye to be inspected 12 and the ophthalmologic apparatus 110 to a relationship appropriate for photographing.
  • FIG. 5A shows a flowchart of the alignment process in step 202.
  • step 250 the alignment unit 52 and the display unit 58 execute the display process of the shooting preparation screen 300 (see also FIG. 6) on the display of the input / display unit 40.
  • FIG. 5B shows a flowchart of the display process of the shooting preparation screen 300 in step 250.
  • step 259 the display unit 58 displays the shooting preparation screen 300 (see also FIG. 6) on the display of the input / display unit 40.
  • the display items include a patient ID display field 302, a patient name display field 304, an acquisition status display section 306 that displays the acquisition status of the fundus image of the right eye, and an acquisition status display section 308 that displays the acquisition status of the fundus image of the left eye.
  • the acquisition status display section 306 and the acquisition status display section 308 are examples of the "first section" and the "second section" of the technique of the present disclosure, respectively.
  • the display items include an anterior ocular segment photographing section 310 for displaying the anterior segment of the eye to be inspected, and a fundus display section 312 for displaying the fundus of the inspected eye. Further, the display items include a deviation amount display section 314 for displaying the deviation amount from the correct position of the position in the Z direction of the ophthalmic apparatus 110, an imaging button 316, and an inspection end button 318 for ending the information display process.
  • in the anterior ocular segment photographing section 310, an eye object 320 generated based on alignment information acquired by the alignment unit 52, which indicates the positional relationship between the eye to be inspected 12 and the ophthalmologic apparatus 110, is displayed (this will be described later).
  • the anterior ocular segment imaging section 310 is an example of a "third section" of the technique of the present disclosure.
  • the display unit 58 of the control unit 36 displays the monitored fundus image on the fundus display section 312.
  • FIG. 6 shows a case where the eye to be inspected in the fundus image monitored as described above is the right eye.
  • step 261 the alignment unit 52 confirms the acquisition status of the fundus image of the eye to be inspected (whether or not the fundus images of the left and right eye to be inspected for AI screening are acquired).
  • the confirmation process in step 261 is a process for confirming the previous acquisition state of the fundus image of the eye to be inspected.
  • the fundus image of the right eye to be inspected for AI screening is an example of the "first fundus image" of the technique of the present disclosure, and the fundus image of the left eye to be inspected for AI screening is an example of the "second fundus image".
  • in this case, the patient ID, the information of the right eye, and the fundus image of the right eye, as well as the patient ID, the information of the left eye, and the fundus image of the left eye, are stored in the storage device of the management server 140 in correspondence with one another.
  • the management server 140 transmits to the ophthalmic apparatus 110 the patient ID, the information of the right eye and the fundus image of the right eye, and the information of the left eye and the fundus image of the left eye, in correspondence with one another.
  • the ophthalmologic apparatus 110 that has received the patient ID, the information of the right eye, the fundus image of the right eye, the information of the left eye, and the fundus image of the left eye can thereby confirm, as the acquisition state, that the fundus images of both the right eye and the left eye have been acquired (photographed).
  • alternatively, only the information of the right eye and the fundus image of the right eye, or only the information of the left eye and the fundus image of the left eye, may be stored in the storage device of the management server 140 in correspondence with the patient ID.
  • in that case, the management server 140 transmits to the ophthalmic apparatus 110 either the patient ID together with the information of the right eye and the fundus image of the right eye, or the patient ID together with the information of the left eye and the fundus image of the left eye.
  • the ophthalmologic apparatus 110 that has received the patient ID together with either the information and fundus image of the right eye or the information and fundus image of the left eye can confirm, as the acquisition state, either that the fundus image of the right eye has been acquired (photographed) but that of the left eye has not (unphotographed), or that the fundus image of the right eye has not been acquired (unphotographed) but that of the left eye has (photographed).
  • when neither fundus image has been taken, the information of the right eye, the fundus image of the right eye, the information of the left eye, and the fundus image of the left eye are not stored in the storage device of the management server 140 in correspondence with the patient ID.
  • the management server 140 does not transmit the information of the right eye, the image of the fundus of the right eye, the information of the left eye, and the image of the fundus of the left eye to the ophthalmic apparatus 110.
  • the ophthalmologic apparatus 110, receiving none of the information of the right eye, the fundus image of the right eye, the information of the left eye, or the fundus image of the left eye, can confirm, as the acquisition state, that the fundus images of the right eye and the left eye have not been acquired (not photographed).
  • the technique of the present disclosure is not limited to storing the information of the right eye, the fundus image of the right eye, the information of the left eye, and the fundus image of the left eye in the storage device of the management server 140 in correspondence with the patient ID; these information and images may instead be stored in the storage device of the ophthalmologic device 110. In this case, the alignment unit 52 does not make the above inquiry to the management server 140, but confirms the stored contents of the storage device of the ophthalmic device 110.
  • the display unit 58 sets the display content of the acquisition status display sections 306 and 308 to the content indicated by the acquisition status confirmed in step 261. For example, if the right eye is being photographed and neither the right eye nor the left eye has been photographed, "not photographed" is displayed in both the acquisition status display section 306 of the right eye and the acquisition status display section 308 of the left eye. If the right eye is being photographed and both the right eye and the left eye have been photographed, "photographed" is displayed in both sections.
  • FIG. 6 shows a case where the right eye is being photographed, the right eye has been photographed, and the left eye has not been photographed. Specifically, "photographed" is displayed in the acquisition status display section 306 of the right eye, and "not photographed" is displayed in the acquisition status display section 308 of the left eye.
  • the display process of the shooting preparation screen 300 in step 250 may be performed by the alignment unit 52 and the display unit 58.
  • the display unit 58 generates the shooting preparation screen 300 based on the alignment information collected by the alignment unit 52, and outputs the image signal of the shooting preparation screen 300 to the display of the input / display unit 40.
  • the display of the input / display unit 40 displays the shooting preparation screen 300 based on the image signal.
  • when the process of step 263 is completed, the display process (step 250) of the shooting preparation screen 300 of FIG. 5A is completed.
  • the alignment unit 52 measures the position of the eye to be inspected, specifically the XYZ position of the center of the pupil, by three-dimensional measurement using images from different directions based on the image signals from the stereo cameras 15A and 15B.
  • the alignment unit 52 first determines the display position of the eye object 320 in the anterior segment imaging section 310 from the X, Y coordinate information of the three-dimensional position (X, Y, Z) of the center of the pupil. Specifically, the alignment unit 52 calculates the amount of deviation between the X- and Y-direction positions of the center of the pupil and the reference position (X, Y), and determines the display position of the eye object 320 so as to show how far the position of the eye to be inspected deviates from the reference position. If (X, Y) is the origin (0, 0) indicating the center of the optical axis, the display position of the eye object 320 is the center of the anterior segment imaging section 310. When (X, Y) differs from the origin, the display position of the eye object 320 in the anterior segment imaging section 310 is determined based on the X and Y position information.
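  • the mapping from the measured (X, Y) deviation to a position inside the section can be sketched as a simple scaling about the section centre; the function name and the `scale_px_per_mm` factor below are illustrative assumptions, not from the patent.

```python
def eye_object_position(pupil_xy_mm, section_center_px, scale_px_per_mm):
    """Map the pupil centre's (X, Y) deviation from the optical axis
    (taken as the origin) to a pixel position in the anterior segment
    imaging section; zero deviation lands on the section centre."""
    dx, dy = pupil_xy_mm            # deviation in mm from the optical axis
    cx, cy = section_center_px      # pixel centre of the imaging section
    return (cx + scale_px_per_mm * dx, cy + scale_px_per_mm * dy)
```

  • a pupil exactly on the optical axis thus draws the eye object at the centre of the section, and any offset shifts it proportionally.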
  • the eye object 320 is an illustration that imitates the characteristics of the anterior segment of the eye, and is an image object that includes a pupil, an iris, an eyeball conjunctiva (the so-called white of the eye), and a circle 328 that highlights the pupil.
  • since the stereo cameras 15A and 15B capture the pupil of the anterior segment of the eye to be inspected 12 from upper and lower diagonals, an oblique image is displayed when the image of the anterior segment based on the image signals from the stereo cameras 15A and 15B is displayed.
  • by using an eye object, it is possible to display the eye as an illustration viewed from the front instead of a realistic anterior eye image that differs depending on the subject, which has the effect of facilitating the user's alignment work.
  • a realistic anterior eye image may include eyelids and eyelashes, which can make the alignment work difficult; with the eye object, the alignment work can be performed without such interference.
  • the display unit 58 displays an eye object of a determined size at a determined position in the anterior ocular segment photographing section 310.
  • the image data of the eye animation object 320 is stored in advance in the storage device of the control unit 36 of the ophthalmic apparatus 110.
  • the display unit 58 reads out the stored eye object and executes image processing for enlarging or reducing the size of the eye object 320 so as to have a size determined by the alignment unit 52.
  • the resized eye object 320 is displayed at the position in the anterior ocular segment photographing section 310 determined by the alignment unit 52.
  • the display unit 58 superimposes and displays the cross index (322 and 324) and the circle index 326 at the center of the anterior eye portion photographing section 310.
  • the cross index indicates the position of the optical axis of the ophthalmic apparatus 110.
  • the cross index is composed of a vertical line 324 and a horizontal line 322, and the intersection of the vertical line 324 and the horizontal line 322 is the optical axis position of the ophthalmic apparatus 110.
  • the circle index 326 is a circle indicating the standard pupil size, and the center of the circle index coincides with the intersection of the cross indexes.
  • the eye object 320, the cross index (322 and 324), and the circle index 326 are displayed according to the alignment state between the eye to be inspected 12 and the ophthalmologic apparatus 110, as shown in the anterior ocular segment imaging section 310 of the shooting preparation screen 300 of FIG. 6.
  • step 258 the alignment unit 52 uses the position adjusting device 42, such as an XYZ stage and a chin rest, to adjust the position of the ophthalmic device 110 based on the measured position of the pupil center in each direction so that the center of the pupil is positioned at the correct position (automatic alignment adjustment). Alternatively, the user may operate the position adjusting device 42 to adjust the position of the ophthalmic apparatus 110 so that the center of the pupil is positioned at the correct position (manual alignment adjustment).
  • the center of the anterior ocular segment imaging section 310 is the position of the optical axis of the ophthalmologic apparatus 110. To make that position easier to understand, the display unit 58 displays in the anterior ocular segment imaging section 310 a cross index formed by the horizontal line 322 and the vertical line 324, with the intersection of the horizontal line 322 and the vertical line 324 positioned at the center of the anterior ocular segment imaging section 310.
  • a circular index 326 of the standard size of the pupil located at the correct position in the Z direction about the center of the anterior segment imaging section 310 is displayed in the anterior segment imaging section 310.
  • the first state (the amount of deviation of the pupil of the eye to be inspected 12 from the optical axis in the XY plane) is expressed by the amount of deviation of the center of the circle 328 from the center of the anterior segment imaging section 310, and
  • the second state (whether the fundus of the eye to be inspected 12 is in focus, i.e. the amount of deviation in the Z-axis (optical axis) direction) is expressed by the size of the circle 328.
  • when the Z-direction position is at the correct position, the size of the circle 328 matches the standard-pupil circle index 326 (the eye object 320 is displayed at its default size).
  • when the measured position of the pupil center in the X and Y directions is at the correct position but the Z-direction position deviates from the correct position, in particular when the eye is too far away, the eye object is displayed in the anterior segment imaging section 310 with
  • the object 320 displayed smaller than the default.
  • when the measured position of the pupil center in the X and Y directions is at the correct position but the Z-direction position deviates from the correct position, in particular when the eye is too close, the eye object is displayed in the anterior segment imaging section 310 such that
  • the center of the circle 328 coincides with the center of the standard-pupil circle index 326, but the size of the circle 328 is larger than the circle index 326.
  • FIG. 7E shows how the animation of the eye is displayed in the anterior segment imaging section 310 when the measured position of the center of the pupil in each direction of XYZ is located at the correct position.
  • the center of the circle 328 coincides with the center of the circle index 326 of the standard size of the pupil, and the size of the circle 328 coincides with the circle index 326 of the standard size of the pupil.
  • the intersection of the horizontal line 322 and the vertical line 324 in the cross index is located at the center of the anterior ocular segment imaging section 310.
  • the eye object 320 is displayed in the anterior segment imaging section 310 corresponding to the position measured by the stereo cameras 15A and 15B, and in a size corresponding to the position of the center of the pupil.
  • the user operates the XYZ stage and the chin rest of the ophthalmologic apparatus 110 while looking at the eye object displayed in the anterior segment imaging section 310, and can thereby adjust the positional relationship between the eye to be inspected 12 and the ophthalmologic apparatus 110 to the optimum position for imaging.
  • the display position and size of the eye object are updated as a moving image, such as an animation, according to the operation of the XYZ stage and the chin rest. Therefore, the user can perform the alignment work while looking at the display of the ophthalmic apparatus 110, without looking at the XYZ stage, the chin-rest operation buttons, or the joystick.
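The display rule described above, with the circle's center offset encoding the XY deviation and its size encoding the Z deviation, can be sketched as follows. This is a minimal illustration only: the function name, the pixel scales, and the sign convention (a larger Z meaning the eye is farther from the apparatus) are assumptions not stated in the disclosure.

```python
def eye_object_geometry(pupil_xyz, correct_z, std_radius_px=40,
                        px_per_mm=20.0, radius_gain_px_per_mm=8.0):
    """Return (dx_px, dy_px, radius_px) for drawing the circle 328.

    pupil_xyz : measured pupil centre (x, y, z) in mm, with the optical
                axis at (0, 0) in the XY plane (hypothetical units).
    correct_z : Z coordinate of the correct working distance in mm.

    The offset (dx, dy) encodes the XY deviation from the optical axis;
    the radius encodes the Z deviation (farther eye -> smaller circle,
    closer eye -> larger circle, as described for FIGS. 7B-7E).
    """
    x, y, z = pupil_xyz
    dx_px = x * px_per_mm                      # offset from the cross index
    dy_px = y * px_per_mm
    dz_mm = z - correct_z                      # assumed sign: + = too far
    radius_px = max(5, round(std_radius_px - dz_mm * radius_gain_px_per_mm))
    return dx_px, dy_px, radius_px
```

With perfect alignment the returned offset is zero and the radius equals the standard-pupil index, so the drawn circle coincides with the circle index 326 at the cross-index intersection.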
  • when the Z position information (coordinate value) measured by the alignment unit 52 is closer to the ophthalmic apparatus 110 than a predetermined safety reference Z value,
  • the display unit 58 may display a warning on the imaging preparation screen 300 to inform the user that the device is in danger of colliding with the patient.
  • the three-dimensional drive stage, which moves the ophthalmic apparatus 110 vertically, horizontally, and back and forth, and the chin rest can be operated while looking at the eye object displayed in the anterior segment imaging section 310, so the optimum position for imaging can be adjusted easily.
  • the pupil in the eye object 320 is displayed at a size corresponding to the Z-direction position of the pupil center. This allows the user to understand whether the Z-direction position of the eye to be inspected 12 is at the correct position and, if not, by how much it deviates.
  • a mark whose color changes according to the amount of deviation of the Z-direction position of the ophthalmic apparatus 110 from the correct position, based on the Z position information measured by the alignment unit 52,
  • is displayed in the deviation amount display section 314.
  • when the position is at the correct position, the display unit 58 sets the color of the deviation amount display section 314 to blue.
  • as the deviation grows in one direction, the display unit 58 gradually changes the color of the deviation amount display section 314 from blue toward red.
  • as the deviation grows in the other direction, the display unit 58 gradually changes the color of the deviation amount display section 314 from blue toward green.
  • in this way the color of the deviation amount display section 314 is changed according to the amount and direction of the Z-direction deviation from the correct position of the pupil center of the eye to be inspected 12. Therefore, the user can understand whether the Z-direction position of the eye to be inspected 12 is at the correct position and, if not, by how much it deviates.
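One way to realize the color rule just described is sketched below. The tolerance value and the assignment of red and green to the two deviation directions are illustrative assumptions (red is used here for the apparatus side, consistent with the collision warning mentioned earlier); the disclosure only states that the color changes with the amount and direction of deviation.

```python
def deviation_color(z_mm, correct_z_mm, tol_mm=0.5):
    """Colour of the deviation amount display section 314 as a function
    of the Z-direction deviation of the pupil centre (sketch; the
    tolerance and direction-to-colour mapping are assumptions)."""
    dz = z_mm - correct_z_mm
    if abs(dz) <= tol_mm:
        return "blue"                       # at or near the correct position
    return "red" if dz < 0 else "green"     # assumed: negative = too close
```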
  • in the above, the alignment unit 52 displays the eye object 320 as an animation, but the technique of the present disclosure is not limited to this; computer graphics (CG) images may be displayed instead of the object, with their positions and sizes adjusted in the same manner as in the examples shown in FIGS. 7A to 7E. Further, a three-dimensional object image may be used instead of a two-dimensional object image.
  • the eye object 320 and the CG images are examples of simulated images imitating the eye to be inspected.
  • in the above display, the realistic anterior segment image of the eye to be inspected 12 obtained by the stereo cameras 15A and 15B is not used. Therefore, when the position adjustment process of the ophthalmic apparatus 110 shown in FIG. 5A is executed, the control unit 36 does not need to form an anterior segment image including the pupil, and a laser rangefinder can be used instead of the stereo cameras 15A and 15B.
  • the technique of the present disclosure is not limited to this; the anterior segment image of the eye to be inspected 12 obtained by one of the stereo cameras 15A and 15B may be used instead of the eye object.
  • FIG. 8 shows a flowchart of the position adjustment process of the ophthalmologic apparatus 110 for the case where the anterior segment image of the eye to be inspected 12 obtained by one of the stereo cameras 15A and 15B is used instead of the eye object.
  • steps 251 to 258 are executed instead of steps 252 to 256.
  • in step 251, the alignment unit 52 photographs the pupil of the anterior segment of the eye to be inspected 12 from oblique angles with the stereo cameras 15A and 15B.
  • step 253 the alignment unit 52 measures the position of the pupil in each direction of XYZ by three-dimensional measurement using images from different directions based on the image signals from the stereo cameras 15A and 15B.
  • the alignment unit 52 uses the position information of the pupil in each of the XYZ directions and previously held information on one of the stereo cameras 15A and 15B, for example the stereo camera 15B, which photographs the anterior segment of the eye from below, and
  • converts the oblique pupil image G0 (see FIG. 9A) based on the image signal from the stereo camera 15B into the front image G1 as shown in FIG. 9B.
  • in step 257, the alignment unit 52 first determines the display position of the front image G1 from the X- and Y-direction position information, and the display size of the front image G1 from the Z position information, as in the process of step 254 of FIG. 5A.
  • in step 257, the alignment unit 52 then displays the front image G1 at the determined position and size, together with the cross index, in the anterior segment imaging section 310.
  • in this modification the front image G1 of the pupil of the eye to be inspected 12 is displayed; however, since the pupil size changes physiologically with ambient light, the pupil size in the front image G1 may not match the standard size described above even when the pupil center of the eye to be inspected 12 is at the correct position. Therefore, the position of the pupil in each of the XYZ directions may be measured by a laser rangefinder without using the image signals from the stereo cameras 15A and 15B.
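The G0-to-G1 conversion can be approximated, under a weak-perspective assumption, by undoing the cos θ foreshortening along the oblique camera's elevation axis. This is only a sketch: a real implementation would apply a full homography derived from the calibrated stereo-camera geometry, and the function name, the angle parameter, and the nearest-neighbour resampling are assumptions.

```python
import math

def oblique_to_front(img, elevation_deg):
    """Approximate front view of an anterior-segment image taken from
    below at elevation_deg: stretch the vertical axis by 1/cos(theta)
    to undo the foreshortening.  img is a list of pixel rows."""
    theta = math.radians(elevation_deg)
    h = len(img)
    new_h = round(h / math.cos(theta))
    # nearest-neighbour row resampling keeps the sketch dependency-free
    return [img[min(h - 1, round(i * math.cos(theta)))] for i in range(new_h)]
```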
  • step 202 in FIG. 4 is completed.
  • step 204 the focus adjustment unit 54 adjusts the position of the focus lens 32 to adjust the focus so that the fundus of the eye to be inspected 12 forms an image on the image sensor 34.
  • step 206 the photographing unit 56 waits for the photographing button 316 to be turned on, and when the photographing button 316 is turned on, the photographing light source 28 is turned on and the fundus of the eye to be inspected 12 is photographed.
  • in step 206, by highlighting the imaging button 316, for example changing its color or blinking it, the operator may be alerted that the alignment and focus adjustment of the ophthalmologic apparatus 110 are complete and imaging is possible. A message indicating that imaging is possible may also be displayed.
  • the photographing button 316 may be omitted, and when the process of step 204 is completed, the photographing light source 28 may be automatically turned on to photograph the fundus of the eye to be inspected 12.
  • a mode selection button for setting either an automatic shooting mode or a manual shooting mode may be provided.
  • the auto photographing mode is a mode in which the fundus of the eye to be inspected 12 is automatically photographed when the process of step 204 is completed.
  • the manual shooting mode is a mode for shooting the fundus of the eye to be inspected 12 when the shooting button 316 is turned on.
  • the photographing unit 56 photographs the fundus of the eye to be inspected 12 according to the setting by the mode selection button.
  • in step 208, the display unit 58 executes a display process of the shooting result confirmation screen 400 (a graphical user interface) shown in FIG. 11, instead of the shooting preparation screen 300, on the display of the input / display unit 40.
  • FIG. 10 shows a flowchart of the display process of the shooting result confirmation screen 400.
  • the display unit 58 displays the shooting result confirmation screen 400.
  • the display items include a patient ID display field 402, a patient name display field 404, a right eye fundus image display field 410 for displaying the right eye fundus image, and a left eye fundus image display field 412 for displaying the left eye fundus image.
  • the right eye fundus image display field 410, which displays the imaging result of the right eye, and the left eye fundus image display field 412, which displays the imaging result of the left eye, are examples of the "first field" and the "second field" of the technique of the present disclosure, respectively.
  • on the left and right sides of the right eye fundus image display field 410, right eye fundus image switching buttons 410A and 410B are provided; when the right eye has been photographed multiple times and multiple fundus images of the right eye are stored, these buttons instruct the apparatus to display the stored fundus images of the right eye in sequence in the right eye fundus image display field 410.
  • when the right eye fundus image switching button 410A is turned on, switching to a fundus image captured earlier than the fundus image currently displayed in the right eye fundus image display field 410 is instructed.
  • when the right eye fundus image switching button 410B is turned on, switching to a fundus image captured later than the fundus image currently displayed in the right eye fundus image display field 410 is instructed.
  • Left eye fundus image switching buttons 412A and 412B are also provided on the left and right sides of the left eye fundus image display field 412.
  • below the right eye fundus image display field 410, a right eye image quality determination result display field 420 for displaying the image quality determination result of the fundus image displayed in the right eye fundus image display field 410 is provided.
  • a left eye image quality determination result display field 422 is also provided below the left eye fundus image display field 412.
  • the right eye image quality determination result display field 420 and the left eye image quality determination result display field 422 are examples of the "third field” and the "fourth field” of the technique of the present disclosure, respectively.
  • the display items include a shooting continuation button 432, an AI analysis execution button 434, and an inspection end button 436 for ending the information display process.
  • the AI analysis execution button 434 is an example of the "button" of the technique of the present disclosure, and the portion where the AI analysis execution button 434 is displayed is an example of the "fifth field" of the technique of the present disclosure.
  • the display unit 58 discriminates between the left and right of the eye to be inspected in the fundus image obtained by the imaging in step 206.
  • the left/right discrimination of the eye to be inspected in the fundus image can be performed as follows. For example, at the time of shooting in step 206, the operator presses a right eye button or a left eye button (not shown) according to the eye being imaged, and the control unit 36 stores this information in the storage device.
  • the display unit 58 then determines whether the fundus image is of the left or right eye based on the information stored in the storage device of the control unit 36. The display unit 58 may alternatively perform the left/right discrimination by image processing.
  • specifically, the display unit 58 performs image processing on the fundus image to extract the optic disc and the blood vessels running toward it. The display unit 58 then compares the directions of the blood vessels toward the optic disc in the fundus image with the corresponding directions stored in advance for left and right fundi, and thereby judges whether the fundus image is of the left eye or the right eye. The display unit 58 may also use this image processing to verify the left/right discrimination result determined from the information stored in the storage device of the control unit 36.
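As a simpler illustrative stand-in for the vessel-direction comparison described above, laterality can also be guessed from the detected optic-disc position, since the disc lies on the nasal side of a conventional fundus photograph. The function name and the half-width rule are assumptions, not part of the disclosure.

```python
def classify_eye(disc_x_px, image_width_px):
    """Crude left/right guess from the optic-disc column position:
    in a conventional fundus photograph the disc appears right of
    centre for a right eye and left of centre for a left eye."""
    return "right" if disc_x_px > image_width_px / 2 else "left"
```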
  • in step 354, the display unit 58 displays the fundus image in the right eye fundus image display field 410 or the left eye fundus image display field 412 according to the discrimination result of step 352.
  • the fundus of the right eye is photographed, but the fundus of the left eye is not photographed.
  • the obtained fundus image is displayed in the right eye fundus image display field 410.
  • since the fundus of the left eye has not been photographed, nothing is displayed in the left eye fundus image display field 412, indicating that the fundus of the left eye has not been photographed.
  • the operator can determine whether or not the fundus of each eye is photographed.
  • if the fundus of the right eye has been photographed multiple times, the right eye fundus image switching buttons 410A and 410B are displayed. This allows the operator to know that the fundus of the right eye has been photographed multiple times. If the fundus of the left eye has not been photographed even once, the left eye fundus image switching buttons 412A and 412B are not displayed. This allows the operator to know that the fundus of the left eye has never been photographed.
  • step 356 the display unit 58 determines whether or not the image quality of the fundus image obtained by the photographing in step 206 is acceptable.
  • here, the position of the ophthalmic apparatus 110 has been adjusted in step 202, and the focus has been adjusted in step 204.
  • the permissible image quality is an image quality of a certain level or higher that can execute a process of generating information regarding the symptom of the eye to be inspected using AI.
  • a fundus image of a certain quality or higher is an image that satisfies at least one of (in the present embodiment, both of) the conditions that the fundus image is not blurred and that the position of the optic nerve head is within a predetermined range.
  • the display unit 58 extracts blood vessels from the fundus image.
  • the display unit 58 determines whether or not the image is out of focus by determining whether or not the contrast between the extracted blood vessel portion and the portion other than the blood vessel portion of the fundus image is equal to or greater than a predetermined value.
  • the display unit 58 also determines the position of the optic nerve head in the fundus image from the blood vessel pattern obtained above, and determines whether the determined position is within the predetermined range of the fundus image.
  • alternatively, whether the image quality of the fundus image obtained by the photographing in step 206 is acceptable may be determined by comparing a reference image with the photographed fundus image using algorithms such as PSNR (peak signal-to-noise ratio) and SSIM (structural similarity).
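For the PSNR-based comparison, the standard definition can be used directly; the sketch below uses flat pixel sequences for brevity, and the 30 dB acceptance threshold is an illustrative assumption, not a value from the disclosure.

```python
import math

def psnr(reference, image, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and
    the captured fundus image, both given as equal-length flat pixel
    sequences; higher is better, infinite for identical images."""
    mse = sum((r - p) ** 2 for r, p in zip(reference, image)) / len(reference)
    return math.inf if mse == 0 else 10 * math.log10(max_val ** 2 / mse)

def quality_acceptable(reference, image, threshold_db=30.0):
    # Illustrative acceptance rule for the step-356 judgement.
    return psnr(reference, image) >= threshold_db
```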
  • in step 358, the display unit 58 displays the image quality determination result in the determination result display field 420 or 422 corresponding to the judged eye.
  • in the present example, a display indicating that the image quality is acceptable, for example "OK", is shown in the right eye image quality determination result display field 420. Since the fundus image of the left eye has not been obtained as described above, its image quality cannot be determined, and nothing is displayed in the left eye image quality determination result display field 422.
  • when the display process of the determination result in step 358 is completed, the display process of the shooting result confirmation screen 400 in step 208 of FIG. 4 is completed.
  • in step 210, the determination unit 60 determines whether the image quality of the fundus image obtained by the photographing in step 206 is acceptable. This judgment is made based on the judgment result of step 356. If the image quality is determined not to be acceptable, that is, if the determination is negative, information processing returns to step 200 and executes steps 200 to 210. If the image quality is determined to be acceptable, that is, if the determination is affirmative, information processing proceeds to step 212.
  • step 212 the determination unit 60 determines whether or not steps 200 to 210 have been completed for both eyes. When it is not determined that steps 200 to 210 are completed for both eyes, for example, when it is determined that the fundus of the right eye is photographed but the fundus of the left eye is not photographed as described above. Information processing proceeds to step 214.
  • in step 214, the determination unit 60 determines whether the shooting continuation button 432 has been turned on, that is, whether the operator has instructed photographing of the fundus of the eye other than the eye whose fundus has already been photographed.
  • in step 214, by highlighting the shooting continuation button 432, for example changing its color or blinking it, the operator may be alerted that the image quality of the fundus image of the eye already photographed is acceptable and that the fundus of the other eye needs to be photographed. A message to the same effect may also be displayed.
  • the shooting continuation button 432 is turned on, the information processing returns to step 200 and executes steps 200 to 212.
  • in step 216, the display unit 58 activates the AI analysis execution button 434, that is, changes it to a state in which the user can operate it.
  • step 218, the determination unit 60 determines whether or not the AI analysis execution button 434 is turned on.
  • if it is determined in step 212 that steps 200 to 210 have been completed for both eyes, the fundus image of the right eye is displayed in the right eye fundus image display field 410, and the fundus image of the left eye is displayed in the left eye fundus image display field 412,
  • and "OK" is displayed in the right eye image quality determination result display field 420 and the left eye image quality determination result display field 422. The operator can thus understand that an acceptable fundus image has been obtained for each of the left and right eyes to be inspected, and that the process of generating information on the symptoms of the eyes to be inspected using AI can be executed. The operator therefore turns on the AI analysis execution button 434.
  • at this time, the display unit 58 may highlight the AI analysis execution button 434, for example by blinking it or changing its color. Further, the shooting result confirmation screen 400 may be provided with an AI analysis execution button display field for displaying the AI analysis execution button 434; if it is determined in step 212 that step 210 has been completed for both eyes, the display unit 58 may display the AI analysis execution button 434 in that field.
  • the AI analysis execution button display field is an example of the "fifth field" of the technique of the present disclosure.
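The state logic of the shooting result confirmation screen 400 — each fundus field showing its latest image or a not-captured indication, per-eye quality results, and the AI analysis execution button enabled only when both eyes have acceptable images — can be sketched as follows. All names and return values are illustrative assumptions.

```python
def confirmation_screen_state(right_images, left_images,
                              right_quality_ok, left_quality_ok):
    """Sketch of the confirmation-screen logic: fundus fields show the
    latest image or 'not captured', quality fields show the judgement,
    and the AI button is activated only when an acceptable image exists
    for both eyes (as in step 216)."""
    return {
        "right_field": right_images[-1] if right_images else "not captured",
        "left_field": left_images[-1] if left_images else "not captured",
        "right_quality": ("OK" if right_quality_ok else "NG") if right_images else "",
        "left_quality": ("OK" if left_quality_ok else "NG") if left_images else "",
        "ai_button_enabled": bool(right_images and left_images
                                  and right_quality_ok and left_quality_ok),
    }
```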
  • the transmission unit 62 controls the communication unit 38 to transmit the fundus image of both eyes to the AI analysis server 120.
  • only fundus images having an acceptable image quality are transmitted. Further, instead of information that can identify an individual, such as the patient ID and the patient name, an image management ID uniquely assigned to the fundus image is transmitted as additional information together with the fundus image data.
  • the AI analysis server 120 executes, on the transmitted fundus images of both eyes, a process of generating information on the symptoms of the eyes to be inspected, such as the presence or absence of diabetic retinopathy or an estimate of its degree of progression (grade).
  • the generated information is transmitted to the management server 140 together with the image management ID.
  • the management server 140 identifies the patient ID and the patient name from the image management ID, and stores the generated information.
  • the management server 140 transmits the generated information to the viewer 150.
  • the viewer 150 displays the generated information on the display.
  • in order for the AI analysis server 120 to process a fundus image and generate information on the symptoms of the eye to be inspected, such as the presence or absence of diabetic retinopathy and its degree of progression, the fundus image to be processed must have
  • an image quality above a certain level so that the processing using AI can be executed. Conventionally, it has therefore been necessary to determine in advance whether a fundus image obtained by photographing the eye to be inspected has such an image quality, and this judgment is difficult for anyone who is not familiar with AI diagnosis in ophthalmology.
  • in the present embodiment, the ophthalmologic apparatus 110 photographs the fundus of each of the left and right eyes to be inspected until a fundus image having a certain image quality or higher is obtained.
  • the fundus image can then be output to the AI analysis server 120; in other words, fundus images of the left and right eyes having the above image quality can be output to the AI analysis server 120.
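The repeat-until-acceptable workflow of steps 200 to 210 can be summarized as a driver loop per eye; the callables and the attempt limit below are illustrative assumptions, not elements of the disclosure.

```python
def capture_until_acceptable(photograph, is_acceptable, max_attempts=5):
    """Driver loop for one eye, mirroring steps 200-210: photograph,
    judge the image quality, and retry until an acceptable fundus
    image is obtained (or the illustrative attempt limit is hit)."""
    for _ in range(max_attempts):
        image = photograph()
        if is_acceptable(image):
            return image
    return None  # no acceptable image obtained within the limit
```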
  • the ophthalmology system 100 of the present embodiment includes an ophthalmology apparatus 110, an AI analysis server 120, a management server 140, and a viewer 150.
  • the techniques of the present disclosure are not limited to this.
  • the AI analysis server 120 may be omitted, and at least one of the ophthalmology device 110, the management server 140, and the viewer 150 may have the AI function (AI algorithm) of the AI analysis server 120.
  • the present embodiment includes an AI analysis server 120 that executes a process of generating information regarding the symptom of the eye to be inspected by using an AI algorithm trained by machine learning.
  • the techniques of the present disclosure are not limited to this.
  • for example, images of various eye diseases may be stored in advance, and a device may be provided that matches the fundus image obtained by photographing the fundus against the stored disease images and performs processing for generating information on the symptoms of the eye to be inspected,
  • for example an analyzer that executes a program performing the above.
  • each component may exist singly or in plurality as long as there is no contradiction.
  • in the above embodiment, the image processing is realized by a software configuration using a computer, but the technique of the present disclosure is not limited to this.
  • the image processing may be executed only by the hardware configuration such as FPGA (Field-Programmable Gate Array) or ASIC (Application Specific Integrated Circuit). Some of the image processing may be performed by the software configuration and the rest may be performed by the hardware configuration.
  • since the technology of the present disclosure includes both the case where the image processing is realized by a software configuration using a computer and the case where it is not, it includes the following technology.
  • a step of determining whether the first fundus image of the subject's right eye and the second fundus image of the subject's left eye have been acquired; and
  • a step of generating a display screen including a first field in which the first fundus image is displayed if the first fundus image has been acquired and which indicates non-acquisition if it has not, and a second field in which the second fundus image is displayed if the second fundus image has been acquired and which indicates non-acquisition if it has not.
  • An information display method including the above steps.
  • a judgment unit for determining whether the first fundus image of the right eye of the subject and the second fundus image of the left eye of the subject have been acquired.
  • a detection unit that detects the position of the eye to be inspected,
  • a calculation unit that calculates the amount of deviation between the detected position and the reference position where the eye to be inspected should be located in order to photograph the fundus of the eye to be inspected; and a generation unit that generates a display screen including a first section for indicating that the first fundus image has been acquired or not acquired, a second section for indicating that the second fundus image has been acquired or not acquired, and
  • a third section for displaying, based on the amount of deviation, an image showing how the detected position of the eye to be inspected deviates from the reference position. An information display device including the above.
  • generating a display screen including a first section for indicating that the first fundus image has been acquired or not acquired, a second section for indicating that the second fundus image has been acquired or not acquired, and a third section for displaying, based on the amount of deviation, an image showing how the detected position of the eye to be inspected deviates from the reference position.
  • An information display method including the above.
  • the alignment unit 52 is an example of the “judgment unit”, “detection unit”, and “calculation unit” of the technique of the present disclosure.
  • the alignment unit 52 and the display unit 58 are examples of the “generation unit” of the technique of the present disclosure.
  • a computer program product for displaying information comprises a computer-readable storage medium that is not itself a temporary signal.
  • a program is stored in the computer-readable storage medium.
  • the program causes a computer to determine whether the first fundus image of the subject's right eye and the second fundus image of the subject's left eye have been acquired, and
  • to generate a display screen including a first field in which the first fundus image is displayed if it has been acquired and which indicates non-acquisition if it has not, and a second field in which the second fundus image is displayed if it has been acquired and which indicates non-acquisition if it has not.
  • a computer program product for displaying information comprises a computer-readable storage medium that is not itself a temporary signal.
  • a program is stored in the computer-readable storage medium.
  • the program causes a computer to: determine whether the first fundus image of the subject's right eye and the second fundus image of the subject's left eye have been acquired; detect the position of the eye to be inspected; calculate the amount of deviation between the detected position and the reference position where the eye to be inspected should be located in order to photograph its fundus; and generate a display screen including a first section for indicating that the first fundus image has been acquired or not acquired, a second section for indicating that the second fundus image has been acquired or not acquired, and a third section for displaying, based on the amount of deviation, an image showing how the detected position of the eye to be inspected deviates from the reference position.
  • the control unit 36 is an example of the "computer program product" of the technology of the present disclosure.

Abstract

This information display method executed by a processor includes: determining whether a first fundus image of the right eye of a subject and a second fundus image of the left eye of the subject have been captured; and generating a display screen that includes a first field and a second field, the first field displaying the first fundus image if it has already been captured and indicating, if it has not yet been captured, that it has not been captured, and the second field displaying the second fundus image if it has already been captured and indicating, if it has not yet been captured, that it has not been captured.

Description

Information display method, information display device, and program
 The technology of the present disclosure relates to an information display method, an information display device, and a program.
 U.S. Pat. No. 8,414,122 discloses a fundus camera that photographs the fundus. An interface that is easy for users to use is desired.
 The information display method according to the first aspect of the technology of the present disclosure is an information display method performed by a processor, and includes: a step of determining whether a first fundus image of the right eye of a subject and a second fundus image of the left eye of the subject have been acquired; and a step of generating a display screen that includes a first field in which the first fundus image is displayed if it has been acquired and which indicates non-acquisition if it has not, and a second field in which the second fundus image is displayed if it has been acquired and which indicates non-acquisition if it has not.
 The information display device according to the second aspect of the technology of the present disclosure includes a memory and a processor connected to the memory. The processor determines whether a first fundus image of the right eye of a subject and a second fundus image of the left eye of the subject have been acquired, and generates a display screen that includes a first field in which the first fundus image is displayed if it has been acquired and which indicates non-acquisition if it has not, and a second field in which the second fundus image is displayed if it has been acquired and which indicates non-acquisition if it has not.
 The program according to the third aspect of the technology of the present disclosure causes a computer to execute: determining whether a first fundus image of the right eye of a subject and a second fundus image of the left eye of the subject have been acquired; and generating a display screen that includes a first field in which the first fundus image is displayed if it has been acquired and which indicates non-acquisition if it has not, and a second field in which the second fundus image is displayed if it has been acquired and which indicates non-acquisition if it has not.
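As a rough illustration of the field logic described in the third aspect, the following sketch decides what each field shows. The helper name `render_fields` and the returned dictionary layout are assumptions for illustration, not part of the disclosure.

```python
# Illustrative sketch: each field shows the fundus image if acquired,
# or a "not acquired" marker otherwise (per-eye, right then left).

def render_fields(first_fundus_image, second_fundus_image):
    """Return the content of the first and second fields of the display screen."""
    def field(image):
        # None models "not acquired"; any other value models an acquired image.
        return {"image": image} if image is not None else {"status": "not acquired"}
    return {"first_field": field(first_fundus_image),
            "second_field": field(second_fundus_image)}
```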
A block diagram of the ophthalmic system 100.
A block diagram of the ophthalmic apparatus 110.
A functional block diagram of the CPU of the control unit 36 of the ophthalmic apparatus 110.
A flowchart of the information display process executed by the CPU of the control unit 36 of the ophthalmic apparatus 110.
A flowchart of the alignment process in step 202 of FIG. 4.
A flowchart of the display process of the shooting preparation screen 300 in step 250 of FIG. 5A.
A diagram showing the shooting preparation screen 300.
A diagram showing the anterior segment imaging section 310 in which the eye object 320 is displayed.
Another diagram showing the anterior segment imaging section 310 in which the eye object 320 is displayed.
Still another diagram showing the anterior segment imaging section 310 in which the eye object 320 is displayed.
A further diagram showing the anterior segment imaging section 310 in which the eye object 320 is displayed.
Yet another diagram showing the anterior segment imaging section 310 in which the eye object 320 is displayed.
A flowchart of the process of adjusting the position of the ophthalmic apparatus 110 when an image of the anterior segment of the eye to be inspected 12 is used instead of the eye object.
A diagram showing the (oblique) pupil image G0 based on the image signal from the stereo camera 15B.
A diagram showing the front image G1 converted from the (oblique) pupil image G0.
A diagram showing the anterior segment imaging section 310 in which the front image G1 is displayed.
A flowchart of the display process of the shooting result confirmation screen 400 in step 208 of FIG. 4.
A diagram showing the shooting result confirmation screen 400.
 Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.
 The configuration of the ophthalmic system 100 will be described with reference to FIG. 1. As shown in FIG. 1, the ophthalmic system 100 includes an ophthalmic apparatus 110, an AI (Artificial Intelligence) analysis server 120, a management server device (hereinafter referred to as the "management server") 140, and an image display device (hereinafter referred to as the "viewer") 150. The ophthalmic apparatus 110 acquires fundus images. The management server 140 stores a fundus image obtained by photographing the fundus of a patient with the ophthalmic apparatus 110 in association with information indicating whether it is of the right eye or the left eye, the patient's patient ID, and the patient's name. The viewer 150 displays medical information such as fundus images acquired from the management server 140. There may be a plurality of viewers 150.
 A fundus image is transmitted to the AI analysis server 120 from the ophthalmic apparatus 110 or the management server 140 connected to the network 130. The AI analysis server 120 uses an AI algorithm trained by machine learning to execute a process of generating information on the symptoms of the eye to be inspected. For example, AI screening for diabetic retinopathy is performed by applying image processing based on the algorithm to the fundus image of a subject suspected of having, or predicted to develop, diabetic retinopathy. The AI analysis server 120 also determines the degree of progression of diabetic retinopathy for each of the left and right fundus images of the subject transmitted from a requester, and transmits the determination result to the requester. In order for the AI analysis server 120 to execute the process of generating information on symptoms of the eye to be inspected such as diabetic retinopathy, fundus images of both the left and right eyes must exist, and the image quality of the fundus images subjected to the process must be at or above a certain level for processing with the AI algorithm.
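The precondition stated here, that both eyes' images exist and are of sufficient quality for the AI algorithm, could be gated as in the following sketch. The quality scale (0 to 1), the 0.7 threshold, and the function names are invented for illustration; the disclosure does not specify them.

```python
# Hedged sketch: check the preconditions for requesting AI screening.
# quality() is a caller-supplied scorer; None models a missing image.

def ready_for_ai_screening(right_image, left_image, quality, min_quality=0.7):
    """True only if both fundus images exist and each scores at least min_quality."""
    if right_image is None or left_image is None:
        return False  # AI screening needs images of both eyes
    return quality(right_image) >= min_quality and quality(left_image) >= min_quality
```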
 The ophthalmic apparatus 110, the AI analysis server 120, the management server 140, and the viewer 150 are connected to each other via the network 130. The network 130 may be, for example, a local area network (LAN: Local Area Network) in the hospital. The AI analysis server 120 may exist outside the LAN and may be connected to the ophthalmic apparatus 110, the management server 140, and the viewer 150 via the Internet, for example.
 Further, the AI analysis server 120, the management server 140, and the viewer 150 each include a computer having a CPU, RAM, ROM, a storage device, and the like, and a display.
 Next, the configuration of the ophthalmic apparatus 110 will be described with reference to FIG. 2.
 The ophthalmic apparatus 110 is a device for photographing the fundus. The ophthalmic apparatus 110 may be of any specific type as long as it can photograph the fundus; for example, it may be a fundus camera or a scanning laser ophthalmoscope. Hereinafter, the case where the ophthalmic apparatus 110 is a fundus camera will be described.
 When the ophthalmic apparatus 110 is installed on a horizontal plane, the horizontal direction is referred to as the "X direction", the direction perpendicular to the horizontal plane as the "Y direction", and the direction perpendicular to both the horizontal direction (X direction) and the vertical direction (Y direction) as the "Z direction". In XYZ coordinates, the position (X, Y, Z) = (0, 0, 0) is the position on the optical axis of the ophthalmic apparatus 110 at which the luminous flux becomes thinnest. When the pupil center of the anterior segment of the eye to be inspected 12 is located at the correct position for imaging by the ophthalmic apparatus 110, the direction connecting the pupil center of the anterior segment of the eye to be inspected 12 and the center of the eyeball coincides with the Z direction. The direction from the correct position in the Z direction toward the ophthalmic apparatus 110 is defined as the negative Z direction, and the direction from the correct position away from the ophthalmic apparatus 110 as the positive Z direction.
 The position where the luminous flux becomes thinnest is an example of the "reference position" of the technology of the present disclosure.
 The ophthalmic apparatus 110 includes an optical system for photographing the pupil of the eye to be inspected 12. Specifically, the ophthalmic apparatus 110 includes an objective lens 20, a perforated mirror 22, a relay lens 24, a dichroic mirror 26, a photographing light source 28, an observation light source 30, a focus lens 32, and an image pickup element 34.
 The ophthalmic apparatus 110 includes a communication unit 38 for communicating with the other devices (120, 140, 150), stereo cameras 15A and 15B for measuring the pupil center and the like of the eye to be inspected 12, and an input/display unit 40 for inputting data into the ophthalmic apparatus 110 and for displaying images and the like of the eye to be inspected 12 obtained by the ophthalmic apparatus 110.
 The ophthalmic apparatus 110 includes a position adjusting device 42 for adjusting the three-dimensional position of the ophthalmic apparatus 110.
 The stereo cameras 15A and 15B, the photographing light source 28, the observation light source 30, a drive unit (not shown) of the focus lens 32, the image pickup element 34, the communication unit 38, and the position adjusting device 42 are connected to a control unit 36, which the ophthalmic apparatus 110 includes for controlling these elements. The control unit 36 includes a computer having a CPU, ROM, RAM, input/output ports, a storage device, and the like (not shown).
 The perforated mirror 22 branches the optical path of the eye to be inspected 12 into the optical path of the photographing light source 28 and the observation light source 30 and the optical path of the image pickup element 34. The dichroic mirror 26 branches the optical path of the photographing light source 28 and the observation light source 30 into an optical path of the photographing light source 28 and an optical path of the observation light source 30.
 The photographing light source 28 is a light source for photographing the fundus, and is, for example, a source of visible light (photographing light) such as a halogen lamp or an LED (Light Emitting Diode). The photographing light source 28 emits visible light toward the dichroic mirror 26.
 The observation light source 30 is a light source for constantly observing the fundus until the fundus is photographed, and is an infrared light source that emits infrared light (for example, near-infrared light) as observation light so that the subject does not feel dazzled. The observation light source 30 emits the infrared light toward the dichroic mirror 26. Alternatively, for example, a halogen lamp or LED light source and a visible-cut filter 14 that cuts visible light from the light of that source may be provided, so that infrared light is emitted toward the dichroic mirror 26.
 The dichroic mirror 26 reflects infrared light and transmits visible light. The visible light from the photographing light source 28 passes through the dichroic mirror 26, the infrared light from the observation light source 30 is reflected by the dichroic mirror 26, and these lights pass through the relay lens 24, the perforated mirror 22, and the objective lens 20 and are applied to the fundus of the eye to be inspected 12 through its pupil. The light reflected from the fundus of the eye to be inspected 12 is incident on the image pickup element 34 via the objective lens 20, the hole of the perforated mirror 22, and the focus lens 32. The focus lens 32 adjusts the focus so that the light reflected from the fundus of the eye to be inspected 12 forms an image on the image pickup element 34. The fundus of the eye to be inspected 12 is thus imaged on the image pickup element 34.
 The image pickup element 34 forms an image of the fundus of the eye to be inspected 12. The image pickup element 34 has sensitivity to both visible light and near-infrared light. An image pickup element for visible light and an image pickup element for near-infrared light may instead be provided separately; in that case, a dichroic mirror or the like that separates visible light and near-infrared light is arranged in front of the image pickup elements.
 The ophthalmic apparatus 110 includes a fixation lamp (visible light source, not shown) for guiding the line of sight of the eye to be inspected 12 to the optical path of the image pickup element 34, and an optical system that guides the fixation light from the fixation lamp into the optical path of the eye to be inspected 12.
 The stereo cameras 15A and 15B will now be described. In order for the ophthalmic apparatus 110 to photograph the fundus of the eye to be inspected 12, the photographing light must pass through the pupil, so it is necessary to detect whether the pupil is located at the position through which the photographing light passes (that is, on the optical axis of the photographing light). For this purpose, the position of the pupil center needs to be measured. The ophthalmic apparatus 110 therefore includes stereo cameras 15A and 15B that photograph a predetermined region including the pupil of the eye to be inspected 12 from different directions. The stereo cameras 15A and 15B are cameras for visualizing infrared light emitted from the pupil of the eye to be inspected 12, and each consist of a light source that emits infrared light and an image sensor that detects infrared light.
 The stereo cameras 15A and 15B are arranged at a predetermined distance from each other in the Y direction (vertical direction), and photograph the anterior eye portion including the pupil of the eye to be inspected 12 from different directions, from the upper side and the lower side. The stereo cameras 15A and 15B may instead be arranged at a predetermined distance from each other in, for example, the X direction (horizontal direction) rather than the vertical direction.
 The stereo cameras 15A and 15B output the image signals obtained by photographing the anterior eye portion to the control unit 36. By image-processing the image signals from the stereo cameras 15A and 15B, the control unit 36 forms an image of the anterior eye portion including the pupil and measures the three-dimensional (X, Y, Z) position of the pupil center.
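The disclosure does not spell out how the three-dimensional position is computed from the two views; a minimal sketch under an idealized pinhole-stereo model, with a vertical baseline as in the text, is shown below. The focal length, baseline, and pixel coordinates are hypothetical values for illustration only.

```python
# Hedged sketch: depth of the pupil center from the vertical disparity
# between the upper (15A) and lower (15B) camera images, assuming an
# idealized rectified pinhole-stereo pair with a Y-direction baseline.

def pupil_depth(y_upper_px, y_lower_px, focal_px=1000.0, baseline_mm=30.0):
    """Depth Z (mm) of the pupil center: Z = f * b / disparity."""
    disparity = y_upper_px - y_lower_px  # vertical disparity in pixels
    if disparity == 0:
        raise ValueError("no disparity: cannot triangulate")
    return focal_px * baseline_mm / disparity
```

With the depth known, the X and Y coordinates follow from the pinhole projection of either camera.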
 Instead of the stereo cameras 15A and 15B, a laser range finder may be provided, and the control unit 36 may measure the three-dimensional position of the pupil center with it.
 The stereo cameras 15A and 15B, and the laser range finder, are examples of the pupil position measuring unit of the technology of the present disclosure.
 When the control unit 36 determines that the pupil center is not located at the position through which the photographing light passes, the position adjusting device 42, under the control of the control unit 36, adjusts the three-dimensional position of the ophthalmic apparatus 110 so that the pupil center is located at the position through which the photographing light passes. The position adjusting device 42 is not limited to adjusting the three-dimensional position of the ophthalmic apparatus 110 under the control of the control unit 36; the position adjusting device 42 may adjust it automatically. The three-dimensional position of the ophthalmic apparatus 110 may also be adjusted manually.
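The adjustment performed by the position adjusting device 42 can be illustrated as computing a correction vector that brings the measured pupil center onto the reference position (the origin of the XYZ coordinates defined earlier). The function name and the tolerance value are assumptions, not part of the disclosure.

```python
# Illustrative sketch: offset to move the apparatus so the measured pupil
# center coincides with the reference position (0, 0, 0) on the optical axis.

def correction_vector(pupil_center_xyz, tolerance=0.1):
    """Return the XYZ correction to apply; zero if already within tolerance."""
    if all(abs(c) <= tolerance for c in pupil_center_xyz):
        return (0.0, 0.0, 0.0)  # aligned closely enough; no movement needed
    # Move opposite to the measured offset to cancel it out.
    return tuple(-c for c in pupil_center_xyz)
```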
 Next, with reference to FIG. 3, the various functions realized when the CPU of the control unit 36 of the ophthalmic apparatus 110 executes the information display program will be described. The information display program has a lighting function, an alignment function, a focus adjustment function, a photographing function, a display function, a determination function, and a transmission function. When the CPU of the control unit 36 of the ophthalmic apparatus 110 executes the information display program having these functions, the CPU functions as a lighting unit 50, an alignment unit 52, a focus adjustment unit 54, a photographing unit 56, a display unit 58, a determination unit 60, and a transmission unit 62, as shown in FIG. 3.
 Next, the information display program executed by the CPU of the control unit 36 of the ophthalmic apparatus 110 will be described in detail with reference to FIG. 4. When the CPU of the control unit 36 of the ophthalmic apparatus 110 executes the information display program, the information display process shown in the flowchart of FIG. 4 is realized. When photographing the fundus of the eye to be inspected, first, the user (operator) of the ophthalmic apparatus 110 inputs the patient ID and the patient name via the input/display unit 40. Then, the subject's eye to be inspected 12 is positioned at the imaging position of the ophthalmic apparatus 110. When the user operates a start button (not shown) displayed on the input/display unit 40, the information display program starts.
 When the start button (not shown) displayed on the input/display unit 40 is operated, the observation light source 30 is turned on according to an instruction of the control unit 36, and the image pickup element 34 continues to photograph the fundus of the eye to be inspected 12. The fundus image of the eye to be inspected 12 therefore continues to be output from the image pickup element 34 to the control unit 36, so the control unit 36 constantly monitors the fundus from the start.
 In step 200, the lighting unit 50 lights the fixation lamp (not shown). As a result, the line of sight of the eye to be inspected 12 is guided in the front direction (optical axis direction (Z direction)) of the ophthalmic apparatus 110. This direction is for photographing the central part of the fundus (the posterior pole, where the macula and the optic nerve head are present). When photographing the peripheral part of the fundus, the lighting unit 50 lights the fixation lamp at a position different from the front direction.
 In step 202, the alignment unit 52 and the display unit 58 execute the alignment process of the ophthalmic apparatus 110 (see also FIG. 5A), which adjusts the positional relationship between the eye to be inspected 12 and the ophthalmic apparatus 110 to a positional relationship in which imaging is performed appropriately. FIG. 5A shows a flowchart of the alignment process in step 202. As shown in FIG. 5A, in step 250, the alignment unit 52 and the display unit 58 execute the display process of the shooting preparation screen 300 (see also FIG. 6) on the display of the input/display unit 40. FIG. 5B shows a flowchart of the display process of the shooting preparation screen 300 in step 250.
 As shown in FIG. 5B, in step 259, the display unit 58 displays the shooting preparation screen 300 (see also FIG. 6) on the display of the input/display unit 40.
 As shown in FIG. 6, the following display items are displayed on the shooting preparation screen 300: a patient ID display field 302, a patient name display field 304, an acquisition status display section 306 that displays the acquisition status of the fundus image of the right eye, and an acquisition status display section 308 that displays the acquisition status of the fundus image of the left eye.
 The acquisition status display section 306 and the acquisition status display section 308 are examples of the "first section" and the "second section" of the technology of the present disclosure, respectively.
 The display items also include an anterior segment imaging section 310 that displays the anterior segment of the eye to be inspected, and a fundus display section 312 that displays the fundus of the eye to be inspected. The display items further include a deviation amount display section 314 that displays the amount of deviation of the position of the ophthalmic apparatus 110 in the Z direction from the correct position, a shooting button 316, and an examination end button 318 for ending the information display process. In the anterior segment imaging section 310, an eye object 320 is displayed that is generated based on alignment information, acquired by the alignment unit 52, indicating the positional relationship between the eye to be inspected 12 and the ophthalmic apparatus 110 (this will be described later). The anterior segment imaging section 310 is an example of the "third section" of the technology of the present disclosure.
 Since the fundus image of the eye to be inspected 12 continues to be output from the image pickup element 34 to the control unit 36 as described above, the display unit 58 of the control unit 36 displays the monitored fundus image in the fundus display section 312.
 FIG. 6 shows a case where the eye to be inspected in the fundus image monitored as described above is the right eye.
 In step 261, the alignment unit 52 confirms the acquisition status of the fundus images of the eye to be inspected (whether the fundus images of the left and right eyes for AI screening have been acquired). The confirmation process in step 261 is a process for confirming the previous acquisition state of the fundus images of the eye to be inspected.
 The fundus image of the right eye for AI screening is an example of the "first fundus image" of the technology of the present disclosure, and the fundus image of the left eye for AI screening is an example of the "second fundus image" of the technology of the present disclosure.
 Specifically, the display unit 58 inquires of the management server 140 whether a fundus image is stored in association with the patient ID, together with information indicating whether the fundus image is of the right eye or of the left eye.
 If the fundus of each of the patient's right and left eyes has been photographed and the fundus images of both eyes have been acquired, the patient ID, the right-eye information, and the right-eye fundus image are stored in association with one another in the storage device of the management server 140, as are the patient ID, the left-eye information, and the left-eye fundus image. The management server 140 transmits to the ophthalmic apparatus 110 the patient ID, the right-eye information, and the right-eye fundus image in association with one another, and the patient ID, the left-eye information, and the left-eye fundus image in association with one another. Having received the patient ID, the right-eye information, the right-eye fundus image, the left-eye information, and the left-eye fundus image, the ophthalmic apparatus 110 can confirm, as the acquisition state, that the fundus images of both the right and left eyes have been acquired (photographed).
If the fundus of only the patient's right eye or only the left eye has been photographed and its fundus image acquired, then, in association with the patient ID, either the right-eye information and the right-eye fundus image, or the left-eye information and the left-eye fundus image, are stored in the storage device of the management server 140. The management server 140 transmits to the ophthalmologic apparatus 110 the patient ID together with either the right-eye information and the right-eye fundus image, or the left-eye information and the left-eye fundus image. Having received the patient ID together with either the right-eye information and fundus image or the left-eye information and fundus image, the ophthalmologic apparatus 110 can confirm, as the acquisition state of the fundus images of the subject eye, either that the right-eye fundus image has been acquired (photographed) while the left-eye fundus image has not (not photographed), or that the right-eye fundus image has not been acquired (not photographed) while the left-eye fundus image has (photographed).
If neither the fundus of the patient's right eye nor that of the left eye has been photographed, no right-eye information, right-eye fundus image, left-eye information, or left-eye fundus image is stored in the storage device of the management server 140 in association with the patient ID. The management server 140 then transmits none of these items to the ophthalmologic apparatus 110. Receiving no right-eye information, right-eye fundus image, left-eye information, or left-eye fundus image, the ophthalmologic apparatus 110 can confirm, as the acquisition state of the fundus images of the subject eye, that the fundus images of both the right eye and the left eye have not been acquired (not photographed).
The technique of the present disclosure is not limited to storing the right-eye information, the right-eye fundus image, the left-eye information, and the left-eye fundus image in the storage device of the management server 140 in association with the patient ID; these items of information and images may instead be stored in the storage device of the ophthalmologic apparatus 110. In that case, the alignment unit 52 checks the contents of the storage device of the ophthalmologic apparatus 110 rather than making the above inquiry to the management server 140.
In step 263, the display unit 58 sets the display content of the acquisition status display sections 306 and 308 to the content indicated by the acquisition state confirmed in step 261, and updates the display states of the sections accordingly. For example, when the right eye is being photographed and neither eye has yet been photographed, "not photographed" is displayed in both the right-eye acquisition status display section 306 and the left-eye acquisition status display section 308. When the right eye is being photographed and both eyes have already been photographed, "photographed" is displayed in both sections 306 and 308. When the right eye is being photographed, the right eye has not been photographed, and the left eye has, "not photographed" is displayed in the right-eye acquisition status display section 306 and "photographed" in the left-eye acquisition status display section 308. FIG. 6 shows the case where the right eye is being photographed, the right eye has already been photographed, and the left eye has not; specifically, "photographed" is displayed in the right-eye acquisition status display section 306 and "not photographed" in the left-eye acquisition status display section 308.
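The decision logic of steps 261 and 263 can be sketched as follows. This is a minimal illustration: the function name and the layout of the server response are assumptions for the example, not part of the disclosed apparatus.

```python
def acquisition_status(server_response):
    """Derive the display text for the right-eye section 306 and the
    left-eye section 308 from the management server's response for one
    patient ID. `server_response` is assumed to map the keys "right"
    and "left" to fundus image data only for eyes that have actually
    been photographed."""
    right = "photographed" if server_response.get("right") is not None else "not photographed"
    left = "photographed" if server_response.get("left") is not None else "not photographed"
    return right, left
```

For instance, a response containing only a left-eye image yields `("not photographed", "photographed")`, matching the second case described above.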
The display process of the imaging preparation screen 300 in step 250 may be performed by the alignment unit 52 and the display unit 58. In this case, based on the alignment information collected by the alignment unit 52, the display unit 58 generates the imaging preparation screen 300 and outputs its image signal to the display of the input/display unit 40. The display of the input/display unit 40 displays the imaging preparation screen 300 based on that image signal.
When the process of step 263 is completed, the display process of the imaging preparation screen 300 (step 250 of FIG. 5A) is completed.
In the next step 252, the alignment unit 52 measures the position of the subject eye, specifically the XYZ position of the pupil center, by three-dimensional measurement using images from different directions based on the image signals from the stereo cameras 15A and 15B.
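The three-dimensional measurement can be illustrated with a simplified rectified-stereo model. The baseline, focal length, and rectified geometry below are assumptions for the sketch; the actual camera geometry of the apparatus is not specified here.

```python
def pupil_depth(y_upper_px, y_lower_px, baseline_mm, focal_px):
    """Estimate the distance of the pupil center along the optical axis
    from the vertical disparity between the upper and lower stereo
    images (rectified-stereo approximation)."""
    disparity = y_upper_px - y_lower_px  # pixels
    if disparity <= 0:
        raise ValueError("pupil must be detected in both images")
    # standard stereo depth relation: depth = baseline * focal / disparity
    return baseline_mm * focal_px / disparity
```

The X and Y coordinates would be recovered from the pixel position of the pupil center in either image once the depth is known.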
In step 254, the alignment unit 52 first determines the display position of the eye object 320 in the anterior segment imaging section 310 from the X and Y coordinates of the three-dimensional position (X, Y, Z) of the pupil center. Specifically, the alignment unit 52 calculates the amount of deviation of the pupil center in each of the X and Y directions from the reference position (X, Y), and determines the display position of the eye object 320 so as to show how the position of the subject eye deviates from the reference position. If (X, Y) coincides with the origin (0, 0) indicating the center of the optical axis, the display position of the eye object 320 is the center of the anterior segment imaging section 310. If (X, Y) differs from the origin, the display position of the eye object 320 in the anterior segment imaging section 310 is determined based on the X and Y position information.
The eye object 320 is an illustration imitating the features of the anterior segment of the eye, and is an image object whose components include a pupil, an iris, the bulbar conjunctiva (the so-called white of the eye), and a circle 328 that highlights the pupil.
Since the stereo cameras 15A and 15B photograph the pupil of the anterior segment of the subject eye 12 obliquely from above and below, displaying an anterior segment image based on their image signals would display an oblique image. With the eye object, however, the eye can be displayed as an illustration viewed from the front rather than as a realistic anterior segment image that differs from subject to subject, which has the effect that the user can perform the alignment work easily. Moreover, a realistic anterior segment image may include eyelids, eyelashes, and the like, which can make alignment difficult; by displaying the eye object, the user can perform the alignment work regardless of the imaging condition of the anterior segment.
In step 254, the alignment unit 52 secondly determines the size of the eye object 320 displayed in the anterior segment imaging section 310 based on the Z coordinate of the three-dimensional position (X, Y, Z) of the pupil center. Specifically, the alignment unit 52 calculates the amount of deviation of the pupil center in the Z direction from the reference position (Z), and determines the size of the eye object 320 so as to show how the position of the subject eye deviates from the reference position. If the pupil center is at the position Z = 0 (the position where the light beam is narrowest in the Z direction), the pupil of the eye object is given the standard size (the default display).
However, when the pupil center is nearer than the position Z = 0 (closer to the ophthalmologic apparatus 110, i.e., Z < 0), the anterior segment photographed by the stereo cameras 15A and 15B appears larger than when Z = 0. The size of the eye object 320 is therefore enlarged according to the value of Z.
Conversely, when the pupil center is farther than the position Z = 0 (away from the ophthalmologic apparatus 110, i.e., Z > 0), the anterior segment photographed by the stereo cameras 15A and 15B appears smaller than when Z = 0. The size of the eye object 320 is therefore reduced according to the value of Z.
That is, based on the Z coordinate, which is the position of the measured pupil center along the optical axis, the alignment unit 52 determines the size of the displayed eye object so that it is larger than the standard size when the pupil center is nearer than the position Z = 0, and smaller than the standard size when the pupil center is farther than the position Z = 0.
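Taken together, the mapping from the measured offset (X, Y, Z) to the display position and size of the eye object 320 can be sketched as follows. All numeric constants (section size, pixel scale, base size, scaling rate) are illustrative assumptions.

```python
def eye_object_pose(x, y, z, section_w=400, section_h=400,
                    mm_per_px=0.05, base_size=120.0, scale_per_mm=0.1):
    """Map the measured pupil-center offset (x, y, z) in mm to the
    display position (cx, cy) and size of the eye object 320 inside the
    anterior segment imaging section 310."""
    # (0, 0) maps to the center of the section, where the cross index sits
    cx = section_w / 2 + x / mm_per_px
    cy = section_h / 2 - y / mm_per_px  # screen y grows downward
    # z < 0 (eye closer to the apparatus) -> enlarge; z > 0 (farther) -> shrink
    size = base_size * (1.0 - scale_per_mm * z)
    return cx, cy, size
```

With this convention, a pupil center exactly at (0, 0, 0) is drawn at the section center at the standard size, matching the default display described above.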
In step 256, the display unit 58 displays the eye object of the determined size at the determined position in the anterior segment imaging section 310. Specifically, the image data of the animated eye object 320 is stored in advance in the storage device of the control unit 36 of the ophthalmologic apparatus 110. The display unit 58 reads out the stored eye object and executes image processing that enlarges or reduces the eye object 320 to the size determined by the alignment unit 52. The resized eye object 320 is then displayed at the position in the anterior segment imaging section 310 determined by the alignment unit 52.
Further, the display unit 58 superimposes a cross index (322 and 324) and a circle index 326 at the center of the anterior segment imaging section 310. The cross index indicates the position of the optical axis of the ophthalmologic apparatus 110; it consists of a vertical line 324 and a horizontal line 322, and the intersection of the vertical line 324 and the horizontal line 322 is the optical axis position of the ophthalmologic apparatus 110. The circle index 326 is a circle indicating the standard pupil size, and its center coincides with the intersection of the cross index.
In this way, the eye object 320, the cross index (322 and 324), and the circle index 326 corresponding to the alignment state between the subject eye 12 and the ophthalmologic apparatus 110 are displayed, as shown in the anterior segment imaging section 310 of the imaging preparation screen 300 of FIG. 6.
When the display process of the eye object 320 in step 256 of FIG. 5A has been executed as described above, then in step 258 the alignment unit 52 adjusts the position of the ophthalmologic apparatus 110 so that the pupil center is located at the correct position, by controlling the position adjusting device 42, such as the XYZ stage and the chin rest, based on the measured position of the pupil center in each of the X, Y, and Z directions (automatic alignment adjustment). Alternatively, while looking at the anterior segment imaging section 310 of the imaging preparation screen 300, the user may operate the position adjusting device 42 to adjust the position of the ophthalmologic apparatus 110 so that the pupil center is located at the correct position (manual alignment adjustment).
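Automatic alignment is, in essence, a feedback loop that drives the stage to cancel the measured offset. The following sketch uses an assumed proportional gain, tolerance, and iteration limit; it is not the control law of the actual apparatus.

```python
def auto_align(measure, move_stage, tol_mm=0.1, gain=0.8, max_iter=50):
    """Drive the position adjusting device until the measured pupil-center
    offset (x, y, z) falls within tolerance of the origin.

    `measure` returns the current (x, y, z) offset in mm;
    `move_stage` applies a relative move in the same coordinates.
    """
    for _ in range(max_iter):
        x, y, z = measure()
        if max(abs(x), abs(y), abs(z)) < tol_mm:
            return True                      # aligned
        # move opposite to the measured offset, scaled by the loop gain
        move_stage(-gain * x, -gain * y, -gain * z)
    return False
```

Each iteration shrinks the residual offset by a factor of (1 - gain), so with gain 0.8 a 5 mm initial error falls below 0.1 mm within a few steps.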
FIG. 7A shows the eye object 320 displayed in the anterior segment imaging section 310 when the position of the pupil center of the subject eye 12 measured by the alignment unit 52 deviates from the position (X, Y, Z) = (0, 0, 0) in each direction, at a size corresponding to the case where the Z-direction position is farther than the correct position (Z = 0).
As shown in FIG. 7A, the center of the anterior segment imaging section 310 is the position of the optical axis of the ophthalmologic apparatus 110. To make that position easy to grasp, the display unit 58 displays in the anterior segment imaging section 310 a cross index defined by the horizontal line 322 and the vertical line 324, positioning the intersection of the horizontal line 322 and the vertical line 324 at the center of the section. Centered on the center of the anterior segment imaging section 310, the circle index 326, showing the standard size of a pupil located at the correct position in the Z direction, is also displayed in the anterior segment imaging section 310.
In the example shown in FIG. 7A, as described above, the measured Z-direction position of the pupil center of the subject eye 12 is farther than the correct position (Z = 0), so the circle 328 is displayed smaller than the circle index 326 (the eye object 320 is displayed reduced from the default). The offset of the center of the circle 328 from the center of the anterior segment imaging section 310 expresses the first aspect (the deviation of the pupil of the subject eye 12 from the optical axis in the XY plane), and the size of the circle 328 expresses the second aspect (whether the fundus of the subject eye 12 is in focus, i.e., the amount of deviation in the Z-axis (optical axis) direction).
FIG. 7B shows the eye object displayed in the anterior segment imaging section 310 when the measured position of the pupil center deviates from (X, Y, Z) = (0, 0, 0) in the X and Y directions, but the Z-direction position is at the correct position. Since the Z-direction position is correct, the size of the circle 328 is the same as that of the circle index 326 of the standard pupil size (the eye object 320 is displayed at the default size).
FIG. 7C shows the eye object displayed in the anterior segment imaging section 310 when the measured X- and Y-direction positions of the pupil center are at the correct positions, but the Z-direction position deviates from the correct position, specifically being farther away. Since the measured X- and Y-direction positions of the pupil center are correct, the center of the circle 328 coincides with the center of the circle index 326 of the standard pupil size. However, since the measured Z-direction position of the pupil center of the subject eye 12 is farther than the correct position (Z = 0), the circle 328 is smaller than the circle index 326 of the standard pupil size (the eye object 320 is displayed reduced from the default).
FIG. 7D shows the eye object displayed in the anterior segment imaging section 310 when the measured X- and Y-direction positions of the pupil center are at the correct positions, but the Z-direction position deviates from the correct position, specifically being closer. The center of the circle 328 coincides with the center of the circle index 326 of the standard pupil size, but the circle 328 is larger than the circle index 326 of the standard pupil size.
FIG. 7E shows the animated eye displayed in the anterior segment imaging section 310 when the measured position of the pupil center is at the correct position in all of the X, Y, and Z directions. The center of the circle 328 coincides with the center of the circle index 326 of the standard pupil size, and the size of the circle 328 matches that of the circle index 326.
As shown in FIGS. 7A to 7E, the intersection of the horizontal line 322 and the vertical line 324 of the cross index is located at the center of the anterior segment imaging section 310. In this state, the eye object 320 is displayed in the anterior segment imaging section 310 at the position corresponding to the position measured by the stereo cameras 15A and 15B, and at a size corresponding to the position of the pupil center.
Accordingly, from the position and size of the eye object, the user can intuitively understand whether the subject eye 12 is located at the correct position and, if not, in which direction and by how much it deviates.
While looking at the eye object displayed in the anterior segment imaging section 310, the user can operate the XYZ stage and the chin rest of the ophthalmologic apparatus 110 to adjust the positional relationship between the subject eye 12 and the ophthalmologic apparatus 110 to the position optimal for photographing the fundus. In addition, the display position and size of the eye object change like an animation in response to the operation of the XYZ stage and the chin rest. The user can therefore perform the alignment work while looking at the display of the ophthalmologic apparatus 110, without looking at the XYZ stage, the operation buttons of the chin rest, or the joystick.
Note that when the Z position information (coordinate value) measured by the alignment unit 52 is nearer (closer to the ophthalmologic apparatus 110) than a predetermined safety reference value of Z, the display unit 58 may display a warning on the imaging preparation screen 300 indicating that the apparatus is in danger of colliding with the patient.
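Such a proximity check can be sketched as follows, using the convention above that Z < 0 means the eye is closer to the apparatus. The threshold value is an assumed example, not a value from the disclosure.

```python
def collision_warning(z_mm, safety_z_mm=-8.0):
    """Return True when the measured pupil position is closer to the
    apparatus than the predetermined safety reference value, i.e. when
    z is more negative than the threshold."""
    return z_mm < safety_z_mm
```

The display unit would show the warning on the imaging preparation screen whenever this returns True.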
By looking at the cross index 322, 324 and the circle index 326 at the center of the anterior segment imaging section 310 together with the display position and size of the eye object, the user can intuitively grasp the amounts of deviation in the X and Y directions and the amount of deviation of the subject eye in the Z direction from the reference position. The user can therefore control the three-dimensional drive stage, which moves the ophthalmologic apparatus 110 up and down, left and right, and back and forth, and control the position of the chin rest, while looking at the eye object displayed in the anterior segment imaging section 310, and can thus easily perform the position adjustment optimal for imaging.
In the present embodiment, as described above, the pupil in the eye object 320 is displayed at a size corresponding to the Z-direction position of the pupil center. This allows the user to understand whether the Z-direction position of the subject eye 12 is at the correct position and, if not, by how much it deviates.
In the present embodiment, the display unit 58 further displays in the deviation amount display section 314 a mark whose color changes according to the amount of deviation of the Z-direction position from the correct position, based on the Z position information measured by the alignment unit 52. For example, when the deviation amount is 0, the display unit 58 makes the color of the deviation amount display section 314 blue. As the amount of deviation from the correct position in the Z direction toward the ophthalmologic apparatus 110 increases, the display unit 58 increases the red component of the color of the deviation amount display section 314 from blue. As the amount of deviation from the correct position in the Z direction away from the ophthalmologic apparatus 110 increases, the display unit 58 increases the green component of the color of the deviation amount display section 314 from blue.
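One possible color mapping consistent with this description is sketched below; the full-scale range and the exact shape of the color ramp are illustrative assumptions.

```python
def deviation_color(z_mm, full_scale_mm=5.0):
    """(R, G, B) color of the deviation amount display section 314:
    pure blue at z = 0; the red component grows as the eye moves toward
    the apparatus (z < 0), the green component as it moves away (z > 0)."""
    t = min(abs(z_mm) / full_scale_mm, 1.0)  # 0.0 at no deviation, 1.0 at full scale
    if z_mm < 0:
        return (int(255 * t), 0, 255)   # blue with increasing red component
    return (0, int(255 * t), 255)       # blue with increasing green component
```

Both the amount and the direction of the Z deviation are thus encoded in a single color, which is what lets the user read the state at a glance.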
In this way, the color of the deviation amount display section 314 is changed according to the amount and direction of the Z-direction deviation of the pupil center of the subject eye 12 from the correct position. The user can thus understand whether the Z-direction position of the subject eye 12 is at the correct position and, if not, by how much it deviates. Note that when the Z-direction deviation amount display section 314 is provided, the display size of the eye object 320 need not be changed.
In the examples shown in FIGS. 7A to 7E, the alignment unit 52 displays the eye animation 320; however, the technique of the present disclosure is not limited to this, and instead of the object, a computer graphics (CG) image may be displayed with its position and size adjusted in the same manner as in the examples shown in FIGS. 7A to 7E. A three-dimensional, stereoscopic object image may also be used instead of a two-dimensional object image.
The object 320 and the CG image described above are examples of a simulated image imitating the subject eye.
In the position adjustment process of the ophthalmologic apparatus 110 shown in FIG. 5A described above, the realistic anterior segment images of the subject eye 12 obtained by the stereo cameras 15A and 15B are not used. Therefore, when executing the position adjustment process shown in FIG. 5A, the control unit 36 need not form an anterior segment image including the pupil, and a laser rangefinder can be used instead of the stereo cameras 15A and 15B.
However, the technique of the present disclosure is not limited to this; an anterior segment image of the subject eye 12 obtained by one of the stereo cameras 15A and 15B may be used in place of the eye object.
FIG. 8 shows a flowchart of the position adjustment process of the ophthalmologic apparatus 110 when an anterior segment image of the subject eye 12 obtained by one of the stereo cameras 15A and 15B is used in place of the eye object. In the adjustment process shown in FIG. 8, steps 251 to 257 are executed in place of steps 252 to 256, between step 250 and step 258 of FIG. 5A.
In step 251, the alignment unit 52 photographs the pupil of the anterior segment of the subject eye 12 (obliquely) with the stereo cameras 15A and 15B.
In step 253, the alignment unit 52 measures the position of the pupil in each of the X, Y, and Z directions by three-dimensional measurement using images from different directions based on the image signals from the stereo cameras 15A and 15B.
In step 255, using the position information of the pupil in the X, Y, and Z directions and the previously stored tilt information of one of the stereo cameras 15A and 15B with respect to the optical axis of the ophthalmologic apparatus 110, for example the stereo camera 15B that photographs the anterior segment from below, the alignment unit 52 converts the (oblique) pupil image G0 (see FIG. 9A) based on the image signal from the stereo camera 15B into a front image G1 as shown in FIG. 9B.
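A highly simplified model of this conversion treats the oblique view as a pure vertical foreshortening by cos(tilt); a real implementation would apply a full perspective (homography) warp to the image. The sketch below, with an assumed tilt angle, only corrects the apparent size of the pupil region.

```python
import math

def oblique_to_front_size(width_px, height_px, tilt_deg):
    """Compensate the vertical foreshortening of a pupil region seen by
    a camera tilted `tilt_deg` away from the optical axis: the height is
    stretched by 1 / cos(tilt) to approximate the front view G1."""
    scale = 1.0 / math.cos(math.radians(tilt_deg))
    return width_px, height_px * scale
```

For example, a camera tilted 60 degrees sees the pupil at half its true height, so the correction doubles the measured height.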
In step 257, the alignment unit 52 first determines, as in the process of step 254 of FIG. 5A, the position at which to display the front image G1 based on the X- and Y-direction position information, and the size at which to display the front image G1 based on the Z position information.
In step 257, the alignment unit 52 secondly displays the front image G1 of the determined size and the cross index at the determined position in the anterior segment imaging section 310.
In the example shown in FIG. 8, the front image G1 of the pupil of the subject eye 12 is displayed; however, since the size of the pupil changes physiologically with ambient light, the size of the pupil in the front image G1 may not match the above standard size even when the pupil center of the subject eye 12 is at the correct position. Therefore, in order to measure the position of the pupil in the X, Y, and Z directions, the position may be measured with a laser rangefinder rather than using the image signals from the stereo cameras 15A and 15B.
When the adjustment of the position of the ophthalmic apparatus 110 is completed as described above, the process of step 202 in FIG. 4 ends.
In step 204, the focus adjustment unit 54 adjusts the position of the focus lens 32 so that the fundus of the subject's eye 12 forms an image on the image sensor 34.
In step 206, the imaging unit 56 waits for the imaging button 316 to be turned on; when it is turned on, the imaging unit 56 turns on the imaging light source 28 and photographs the fundus of the subject's eye 12.
In step 206, the imaging button 316 may be highlighted, for example by changing its color or blinking it, to alert the operator that the alignment and focus adjustment of the ophthalmic apparatus 110 have been completed and that imaging is now possible. A message to the same effect may also be displayed.
The imaging button 316 may also be omitted, in which case the imaging light source 28 is turned on and the fundus of the subject's eye 12 is photographed automatically when the process of step 204 is completed.
A mode selection button may also be provided for selecting between an auto imaging mode and a manual imaging mode. In the auto imaging mode, the fundus of the subject's eye 12 is photographed automatically when the process of step 204 is completed. In the manual imaging mode, the fundus of the subject's eye 12 is photographed when the imaging button 316 is turned on. The imaging unit 56 photographs the fundus of the subject's eye 12 according to the mode set with the mode selection button.
In step 208, the display unit 58 executes display processing that replaces the imaging preparation screen 300 on the display of the input/display unit 40 with the imaging result confirmation screen (a screen serving as a graphical user interface) 400 shown in FIG. 11.
FIG. 10 shows a flowchart of the display processing for the imaging result confirmation screen 400. As shown in FIG. 10, in step 350, the display unit 58 displays the imaging result confirmation screen 400.
As shown in FIG. 11, the imaging result confirmation screen 400 contains the following display items: a patient ID display field 402, a patient name display field 404, a right-eye fundus image display field 410 for displaying the fundus image of the right eye, and a left-eye fundus image display field 412 for displaying the fundus image of the left eye.
The right-eye fundus image display field 410 and the left-eye fundus image display field 412, which displays the imaging result for the left eye, are examples of the "first field" and the "second field", respectively, of the technology of the present disclosure.
Right-eye fundus image switching buttons 410A and 410B are provided to the left and right of the right-eye fundus image display field 410; when the right eye has been photographed multiple times and multiple fundus images of the right eye are stored, these buttons are used to instruct that the fundus images of the right eye be displayed in turn in the right-eye fundus image display field 410. When the right-eye fundus image switching button 410A is turned on, an instruction is given to switch to a fundus image captured earlier than the fundus image currently displayed in the right-eye fundus image display field 410. When the right-eye fundus image switching button 410B is turned on, an instruction is given to switch to a fundus image captured later than the currently displayed one. Left-eye fundus image switching buttons 412A and 412B are likewise provided to the left and right of the left-eye fundus image display field 412. Below the right-eye fundus image display field 410, a right-eye image quality judgment result display field 420 is provided for displaying the result of judging the image quality of the fundus image displayed in the right-eye fundus image display field 410. A left-eye image quality judgment result display field 422 is likewise provided below the left-eye fundus image display field 412.
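The behavior of the switching buttons 410A/410B (and 412A/412B) can be sketched as a cursor over the stored captures for one eye, ordered by capture time. The class and member names below are hypothetical, chosen only for illustration:

```python
class CaptureHistory:
    """Time-ordered fundus captures for one eye, with the prev/next
    navigation performed by the switching buttons (410A: older,
    410B: newer)."""

    def __init__(self):
        self.images = []   # (timestamp, image) tuples, oldest first
        self.index = -1    # currently displayed capture

    def add(self, timestamp, image):
        self.images.append((timestamp, image))
        self.images.sort(key=lambda t: t[0])
        self.index = len(self.images) - 1   # newest capture shown first

    def show_older(self):   # button 410A
        if self.index > 0:
            self.index -= 1
        return self.images[self.index][1]

    def show_newer(self):   # button 410B
        if self.index < len(self.images) - 1:
            self.index += 1
        return self.images[self.index][1]

    def buttons_visible(self):
        # Switching buttons appear only when the eye was photographed
        # more than once, matching the behavior described for FIG. 11.
        return len(self.images) > 1
```

Navigation clamps at the oldest and newest captures rather than wrapping, which keeps the button semantics unambiguous for the operator.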
The right-eye image quality judgment result display field 420 and the left-eye image quality judgment result display field 422 are examples of the "third field" and the "fourth field", respectively, of the technology of the present disclosure.
The display items further include an imaging continuation button 432, an AI analysis execution button 434, and an examination end button 436 for ending the information display processing.
The AI analysis execution button 434 is an example of the "button" of the technology of the present disclosure, and the portion in which the AI analysis execution button 434 is displayed is an example of the "fifth field" of the technology of the present disclosure.
In step 352, the display unit 58 determines whether the fundus image obtained by the imaging in step 206 is of the right eye or the left eye. This determination can be made as follows. For example, at the time of imaging in step 206, the operator presses a right-eye button or a left-eye button (not shown) according to the eye being photographed, and the control unit 36 stores that information in its storage device. In step 352, the display unit 58 then determines the laterality of the fundus image based on the information stored in the storage device of the control unit 36. Alternatively, the display unit 58 may make the determination by image processing. More specifically, the display unit 58 performs image processing on the fundus image to extract the optic disc and the blood vessels running toward the optic disc. The display unit 58 then compares the direction of the blood vessels toward the optic disc in the fundus image with pre-stored directions of the blood vessels toward the optic disc for the right and left fundus, and thereby judges whether the fundus image is of the right eye or the left eye.
The display unit 58 may also use the above image processing to verify the laterality determined from the information stored in the storage device of the control unit 36.
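The image-processing variant of the laterality check compares an extracted vessel direction with stored reference directions. A minimal sketch, assuming the vessel direction toward the optic disc has already been extracted as a 2-D vector and that per-eye reference directions are available (the reference vectors below are illustrative, not values from the patent):

```python
import math

# Reference vessel directions toward the optic disc, one per eye
# (illustrative unit vectors, assumed pre-stored).
REFERENCE_DIRECTIONS = {
    "right": (1.0, 0.0),
    "left":  (-1.0, 0.0),
}

def classify_laterality(vessel_direction):
    """Return 'right' or 'left' for the eye whose stored vessel
    direction best matches (largest cosine similarity) the measured one."""
    def cosine(u, v):
        dot = u[0] * v[0] + u[1] * v[1]
        return dot / (math.hypot(*u) * math.hypot(*v))
    return max(REFERENCE_DIRECTIONS,
               key=lambda eye: cosine(vessel_direction,
                                      REFERENCE_DIRECTIONS[eye]))

def verify_button_input(button_choice, vessel_direction):
    """Cross-check the operator's right/left button press against the
    image-based classification, as the optional verification step
    describes."""
    return button_choice == classify_laterality(vessel_direction)
```

When the two sources disagree, a real system would flag the capture for operator review rather than silently trusting either one.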
In step 354, the display unit 58 displays the fundus image in the right-eye fundus image display field 410 or the left-eye fundus image display field 412 according to the determination result of step 352.
The example shown in FIG. 11 is one in which the fundus of the right eye has been photographed but the fundus of the left eye has not. In this way, when, for example, the fundus of the right eye has been photographed, the obtained fundus image is displayed in the right-eye fundus image display field 410. When the fundus of the left eye has not been photographed, nothing is displayed in the left-eye fundus image display field 412, which indicates that the fundus of the left eye has not been photographed. From the display states of the right-eye fundus image display field 410 and the left-eye fundus image display field 412, the operator can thus judge whether the fundus of each eye has been photographed.
When the fundus of the right eye has been photographed multiple times, the right-eye fundus image switching buttons 410A and 410B are displayed, which lets the operator know that the fundus of the right eye has been photographed multiple times. When the fundus of the left eye has not been photographed even once, the left-eye fundus image switching buttons 412A and 412B are not displayed, which lets the operator know that the fundus of the left eye has never been photographed.
In step 356, the display unit 58 judges whether the image quality of the fundus image obtained by the imaging in step 206 is acceptable. Although the position of the ophthalmic apparatus 110 is adjusted in step 202 and the focus is adjusted in step 204, the subject's eye may move at the moment of imaging in step 206, for example due to involuntary fixational eye movement, so that the image quality is not acceptable.
The acceptable image quality is image quality at or above a certain level at which the process of generating information on the condition of the subject's eye using AI can be executed. Specifically, a fundus image at or above this level is an image that satisfies at least one (in the present embodiment, both) of the following conditions: the image is not blurred, and the optic disc is located within a predetermined positional range.
Specifically, the display unit 58 extracts the blood vessels from the fundus image. The display unit 58 judges whether the image is blurred by judging whether the contrast between the extracted blood vessel portions and the remaining portions of the fundus image is at or above a predetermined value. The display unit 58 also determines the position of the optic disc in the fundus image from a blood vessel pattern obtained in advance, and judges whether the determined position lies within the predetermined positional range of the fundus image.
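The two acceptance conditions can be sketched as follows. The contrast measure and thresholds are illustrative assumptions, and the vessel mask and optic-disc position are taken as already extracted by earlier processing:

```python
def mean(values):
    return sum(values) / len(values)

def is_sharp(pixels, vessel_mask, min_contrast=30.0):
    """Blur check: the mean intensity difference between vessel pixels
    and non-vessel pixels must reach a threshold (threshold value is an
    assumption). `pixels` is a flat list of grayscale values and
    `vessel_mask` a parallel list of booleans marking vessel pixels."""
    vessel = [p for p, m in zip(pixels, vessel_mask) if m]
    background = [p for p, m in zip(pixels, vessel_mask) if not m]
    if not vessel or not background:
        return False
    return abs(mean(background) - mean(vessel)) >= min_contrast

def disc_in_range(disc_xy, allowed_box):
    """Optic-disc position check: (x, y) must lie inside the allowed
    box ((x0, y0), (x1, y1))."""
    (x0, y0), (x1, y1) = allowed_box
    x, y = disc_xy
    return x0 <= x <= x1 and y0 <= y <= y1

def quality_acceptable(pixels, vessel_mask, disc_xy, allowed_box):
    # The present embodiment requires both conditions to hold.
    return is_sharp(pixels, vessel_mask) and disc_in_range(disc_xy, allowed_box)
```

A sharp fundus image keeps dark vessels well separated from the brighter retinal background, so blur collapses the contrast term first; the disc-position term instead catches fixation errors.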
Alternatively, whether the image quality of the fundus image obtained in step 206 is acceptable may be determined by comparing the captured fundus image with a reference image using an algorithm such as PSNR (Peak Signal-to-Noise Ratio) or SSIM (Structural Similarity).
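As one concrete instance of this comparison, PSNR against a reference image can be computed directly from the mean squared error; the acceptance threshold below is an assumption for illustration:

```python
import math

def psnr(reference, test, max_value=255.0):
    """Peak signal-to-noise ratio between two same-sized grayscale
    images given as flat sequences of pixel values. Returns +inf for
    identical images; higher means closer to the reference."""
    if len(reference) != len(test):
        raise ValueError("images must have the same number of pixels")
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return math.inf
    return 10.0 * math.log10(max_value ** 2 / mse)

def psnr_acceptable(reference, test, threshold_db=30.0):
    """Accept the captured image when its PSNR against the reference
    is at or above a threshold (threshold value is an assumption)."""
    return psnr(reference, test) >= threshold_db
```

SSIM would replace the pixel-wise error with a windowed comparison of luminance, contrast, and structure; library implementations of both metrics exist, so a hand-rolled version like this is only needed to show the principle.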
In step 358, the display unit 58 displays the image quality judgment result in the judgment result display field 420 or 422 corresponding to the judged eye. In the example shown in FIG. 11, when the image quality of the fundus image of the right eye is judged to be acceptable, an indication that the image quality is acceptable, for example "OK", is displayed in the right-eye image quality judgment result display field 420. Since, as described above, no fundus image of the left eye has been obtained, its image quality cannot be judged, so nothing is displayed in the left-eye image quality judgment result display field 422.
When the display processing of the judgment result in step 358 ends, the display processing of the imaging result confirmation screen 400 in step 208 of FIG. 4 ends.
In step 210, the judgment unit 60 judges whether the image quality of the fundus image obtained by the imaging in step 206 is acceptable, based on the judgment result of step 356. When the image quality is judged not to be acceptable, that is, when the judgment result is a non-acceptance result, the information processing returns to step 200 and steps 200 through 210 are executed again. When the image quality is judged to be acceptable, that is, when the judgment result is an acceptance result, the information processing proceeds to step 212.
In step 212, the judgment unit 60 judges whether steps 200 through 210 have been completed for both eyes. When it is not judged that steps 200 through 210 have been completed for both eyes, for example when, as described above, the fundus of the right eye has been photographed but the fundus of the left eye has not, the information processing proceeds to step 214.
In step 214, the judgment unit 60 judges whether the imaging continuation button 432 has been turned on, thereby judging whether the operator has instructed imaging of the fundus of the eye other than the eye whose fundus has already been photographed. In step 214, the imaging continuation button 432 may be highlighted, for example by changing its color or blinking it, to alert the operator that the image quality of the fundus image of the already photographed eye is acceptable and that the fundus of the other eye needs to be photographed. A message to the same effect may also be displayed. When the imaging continuation button 432 is turned on, the information processing returns to step 200 and steps 200 through 212 are executed.
When it is judged in step 212 that steps 200 through 210 have been completed for both eyes, in step 216 the display unit 58 activates the AI analysis execution button 434, that is, changes it to a state in which the user can operate it. In step 218, the judgment unit 60 judges whether the AI analysis execution button 434 has been turned on.
When it is judged in step 212 that steps 200 through 210 have been completed for both eyes, the fundus image of the right eye is displayed in the right-eye fundus image display field 410, the fundus image of the left eye is displayed in the left-eye fundus image display field 412, and "OK" is displayed in both the right-eye image quality judgment result display field 420 and the left-eye image quality judgment result display field 422. The operator can therefore understand that acceptable fundus images have been obtained for both the right and left eyes and that the process of generating information on the condition of the subject's eyes using AI can be executed. The operator then turns on the AI analysis execution button 434.
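The gating of the confirmation screen in steps 210 through 218 reduces to a small state computation over the per-eye results. The function, field labels, and the "NG" indication below are illustrative assumptions (the embodiment only states that "OK" is shown for acceptable images):

```python
def confirmation_screen_state(right_result, left_result):
    """Compute the display state of the confirmation screen 400.

    `right_result` / `left_result` are None when the eye has not been
    photographed yet, otherwise True/False for the step-356 quality
    judgment. Returns what each quality field shows, whether the AI
    analysis button 434 is active, and whether the imaging continuation
    button 432 should be highlighted.
    """
    def field(result):
        if result is None:
            return ""              # nothing shown: eye not photographed
        return "OK" if result else "NG"   # "NG" label is an assumption

    both_acceptable = bool(right_result) and bool(left_result)
    one_done = (right_result is True) != (left_result is True)
    return {
        "right_quality_field": field(right_result),   # field 420
        "left_quality_field": field(left_result),     # field 422
        "ai_button_active": both_acceptable,          # button 434, step 216
        "continue_highlighted": one_done,             # button 432, step 214
    }
```

Deriving the whole screen state from the two results, rather than toggling widgets imperatively, makes it easy to verify that the AI analysis button can never be active without two acceptable images.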
In step 216, the display unit 58 may highlight the AI analysis execution button 434, for example by blinking it or changing its color. Alternatively, the imaging result confirmation screen 400 may be provided with an AI analysis execution button display field for displaying the AI analysis execution button 434; when it is judged in step 212 that steps 200 through 210 have been completed for both eyes, the display unit 58 may then display the AI analysis execution button 434 in the AI analysis execution button display field.
The AI analysis execution button display field is an example of the "fifth field" of the technology of the present disclosure.
When it is judged in step 218 that the AI analysis execution button 434 has been turned on, in step 220 the transmission unit 62 controls the communication unit 38 to transmit the fundus images of both eyes to the AI analysis server 120. Only fundus images of acceptable image quality are transmitted. In addition, instead of personally identifiable information such as the patient ID and patient name, an image management ID uniquely assigned to each fundus image is transmitted as supplementary information together with the fundus image data.
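The de-identified transmission of step 220 can be sketched as follows. The ID scheme (UUIDs) and payload layout are assumptions; the point is that the map from image management ID back to patient data stays on the management-server side and never accompanies the images:

```python
import uuid

class ImageRegistry:
    """Maps per-image management IDs back to patient data. In the
    system of the embodiment this mapping would be held by the
    management server 140, never sent to the AI analysis server 120."""

    def __init__(self):
        self._by_id = {}

    def register(self, patient_id, patient_name, image):
        image_id = uuid.uuid4().hex            # ID unique to this image
        self._by_id[image_id] = (patient_id, patient_name)
        return image_id

    def resolve(self, image_id):
        return self._by_id[image_id]

def build_analysis_payload(registry, patient_id, patient_name,
                           right_img, left_img):
    """Payload for the AI server: image data plus management IDs only.
    No patient ID or name crosses this boundary."""
    return [
        {"image_management_id": registry.register(patient_id,
                                                  patient_name, img),
         "eye": eye,
         "data": img}
        for eye, img in (("right", right_img), ("left", left_img))
    ]
```

When the analysis result comes back keyed by the same image management ID, the management server resolves it to the patient record, matching the flow described for the viewer 150.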
The AI analysis server 120 executes, on the transmitted fundus images of both eyes, a process of generating information on the condition of the subject's eyes, such as estimating the presence or absence of diabetic retinopathy or the like and its degree of progression (grade). The generated information is transmitted, together with the image management ID, to the management server 140. The management server 140 identifies the patient ID and patient name from the image management ID, and the generated information is stored. When the patient ID and patient name are specified from the viewer 150 and an instruction to transmit the generated information, such as that on diabetic retinopathy, is sent to the management server 140, the management server 140 transmits the generated information to the viewer 150. The viewer 150 displays the generated information on its display.
For the AI analysis server 120 to process a fundus image to generate information on the condition of the subject's eye, such as the presence or absence and degree of progression of diabetic retinopathy or the like, as described above, the image quality of the fundus image to be processed must be at or above the certain level at which that process can be executed using AI. Conventionally, therefore, it was necessary to judge in advance whether a fundus image obtained by photographing the subject's eye had image quality at or above this level. Judging whether a fundus image meets this level is difficult for anyone who is not familiar with AI diagnosis in ophthalmology.
In the present embodiment, however, the ophthalmic apparatus 110 photographs the fundus of each of the right and left eyes until a fundus image of at least the above-mentioned image quality is obtained for each eye.
Therefore, in the present embodiment, even a person who is not familiar with AI diagnosis in ophthalmology can make the ophthalmic apparatus 110 output, to the AI analysis server 120, fundus images of image quality at or above the level at which the process of generating information on the condition of the subject's eye using AI can be executed. Furthermore, since the present embodiment uses a graphical user interface that makes it easy to tell whether fundus images of at least the required image quality have been captured for the right and left eyes, fundus images of both eyes having that image quality can be output to the AI analysis server 120.
The ophthalmic system 100 of the present embodiment includes the ophthalmic apparatus 110, the AI analysis server 120, the management server 140, and the viewer 150. However, the technology of the present disclosure is not limited to this. For example, the AI analysis server 120 may be omitted, and at least one of the ophthalmic apparatus 110, the management server 140, and the viewer 150 may instead provide the above AI function (AI algorithm) of the AI analysis server 120.
The present embodiment includes the AI analysis server 120, which executes the process of generating information on the condition of the subject's eye using an AI algorithm trained by machine learning. However, the technology of the present disclosure is not limited to this. For example, a device may be provided that stores images of various eye disease states in advance and generates information on the condition of the subject's eye by matching the fundus image obtained by photographing the fundus against the pre-stored images of the various disease states. An analysis device may also be provided that executes a program that generates information on the condition of the subject's eye by extracting disease-specific feature quantities from images of various eye disease states in advance and comparing them with the feature quantities of the transmitted fundus image to be analyzed.
In the present disclosure, there may be only one, or two or more, of each component (device or the like), as long as no contradiction arises.
The example described above illustrates the case in which image processing is realized by a software configuration using a computer, but the technology of the present disclosure is not limited to this. For example, instead of a software configuration using a computer, the image processing may be executed solely by a hardware configuration such as an FPGA (Field-Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit). Part of the image processing may be executed by the software configuration and the remainder by the hardware configuration.
Since the technology of the present disclosure thus covers both the case in which image processing is realized by a software configuration using a computer and the case in which it is not, it includes the following technologies.
(First technology)
An information display device comprising:
a judgment unit that judges that a first fundus image of a subject's right eye and a second fundus image of the subject's left eye have been acquired; and
a generation unit that generates a display screen including a first field in which the first fundus image is displayed when the first fundus image has been acquired and which indicates non-acquisition when the first fundus image has not been acquired, and a second field in which the second fundus image is displayed when the second fundus image has been acquired and which indicates non-acquisition when the second fundus image has not been acquired.
(Second technology)
An information display method comprising:
a step in which a judgment unit judges that a first fundus image of a subject's right eye and a second fundus image of the subject's left eye have been acquired; and
a step in which a generation unit generates a display screen including a first field in which the first fundus image is displayed when the first fundus image has been acquired and which indicates non-acquisition when the first fundus image has not been acquired, and a second field in which the second fundus image is displayed when the second fundus image has been acquired and which indicates non-acquisition when the second fundus image has not been acquired.
(Third technology)
An information display device comprising:
a judgment unit that judges whether a first fundus image of a subject's right eye has been acquired and whether a second fundus image of the subject's left eye has been acquired;
a detection unit that detects the position of the subject's eye;
a calculation unit that calculates the amount of deviation between the detected position and a reference position at which the subject's eye should be located in order to photograph the fundus of the subject's eye; and
a generation unit that generates a display screen including a first section for indicating whether the first fundus image has been acquired, a second section for indicating whether the second fundus image has been acquired, and a third section for displaying, based on the deviation amount, an image showing how the detected position of the subject's eye deviates from the reference position.
(Fourth technology)
An information display method comprising:
a step in which a judgment unit judges whether a first fundus image of a subject's right eye has been acquired and whether a second fundus image of the subject's left eye has been acquired;
a step in which a detection unit detects the position of the subject's eye;
a step in which a calculation unit calculates the amount of deviation between the detected position and a reference position at which the subject's eye should be located in order to photograph the fundus of the subject's eye; and
a step in which a generation unit generates a display screen including a first section for indicating whether the first fundus image has been acquired, a second section for indicating whether the second fundus image has been acquired, and a third section for displaying, based on the deviation amount, an image showing how the detected position of the subject's eye deviates from the reference position.
The alignment unit 52 is an example of the "judgment unit", the "detection unit", and the "calculation unit" of the technology of the present disclosure. The alignment unit 52 and the display unit 58 are examples of the "generation unit" of the technology of the present disclosure.
From the disclosure above, the following technologies are proposed.
(Fifth technology)
A computer program product for displaying information,
the computer program product comprising a computer-readable storage medium that is not itself a transitory signal,
the computer-readable storage medium storing a program that causes a computer to:
determine whether a first fundus image of the subject's right eye and a second fundus image of the subject's left eye have been acquired; and
generate a display screen including a first field in which the first fundus image is displayed if it has been acquired and which indicates non-acquisition if it has not, and a second field in which the second fundus image is displayed if it has been acquired and which indicates non-acquisition if it has not.
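The two-field screen generation described above can be sketched as follows. This is a minimal illustration only, not the disclosed implementation; the `Field` structure, its attribute names, and the field labels are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Field:
    """One field of the display screen (hypothetical structure)."""
    label: str
    image: Optional[bytes]  # fundus image data, present only if acquired
    status: str             # "acquired" or "not acquired"

def build_screen(right_image: Optional[bytes], left_image: Optional[bytes]):
    """Build the two-field display screen: each field shows the fundus
    image when it exists, otherwise an explicit non-acquisition notice."""
    def make_field(label: str, image: Optional[bytes]) -> Field:
        if image is not None:
            return Field(label, image, "acquired")
        return Field(label, None, "not acquired")
    return [make_field("Right eye (first fundus image)", right_image),
            make_field("Left eye (second fundus image)", left_image)]
```

With only the right-eye image captured, `build_screen(b"img", None)` yields a first field carrying the image and a second field flagged as not acquired, matching the two cases the claim distinguishes.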
(Sixth technology)
A computer program product for displaying information,
the computer program product comprising a computer-readable storage medium that is not itself a transitory signal,
the computer-readable storage medium storing a program that causes a computer to:
determine whether a first fundus image of the subject's right eye and a second fundus image of the subject's left eye have been acquired;
detect the position of the subject's eye;
calculate the amount of deviation between the detected position and a reference position at which the subject's eye should be located in order to photograph the fundus of the eye; and
generate a display screen including a first section for indicating whether the first fundus image has been acquired, a second section for indicating whether the second fundus image has been acquired, and a third section for displaying, based on the deviation amount, an image showing how the detected position of the subject's eye deviates from the reference position.
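The deviation-amount calculation and the third section's alignment feedback can be sketched roughly as below. This is a hedged illustration, not the disclosed implementation: the tolerance value, the sign-to-direction mapping, and the textual rendering (the actual third section displays an image) are all assumptions.

```python
import math

def deviation(detected, reference):
    """Per-axis deviation (dx, dy, dz) of the detected eye position
    from the reference position, plus its Euclidean magnitude."""
    dx, dy, dz = (d - r for d, r in zip(detected, reference))
    return (dx, dy, dz), math.sqrt(dx * dx + dy * dy + dz * dz)

def third_section_text(detected, reference, tol=0.5):
    """Textual stand-in for the third section's alignment image:
    report which way the eye is offset from the reference position.
    Direction words assume +X = subject's left, +Y = up, +Z = toward
    the camera (illustrative convention only)."""
    (dx, dy, dz), mag = deviation(detected, reference)
    if mag <= tol:
        return "aligned"
    parts = []
    if abs(dx) > tol:
        parts.append("left" if dx > 0 else "right")
    if abs(dy) > tol:
        parts.append("down" if dy > 0 else "up")
    if abs(dz) > tol:
        parts.append("back" if dz > 0 else "forward")
    return "move " + "/".join(parts)
```

For example, a detected position of (1.0, 2.0, 2.0) against a reference of (0, 0, 0) gives a deviation magnitude of 3.0, well outside the illustrative tolerance, so the stand-in would report a movement prompt instead of "aligned".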
The control unit 36 is an example of the "computer program product" of the technology of this disclosure.
The information display process described above is merely an example. It goes without saying that unnecessary steps may be deleted, new steps may be added, or the processing order may be changed without departing from the gist of the disclosure.
All documents, patent applications, and technical standards described in this specification are incorporated herein by reference to the same extent as if each individual document, patent application, and technical standard were specifically and individually indicated to be incorporated by reference.

Claims (12)

  1.  An information display method performed by a processor, comprising:
     a step of determining that a first fundus image of the subject's right eye and a second fundus image of the subject's left eye have been acquired; and
     a step of generating a display screen including a first field in which the first fundus image is displayed if it has been acquired and which indicates non-acquisition if it has not, and a second field in which the second fundus image is displayed if it has been acquired and which indicates non-acquisition if it has not.
  2.  The information display method according to claim 1, wherein the display screen further includes:
     a third field for displaying a first evaluation result of the image quality of the first fundus image; and
     a fourth field for displaying a second evaluation result of the image quality of the second fundus image.
  3.  The information display method according to claim 2, wherein the display screen includes a button for executing a process that generates information about a symptom of the subject's eye, and
     the method further comprises a step of activating the button when at least one of the first evaluation result and the second evaluation result is an acceptable result.
  4.  The information display method according to claim 2, wherein the display screen includes a fifth field for displaying a button for executing a process that generates information about a symptom of the subject's eye, and
     the method further comprises a step of displaying the button in the fifth field when at least one of the first evaluation result and the second evaluation result is an acceptable result.
  5.  The information display method according to any one of claims 1 to 4, further comprising:
     a step of switchably displaying, in the first field, a plurality of fundus images of the right eye when the plurality of fundus images have been acquired; and
     a step of switchably displaying, in the second field, a plurality of fundus images of the left eye when the plurality of fundus images have been acquired.
  6.  An information display device comprising a memory and a processor connected to the memory, wherein the processor:
     determines whether a first fundus image of the subject's right eye and a second fundus image of the subject's left eye have been acquired; and
     generates a display screen including a first field in which the first fundus image is displayed if it has been acquired and which indicates non-acquisition if it has not, and a second field in which the second fundus image is displayed if it has been acquired and which indicates non-acquisition if it has not.
  7.  A program that causes a computer to:
     determine whether a first fundus image of the subject's right eye and a second fundus image of the subject's left eye have been acquired; and
     generate a display screen including a first field in which the first fundus image is displayed if it has been acquired and which indicates non-acquisition if it has not, and a second field in which the second fundus image is displayed if it has been acquired and which indicates non-acquisition if it has not.
  8.  An information display method performed by a processor, comprising:
     a step of determining whether a first fundus image of the subject's right eye and a second fundus image of the subject's left eye have been acquired;
     a step of detecting the position of the subject's eye;
     a step of calculating the amount of deviation between the detected position and a reference position at which the subject's eye should be located in order to photograph the fundus of the eye; and
     a step of generating a display screen including a first section for indicating whether the first fundus image has been acquired, a second section for indicating whether the second fundus image has been acquired, and a third section for displaying, based on the deviation amount, an image showing how the detected position of the subject's eye deviates from the reference position.
  9.  The information display method according to claim 8, wherein, with the left-right direction relative to the reference position defined as the X direction and the up-down direction as the Y direction,
     the third section displays a first view showing the detected position of the subject's eye deviating from the reference position in the X and Y directions.
  10.  The information display method according to claim 8, wherein, with the left-right direction relative to the reference position defined as the X direction, the up-down direction as the Y direction, and the direction perpendicular to the X and Y directions as the Z direction,
     the third section displays a first view showing the detected position of the subject's eye deviating from the reference position in the X and Y directions and a second view showing it deviating in the Z direction.
  11.  An information display device comprising a memory and a processor connected to the memory, wherein the processor:
     determines whether a first fundus image of the subject's right eye and a second fundus image of the subject's left eye have been acquired;
     detects the position of the subject's eye;
     calculates the amount of deviation between the detected position and a reference position at which the subject's eye should be located in order to photograph the fundus of the eye; and
     generates a display screen including a first section for indicating whether the first fundus image has been acquired, a second section for indicating whether the second fundus image has been acquired, and a third section for displaying, based on the deviation amount, an image showing how the detected position of the subject's eye deviates from the reference position.
  12.  A program that causes a computer to:
     determine whether a first fundus image of the subject's right eye and a second fundus image of the subject's left eye have been acquired;
     detect the position of the subject's eye;
     calculate the amount of deviation between the detected position and a reference position at which the subject's eye should be located in order to photograph the fundus of the eye; and
     generate a display screen including a first section for indicating whether the first fundus image has been acquired, a second section for indicating whether the second fundus image has been acquired, and a third section for displaying, based on the deviation amount, an image showing how the detected position of the subject's eye deviates from the reference position.
PCT/JP2020/023372 2020-06-15 2020-06-15 Information display method, information display device, and program WO2021255779A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/023372 WO2021255779A1 (en) 2020-06-15 2020-06-15 Information display method, information display device, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/023372 WO2021255779A1 (en) 2020-06-15 2020-06-15 Information display method, information display device, and program

Publications (1)

Publication Number Publication Date
WO2021255779A1 true WO2021255779A1 (en) 2021-12-23

Family

ID=79268613

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/023372 WO2021255779A1 (en) 2020-06-15 2020-06-15 Information display method, information display device, and program

Country Status (1)

Country Link
WO (1) WO2021255779A1 (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002238860A (en) * 2001-02-22 2002-08-27 Canon Inc Ophthalmological device and eye image display method
JP2007125260A (en) * 2005-11-04 2007-05-24 Nidek Co Ltd Ophthalmologic apparatus
US20120230564A1 (en) * 2009-11-16 2012-09-13 Jiang Liu Obtaining data for automatic glaucoma screening, and screening and diagnostic techniques and systems using the data
JP2013059565A (en) * 2011-09-14 2013-04-04 Nidek Co Ltd Corneal endothelial cell imaging apparatus
JP2013063215A (en) * 2011-09-20 2013-04-11 Canon Inc Image processing apparatus, ophthalmologic imaging apparatus, image processing method, and program
JP2014045868A (en) * 2012-08-30 2014-03-17 Canon Inc Interactive controller
JP2014068766A (en) * 2012-09-28 2014-04-21 Nidek Co Ltd Ophthalmologic apparatus
JP2016509913A (en) * 2013-03-14 2016-04-04 カール ツアイス メディテック アクチエンゲゼルシャフト Improved user interface for acquisition, display, and analysis of ophthalmic diagnostic data
JP2016049284A (en) * 2014-08-29 2016-04-11 株式会社トプコン Ophthalmologic apparatus
WO2018198839A1 (en) * 2017-04-28 2018-11-01 株式会社ニコン Ophthalmological imaging optical system, ophthalmological imaging device, ophthalmological image acquisition method, and ophthalmological image system


Similar Documents

Publication Publication Date Title
USRE49024E1 (en) Fundus observation apparatus
JP5643004B2 (en) Ophthalmic equipment
US9706920B2 (en) Ophthalmologic apparatus
JP5989523B2 (en) Ophthalmic equipment
JP6354979B2 (en) Fundus photographing device
JP2018199010A (en) Ophthalmologic apparatus
JP5850292B2 (en) Ophthalmic equipment
JP6641730B2 (en) Ophthalmic apparatus and ophthalmic apparatus program
JP6349878B2 (en) Ophthalmic photographing apparatus, ophthalmic photographing method, and ophthalmic photographing program
JP5101370B2 (en) Fundus photographing device
JP6407631B2 (en) Ophthalmic equipment
JP7430082B2 (en) Ophthalmological equipment and its operating method
JP6892540B2 (en) Ophthalmic equipment
JP2021166817A (en) Ophthalmologic apparatus
JP7266375B2 (en) Ophthalmic device and method of operation thereof
WO2020240867A1 (en) Image processing method, image processing device, and image processing program
WO2021255779A1 (en) Information display method, information display device, and program
CN111989030A (en) Image processing method, program, and image processing apparatus
JP7283932B2 (en) ophthalmic equipment
JP7271733B2 (en) ophthalmic equipment
WO2023203992A1 (en) Information processing device, method for controlling information processing device, and program
JP2018051340A (en) Ophthalmologic apparatus
JP6310589B2 (en) Ophthalmic equipment
WO2019203314A1 (en) Image processing method, program, and image processing device
JPWO2016129499A1 (en) Eye refractive power measuring device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20940743

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20940743

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP