WO2023037658A1 - Ophthalmological device, method for controlling ophthalmological device, method for processing eye image, program, and recording medium


Info

Publication number
WO2023037658A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
scheimpflug
neural network
processing unit
segmentation
Application number
PCT/JP2022/020096
Other languages
French (fr)
Japanese (ja)
Inventor
神之介 東
Original Assignee
Topcon Corporation (株式会社トプコン)
Application filed by Topcon Corporation
Publication of WO2023037658A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/13: Ophthalmic microscopes
    • A61B 3/135: Slit-lamp microscopes

Definitions

  • the present invention relates to an ophthalmic apparatus, a method of controlling an ophthalmic apparatus, a method of processing an eye image, a program, and a recording medium.
  • Ophthalmic equipment includes slit lamp microscopes, fundus cameras, scanning laser ophthalmoscopes (SLO), optical coherence tomography (OCT), and the like.
  • Various inspection and measurement devices such as refractometers, keratometers, tonometers, specular microscopes, wavefront analyzers, and microperimeters are also equipped with functions for photographing the anterior segment of the eye and the fundus.
  • A slit lamp microscope is an ophthalmologic apparatus that illuminates the eye to be examined with slit light and observes or photographs the illuminated cross section from the side with a microscope (see, for example, Patent Documents 1 and 2).
  • There is also known a slit lamp microscope capable of scanning a three-dimensional region of the eye to be examined at high speed by using an optical system configured to satisfy the Scheimpflug condition (see, for example, Patent Document 3).
  • A rolling shutter camera may be employed as the imaging method for scanning an object with slit light.
  • There is also known a flare cell meter for evaluating the inflammatory state of an eye to be examined (see, for example, Patent Documents 4 and 5).
  • A flare cell meter is an ophthalmic device for measuring the number of inflammatory cells floating in the anterior chamber and the protein concentration (flare concentration) in the anterior chamber, for example by scanning the anterior chamber with laser light or by confining LED light with a slit aperture.
  • One object of the present invention is to automate the evaluation of inflammatory conditions based on eye images.
  • An ophthalmologic apparatus includes an image acquisition unit and a data processing unit.
  • the image acquisition unit acquires a Scheimpflug image of the subject's eye.
  • the data processing unit executes processing for generating inflammation state information indicating the inflammation state of the subject's eye from the Scheimpflug image acquired by the image acquisition unit.
  • The drawings include: schematic diagrams representing configurations of ophthalmic apparatuses according to exemplary aspects; flow diagrams representing processing performed by an ophthalmic device according to exemplary aspects; and schematic diagrams for explaining the operation of an ophthalmic device according to exemplary aspects.
  • Circuitry or processing circuitry includes a general-purpose processor, a special-purpose processor, an integrated circuit, a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), and the like, configured and/or programmed to perform at least some of the functions disclosed herein.
  • A processor may be considered to be processing circuitry or circuitry, including transistors and/or other circuitry, or any combination thereof.
  • In this disclosure, circuitry, a unit, means, or like terms refer to hardware that performs at least some of the functions described herein, hardware programmed to perform at least some of those functions, known hardware programmed and/or configured to perform at least some of the functions described, or a combination of hardware and software in which the software is used to configure the hardware and/or the processor.
  • Inflammatory state information is information indicating the inflammatory state of the subject's eye.
  • A Scheimpflug image is a digital image acquired using an optical system that satisfies the Scheimpflug condition.
  • The number of Scheimpflug images processed by the embodiments may be arbitrary; some embodiments process a single Scheimpflug image, and some embodiments process multiple Scheimpflug images.
  • the plurality of images are collected, for example, by scanning using slit light (referred to as slit scanning).
  • Slit scanning is an ophthalmic imaging technique developed by the applicant of the present application, in which a series of images is collected by scanning a three-dimensional region of the eye to be examined with slit light; it is described, for example, in Patent Document 3.
  • The Scheimpflug image processed by some embodiments may itself be an image created by processing a Scheimpflug image.
  • Images created by processing Scheimpflug images include images created by applying arbitrary digital image processing (such as correction, editing, or enhancement) to a Scheimpflug image, and images constructed from multiple Scheimpflug images, such as three-dimensional images (volume images).
  • the inflammatory state information generated by the embodiment may be any information related to the inflammatory state of the eye to be examined.
  • The inflammatory state information of the embodiment may include any one or more of information on inflammatory cells present in the anterior chamber (anterior chamber cells), information on proteins present in the anterior chamber (anterior chamber flare), information on lens opacification, information on disease onset and progression, and information on disease activity.
  • Some embodiments may generate aggregate information (e.g., comprehensive evaluation information) based on two or more of these pieces of information.
  • Information on inflammatory cells includes information on arbitrary parameters such as the density (concentration), number, position, and distribution of inflammatory cells, and evaluation information based on information on predetermined parameters. Evaluation information on inflammatory cells is called cell evaluation information.
  • Information on anterior chamber flare includes information on arbitrary parameters such as the concentration, number, position, and distribution of flares, and evaluation information based on information on predetermined parameters.
  • Information on lens opacification includes information on arbitrary parameters such as concentration, number, position, and distribution of opacity, and evaluation information based on information on predetermined parameters.
  • Information on the onset and progress of a disease includes information on arbitrary parameters such as presence/absence of onset, state of onset, duration of disease, and state of progress, and evaluation information based on information on predetermined parameters.
  • Information on disease activity includes information on arbitrary parameters such as the state of disease activity, evaluation information based on information on predetermined parameters, and the like.
  • An exemplary aspect that generates evaluation information regarding an inflammatory condition may use not only information generated by the ophthalmic device itself but also information input to the ophthalmic device from outside (e.g., information acquired by another ophthalmologic apparatus, information input by a doctor) to generate the evaluation information.
  • The information referred to for generating the inflammatory state information exemplified above may be arbitrary; for example, the classification criteria for uveitic diseases proposed by the Standardization of Uveitis Nomenclature (SUN) Working Group may be referred to ("Standardization of uveitis nomenclature for reporting clinical data. Results of the First International Workshop", American Journal of Ophthalmology, Volume 140, Issue 3, September 2005, Pages 509-516).
  • the inflammation state information of the embodiment is not limited to the above examples. Also, the reference information for generating the inflammation state information of the embodiment is not limited to the above example.
  • The site of the subject's eye to which the slit scan is applied includes at least part or all of the site from which the data used to generate the inflammatory state information is acquired.
  • a slit scan is applied to a region that includes at least a portion of the anterior chamber.
  • a slit scan is applied to a region including at least a portion of the lens. The same is true when acquiring two or more Scheimpflug images by a method other than slit scanning, or when acquiring only one Scheimpflug image.
  • The region (site) of the eye to be examined to which slit scanning is applied may be at least a part of the anterior segment (e.g., tissue such as the cornea, iris, anterior chamber, angle, ciliary body, zonule of Zinn, lens, nerves, and blood vessels; lesions; treatment scars; artificial objects such as intraocular lenses and minimally invasive glaucoma surgery (MIGS) devices), and/or at least a part of the posterior segment (e.g., tissue such as the vitreous, retina, choroid, sclera, optic nerve head, lamina cribrosa, macula, nerves, and blood vessels; lesions; treatment scars; artificial objects such as artificial retinas).
  • A slit scan may also be applied to at least a portion of the periocular tissue, such as the eyelids and meibomian glands.
  • Slit scanning may also be applied to a three-dimensional region including any two, or all three, of at least a portion of the anterior segment, at least a portion of the posterior segment, and at least a portion of the periocular tissue.
  • Embodiments capable of generating inflammatory state information including cell evaluation information as described above may be configured to perform at least one of the following three processes: (1) segmentation (referred to as first segmentation or anterior chamber segmentation) for identifying the image region corresponding to the anterior chamber of the subject's eye (called the anterior chamber region); (2) segmentation (referred to as second segmentation) for identifying the image region corresponding to inflammatory cells (called the cell region); and (3) processing for generating cell evaluation information (referred to as cell evaluation information generation processing).
  • A first exemplary aspect is configured to execute first segmentation for identifying an anterior chamber region from a Scheimpflug image, second segmentation for identifying a cell region from the anterior chamber region identified by the first segmentation, and a cell evaluation information generation process of generating cell evaluation information from the cell region identified by the second segmentation.
  • the first segmentation of this aspect may be performed using a neural network trained by machine learning, but is not so limited.
  • the second segmentation of this aspect may be performed using a neural network trained by machine learning, but is not so limited.
  • the cell evaluation information generation processing of this embodiment may be performed using a neural network trained by machine learning, but is not limited to this. The details of this aspect will be described later.
  • A second exemplary aspect is configured to execute second segmentation for identifying a cell region directly from a Scheimpflug image, without performing first segmentation for identifying an anterior chamber region, and a cell evaluation information generation process of generating cell evaluation information from the identified cell region.
  • the second segmentation of this aspect may be performed using a neural network trained by machine learning, but is not so limited.
  • the cell evaluation information generation processing of this embodiment may be performed using a neural network trained by machine learning, but is not limited to this. The details of this aspect will be described later.
  • A third exemplary aspect is configured to execute a cell evaluation information generation process of generating cell evaluation information directly from a Scheimpflug image, without performing first segmentation to identify the anterior chamber region or second segmentation to identify the cell region.
  • the cell evaluation information generation processing of this embodiment may be performed using a neural network trained by machine learning, but is not limited to this. The details of this aspect will be described later.
  • A fourth exemplary aspect is configured to execute first segmentation for identifying an anterior chamber region from a Scheimpflug image, without performing second segmentation for identifying a cell region, and a cell evaluation information generation process of generating cell evaluation information from the identified anterior chamber region.
  • the first segmentation of this aspect may be performed using a neural network trained by machine learning, but is not so limited.
  • the cell evaluation information generation processing of this embodiment may be performed using a neural network trained by machine learning, but is not limited to this. The details of this aspect will be described later.
  • A fifth exemplary aspect is configured to operate without using a neural network trained by machine learning.
  • This aspect includes first segmentation that analyzes the Scheimpflug image to identify the anterior chamber region, second segmentation that analyzes the anterior chamber region identified by the first segmentation to identify the cell region, and a cell evaluation information generation process that generates cell evaluation information based on the cell region identified by the second segmentation. The details of this aspect will be described later.
  • A sixth exemplary aspect executes at least part of the first segmentation, at least part of the second segmentation, and/or at least part of the cell evaluation information generation process in a machine-learning-based configuration, and executes the remaining processing in a non-machine-learning-based configuration.
  • This aspect can be realized, for example, by partially combining any of the above-described first to fourth exemplary aspects with the fifth exemplary aspect, so its detailed description is omitted.
  • <Ophthalmic device> The following describes exemplary aspects of an ophthalmic device according to embodiments. A specific example (working example) of the ophthalmologic apparatus according to this aspect will be described later.
  • FIG. 1 shows the configuration of an ophthalmologic apparatus according to this aspect.
  • The ophthalmologic apparatus 1000 includes an image acquisition unit 1010 and a data processing unit 1020.
  • the image acquisition unit 1010 is configured to acquire a Scheimpflug image of the subject's eye.
  • the image acquisition unit 1010 of some exemplary aspects is configured to capture a Scheimpflug image of the subject's eye.
  • a configuration example of such an image acquisition unit 1010 is shown in FIG. 2A.
  • the image acquisition unit 1010A shown in FIG. 2A includes an illumination system 1011 and an imaging system 1012.
  • the illumination system 1011 is configured to project slit light onto the subject's eye.
  • The imaging system 1012 is configured to photograph the eye to be examined, and includes an image sensor 1013 and an optical system (not shown) that guides light from the eye to the image sensor 1013.
  • The illumination system 1011 and the imaging system 1012 are configured to satisfy the Scheimpflug condition and function as a Scheimpflug camera. More specifically, the illumination system 1011 and the imaging system 1012 are arranged so that the plane passing through the optical axis of the illumination system 1011 (the plane including the object plane), the principal plane of the imaging system 1012, and the imaging plane of the image sensor 1013 intersect on the same straight line. Accordingly, imaging can be performed with the imaging system 1012 in focus at all positions in the object plane (all positions in the direction along the optical axis of the illumination system 1011).
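  • As a non-limiting illustration of the Scheimpflug condition just described, the following sketch (Python/NumPy; the plane representation and the tolerance are assumptions for illustration, not part of this disclosure) numerically checks that three planes, each given as a normal vector and a point, intersect on the same straight line.

```python
import numpy as np

def plane_intersection_line(n1, p1, n2, p2):
    """Line where two non-parallel planes meet: returns (point, unit direction)."""
    d = np.cross(n1, n2)                      # direction of the intersection line
    A = np.vstack([n1, n2, d])                # pick a point satisfying both planes
    b = np.array([n1 @ p1, n2 @ p2, 0.0])
    x0 = np.linalg.solve(A, b)
    return x0, d / np.linalg.norm(d)

def satisfies_scheimpflug(object_plane, lens_plane, image_plane, tol=1e-9):
    """True if the object plane, principal plane, and imaging plane share one line."""
    x0, d = plane_intersection_line(*object_plane, *lens_plane)
    n3, p3 = image_plane
    return abs(n3 @ (x0 - p3)) < tol and abs(n3 @ d) < tol

# Example: three planes all containing the X axis satisfy the condition.
obj = (np.array([0.0, 0.0, 1.0]), np.zeros(3))
lens = (np.array([0.0, np.sin(0.3), np.cos(0.3)]), np.zeros(3))
img = (np.array([0.0, np.sin(0.7), np.cos(0.7)]), np.zeros(3))
print(satisfies_scheimpflug(obj, lens, img))  # True
```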
  • the image acquisition unit 1010A shown in FIG. 2A is configured to scan a three-dimensional region of the subject's eye with a slit of light to collect a series of Scheimpflug images.
  • the image acquisition unit 1010A of this example is configured to collect a series of Scheimpflug images by repeatedly photographing while moving the projection position of the slit light with respect to the three-dimensional region of the subject's eye.
  • the image acquisition unit 1010A may be configured to scan the three-dimensional region of the subject's eye by translating the slit light in a direction orthogonal to the longitudinal direction of the slit light. Such a scanning mode is different from that of a conventional anterior segment photographing device that scans the anterior segment by rotating slit light.
  • The longitudinal direction of the slit light is the longitudinal direction of the beam cross section of the slit light at the position of projection onto the eye to be examined, in other words, the longitudinal direction of the image of the slit light formed on the eye; it may substantially match the direction along the body axis of the subject (body axis direction). Further, the dimension of the slit light in the longitudinal direction may be equal to or greater than the corneal diameter in the body axis direction, and the distance of parallel movement of the slit light may be equal to or greater than the corneal diameter in the direction orthogonal to the body axis direction.
  • The series of Scheimpflug images acquired by the image acquisition unit 1010A of this example is a group of frames acquired continuously in time; however, since the frames are collected sequentially from a plurality of different positions in the three-dimensional region of the subject's eye, the series is, unlike a general moving image, a group of images that are spatially distributed.
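  • The following is a minimal sketch of such a scan acquisition loop; the stage and camera objects and their move_to/capture methods are hypothetical stand-ins for the moving mechanism and the Scheimpflug camera, not an API defined by this disclosure.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Frame:
    x_mm: float        # slit projection position (X) at capture time
    image: np.ndarray  # the captured Scheimpflug image

def slit_scan(stage, camera, x_start_mm, x_end_mm, n_frames):
    """Translate the illumination/imaging units in X, capturing one frame per stop."""
    frames = []
    step = (x_end_mm - x_start_mm) / (n_frames - 1)
    for i in range(n_frames):
        x = x_start_mm + i * step
        stage.move_to(x)                            # hypothetical stage API
        frames.append(Frame(x, camera.capture()))   # hypothetical camera API
    return frames                                   # spatially distributed series
```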
  • In the slit scan, the illumination system 1011 projects slit light onto the three-dimensional region of the subject's eye, and the imaging system 1012 photographs the region of the subject's eye onto which the slit light from the illumination system 1011 is projected.
  • The image acquisition unit 1010A of this example further includes a mechanism for moving the illumination system 1011 and the imaging system 1012.
  • The data processing unit 1020 of the ophthalmologic apparatus 1000 to which the image acquisition unit 1010A of this example is applied may be configured to generate inflammation state information from Scheimpflug images included in the series of Scheimpflug images collected by the image acquisition unit 1010A.
  • the data processor 1020 of this example performs the generation of inflammation status information by processing one or more Scheimpflug images included in the sequence of Scheimpflug images acquired by the image acquisition unit 1010A.
  • any number of Scheimpflug images may be used to generate inflammation state information.
  • The data processing unit 1020 of the ophthalmologic apparatus 1000 to which the image acquisition unit 1010A of this example is applied may be configured to process the series of Scheimpflug images collected by the image acquisition unit 1010A to generate processed image data, and to generate inflammation state information based on the processed image data.
  • For example, the data processing unit 1020 of this example may construct a three-dimensional image (an example of processed image data) from a plurality of Scheimpflug images included in the series, and generate inflammation state information based on this three-dimensional image.
  • As another example, the data processing unit 1020 of this example may construct a three-dimensional image from a plurality of Scheimpflug images included in the series, render an image (another example of processed image data) from the three-dimensional image, and generate inflammation state information based on this rendered image.
  • the imaging system 1012 of the image acquisition unit 1010A may include two or more imaging systems.
  • the imaging system 1012A of the image acquisition unit 1010B shown in FIG. 2B includes a first imaging system 1014 and a second imaging system 1015 that respectively shoot from different directions.
  • the image acquisition unit 1010B of the present example captures the three-dimensional region of the subject's eye from mutually different directions with the first imaging system 1014 and the second imaging system 1015 in slit scanning for acquiring a series of Scheimpflug images.
  • For example, image acquisition is performed such that the longitudinal direction of the beam cross section of the slit light at the position of incidence on the eye to be examined is the vertical direction (Y direction) and the moving direction of the slit light is the horizontal direction (X direction).
  • In this case, the first imaging system 1014 and the second imaging system 1015 may be arranged so that one photographs the subject's eye obliquely from the left and the other obliquely from the right.
  • a series of Scheimpflug images acquired by the first imaging system 1014 is called a first Scheimpflug image group
  • a series of Scheimpflug images acquired by the second imaging system 1015 is called a second Scheimpflug image group.
  • a series of Scheimpflug images acquired by such an image acquisition unit 1010B includes a first Scheimpflug image group and a second Scheimpflug image group.
  • Note that a single Scheimpflug image may be obtained by the first imaging system 1014 and a single Scheimpflug image may be obtained by the second imaging system 1015. In that case, the single Scheimpflug image acquired by the first imaging system 1014 may still be called a first Scheimpflug image group, and the single Scheimpflug image acquired by the second imaging system 1015 may still be called a second Scheimpflug image group.
  • the term "group” may be used not only when multiple elements are included, but also when only one element is included.
  • the imaging of the subject's eye by the first imaging system 1014 and the imaging of the subject's eye by the second imaging system 1015 are performed in parallel with each other. That is, the image acquisition unit 1010B simultaneously performs imaging by the first imaging system 1014 and imaging by the second imaging system 1015 while moving the projection position of the slit light with respect to the three-dimensional area of the subject's eye.
  • The image acquisition unit 1010B of this example may be configured to perform imaging (collection of Scheimpflug images) by the first imaging system 1014 and imaging (collection of Scheimpflug images) by the second imaging system 1015 in synchronization with each other.
  • With this synchronization, association between the first Scheimpflug image group and the second Scheimpflug image group can be performed easily without using image processing or the like. This association is performed, for example, so as to associate Scheimpflug images having a small acquisition time difference.
  • The ophthalmologic apparatus 1000 can refer to the mutual synchronization relationship between the imaging by the first imaging system 1014 and the imaging by the second imaging system 1015 to reconstruct, from the first Scheimpflug image group and the second Scheimpflug image group, a series of Scheimpflug images corresponding to the slit scan.
  • FIG. 2C shows a configuration example of an ophthalmologic apparatus 1000 to which the image acquisition unit 1010B shown in FIG. 2B is applied.
  • the optical axis of the first imaging system 1014 and the optical axis of the second imaging system 1015 are arranged to be inclined in opposite directions with respect to the optical axis of the illumination system 1011.
  • the data processing unit 1020A of this example includes an image selection unit 1021 and an inflammation state information generation unit 1022.
  • the image selection unit 1021 is configured to select an image from the first Scheimpflug image group acquired by the first imaging system 1014 and the second Scheimpflug image group acquired by the second imaging system 1015.
  • The image selection unit 1021 may be configured to select one of the first Scheimpflug image acquired by the first imaging system 1014 and the second Scheimpflug image acquired by the second imaging system 1015.
  • The image selection unit 1021 is configured to determine, based on the synchronization between the imaging by the first imaging system 1014 and the imaging by the second imaging system 1015, the correspondence relationship between the first Scheimpflug image group and the second Scheimpflug image group, and to select, based on this correspondence relationship, a series of Scheimpflug images corresponding to the slit scan that acquired the two groups. In short, the image selection unit 1021 is configured to reconstruct a series of Scheimpflug images from the first Scheimpflug image group and the second Scheimpflug image group acquired by the first imaging system 1014 and the second imaging system 1015, respectively.
  • The method of the image selection processing executed by the image selection unit 1021 may be arbitrary. For example, the method may be determined and/or selected based on predetermined conditions and parameters, such as the configuration and/or arrangement of the first imaging system 1014 and the second imaging system 1015 and the purpose and/or application of the image selection.
  • As described above, the image acquisition unit 1010B performs imaging by the first imaging system 1014 and imaging by the second imaging system 1015 in mutual synchronization. Also as described above, the optical axis of the first imaging system 1014 and the optical axis of the second imaging system 1015 are arranged to be inclined in opposite directions with respect to the optical axis of the illumination system 1011. For example, the optical axis of the first imaging system 1014 is tilted to the left with respect to the optical axis of the illumination system 1011, and the optical axis of the second imaging system 1015 is tilted to the right with respect to it. The first imaging system 1014 and the second imaging system 1015 arranged in this manner are sometimes called the left imaging system and the right imaging system, respectively.
  • The tilt angle of the optical axis of the first imaging system 1014 with respect to the optical axis of the illumination system 1011 and the tilt angle of the optical axis of the second imaging system 1015 with respect to the optical axis of the illumination system 1011 may be equal to or different from each other. Moreover, these tilt angles may be fixed or variable.
  • the illumination system 1011 of this example is configured and arranged so as to project slit light, the longitudinal direction of the cross section of which is oriented in the Y direction, onto the subject's eye from the front direction.
  • The image acquisition unit 1010B of this example moves the illumination system 1011, the first imaging system 1014, and the second imaging system 1015 integrally in the X direction, thereby applying a slit scan to a three-dimensional region of the anterior segment of the subject's eye.
  • The image selection unit 1021 of this example selects, based on the correspondence relationship between the first Scheimpflug image group acquired by the first imaging system 1014 and the second Scheimpflug image group acquired by the second imaging system 1015, a plurality of Scheimpflug images that do not contain artifacts from the two groups, thereby forming a new series of Scheimpflug images corresponding to the slit scan that acquired the first Scheimpflug image group and the second Scheimpflug image group.
  • This artifact can be any kind of artifact. When an anterior segment scan is performed as in this example, this artifact may be an artifact caused by corneal reflection (referred to as a corneal reflection artifact).
  • the projection position (Scheimpflug image, frame) of the slit light that causes corneal reflection artifacts differs between the left imaging system and the right imaging system.
  • In a mode in which the illumination system 1011, the first imaging system 1014, and the second imaging system 1015 are moved in the X direction while the slit light, with the longitudinal direction of its cross section oriented in the Y direction, is projected onto the eye from the front, the corneal reflection of the slit light is likely to enter the left imaging system when the slit light is projected at a position to the left of the corneal vertex, and is likely to enter the right imaging system when the slit light is projected at a position to the right of the corneal vertex.
  • In view of this, the image selection unit 1021 of some exemplary aspects first identifies, from the first Scheimpflug image group acquired by the first imaging system 1014 serving as the left imaging system, the Scheimpflug image corresponding to the corneal vertex (first corneal vertex image), and likewise identifies, from the second Scheimpflug image group acquired by the second imaging system 1015 serving as the right imaging system, the Scheimpflug image corresponding to the corneal vertex (second corneal vertex image).
  • For example, the process of identifying the first corneal vertex image includes detecting the image corresponding to the corneal surface in each Scheimpflug image included in the first Scheimpflug image group, identifying the pixel corresponding to the corneal vertex from among the pixels of these detected images, and setting the Scheimpflug image including the identified pixel as the first corneal vertex image.
  • the setting of the second corneal vertex image may be performed in the same manner.
  • Next, the image selection unit 1021 selects, from the first Scheimpflug image group, the Scheimpflug images positioned to the right of the first corneal vertex image, and selects, from the second Scheimpflug image group, the Scheimpflug images positioned to the left of the second corneal vertex image, thereby forming a series of Scheimpflug images consisting of the two selected image groups (and the first corneal vertex image and/or the second corneal vertex image). This yields a series of Scheimpflug images that covers the three-dimensional region of the anterior segment to which the slit scan was applied and that is (likely) free of corneal reflection artifacts.
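  • A minimal sketch of this vertex-based selection follows (Python; it assumes the frames of each group are ordered left to right along the scan, and apex_height is a hypothetical function measuring the height of the detected corneal surface in one frame).

```python
import numpy as np

def vertex_index(frames, apex_height):
    """Index of the frame whose corneal surface rises highest (the vertex frame)."""
    return int(np.argmax([apex_height(f) for f in frames]))

def select_artifact_free_series(left_frames, right_frames, apex_height):
    i_l = vertex_index(left_frames, apex_height)
    i_r = vertex_index(right_frames, apex_height)
    # The left camera tends to catch corneal reflections left of the vertex and
    # the right camera right of it, so keep the clean half of each group.
    return right_frames[:i_r] + left_frames[i_l:]
```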
  • The image selection unit 1021 of some exemplary aspects is configured to determine which of the two images acquired substantially simultaneously by the first imaging system 1014 (e.g., the left imaging system) and the second imaging system 1015 (e.g., the right imaging system) includes a corneal reflection artifact.
  • This corneal reflection artifact determination processing includes predetermined image analysis, for example, threshold processing of the luminance information assigned to pixels. Note that the two images acquired substantially simultaneously can be identified based on the synchronization relationship between the imaging by the first imaging system 1014 and the imaging by the second imaging system 1015.
  • the threshold processing used in the artifact determination process is performed, for example, to identify pixels assigned luminance values exceeding a preset threshold.
  • the threshold may be set higher than the luminance value of the slit light image (projection area of slit light) in the image.
  • the image selection unit 1021 is configured to determine an image brighter than the slit light image as an artifact without determining the slit light image as an artifact.
  • An artifact detected by the image selection unit 1021 configured in this manner can be considered likely to be a corneal reflection artifact.
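  • A minimal sketch of this threshold test follows (Python/NumPy; the margin above the slit-light luminance is an illustrative assumption, not a value defined by this disclosure).

```python
import numpy as np

def has_corneal_reflection(image, slit_peak, margin=1.2):
    """Flag a frame whose brightest pixels exceed the slit-light luminance."""
    threshold = slit_peak * margin        # set above the slit image's luminance
    return bool((image > threshold).any())

def pick_frame(left_img, right_img, slit_peak):
    """Of two simultaneously captured frames, keep the one without the artifact."""
    return right_img if has_corneal_reflection(left_img, slit_peak) else left_img
```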
  • Note that the image selection unit 1021 may perform any image analysis other than threshold processing, such as pattern recognition, segmentation, or edge detection, and may more generally employ any information processing technique such as image processing, machine learning, artificial intelligence, or cognitive computing.
  • When one of the two images is determined to include an artifact, the image selection unit 1021 selects the other image. That is, the image selection unit 1021 selects, of the two images obtained substantially simultaneously by the first imaging system 1014 and the second imaging system 1015, the image that is not determined to include the artifact.
  • The image selection unit 1021 may also be configured to perform, for example, a process of evaluating the magnitude of the adverse effect of detected artifacts on observation and diagnosis, and to select the image with the smaller adverse effect. This evaluation may be performed based on any one or more conditions such as the size, intensity, shape, and position of the artifact. Typically, artifacts with large dimensions, artifacts with high intensity, and artifacts located in or near a region of interest such as the slit light image are evaluated as having a large adverse effect.
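  • A minimal sketch of such an evaluation follows (Python/NumPy; the scoring formula and its weighting are illustrative assumptions, not values defined by this disclosure).

```python
import numpy as np

def adverse_effect(artifact_mask, image, roi_center):
    """Score an artifact by size, intensity, and proximity to a region of interest."""
    if not artifact_mask.any():
        return 0.0
    area = artifact_mask.sum()
    intensity = image[artifact_mask].mean()
    ys, xs = np.nonzero(artifact_mask)
    dist = np.hypot(ys - roi_center[0], xs - roi_center[1]).min()
    return area * intensity / (1.0 + dist)   # nearer the ROI weighs heavier
```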
  • Patent Document 3: Japanese Patent Application Laid-Open No. 2019-213733
  • By providing the image selection unit 1021 as described above, it is possible to obtain an image of the three-dimensional region of the subject's eye that does not contain artifacts hindering observation, analysis, or diagnosis. Furthermore, an artifact-free image of the three-dimensional region of the subject's eye can be provided to subsequent processing; for example, a three-dimensional image or a rendered image of the subject's eye can be constructed based on a group of images that do not contain artifacts.
  • In some cases, the image obtained by the left imaging system and the image obtained by the right imaging system differ in the dimensions of a depicted part. For example, the thickness of the cornea, the size of inflammatory cells, and the size of the anterior chamber flare visualized in the left and right images obtained by photographing substantially the same position with the left imaging system and the right imaging system, respectively, may differ from each other. Even in such a case, using the image selection unit 1021 makes it possible to match the dimensions of the part in question.
  • The inflammation state information generation unit 1022 is configured to generate inflammation state information based on the Scheimpflug images selected by the image selection unit 1021.
  • The number of Scheimpflug images used to generate the inflammation state information may be set arbitrarily. Further, the inflammation state information generation unit 1022 may be configured to process one or more Scheimpflug images included in the series selected by the image selection unit 1021 to generate processed image data, and to generate the inflammation state information based on the generated processed image data.
  • the inflammatory state information generated by the data processing unit 1020 may include cell evaluation information, which is evaluation information regarding inflammatory cells in the anterior chamber of the subject's eye.
  • the data processing unit 1020 may be configured to perform at least one of the first segmentation, second segmentation, and cell evaluation information generation processing.
  • the first segmentation is processing for identifying an anterior chamber region corresponding to the anterior chamber
  • the second segmentation is processing for identifying a cell region corresponding to inflammatory cells
  • cell evaluation information generation processing is a process for generating cell evaluation information.
  • the first segmentation may be machine learning-based processing or non-machine learning-based processing, or may be a combination of machine learning-based processing and non-machine learning-based processing.
  • the second segmentation may be machine learning based processing, non-machine learning based processing, or a combination of machine learning based processing and non-machine learning based processing.
  • the cell assessment information generation process may be machine learning based process or non-machine learning based process, or may be a combination of machine learning based process and non-machine learning based process.
  • the type of data input to the first segmentation, the type of data input to the second segmentation, and the type of data input to the cell evaluation information generation process may all be arbitrary. Some examples of possible combinations of multiple processes, including first segmentation, second segmentation, and cell assessment information generation processes, are described below.
  • The data processing unit 1030 shown in FIG. 3 is an example of the configuration of the data processing unit 1020 in FIG. 1.
  • The data processing unit 1030 of this example includes a first segmentation unit 1031, a second segmentation unit 1032, and a cell evaluation information generation processing unit 1033.
  • the first segmentation unit 1031 includes a processor that executes first segmentation for identifying the anterior chamber region, and is configured to identify the anterior chamber region from the Scheimpflug image acquired by the image acquisition unit 1010.
  • FIG. 4 shows a configuration example of the first segmentation unit 1031 when executing the first segmentation of this example using machine learning.
  • the first segmentation unit 1031A of this example is configured to execute the first segmentation using an inference model 1034 (referred to as a first inference model) constructed in advance.
  • The first inference model 1034 includes a neural network 1035 (referred to as the first neural network) constructed by machine learning using training data that includes at least eye images (e.g., Scheimpflug images of eyes, eye images acquired with other modalities, or both).
  • The data input to the first neural network 1035 is a Scheimpflug image, and the data output from the first neural network 1035 is an anterior chamber region. That is, the first segmentation unit 1031A receives a Scheimpflug image acquired by the image acquisition unit 1010 (for example, one or more Scheimpflug images, one or more pieces of processed image data, or a combination of these), inputs the Scheimpflug image to the first neural network 1035 of the first inference model 1034, and obtains the output data from the first neural network 1035 (the anterior chamber region in the input Scheimpflug image).
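  • A minimal sketch of this inference step follows (PyTorch is assumed for illustration; the disclosure does not fix a framework, and the preprocessing and the 0.5 threshold are assumptions).

```python
import torch

def segment_anterior_chamber(first_inference_model, image):
    """image: (H, W) grayscale Scheimpflug image tensor with values in [0, 1]."""
    x = image.unsqueeze(0).unsqueeze(0)        # -> (1, 1, H, W) batch
    with torch.no_grad():
        logits = first_inference_model(x)      # per-pixel anterior chamber logits
    return (logits.sigmoid() > 0.5)[0, 0]      # boolean anterior chamber mask
```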
  • A device (inference model building device) that builds the first inference model 1034 may be provided in the ophthalmic device 1000, in a peripheral device (such as a computer) of the ophthalmic device 1000, or in another computer.
  • the model construction unit 2000 shown in FIG. 5 is an example of an inference model construction device, and includes a learning processing unit 2010 and a neural network 2020.
  • Neural network 2020 typically includes a convolutional neural network (CNN).
  • Reference numeral 2030 in FIG. 5 shows an example of the structure of a convolutional neural network.
  • Images are input to the input layer.
  • Multiple pairs of convolutional layers and pooling layers are arranged behind the input layer. Although three pairs of convolutional layers and pooling layers are provided in the example shown in FIG. 5, the number of pairs may be arbitrary.
  • A convolution operation is a product-sum operation between a filter function (weighting coefficients, filter kernel) and a partial image of the input image having the same dimensions as the filter.
  • The convolution layer applies the convolution operation to each part of the input image. More specifically, in the convolution layer, the value of each pixel of the partial image to which the filter function is applied is multiplied by the value (weight) of the filter function corresponding to that pixel, and the sum of these products over the pixels is calculated. The sum-of-products value thus obtained is assigned to the corresponding pixel of the output image.
  • By repeating this while shifting the position of the partial image, the result of the convolution operation for the entire input image is obtained.
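  • A worked example of this product-sum operation follows (Python/NumPy; the kernel is an illustrative vertical-edge filter).

```python
import numpy as np

def convolve2d_valid(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]   # partial image under the filter
            out[i, j] = np.sum(patch * kernel)  # product-sum for one output pixel
    return out

edge_kernel = np.array([[-1.0, 0.0, 1.0]] * 3)  # crude vertical-edge filter
```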
  • By performing such convolution operations with a large number of different weighting coefficients, a large number of images from which various features are extracted are obtained; that is, a large number of filtered images such as smoothed images and edge images are obtained.
  • the multiple images produced by the convolutional layers are called feature maps.
  • the pooling layer compresses the feature map generated by the previous convolutional layer (data thinning, etc.). More specifically, the pooling layer calculates statistical values for predetermined neighboring pixels of the pixel of interest in the feature map at predetermined pixel intervals, and outputs an image smaller in size than the input feature map.
  • the statistic value applied to the pooling operation is, for example, the maximum value (max pooling) or the average value (average pooling).
  • The pixel spacing applied to the pooling operation is called the stride.
  • a convolutional neural network can extract many features from the input image by performing processing with multiple pairs of convolutional layers and pooling layers.
  • A fully connected layer is provided after the last pair of convolutional and pooling layers. Although two fully connected layers are provided in the example shown in FIG. 5, any number of fully connected layers may be used.
  • the features compressed by a combination of convolution and pooling are used to perform processes such as image classification, image segmentation, and regression.
  • After the last fully connected layer is an output layer that provides the output result.
  • Note that the convolutional neural network may not include fully connected layers (e.g., a fully convolutional network (FCN)). The neural network may also include a support vector machine, a recurrent neural network (RNN), or the like.
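  • A minimal sketch of the structure described above follows (PyTorch; the channel counts, kernel sizes, and the 128x128 input size are illustrative assumptions): three convolution+pooling pairs, two fully connected layers, and an output layer.

```python
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # pair 1
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # pair 2
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # pair 3
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256), nn.ReLU(),  # fully connected layer 1
            nn.Linear(256, 64), nn.ReLU(),            # fully connected layer 2
            nn.Linear(64, n_classes),                 # output layer
        )

    def forward(self, x):                             # x: (N, 1, 128, 128)
        return self.classifier(self.features(x))
```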
  • Machine learning for neural network 2020 may also include transfer learning. That is, the neural network 2020 may include a neural network that has already been trained using other training data (training images) and whose parameters have been adjusted.
  • the model construction unit 2000 (learning processing unit 2010) may be configured to be able to apply fine tuning to a trained neural network (neural network 2020).
  • Neural network 2020 may be constructed using a known open source neural network architecture.
  • the learning processing unit 2010 applies machine learning using training data to the neural network 2020.
  • the parameters adjusted by learning processing unit 2010 include, for example, filter coefficients of convolutional layers and connection weights and offsets of fully connected layers.
  • The training data may include one or more Scheimpflug images acquired for one or more eyes. Since a Scheimpflug image of an eye is an image of the same type as the image input to the first neural network 1035, the output quality (accuracy, precision, etc.) of the first neural network 1035 can be improved compared to performing machine learning using training data containing only other types of images.
  • the type of images included in the training data is not limited to Scheimpflug images.
  • For example, the training data may include images acquired by other ophthalmologic modalities (fundus camera, OCT apparatus, SLO, surgical microscope, etc.), images acquired by diagnostic imaging modalities of other clinical departments (ultrasonic diagnostic apparatus, X-ray diagnostic apparatus, X-ray CT apparatus, magnetic resonance imaging (MRI) apparatus, etc.), images generated by processing actual eye images (processed image data), and pseudo images.
  • Techniques such as data augmentation may be used to increase the number of images included in the training data.
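  • A minimal sketch of such augmentation follows (Python/NumPy; the particular transforms are illustrative assumptions).

```python
import numpy as np

def augment(image):
    """Generate simple geometric and photometric variants of one training image."""
    return [
        np.fliplr(image),                                  # horizontal flip
        np.rot90(image),                                   # 90-degree rotation
        np.clip(image * 1.2, 0.0, 1.0),                    # brightness change
        image + np.random.normal(0.0, 0.01, image.shape),  # mild noise
    ]
```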
  • The training method (machine learning method) for constructing the first neural network 1035 may be arbitrary; for example, it may be supervised learning, unsupervised learning, reinforcement learning, or a combination of any two or more of these.
  • supervised learning is performed using training data generated by annotations that label input images.
  • In this annotation, for example, the anterior chamber region in each image included in the training data is identified and labeled. Identification of the anterior chamber region is performed, for example, by at least one of a physician, a computer, and another inference model.
  • The learning processing unit 2010 can construct the first neural network 1035 by applying supervised learning using such training data to the neural network 2020.
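  • A minimal sketch of this supervised learning follows (PyTorch; the loss function, optimizer, and hyperparameters are illustrative assumptions). Annotated anterior chamber masks serve as labels for the input images.

```python
import torch

def train(model, loader, epochs=10, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()    # per-pixel mask supervision
    model.train()
    for _ in range(epochs):
        for images, masks in loader:          # masks: annotated label images
            opt.zero_grad()
            loss = loss_fn(model(images), masks)
            loss.backward()
            opt.step()
    return model
```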
  • The first inference model 1034 including the first neural network 1035 constructed in this manner is a trained model that receives a Scheimpflug image (for example, a Scheimpflug image acquired by the image acquisition unit 1010 or its processed image data) as input, and produces as output the anterior chamber region in the input Scheimpflug image (for example, information indicating the range or position of the anterior chamber region).
  • In some exemplary aspects, the learning processing unit 2010 may randomly select and disable some units of the neural network 2020 and perform learning using the remaining units (dropout).
  • the method used to build the inference model is not limited to the example shown here.
  • For example, a support vector machine, a Bayesian classifier, boosting, k-means clustering, kernel density estimation, principal component analysis, independent component analysis, self-organizing maps, random forests, generative adversarial networks (GAN), and the like can be used to build an inference model.
  • the first segmentation unit 1031A of this example uses such a first inference model 1034 (first neural network 1035) to execute processing for identifying the anterior chamber region from the Scheimpflug image of the subject's eye.
  • the second segmentation unit 1032 includes a processor that executes second segmentation for identifying cell regions, and is configured to identify cell regions from the anterior chamber region identified by the first segmentation unit 1031.
  • FIG. 6 shows a configuration example of the second segmentation unit 1032 when executing the second segmentation of this example using machine learning.
  • the second segmentation unit 1032A of this example is configured to perform the second segmentation using an inference model 1036 constructed in advance (referred to as a second inference model).
  • The second inference model 1036 includes a neural network 1037 (referred to as the second neural network) constructed by machine learning using training data that includes at least eye images (e.g., Scheimpflug images of eyes, eye images acquired with other modalities, or both).
  • the eye images included in the training data of this example include an image corresponding to at least part of the anterior chamber of the eye (called an anterior chamber image).
  • The eye images included in the training data of this example may include results of manual or automatic segmentation applied to anterior segment images (e.g., Scheimpflug images, images acquired with other modalities), for example, an anterior chamber image extracted from an anterior segment image, or information indicating the range or position of the anterior chamber image in the anterior segment image.
  • The data input to the second neural network 1037 is the anterior chamber region identified from the Scheimpflug image by the first segmentation unit 1031 (or the identified and extracted anterior chamber region; the same applies hereinafter), and the data output from the second neural network 1037 is a cell region. That is, the second segmentation unit 1032A receives the anterior chamber region identified by the first segmentation unit 1031, inputs this anterior chamber region to the second neural network 1037 of the second inference model 1036, and obtains the output data from the second neural network 1037 (the cell region in the input anterior chamber region).
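  • A minimal sketch of this step follows (PyTorch; restricting the input to the anterior chamber by masking is an illustrative assumption).

```python
import torch

def segment_cells(second_inference_model, image, chamber_mask):
    """Detect cell pixels within the anterior chamber region of one image."""
    chamber_only = image * chamber_mask.float()   # keep only the chamber pixels
    x = chamber_only.unsqueeze(0).unsqueeze(0)    # -> (1, 1, H, W)
    with torch.no_grad():
        logits = second_inference_model(x)
    cell_mask = (logits.sigmoid() > 0.5)[0, 0]
    return cell_mask & chamber_mask               # cells inside the chamber only
```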
  • the construction of the second inference model 1036 may be performed in the same manner as the construction of the first inference model 1034 (first neural network 1035).
  • The construction of the second inference model 1036 (second neural network 1037) is executed by the model construction unit 2000 shown in FIG. 5.
  • the model construction unit 2000 (learning processing unit 2010 and neural network 2020) of this example may be the same as that in construction of the first inference model 1034 (first neural network 1035).
  • The training data used to construct the second neural network 1037 may include one or more Scheimpflug images acquired for one or more eyes (e.g., anterior segment images in which the anterior chamber region has been specified, anterior chamber images).
  • As with the first neural network 1035, the types of images included in the training data are not limited to Scheimpflug images; images acquired by other modalities, images generated by processing actual eye images, pseudo images, and the like may also be included.
  • The training method (machine learning method) for constructing the second neural network 1037 may be arbitrary; for example, it may be supervised learning, unsupervised learning, reinforcement learning, or a combination of any two or more of these.
  • supervised learning is performed using training data generated by annotations that label input images.
• In this annotation, for example, the cell regions in each image included in the training data are identified and labeled. Identification of cell regions is performed, for example, by at least one of a physician, a computer, and another inference model.
• The learning processing unit 2010 can construct the second neural network 1037 by applying supervised learning using such training data to the neural network 2020.
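• As a rough illustration of such supervised learning, the following sketch trains a stand-in pixel-wise model with a binary cross-entropy loss against annotated cell masks; the model architecture, loss choice, data shapes, and epoch count are assumptions for illustration, not specifics of the learning processing unit 2010.

```python
import torch
import torch.nn as nn

# A tiny stand-in segmentation network (hypothetical; any pixel-wise
# model with a 1-channel logit output would do here).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()   # pixel-wise binary cross-entropy

# Dummy stand-ins for annotated training data: anterior chamber region
# images and their labeled cell-region masks (1 = cell pixel).
images = torch.rand(8, 1, 128, 128)
masks = (torch.rand(8, 1, 128, 128) > 0.99).float()

for epoch in range(10):            # the epoch count is arbitrary
    optimizer.zero_grad()
    loss = loss_fn(model(images), masks)
    loss.backward()
    optimizer.step()
```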
• The second inference model 1036 including the second neural network 1037 constructed in this manner is a trained model that receives as input the anterior chamber region specified by the first segmentation unit 1031 and outputs the cell regions in the input anterior chamber region (for example, information indicating the range or position of each cell region).
• The second segmentation unit 1032A of the present example uses such a second inference model 1036 (second neural network 1037) to execute the process of identifying cell regions from the anterior chamber region in the Scheimpflug image of the subject's eye.
  • the cell evaluation information generation processing unit 1033 includes a processor that executes cell evaluation information generation processing for generating cell evaluation information, and generates cell evaluation information from the cell regions specified by the second segmentation unit 1032.
  • FIG. 7 shows a configuration example of the cell evaluation information generation processing unit 1033 when executing the cell evaluation information generation processing of this example using machine learning.
• The cell evaluation information generation processing unit 1033A of this example is configured to execute the cell evaluation information generation processing using an inference model 1038 constructed in advance (referred to as a third inference model).
• The third inference model 1038 includes a neural network 1039 (referred to as a third neural network) constructed by machine learning using training data that includes at least eye images (e.g., Scheimpflug images of eyes, eye images acquired with another modality, or both).
• The eye images included in the training data of this example include at least anterior chamber images in which images of inflammatory cells are depicted, and may further include anterior chamber images in which no images of inflammatory cells are depicted.
• The eye images included in the training data of this example may be results of manual or automatic segmentation applied to anterior segment images (e.g., Scheimpflug images, or images acquired with other modalities); for example, a cell image extracted from the anterior chamber image in an anterior segment image, or information indicating the range or position of cell regions within the anterior chamber image.
• The data input to the third neural network 1039 is the output from the second segmentation unit 1032 or data based thereon (for example, data indicating the range, position, or distribution of the cell regions, or the anterior chamber region in which the cell regions have been specified), and the data output from the third neural network 1039 is cell evaluation information. That is, the cell evaluation information generation processing unit 1033A receives the cell region identification result produced by the second segmentation unit 1032 (or data based thereon), inputs it to the third neural network 1039 of the third inference model 1038, and acquires the output data (cell evaluation information) from the third neural network 1039. As described above, the cell evaluation information is evaluation information regarding predetermined parameters of inflammatory cells (e.g., their density, number, position, or distribution).
  • the construction of the third inference model 1038 may be performed in the same manner as the construction of the first inference model 1034 (first neural network 1035).
  • the construction of the third inference model 1038 (third neural network 1039) is executed by the model construction unit 2000 shown in FIG.
  • the model construction unit 2000 (learning processing unit 2010 and neural network 2020) of this example may be the same as that in construction of the first inference model 1034 (first neural network 1035).
• The training data used to construct the third neural network 1039 may include one or more Scheimpflug images acquired from one or more eyes (for example, anterior segment images including an anterior chamber region in which cell regions are specified, or anterior chamber images in which cell regions are identified).
• The types of images included in the training data are not limited to Scheimpflug images; images generated by processing images of the eye, pseudo images, and the like may also be included.
• The training method (machine learning method) for constructing the third neural network 1039 may be arbitrary; for example, it may be supervised learning, unsupervised learning, or reinforcement learning, or a combination of any two or more of these.
  • supervised learning is performed using training data generated by annotations that label input images.
  • each image (in which a cell region is specified) included in the training data is labeled with cell evaluation information generated from that image.
• Generation of cell evaluation information from images is performed, for example, by at least one of a physician, a computer, and another inference model.
• The learning processing unit 2010 can construct the third neural network 1039 by applying supervised learning using such training data to the neural network 2020.
• The third inference model 1038 including the third neural network 1039 constructed in this manner is a trained model that receives as input the cell region identification result produced by the second segmentation unit 1032 (or data based thereon) and outputs cell evaluation information based on that input.
• By using such a third inference model 1038 (third neural network 1039), the cell evaluation information generation processing unit 1033A of this example executes the process of generating cell evaluation information from the cell regions in the anterior chamber region of the Scheimpflug image of the subject's eye.
• A data processing unit 1040 shown in FIG. 8 is an example of the configuration of the data processing unit 1020 in FIG. 1.
• The data processing unit 1040 of this example includes a first segmentation unit 1041, a conversion processing unit 1042, a second segmentation unit 1043, and a cell evaluation information generation processing unit 1044.
• The first segmentation unit 1041 has the same configuration and function as the first segmentation unit 1031 in FIG. 3 (for example, the first segmentation unit 1031A in FIG. 4), and is configured to perform the first segmentation to identify the anterior chamber region.
  • the conversion processing unit 1042 converts the anterior chamber region specified by the first segmentation unit 1041 into data with a structure according to the second segmentation performed by the second segmentation unit 1043.
  • the second segmentation unit 1043 of this example is configured to execute the second segmentation using a neural network (second neural network) constructed by machine learning.
• The conversion processing unit 1042 is configured to perform conversion processing for converting the anterior chamber region specified from the Scheimpflug image by the first segmentation unit 1041 into image data having a structure corresponding to the input layer of the second neural network of the second segmentation unit 1043.
  • the input layer of the second neural network (convolutional neural network) of the second segmentation unit 1043 may be configured to accept data with a predetermined structure (morphology, format).
  • This predefined data structure may be, for example, a predefined image size (eg, number of vertical pixels and number of horizontal pixels), a predefined image shape (eg, square or rectangular), and the like.
  • the image size and image shape of the anterior chamber region identified by the first segmentation unit 1041 vary depending on the specifications of the ophthalmologic apparatus, conditions and settings at the time of imaging, individual differences in the size and shape of the eye to be examined, and the like.
• Accordingly, the conversion processing unit 1042 converts the structure of the anterior chamber region (for example, its image size and/or image shape) specified by the first segmentation unit 1041 into a structure that the input layer of the second neural network of the second segmentation unit 1043 can accept.
  • Image size conversion may be performed using any known image size conversion technique.
• For example, the conversion may include a process of dividing the anterior chamber region specified by the first segmentation unit 1041, or a process of resizing it into a single image having an image size corresponding to the input layer.
  • Image shape transformation may be performed using any known image transformation technique. Conversion processing for other data structures may be performed in the same manner.
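• A minimal sketch of such a conversion, assuming the input layer expects a fixed square image (256×256 here, an arbitrary choice rather than a disclosed specification): pad the anterior chamber region to a square, then resize it.

```python
import torch
import torch.nn.functional as F

def to_input_structure(region: torch.Tensor, size: int = 256) -> torch.Tensor:
    """Pad a (1, H, W) anterior chamber region to a square, then resize it
    to an (assumed) fixed input resolution of the second neural network."""
    _, h, w = region.shape
    side = max(h, w)
    # Center-pad to a square so the aspect ratio is preserved.
    left = (side - w) // 2
    top = (side - h) // 2
    square = F.pad(region, (left, side - w - left, top, side - h - top))
    # Resize to the input-layer resolution (bilinear interpolation).
    return F.interpolate(square.unsqueeze(0), size=(size, size),
                         mode="bilinear", align_corners=False)

x = to_input_structure(torch.rand(1, 300, 180))   # -> (1, 1, 256, 256)
```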
• While the above examples input a Scheimpflug image itself to the neural network, the image input to the neural network may also be arbitrarily processed image data derived from a Scheimpflug image.
  • the arrangement of elements that execute conversion processing may be arbitrary.
• For example, the conversion processing unit may be placed at a stage prior to the target neural network (for example, at a stage prior to the inference model including this neural network, or inside this inference model and before this neural network), or inside the target neural network.
• In the latter case, the conversion processing unit is placed at a stage prior to the input layer that receives the inputs to this neural network.
• The second segmentation unit 1043 has the same configuration and function as the second segmentation unit 1032A in FIG. 6, and is configured to execute the second segmentation.
  • the second neural network of the second segmentation unit 1043 is configured to receive input of the image data (the anterior chamber region whose data structure has been converted) generated by the conversion processing unit 1042 and output the cell region.
• Machine learning for building the second neural network of this example may be performed in the same manner as machine learning for building the second neural network 1037 of FIG. 6.
• The cell evaluation information generation processing unit 1044 has the same configuration and function as the cell evaluation information generation processing unit 1033 in FIG. 3 (for example, the cell evaluation information generation processing unit 1033A in FIG. 7), and is configured to execute the cell evaluation information generation processing for generating cell evaluation information from the cell regions obtained by the second segmentation unit 1043.
• The data processing unit 1040 of this example may have a configuration in which the conversion processing unit 1042 is arranged between the first segmentation unit 1031 and the second segmentation unit 1032 of the data processing unit 1030 in FIG. 3.
  • the configuration of the data processing unit 1040 of this example is not limited to this.
• A data processing unit 1050 shown in FIG. 9 is an example of the configuration of the data processing unit 1020 in FIG. 1.
• The data processing unit 1050 of this example includes a second segmentation unit 1051 and a cell evaluation information generation processing unit 1052.
• The second segmentation unit 1051 includes a processor that executes the second segmentation for identifying cell regions, and is configured to identify cell regions from the Scheimpflug image acquired by the image acquisition unit 1010.
  • FIG. 10 shows a configuration example of the second segmentation unit 1051 when executing the second segmentation of this example using machine learning.
  • the second segmentation unit 1051A of this example is configured to execute the second segmentation using an inference model 1053 constructed in advance (referred to as a fourth inference model).
• The fourth inference model 1053 includes a neural network 1054 (referred to as a fourth neural network) constructed by machine learning using training data that includes at least eye images (e.g., Scheimpflug images of eyes, eye images acquired with another modality, or both).
• The fourth neural network 1054 may include at least a portion of the first neural network 1035 of FIG. 4 and at least a portion of the second neural network 1037 of FIG. 6.
  • fourth neural network 1054 may be a neural network in which first neural network 1035 and second neural network 1037 are arranged in series.
  • the fourth neural network 1054 having such a configuration has a function of specifying the anterior chamber region from the Scheimpflug image and a function of specifying the cell region from the anterior chamber region.
  • the fourth neural network 1054 may be machine-learned to directly identify cell regions from the Scheimpflug image without identifying the anterior chamber region.
  • Embodiments of the fourth neural network 1054 are not limited to these, and may include any machine learning applied neural network for identifying cell regions from a Scheimpflug image.
• The data input to the fourth neural network 1054 is a Scheimpflug image, and the data output from the fourth neural network 1054 are cell regions. That is, the second segmentation unit 1051A receives the Scheimpflug image, inputs it to the fourth neural network 1054 of the fourth inference model 1053, and acquires the output data (the cell regions in the input Scheimpflug image) from the fourth neural network 1054.
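• The series arrangement mentioned above might be sketched as follows; gating the image with the anterior-chamber map before the cell stage is one possible reading for illustration, not the disclosed architecture, and both stages are hypothetical stand-ins.

```python
import torch
import torch.nn as nn

class FourthNet(nn.Module):
    """Hypothetical sketch of a series arrangement for the fourth neural
    network 1054: an anterior-chamber stage followed by a cell stage."""
    def __init__(self):
        super().__init__()
        # Stage 1: Scheimpflug image -> anterior-chamber probability map.
        self.first = nn.Conv2d(1, 1, 3, padding=1)
        # Stage 2: gated image -> cell-region logit map.
        self.second = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, scheimpflug):
        chamber = torch.sigmoid(self.first(scheimpflug))
        # Mask the image with the chamber map before the cell stage; this
        # gating is one possible way to couple the two stages.
        return self.second(scheimpflug * chamber)

cell_logits = FourthNet()(torch.rand(1, 1, 256, 256))
```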
  • the construction of the fourth inference model 1053 may be performed in the same manner as the construction of the first inference model 1034 (first neural network 1035).
  • the construction of the fourth inference model 1053 (fourth neural network 1054) is executed by the model construction unit 2000 shown in FIG.
  • the model construction unit 2000 (learning processing unit 2010 and neural network 2020) of this example may be the same as that in construction of the first inference model 1034 (first neural network 1035).
  • the training data used to construct the fourth neural network 1054 may include one or more Scheimpflug images acquired for one or more eyes.
  • the types of images included in the training data are not limited to Scheimpflug images.
• Images generated by processing images of the eye, pseudo images, and the like may also be included.
  • Any image included in the training data may be accompanied by information to assist the processing performed by the fourth neural network 1054.
  • the anterior chamber region in the image may be labeled by prior annotation.
• The training method (machine learning method) for constructing the fourth neural network 1054 may be arbitrary; for example, it may be supervised learning, unsupervised learning, or reinforcement learning, or a combination of any two or more of these.
  • supervised learning is performed using training data generated by annotations that label input images.
• In this annotation, for example, the cell regions in each image included in the training data are identified and labeled. Identification of cell regions is performed, for example, by at least one of a physician, a computer, and another inference model.
• The learning processing unit 2010 can construct the fourth neural network 1054 by applying supervised learning using such training data to the neural network 2020.
• The fourth inference model 1053 including the fourth neural network 1054 constructed in this way is a trained model that receives as input the Scheimpflug image (or its processed image data, etc.) acquired by the image acquisition unit 1010 and outputs the cell regions in the input Scheimpflug image (for example, information indicating the range or position of each cell region).
  • the second segmentation unit 1051A of this example uses such a fourth inference model 1053 (fourth neural network 1054) to execute processing for identifying cell regions from the Scheimpflug image of the subject's eye.
  • the cell evaluation information generation processing unit 1052 includes a processor that executes cell evaluation information generation processing for generating cell evaluation information, and generates cell evaluation information from the cell regions specified by the second segmentation unit 1051.
  • FIG. 11 shows a configuration example of the cell evaluation information generation processing unit 1052 when executing the cell evaluation information generation processing of this example using machine learning.
• The cell evaluation information generation processing unit 1052A of this example is configured to execute the cell evaluation information generation processing using an inference model 1055 constructed in advance (referred to as a fifth inference model).
• The fifth inference model 1055 includes a neural network 1056 (referred to as a fifth neural network) constructed by machine learning using training data that includes at least eye images (e.g., Scheimpflug images of eyes, eye images acquired with another modality, or both).
• The data input to the fifth neural network 1056 is the output from the second segmentation unit 1051 or data based thereon (for example, data indicating the range, position, or distribution of the cell regions, or the anterior chamber region in which the cell regions have been specified), and the data output from the fifth neural network 1056 is cell evaluation information.
• That is, the cell evaluation information generation processing unit 1052A is configured to receive the cell region identification result produced by the second segmentation unit 1051 (or data based thereon), input it to the fifth neural network 1056 of the fifth inference model 1055, and acquire the output data (cell evaluation information) from the fifth neural network 1056.
• The machine learning method for constructing the fifth inference model 1055 (fifth neural network 1056) may be similar to the machine learning method for constructing the third neural network 1039 of the cell evaluation information generation processing unit 1033 in FIG. 3. Also, the training data used in machine learning for constructing the fifth inference model 1055 (fifth neural network 1056) may be the same as the training data used in machine learning for constructing the third neural network 1039.
• The fifth inference model 1055 including the fifth neural network 1056 is a trained model that receives as input the cell region identification result produced by the second segmentation unit 1051 (or data based thereon) and outputs cell evaluation information based on that input.
• The cell evaluation information generation processing unit 1052A of the present example uses such a fifth inference model 1055 (fifth neural network 1056) to execute the process of generating cell evaluation information from the cell regions in the Scheimpflug image of the subject's eye.
• A data processing unit 1060 shown in FIG. 12 is an example of the configuration of the data processing unit 1020 in FIG. 1.
• The data processing section 1060 of this example includes a cell evaluation information generation processing section 1061.
  • the cell evaluation information generation processing unit 1061 includes a processor that executes cell evaluation information generation processing for generating cell evaluation information, and generates cell evaluation information from the Scheimpflug image acquired by the image acquisition unit 1010.
  • FIG. 13 shows a configuration example of the cell evaluation information generation processing unit 1061 when executing the cell evaluation information generation processing of this example using machine learning.
• The cell evaluation information generation processing unit 1061A of this example is configured to execute the cell evaluation information generation processing using an inference model 1062 constructed in advance (referred to as a sixth inference model).
• The sixth inference model 1062 includes a neural network 1063 (referred to as a sixth neural network) constructed by machine learning using training data that includes at least eye images (e.g., Scheimpflug images of eyes, eye images acquired with another modality, or both).
• The sixth neural network 1063 may include at least a portion of the first neural network 1035 of FIG. 4, at least a portion of the second neural network 1037 of FIG. 6, and at least a portion of the third neural network 1039 of FIG. 7.
  • sixth neural network 1063 may be a neural network in which first neural network 1035, second neural network 1037, and third neural network 1039 are arranged in series.
• The sixth neural network 1063 having such a configuration has a function of specifying the anterior chamber region from a Scheimpflug image, a function of specifying cell regions from the anterior chamber region, and a function of generating cell evaluation information from the cell regions.
• Alternatively, machine learning may be applied so that the sixth neural network 1063 generates cell evaluation information directly from the Scheimpflug image, without anterior chamber region identification and/or cell region identification.
  • Embodiments of the sixth neural network 1063 are not limited to these, and may include any machine learning applied neural network for identifying cell evaluation information from a Scheimpflug image.
• The data input to the sixth neural network 1063 is the output from the image acquisition unit 1010 or data based thereon, and the data output from the sixth neural network 1063 is cell evaluation information. That is, the cell evaluation information generation processing unit 1061A is configured to receive the Scheimpflug image acquired by the image acquisition unit 1010 (and/or data based on this Scheimpflug image), input this Scheimpflug image or data based thereon to the sixth neural network 1063 of the sixth inference model 1062, and acquire the output data (cell evaluation information) from the sixth neural network 1063.
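• As an illustration of such an end-to-end configuration, the following sketch maps a Scheimpflug image directly to a grade-like output; the encoder shape and the six-class output format are assumptions for illustration, not a disclosed design.

```python
import torch
import torch.nn as nn

class SixthNet(nn.Module):
    """Hypothetical end-to-end sketch: a Scheimpflug image in, logits over
    an assumed set of six cell-evaluation grades out."""
    def __init__(self, n_grades: int = 6):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global average pooling
        )
        self.head = nn.Linear(32, n_grades)

    def forward(self, x):
        return self.head(self.encoder(x).flatten(1))

grade = SixthNet()(torch.rand(1, 1, 256, 256)).argmax(dim=1)
```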
• The machine learning method for constructing the sixth inference model 1062 (sixth neural network 1063) may be similar to the machine learning method for constructing the third neural network 1039 of the cell evaluation information generation processing unit 1033 in FIG. 3. Also, the training data used in machine learning for constructing the sixth inference model 1062 (sixth neural network 1063) may be the same as the training data used in machine learning for constructing the third neural network 1039.
• The sixth inference model 1062 including the sixth neural network 1063 is a trained model that receives as input the Scheimpflug image acquired by the image acquisition unit 1010 (and/or data based on this Scheimpflug image) and outputs cell evaluation information based on that input.
• By using such a sixth inference model 1062 (sixth neural network 1063), the cell evaluation information generation processing unit 1061A of the present example executes the process of generating cell evaluation information from the Scheimpflug image of the eye to be examined (and/or data based on this Scheimpflug image).
• A data processing unit 1070 shown in FIG. 14 is an example of the configuration of the data processing unit 1020 in FIG. 1.
• The data processing unit 1070 of this example includes a first segmentation unit 1071 and a cell evaluation information generation processing unit 1072.
  • the first segmentation unit 1071 includes a processor that executes first segmentation for identifying the anterior chamber region, and is configured to identify the anterior chamber region from the Scheimpflug image acquired by the image acquisition unit 1010.
  • FIG. 15 shows a configuration example of the first segmentation unit 1071 when executing the first segmentation of this example using machine learning.
  • the first segmentation unit 1071A of this example is configured to execute the first segmentation using an inference model 1073 constructed in advance (referred to as a seventh inference model).
• The seventh inference model 1073 includes a neural network 1074 (referred to as a seventh neural network) constructed by machine learning using training data that includes at least eye images (e.g., Scheimpflug images of eyes, eye images acquired with another modality, or both).
• The machine learning method for constructing the seventh inference model 1073 may be the same as the machine learning method for constructing the first neural network 1035 of the first segmentation unit 1031A in FIG. 4. Also, the training data used in machine learning for constructing the seventh inference model 1073 (seventh neural network 1074) may be the same as the training data used in machine learning for constructing the first neural network 1035.
• The seventh neural network 1074 may be the same as or similar to the first neural network 1035, and the seventh inference model 1073 may be the same as or similar to the first inference model 1034.
• The seventh inference model 1073 including the seventh neural network 1074 is a trained model that receives as input the Scheimpflug image acquired by the image acquisition unit 1010 (or its processed image data, etc.) and outputs the anterior chamber region in the input image (for example, information indicating the range or position of the anterior chamber region).
  • the first segmentation unit 1071A of this example uses such a seventh inference model 1073 (seventh neural network 1074) to execute processing for identifying the anterior chamber region from the Scheimpflug image of the subject's eye.
  • the cell evaluation information generation processing unit 1072 includes a processor that executes cell evaluation information generation processing for generating cell evaluation information, and generates cell evaluation information from the anterior chamber region specified by the first segmentation unit 1071.
  • FIG. 16 shows a configuration example of the cell evaluation information generation processing unit 1072 when executing the cell evaluation information generation processing of this example using machine learning.
• The cell evaluation information generation processing unit 1072A of this example is configured to execute the cell evaluation information generation processing using an inference model 1075 constructed in advance (referred to as an eighth inference model).
• The eighth inference model 1075 includes a neural network 1076 (referred to as an eighth neural network) constructed by machine learning using training data that includes at least eye images (e.g., Scheimpflug images of eyes, eye images acquired with another modality, or both).
• The eighth neural network 1076 may include at least a portion of the second neural network 1037 of FIG. 6 and at least a portion of the third neural network 1039 of FIG. 7.
  • eighth neural network 1076 may be a neural network in which second neural network 1037 and third neural network 1039 are arranged in series.
  • the eighth neural network 1076 having such a configuration has a function of specifying a cell area from the anterior chamber area and a function of generating cell evaluation information from the cell area.
  • the eighth neural network 1076 may be machine-learned to generate cell assessment information directly from the anterior chamber region without specifying cell regions. Aspects of the eighth neural network 1076 are not limited to these and may include any machine learning applied neural network for identifying cellular assessment information from the anterior chamber region.
• The data input to the eighth neural network 1076 is the output from the first segmentation unit 1071 or data based thereon, and the data output from the eighth neural network 1076 is cell evaluation information. That is, the cell evaluation information generation processing unit 1072A is configured to receive the anterior chamber region specified from the Scheimpflug image by the first segmentation unit 1071 (and/or data based on this anterior chamber region), input this anterior chamber region or data based thereon to the eighth neural network 1076 of the eighth inference model 1075, and acquire the output data (cell evaluation information) from the eighth neural network 1076.
• The machine learning method for constructing the eighth inference model 1075 (eighth neural network 1076) may be similar to the machine learning method for constructing the third neural network 1039 of the cell evaluation information generation processing unit 1033 in FIG. 3. Also, the training data used in machine learning for constructing the eighth inference model 1075 (eighth neural network 1076) may be the same as the training data used in machine learning for constructing the third neural network 1039.
• The eighth inference model 1075 including the eighth neural network 1076 is a trained model that receives as input the anterior chamber region identified by the first segmentation unit 1071 (and/or data based on this anterior chamber region) and outputs cell evaluation information based on that input.
• By using such an eighth inference model 1075 (eighth neural network 1076), the cell evaluation information generation processing unit 1072A of this example executes the process of generating cell evaluation information from the anterior chamber region (and/or data based on the anterior chamber region) in the Scheimpflug image of the eye to be examined.
• A data processing unit 1080 shown in FIG. 17 is an example of the configuration of the data processing unit 1020 in FIG. 1.
• The data processing unit 1080 of this example includes a first segmentation unit 1081, a conversion processing unit 1082, and a cell evaluation information generation processing unit 1083.
• The first segmentation unit 1081 has the same configuration and function as the first segmentation unit 1031 in FIG. 3 (for example, the first segmentation unit 1031A in FIG. 4), and is configured to perform the first segmentation to identify the anterior chamber region.
  • the conversion processing unit 1082 converts the anterior chamber region specified by the first segmentation unit 1081 into data with a structure according to the cell evaluation information generation processing executed by the cell evaluation information generation processing unit 1083.
• The cell evaluation information generation processing unit 1083 of this example is configured to execute the cell evaluation information generation processing using a neural network (eighth neural network) constructed by machine learning, like the cell evaluation information generation processing unit 1072A in FIG. 16.
• The conversion processing unit 1082 is configured to perform conversion processing for converting the anterior chamber region specified from the Scheimpflug image by the first segmentation unit 1081 into image data having a structure corresponding to the input layer of the eighth neural network of the cell evaluation information generation processing unit 1083.
  • the input layer of the eighth neural network (convolutional neural network) of the cell evaluation information generation processing unit 1083 may be configured to accept data with a predetermined structure (morphology, format).
  • This predefined data structure may be, for example, a predefined image size (eg, number of vertical pixels and number of horizontal pixels), a predefined image shape (eg, square or rectangular), and the like.
• The image size and image shape of the anterior chamber region specified by the first segmentation unit 1081 vary depending on the specifications of the ophthalmologic apparatus, conditions and settings at the time of imaging, individual differences in the size and shape of the eye to be examined, and the like.
• Accordingly, the conversion processing unit 1082 converts the structure of the anterior chamber region (for example, its image size and/or image shape) specified by the first segmentation unit 1081 into a structure that the input layer of the eighth neural network of the cell evaluation information generation processing unit 1083 can accept.
  • Image size conversion may be performed using any known image size conversion technique.
• For example, the conversion may include a process of dividing the anterior chamber region specified by the first segmentation unit 1081, or a process of resizing it into a single image having an image size corresponding to the input layer.
  • Image shape transformation may be performed using any known image transformation technique. Conversion processing for other data structures may be performed in the same manner.
• The cell evaluation information generation processing unit 1083 has the same configuration and function as the cell evaluation information generation processing unit 1072 in FIG. 14 (for example, the cell evaluation information generation processing unit 1072A in FIG. 16), and is configured to execute the cell evaluation information generation processing for generating cell evaluation information from the converted anterior chamber region data obtained by the conversion processing unit 1082.
• The data processing unit 1080 of this example may have a configuration in which the conversion processing unit 1082 is arranged between the first segmentation unit 1071 and the cell evaluation information generation processing unit 1072 of the data processing unit 1070 of FIG. 14.
  • the configuration of the data processing unit 1080 of this example is not limited to this.
• So far, examples of the data processing unit 1020 including an inference model (neural network) built using machine learning have mainly been described.
  • the data processing unit 1020 is not limited to such a machine learning-based configuration.
• That is, the data processing unit 1020 according to the present disclosure may be implemented by machine-learning-based configurations alone, by a combination of machine-learning-based and non-machine-learning-based configurations, or by non-machine-learning-based configurations alone.
  • a data processing unit 1090 shown in FIG. 18 is an example of the configuration of the data processing unit 1020 in FIG. 1, and has a non-machine learning-based configuration.
• The data processing section 1090 of this example includes a first analysis processing section 1091, a second analysis processing section 1092, and a third analysis processing section 1093.
• The first analysis processing unit 1091 includes a processor that executes the first segmentation for specifying the anterior chamber region, and applies a predetermined analysis process (referred to as the first analysis process) to the Scheimpflug image (and/or its processed image data) acquired by the image acquisition unit 1010 to specify the anterior chamber region in the Scheimpflug image.
  • the first analysis process may include any known segmentation for identifying the anterior chamber region in the Scheimpflug image.
• For example, the segmentation for identifying the anterior chamber region includes segmentation for identifying the image region corresponding to the cornea (especially the posterior surface of the cornea) and segmentation for identifying the image region corresponding to the crystalline lens (especially the anterior surface of the lens).
• Hereinafter, the image area corresponding to the cornea is called the corneal region, the image area corresponding to the posterior surface of the cornea is called the posterior corneal region, the image area corresponding to the crystalline lens is called the lens region, and the image area corresponding to the anterior surface of the crystalline lens is called the anterior lens region.
• The segmentation for identifying the posterior corneal region may include any known segmentation technique. Note that this segmentation can be hindered by artifacts in Scheimpflug images and by saturation of pixel values.
• To address this, the configuration shown in FIG. 2C can be adopted. That is, by combining the imaging method using the first imaging system 1014 and the second imaging system 1015 with the Scheimpflug image selection method using the image selection unit 1021, a Scheimpflug image free from artifacts and saturation can be selected, and the posterior corneal region can be identified from this Scheimpflug image.
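• One way such a selection could work, sketched below under the assumption that the selection criterion is simply the fraction of saturated pixels (the actual criterion of the image selection unit 1021 is not restated here):

```python
import numpy as np

def select_scheimpflug(images, sat_level=255):
    """From images of the same slice captured by the two imaging systems,
    pick the one with the fewest saturated pixels (an assumed criterion)."""
    return min(images, key=lambda img: float(np.mean(img >= sat_level)))

img_a = np.random.randint(0, 256, (512, 512), dtype=np.uint8)  # system 1014
img_b = np.random.randint(0, 200, (512, 512), dtype=np.uint8)  # system 1015
best = select_scheimpflug([img_a, img_b])
```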
  • the segmentation of the anterior lens region may include any known segmentation.
• Note that the representation state of the Scheimpflug image changes depending on the state of the pupil of the eye to be examined (e.g., mydriatic state, non-mydriatic state, small-pupil eye), and this variation becomes a problem for segmentation of the anterior lens region.
• For example, when the subject's eye is non-mydriatic or has a small pupil, the imaged range of the lens is smaller than when the subject's eye is mydriatic.
• To address this, it is possible to apply a process for equalizing the representation state of the Scheimpflug image, such as a process of estimating the position and shape of the non-imaged part of the anterior lens surface based on the anterior lens region depicted in the Scheimpflug image. The process for equalizing the representation state of the Scheimpflug image may be executed on a machine learning basis or on a non-machine-learning basis. Also, the process of estimating the position and shape of the anterior lens surface may include, for example, any known extrapolation process.
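• A toy sketch of the extrapolation idea: fit a low-order polynomial to the visible part of the anterior lens boundary and evaluate it over the hidden extent. The quadratic model and all numeric values here are illustrative assumptions.

```python
import numpy as np

# (x, z) boundary points of the anterior lens surface actually depicted in
# the Scheimpflug image (dummy values; with a small pupil, only the central
# part of the lens is imaged).
x_seen = np.linspace(-1.5, 1.5, 31)
z_seen = 0.2 * x_seen**2 + 3.0            # stand-in detected boundary

# Fit a low-order polynomial to the visible part and extrapolate it over
# the part hidden behind the iris (a quadratic model is an assumption).
coeffs = np.polyfit(x_seen, z_seen, deg=2)
x_full = np.linspace(-3.0, 3.0, 61)       # assumed full lens extent
z_estimated = np.polyval(coeffs, x_full)
```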
• When a series of Scheimpflug images is processed, images with problems and images without problems may be mixed in the series, or the degree of the problems may vary between images.
• For example, images with and without artifacts or saturation may be mixed, or artifacts in various states (position, size, shape, etc.) may appear across several images.
• These phenomena can adversely affect the quality (e.g., stability, robustness, reproducibility, accuracy, precision) of the processing performed by the data processing unit 1090.
  • steps may be taken to prevent these phenomena from occurring or to reduce adverse effects resulting from these phenomena.
• As an example of the former, the imaging method using the first imaging system 1014 and the second imaging system 1015 may be combined with the Scheimpflug image selection method using the image selection unit 1021.
  • Examples of the latter measures include image correction/noise removal/noise reduction, image parameter adjustment, and the like.
• The second analysis processing unit 1092 includes a processor that executes the second segmentation for identifying cell regions, and is configured to identify cell regions by applying a second analysis process to the anterior chamber region identified from the Scheimpflug image by the first analysis processing unit 1091.
• For example, the second analysis processing unit 1092 may be configured to calculate a predetermined evaluation value (e.g., a pixel-value-based statistic) over the anterior chamber region and to identify the cell regions based on this value.
  • the second analysis processor 1092 may be configured to apply segmentation to the anterior chamber region to identify cell regions. This segmentation is performed, for example, according to a program created based on the standard morphology (eg size, shape, etc.) of inflammatory cells (cell areas).
  • the second analysis processor 1092 may be configured to identify cell regions by at least a partial combination of these two techniques.
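• A sketch combining the two techniques, under two assumptions not fixed by this disclosure: the evaluation value is a simple pixel-value threshold, and the standard cell morphology is approximated by a pixel-count range.

```python
import numpy as np
from scipy import ndimage

def find_cell_regions(chamber, threshold=60, min_px=2, max_px=40):
    """Threshold the anterior chamber region (first assumed technique:
    pixel-value comparison), then keep connected components whose pixel
    count matches an assumed inflammatory-cell size range (second
    technique: standard cell morphology)."""
    binary = chamber > threshold
    labels, n = ndimage.label(binary)          # connected components
    sizes = np.bincount(labels.ravel())        # sizes[k] = pixels in label k
    keep = [k for k in range(1, n + 1) if min_px <= sizes[k] <= max_px]
    # Return the centroid of each retained component as a cell position.
    return ndimage.center_of_mass(binary.astype(float), labels, keep)

chamber = np.random.randint(0, 80, (200, 300))   # dummy region
cells = find_cell_regions(chamber)
```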
• The measures that can be taken when the image acquisition unit 1010 acquires a series of Scheimpflug images by slit scanning may be the same as those of the first analysis processing unit 1091. Also, considering that a cell region is generally a minute image region, measures may be taken to distinguish cell regions from minute artifacts. For example, by performing processing for removing artifacts (ghosts, etc.), erroneous detection of artifacts as cells can be prevented.
• The third analysis processing unit 1093 includes a processor that executes the cell evaluation information generation processing for generating cell evaluation information, and applies a third analysis process to the cell regions identified by the second analysis processing unit 1092 to generate cell evaluation information.
• The cell evaluation information may be any evaluation information relating to inflammatory cells; it may include information representing the state of inflammatory cells (e.g., any parameter such as density, number, position, or distribution), or evaluation information generated based on such predetermined parameters.
  • the third analysis processing unit 1093 can obtain the density, number, position, distribution, etc. of one or more cell regions identified by the second analysis processing unit 1092.
• The process of obtaining the density of inflammatory cells includes, for example, a process of setting an image area of a predetermined size (for example, a 1 mm square image area) and a process of counting the number of cell regions detected by the second analysis processing unit 1092 within the set image area.
• The dimensions of the image area are determined, for example, from the specifications of the optical system of the ophthalmologic apparatus 1000 (e.g., design data of the optical system and/or actual measurement data of the optical system), typically as the correspondence between pixels and dimensions in real space (for example, the dot pitch).
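• For example, counting cells in a 1 mm square window could look like the following; the dot pitch value and the coordinate convention are assumptions for illustration.

```python
def cells_in_1mm_window(cell_positions, window_origin_mm, dot_pitch_mm=0.01):
    """Count detected cell regions inside a 1 mm x 1 mm window.

    cell_positions: (row, col) pixel coordinates of detected cell regions;
    dot_pitch_mm: mm per pixel from the optical design/measurement data
    (0.01 mm is a made-up value for illustration)."""
    r0, c0 = window_origin_mm
    count = 0
    for r, c in cell_positions:
        r_mm, c_mm = r * dot_pitch_mm, c * dot_pitch_mm
        if r0 <= r_mm < r0 + 1.0 and c0 <= c_mm < c0 + 1.0:
            count += 1
    return count

n = cells_in_1mm_window([(120, 340), (50, 60)], window_origin_mm=(1.0, 3.0))
```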
  • the cell evaluation information may include information on the density of inflammatory cells thus obtained, or may include evaluation information obtained from this density information.
  • This evaluation information may include, for example, evaluation results using the uveitis disease classification criteria proposed by the SUN Working Group.
• This classification standard defines the grade according to the number of inflammatory cells present in one field of view (a 1 mm square field of view), that is, the density (concentration) of inflammatory cells: grade "0" is less than 1 cell, grade "0.5+" is 1-5 cells, grade "1+" is 6-15 cells, grade "2+" is 16-25 cells, grade "3+" is 26-50 cells, and grade "4+" is more than 50 cells. Note that the grade divisions of this classification standard may be made finer or coarser. Alternatively, cell evaluation information may be generated based on other classification criteria.
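• The grade boundaries just listed can be expressed directly as a lookup; a count of exactly 50 is grouped into grade "3+" here, following the published SUN criteria.

```python
def sun_cell_grade(cells_per_field: int) -> str:
    """Map the cell count in one 1 mm x 1 mm field to the anterior chamber
    cell grade of the SUN Working Group criteria described above."""
    if cells_per_field < 1:
        return "0"
    if cells_per_field <= 5:
        return "0.5+"
    if cells_per_field <= 15:
        return "1+"
    if cells_per_field <= 25:
        return "2+"
    if cells_per_field <= 50:
        return "3+"
    return "4+"   # more than 50 cells

assert sun_cell_grade(12) == "1+"
```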
• For example, the data processing unit 1090 may be configured to execute a process of setting a partial region (for example, a 1 mm square image region) of the anterior chamber region identified from the Scheimpflug image by the first analysis processing unit 1091, a process of determining the number of cell regions belonging to this partial region, and a process of calculating the density of inflammatory cells based on this number and the dimensions of the partial region.
• The data processing unit 1090 may be configured so that the third analysis processing unit 1093 selects the cell regions located within the partial region from among the cell regions detected from the entire anterior chamber region by the second analysis processing unit 1092, and obtains the density based on the selected cell regions.
• Alternatively, the data processing unit 1090 may be configured so that the second analysis processing unit 1092 analyzes only the partial region to specify the cell regions, and the third analysis processing unit 1093 obtains the density based on the cell regions specified from the partial region.
  • the process of obtaining the number of inflammatory cells includes, for example, the process of counting the number of cell regions detected by the second analysis processing unit 1092.
• The cell evaluation information may include information on the number of inflammatory cells obtained in this manner, or evaluation information obtained from this number. For example, the average density of inflammatory cells in the entire anterior chamber region can be obtained by dividing the number of cell regions detected from the entire anterior chamber region by the dimensions of the anterior chamber region (e.g., its area or volume).
  • the cell evaluation information may include an evaluation result (eg, grade) based on the number of cell regions detected from the entire anterior chamber region, or the number and/or the number of cell regions in a partial region of the anterior chamber region. Evaluation results based thereon may be included.
• The process of determining the positions of inflammatory cells may include, for example, a process of locating each cell region detected by the second analysis processing unit 1092.
• The position of a cell region may be expressed, for example, as coordinates in the defined coordinate system of the Scheimpflug image, or as a relative position with respect to a predetermined reference area (e.g., a distance and direction from it).
• This reference area is, for example, a corneal region, a posterior corneal region, a lens region, an anterior lens region, or an image region corresponding to the axis of the eye (for example, a straight line connecting the corneal vertex position and the anterior lens vertex position).
  • the cell evaluation information may include information on the position of the inflammatory cells obtained in this manner, or may include evaluation information obtained from this positional information.
• The cell evaluation information may include information representing the distribution of inflammatory cells (the distribution of a plurality of cell regions), an evaluation result (e.g., grade) based on the position of one or more inflammatory cells, or an evaluation result (e.g., grade) based on the positions (distribution) of a plurality of inflammatory cells.
• A first operation example of the ophthalmologic apparatus 1000 is shown in FIG. 19. It is assumed that various operations (preparatory operations) executed before imaging by the ophthalmologic apparatus 1000 have been completed. Preparatory operations include adjustment of the table on which the ophthalmologic apparatus 1000 is installed, adjustment of the chair used by the subject, adjustment of the face support (chin rest, forehead rest, etc.) of the ophthalmologic apparatus 1000, alignment of the ophthalmologic apparatus 1000 with respect to the eye to be examined, and slit light adjustment (for example, light amount, width, length, and orientation adjustment).
• Upon receiving an instruction to start imaging, the ophthalmologic apparatus 1000 acquires a Scheimpflug image of the subject's eye using the image acquisition unit 1010 (S1).
  • the ophthalmologic apparatus 1000 uses the data processing unit 1020 to generate inflammatory state information indicating the inflammatory state of the subject's eye based on the Scheimpflug image acquired in step S1 (S2).
• The number of Scheimpflug images acquired in step S1 of this operation example may be set in advance, and may be one, or two or more (for example, a series of Scheimpflug images collected by slit scanning). In step S2, one of the various data processing methods described above is executed according to the number of Scheimpflug images acquired in step S1.
  • This data processing may be machine-learning-based processing, non-machine-learning-based processing, or a combination of machine-learning-based processing and non-machine-learning-based processing.
• When the ophthalmologic apparatus 1000 includes two or more imaging systems, multiple Scheimpflug images acquired by these imaging systems can be processed. For example, when the ophthalmologic apparatus 1000 includes a first imaging system 1014 and a second imaging system 1015 as shown in FIG. 2C, two or more Scheimpflug images can be processed by the data processing unit 1020A (image selection unit 1021, inflammation state information generation unit 1022).
  • the data processing performed by the data processing unit 1020A may be machine-learning-based processing, non-machine-learning-based processing, or a combination of machine-learning-based processing and non-machine-learning-based processing.
  • the ophthalmologic apparatus 1000 can display the Scheimpflug image acquired in step S1 and/or the inflammatory state information generated in step S2 on the display device.
• The display device may be an element of the ophthalmologic apparatus 1000 or an external device connected to the ophthalmologic apparatus 1000.
  • the mode of information display is not limited to these examples. It is possible to at least partially combine at least two of these examples.
  • the ophthalmologic apparatus 1000 causes the display device to display the Scheimpflug image acquired in step S1 and/or the inflammatory state information generated in step S2 as they are.
  • the ophthalmologic apparatus 1000 may display other information (referred to as additional information) together with the Scheimpflug image and/or inflammation state information.
  • additional information may be any information useful for diagnosis of the subject's eye along with the Scheimpflug image and/or inflammation status information.
  • the ophthalmologic apparatus 1000 generates an image that simulates an image of the eye acquired by a conventional slit lamp microscope (referred to as a slit lamp image) from the Scheimpflug image, and displays the generated simulated image.
  • a pseudo image is displayed on the display device. This makes it possible to provide an image simulating a slit lamp image that has been conventionally used for observing the inflammatory state of the subject's eye and that many doctors are familiar with.
• The process of generating a simulated image from a Scheimpflug image is formed by machine-learning-based processing and/or non-machine-learning-based processing.
• Machine-learning-based processing is performed using, for example, a neural network constructed by machine learning using training data containing multiple pairs of Scheimpflug images and slit lamp images.
  • This neural network includes a convolutional neural network.
  • This convolutional neural network is configured to receive an input of a Scheimpflug image and output a simulated image.
  • Non-machine-learning-based processing may include processing that transforms the appearance of an image, such as processing to generate simulated blur, color conversion, and image quality conversion.
• Exemplary aspects of non-machine-learning-based processing include: a process of constructing a three-dimensional image (e.g., an 8-bit grayscale volume) from a series of Scheimpflug images acquired in a slit scan; a process of reducing the gradation of pixel values (for example, from 256 gradations down to 10 gradations); a process of setting a region of interest (for example, a rectangular parallelepiped region of a predetermined size); and a process of constructing an enface image of the set region of interest (for example, by maximum intensity projection (MIP)).
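• A numpy sketch of that non-machine-learning pipeline (volume stacking is represented by a dummy array; the gradation count, region-of-interest bounds, and projection axis are assumptions):

```python
import numpy as np

# Dummy stand-in for a three-dimensional image (8-bit grayscale volume)
# stacked from a series of slit-scan Scheimpflug images:
# axes = (slice, height, width).
volume = np.random.randint(0, 256, (64, 256, 256), dtype=np.uint8)

# Reduce the gradation of pixel values, e.g. 256 levels down to 10.
coarse = (volume.astype(np.uint16) * 10 // 256).astype(np.uint8)

# Set a rectangular-parallelepiped region of interest (assumed bounds).
roi = coarse[10:50, 64:192, 64:192]

# Build an enface image by maximum intensity projection (MIP) along
# the slice axis (assumed here to be the depth direction).
enface = roi.max(axis=0)
```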
• The pseudo image may be an image representing the same region as the widely focused Scheimpflug image, or a partial region of the region represented by the Scheimpflug image (for example, one field of view, that is, a 1 mm square field of view, which is the evaluation range in the uveitis disease classification criteria proposed by the SUN Working Group).
  • the ophthalmologic apparatus 1000 can, for example, highlight a portion of interest in the simulated image.
• Examples of the portion of interest include an image region corresponding to inflammatory cells, an image region corresponding to anterior chamber flare, an image region corresponding to lens opacification, and the like.
  • the ophthalmologic apparatus 1000 creates a map representing the inflammatory state of the subject's eye (referred to as an inflammatory state map) and displays it on the display device.
• Examples of inflammatory state maps include an inflammatory cell map representing the positions (distribution) of inflammatory cells in the anterior chamber, and an inflammatory cell density map (inflammatory cell number map) representing the distribution of the density (or number) of inflammatory cells in the anterior chamber.
• The process of creating these inflammatory cell maps includes, for example, a process of identifying the image regions (cell regions) corresponding to inflammatory cells from each of a series of Scheimpflug images acquired by slit scanning (the second segmentation), a process of determining the position of each identified cell region (for example, two-dimensional coordinates in the defined coordinate system of the Scheimpflug images, or three-dimensional coordinates in the defined coordinate system of a three-dimensional image based on the series of Scheimpflug images), and a process of creating a map based on the determined positions of the cell regions.
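• A sketch of the map-building step, assuming cell positions are already available as 3D coordinates in millimeters and the map is a simple en-face counting grid (the bin size and extent are assumptions):

```python
import numpy as np

def inflammatory_cell_density_map(cell_xyz, grid_shape=(32, 32), extent_mm=8.0):
    """Project 3D cell positions (x, y, z in mm, determined from a series
    of Scheimpflug images) onto an en-face grid and count cells per bin."""
    density = np.zeros(grid_shape, dtype=int)
    bin_mm = extent_mm / grid_shape[0]
    for x, y, _z in cell_xyz:
        i, j = int(y // bin_mm), int(x // bin_mm)
        if 0 <= i < grid_shape[0] and 0 <= j < grid_shape[1]:
            density[i, j] += 1
    return density

cells = [(3.2, 4.1, 2.0), (3.3, 4.0, 2.5), (6.0, 1.5, 3.0)]
cell_map = inflammatory_cell_density_map(cells)
```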
  • the ophthalmologic device 1000 can display an inflammation state map together with a Scheimpflug image and/or inflammation state information.
  • the ophthalmologic apparatus 1000 can display a frontal image based on a series of Scheimpflug images acquired by slit scanning and an inflammatory state map generated based on the same series of Scheimpflug images.
• A second operation example of the ophthalmologic apparatus 1000 is shown in FIG. 20. This operation example is performed to acquire cell evaluation information. Any items described with respect to the operation example of FIG. 19 can be combined with this operation example unless otherwise specified.
  • the ophthalmologic apparatus 1000 acquires a Scheimpflug image of the subject's eye using the image acquisition unit 1010 (S11).
• Next, the ophthalmologic apparatus 1000 causes the data processing unit 1020 to apply at least one of the above-described first segmentation, second segmentation, and cell evaluation information generation processing to the Scheimpflug image acquired in step S11 (S12).
• In this operation example, for example, the data processing unit 1030 in FIG. 3 or the data processing unit 1040 in FIG. 8 is employed as the data processing unit 1020 of the ophthalmologic apparatus 1000.
• The data processing unit 1020 of this example may include at least one of the first segmentation unit 1031A in FIG. 4, the second segmentation unit 1032A in FIG. 6, and the cell evaluation information generation processing unit 1033A in FIG. 7.
• Alternatively, the data processing unit 1090 in FIG. 18 may be employed as the data processing unit 1020.
• As another example, the data processing unit 1050 shown in FIG. 9 is employed as the data processing unit 1020 of the ophthalmologic apparatus 1000.
• The data processing unit 1020 of this example may include the second segmentation unit 1051A in FIG. 10 and/or the cell evaluation information generation processing unit 1052A in FIG. 11.
• As another example, the data processing unit 1060 of FIG. 12 is employed as the data processing unit 1020 of the ophthalmologic apparatus 1000.
• The data processing section 1020 of this example may include the cell evaluation information generation processing section 1061A of FIG. 13.
• As another example, the data processing unit 1070 shown in FIG. 14 or the data processing unit 1080 of FIG. 17 is employed as the data processing unit 1020 of the ophthalmologic apparatus 1000.
• The data processing unit 1020 of this example may include the first segmentation unit 1071A in FIG. 15 and/or the cell evaluation information generation processing unit 1072A in FIG. 16.
  • the data processing unit 1020 of the ophthalmologic apparatus 1000 may be configured to execute only the first segmentation among the first segmentation, the second segmentation, and the cell evaluation information generation processing; only the second segmentation; or only the first segmentation and the second segmentation.
  • the ophthalmologic apparatus 1000 uses the data processing unit 1020 to generate cell evaluation information (S13). Note that when the cell evaluation information generation processing is executed in step S12 and all the cell evaluation information to be acquired in this examination has already been obtained there, step S13 need not be executed separately (in other words, step S13 is subsumed in step S12).
  • Step S12 and/or step S13 may partially include an operation by the user.
  • Examples of such operations include: an operation for designating an anterior chamber region in a Scheimpflug image; an operation for designating a cell region in a Scheimpflug image; an operation for designating a cell region in the anterior chamber region; an operation for creating cell evaluation information from the anterior chamber region; an operation for creating cell evaluation information from the cell region; and an operation for editing (correcting) the anterior chamber region identified by the first segmentation.
  • a user interface includes a display device and an operation device.
  • Ophthalmic device 1000 may include at least a portion of a user interface.
  • the ophthalmologic apparatus 1000 displays, on the display device, the Scheimpflug image acquired in step S11, the information acquired in step S12 (for example, information based on the Scheimpflug image, the anterior chamber region, information based on the anterior chamber region, the cell region, information on the cell region, and cell evaluation information), the cell evaluation information generated in step S13, and the like (S14). A compact sketch of this S11-S14 flow follows.
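  • For orientation, the S11-S14 flow above can be summarized in a few lines of Python. Every function name here is a hypothetical stand-in for the corresponding processing unit, not an actual API of the apparatus:

      # Hypothetical outline of the second operation example (S11-S14).
      def run_cell_evaluation(acquire, first_segmentation, second_segmentation,
                              generate_cell_evaluation, display):
          image = acquire()                        # S11: Scheimpflug image
          chamber = first_segmentation(image)      # S12: anterior chamber region
          cells = second_segmentation(chamber)     # S12: cell regions
          info = generate_cell_evaluation(cells)   # S13: cell evaluation info
          display(image, chamber, cells, info)     # S14: show the results
          return info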
  • A third operation example of the ophthalmologic apparatus 1000 is shown in FIG. 21. This operation example is performed to acquire cell evaluation information. Unless otherwise specified, any item described with respect to the operation example of FIG. 19 and/or any item described with respect to the operation example of FIG. 20 can be combined with this operation example.
  • in this operation example, the data processing unit 1020 of the ophthalmologic apparatus 1000 includes a second segmentation unit that includes a convolutional neural network (for example, the second segmentation unit 1032A or the second segmentation unit 1051A) and an element that converts data structures to suit this convolutional neural network (for example, the conversion processing unit 1042 or the conversion processing unit 1082).
  • the ophthalmologic apparatus 1000 acquires a Scheimpflug image of the subject's eye using the image acquisition unit 1010 (S21).
  • the ophthalmologic apparatus 1000 causes the data processing unit 1020 to apply the first segmentation (anterior chamber segmentation) for identifying the anterior chamber region to the Scheimpflug image acquired in step S21 (S22).
  • the anterior chamber segmentation method in this step may be arbitrary, and may include one or both of data processing by the data processing unit 1020 and operation by the user.
  • the anterior chamber segmentation by the data processing unit 1020 may be any one of the various techniques described above.
  • the data processing unit 1020 removes ghosts in the anterior chamber region specified by the anterior chamber segmentation in step S22 (S23). As a result, it is possible to prevent a ghost from being erroneously detected as a cell region in cell segmentation in step S25, which will be described later.
  • the data processing unit 1020 converts the anterior chamber region from which the ghost has been removed in step S23 into image data having a structure corresponding to the input layer of the convolutional neural network used in the next step S25 (S24).
  • the data processing unit 1020 inputs the image data (the converted data of the anterior chamber region) acquired in step S24 to the convolutional neural network of the second segmentation unit configured to perform cell segmentation (S25). The cell regions within the anterior chamber region obtained in step S22 are thereby identified. A sketch of this conversion-and-inference step follows.
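  • A minimal sketch of steps S24-S25, assuming PyTorch, a 256 × 256 single-channel input layer, and a single-channel logit output; none of these choices is prescribed by this disclosure:

      # Hypothetical sketch: reshape a deghosted anterior-chamber crop to the
      # network's input structure (S24) and run cell segmentation (S25).
      import numpy as np
      import torch
      import torch.nn.functional as F

      def to_network_input(chamber_crop, size=(256, 256)):
          """Convert a 2-D grayscale crop to a (1, 1, H, W) tensor in [0, 1]."""
          t = torch.from_numpy(chamber_crop.astype(np.float32) / 255.0)
          t = t.unsqueeze(0).unsqueeze(0)  # add batch and channel dimensions
          return F.interpolate(t, size=size, mode="bilinear",
                               align_corners=False)

      def segment_cells(model, chamber_crop):
          """Return a binary cell mask from the segmentation network."""
          with torch.no_grad():
              logits = model(to_network_input(chamber_crop))  # (1, 1, H, W)
              mask = torch.sigmoid(logits)[0, 0] > 0.5
          return mask.numpy()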
  • the data processing unit 1020 evaluates the density of inflammatory cells in the anterior chamber of the subject's eye based on the results of cell region identification performed in step S25 (S26).
  • the processing of this step is executed by, for example, one of the various cell evaluation information generation processing units described above.
  • the data processing unit 1020 generates cell evaluation information based on the results of the evaluation performed in step S26 (S27).
  • the processing of this step is executed by, for example, one of the various cell evaluation information generation processing units described above.
  • the ophthalmologic apparatus 1000 causes the display device to display the information acquired in steps S21 to S27 (S28).
  • Examples of the information displayed in this step include: the Scheimpflug image acquired in step S21; the information acquired in step S22 (for example, information based on the Scheimpflug image, the anterior chamber region, and information based on the anterior chamber region); the information acquired in step S23 (for example, the deghosted anterior chamber region and information based on it); the information acquired in step S24 (for example, the shaped anterior chamber region and information based on it); the information acquired in step S25 (for example, the cell regions and information based on them); the information acquired in step S26 (for example, the inflammatory cell density, density evaluation information, and information based on the density evaluation information); and the information acquired in step S27 (for example, the cell evaluation information and information based on it).
  • in step S27 of some aspects, the inflammatory cell density value used in the grading scheme proposed by the SUN Working Group (the number of inflammatory cells present in a 1 mm × 1 mm field) and/or the grade corresponding to this density value is determined, and in step S28 at least the density value and/or the grade determined in step S27 is displayed.
  • A fourth operation example of the ophthalmologic apparatus 1000 is shown in FIG. 22. This operation example is performed to acquire cell evaluation information. Unless otherwise specified, at least one of any item described with respect to the operation example of FIG. 19, any item described with respect to the operation example of FIG. 20, and any item described with respect to the operation example of FIG. 21 can be combined with this operation example.
  • in this operation example, the data processing unit 1020 of the ophthalmologic apparatus 1000 includes a cell evaluation information generation processing unit that includes a convolutional neural network (for example, the cell evaluation information generation processing unit 1033A, 1052A, 1061A, or 1072A) and an element that converts data structures to suit this convolutional neural network (for example, the conversion processing unit 1042 or the conversion processing unit 1082).
  • the ophthalmologic apparatus 1000 acquires a Scheimpflug image of the subject's eye using the image acquisition unit 1010 (S31).
  • the ophthalmologic apparatus 1000 causes the data processing unit 1020 to apply the first segmentation (anterior chamber segmentation) for identifying the anterior chamber region to the Scheimpflug image acquired in step S31 (S32).
  • the anterior chamber segmentation method in this step may be arbitrary.
  • the data processing unit 1020 removes ghosts in the anterior chamber region specified by the anterior chamber segmentation in step S32 (S33). As a result, it is possible to prevent the ghost from being reflected in the evaluation result in the cell evaluation information generating process in step S35, which will be described later.
  • the data processing unit 1020 converts the anterior chamber region from which the ghost has been removed in step S33 into image data having a structure corresponding to the input layer of the convolutional neural network used in the next step S35 (S34).
  • the data processing unit 1020 inputs the image data (the converted data of the anterior chamber region) acquired in step S34 to the convolutional neural network of the cell evaluation information generation processing unit configured to execute the cell evaluation information generation processing. Cell evaluation information based on the anterior chamber region acquired in step S32 is thereby generated (S35).
  • the ophthalmologic apparatus 1000 causes the display device to display the cell evaluation information generated in step S35 (S36).
  • in step S35 of some aspects, the inflammatory cell density value used in the grading scheme proposed by the SUN Working Group (the number of inflammatory cells present in a 1 mm × 1 mm field) and/or the grade corresponding to this density value is determined, and in step S36 at least the density value and/or the grade determined in step S35 is displayed.
  • in another aspect, only the density value is obtained in step S35, and the data processing unit 1020 then obtains the grade corresponding to this density value by referring to predetermined data representing the correspondence between density values and grades (a lookup of this kind is sketched below).
  • in step S36 of this aspect, the grade obtained by the data processing unit 1020 (and the density value obtained in step S35) is displayed.
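  • The lookup itself can be sketched as follows. The bands reproduce the SUN Working Group's published anterior chamber cell grades (cells per 1 mm × 1 mm slit-beam field); the function is an illustrative stand-in for the "predetermined data" mentioned above:

      # Hypothetical density-to-grade lookup following the SUN bands.
      def sun_cell_grade(cells_per_field):
          bands = [        # (exclusive upper bound, grade)
              (1, "0"),    # fewer than 1 cell
              (6, "0.5+"), # 1-5 cells
              (16, "1+"),  # 6-15 cells
              (26, "2+"),  # 16-25 cells
              (51, "3+"),  # 26-50 cells
          ]
          for upper, grade in bands:
              if cells_per_field < upper:
                  return grade
          return "4+"      # more than 50 cells

      assert sun_cell_grade(12) == "1+"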
  • the ophthalmologic apparatus 1000 can display arbitrary information acquired in steps S31 to S34 on the display device.
  • Examples of information displayed in addition to the cell evaluation information include: the Scheimpflug image acquired in step S31; the information acquired in step S32 (for example, information based on the Scheimpflug image, the anterior chamber region, and information based on the anterior chamber region); the information acquired in step S33 (for example, the deghosted anterior chamber region and information based on it); and the information acquired in step S34 (for example, the shaped anterior chamber region and information based on it).
  • inflammation state information other than cell evaluation information can also be generated in the same manner as cell evaluation information.
  • examples of inflammatory state information other than cell evaluation information include information on anterior chamber flare, information on lens opacification, information on onset and progress of disease, and information on activity of disease.
  • Any of these exemplary types of inflammatory state information may be generated and evaluated with reference to the uveitis classification criteria proposed by the SUN Working Group, as with the cell evaluation information.
  • these processes for generating and evaluating inflammatory state information may be machine-learning-based processes, non-machine-learning-based processes, or combinations of the two. A neural network for machine-learning-based processing may be built in the same manner as the neural network for generating cell evaluation information; a minimal training sketch follows.
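  • A minimal, hypothetical sketch of such a network build, assuming PyTorch; the architecture, loss, and training targets are placeholders rather than anything this disclosure prescribes:

      # Tiny CNN mapping an image to a scalar evaluation value, with one
      # training step. Real models and training data would be far richer.
      import torch
      import torch.nn as nn

      model = nn.Sequential(
          nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
          nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
          nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
      )
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.MSELoss()

      def train_step(images, targets):
          """images: (B, 1, H, W); targets: (B, 1) reference values."""
          optimizer.zero_grad()
          loss = loss_fn(model(images), targets)
          loss.backward()
          optimizer.step()
          return loss.item()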
  • the processor for performing non-machine-learning-based processing may be configured to execute at least a process of obtaining an evaluation target (for example, anterior chamber flare, lens opacification, disease onset/course, or disease activity) and a process of performing an evaluation based on the obtained evaluation target.
  • FIG. 23 shows an example of a specific configuration of an ophthalmologic apparatus capable of functioning as the ophthalmologic apparatus 1000 described above.
  • the ophthalmologic apparatus of this example is a system (slit lamp microscope system 1) in which a slit lamp microscope and a computer (information processing apparatus) are combined.
  • the slit lamp microscope system 1 includes an illumination system 2, an imaging system 3, a moving image capturing system 4, an optical path coupling element 5, a moving mechanism 6, a control unit 7, a data processing unit 8, a communication unit 9, and a user interface 10.
  • the cornea of the subject's eye E is indicated by symbol C, and the crystalline lens is indicated by symbol CL.
  • the anterior chamber corresponds to the region between the cornea C and the crystalline lens CL.
  • some exemplary aspects of the slit lamp microscope system 1 include a microscope body, a computer, and a communication device that handles communication between the microscope body and the computer.
  • the microscope body includes the illumination system 2, the imaging system 3, the moving image capturing system 4, the optical path coupling element 5, and the moving mechanism 6.
  • the computer includes the control unit 7, the data processing unit 8, the communication unit 9, and the user interface 10.
  • the computer may be installed, for example, in the vicinity of the microscope main body, or may be installed on the network.
  • a combination of the illumination system 2, the imaging system 3, and the movement mechanism 6 is an example of the image acquisition unit 1010 of the ophthalmologic apparatus 1000.
  • Illumination system 2 is an example of illumination system 1011 of ophthalmic apparatus 1000 .
  • the imaging system 3 is an example of the imaging system 1012 of the ophthalmologic apparatus 1000 .
  • the illumination system 2 projects slit light onto the anterior segment of the eye E to be examined.
  • Reference numeral 2a denotes an optical axis of the illumination system 2 (referred to as an illumination optical axis).
  • the illumination system 2 may have a configuration similar to that of a conventional slit lamp microscope illumination system.
  • the illumination system 2 includes an illumination light source, a positive lens, a slit forming section, and an objective lens in order from the far side from the eye E to be examined.
  • the illumination light output from the illumination light source passes through the positive lens and is projected onto the slit forming portion.
  • the slit forming part passes a part of the illumination light to generate slit light.
  • the slit forming part has a pair of slit blades.
  • the illumination system 2 may include a focusing mechanism for changing the focus position of the slit light. This focusing mechanism, for example, moves the objective lens along the illumination optical axis 2a. Alternatively, the focusing mechanism moves a focusing lens arranged between the objective lens and the slit forming part.
  • FIG. 23 is a top view, in which the direction along the axis of the subject's eye E is defined as the Z direction, the left-right direction for the subject among the directions orthogonal to the Z direction is defined as the X direction, and the direction orthogonal to both the X direction and the Z direction (the vertical direction, the body-axis direction) is defined as the Y direction.
  • the slit lamp microscope system 1 can be aligned with respect to the subject's eye E so that the illumination optical axis 2a coincides with the axis of the eye E, or so that the illumination optical axis 2a is arranged parallel to that axis.
  • the imaging system 3 images the anterior segment of the eye onto which the slit light from the illumination system 2 is projected.
  • Reference numeral 3a indicates an optical axis of the imaging system 3 (referred to as an imaging optical axis).
  • the imaging system 3 includes an optical system 3A and an imaging element 3B.
  • the optical system 3A guides the light from the anterior segment of the subject's eye E on which the slit light is projected to the imaging device 3B.
  • the optical system 3A may have the same configuration as the imaging system of a conventional slit lamp microscope.
  • the optical system 3A includes, in order from the side closer to the subject's eye E, an objective lens, a variable magnification optical system, and an imaging lens.
  • the imaging element 3B receives the light guided by the optical system 3A on its imaging surface.
  • the imaging element 3B includes an area sensor having a two-dimensional imaging area. This area sensor may be, for example, a charge coupled device (CCD) image sensor or a complementary metal oxide semiconductor (CMOS) image sensor.
  • the imaging system 3 may include a focusing mechanism for changing its focus position. This focusing mechanism, for example, moves the objective lens along the photographing optical axis 3a. Alternatively, the focusing mechanism moves a focusing lens arranged between the objective lens and the imaging lens along the photographing optical axis 3a.
  • the illumination system 2 and the imaging system 3 function as a Scheimpflug camera. That is, the illumination system 2 and the imaging system 3 are configured so that the object plane along the illumination optical axis 2a, the optical system 3A, and the imaging surface of the imaging element 3B satisfy the so-called Scheimpflug condition (stated compactly below). More specifically, the YZ plane passing through the illumination optical axis 2a (which contains the object plane), the principal plane of the optical system 3A, and the imaging surface of the imaging element 3B intersect on the same straight line. All positions in the object plane (all positions in the direction along the illumination optical axis 2a) can thereby be photographed in focus.
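  • The Scheimpflug condition invoked above can be written compactly. The following is the standard thin-lens statement of the principle, added for reference; u, v, f, and the tilt angles are textbook symbols, not reference signs of this disclosure:

      % The object plane, the lens principal plane, and the image plane
      % must share a single straight line L:
      \[
        \Pi_{\mathrm{obj}} \cap \Pi_{\mathrm{lens}} \cap \Pi_{\mathrm{img}} = L.
      \]
      % Equivalently, measuring plane tilts from the lens principal plane,
      % with axial object distance u, image distance v, and focal length f:
      \[
        \tan\theta_{\mathrm{img}} = m \tan\theta_{\mathrm{obj}},
        \qquad m = \frac{v}{u},
        \qquad \frac{1}{u} + \frac{1}{v} = \frac{1}{f}.
      \]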
  • the illumination system 2 and the imaging system 3 are configured so that the imaging system 3 is focused in at least the range (anterior chamber) from the posterior surface of the cornea C to the anterior surface of the crystalline lens CL.
  • the illumination system 2 and the imaging system 3 may be configured so that the imaging system 3 is focused on at least the range from the anterior surface of the cornea C to the posterior surface of the crystalline lens CL.
  • Such conditions are realized according to the configuration and arrangement of the elements included in the illumination system 2, the configuration and arrangement of the elements included in the imaging system 3, the relative positions of the illumination system 2 and the imaging system 3, and the like.
  • Parameters indicating the relative positions of the illumination system 2 and the imaging system 3 include, for example, the angle θ between the illumination optical axis 2a and the imaging optical axis 3a.
  • the angle θ is set to, for example, 17.5 degrees, 30 degrees, or 45 degrees. The angle θ may be variable.
  • the moving image capturing system 4 captures a moving image of the anterior segment of the subject's eye E in parallel with the imaging of the eye by the illumination system 2 and the imaging system 3.
  • the moving image capturing system 4 functions as a video camera.
  • the optical path coupling element 5 couples the optical path of the illumination system 2 (illumination optical path) and the optical path of the video shooting system 4 (video shooting optical path).
  • the optical path coupling element 5 may be, for example, a beam splitter such as a half mirror or a dichroic mirror.
  • A specific example of an optical system including the illumination system 2, the imaging system 3, the moving image capturing system 4, and the optical path coupling element 5 is shown in FIG. 24.
  • the imaging system 3 includes two imaging systems (first imaging system and second imaging system).
  • the optical system of the slit lamp microscope system 1 may include other elements in addition to or instead of the elements shown in FIG. 24 (for example, any element described in this disclosure, any element of known slit lamp microscopes, or any element of known ophthalmic instruments).
  • the optical system shown in FIG. 24 includes an illumination system 20, a left imaging system 30L, a right imaging system 30R, and a moving image imaging system 40.
  • Illumination system 20 is an example of illumination system 2 .
  • a combination of the left imaging system 30L and the right imaging system 30R is an example of the imaging system 3, and an example of a combination of the first imaging system 1014 and the second imaging system 1015 of the ophthalmologic apparatus 1000.
  • a moving image capturing system 40 is an example of the moving image capturing system 4 .
  • the beam splitter 47 is an example of the optical path coupling element 5.
  • reference numeral 20a indicates the optical axis of the illumination system 20 (the illumination optical axis), reference numeral 30La indicates the optical axis of the left imaging system 30L (the left imaging optical axis), and reference numeral 30Ra indicates the optical axis of the right imaging system 30R (the right imaging optical axis).
  • the orientation of the left imaging optical axis 30La and the orientation of the right imaging optical axis 30Ra are different from each other.
  • the angle between the illumination optical axis 20a and the left imaging optical axis 30La is denoted by θL, and the angle between the illumination optical axis 20a and the right imaging optical axis 30Ra is denoted by θR. The angle θL and the angle θR may be equal to or different from each other, and each of them may be variable.
  • the illumination optical axis 20a, the left imaging optical axis 30La, and the right imaging optical axis 30Ra intersect at one point. As in FIG. 23, the Z coordinate of this intersection point is indicated by Z0.
  • the moving mechanism 6 of this example is configured to move the illumination system 20, the left imaging system 30L, and the right imaging system 30R in the direction indicated by the arrow 49 (X direction).
  • for example, the illumination system 20, the left imaging system 30L, and the right imaging system 30R are mounted on a stage movable at least in the X direction, and the moving mechanism 6 moves this movable stage in the X direction according to a control signal from the control unit 7.
  • the illumination system 20 projects slit light onto the anterior segment of the eye E to be examined.
  • the illumination system 20 includes, in order from the side far from the eye E to be examined, an illumination light source 21, a positive lens 22, a slit forming unit 23, and objective lens groups 24 and 25.
  • illumination light (for example, visible light) output from the illumination light source 21 is refracted by the positive lens 22 and projected onto the slit forming unit 23.
  • a part of the projected illumination light passes through the slit formed by the slit forming part 23 and becomes slit light.
  • the generated slit light is refracted by the objective lens groups 24 and 25, reflected by the beam splitter 47, and projected onto the anterior segment of the eye E to be examined.
  • the left imaging system 30L includes a reflector 31L, an imaging lens 32L, and an imaging device 33L.
  • the reflector 31L and the imaging lens 32L guide the light from the anterior segment on which the slit light is projected by the illumination system 20 (the light traveling in the direction of the left imaging system 30L) to the imaging element 33L.
  • the light traveling in the direction of the left imaging system 30L from the anterior segment is light from the anterior segment on which the slit light is projected and travels in a direction away from the illumination optical axis 20a.
  • the reflector 31L reflects the light in a direction approaching the illumination optical axis 20a.
  • the imaging lens 32L refracts the light reflected by the reflector 31L and forms an image on the imaging surface 34L of the imaging element 33L.
  • the imaging element 33L receives the light on the imaging surface 34L.
  • the left imaging system 30L repeatedly performs imaging in parallel with the movement of the illumination system 20, the left imaging system 30L, and the right imaging system 30R by the moving mechanism 6. This yields a plurality of anterior segment images (a series of Scheimpflug images).
  • the object plane along the illumination optical axis 20a, the optical system including the reflector 31L and the imaging lens 32L, and the imaging surface 34L satisfy the Scheimpflug condition. More specifically, taking into account the deflection of the optical path of the left imaging system 30L by the reflector 31L, the YZ plane passing through the illumination optical axis 20a (which contains the object plane), the principal plane of the imaging lens 32L, and the imaging surface 34L intersect on the same straight line. As a result, the left imaging system 30L can perform imaging while focusing on all positions within the object plane (for example, the range from the anterior surface of the cornea to the posterior surface of the crystalline lens).
  • the right imaging system 30R includes a reflector 31R, an imaging lens 32R, and an imaging device 33R.
  • the reflector 31R and the imaging lens 32R guide the light from the anterior segment on which the slit light is projected by the illumination system 20 (the light traveling in the direction of the right imaging system 30R) to the imaging device 33R.
  • the right imaging system 30R repeatedly performs imaging in parallel with the movement of the illumination system 20, the left imaging system 30L, and the right imaging system 30R by the moving mechanism 6, thereby obtaining a plurality of anterior segment images (a series of Scheimpflug images). get.
  • the object surface along the illumination optical axis 20a, the optical system including the reflector 31R and the imaging lens 32R, and the imaging surface 34R satisfy the Scheimpflug condition.
  • the Scheimpflug image acquisition by the left imaging system 30L and the Scheimpflug image acquisition by the right imaging system 30R are performed in parallel with each other.
  • the combination of the series of Scheimpflug images acquired by the left imaging system 30L and the series of Scheimpflug images acquired by the right imaging system 30R corresponds to the combination of the first Scheimpflug image group and the second Scheimpflug image group.
  • the control unit 7 can synchronize repeated imaging by the left imaging system 30L and repeated imaging by the right imaging system 30R. As a result, a correspondence relationship is obtained between a series of Scheimpflug images obtained by the left imaging system 30L and a series of Scheimpflug images obtained by the right imaging system 30R. This correspondence is a temporal correspondence, more specifically, a pairing of images acquired substantially at the same time.
  • the control unit 7 or the data processing unit 8 can execute processing for obtaining the correspondence between the plurality of anterior segment images obtained by the left imaging system 30L and the plurality of anterior segment images obtained by the right imaging system 30R.
  • for example, the control unit 7 or the data processing unit 8 can pair the anterior segment images sequentially input from the left imaging system 30L with the anterior segment images sequentially input from the right imaging system 30R according to their input timing.
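  • One way to realize this pairing is by nearest capture time. A minimal sketch, assuming frames arrive as (timestamp, image) tuples sorted by time and that a 5 ms tolerance is acceptable (both assumptions):

      # Hypothetical pairing of the two synchronized image streams.
      def pair_by_timing(left_frames, right_frames, tol_s=0.005):
          """Pair each left frame with the nearest-in-time right frame."""
          pairs = []
          j = 0
          for t_left, img_left in left_frames:
              # advance while the next right frame is closer in time
              while (j + 1 < len(right_frames)
                     and abs(right_frames[j + 1][0] - t_left)
                         < abs(right_frames[j][0] - t_left)):
                  j += 1
              t_right, img_right = right_frames[j]
              if abs(t_right - t_left) <= tol_s:
                  pairs.append((img_left, img_right))
          return pairs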
  • the moving image imaging system 40 performs moving image imaging of the anterior segment of the subject's eye E from a fixed position.
  • the moving image capturing system 40 does not have to be moved by the moving mechanism 6 .
  • the moving image capturing system 40 is arranged coaxially with the illumination system 20, but the arrangement is not limited to this.
  • a video capture system can be placed non-coaxially with illumination system 20 .
  • the light transmitted through the beam splitter 47 is reflected by the reflector 48 and enters the moving image capturing system 40 .
  • the light incident on the moving image capturing system 40 is refracted by the objective lens 41 and then imaged on the imaging surface of the imaging device 43 by the imaging lens 42 .
  • the imaging device 43 is an area sensor.
  • the moving image capturing system 40 can be used for monitoring the movement of the subject's eye E, alignment, tracking, and the like. Additionally, the motion picture capture system 40 can be utilized to process a series of Scheimpflug images.
  • the moving mechanism 6 is configured to integrally move the illumination system 2 and the imaging system 3 in the X direction.
  • the controller 7 is configured to control each part of the slit lamp microscope system 1 .
  • the control unit 7 controls the elements of the illumination system 2 (illumination light source, slit forming unit, focusing mechanism, etc.), the elements of the imaging system 3 (focusing mechanism of the optical system 3A, the imaging device 3B, etc.), the moving image imaging system 4 (focusing mechanism, imaging device, etc.), moving mechanism 6, data processing unit 8, communication unit 9, user interface 10, and the like.
  • the control unit 7 can execute control of the illumination system 2, the imaging system 3, and the moving mechanism 6, and control of the video imaging system 4 in parallel. This enables slit scanning (collection of a series of Scheimpflug images) by the image acquisition unit 1010 of the ophthalmologic apparatus 1000 and moving image photography (collection of a series of time-series images) to be performed in parallel. Furthermore, the control unit 7 can execute the control of the illumination system 2, the imaging system 3 and the moving mechanism 6, and the control of the moving image imaging system 4 in synchronization with each other. This makes it possible to synchronize slit scanning and moving image capturing by the image acquiring unit 1010 of the ophthalmologic apparatus 1000 with each other.
  • the control unit 7 can cause repeated imaging by the left imaging system 30L (acquisition of the first Scheimpflug image group) and repeated imaging by the right imaging system 30R (acquisition of the second Scheimpflug image group) to be performed in synchronization with each other.
  • the control unit 7 includes a processor, main storage device, auxiliary storage device, and the like. Computer programs such as various control programs are stored in the auxiliary storage device. These computer programs may be stored in a computer or storage device accessible by the slit lamp microscope system 1 .
  • the functions of the control unit 7 are realized by cooperation between software such as a control program and hardware such as a processor.
  • the control unit 7 can apply the following controls to the illumination system 2, the imaging system 3, and the moving mechanism 6 in order to scan a three-dimensional region of the anterior segment of the subject's eye E with slit light.
  • the control unit 7 controls the moving mechanism 6 so as to place the illumination system 2 and the imaging system 3 at a predetermined scan start position (alignment control).
  • the scan start position is, for example, a position corresponding to the end (first end) of the cornea C in the X direction, or a position further away from the axis of the eye E to be examined.
  • Symbol X0 in FIG. 25A indicates the scan start position corresponding to the first end of the cornea C in the X direction.
  • Reference X0' in FIG. 25B indicates a scan start position that is farther from the axis EA of the subject's eye E than the position corresponding to the first end of the cornea C in the X direction.
  • the control unit 7 controls the illumination system 2 to start projecting slit light onto the anterior segment of the subject's eye E (slit light projection control). Further, the control unit 7 controls the imaging system 3 to start moving image imaging of the anterior segment of the subject's eye E (imaging control). After executing alignment control, slit light projection control, and photographing control, the control unit 7 controls the moving mechanism 6 to start moving the illumination system 2 and the photographing system 3 (movement control).
  • the movement control moves the illumination system 2 and the imaging system 3 integrally. That is, the illumination system 2 and the imaging system 3 are moved while maintaining the relative position (angle ⁇ , etc.) between the illumination system 2 and the imaging system 3 (with the Scheimpflug condition satisfied).
  • the movement of the illumination system 2 and the imaging system 3 is performed from the above-described scan start position to a predetermined scan end position.
  • the scan end position is, for example, a position corresponding to the end (second end) of the cornea C opposite the first end in the X direction or, similarly to the scan start position, a position farther from the axis of the subject's eye E.
  • the slit scan in this example is applied to the range from the scan start position to the scan end position.
  • this slit scan is realized by performing, in parallel (in coordination and synchronization) with each other, the projection of slit light whose width direction is the X direction and whose longitudinal direction is the Y direction onto the anterior segment, the integral movement of the illumination system 2 and the imaging system 3 in the X direction, and repeated imaging by the imaging system 3.
  • the length of the slit light (that is, the dimension of the beam cross section of the slit light in the Y direction) is set to be equal to or greater than the diameter of the cornea C on the surface of the eye E to be examined, for example.
  • the moving distance of the illumination system 2 and the imaging system 3 by the moving mechanism 6 is set to be equal to or larger than the corneal diameter in the X direction. This makes it possible to apply slit scanning to a three-dimensional region including the entire cornea C, and to image a wide range of the anterior chamber.
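  • This dimensioning lends itself to a quick back-of-the-envelope check. In the sketch below every number is an illustrative assumption (a typical corneal diameter is about 12 mm; the stage speed and frame rate are invented for the example):

      # How many Scheimpflug frames does one scan yield?
      corneal_diameter_mm = 12.0
      scan_margin_mm = 2.0                  # extra travel beyond the cornea
      scan_length_mm = corneal_diameter_mm + scan_margin_mm
      scan_speed_mm_per_s = 28.0            # stage speed (assumed)
      frame_rate_hz = 120.0                 # camera frame rate (assumed)

      scan_time_s = scan_length_mm / scan_speed_mm_per_s   # 0.5 s
      n_frames = int(scan_time_s * frame_rate_hz)          # 60 frames
      spacing_mm = scan_length_mm / n_frames               # ~0.23 mm per slice
      print(n_frames, round(spacing_mm, 3))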
  • FIG. 26 shows an example of such a plurality of anterior segment images (that is, a group of frames forming a moving image).
  • FIG. 26 shows a plurality of anterior segment images (frame groups) F1, F2, F3, . . . , FN.
  • each anterior segment image Fn (n = 1, 2, ..., N) includes a slit light image An. The slit light images A1, A2, A3, ..., AN move rightward along the time series.
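  • Tracking that rightward motion can be sketched very simply: take the brightest column of each frame as the slit position. This is an illustrative heuristic, not a method specified by this disclosure:

      # Hypothetical slit localization across frames F1..FN.
      import numpy as np

      def slit_x_position(frame):
          """frame: 2-D grayscale array. Returns the slit's column index."""
          column_profile = frame.mean(axis=0)   # mean intensity per column
          return int(np.argmax(column_profile))

      def slit_trajectory(frames):
          """Column position of the slit light image in each frame."""
          return [slit_x_position(f) for f in frames]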
  • the scan start position and scan end position correspond to both ends of the cornea C in the X direction.
  • the scan start position and/or the scan end position are not limited to this example, and may be, for example, positions further from the axis of the subject's eye E than the end of the cornea. In addition, it is possible to arbitrarily set the direction and number of scans.
  • the data processing unit 8 is configured to execute various data processing. Data to be processed may be either data acquired by the slit lamp microscope system 1 or data input from the outside.
  • the data processing unit 8 includes a processor, main storage device, auxiliary storage device, and the like. Computer programs such as various data processing programs are stored in the auxiliary storage device. These computer programs may be stored in a computer or storage device accessible by the slit lamp microscope system 1 .
  • the functions of the data processing unit 8 are realized by cooperation between software such as a data processing program and hardware such as a processor.
  • the data processing unit 8 may have any configuration in the description of the data processing unit 1020 of the ophthalmologic apparatus 1000 (see FIGS. 2C and 3 to 18).
  • the configuration of the data processing unit 8 is not limited to those.
  • the image selection unit 1021 of this aspect selects, from the two series of Scheimpflug images, a new series of Scheimpflug images corresponding to the slit scan, based on the correspondence between the first Scheimpflug image group and the second Scheimpflug image group (one plausible selection criterion is sketched below).
  • like the inflammation state information generation unit 1022 in FIG. 2C, the data processing unit 8 generates inflammation state information based on the series of new Scheimpflug images selected by the image selection unit 1021.
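  • One plausible selection criterion, sketched below, keeps from each left/right pair the image with fewer saturated pixels, as a proxy for specular artifacts; the actual criterion used by the image selection unit 1021 is not fixed here:

      # Hypothetical per-pair selection from the two Scheimpflug image groups.
      def select_scan_series(pairs, saturation_level=250):
          """pairs: list of (left_img, right_img) NumPy arrays."""
          selected = []
          for left_img, right_img in pairs:
              # fewer saturated pixels -> assumed fewer reflection artifacts
              if (left_img >= saturation_level).sum() \
                      <= (right_img >= saturation_level).sum():
                  selected.append(left_img)
              else:
                  selected.append(right_img)
          return selected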
  • the configuration of the data processing unit 8 is not limited to these examples.
  • some exemplary aspects of the data processing unit 8 may have any data processing function disclosed by the applicant of the present application in relation to the technology of the present disclosure, such as any of the data processing functions disclosed in Japanese Unexamined Patent Application Publication No. 2019-213733.
  • the communication unit 9 performs data communication between the slit lamp microscope system 1 and other devices. That is, the communication unit 9 transmits data to other devices and receives data transmitted from other devices.
  • the data communication method executed by the communication unit 9 may be arbitrary.
  • the communication unit 9 uses one or more of various communication interfaces, such as a communication interface conforming to the Internet, a communication interface conforming to a dedicated line, a communication interface conforming to a LAN, and a communication interface conforming to near-field communication.
  • Data communication may be wired communication or wireless communication.
  • the data sent and received by the communication unit 9 may be encrypted.
  • the control unit 7 and/or the data processing unit 8 may include at least one of an encryption processing unit that encrypts data to be transmitted by the communication unit 9 and a decryption processing unit that decrypts data received by the communication unit 9.
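  • As one concrete possibility, symmetric encryption with the Fernet scheme from the Python "cryptography" package would serve; this disclosure does not specify an algorithm, so the choice below is purely illustrative:

      # Minimal encrypt/decrypt round trip for transmitted data.
      from cryptography.fernet import Fernet

      key = Fernet.generate_key()      # shared in advance between endpoints
      cipher = Fernet(key)

      payload = b"scheimpflug-image-bytes"
      token = cipher.encrypt(payload)            # what the sender transmits
      assert cipher.decrypt(token) == payload    # what the receiver recovers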
  • the user interface 10 includes arbitrary user interface devices such as display devices and operation devices. Users such as doctors, subjects, and assistants can operate the slit lamp microscope system 1 and input information to the slit lamp microscope system 1 by using the user interface 10 .
  • the display device displays various information under the control of the control unit 7.
  • the display device may include a flat panel display such as a liquid crystal display (LCD).
  • the operation device includes a device for operating the slit lamp microscope system 1 and a device for inputting information.
  • Operation devices include, for example, buttons, switches, levers, dials, handles, knobs, mice, keyboards, trackballs, operation panels, and the like.
  • a device such as a touch screen in which a display device and an operation device are integrated may be used.
  • At least part of the user interface may be arranged as a peripheral device of the slit lamp microscope system 1.
  • the elements of the slit lamp microscope system 1 are not limited to those described above.
  • the slit lamp microscope system 1 may include any element that can be combined with a slit lamp microscope, or more generally any element that can be combined with an ophthalmic device.
  • the slit lamp microscope system 1 may include any element for processing the data of the subject's eye acquired by the slit lamp microscope, and more generally any element for processing any ophthalmic data. may contain.
  • the slit lamp microscope system 1 may include a fixation system that outputs light (fixation light) for fixing the eye E to be examined.
  • a fixation system typically includes at least one visible light source (the fixation light source) or a display device that displays an image such as a landscape chart or a fixation target.
  • the fixation system is arranged coaxially or non-coaxially with the illumination system 2 or the imaging system 3, for example.
  • the ophthalmologic apparatus 1000 and the slit lamp microscope system 1 described above have a function of photographing the subject's eye (an imaging function; the image acquisition unit 1010).
  • Some exemplary embodiments of the ophthalmologic apparatus include a computer (information processing apparatus) having a function of externally receiving an image of the subject's eye instead of (or in addition to) the imaging function.
  • the ophthalmologic apparatus 3000 of this example includes an image acquisition section 3010 and a data processing section 3020 .
  • the image acquisition section 3010 includes an image reception section 3011 .
  • the image acquisition section 3010 may further include a configuration similar to that of the image acquisition section 1010 of the ophthalmologic apparatus 1000 .
  • the data processing unit 3020 may have any configuration in the description of the data processing unit 1020 of the ophthalmologic apparatus 1000, but is not limited thereto.
  • the image receiving unit 3011 is configured to receive a Scheimpflug image of the subject's eye acquired in advance (in other words, a Scheimpflug image of the subject's eye acquired by past imaging).
  • Image acceptor 3011 includes, for example, a communication device and/or a media drive.
  • the communication device is configured to receive, for example, data stored in an external storage device, like the communication unit 9 of the slit lamp microscope system 1.
  • a media drive is configured to read data recorded on a recording medium.
  • the data processing unit 3020 is configured to execute processing for generating inflammation state information indicating the inflammation state of the subject's eye from the Scheimpflug image received by the image receiving unit 3011 .
  • for the processing executable by the data processing unit 3020, refer to the description of the ophthalmologic apparatus 1000 and the description of the slit lamp microscope system 1.
  • the recording medium is a computer-readable non-transitory recording medium.
  • Such recording media may take any form, and examples thereof include magnetic disks, optical disks, magneto-optical disks, and semiconductor memories.
  • a method according to some exemplary aspects is a method of controlling an ophthalmologic apparatus (for example, the ophthalmologic apparatus 1000, the slit lamp microscope system 1, or the ophthalmologic apparatus 3000) that includes a processor, the method including: a step of acquiring a Scheimpflug image of the subject's eye (referred to as a first acquisition step); and a step of causing the processor to execute processing for generating inflammation state information indicating the inflammation state of the subject's eye from the Scheimpflug image (referred to as a first generation step).
  • the first acquisition step and/or the first generation step may be embodied by any item in the description of the ophthalmologic apparatus 1000, any item in the description of the slit lamp microscope system 1, or any item in the description of the ophthalmologic apparatus 3000. Further, any step in those descriptions may be combined with the first acquisition step and the first generation step.
  • a method according to some exemplary aspects is a method of processing an eye image, including: a step of acquiring a Scheimpflug image of the subject's eye (referred to as a second acquisition step); and a step of generating inflammation state information indicating the inflammation state of the subject's eye from the Scheimpflug image (referred to as a second generation step).
  • the second acquisition step and/or the second generation step may be embodied by any item in the description of the ophthalmologic apparatus 1000, any item in the description of the slit lamp microscope system 1, or any item in the description of the ophthalmologic apparatus 3000. Further, any step in those descriptions may be combined with the second acquisition step and the second generation step.
  • the present invention may include a program (referred to as a first program) that causes a computer to execute a method of controlling an ophthalmologic apparatus.
  • the present invention may also include a program (referred to as a second program) that causes a computer to execute a method of processing an eye image.
  • the present invention may include a computer-readable non-transitory recording medium recording the first program.
  • the present invention may also include a computer-readable non-temporary recording medium recording the second program.
  • Such non-transitory recording media may be in any form, examples of which include magnetic disks, optical disks, magneto-optical disks, semiconductor memories, and the like.
  • Machine-learning-based processing and/or non-machine-learning-based processing are used to create correspondences between data sets.
  • in machine-learning-based processing, for example, machine learning is performed using training data that includes multiple pairs of a data set obtained with a conventional evaluation technique and a data set obtained with an evaluation technique according to the present disclosure. The inference model constructed in this way includes a neural network that takes, as input, data acquired with the evaluation technique according to the present disclosure and outputs data simulating data acquired with the conventional evaluation technique.
  • machine learning-based processing and/or non-machine learning-based processing are used to create correspondences between data.
  • in machine-learning-based processing, for example, machine learning is performed using training data that includes multiple pairs of data obtained under a first condition and data obtained under a second condition. The inference model constructed in this way includes a neural network that takes, as input, data obtained under the first condition (or the second condition) and outputs data simulating data obtained under the second condition (or the first condition).
  • according to the ophthalmologic apparatus, the method of controlling an ophthalmologic apparatus, the method of processing an eye image, the program, and the recording medium of the present disclosure, the evaluation of inflammatory states based on eye images, which has conventionally been performed manually (visually), can be at least partially automated.
  • inflammatory states are evaluated based on Scheimpflug images, so the evaluation can be performed based on high-quality images that are in focus over a wide range; a wide area of the subject's eye can therefore be evaluated with high quality.
  • a group of high-quality Scheimpflug images (a series of Scheimpflug images) in which a wide three-dimensional region of the subject's eye is in focus can be acquired quickly, and evaluation can be performed based on them, so a very wide range of the subject's eye can be evaluated with high quality. For example, a wide range of the anterior chamber can be evaluated, and the crystalline lens and the cornea can be added to the evaluation targets.
  • in the invention described in Patent Document 5 (International Publication No. 2018/003906), scattered light cannot be detected if the exposure time for capturing one frame is short (for example, at video rate), so the exposure time is set to about 100 milliseconds to 1 second; with such an exposure time, however, considering the effects of eye movement and blinking of the subject's eye, slit scanning as in the present disclosure cannot be performed.
  • furthermore, in the invention described in Patent Document 5, the size of the projection image of the slit light on the cornea is set to 0.2 mm × 2 mm, so it is difficult to image a large area.
  • in contrast, according to the present disclosure, the size of the projection image of the slit light on the cornea can be set to about 0.05 mm × 8 to 12 mm, which makes it possible to image a wide range of the anterior segment.
  • in addition, a white LED or the like can be used instead of a blue LED as in the invention described in Patent Document 5, and a color camera can be used instead of a monochrome camera, so that evaluation can be performed using color information (the R signal, the G signal, and the B signal).
  • further, since a wide range of the anterior segment can be imaged, shape analysis of the anterior segment becomes possible in addition to the acquisition of anterior segment images, images of inflammatory cells, and inflammation state information, which has the advantage of providing a variety of information to doctors.
  • Reference signs: 1000 ophthalmologic apparatus; 1010 image acquisition unit; 1011 illumination system; 1012 imaging system; 1020 data processing unit; 1021 image selection unit; 1022 inflammation state information generation unit; 1031 first segmentation unit; 1032 second segmentation unit; 1033 cell evaluation information generation processing unit; 3000 ophthalmologic apparatus; 3010 image acquisition unit; 3011 image reception unit; 3020 data processing unit.


Abstract

An ophthalmological device (1000) according to an illustrative embodiment of the present invention comprises an image acquisition unit (1010) and a data processing unit (1020). The image acquisition unit (1010) acquires a Scheimpflug image of a subject's eye. The data processing unit (1020) performs processing for generating inflammation state data, which indicates the inflammation state of the subject's eye, from the Scheimpflug image acquired by the image acquisition unit (1010).

Description

Patent Document 1: JP 2016-159073 A; Patent Document 2: JP 2016-179004 A; Patent Document 3: JP 2019-213733 A; Patent Document 4: JP 2005-102938 A; Patent Document 5: WO 2018/003906.
According to the exemplary embodiments, it is possible to automate the evaluation of inflammatory states based on eye images.
Brief description of the drawings: FIGS. 1 to 18 are schematic diagrams showing configurations of ophthalmologic apparatuses according to exemplary aspects; FIGS. 19 to 22 are flow diagrams showing processing executed by ophthalmologic apparatuses according to exemplary aspects; FIGS. 23 and 24 are schematic diagrams showing configurations of ophthalmologic apparatuses according to exemplary aspects; FIGS. 25A, 25B, and 26 are schematic diagrams for explaining operations of ophthalmologic apparatuses according to exemplary aspects; FIG. 27 is a schematic diagram showing a configuration of an ophthalmologic apparatus according to exemplary aspects.
Several exemplary aspects of the embodiments will be described in detail with reference to the drawings. Any known technique may be combined with the embodiments. For example, any known technology in the relevant field, such as any matter disclosed in the documents cited herein, may be combined with any of the embodiments. In particular, the entire disclosure of Patent Document 3 (Japanese Unexamined Patent Application Publication No. 2019-213733) is incorporated herein by reference. Similarly, any technical matter disclosed by the applicant of the present application regarding technology related to the present disclosure (e.g., matters disclosed in patent applications or papers) may be combined with any of the embodiments. In addition, any two or more of the various aspects of the present disclosure may be combined at least partially.
At least some of the functions of the elements described in this disclosure are implemented using circuitry or processing circuitry. The circuitry or processing circuitry includes any of the following, configured and/or programmed to perform at least some of the disclosed functions: a general-purpose processor, a special-purpose processor, an integrated circuit, a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), an ASIC (Application Specific Integrated Circuit), a programmable logic device (e.g., an SPLD (Simple Programmable Logic Device), a CPLD (Complex Programmable Logic Device), or an FPGA (Field Programmable Gate Array)), conventional circuitry, and any combination thereof. A processor is regarded as processing circuitry or circuitry that includes transistors and/or other circuitry. In this disclosure, circuitry, a unit, a means, or a similar term denotes hardware that performs at least some of the disclosed functions, or hardware that is programmed to perform at least some of the disclosed functions. The hardware may be the hardware disclosed herein, or known hardware programmed and/or configured to perform at least some of the described functions. Where the hardware is a processor, which may be regarded as a type of circuitry, the circuitry, unit, means, or similar term is a combination of hardware and software, the software being used to configure the hardware and/or the processor.
<Overview of Embodiments>
The embodiments relate to techniques for generating, from a digital image produced by photographing a subject's eye using an optical system that satisfies the Scheimpflug condition (referred to as a Scheimpflug image), information indicating the inflammatory state of the subject's eye (referred to as inflammation state information).
The number of Scheimpflug images processed by an embodiment may be arbitrary; some embodiments process a single Scheimpflug image, and some embodiments process a plurality of images. The plurality of images are collected, for example, by scanning with slit light (referred to as slit scanning). Slit scanning is an ophthalmic imaging technique, developed by the applicant of the present application, in which a three-dimensional region of the subject's eye is scanned with slit light to collect a series of images; it is described in Patent Document 3 (Japanese Unexamined Patent Application Publication No. 2019-213733) and elsewhere.
A Scheimpflug image processed by some embodiments may be an image created by processing a Scheimpflug image. Examples of such images include an image created by applying arbitrary digital image processing such as correction, editing, or enhancement to a Scheimpflug image, a three-dimensional image (volume image) constructed from a plurality of Scheimpflug images, and an image created by applying arbitrary rendering to a three-dimensional image.
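Where a volume image is constructed from scan frames, the idea can be pictured with the minimal Python/NumPy sketch below. It assumes the frames are already registered and equally spaced along the scan direction; the function names and the maximum-intensity projection standing in for "arbitrary rendering" are illustrative assumptions, not the apparatus's actual processing.

```python
import numpy as np

def build_volume(frames: list[np.ndarray]) -> np.ndarray:
    """Stack registered 2D slit-scan frames into a 3D volume (H x W x N)."""
    return np.stack(frames, axis=-1)

def max_intensity_projection(volume: np.ndarray, axis: int = 2) -> np.ndarray:
    """One simple rendering of the volume: a maximum-intensity projection."""
    return volume.max(axis=axis)

# Usage with dummy data standing in for 128 scan frames of 512 x 512 pixels:
frames = [np.random.rand(512, 512).astype(np.float32) for _ in range(128)]
volume = build_volume(frames)                # shape (512, 512, 128)
rendered = max_intensity_projection(volume)  # a 2D rendered image
```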
The inflammation state information generated by an embodiment may be any information related to the inflammatory state of the subject's eye. The inflammation state information of an embodiment may include any one or more of the following: information on inflammatory cells present in the anterior chamber (anterior chamber cells), information on proteins present in the anterior chamber (anterior chamber flare), information on opacification of the crystalline lens, information on the onset and course of a disease, and information on the activity of a disease. Some embodiments may generate comprehensive information (e.g., comprehensive evaluation information) based on two or more of these pieces of information.
The information on inflammatory cells may include information on arbitrary parameters such as the density (concentration), number, position, and distribution of inflammatory cells, as well as evaluation information based on information on predetermined parameters. Evaluation information on inflammatory cells is referred to as cell evaluation information. The information on anterior chamber flare may include information on arbitrary parameters such as the concentration, number, position, and distribution of flare, as well as evaluation information based on information on predetermined parameters. The information on lens opacification may include information on arbitrary parameters such as the concentration, number, position, and distribution of opacities, as well as evaluation information based on information on predetermined parameters. The information on the onset and course of a disease may include information on arbitrary parameters such as the presence or absence of onset, the state of onset, the duration of the disease, and the state of its course, as well as evaluation information based on information on predetermined parameters. The information on disease activity may include information on arbitrary parameters such as the state of disease activity, as well as evaluation information based on information on predetermined parameters.
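As an illustration of how parameters such as count, position, and density might be computed once cell regions have been identified, here is a hedged Python sketch using SciPy connected-component labeling. The binary mask input, the pixel-based density definition, and all names are assumptions introduced for illustration; the disclosure does not prescribe this computation.

```python
import numpy as np
from scipy import ndimage

def cell_parameters(cell_mask: np.ndarray, chamber_pixels: int) -> dict:
    """Count, locate, and compute an area density for segmented cell regions.

    cell_mask: binary mask of cell regions (e.g., from the second segmentation).
    chamber_pixels: number of pixels in the anterior chamber region.
    """
    labels, count = ndimage.label(cell_mask)  # connected components = cells
    positions = ndimage.center_of_mass(cell_mask, labels, range(1, count + 1))
    density = count / chamber_pixels if chamber_pixels else 0.0
    return {"count": count, "positions": positions, "density": density}
```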
An exemplary aspect of an embodiment that generates evaluation information on an inflammatory state may be configured to generate the evaluation information based not only on information generated by the ophthalmic apparatus that performs the evaluation information generating process but also on information input to the ophthalmic apparatus from the outside (e.g., information acquired by another ophthalmic apparatus, or information input by a physician).
The information referred to for generating the inflammation state information exemplified above may be arbitrary and may include, for example, the classification criteria for uveitic diseases proposed by the Standardization of Uveitis Nomenclature (SUN) Working Group ("Standardization of uveitis nomenclature for reporting clinical data. Results of the First International Workshop", American Journal of Ophthalmology, Volume 140, Issue 3, September 2005, Pages 509-516).
Note that the inflammation state information of the embodiments is not limited to the above examples. Likewise, the reference information for generating the inflammation state information of the embodiments is not limited to the above examples.
The embodiments also describe exemplary aspects in which slit scanning can be applied to the anterior segment of the subject's eye. The site of the subject's eye to which slit scanning is applied includes at least part or all of the site from which the data used to generate the inflammation state information are acquired. For example, when generating inflammation state information including information on inflammatory cells and/or information on anterior chamber flare, slit scanning is applied to a region including at least part of the anterior chamber. When generating inflammation state information including information on lens opacification, slit scanning is applied to a region including at least part of the crystalline lens. The same applies when two or more Scheimpflug images are acquired by a technique other than slit scanning, and when only one Scheimpflug image is acquired.
In general, the region (site) of the subject's eye to which slit scanning is applied includes at least part of the anterior segment (e.g., tissues such as the cornea, iris, anterior chamber, iridocorneal angle, ciliary body, zonule of Zinn, crystalline lens, nerves, and blood vessels; lesions; treatment scars; artificial objects such as intraocular lenses and minimally invasive glaucoma surgery (MIGS) devices) and/or at least part of the posterior segment (e.g., tissues such as the vitreous body, retina, choroid, sclera, optic nerve head, lamina cribrosa, macula, nerves, and blood vessels; lesions; treatment scars; artificial objects such as artificial retinas). In some exemplary aspects, slit scanning may be applied to at least part of the tissues near the eyeball, such as the eyelids and the meibomian glands. In some exemplary aspects, slit scanning may be applied to a three-dimensional region including any two, or all, of at least part of the anterior segment, at least part of the posterior segment, and at least part of the tissues near the eyeball.
An embodiment capable of generating inflammation state information including the cell evaluation information described above may be configured to perform at least one of the following three processes: (1) segmentation for identifying an image region corresponding to the anterior chamber of the subject's eye (referred to as the anterior chamber region), called the first segmentation or anterior chamber segmentation; (2) segmentation for identifying image regions corresponding to inflammatory cells (referred to as cell regions), called the second segmentation or cell segmentation; and (3) a process for generating cell evaluation information, called the cell evaluation information generating process. Several exemplary aspects of such embodiments are described below.
The first exemplary aspect is configured to perform at least one of the following: a first segmentation for identifying the anterior chamber region from a Scheimpflug image; a second segmentation for identifying cell regions from the anterior chamber region identified by the first segmentation; and a cell evaluation information generating process for generating cell evaluation information from the cell regions identified by the second segmentation. The first segmentation of this aspect may be performed using a neural network trained by machine learning, but is not limited to this. The same applies to the second segmentation and to the cell evaluation information generating process of this aspect. Details of this aspect will be described later.
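The three-stage flow of this first aspect can be pictured with a short Python sketch. The three callables below are hypothetical placeholders for the (possibly neural-network-based) processors described above; masking the image with the anterior chamber region before the second segmentation is one assumed way of restricting the cell search to the anterior chamber.

```python
from typing import Callable
import numpy as np

def evaluate_cells(image: np.ndarray,
                   first_segmentation: Callable,    # image -> chamber mask
                   second_segmentation: Callable,   # masked image -> cell mask
                   generate_evaluation: Callable):  # cell mask -> evaluation
    """Three-stage flow: chamber segmentation, cell segmentation, evaluation."""
    chamber_mask = first_segmentation(image)
    # Restrict the cell search to the anterior chamber (one assumed scheme).
    cell_mask = second_segmentation(image * chamber_mask)
    return generate_evaluation(cell_mask)
```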
The second exemplary aspect is configured to perform, without performing the first segmentation for identifying the anterior chamber region, at least one of the following: a second segmentation for identifying cell regions from a Scheimpflug image; and a cell evaluation information generating process for generating cell evaluation information from the cell regions identified by the second segmentation. The second segmentation of this aspect may be performed using a neural network trained by machine learning, but is not limited to this. The same applies to the cell evaluation information generating process of this aspect. Details of this aspect will be described later.
The third exemplary aspect is configured to perform a cell evaluation information generating process for generating cell evaluation information from a Scheimpflug image, without performing the first segmentation for identifying the anterior chamber region or the second segmentation for identifying cell regions. The cell evaluation information generating process of this aspect may be performed using a neural network trained by machine learning, but is not limited to this. Details of this aspect will be described later.
The fourth exemplary aspect is configured to perform, without performing the second segmentation for identifying cell regions, at least one of the following: a first segmentation for identifying the anterior chamber region from a Scheimpflug image; and a cell evaluation information generating process for generating cell evaluation information from the anterior chamber region identified by the first segmentation. The first segmentation of this aspect may be performed using a neural network trained by machine learning, but is not limited to this. The same applies to the cell evaluation information generating process of this aspect. Details of this aspect will be described later.
The fifth exemplary aspect is performed without using a neural network trained by machine learning. This aspect is configured to perform a first segmentation that analyzes a Scheimpflug image to identify the anterior chamber region, a second segmentation that analyzes the anterior chamber region identified by the first segmentation to identify cell regions, and a cell evaluation information generating process that generates cell evaluation information based on the cell regions identified by the second segmentation. Details of this aspect will be described later.
The sixth exemplary aspect is configured to perform any one or more of at least part of the first segmentation, at least part of the second segmentation, and at least part of the cell evaluation information generating process with a machine-learning-based configuration, and to perform the remaining processes with a non-machine-learning-based configuration. Since this aspect is realized, for example, by partially combining any of the first to fourth exemplary aspects described above with the fifth exemplary aspect, its detailed description is omitted.
<Ophthalmic Apparatus>
An exemplary aspect of an ophthalmic apparatus according to the embodiments is provided below. Specific examples (working examples) of the ophthalmic apparatus according to this aspect will be described later.
FIG. 1 shows the configuration of an ophthalmic apparatus according to this aspect. The ophthalmic apparatus 1000 includes an image acquisition unit 1010 and a data processing unit 1020.
The image acquisition unit 1010 is configured to acquire a Scheimpflug image of the subject's eye. The image acquisition unit 1010 of some exemplary aspects is configured to photograph the subject's eye to acquire a Scheimpflug image. A configuration example of such an image acquisition unit 1010 is shown in FIG. 2A.
The image acquisition unit 1010A shown in FIG. 2A includes an illumination system 1011 and an imaging system 1012. The illumination system 1011 is configured to project slit light onto the subject's eye. The imaging system 1012 is configured to photograph the subject's eye and includes an image sensor 1013 and an optical system (not shown) that guides light from the subject's eye to the image sensor 1013.
The illumination system 1011 and the imaging system 1012 are configured to satisfy the Scheimpflug condition and function as a Scheimpflug camera. More specifically, the illumination system 1011 and the imaging system 1012 are configured such that a plane passing through the optical axis of the illumination system 1011 (the plane containing the object plane), the principal plane of the imaging system 1012, and the imaging surface of the image sensor 1013 intersect along a single straight line. This allows photography to be performed with the imaging system 1012 in focus at all positions in the object plane (all positions in the direction along the optical axis of the illumination system 1011).
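The Scheimpflug condition stated here, that the three planes intersect along a single straight line, can be checked numerically. The following Python sketch, with an invented example geometry, represents each plane by a point and a unit normal, computes the intersection line of two planes, and tests whether that line lies in the third plane. It illustrates the geometric condition only, not the apparatus's design procedure.

```python
import numpy as np

def plane_intersection_line(p1, n1, p2, n2):
    """Return (point, unit direction) of the line where two planes meet."""
    d = np.cross(n1, n2)
    # Solve n1.x = n1.p1, n2.x = n2.p2, d.x = 0 for a point on the line.
    A = np.array([n1, n2, d])
    b = np.array([np.dot(n1, p1), np.dot(n2, p2), 0.0])
    return np.linalg.solve(A, b), d / np.linalg.norm(d)

def satisfies_scheimpflug(object_plane, lens_plane, image_plane, tol=1e-9):
    """True if all three planes intersect along a single straight line."""
    point, direction = plane_intersection_line(*object_plane, *lens_plane)
    p3, n3 = image_plane
    # The line lies in the third plane iff its point and direction both do.
    return (abs(np.dot(n3, point - p3)) < tol
            and abs(np.dot(n3, direction)) < tol)

# Example: all three planes contain the Y axis, so the condition holds.
obj = (np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))   # plane x = 0
lens = (np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))  # plane z = 0
img = (np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 1.0]) / np.sqrt(2))
print(satisfies_scheimpflug(obj, lens, img))  # True
```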
In some exemplary aspects, the image acquisition unit 1010A shown in FIG. 2A is configured to scan a three-dimensional region of the subject's eye with slit light to collect a series of Scheimpflug images. The image acquisition unit 1010A of this example is configured to collect the series of Scheimpflug images by repeatedly performing photography while moving the projection position of the slit light over the three-dimensional region of the subject's eye.
In some exemplary aspects, the image acquisition unit 1010A may be configured to scan the three-dimensional region of the subject's eye by translating the slit light in a direction orthogonal to the longitudinal direction of the slit light. This scanning mode differs from that of a conventional anterior segment photographing apparatus, which scans the anterior segment by rotating the slit light.
Here, the longitudinal direction of the slit light is the longitudinal direction of the beam cross section of the slit light at its projection position on the subject's eye, in other words, the longitudinal direction of the image of the slit light formed on the subject's eye, and may substantially coincide with the direction along the body axis of the subject (the body axis direction). The dimension of the slit light in the longitudinal direction may be equal to or greater than the corneal diameter in the body axis direction of the subject, and the distance of the parallel movement of the slit light may be equal to or greater than the corneal diameter in the direction orthogonal to the body axis direction of the subject.
The series of Scheimpflug images collected by the image acquisition unit 1010A of this example is a group of images (frames) collected continuously in time, but because the images are collected sequentially from a plurality of different positions in the three-dimensional region of the subject's eye, the group is spatially distributed, unlike a typical moving image.
In the image acquisition unit 1010A of this example, the illumination system 1011 projects slit light onto the three-dimensional region of the subject's eye, and the imaging system 1012 photographs the three-dimensional region of the subject's eye onto which the slit light from the illumination system 1011 is projected. The image acquisition unit 1010A of this example further includes a mechanism for moving the illumination system 1011 and the imaging system 1012.
The data processing unit 1020 of the ophthalmic apparatus 1000 to which the image acquisition unit 1010A of this example is applied may be configured to generate inflammation state information from Scheimpflug images included in the series of Scheimpflug images collected by the image acquisition unit 1010A. The data processing unit 1020 of this example generates the inflammation state information by processing one or more Scheimpflug images included in the series of Scheimpflug images collected by the image acquisition unit 1010A. Any number of Scheimpflug images may be used to generate the inflammation state information.
The data processing unit 1020 of the ophthalmic apparatus 1000 to which the image acquisition unit 1010A of this example is applied may also be configured to perform a process of processing the series of Scheimpflug images collected by the image acquisition unit 1010A to generate processed image data, and a process of generating inflammation state information based on the generated processed image data.
For example, the data processing unit 1020 of this example may be configured to perform a process of constructing a three-dimensional image (an example of processed image data) from a plurality of Scheimpflug images included in the series of Scheimpflug images, and a process of generating inflammation state information based on this three-dimensional image. Alternatively, the data processing unit 1020 of this example may be configured to perform a process of constructing a three-dimensional image from a plurality of Scheimpflug images included in the series of Scheimpflug images, a process of generating a rendered image (an example of processed image data) from this three-dimensional image, and a process of generating inflammation state information based on this rendered image.
In some exemplary aspects, the imaging system 1012 of the image acquisition unit 1010A may include two or more imaging systems. For example, the imaging system 1012A of the image acquisition unit 1010B shown in FIG. 2B includes a first imaging system 1014 and a second imaging system 1015 that photograph from mutually different directions.
The image acquisition unit 1010B of this example may be configured such that, in a slit scan for collecting a series of Scheimpflug images, the first imaging system 1014 and the second imaging system 1015 photograph the three-dimensional region of the subject's eye from mutually different directions. For example, when the image acquisition unit 1010B is configured such that the longitudinal direction of the beam cross section of the slit light at its position of incidence on the subject's eye is the vertical direction (Y direction) and the movement direction of the slit light is the horizontal direction (left-right direction, X direction), the first imaging system 1014 and the second imaging system 1015 may be arranged such that one photographs the subject's eye obliquely from the left and the other photographs the subject's eye obliquely from the right.
The series of Scheimpflug images collected by the first imaging system 1014 is referred to as the first Scheimpflug image group, and the series of Scheimpflug images collected by the second imaging system 1015 is referred to as the second Scheimpflug image group. The series of Scheimpflug images collected by such an image acquisition unit 1010B includes the first Scheimpflug image group and the second Scheimpflug image group.
Note that even when a single Scheimpflug image (first Scheimpflug image) is acquired by the first imaging system 1014 and a single Scheimpflug image (second Scheimpflug image) is acquired by the second imaging system 1015 without performing a slit scan, for terminological convenience, the single Scheimpflug image acquired by the first imaging system 1014 may be referred to as the first Scheimpflug image group, and the single Scheimpflug image acquired by the second imaging system 1015 may be referred to as the second Scheimpflug image group. Thus, in this disclosure, the term "group" may be used not only when a plurality of elements are included but also when only one element is included.
When the image acquisition unit 1010B of this example performs a slit scan, the photography of the subject's eye by the first imaging system 1014 and the photography of the subject's eye by the second imaging system 1015 are performed in parallel with each other. That is, the image acquisition unit 1010B performs photography by the first imaging system 1014 and photography by the second imaging system 1015 in parallel while moving the projection position of the slit light over the three-dimensional region of the subject's eye.
Furthermore, the image acquisition unit 1010B of this example may be configured to perform photography (collection of Scheimpflug images) by the first imaging system 1014 and photography (collection of Scheimpflug images) by the second imaging system 1015 in synchronization with each other. By referring to this synchronization relationship, the first Scheimpflug image group and the second Scheimpflug image group can be associated with each other easily, without using image processing or the like. This association is performed, for example, so as to pair Scheimpflug images whose acquisition times differ only slightly.
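One assumed way of realizing this time-based association is sketched below in Python: each frame of the first group is paired with the frame of the second group whose acquisition timestamp is closest. The (timestamp, frame) representation is a hypothetical structure introduced for illustration.

```python
import bisect

def pair_by_timestamp(first_group, second_group):
    """Pair each frame of one group with the closest-in-time frame of the other.

    Each group is a non-empty list of (timestamp, frame), sorted by timestamp.
    """
    second_times = [t for t, _ in second_group]
    pairs = []
    for t, frame in first_group:
        i = bisect.bisect_left(second_times, t)
        # Compare the two neighbouring candidates and keep the closer one.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(second_group)]
        j = min(candidates, key=lambda k: abs(second_times[k] - t))
        pairs.append((frame, second_group[j][1]))
    return pairs
```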
In such an aspect, the ophthalmic apparatus 1000 can reconstruct, from the first Scheimpflug image group and the second Scheimpflug image group, a series of Scheimpflug images corresponding to the slit scan by referring to the mutual synchronization relationship between the photography by the first imaging system 1014 and the photography by the second imaging system 1015.
FIG. 2C shows a configuration example of the ophthalmic apparatus 1000 to which the image acquisition unit 1010B shown in FIG. 2B is applied. In the image acquisition unit 1010B of this example, the optical axis of the first imaging system 1014 and the optical axis of the second imaging system 1015 are inclined in mutually opposite directions with respect to the optical axis of the illumination system 1011. Furthermore, the data processing unit 1020A of this example includes an image selection unit 1021 and an inflammation state information generation unit 1022.
The image selection unit 1021 is configured to select images from the first Scheimpflug image group acquired by the first imaging system 1014 and the second Scheimpflug image group acquired by the second imaging system 1015. For example, the image selection unit 1021 may be configured to select one of a first Scheimpflug image acquired by the first imaging system 1014 and a second Scheimpflug image acquired by the second imaging system 1015.
The image selection unit 1021 is configured to select, from the first Scheimpflug image group and the second Scheimpflug image group, a new series of Scheimpflug images corresponding to the slit scan by which the two groups were collected, based on the correspondence between the two groups established by the synchronization between the photography by the first imaging system 1014 and the photography by the second imaging system 1015. In short, the image selection unit 1021 is configured to reconstruct a series of Scheimpflug images from the first Scheimpflug image group and the second Scheimpflug image group collected by the first imaging system 1014 and the second imaging system 1015, respectively.
The method of the image selection process performed by the image selection unit 1021 may be arbitrary. For example, the method of the image selection process can be determined and/or selected based on predetermined conditions and predetermined parameters, such as the configuration and/or arrangement of the first imaging system 1014 and the second imaging system 1015, and the purpose and/or application of the image selection.
The image acquisition unit 1010B performs photography by the first imaging system 1014 and photography by the second imaging system 1015 in mutual synchronization. As described above, the optical axis of the first imaging system 1014 and the optical axis of the second imaging system 1015 are inclined in mutually opposite directions with respect to the optical axis of the illumination system 1011. For example, the optical axis of the first imaging system 1014 is inclined to the left with respect to the optical axis of the illumination system 1011, and the optical axis of the second imaging system 1015 is inclined to the right with respect to the optical axis of the illumination system 1011. The first imaging system 1014 and the second imaging system 1015 arranged in this manner may be referred to as the left imaging system and the right imaging system, respectively.
The inclination angle of the optical axis of the first imaging system 1014 with respect to the optical axis of the illumination system 1011 and the inclination angle of the optical axis of the second imaging system 1015 with respect to the optical axis of the illumination system 1011 may be equal to or different from each other. These inclination angles may be fixed or variable.
The illumination system 1011 of this example is configured and arranged to project slit light, the longitudinal direction of whose cross section is oriented in the Y direction, onto the subject's eye from the front. The image acquisition unit 1010B of this example applies a slit scan to the three-dimensional region of the anterior segment of the subject's eye by moving the illumination system 1011, the first imaging system 1014, and the second imaging system 1015 together in the X direction.
The image selection unit 1021 of this example selects, based on the correspondence between the first Scheimpflug image group collected by the first imaging system 1014 and the second Scheimpflug image group collected by the second imaging system 1015, a plurality of artifact-free Scheimpflug images from the two groups, thereby selecting a new series of Scheimpflug images corresponding to the slit scan by which the two groups were collected. The artifact may be of any type. When an anterior segment scan is performed as in this example, the artifact may be an artifact caused by corneal reflection (referred to as a corneal reflection artifact). Several examples of the processing performed by the image selection unit 1021 are described below.
The slit light projection positions (Scheimpflug images, frames) at which corneal reflection artifacts occur differ between the left imaging system and the right imaging system. For example, when a slit scan is performed as in this example, by projecting slit light whose cross-sectional longitudinal direction is oriented in the Y direction onto the subject's eye from the front while moving the illumination system 1011, the first imaging system 1014, and the second imaging system 1015 together in the X direction, the corneal reflection of the slit light tends to enter the left imaging system when the slit light is projected at a position to the left of the corneal apex, and tends to enter the right imaging system when the slit light is projected at a position to the right of the corneal apex.
In consideration of these circumstances, the image selection unit 1021 of some exemplary aspects is configured to first identify, from the first Scheimpflug image group collected by the first imaging system 1014 serving as the left imaging system, the Scheimpflug image corresponding to the corneal apex (the first corneal apex image), and to identify, from the second Scheimpflug image group collected by the second imaging system 1015 serving as the right imaging system, the Scheimpflug image corresponding to the corneal apex (the second corneal apex image).
In some exemplary aspects, the process of identifying a corneal apex image may include a process of detecting an image corresponding to the corneal surface from each Scheimpflug image included in the first Scheimpflug image group, a process of identifying, based on the Z coordinates of the pixels of these detected images, the pixel closest to the ophthalmic apparatus 1000 of this example, and a process of setting the Scheimpflug image containing the identified pixel as the first corneal apex image. The second corneal apex image may be set in the same manner.
Next, the image selection unit 1021 selects, from the first Scheimpflug image group, the Scheimpflug images located to the right of the first corneal apex image, selects, from the second Scheimpflug image group, the Scheimpflug images located to the left of the second corneal apex image, and forms a series of Scheimpflug images consisting of the two selected image groups (together with the first corneal apex image and/or the second corneal apex image). This yields a series of Scheimpflug images that covers the three-dimensional region of the anterior segment to which the slit scan was applied and that is (very likely) free of corneal reflection artifacts.
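A hedged Python sketch of this vertex-based selection follows. It assumes the scan proceeds from left to right with frame indices increasing accordingly, and that each frame provides the Z coordinates of its detected corneal-surface pixels, with smaller Z meaning closer to the apparatus; all data structures are illustrative placeholders.

```python
def vertex_index(frames_surface_z):
    """Index of the frame whose corneal-surface pixel is closest to the device.

    frames_surface_z: per-frame lists of Z coordinates of detected
    corneal-surface pixels, smaller Z meaning closer to the apparatus.
    """
    return min(range(len(frames_surface_z)),
               key=lambda i: min(frames_surface_z[i]))

def select_artifact_free_series(first_group, first_surface_z,
                                second_group, second_surface_z):
    """Frames right of the apex from the left system, frames left of the
    apex from the right system (scan assumed to move left to right)."""
    v1 = vertex_index(first_surface_z)   # first corneal apex image (left sys.)
    v2 = vertex_index(second_surface_z)  # second corneal apex image (right sys.)
    return second_group[:v2] + first_group[v1:]
```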
Another example of the processing performed by the image selection unit 1021 will now be described. The image selection unit 1021 of some exemplary aspects is configured to determine whether either of two images acquired substantially simultaneously by the first imaging system 1014 (e.g., the left imaging system) and the second imaging system 1015 (e.g., the right imaging system) contains a corneal reflection artifact. This corneal reflection artifact determination process includes predetermined image analysis, for example, threshold processing of luminance information assigned to pixels. The process of determining which two images were acquired substantially simultaneously can be performed based on the synchronization relationship between the photography by the first imaging system 1014 and the photography by the second imaging system 1015.
The threshold processing used in the artifact determination is performed, for example, so as to identify pixels assigned luminance values exceeding a preset threshold. Typically, the threshold may be set higher than the luminance value of the slit light image (the projection region of the slit light) in the image. The image selection unit 1021 is thereby configured to determine images brighter than the slit light image to be artifacts, without determining the slit light image itself to be an artifact. Considering that an image brighter than the slit light image in a Scheimpflug image is highly likely to be caused by specular reflection at the cornea, an artifact detected by the image selection unit 1021 configured in this manner can be regarded as highly likely to be a corneal reflection artifact.
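The threshold test described here can be expressed in a few lines of Python. The margin above the slit-light brightness is an invented illustrative value, not one given in the disclosure.

```python
import numpy as np

def has_reflection_artifact(image: np.ndarray,
                            slit_brightness: float,
                            margin: float = 1.2) -> bool:
    """True if any pixel exceeds the slit-light level by the given margin."""
    return bool((image > slit_brightness * margin).any())
```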
For artifact determination, the image selection unit 1021 may perform any image analysis other than threshold processing, such as pattern recognition, segmentation, or edge detection. In general, any information processing technique, such as image analysis, image processing, machine learning, artificial intelligence, or cognitive computing, can be applied to artifact determination.
When the artifact determination finds that one of the two images acquired substantially simultaneously by the first imaging system 1014 and the second imaging system 1015 contains an artifact, the image selection unit 1021 selects the other image. That is, of the two images acquired substantially simultaneously by the first imaging system 1014 and the second imaging system 1015, the image selection unit 1021 selects the image that was not determined to contain an artifact.
To handle the case where both images are determined to contain artifacts, the image selection unit 1021 may be configured to perform, for example, a process of evaluating the magnitude of the adverse effect the artifacts have on observation and diagnosis, and a process of selecting the image with the smaller adverse effect. This evaluation process may be performed based on, for example, any one or more of the size, intensity, shape, and position of the artifact. Typically, an artifact of large size, an artifact of high intensity, or an artifact located in or near a region of interest such as the slit light image is evaluated as having a large adverse effect.
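A possible severity scoring along these lines is sketched below; the score simply combines artifact size, mean intensity, and overlap with a region of interest such as the slit light image. The weights are ad-hoc assumptions for illustration only.

```python
import numpy as np

def artifact_severity(artifact_mask: np.ndarray,
                      intensity: np.ndarray,
                      roi_mask: np.ndarray) -> float:
    """Score an artifact by its size, mean intensity, and overlap with an ROI."""
    size = int(artifact_mask.sum())                    # extent of the artifact
    strength = float(intensity[artifact_mask].mean()) if size else 0.0
    overlap = int((artifact_mask & roi_mask).sum())    # proximity to the ROI
    return size + strength + 10.0 * overlap            # ad-hoc weighting

def pick_less_affected(frame_a, severity_a, frame_b, severity_b):
    """Select the frame whose artifact is evaluated as less harmful."""
    return frame_a if severity_a <= severity_b else frame_b
```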
Note that when both images contain artifacts, for example, the artifact removal disclosed in Patent Document 3 (Japanese Unexamined Patent Application Publication No. 2019-213733) may be applied.
Providing the image selection unit 1021 described above makes it possible to provide an image of the three-dimensional region of the subject's eye that is free of artifacts that would hinder observation, analysis, or diagnosis. Furthermore, an artifact-free image of the three-dimensional region of the subject's eye can be supplied to subsequent processing. For example, a three-dimensional image or a rendered image of the subject's eye can be constructed based on a group of artifact-free images.
Even for images obtained by photographing substantially the same position, the dimensions of a given depicted site may differ between the image obtained by the left imaging system and the image obtained by the right imaging system. For example, in the left image and the right image obtained by photographing substantially the same position with the left and right imaging systems, respectively, the depicted corneal thickness, the dimensions of inflammatory cells, and the dimensions of anterior chamber flare may differ from each other. Even in such cases, using the image selection unit 1021 makes it possible to reconcile the dimensions of the given site.
The inflammation state information generation unit 1022 is configured to generate inflammation state information based on the Scheimpflug images selected by the image selection unit 1021. The number of Scheimpflug images used to generate the inflammation state information may be set arbitrarily. The inflammation state information generation unit 1022 may also be configured to perform a process of processing one or more Scheimpflug images included in the series of Scheimpflug images selected by the image selection unit 1021 to generate processed image data, and a process of generating inflammation state information based on the generated processed image data.
As described above, the inflammation state information generated by the data processing unit 1020 may include cell evaluation information, which is evaluation information on inflammatory cells in the anterior chamber of the subject's eye. In this case, the data processing unit 1020 may be configured to perform at least one of the first segmentation, the second segmentation, and the cell evaluation information generating process. Here, the first segmentation is a process for identifying the anterior chamber region corresponding to the anterior chamber, the second segmentation is a process for identifying cell regions corresponding to inflammatory cells, and the cell evaluation information generating process is a process for generating cell evaluation information.
The first segmentation may be a machine-learning-based process, a non-machine-learning-based process, or a combination of a machine-learning-based process and a non-machine-learning-based process. The same applies to the second segmentation and to the cell evaluation information generating process.
The type of data input to the first segmentation, the type of data input to the second segmentation, and the type of data input to the cell evaluation information generating process may all be arbitrary. Several examples of possible combinations of processes including the first segmentation, the second segmentation, and the cell evaluation information generating process are described below.
The data processing unit 1030 shown in FIG. 3 is an example of the configuration of the data processing unit 1020 in FIG. 1. The data processing unit 1030 of this example includes a first segmentation unit 1031, a second segmentation unit 1032, and a cell evaluation information generation processing unit 1033.
The first segmentation unit 1031 includes a processor that performs the first segmentation for identifying the anterior chamber region, and is configured to identify the anterior chamber region from the Scheimpflug image acquired by the image acquisition unit 1010.
FIG. 4 shows a configuration example of the first segmentation unit 1031 when the first segmentation of this example is performed using machine learning. The first segmentation unit 1031A of this example is configured to perform the first segmentation using a pre-built inference model 1034 (referred to as the first inference model). The first inference model 1034 includes a neural network 1035 (referred to as the first neural network) built by machine learning using training data that includes at least eye images (e.g., Scheimpflug images of eyes, eye images acquired with other modalities, or both).
The data input to the first neural network 1035 is a Scheimpflug image, and the data output from the first neural network 1035 is an anterior chamber region. That is, the first segmentation unit 1031A is configured to receive a Scheimpflug image acquired by the image acquisition unit 1010 (e.g., one or more Scheimpflug images, one or more pieces of processed image data, or both), input the Scheimpflug image to the first neural network 1035 of the first inference model 1034, and obtain the output data from the first neural network 1035 (the anterior chamber region in the input Scheimpflug image).
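As an illustration of how such an inference step might look in practice, the following sketch assumes a PyTorch segmentation network trained to map a Scheimpflug image to an anterior chamber probability map; the model, the NCHW packing, and the 0.5 cutoff are assumptions for illustration, not details of the first inference model itself.

```python
import numpy as np
import torch

def segment_anterior_chamber(model: torch.nn.Module,
                             scheimpflug: np.ndarray) -> np.ndarray:
    """Run a trained segmentation network on one Scheimpflug image."""
    x = torch.from_numpy(scheimpflug).float()[None, None]  # to NCHW
    with torch.no_grad():
        probability = torch.sigmoid(model(x))[0, 0]  # chamber probability map
    return (probability > 0.5).numpy()  # binary anterior chamber region
```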
The apparatus that builds the first inference model 1034 (the inference model building apparatus) may be provided in the ophthalmic apparatus 1000, in a peripheral device (such as a computer) of the ophthalmic apparatus 1000, or in another computer.
The model building unit 2000 shown in FIG. 5 is an example of the inference model building apparatus and includes a learning processing unit 2010 and a neural network 2020.
The neural network 2020 typically includes a convolutional neural network (CNN). Reference numeral 2030 in FIG. 5 denotes an example of the structure of a convolutional neural network.
 入力層には、画像が入力される。入力層の後ろには、畳み込み層とプーリング層とのペアが複数配置されている。図5に示す例には畳み込み層とプーリング層とのペアが3つ設けられているが、ペアの個数は任意であってよい。 Images are input to the input layer. Multiple pairs of convolutional layers and pooling layers are arranged behind the input layer. Although three pairs of convolutional layers and pooling layers are provided in the example shown in FIG. 5, the number of pairs may be arbitrary.
In the convolutional layer, a convolution operation is performed to extract features (such as contours) from the image. The convolution operation is a product-sum operation between the input image and a filter function (weighting coefficients, filter kernel) of the same dimension as the image. The convolutional layer applies the convolution operation to each of a plurality of portions of the input image. More specifically, in the convolutional layer, the value of each pixel of the partial image to which the filter function is applied is multiplied by the value (weight) of the filter function corresponding to that pixel to calculate a product, and the sum of these products is taken over the plurality of pixels of the partial image. The product-sum value obtained in this way is assigned to the corresponding pixel of the output image. By performing this product-sum operation while moving the location (partial image) to which the filter function is applied, a convolution result is obtained for the entire input image. Such convolution operations yield a large number of images in which various features have been extracted using a large number of weighting coefficients; that is, a large number of filtered images such as smoothed images and edge images are obtained. The many images generated by a convolutional layer are called feature maps.
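To make the product-sum operation above concrete, the following is a minimal NumPy sketch of a single-channel, stride-1 convolution. It is purely illustrative and not part of the disclosed apparatus; the function name convolve2d and the edge-extracting kernel are assumptions chosen for the example.

```python
import numpy as np

def convolve2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide the filter kernel over the image and assign the product-sum
    at each position to the corresponding output pixel (stride 1)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i:i + kh, j:j + kw]   # the partial image
            out[i, j] = np.sum(patch * kernel)  # product-sum with the weights
    return out

# Applying an edge-extracting kernel yields one filtered image (feature map)
image = np.random.rand(8, 8)
edge_kernel = np.array([[-1., -1., -1.],
                        [-1.,  8., -1.],
                        [-1., -1., -1.]])
feature_map = convolve2d(image, edge_kernel)
```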
In the pooling layer, the feature map generated by the immediately preceding convolutional layer is compressed (for example, by thinning out the data). More specifically, the pooling layer calculates a statistic over a predetermined neighborhood of each pixel of interest in the feature map, at predetermined pixel intervals, and outputs an image of smaller dimensions than the input feature map. The statistic applied in the pooling operation is, for example, the maximum value (max pooling) or the average value (average pooling). The pixel interval applied in the pooling operation is called the stride.
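A matching sketch of the pooling operation, under the same caveat that it is illustrative only: max_pool2d takes the maximum over each neighborhood at the given stride, producing a smaller map; replacing window.max() with window.mean() gives average pooling.

```python
import numpy as np

def max_pool2d(fmap: np.ndarray, size: int = 2, stride: int = 2) -> np.ndarray:
    """Compress a feature map by taking the maximum statistic over each
    size x size neighborhood, moving by `stride` pixels."""
    oh = (fmap.shape[0] - size) // stride + 1
    ow = (fmap.shape[1] - size) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            window = fmap[i * stride:i * stride + size,
                          j * stride:j * stride + size]
            out[i, j] = window.max()  # window.mean() for average pooling
    return out
```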
A convolutional neural network can extract many features from an input image by performing processing with a plurality of pairs of convolutional and pooling layers.

A fully connected layer is provided after the last pair of convolutional and pooling layers. Two fully connected layers are provided in the example shown in FIG. 5, but the number of fully connected layers may be arbitrary. The fully connected layers perform processing such as image classification, image segmentation, and regression using the features compressed by the combination of convolution and pooling. After the last fully connected layer, an output layer is provided to deliver the output result.
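As one concrete reading of the structure just described (three convolution/pooling pairs, two fully connected layers, then an output), the following PyTorch sketch may help. The framework, channel counts, and the 128 x 128 single-channel input are illustrative assumptions, not part of the disclosure.

```python
import torch
import torch.nn as nn

class ExampleCNN(nn.Module):
    """Three convolution/pooling pairs followed by two fully connected
    layers, mirroring the example structure of FIG. 5."""
    def __init__(self, n_outputs: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # pair 1
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # pair 2
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # pair 3
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128), nn.ReLU(),  # fully connected layer 1
            nn.Linear(128, n_outputs),                # fully connected layer 2 / output
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# One 128 x 128 single-channel image through the network
logits = ExampleCNN()(torch.randn(1, 1, 128, 128))
```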
In some exemplary aspects, the convolutional neural network may not include fully connected layers (as in, for example, a fully convolutional network (FCN)), and it may include a support vector machine, a recurrent neural network (RNN), or the like. Machine learning applied to the neural network 2020 may include transfer learning; that is, the neural network 2020 may include a neural network that has already been trained with other training data (training images) and whose parameters have been adjusted accordingly. The model construction unit 2000 (learning processing unit 2010) may also be configured to be able to apply fine-tuning to a trained neural network (neural network 2020). The neural network 2020 may be constructed using a known open-source neural network architecture.

The learning processing unit 2010 applies machine learning using training data to the neural network 2020. When the neural network 2020 includes a convolutional neural network, the parameters adjusted by the learning processing unit 2010 include, for example, the filter coefficients of the convolutional layers and the connection weights and offsets of the fully connected layers.
As described above, the training data may include one or more Scheimpflug images acquired from one or more eyes. Since a Scheimpflug image of an eye is the same type of image as the images input to the first neural network 1035, the quality (accuracy, precision, etc.) of the output of the first neural network 1035 can be improved compared with performing machine learning using training data that contains only other types of images.

The types of images included in the training data are not limited to Scheimpflug images. For example, the training data may include images acquired by other ophthalmic modalities (fundus camera, OCT apparatus, SLO, surgical microscope, etc.), images acquired by diagnostic imaging modalities of any clinical department (ultrasound diagnostic apparatus, X-ray diagnostic apparatus, X-ray CT apparatus, magnetic resonance imaging (MRI) apparatus, etc.), images generated by processing actual eye images (processed image data), pseudo images, and so on. Techniques such as data augmentation may also be used to increase the number of images included in the training data.

The training technique (machine learning technique) for constructing the first neural network 1035 may be arbitrary; for example, it may be any one of supervised learning, unsupervised learning, and reinforcement learning, or a combination of any two or more of these.

In some exemplary aspects, supervised learning is performed using training data generated by annotation, in which labels are attached to input images. In this annotation, for example, for each image included in the training data, the anterior chamber region in that image is identified and labeled. Identification of the anterior chamber region is performed, for example, by at least one of a physician, a computer, and another inference model. The learning processing unit 2010 can construct the first neural network 1035 by applying supervised learning using such training data to the neural network 2020.
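A sketch of this supervised learning step, assuming PyTorch: random tensors stand in for annotated Scheimpflug images and their anterior chamber labels, and a two-layer toy network stands in for the neural network 2020; none of these names appear in the disclosure.

```python
import torch
import torch.nn as nn

# Stand-ins for annotated training data: images and pixel-wise
# anterior chamber labels produced by annotation
images = torch.randn(8, 1, 128, 128)
labels = (torch.rand(8, 1, 128, 128) > 0.5).float()

model = nn.Sequential(  # toy stand-in for neural network 2020
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
criterion = nn.BCEWithLogitsLoss()  # pixel-wise prediction vs. label
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()   # gradients for filter coefficients and weights
    optimizer.step()  # parameter adjustment by the learning processing
```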
The first inference model 1034 including the first neural network 1035 constructed in this manner is a trained model whose input is a Scheimpflug image (for example, a Scheimpflug image acquired by the image acquisition unit 1010, or its processed image data) and whose output is the anterior chamber region in the input Scheimpflug image (for example, information indicating the range or position of the anterior chamber region).

In order to avoid concentration of processing on specific units of the first neural network 1035, the learning processing unit 2010 may randomly select and disable some units of the neural network 2020 and perform learning using the remaining units (dropout).
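In a framework such as PyTorch this corresponds to a dropout layer; a one-line illustrative sketch (the 50% rate is an arbitrary assumption):

```python
import torch.nn as nn

# Randomly disables half of the units during training; all units
# are active again at inference time.
fc = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Dropout(p=0.5))
```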
The technique used for constructing the inference model is not limited to the examples shown here. For example, any technique such as a support vector machine, a Bayesian classifier, boosting, k-means clustering, kernel density estimation, principal component analysis, independent component analysis, self-organizing maps, random forests, or generative adversarial networks (GAN) may be used to construct the inference model.

The first segmentation unit 1031A of this example executes processing for identifying the anterior chamber region from the Scheimpflug image of the subject's eye by using such a first inference model 1034 (first neural network 1035).

The second segmentation unit 1032 includes a processor that executes a second segmentation for identifying cell regions, and is configured to identify cell regions from the anterior chamber region identified by the first segmentation unit 1031.
FIG. 6 shows an example configuration of the second segmentation unit 1032 for the case where the second segmentation of this example is executed using machine learning. The second segmentation unit 1032A of this example is configured to execute the second segmentation using a pre-constructed inference model 1036 (referred to as the second inference model). The second inference model 1036 includes a neural network 1037 (referred to as the second neural network) constructed by machine learning using training data that includes at least eye images (for example, Scheimpflug images of eyes, eye images acquired with other modalities, or both).

The eye images included in the training data of this example include images corresponding to at least part of the anterior chamber of the eye (referred to as anterior chamber images). The eye images included in the training data of this example may include the results of manual or automatic segmentation applied to anterior segment images (for example, Scheimpflug images or images acquired with other modalities); for example, they may be anterior chamber images extracted from anterior segment images, or information indicating the range or position of an anterior chamber image within an anterior segment image.

The data input to the second neural network 1037 is the anterior chamber region identified from the Scheimpflug image by the first segmentation unit 1031 (or the identified and extracted anterior chamber region; the same applies hereinafter), and the data output from the second neural network 1037 is a cell region. That is, the second segmentation unit 1032A is configured to receive the anterior chamber region identified by the first segmentation unit 1031, input this anterior chamber region to the second neural network 1037 of the second inference model 1036, and obtain the output data from the second neural network 1037 (the cell regions in the input anterior chamber region).
The second inference model 1036 (second neural network 1037) may be constructed in the same manner as the first inference model 1034 (first neural network 1035). For example, the construction of the second inference model 1036 (second neural network 1037) is executed by the model construction unit 2000 shown in FIG. 5. Unless otherwise noted, the model construction unit 2000 (learning processing unit 2010 and neural network 2020) of this example may be the same as that used in constructing the first inference model 1034 (first neural network 1035).

The training data used to construct the second neural network 1037 may include one or more Scheimpflug images acquired from one or more eyes (for example, anterior segment images in which the anterior chamber region has been identified, or anterior chamber images). The types of images included in the training data are not limited to Scheimpflug images; for example, the training data may also include images acquired by other ophthalmic modalities, images acquired by diagnostic imaging modalities of any clinical department, images generated by processing actual eye images, pseudo images, and so on.

The training technique (machine learning technique) for constructing the second neural network 1037 may be arbitrary; for example, it may be any one of supervised learning, unsupervised learning, and reinforcement learning, or a combination of any two or more of these.

In some exemplary aspects, supervised learning is performed using training data generated by annotation, in which labels are attached to input images. In this annotation, for example, for each image included in the training data, the cell regions in that image are identified and labeled. Identification of the cell regions is performed, for example, by at least one of a physician, a computer, and another inference model. The learning processing unit 2010 can construct the second neural network 1037 by applying supervised learning using such training data to the neural network 2020.

The second inference model 1036 including the second neural network 1037 constructed in this manner is a trained model whose input is the anterior chamber region identified by the first segmentation unit 1031 and whose output is the cell regions in the input anterior chamber region (for example, information indicating the range or position of the cell regions).

The second segmentation unit 1032A of this example executes processing for identifying cell regions from the anterior chamber region in the Scheimpflug image of the subject's eye by using such a second inference model 1036 (second neural network 1037).
The cell evaluation information generation processing unit 1033 includes a processor that executes cell evaluation information generation processing for generating cell evaluation information, and generates cell evaluation information from the cell regions identified by the second segmentation unit 1032.

FIG. 7 shows an example configuration of the cell evaluation information generation processing unit 1033 for the case where the cell evaluation information generation processing of this example is executed using machine learning. The cell evaluation information generation processing unit 1033A of this example is configured to execute the cell evaluation information generation processing using a pre-constructed inference model 1038 (referred to as the third inference model). The third inference model 1038 includes a neural network 1039 (referred to as the third neural network) constructed by machine learning using training data that includes at least eye images (for example, Scheimpflug images of eyes, eye images acquired with other modalities, or both).

The eye images included in the training data of this example include at least anterior chamber images in which inflammatory cells are depicted, and may further include anterior chamber images in which no inflammatory cells are depicted. The eye images included in the training data of this example may include the results of manual or automatic segmentation applied to anterior segment images (for example, Scheimpflug images or images acquired with other modalities); for example, they may be cell images extracted from the anterior chamber image within an anterior segment image, or information indicating the range or position of cell regions in an anterior chamber image.

The data input to the third neural network 1039 is the output from the second segmentation unit 1032 or data based on it (for example, data indicating the range, position, distribution, etc. of the cell regions, or an anterior chamber region to which the cell region identification results have been attached), and the data output from the third neural network 1039 is cell evaluation information. That is, the cell evaluation information generation processing unit 1033A is configured to receive the cell region identification results from the second segmentation unit 1032 or data based on them, input the cell region identification results or the data based on them to the third neural network 1039 of the third inference model 1038, and obtain the output data (cell evaluation information) from the third neural network 1039. As described above, the cell evaluation information is evaluation information on predetermined parameters relating to inflammatory cells (for example, the density, number, position, and distribution of inflammatory cells).
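To illustrate what such parameters can look like, the following sketch derives a count, a density, and positions from a binary cell mask by plain connected-component analysis with SciPy. It is a hand-written stand-in for intuition only, not the trained third inference model; the function name and the returned fields are assumptions.

```python
import numpy as np
from scipy import ndimage

def cell_evaluation(cell_mask: np.ndarray, chamber_mask: np.ndarray) -> dict:
    """Derive simple evaluation parameters from a binary cell mask
    (second segmentation output) and the anterior chamber mask."""
    labeled, n_cells = ndimage.label(cell_mask)      # connected components
    positions = (ndimage.center_of_mass(cell_mask, labeled,
                                        range(1, n_cells + 1))
                 if n_cells else [])
    chamber_area = int(chamber_mask.sum())           # area in pixels
    return {
        "count": n_cells,
        "density": n_cells / chamber_area if chamber_area else 0.0,
        "positions": positions,                      # distribution of cells
    }
```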
The third inference model 1038 (third neural network 1039) may be constructed in the same manner as the first inference model 1034 (first neural network 1035). For example, the construction of the third inference model 1038 (third neural network 1039) is executed by the model construction unit 2000 shown in FIG. 5. Unless otherwise noted, the model construction unit 2000 (learning processing unit 2010 and neural network 2020) of this example may be the same as that used in constructing the first inference model 1034 (first neural network 1035).

The training data used to construct the third neural network 1039 may include one or more Scheimpflug images acquired from one or more eyes (for example, anterior segment images including an anterior chamber region in which cell regions have been identified, or anterior chamber images in which cell regions have been identified). The types of images included in the training data are not limited to Scheimpflug images; for example, the training data may also include images acquired by other ophthalmic modalities, images acquired by diagnostic imaging modalities of any clinical department, images generated by processing actual eye images, pseudo images, and so on.

The training technique (machine learning technique) for constructing the third neural network 1039 may be arbitrary; for example, it may be any one of supervised learning, unsupervised learning, and reinforcement learning, or a combination of any two or more of these.

In some exemplary aspects, supervised learning is performed using training data generated by annotation, in which labels are attached to input images. In this annotation, for example, each image included in the training data (in which the cell regions have been identified) is labeled with cell evaluation information generated from that image. Generation of cell evaluation information from an image is performed, for example, by at least one of a physician, a computer, and another inference model. The learning processing unit 2010 can construct the third neural network 1039 by applying supervised learning using such training data to the neural network 2020.

The third inference model 1038 including the third neural network 1039 constructed in this manner is a trained model whose input is the cell region identification results from the second segmentation unit 1032 or data based on them, and whose output is cell evaluation information based on the input cell region identification results or data based on them.

The cell evaluation information generation processing unit 1033A of this example executes processing for generating cell evaluation information from the cell regions in the anterior chamber region of the Scheimpflug image of the subject's eye by using such a third inference model 1038 (third neural network 1039).
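The overall flow of the data processing unit 1030 can be summarized in a short sketch; the three model arguments are assumed to be callables (for example, the trained inference models described above), and the function itself is illustrative rather than part of the disclosure.

```python
def analyze_scheimpflug(image, first_model, second_model, third_model):
    """Chain the three stages of data processing unit 1030:
    Scheimpflug image -> anterior chamber region -> cell regions
    -> cell evaluation information."""
    anterior_chamber = first_model(image)          # first segmentation
    cell_regions = second_model(anterior_chamber)  # second segmentation
    return third_model(cell_regions)               # evaluation generation
```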
The data processing unit 1040 shown in FIG. 8 is an example configuration of the data processing unit 1020 in FIG. 1. The data processing unit 1040 of this example includes a first segmentation unit 1041, a conversion processing unit 1042, a second segmentation unit 1043, and a cell evaluation information generation processing unit 1044.

The first segmentation unit 1041 has the same configuration and functions as the first segmentation unit 1031 in FIG. 3 (for example, the first segmentation unit 1031A in FIG. 4), and is configured to execute the first segmentation for identifying the anterior chamber region from the Scheimpflug image acquired by the image acquisition unit 1010.

The conversion processing unit 1042 converts the anterior chamber region identified by the first segmentation unit 1041 into data having a structure suited to the second segmentation executed by the second segmentation unit 1043. The second segmentation unit 1043 of this example, like the second segmentation unit 1032A in FIG. 6, is configured to execute the second segmentation using a neural network constructed by machine learning (a second neural network). The conversion processing unit 1042 is configured to execute conversion processing for converting the anterior chamber region identified from the Scheimpflug image by the first segmentation unit 1041 into image data having a structure corresponding to the input layer of the second neural network of the second segmentation unit 1043.

For example, the input layer of the second neural network (convolutional neural network) of the second segmentation unit 1043 may be configured to accept data of a predetermined structure (form, format). This predetermined data structure may be, for example, a predetermined image size (for example, the numbers of pixels in the vertical and horizontal directions) or a predetermined image shape (for example, square or rectangular). On the other hand, the image size and image shape of the anterior chamber region identified by the first segmentation unit 1041 vary with the specifications of the ophthalmic apparatus, the conditions and settings at the time of imaging, individual differences in the dimensions and morphology of the subject's eye, and other factors. The conversion processing unit 1042 converts the structure (for example, the image size and/or image shape) of the anterior chamber region identified by the first segmentation unit 1041 into a structure that the input layer of the second neural network of the second segmentation unit 1043 can accept.

The image size conversion may be performed using any known image size conversion technique and may include, for example, processing that divides the anterior chamber region identified by the first segmentation unit 1041 into a plurality of partial images of an image size matching the input layer, or processing that resizes the anterior chamber region identified by the first segmentation unit 1041 into a single image of an image size matching the input layer. The image shape conversion may be performed using any known image deformation technique. Conversion processing for other data structures may be performed in a similar manner.
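Both strategies mentioned here, resizing to a single image and dividing into partial images, can be sketched as follows, assuming PyTorch tensors of shape (batch, channels, height, width); the 256-pixel target size is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def resize_to_input(region: torch.Tensor, size: int = 256) -> torch.Tensor:
    """Resize an anterior chamber region of arbitrary size into a single
    image matching the network's fixed input size."""
    return F.interpolate(region, size=(size, size),
                         mode="bilinear", align_corners=False)

def split_to_tiles(region: torch.Tensor, size: int = 256) -> torch.Tensor:
    """Alternatively, divide the region into non-overlapping size x size
    partial images (assumes height and width are multiples of `size`)."""
    n, c, h, w = region.shape
    tiles = region.unfold(2, size, size).unfold(3, size, size)
    return tiles.reshape(n, c, -1, size, size).permute(0, 2, 1, 3, 4)
```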
Note that, although the present disclosure describes in detail several examples in which conversion processing corresponding to the structure of a neural network is applied to the anterior chamber region of a Scheimpflug image, the modes of the conversion processing and the configurations for it are not limited to these.

For example, when the image input to the neural network is a Scheimpflug image, a configuration may be adopted in which similar conversion processing is applied to the input Scheimpflug image. When the image input to the neural network is arbitrary processed image data derived from a Scheimpflug image, a configuration may be adopted in which similar conversion processing is applied to the input processed image data.

The placement of the element that executes the conversion processing (the processor that executes the conversion processing, referred to as the conversion processing unit) may also be arbitrary. For example, within the flow of the series of processes executed on the acquired Scheimpflug image, the conversion processing unit may be placed at a stage before the target neural network (for example, at a stage before the inference model that includes this neural network, or inside this inference model but before this neural network), or it may be placed inside the target neural network. When placed inside the target neural network, the conversion processing unit is placed at a stage before the input layer that accepts the input corresponding directly to the output of this neural network.

The second segmentation unit 1043 has the same configuration and functions as the second segmentation unit 1032A in FIG. 6, and is configured to execute the second segmentation for identifying cell regions from the anterior chamber region whose data structure has been converted by the conversion processing unit 1042. The second neural network of the second segmentation unit 1043 is configured to receive as input the image data generated by the conversion processing unit 1042 (the anterior chamber region with the converted data structure) and to output cell regions. The machine learning for constructing the second neural network of this example may be performed in the same manner as the machine learning for constructing the second neural network 1037 in FIG. 6.

The cell evaluation information generation processing unit 1044 has the same configuration and functions as the cell evaluation information generation processing unit 1033 in FIG. 3 (for example, the cell evaluation information generation processing unit 1033A in FIG. 7), and is configured to execute cell evaluation information generation processing for generating cell evaluation information from the cell regions identified by the second segmentation unit 1043.

Thus, the data processing unit 1040 of this example may have a configuration in which the conversion processing unit 1042 is placed between the first segmentation unit 1031 and the second segmentation unit 1032 of the data processing unit 1030 in FIG. 3. However, the configuration of the data processing unit 1040 of this example is not limited to this.
The data processing unit 1050 shown in FIG. 9 is an example configuration of the data processing unit 1020 in FIG. 1. The data processing unit 1050 of this example includes a second segmentation unit 1051 and a cell evaluation information generation processing unit 1052.

The second segmentation unit 1051 includes a processor that executes a second segmentation for identifying cell regions, and is configured to identify cell regions from the Scheimpflug image acquired by the image acquisition unit 1010.

FIG. 10 shows an example configuration of the second segmentation unit 1051 for the case where the second segmentation of this example is executed using machine learning. The second segmentation unit 1051A of this example is configured to execute the second segmentation using a pre-constructed inference model 1053 (referred to as the fourth inference model). The fourth inference model 1053 includes a neural network 1054 (referred to as the fourth neural network) constructed by machine learning using training data that includes at least eye images (for example, Scheimpflug images of eyes, eye images acquired with other modalities, or both).

In some exemplary aspects, the fourth neural network 1054 may include at least part of the first neural network 1035 in FIG. 4 and at least part of the second neural network 1037 in FIG. 6. For example, the fourth neural network 1054 may be a neural network in which the first neural network 1035 and the second neural network 1037 are arranged in series. A fourth neural network 1054 configured in this way has both the function of identifying the anterior chamber region from a Scheimpflug image and the function of identifying cell regions from the anterior chamber region.
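Such a serial arrangement can be expressed, for example, with PyTorch's nn.Sequential; the single-layer modules below are placeholders standing in for the trained first and second neural networks, not the disclosed models themselves.

```python
import torch
import torch.nn as nn

first_net = nn.Conv2d(1, 1, 3, padding=1)   # stand-in: image -> chamber map
second_net = nn.Conv2d(1, 1, 3, padding=1)  # stand-in: chamber map -> cell map

# One network performing both identification steps in series
fourth_net = nn.Sequential(first_net, second_net)
cell_map = fourth_net(torch.randn(1, 1, 128, 128))
```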
In some exemplary aspects, the fourth neural network 1054 may be trained by machine learning to identify cell regions directly from a Scheimpflug image without identifying the anterior chamber region. The modes of the fourth neural network 1054 are not limited to these, and it may include a neural network to which any machine learning for identifying cell regions from a Scheimpflug image has been applied.

The data input to the fourth neural network 1054 is a Scheimpflug image, and the data output from the fourth neural network 1054 is a cell region. That is, the second segmentation unit 1051A is configured to receive a Scheimpflug image, input this Scheimpflug image to the fourth neural network 1054 of the fourth inference model 1053, and obtain the output data from the fourth neural network 1054 (the cell regions in the input Scheimpflug image).

The fourth inference model 1053 (fourth neural network 1054) may be constructed in the same manner as the first inference model 1034 (first neural network 1035). For example, the construction of the fourth inference model 1053 (fourth neural network 1054) is executed by the model construction unit 2000 shown in FIG. 5. Unless otherwise noted, the model construction unit 2000 (learning processing unit 2010 and neural network 2020) of this example may be the same as that used in constructing the first inference model 1034 (first neural network 1035).

The training data used to construct the fourth neural network 1054 may include one or more Scheimpflug images acquired from one or more eyes. The types of images included in the training data are not limited to Scheimpflug images; for example, the training data may also include images acquired by other ophthalmic modalities, images acquired by diagnostic imaging modalities of any clinical department, images generated by processing actual eye images, pseudo images, and so on.

Some of the images included in the training data may be accompanied by information for assisting the processing executed by the fourth neural network 1054. For example, the anterior chamber region in an image may be labeled by prior annotation.
The training technique (machine learning technique) for constructing the fourth neural network 1054 may be arbitrary; for example, it may be any one of supervised learning, unsupervised learning, and reinforcement learning, or a combination of any two or more of these.

In some exemplary aspects, supervised learning is performed using training data generated by annotation, in which labels are attached to input images. In this annotation, for example, for each image included in the training data, the cell regions in that image are identified and labeled. Identification of the cell regions is performed, for example, by at least one of a physician, a computer, and another inference model. The learning processing unit 2010 can construct the fourth neural network 1054 by applying supervised learning using such training data to the neural network 2020.

The fourth inference model 1053 including the fourth neural network 1054 constructed in this manner is a trained model whose input is the Scheimpflug image acquired by the image acquisition unit 1010 (or its processed image data, etc.) and whose output is the cell regions in the input Scheimpflug image (for example, information indicating the range or position of the cell regions).

The second segmentation unit 1051A of this example executes processing for identifying cell regions from the Scheimpflug image of the subject's eye by using such a fourth inference model 1053 (fourth neural network 1054).

The cell evaluation information generation processing unit 1052 includes a processor that executes cell evaluation information generation processing for generating cell evaluation information, and generates cell evaluation information from the cell regions identified by the second segmentation unit 1051.
FIG. 11 shows an example configuration of the cell evaluation information generation processing unit 1052 for the case where the cell evaluation information generation processing of this example is executed using machine learning. The cell evaluation information generation processing unit 1052A of this example is configured to execute the cell evaluation information generation processing using a pre-constructed inference model 1055 (referred to as the fifth inference model). The fifth inference model 1055 includes a neural network 1056 (referred to as the fifth neural network) constructed by machine learning using training data that includes at least eye images (for example, Scheimpflug images of eyes, eye images acquired with other modalities, or both).

The data input to the fifth neural network 1056 is the output from the second segmentation unit 1051 or data based on it (for example, data indicating the range, position, distribution, etc. of the cell regions, or an anterior chamber region to which the cell region identification results have been attached), and the data output from the fifth neural network 1056 is cell evaluation information. That is, the cell evaluation information generation processing unit 1052A is configured to receive the cell region identification results from the second segmentation unit 1051 or data based on them, input the cell region identification results or the data based on them to the fifth neural network 1056 of the fifth inference model 1055, and obtain the output data (cell evaluation information) from the fifth neural network 1056.

The machine learning technique for constructing the fifth inference model 1055 (fifth neural network 1056) may be similar to the machine learning technique for constructing the third neural network 1039 of the cell evaluation information generation processing unit 1033 in FIG. 7. The training data used in the machine learning for constructing the fifth inference model 1055 (fifth neural network 1056) may also be similar to the training data used in the machine learning for constructing the third neural network 1039.

The fifth inference model 1055 including the fifth neural network 1056 is a trained model whose input is the cell region identification results from the second segmentation unit 1051 or data based on them, and whose output is cell evaluation information based on the input cell region identification results or data based on them.

The cell evaluation information generation processing unit 1052A of this example executes processing for generating cell evaluation information from the cell regions in the Scheimpflug image of the subject's eye by using such a fifth inference model 1055 (fifth neural network 1056).
The data processing unit 1060 shown in FIG. 12 is an example configuration of the data processing unit 1020 in FIG. 1. The data processing unit 1060 of this example includes a cell evaluation information generation processing unit 1061.

The cell evaluation information generation processing unit 1061 includes a processor that executes cell evaluation information generation processing for generating cell evaluation information, and generates cell evaluation information from the Scheimpflug image acquired by the image acquisition unit 1010.

FIG. 13 shows an example configuration of the cell evaluation information generation processing unit 1061 for the case where the cell evaluation information generation processing of this example is executed using machine learning. The cell evaluation information generation processing unit 1061A of this example is configured to execute the cell evaluation information generation processing using a pre-constructed inference model 1062 (referred to as the sixth inference model). The sixth inference model 1062 includes a neural network 1063 (referred to as the sixth neural network) constructed by machine learning using training data that includes at least eye images (for example, Scheimpflug images of eyes, eye images acquired with other modalities, or both).

In some exemplary aspects, the sixth neural network 1063 may include at least part of the first neural network 1035 in FIG. 4, at least part of the second neural network 1037 in FIG. 6, and at least part of the third neural network 1039 in FIG. 7. For example, the sixth neural network 1063 may be a neural network in which the first neural network 1035, the second neural network 1037, and the third neural network 1039 are arranged in series. A sixth neural network 1063 configured in this way has the function of identifying the anterior chamber region from a Scheimpflug image, the function of identifying cell regions from the anterior chamber region, and the function of generating cell evaluation information from the cell regions.

In some exemplary aspects, the sixth neural network 1063 may be trained by machine learning to generate cell evaluation information directly from a Scheimpflug image without identifying the anterior chamber region and/or the cell regions. The modes of the sixth neural network 1063 are not limited to these, and it may include a neural network to which any machine learning for determining cell evaluation information from a Scheimpflug image has been applied.
The data input to the sixth neural network 1063 is the output from the image acquisition unit 1010 or data based on it, and the data output from the sixth neural network 1063 is cell evaluation information. That is, the cell evaluation information generation processing unit 1061A is configured to receive the Scheimpflug image acquired by the image acquisition unit 1010 (and/or data based on this Scheimpflug image), input this Scheimpflug image or the data based on it to the sixth neural network 1063 of the sixth inference model 1062, and obtain the output data (cell evaluation information) from the sixth neural network 1063.

The machine learning technique for constructing the sixth inference model 1062 (sixth neural network 1063) may be similar to the machine learning technique for constructing the third neural network 1039 of the cell evaluation information generation processing unit 1033 in FIG. 7. The training data used in the machine learning for constructing the sixth inference model 1062 (sixth neural network 1063) may also be similar to the training data used in the machine learning for constructing the third neural network 1039.

The sixth inference model 1062 including the sixth neural network 1063 is a trained model whose input is the Scheimpflug image acquired by the image acquisition unit 1010 (and/or data based on this Scheimpflug image) and whose output is cell evaluation information based on the input Scheimpflug image (and/or the data based on this Scheimpflug image).

The cell evaluation information generation processing unit 1061A of this example executes processing for generating cell evaluation information from the Scheimpflug image of the subject's eye (and/or data based on this Scheimpflug image) by using such a sixth inference model 1062 (sixth neural network 1063).
The data processing unit 1070 shown in FIG. 14 is an example configuration of the data processing unit 1020 in FIG. 1. The data processing unit 1070 of this example includes a first segmentation unit 1071 and a cell evaluation information generation processing unit 1072.

The first segmentation unit 1071 includes a processor that executes a first segmentation for identifying the anterior chamber region, and is configured to identify the anterior chamber region from the Scheimpflug image acquired by the image acquisition unit 1010.

FIG. 15 shows an example configuration of the first segmentation unit 1071 for the case where the first segmentation of this example is executed using machine learning. The first segmentation unit 1071A of this example is configured to execute the first segmentation using a pre-constructed inference model 1073 (referred to as the seventh inference model). The seventh inference model 1073 includes a neural network 1074 (referred to as the seventh neural network) constructed by machine learning using training data that includes at least eye images (for example, Scheimpflug images of eyes, eye images acquired with other modalities, or both).

The machine learning technique for constructing the seventh inference model 1073 (seventh neural network 1074) may be similar to the machine learning technique for constructing the first neural network 1035 of the first segmentation unit 1031A in FIG. 4. The training data used in the machine learning for constructing the seventh inference model 1073 (seventh neural network 1074) may also be similar to the training data used in the machine learning for constructing the first neural network 1035. In some exemplary aspects, the seventh neural network 1074 may be identical or similar to the first neural network 1035, and the seventh inference model 1073 may be identical or similar to the first inference model 1034.

The seventh inference model 1073 including the seventh neural network 1074 is a trained model whose input is the Scheimpflug image acquired by the image acquisition unit 1010 (or its processed image data, etc.) and whose output is the anterior chamber region in the input Scheimpflug image (for example, information indicating the range or position of the anterior chamber region).

The first segmentation unit 1071A of this example executes processing for identifying the anterior chamber region in the Scheimpflug image of the subject's eye by using such a seventh inference model 1073 (seventh neural network 1074).
The cell evaluation information generation processing unit 1072 includes a processor that executes cell evaluation information generation processing for generating cell evaluation information, and generates cell evaluation information from the anterior chamber region identified by the first segmentation unit 1071.

FIG. 16 shows an example configuration of the cell evaluation information generation processing unit 1072 for the case where the cell evaluation information generation processing of this example is executed using machine learning. The cell evaluation information generation processing unit 1072A of this example is configured to execute the cell evaluation information generation processing using a pre-constructed inference model 1075 (referred to as the eighth inference model). The eighth inference model 1075 includes a neural network 1076 (referred to as the eighth neural network) constructed by machine learning using training data that includes at least eye images (for example, Scheimpflug images of eyes, eye images acquired with other modalities, or both).

In some exemplary aspects, the eighth neural network 1076 may include at least part of the second neural network 1037 in FIG. 6 and at least part of the third neural network 1039 in FIG. 7. For example, the eighth neural network 1076 may be a neural network in which the second neural network 1037 and the third neural network 1039 are arranged in series. An eighth neural network 1076 configured in this way has both the function of identifying cell regions from the anterior chamber region and the function of generating cell evaluation information from the cell regions.

In some exemplary aspects, the eighth neural network 1076 may be trained by machine learning to generate cell evaluation information directly from the anterior chamber region without identifying cell regions. The modes of the eighth neural network 1076 are not limited to these, and it may include a neural network to which any machine learning for determining cell evaluation information from the anterior chamber region has been applied.

The data input to the eighth neural network 1076 is the output from the first segmentation unit 1071 or data based on it, and the data output from the eighth neural network 1076 is cell evaluation information. That is, the cell evaluation information generation processing unit 1072A is configured to receive the anterior chamber region identified from the Scheimpflug image by the first segmentation unit 1071 (and/or data based on this anterior chamber region), input this anterior chamber region or the data based on it to the eighth neural network 1076 of the eighth inference model 1075, and obtain the output data (cell evaluation information) from the eighth neural network 1076.
 第8推論モデル1075(第8ニューラルネットワーク1076)を構築するための機械学習の手法は、図7の細胞評価情報生成処理部1033の第3ニューラルネットワーク1039を構築するための機械学習の手法と同様であってよい。また、第8推論モデル1075(第8ニューラルネットワーク1076)を構築するための機械学習に用いられる訓練データは、第3ニューラルネットワーク1039を構築するための機械学習に用いられる訓練データと同様であってよい。 The machine learning method for building the eighth inference model 1075 (the eighth neural network 1076) is similar to the machine learning method for building the third neural network 1039 of the cell evaluation information generation processing unit 1033 in FIG. can be Also, the training data used for machine learning to construct the eighth inference model 1075 (eighth neural network 1076) is the same as the training data used for machine learning to construct the third neural network 1039. good.
 第8ニューラルネットワーク1076を含む第8推論モデル1075は、第1セグメンテーション部1071により特定された前房領域(及び/又は、この前房領域に基づくデータ)を入力とし、且つ、入力された前房領域(及び/又は、この前房領域に基づくデータ)に基づく細胞評価情報を出力とした学習済みモデルである。 An eighth inference model 1075 including an eighth neural network 1076 receives as input the anterior chamber region identified by the first segmentation unit 1071 (and/or data based on this anterior chamber region) and It is a trained model that outputs cell evaluation information based on the region (and/or data based on this anterior chamber region).
 本例の細胞評価情報生成処理部1072Aは、このような第8推論モデル1075(第8ニューラルネットワーク1076)を用いることによって、被検眼のシャインプルーフ画像中の前房領域(及び/又は、この前房領域に基づくデータ)から細胞評価情報を生成する処理を実行する。 By using such an eighth inference model 1075 (eighth neural network 1076), the cell evaluation information generation processing unit 1072A of this example uses the anterior chamber region (and/or the anterior chamber region) in the Scheimpflug image of the eye to be examined. data based on the tuft region) to generate cell evaluation information.
A data processing unit 1080 shown in FIG. 17 is an example of the configuration of the data processing unit 1020 of FIG. 1. The data processing unit 1080 of this example includes a first segmentation unit 1081, a conversion processing unit 1082, and a cell evaluation information generation processing unit 1083.
The first segmentation unit 1081 has the same configuration and functions as the first segmentation unit 1031 of FIG. 3 (for example, the first segmentation unit 1031A of FIG. 4), and is configured to execute the first segmentation for identifying the anterior chamber region from the Scheimpflug image acquired by the image acquisition unit 1010.
The conversion processing unit 1082 converts the anterior chamber region identified by the first segmentation unit 1081 into data having a structure suited to the cell evaluation information generation processing executed by the cell evaluation information generation processing unit 1083. The cell evaluation information generation processing unit 1083 of this example, like the cell evaluation information generation processing unit 1072A of FIG. 16, is configured to execute the cell evaluation information generation processing using a neural network constructed by machine learning (the eighth neural network). The conversion processing unit 1082 is configured to execute conversion processing for converting the anterior chamber region identified from the Scheimpflug image by the first segmentation unit 1081 into image data having a structure corresponding to the input layer of the eighth neural network of the cell evaluation information generation processing unit 1083.
For example, the input layer of the eighth neural network (a convolutional neural network) of the cell evaluation information generation processing unit 1083 may be configured to accept data of a predetermined structure (form, format). This predetermined data structure may be, for example, a predetermined image size (for example, the number of pixels in the vertical direction and the number of pixels in the horizontal direction), a predetermined image shape (for example, square or rectangular), and so on. On the other hand, the image size and image shape of the anterior chamber region identified by the first segmentation unit 1081 vary depending on the specifications of the ophthalmologic apparatus, the conditions and settings at the time of imaging, individual differences in the dimensions and morphology of the subject's eye, and the like. The conversion processing unit 1082 converts the structure (for example, the image size and/or image shape) of the anterior chamber region identified by the first segmentation unit 1081 into a structure that the input layer of the eighth neural network of the cell evaluation information generation processing unit 1083 can accept.
The image size conversion may be executed using any known image size conversion technique, and may include, for example, processing for dividing the anterior chamber region identified by the first segmentation unit 1081 into a plurality of partial images of an image size corresponding to the input layer, or processing for resizing the anterior chamber region identified by the first segmentation unit 1081 into a single image of an image size corresponding to the input layer. The image shape conversion may be executed using any known image deformation technique. Conversion of other data structures may be handled in the same manner.
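As a non-authoritative illustration of the two size-conversion strategies just described (resizing versus tiling), the following Python sketch uses NumPy and Pillow; the 256 x 256 input size is an assumed example value, not a value specified in this disclosure, and the region is assumed to be a 2-D 8-bit grayscale array.

import numpy as np
from PIL import Image

INPUT_SIZE = (256, 256)  # (width, height) accepted by the hypothetical input layer

def resize_to_input(region: np.ndarray) -> np.ndarray:
    """Resize the whole region to a single image of the network's input size."""
    img = Image.fromarray(region)  # assumes a 2-D uint8 array
    return np.asarray(img.resize(INPUT_SIZE, Image.BILINEAR))

def split_into_tiles(region: np.ndarray) -> list[np.ndarray]:
    """Split the region into input-sized tiles, zero-padding the borders."""
    tw, th = INPUT_SIZE
    h, w = region.shape[:2]
    ph, pw = -h % th, -w % tw                  # padding needed to reach full tiles
    padded = np.pad(region, ((0, ph), (0, pw)))
    return [padded[y:y + th, x:x + tw]
            for y in range(0, padded.shape[0], th)
            for x in range(0, padded.shape[1], tw)]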
The cell evaluation information generation processing unit 1083 has the same configuration and functions as the cell evaluation information generation processing unit 1072 of FIG. 14 (for example, the cell evaluation information generation processing unit 1072A of FIG. 16), and is configured to execute the cell evaluation information generation processing for generating cell evaluation information from the data obtained by processing, with the conversion processing unit 1082, the anterior chamber region identified by the first segmentation unit 1081.
Thus, the data processing unit 1080 of this example may have a configuration in which the conversion processing unit 1082 is arranged between the first segmentation unit 1071 and the cell evaluation information generation processing unit 1072 of the data processing unit 1070 of FIG. 14. However, the configuration of the data processing unit 1080 of this example is not limited to this.
The description so far has mainly covered several examples of the data processing unit 1020 that include an inference model (neural network) constructed using machine learning. However, the data processing unit 1020 is not limited to such machine-learning-based configurations. The data processing unit 1020 according to the present disclosure may be implemented by a machine-learning-based configuration alone, by a combination of a machine-learning-based configuration and a non-machine-learning-based configuration, or by a non-machine-learning-based configuration alone.
Several examples of the data processing unit 1020 implemented only with non-machine-learning-based configurations are described below. As for aspects of the data processing unit 1020 that combine machine-learning-based and non-machine-learning-based configurations, a person skilled in the art will be able to understand them on the basis of the examples of machine-learning-based configurations described above and the examples of purely non-machine-learning-based configurations of the data processing unit 1020 described below.
A data processing unit 1090 shown in FIG. 18 is an example of the configuration of the data processing unit 1020 of FIG. 1 and has a non-machine-learning-based configuration. The data processing unit 1090 of this example includes a first analysis processing unit 1091, a second analysis processing unit 1092, and a third analysis processing unit 1093.
The first analysis processing unit 1091 includes a processor that executes the first segmentation for identifying the anterior chamber region, and is configured to apply predetermined analysis processing (referred to as first analysis processing) to the Scheimpflug image acquired by the image acquisition unit 1010 (and/or processed image data derived from it) to identify the anterior chamber region in the Scheimpflug image.
The first analysis processing may include any known segmentation for identifying the anterior chamber region in a Scheimpflug image. For example, segmentation for identifying the anterior chamber region includes segmentation for identifying the image region corresponding to the cornea (in particular, the posterior corneal surface) and segmentation for identifying the image region corresponding to the crystalline lens (in particular, the anterior lens surface). The image region corresponding to the cornea is called the corneal region, the image region corresponding to the posterior corneal surface is called the posterior corneal surface region, the image region corresponding to the crystalline lens is called the lens region, and the image region corresponding to the anterior lens surface is called the anterior lens surface region.
The segmentation of the posterior corneal surface region may include any known segmentation. In the segmentation of the posterior corneal surface region, artifacts in the Scheimpflug image, saturation of pixel values, and the like become problems. To resolve these problems, for example, the configuration shown in FIG. 2C can be adopted. That is, by combining the imaging method using the first imaging system 1014 and the second imaging system 1015 with the Scheimpflug image selection method using the image selection unit 1021, a Scheimpflug image free of both artifacts and saturation can be selected, and the posterior corneal surface region can be identified from this Scheimpflug image.
The segmentation of the anterior lens surface region may include any known segmentation. In the segmentation of the anterior lens surface region, one problem is that the rendering state of the Scheimpflug image (how the Scheimpflug image appears) changes with the state of the pupil of the subject's eye (for example, a mydriatic state, a non-mydriatic state, a small-pupil eye, and so on). For example, when the subject's eye is in a non-mydriatic state or is a small-pupil eye, the imaged range of the crystalline lens becomes smaller than when the subject's eye is in a mydriatic state. To resolve such problems, processing for equalizing the rendering state of Scheimpflug images can be applied, such as processing for estimating, on the basis of the anterior lens surface region depicted in the Scheimpflug image, the position and shape of the non-imaged part of the anterior lens surface (the part covered by the pupil). The processing for equalizing the rendering state of Scheimpflug images may be executed on a machine learning basis or on a non-machine-learning basis. The processing for estimating the position and shape of the anterior lens surface may include, for example, any known extrapolation processing.
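A minimal sketch, assuming a simple quadratic surface model, of how the hidden part of the anterior lens surface might be estimated by extrapolation; the sample coordinates are illustrative only, and the disclosure itself requires only some known extrapolation processing.

import numpy as np

# (x, z) points of the anterior lens surface actually visible in the image
visible_x = np.array([-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5])
visible_z = np.array([0.45, 0.20, 0.05, 0.00, 0.05, 0.20, 0.45])

coeffs = np.polyfit(visible_x, visible_z, deg=2)  # fit z = a*x^2 + b*x + c

# Extrapolate to positions hidden behind the pupil margin
hidden_x = np.array([-2.5, -2.0, 2.0, 2.5])
hidden_z = np.polyval(coeffs, hidden_x)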
When the image acquisition unit 1010 collects a series of Scheimpflug images by slit scanning, images with problems and images without problems may be mixed together, and the degree of the problems may vary between images. For example, images with and without artifacts or saturation may be mixed, or artifacts in various different states (position, size, shape, and so on) may be present in several of the images. These phenomena may adversely affect the quality (for example, stability, robustness, reproducibility, accuracy, precision, and so on) of the processing executed by the data processing unit 1090. In some exemplary aspects, measures can be taken to prevent these phenomena from occurring, or to reduce the adverse effects caused by them. An example of the former measure is combining the imaging method using the first imaging system 1014 and the second imaging system 1015 with the Scheimpflug image selection method using the image selection unit 1021. Examples of the latter measure include image correction, noise removal, noise reduction, and adjustment of image parameters.
The second analysis processing unit 1092 includes a processor that executes the second segmentation for identifying cell regions, and is configured to apply second analysis processing to the anterior chamber region identified from the Scheimpflug image by the first analysis processing unit 1091 to identify cell regions.
In some exemplary aspects, the second analysis processing unit 1092 may be configured to identify cell regions on the basis of the value of each pixel in the anterior chamber region (for example, at least one of the luminance value, R value, G value, and B value). In some exemplary aspects, the second analysis processing unit 1092 may be configured to apply segmentation for identifying cell regions to the anterior chamber region. This segmentation is executed, for example, according to a program created on the basis of the standard morphology (for example, dimensions, shape, and so on) of inflammatory cells (cell regions). In some exemplary aspects, the second analysis processing unit 1092 may be configured to identify cell regions by an at least partial combination of these two techniques.
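The pixel-value-based approach described above might, as one non-authoritative example, be sketched as follows: threshold the pixels inside the anterior chamber region and keep connected components whose size matches an assumed standard morphology of inflammatory cells. The threshold and size bounds are illustrative assumptions, not values from this disclosure.

import numpy as np
from scipy import ndimage

def detect_cell_regions(chamber: np.ndarray, mask: np.ndarray,
                        threshold: int = 60,
                        min_px: int = 2, max_px: int = 30):
    """Return centroids of bright blobs inside the anterior chamber mask."""
    bright = (chamber > threshold) & mask   # candidate bright pixels
    labels, n = ndimage.label(bright)       # connected-component labeling
    centroids = []
    for i in range(1, n + 1):
        component = (labels == i)
        size = np.count_nonzero(component)
        if min_px <= size <= max_px:        # keep only plausible cell sizes
            centroids.append(ndimage.center_of_mass(component))
    return centroids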
The measures that can be taken when the image acquisition unit 1010 collects a series of Scheimpflug images by slit scanning may be the same as those described for the first analysis processing unit 1091. In addition, considering that a cell region is generally a minute image region, measures may be taken to distinguish cell regions from minute artifacts. For example, by executing processing for removing artifacts (ghosts and the like), artifacts can be prevented from being erroneously detected during the detection of cell regions.
The third analysis processing unit 1093 includes a processor that executes the cell evaluation information generation processing for generating cell evaluation information, and is configured to apply third analysis processing to the cell regions identified from the anterior chamber region of the Scheimpflug image by the second analysis processing unit 1092 to generate cell evaluation information.
As described above, the cell evaluation information may be any evaluation information relating to inflammatory cells; for example, it may include information representing the state of the inflammatory cells (for example, any parameter such as density, number, position, or distribution), or it may include evaluation information generated on the basis of information on predetermined parameters relating to the state of the inflammatory cells.
In some exemplary aspects, the third analysis processing unit 1093 can obtain the density, number, positions, distribution, and so on of one or more cell regions identified by the second analysis processing unit 1092.
The processing for obtaining the density of inflammatory cells includes, for example, processing for setting an image region of predetermined dimensions (for example, an image region one millimeter square) and processing for counting the number of cell regions detected by the second analysis processing unit 1092 within the set image region. Here, the dimensions of the image region (dimensions in real space, such as "one millimeter") are defined, for example, on the basis of the specifications of the optical system of the ophthalmologic apparatus 1000 (for example, design data of the optical system and/or actual measurement data of the optical system), and are typically defined as the correspondence between pixels and dimensions in real space (for example, the dot pitch). The cell evaluation information may include the information on the density of inflammatory cells obtained in this way, or may include evaluation information derived from this density information. This evaluation information may include, for example, an evaluation result using the classification criteria for uveitic disease proposed by the SUN Working Group. These classification criteria define grades according to the number of inflammatory cells present in one field (a field one millimeter square), that is, the density (concentration) of inflammatory cells: grade "0" is fewer than 1 cell, grade "0.5+" is 1 to 5 cells, grade "1+" is 6 to 15 cells, grade "2+" is 16 to 25 cells, grade "3+" is 26 to 50 cells, and grade "4+" is more than 50 cells. The grade divisions of these classification criteria may be made finer or coarser. Cell evaluation information may also be generated on the basis of other classification criteria.
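The grade boundaries quoted above can be expressed directly as a lookup function; the following sketch is a straightforward transcription of those boundaries, with only the function name invented for convenience.

def sun_cell_grade(cells_per_field: int) -> str:
    """Map a cell count in one 1 mm x 1 mm field to the SUN Working Group grade."""
    if cells_per_field < 1:
        return "0"
    if cells_per_field <= 5:
        return "0.5+"
    if cells_per_field <= 15:
        return "1+"
    if cells_per_field <= 25:
        return "2+"
    if cells_per_field <= 50:
        return "3+"
    return "4+"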
In some exemplary aspects, the data processing unit 1090 (third analysis processing unit 1093) may be configured to execute processing for identifying a partial region (for example, an image region one millimeter square) of the anterior chamber region identified from the Scheimpflug image by the first analysis processing unit 1091, processing for obtaining the number of cell regions belonging to this partial region, and processing for calculating the density of inflammatory cells on the basis of this number and the dimensions of the partial region. Here, the data processing unit 1090 may be configured so that, from among the cell regions detected from the entire anterior chamber region by the second analysis processing unit 1092, those located within the partial region are selected, and the third analysis processing unit 1093 obtains the density on the basis of the selected cell regions. Alternatively, the data processing unit 1090 may be configured so that the second analysis processing unit 1092 analyzes the partial region to identify cell regions, and the third analysis processing unit 1093 obtains the density on the basis of the cell regions identified from the partial region.
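As a rough illustration of this partial-region variant, the following sketch selects detected cell centroids falling within a one-millimeter-square field and derives a density; the dot pitch value is an assumed example that would in practice come from the optical system's design or measurement data.

DOT_PITCH_MM = 0.01  # assumed: 0.01 mm per pixel (from optical system data)

def field_density(centroids, field_origin_px, field_mm=1.0):
    """Count cells in a square field and return cells per square millimetre."""
    side_px = field_mm / DOT_PITCH_MM
    y0, x0 = field_origin_px
    inside = [(y, x) for y, x in centroids
              if y0 <= y < y0 + side_px and x0 <= x < x0 + side_px]
    return len(inside) / (field_mm * field_mm)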
The processing for obtaining the number of inflammatory cells includes, for example, processing for counting the number of cell regions detected by the second analysis processing unit 1092. The cell evaluation information may include the information on the number of inflammatory cells obtained in this way, or may include evaluation information derived from this number. For example, the average density of inflammatory cells over the entire anterior chamber region can be obtained by dividing the number of cell regions detected from the entire anterior chamber region by the dimensions of the anterior chamber region (for example, its area or volume). The cell evaluation information may also include an evaluation result (for example, a grade) based on the number of cell regions detected from the entire anterior chamber region, or the number of cell regions in a partial region of the anterior chamber region and/or an evaluation result based on that number.
The processing for obtaining the positions of inflammatory cells may include, for example, processing for identifying the positions of the cell regions detected by the second analysis processing unit 1092. The position of a cell region may be expressed, for example, as coordinates in the coordinate system defined for the Scheimpflug image, or as a relative position (for example, a distance and a direction) with respect to a predetermined image region (reference region) depicted in the Scheimpflug image. This reference region may be, for example, the corneal region, the posterior corneal surface region, the lens region, the anterior lens surface region, or an image region corresponding to the axis of the eye (for example, the straight line connecting the corneal apex and the apex of the anterior lens surface). The cell evaluation information may include the information on the positions of inflammatory cells obtained in this way, or may include evaluation information derived from this positional information. For example, the cell evaluation information may include information representing the distribution of inflammatory cells (the distribution of a plurality of cell regions), an evaluation result (for example, a grade) based on the positions of one or more inflammatory cells, or an evaluation result (for example, a grade) based on the positions (distribution) of a plurality of inflammatory cells.
The operation of the ophthalmologic apparatus 1000 will now be described. The operations described below are merely examples. For example, any matter according to the present disclosure, any matter according to the documents cited in the present disclosure, any matter according to the technical field to which the embodiments of the present disclosure belong, and any matter according to technical fields related to the embodiments of the present disclosure can be combined with the operation examples below.
FIG. 19 shows a first operation example of the ophthalmologic apparatus 1000. It is assumed that the various operations executed before imaging by the ophthalmologic apparatus 1000 (preparatory operations) have been completed. The preparatory operations include adjusting the table on which the ophthalmologic apparatus 1000 is installed, adjusting the chair used by the subject, adjusting the face support (chin rest, forehead rest, and so on) of the ophthalmologic apparatus 1000, aligning the ophthalmologic apparatus 1000 with the subject's eye, and adjusting the slit light (for example, adjusting its intensity, width, length, and orientation).
Upon receiving an instruction to start imaging, the ophthalmologic apparatus 1000 acquires a Scheimpflug image of the subject's eye with the image acquisition unit 1010 (S1).
Further, the ophthalmologic apparatus 1000 uses the data processing unit 1020 to generate, on the basis of the Scheimpflug image acquired in step S1, inflammation state information indicating the inflammation state of the subject's eye (S2).
The number of Scheimpflug images acquired in step S1 of this operation example may be set in advance, and may be one, or two or more (for example, a series of Scheimpflug images collected by slit scanning). In step S2, one of the various data processing operations described above is executed according to the number of Scheimpflug images acquired in step S1. This data processing may be machine-learning-based processing, non-machine-learning-based processing, or a combination of the two.
When the ophthalmologic apparatus 1000 includes two or more imaging systems, a plurality of Scheimpflug images acquired by these imaging systems can be processed. For example, when the ophthalmologic apparatus 1000 includes the first imaging system 1014 and the second imaging system 1015 as shown in FIG. 2C, two or more Scheimpflug images acquired by the first imaging system 1014 and the second imaging system 1015 can be processed by the data processing unit 1020A (the image selection unit 1021 and the inflammation state information generation unit 1022). The data processing executed by the data processing unit 1020A may be machine-learning-based processing, non-machine-learning-based processing, or a combination of the two.
The ophthalmologic apparatus 1000 can cause a display device to display the Scheimpflug image acquired in step S1 and/or the inflammation state information generated in step S2. The display device may be an element of the ophthalmologic apparatus 1000 or an external device connected to the ophthalmologic apparatus 1000.
Several examples of information display executable by the ophthalmologic apparatus 1000 are described below. The modes of information display are not limited to these examples, and at least two of these examples can be combined at least partially.
In a first example of information display, the ophthalmologic apparatus 1000 causes the display device to display, as they are, the Scheimpflug image acquired in step S1 and/or the inflammation state information generated in step S2. The ophthalmologic apparatus 1000 may display other information (referred to as additional information) together with the Scheimpflug image and/or the inflammation state information. The additional information may be any information useful, together with the Scheimpflug image and/or the inflammation state information, for the diagnosis and treatment of the subject's eye.
In a second example of information display, the ophthalmologic apparatus 1000 generates, from the Scheimpflug image, an image that simulates an eye image acquired with a conventional slit lamp microscope (referred to as a slit lamp image), and causes the display device to display the generated simulated image. This makes it possible to provide an image imitating the slit lamp images that have conventionally been used for observing the inflammation state of the subject's eye and with which many physicians are familiar.
The processing for generating the simulated image from the Scheimpflug image is formed by machine-learning-based processing and/or non-machine-learning-based processing.
The machine-learning-based processing is executed, for example, using a neural network constructed by machine learning using training data that includes a plurality of pairs of a Scheimpflug image and a slit lamp image. This neural network includes a convolutional neural network, which is configured to receive a Scheimpflug image as input and output a simulated image.
The non-machine-learning-based processing may include processing that transforms the appearance of the image, such as processing for generating simulated blur, color conversion, and image quality conversion. An exemplary aspect of the non-machine-learning-based processing includes processing for constructing a three-dimensional image (for example, an 8-bit grayscale volume) from a series of Scheimpflug images collected by slit scanning, processing for reducing the maximum of the pixel value range (the number of gray levels; for example, reducing 256 levels to 10 levels), processing for setting a region of interest (for example, a rectangular parallelepiped region of predetermined dimensions) in the gray-level-converted three-dimensional image, and processing for constructing a front-view image (en-face image) of the set region of interest (for example, by maximum intensity projection (MIP)).
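The four processing steps just listed might be sketched as follows; the array shapes and the region-of-interest coordinates are illustrative assumptions.

import numpy as np

def simulated_slit_lamp_image(scheimpflug_series, roi):
    """scheimpflug_series: iterable of 2-D uint8 frames; roi: tuple of slices."""
    volume = np.stack(list(scheimpflug_series), axis=0)  # (scan, y, x) 8-bit volume
    # Reduce 256 gray levels to 10 levels (values 0..9)
    reduced = (volume.astype(np.uint16) * 10 // 256).astype(np.uint8)
    cuboid = reduced[roi]                                # cuboid region of interest
    return cuboid.max(axis=0)                            # en-face MIP along the scan axis

# Example: project the central 100 scans over the full frame
# enface = simulated_slit_lamp_image(frames, (slice(50, 150), slice(None), slice(None)))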
The simulated image may be an image that represents, in focus over a wide range, the same region as the Scheimpflug image, or an image that represents a partial region of the region represented by the Scheimpflug image (for example, one field, that is, a field one millimeter square, which is the evaluation range in the classification criteria for uveitic disease proposed by the SUN Working Group).
The ophthalmologic apparatus 1000 can, for example, highlight a portion of interest in the simulated image. Examples of the portion of interest include an image region corresponding to inflammatory cells, an image region corresponding to anterior chamber flare, and an image region corresponding to opacification of the crystalline lens.
In a third example of information display, the ophthalmologic apparatus 1000 creates a map representing the inflammation state of the subject's eye (referred to as an inflammation state map) and causes the display device to display it.
Examples of the inflammation state map include an inflammatory cell map representing the positions (distribution) of inflammatory cells in the anterior chamber, and an inflammatory cell density map (inflammatory cell count map) representing the distribution of the density (or number) of inflammatory cells in the anterior chamber. The processing for creating these maps relating to inflammatory cells includes, for example, processing for identifying, from each of a series of Scheimpflug images collected by slit scanning, the image regions corresponding to inflammatory cells (cell regions) (the second segmentation); processing for obtaining the position of each identified cell region (for example, two-dimensional coordinates in the coordinate system defined for the Scheimpflug image, or three-dimensional coordinates in the coordinate system defined for a three-dimensional image based on the series of Scheimpflug images); and processing for creating the map on the basis of the obtained positions of the cell regions.
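As one non-authoritative way to realize the map-creation step, the following sketch bins previously obtained three-dimensional cell positions onto an en-face grid with a two-dimensional histogram; the grid resolution is an assumed example value.

import numpy as np

def cell_density_map(cell_positions_xyz, x_range, y_range, bins=(50, 50)):
    """cell_positions_xyz: (N, 3) array of cell coordinates in volume space."""
    xs = cell_positions_xyz[:, 0]
    ys = cell_positions_xyz[:, 1]
    counts, _, _ = np.histogram2d(xs, ys, bins=bins, range=[x_range, y_range])
    return counts  # en-face map of cell counts per bin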
The ophthalmologic apparatus 1000 can display the inflammation state map together with the Scheimpflug image and/or the inflammation state information. For example, the ophthalmologic apparatus 1000 can display a front-view image based on a series of Scheimpflug images collected by slit scanning together with an inflammation state map generated on the basis of the same series of Scheimpflug images. As specific examples, the inflammation state map can be displayed superimposed on the front-view image, or the front-view image and the inflammation state map can be displayed side by side.
FIG. 20 shows a second operation example of the ophthalmologic apparatus 1000. This operation example is executed to acquire cell evaluation information. Unless otherwise noted, any matter described with respect to the operation example of FIG. 19 can be combined with this operation example.
First, the ophthalmologic apparatus 1000 acquires a Scheimpflug image of the subject's eye with the image acquisition unit 1010 (S11).
Next, the ophthalmologic apparatus 1000 uses the data processing unit 1020 to apply at least one of the first segmentation, the second segmentation, and the cell evaluation information generation processing described above to the Scheimpflug image acquired in step S11 (S12).
When the ophthalmologic apparatus 1000 applies the first segmentation, the second segmentation, and the cell evaluation information generation processing to the Scheimpflug image, for example, the data processing unit 1030 of FIG. 3 or the data processing unit 1040 of FIG. 8 is employed as the data processing unit 1020 of the ophthalmologic apparatus 1000. Further, the data processing unit 1020 of this example may include at least one of the first segmentation unit 1031A of FIG. 4, the second segmentation unit 1032A of FIG. 6, and the cell evaluation information generation processing unit 1033A of FIG. 7.
In another example in which the ophthalmologic apparatus 1000 applies the first segmentation, the second segmentation, and the cell evaluation information generation processing to the Scheimpflug image, the data processing unit 1090 of FIG. 18 is employed as the data processing unit 1020 of the ophthalmologic apparatus 1000.
When the ophthalmologic apparatus 1000 applies the second segmentation and the cell evaluation information generation processing to the Scheimpflug image (in other words, when the first segmentation is not executed), for example, the data processing unit 1050 of FIG. 9 is employed as the data processing unit 1020 of the ophthalmologic apparatus 1000. Further, the data processing unit 1020 of this example may include the second segmentation unit 1051A of FIG. 10 and/or the cell evaluation information generation processing unit 1052A of FIG. 11.
When the ophthalmologic apparatus 1000 applies the cell evaluation information generation processing to the Scheimpflug image (in other words, when neither the first segmentation nor the second segmentation is executed), for example, the data processing unit 1060 of FIG. 12 is employed as the data processing unit 1020 of the ophthalmologic apparatus 1000. Further, the data processing unit 1020 of this example may include the cell evaluation information generation processing unit 1061A of FIG. 13.
When the ophthalmologic apparatus 1000 applies the first segmentation and the cell evaluation information generation processing to the Scheimpflug image (in other words, when the second segmentation is not executed), for example, the data processing unit 1070 of FIG. 14 or the data processing unit 1080 of FIG. 17 is employed as the data processing unit 1020 of the ophthalmologic apparatus 1000. Further, the data processing unit 1020 of this example may include the first segmentation unit 1071A of FIG. 15 and/or the cell evaluation information generation processing unit 1072A of FIG. 16.
The configurations that can be employed to execute step S12 are not limited to these. For example, the data processing unit 1020 of the ophthalmologic apparatus 1000 according to some exemplary aspects may be configured to be able to execute, among the first segmentation, the second segmentation, and the cell evaluation information generation processing, only the first segmentation, only the second segmentation, or only the first segmentation and the second segmentation.
Next, the ophthalmologic apparatus 1000 generates cell evaluation information with the data processing unit 1020 (S13). When the cell evaluation information generation processing has been executed in step S12 and all the cell evaluation information to be acquired in the present examination has already been acquired in step S12, step S13 need not be executed (in other words, step S13 is included in step S12).
Step S12 and/or step S13 may partially include operations by the user. Examples of operations by the user include an operation for designating the anterior chamber region in the Scheimpflug image, an operation for designating a cell region in the Scheimpflug image, an operation for designating a cell region in the anterior chamber region, an operation for creating cell evaluation information from the anterior chamber region, an operation for creating cell evaluation information from cell regions, an operation for editing (correcting) the anterior chamber region identified by the first segmentation, an operation for editing (correcting) the cell regions identified by the second segmentation, an operation for editing (correcting) the cell evaluation information generated by the cell evaluation information generation processing, and an operation for creating other cell evaluation information from the cell evaluation information generated by the cell evaluation information generation processing.
These operations are performed using a user interface. The user interface includes a display device and an operation device. The ophthalmologic apparatus 1000 may include at least a part of the user interface.
Next, the ophthalmologic apparatus 1000 causes the display device to display the Scheimpflug image acquired in step S11; the information acquired in step S12 (for example, information based on the Scheimpflug image, the anterior chamber region, information based on the anterior chamber region, cell regions, information relating to cell regions, and cell evaluation information); the cell evaluation information generated in step S13; and so on (S14).
FIG. 21 shows a third operation example of the ophthalmologic apparatus 1000. This operation example is executed to acquire cell evaluation information. Unless otherwise noted, any matter described with respect to the operation example of FIG. 19 and/or any matter described with respect to the operation example of FIG. 20 can be combined with this operation example.
The data processing unit 1020 of the ophthalmologic apparatus 1000 according to this example includes a second segmentation unit that includes a convolutional neural network (for example, the second segmentation unit 1032A or the second segmentation unit 1051A), and an element that performs data structure conversion matched to this convolutional neural network (for example, the conversion processing unit 1042 or the conversion processing unit 1082).
First, the ophthalmologic apparatus 1000 acquires a Scheimpflug image of the subject's eye with the image acquisition unit 1010 (S21).
Next, the ophthalmologic apparatus 1000 uses the data processing unit 1020 to apply the first segmentation for identifying the anterior chamber region from the Scheimpflug image (anterior chamber segmentation) to the Scheimpflug image acquired in step S21 (S22). The anterior chamber segmentation technique in this step may be any technique, and may include one or both of data processing by the data processing unit 1020 and operations by the user. The anterior chamber segmentation by the data processing unit 1020 may be any of the various techniques described above.
Next, the data processing unit 1020 removes ghosts within the anterior chamber region identified by the anterior chamber segmentation in step S22 (S23). This prevents the inconvenience of a ghost being erroneously detected as a cell region in the cell segmentation in step S25, described below.
Next, the data processing unit 1020 converts the anterior chamber region from which the ghosts were removed in step S23 into image data having a structure corresponding to the input layer of the convolutional neural network used in the next step, S25 (S24).
In some exemplary aspects, the order of the ghost removal from the anterior chamber region (step S23 in this example) and the data structure conversion of the anterior chamber region (step S24 in this example) may be reversed.
Next, the data processing unit 1020 inputs the image data acquired in step S24 (the converted data of the anterior chamber region) to the convolutional neural network of the second segmentation unit configured to execute cell segmentation (S25). The cell regions within the anterior chamber region acquired in step S22 are thereby identified.
Next, the data processing unit 1020 evaluates the density of inflammatory cells in the anterior chamber of the subject's eye on the basis of the result of the cell region identification executed in step S25 (S26). The processing of this step is executed, for example, by any of the various cell evaluation information generation processing units described above.
Next, the data processing unit 1020 generates cell evaluation information on the basis of the result of the evaluation executed in step S26 (S27). The processing of this step is executed, for example, by any of the various cell evaluation information generation processing units described above.
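Steps S22 through S27 might be chained as in the following rough sketch; the three stub functions are hypothetical placeholders for the anterior chamber segmentation, ghost removal, and CNN-based cell segmentation stages, and resize_to_input, field_density, and sun_cell_grade refer to the illustrative helpers sketched earlier in this section. This is an assumed orchestration, not the actual control flow of the disclosure.

import numpy as np

def segment_anterior_chamber(image):           # S22 stub: first segmentation
    return np.ones_like(image, dtype=bool)

def remove_ghosts(image, mask):                # S23 stub: artifact removal
    return np.where(mask, image, 0)

def run_cell_segmentation_cnn(net_input):      # S25 stub: CNN cell detection
    return [(120.0, 80.0), (40.0, 200.0)]      # dummy cell centroids (y, x)

def evaluate_scheimpflug_image(image):
    mask = segment_anterior_chamber(image)             # S22
    cleaned = remove_ghosts(image, mask)               # S23
    net_input = resize_to_input(cleaned)               # S24 (sketched earlier)
    centroids = run_cell_segmentation_cnn(net_input)   # S25
    density = field_density(centroids, (0, 0))         # S26 (sketched earlier)
    return density, sun_cell_grade(round(density))     # S27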
Next, the ophthalmologic apparatus 1000 causes the display device to display the information acquired in steps S21 to S27 (S28).
Examples of the information displayed in this step include the Scheimpflug image acquired in step S21; the information acquired in step S22 (for example, information based on the Scheimpflug image, the anterior chamber region, and information based on the anterior chamber region); the information acquired in step S23 (for example, the ghost-removed anterior chamber region and information based on it); the information acquired in step S24 (for example, the reshaped anterior chamber region and information based on it); the information acquired in step S25 (for example, the cell regions and information based on them); the information acquired in step S26 (for example, the density of inflammatory cells, density evaluation information, and information based on the density evaluation information); and the information acquired in step S27 (for example, the cell evaluation information and information based on it).
In some exemplary aspects, in step S27, the inflammatory cell density value used in the evaluation proposed by the SUN Working Group (the number of inflammatory cells present in a region one millimeter square) and/or the grade corresponding to this density value is obtained, and further, in step S28, at least the density value and/or the grade obtained in step S27 is displayed.
FIG. 22 shows a fourth operation example of the ophthalmologic apparatus 1000. This operation example is executed to acquire cell evaluation information. Unless otherwise noted, at least one of the matters described with respect to the operation examples of FIG. 19, FIG. 20, and FIG. 21 can be combined with this operation example.
The data processing unit 1020 of the ophthalmologic apparatus 1000 according to this example includes a cell evaluation information generation processing unit that includes a convolutional neural network (for example, the cell evaluation information generation processing unit 1033A, 1052A, 1061A, or 1072A), and an element that performs data structure conversion matched to this convolutional neural network (for example, the conversion processing unit 1042 or the conversion processing unit 1082).
First, the ophthalmologic apparatus 1000 acquires a Scheimpflug image of the subject's eye with the image acquisition unit 1010 (S31).
Next, the ophthalmologic apparatus 1000 uses the data processing unit 1020 to apply the first segmentation for identifying the anterior chamber region from the Scheimpflug image (anterior chamber segmentation) to the Scheimpflug image acquired in step S31 (S32). As in step S22 of the third operation example, the anterior chamber segmentation technique in this step may be any technique.
Next, the data processing unit 1020 removes ghosts within the anterior chamber region identified by the anterior chamber segmentation in step S32 (S33). This prevents the inconvenience of ghosts being reflected in the evaluation result in the cell evaluation information generation processing in step S35, described below.
Next, the data processing unit 1020 converts the anterior chamber region from which the ghosts were removed in step S33 into image data having a structure corresponding to the input layer of the convolutional neural network used in the next step, S35 (S34).
In some exemplary aspects, the order of the ghost removal from the anterior chamber region (step S33 in this example) and the data structure conversion of the anterior chamber region (step S34 in this example) may be reversed.
 次に、データ処理部1020は、ステップS34で取得された画像データ(前房領域の変換データ)を、細胞評価情報生成処理を実行するように構成された細胞評価情報生成処理部の畳み込みニューラルネットワークに入力する。これにより、ステップS32で取得された前房領域に基づく細胞評価情報が生成される(S35)。 Next, the data processing unit 1020 converts the image data (transformed data of the anterior chamber region) acquired in step S34 into the convolutional neural network of the cell evaluation information generation processing unit configured to execute the cell evaluation information generation processing. to enter. Thereby, cell evaluation information based on the anterior chamber region acquired in step S32 is generated (S35).
 Next, the ophthalmologic apparatus 1000 causes a display device to display the cell evaluation information generated in step S35 (S36).
 In some exemplary aspects, step S35 determines the inflammatory cell density value used in the evaluation proposed by the SUN Working Group (the number of inflammatory cells present in a region measuring 1 millimeter square) and/or the grade corresponding to this density value, and step S36 then displays at least the density value and/or the grade determined in step S35.
 Also, in some exemplary aspects, step S35 determines the inflammatory cell density value used in the evaluation proposed by the SUN Working Group (the number of inflammatory cells present in a region measuring 1 millimeter square). The data processing unit 1020 of this aspect refers to predetermined data representing the correspondence between density values and grades to obtain the grade corresponding to the density value determined in step S35. In step S36 of this aspect, the grade obtained by the data processing unit 1020 (and the density value obtained in step S35 of this aspect) is displayed.
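 The predetermined density-to-grade data can be as simple as a threshold table. The sketch below uses the commonly cited SUN Working Group grades for anterior chamber cells (cell counts in a 1 mm by 1 mm field); these thresholds are reproduced here as an assumption for illustration and should be verified against the criteria actually deployed.

```python
# Illustrative density-to-grade lookup for S35/S36.  Thresholds below
# are assumed from the commonly cited SUN Working Group grading of
# anterior chamber cells (cells per 1 mm x 1 mm field).
SUN_CELL_GRADES = [
    (1, "0"),      # fewer than 1 cell
    (6, "0.5+"),   # 1-5 cells
    (16, "1+"),    # 6-15 cells
    (26, "2+"),    # 16-25 cells
    (51, "3+"),    # 26-50 cells
]

def cell_grade(cells_per_field: float) -> str:
    for upper, grade in SUN_CELL_GRADES:
        if cells_per_field < upper:
            return grade
    return "4+"    # more than 50 cells
```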
 In addition to the cell evaluation information generated in step S35 and/or information based on it, the ophthalmologic apparatus 1000 can cause the display device to display any information acquired in steps S31 to S34. Examples of information displayed together with the cell evaluation information include the Scheimpflug image acquired in step S31; the information acquired in step S32 (for example, information based on the Scheimpflug image, the anterior chamber region, or information based on the anterior chamber region); the information acquired in step S33 (for example, the ghost-removed anterior chamber region or information based on it); and the information acquired in step S34 (for example, the shaped anterior chamber region or information based on it).
 Even when inflammation state information other than cell evaluation information is generated, it can be generated in the same manner as cell evaluation information. As described above, examples of inflammation state information other than cell evaluation information include information on anterior chamber flare, information on lens opacification, information on the onset and course of a disease, and information on disease activity.
 Like the cell evaluation information, any of these exemplary types of inflammation state information may be generated and evaluated with reference to the classification criteria for uveitic diseases proposed by the SUN Working Group.
 The processing that generates and evaluates these exemplary types of inflammation state information may be machine-learning-based processing, non-machine-learning-based processing, or a combination of the two. A neural network for the machine-learning-based processing may be constructed in the same manner as the neural network for generating cell evaluation information. A processor for the non-machine-learning-based processing may be configured to at least execute processing for obtaining the evaluation target (for example, anterior chamber flare, lens opacification, disease onset and course, or disease activity) and may further be configured to execute processing that performs an evaluation based on the obtained evaluation target.
 FIG. 23 shows an example of a specific configuration of an ophthalmologic apparatus capable of functioning as the ophthalmologic apparatus 1000 described above. The ophthalmologic apparatus of this example is a system combining a slit lamp microscope and a computer (information processing apparatus): the slit lamp microscope system 1.
 The slit lamp microscope system 1 includes an illumination system 2, an imaging system 3, a video imaging system 4, an optical path coupling element 5, a movement mechanism 6, a control unit 7, a data processing unit 8, a communication unit 9, and a user interface 10. The cornea of the subject's eye E is denoted by C and the crystalline lens by CL. The anterior chamber corresponds to the region between the cornea C and the crystalline lens CL.
 As a non-limiting example of the arrangement of the elements of the slit lamp microscope system 1, in some exemplary aspects the system includes a microscope body, a computer, and a communication device that handles communication between the microscope body and the computer. The microscope body includes the illumination system 2, the imaging system 3, the video imaging system 4, the optical path coupling element 5, and the movement mechanism 6. The computer includes the control unit 7, the data processing unit 8, the communication unit 9, and the user interface 10. The computer may be installed, for example, near the microscope body or on a network.
 The combination of the illumination system 2, the imaging system 3, and the movement mechanism 6 is an example of the image acquisition unit 1010 of the ophthalmologic apparatus 1000. The illumination system 2 is an example of the illumination system 1011, and the imaging system 3 is an example of the imaging system 1012.
 The illumination system 2 projects slit light onto the anterior segment of the subject's eye E. Reference numeral 2a denotes the optical axis of the illumination system 2 (the illumination optical axis). The illumination system 2 may have the same configuration as the illumination system of a conventional slit lamp microscope. For example, although not illustrated, the illumination system 2 includes, in order from the side farther from the subject's eye E, an illumination light source, a positive lens, a slit forming unit, and an objective lens. Illumination light output from the illumination light source passes through the positive lens and is projected onto the slit forming unit, which passes part of the illumination light to generate slit light. The slit forming unit has a pair of slit blades: changing the gap between them (the slit width) changes the width of the slit light, and rotating the pair of blades changes the longitudinal orientation of the slit light. The slit forming unit can also change the longitudinal dimension of the slit light. The slit light generated by the slit forming unit is refracted by the objective lens and projected onto the anterior segment of the subject's eye E. The configuration for generating slit light is not limited to this example and may be any configuration usable for that purpose. The illumination system 2 may include a focusing mechanism for changing the focus position of the slit light; this mechanism moves, for example, the objective lens along the illumination optical axis 2a, or moves a focusing lens arranged between the objective lens and the slit forming unit.
 FIG. 23 is a top view. The direction along the axis of the subject's eye E is the Z direction; of the directions orthogonal to it, the subject's left-right direction is the X direction; and the direction orthogonal to both the X and Z directions (the vertical direction, the body axis direction) is the Y direction. In this aspect, the slit lamp microscope system 1 can be aligned with the subject's eye E so that the illumination optical axis 2a coincides with the axis of the eye E or, more broadly, so that the illumination optical axis 2a is parallel to that axis.
 The imaging system 3 photographs the anterior segment onto which the slit light from the illumination system 2 is projected. Reference numeral 3a denotes the optical axis of the imaging system 3 (the imaging optical axis). The imaging system 3 includes an optical system 3A and an image sensor 3B. The optical system 3A guides light from the anterior segment of the subject's eye E, onto which the slit light is projected, to the image sensor 3B. The optical system 3A may have the same configuration as the imaging system of a conventional slit lamp microscope; for example, it includes, in order from the side closer to the subject's eye E, an objective lens, a variable magnification optical system, and an imaging lens. Light from the slit-illuminated anterior segment passes through the objective lens and the variable magnification optical system and is focused by the imaging lens onto the imaging surface of the image sensor 3B, which receives it there. The image sensor 3B includes an area sensor having a two-dimensional imaging area, for example a charge-coupled device (CCD) image sensor or a complementary metal oxide semiconductor (CMOS) image sensor. The imaging system 3 may include a focusing mechanism for changing its focus position; this mechanism moves, for example, the objective lens along the imaging optical axis 3a, or moves a focusing lens arranged between the objective lens and the imaging lens along the imaging optical axis 3a.
 The illumination system 2 and the imaging system 3 function as a Scheimpflug camera. That is, they are configured so that the object plane along the illumination optical axis 2a, the optical system 3A, and the imaging surface of the image sensor 3B satisfy the so-called Scheimpflug condition. More specifically, the YZ plane passing through the illumination optical axis 2a (which contains the object plane), the principal plane of the optical system 3A, and the imaging surface of the image sensor 3B intersect on the same straight line. This makes it possible to photograph with every position in the object plane (every position along the illumination optical axis 2a) in focus.
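 The Scheimpflug condition stated here (three planes meeting in a single straight line) can be checked numerically. The helper below is a generic geometric sketch, not part of this disclosure: each plane is given by a unit normal and a point on it, and the condition holds when the third plane contains the intersection line of the first two.

```python
import numpy as np

def planes_share_a_line(n1, p1, n2, p2, n3, p3, tol=1e-9) -> bool:
    """Numeric check of the Scheimpflug condition for three planes,
    each given as (unit normal n, point p) in numpy arrays."""
    d = np.cross(n1, n2)                      # direction of the 1-2 intersection line
    if np.linalg.norm(d) < tol:               # planes 1 and 2 are parallel
        return False
    # One point on the 1-2 line: solve n1.x = n1.p1, n2.x = n2.p2, d.x = 0
    A = np.vstack([n1, n2, d])
    b = np.array([n1 @ p1, n2 @ p2, 0.0])
    point = np.linalg.solve(A, b)
    return (abs(n3 @ d) < tol and             # plane 3 is parallel to the line...
            abs(n3 @ (point - p3)) < tol)     # ...and contains a point of it
```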
 In this aspect, the illumination system 2 and the imaging system 3 are configured so that, for example, the imaging system 3 is focused at least over the range from the posterior surface of the cornea C to the anterior surface of the crystalline lens CL (the anterior chamber). In consideration of practicality, the two systems may instead be configured so that the imaging system 3 is focused at least over the range from the anterior surface of the cornea C to the posterior surface of the crystalline lens CL. The slit lamp microscope system 1 can thereby photograph the anterior segment of the subject's eye E with the imaging system 3 focused over the whole range from the vertex of the anterior corneal surface (Z = Z1) to the vertex of the posterior lens surface (Z = Z2). The intersection of the illumination optical axis 2a and the imaging optical axis 3a is located at the coordinate Z = Z0. Such conditions are realized through the configuration and arrangement of the elements of the illumination system 2, the configuration and arrangement of the elements of the imaging system 3, the relative position of the two systems, and so on. The parameters expressing the relative position of the illumination system 2 and the imaging system 3 include, for example, the angle θ between the illumination optical axis 2a and the imaging optical axis 3a. The angle θ is set, for example, to 17.5 degrees, 30 degrees, or 45 degrees, and may be variable.
 The video imaging system 4 captures a moving image of the anterior segment of the subject's eye E in parallel with the photography of the eye by the illumination system 2 and the imaging system 3. The video imaging system 4 functions as a video camera.
 The optical path coupling element 5 couples the optical path of the illumination system 2 (the illumination optical path) with the optical path of the video imaging system 4 (the video imaging optical path). The optical path coupling element 5 may be, for example, a beam splitter such as a half mirror or a dichroic mirror.
 FIG. 24 shows a specific example of an optical system including the illumination system 2, the imaging system 3, the video imaging system 4, and the optical path coupling element 5. In this example, the imaging system 3 includes two imaging systems (a first imaging system and a second imaging system). In some exemplary aspects, the optical system of the slit lamp microscope system 1 may include other elements in addition to, or in place of any of, the elements shown in FIG. 24 (for example, any element from the description of the ophthalmologic apparatus 1000, any element of a known slit lamp microscope, or any element of a known ophthalmologic apparatus).
 The optical system shown in FIG. 24 includes an illumination system 20, a left imaging system 30L, a right imaging system 30R, and a video imaging system 40. The illumination system 20 is an example of the illumination system 2. The combination of the left imaging system 30L and the right imaging system 30R is an example of the imaging system 3 and an example of the combination of the first imaging system 1014 and the second imaging system 1015 of the ophthalmologic apparatus 1000. The video imaging system 40 is an example of the video imaging system 4, and the beam splitter 47 is an example of the optical path coupling element 5.
 In FIG. 24, reference numeral 20a denotes the optical axis of the illumination system 20 (the illumination optical axis), 30La denotes the optical axis of the left imaging system 30L (the left imaging optical axis), and 30Ra denotes the optical axis of the right imaging system 30R (the right imaging optical axis). The orientations of the left imaging optical axis 30La and the right imaging optical axis 30Ra differ from each other. The angle between the illumination optical axis 20a and the left imaging optical axis 30La is denoted by θL, and the angle between the illumination optical axis 20a and the right imaging optical axis 30Ra by θR. The angles θL and θR may be equal to or different from each other, and each may be variable. The illumination optical axis 20a, the left imaging optical axis 30La, and the right imaging optical axis 30Ra intersect at a single point; as in FIG. 23, the Z coordinate of this intersection is denoted by Z0.
 The movement mechanism 6 of this example is configured to move the illumination system 20, the left imaging system 30L, and the right imaging system 30R in the direction indicated by the arrow 49 (the X direction). In some exemplary aspects, these three systems are mounted on a stage movable at least in the X direction, and the movement mechanism 6 moves this movable stage in the X direction according to a control signal from the control unit 7.
 The illumination system 20 projects slit light onto the anterior segment of the subject's eye E. Like the illumination system of a conventional slit lamp microscope, it includes, in order from the side farther from the subject's eye E, an illumination light source 21, a positive lens 22, a slit forming unit 23, and objective lens groups 24 and 25.
 Illumination light (for example, visible light) output from the illumination light source 21 is refracted by the positive lens 22 and projected onto the slit forming unit 23. Part of the projected illumination light passes through the slit formed by the slit forming unit 23 and becomes slit light. The generated slit light is refracted by the objective lens groups 24 and 25, reflected by the beam splitter 47, and projected onto the anterior segment of the subject's eye E.
 The left imaging system 30L includes a reflector 31L, an imaging lens 32L, and an image sensor 33L. The reflector 31L and the imaging lens 32L guide light from the anterior segment onto which the slit light is projected by the illumination system 20 (light traveling in the direction of the left imaging system 30L) to the image sensor 33L.
 The light traveling from the anterior segment toward the left imaging system 30L is light from the slit-illuminated anterior segment that travels in a direction away from the illumination optical axis 20a. The reflector 31L reflects this light in a direction approaching the illumination optical axis 20a. The imaging lens 32L refracts the light reflected by the reflector 31L and focuses it on the imaging surface 34L of the image sensor 33L, which receives it there.
 The left imaging system 30L performs photography repeatedly, in parallel with the movement of the illumination system 20, the left imaging system 30L, and the right imaging system 30R by the movement mechanism 6. A plurality of anterior segment images (a series of Scheimpflug images) is thereby obtained.
 The object plane along the illumination optical axis 20a, the optical system including the reflector 31L and the imaging lens 32L, and the imaging surface 34L satisfy the Scheimpflug condition. More specifically, taking into account the deflection of the optical path of the left imaging system 30L by the reflector 31L, the YZ plane passing through the illumination optical axis 20a (which contains the object plane), the principal plane of the imaging lens 32L, and the imaging surface 34L intersect on the same straight line. The left imaging system 30L can therefore photograph with every position in the object plane (for example, the range from the anterior corneal surface to the posterior lens surface) in focus.
 The right imaging system 30R includes a reflector 31R, an imaging lens 32R, and an image sensor 33R. The reflector 31R and the imaging lens 32R guide light from the slit-illuminated anterior segment (light traveling in the direction of the right imaging system 30R) to the image sensor 33R. The right imaging system 30R acquires a plurality of anterior segment images (a series of Scheimpflug images) by photographing repeatedly in parallel with the movement of the illumination system 20, the left imaging system 30L, and the right imaging system 30R by the movement mechanism 6. The object plane along the illumination optical axis 20a, the optical system including the reflector 31R and the imaging lens 32R, and the imaging surface 34R satisfy the Scheimpflug condition.
 The Scheimpflug image acquisition by the left imaging system 30L and that by the right imaging system 30R are performed in parallel with each other. The combination of the series of Scheimpflug images collected by the left imaging system 30L and the series collected by the right imaging system 30R corresponds to the combination of the first Scheimpflug image group and the second Scheimpflug image group.
 The control unit 7 can synchronize the repeated photography by the left imaging system 30L with that by the right imaging system 30R. A correspondence is thereby obtained between the series of Scheimpflug images obtained by the left imaging system 30L and the series obtained by the right imaging system 30R. This correspondence is temporal; more specifically, it pairs images acquired substantially simultaneously.
 Alternatively, the control unit 7 or the data processing unit 8 can execute processing that determines the correspondence between the plurality of anterior segment images obtained by the left imaging system 30L and those obtained by the right imaging system 30R. For example, the control unit 7 or the data processing unit 8 can pair the anterior segment images input sequentially from the left imaging system 30L with those input sequentially from the right imaging system 30R according to their input timing.
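 A minimal sketch of such timing-based pairing follows. The frame representation, the sortedness of the timestamps, and the skew tolerance are assumptions for illustration.

```python
from bisect import bisect_left

def pair_frames(left, right, max_skew=0.004):
    """Pair left/right frames captured in parallel (hypothetical helper).

    `left` and `right` are lists of (timestamp_sec, image), each sorted
    by timestamp.  Each left frame is paired with the right frame
    nearest in time, and the pair is kept only if the skew is below
    `max_skew` (an assumed tolerance, e.g. a fraction of the frame interval).
    """
    right_ts = [t for t, _ in right]
    pairs = []
    for t, img in left:
        i = bisect_left(right_ts, t)
        # candidate neighbours: the right frames just before and after t
        best = min((j for j in (i - 1, i) if 0 <= j < len(right)),
                   key=lambda j: abs(right_ts[j] - t), default=None)
        if best is not None and abs(right_ts[best] - t) <= max_skew:
            pairs.append((img, right[best][1]))
    return pairs
```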
 In parallel with the photography by the left imaging system 30L and by the right imaging system 30R, the video imaging system 40 captures a moving image of the anterior segment of the subject's eye E from a fixed position. The video imaging system 40 therefore need not be moved by the movement mechanism 6. The video imaging system 40 is arranged coaxially with the illumination system 20, but the arrangement is not limited to this; in some exemplary aspects, the video imaging system may be arranged non-coaxially with the illumination system 20.
 Light transmitted through the beam splitter 47 is reflected by a reflector 48 and enters the video imaging system 40. The light entering the video imaging system 40 is refracted by an objective lens 41 and then focused by an imaging lens 42 onto the imaging surface of an image sensor 43, which is an area sensor.
 The video imaging system 40 can be used for monitoring the movement of the subject's eye E, for alignment, for tracking, and so on. It can furthermore be used for processing the series of Scheimpflug images.
 Returning to FIG. 23: the movement mechanism 6 is configured to move the illumination system 2 and the imaging system 3 integrally in the X direction.
 The control unit 7 is configured to control each part of the slit lamp microscope system 1. For example, the control unit 7 controls elements of the illumination system 2 (the illumination light source, the slit forming unit, the focusing mechanism, and so on), elements of the imaging system 3 (the focusing mechanism of the optical system 3A, the image sensor 3B, and so on), elements of the video imaging system 4 (its focusing mechanism, image sensor, and so on), the movement mechanism 6, the data processing unit 8, the communication unit 9, and the user interface 10.
 The control unit 7 can execute the control of the illumination system 2, the imaging system 3, and the movement mechanism 6 in parallel with the control of the video imaging system 4. This makes it possible to perform the slit scan by the image acquisition unit 1010 of the ophthalmologic apparatus 1000 (the collection of a series of Scheimpflug images) and the video photography (the collection of a series of time-series images) in parallel. Furthermore, the control unit 7 can execute these two sets of controls in synchronization with each other, making it possible to synchronize the slit scan and the video photography.
 In an aspect in which the imaging system 3 includes the left imaging system 30L and the right imaging system 30R, the control unit 7 can execute the repeated photography by the left imaging system 30L (collection of the first Scheimpflug image group) and the repeated photography by the right imaging system 30R (collection of the second Scheimpflug image group) in synchronization with each other.
 The control unit 7 includes a processor, a main storage device, an auxiliary storage device, and so on. Computer programs such as various control programs are stored in the auxiliary storage device; these programs may also be stored in a computer or storage device accessible to the slit lamp microscope system 1. The functions of the control unit 7 are realized by cooperation between software such as the control programs and hardware such as the processor.
 To scan a three-dimensional region of the anterior segment of the subject's eye E with slit light, the control unit 7 can apply the following controls to the illumination system 2, the imaging system 3, and the movement mechanism 6.
 First, the control unit 7 controls the movement mechanism 6 so as to place the illumination system 2 and the imaging system 3 at a predetermined scan start position (alignment control). The scan start position is, for example, a position corresponding to an end (a first end) of the cornea C in the X direction, or a position farther from the axis of the subject's eye E than that. Reference X0 in FIG. 25A indicates a scan start position corresponding to the first end of the cornea C in the X direction, and reference X0' in FIG. 25B indicates a scan start position farther from the axis EA of the subject's eye E than the position corresponding to that first end.
 The control unit 7 controls the illumination system 2 to start projecting slit light onto the anterior segment of the subject's eye E (slit light projection control), and controls the imaging system 3 to start video photography of the anterior segment (imaging control). After executing the alignment control, the slit light projection control, and the imaging control, the control unit 7 controls the movement mechanism 6 to start moving the illumination system 2 and the imaging system 3 (movement control). Under the movement control, the illumination system 2 and the imaging system 3 are moved integrally, that is, while maintaining their relative position (the angle θ and so on) so that the Scheimpflug condition remains satisfied. The illumination system 2 and the imaging system 3 are moved from the scan start position described above to a predetermined scan end position. Like the scan start position, the scan end position is, for example, a position corresponding to the end of the cornea C opposite the first end in the X direction (a second end), or a position farther from the axis of the subject's eye E than that.
 The slit scan of this example is applied over the range from the scan start position to the scan end position. The slit scan is realized by executing, in parallel with one another (in a coordinated, synchronized manner), the projection of slit light onto the anterior segment with the X direction as its width direction and the Y direction as its longitudinal direction, the integral movement of the illumination system 2 and the imaging system 3 in the X direction, and the video photography by the imaging system 3. The length of the slit light (that is, the dimension of its beam cross section in the Y direction) is set, for example, to at least the diameter of the cornea C at the surface of the subject's eye E, and the distance over which the movement mechanism 6 moves the illumination system 2 and the imaging system 3 is set to at least the corneal diameter in the X direction. This makes it possible to apply the slit scan to a three-dimensional region containing the entire cornea C and to image a wide range of the anterior chamber.
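 The sequence of alignment control, slit light projection control, imaging control, and movement control described above can be summarized as a simple control loop. The device handles (`stage`, `illumination`, `camera`) below are hypothetical; no such API is defined in this disclosure.

```python
def run_slit_scan(stage, illumination, camera,
                  x_start: float, x_end: float, n_frames: int):
    """Sketch of the scan sequence; assumes n_frames >= 2 and that the
    illumination and imaging systems ride on the same stage, so moving
    the stage preserves their relative position (Scheimpflug condition)."""
    stage.move_to(x_start)            # alignment control: scan start position
    illumination.slit_on()            # slit light projection control
    camera.start_video()              # imaging control
    frames = []
    step = (x_end - x_start) / (n_frames - 1)
    for k in range(n_frames):         # movement control: integral X travel
        stage.move_to(x_start + k * step)
        frames.append(camera.grab())  # one Scheimpflug image per position
    illumination.slit_off()
    camera.stop_video()
    return frames                     # the series of Scheimpflug images
```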
 Such a slit scan yields a plurality of anterior segment images (a series of Scheimpflug images) with different slit light projection positions. In other words, a moving image is obtained that depicts the slit light projection position moving in the X direction. FIG. 26 shows an example of such a plurality of anterior segment images (that is, the group of frames constituting a moving image).
 FIG. 26 shows a plurality of anterior segment images (a frame group) F1, F2, F3, ..., FN. The subscript n of the anterior segment images Fn (n = 1, 2, ..., N) represents the chronological order; that is, the n-th acquired anterior segment image is denoted by Fn. The anterior segment image Fn contains a slit light image An. As shown in FIG. 26, the slit light images A1, A2, A3, ..., AN move to the right over the time series. In the example shown in FIG. 26, the scan start position and scan end position correspond to the two ends of the cornea C in the X direction. The scan start position and/or scan end position are not limited to this example and may be, for example, positions farther from the axis of the subject's eye E than the corneal ends. The direction and the number of scans can also be set arbitrarily.
 The data processing unit 8 is configured to execute various kinds of data processing. The data to be processed may be either data acquired by the slit lamp microscope system 1 or data input from outside.
 The data processing unit 8 includes a processor, a main storage device, an auxiliary storage device, and so on. Computer programs such as various data processing programs are stored in the auxiliary storage device; these programs may also be stored in a computer or storage device accessible to the slit lamp microscope system 1. The functions of the data processing unit 8 are realized by cooperation between software such as the data processing programs and hardware such as the processor.
 The data processing unit 8 may have any of the configurations described for the data processing unit 1020 of the ophthalmologic apparatus 1000 (see FIG. 2C and FIGS. 3 to 18), but its configuration is not limited to them.
 Consider the case where the data processing unit 8 includes the image selection unit 1021 (see FIG. 2C). The image selection unit 1021 of this aspect selects, from the series of Scheimpflug images collected by the left imaging system 30L during a slit scan and the series collected by the right imaging system 30R, a new series of Scheimpflug images corresponding to that slit scan, based on the correspondence between the two series (the correspondence between the first Scheimpflug image group and the second Scheimpflug image group). Like the inflammation state information generation unit 1022 of FIG. 2C, the data processing unit 8 generates inflammation state information based on the new series of Scheimpflug images selected by the image selection unit 1021.
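 As one illustration of such image selection, the sketch below keeps, for each paired frame, whichever image scores higher under a per-image quality measure. The `quality` function (for example, one penalizing reflection artifacts) is an assumption, since the disclosure does not fix the selection criterion.

```python
def select_series(pairs, quality):
    """Build one new Scheimpflug series from paired left/right frames.

    `pairs` is the output of a pairing step (first and second Scheimpflug
    image groups); `quality` is an assumed callable scoring a single image.
    """
    return [l if quality(l) >= quality(r) else r for l, r in pairs]
```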
 The configuration of the data processing unit 8 is not limited to these examples. In some exemplary aspects, the data processing unit 8 may have any data processing function related to this technology disclosed by any of the applicants of the present application, such as any data processing function disclosed in Patent Document 3 (Japanese Unexamined Patent Application Publication No. 2019-213733).
 The communication unit 9 performs data communication between the slit lamp microscope system 1 and other devices; that is, it transmits data to other devices and receives data transmitted from them.
 The data communication method used by the communication unit 9 may be arbitrary. For example, the communication unit 9 includes one or more of various communication interfaces, such as an interface conforming to the Internet, an interface conforming to a dedicated line, an interface conforming to a LAN, and an interface conforming to near-field communication. The data communication may be wired or wireless.
 The data transmitted and received by the communication unit 9 may be encrypted. In that case, for example, the control unit 7 and/or the data processing unit 8 includes at least one of an encryption processing unit that encrypts data to be transmitted by the communication unit 9 and a decryption processing unit that decrypts data received by the communication unit 9.
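 A minimal symmetric-encryption sketch for such processing units follows, using Fernet from the Python `cryptography` package purely as an example; the disclosure does not prescribe any particular scheme or key management.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, provisioned and shared securely
cipher = Fernet(key)

payload = b"scheimpflug image series ..."   # placeholder outgoing data
token = cipher.encrypt(payload)             # encryption processing unit
restored = cipher.decrypt(token)            # decryption processing unit
assert restored == payload
```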
 The user interface 10 includes any user interface devices, such as a display device and an operation device. By using the user interface 10, users such as a doctor, the subject, or an assistant can operate the slit lamp microscope system 1 and input information into it.
 The display device displays various kinds of information under the control of the control unit 7 and may include a flat panel display such as a liquid crystal display (LCD). The operation device includes devices for operating the slit lamp microscope system 1 and devices for inputting information, for example buttons, switches, levers, dials, handles, knobs, a mouse, a keyboard, a trackball, and an operation panel. A device in which a display device and an operation device are integrated, such as a touch screen, may also be used.
 At least part of the user interface may be arranged as a peripheral device of the slit lamp microscope system 1.
 The elements of the slit lamp microscope system 1 are not limited to those described above. The slit lamp microscope system 1 may include any element that can be combined with a slit lamp microscope and, more generally, any element that can be combined with an ophthalmologic apparatus. It may also include any element for processing data of the subject's eye acquired by the slit lamp microscope and, more generally, any element for processing any ophthalmic data.
 For example, the slit lamp microscope system 1 may include a fixation system that outputs light (fixation light) for making the subject's eye E fixate. A fixation system typically includes at least one visible light source (a fixation light source) or a display device that displays an image such as a landscape chart or a fixation target. The fixation system is arranged, for example, coaxially or non-coaxially with the illumination system 2 or the imaging system 3.
 The ophthalmologic apparatus 1000 and the slit lamp microscope system 1 described above have a function of photographing the subject's eye (an imaging function, the image acquisition unit 1010), but the ophthalmologic apparatus according to the present disclosure is not limited to such an ophthalmic imaging apparatus. In some exemplary aspects, the ophthalmologic apparatus includes, instead of (or in addition to) the imaging function, a computer (information processing apparatus) having a function of receiving an image of the subject's eye from outside.
 FIG. 27 shows a configuration example of an ophthalmologic apparatus serving as such an information processing apparatus. The ophthalmologic apparatus 3000 of this example includes an image acquisition unit 3010 and a data processing unit 3020. The image acquisition unit 3010 includes an image reception unit 3011 and may further include a configuration similar to the image acquisition unit 1010 of the ophthalmologic apparatus 1000. The data processing unit 3020 may have any of the configurations described for the data processing unit 1020 of the ophthalmologic apparatus 1000, but is not limited to them.
 The image reception unit 3011 is configured to receive a Scheimpflug image of the subject's eye acquired in advance (in other words, a Scheimpflug image of the subject's eye acquired by past photography). The image reception unit 3011 includes, for example, a communication device and/or a media drive. The communication device is configured, like the communication unit 9 of the slit lamp microscope system 1, to receive data stored in an external storage device; the media drive is configured to read data recorded on a recording medium.
 The data processing unit 3020 is configured to execute processing for generating, from the Scheimpflug image received by the image reception unit 3011, inflammation state information indicating the inflammation state of the subject's eye. For the processing executable by the data processing unit 3020, refer to the descriptions of the ophthalmologic apparatus 1000 and the slit lamp microscope system 1.
 The present disclosure presents several exemplary aspects of embodiments. These aspects are merely illustrative of the invention; any modification within the scope of the gist of the invention (omission, substitution, addition, and so on) may therefore be applied to the present disclosure.
 It is possible to configure a program that causes a computer to execute any one or more of the processes described in the present disclosure, and to create a recording medium on which such a program is recorded. The recording medium is a computer-readable non-transitory recording medium and may take any form, examples of which include a magnetic disk, an optical disc, a magneto-optical disk, and a semiconductor memory.
 The present invention may include a method comprising any one or more of the steps described in the present disclosure. A method according to some exemplary aspects is a method of controlling an ophthalmologic apparatus that includes a processor (for example, the ophthalmologic apparatus 1000, the slit lamp microscope system 1, or the ophthalmologic apparatus 3000). The method includes a step of causing the ophthalmologic apparatus to acquire a Scheimpflug image of the subject's eye (referred to as the first acquisition step) and a step of causing the processor to execute processing for generating, from this Scheimpflug image, inflammation state information indicating the inflammation state of the subject's eye (referred to as the first generation step).
 The first acquisition step and/or the first generation step may be embodied by any of the matters in the descriptions of the ophthalmologic apparatus 1000, the slit lamp microscope system 1, and the ophthalmologic apparatus 3000. Any of the steps in those descriptions may also be combined with the first acquisition step and the first generation step.
 A method according to some other exemplary aspects is a method of processing an eye image. The method includes a step of acquiring a Scheimpflug image of the subject's eye (referred to as the second acquisition step) and a step of executing processing for generating, from the Scheimpflug image, inflammation state information indicating the inflammation state of the subject's eye (referred to as the second generation step).
 The second acquisition step and/or the second generation step may be embodied by any of the matters in the descriptions of the ophthalmologic apparatus 1000, the slit lamp microscope system 1, and the ophthalmologic apparatus 3000. Any of the steps in those descriptions may also be combined with the second acquisition step and the second generation step.
 The present invention may include a program (referred to as the first program) that causes a computer to execute the method of controlling an ophthalmologic apparatus, and a program (referred to as the second program) that causes a computer to execute the method of processing an eye image. The present invention may further include computer-readable non-transitory recording media on which the first program and the second program are respectively recorded. Such a non-transitory recording medium may take any form, examples of which include a magnetic disk, an optical disc, a magneto-optical disk, and a semiconductor memory.
 The embodiments above described in particular detail the automatic evaluation of the density of inflammatory cells present in the anterior chamber. There are various matters to consider in such automatic evaluation; in addition to the matters described in the embodiments above, the inventor also examined the following: (1) distinguishing between artifacts and cell regions in a Scheimpflug image (particularly in the anterior chamber region); (2) achieving consistency with conventional evaluation methods performed with a slit lamp microscope (for example, the evaluation method proposed by the SUN Working Group); and (3) guaranteeing the quality of the evaluation (for example, its stability, robustness, reproducibility, accuracy, and precision) regardless of adjustments or changes in imaging conditions (for example, the camera gain).
 For (1), several solutions were proposed in the embodiments above; in addition, any known artifact detection or artifact removal technique may be used, such as the detection and removal of artifacts caused by eyelashes.
 For (2) as well, several solutions were proposed in the embodiments above. In addition, for example, a correspondence can be determined between a data group obtained with a conventional evaluation method and a data group obtained with the evaluation method according to the present disclosure, and the consistency with the conventional method can be improved based on this correspondence. Machine-learning-based processing and/or non-machine-learning-based processing is used to create the correspondence between the data groups. In the machine-learning-based processing, for example, machine learning is performed using training data containing a plurality of pairs, each consisting of data obtained with the conventional evaluation method and data obtained with the evaluation method according to the present disclosure. The inference model thereby constructed includes a neural network that takes as input data acquired with the evaluation method according to the present disclosure and outputs data emulating data acquired with the conventional evaluation method.
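 A minimal sketch of such a calibration model follows. The architecture (a small fully connected network over scalar measurements), loss, and training loop are assumptions; the disclosure specifies only that the model is trained on pairs of disclosed-method data and conventional-method data.

```python
import torch
from torch import nn

# Assumed architecture: a tiny regressor mapping a scalar measurement from
# the disclosed method to one emulating the conventional method.
model = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train(pairs, epochs=200):
    """`pairs` holds (disclosed_method_value, conventional_method_value)."""
    x = torch.tensor([[a] for a, _ in pairs], dtype=torch.float32)
    y = torch.tensor([[b] for _, b in pairs], dtype=torch.float32)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)   # fit disclosed-method -> conventional
        loss.backward()
        opt.step()
    return model
```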
 (3)についても同様に、様々な撮影条件(例えば、カメラのゲインの様々な値)に対応して得られた様々なデータの間の対応関係を求め、この対応関係に基づいて評価品質の安定化を図ることができる。データ間の対応関係の作成には、機械学習ベースの処理及び/又は非機械学習ベースの処理が用いられる。機械学習ベースの処理では、例えば、第1の条件で得たデータと第2の条件で得たデータとの複数のペアを含む訓練データを用いた機械学習が実行される。これにより構築される推論モデルは、第1の条件(又は、第2の条件)で取得されたデータを入力とし、第2の条件(又は、第1の条件)で取得されたデータに模したデータを出力としたニューラルネットワークを含むものである。 Similarly, for (3), the correspondence between various data obtained corresponding to various shooting conditions (for example, various camera gain values) is obtained, and the evaluation quality is evaluated based on this correspondence. Stabilization can be achieved. Machine learning-based processing and/or non-machine learning-based processing are used to create correspondences between data. In machine learning-based processing, for example, machine learning is performed using training data that includes multiple pairs of data obtained under a first condition and data obtained under a second condition. The inference model constructed by this uses the data obtained under the first condition (or the second condition) as input, and imitates the data obtained under the second condition (or the first condition) It includes a neural network that outputs data.
 本開示に係る眼科装置、眼科装置を制御する方法、眼画像を処理する方法、プログラム、及び記録媒体によれば、従来は手動(手作業)で行われていた眼画像に基づく炎症状態の評価を少なくとも部分的に自動化することが可能である。 According to the ophthalmic device, the method of controlling the ophthalmic device, the method of processing the eye image, the program, and the recording medium according to the present disclosure, the evaluation of the inflammatory state based on the eye image, which has conventionally been performed manually (manually) can be at least partially automated.
 In the present disclosure, the inflammation state is evaluated on the basis of Scheimpflug images, so the evaluation can be performed on high-quality images that are in focus over a wide range. A wide region of the subject's eye can therefore be evaluated with high quality.
 Furthermore, by combining slit scanning, a series of high-quality Scheimpflug images in which a wide three-dimensional region of the subject's eye is in focus can be acquired quickly, and the evaluation can be performed on this group of images. A very wide region of the subject's eye can thus be evaluated with high quality: for example, a large part of the anterior chamber can be covered, and the crystalline lens and the cornea can also be added to the evaluation targets.
 Note that in the invention described in Patent Document 5 (International Publication No. WO 2018/003906), scattered light cannot be detected when the exposure time per frame is short (for example, video rate), so the exposure time per frame is set to about 100 milliseconds to 1 second; considering the influence of eye movement and blinking of the subject's eye, slit scanning as in the present disclosure is therefore not feasible with that invention.
 Also, in the invention described in Patent Document 5, the projected image of the slit light on the cornea measures 0.2 mm × 2 mm, which makes it difficult to image a wide region of the anterior segment. In the present embodiment, by contrast, the projected image of the slit light on the cornea can be set to about 0.05 mm × 8-12 mm, which makes it possible to image a wide region of the anterior segment.
 Furthermore, the present embodiment can use, for example, a white LED instead of the blue LED used in the invention of Patent Document 5, and can use a color camera instead of a monochrome camera, so that the evaluation can make use of color information (R, G, and B signals).
 As described above, the present embodiment can image a wide region of the anterior segment; in addition to acquiring anterior segment images, images of inflammatory cells, and inflammation state information, it can also present and analyze the shape of the anterior segment, and therefore has the further advantage of being able to provide a variety of information to physicians.
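 To make the processing flow recited in claims 17 and 18 below concrete, here is a minimal non-machine-learning sketch covering the three analysis processes (anterior chamber segmentation, cell-region detection, and density calculation); the intensity thresholds and the pixel-to-millimeter scale are hypothetical placeholders, not values disclosed in this specification.

```python
import numpy as np
from scipy import ndimage

def evaluate_cell_density(frame: np.ndarray, mm_per_px: float = 0.01) -> float:
    """Toy counterpart of claims 17-18: (1) segment the anterior chamber,
    (2) detect cell-region candidates inside it, (3) compute a density.
    All thresholds are illustrative assumptions."""
    # (1) First analysis process: take the anterior chamber to be the
    #     largest dark connected region of the frame.
    dark = frame < 40
    labels, n = ndimage.label(dark)
    if n == 0:
        return 0.0
    largest = 1 + int(np.argmax(ndimage.sum(dark, labels, range(1, n + 1))))
    chamber = labels == largest

    # (2) Second analysis process: small bright speckles inside the
    #     chamber are treated as cell-region candidates.
    _, n_cells = ndimage.label((frame > 90) & chamber)

    # (3) Third analysis process: density = number of cell regions per
    #     square millimeter of the evaluated partial region (here, the
    #     whole segmented chamber is used as the partial region).
    area_mm2 = chamber.sum() * mm_per_px ** 2
    return n_cells / area_mm2 if area_mm2 > 0 else 0.0
```

 In an actual implementation, the partial region, the thresholds, and the pixel scale would be determined by the optical design and calibration of the apparatus.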
1000 ophthalmologic apparatus
1010 image acquisition unit
1011 illumination system
1012 imaging system
1020 data processing unit
1021 image selection unit
1022 inflammation state information generation unit
1031 first segmentation unit
1032 second segmentation unit
1033 cell evaluation information generation processing unit
3000 ophthalmologic apparatus
3010 image acquisition unit
3011 image receiving unit
3020 data processing unit

Claims (30)

  1.  An ophthalmologic apparatus comprising:
      an image acquisition unit that acquires a Scheimpflug image of a subject's eye; and
      a data processing unit that executes processing for generating inflammation state information indicating the inflammation state of the subject's eye from the Scheimpflug image.
  2.  The ophthalmologic apparatus of claim 1, wherein the inflammation state information includes cell evaluation information, which is evaluation information on inflammatory cells in the anterior chamber of the subject's eye, and the data processing unit executes at least one of: a first segmentation for identifying an anterior chamber region corresponding to the anterior chamber; a second segmentation for identifying cell regions corresponding to the inflammatory cells; and a cell evaluation information generation process for generating the cell evaluation information.
  3.  The ophthalmologic apparatus of claim 2, wherein the data processing unit executes the first segmentation, which identifies the anterior chamber region from the Scheimpflug image; the second segmentation, which identifies the cell regions from the anterior chamber region identified by the first segmentation; and the cell evaluation information generation process, which generates the cell evaluation information from the cell regions identified by the second segmentation.
  4.  The ophthalmologic apparatus of claim 3, wherein the data processing unit executes the first segmentation using a first inference model constructed in advance, the first inference model including a first neural network constructed by machine learning using training data that includes at least eye images, and the first neural network being configured to receive a Scheimpflug image as input and output an anterior chamber region.
  5.  The ophthalmologic apparatus of claim 3 or 4, wherein the data processing unit executes the second segmentation using a second inference model constructed in advance, the second inference model including a second neural network constructed by machine learning using training data that includes at least eye images, and the second neural network being configured to receive an anterior chamber region of a Scheimpflug image as input and output cell regions.
  6.  The ophthalmologic apparatus of claim 5, wherein the data processing unit further executes a conversion process that converts the anterior chamber region of the Scheimpflug image into image data structured to match the input layer of the second neural network, and the second neural network is configured to receive the image data generated by the conversion process as input and output cell regions.
  7.  The ophthalmologic apparatus of any one of claims 3 to 6, wherein the data processing unit executes the cell evaluation information generation process using a third inference model constructed in advance, the third inference model including a third neural network constructed by machine learning using training data that includes at least eye images, and the third neural network being configured to receive cell regions of a Scheimpflug image as input and output cell evaluation information.
  8.  The ophthalmologic apparatus of claim 2, wherein the data processing unit executes the second segmentation, which identifies the cell regions from the Scheimpflug image, and the cell evaluation information generation process, which generates the cell evaluation information from the cell regions identified by the second segmentation.
  9.  The ophthalmologic apparatus of claim 8, wherein the data processing unit executes the second segmentation using a fourth inference model constructed in advance, the fourth inference model including a fourth neural network constructed by machine learning using training data that includes at least eye images, and the fourth neural network being configured to receive a Scheimpflug image as input and output cell regions.
  10.  The ophthalmologic apparatus of claim 8 or 9, wherein the data processing unit executes the cell evaluation information generation process using a fifth inference model constructed in advance, the fifth inference model including a fifth neural network constructed by machine learning using training data that includes at least eye images, and the fifth neural network being configured to receive cell regions of a Scheimpflug image as input and output cell evaluation information.
  11.  The ophthalmologic apparatus of claim 2, wherein the data processing unit executes the cell evaluation information generation process, which generates the cell evaluation information from the Scheimpflug image.
  12.  The ophthalmologic apparatus of claim 11, wherein the data processing unit executes the cell evaluation information generation process using a sixth inference model constructed in advance, the sixth inference model including a sixth neural network constructed by machine learning using training data that includes at least eye images, and the sixth neural network being configured to receive a Scheimpflug image as input and output cell evaluation information.
  13.  The ophthalmologic apparatus of claim 2, wherein the data processing unit executes the first segmentation, which identifies the anterior chamber region from the Scheimpflug image, and the cell evaluation information generation process, which generates the cell evaluation information from the anterior chamber region identified by the first segmentation.
  14.  The ophthalmologic apparatus of claim 13, wherein the data processing unit executes the first segmentation using a seventh inference model constructed in advance, the seventh inference model including a seventh neural network constructed by machine learning using training data that includes at least eye images, and the seventh neural network being configured to receive a Scheimpflug image as input and output an anterior chamber region.
  15.  The ophthalmologic apparatus of claim 13 or 14, wherein the data processing unit executes the cell evaluation information generation process using an eighth inference model constructed in advance, the eighth inference model including an eighth neural network constructed by machine learning using training data that includes at least eye images, and the eighth neural network being configured to receive an anterior chamber region of a Scheimpflug image as input and output cell evaluation information.
  16.  The ophthalmologic apparatus of claim 15, wherein the data processing unit further executes a conversion process that converts the anterior chamber region of the Scheimpflug image into image data structured to match the input layer of the eighth neural network, and the eighth neural network is configured to receive the image data generated by the conversion process as input and output cell evaluation information.
  17.  The ophthalmologic apparatus of claim 2, wherein the data processing unit executes the first segmentation, which applies a first analysis process to the Scheimpflug image to identify the anterior chamber region; the second segmentation, which applies a second analysis process to the anterior chamber region identified by the first segmentation to identify the cell regions; and the cell evaluation information generation process, which applies a third analysis process to the cell regions identified by the second segmentation to generate the cell evaluation information.
  18.  The ophthalmologic apparatus of claim 17, wherein, in the third analysis process, the data processing unit identifies a partial region of the anterior chamber region identified by the first segmentation, obtains the number of the cell regions belonging to the partial region, and calculates the density of the inflammatory cells on the basis of that number and the dimensions of the partial region, the cell evaluation information including the density.
  19.  The ophthalmologic apparatus of any one of claims 1 to 18, wherein the image acquisition unit includes:
      an illumination system that projects slit light onto the subject's eye; and
      an imaging system that photographs the subject's eye,
      the illumination system and the imaging system being configured to satisfy the Scheimpflug condition.
  20.  The ophthalmologic apparatus of claim 19, wherein the image acquisition unit collects a series of Scheimpflug images by scanning a three-dimensional region of the subject's eye with the slit light.
  21.  The ophthalmologic apparatus of claim 20, wherein the data processing unit generates the inflammation state information from a Scheimpflug image included in the series of Scheimpflug images.
  22.  The ophthalmologic apparatus of claim 20, wherein the data processing unit processes the series of Scheimpflug images to generate processed image data, and generates the inflammation state information from the processed image data.
  23.  The ophthalmologic apparatus of any one of claims 19 to 22, wherein the imaging system includes a first imaging system and a second imaging system that photograph the subject's eye from mutually different directions.
  24.  The ophthalmologic apparatus of claim 23, wherein the optical axis of the first imaging system and the optical axis of the second imaging system are inclined in mutually opposite directions with respect to the optical axis of the illumination system, and the data processing unit selects one of a first Scheimpflug image acquired by the first imaging system and a second Scheimpflug image acquired by the second imaging system, and generates the inflammation state information on the basis of the selected Scheimpflug image.
  25.  The ophthalmologic apparatus of claim 24, wherein the data processing unit selects, from among the first Scheimpflug image and the second Scheimpflug image, a Scheimpflug image that contains no corneal reflection artifact.
  26.  The ophthalmologic apparatus of any one of claims 1 to 25, wherein the image acquisition unit includes an image receiving unit that receives a Scheimpflug image acquired in advance.
  27.  A method of controlling an ophthalmologic apparatus that includes a processor, the method comprising:
      causing the ophthalmologic apparatus to acquire a Scheimpflug image of a subject's eye; and
      causing the processor to execute processing for generating inflammation state information indicating the inflammation state of the subject's eye from the Scheimpflug image.
  28.  A method of processing an eye image, comprising:
      acquiring a Scheimpflug image of a subject's eye; and
      executing processing for generating inflammation state information indicating the inflammation state of the subject's eye from the Scheimpflug image.
  29.  A program that causes a computer to execute the method of claim 27 or 28.
  30.  A computer-readable non-transitory recording medium on which the program of claim 29 is recorded.

PCT/JP2022/020096 2021-09-08 2022-05-12 Ophthalmological device, method for controlling ophthalmological device, method for processing eye image, program, and recording medium WO2023037658A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-145994 2021-09-08
JP2021145994A JP2023039046A (en) 2021-09-08 2021-09-08 Ophthalmologic apparatus, method for controlling ophthalmologic apparatus, method for processing eye image, program and recording medium

Publications (1)

Publication Number Publication Date
WO2023037658A1 (en) 2023-03-16

Family

ID=85506296

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/020096 WO2023037658A1 (en) 2021-09-08 2022-05-12 Ophthalmological device, method for controlling ophthalmological device, method for processing eye image, program, and recording medium

Country Status (2)

Country Link
JP (1) JP2023039046A (en)
WO (1) WO2023037658A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09313446A (en) * 1996-05-31 1997-12-09 Nidek Co Ltd Image pickup device for limbi palpebrales section
JP2004507296A (en) * 2000-08-21 2004-03-11 ザ ジェネラル ホスピタル コーポレーション Diagnostic methods for neurodegenerative conditions
CN105590323A (en) * 2016-02-02 2016-05-18 温州医科大学附属眼视光医院 Method for detecting vascularization degree of surface of filtering bleb based on ophthalmic slit lamp photographing
JP2019213729A (en) * 2018-06-13 2019-12-19 株式会社トプコン Slit lamp microscope and ophthalmologic system
JP2021040855A (en) * 2019-09-10 2021-03-18 株式会社トプコン Slit lamp microscope, ophthalmic information processing device, ophthalmic system, method for controlling slit lamp microscope, program, and recording medium
JP2021040856A (en) * 2019-09-10 2021-03-18 株式会社トプコン Slit lamp microscope, ophthalmic information processing device, ophthalmic system, method for controlling slit lamp microscope, program, and recording medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Slit lamp microscope atlas.", 10 April 2008, NAKAYAMA SHOTEN CO., LTD., JP, ISBN: 978-4-521-73015-8, article MITSURU SAWA, SHOJI KISHI, YASUYUKI SUZUKI AND JUN SHOJI: "4) anterior chamber", pages: 20 - 25, XP009544343 *
ANONYMOUS: "Guidelines for Uveitis. Chapter I General.", NIHON GANKA GAKKAI ZASSHI - JOURNAL OF JAPANESE OPHTHALMOLOGICAL SOCIETY - ACTA SOCIETATIS OPHTHALMOLOGICAE JAPONICAE, NIHON-GANKA-GAKKAI, JP, vol. 123, no. 6, 31 May 2019 (2019-05-31), JP , pages 639 - 650, XP009544294, ISSN: 0029-0203 *

Also Published As

Publication number Publication date
JP2023039046A (en) 2023-03-20

Similar Documents

Publication Title
JP7154044B2 (en) slit lamp microscope and ophthalmic system
JP7321678B2 (en) slit lamp microscope and ophthalmic system
JP7228342B2 (en) slit lamp microscope and ophthalmic system
WO2021049341A1 (en) Slit lamp microscope, ophthalmic information processing device, ophthalmic system, method for controlling slit lamp microscope, program, and recording medium
WO2021256130A1 (en) Slit lamp microscope
JP7560303B2 (en) Slit Lamp Microscope System
JP2024152821A (en) Slit Lamp Microscope
US20220409046A1 (en) Ophthalmic imaging apparatus and ophthalmic image processing appratus
JP7345610B2 (en) slit lamp microscope
WO2023037658A1 (en) Ophthalmological device, method for controlling ophthalmological device, method for processing eye image, program, and recording medium
WO2022145129A1 (en) Ophthalmic information processing device, ophthalmic device, ophthalmic information processing method, and program
JP7517903B2 (en) Slit Lamp Microscope System
WO2021261103A1 (en) Slit-lamp microscope
WO2023238729A1 (en) Ophthalmologic device, method for controlling ophthalmologic device, program, and recording medium
WO2024004455A1 (en) Opthalmic information processing device, opthalmic device, opthalmic information processing method, and program
JP7560300B2 (en) Slit Lamp Microscope System
EP4194924A1 (en) Slit-lamp microscope
JP2024159832A (en) Slit Lamp Microscope
JP2024001913A (en) Image processing method, image processing apparatus, program and recording medium
JP2023138773A (en) Ophthalmologic information processing device, ophthalmologic apparatus, ophthalmologic information processing method and program

Legal Events

Code Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 22866989; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 22866989; Country of ref document: EP; Kind code of ref document: A1)