CN112233313A - Paper money identification method, device and equipment - Google Patents
Paper money identification method, device and equipment
- Publication number
- CN112233313A (application number CN202011115963.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- deformation
- area
- sub
- paper money
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07D—HANDLING OF COINS OR VALUABLE PAPERS, e.g. TESTING, SORTING BY DENOMINATIONS, COUNTING, DISPENSING, CHANGING OR DEPOSITING
- G07D7/00—Testing specially adapted to determine the identity or genuineness of valuable papers or for segregating those which are unacceptable, e.g. banknotes that are alien to a currency
- G07D7/20—Testing patterns thereon
- G07D7/202—Testing patterns thereon using pattern matching
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07F—COIN-FREED OR LIKE APPARATUS
- G07F19/00—Complete banking systems; Coded card-freed arrangements adapted for dispensing or receiving monies or the like and posting such transactions to existing accounts, e.g. automatic teller machines
- G07F19/20—Automatic teller machines [ATMs]
- G07F19/202—Depositing operations within ATMs
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Business, Economics & Management (AREA)
- Accounting & Taxation (AREA)
- Finance (AREA)
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Inspection Of Paper Currency And Valuable Securities (AREA)
Abstract
The embodiments of this specification relate to the technical field of artificial intelligence data processing and disclose a paper money identification method, apparatus, and device. The method includes: receiving an initial image corresponding to a banknote to be identified; acquiring a sample banknote image corresponding to the banknote to be identified, where the sample banknote image is a banknote image that meets a preset identification requirement; performing pixel-wise gray value subtraction on the initial image and the sample banknote image to obtain a difference image area; determining, from the pixel value distribution in the difference image area, a crease position and the deformation area corresponding to the crease position; and performing image rectification on the deformation area to obtain a rectified image of the banknote to be identified, so that the banknote can be recognized based on the rectified image. In this way, the accuracy and efficiency of banknote recognition can be further improved.
Description
Technical Field
The present disclosure relates to the field of artificial intelligence data processing technologies, and in particular, to a method, an apparatus, and a device for identifying paper money.
Background
Handling business manually at bank branch counters suffers from long queues and cumbersome procedures, and this mode can no longer meet people's needs as working patterns in modern society change and the pace of life accelerates. Users increasingly prefer to handle business themselves at self-service terminals. Self-service terminals shorten the waiting time for business handling, are available around the clock without time restrictions, and improve the efficiency and quality of business handling.
Creases inevitably appear on banknotes during daily circulation and use. When a user handles business such as a deposit at a self-service terminal and inserts a creased banknote, the terminal often fails to recognize it and returns it to the user. Even if the user smooths the banknote slightly and inserts it again, the terminal may still fail to recognize it, which degrades the user experience. Repeated insertion attempts also increase the processing time of the terminal and reduce the efficiency of business handling. A more effective solution for identifying creased banknotes is therefore needed.
Disclosure of Invention
An object of the embodiments of the present specification is to provide a method, an apparatus, and a device for identifying paper money, which can improve the accuracy and efficiency of paper money identification.
This specification provides a paper money identification method, apparatus, and device, which are implemented in the following ways:
a method of banknote recognition, the method comprising: receiving an initial image corresponding to the paper money to be identified; acquiring a sample paper money image corresponding to the paper money to be identified; wherein the sample banknote image comprises a banknote image meeting a preset identification requirement; carrying out image pixel gray value subtraction processing on the initial image and the sample paper money image to obtain a difference image area; determining a crease position in the difference image area and a deformation area corresponding to the crease position according to the pixel value distribution in the difference image area; and carrying out image correction processing on the deformation area to obtain a corrected image corresponding to the paper money to be recognized so as to recognize the paper money to be recognized based on the corrected image.
In other embodiments of the method provided in this specification, before determining the fold position in the difference image area and the deformed area corresponding to the fold position, the method further includes: comparing the image area of the difference image area with a preset damage area threshold value; under the condition that the image area of the difference image area is smaller than a first preset damaged area threshold and larger than a second preset damaged area threshold, determining a crease position in the difference image area and a deformation area corresponding to the crease position according to the pixel value distribution in the difference image area; wherein the first predetermined damaged area threshold is greater than the second predetermined damaged area threshold.
In other embodiments of the method provided in this specification, the performing image rectification processing on the deformed region includes: respectively taking the image areas on two sides of the crease position in the deformation area as a first sub-deformation area and a second sub-deformation area; intercepting undistorted image areas adjacent to the first sub-deformation area and the second sub-deformation area as a first reference image corresponding to the first sub-deformation area and a second reference image corresponding to the second sub-deformation area; comparing the first sub-deformation region with the first reference image to obtain a first deformation proportion of the first sub-deformation region, and comparing the second sub-deformation region with the second reference image to obtain a second deformation proportion of the second sub-deformation region; and correcting the first sub-deformation area according to the first deformation proportion, and correcting the second sub-deformation area according to the second deformation proportion.
In other embodiments of the method provided in this specification, the comparing the first sub-deformation region with the first reference image to obtain a deformation ratio of the first sub-deformation region includes: shadow images of the first sub-deformation area and the first reference image are respectively extracted; respectively calculating the pixel mean values of the shadow images of the first sub-deformation area and the first reference image to obtain a first pixel mean value and a second pixel mean value; and taking the ratio of the first pixel mean value to the second pixel mean value as the first deformation ratio of the first sub-deformation region.
In other embodiments of the method provided in this specification, the comparing the second sub-deformation region with the second reference image to obtain a deformation ratio of the second sub-deformation region includes: shadow images of the second sub-deformation area and the second reference image are respectively extracted; respectively calculating the pixel mean values of the shadow images of the second sub-deformation area and the second reference image to obtain a third pixel mean value and a fourth pixel mean value; and taking the ratio of the third pixel mean value to the fourth pixel mean value as a second deformation ratio of the second sub-deformation region.
In still other embodiments of the method provided in this specification, the extracting a shadow image of the first sub-deformation region and the first reference image includes: and extracting shadow images of the first sub-deformation region and the first reference image by using a convex hull algorithm.
In other embodiments of the method provided in this specification, the performing image rectification processing on the deformed region includes: and carrying out image correction processing on the deformation area by utilizing a bilinear interpolation method.
In other embodiments of the method provided in this specification, the acquiring a sample banknote image corresponding to the banknote to be recognized includes: performing character recognition on the initial image of the paper money to be recognized to obtain the paper money category of the paper money to be recognized; and acquiring a sample banknote image corresponding to the banknote type as a sample banknote image corresponding to the banknote to be identified.
In another aspect, an embodiment of the present specification further provides a banknote recognition apparatus, including: the initial image receiving module is used for receiving an initial image corresponding to the paper money to be identified; the sample image acquisition module is used for acquiring a sample paper money image corresponding to the paper money to be identified; wherein the sample banknote image comprises a banknote image meeting a preset identification requirement; the difference image extraction module is used for carrying out image pixel gray value subtraction processing on the initial image and the sample paper money image to obtain a difference image area; the crease positioning module is used for determining crease positions in the difference image areas and deformation areas corresponding to the crease positions according to the pixel value distribution in the difference image areas; and the correction processing module is used for carrying out image correction processing on the deformation area to obtain a corrected image corresponding to the paper money to be recognized so as to recognize the paper money to be recognized based on the corrected image.
In another aspect, the present specification further provides a banknote recognition apparatus, which includes at least one processor and a memory for storing processor-executable instructions, where the instructions, when executed by the processor, implement the steps of the method according to any one or more of the above embodiments.
According to the paper currency identification method, apparatus, and device provided in one or more embodiments of this specification, the difference area is determined by comparing the gray value distribution of the image of the banknote to be identified with that of the sample banknote image. The crease position in the banknote is then located based on the difference area, which greatly improves the accuracy and efficiency of crease localization. The deformation area corresponding to the crease position is then stretched or compressed accordingly, rectified, and restored to obtain a rectified image. Recognizing the banknote based on the rectified image reduces the number of recognition attempts the ATM has to make and improves the recognition rate.
Drawings
In order to more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only some embodiments described in the present specification, and for those skilled in the art, other drawings can be obtained according to the drawings without any creative effort. In the drawings:
FIG. 1 is a schematic flow chart diagram of an embodiment of a banknote recognition method provided in the present specification;
FIG. 2 is a schematic diagram of a pixel distribution of a difference image area in one embodiment provided herein;
FIG. 3 is a schematic diagram of an image region of a shadow image to be extracted in one embodiment provided in the present specification;
FIG. 4 is a schematic diagram of an extracted shadow image in one embodiment provided herein;
FIG. 5 is a schematic view of a deformed region and a corresponding non-deformed region in one embodiment provided herein;
FIG. 6 is a schematic diagram of an extension of linear interpolation in one embodiment provided herein;
FIG. 7 is a schematic diagram of an extension of linear interpolation in one embodiment provided herein;
fig. 8 is a schematic block diagram of another banknote recognition apparatus provided in the present specification.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in one or more embodiments of the present specification will be clearly and completely described below with reference to the drawings in one or more embodiments of the present specification, and it is obvious that the described embodiments are only a part of the embodiments of the specification, and not all embodiments. All other embodiments obtained by a person skilled in the art based on one or more embodiments of the present specification without making any creative effort shall fall within the protection scope of the embodiments of the present specification.
In one example of a scenario provided by an embodiment of the present specification, the banknote recognition method may be applied to a data processing apparatus that performs banknote recognition or a data processing apparatus that performs only banknote image rectification processing. The data processing equipment can be an intelligent terminal of a financial institution, such as intelligent counter equipment, an ATM (automatic teller machine) and the like; or a local server or a cloud server connected with the intelligent terminal.
Existing crease detection and de-distortion schemes are generally designed for ordinary documents, which contain many characters and have a fairly uniform pixel distribution, so the crease position can be obtained directly from abnormal pixels in the image. Banknotes, however, show large pixel differences between regions: for example, a very dark national emblem sits at the upper left, a blank area lies below it, and a dark pattern appears again to the right of the blank area, so it is difficult to locate the crease position directly from abnormal pixels. Crease localization on banknotes therefore usually relies on selecting a region on the back of the note where the pattern is relatively uniform, but this approach has clear limitations and cannot accurately locate creases in all regions of the note.
Accordingly, in the embodiments of this specification, the difference area is determined by comparing the gray value distribution of the image of the banknote to be identified with that of the sample banknote image. The crease position in the banknote is then located based on the difference area, which greatly improves the accuracy and efficiency of crease localization. The deformation area corresponding to the crease position is then stretched or compressed accordingly, rectified, and restored to obtain a rectified image. Recognizing the banknote based on the rectified image reduces the number of recognition attempts the ATM has to make and improves the recognition rate.
Fig. 1 is a schematic flow chart of an embodiment of the banknote recognition method provided in this specification. Although the present specification provides the method steps or apparatus structures as shown in the following examples or figures, more or less steps or modules may be included in the method or apparatus structures based on conventional or non-inventive efforts. In the case of steps or structures which do not logically have the necessary cause and effect relationship, the execution order of the steps or the block structure of the apparatus is not limited to the execution order or the block structure shown in the embodiments or the drawings of the present specification. When the described method or module structure is applied to a device, a server or an end product in practice, the method or module structure according to the embodiment or the figures may be executed sequentially or in parallel (for example, in a parallel processor or multi-thread processing environment, or even in an implementation environment including distributed processing and server clustering). In a specific embodiment of the banknote recognition method as shown in fig. 1, the method may be applied to the data processing apparatus, and the method may include the following steps:
S20: receiving an initial image corresponding to the paper money to be identified.
The data processing device may receive an initial image corresponding to a banknote to be recognized. For example, when a user deposits cash at an ATM, the ATM may capture an image of each deposited banknote as the initial image of that banknote, and the corresponding terminal may transmit the initial image to the data processing device. If the data processing device is the ATM itself, the ATM's image acquisition module may pass the captured initial image to the ATM's processor. If the data processing device is a local server or a cloud server, the ATM may send the captured initial image to that server over a communication network.
S22: acquiring a sample paper money image corresponding to the paper money to be identified; wherein the sample banknote image comprises a banknote image meeting a preset identification requirement.
The data processing device may obtain a sample banknote image corresponding to the banknote to be recognized, where the sample banknote image may be a banknote image that meets a preset identification requirement. The data processing device may pre-store, as sample banknote images, images of standard banknotes that are free of creases, contamination, damage, and the like. In some embodiments, the sample banknote images may be captured under the same or a similar external environment as the one in which banknotes are deposited, so as to eliminate the interference of other factors (such as brightness and contrast) in the subsequent image comparison and improve the accuracy of recognition.
In some embodiments, the initial image may be compared with each pre-stored sample banknote image, and the corresponding sample banknote image may be determined according to the similarity. In other embodiments, images of standard banknotes can be acquired correspondingly for different banknote categories, and then the sample banknote image and the corresponding banknote category can be stored in an associated manner. The banknote type may include renminbi types such as one yuan renminbi and five yuan renminbi, and may also be other types of currency, which is not limited herein.
In the banknote identification process, character recognition may be performed on the initial image of the banknote to be identified to obtain its banknote category. For example, the initial image may first be processed with OCR (Optical Character Recognition) to extract characters that indicate the banknote category, such as "People's Bank of China" or the denomination, and the banknote category may then be determined from the extracted characters. The sample banknote image corresponding to that category may then be used as the sample banknote image for the banknote to be identified. Of course, the banknote category may also be determined in other ways, which are not limited herein.
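As a rough illustration of how this step could look in code, the sketch below uses the open-source pytesseract OCR wrapper to read characters from the initial image and map a keyword to a banknote category; the keyword table and the sample-image paths are hypothetical placeholders, not values from this specification.

```python
# Minimal sketch of category lookup via OCR, assuming OpenCV and pytesseract are available.
import cv2
import pytesseract

# Hypothetical mappings from OCR keywords to categories and from categories to sample images.
KEYWORD_TO_CATEGORY = {"100": "CNY_100", "50": "CNY_50", "20": "CNY_20"}
SAMPLE_IMAGE_PATHS = {"CNY_100": "samples/cny_100.png",
                      "CNY_50": "samples/cny_50.png",
                      "CNY_20": "samples/cny_20.png"}

def load_sample_image(initial_image_path: str):
    """OCR the initial image, infer the banknote category, and load its sample image."""
    gray = cv2.cvtColor(cv2.imread(initial_image_path), cv2.COLOR_BGR2GRAY)
    text = pytesseract.image_to_string(gray)            # extract printed characters (denomination etc.)
    for keyword, category in KEYWORD_TO_CATEGORY.items():
        if keyword in text:
            return category, cv2.imread(SAMPLE_IMAGE_PATHS[category])
    return None, None                                    # category could not be determined from OCR
```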
S24: performing image pixel gray value subtraction on the initial image and the sample banknote image to obtain a difference image area.
After the sample banknote image is acquired, pixel-wise gray value subtraction may be performed on the initial image and the sample banknote image. For example, the gray values of the initial image and of the sample banknote image may each be extracted to obtain their gray value distributions. The gray values of corresponding pixels in the two images may then be subtracted, and the image area whose gray value difference is not zero is taken as the difference image area.
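A minimal sketch of this subtraction step, assuming the two images are already registered and of equal size; the noise threshold of 30 is an illustrative choice, not a value from the specification.

```python
# Minimal sketch: difference image area via pixel-wise gray value subtraction (OpenCV + NumPy).
import cv2
import numpy as np

def difference_image_area(initial_bgr: np.ndarray, sample_bgr: np.ndarray, noise_thresh: int = 30):
    """Return the absolute gray difference, a binary mask of differing pixels, and the area in pixels."""
    gray_initial = cv2.cvtColor(initial_bgr, cv2.COLOR_BGR2GRAY)
    gray_sample = cv2.cvtColor(sample_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray_initial, gray_sample)                        # |I(p) - S(p)| for every pixel p
    _, mask = cv2.threshold(diff, noise_thresh, 255, cv2.THRESH_BINARY)  # suppress sensor noise
    return diff, mask, int(np.count_nonzero(mask))
```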
S26: determining a crease position in the difference image area and a deformation area corresponding to the crease position according to the pixel value distribution in the difference image area.
After the difference image area is determined, the data processing device may determine, from the pixel value distribution in the difference image area, the crease position and the deformation area corresponding to it. For example, a histogram of the pixel distribution in the difference image area may be computed. As shown in fig. 2, moving from left to right along the abscissa, the pixel values level off at a certain value, which can be taken as the pixel value that marks the crease. The original background in the difference image area can then be removed according to this value while the crease is retained, so that the crease position can be located accurately.
After the crease position is determined, the difference image area containing it may be taken as the deformation area corresponding to that crease. If there are multiple crease positions, a corresponding deformation area may be determined for each of them.
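The sketch below shows one way to read the "stable" value off the difference histogram and keep only crease pixels; treating the most frequent non-zero difference value as the background level, and projecting the mask onto image rows to find a roughly horizontal crease, are simplifying assumptions made for illustration.

```python
# Minimal sketch: locate the crease from the pixel-value histogram of the difference area.
import numpy as np

def crease_mask_and_rows(diff: np.ndarray, min_row_hits: int = 50):
    """Return a binary crease mask and the row indices where crease pixels concentrate."""
    nonzero = diff[diff > 0]
    hist, _ = np.histogram(nonzero, bins=256, range=(0, 256))
    background_level = int(np.argmax(hist))               # value at which the histogram levels off
    mask = (diff > background_level).astype(np.uint8) * 255
    # min_row_hits is an arbitrary illustrative default for "enough crease pixels in a row".
    crease_rows = np.where((mask > 0).sum(axis=1) > min_row_hits)[0]
    return mask, crease_rows
```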
In other embodiments, before the crease position and its deformation area are determined, the image area of the difference image region may first be compared with preset damage-area thresholds, whose values may be chosen according to the actual application scenario. If the area of the difference image region is larger than a first preset damage-area threshold, the deposited banknote may be contaminated or severely damaged, and a rejection result can be returned directly so that the ATM rejects the note. If the area is smaller than a second preset damage-area threshold, the damage is minor, and a continue result can be returned so that the ATM proceeds with the remaining recognition steps. The first preset damage-area threshold is larger than the second preset damage-area threshold.

When the image area of the difference image region is smaller than the first preset damage-area threshold and larger than the second preset damage-area threshold, the crease position in the difference image region and the deformation area corresponding to it may be determined from the pixel value distribution in the difference image region. In this way, severely damaged banknotes can be screened out quickly, which improves recognition efficiency.
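The screening rule itself reduces to a small branch; in the sketch below the two thresholds are left as parameters, since the specification only states that they depend on the application scenario and that the first is larger than the second.

```python
# Minimal sketch: quick screening by damaged-area thresholds before crease localization.
def screen_by_damaged_area(diff_area_px: int, first_thresh: int, second_thresh: int) -> str:
    assert first_thresh > second_thresh, "first threshold must be larger than the second"
    if diff_area_px > first_thresh:
        return "reject"      # likely contaminated or severely damaged: ATM returns the note
    if diff_area_px < second_thresh:
        return "continue"    # damage negligible: proceed with the normal recognition flow
    return "rectify"         # locate the crease and rectify the deformation area
```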
S28: performing image rectification on the deformation area to obtain a rectified image corresponding to the banknote to be recognized, so that the banknote is recognized based on the rectified image.
After the crease position and its deformation area are determined, image rectification may be performed on the deformation area. For example, the deformation ratio of the deformation area may be determined and the area adjusted based on that ratio. Alternatively, the corresponding sample banknote image may be acquired, the position of the deformation area within the sample banknote image determined, and the image area at that position extracted from the sample banknote image as a reference image; the deformation area may then be adjusted based on the reference image. After the deformation area in the initial image has been adjusted, the rectified image corresponding to the banknote to be recognized is obtained.
The banknote can then be recognized based on the rectified image. If the data processing device is an ATM, the ATM may perform the remaining recognition steps on the banknote directly based on the rectified image. If the data processing device is a local or cloud server, it may send the rectified image to the corresponding ATM so that the ATM can perform the remaining recognition steps based on it.
In some embodiments, the image areas on both sides of the fold position in the deformation area may be respectively used as the first sub-deformation area and the second sub-deformation area. Then, the undistorted image areas adjacent to the first sub-distortion area and the second sub-distortion area may be respectively cut out as a first reference image corresponding to the first sub-distortion area and a second reference image corresponding to the second sub-distortion area. And comparing the first sub-deformation area with the first reference image to obtain a first deformation proportion of the first sub-deformation area. And comparing the second sub-deformation region with the second reference image to obtain a second deformation proportion of the second sub-deformation region. Then, the first sub-deformation region may be subjected to a correction process according to the first deformation ratio, and the second sub-deformation region may be subjected to a correction process according to the second deformation ratio.
The deformation characteristics on the two sides of a crease usually differ considerably, so the regions on either side of the crease can be analyzed separately, each with an adjacent undeformed region as its reference image. This makes it simple and efficient to determine the deformation ratio of the deformed regions on both sides of the crease and then rectify the deformed image based on those ratios. When determining the deformation ratio, the geometric features of a deformed region may, for example, be compared with those of its reference image.
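The sketch below illustrates the splitting-and-rectification idea for a roughly horizontal crease: the bands above and below the crease row are the two sub-deformation regions, and each is resized according to its deformation ratio. The assumption that dividing a band's height by its ratio undoes the stretching or shrinking, and the ratio_above / ratio_below values themselves, are illustrative; one way to obtain the ratios from shadow images is sketched further below.

```python
# Minimal sketch: rectify the two sides of a horizontal crease using their deformation ratios.
import cv2
import numpy as np

def rectify_around_crease(image: np.ndarray, top: int, crease_row: int, bottom: int,
                          ratio_above: float, ratio_below: float) -> np.ndarray:
    """Resize the bands above/below the crease by their ratios and re-stack the image."""
    width = image.shape[1]
    above = image[top:crease_row]                 # first sub-deformation region
    below = image[crease_row:bottom]              # second sub-deformation region
    new_above = cv2.resize(above, (width, max(1, round(above.shape[0] / ratio_above))),
                           interpolation=cv2.INTER_LINEAR)   # ratio > 1: band was stretched
    new_below = cv2.resize(below, (width, max(1, round(below.shape[0] / ratio_below))),
                           interpolation=cv2.INTER_LINEAR)   # ratio < 1: band was compressed
    return np.vstack([image[:top], new_above, new_below, image[bottom:]])
```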
In other embodiments, the following method may be used to determine the corresponding deformation ratio: and respectively extracting shadow images of the first sub-deformation area and the first reference image. And respectively calculating the pixel mean values of the shadow images of the first sub-deformation area and the first reference image to obtain a first pixel mean value and a second pixel mean value. And taking the ratio of the first pixel mean value to the second pixel mean value as the first deformation ratio of the first sub-deformation region. And respectively extracting shadow images of the second sub-deformation region and the second reference image. And respectively calculating the pixel mean values of the shadow images of the second sub-deformation area and the second reference image to obtain a third pixel mean value and a fourth pixel mean value. And taking the ratio of the third pixel mean value to the fourth pixel mean value as a second deformation ratio of the second sub-deformation region.
The shadow images of the first sub-deformation region and the first reference image, and of the second sub-deformation region and the second reference image, may all be extracted using a convex hull algorithm.
Taking the first sub-deformation region as an example, let the α-level set of the input first sub-deformation region image I be denoted L_α(I), where p is any pixel of the image I and α ∈ [0, 255]. The α-level set of the shadow image L is obtained by computing the convex hull envelope of the corresponding level set of I, with the following specific steps:
- assign the maximum pixel value of the image to v1: max I(p) → v1;
- assign the minimum pixel value of the image to v2: min I(p) → v2;
- for i = v1 to v2, replace the level set by its convex hull (C1 → C0);
- end for;
- output the shadow image L corresponding to the image I.
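The sketch below is one interpretation of these steps using OpenCV: sweeping the gray levels from the minimum up to the maximum, it fills the convex hull of each level set with that level, so each pixel of the shadow image ends up holding the highest level whose hull still covers it. This is a reading of the level-set/convex-hull idea, not a verbatim implementation of the original algorithm.

```python
# Minimal sketch: shadow image as the convex hull envelope of gray-level sets (OpenCV + NumPy).
import cv2
import numpy as np

def extract_shadow_image(gray: np.ndarray) -> np.ndarray:
    """For each level alpha, fill the convex hull of {p : gray(p) >= alpha} with alpha."""
    v1, v2 = int(gray.max()), int(gray.min())        # maximum and minimum pixel values
    shadow = np.full_like(gray, v2)
    for alpha in range(v2, v1 + 1):                  # sweep the levels from v2 up to v1
        points = cv2.findNonZero((gray >= alpha).astype(np.uint8))
        if points is None:                           # level set is empty: nothing left to cover
            break
        hull = cv2.convexHull(points)                # convex hull envelope of the level set
        cv2.fillConvexPoly(shadow, hull, alpha)      # pixels inside the hull take level alpha
    return shadow
```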
Shadow images of the first reference image, the second sub-deformation region, and the second reference image may be extracted in the same manner. During shadow image extraction, the first sub-deformation region, the first reference image, the second sub-deformation region, and the second reference image may be treated together as a single image analysis region, or they may be extracted separately; this is not limited herein. Fig. 3 shows an image region consisting of a deformed region and the undeformed region immediately adjacent to it, and fig. 4 shows the shadow image obtained by extracting the shadow of the image region in fig. 3.
As shown in fig. 5, the figure shows the shadow image of the deformation region corresponding to a crease together with its adjacent undeformed regions: the normal area A1 above the crease, the stretched area A2 above the crease, the contracted area A3 below the crease, and the normal area A4 below the crease.
The pixel mean values of A1, A2, A3, and A4 are computed respectively, and the stretch and shrink deformation ratios of the image are calculated from these four means. Stretch ratio: the ratio of the A2 area pixel mean to the A1 area pixel mean. Shrink ratio: the ratio of the A3 area pixel mean to the A4 area pixel mean.
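In code, the ratio computation reduces to a few mean operations over the four shadow-image crops; the sketch assumes the A1–A4 regions have already been cut out of the shadow image.

```python
# Minimal sketch: stretch and shrink ratios from the pixel means of regions A1-A4.
import numpy as np

def deformation_ratios(a1: np.ndarray, a2: np.ndarray, a3: np.ndarray, a4: np.ndarray):
    """A1/A4 are the normal areas, A2 the stretched area, A3 the contracted area."""
    stretch_ratio = float(a2.mean()) / float(a1.mean())   # A2 area pixel mean over A1 area pixel mean
    shrink_ratio = float(a3.mean()) / float(a4.mean())    # A3 area pixel mean over A4 area pixel mean
    return stretch_ratio, shrink_ratio
```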
The deformation area may then be rectified based on the deformation ratio, for example through a rectification based on phase-splitting projection feature analysis. In other embodiments, the deformation area may be rectified using bilinear interpolation to restore the deformed part of the banknote. Bilinear interpolation is the extension of linear interpolation to an interpolation function of two variables: linear interpolation is carried out in each of the two directions in turn.
Suppose the value of an unknown function f (here, f denotes the pixel value of a pixel) is required at a point P = (x, y). As shown in fig. 6, the values of f are known at the four points Q11 = (x1, y1), Q12 = (x1, y2), Q21 = (x2, y1), and Q22 = (x2, y2). Linear interpolation is first carried out in the x direction:

f(R1) ≈ ((x2 − x)/(x2 − x1))·f(Q11) + ((x − x1)/(x2 − x1))·f(Q21), where R1 = (x, y1);

f(R2) ≈ ((x2 − x)/(x2 − x1))·f(Q12) + ((x − x1)/(x2 − x1))·f(Q22), where R2 = (x, y2).

Linear interpolation is then carried out in the y direction:

f(P) ≈ ((y2 − y)/(y2 − y1))·f(R1) + ((y − y1)/(y2 − y1))·f(R2).
the following is an example of the application of bilinear interpolation in image scaling:
assuming that the size of one original image is 365 × 549, the original image is enlarged by 1.5 times and 1.9 times in the x and y directions, that is, 365 × 1.5 equals 547.5, and 549 × 1.9 equals 1043.1, and the enlarged image is 548 × 1043 according to the rounding principle. If the pixel value is obtained at any point (143, 364) of the enlarged image, the original image is 143/1.5-95.333 and 364/1.9-191.579, and four adjacent positions in the original image are (95, 191), (95, 192), (96, 191), and (96, 192), respectively, and the schematic diagram is shown in fig. 7.
Then f(R1) = (96 − 95.333) × f(95, 191) + (95.333 − 95) × f(96, 191),

f(R2) = (96 − 95.333) × f(95, 192) + (95.333 − 95) × f(96, 192),

and thus f(P) = (192 − 191.579) × f(R1) + (191.579 − 191) × f(R2).
f(P) is the pixel value of the enlarged image at position (143, 364), taken from the corresponding location in the original image; the whole enlarged image is obtained by traversing every pixel of the enlarged image in this way.
Image reduction can be implemented with reference to the above steps and is not described again here.
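A self-contained sketch of the enlargement procedure walked through above; it follows the same coordinate mapping as the 365 × 549 example, and in practice cv2.resize with INTER_LINEAR would normally be used instead of an explicit Python loop.

```python
# Minimal sketch: image enlargement by bilinear interpolation, following the worked example above.
import numpy as np

def bilinear_resize(src: np.ndarray, sx: float, sy: float) -> np.ndarray:
    """Scale a single-channel image by factors sx (x direction) and sy (y direction)."""
    h, w = src.shape
    out_h, out_w = round(h * sy), round(w * sx)
    dst = np.zeros((out_h, out_w), dtype=np.float64)
    for yo in range(out_h):
        for xo in range(out_w):
            x = min(xo / sx, w - 1.0)                 # corresponding position in the original image
            y = min(yo / sy, h - 1.0)
            x1, y1 = min(int(x), w - 2), min(int(y), h - 2)
            x2, y2 = x1 + 1, y1 + 1
            f_r1 = (x2 - x) * src[y1, x1] + (x - x1) * src[y1, x2]   # interpolate along x at y1
            f_r2 = (x2 - x) * src[y2, x1] + (x - x1) * src[y2, x2]   # interpolate along x at y2
            dst[yo, xo] = (y2 - y) * f_r1 + (y - y1) * f_r2          # interpolate along y
    return dst.astype(src.dtype)

# e.g. a 549 x 365 (rows x cols) image scaled by sx=1.5, sy=1.9 yields a 1043 x 548 image.
```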
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. For details, reference may be made to the description of the related embodiments of the related processing, and details are not repeated herein.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
In the banknote identification method provided in one or more embodiments of this specification, the difference area is determined by comparing the gray value distribution of the image of the banknote to be identified with that of the sample banknote image. The crease position in the banknote is then located based on the difference area, which greatly improves the accuracy and efficiency of crease localization. The deformation area corresponding to the crease position is then stretched or compressed accordingly, rectified, and restored to obtain a rectified image. Recognizing the banknote based on the rectified image reduces the number of recognition attempts the ATM has to make and improves the recognition rate.
Based on the above-mentioned banknote recognition method, one or more embodiments of the present specification further provide a banknote recognition apparatus. The apparatus may include systems, software (applications), modules, components, servers, etc. that utilize the methods described in the embodiments of the present specification in conjunction with hardware implementations as necessary. Based on the same innovative conception, embodiments of the present specification provide an apparatus as described in the following embodiments. Since the implementation scheme of the apparatus for solving the problem is similar to that of the method, the specific implementation of the apparatus in the embodiment of the present specification may refer to the implementation of the foregoing method, and repeated details are not repeated. As used hereinafter, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated. Specifically, fig. 8 is a schematic diagram illustrating a module structure of an embodiment of the banknote recognition apparatus provided in the specification, and as shown in fig. 8, the apparatus is applied to a data processing device, and may include:
the initial image receiving module 102 may be configured to receive an initial image corresponding to a banknote to be recognized.
The sample image obtaining module 104 may be configured to obtain a sample banknote image corresponding to the banknote to be identified; wherein the sample banknote image comprises a banknote image meeting a preset identification requirement.
The difference image extraction module 106 may be configured to perform image pixel gray value subtraction processing on the initial image and the sample banknote image to obtain a difference image area.
The fold positioning module 108 may be configured to determine a fold position in the difference image area and a deformation area corresponding to the fold position according to the pixel value distribution in the difference image area.
The correction processing module 110 may be configured to perform image correction processing on the deformed region to obtain a corrected image corresponding to the banknote to be recognized, so as to recognize the banknote to be recognized based on the corrected image.
It should be noted that the above-described apparatus may also include other embodiments according to the description of the method embodiment. The specific implementation manner may refer to the description of the related method embodiment, and is not described in detail herein.
In the banknote recognition device provided in one or more embodiments of the present specification, the difference region is determined by comparing the gray value distribution of the image of the banknote to be recognized with that of the sample banknote. Then, the crease positions in the paper money to be identified are positioned based on the difference areas, so that the accuracy and the efficiency of crease position positioning can be greatly improved. And then, correspondingly stretching or compressing, correcting and restoring the deformation area corresponding to the crease position to obtain a corrected image. And then the paper money is identified based on the corrected image, so that the paper money identification frequency of the ATM can be reduced, and the identification rate is improved.
The present specification also provides a paper money discriminating apparatus which can be applied to a single paper money discriminating system, and can also be applied to various computer data processing systems. The system may be a single server, or may include a server cluster, a system (including a distributed system), software (applications), an actual operating device, a logic gate device, a quantum computer, etc. using one or more of the methods or one or more of the example devices of the present specification, in combination with a terminal device implementing hardware as necessary. In some embodiments, an apparatus may include at least one processor and a memory storing processor-executable instructions that, when executed by the processor, perform steps comprising a method as in any one or more of the embodiments described above.
The memory may include physical means for storing information, typically by digitizing the information for storage on a medium using electrical, magnetic or optical means. The storage medium may include: devices that store information using electrical energy, such as various types of memory, e.g., RAM, ROM, etc.; devices that store information using magnetic energy, such as hard disks, floppy disks, tapes, core memories, bubble memories, and usb disks; devices that store information optically, such as CDs or DVDs. Of course, there are other ways of storing media that can be read, such as quantum memory, graphene memory, and so forth.
It should be noted that the above-mentioned device may also include other implementation manners according to the description of the method or apparatus embodiment, and specific implementation manners may refer to the description of the related method embodiment, which is not described in detail herein.
It should be noted that the embodiments of the present disclosure are not limited to the cases where the data model/template is necessarily compliant with the standard data model/template or the description of the embodiments of the present disclosure. Certain industry standards, or implementations modified slightly from those described using custom modes or examples, may also achieve the same, equivalent, or similar, or other, contemplated implementations of the above-described examples. The embodiments using these modified or transformed data acquisition, storage, judgment, processing, etc. may still fall within the scope of the alternative embodiments of the present description.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment. In the description of the specification, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the specification. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
The above description is only an example of the present specification, and is not intended to limit the present specification. Various modifications and alterations to this description will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present specification should be included in the scope of the claims of the present specification.
Claims (10)
1. A method of banknote recognition, the method comprising:
receiving an initial image corresponding to the paper money to be identified;
acquiring a sample paper money image corresponding to the paper money to be identified; wherein the sample banknote image comprises a banknote image meeting a preset identification requirement;
carrying out image pixel gray value subtraction processing on the initial image and the sample paper money image to obtain a difference image area;
determining a crease position in the difference image area and a deformation area corresponding to the crease position according to the pixel value distribution in the difference image area;
and carrying out image correction processing on the deformation area to obtain a corrected image corresponding to the paper money to be recognized so as to recognize the paper money to be recognized based on the corrected image.
2. The method according to claim 1, wherein before determining the crease position in the difference image area and the deformation area corresponding to the crease position, the method further comprises:
comparing the image area of the difference image area with a preset damage area threshold value;
under the condition that the image area of the difference image area is smaller than a first preset damaged area threshold and larger than a second preset damaged area threshold, determining a crease position in the difference image area and a deformation area corresponding to the crease position according to the pixel value distribution in the difference image area; wherein the first predetermined damaged area threshold is greater than the second predetermined damaged area threshold.
3. The method according to claim 1, wherein the image rectification processing on the deformed region comprises:
respectively taking the image areas on two sides of the crease position in the deformation area as a first sub-deformation area and a second sub-deformation area;
intercepting undistorted image areas adjacent to the first sub-deformation area and the second sub-deformation area as a first reference image corresponding to the first sub-deformation area and a second reference image corresponding to the second sub-deformation area;
comparing the first sub-deformation region with the first reference image to obtain a first deformation proportion of the first sub-deformation region, and comparing the second sub-deformation region with the second reference image to obtain a second deformation proportion of the second sub-deformation region;
and correcting the first sub-deformation area according to the first deformation proportion, and correcting the second sub-deformation area according to the second deformation proportion.
4. The method according to claim 3, wherein the comparing the first sub-deformation region with the first reference image to obtain the deformation ratio of the first sub-deformation region comprises:
shadow images of the first sub-deformation area and the first reference image are respectively extracted;
respectively calculating the pixel mean values of the shadow images of the first sub-deformation area and the first reference image to obtain a first pixel mean value and a second pixel mean value;
and taking the ratio of the first pixel mean value to the second pixel mean value as the first deformation ratio of the first sub-deformation region.
5. The method according to claim 3, wherein the comparing the second sub-deformation region with the second reference image to obtain the deformation ratio of the second sub-deformation region comprises:
shadow images of the second sub-deformation area and the second reference image are respectively extracted;
respectively calculating the pixel mean values of the shadow images of the second sub-deformation area and the second reference image to obtain a third pixel mean value and a fourth pixel mean value;
and taking the ratio of the third pixel mean value to the fourth pixel mean value as a second deformation ratio of the second sub-deformation region.
6. The method according to claim 4, wherein the extracting shadow images of the first sub-deformation region and the first reference image comprises:
and extracting shadow images of the first sub-deformation region and the first reference image by using a convex hull algorithm.
7. The method according to claim 1, wherein the image rectification processing on the deformed region comprises:
and carrying out image correction processing on the deformation area by utilizing a bilinear interpolation method.
8. The method according to claim 1, wherein the obtaining of the sample banknote image corresponding to the banknote to be recognized comprises:
performing character recognition on the initial image of the paper money to be recognized to obtain the paper money category of the paper money to be recognized;
and acquiring a sample banknote image corresponding to the banknote type as a sample banknote image corresponding to the banknote to be identified.
9. A banknote recognition apparatus, characterized in that the apparatus comprises:
the initial image receiving module is used for receiving an initial image corresponding to the paper money to be identified;
the sample image acquisition module is used for acquiring a sample paper money image corresponding to the paper money to be identified; wherein the sample banknote image comprises a banknote image meeting a preset identification requirement;
the difference image extraction module is used for carrying out image pixel gray value subtraction processing on the initial image and the sample paper money image to obtain a difference image area;
the crease positioning module is used for determining crease positions in the difference image areas and deformation areas corresponding to the crease positions according to the pixel value distribution in the difference image areas;
and the correction processing module is used for carrying out image correction processing on the deformation area to obtain a corrected image corresponding to the paper money to be recognized so as to recognize the paper money to be recognized based on the corrected image.
10. A banknote recognition apparatus, comprising at least one processor and a memory for storing processor-executable instructions, which when executed by the processor, implement steps comprising the method of any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011115963.8A CN112233313B (en) | 2020-10-19 | 2020-10-19 | Paper money identification method, device and equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011115963.8A CN112233313B (en) | 2020-10-19 | 2020-10-19 | Paper money identification method, device and equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112233313A true CN112233313A (en) | 2021-01-15 |
CN112233313B CN112233313B (en) | 2022-06-21 |
Family
ID=74117954
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011115963.8A Active CN112233313B (en) | 2020-10-19 | 2020-10-19 | Paper money identification method, device and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112233313B (en) |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1379364A (en) * | 2001-03-29 | 2002-11-13 | 日本电气株式会社 | Graph comparing device and graph comparing method |
CN101458770A (en) * | 2008-12-24 | 2009-06-17 | 北京文通科技有限公司 | Character recognition method and system |
JP2010093780A (en) * | 2008-09-11 | 2010-04-22 | Ricoh Co Ltd | Imaging apparatus and imaging method |
CN102123770A (en) * | 2008-07-28 | 2011-07-13 | 环球娱乐株式会社 | Game system |
CN102737435A (en) * | 2011-04-11 | 2012-10-17 | 北京新岸线数字图像技术有限公司 | Paper money discrimination method and device |
CN103929576A (en) * | 2013-01-14 | 2014-07-16 | 三星电子株式会社 | Method For Compressing Image Data Collected By Camera And Electronic Device For Supporting The Method |
CN103997647A (en) * | 2014-03-05 | 2014-08-20 | 浙江悍马光电设备有限公司 | Wide-dynamic-range image compression method |
CN105551134A (en) * | 2015-12-17 | 2016-05-04 | 深圳怡化电脑股份有限公司 | Paper currency wrinkle identification method and system |
JP2016123043A (en) * | 2014-12-25 | 2016-07-07 | キヤノン電子株式会社 | Image reading apparatus, control method of the same, program, and image reading system |
CN106097375A (en) * | 2016-06-27 | 2016-11-09 | 湖南大学 | The folding line detection method of a kind of scanogram and device |
CN107093259A (en) * | 2017-03-09 | 2017-08-25 | 深圳怡化电脑股份有限公司 | A kind of recognition methods of forge or true or paper money and its device |
WO2018019041A1 (en) * | 2016-07-29 | 2018-02-01 | 广州广电运通金融电子股份有限公司 | Pasted paper money detection method and device |
CN107679436A (en) * | 2017-09-04 | 2018-02-09 | 华南理工大学 | A kind of image correcting method suitable for Bending Deformation Quick Response Code |
CN108091033A (en) * | 2017-12-26 | 2018-05-29 | 深圳怡化电脑股份有限公司 | A kind of recognition methods of bank note, device, terminal device and storage medium |
CN108320372A (en) * | 2018-01-22 | 2018-07-24 | 中南大学 | A kind of folding Paper Currency Identification |
CN110766615A (en) * | 2019-09-09 | 2020-02-07 | 北京美院帮网络科技有限公司 | Picture correction method, device, terminal and computer readable storage medium |
-
2020
- 2020-10-19 CN CN202011115963.8A patent/CN112233313B/en active Active
Patent Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1379364A (en) * | 2001-03-29 | 2002-11-13 | 日本电气株式会社 | Graph comparing device and graph comparing method |
CN102123770A (en) * | 2008-07-28 | 2011-07-13 | 环球娱乐株式会社 | Game system |
JP2010093780A (en) * | 2008-09-11 | 2010-04-22 | Ricoh Co Ltd | Imaging apparatus and imaging method |
CN101458770A (en) * | 2008-12-24 | 2009-06-17 | 北京文通科技有限公司 | Character recognition method and system |
CN102737435A (en) * | 2011-04-11 | 2012-10-17 | 北京新岸线数字图像技术有限公司 | Paper money discrimination method and device |
CN103929576A (en) * | 2013-01-14 | 2014-07-16 | 三星电子株式会社 | Method For Compressing Image Data Collected By Camera And Electronic Device For Supporting The Method |
CN103997647A (en) * | 2014-03-05 | 2014-08-20 | 浙江悍马光电设备有限公司 | Wide-dynamic-range image compression method |
JP2016123043A (en) * | 2014-12-25 | 2016-07-07 | キヤノン電子株式会社 | Image reading apparatus, control method of the same, program, and image reading system |
CN105551134A (en) * | 2015-12-17 | 2016-05-04 | 深圳怡化电脑股份有限公司 | Paper currency wrinkle identification method and system |
CN106097375A (en) * | 2016-06-27 | 2016-11-09 | 湖南大学 | The folding line detection method of a kind of scanogram and device |
WO2018019041A1 (en) * | 2016-07-29 | 2018-02-01 | 广州广电运通金融电子股份有限公司 | Pasted paper money detection method and device |
CN107093259A (en) * | 2017-03-09 | 2017-08-25 | 深圳怡化电脑股份有限公司 | A kind of recognition methods of forge or true or paper money and its device |
CN107679436A (en) * | 2017-09-04 | 2018-02-09 | 华南理工大学 | A kind of image correcting method suitable for Bending Deformation Quick Response Code |
CN108091033A (en) * | 2017-12-26 | 2018-05-29 | 深圳怡化电脑股份有限公司 | A kind of recognition methods of bank note, device, terminal device and storage medium |
CN108320372A (en) * | 2018-01-22 | 2018-07-24 | 中南大学 | A kind of folding Paper Currency Identification |
CN110766615A (en) * | 2019-09-09 | 2020-02-07 | 北京美院帮网络科技有限公司 | Picture correction method, device, terminal and computer readable storage medium |
Non-Patent Citations (2)
Title |
---|
Zhu Shuliang et al.: "Structured road image preprocessing technology", Modern Manufacturing Engineering *
Fan Lingtao et al.: "A document image segmentation method applied to JBIG2", Journal of Shanghai Jiao Tong University *
Also Published As
Publication number | Publication date |
---|---|
CN112233313B (en) | 2022-06-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7092554B2 (en) | Method for detecting eye and mouth positions in a digital image | |
Gatos et al. | Automatic table detection in document images | |
US9262677B2 (en) | Valuable file identification method and identification system, device thereof | |
US6327386B1 (en) | Key character extraction and lexicon reduction for cursive text recognition | |
US20070253040A1 (en) | Color scanning to enhance bitonal image | |
CN107491730A (en) | A kind of laboratory test report recognition methods based on image procossing | |
CN107944452A (en) | A kind of circular stamp character recognition method | |
CN104463195A (en) | Printing style digital recognition method based on template matching | |
KR101893679B1 (en) | Card number recognition method using deep learnig | |
Zeggeye et al. | Automatic recognition and counterfeit detection of Ethiopian paper currency | |
CN107195069A (en) | A kind of RMB crown word number automatic identifying method | |
CN113392856B (en) | Image forgery detection device and method | |
CN104899965A (en) | Multi-national paper money serial number identification method based on sorting machine | |
Lim et al. | Text segmentation in color images using tensor voting | |
CN112699867A (en) | Fixed format target image element information extraction method and system | |
Pawade et al. | Comparative study of different paper currency and coin currency recognition method | |
CN113688821A (en) | OCR character recognition method based on deep learning | |
CN106599923B (en) | Method and device for detecting seal anti-counterfeiting features | |
Alnowaini et al. | Yemeni paper currency detection system | |
CN107358718A (en) | A kind of crown word number identification method, device, equipment and storage medium | |
CN108460775A (en) | A kind of forge or true or paper money recognition methods and device | |
CN112233313B (en) | Paper money identification method, device and equipment | |
Tsai et al. | Recognition of Vehicle License Plates from a Video Sequence. | |
CN117765287A (en) | Image target extraction method combining LWR and density clustering | |
CN116580410A (en) | Bill number identification method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||