CN111984180A - Terminal screen reading method, device, equipment and computer readable storage medium - Google Patents

Terminal screen reading method, device, equipment and computer readable storage medium

Info

Publication number
CN111984180A
CN111984180A (application number CN202010883601.7A)
Authority
CN
China
Prior art keywords
terminal
touch
screen reading
area
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010883601.7A
Other languages
Chinese (zh)
Other versions
CN111984180B (en)
Inventor
华挺
晏斯
刁珍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
WeBank Co Ltd
Original Assignee
WeBank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by WeBank Co Ltd filed Critical WeBank Co Ltd
Priority to CN202010883601.7A priority Critical patent/CN111984180B/en
Publication of CN111984180A publication Critical patent/CN111984180A/en
Application granted granted Critical
Publication of CN111984180B publication Critical patent/CN111984180B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04886 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/165 Management of the audio stream, e.g. setting of volume, audio stream path

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a terminal screen reading method, device, equipment and computer readable storage medium. The terminal screen reading method comprises the following steps: when a first touch operation is received by a terminal in a screen reading mode, acquiring the initial touch area of the terminal in which the contact point of the first touch operation is located; detecting whether the initial touch area matches a preset target area; and if so, acquiring the header data and the operation guidance information of the terminal display page based on the function corresponding to the target area, converting the header data and the operation guidance information into voice data, and outputting the voice data. The intelligence of the terminal screen reading function is thereby improved.

Description

Terminal screen reading method, device, equipment and computer readable storage medium
Technical Field
The invention relates to the technical field of financial technology (Fintech), and in particular to a terminal screen reading method, device, equipment and computer readable storage medium.
Background
With the rapid popularization of computer devices such as smart phones and tablet computers, application programs of every kind keep emerging, and more and more users genuinely experience the convenience and enjoyment that feature-rich applications bring to daily life. However, computer devices such as smart phones are also needed by special groups in society: people with certain impairments, such as the visually impaired, operate such devices by listening to the sound they produce. The screen reading function or software in current terminals can only read out speech corresponding to the actual content of the opened page, so a visually impaired user can only learn the complete content of a page by listening to all of the screen reading speech before performing the next operation. This is inefficient; in other words, the intelligence of the terminal screen reading function is low.
Disclosure of Invention
The main purpose of the invention is to provide a terminal screen reading method, device, equipment and computer readable storage medium, aiming to solve the technical problem of how to improve the intelligence of the terminal screen reading function.
In order to achieve the above object, the present invention provides a terminal screen reading method, device, apparatus and computer readable storage medium, where the terminal screen reading method includes:
when a first touch operation input is received in a terminal in a screen reading mode, acquiring an initial touch area of the terminal at a contact point position of the first touch operation;
detecting whether the initial touch area is matched with a preset target area;
and if so, acquiring the header data and the operation guide information of the terminal display page based on the function corresponding to the target area, converting the header data and the operation guide information into voice data, and outputting the voice data.
Optionally, after the step of detecting whether the initial touch area is matched with a preset target area, the method includes:
and if not, acquiring the display content in the initial touch area, determining the composition content corresponding to the display content, converting the display content and the composition content into new voice data, and outputting the new voice data.
Optionally, the step of determining the constituent content corresponding to the display content includes:
acquiring all contents in a display page of the terminal, determining associated contents related to the display contents in all the contents, calculating the association degree of the associated contents and the display contents, and taking the associated contents with the association degree larger than the preset association degree as constituent contents.
Optionally, the step of obtaining the title data and the operation guidance information of the terminal display page based on the function corresponding to the target area includes:
acquiring a function corresponding to the target area based on a preset function area comparison table, acquiring page display data of the terminal display page based on the function, and acquiring header data in the page display data;
and determining operation guide information corresponding to the page display data in a plurality of preset original operation guide information.
Optionally, the step of converting the header data and the operation guidance information into voice data includes:
detecting whether a login account in the terminal has the authority of controlling the operation guide information;
if yes, converting the header data and the operation guide information into voice data based on a preset text-to-voice engine.
Optionally, after the step of acquiring, when a first touch operation is received by the terminal in the screen reading mode, the initial touch area of the terminal in which the contact point of the first touch operation is located, the method further includes:
detecting whether the terminal receives a second touch operation except the first touch operation;
if the terminal receives a second touch operation other than the first touch operation, determining whether the touch area of the terminal in which the contact point of the second touch operation is located is the same as the initial touch area;
and if so, executing the step of detecting whether the initial touch area is matched with a preset target area.
Optionally, after the step of determining whether the touch area of the terminal in which the contact point of the second touch operation is located is the same as the initial touch area, the method includes:
if they are different, detecting whether the priority of the initial touch area is higher than the priority of the touch area of the terminal in which the contact point of the second touch operation is located;
and if so, executing the step of detecting whether the initial touch area is matched with a preset target area.
In addition, in order to achieve the above object, the present invention further provides a terminal screen reading device, including:
the acquisition module is used for acquiring, when a first touch operation is received by the terminal in a screen reading mode, the initial touch area of the terminal in which the contact point of the first touch operation is located;
the detection module is used for detecting whether the initial touch area is matched with a preset target area;
and the output module is used for, if the initial touch area matches the target area, acquiring the header data and the operation guidance information of the terminal display page based on the function corresponding to the target area, and converting the header data and the operation guidance information into voice data for output.
In addition, in order to achieve the above object, the present invention further provides terminal screen reading equipment;
the terminal screen reading equipment comprises: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein:
the computer program, when executed by the processor, implements the steps of the terminal screen reading method as described above.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium;
the computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the terminal screen reading method as described above.
According to the method, when a first touch operation is received by the terminal in the screen reading mode, the initial touch area of the terminal in which the contact point of the first touch operation is located is acquired; whether the initial touch area matches a preset target area is detected; and if so, the header data and the operation guidance information of the terminal display page are acquired based on the function corresponding to the target area and converted into voice data for output. Because the terminal in the screen reading mode acquires both the header data and the operation guidance information of the display page and converts them into voice data for output when the first touch operation falls in an initial touch area that matches the target area, the prior-art situation is avoided in which the terminal converts only the header data of the page into voice data and a visually impaired user has to search for the operation information on their own, which makes operating the page information in terminal software unintelligent for users with visual impairment. The method therefore improves how intelligently visually impaired users can operate the page information in terminal software and improves the intelligence of the terminal screen reading function.
Drawings
FIG. 1 is a schematic diagram of a terminal screen reading device in a hardware operating environment according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a screen reading method of a terminal according to a first embodiment of the present invention;
fig. 3 is a schematic diagram of device modules of the terminal screen reading device according to the present invention.
The objects, features and advantages of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, fig. 1 is a schematic diagram of a terminal screen reading device in a hardware operating environment according to an embodiment of the present invention.
The terminal of the embodiment of the invention is a terminal screen reading device.
As shown in fig. 1, the terminal may include: a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005 and a communication bus 1002. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 1005 may alternatively be a storage device separate from the processor 1001.
Optionally, the terminal may further include a camera, a radio frequency (RF) circuit, sensors, an audio circuit, a WiFi module, and the like. The sensors may include, for example, light sensors and motion sensors. Specifically, the light sensors may include an ambient light sensor that adjusts the brightness of the display screen according to the brightness of the ambient light, and a proximity sensor that turns off the display screen and/or the backlight when the terminal device is moved to the ear. Of course, the terminal device may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer and an infrared sensor, which are not described here again.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a kind of computer storage medium, may include therein an operating system, a network communication module, a user interface module, and a terminal screen reading program.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to call the terminal screen reading program stored in the memory 1005, and perform the following operations:
when a first touch operation input is received in a terminal in a screen reading mode, acquiring an initial touch area of the terminal at a contact point position of the first touch operation;
detecting whether the initial touch area is matched with a preset target area;
and if so, acquiring the header data and the operation guide information of the terminal display page based on the function corresponding to the target area, and converting the header data and the operation guide information into voice data for output.
The invention provides a terminal screen reading method. Referring to fig. 2, in a first embodiment of the terminal screen reading method, the method comprises the following steps:
Step S10, when a first touch operation is received by the terminal in the screen reading mode, acquiring the initial touch area of the terminal in which the contact point of the first touch operation is located;
Visually impaired users currently face many difficulties when using mobile internet applications: unlike ordinary users, they cannot operate the terminal after seeing the page information, and can only operate it after listening to the page information by means of the terminal's screen reading function or screen reading software. Therefore, in this embodiment, in addition to the terminal's own screen reading function or the screen reading capability of software within an application, an asymmetric screen reading function is added so that visually impaired users can use applications in the terminal more conveniently and efficiently. That is, in this embodiment, descriptions or operation guidance text are attached to page titles and button names in the terminal, so that the screen reading function can read out more voice content than the page currently displays. For example, when the terminal receives an instruction input by the user and automatically enters a display page of an application in the screen reading mode, the asymmetric screen reading function of the terminal is actively triggered: the title information of the display page, such as 'xx page home page', and the operation guidance information, such as 'glide down to enter the shortcut search page, glide right to enter the payment code page, and shake to enter the voice assistant page', are read out.
Here, the visually impaired group refers to people whose visual function is impaired to some degree and who cannot achieve normal vision because of low visual acuity or a restricted visual field, which affects their daily life. Information accessibility means that information can be obtained and used by any person (healthy or disabled, young or old) equally, conveniently and without barriers under any condition. A screen reading function or screen reading software is a function or tool that helps visually impaired people use the internet, such as the VoiceOver function of iOS (the Apple operating system) or the screen reading function of Android. The terminal is a terminal with a touch function, such as a smart phone or a tablet computer.
Therefore, in this embodiment, it is first determined whether the terminal is in the screen reading mode. If it is, it is determined whether the terminal receives a first touch operation input by the user (that is, a touch operation triggered by the user touching the terminal display interface), and when the first touch operation is received, the touch area of the terminal in which the contact point of the first touch operation is located, namely the initial touch area, is acquired. The first touch operation may be received when the user lightly taps a certain point on the terminal display interface and the terminal receives the touch operation from that point, or when the user presses and holds a certain point (that is, the pressing time exceeds a preset duration) and the terminal receives the touch operation from that point. The contact point position is the position of the first touch operation on the terminal display interface.
In this embodiment, in order to implement the asymmetric screen reading function of the terminal, after a certain application is started and a certain display page is displayed, the display page may be divided into a preset number of display areas, and different functions are assigned to the display areas to establish a function area comparison table. It should be noted that, in this embodiment, each page of the terminal in the screen reading mode corresponds to one functional area comparison table. Therefore, when the first touch operation is detected, it is possible to determine which display area the touch point position of the first touch operation is in based on the display area divided in advance in the terminal, and to take it as the initial touch area.
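For illustration only, the following Kotlin sketch shows one way the pre-divided display areas and the per-page function area comparison table could be represented, and how the contact point of the first touch operation is resolved to its initial touch area; the area names, coordinates and function labels are assumptions of this sketch and are not taken from the patent.

```kotlin
// Minimal sketch of the pre-divided display areas and the per-page
// "function area comparison table"; the split, area ids and function
// names are illustrative assumptions only.
data class TouchPoint(val x: Float, val y: Float)

data class DisplayArea(
    val id: String,
    val left: Float, val top: Float, val right: Float, val bottom: Float
) {
    fun contains(p: TouchPoint) = p.x in left..right && p.y in top..bottom
}

// One comparison table per display page: area id -> function assigned to it.
val functionAreaTable = mapOf(
    "top-banner"   to "ASYMMETRIC_SCREEN_READING",   // the target area in this example
    "content-body" to "READ_DISPLAY_CONTENT",
    "bottom-bar"   to "READ_NAVIGATION"
)

val areas = listOf(
    DisplayArea("top-banner",   0f,    0f, 1080f,  300f),
    DisplayArea("content-body", 0f,  300f, 1080f, 1900f),
    DisplayArea("bottom-bar",   0f, 1900f, 1080f, 2160f)
)

// Step S10: resolve the initial touch area from the contact point of the first touch.
fun initialTouchArea(p: TouchPoint): DisplayArea? = areas.firstOrNull { it.contains(p) }

fun main() {
    val area = initialTouchArea(TouchPoint(540f, 120f))
    val function = area?.let { functionAreaTable[it.id] }
    println("initial touch area = ${area?.id}, function = $function")
}
```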
Step S20, detecting whether the initial touch area matches a preset target area;
after the initial touch area corresponding to the first touch operation is determined, in order to more accurately identify the intention of the user, it is further required to continuously detect whether other touch operations except for the first touch operation, that is, a second touch operation, exist in the terminal display page, and if not, an operation of subsequently detecting whether the initial touch area is matched with the target area may be performed. If the second touch operation exists, whether the touch point position of the second touch operation and the touch point position of the first touch operation are in the same display area needs to be detected, and if the touch point positions of the second touch operation and the touch point positions of the first touch operation are in the same display area, namely the initial touch area, the initial touch area can be directly detected. If the touch screen is not in the same display area, the priorities of the first touch operation and the second touch operation need to be determined first, and the priority determination may be performed according to the pressing time of the first touch operation and the second touch operation, that is, the longer the pressing time is, the higher the priority is, if the pressing time of the first touch operation is greater than the pressing time of the second touch operation, the priority of the first touch operation is higher than the priority of the second touch operation, and if the pressing time of the second touch operation is greater than the pressing time of the first touch operation, the priority of the second touch operation is higher than the priority of the first touch operation; or the priority is determined according to the pressing frequency, that is, if the pressing frequency of the first touch operation is greater than the pressing frequency of the second touch operation, the priority of the first touch operation is higher than that of the second touch operation, and if the pressing frequency of the second touch operation is greater than that of the first touch operation, the priority of the second touch operation is higher than that of the first touch operation.
Alternatively, the priority of the initial touch area corresponding to the first touch operation and the priority of the second touch area corresponding to the second touch operation (that is, the touch area in which the contact point of the second touch operation is located) may be determined; in other words, the user sets the priority of each touch area (that is, each display area) in advance, and the priorities of the initial touch area and the second touch area are then determined according to these preset priorities.
Only when the priority of the first touch operation is higher than that of the second touch operation, or the priority of the initial touch area is higher than that of the second touch area, is it detected whether the initial touch area matches the preset target area, and different operations are performed according to the detection result. If the priority of the second touch area is higher than that of the initial touch area, the detection operation is performed on the second touch area instead of the initial touch area, that is, it is detected whether the second touch area matches the target area. The target area may be a display area with the asymmetric screen reading function in the current display interface of the terminal.
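The priority rules described above (longer pressing time wins, higher pressing frequency wins, or a user-preset per-area priority) can be sketched as follows; the order in which the three rules are tried, and all type and field names, are illustrative assumptions rather than a fixed part of the method.

```kotlin
// Sketch of choosing between the first and the second touch operation.
data class TouchOperation(val areaId: String, val pressMillis: Long, val pressCount: Int)

// User-configured priority per display area (higher value = higher priority).
val areaPriority = mapOf("top-banner" to 3, "content-body" to 2, "bottom-bar" to 1)

fun preferredOperation(first: TouchOperation, second: TouchOperation): TouchOperation = when {
    // Longer press wins.
    first.pressMillis != second.pressMillis ->
        if (first.pressMillis > second.pressMillis) first else second
    // Higher press frequency wins.
    first.pressCount != second.pressCount ->
        if (first.pressCount > second.pressCount) first else second
    // Otherwise fall back to the preset per-area priority.
    else ->
        if ((areaPriority[first.areaId] ?: 0) >= (areaPriority[second.areaId] ?: 0)) first else second
}

fun main() {
    val first = TouchOperation("content-body", pressMillis = 800, pressCount = 1)
    val second = TouchOperation("bottom-bar", pressMillis = 300, pressCount = 1)
    // Only the preferred operation's touch area is then matched against the target area.
    println("area to match against the target area: " + preferredOperation(first, second).areaId)
}
```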
Step S30, if the initial touch area matches the target area, acquiring the title data and the operation guidance information of the terminal display page based on the function corresponding to the target area, and converting the title data and the operation guidance information into voice data for output.
When the initial touch area is determined to match the target area, the function of the target area, namely the asymmetric screen reading function, can be identified; the title data and the operation guidance information in the terminal display page are then acquired directly according to this function and converted into voice data for output by a preset text-to-speech engine, so that visually impaired users can directly obtain the relevant information of the current page of the terminal. For example, the title data and the operation guidance information are converted into voice data by TTS (Text To Speech), and the voice data is played through a speaker or earphone of the terminal.
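A minimal sketch of step S30, assuming a generic text-to-speech interface in place of a concrete engine, might compose the title data and the operation guidance information into a single utterance as follows; all names are hypothetical.

```kotlin
// Sketch of step S30. TextToSpeechEngine stands in for a real text-to-speech
// service; the interface and all names are assumptions of this sketch.
interface TextToSpeechEngine {
    fun speak(text: String)
}

// Stub engine used here only so the example runs; a real implementation would
// hand the text to the device's TTS service and play it through the speaker or earphone.
class ConsoleTts : TextToSpeechEngine {
    override fun speak(text: String) = println("SPEAK: $text")
}

fun announcePage(titleData: String, guidance: List<String>, tts: TextToSpeechEngine) {
    // Read the title first, then each piece of operation guidance information.
    val utterance = buildString {
        append(titleData)
        guidance.forEach { append(". ").append(it) }
    }
    tts.speak(utterance)
}

fun main() {
    announcePage(
        titleData = "xx page home page",
        guidance = listOf(
            "glide down to enter the shortcut search page",
            "glide right to enter the payment code page",
            "shake to enter the voice assistant page"
        ),
        tts = ConsoleTts()
    )
}
```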
In this embodiment, when a first touch operation is received by the terminal in the screen reading mode, the initial touch area of the terminal in which the contact point of the first touch operation is located is acquired; whether the initial touch area matches a preset target area is detected; and if so, the header data and the operation guidance information of the terminal display page are acquired based on the function corresponding to the target area and converted into voice data for output. Because the terminal in the screen reading mode acquires both the header data and the operation guidance information of the display page and converts them into voice data for output when the first touch operation falls in an initial touch area that matches the target area, the prior-art situation is avoided in which the terminal converts only the header data of the page into voice data and a visually impaired user has to search for the operation information on their own, and the intelligence with which visually impaired users can operate the page information in terminal software is improved.
Further, on the basis of the first embodiment of the present invention, a second embodiment of the terminal screen reading method of the present invention is provided. This embodiment follows step S20 of the first embodiment; after the step of detecting whether the initial touch area matches the preset target area, the method includes:
step a, if the display content in the initial touch area is not matched, the display content in the initial touch area is obtained, the composition content corresponding to the display content is determined, and the display content and the composition content are converted into new voice data to be output.
In this embodiment, when it is determined that the initial touch area does not match the target area, the display content in the initial touch area, for example 'Total assets: 100000 yuan', is acquired; the constituent content of that display content is then determined, for example the total assets of 100000 yuan may be composed of a balance and a revenue. After the constituent content is acquired, the display content and the constituent content are converted into voice data (namely, new voice data) by the preset text-to-speech engine and output, for example the speech 'Total assets 100000 yuan, where the balance portion is 99900 yuan and the revenue portion is 100 yuan'.
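The composition of the new voice data described above can be sketched as follows; the sentence template and field names are assumptions made for illustration, and the amounts are the example values from the preceding paragraph.

```kotlin
// Sketch of the non-matching branch: the touched display content is read
// together with its constituent parts.
data class Constituent(val name: String, val value: String)

fun composeUtterance(displayContent: String, parts: List<Constituent>): String {
    if (parts.isEmpty()) return displayContent
    val detail = parts.joinToString(" and ") { "the ${it.name} portion is ${it.value}" }
    return "$displayContent, where $detail"
}

fun main() {
    val text = composeUtterance(
        "Total assets 100000 yuan",
        listOf(Constituent("balance", "99900 yuan"), Constituent("revenue", "100 yuan"))
    )
    println(text)  // this string is then fed to the text-to-speech engine
}
```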
In this embodiment, when the initial touch area is determined not to match the target area, the display content in the initial touch area and its constituent content are determined, converted into new voice data and output, which makes the terminal screen reading more intelligent than the prior art, in which only the display content is read.
Further, the step of determining the constituent content corresponding to the display content includes:
and b, acquiring all contents in a display page of the terminal, determining associated contents related to the display contents in all the contents, calculating the association degree of the associated contents and the display contents, and taking the associated contents with the association degree larger than the preset association degree as the constituent contents.
In this embodiment, all page content of the current display page of the terminal, that is, all content (including the currently displayed content and non-displayed content), is first obtained. All page content is then analyzed, and all associated content related to the display content is determined. A preset association degree calculation model is used to calculate the association degree between each piece of associated content and the display content in the initial touch area; it is then detected in turn whether each association degree is greater than a preset association degree (a threshold preset by the user), and the associated content whose association degree is greater than the preset association degree is taken as the constituent content.
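The patent does not specify the association degree calculation model, so the following sketch substitutes a simple token-overlap (Jaccard) score purely as a stand-in; the 0.15 threshold is likewise an assumed value for the preset association degree.

```kotlin
// Stand-in association degree model: Jaccard similarity over word tokens.
fun associationDegree(a: String, b: String): Double {
    val ta = a.lowercase().split(Regex("\\W+")).filter { it.isNotBlank() }.toSet()
    val tb = b.lowercase().split(Regex("\\W+")).filter { it.isNotBlank() }.toSet()
    if (ta.isEmpty() || tb.isEmpty()) return 0.0
    return ta.intersect(tb).size.toDouble() / ta.union(tb).size
}

// Keep only the associated content whose degree exceeds the preset association degree.
fun constituentContent(
    displayContent: String,
    allContent: List<String>,
    presetDegree: Double = 0.15
): List<String> =
    allContent.filter { it != displayContent && associationDegree(displayContent, it) > presetDegree }

fun main() {
    val page = listOf(
        "Total assets 100000 yuan", "Balance 99900 yuan",
        "Revenue 100 yuan", "Customer service"
    )
    println(constituentContent("Total assets 100000 yuan", page))
}
```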
In this embodiment, the associated content related to the display content is determined in all the contents of the display page of the terminal, and the associated content with the association degree greater than the preset association degree is used as the constituent content, so that the accuracy of the acquired constituent content is guaranteed.
Further, the step of obtaining the title data and the operation guidance information of the terminal display page based on the function corresponding to the target area includes:
step c, acquiring a function corresponding to the target area based on a preset function area comparison table, acquiring page display data of the terminal display page based on the function, and acquiring header data in the page display data;
in this embodiment, a preset functional area comparison table (a functional area comparison table extracted and set in the terminal) needs to be obtained, that is, a functional area comparison table corresponding to a display page currently displayed by the terminal is determined in each preset functional area comparison table, a function corresponding to a target area (that is, a function of obtaining header data and operation guidance information of the display page of the terminal) is obtained in the functional area comparison table, page display data in the display page of the terminal is obtained according to the function, and header data in the page display data is obtained. After the page display data are obtained, whether the title data exist in the page display data or not is judged, if yes, the title data are extracted, if not, semantic analysis is carried out on the page display data, the title data are determined according to semantic analysis results, and then the title data are obtained.
And d, determining operation guide information corresponding to the page display data in a plurality of preset original operation guide information.
After the title data is obtained, operation guide information corresponding to the page display data is determined from a plurality of preset original operation guide information, and operation guide information associated with the target area is further determined.
In this embodiment, the function corresponding to the target area is obtained first, the header data is then obtained from the page display data according to that function, and the operation guidance information corresponding to the page display data is obtained from the plurality of preset original operation guidance information, which ensures the accuracy of the obtained header data and operation guidance information.
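As an illustration of steps c and d, the following sketch extracts the header data from the page display data, falling back to a simple heuristic where the method would apply semantic analysis, and then selects the operation guidance information keyed to the page from the preset original guidance; the element roles, the fallback rule and the page key are assumptions of this sketch.

```kotlin
// Sketch of steps c and d. The "semantic analysis" fallback is replaced by a
// simple heuristic (first short, non-button element) purely for illustration.
data class PageElement(val role: String, val text: String)  // role e.g. "title", "button", "text"

fun extractHeaderData(pageDisplayData: List<PageElement>): String {
    // Prefer an element explicitly marked as the page title.
    pageDisplayData.firstOrNull { it.role == "title" }?.let { return it.text }
    // Fallback standing in for semantic analysis of the page display data.
    return pageDisplayData.firstOrNull { it.role != "button" && it.text.length <= 20 }?.text
        ?: "untitled page"
}

// Step d: pick, among the preset original guidance entries, those keyed to this page.
fun guidanceForPage(pageKey: String, originalGuidance: Map<String, List<String>>): List<String> =
    originalGuidance[pageKey] ?: emptyList()

fun main() {
    val page = listOf(PageElement("text", "xx page home page"), PageElement("button", "Pay"))
    val guidance = guidanceForPage(
        "xx-home",
        mapOf("xx-home" to listOf("glide down to enter the shortcut search page"))
    )
    println(extractHeaderData(page) + " / " + guidance)
}
```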
Further, the step of converting the header data and the operation guidance information into voice data includes:
step e, detecting whether a login account in the terminal has the authority of controlling the operation guide information;
in this embodiment, after the header data and the operation guidance information of the terminal display page are acquired, it is further required to detect whether the login account in the terminal has the authority to control the operation guidance information, and only when the login account has the authority to control the operation guidance information, the subsequent voice conversion operation is performed, and when a plurality of operation guidance information exist and the login account only has the authority to control the preset number of operation guidance information in each operation guidance information, only the operation guidance information is subjected to voice conversion. That is, if there are a plurality of pieces of operation guidance information, target operation guidance information having a control authority for a login account is determined in each piece of operation guidance information, and the target operation guidance information is converted into voice data. The login account is an account which the terminal has currently logged in.
And f, if so, converting the header data and the operation guide information into voice data based on a preset text-to-voice engine.
When the login account is judged to have the authority to control the operation guidance information, the header data and the operation guidance information can be converted directly into voice data by the preset text-to-speech engine.
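The permission check described above could be sketched as follows; the account structure and the way control authority is recorded are assumptions of this sketch, not part of the claimed method.

```kotlin
// Sketch of the permission check before conversion: only the guidance items
// the logged-in account is allowed to control are converted to speech.
data class Account(val id: String, val controllableGuidance: Set<String>)

fun targetGuidance(account: Account, guidance: List<String>): List<String> =
    guidance.filter { it in account.controllableGuidance }

fun main() {
    val account = Account("user-1", setOf("glide down to enter the shortcut search page"))
    val allGuidance = listOf(
        "glide down to enter the shortcut search page",
        "glide right to enter the payment code page"
    )
    val speakable = targetGuidance(account, allGuidance)
    if (speakable.isNotEmpty()) {
        // The header data plus the permitted guidance is what goes to the TTS engine.
        println("convert to voice: xx page home page. " + speakable.joinToString(". "))
    }
}
```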
In the embodiment, the accuracy of the converted voice data is ensured by determining that the login account has the authority to control the operation guide information and converting the header data and the operation guide information into the voice data according to the text-to-voice engine.
Further, after the step of acquiring, when a first touch operation is received by the terminal in the screen reading mode, the initial touch area of the terminal in which the contact point of the first touch operation is located, the method further includes:
step h, detecting whether the terminal receives a second touch operation except the first touch operation;
in this embodiment, after the terminal receives the first touch operation, it is further required to detect whether the terminal receives other touch operations except the first touch operation, that is, the second touch operation, and if the second touch operation is not received, execute an operation corresponding to the first touch operation.
Step k, if the terminal receives a second touch operation other than the first touch operation, determining whether the touch area of the terminal in which the contact point of the second touch operation is located is the same as the initial touch area;
when it is found through the judgment that the terminal receives a second touch operation except the first touch operation, the second touch operation needs to be judged, the touch area of the touch point position of the second touch operation in the terminal, namely the second touch area, is determined, and whether the second touch area is the same as the initial touch area or not is determined, namely whether the first touch operation is the same as the second touch operation is determined.
And m, if the second touch area is the same as the initial touch area, executing the step of detecting whether the initial touch area matches the preset target area.
When the touch area of the terminal in which the contact point of the second touch operation is located (i.e., the second touch area) is found to be the same as the initial touch area, the operation of detecting whether the initial touch area matches the preset target area can be executed directly. However, if the second touch area and the initial touch area are not the same, the priorities of the second touch area and the initial touch area need to be determined, and which touch operation is handled is decided according to those priorities.
In this embodiment, when a second touch operation other than the first touch operation is detected and the touch area in which its contact point is located is determined to be the same as the initial touch area, the detection operation on the initial touch area continues to be performed, which ensures normal screen reading by the terminal.
Further, after the step of determining whether the touch area of the terminal in which the contact point of the second touch operation is located is the same as the initial touch area, the method includes:
Step n, if they are different, detecting whether the priority of the initial touch area is higher than the priority of the touch area of the terminal in which the contact point of the second touch operation is located;
When the touch area of the terminal in which the contact point of the second touch operation is located (i.e., the second touch area) is found to be different from the initial touch area, it is necessary to detect whether the priority of the initial touch area is higher than that of the second touch area.
And step x, if it is higher, executing the step of detecting whether the initial touch area matches the preset target area.
When the priority of the initial touch area is found to be higher than that of the second touch area, the step of detecting whether the initial touch area matches the preset target area is executed.
In this embodiment, when the touch area in which the contact point of the second touch operation is located differs from the initial touch area and the priority of the initial touch area is higher than that of the second touch area, the detection operation on the initial touch area continues to be executed, which ensures normal screen reading by the terminal.
In addition, referring to fig. 3, an embodiment of the present invention further provides a terminal screen reading device, where the terminal screen reading device includes:
an obtaining module a10, configured to obtain, when a first touch operation is received by the terminal in a screen reading mode, the initial touch area of the terminal in which the contact point of the first touch operation is located;
a detecting module a20, configured to detect whether the initial touch area matches a preset target area;
and an output module a30, configured to, if the initial touch area matches the target area, obtain the header data and the operation guidance information of the terminal display page based on the function corresponding to the target area, and convert the header data and the operation guidance information into voice data for output.
Optionally, the detecting module a20 is further configured to:
and if not, acquiring the display content in the initial touch area, determining the composition content corresponding to the display content, converting the display content and the composition content into new voice data, and outputting the new voice data.
Optionally, the detecting module a20 is further configured to:
acquiring all contents in a display page of the terminal, determining associated contents related to the display contents in all the contents, calculating the association degree of the associated contents and the display contents, and taking the associated contents with the association degree larger than the preset association degree as constituent contents.
Optionally, the output module a30 is further configured to:
acquiring a function corresponding to the target area based on a preset function area comparison table, acquiring page display data of the terminal display page based on the function, and acquiring header data in the page display data;
and determining operation guide information corresponding to the page display data in a plurality of preset original operation guide information.
Optionally, the output module a30 is further configured to:
detecting whether a login account in the terminal has the authority of controlling the operation guide information;
if yes, converting the header data and the operation guide information into voice data based on a preset text-to-voice engine.
Optionally, the obtaining module a10 is further configured to:
detecting whether the terminal receives a second touch operation except the first touch operation;
if the terminal receives a second touch operation other than the first touch operation, determining whether the touch area of the terminal in which the contact point of the second touch operation is located is the same as the initial touch area;
and if so, executing the step of detecting whether the initial touch area is matched with a preset target area.
Optionally, the obtaining module a10 is further configured to:
if they are different, detecting whether the priority of the initial touch area is higher than the priority of the touch area of the terminal in which the contact point of the second touch operation is located;
and if so, executing the step of detecting whether the initial touch area is matched with a preset target area.
The steps implemented by each functional module of the terminal screen reading device can refer to each embodiment of the terminal screen reading method of the present invention, and are not described herein again.
The invention also provides terminal screen reading equipment, which comprises: a memory, a processor, a communication bus, and a terminal screen reading program stored on the memory.
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is used for executing the terminal screen reading program so as to realize the steps of the terminal screen reading method in each embodiment.
The present invention also provides a computer-readable storage medium, which stores one or more programs, where the one or more programs are further executable by one or more processors for implementing the steps of the embodiments of the terminal screen reading method.
The specific implementation of the computer-readable storage medium of the present invention is substantially the same as the embodiments of the terminal screen reading method described above, and is not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A terminal screen reading method is characterized in that the terminal screen reading method comprises the following steps:
when a first touch operation input is received in a terminal in a screen reading mode, acquiring an initial touch area of the terminal at a contact point position of the first touch operation;
detecting whether the initial touch area is matched with a preset target area;
and if so, acquiring the header data and the operation guide information of the terminal display page based on the function corresponding to the target area, converting the header data and the operation guide information into voice data, and outputting the voice data.
2. The terminal screen reading method of claim 1, wherein after the step of detecting whether the initial touch area matches a preset target area, the method comprises:
and if not, acquiring the display content in the initial touch area, determining the composition content corresponding to the display content, converting the display content and the composition content into new voice data, and outputting the new voice data.
3. The screen reading method of the terminal according to claim 2, wherein the step of determining the constituent content corresponding to the display content comprises:
acquiring all contents in a display page of the terminal, determining associated contents related to the display contents in all the contents, calculating the association degree of the associated contents and the display contents, and taking the associated contents with the association degree larger than the preset association degree as constituent contents.
4. The terminal screen reading method according to claim 1, wherein the step of acquiring the title data and the operation guide information of the terminal display page based on the function corresponding to the target area comprises:
acquiring a function corresponding to the target area based on a preset function area comparison table, acquiring page display data of the terminal display page based on the function, and acquiring header data in the page display data;
and determining operation guide information corresponding to the page display data in a plurality of preset original operation guide information.
5. The terminal screen reading method of claim 1, wherein the step of converting the header data and the operation guidance information into voice data comprises:
detecting whether a login account in the terminal has the authority of controlling the operation guide information;
if yes, converting the header data and the operation guide information into voice data based on a preset text-to-voice engine.
6. The terminal screen reading method of any one of claims 1 to 5, wherein after the step of acquiring, when a first touch operation is received by the terminal in the screen reading mode, the initial touch area of the terminal in which the contact point of the first touch operation is located, the method further comprises:
detecting whether the terminal receives a second touch operation except the first touch operation;
if the terminal receives a second touch operation other than the first touch operation, determining whether the touch area of the terminal in which the contact point of the second touch operation is located is the same as the initial touch area;
and if so, executing the step of detecting whether the initial touch area is matched with a preset target area.
7. The terminal screen reading method of claim 6, wherein after the step of determining whether the touch area of the terminal in which the contact point of the second touch operation is located is the same as the initial touch area, the method further comprises:
if they are different, detecting whether the priority of the initial touch area is higher than the priority of the touch area of the terminal in which the contact point of the second touch operation is located;
and if so, executing the step of detecting whether the initial touch area is matched with a preset target area.
8. A terminal screen reading device is characterized in that the terminal screen reading device comprises:
the acquisition module is used for acquiring, when a first touch operation is received by the terminal in a screen reading mode, the initial touch area of the terminal in which the contact point of the first touch operation is located;
the detection module is used for detecting whether the initial touch area is matched with a preset target area;
and the output module is used for, if the initial touch area matches the target area, acquiring the header data and the operation guidance information of the terminal display page based on the function corresponding to the target area, and converting the header data and the operation guidance information into voice data for output.
9. A terminal screen reading device, characterized in that the terminal screen reading device comprises: a memory, a processor and a terminal screen reading program stored on the memory and executable on the processor, the terminal screen reading program, when executed by the processor, implementing the steps of the terminal screen reading method according to any of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a terminal screen reading program, which when executed by a processor implements the steps of the terminal screen reading method according to any one of claims 1 to 7.
CN202010883601.7A 2020-08-26 2020-08-26 Terminal screen reading method, device, equipment and computer readable storage medium Active CN111984180B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010883601.7A CN111984180B (en) 2020-08-26 2020-08-26 Terminal screen reading method, device, equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010883601.7A CN111984180B (en) 2020-08-26 2020-08-26 Terminal screen reading method, device, equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111984180A (en) 2020-11-24
CN111984180B CN111984180B (en) 2021-12-28

Family

ID=73440160

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010883601.7A Active CN111984180B (en) 2020-08-26 2020-08-26 Terminal screen reading method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111984180B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002032013A (en) * 2000-07-14 2002-01-31 Nippon Telesoft Co Ltd System for providing information for visually handicapped person by utilization of electric communication channel
US20150242096A1 (en) * 2003-04-18 2015-08-27 International Business Machines Corporation Enabling a visually impaired or blind person to have access to information printed on a physical document
CN105487744A (en) * 2014-09-23 2016-04-13 中兴通讯股份有限公司 Method and device for realizing interaction on accessible intelligent terminal
CN104461346A (en) * 2014-10-20 2015-03-25 天闻数媒科技(北京)有限公司 Method and device for visually impaired people to touch screen and intelligent touch screen mobile terminal
CN105788597A (en) * 2016-05-12 2016-07-20 深圳市联谛信息无障碍有限责任公司 Voice recognition-based screen reading application instruction input method and device
CN110825306A (en) * 2019-10-29 2020-02-21 深圳市证通电子股份有限公司 Braille input method, device, terminal and readable storage medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114979366A (en) * 2021-02-24 2022-08-30 腾讯科技(深圳)有限公司 Control prompting method, device, terminal and storage medium
CN114979366B (en) * 2021-02-24 2023-10-13 腾讯科技(深圳)有限公司 Control prompting method, device, terminal and storage medium
CN112925603A (en) * 2021-05-11 2021-06-08 浙江口碑网络技术有限公司 Page information providing method and device, computer equipment and readable storage medium
CN113867544A (en) * 2021-09-28 2021-12-31 深圳前海微众银行股份有限公司 Secure keyboard input method, device, equipment and computer readable storage medium
CN113885768A (en) * 2021-10-19 2022-01-04 清华大学 Auxiliary reading control method and electronic equipment

Also Published As

Publication number Publication date
CN111984180B (en) 2021-12-28

Similar Documents

Publication Publication Date Title
CN111984180B (en) Terminal screen reading method, device, equipment and computer readable storage medium
US12118999B2 (en) Reducing the need for manual start/end-pointing and trigger phrases
KR20210034572A (en) Message Service Providing Device and Method Providing Content thereof
KR101758302B1 (en) Voice recognition grammar selection based on context
US9900427B2 (en) Electronic device and method for displaying call information thereof
CN116312526A (en) Natural assistant interaction
CN106251869B (en) Voice processing method and device
KR102232929B1 (en) Message Service Providing Device and Method Providing Content thereof
CN108538291A (en) Sound control method, terminal device, cloud server and system
US10586528B2 (en) Domain-specific speech recognizers in a digital medium environment
CN110992989B (en) Voice acquisition method and device and computer readable storage medium
CN110989847B (en) Information recommendation method, device, terminal equipment and storage medium
CN110929176B (en) Information recommendation method and device and electronic equipment
WO2015043442A1 (en) Method, device and mobile terminal for text-to-speech processing
CN110827825A (en) Punctuation prediction method, system, terminal and storage medium for speech recognition text
CN108877780B (en) Voice question searching method and family education equipment
KR20210032875A (en) Voice information processing method, apparatus, program and storage medium
CN109215640B (en) Speech recognition method, intelligent terminal and computer readable storage medium
CN110944056A (en) Interaction method, mobile terminal and readable storage medium
CN108897508B (en) Voice question searching method based on split screen display and family education equipment
CN108491471B (en) Text information processing method and mobile terminal
CN110765326A (en) Recommendation method, device, equipment and computer readable storage medium
CN111352667A (en) Information pushing method and electronic equipment
CN111145604A (en) Method and device for recognizing picture books and computer readable storage medium
CN111638788A (en) Learning data output method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant