HK1208540A1 - Device, method, and graphical user interface for selecting user interface objects - Google Patents

Device, method, and graphical user interface for selecting user interface objects

Info

Publication number
HK1208540A1
Authority
HK
Hong Kong
Prior art keywords
user interface
contact
interface object
detecting
touch
Prior art date
Application number
HK15108890.7A
Other languages
Chinese (zh)
Other versions
HK1208540B (en)
Inventor
J.T. Bernstein
J. Missig
A.E. Cieplinski
M.I. Brown
M-L. Khoe
N. Zambetti
B.M. Victor
Original Assignee
Apple Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apple Inc.
Publication of HK1208540A1
Publication of HK1208540B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842 Selection of displayed objects or displayed text elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 1/00 Details not covered by groups G06F 3/00 - G06F 13/00 and G06F 21/00
    • G06F 1/26 Power supply means, e.g. regulation thereof
    • G06F 1/32 Means for saving power
    • G06F 1/3203 Power management, i.e. event-based initiation of a power-saving mode
    • G06F 1/3234 Power saving characterised by the action undertaken
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/0412 Digitisers structurally integrated in a display
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/041 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F 3/0414 Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means using force sensing means to determine a position
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/0486 Drag-and-drop
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04886 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/041 Indexing scheme relating to G06F 3/041 - G06F 3/045
    • G06F 2203/04105 Pressure sensors for measuring the pressure or force exerted on the touch surface without providing the touch position
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/041 Indexing scheme relating to G06F 3/041 - G06F 3/045
    • G06F 2203/04106 Multi-sensing digitiser, i.e. digitiser using at least two different sensing technologies simultaneously or alternatively, e.g. for detecting pen and finger, for saving power or for improving position detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00 Indexing scheme relating to G06F 3/00 - G06F 3/048
    • G06F 2203/048 Indexing scheme relating to G06F 3/048
    • G06F 2203/04808 Several contacts: gestures triggering a specific function, e.g. scrolling, zooming, right-click, when the user establishes several contacts with the surface simultaneously, e.g. using several fingers or a combination of fingers and pen

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)
  • Input From Keyboards Or The Like (AREA)

Abstract

A method, comprising: at an electronic device with a touch-sensitive surface and a display, wherein the device includes one or more sensors to detect intensities of contacts with the touch-sensitive surface: displaying a virtual keyboard on the display; detecting a contact on the touch-sensitive surface; while continuously detecting the contact on the touch-sensitive surface: detecting one or more movements of the contact on the touch-sensitive surface that correspond to movement of a focus selector over the virtual keyboard; and for each respective key of a plurality of keys of the virtual keyboard, while detecting the focus selector over a respective key of the plurality of keys: in accordance with a determination that character-output criteria for outputting a character that corresponds to the respective key have been met, wherein the character-output criteria include that a respective intensity of the contact is above a first intensity threshold while detecting the focus selector over the respective key, outputting the character; and in accordance with a determination that the character-output criteria have not been met, forgoing outputting the character that corresponds to the respective key.

Description

Apparatus, method and graphical user interface for selecting user interface objects
Related patent application
This application claims priority from the following provisional patent applications: U.S. provisional patent application serial No. 61/778,413, entitled "Device, Method, and Graphical User Interface for Selecting User Interface Objects," filed March 13, 2013; U.S. provisional patent application serial No. 61/747,278, entitled "Device, Method, and Graphical User Interface for Manipulating User Interface Objects with Visual and/or Haptic Feedback," filed December 29, 2012; and U.S. provisional patent application serial No. 61/688,227, entitled "Device, Method, and Graphical User Interface for Manipulating User Interface Objects with Visual and/or Haptic Feedback," filed May 9, 2012, which are hereby incorporated by reference in their entirety.
The present application is also related to the following provisional patent applications: U.S. provisional patent application serial No. 61/778,092, entitled "Device, Method, and Graphical User Interface for Selecting Object within a Group of Objects," filed March 12, 2013; U.S. provisional patent application serial No. 61/778,125, entitled "Device, Method, and Graphical User Interface for Navigating User Interface Hierarchies," filed March 12, 2013; U.S. provisional patent application serial No. 61/778,156, entitled "Device, Method, and Graphical User Interface for Manipulating Framed Graphical Objects," filed March 12, 2013; U.S. provisional patent application serial No. 61/778,179, entitled "Device, Method, and Graphical User Interface for Scrolling Nested Regions," filed March 12, 2013; U.S. provisional patent application serial No. 61/778,171, entitled "Device, Method, and Graphical User Interface for Displaying Additional Information in Response to a User Contact," filed March 12, 2013; U.S. provisional patent application serial No. 61/778,191, entitled "Device, Method, and Graphical User Interface for Displaying User Interface Objects Corresponding to an Application," filed March 12, 2013; U.S. provisional patent application serial No. 61/778,211, entitled "Device, Method, and Graphical User Interface for Facilitating User Interaction with Controls in a User Interface," filed March 12, 2013; U.S. provisional patent application serial No. 61/778,239, entitled "Device, Method, and Graphical User Interface for Forgoing Generation of Tactile Output for a Multi-Contact Gesture," filed March 12, 2013; U.S. provisional patent application serial No. 61/778,284, entitled "Device, Method, and Graphical User Interface for Providing Tactile Feedback for Operations Performed in a User Interface," filed March 12, 2013; U.S. provisional patent application serial No. 61/778,287, entitled "Device, Method, and Graphical User Interface for Providing Feedback for Changing Activation States of a User Interface Object," filed March 12, 2013; U.S. provisional patent application serial No. 61/778,363, entitled "Device, Method, and Graphical User Interface for Transitioning between Touch Input to Display Output Relationships," filed March 12, 2013; U.S. provisional patent application serial No. 61/778,367, entitled "Device, Method, and Graphical User Interface for Moving a User Interface Object Based on an Intensity of a Press Input," filed March 12, 2013; U.S. provisional patent application serial No. 61/778,265, entitled "Device, Method, and Graphical User Interface for Transitioning between Display States in Response to a Gesture," filed March 12, 2013; U.S. provisional patent application serial No. 61/778,373, entitled "Device, Method, and Graphical User Interface for Managing Activation of a Control Based on Contact Intensity," filed March 12, 2013; U.S. provisional patent application serial No. 61/778,412, entitled "Device, Method, and Graphical User Interface for Displaying Content Associated with a Corresponding Affordance," filed March 13, 2013; U.S. provisional patent application serial No. 61/778,414, entitled "Device, Method, and Graphical User Interface for Moving and Dropping a User Interface Object," filed March 13, 2013; U.S. provisional patent application serial No. 61/778,416, entitled "Device, Method, and Graphical User Interface for Determining Whether to Scroll or Select Content," filed March 13, 2013; and U.S. provisional patent application serial No. 61/778,418, entitled "Device, Method, and Graphical User Interface for Switching between User Interfaces," filed March 13, 2013, which are hereby incorporated by reference in their entirety.
The present application is also related to the following provisional patent applications: U.S. provisional patent application serial No. 61/645,033, entitled "Adaptive Haptic Feedback for Electronic Devices," filed May 9, 2012; U.S. provisional patent application serial No. 61/665,603, entitled "Adaptive Haptic Feedback for Electronic Devices," filed June 28, 2012; and U.S. provisional patent application serial No. 61/681,098, entitled "Adaptive Haptic Feedback for Electronic Devices," filed August 8, 2012, which are incorporated herein by reference in their entirety.
Technical Field
The present disclosure relates generally to electronic devices with touch-sensitive surfaces, including, but not limited to, electronic devices with touch-sensitive surfaces that detect inputs for manipulating user interfaces.
Background
The use of touch sensitive surfaces as input devices for computers and other electronic computing devices has increased significantly in recent years. Exemplary touch sensitive surfaces include touch pads and touch screen displays. Such surfaces are widely used for manipulating user interface objects on a display.
Exemplary manipulations include adjusting the position and/or size of one or more user interface objects, activating a button, opening a file or application represented by a user interface object, associating metadata with one or more user interface objects, or otherwise manipulating a user interface. Exemplary user interface objects include digital images, videos, text, icons, and control elements such as buttons and other graphics. In some cases, a user will need to perform such manipulations on user interface objects in a file management program (e.g., Finder from Apple Inc. of Cupertino, California), an image management application (e.g., Aperture or iPhoto from Apple Inc.), a digital content (e.g., video and music) management application (e.g., iTunes from Apple Inc.), a drawing application, a presentation application (e.g., Keynote from Apple Inc.), a word processing application (e.g., Pages from Apple Inc.), a website creation application (e.g., iWeb from Apple Inc.), a disk authoring application (e.g., iDVD from Apple Inc.), or a spreadsheet application (e.g., Numbers from Apple Inc.).
However, existing methods for performing these manipulations are cumbersome and inefficient. Furthermore, the existing methods take longer than necessary and waste energy. This latter consideration is particularly important in battery-powered devices.
Disclosure of Invention
Accordingly, there is a need for an electronic device having a faster, more efficient method and interface for manipulating a user interface. Such methods and interfaces optionally complement or replace conventional methods for manipulating user interfaces. Such methods and interfaces reduce the cognitive burden placed on the user and produce a more efficient human-machine interface. For battery-driven devices, such methods and interfaces conserve power and increase the time between battery charges.
The above-described deficiencies and other problems associated with user interfaces for electronic devices having touch-sensitive surfaces are reduced or eliminated by the disclosed devices. In some embodiments, the device is a desktop computer. In some embodiments, the device is portable (e.g., a laptop, tablet, or handheld device). In some embodiments, the device has a touchpad. In some embodiments, the device has a touch-sensitive display (also known as a "touchscreen" or "touchscreen display"). In some embodiments, an apparatus has a Graphical User Interface (GUI), one or more processors, memory, and one or more modules, programs, or sets of instructions stored in the memory for performing a plurality of functions. In some embodiments, the user interacts with the GUI primarily through finger contacts and gestures on the touch-sensitive surface. In some embodiments, these functions optionally include image editing, drawing, rendering, word processing, web page creation, disc editing, spreadsheet making, game playing, telephone answering, video conferencing, e-mailing, instant messaging, fitness support, digital photography, digital video filming, web browsing, digital music playing, and/or digital video playing. Executable instructions for performing these functions are optionally included in a non-transitory computer-readable storage medium or other computer program product configured for execution by one or more processors.
There is a need for an electronic device having a faster, more efficient method and interface for determining whether to select a user interface object or forgo selecting a user interface object. Such methods and interfaces may complement or replace conventional methods for selecting user interface objects. Such methods and interfaces reduce the cognitive burden placed on the user and produce a more efficient human-machine interface. For battery-driven devices, such methods and interfaces conserve power and increase the time between battery charges.
In accordance with some embodiments, a method is performed on an electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface. The method comprises the following steps: displaying a first user interface object at a first location on a display; detecting a contact with the touch-sensitive surface; and detecting a first movement of the contact on the touch-sensitive surface that corresponds to movement of the focus selector toward the first position. The method further comprises the following steps: in response to detecting the first movement of the contact, the focus selector is moved from a position away from the first user interface object to a first position, and an intensity of the contact on the touch-sensitive surface is determined while the focus selector is in the first position. The method further comprises the following steps: after detecting the first movement of the contact, a second movement of the contact on the touch-sensitive surface is detected that corresponds to movement of the focus selector away from the first position. The method further comprises the following steps: in response to detecting the second movement of the contact, in accordance with a determination that the contact satisfies selection criteria for the first user interface object, moving the focus selector and the first user interface object away from the first location in accordance with the second movement of the contact, wherein the selection criteria for the first user interface object includes the contact reaching a predefined intensity threshold while the focus selector is in the first location; and in accordance with a determination that the contact does not satisfy the selection criteria for the first user interface object, moving the focus selector without moving the first user interface object in accordance with a second movement of the contact.
According to some embodiments, an electronic device comprises: a display unit configured to display a first user interface object at a first location on the display unit; a touch-sensitive surface unit configured to detect a contact; one or more sensor units configured to detect intensity of contacts with the touch-sensitive surface unit; and a processing unit coupled to the display unit, one or more sensor units, and touch-sensitive surface unit. The processing unit is configured to: a first movement of the contact on the touch-sensitive surface unit corresponding to movement of the focus selector toward the first position is detected. In response to detecting the first movement of the contact, the processing unit is configured to: the method further includes moving the focus selector from a position away from the first user interface object to a first position, and determining an intensity of the contact on the touch-sensitive surface while the focus selector is in the first position. The processing unit is further configured to: after detecting the first movement of the contact, a second movement of the contact on the touch-sensitive surface unit is detected that corresponds to movement of the focus selector away from the first position. The processing unit is further configured to: in response to detecting the second movement of the contact, in accordance with a determination that the contact satisfies selection criteria for the first user interface object, moving the focus selector and the first user interface object away from the first location in accordance with the second movement of the contact, wherein the selection criteria for the first user interface object includes the contact reaching a predefined intensity threshold while the focus selector is in the first location; and, in accordance with a determination that the contact does not satisfy the selection criteria for the first user interface object, moving the focus selector without moving the first user interface object in accordance with a second movement of the contact.
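To make the drag-selection criteria above concrete, here is a minimal Swift sketch of the decision made on the second movement. It is an illustration only, not the disclosed implementation: the types, the function name, and the normalized threshold value are all hypothetical.

```swift
struct Point { var x: Double; var y: Double }

struct ObjectState {
    var position: Point      // the first user interface object's location
    var isSelected = false
}

// Hypothetical, normalized intensity units; a real threshold is device-defined.
let predefinedIntensityThreshold = 0.5

/// Handles the second movement (away from the first location).
/// `intensityAtObject` is the contact intensity sampled while the
/// focus selector was at the object's first location.
func handleSecondMovement(object: inout ObjectState,
                          focusSelector: inout Point,
                          delta: Point,
                          intensityAtObject: Double) {
    // The focus selector always follows the second movement.
    focusSelector.x += delta.x
    focusSelector.y += delta.y
    // The object moves along only if the selection criteria were met:
    // the contact reached the predefined threshold at the first location.
    if intensityAtObject >= predefinedIntensityThreshold {
        object.isSelected = true
        object.position.x += delta.x
        object.position.y += delta.y
    }
}
```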
Accordingly, electronic devices having a display, a touch-sensitive surface, and one or more sensors for detecting intensity of contacts with the touch-sensitive surface are provided with faster, more efficient methods and interfaces for determining whether to select a user interface object or forgo selecting a user interface object, thereby increasing the effectiveness, efficiency, and user satisfaction of such devices. Such methods and interfaces may complement or replace conventional methods for selecting user interface objects.
Accordingly, there is a need for an electronic device having a faster, more efficient method and interface for selecting user interface objects. Such methods and interfaces may complement or replace conventional methods for selecting user interface objects. Such methods and interfaces reduce the cognitive burden placed on the user and produce a more efficient human-machine interface. For battery-driven devices, such methods and interfaces conserve power and increase the time between battery charges.
In accordance with some embodiments, a method is performed on an electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface. The method includes displaying a plurality of user interface objects on a display, the plurality of user interface objects including a first user interface object and a second user interface object. The method also includes detecting a first press input corresponding to an increase in intensity of a contact on the touch-sensitive surface above a first intensity threshold while the focus selector is positioned over the first user interface object. The method further comprises the following steps: in response to detecting the first press input, the method includes selecting a first user interface object; and after selecting the first user interface object, detecting a second press input corresponding to an increase in intensity of a contact on the touch-sensitive surface above a second intensity threshold while the focus selector is positioned over the second user interface object. The method further comprises the following steps: in response to detecting the second press input, the second user interface object is selected and the first user interface object remains selected.
According to some embodiments, an electronic device comprises: a display unit configured to display a plurality of user interface objects including a first user interface object and a second user interface object; a touch-sensitive surface unit configured to detect a gesture comprising a press input from a contact; one or more sensor units configured to detect intensity of contacts with the touch-sensitive surface unit; and a processing unit coupled to the display unit, the touch-sensitive surface unit, and the one or more sensor units. The processing unit is configured to detect a first press input corresponding to an increase in intensity of a contact on the touch-sensitive surface unit above a first intensity threshold while the focus selector is positioned over the first user interface object. In response to detecting the first press input, the processing unit is configured to select the first user interface object; and, after selecting the first user interface object, to detect a second press input corresponding to an increase in intensity of a contact on the touch-sensitive surface unit above a second intensity threshold while the focus selector is located over the second user interface object. In response to detecting the second press input, the processing unit is configured to select the second user interface object and maintain selection of the first user interface object.
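The multi-selection behavior described above can be pictured in a few lines of Swift. The sketch below is a hedged illustration under assumed, normalized intensity thresholds; the class and property names are invented for this example.

```swift
struct PressInput {
    var objectID: Int       // object under the focus selector
    var intensity: Double   // contact intensity on the surface
}

final class MultiSelector {
    private(set) var selectedIDs: Set<Int> = []
    let firstThreshold: Double
    let secondThreshold: Double

    init(firstThreshold: Double = 0.5, secondThreshold: Double = 0.5) {
        self.firstThreshold = firstThreshold
        self.secondThreshold = secondThreshold
    }

    /// Selects the pressed object while keeping earlier selections,
    /// mirroring the "remains selected" behavior described above.
    func handle(_ press: PressInput) {
        let threshold = selectedIDs.isEmpty ? firstThreshold : secondThreshold
        if press.intensity > threshold {
            selectedIDs.insert(press.objectID)   // does not clear the set
        }
    }
}

// Usage: a press on object 1, then on object 2, leaves both selected.
let selector = MultiSelector()
selector.handle(PressInput(objectID: 1, intensity: 0.7))
selector.handle(PressInput(objectID: 2, intensity: 0.8))
// selector.selectedIDs == [1, 2]
```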
Accordingly, electronic devices having a display, a touch-sensitive surface, and one or more sensors for detecting intensity of contacts with the touch-sensitive surface are provided with faster, more efficient methods and interfaces for selecting user interface objects, thereby increasing the effectiveness, efficiency, and user satisfaction of such devices. Such methods and interfaces may complement or replace conventional methods for selecting user interface objects.
There is a need for an electronic device with faster, more efficient methods and interfaces for typing characters on a virtual keyboard while detecting continuous contacts on a touch-sensitive surface. Such methods and interfaces may complement or replace conventional methods for typing characters on a virtual keyboard. Such methods and interfaces reduce the cognitive burden placed on the user and produce a more efficient human-machine interface. For battery-driven devices, such methods and interfaces conserve power and increase the time between battery charges.
In accordance with some embodiments, a method is performed on an electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface. The method comprises the following steps: displaying a virtual keyboard on the display and detecting a contact on the touch-sensitive surface. The method further comprises the following steps: while the contact is continuously detected on the touch-sensitive surface, detecting one or more movements of the contact on the touch-sensitive surface, the one or more movements corresponding to movement of the focus selector over the virtual keyboard. The method further comprises the following steps: for each respective key of a plurality of keys of the virtual keyboard, while detecting the focus selector over the respective key of the plurality of keys, in accordance with a determination that character-output criteria for outputting the character corresponding to the respective key have been met, outputting the character, wherein the character-output criteria include that a respective intensity of the contact is above a first intensity threshold while the focus selector is detected over the respective key. The method further comprises the following steps: in accordance with a determination that the character-output criteria have not been met, forgoing outputting the character corresponding to the respective key.
According to some embodiments, an electronic device comprises: a display unit configured to display a virtual keyboard; a touch-sensitive surface unit configured to receive contacts; one or more sensor units configured to detect intensity of contacts with the touch-sensitive surface unit; and a processing unit coupled to the display unit, the touch-sensitive surface unit, and the one or more sensor units. The processing unit is configured to, while continuously detecting a contact on the touch-sensitive surface unit: detect one or more movements of the contact on the touch-sensitive surface unit, the one or more movements corresponding to movement of the focus selector over the virtual keyboard; and, for each respective key of a plurality of keys of the virtual keyboard, while detecting the focus selector over the respective key of the plurality of keys: in accordance with a determination that character-output criteria for outputting a character corresponding to the respective key have been met, output the character, wherein the character-output criteria include that a respective intensity of the contact is above a first intensity threshold while the focus selector is detected over the respective key; and, in accordance with a determination that the character-output criteria have not been met, forgo outputting the character corresponding to the respective key.
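As a concrete reading of the character-output criteria, the following Swift sketch reduces each visit of the focus selector to a key to the peak contact intensity observed during that visit; the types and the threshold value are assumptions made for illustration, not the disclosed implementation.

```swift
struct KeyVisit {
    var key: Character        // key the focus selector passed over
    var peakIntensity: Double // highest contact intensity during the visit
}

/// Outputs a character for each key visit that meets the
/// character-output criteria; forgoes output otherwise.
func charactersTyped(visits: [KeyVisit],
                     firstIntensityThreshold: Double = 0.5) -> String {
    var output = ""
    for visit in visits {
        if visit.peakIntensity > firstIntensityThreshold {
            output.append(visit.key)   // criteria met: output the character
        }
        // criteria not met: forgo outputting the character
    }
    return output
}

// A single continuous swipe over h-e-l-p with a light pass over "l":
let typed = charactersTyped(visits: [
    KeyVisit(key: "h", peakIntensity: 0.7),
    KeyVisit(key: "e", peakIntensity: 0.9),
    KeyVisit(key: "l", peakIntensity: 0.2),   // too light: forgone
    KeyVisit(key: "p", peakIntensity: 0.6)
])
// typed == "hep"
```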
Accordingly, electronic devices having a display, a touch-sensitive surface, and one or more sensors for detecting intensity of contacts with the touch-sensitive surface are provided with faster, more efficient methods and interfaces for typing characters on a virtual keyboard, thereby increasing the effectiveness, efficiency, and user satisfaction of such devices. Such methods and interfaces may complement or replace conventional methods for typing characters on a virtual keyboard.
In accordance with some embodiments, an electronic device includes a display, a touch-sensitive surface, optionally one or more sensors to detect intensity of contacts with the touch-sensitive surface, one or more processors, memory, and one or more programs; the one or more programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs include instructions for performing operations according to any of the methods mentioned in paragraph [0040]. In accordance with some embodiments, a graphical user interface on an electronic device with a display, a touch-sensitive surface, optionally one or more sensors to detect intensity of contacts with the touch-sensitive surface, a memory, and one or more processors to execute one or more programs stored in the memory includes one or more of the elements displayed in any of the methods mentioned in paragraph [0040], which are updated in response to inputs, as described in any of the methods mentioned in paragraph [0040]. In accordance with some embodiments, a computer-readable storage medium has stored therein instructions that, when executed by an electronic device with a display, a touch-sensitive surface, and optionally one or more sensors to detect intensity of contacts with the touch-sensitive surface, cause the device to perform operations according to any of the methods mentioned in paragraph [0040]. In accordance with some embodiments, an electronic device includes: a display, a touch-sensitive surface, and optionally one or more sensors to detect intensity of contacts with the touch-sensitive surface; and means for performing operations according to any of the methods mentioned in paragraph [0040]. In accordance with some embodiments, an information processing apparatus, for use in an electronic device with a display and a touch-sensitive surface, and optionally one or more sensors to detect intensity of contacts with the touch-sensitive surface, includes means for performing operations according to any of the methods mentioned in paragraph [0040].
Drawings
For a better understanding of the various described embodiments of the invention, reference should be made to the following description of the embodiments taken in conjunction with the following drawings in which like reference numerals represent corresponding parts throughout the figures.
FIG. 1A is a block diagram illustrating a portable multifunction device with a touch-sensitive display in accordance with some embodiments.
FIG. 1B is a block diagram illustrating exemplary components for event processing according to some embodiments.
FIG. 2 illustrates a portable multifunction device with a touch screen in accordance with some embodiments.
FIG. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments.
FIG. 4A illustrates an exemplary user interface for an application menu on a portable multifunction device according to some embodiments.
FIG. 4B illustrates an exemplary user interface for a multifunction device with a touch-sensitive surface separate from a display, in accordance with some embodiments.
FIGS. 5A-5AA illustrate exemplary user interfaces for determining whether to select a user interface object or forgo selecting a user interface object, in accordance with some embodiments.

FIGS. 6A-6E are flow diagrams illustrating methods for determining whether to select a user interface object or forgo selecting a user interface object, in accordance with some embodiments.

FIG. 7 is a functional block diagram of an electronic device according to some embodiments.

FIGS. 8A-8DD illustrate exemplary user interfaces for selecting user interface objects according to some embodiments.

FIGS. 9A-9E are flow diagrams illustrating methods of selecting a user interface object according to some embodiments.

FIG. 10 is a functional block diagram of an electronic device according to some embodiments.

FIGS. 11A-11T illustrate exemplary user interfaces for typing characters on a virtual keyboard, according to some embodiments.

FIGS. 12A-12D are flow diagrams illustrating methods for typing characters on a virtual keyboard according to some embodiments.
FIG. 13 is a functional block diagram of an electronic device according to some embodiments.
Detailed Description
The methods, devices, and GUIs described herein provide visual and/or tactile feedback that makes manipulation of user interface objects more efficient and intuitive for a user. For example, in a system where the clicking action of a trackpad is decoupled from the contact intensity (e.g., contact force, contact pressure, or a substitute therefor) required to reach an activation threshold, the device can generate different tactile outputs (e.g., "different clicks") for different activation events (e.g., so that a click that achieves a particular result is distinguished from a click that produces no result or achieves a result different from the particular result). Additionally, tactile outputs can be generated in response to other events unrelated to an increase in the intensity of a contact, such as when a user interface object is moved to a particular location, boundary, or orientation, or when an event occurs at the device (e.g., a "detent").
Additionally, in systems where the trackpad or touchscreen display is sensitive to a range of contact intensities that includes more than one or two particular intensity values (e.g., more than a simple on/off binary intensity determination), the user interface may provide a response (e.g., a visual or tactile cue) indicating the intensity of contact within the range. In some implementations, the pre-activation threshold response and/or post-activation threshold response to the input are displayed as a continuous animation. As one example of such a response, a preview of the operation is displayed in response to detecting that the increase in contact intensity is still below the activation threshold for performing the operation. As another example of such a response, the animation associated with the operation continues even after the activation threshold for the operation has been reached. Both examples provide the user with a continuous response to the force or pressure of the user's contact, which provides richer and more intuitive visual and/or tactile feedback to the user. More specifically, such continuous force responses give the user the experience of being able to press lightly to preview an operation and/or press deeply to "pass through" or "pass through" a predefined user interface state corresponding to the operation.
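One way to picture this continuous pre- and post-activation response is as a single mapping from contact intensity to animation progress, where progress below 1.0 previews the operation and progress above 1.0 continues "through" the committed state. The Swift sketch below is an assumed, simplified model, not the disclosed implementation; the normalized scale and the function name are invented.

```swift
/// Maps contact intensity to animation progress:
/// 0.0..<1.0 previews the operation (pre-activation threshold),
/// 1.0...2.0 continues the animation past commit (post-activation).
func animationProgress(intensity: Double,
                       activationThreshold: Double = 0.5,
                       maximumIntensity: Double = 1.0) -> Double {
    if intensity < activationThreshold {
        // Pre-activation: a proportional preview of the operation.
        return intensity / activationThreshold
    }
    // Post-activation: keep animating beyond the committed state.
    let overshoot = (intensity - activationThreshold)
                  / (maximumIntensity - activationThreshold)
    return 1.0 + min(max(overshoot, 0.0), 1.0)
}
```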
Additionally, for devices having touch-sensitive surfaces that are sensitive to a range of contact intensities, multiple contact intensity thresholds can be monitored by the device, and different functions can be mapped to different contact intensity thresholds. This serves to increase the available "gesture space," making advanced features easier to access for users who know that increasing the intensity of a contact to or above a second "deep press" intensity threshold will cause the device to perform an operation different from the operation performed when the intensity is between a first "activation" intensity threshold and the second "deep press" intensity threshold. An advantage of assigning additional functionality to the second "deep press" intensity threshold while maintaining familiar functionality at the first "activation" intensity threshold is that inexperienced users who might be confused by the additional functionality can use the familiar functionality by applying intensities only up to the first "activation" intensity threshold, while more experienced users can take advantage of the additional functionality by applying an intensity at or above the second "deep press" intensity threshold.
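A minimal sketch of mapping two monitored thresholds to different functions might look as follows; the threshold values and names are illustrative assumptions.

```swift
enum PressAction {
    case none        // below the first threshold
    case activate    // familiar, light-press functionality
    case deepPress   // additional functionality
}

/// Classifies a contact against the two monitored intensity thresholds.
func classify(intensity: Double,
              activationThreshold: Double = 0.3,
              deepPressThreshold: Double = 0.8) -> PressAction {
    if intensity >= deepPressThreshold { return .deepPress }
    if intensity >= activationThreshold { return .activate }
    return .none
}
```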
In addition, for devices having touch-sensitive surfaces that are sensitive to a range of contact intensities, the device may provide additional functionality by allowing a user to perform complex operations with a single continuous contact. For example, when selecting a set of objects, the user may move a continuous contact around the touch-sensitive surface and may press (e.g., apply an intensity greater than a "deep press" intensity threshold) while dragging to add additional elements to the selection. In this way, the user may intuitively interact with the user interface, where pressing harder with contact makes objects in the user interface "more sticky".
A number of different methods of providing an intuitive user interface on a device are described below in which the click action is decoupled from the force required to reach an activation threshold and/or the device is sensitive to a wide range of contact intensities. Using one or more of these methods (optionally in combination with each other) helps to provide a user interface that intuitively provides additional information and functionality to the user, thereby reducing the cognitive burden on the user and improving the human-machine interface. Such improvements in the human-machine interface enable a user to use the device more quickly and efficiently. For battery-driven devices, these improvements save power and increase the time between battery charges. For ease of explanation, the following describes systems, methods, and user interfaces for illustrative examples that include some of these methods as follows:
many electronic devices have graphical user interfaces that display user interface objects such as thumbnails, icons, folders, and scroll/handles in drag and slide bars. Typically, a user of an electronic device will wish to select and move a user interface object on a display. However, selecting a user interface object sometimes includes multiple steps performed by the user, which may be confusing and time consuming for the user. The embodiments described below provide an efficient and effective method of determining whether to select a user interface object based on the intensity of contacts with a touch-sensitive surface. 5A-5AA illustrate exemplary user interfaces for determining whether to select a user interface object or forgo selecting a user interface object, in accordance with some embodiments. The user interfaces in fig. 5A-5AA are used to illustrate the processes in fig. 6A-6E.
Many electronic devices have graphical user interfaces that display user interface objects such as thumbnails, icons, folders, and thumbs/handles in scrubbers and slider bars. Typically, a user of an electronic device will wish to select and move user interface objects on a display. However, selecting a user interface object sometimes includes multiple steps performed by the user, which can be confusing and time consuming. The embodiments described below provide an efficient and intuitive method, implemented on an electronic device with a touch-sensitive surface, for determining whether to select a user interface object or forgo selecting a user interface object, based on the intensity of contacts with the touch-sensitive surface. FIGS. 8A-8DD illustrate exemplary user interfaces for selecting user interface objects. FIGS. 9A-9E are flow diagrams illustrating a method of selecting user interface objects. The user interfaces in FIGS. 8A-8DD are used to illustrate the processes in FIGS. 9A-9E.
Many electronic devices with touch-sensitive surfaces, such as portable multifunction devices with touch screen displays, have graphical user interfaces with displayed virtual keyboards for typing characters to be output in, for example, e-mail messages, notepad applications, and search fields. Some methods for entering a character or a sequence of characters (e.g., entering an input into the device that corresponds to a request to output one or more characters) require a separate contact on the touch-sensitive surface for each character entered. However, entering characters with a separate contact for each entered character can be inefficient and time consuming for the user. In the embodiments described below, a faster and more efficient method is provided for accurately typing characters on a virtual keyboard, where a sequence of characters can be selected with a single continuous contact, in response to detecting an increase in the intensity of the contact while the contact is over a key corresponding to a character. In particular, FIGS. 11A-11T illustrate exemplary user interfaces for typing characters on a virtual keyboard. FIGS. 12A-12D are flow diagrams illustrating a method of typing characters on a virtual keyboard. The user interfaces in FIGS. 11A-11T are used to illustrate the processes in FIGS. 12A-12D.
Exemplary device
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of various described embodiments. However, it will be apparent to one of ordinary skill in the art that various described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail as not to unnecessarily obscure aspects of the embodiments.
It will also be understood that, although the terms first, second, etc. may be used herein in some embodiments to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact may be termed a second contact, and, similarly, a second contact may be termed a first contact, without departing from the scope of the various described embodiments. The first contact and the second contact are both contacts, but they are not the same contact.
The terminology used in the description of the various described embodiments herein is for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term "and/or" as used herein refers to and includes any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term "if" is optionally interpreted to mean "when" or "upon" or "in response to determining" or "in response to detecting," depending on the context. Similarly, the phrase "if it is determined" or "if [a stated condition or event] is detected" is optionally interpreted to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]," depending on the context.
Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are presented. In some embodiments, the device is a portable communication device, such as a mobile telephone, that also contains other functionality, such as personal digital assistant and/or music player functionality. Exemplary embodiments of portable multifunction devices include, but are not limited to, the iPhone, iPod Touch, and iPad devices from Apple Inc. (Cupertino, California). Other portable electronic devices are optionally used, such as a laptop or tablet computer with a touch-sensitive surface (e.g., a touch screen display and/or a touchpad). It should also be understood that, in some embodiments, the device is not a portable communication device, but is a desktop computer with a touch-sensitive surface (e.g., a touch screen display and/or a touchpad).
In the following discussion, an electronic device including a display and a touch-sensitive surface is presented. It should be understood, however, that the electronic device optionally includes one or more other physical user interface devices, such as a physical keyboard, mouse, and/or joystick.
The device typically supports various applications, such as one or more of the following: a mapping application, a rendering application, a word processing application, a website creation application, a disc editing application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an email application, an instant messaging application, a fitness support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, and/or a digital video player application.
Various applications executing on the device optionally use at least one common physical user interface device, such as a touch-sensitive surface. One or more functions of the touch-sensitive surface and corresponding information displayed on the device are optionally adjusted and/or varied from one application to the next and/or within a respective application. In this way, a common physical architecture of the device (such as a touch-sensitive surface) optionally supports various applications with a user interface that is intuitive and clear to the user.
Attention is now directed to embodiments of portable devices having touch-sensitive displays. FIG. 1A is a block diagram illustrating a portable multifunction device 100 with a touch-sensitive display 112 in accordance with some embodiments. Touch-sensitive display 112 is sometimes referred to as a "touch screen" for convenience, and is sometimes referred to or called a touch-sensitive display system. Device 100 includes memory 102 (optionally including one or more computer-readable storage media), a memory controller 122, one or more processing units (CPUs) 120, a peripherals interface 118, RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, an input/output (I/O) subsystem 106, other input or control devices 116, and an external port 124. Device 100 optionally includes one or more optical sensors 164. Device 100 optionally includes one or more intensity sensors 165 for detecting the intensity of contacts on device 100 (e.g., on a touch-sensitive surface, such as touch-sensitive display system 112 of device 100). Device 100 optionally includes one or more tactile output generators 167 for generating tactile outputs on device 100 (e.g., generating tactile outputs on a touch-sensitive surface such as touch-sensitive display system 112 of device 100 or touchpad 355 of device 300). These components optionally communicate over one or more communication buses or signal lines 103.
As used in this specification and claims, the term "intensity" of a contact on a touch-sensitive surface refers to the force or pressure (force per unit area) of a contact (e.g., a finger contact) on the touch-sensitive surface, or to a substitute (surrogate) for the force or pressure of a contact on the touch-sensitive surface. The intensity of the contact has a range of values that includes at least four different values and more typically includes hundreds of different values (e.g., at least 256). The intensity of the contact is optionally determined (or measured) using various methods and various sensors or combinations of sensors. For example, one or more force sensors below or adjacent to the touch-sensitive surface are optionally used to measure forces at different points on the touch-sensitive surface. In some implementations, force measurements from multiple force sensors are combined (e.g., a weighted average) to determine an estimated force of contact. Similarly, the pressure sensitive tip of the stylus is optionally used to determine the pressure of the stylus on the touch-sensitive surface. Alternatively, the size of the contact area detected on the touch-sensitive surface and/or changes thereto, the capacitance of the touch-sensitive surface proximate to the contact and/or changes thereto, and/or the resistance of the touch-sensitive surface proximate to the contact and/or changes thereto are optionally used as a substitute for the force or pressure of the contact on the touch-sensitive surface. In some implementations, the surrogate measurement of contact force or pressure is used directly to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is described in units corresponding to the surrogate measurement). In some implementations, the surrogate measurement of contact force or pressure is converted to an estimated force or pressure, and the estimated force or pressure is used to determine whether an intensity threshold has been exceeded (e.g., the intensity threshold is a pressure threshold measured in units of pressure).
As used in this specification and claims, the term "haptic output" refers to a physical displacement of a device relative to a previous position of the device, a physical displacement of a component of the device (e.g., a touch-sensitive surface) relative to another component of the device (e.g., a housing), or a displacement of a component relative to a center of mass of the device that is to be detected by a user through a sense of touch of the user. For example, where a device or component of a device is in contact with a surface of the user that is sensitive to touch (e.g., a finger, palm, or other portion of the user's hand), the tactile output generated by the physical displacement will be interpreted by the user as a tactile sensation that corresponds to the perceived change in the physical characteristic of the device or device component. For example, movement of a touch-sensitive surface (e.g., a touch-sensitive display or a trackpad) is optionally interpreted by a user as a "down click" or "up click" of a physical actuator button. In some cases, the user will feel a tactile sensation, such as a "press click" or "release click," even when the physical actuator button associated with the touch-sensitive surface that is physically pressed (e.g., displaced) by the user's movement is not moving. As another example, movement of the touch-sensitive surface is optionally interpreted or sensed by the user as "roughness" of the touch-sensitive surface, even when there is no change in the smoothness of the touch-sensitive surface. While such interpretation of touches by a user will be limited by the user's individualized sensory perception, sensory perception of many touches is common to most users. Thus, when a haptic output is described as corresponding to a particular sensory perception of a user (e.g., "click down," "click up," "roughness"), unless otherwise stated, the generated haptic output corresponds to a physical displacement of the device or a component thereof that would generate the described sensory perception of a typical (or ordinary) user.
It should be understood that device 100 is just one example of a portable multifunction device, and that device 100 optionally has more or fewer components than shown, optionally combines two or more components, or optionally has a different configuration or arrangement of these components. The various components shown in fig. 1A are implemented in hardware, software, or a combination of both hardware and software, including one or more signal processing and/or application specific integrated circuits.
The memory 102 optionally includes high-speed random access memory, and also optionally includes non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state memory devices. Access to memory 102 by other components of device 100, such as CPU 120 and peripherals interface 118, is optionally controlled by memory controller 122.
Peripheral interface 118 may be used to couple input and output peripherals of the device to CPU 120 and memory 102. The one or more processors 120 run or execute various software programs and/or sets of instructions stored in the memory 102 to perform various functions of the device 100 and to process data.
In some embodiments, peripherals interface 118, CPU 120, and memory controller 122 are optionally implemented on a single chip, such as chip 104. In some other embodiments, they are optionally implemented on separate chips.
RF (radio frequency) circuitry 108 receives and transmits RF signals, also called electromagnetic signals. The RF circuitry 108 converts electrical signals to/from electromagnetic signals and communicates with communication networks and other communication devices via the electromagnetic signals. RF circuitry 108 optionally includes well-known circuitry for performing these functions, including but not limited to an antenna system, an RF transceiver, one or more amplifiers, a tuner, one or more oscillators, a digital signal processor, a codec chipset, a Subscriber Identity Module (SIM) card, memory, and so forth. RF circuitry 108 optionally communicates with networks, such as the internet (also known as the World Wide Web (WWW)), intranets, and/or wireless networks, such as cellular telephone networks, wireless local area networks (LANs), and/or metropolitan area networks (MANs), as well as with other devices, via wireless communication. The wireless communication optionally uses any of a number of communication standards, protocols, and technologies, including, but not limited to, Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), High-Speed Downlink Packet Access (HSDPA), High-Speed Uplink Packet Access (HSUPA), Evolution, Data-Only (EV-DO), HSPA+, dual-cell HSPA (DC-HSDPA), Long Term Evolution (LTE), Near Field Communication (NFC), Wideband Code Division Multiple Access (W-CDMA), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth, Wireless Fidelity (Wi-Fi) (e.g., IEEE 802.11a, IEEE 802.11b, IEEE 802.11g, and/or IEEE 802.11n), Voice over Internet Protocol (VoIP), Wi-MAX, protocols for email (e.g., Internet Message Access Protocol (IMAP) and/or Post Office Protocol (POP)), instant messaging (e.g., Extensible Messaging and Presence Protocol (XMPP), Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions (SIMPLE), Instant Messaging and Presence Service (IMPS)), and/or Short Message Service (SMS), or any other suitable communication protocol, including communication protocols not yet developed as of the filing date of this document.
Audio circuitry 110, speaker 111, and microphone 113 provide an audio interface between a user and device 100. The audio circuitry 110 receives audio data from the peripheral interface 118, converts the audio data to electrical signals, and transmits the electrical signals to the speaker 111. The speaker 111 converts the electrical signals into human-audible sound waves. The audio circuit 110 also receives electrical signals converted by the microphone 113 from sound waves. The audio circuit 110 converts the electrical signals to audio data and transmits the audio data to the peripheral interface 118 for processing. Audio data is optionally retrieved from and/or transmitted to memory 102 and/or RF circuitry 108 by peripheral interface 118. In some embodiments, the audio circuit 110 also includes a headset jack (e.g., 212 in fig. 2). The headset jack provides an interface between the audio circuitry 110 and a removable audio input/output peripheral such as an output-only headphone or a headset having both an output (e.g., a monaural or binaural headphone) and an input (e.g., a microphone).
The I/O subsystem 106 couples input/output peripheral devices on the device 100, such as the touch screen 112 and other input control devices 116, to a peripheral interface 118. The I/O subsystem 106 optionally includes a display controller 156, an optical sensor controller 158, an intensity sensor controller 159, a haptic feedback controller 161, and one or more input controllers 160 for other input or control devices. The one or more input controllers 160 receive/transmit electrical signals from/to other input or control devices 116. Other input control devices 116 optionally include physical buttons (e.g., push buttons, rocker buttons, etc.), dials, slide switches, joysticks, click wheels, and the like. In some alternative embodiments, input controller 160 is optionally coupled to (or not coupled to) any of the following: a keyboard, an infrared port, a USB port, and a pointing device such as a mouse. The one or more buttons (e.g., 208 in fig. 2) optionally include an up/down button for volume control of the speaker 111 and/or microphone 113. The one or more buttons optionally include a push button (e.g., 206 in fig. 2).
Touch-sensitive display 112 provides an input interface and an output interface between the device and a user. Display controller 156 receives electrical signals from touch screen 112 and/or transmits electrical signals to touch screen 112. Touch screen 112 displays visual output to a user. The visual output optionally includes graphics, text, icons, video, and any combination thereof (collectively "graphics"). In some embodiments, some or all of the visual output corresponds to a user interface object.
Touch screen 112 has a touch-sensitive surface, sensor, or group of sensors that accept input from a user based on tactile sensation and/or tactile contact. Touch screen 112 and display controller 156 (along with any associated modules and/or sets of instructions in memory 102) detect contact (and any movement or breaking of the contact) on touch screen 112 and convert the detected contact into interaction with user interface objects (e.g., one or more soft keys, icons, web pages, or images) displayed on touch screen 112. In an exemplary embodiment, the point of contact between the touch screen 112 and the user corresponds to a finger of the user.
Touch screen 112 optionally uses LCD (liquid crystal display) technology, LPD (light emitting polymer display) technology, or LED (light emitting diode) technology, although other display technologies are used in other embodiments. Touch screen 112 and display controller 156 optionally detect contact and any movement or breaking thereof using any of a variety of touch sensing technologies now known or later developed, including, but not limited to, capacitive, resistive, infrared, and surface acoustic wave technologies, as well as other proximity sensor arrays or other elements for determining one or more points of contact with touch screen 112. In an exemplary embodiment, projected mutual capacitance sensing technology is used, such as that found in the iPhone, iPod Touch, and iPad from Apple Inc. of Cupertino, California.
The touch screen 112 optionally has a video resolution in excess of 100 dpi. In some embodiments, the touch screen has a video resolution of about 160 dpi. The user optionally makes contact with the touch screen 112 using any suitable object or appendage, such as a stylus, finger, or the like. In some embodiments, the user interface is designed to work primarily with finger-based contacts and gestures, which may be less accurate than stylus-based input due to the larger contact area of the finger on the touch screen. In some embodiments, the device translates the rough finger-based input into a precise pointer/cursor position or command to perform the action desired by the user.
In some embodiments, device 100 optionally includes a touch pad (not shown) for activating or deactivating particular functions in addition to the touch screen. In some embodiments, the touchpad is a touch-sensitive area of the device that, unlike a touch screen, does not display visual output. The touchpad is optionally a touch-sensitive surface separate from touch screen 112 or an extension of the touch-sensitive surface formed by the touch screen.
The device 100 also includes a power system 162 for powering the various components. Power system 162 optionally includes a power management system, one or more power sources (e.g., batteries, Alternating Current (AC)), a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a Light Emitting Diode (LED)) and any other components associated with the generation, management and distribution of power in a portable device.
The device 100 optionally further includes one or more optical sensors 164. FIG. 1A shows an optical sensor coupled to an optical sensor controller 158 in the I/O subsystem 106. The optical sensor 164 optionally includes a Charge Coupled Device (CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The optical sensor 164 receives light from the environment projected through one or more lenses and converts the light into data representing an image. In conjunction with imaging module 143 (also called a camera module), optical sensor 164 optionally captures still images or video. In some embodiments, an optical sensor is located on the back of device 100, opposite touch screen display 112 on the front of the device, so that the touch screen display can be used as a viewfinder for still and/or video image capture. In some embodiments, another optical sensor is located on the front of the device so that the user optionally obtains an image of the user for the video conference while viewing other video conference participants on the touch screen display.
Device 100 optionally further comprises one or more contact intensity sensors 165. FIG. 1A shows a contact intensity sensor coupled to an intensity sensor controller 159 in the I/O subsystem 106. Contact intensity sensor 165 optionally includes one or more piezoresistive strain gauges, capacitive force sensors, electrical force sensors, piezoelectric force sensors, optical force sensors, capacitive touch-sensitive surfaces, or other intensity sensors (e.g., sensors for measuring the force (or pressure) of a contact on a touch-sensitive surface). Contact intensity sensor 165 receives contact intensity information (e.g., pressure information or a proxy for pressure information) from the environment. In some embodiments, at least one contact intensity sensor is collocated with or proximate to a touch-sensitive surface (e.g., touch-sensitive display system 112). In some embodiments, at least one contact intensity sensor is located on the back of device 100, opposite touch screen display 112, which is located on the front of device 100.
The device 100 optionally further includes one or more proximity sensors 166. Fig. 1A shows a proximity sensor 166 coupled to the peripheral interface 118. Alternatively, the proximity sensor 166 is coupled to the input controller 160 in the I/O subsystem 106. In some embodiments, the proximity sensor turns off and disables touch screen 112 when the multifunction device is placed near the user's ear (e.g., when the user makes a phone call).
Device 100 optionally further comprises one or more tactile output generators 167. FIG. 1A shows a haptic output generator coupled to a haptic feedback controller 161 in I/O subsystem 106. Tactile output generator 167 optionally includes: one or more electro-acoustic devices, such as speakers or other audio components; and/or an electromechanical device that converts energy into linear motion, such as a motor, solenoid, electroactive polymer, piezoelectric actuator, electrostatic actuator, or other tactile output generating component (e.g., a component that converts an electrical signal into a tactile output on the device). Tactile output generator 167 receives tactile feedback generation instructions from haptic feedback module 133 and generates tactile outputs on device 100 that can be sensed by a user of device 100. In some embodiments, at least one tactile output generator is collocated with or proximate to a touch-sensitive surface (e.g., touch-sensitive display system 112), and optionally generates a tactile output by moving the touch-sensitive surface vertically (e.g., into/out of the surface of device 100) or laterally (e.g., back and forth in the same plane as the surface of device 100). In some embodiments, at least one tactile output generator sensor is located on the back of device 100, opposite touch screen display 112, which is located on the front of device 100.
Device 100 optionally also includes one or more accelerometers 168. Fig. 1A shows accelerometer 168 coupled to peripheral interface 118. Alternatively, accelerometer 168 is optionally coupled to input controller 160 in I/O subsystem 106. In some embodiments, the information is displayed in a portrait view or a landscape view on the touch screen display based on an analysis of the data received from the one or more accelerometers. Device 100 optionally includes a magnetometer (not shown) and a GPS (or GLONASS or other global navigation system) receiver (not shown) in addition to accelerometer 168 for obtaining information about the position and orientation (e.g., portrait or landscape) of device 100.
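A minimal sketch of the portrait/landscape decision mentioned above, assuming a single gravity sample in the device's own frame; a real implementation would filter and debounce the accelerometer data, and the names here are illustrative.

```swift
// Illustrative only: choose an orientation for displayed information from
// the gravity components along the device's x (short) and y (long) axes.
enum DisplayOrientation { case portrait, landscape }

func orientation(gravityX x: Double, gravityY y: Double) -> DisplayOrientation {
    // Upright device: gravity acts mostly along the long axis (y).
    // Device on its side: gravity acts mostly along the short axis (x).
    return abs(y) >= abs(x) ? .portrait : .landscape
}
```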
In some embodiments, the software components stored in memory 102 include an operating system 126, a communication module (or set of instructions) 128, a contact/motion module (or set of instructions) 130, a graphics module (or set of instructions) 132, a text input module (or set of instructions) 134, a Global Positioning System (GPS) module (or set of instructions) 135, and an application program (or set of instructions) 136. Further, in some embodiments, memory 102 stores device/global internal state 157, as shown in fig. 1A and 3. Device/global internal state 157 includes one or more of: an active application state indicating which applications (if any) are currently active; display state indicating what applications, views, or other information occupy various areas of the touch screen display 112; sensor states, which include information obtained from the various sensors of the device and the input control device 116; and location information relating to the location and/or attitude of the device.
Operating system 126 (e.g., Darwin, RTXC, LINUX, UNIX, OS X, WINDOWS, or an embedded operating system such as VxWorks) includes various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitates communication between various hardware and software components.
The communication module 128 facilitates communication with other devices through one or more external ports 124, and also includes various software components for processing data received by the RF circuitry 108 and/or the external ports 124. An external port 124 (e.g., Universal Serial Bus (USB), FireWire, etc.) is adapted to couple directly to other devices or indirectly through a network (e.g., the internet, wireless LAN, etc.). In some embodiments, the external port is a multi-pin (e.g., 30-pin) connector that is the same as, or similar to and/or compatible with, the 30-pin connector used on iPod (trademark of Apple Inc.) devices.
Contact/motion module 130 optionally detects contact with touch screen 112 (in conjunction with display controller 156) and other touch-sensitive devices (e.g., a touchpad or a physical click wheel). The contact/motion module 130 includes various software components for performing various operations related to the detection of contact, such as determining whether contact has occurred (e.g., detecting a finger-down event), determining contact intensity (e.g., force or pressure of the contact, or a substitute for the force or pressure of the contact), determining whether there is movement of the contact and tracking the movement across the touch-sensitive surface (e.g., detecting one or more finger-dragging events), and determining whether contact has ceased (e.g., detecting a finger-up event or a break in contact). The contact/motion module 130 receives contact data from the touch-sensitive surface. Determining movement of the point of contact, which is represented by a series of contact data, optionally includes determining speed (magnitude), velocity (magnitude and direction), and/or acceleration (a change in magnitude and/or direction) of the point of contact. These operations are optionally applied to a single contact (e.g., one finger contact) or to multiple simultaneous contacts (e.g., "multi-touch"/multiple finger contacts). In some embodiments, the contact/motion module 130 and the display controller 156 detect contact on a touchpad.
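The speed, velocity, and acceleration determinations above can be illustrated with a short sketch. The sample type and units are assumptions, not the module's actual data structures.

```swift
import Foundation

// Illustrative only: one timestamped sample of a tracked point of contact.
struct ContactSample {
    let x: Double            // position in surface coordinates
    let y: Double
    let timestamp: TimeInterval
}

// Velocity (magnitude and direction) between two consecutive samples in a
// series of contact data.
func velocity(from a: ContactSample, to b: ContactSample) -> (dx: Double, dy: Double)? {
    let dt = b.timestamp - a.timestamp
    guard dt > 0 else { return nil }
    return ((b.x - a.x) / dt, (b.y - a.y) / dt)
}

// Speed (magnitude only).
func speed(from a: ContactSample, to b: ContactSample) -> Double? {
    guard let v = velocity(from: a, to: b) else { return nil }
    return (v.dx * v.dx + v.dy * v.dy).squareRoot()
}
```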
In some embodiments, the contact/motion module 130 uses a set of one or more intensity thresholds to determine whether an operation has been performed by the user (e.g., to determine whether the user has "clicked" an icon). In some embodiments, at least a subset of the intensity thresholds are determined in accordance with software parameters (e.g., the intensity thresholds are not determined by the activation thresholds of particular physical actuators and can be adjusted without changing the physical hardware of device 100). For example, the mouse "click" threshold of a trackpad or touchscreen can be set to any of a large range of predefined thresholds without changing the trackpad or touchscreen display hardware. Additionally, in some implementations, a user of the device is provided with software settings for adjusting one or more of the set of intensity thresholds (e.g., by adjusting individual intensity thresholds and/or by adjusting a plurality of intensity thresholds at once with a system-level click "intensity" parameter).
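A hedged sketch of intensity thresholds held as software parameters, as described above; the threshold names, defaults, and single scale factor are invented for illustration.

```swift
// Illustrative only: thresholds live in software, so they can be tuned
// without changing the device hardware, and can be adjusted together by one
// system-level "intensity" setting.
struct IntensityThresholds {
    var lightPress: Double = 0.3   // hypothetical defaults, arbitrary units
    var deepPress: Double = 0.7

    // Adjust a plurality of thresholds at once (scale 1.0 keeps defaults).
    mutating func applySystemIntensitySetting(scale: Double) {
        lightPress *= scale
        deepPress *= scale
    }
}
```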
The contact/motion module 130 optionally detects gesture input by the user. Different gestures on the touch-sensitive surface have different contact patterns and intensities. Thus, the gesture is optionally detected by detecting a specific contact pattern. For example, detecting a finger tap gesture includes detecting a finger down event, and then detecting a finger up (lift off) event at the same location (or substantially the same location) as the finger down event (e.g., at an icon location). As another example, detecting a finger swipe gesture on the touch-sensitive surface includes detecting a finger-down event, then detecting one or more finger-dragging events, and then subsequently detecting a finger-up (lift-off) event.
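As a rough illustration of detecting a gesture from its contact pattern, the sketch below separates a tap from a swipe by how far the contact moved between the finger-down and finger-up (liftoff) events; the movement tolerance is an assumption.

```swift
// Illustrative only: finger up at (substantially) the same location as
// finger down is a tap; finger down, dragging, then finger up elsewhere is
// a swipe.
enum RecognizedGesture { case tap, swipe }

func classifyGesture(downX: Double, downY: Double,
                     upX: Double, upY: Double,
                     movementTolerance: Double = 10) -> RecognizedGesture {
    let dx = upX - downX
    let dy = upY - downY
    let distance = (dx * dx + dy * dy).squareRoot()
    return distance <= movementTolerance ? .tap : .swipe
}
```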
Graphics module 132 includes various known software components for presenting and displaying graphics on touch screen 112 or other display, including components for changing the visual impact (e.g., brightness, transparency, saturation, contrast, or other visual characteristics) of the displayed graphics. As used herein, the term "graphic" includes any object that may be displayed to a user, including without limitation text, web pages, icons (such as user interface objects including soft keys), digital images, videos, animations and the like.
In some embodiments, the graphics module 132 stores data representing graphics to be used. Each graphic is optionally assigned a corresponding code. The graphics module 132 receives one or more codes specifying graphics to be displayed from an application program or the like, and also receives coordinate data and other graphics attribute data together if necessary, and then generates screen image data to output to the display controller 156.
Haptic feedback module 133 includes various software components for generating instructions for use by haptic output generator 167 to produce haptic outputs at one or more locations on device 100 in response to user interaction with device 100.
Text input module 134, which is optionally a component of graphics module 132, provides a soft keyboard for entering text in various applications such as contacts 137, email 140, IM 141, browser 147, and any other application that requires text input.
The GPS module 135 determines the location of the device and provides this information for use in various applications (e.g., to the phone 138 for location-based dialing, to the camera 143 as picture/video metadata, and to applications that provide location-based services such as weather desktop widgets, local yellow pages desktop widgets, and map/navigation desktop widgets).
Application 136 optionally includes the following modules (or sets of instructions), or a subset or superset thereof:
a contacts module 137 (sometimes referred to as an address book or contact list);
a phone module 138;
a video conferencing module 139;
an email client module 140;
an Instant Messaging (IM) module 141;
fitness support module 142;
a camera module 143 for still and/or video images;
an image management module 144;
a browser module 147;
a calendar module 148;
desktop applet module 149, optionally including one or more of: a weather desktop applet 149-1, a stock market desktop applet 149-2, a calculator desktop applet 149-3, an alarm clock desktop applet 149-4, a dictionary desktop applet 149-5, and other desktop applets obtained by the user, as well as a user-created desktop applet 149-6;
A desktop applet creator module 150 for forming a user-created desktop applet 149-6;
a search module 151;
a video and music player module 152, optionally consisting of a video player module and a music player module;
a notepad module 153;
a map module 154; and/or
Online video module 155.
Examples of other applications 136 that are optionally stored in memory 102 include other word processing applications, other image editing applications, drawing applications, rendering applications, JAVA-enabled applications, encryption, digital rights management, voice recognition, and voice replication.
In conjunction with touch screen 112, display controller 156, contact module 130, graphics module 132, and text input module 134, contact module 137 is optionally used to manage an address book or contact list (e.g., stored in application internal state 192 of contact module 137 in memory 102 or memory 370), including: adding one or more names to the address book; deleting one or more names from the address book; associating a telephone number, email address, physical address, or other information with a name; associating an image with a name; categorizing and sorting names; providing telephone numbers or email addresses to initiate and/or facilitate communications by telephone 138, video conference 139, email 140, or IM 141; and so on.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, contact module 130, graphics module 132, and text input module 134, telephone module 138 is optionally used to enter a sequence of characters corresponding to a telephone number, access one or more telephone numbers in address book 137, modify an already entered telephone number, dial a corresponding telephone number, conduct a conversation, and disconnect or hang up when the conversation is complete. As noted above, the wireless communication optionally uses any of a variety of communication standards, protocols, and technologies.
In conjunction with RF circuitry 108, audio circuitry 110, speaker 111, microphone 113, touch screen 112, display controller 156, optical sensor 164, optical sensor controller 158, contact module 130, graphics module 132, text input module 134, contact list 137, and telephone module 138, video conference module 139 includes executable instructions for initiating, conducting, and ending a video conference between a user and one or more other participants according to user instructions.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact module 130, graphics module 132, and text input module 134, email client module 140 includes executable instructions for creating, sending, receiving, and managing emails in response to user instructions. In conjunction with the image management module 144, the email client module 140 makes it very easy to create and send an email with a still image or a video image captured by the camera module 143.
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact module 130, graphics module 132, and text input module 134, instant message module 141 includes executable instructions for entering a sequence of characters corresponding to an instant message, modifying previously entered characters, transmitting a corresponding instant message (e.g., using a Short Message Service (SMS) or Multimedia Messaging Service (MMS) protocol for a phone-based instant message, or using XMPP, SIMPLE, or IMPS for an internet-based instant message), receiving an instant message, and viewing the received instant message. In some embodiments, transmitted and/or received instant messages optionally include graphics, photos, audio files, video files, and/or other attachments as supported in an MMS and/or an Enhanced Messaging Service (EMS). As used herein, "instant message" refers to both telephony-based messages (e.g., messages sent using SMS or MMS) and internet-based messages (e.g., messages sent using XMPP, SIMPLE, or IMPS).
In conjunction with RF circuitry 108, touch screen 112, display controller 156, contact module 130, graphics module 132, text input module 134, GPS module 135, map module 154, and music player module 146, fitness support module 142 includes executable instructions to create workouts (e.g., with time, distance, and/or calorie-burning goals); communicate with workout sensors (sports devices); receive workout sensor data; calibrate sensors used to monitor a workout; select and play music for a workout; and display, store, and transmit workout data.
In conjunction with touch screen 112, display controller 156, optical sensor 164, optical sensor controller 158, contact module 130, graphics module 132, and image management module 144, camera module 143 includes executable instructions for capturing still images or video (including video streams) and storing them in memory 102, modifying characteristics of the still images or video, or deleting the still images or video from memory 102.
In conjunction with touch screen 112, display controller 156, contact module 130, graphics module 132, text input module 134, and camera module 143, image management module 144 includes executable instructions for arranging, modifying (e.g., editing), or otherwise manipulating, labeling, deleting, presenting (e.g., in a digital slide or album), and storing still and/or video images.
In conjunction with RF circuitry 108, touch screen 112, display system controller 156, contact module 130, graphics module 132, and text input module 134, browser module 147 includes executable instructions for browsing the internet (including searching, linking to, receiving, and displaying web pages or portions thereof, and attachments and other files linked to web pages) according to user instructions.
In conjunction with the RF circuitry 108, the touch screen 112, the display system controller 156, the contact module 130, the graphics module 132, the text input module 134, the email client module 140, and the browser module 147, the calendar module 148 includes executable instructions to create, display, modify, and store calendars and data associated with calendars (e.g., calendar entries, to-do, etc.) according to user instructions.
In conjunction with RF circuitry 108, touch screen 112, display system controller 156, contact module 130, graphics module 132, text input module 134, and browser module 147, the desktop applet module 149 is a mini-application that is optionally downloaded and used by a user (e.g., weather desktop applet 149-1, stock market desktop applet 149-2, calculator desktop applet 149-3, alarm clock desktop applet 149-4, and dictionary desktop applet 149-5) or created by the user (e.g., user-created desktop applet 149-6). In some embodiments, a desktop applet includes an HTML (HyperText Markup Language) file, a CSS (Cascading Style Sheets) file, and a JavaScript file. In some embodiments, a desktop applet includes an XML (Extensible Markup Language) file and a JavaScript file (e.g., Yahoo! Widgets).
In conjunction with RF circuitry 108, touch screen 112, display system controller 156, contact module 130, graphics module 132, text input module 134, and browser module 147, the desktop applet creator module 150 is optionally used by a user to create a desktop applet (e.g., to turn a user-specified portion of a web page into a desktop applet).
In conjunction with touch screen 112, display system controller 156, contact module 130, graphics module 132, and text input module 134, search module 151 includes executable instructions to search for text, music, sound, images, video, and/or other files in memory 102 that match one or more search criteria (e.g., one or more user-specified search terms) according to user instructions.
In conjunction with touch screen 112, display system controller 156, contact module 130, graphics module 132, audio circuitry 110, speakers 111, RF circuitry 108, and browser module 147, video and music player module 152 includes executable instructions that allow a user to download and playback recorded music and other sound files stored in one or more file formats, such as MP3 or AAC files, as well as executable instructions for displaying, rendering, or otherwise playing back video (e.g., on touch screen 112 or on an external display connected via external port 124). In some embodiments, the device 100 optionally includes the functionality of an MP3 player, such as an iPod (trademark of Apple inc.).
In conjunction with touch screen 112, display controller 156, contact module 130, graphics module 132, and text input module 134, notepad module 153 includes executable instructions for creating and managing notes, to-do task lists, and the like, according to user instructions.
In conjunction with RF circuitry 108, touch screen 112, display system controller 156, contact module 130, graphics module 132, text input module 134, GPS module 135, and browser module 147, map module 154 is optionally used to receive, display, modify, and store maps and data associated with maps (e.g., driving directions; data related to stores and other points of interest at or near a particular location; and other location-based data) according to user instructions.
In conjunction with touch screen 112, display system controller 156, contact module 130, graphics module 132, audio circuit 110, speaker 111, RF circuit 108, text input module 134, email client module 140, and browser module 147, online video module 155 includes instructions that allow a user to access, browse, receive (e.g., via streaming and/or download), play (e.g., on the touch screen or on an external display connected via external port 124), send an email with a link to a particular online video, and otherwise manage online videos in one or more file formats, such as H.264. In some embodiments, the instant message module 141, rather than the email client module 140, is used to send a link to a particular online video.
Each of the modules and applications described above corresponds to a set of executable instructions for performing one or more of the functions described above as well as the methods described in this application (e.g., the computer-implemented methods and other information processing methods described herein). These modules (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. In some embodiments, memory 102 optionally stores a subset of the modules and data structures described above. Further, memory 102 optionally stores additional modules and data structures not described above.
In some embodiments, device 100 is a device on which the operation of a predefined set of functions is performed exclusively through a touch screen and/or a touchpad. The number of physical input control devices (such as push buttons, dials, etc.) on device 100 is optionally reduced by using a touch screen and/or touchpad as the primary input control device for operation of device 100.
The predefined set of functions performed exclusively by the touchscreen and/or touchpad optionally includes navigation between user interfaces. In some embodiments, the touchpad, when touched by a user, navigates device 100 from any user interface displayed on device 100 to a main, home, or root menu. In such embodiments, the "menu button" is implemented using a touchpad. In some other embodiments, the menu button is a physical push button or other physical input control device, rather than a touchpad.
FIG. 1B is a block diagram illustrating exemplary components for event processing according to some embodiments. In some embodiments, memory 102 (in FIG. 1A) or memory 370 (FIG. 3) includes event classifier 170 (e.g., in operating system 126) and a corresponding application 136-1 (e.g., any of the aforementioned applications 137-151, 155, and 380-390).
Event sorter 170 receives the event information and determines application 136-1 and application view 191 of application 136-1 to which the event information is to be passed. The event sorter 170 includes an event monitor 171 and an event dispatcher module 174. In some embodiments, application 136-1 includes an application internal state 192 that indicates a current application view that is displayed on touch-sensitive display 112 when the application is active or executing. In some embodiments, the device/global internal state 157 is used by the event classifier 170 to determine which application(s) are currently active, and the application internal state 192 is used by the event classifier 170 to determine the application view 191 to which to pass event information.
In some embodiments, the application internal state 192 includes additional information, such as one or more of the following: resume information to be used when the application 136-1 resumes execution, user interface state information indicating information that the application 136-1 is displaying or is ready to display, a state queue that enables the user to transition back to a previous state or view of the application 136-1, and a resume/undo queue of previous actions taken by the user.
Event monitor 171 receives event information from peripheral interface 118. The event information includes information about the sub-event (e.g., a user contacting touch-sensitive display 112 as part of a multi-touch gesture). Peripherals interface 118 transmits information it receives from I/O subsystem 106 or sensors (such as proximity sensor 166), accelerometer 168, and/or microphone 113 (through audio circuitry 110). Information received by peripheral interface 118 from I/O subsystem 106 includes information from touch-sensitive display 112 or a touch-sensitive surface.
In some embodiments, event monitor 171 sends requests to peripheral interface 118 at predetermined intervals. In response, peripheral interface 118 transmits the event information. In other embodiments, peripheral interface 118 transmits event information only when there is a significant event (e.g., receiving input above a predetermined noise threshold and/or receiving input for more than a predetermined duration).
In some embodiments, event classifier 170 further includes hit view determination module 172 and/or active event recognizer determination module 173.
When touch-sensitive display 112 displays more than one view, hit view determination module 172 provides a software process for determining where a sub-event has occurred within one or more views. The view consists of controls and other elements that the user can see on the display.
Another aspect of the user interface associated with an application is a set of views, sometimes referred to herein as application views or user interface windows, in which information is displayed and touch-based gestures occur. The application view (of the respective application) in which the touch is detected optionally corresponds to a programmatic level within a programmatic or view hierarchy of applications. For example, the lowest level view in which a touch is detected is optionally called a hit view, and the set of events identified as correct inputs is optionally determined based at least in part on the hit view of the initial touch that began the touch-based gesture.
Hit view determination module 172 receives information related to sub-events of the touch-based gesture. When the application has multiple views organized in a hierarchy, hit view determination module 172 identifies the hit view as the lowest view in the hierarchy that should handle the sub-event. In most cases, the hit view is the lowest level view in which the initiating sub-event (i.e., the first sub-event in the sequence of sub-events that form an event or potential event) occurs. Once the hit view is identified by the hit view determination module, the hit view typically receives all sub-events related to the same touch or input source for which it was identified as the hit view.
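A minimal sketch of hit-view determination as just described: a depth-first search that returns the lowest view in the hierarchy containing the touch location. For simplicity the toy hierarchy assumes all frames share one coordinate space, which real view systems do not; the names are illustrative.

```swift
// Illustrative only: a toy view with a rectangular frame and subviews.
final class ToyView {
    let x, y, width, height: Double
    var subviews: [ToyView] = []

    init(x: Double, y: Double, width: Double, height: Double) {
        self.x = x; self.y = y; self.width = width; self.height = height
    }

    func contains(px: Double, py: Double) -> Bool {
        return px >= x && px < x + width && py >= y && py < y + height
    }

    // The deepest view containing the point is the hit view.
    func hitView(px: Double, py: Double) -> ToyView? {
        guard contains(px: px, py: py) else { return nil }
        for subview in subviews {
            if let hit = subview.hitView(px: px, py: py) { return hit }
        }
        return self
    }
}
```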
The active event recognizer determination module 173 determines which view or views within the view hierarchy should receive a particular sequence of sub-events. In some embodiments, the active event recognizer determination module 173 determines that only the hit view should receive a particular sequence of sub-events. In other embodiments, active event recognizer determination module 173 determines that all views that include the physical location of the sub-event are actively participating views, and thus determines that all actively participating views should receive a particular sequence of sub-events. In other embodiments, even if the touch sub-event is completely constrained to the area associated with one particular view, the views higher in the hierarchy will remain actively involved views.
The event dispatcher module 174 dispatches the event information to an event recognizer (e.g., event recognizer 180). In embodiments including active event recognizer determination module 173, event dispatcher module 174 delivers event information to event recognizers determined by active event recognizer determination module 173. In some embodiments, the event dispatcher module 174 stores event information in the event queue, which is retrieved by the respective event receiver module 182.
In some embodiments, the operating system 126 includes an event classifier 170. Alternatively, application 136-1 includes event classifier 170. In another embodiment, the event classifier 170 is a stand-alone module or is part of another module stored in the memory 102, such as the contact/motion module 130.
In some embodiments, the application 136-1 includes a plurality of event handlers 190 and one or more application views 191, each of which includes instructions for handling touch events that occur within a respective view of the application's user interface. Each application view 191 of the application 136-1 includes one or more event recognizers 180. Typically, the respective application view 191 includes a plurality of event recognizers 180. In other embodiments, one or more of the event recognizers 180 are part of a separate module, such as a user interface toolkit (not shown) or a higher level object from which the application 136-1 inherits methods and other characteristics. In some embodiments, the respective event handlers 190 comprise one or more of: data updater 176, object updater 177, GUI updater 178, and/or event data 179 received from event sorter 170. Event handler 190 optionally utilizes or calls data updater 176, object updater 177 or GUI updater 178 to update application internal state 192. Alternatively, one or more of the application views 191 include one or more corresponding event handlers 190. Additionally, in some embodiments, one or more of data updater 176, object updater 177, and GUI updater 178 are included in a respective application view 191.
The corresponding event recognizer 180 receives event information (e.g., event data 179) from the event classifier 170 and identifies events from the event information. The event recognizer 180 includes an event receiver 182 and an event comparator 184. In some embodiments, the event recognizer 180 further comprises at least a subset of: metadata 183 and event delivery instructions 188 (which optionally include sub-event delivery instructions).
The event receiver 182 receives event information from the event classifier 170. The event information includes information about the sub-event, such as a touch or touch movement. Depending on the sub-event, the event information also includes additional information, such as the location of the sub-event. When the sub-event relates to motion of a touch, the event information optionally also includes the velocity and direction of the sub-event. In some embodiments, the event comprises a rotation of the device from one orientation to another (e.g., from a portrait orientation to a landscape orientation, or vice versa), and the event information comprises corresponding information about the current orientation of the device (also referred to as the device pose).
Event comparator 184 compares the event information to predefined event or sub-event definitions and determines an event or sub-event, or determines or updates the state of an event or sub-event, based on the comparison. In some embodiments, event comparator 184 includes event definitions 186. Event definitions 186 contain definitions of events (e.g., predefined sub-event sequences), such as event 1 (187-1), event 2 (187-2), and others. In some embodiments, sub-events in event 187 include, for example, touch start, touch end, touch move, touch cancel, and multiple touches. In one example, the definition of event 1 (187-1) is a double tap on a displayed object. For example, the double tap includes a first touch (touch start) on the displayed object for a predetermined length of time, a first lift-off (touch end) for a predetermined length of time, a second touch (touch start) on the displayed object for a predetermined length of time, and a second lift-off (touch end) for a predetermined length of time. In another example, the definition of event 2 (187-2) is a drag on a displayed object. For example, the drag includes a touch (or contact) on the displayed object for a predetermined phase, a movement of the touch across the touch-sensitive display 112, and a lift-off of the touch (touch end). In some embodiments, the event also includes information for one or more associated event handlers 190.
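The two example definitions above boil down to predefined sequences of sub-events. A sketch follows, with the timing and location checks (the predetermined durations above) deliberately omitted; the names are illustrative.

```swift
// Illustrative only: sub-events and two predefined event definitions.
enum SubEvent { case touchBegan, touchMoved, touchEnded, touchCancelled }

struct EventDefinition {
    let name: String
    let sequence: [SubEvent]
}

let doubleTap = EventDefinition(
    name: "event 1 (double tap)",
    sequence: [.touchBegan, .touchEnded, .touchBegan, .touchEnded])

let drag = EventDefinition(
    name: "event 2 (drag)",
    sequence: [.touchBegan, .touchMoved, .touchEnded])

// An event comparator reduced to its core: does the received sub-event
// string match a predefined definition?
func matches(_ received: [SubEvent], _ definition: EventDefinition) -> Bool {
    return received == definition.sequence
}
```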
In some embodiments, event definition 187 includes definitions of events for respective user interface objects. In some embodiments, event comparator 184 performs a hit test to determine which user interface object is associated with a sub-event. For example, in an application view in which three user interface objects are displayed on touch-sensitive display 112, when a touch is detected on touch-sensitive display 112, event comparator 184 performs a hit test to determine which of the three user interface objects is associated with the touch (sub-event). If each displayed object is associated with a corresponding event handler 190, the event comparator uses the results of the hit test to determine which event handler 190 should be activated. For example, event comparator 184 selects the event handler associated with the object and sub-event that triggered the hit test.
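A sketch of the hit test just described, reduced to choosing the handler of the topmost displayed object containing the touch location; the object type and the drawing-order convention are assumptions.

```swift
// Illustrative only: a displayed object with a frame and an event handler.
struct DisplayedObject {
    let name: String
    let x, y, width, height: Double
    let handler: () -> Void
}

// Later entries are assumed to be drawn on top, so search topmost first.
func handlerForTouch(atX px: Double, y py: Double,
                     among objects: [DisplayedObject]) -> (() -> Void)? {
    for object in objects.reversed() {
        if px >= object.x, px < object.x + object.width,
           py >= object.y, py < object.y + object.height {
            return object.handler
        }
    }
    return nil
}
```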
In some embodiments, the definition of the respective event 187 further includes a delay action that delays the delivery of the event information until after it has been determined whether the sequence of sub-events does correspond to the event type of the event recognizer.
When the respective event recognizer 180 determines that the sub-event string does not match any event in the event definition 186, the respective event recognizer 180 enters an event not possible, event failed, or event ended state, which then disregards subsequent sub-events of the touch-based gesture. In this case, other event recognizers (if any) that remain active for hit views continue to track and process sub-events of the continuous contact-based gesture.
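The failure path just described suggests a small state machine. The sketch below reuses SubEvent from the earlier sketch and is illustrative only: a recognizer fails as soon as its received sub-event string can no longer be a prefix of its definition, and thereafter disregards subsequent sub-events.

```swift
// Illustrative only: a recognizer's states and sub-event consumption.
enum RecognizerState { case possible, recognized, failed }

struct ToyRecognizer {
    let definition: [SubEvent]               // e.g., doubleTap.sequence
    private(set) var received: [SubEvent] = []
    private(set) var state: RecognizerState = .possible

    mutating func consume(_ subEvent: SubEvent) {
        guard state == .possible else { return }  // failed: ignore the rest
        received.append(subEvent)
        if received == definition {
            state = .recognized
        } else if !definition.starts(with: received) {
            state = .failed
        }
    }
}
```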
In some embodiments, the respective event recognizer 180 includes metadata 183 with configurable attributes, flags, and/or lists that indicate how the event delivery system should perform sub-event delivery to actively participating event recognizers. In some embodiments, metadata 183 includes configurable attributes, flags, and/or lists that indicate how event recognizers interact, or are enabled to interact, with one another. In some embodiments, metadata 183 includes configurable attributes, flags, and/or lists that indicate whether sub-events are delivered to varying levels in the view or programmatic hierarchy.
In some embodiments, when one or more particular sub-events of an event are identified, the respective event recognizer 180 activates an event handler 190 associated with the event. In some embodiments, the respective event recognizer 180 delivers event information associated with the event to the event handler 190. Activating an event handler 190 is distinct from sending (and deferring the sending of) sub-events to a corresponding hit view. In some embodiments, the event recognizer 180 throws a flag associated with the recognized event, and the event handler 190 associated with the flag receives the flag and performs a predefined process.
In some embodiments, event delivery instructions 188 include sub-event delivery instructions that deliver event information about sub-events without activating an event handler. Instead, the sub-event delivery instructions deliver event information to event handlers associated with the sub-event string or to actively participating views. Event handlers associated with the sub-event strings or with actively involved views receive the event information and perform a predetermined process.
In some embodiments, data updater 176 creates and updates data used in application 136-1. For example, data updater 176 updates a telephone number used in contacts module 137, or stores a video file used in video player module 145. In some embodiments, object updater 177 creates and updates objects used in application 136-1. For example, object updater 177 creates a new user interface object, or updates the position of a user interface object. GUI updater 178 updates the GUI. For example, GUI updater 178 prepares display information and sends it to graphics module 132 for display on the touch-sensitive display.
In some embodiments, event handler 190 includes or has access to data updater 176, object updater 177, and GUI updater 178. In some embodiments, data updater 176, object updater 177, and GUI updater 178 are included in a single module of a respective application 136-1 or application view 191. In other embodiments, they are included in two or more software modules.
It should be understood that the above discussion of event processing with respect to user touches on a touch-sensitive display also applies to other forms of user input utilizing an input device to operate multifunction device 100, not all of which are initiated on a touch screen. For example, mouse movements and mouse button presses, optionally coordinated with single or multiple keyboard presses or holds; contact movements on a touchpad, such as taps, drags, scrolls, and the like; stylus inputs; movement of the device; verbal instructions; detected eye movements; biometric inputs; and/or any combination thereof are optionally utilized as inputs corresponding to sub-events that define an event to be recognized.
FIG. 2 illustrates a portable multifunction device 100 with a touch screen 112 in accordance with some embodiments. The touch screen optionally displays one or more graphics within a user interface (UI) 200. In this embodiment, as well as other embodiments described below, a user can select one or more of the graphics by making a gesture on the graphics, for example, with one or more fingers 202 (not drawn to scale in the figure) or with one or more styluses 203 (not drawn to scale in the figure). In some embodiments, selection of the one or more graphics occurs when the user breaks contact with the one or more graphics. In some embodiments, the gesture optionally includes one or more taps, one or more swipes (from left to right, from right to left, upward, and/or downward), and/or a rolling of a finger (from right to left, from left to right, upward, and/or downward) that has made contact with device 100. In some implementations, or in some circumstances, inadvertent contact with a graphic does not select the graphic. For example, when the gesture corresponding to selection is a tap, a swipe gesture that sweeps over an application icon optionally does not select the corresponding application.
Device 100 optionally also includes one or more physical buttons, such as a "home" or menu button 204. As previously described, the menu button 204 is optionally used to navigate to any application 136 in a set of applications that are optionally executed on the device 100. Alternatively, in some embodiments, the menu buttons are implemented as soft keys in a GUI displayed on touch screen 112.
In one embodiment, device 100 includes touch screen 112, menu button 204, push button 206 for powering the device on/off and locking the device, volume adjustment button(s) 208, Subscriber Identity Module (SIM) card slot 210, headset jack 212, and docking/charging external port 124. The push button 206 is optionally used to turn the power on/off on the device by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlock process. In an alternative embodiment, device 100 also accepts verbal input through microphone 113 for activating or deactivating some functions. Device 100 also optionally includes one or more contact intensity sensors 165 for detecting intensity of contacts on touch screen 112, and/or one or more tactile output generators 167 for generating tactile outputs for a user of device 100.
FIG. 3 is a block diagram of an exemplary multifunction device with a display and a touch-sensitive surface in accordance with some embodiments. The device 300 need not be portable. In some embodiments, the device 300 is a laptop computer, desktop computer, tablet computer, multimedia player device, navigation device, educational device (such as a child learning toy), gaming system, or control device (e.g., a home or industrial controller). Device 300 typically includes one or more processing units (CPUs) 310, one or more network or other communication interfaces 360, memory 370, and one or more communication buses 320 for interconnecting these components. The communication bus 320 optionally includes circuitry (sometimes called a chipset) that interconnects and controls communication between system components. Device 300 includes an input/output (I/O) interface 330 having a display 340, display 340 typically being a touch screen display. I/O interface 330 also optionally includes a keyboard and/or mouse (or other pointing device) 350 and touchpad 355, a tactile output generator 357 (e.g., similar to tactile output generator 167 described above with reference to fig. 1A) for generating tactile outputs on device 300, a sensor 359 (e.g., an optical sensor, an acceleration sensor, a proximity sensor, a touch-sensitive sensor, and/or a contact intensity sensor similar to contact intensity sensor 165 described above with reference to fig. 1A). Memory 370 includes high speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. Memory 370 optionally includes one or more storage devices located remotely from CPU 310. In some embodiments, memory 370 stores programs, modules, and data structures similar to, or a subset of, the programs, modules, and data structures stored in memory 102 of portable multifunction device 100 (FIG. 1A). Further, memory 370 optionally stores additional programs, modules, and data structures not present in memory 102 of portable multifunction device 100. For example, memory 370 of device 300 optionally stores drawing module 380, presentation module 382, word processing module 384, website creation module 386, disk editing module 388, and/or spreadsheet module 390, while memory 102 of portable multifunction device 100 (FIG. 1A) optionally does not store these modules.
Each of the above identified elements in fig. 3 is optionally stored in one or more of the previously mentioned memory devices. Each of the identified modules corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules are optionally combined or otherwise rearranged in various embodiments. In some embodiments, memory 370 optionally stores a subset of the modules and data structures described above. Further, memory 370 optionally stores additional modules and data structures not described above.
Attention is now directed to embodiments of a user interface ("UI") optionally implemented on portable multifunction device 100.
FIG. 4A illustrates an exemplary user interface for an application menu on portable multifunction device 100 according to some embodiments. A similar user interface is optionally implemented on device 300. In some embodiments, the user interface 400 includes the following elements, or a subset or superset thereof:
one or more signal strength indicators 402 for one or more wireless communications, such as mobile phones and Wi-Fi signals;
Time 404;
a bluetooth indicator 405;
a battery status indicator 406;
a tray 408 with icons for frequently used applications, such as:
an icon 416 of the phone module 138 labeled "phone," optionally including an indicator 414 of the number of missed calls or voice messages;
an icon 418 of the email client module 140 labeled "mail", optionally including an indicator 410 of the number of unread emails;
an icon 420 of the browser module 147 labeled "browser"; and
an icon 422 of the video and music player module 152, also called the iPod (trademark of Apple Inc.) module 152, labeled "iPod"; and
icons for other applications, such as:
an icon 424 of the IM module 141 labeled "text";
an icon 426 of the calendar module 148 labeled "calendar";
an icon 428 of the image management module 144 labeled "photo";
an icon 430 of the camera module 143 labeled "camera";
an icon 432 of the online video module 155 labeled "online video";
an icon 434 of the stock market desktop applet 149-2 labeled "stock market";
an icon 436 of the map module 154 labeled "map";
an icon 438 of the weather desktop applet 149-1 labeled "weather";
an icon 440 of the alarm clock desktop applet 149-4 labeled "clock";
an icon 442 of the fitness support module 142 labeled "fitness support";
an icon 444 of the notepad module 153 labeled "note"; and
an icon 446 of a settings application or module, which provides access to settings for device 100 and its various applications 136.
It should be noted that the icon labels shown in FIG. 4A are merely exemplary. For example, icon 422 of the video and music player module 152 is labeled "music" or "music player." Other labels are optionally used for various application icons. In some embodiments, the label of a respective application icon includes the name of the application corresponding to the respective application icon. In some embodiments, the label of a particular application icon is different from the name of the application corresponding to the particular application icon.
Fig. 4B illustrates an exemplary user interface on a device (e.g., device 300 in fig. 3) having a touch-sensitive surface 451 (e.g., tablet or touchpad 355 in fig. 3) separate from a display 450 (e.g., touch screen display 112). Device 300 also optionally includes one or more contact intensity sensors (e.g., one or more of sensors 359) for detecting the intensity of contacts on touch-sensitive surface 451, and/or one or more tactile output generators 357 for generating tactile outputs for a user of device 300.
Although some of the examples that follow will be given with reference to input on touch screen display 112 (where the touch-sensitive surface and the display are combined), in some embodiments, the device detects input on a touch-sensitive surface that is separate from the display, as shown in fig. 4B. In some embodiments, the touch-sensitive surface (e.g., 451 in fig. 4B) has a primary axis (e.g., 452 in fig. 4B) that corresponds to a primary axis (e.g., 453 in fig. 4B) on the display (e.g., 450). In accordance with these embodiments, the device detects contacts (e.g., 460 and 462 in fig. 4B) with the touch-sensitive surface 451 at locations corresponding to respective locations on the display (e.g., in fig. 4B, 460 corresponds to 468 and 462 corresponds to 470). Thus, when the touch-sensitive surface (e.g., 451 in FIG. 4B) is separated from the display (450 in FIG. 4B) of the multifunction device, user inputs (e.g., contacts 460 and 462, and their movements) detected by the device on the touch-sensitive surface are used by the device to manipulate the user interface on the display. It should be understood that similar methods are optionally used for the other user interfaces described herein.
Additionally, while the following examples are given primarily with reference to finger inputs (e.g., finger contacts, finger tap gestures, finger swipe gestures), it should be understood that in some embodiments one or more of these finger inputs are replaced by input from another input device (e.g., mouse-based input or stylus input). For example, a swipe gesture is optionally replaced by a mouse click (e.g., rather than a contact) followed by movement of the cursor along the path of the swipe (e.g., rather than movement of the contact). As another example, a tap gesture is optionally replaced by a mouse click while the cursor is over the location of the tap gesture (e.g., rather than detection of the contact followed by ceasing to detect the contact). Similarly, when multiple user inputs are detected simultaneously, it should be understood that multiple computer mice are optionally used simultaneously, or a mouse and multiple finger contacts are optionally used simultaneously.
As used herein, the term "focus selector" refers to an input element that indicates the current portion of the user interface with which the user is interacting. In some implementations that include a cursor or other location indicia, the cursor acts as a "focus selector" such that when an input (e.g., a press input) is detected on a touch-sensitive surface (e.g., touchpad 355 in fig. 3 or touch-sensitive surface 451 in fig. 4B) while the cursor is over a particular user interface element (e.g., a button, window, slider, or other user interface element), the particular user interface element is adjusted in accordance with the detected input. In some implementations that include a touch screen display (e.g., touch-sensitive display system 112 in fig. 1A or touch screen 112 in fig. 4A) that enables direct interaction with a user interface element on the touch screen display, a detected contact on the touch screen acts as a "focus selector" such that when an input (e.g., a press input by the contact) is detected at a location of a particular user interface element (e.g., a button, window, slider, or other user interface element) on the touch screen display, the particular user interface element is adjusted in accordance with the detected input. In some implementations, focus is moved from one area of the user interface to another area of the user interface without corresponding cursor movement or movement of contact on the touch screen display (e.g., by using tab keys or directional keys to move focus from one button to another); in these implementations, the focus selector moves according to movement of the focus between different regions of the user interface. Regardless of the particular form taken by the focus selector, the focus selector is typically a user interface element (or contact on a touch screen display) that is controlled by the user to communicate the user's intended interaction with the user interface (e.g., by indicating to the device the element of the user interface with which the user is intending to interact). For example, when a press input is detected on a touch-sensitive surface (e.g., a touchpad or touchscreen), the location of a focus selector (e.g., a cursor, contact, or selection box) over a respective button will indicate that the user is intending to activate the respective button (as opposed to other user interface elements shown on the display of the device).
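To make the abstraction concrete, the following Swift sketch (illustrative only; the types and names are assumptions, not part of the described embodiments) derives a single hit-testing location from the three forms of focus selector described above: a cursor, a direct touch contact, or keyboard-driven focus.

```swift
struct Point { var x: Double; var y: Double }

// Hypothetical focus-selector model: the point used to decide which user
// interface element an input applies to.
enum FocusSelector {
    case cursor(Point)                 // pointer driven from a touchpad/mouse
    case contact(Point)                // a finger directly on a touch screen
    case keyboardFocus(elementID: Int) // focus moved with tab/arrow keys

    // Resolve the on-screen location the input should be applied to, if any.
    func location(elementCenters: [Int: Point]) -> Point? {
        switch self {
        case .cursor(let p), .contact(let p):
            return p
        case .keyboardFocus(let id):
            return elementCenters[id]
        }
    }
}
```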
The user interface figures described below include various intensity diagrams that illustrate the current intensity of contacts on the touch-sensitive surface relative to one or more intensity thresholds (e.g., a contact detection intensity threshold IT0, a light press intensity threshold ITL, a deep press intensity threshold ITD, and/or one or more other intensity thresholds). This intensity diagram is typically not part of the displayed user interface, but is provided to assist in interpreting the figures. In some embodiments, the light press intensity threshold corresponds to an intensity at which the device will perform an operation typically associated with clicking a button of a physical mouse or trackpad. In some embodiments, the deep press intensity threshold corresponds to an intensity at which the device will perform an operation different from the operations typically associated with clicking a button of a physical mouse or trackpad. In some embodiments, when a contact is detected with an intensity below the light press intensity threshold (e.g., and above a nominal contact detection intensity threshold IT0, below which the contact is no longer detected), the device will move the focus selector in accordance with movement of the contact on the touch-sensitive surface without performing the operation associated with the light press intensity threshold or the deep press intensity threshold. Generally, unless otherwise stated, these intensity thresholds are consistent between different sets of user interface figures.
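As a rough illustration of how such a threshold ladder might be represented in code, here is a minimal Swift sketch; the numeric values are arbitrary assumptions (the text only requires IT0 < ITL < ITD, with intensities normalized here to 0...1), and none of the names come from the patent.

```swift
// Hypothetical threshold values; a real device would calibrate these.
struct IntensityThresholds {
    var contactDetection = 0.05 // IT0
    var lightPress = 0.33       // ITL
    var deepPress = 0.66        // ITD
}

enum IntensityBand { case none, detected, lightPress, deepPress }

// Classify a single intensity sample against the threshold ladder.
func band(for intensity: Double, _ t: IntensityThresholds) -> IntensityBand {
    switch intensity {
    case ..<t.contactDetection: return .none      // below IT0: no contact
    case ..<t.lightPress:       return .detected  // between IT0 and ITL
    case ..<t.deepPress:        return .lightPress
    default:                    return .deepPress
    }
}
```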
An increase in the intensity of the contact from an intensity below the light press intensity threshold ITL to an intensity between the light press intensity threshold ITL and the deep press intensity threshold ITD is sometimes referred to as a "light press" input. An increase in the intensity of the contact from an intensity below the deep press intensity threshold ITD to an intensity above the deep press intensity threshold ITD is sometimes referred to as a "deep press" input. An increase in the intensity of the contact from an intensity below the contact detection intensity threshold IT0 to an intensity between the contact detection intensity threshold IT0 and the light press intensity threshold ITL is sometimes referred to as detecting the contact on the touch surface. A decrease in the intensity of the contact from an intensity above the contact detection intensity threshold IT0 to an intensity below the contact detection intensity threshold IT0 is sometimes referred to as detecting liftoff of the contact from the touch surface. In some embodiments, IT0 is zero. In some embodiments, IT0 is greater than zero. In some illustrations, shaded circles or ellipses are used to represent the intensity of contacts on the touch-sensitive surface. In some illustrations, circles or ellipses without shading are used to represent respective contacts on the touch-sensitive surface without specifying the intensity of the respective contacts.
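Continuing the previous sketch, the threshold crossings named above ("light press," "deep press," detecting the contact, and liftoff) can be classified from two consecutive intensity samples. This is a hedged illustration, not the device's actual event model; it reuses the IntensityThresholds type from the sketch above.

```swift
enum IntensityEvent { case contactDetected, lightPress, deepPress, liftoff }

// Classify the transition between two consecutive samples, if any.
func event(previous: Double, current: Double,
           _ t: IntensityThresholds) -> IntensityEvent? {
    if previous < t.contactDetection, current >= t.contactDetection,
       current < t.lightPress {
        return .contactDetected   // rises above IT0 but stays below ITL
    }
    if previous < t.lightPress, current >= t.lightPress,
       current < t.deepPress {
        return .lightPress        // crosses ITL without reaching ITD
    }
    if previous < t.deepPress, current >= t.deepPress {
        return .deepPress         // rises above ITD
    }
    if previous >= t.contactDetection, current < t.contactDetection {
        return .liftoff           // falls back below IT0
    }
    return nil
}
```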
In some embodiments described herein, one or more operations are performed in response to detecting a gesture that includes a respective press input or in response to detecting a respective press input performed with a respective contact (or contacts), where the respective press input is detected based at least in part on detecting an increase in intensity of the contact (or contacts) above a press input intensity threshold. In some embodiments, the respective operation is performed in response to detecting that the respective contact intensity increases above a press input intensity threshold (e.g., a "downstroke" of the respective press input). In some embodiments, the press input includes an increase in intensity of the respective contact above a press input intensity threshold and a subsequent decrease in intensity of the contact below the press input intensity threshold, and the respective operation is performed in response to detecting a subsequent decrease in intensity of the respective contact below the press input threshold (e.g., an "upstroke" of the respective press input).
In some embodiments, the device employs intensity hysteresis to avoid accidental input sometimes referred to as "jitter," where the device defines or selects a hysteresis intensity threshold having a predefined relationship to the press input intensity threshold (e.g., the hysteresis intensity threshold is X intensity units lower than the press input intensity threshold, or the hysteresis intensity threshold is 75%, 90%, or some reasonable proportion of the press input intensity threshold). Thus, in some embodiments, a press input includes an increase in intensity of a respective contact above a press input intensity threshold and a subsequent decrease in intensity of the contact below a hysteresis intensity threshold corresponding to the press input intensity threshold, and a respective operation is performed in response to detecting a subsequent decrease in intensity of the respective contact below the hysteresis intensity threshold (e.g., an "upstroke" of the respective press input). Similarly, in some embodiments, a press input is detected only when the device detects an increase in intensity of the contact from an intensity at or below the hysteresis intensity threshold to an intensity at or above the press input intensity threshold, and optionally a subsequent decrease in intensity of the contact to an intensity at or below the hysteresis intensity, and a corresponding operation is performed in response to detecting the press input (e.g., an increase in intensity of the contact or a decrease in intensity of the contact, depending on the circumstances).
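A minimal Swift sketch of this hysteresis scheme follows; the 75% ratio is one of the example proportions mentioned above, and the type and method names are assumptions.

```swift
// Press recognition with hysteresis: the release threshold sits below the
// press threshold, so small fluctuations near the boundary ("jitter") do
// not generate spurious press/release pairs.
struct PressDetector {
    var pressThreshold: Double
    var hysteresisRatio = 0.75   // release threshold = 75% of press threshold
    var isPressed = false

    // Feed one intensity sample; returns an event name when state changes.
    mutating func update(intensity: Double) -> String? {
        if !isPressed && intensity >= pressThreshold {
            isPressed = true
            return "downstroke"   // press input recognized
        }
        if isPressed && intensity < pressThreshold * hysteresisRatio {
            isPressed = false
            return "upstroke"     // released only below the hysteresis threshold
        }
        return nil
    }
}
```

With `pressThreshold` set to, say, 0.33, an intensity oscillating between 0.30 and 0.35 produces at most one downstroke rather than a stream of press/release pairs, which is the point of the hysteresis.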
For ease of explanation, a description of an operation performed in response to a press input associated with a press input intensity threshold or in response to a gesture that includes a press input is optionally triggered in response to detection of either: the intensity of the contact increases above the press input intensity threshold, the intensity of the contact increases from an intensity below the hysteresis intensity threshold to an intensity above the press input intensity threshold, the intensity of the contact decreases below the press input intensity threshold, and/or the intensity of the contact decreases below the hysteresis intensity threshold corresponding to the press input intensity threshold. Additionally, in examples where the operation is described as being performed in response to detecting that the intensity of the contact decreases below the press input intensity threshold, the operation is optionally performed in response to detecting that the intensity of the contact decreases below a hysteresis intensity threshold corresponding to and less than the press input intensity threshold.
User interface and associated process
Selecting user interface objects
Many electronic devices have graphical user interfaces that display user interface objects such as thumbnails, icons, folders, and scroll thumbs/handles in drag bars and slider bars. Typically, a user of an electronic device will wish to select and move a user interface object on a display. For example, a user may wish to rearrange desktop items on the desktop of the user interface. As another example, a user may wish to rearrange the order of application programs or "applications" (apps) displayed on the display of a portable multifunction device (such as a smartphone). As another example, a user may wish to move the handle of a volume bar (which is a type of user interface object) to change the volume produced by a media player. Some methods of selecting user interface objects on electronic devices with touch-sensitive surfaces typically require a separate new input (e.g., a mouse click or a tap-and-drag input) to select each user interface object individually. Further, once one user interface object is selected, selection of a second user interface object (e.g., another desktop item) requires a separate input (e.g., a different tap-and-drag gesture with a different contact). A problem with such approaches is that they do not provide a convenient way for a user to select user interface objects in the course of a single continuous contact with the touch-sensitive surface. The embodiments described below provide an effective and efficient method, for an electronic device with a touch-sensitive surface, of selecting multiple objects by determining whether to select a user interface object based on the intensity of contacts with the touch-sensitive surface.
5A-5AA illustrate exemplary user interfaces for selecting user interface objects according to some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in fig. 6A-6E. 5A-5AA include intensity diagrams that illustrate the current intensity of contacts on the touch-sensitive surface relative to a plurality of intensity thresholds that include a predefined intensity threshold (e.g., the light press intensity threshold "ITL"). In some embodiments, operations similar to those described below with reference to the light press intensity threshold ITL are performed with reference to the deep press intensity threshold ITD.
In some embodiments, the device is a portable multifunction device 100, the display is a touch-sensitive display system 112, and the touch-sensitive surface includes tactile output generators 167 on the display (FIG. 1A). For ease of explanation, the embodiments described with reference to FIGS. 5A-5AA and 6A-6E will be discussed with reference to display 450 and independent touch-sensitive surface 451; however, similar operations are optionally performed on a device having touch-sensitive display system 112 in response to detecting the contacts described in FIGS. 5A-5AA on touch-sensitive display system 112 while the user interfaces shown in FIGS. 5A-5AA are displayed on touch-sensitive display system 112. In such embodiments, the focus selector is optionally: a respective contact, a representative point corresponding to the contact (e.g., the centroid of the respective contact or a point associated with the respective contact), or the centroid of two or more contacts detected on touch-sensitive display system 112, in place of cursor 17108, cursor 17132, or cursor 17140.
5A-5E illustrate examples of selecting a user interface object according to some embodiments. The user interface 17100 is displayed on the display 450 and includes user interface objects (e.g., thumbnails 17102, task bar 17104, task bar 17105) and a focus selector (e.g., cursor 17108).
Fig. 5B shows an example of a user interface in which a contact 17110 (e.g., a press input) is detected on the touch-sensitive surface 451. A contact 17110 is detected on the touch-sensitive surface (the contact 17110 has an intensity on the touch-sensitive surface 451, which is sometimes referred to simply as the "intensity of the contact" or "contact intensity"). The contact 17110 in fig. 5B controls the position of the cursor 17108. For example, movement of the contact 17110 on the touch-sensitive surface 451 (shown by the arrow attached to the contact 17110) causes the cursor 17108 to move toward, or in some cases to, the location of the thumbnail 17102-1 on the display 450.
5B-5C further illustrate examples of moving a focus selector (e.g., cursor 17108) over a user interface object. Fig. 5C follows fig. 5B: detecting movement of the contact 17110 on the touch-sensitive surface 451 from the location of the contact 17110 in fig. 5B to the location of the contact 17110 in fig. 5C causes the device to move the cursor 17108 over the thumbnail 17102-1. It should be understood that the location of the thumbnail 17102-1 is optionally defined as a point (e.g., a corner, the centroid, or the geometric center of the thumbnail) or by a non-zero area, such as any location within the boundary of the thumbnail 17102-1 or a hidden hit region of the thumbnail 17102-1. In some embodiments, the hidden hit region is larger than the thumbnail 17102-1. In some embodiments, the hidden hit region is "shifted" relative to the boundary of the thumbnail 17102-1. Thus, in some embodiments, the cursor 17108 is considered to be "over" the thumbnail 17102-1 whenever the cursor 17108 is displayed within the boundary that defines the location of the thumbnail 17102-1. The locations of other user interface objects are optionally defined in a similar manner.
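The "hidden hit region" behavior can be sketched as follows in Swift; the 8-point outset and all names are illustrative assumptions rather than values from the described embodiments.

```swift
struct Rect {
    var x = 0.0, y = 0.0, width = 0.0, height = 0.0

    func contains(_ px: Double, _ py: Double) -> Bool {
        px >= x && px <= x + width && py >= y && py <= y + height
    }

    // Grow the rectangle on all sides, optionally shifting it, to model a
    // hit region that is larger than (and possibly offset from) the bounds.
    func outset(by d: Double, shiftX: Double = 0, shiftY: Double = 0) -> Rect {
        Rect(x: x - d + shiftX, y: y - d + shiftY,
             width: width + 2 * d, height: height + 2 * d)
    }
}

// The cursor counts as "over" the thumbnail when it is inside the hidden
// hit region, not just the visible bounds.
func cursorIsOver(thumbnail bounds: Rect,
                  cursorX: Double, cursorY: Double) -> Bool {
    bounds.outset(by: 8).contains(cursorX, cursorY)
}
```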
5C-5D illustrate an example of selecting a user interface object based on the intensity of the contact 17110 when the focus selector (cursor 17108 in this example) is at the location of the thumbnail 17102-1. In this example, a light press input is detected while the cursor 17108 is over the thumbnail 17102-1 (e.g., the intensity of the contact 17110 increases from an intensity below ITL in fig. 5C to an intensity above ITL in fig. 5D). FIG. 5D shows an example of the device's response to detecting a light press while the cursor 17108 is over the thumbnail 17102-1. In response to detecting the light press input, the device selects the thumbnail 17102-1, as shown in FIG. 5D. In some embodiments, selection of the thumbnail 17102-1 is indicated by displaying a thumbnail representation (e.g., TNR 17116-1 in fig. 5E) at the original location of the thumbnail. In some embodiments, the thumbnail representation is not displayed. In some embodiments, the device changes the appearance of the thumbnail 17102-1 to indicate that it has been selected (e.g., the displayed thumbnail is highlighted). In the example shown in fig. 5E, the thumbnail 17102-1 is now "attached" to the cursor 17108 and will move on the display along with the cursor 17108 in response to subsequently detected movement of the contact 17110, until the thumbnail is dropped.
FIG. 5E shows an example of a response to movement of the contact 17110 after selection of the thumbnail 17102-1. In response to movement of the contact 17110 on the touch-sensitive surface 451 (e.g., from the location of the contact 17110 in fig. 5D to the location of the contact 17110 in fig. 5E), the cursor 17108 is moved and the thumbnail 17102-1 is moved in a corresponding manner (e.g., the thumbnail 17102-1 is moved so as to remain adjacent to the cursor 17108). In some embodiments, the intensity of the contact 17110 need not remain above a predefined intensity threshold after the user interface object 17102-1 is selected. For example, the intensity of the contact 17110 as shown in FIG. 5E is below the light press intensity threshold ITL; however, user interface object 17102-1 remains selected. In some embodiments, the intensity of the contact 17110 remains above the light press intensity threshold, with the same effect.
5A-5B and 5F-5G illustrate examples of forgoing selection of a user interface object according to some embodiments. Fig. 5A and 5B again illustrate aspects of the methods previously described with reference to these figures, e.g., detection of a contact, movement of the contact, and corresponding movement of the focus selector. However, in this example, while the cursor 17108 is positioned over the user interface object 17102-1, the intensity of the contact 17110, as shown in FIG. 5F, remains below the light press intensity threshold ITL. Thus, the device forgoes selecting the thumbnail 17102-1. This response of the device provides an intuitive way for the user to drag the cursor 17108 over the thumbnail 17102-1 (e.g., "mouse over") without selecting the thumbnail, because the user has not increased the intensity of the contact 17110 above ITL. The user may then move the cursor to a different location (e.g., the location of the cursor 17108 in fig. 5G) without dragging the thumbnail 17102-1 along with the cursor.
5A-5B and 5H-5J illustrate examples of embodiments in which selection of a user interface object is based on a change in the intensity of a contact relative to the initial intensity of the contact. The examples shown in these figures differ from the embodiments described above with reference to FIGS. 5A-5G, in which a particular intensity value (e.g., ITL) is used as the intensity threshold for determining whether to select or forgo selecting a user interface object. Fig. 5A and 5B illustrate positioning a cursor 17108 over the thumbnail 17102-1, as previously described. Fig. 5G includes a graph showing the intensity of the contact 17110 versus time during the time period in which the cursor 17108 is positioned over the thumbnail 17102-1. The device selects a reference intensity for comparison, which is labeled I0. It should be understood that I0 is optionally determined in any number of ways. For example, in some embodiments, I0 is the contact intensity at the moment the cursor 17108 is first detected to be "over" the thumbnail 17102-1, where the term "over" should be understood as previously described. In some embodiments, I0 is the average intensity of the contact 17110 from the beginning of the contact. In other alternative embodiments, I0 is a "smart" value, meaning that the value adapts to the particular user (e.g., I0 is higher for users who tend to press harder during normal use). Fig. 5H shows an example of a contact 17110 whose intensity exceeds a predefined threshold for the change in intensity of the contact relative to I0 at a particular time T0. In this example, the predefined threshold for the change in contact intensity relative to I0 is 50%. Thus, in this example, when the contact intensity reaches I0 + 50% of I0 (or, equivalently, 1.5 × I0), the predefined selection criteria are met and the thumbnail 17102-1 is selected. Fig. 5I shows the thumbnail 17102-1 having been selected subsequent to (e.g., at time T0 + Δ) and in response to the predefined selection criteria being met as described with reference to fig. 5H. 5I-5J illustrate moving the cursor 17108 and the thumbnail representation 17116-1 in response to detecting movement of the contact 17110 after selection of the thumbnail 17102-1 (e.g., from the location of the contact 17110 in fig. 5I to the location of the contact 17110 in fig. 5J). These operations are similar to those discussed with reference to fig. 5E.
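A compact way to express this relative criterion is sketched below (Swift; how I0 is chosen is left to the caller, and the 50% figure mirrors the example above; names are assumptions).

```swift
struct RelativeSelectionCriterion {
    var referenceIntensity: Double  // I0: first-over sample, running average,
                                    // or a per-user "smart" value
    var requiredIncrease = 0.5      // select at I0 + 50% of I0

    // Met once the current intensity reaches (1 + 0.5) × I0 = 1.5 × I0.
    func isMet(currentIntensity: Double) -> Bool {
        currentIntensity >= referenceIntensity * (1 + requiredIncrease)
    }
}
```

For example, with `referenceIntensity` 0.2, the criterion is met once the sampled intensity reaches 0.3.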
5A-5B and 5K-5L illustrate example embodiments in which forgoing selection of a user interface object is based on a change in intensity of a contact relative to an initial intensity of the contact, according to some embodiments. In FIG. 5K, a cursor 17108 is positioned over the thumbnail 17102-1, as previously described with reference to FIGS. 5A-5B. The initial contact strength I0 was determined as described with reference to fig. 5H. However, in this example, the contact intensity does not exceed the predefined threshold for a change in contact intensity when the cursor 17108 is at a position above the thumbnail 17102-1. Thus, the device forgoes selecting thumbnail 17102-1. As shown in fig. 5L, detecting a subsequent movement of the contact 17110 causes movement of the cursor (e.g., from the position of the cursor 17108 in fig. 5K to the position of the cursor 17108 in fig. 5L) without a corresponding movement of the thumbnail 17102-1.
5M-5P illustrate selection of a second user interface object (e.g., thumbnail 17102-2) according to some embodiments. After selecting the thumbnail 17102-1 (e.g., as shown in fig. 5D), the device detects movement of the contact 17110 in fig. 5M and, in response, moves the cursor 17108 from its previous location in fig. 5M to a new location over the thumbnail 17102-2 in fig. 5N. In response to detecting a light press input while the cursor 17108 is over the thumbnail 17102-2, as shown in FIGS. 5N-5O, where the intensity of the contact 17110 increases from an intensity below ITL to an intensity above ITL, the device selects the thumbnail 17102-2 without deselecting or dropping the thumbnail 17102-1. In FIG. 5P, the device detects movement of the contact 17110 (e.g., from the location of the contact 17110 in FIG. 5O to the location of the contact 17110 in FIG. 5P), and in response to detecting the movement of the contact 17110 in FIG. 5P, the device moves the cursor 17108 and the two selected thumbnails 17102-1 and 17102-2, as shown in FIG. 5P.
As shown in FIG. 5P, after the thumbnails 17102-1 and 17102-2 have been selected and moved in accordance with the movement of the cursor 17108, the device displays respective residual images 17116-1 and 17116-2 corresponding to the respective thumbnails. In some embodiments, a light press input detected while the cursor 17108 is over one of the residual images (e.g., the intensity of the contact 17110 increases from an intensity below ITL to an intensity above ITL) will cause the device to deselect the corresponding thumbnail. For example, in fig. 5P, if the device detects a light press input after moving the cursor 17108 over the residual image 17116-2, the device will deselect the thumbnail 17102-2. Similarly, in FIG. 5P, if the device detects a light press input after moving the cursor 17108 over the residual image 17116-1, the device will deselect the thumbnail 17102-1.
5M-5N and 5Q-5R illustrate examples of forgoing selection of a second user interface object (e.g., thumbnail 17102-2) while maintaining selection of a first user interface object (e.g., thumbnail 17102-1). After selecting the thumbnail 17102-1 (e.g., as shown in fig. 5D), the device detects movement of the contact 17110 in fig. 5M and, in response, moves the cursor 17108 from its previous position shown in fig. 5M to a new position over the thumbnail 17102-2, as shown in fig. 5N. In fig. 5Q, the intensity of the contact 17110 remains below the light press intensity threshold during the period in which the cursor 17108 is positioned over the thumbnail 17102-2. Thus, the device forgoes selecting the thumbnail 17102-2, and detecting movement of the contact 17110 causes movement of the cursor 17108 along with the thumbnail 17102-1, but not along with the thumbnail 17102-2 or a representation of the thumbnail 17102-2, as shown in FIG. 5R. In some embodiments, as previously described, the selection or non-selection of the second user interface object is based on a change in the intensity of the contact relative to the initial intensity of the contact, rather than on a "fixed" or "absolute" intensity threshold.
In some cases, movement of a particular user interface object is naturally constrained to one dimension. For example, a volume slider (which lets a user graphically adjust the volume of, for example, a speaker integrated into the electronic device 300) and a video drag bar (which lets a user graphically "fast forward" or "rewind" a digital video clip) are each constrained to a single direction, such as an up-down direction or, alternatively, a side-to-side direction. 5S-5AA illustrate several examples of selecting, or forgoing selection of, a user interface object that is constrained to one dimension.
Fig. 5S shows a user interface with a media player 17130. The media player 17130 includes a video drag bar 17134. The video drag bar 17134 includes a handle 17136 that indicates the progress of the video clip. For example, as the video clip advances, the handle 17136 moves to the right. In some embodiments, the user can "click and drag" the handle 17136 to the left (and thus "rewind") or to the right (and thus "fast forward"). However, in some user interfaces, subsequent movement of the cursor 17132 away from the drag bar causes the handle 17136 to be deselected or dropped. In some embodiments described herein, whether to select the handle 17136 is determined based on the intensity of the contact while the cursor 17132 is positioned over the handle 17136. Upon selection of the handle 17136, the handle 17136 remains selected despite movement of the cursor 17132 away from the drag bar, as described below. Further, in some user interfaces, a respective user interface object is selected only if the focus selector is over the respective user interface object when the contact is initially detected (e.g., if a contact is detected on a touch screen display at a location away from a scroll thumb on a slider, the scroll thumb is not selected even if the contact subsequently moves onto the scroll thumb). It would therefore be advantageous to be able to select, and keep selected, a user interface object that is constrained to a predefined path based on the intensity of the contact rather than on the initial position of the focus selector on the touch-sensitive surface.
5S-5T illustrate moving a focus selector (e.g., cursor 17132) on the display 450 in response to detecting movement of the contact 17138 on the touch-sensitive surface 451. As shown in fig. 5S, prior to detecting movement of the contact 17138, the cursor 17132 is at a position away from the handle 17136; the device then moves the cursor 17132 over the handle 17136, as shown in fig. 5T. 5T-5U illustrate detection of a light press input, including detecting an increase in the intensity of the contact 17138 from an intensity below ITL to an intensity above ITL. Fig. 5U-5V illustrate movement of the contact 17138 that corresponds to movement of the cursor 17132 to the new position shown in fig. 5V. Although in fig. 5S-5V the cursor 17132 is allowed to move freely in two dimensions on the display, the handle 17136 is constrained to the allowed direction of motion defined by the drag bar 17134. Thus, the handle 17136 follows the component of the motion of the cursor 17132 that corresponds to the allowed direction of motion along the drag bar 17134.
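The constrained-motion behavior amounts to projecting the focus selector's two-dimensional position onto the bar's axis and clamping the result. A hedged Swift sketch follows (all names assumed; a horizontal bar is used for concreteness).

```swift
// A handle constrained to a horizontal drag bar: only the x component of
// the cursor position is used, and the handle cannot leave the bar.
struct DragBar {
    var minX: Double
    var maxX: Double
    var handleX: Double

    mutating func follow(cursorX: Double, cursorY: Double) {
        // Project the 2D cursor position onto the allowed axis: cursorY is
        // intentionally ignored, since motion perpendicular to the bar has
        // no effect. Then clamp to the bar's extent.
        handleX = min(max(cursorX, minX), maxX)
    }
}
```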
5W-5AA illustrate an example of a user interface object whose selection and movement are constrained to one dimension. In this example, however, the user interface object (here, an icon in an icon bar) is further constrained to visually discrete positions within its one-dimensional range of motion. For example, the icons in the icon bar are ordered from left to right and are uniformly spaced. Thus, the user is not allowed to position an icon arbitrarily within the icon bar, but the positions of two icons can be swapped. In fig. 5W, the device detects a contact 17142 on the touch-sensitive surface 451 and detects movement of the contact (e.g., from the location of the contact 17142 in fig. 5W to the location of the contact 17142 in fig. 5X), and in response to detecting the movement of the contact 17142, the device moves the cursor 17140. In fig. 5W-5X, the device moves the cursor 17140 from a position away from folder A in fig. 5W to a position over folder A in fig. 5X. In FIGS. 5X-5Y, the device detects a light press input, including detecting an increase in the intensity of the contact 17142 from an intensity below ITL to an intensity above ITL, and in response, the device selects folder A. In response to detecting subsequent movement of the contact 17142 shown in FIGS. 5Z-5AA, the device moves the cursor 17140 and reorders the icons in the task bar, as shown in FIGS. 5Z-5AA. For example, in some embodiments, the final location of folder A is determined by projecting the cursor movement onto the allowed direction and then rounding to determine the new location among the discrete locations available to folder A. Upon determining that folder A should be moved (e.g., one place to the right of its current location), the device swaps the locations of folder A and the icon to its right. For example, FIG. 5Z illustrates swapping the locations of folder A and folder B. Similarly, FIG. 5AA illustrates an example of a further swap of folder A with the music icon in response to additional movement of the cursor 17140, where the additional movement of the cursor 17140 includes a component corresponding to movement of the cursor 17140 to the right on the display 450.
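The discrete-slot variant can be sketched by rounding the projected cursor position to a slot index and swapping icons (Swift; the slot geometry and all names are assumptions, not part of the described embodiments).

```swift
// Icons sit in uniformly spaced slots; a dragged icon moves by swapping
// with the icon occupying the slot under the cursor's projection.
func reorder(icons: inout [String], dragged: String,
             cursorX: Double, slotWidth: Double) {
    guard slotWidth > 0,
          let from = icons.firstIndex(of: dragged) else { return }
    let slot = Int((cursorX / slotWidth).rounded(.down)) // slot under cursor
    let to = min(max(slot, 0), icons.count - 1)          // clamp to the bar
    if to != from {
        icons.swapAt(from, to)  // positions are swapped, not freely reassigned
    }
}
```

For instance, with `slotWidth` 100 and `icons = ["Folder A", "Folder B", "Music"]`, dragging "Folder A" to `cursorX` 130 swaps it with "Folder B", mirroring the swap shown in fig. 5Z.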
6A-6E are flow diagrams illustrating a method 17200 of determining whether to select a user interface object or forgo selecting the user interface object, based on the intensity of a contact on the touch-sensitive surface when a focus selector corresponding to that contact passes over the user interface object, in accordance with some embodiments. Method 17200 is performed at an electronic device (e.g., device 300 of FIG. 3 or portable multifunction device 100 of FIG. 1A) with a display and a touch-sensitive surface. In some embodiments, the display is a touch screen display and the touch-sensitive surface is on the display. In some embodiments, the display is separate from the touch-sensitive surface. Some operations in method 17200 are optionally combined, and/or the order of some operations is optionally changed.
Method 17200 provides an intuitive method for selecting user interface objects, as described below. The method reduces the cognitive burden on the user when selecting user interface objects, thereby creating a more efficient human-machine interface. For battery-powered electronic devices, method 17200 enables a user to select user interface objects faster and more efficiently, saving power and increasing the time between battery charges.
The device displays (17202) a first user interface object, e.g., thumbnail 17102-1 as described with reference to fig. 5A, at a first location on the display. The device detects (17204) a contact (e.g., a finger contact) with the touch-sensitive surface, such as contact 17110 described with reference to fig. 5B. The device detects (17206) a first movement of the contact on the touch-sensitive surface that corresponds to movement of the focus selector toward (e.g., to) the first location. The first location is optionally a point, or a region having a non-zero area, such as a hidden hit region of the first user interface object. In response to detecting the first movement (17208) of the contact, the device moves (17210) the focus selector from a position away from the first user interface object to the first location. For example, in fig. 5B, the cursor 17108 begins at an initial position, and in response to detecting movement of the contact 17110 in fig. 5C, the device moves the cursor 17108 to a new position over the thumbnail 17102-1.
The device also determines (17212) the intensity of the contact on the touch-sensitive surface while the focus selector is at the first location. After detecting the first movement of the contact, the device detects (17214) a second movement of the contact on the touch-sensitive surface that corresponds to movement of the focus selector away from the first location. For example, in fig. 5E, 5G, 5J, and 5L, the device detects movement of the contact 17110, and in response to detecting the movement of the contact 17110, the device moves the cursor 17108 away from the location corresponding to the thumbnail 17102-1. In response to detecting the second movement of the contact (17216), the device determines (17218) whether the contact satisfies selection criteria for the first user interface object. The selection criteria for the first user interface object include the contact reaching a predefined intensity threshold while the focus selector is at the first location. 5C-5E and 5H-5J illustrate examples in which the contact meets the selection criteria while the focus selector (e.g., cursor 17108) is positioned over the first user interface object (e.g., thumbnail 17102-1). 5F-5G and 5K-5L illustrate examples in which the contact does not meet the selection criteria while the focus selector (e.g., cursor 17108) is positioned over the first user interface object (e.g., thumbnail 17102-1).
In some embodiments, the predefined intensity threshold is based at least in part on (17220) the magnitude of the intensity of the contact (e.g., the device picks up the first user interface object if the intensity of the contact is above some predefined amount of pressure greater than zero). For example, FIGS. 5C-5E illustrate an example in which the predefined intensity threshold is the light press intensity threshold (e.g., ITL), and the predefined selection criteria are met because the intensity of the contact 17110 is above the light press intensity threshold while the cursor 17108 is positioned over the thumbnail 17102-1. On the other hand, in FIGS. 5F-5G, the predefined selection criteria are not met, because the contact 17110 does not have an intensity above the light press intensity threshold (e.g., ITL) while the cursor 17108 is positioned over the thumbnail 17102-1. In some embodiments, the predefined intensity threshold is based at least in part on (17222) an amount of change in the intensity of the contact (e.g., the first user interface object is picked up if the intensity of the contact increases by 50%). For example, fig. 5H-5J illustrate an example in which the predefined selection criteria are met because the intensity of the contact 17110 increases by more than 50% from the reference intensity I0 while the cursor 17108 is positioned over the thumbnail 17102-1. On the other hand, in fig. 5K-5L, the predefined selection criteria are not met, because the intensity of the contact 17110 has not increased by more than 50% from the reference intensity I0 while the cursor 17108 is positioned over the thumbnail 17102-1.
In accordance with a determination that the contact satisfies (17224-yes) the selection criteria for the first user interface object, the device moves the focus selector and the first user interface object, as described in more detail below. In contrast, in accordance with a determination (17224-no) that the contact does not satisfy the selection criteria for the first user interface object, the device moves (17226) the focus selector in accordance with the second movement of the contact without moving the first user interface object (e.g., the device forgoes selecting/picking up the first user interface object). For example, FIG. 5F shows the cursor 17108 at a respective location over the thumbnail 17102-1, while FIG. 5G shows movement of the contact 17110 along with corresponding movement of the focus selector to a new location away from the thumbnail 17102-1. However, because the selection criteria for the thumbnail 17102-1 had not been met by the time the movement of the contact 17110 was detected, the thumbnail 17102-1 remains at the first location in FIG. 5G.
In accordance with a determination (17224-yes) that the contact satisfies the selection criteria for the first user interface object, the device moves (17228) the focus selector and the first user interface object away from the first location in accordance with the second movement of the contact (e.g., the device selects/picks up the first user interface object, as shown in figs. 5E and 5J, where the thumbnail 17102-1 moves in accordance with the movement of the cursor 17108).
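Steps (17224)-(17228) reduce to a single branch on whether the intensity criterion was met while the selector was over the object. A schematic Swift sketch follows (names and one-dimensional positions are assumptions that echo the method text rather than any real API).

```swift
// One-dimensional positions keep the sketch short; the idea generalizes.
struct DragDecision {
    var selectorX: Double
    var objectX: Double
    var metCriteriaOverObject: Bool  // reached the threshold at the object

    // Apply the "second movement" of the contact (17224-17228).
    mutating func applySecondMovement(dx: Double) {
        selectorX += dx
        if metCriteriaOverObject {
            objectX += dx            // 17228: object moves with the selector
        }
        // 17226: otherwise the selector moves alone (selection is forgone)
    }
}
```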
In some embodiments, movement of the first user interface object (such as a scroll thumb or a handle in a drag bar or slider) is constrained (17230) to a predefined path in the user interface, and moving the first user interface object includes moving the first user interface object along the predefined path according to a motion component of the focus selector that corresponds to a direction of allowed motion along the predefined path. Examples of these embodiments are shown in the user interfaces shown in FIGS. 5S-5 AA. Alternatively, in some embodiments, the first user interface object has a two-dimensional range of motion (17232), and moving the first user interface object includes moving the first user interface object to a position on the display at or adjacent to the focus selector. For example, the first user interface object is a document icon that is laterally movable in a two-dimensional plane on the display and is not constrained to a predefined path. Similar examples are shown in the user interfaces shown in fig. 5A-5R. In some embodiments, while displaying the first user interface object (e.g., thumbnail 17102-2 in fig. 5A) on the display, the device displays (17234) the second user interface object at a second location on the display. While continuing to detect (17236) the contact and move the first user interface object in accordance with the movement of the focus selector, after detecting the second movement of the contact, the device detects (17238) a third movement of the contact on the touch-sensitive surface that corresponds to movement of the focus selector toward (e.g., to) the second position. In response to detecting the third movement of the contact (17240), the device moves (17242) the focus selector from a position away from the second user interface object (e.g., from the first position or a position adjacent to the first position) to the second position. In some embodiments, the second location is a point or a region having a non-zero area, such as a hidden hit region of the second user interface object. For example, in fig. 5N and 5Q, the device detects movement of the contact 17110 and, in response to detecting movement of the contact 17110 downward on the touch-sensitive surface 451, the device moves the cursor 17108 over a second user interface object (e.g., thumbnail 17102-2).
In response to detecting the third movement of the contact, the device also determines (17244) an intensity of the contact on the touch-sensitive surface while the focus selector is in the second position. After detecting the third movement of the contact, the device detects (17246) a fourth movement of the contact on the touch-sensitive surface that corresponds to movement of the focus selector away from the second position. For example, in fig. 5P and 5R, the device detects movement of the contact 17110, and in response to detecting movement of the contact 17110 to the left on the touch-sensitive surface 451, the device moves the cursor 17108 away from the position occupied by the second user interface object (e.g., thumbnail 17102-2) before detecting the third movement or the fourth movement.
In some embodiments, after detecting the first movement and before detecting the fourth movement, the device detects (17247) a decrease in the intensity of the contact below the predefined intensity threshold, and after detecting the decrease in the intensity of the contact below the predefined intensity threshold, the device continues to move the first user interface object in accordance with the movement of the focus selector (e.g., in fig. 5Q, the intensity of the contact 17110 is below ITL). For example, after "picking up" the first user interface object, the user can reduce the intensity of the contact without "dropping" the first user interface object, so that the user is able to "pick up" additional user interface objects (e.g., the second user interface object) by again increasing the intensity of the contact above the predefined intensity threshold while the focus selector is over each additional user interface object. If the user has picked up several user interface objects (e.g., the first user interface object and the second user interface object), the user can likewise decrease the intensity of the contact without "dropping" any of the user interface objects, and "pick up" further user interface objects in the same manner.
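The multi-object behavior described here (selection persists when intensity falls, and each further press over an object adds it to the dragged set) can be sketched as a small state machine in Swift; all names are assumptions.

```swift
struct MultiDrag {
    var picked: [String] = []
    var wasAboveThreshold = false

    // Called on each intensity sample. A rising edge of intensity across
    // the threshold while over an unpicked object picks that object up;
    // falling below the threshold drops nothing, so selection persists.
    mutating func sample(intensity: Double, threshold: Double,
                         objectUnderSelector: String?) {
        let isAbove = intensity >= threshold
        if isAbove, !wasAboveThreshold,
           let object = objectUnderSelector,
           !picked.contains(object) {
            picked.append(object)
        }
        wasAboveThreshold = isAbove
    }

    // All picked objects follow the focus selector together.
    func move(dx: Double, positions: inout [String: Double]) {
        for object in picked {
            positions[object, default: 0] += dx
        }
    }
}
```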
In response to detecting (17248) the fourth movement of the contact, the device determines whether the contact satisfies selection criteria for the second user interface object. The selection criteria for the second user interface object include the contact reaching a predefined intensity threshold while the focus selector is in the second position.
In accordance with a determination (17252-no) that the contact does not satisfy the selection criteria for the second user interface object, the device moves (17254) the focus selector and the first user interface object without moving the second user interface object in accordance with the fourth movement of the contact (e.g., the device forgoes selecting/does not pick up the second user interface object, as shown in fig. 5R, where the thumbnail 17102-2 does not move in accordance with the movement of the cursor 17108). In some embodiments, when the first user interface object has been selected and the device detects an increase in the intensity of the contact while the focus selector is over a second object, the second object is picked up in addition to the first object. Thus, a user can select and move multiple objects with a single contact by moving the focus selector over multiple different user interface objects and performing a press gesture, which includes increasing the intensity of the contact above the predefined intensity threshold, while the focus selector is over each of the user interface objects.
In accordance with a determination (17252-yes) that the contact satisfies selection criteria for the second user interface object, the device moves (17256) the focus selector, the first user interface object, and the second user interface object away from the second location in accordance with a fourth movement of the contact (e.g., the device selects/picks up the second user interface object and moves the second user interface object along with the first user interface object, as shown in fig. 5P, where the thumbnail 17102-2 moves in accordance with movement of the cursor 17108). In some embodiments, after detecting the fourth movement of the contact, the representation of the first user interface object and the representation of the second user interface object are displayed (17258) for movement on the display in accordance with the movement of the focus selector (e.g., as shown in fig. 5P). In some embodiments, after detecting the fourth movement of the contact, representations of a set of objects corresponding to the first user interface object and the second user interface object are displayed (17260) for movement on the display in accordance with movement of the focus selector.
It should be understood that the particular order of operations in fig. 6A-6E that has been described is merely exemplary and is not intended to suggest that the order described is the only order in which the operations may be performed. Various ways of reordering the operations described herein will occur to those of ordinary skill in the art. In addition, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., those listed in paragraph [0043]) also apply in a similar manner to the method 17200 described above with respect to fig. 6A-6E. For example, the contacts, user interface objects, intensity thresholds, and focus selectors described above with reference to method 17200 optionally have one or more characteristics of the contacts, user interface objects, intensity thresholds, and focus selectors described herein with reference to other methods described herein (e.g., those methods listed in paragraph [0043]). For the sake of brevity, these details are not repeated here.
Fig. 7 illustrates a functional block diagram of an electronic device 17300 configured in accordance with the principles of various described embodiments, in accordance with some embodiments. The functional blocks of the device are optionally implemented by hardware, software, or a combination of hardware and software which embody the principles of the various described embodiments. Those skilled in the art will understand that the functional blocks described in fig. 7 are optionally combined or separated into sub-blocks to implement the principles of the various described embodiments. Thus, the description herein optionally supports any possible combination or separation or further definition of the functional blocks described herein.
As shown in fig. 7, electronic device 17300 includes display unit 17302 configured to display a user interface including a first user interface object at a first location on the display unit; a touch-sensitive surface unit 17304 configured to detect contacts; one or more sensor units 17306 configured to detect intensity of contacts with the touch-sensitive surface unit 17304; and a processing unit 17308 coupled to the display unit 17302, the touch-sensitive surface unit 17304, and the one or more sensor units 17306. In some embodiments, the processing unit 17308 includes a display enabling unit 17310, a detecting unit 17312, a determining unit 17313, a selecting unit 17314, and a moving unit 17316.
The processing unit 17308 is configured to detect a first movement of the contact on the touch-sensitive surface unit 17304 (e.g., with the detection unit 17312) that corresponds to movement of the focus selector toward the first position. In response to detecting the first movement of the contact, the processing unit 17308 is configured to move (e.g., with the movement unit 17316) the focus selector from a position away from the first user interface object to the first position, and determine (e.g., with the determination unit 17313) an intensity of the contact on the touch-sensitive surface unit 17304 while the focus selector is in the first position. After detecting the first movement of the contact, the processing unit 17308 is configured to detect a second movement of the contact on the touch-sensitive surface unit 17304 (e.g., with the detection unit 17312) that corresponds to movement of the focus selector away from the first position. In response to detecting the second movement of the contact, in accordance with a determination that the contact satisfies selection criteria for the first user interface object, the processing unit 17308 is configured to move the focus selector and the first user interface object away from the first location in accordance with the second movement of the contact (e.g., with the movement unit 17316), wherein the selection criteria for the first user interface object includes the contact reaching a predefined intensity threshold while the focus selector is in the first location. In response to detecting the second movement of the contact, in accordance with a determination that the contact does not satisfy the selection criteria for the first user interface object, the processing unit 17308 is configured to move (e.g., with the movement unit 17316) the focus selector in accordance with the second movement of the contact without moving the first user interface object.
In some embodiments, the movement of the first user interface object is constrained to a predefined path in the user interface, and moving the first user interface object includes moving the first user interface object along the predefined path according to a motion component of the focus selector that corresponds to an allowed direction of motion along the predefined path (e.g., with the movement unit 17316).
In some embodiments, the first user interface object has a two-dimensional range of motion, and moving the first user interface object includes moving (e.g., with the movement unit 17316) the first user interface object to a location on the display unit at or adjacent to the focus selector.
In some embodiments, the predefined intensity threshold is based at least in part on an amount of change in intensity of the contact.
In some embodiments, the predefined intensity threshold is based at least in part on a magnitude of the intensity of the contact.
In some embodiments, while the first user interface object is displayed on the display unit 17302, a second user interface object is displayed at a second location on the display unit 17302, and the processing unit 17308 is configured to, while continuing to detect the contact and moving the first user interface object in accordance with the movement of the focus selector, and after detecting the second movement of the contact, detect a third movement of the contact on the touch-sensitive surface unit 17304 (e.g., with the detection unit 17312) that corresponds to movement of the focus selector toward (e.g., to) the second location. In response to detecting the third movement of the contact, the processing unit 17308 is configured to move (e.g., with the movement unit 17316) the focus selector from a position away from the second user interface object to the second location, and determine (e.g., with the determination unit 17313) the intensity of the contact on the touch-sensitive surface unit 17304 while the focus selector is at the second location. After detecting the third movement of the contact, the processing unit 17308 is configured to detect a fourth movement of the contact on the touch-sensitive surface unit 17304 (e.g., with the detection unit 17312) that corresponds to movement of the focus selector away from the second location. In response to detecting the fourth movement of the contact, in accordance with a determination that the contact satisfies the selection criteria for the second user interface object, the processing unit 17308 is configured to move the focus selector, the first user interface object, and the second user interface object away from the second location in accordance with the fourth movement of the contact (e.g., with the movement unit 17316), wherein the selection criteria for the second user interface object include the contact reaching a predefined intensity threshold while the focus selector is at the second location. In response to detecting the fourth movement of the contact, in accordance with a determination that the contact does not satisfy the selection criteria for the second user interface object, the processing unit 17308 is configured to move (e.g., with the movement unit 17316) the focus selector and the first user interface object without moving the second user interface object in accordance with the fourth movement of the contact.
In some embodiments, the processing unit 17308 is further configured to, after detecting the fourth movement of the contact, display the representation of the first user interface object and the representation of the second user interface object as moving on the display unit (e.g., with the display enabling unit 17310) in accordance with the movement of the focus selector.
In some embodiments, the processing unit 17308 is further configured to, after detecting the fourth movement of the contact, display (e.g., with the display enabling unit 17310) a representation of a set of objects corresponding to the first user interface object and the second user interface object as moving on the display unit in accordance with the movement of the focus selector.
In some embodiments, the processing unit is further configured to, after detecting the first movement and before detecting the fourth movement, detect that the intensity of the contact decreases below (e.g., with the detection unit 17312) a predefined intensity threshold, and, after detecting that the intensity of the contact decreases below the predefined intensity threshold, continue to move (e.g., with the movement unit 17316) the first user interface object in accordance with the movement of the focus selector.
The operations in the above-described information processing methods are optionally implemented by running one or more functional modules in an information processing apparatus, such as a general-purpose processor (e.g., as described above with respect to fig. 1A and 3) or an application-specific chip.
The operations described above with reference to fig. 6A-6E are optionally performed by components depicted in fig. 1A-1B or fig. 7. For example, detection operations 17204 and 17212, movement operations 17210, 17226, and 17228, and determination operation 17212 are optionally implemented by event sorter 170, event recognizer 180, and event handler 190. Event monitor 171 in event sorter 170 detects a contact on touch-sensitive display 112, and event dispatcher module 174 delivers the event information to application 136-1. A respective event recognizer 180 of application 136-1 compares the event information to respective event definitions 186 and determines whether a first contact at a first location on the touch-sensitive surface corresponds to a predefined event or sub-event, such as selection of an object on the user interface. When a respective predefined event or sub-event is detected, event recognizer 180 activates an event handler 190 associated with the detection of the event or sub-event. Event handler 190 optionally uses or calls data updater 176 or object updater 177 to update the application internal state 192. In some embodiments, event handler 190 accesses a respective GUI updater 178 to update what is displayed by the application. Similarly, it will be clear to those skilled in the art how other processes can be implemented based on the components depicted in FIGS. 1A-1B.
Selecting user interface objects
Many electronic devices have graphical user interfaces that display user interface objects such as thumbnails, icons, folders, and thumbs/handles in scrubber and slider bars. Such user interface items typically represent a file or a directory (or subdirectory) corresponding to a collection of files. Frequently, a user of an electronic device will want to select and move user interface objects on the display. For example, a user may want to rearrange desktop items in a desktop environment/window system. As another example, a user may want to select several user interface objects and add the selected user interface objects to a collection of user interface objects. Such operations occur, for example, while using a desktop environment (e.g., adding files to a folder), or between a desktop environment and an application (e.g., adding files from a desktop window to a playlist in a media player), or within an application (e.g., selecting and dragging user interface items within a media player). As yet another example, a user may want to rearrange the order of thumbnails corresponding to applications or "apps" displayed on the display of a portable multifunction device.
Some methods of selecting user interface objects on an electronic device with a touch-sensitive surface typically require a separate selection operation (e.g., activating a mouse button or placing a contact on the touch-sensitive surface) to be performed on each of a plurality of user interface objects in order to select the objects individually. Typically, in such embodiments, in order to perform a subsequent selection operation, the user first stops selecting the previously selected object (e.g., the previously selected user interface object is deselected when the mouse button is deactivated or the contact is lifted off of the touch-sensitive surface). Thus, in these examples, selection of a second user interface object (e.g., another desktop item) requires a separate selection operation, and only one user interface object is selected at a time. Alternatively, some methods enable a user to select multiple user interface objects by selecting a region that contains multiple user interface objects; however, such methods do not enable the user to select a particular subset of user interface objects from a set of user interface objects located in close proximity to each other. The embodiments described below provide a more efficient and intuitive method, implemented on an electronic device with a touch-sensitive surface, for determining whether to select a user interface object or forgo selecting a user interface object based on the intensity of contacts with the touch-sensitive surface. In some circumstances, multiple user interface objects are selected using a single continuous contact, or alternatively, multiple distinct contacts, on the touch-sensitive surface.
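As an editorial illustration of the intensity-based decision just described (this sketch is not part of the disclosed embodiments; the type names, the threshold value, and the use of a maximum press intensity are all assumptions), the select-or-forgo choice can be expressed in a few lines of Swift:

    // Minimal sketch (illustrative only): a press selects the object under
    // the focus selector only if it reaches the selection threshold.
    let selectionThreshold = 0.5        // stands in for "ITL" (assumed value)

    struct PressInput {
        let maxIntensity: Double
        let objectUnderFocusSelector: String?   // nil if over empty space
    }

    func objectSelected(by press: PressInput) -> String? {
        guard let object = press.objectUnderFocusSelector,
              press.maxIntensity >= selectionThreshold else { return nil }
        return object
    }

    // A press at intensity 0.6 over "thumbnail-1" selects it; 0.3 is forgone.
    print(objectSelected(by: PressInput(maxIntensity: 0.6,
                                        objectUnderFocusSelector: "thumbnail-1")) ?? "none")
    print(objectSelected(by: PressInput(maxIntensity: 0.3,
                                        objectUnderFocusSelector: "thumbnail-1")) ?? "none")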
FIGS. 8A-8DD illustrate exemplary user interfaces for selecting user interface objects in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 9A-9E. For the figures that illustrate a contact with the touch-sensitive surface, an intensity diagram is included that shows the current intensity of the contact on the touch-sensitive surface relative to a plurality of intensity thresholds, including an alternative mode intensity threshold (e.g., "ITD") and a selection intensity threshold (e.g., "ITL").
In some embodiments, the device is portable multifunction device 100, the display is touch-sensitive display system 112, and the touch-sensitive surface includes tactile output generators 167 on the display (FIG. 1A). For ease of explanation, the embodiments described with reference to FIGS. 8A-8DD and 9A-9E will be discussed with reference to display 450 and a separate touch-sensitive surface 451; however, analogous operations are, optionally, performed on a device with touch-sensitive display system 112 in response to detecting the contacts described in FIGS. 8A-8DD on touch-sensitive display system 112 while the user interfaces shown in FIGS. 8A-8DD are displayed on touch-sensitive display system 112. In such embodiments, the focus selector is, optionally: a respective contact, a representative point corresponding to a contact (e.g., a centroid of a respective contact or a point associated with a respective contact), or a centroid of two or more contacts detected on touch-sensitive display system 112, in place of cursor 17408.
FIG. 8A illustrates an exemplary user interface for selecting a user interface object according to some embodiments. Fig. 8A illustrates an exemplary user interface 17400. User interface 17400 is displayed on display 450 and includes user interface objects (e.g., thumbnails 17402, folders 17404) and a focus selector (e.g., cursor 17408). FIG. 8B shows the device detecting movement of contact 17406 on touch-sensitive surface 451 and in response moving cursor 17408 on the display from a location in FIG. 8A away from thumbnail 17402-1 to a location in FIG. 8B above thumbnail 17402-1.
FIGS. 8B-8F illustrate exemplary user interfaces for selecting a user interface object in a first selection mode in accordance with some embodiments. At the beginning of FIG. 8B, user interface 17400 is in a first selection mode (sometimes called a "single object selection mode"), which has the properties described below. Further, in FIGS. 8B-8F, contact 17406 represents a continuously detected (e.g., unbroken) contact with touch-sensitive surface 451 (e.g., the contact is continuously detected between the beginning of the first press input and the end of the second press input).
In some embodiments, contact 17406 controls the position of cursor 17408. For example, movement of contact 17406 on touch-sensitive surface 451 (shown by the arrow attached to contact 17406 in FIG. 8B) causes cursor 17408 to move correspondingly toward, or in some cases to, the location of thumbnail 17402-1. It should be appreciated that the location of thumbnail 17402-1 is optionally defined as a point (e.g., a corner or the geometric centroid of the thumbnail) or by a non-zero area, such as any location within the boundary of thumbnail 17402-1 or a hidden hit region of thumbnail 17402-1. In some embodiments, the hidden hit region is larger than thumbnail 17402-1. In some implementations, the hidden hit region is offset relative to the boundary of thumbnail 17402-1. Thus, in some embodiments, cursor 17408 is considered to be "over" thumbnail 17402-1 whenever cursor 17408 is displayed within the boundary that defines the location of thumbnail 17402-1. Likewise, the locations of other user interface objects are similarly defined. Regardless of how the location of a user interface object is defined, a press input detected while the focus selector is over the user interface object is sometimes referred to as a "press input on the respective user interface object," or the like.
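By way of illustration only (the Rect type and the dimensions below are editorial assumptions, not part of the embodiments), a hidden hit region that is larger than the visible thumbnail can be sketched in Swift as follows:

    // Minimal sketch: the cursor counts as "over" a thumbnail whenever it
    // falls inside a hidden hit region grown outward from (and possibly
    // offset relative to) the thumbnail's visible bounds.
    struct Rect {
        var x, y, width, height: Double
        // Negative insets grow the rectangle outward.
        func insetBy(dx: Double, dy: Double) -> Rect {
            Rect(x: x + dx, y: y + dy, width: width - 2 * dx, height: height - 2 * dy)
        }
        func contains(_ px: Double, _ py: Double) -> Bool {
            px >= x && px <= x + width && py >= y && py <= y + height
        }
    }

    let thumbnailBounds = Rect(x: 100, y: 100, width: 64, height: 64)
    let hiddenHitRegion = thumbnailBounds.insetBy(dx: -12, dy: -12)

    // A point just outside the visible thumbnail still "hits" it.
    print(thumbnailBounds.contains(96, 98))   // false
    print(hiddenHitRegion.contains(96, 98))   // true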
FIG. 8C illustrates detection of a "light press input," e.g., a press input that corresponds to an increase in intensity of contact 17406 above the selection intensity threshold (e.g., "ITL") but below the alternative mode intensity threshold (e.g., "ITD"). In response to detecting the light press while cursor 17408 is over thumbnail 17402-1, thumbnail 17402-1 is selected, as shown in FIG. 8D. As shown in FIGS. 8D-8E, when the device is in the single object selection mode and the intensity of contact 17406 decreases below ITL, thumbnail 17402-1 is deselected or "dropped." Because thumbnail 17402-1 is no longer selected in FIG. 8E, subsequent movement of contact 17406 on touch-sensitive surface 451 moves cursor 17408 without moving thumbnail 17402-1, as shown in FIG. 8F.
FIGS. 8G-8O illustrate exemplary user interfaces for selecting user interface objects in an alternative mode (sometimes called a "select multiple objects" mode) in accordance with some embodiments. For ease of explanation, the embodiments in FIGS. 8G-8O are described with reference to a continuous contact 17410. At the beginning of FIG. 8G, user interface 17400 is in the single object selection mode, as described above. FIGS. 8G and 8H are similar to FIGS. 8B and 8C, respectively, with the device moving cursor 17408 on the display from a location in FIG. 8A away from thumbnail 17402-1 to a location in FIG. 8G over thumbnail 17402-1 in response to detecting movement of contact 17410 on touch-sensitive surface 451, except that the press input in FIGS. 8G-8H corresponds to an increase in intensity of contact 17410 above the alternative mode intensity threshold (e.g., "ITD"). As a result, user interface 17400 enters the alternative mode (e.g., the "select multiple objects" mode). In some embodiments, while user interface 17400 is in the alternative mode, a subsequent decrease in the intensity of contact 17410 below ITL does not cause thumbnail 17402-1 to be dropped. For example, in FIG. 8I, thumbnail 17402-1 continues to be selected even though the intensity of contact 17410 has decreased below ITL. Subsequent movement of contact 17410 on touch-sensitive surface 451 results in the movement of cursor 17408 shown in FIG. 8J, along with movement of thumbnail 17402-1. FIG. 8J also shows an example of the device displaying a residual image 17416 (e.g., residual image 17416-1) of thumbnail 17402-1 on display 450.
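As an editorial aside (the names below are assumptions; this is a sketch, not the disclosed implementation), the difference between the two modes when intensity falls below the selection threshold can be captured in a small Swift state type:

    // Minimal sketch: how a decrease in contact intensity below ITL is
    // treated differently in the two selection modes.
    enum SelectionMode { case singleObject, multipleObjects }

    struct SelectionTracker {
        var mode: SelectionMode
        var selected: Set<String>

        // Called when the contact's intensity falls below ITL while the
        // focus selector is over `object`.
        mutating func intensityFellBelowSelectionThreshold(over object: String) {
            switch mode {
            case .singleObject:
                selected.remove(object)   // FIGS. 8D-8E: the object is dropped
            case .multipleObjects:
                break                     // FIG. 8I: the selection is maintained
            }
        }
    }

    var tracker = SelectionTracker(mode: .multipleObjects, selected: ["17402-1"])
    tracker.intensityFellBelowSelectionThreshold(over: "17402-1")
    print(tracker.selected)   // still ["17402-1"] in the alternative mode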
FIGS. 8K-8L illustrate the device detecting subsequent movement of contact 17410 on touch-sensitive surface 451 and, in response, moving cursor 17408 from a position away from thumbnail 17402-2, as shown in FIG. 8K, to a position over thumbnail 17402-2, as shown in FIG. 8L. In FIG. 8M, while cursor 17408 is over thumbnail 17402-2, the device detects an increase in intensity of contact 17410 above the selection intensity threshold (e.g., ITL) and, in response, the device selects thumbnail 17402-2 without dropping thumbnail 17402-1. In some embodiments, the first press input and the second press input are made by a continuously detected (unbroken) contact on the touch-sensitive surface.
After selecting thumbnail 17402-2, while the device is in the "select multiple objects" mode, the device maintains selection of thumbnails 17402-1 and 17402-2 even when the intensity of contact 17410 decreases below the selection intensity threshold (e.g., "ITL"). In response to detecting the movement of contact 17410 in FIG. 8O, the device moves the selected thumbnails away from the position previously occupied by thumbnail 17402-2, as shown in FIG. 8O. FIGS. 8N-8O are similar to FIGS. 8I-8J, except that cursor 17408 moves along with both thumbnail 17402-1 and thumbnail 17402-2, because both thumbnails are selected. Also shown in FIG. 8O is thumbnail residual image 17416-2, which corresponds to thumbnail 17402-2. The residual images 17416 have additional properties. In some embodiments, after a user interface object is selected and a residual image of the user interface object is displayed, a press input is detected on the residual image (e.g., while cursor 17408 is over the respective residual image, the intensity of contact 17410 increases from an intensity below ITL to an intensity above ITL). In some embodiments, in response to detecting the press input on the respective residual image, the user interface object corresponding to the respective residual image is deselected. For example, while thumbnails 17402-1 and 17402-2 are selected, a press input on residual image 17416-1 (e.g., an increase in intensity of contact 17410 from an intensity below ITL to an intensity above ITL while cursor 17408 is over residual image 17416-1) causes the device to deselect thumbnail 17402-1 and maintain selection of thumbnail 17402-2. In some embodiments, an animation of thumbnail 17402-1 "flying back" and replacing residual image 17416-1 is performed.
In some embodiments, in response to detecting liftoff of the continuous contact (e.g., contact 17410 of FIGS. 8G-8O), the previously selected user interface objects (e.g., thumbnails 17402-1 and 17402-2 of FIG. 8O) are dropped in the user interface, as shown in FIG. 8P. In some embodiments, when the previously selected user interface objects are dropped, the corresponding residual images (e.g., thumbnail residual images 17416) cease to be displayed, and the user interface objects are displayed at a location adjacent to the location of cursor 17408 when liftoff of contact 17410 is detected, as shown in FIG. 8P.
FIGS. 8G-8N and 8Q-8T illustrate selection of a third user interface object that represents a collection of user interface objects (e.g., a folder, subdirectory, photo album, playlist, etc.). Following FIGS. 8G-8N, which have already been described, FIG. 8Q shows movement of contact 17410 on touch-sensitive surface 451 after thumbnails 17402-1 and 17402-2 have been selected, where the movement corresponds to movement of cursor 17408 to a position over documents folder 17404. In response to detecting a light press input (e.g., an increase in intensity of contact 17410 from an intensity below ITL to an intensity between ITL and ITD, as shown in FIGS. 8Q-8R), the device selects folder 17404 without deselecting either of the thumbnails, as shown in FIG. 8R. FIGS. 8S-8T illustrate subsequent movement of the cursor, along with the selected thumbnails and the selected folder, in response to detecting movement of contact 17410 on touch-sensitive surface 451.
FIGS. 8U-8X also follow FIGS. 8G-8N, but in this case the detected press input is a deep press input (e.g., an increase in intensity of contact 17410 from an intensity below ITD to an intensity above ITD, as shown in FIGS. 8U-8V). As a result of detecting a deep press input rather than a light press input, the device displays a user interface (e.g., an opened folder) with an area for adding thumbnails 17402-1 and 17402-2 to the collection of thumbnails. In FIG. 8W, thumbnails 17402-1 and 17402-2 are deselected in response to detecting liftoff of contact 17410, and the two thumbnails are added to the contents of the opened "documents" folder, because cursor 17408 was over the representation of documents folder 17404 when liftoff of contact 17410 was detected. The "documents" folder already contains other thumbnails, for example, thumbnails 17402-4 and 17402-5. FIGS. 8W-8X show an animation of thumbnails 17402-1 and 17402-2 moving from a location adjacent to cursor 17408 to a location in the arrangement of thumbnails 17402 within the representation of the documents folder. The exemplary user interfaces in FIGS. 8Q-8X thus illustrate how distinguishing between press inputs that reach different intensity thresholds (e.g., ITL and ITD) provides an intuitive user interface that enables the user either to add a folder to a selection or to open the folder in order to add selected items to the folder.
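For illustration (the threshold values below are assumptions; only their ordering matters, and none of this code is part of the embodiments), the light-press/deep-press distinction over a folder can be sketched as a simple branch in Swift:

    // Minimal sketch: distinguishing a light press from a deep press over
    // a folder icon, per FIGS. 8Q-8X.
    let itL = 0.5, itD = 0.8

    enum FolderAction { case addFolderToSelection, openFolderForDrop, none }

    func folderAction(forMaxIntensity intensity: Double) -> FolderAction {
        if intensity >= itD { return .openFolderForDrop }    // deep press
        if intensity >= itL { return .addFolderToSelection } // light press
        return .none
    }

    print(folderAction(forMaxIntensity: 0.6))   // addFolderToSelection
    print(folderAction(forMaxIntensity: 0.9))   // openFolderForDrop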
FIGS. 8Y-8DD illustrate an embodiment of a user interface 17420 in which a plurality of distinct contacts are used to select user interface objects. For example, detection of a first contact ceases before a second contact is detected. In some embodiments, the first contact and the second contact are made by the same finger at different times. In some embodiments (e.g., as shown in FIGS. 8Y-8DD), the first press input corresponds to an increase in intensity of a contact above an alternative mode intensity threshold (e.g., "ITD") that is higher than the selection intensity threshold (e.g., "ITL"), and, in response to detecting the first press input, the device enters a "select multiple objects" mode in which successive press inputs with intensities at or above the selection intensity threshold (e.g., "ITL") cause the device to simultaneously select multiple user interface objects corresponding to the successive press inputs.
FIG. 8Y shows a plurality of user interface objects (e.g., thumbnails 17418) representing media objects (e.g., photos) in a media player that is displaying, for example, an album (e.g., a "family album"). A user of such a media player may want to select several thumbnails at a time, for example, to add the selected photos to a different album and/or to delete unwanted photos. FIG. 8Z shows contact 17412, which corresponds to a press input while cursor 17411 is over thumbnail 17418-1. The press input corresponds to an increase in intensity of contact 17412 above the alternative mode intensity threshold (e.g., "ITD"). In response to detecting the press input, the device enters the alternative mode for selecting user interface objects and selects user interface object 17418-1.
FIG. 8AA illustrates liftoff of contact 17412 (e.g., contact 17412 is no longer detected on touch-sensitive surface 451). In FIG. 8AA, thumbnail 17418-1 remains selected after liftoff is detected, making it possible to select additional user interface items. FIG. 8AA also shows the device detecting movement of contact 17414 and, in response, moving cursor 17411 from the position over thumbnail 17418-1 in FIG. 8Z to a position over thumbnail 17418-7 in FIG. 8AA. In FIGS. 8AA-8BB, while cursor 17411 is over thumbnail 17418-7, the device detects a press input corresponding to an increase in intensity of contact 17414 from an intensity below the selection intensity threshold (e.g., "ITL") to an intensity above the selection intensity threshold, as shown in FIG. 8BB. In response to detecting the press input in FIG. 8BB, the device selects thumbnail 17418-7 in addition to thumbnail 17418-1.
FIG. 8CC illustrates detection of liftoff of contact 17414. In FIG. 8CC, both thumbnail 17418-1 and thumbnail 17418-7 remain selected in the illustrated embodiment, even though liftoff of contact 17414 has been detected. FIGS. 8CC-8DD illustrate detection of a press input that does not correspond to a selectable user interface object (e.g., while the focus selector is at a location on the display that is outside the plurality of user interface objects). In FIG. 8CC, the device detects movement of contact 17416 on touch-sensitive surface 451 and, in response, moves cursor 17411 to a location on the display that does not correspond to any of the plurality of thumbnails 17418. While cursor 17411 is at a location that does not correspond to any of the plurality of thumbnails 17418, the device detects a press input corresponding to contact 17416 (e.g., an increase in intensity of contact 17416 from an intensity below ITL to an intensity between ITL and ITD), and, in response to detecting the press input in FIG. 8DD, the device cancels the selection and exits the "select multiple objects" (or "alternative") mode.
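As a further editorial sketch (names are assumptions; the single-object path and intensity tracking are omitted), the "press over empty space cancels the selection" behavior of FIG. 8DD can be modeled as follows:

    // Minimal sketch: a press at a location that corresponds to no
    // selectable object clears the selection and exits the mode; a press
    // over a selectable object adds it to the selection instead.
    struct MultiSelectState {
        var selected: Set<String> = []
        var inMultiSelectMode = true

        mutating func handlePress(over object: String?) {
            if let object = object {
                selected.insert(object)       // add to the selection
            } else {
                selected.removeAll()          // cancel the selection...
                inMultiSelectMode = false     // ...and exit the mode
            }
        }
    }

    var state = MultiSelectState(selected: ["17418-1", "17418-7"])
    state.handlePress(over: nil)              // press over empty area
    print(state.selected, state.inMultiSelectMode)   // [] false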
FIGS. 9A-9E are flow diagrams illustrating a method 17500 of selecting user interface objects in accordance with some embodiments. Method 17500 is performed at an electronic device (e.g., device 300 of FIG. 3 or portable multifunction device 100 of FIG. 1A) with a display and a touch-sensitive surface. In some embodiments, the display is a touch-screen display and the touch-sensitive surface is on the display. In some embodiments, the display is separate from the touch-sensitive surface. Some operations in method 17500 are, optionally, combined, and/or the order of some operations is, optionally, changed.
Method 17500 provides an intuitive method for selecting a user interface object, as described below. The method reduces the cognitive burden on the user when selecting user interface objects, thereby creating a more efficient human-machine interface. For battery-driven electronic devices, enabling a user to select user interface objects faster and more efficiently conserves power and increases the time between battery charges.
The device displays (17502) a plurality of user interface objects on the display, the plurality of user interface objects including a first user interface object and a second user interface object (e.g., thumbnails 17402-1 and 17402-2 of FIG. 8A, and thumbnails 17418-1 and 17418-7 of FIG. 8Y). In some embodiments, the device is configured to detect (17504) a range of contact intensity values and compare the detected intensity values against a plurality of different intensity thresholds, where the plurality of different intensity thresholds includes an alternative mode intensity threshold (e.g., a "deep press" threshold ITD) and a selection intensity threshold (e.g., a "light press" threshold ITL). The device uses the alternative mode intensity threshold to transition from a first selection mode (e.g., a "single object selection" mode) to a second selection mode (e.g., a "multiple object selection" mode), and uses the selection intensity threshold to distinguish between inputs that correspond to movement of the focus selector on the display (e.g., inputs with an intensity between IT0 and ITL) and inputs that correspond to selection of a user interface object on the display at a location at or near the location of the focus selector (e.g., inputs with an intensity between ITL and ITD), where the selection intensity threshold is different from (e.g., lower than) the alternative mode intensity threshold. In some embodiments, during a normal mode of operation, when the device detects an increase in intensity of the contact above the selection intensity threshold while the focus selector is over a user interface object, the device selects the user interface object, and when the device detects a decrease in intensity of the contact below the selection intensity threshold (or below a threshold that is a predefined amount less than the selection intensity threshold), the device drops the object or performs an operation associated with activating the object (e.g., the device stops dragging an object that was moving in accordance with movement of the focus selector, or launches an application associated with the object if the object did not move after the increase in intensity of the contact was detected).
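Purely for illustration of the threshold hierarchy described in operation 17504 (the numeric values below are assumptions; only the ordering IT0 < ITL < ITD matters, and the code is not part of the embodiments), a contact intensity can be classified in Swift as:

    // Minimal sketch: classifying a detected contact intensity against the
    // plurality of intensity thresholds.
    let it0 = 0.05, itL = 0.5, itD = 0.8

    enum IntensityClass {
        case belowDetection     // no contact registered
        case movementOnly       // moves the focus selector without selecting
        case selection          // selects the object under the focus selector
        case alternativeMode    // transitions to the multiple-object selection mode
    }

    func classify(_ intensity: Double) -> IntensityClass {
        switch intensity {
        case ..<it0: return .belowDetection
        case ..<itL: return .movementOnly
        case ..<itD: return .selection
        default:     return .alternativeMode
        }
    }

    print(classify(0.3))   // movementOnly
    print(classify(0.6))   // selection
    print(classify(0.9))   // alternativeMode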
While the plurality of user interface objects are displayed, the device detects (17510) a first press input that corresponds to an increase in intensity of a contact on the touch-sensitive surface above a first intensity threshold while the focus selector is positioned over the first user interface object. In response to detecting the first press input, the device selects (17512) the first user interface object (e.g., selection of thumbnail 17402-1 in FIG. 8D, and selection of thumbnail 17418-1 in FIG. 8Z).
After selecting (17514) the first user interface object, the device detects (17516) a second press input that corresponds to an increase in intensity of a contact on the touch-sensitive surface above a second intensity threshold while the focus selector is over the second user interface object. In some embodiments, the first press input corresponds (17518) to a first contact on the touch-sensitive surface, and the second press input corresponds to a second contact on the touch-sensitive surface that is distinct from the first contact (e.g., the first contact ceases to be detected before the second contact is detected). In some embodiments, the first contact and the second contact are made by the same finger at different times, as shown in FIGS. 8Y-8DD. In some embodiments, the first press input corresponds to an increase in intensity of a contact above an alternative mode intensity threshold (e.g., "ITD") that is higher than the selection intensity threshold (e.g., "ITL"), and, in response to detecting the first press input, the device enters a "select multiple objects" mode in which successive press inputs with intensities at or above the selection intensity threshold cause the device to simultaneously select multiple user interface objects corresponding to the successive press inputs. Alternatively, the first press input and the second press input are portions of a single gesture that includes (17520) a continuously detected contact on the touch-sensitive surface, as shown in FIGS. 8A-8X. For example, the contact is continuously detected between the beginning of the first press input and the end of the second press input. In FIGS. 8B-8F, the device detects various movements of contact 17406 on touch-sensitive surface 451, as well as various press inputs, without detecting liftoff of contact 17406 from touch-sensitive surface 451. Likewise, in FIGS. 8G-8X, the device detects various movements of contact 17410 on touch-sensitive surface 451, as well as various press inputs, without detecting liftoff of contact 17410 from touch-sensitive surface 451. In contrast, in FIGS. 8Y-8DD, the device detects multiple distinct contacts (e.g., contacts 17412, 17414, and 17416), rather than a single continuous contact, while selecting multiple user interface objects.
In some embodiments, the first press input and the second press input are made by a continuously detected (unbroken) contact on the touch-sensitive surface. In some embodiments, the gesture includes (17522) an intermediate portion between the first press input and the second press input (e.g., the movement of contact 17410 in FIGS. 8J-8L), the intermediate portion including movement of the continuously detected contact that corresponds to movement of the focus selector from the first user interface object to the second user interface object (e.g., while the focus selector is at the first user interface object, the user selects the first user interface object, then drags the focus selector across the display from the first user interface object to the second user interface object and selects the second user interface object, as one continuous gesture).
In some embodiments, regardless of whether a single contact or multiple contacts are used (e.g., as shown in FIGS. 8A-8X and, separately, in FIGS. 8Y-8DD), the first intensity threshold is (17524) the alternative mode intensity threshold (e.g., "ITD"), and the second intensity threshold is the selection intensity threshold (e.g., "ITL"). In some embodiments, the first press input corresponds to an increase in intensity of a contact above an alternative mode intensity threshold (e.g., "ITD") that is higher than the selection intensity threshold (e.g., "ITL"), and, in response to detecting the first press input, the device enters a "select multiple objects" mode in which successive press inputs with intensities at or above the light press intensity threshold cause the device to simultaneously select multiple user interface objects corresponding to the successive press inputs, as shown in FIGS. 8G-8X. In the "select multiple objects" mode, the focus selector is moved sequentially over the user interface objects, and, while the focus selector is over a respective user interface object, the intensity of the contact increases above the selection intensity threshold in order to select the respective user interface object and then decreases below the selection intensity threshold while the respective user interface object remains selected, so that a next user interface object can be selected (e.g., a first deep press puts the device into the multiple-selection mode, and subsequent presses need only reach a lower threshold, such as the light press input threshold).
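To make the two-threshold gesture of operation 17524 concrete (an editorial sketch under assumed values; the single-object selection path is deliberately omitted), the logic can be written in Swift as:

    // Minimal sketch: the first press must reach ITD to enter the
    // "select multiple objects" mode; subsequent presses in the same
    // gesture need only reach the lower threshold ITL.
    let itL = 0.5, itD = 0.8

    struct GestureSelector {
        var inMultiSelectMode = false
        var selected: [String] = []

        mutating func press(maxIntensity: Double, over object: String) {
            let required = inMultiSelectMode ? itL : itD
            guard maxIntensity >= required else { return }
            inMultiSelectMode = true
            selected.append(object)
        }
    }

    var gesture = GestureSelector()
    gesture.press(maxIntensity: 0.9, over: "17402-1")   // deep press: enters the mode
    gesture.press(maxIntensity: 0.6, over: "17402-2")   // a light press now suffices
    print(gesture.selected)   // ["17402-1", "17402-2"]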
Alternatively, in some embodiments, the first intensity threshold is (17526) the alternative mode intensity threshold (e.g., "ITD"), and the second intensity threshold is also the alternative mode intensity threshold (e.g., "ITD"). Thus, in some embodiments, the second intensity threshold is the same as the first intensity threshold. For example, in some embodiments, the device selects the first user interface object in response to detecting a deep press input (e.g., an input that includes an increase in intensity of a contact from an intensity below ITD to an intensity above ITD), and the device selects the second (or third, fourth, etc.) user interface object in response to detecting other deep press inputs. In some embodiments, the device selects the first user interface object in response to detecting a light press input (e.g., an input that includes an increase in intensity of a contact from an intensity below ITL to an intensity above ITL), and the device selects the second (or third, fourth, etc.) user interface object in response to detecting other light press inputs.
In response to detecting the second press input, the device selects (17528) the second user interface object and maintains selection of the first user interface object, e.g., as shown in FIGS. 8L-8O, where the device selects thumbnail 17402-2 in response to detecting an increase in intensity of contact 17410 from an intensity below ITL to an intensity above ITL. In some embodiments, as also shown in FIGS. 8L-8O, after selecting (17530) the first user interface object, the device displays a first residual image (e.g., residual image 17416-1 in FIGS. 8J-8O) at the original location of the first user interface object, and, after selecting the second user interface object, the device displays a second residual image (e.g., residual image 17416-2 in FIG. 8O) at the original location of the second user interface object. In some embodiments, the residual images remain stationary, even as the focus selector (and, optionally, the representations of the user interface objects) moves on the display, until the user interface objects are moved to different locations in the user interface (e.g., as shown in FIG. 8P).
In some embodiments, after displaying the first residual image and the second residual image (17532), the device detects (17534) an end of selection of the first user interface object and the second user interface object. For example, the device detects an invalid drop of the selected user interface objects, such as liftoff of the contact (or a deep press/double press) while the focus selector is over a region of the display into which the selected user interface objects cannot be dropped. In response to detecting the end of selection of the first user interface object and the second user interface object, the device displays (17536) an animation of a representation of the first user interface object moving back to the first residual image and an animation of a representation of the second user interface object moving back to the second residual image. For example, in FIG. 8O, if the device detects a valid drop operation, thumbnails 17402-1 and 17402-2 are placed at a location adjacent to cursor 17408, as shown in FIG. 8P. In contrast, in some embodiments, if the device detects an invalid drop operation in FIG. 8O, the device displays an animation of thumbnails 17402-1 and 17402-2 moving back to residual images 17416-1 and 17416-2, returning the user interface to the state shown in FIG. 8F.
In some embodiments, the device detects (17538) a press input on a respective residual image (e.g., while cursor 17408 is over the respective residual image, the device detects an increase in intensity of the corresponding contact above ITL). In response to detecting the press input on the respective residual image, the device deselects (17540) the user interface object corresponding to the respective residual image (e.g., the first user interface object is deselected if the respective residual image is the first residual image, and the second user interface object is deselected if the respective residual image is the second residual image). In some embodiments, after the user interface object is deselected, the user interface object is displayed at its original location, and the representation of the user interface object that was previously moving in accordance with the movement of the focus selector, as well as the respective residual image, cease to be displayed.
In some embodiments, upon selection of the first user interface object, the device displays (17542) a representation of the first user interface object proximate to the focus selector, and upon selection of the second user interface object, the device displays a representation of the second user interface object proximate to the focus selector (e.g., a representation of a stack of photographs follows a cursor/contact around the display). Examples of representations of user interface objects adjacent to the focus selector include a "stack" or "heap" of user interface objects representing thumbnails 17402-1 and 17402-2 as shown, for example, in FIG. 8O.
In some embodiments, after selecting the first user interface object, the device changes (17544) the display of the first user interface object to provide a visual indication that the first user interface object has been selected (e.g., the border of the thumbnail 17418-1 changes between fig. 8Y and 8Z to show that the thumbnail 17418-1 has been selected), and after selecting the second user interface object, the device changes the display of the second user interface object to provide a visual indication that the second user interface object has been selected (e.g., the border of the thumbnail 17418-7 changes between fig. 8AA and 8BB to show that the thumbnail 17418-7 has been selected). For example, multiple thumbnails are popped from the page at the same time (e.g., using shadows or pseudo-three-dimensional effects) to provide a visual indication that the user interface object corresponding to the popped-up image has been selected. As another example, a residual image of the thumbnail is displayed to provide a visual indication that the user interface object corresponding to the residual image has been selected.
In some embodiments, after selecting (17546) the first user interface object and the second user interface object, the device detects (17548) liftoff of the second contact. After detecting liftoff of the second contact, the device detects (17550) a third press input corresponding to a third contact. In response to detecting the third press input, the device deselects (17552) the first user interface object and the second user interface object (e.g., in FIG. 8DD, in response to detecting a press input by contact 17416 at a location that does not correspond to any of the selectable user interface objects 17418, the device cancels the selection and exits the "select multiple objects" mode).
In some embodiments, the device detects (17554) a third press input that includes an increase in intensity of a contact above the alternative mode intensity threshold (e.g., while the focus selector is at a location on the display that is outside the plurality of user interface objects). In response to detecting the third press input, the device deselects (17556) the first user interface object and the second user interface object. In some embodiments, the first user interface object and the second user interface object are deselected if the increase in intensity is detected while the focus selector is over a portion of the user interface that does not include any selectable user interface objects, whereas a third user interface object is selected, in addition to the previously selected first and second user interface objects, if the increase in intensity is detected while the focus selector is over the selectable third user interface object. For example, if the device were to detect the deep press input in FIG. 8DD while cursor 17411 was over thumbnail 17418-2, rather than over a portion of the user interface that does not include any thumbnails, then thumbnail 17418-2 would be selected in addition to thumbnails 17418-1 and 17418-7, instead of thumbnails 17418-1 and 17418-7 being deselected as shown in FIG. 8DD.
In some embodiments, the device detects (17558) liftoff of the continuously detected contact. In response to detecting liftoff of the continuously detected contact, the device deselects (17560) the first user interface object and the second user interface object (e.g., the multiple user interface objects remain simultaneously selected until the contact used to select them is lifted off of the touch-sensitive surface), as shown in FIG. 8P, where the device deselects thumbnails 17402-1 and 17402-2 in response to detecting liftoff of contact 17410 from touch-sensitive surface 451.
In some embodiments, the plurality of user interface objects includes (17562) a third user interface object that represents a collection of user interface objects (e.g., a folder icon representing a file directory, such as "documents" folder 17404 in FIG. 8A). After detecting selection of the first user interface object and the second user interface object (17564), the device detects (17566) a third press input that corresponds to an increase in intensity of a contact on the touch-sensitive surface while the focus selector is over the third user interface object. In some of these embodiments, in response to detecting (17568) the third press input, in accordance with a determination that the third press input includes an increase in intensity above the first intensity threshold (e.g., the intensity of contact 17410 increases above ITD, as shown in FIG. 8U), the device displays (17570) a user interface with an area for adding the first user interface object and the second user interface object to the collection of user interface objects represented by the third user interface object (e.g., opening the folder in a file manager program, as shown in FIGS. 8V-8X), and, in accordance with a determination that the third press input includes an increase in intensity to a maximum intensity that is above the second intensity threshold (e.g., "ITL") and below the first intensity threshold (e.g., "ITD"), the device selects (17572) the third user interface object in addition to the first user interface object and the second user interface object (e.g., in response to detecting the press input in FIGS. 8Q-8R, the device picks up the folder icon, as shown in FIGS. 8S-8T). In some embodiments, if the maximum intensity of the contact is below the second intensity threshold (e.g., "ITL"), the device forgoes performing an operation associated with the third user interface object.
It should be understood that the particular order of operations in fig. 9A-9E that has been described is merely exemplary and is not intended to indicate that the order is the only order in which the operations may be performed. Those of ordinary skill in the art will recognize various ways to reorder the operations described herein. Additionally, it should be noted that the details of other processes described herein with respect to other methods described herein (e.g., those listed in paragraph [0043 ]) also apply in a similar manner to method 17500 described above with respect to fig. 9A-9E. For example, the contact, press input, user interface object, intensity threshold, focus selector described above with reference to method 17500 optionally have one or more characteristics of the contact, press input, user interface object, intensity threshold, focus selector described herein with reference to other methods described herein (e.g., those methods listed in paragraph [0043 ]). For the sake of brevity, these details are not repeated here.
Fig. 10 illustrates a functional block diagram of an electronic device 17600 configured in accordance with the principles of various described embodiments, in accordance with some embodiments. The functional blocks of the device are optionally implemented by hardware, software, or a combination of hardware and software which embody the principles of the various described embodiments. Those skilled in the art will understand that the functional blocks described in fig. 10 are optionally combined or separated into sub-blocks to implement the principles of the various described embodiments. Thus, the description herein optionally supports any possible combination or separation or further definition of the functional blocks described herein.
As shown in fig. 10, electronic device 17600 includes a display unit 17602 configured to display a graphical user interface, a touch-sensitive surface unit 17604 configured to receive contacts, one or more sensor units 17606 configured to detect intensities of contacts with a touch-sensitive surface unit 17604; and a processing unit 17608 coupled to the display unit 17602, the touch-sensitive surface unit 17604, and the one or more sensor units 17606. In some embodiments, the processing unit 17608 includes a detection unit 17610, a display enabling unit 17612, and a selection unit 17614.
The display unit 17602 is configured to display a plurality of user interface objects, including a first user interface object and a second user interface object. The processing unit 17608 is configured to detect a first press input (e.g., with the detection unit 17610) that corresponds to an increase in intensity of a contact on the touch-sensitive surface unit 17604 above a first intensity threshold while the focus selector is over the first user interface object. In response to detecting the first press input, the processing unit 17608 is configured to select the first user interface object (e.g., with the selection unit 17614); and, after selecting the first user interface object, detect a second press input (e.g., with the detection unit 17610) that corresponds to an increase in intensity of a contact on the touch-sensitive surface unit 17604 above a second intensity threshold while the focus selector is over the second user interface object. In response to detecting the second press input, the processing unit 17608 is configured to select the second user interface object (e.g., with the selection unit 17614) and maintain selection of the first user interface object.
In some embodiments, the first press input corresponds to a first contact on the touch-sensitive surface unit and the second press input corresponds to a second contact on the touch-sensitive surface unit that is different from the first contact.
In some embodiments, the processing unit 17608 is further configured to detect liftoff of the second contact after selecting the first user interface object and the second user interface object. After detecting the liftoff of the second contact, the processing unit 17608 is further configured to detect a third press input corresponding to a third contact (e.g., with the detection unit 17610); and, in response to detecting the third press input, deselecting the first user interface object and the second user interface object (e.g., with the selection unit 17614).
In some embodiments, the first and second press inputs are part of a single gesture that includes a contact continuously detected on the touch-sensitive surface unit 17604.
In some embodiments, the processing unit 17608 is further configured to, after selecting the first user interface object and the second user interface object, detect liftoff of the continuously detected contact (e.g., with the detection unit 17610); and, in response to detecting liftoff of the continuously detected contact, deselect the first user interface object and the second user interface object (e.g., with the selection unit 17614).
In some embodiments, the first press input and the second press input are part of a single gesture that includes a continuously detected contact on the touch-sensitive surface unit; and the gesture includes an intermediate portion between the first press input and the second press input, the intermediate portion including movement of the continuously detected contact corresponding to movement of the focus selector from the first user interface object to the second user interface object.
In some embodiments, the processing unit 17608 is configured to detect a series of contact intensity values and compare the detected intensity values to a plurality of different intensity thresholds. The plurality of different intensity thresholds comprises an alternative mode intensity threshold that is used by the processing unit 17608 to transition from a first selection mode to a second selection mode; and a selection intensity threshold used by the processing unit 17608 to distinguish between input corresponding to movement of the focus selector on the display unit 17602 and input corresponding to selection of a user interface object on the display unit 17602 at a location at or near the location of the focus selector, where the selection intensity threshold is different from the alternative mode intensity threshold.
In some embodiments, the processing unit 17608 is further configured to, after selecting the first user interface object and the second user interface object, detect a third press input (e.g., with the detection unit 17610) that includes an increase in intensity of the contact above an alternative pattern intensity threshold; and deselecting the first user interface object and the second user interface object (e.g., with the selection unit 17614) in response to detecting the third press input.
In some embodiments, the first intensity threshold is an alternative mode intensity threshold and the second intensity threshold is an alternative mode intensity threshold.
In some embodiments, the first intensity threshold is an alternative mode intensity threshold and the second intensity threshold is a select intensity threshold.
In some embodiments, the plurality of user interface objects includes a third user interface object that represents a collection of user interface objects, and the processing unit 17608 is further configured to, after selecting the second user interface object, detect (e.g., with the detection unit 17610) a third press input that corresponds to an increase in intensity of a contact on the touch-sensitive surface unit 17604 while the focus selector is over the third user interface object. The processing unit 17608 is further configured to, in response to detecting the third press input, in accordance with a determination that the third press input includes an increase in intensity above the first intensity threshold, display (e.g., with the display enabling unit 17612) a user interface with an area for adding the first user interface object and the second user interface object to the collection of user interface objects represented by the third user interface object; and, in accordance with a determination that the third press input includes an increase in intensity to a maximum intensity that is above the second intensity threshold and below the first intensity threshold, select the third user interface object in addition to the first user interface object and the second user interface object (e.g., with the selection unit 17614).
In some embodiments, the processing unit 17608 is further configured to, after selecting the first user interface object, display the first residual image at the original location of the first user interface object (e.g., with the display enabling unit 17612); and, after selecting the second user interface object, display the second residual image at the original location of the second user interface object (e.g., with the display enabling unit 17612).
In some embodiments, the processing unit 17608 is further configured to, after displaying the first and second residual images, detect (e.g., with the detection unit 17610) an end of the selection of the first and second user interface objects; and in response to detecting the end of the selection of the first user interface object and the second user interface object, displaying (e.g., with the display-enabling unit 17612) an animation of the representation of the first user interface object moving back to the first residual image and displaying (e.g., with the display-enabling unit 17612) an animation of the representation of the second user interface object moving back to the second residual image.
In some embodiments, the processing unit 17608 is further configured to detect a press input on the respective residual image after displaying the first and second residual images (e.g., with the detection unit 17610), and to deselect the user interface object corresponding to the respective residual image in response to detecting the press input on the respective residual image (e.g., with the selection unit 17614).
In some embodiments, the processing unit 17608 is further configured to, after selecting the first user interface object, display a representation of the first user interface object adjacent to the focus selector (e.g., with the display enabling unit 17612); and, after selecting the second user interface object, display a representation of the second user interface object adjacent to the focus selector (e.g., with the display enabling unit 17612).
In some embodiments, the processing unit 17608 is further configured to, after selecting the first user interface object, change the display of the first user interface object (e.g., with the display enabling unit 17612) to provide a visual indication that the first user interface object has been selected; and, after selecting the second user interface object, change the display of the second user interface object (e.g., with the display enabling unit 17612) to provide a visual indication that the second user interface object has been selected.
The operations in the above-described information processing method are optionally implemented by running one or more functional modules in an information processing apparatus, such as a general-purpose processor (e.g., as described above with respect to fig. 1A and 3) or an application-specific chip.
The operations described above with reference to FIGS. 9A-9E are optionally performed by components depicted in FIGS. 1A-1B or FIG. 10. For example, detection operations 17510 and 17516, selection operations 17512 and 17528, and deselection operation 17540 are optionally implemented by event sorter 170, event recognizer 180, and event handler 190. Event monitor 171 in event sorter 170 detects a contact on touch-sensitive display 112, and event dispatcher module 174 passes the event information to application 136-1. A respective event recognizer 180 of application 136-1 compares the event information to respective event definitions 186 and determines whether a first contact at a first location on the touch-sensitive surface corresponds to a predefined event or sub-event, such as selection of an object on the user interface. When a respective predefined event or sub-event is detected, event recognizer 180 activates an event handler 190 associated with the detection of the event or sub-event. Event handler 190 optionally uses or calls data updater 176 or object updater 177 to update the application internal state 192. In some embodiments, event handler 190 accesses a respective GUI updater 178 to update what is displayed by the application. Similarly, it will be clear to a person having ordinary skill in the art how other processes can be implemented based on the components depicted in FIGS. 1A-1B.
Typing characters on a virtual keyboard
Many electronic devices with touch-sensitive surfaces, such as portable multifunction devices with touch screen displays, have graphical user interfaces with displayed virtual keyboards for typing characters to be output in, for example, email messages, notepad applications, and search fields. Some methods for entering a character or sequence of characters (e.g., entering an input into a device corresponding to a request to output a character or characters) require a separate contact on the touch-sensitive surface for each entered character. However, entering characters by making individual contacts for each entered character may be inefficient and time consuming for the user.
In the embodiments described below, a faster and more efficient method for accurately typing characters on a virtual keyboard is provided, in which a sequence of characters can be selected with a single continuous contact, in response to detecting an increase in intensity of the contact while the contact is over a key corresponding to a character. In some embodiments, detecting a press input that includes a contact with an intensity above a respective threshold while the contact is over a respective character (e.g., on a device with a touch-sensitive surface configured to detect intensity of contacts) causes the device to output the respective character. In contrast, detecting a press input with a maximum intensity below the respective threshold causes the device to forgo outputting the respective character. This method streamlines the character-typing process by allowing a user to type characters quickly and accurately with a single continuous movement of a contact.
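For illustration only (the KeyEvent type and the threshold value are editorial assumptions, not part of the embodiments), the output-or-forgo rule for a single continuous contact can be sketched in Swift:

    // Minimal sketch: a character is output only if the contact's
    // intensity rises above ITL while the contact is over that key.
    let itL = 0.5

    struct KeyEvent {
        let key: Character
        let maxIntensityOverKey: Double
    }

    func typedString(from events: [KeyEvent]) -> String {
        String(events.filter { $0.maxIntensityOverKey >= itL }.map { $0.key })
    }

    // Mirrors FIGS. 11B-11G below: "G" and "E" are output; "F" is forgone.
    let trace = [KeyEvent(key: "G", maxIntensityOverKey: 0.7),
                 KeyEvent(key: "F", maxIntensityOverKey: 0.3),
                 KeyEvent(key: "E", maxIntensityOverKey: 0.6)]
    print(typedString(from: trace))   // "GE"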
FIGS. 11A-11T illustrate exemplary user interfaces for typing characters on a virtual keyboard in accordance with some embodiments. The user interfaces in these figures are used to illustrate the processes described below, including the processes in FIGS. 12A-12D. FIGS. 11B-11T include intensity diagrams that show the current intensity of a contact on the touch-sensitive surface relative to a plurality of intensity thresholds, including a first intensity threshold (e.g., "ITL"), a deep press intensity threshold (e.g., "ITD"), and a character output intensity threshold (e.g., "ITC").
Fig. 11A illustrates an exemplary user interface displayed on a device 300 with a touch screen 112 for typing characters on a virtual keyboard 17704 according to some embodiments. For example, the device displays a notepad application (app) 17702. Letters and/or other characters generated using input (e.g., gestures, contacts, etc.) are output in a notepad within notepad app 17702.
FIG. 11B illustrates detection of a contact 17706 on touch screen 112. Contact 17706 is detected by virtue of having an intensity above a minimum contact intensity threshold IT0. However, in FIG. 11B, the intensity of contact 17706 is below the light press intensity threshold ITL, and therefore no character is output in notepad app 17702 (as described with reference to method 17800 of FIGS. 12A-12D). FIG. 11B also shows contact 17706 moving from the position shown in the figure to a position over the key corresponding to the character "G".
FIG. 11C shows the intensity of contact 17706 below ITL while contact 17706 is over the key corresponding to the character "G". Optionally, the device displays a pop-up tab 17708 that displays the character corresponding to the location of contact 17706. For example, because contact 17706 is currently over the hit region corresponding to the character "G", the character "G" is displayed in the pop-up tab. The pop-up tab thus allows the user to see which character the contact corresponds to, even though that character is covered by the user's finger. In this example, pop-up tab 17708 is displayed without regard to intensity (e.g., without the character "G" necessarily being output, as explained below).
FIG. 11D shows the intensity of contact 17706 above ITL while contact 17706 is over the key corresponding to the character "G" (e.g., a light press input is detected over the key corresponding to the character "G"). Because the character output criteria are met, the character "G" is output in notepad app 17702 (as described with reference to method 17800 of FIGS. 12A-12D). In some embodiments, the character "G" is output when the increase in intensity of contact 17706 above ITL is detected (e.g., on the rising edge of intensity, or "downstroke," of a contact that subsequently has an intensity above ITL). For example, in FIG. 11D, the character "G" is output when the intensity of contact 17706 is above ITL while contact 17706 is over the key on the virtual keyboard corresponding to the character "G".
In some embodiments, the character "G" is output when a decrease in intensity of contact 17706 from above ITL to below ITL is detected (e.g., on the falling edge, or "upstroke," of a contact that previously had an intensity above ITL). In some embodiments, the character output criteria include detecting both an intensity above ITL and a subsequent decrease in intensity from above a different character output intensity threshold (e.g., ITC) to below that threshold, while contact 17706 is continuously detected over the key on the virtual keyboard corresponding to the character "G". In such embodiments, there is one threshold for activating the potential output of a character (e.g., ITL) and a different threshold for actually triggering the output of the character (e.g., ITC), which provides hysteresis and prevents repeated accidental outputs of the same character. Alternatively, in some embodiments, ITL and ITC are equal. For ease of explanation, unless otherwise noted (e.g., output triggered on the downstroke), the output of a respective character is shown in conjunction with a corresponding input above ITL.
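As an editorial sketch of the ITL/ITC hysteresis variant described in this paragraph (the numeric values are assumptions; the code is not part of the embodiments), the character output criteria can be expressed in Swift as:

    // Minimal sketch: a character is "primed" when intensity exceeds ITL
    // and is output on the subsequent fall below the lower threshold ITC,
    // so one sustained press cannot emit the same character twice.
    let itL = 0.5, itC = 0.3

    struct CharacterOutputDetector {
        private var primed = false
        var output: [Character] = []

        mutating func update(intensity: Double, keyUnderContact: Character) {
            if intensity >= itL { primed = true }   // activation (ITL)
            if primed && intensity < itC {          // trigger (ITC)
                output.append(keyUnderContact)
                primed = false
            }
        }
    }

    var detector = CharacterOutputDetector()
    for intensity in [0.2, 0.6, 0.4, 0.2, 0.7, 0.1] {   // two presses over "T"
        detector.update(intensity: intensity, keyUnderContact: "T")
    }
    print(detector.output)   // ["T", "T"], per FIGS. 11J-11M below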
In some embodiments, after an uppercase character (e.g., the character "G") is output, virtual keyboard 17704 automatically transitions to displaying lowercase characters. In some embodiments, subsequently meeting the character output criteria while the contact is over a lowercase character (e.g., the character "g") results in output of the lowercase character. For ease of explanation, the embodiments are described with reference to uppercase (i.e., capital) characters.
FIG. 11D also shows a subsequent movement of contact 17706 to a position over the key corresponding to the character "F".
FIG. 11E shows the maximum intensity of contact 17706 below ITL while contact 17706 is over the key corresponding to the character "F". In this example, the intensity of contact 17706 remains below ITL during the time period in which contact 17706 is over the key corresponding to the character "F". Accordingly, the device forgoes outputting the character "F" in notepad app 17702 (as described with reference to method 17800 of FIGS. 12A-12D). FIG. 11E also shows a subsequent movement of contact 17706 to a position over the key corresponding to the character "E".
FIG. 11F shows the intensity of contact 17706 below ITL while contact 17706 is over the key corresponding to the character "E".
FIG. 11G shows the intensity of contact 17706 increasing from below ITL to above ITL, and the resulting output of the character "E".
FIG. 11H illustrates a subsequent movement of contact 17706 to a position over the key corresponding to the character "R". As shown in FIG. 11I, the device forgoes outputting the character "R" because the intensity of contact 17706 remains below ITL while contact 17706 is over the key corresponding to the character "R".
FIGS. 11J-11M illustrate exemplary user interfaces for consecutively outputting more than one instance of the same character. FIG. 11J shows contact 17706 at a location over the key corresponding to the character "T". FIG. 11K shows the intensity of contact 17706 above ITL and the resulting output of the character "T", as described above. FIG. 11L shows the intensity decreasing below ITC. FIG. 11M shows the intensity of contact 17706 subsequently increasing above ITL again (e.g., with an intermediate detection of contact 17706 at an intensity below ITC, contact 17706 reaches a second intensity above ITL). As a result, a second instance of the character "T" is output in notepad app 17702.
FIG. 11M also illustrates display of an auto-correct and/or auto-complete interface that displays a suggested correction and/or completion of a user-output string (e.g., "GETT" in this example). In this example, the device suggests the correction and completion "JETTY" to replace the already output "GETT". FIG. 11M also shows a subsequent movement of contact 17706 to a position over the space key (the intensity of contact 17706 need not be above any particular threshold during the movement of contact 17706). In this example, the space key is a predefined affordance for accepting or rejecting the auto-correct and/or auto-complete suggestion. In some embodiments, detection of a light press input while contact 17706 is over the predefined affordance (e.g., an increase in the intensity of contact 17706 from below ITL to above ITL, optionally followed by a decrease in the intensity of contact 17706 below ITL) results in acceptance (and output) of the suggestion (shown in FIGS. 11O-11P). In some embodiments, detection of a deep press input while contact 17706 is over the predefined affordance (e.g., an increase in the intensity of contact 17706 from below ITD to above ITD, optionally followed by a decrease in the intensity of contact 17706 below ITD) results in rejection of the suggestion and continued display of the user-output string (shown in FIGS. 11Q-11R). Alternatively, in some embodiments, a deep press results in acceptance of the suggestion, and a light press results in rejection of the suggestion (e.g., the functions are reversed compared with the embodiments described with reference to FIGS. 11M-11S).
FIG. 11T shows contact 17710 with an intensity above the deep press intensity threshold ITD. In some embodiments, detecting an intensity of contact 17710 above ITD results in the display of a special character interface that displays special characters (e.g., variants of the character "e" with diacritical marks such as a grave accent, acute accent, circumflex, tilde, and diaeresis). In some embodiments, selecting a particular displayed special character causes that special character to be output in notepad app 17702 (e.g., instead of outputting the character "E").
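As an illustrative sketch only, the deep press behavior of FIG. 11T might be modeled as follows in Swift; the variant list, names, and threshold value are assumptions for the example, not the disclosed mapping.

    // Accented variants offered when a key is deep-pressed (illustrative set).
    let accentedVariants: [Character: [Character]] = [
        "e": ["è", "é", "ê", "ẽ", "ë"],
    ]

    // Returns the characters to offer: the special character interface appears
    // only when the contact's intensity exceeds the deep press threshold.
    func charactersToOffer(for key: Character,
                           intensity: Double,
                           deepPressThreshold: Double = 0.8) -> [Character] {
        if intensity > deepPressThreshold, let variants = accentedVariants[key] {
            return variants             // show the special character interface
        }
        return [key]                    // otherwise only the base character
    }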
FIGS. 12A-12D are flow diagrams illustrating a method 17800 of typing characters on a virtual keyboard, in accordance with some embodiments. Method 17800 is performed at an electronic device (e.g., device 300 of FIG. 3 or portable multifunction device 100 of FIG. 1A) with a display and a touch-sensitive surface. In some embodiments, the display is a touch screen display and the touch-sensitive surface is on the display. In some embodiments, the display is separate from the touch-sensitive surface. Some operations in method 17800 are, optionally, combined, and/or the order of some operations is, optionally, changed.
As described below, method 17800 provides an intuitive way to type characters on a virtual keyboard. The method reduces the cognitive burden on a user when typing characters on the virtual keyboard, thereby creating a more efficient human-machine interface. For battery-operated electronic devices, enabling a user to type characters on a virtual keyboard faster and more efficiently conserves power and increases the time between battery charges.
The device displays (17802) a virtual keyboard on the display (e.g., an alphanumeric keyboard for entering text on the device, shown in FIG. 11A). The device detects (17804) a contact on the touch-sensitive surface (e.g., contact 17706 of FIG. 11B). While continuously detecting (17806) the contact on the touch-sensitive surface, the device detects (17808) one or more movements of the contact on the touch-sensitive surface that correspond to movement of the focus selector over the virtual keyboard (e.g., a single continuous movement over multiple keys of the virtual keyboard, such as contact 17706 in FIG. 11B, and/or multiple discrete movements from one key to another, such as contact 17706 in FIG. 11E). For each respective key of a plurality of keys of the virtual keyboard, upon detecting the focus selector over the respective key of the plurality of keys, in accordance with a determination that character output criteria for outputting a character corresponding to the respective key have been met, the device outputs (17810) the character, wherein the character output criteria include the contact having a respective intensity above a first intensity threshold while the focus selector is detected over the respective key; and in accordance with a determination that the character output criteria are not met, the device forgoes outputting the character corresponding to the respective key. In some embodiments, the first intensity threshold is an intensity threshold that is higher than the input detection intensity threshold at which the contact was initially detected. In some embodiments, the character is output in response to detecting an increase in intensity of the contact from an intensity below the first intensity threshold to an intensity above the first intensity threshold.
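To make the determination concrete, the following is a minimal Swift sketch of the per-key decision of operation 17810. The types, threshold value, and intensity figures are hypothetical; a real implementation would track intensity samples continuously rather than a per-key peak.

    struct KeyVisit {
        let character: Character
        let maxIntensityOverKey: Double // peak intensity while over this key
    }

    // A key's character is output only if the contact's intensity exceeded the
    // first intensity threshold while the focus selector was over that key.
    func typedString(for visits: [KeyVisit],
                     firstIntensityThreshold: Double = 0.5) -> String {
        var output = ""
        for visit in visits {
            if visit.maxIntensityOverKey > firstIntensityThreshold {
                output.append(visit.character)  // criteria met: output the character
            }                                    // otherwise: forgo outputting it
        }
        return output
    }

    // The example of FIGS. 11B-11I with illustrative intensity values:
    // "G" and "E" are pressed hard enough, "F" and "R" are only glided over.
    let visits = [
        KeyVisit(character: "G", maxIntensityOverKey: 0.7),
        KeyVisit(character: "F", maxIntensityOverKey: 0.2),
        KeyVisit(character: "E", maxIntensityOverKey: 0.6),
        KeyVisit(character: "R", maxIntensityOverKey: 0.3),
    ]
    print(typedString(for: visits)) // "GE"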
In some embodiments, the character output criteria for outputting the character corresponding to the respective key includes (17811), while the focus selector is over the respective key, the contact corresponding to the focus selector increasing from an intensity below the first intensity threshold to an intensity above the first intensity threshold (e.g., the character is output in response to detecting an increase in intensity of the contact from an intensity below the first intensity threshold to an intensity above the first intensity threshold).
In some embodiments, the character output criteria for outputting the character corresponding to the respective key includes (17812), while the focus selector is over the respective key, the contact corresponding to the focus selector decreasing from an intensity above the first intensity threshold to an intensity below a character output intensity threshold. In some embodiments, the character output intensity threshold is the same as the first intensity threshold. In some embodiments, the character output intensity threshold is lower than the first intensity threshold.
In some embodiments, the character output criteria for outputting the character corresponding to the respective key includes (17813), while the focus selector is continuously detected over the respective key, the contact corresponding to the focus selector increasing from an intensity below the first intensity threshold and then decreasing from an intensity above the first intensity threshold to an intensity below a character output intensity threshold (e.g., the character output criteria include detecting a downstroke and an upstroke while the contact is continuously over the respective key).
In some embodiments, while continuously detecting the contact on the touch-sensitive surface, the device detects (17814) a first press input that includes detecting that an intensity of the contact increases above the first intensity threshold while the focus selector is over a first key. In response to detecting the first press input, the device outputs (17815) a character corresponding to the first key. In some embodiments, the device outputs the character in response to detecting that the intensity of the contact increases above the first intensity threshold (e.g., a "downstroke" of the press input). In some embodiments, the device outputs the character in response to detecting that the intensity of the contact decreases below the character output intensity threshold (e.g., an "upstroke" of the press input).
In some embodiments, while continuously detecting the contact on the touch-sensitive surface, the device detects (17816) that the intensity of the contact decreases below the first intensity threshold (or, optionally, below the character output intensity threshold). After detecting that the intensity of the contact decreases below the first intensity threshold, the device detects (17818) a second press input (or, in some cases, a third press input, a fourth press input, etc.) that includes detecting that the intensity of the contact increases above the first intensity threshold while the focus selector is over the first key. In response to detecting the second press input, the device again outputs (17820) the character corresponding to the first key as additional output (e.g., outputs a second character "T", as shown in FIGS. 11J-11M). Thus, in some embodiments, the first key can be selected twice as the output of the keyboard without detecting liftoff of the contact. For example, the user can keep the contact over the "A" key and perform an increase-pressure, decrease-pressure, increase-pressure sequence to select the key twice (e.g., to type "AA"). Similarly, the user can use a single continuous contact with multiple cycles of increasing and decreasing pressure while the focus selector is over multiple keys (e.g., the "A" key and the "B" key) to type a sequence of characters (e.g., "ABAB"). In some embodiments, the device outputs the character in response to detecting that the intensity of the contact increases above the first intensity threshold (e.g., a "downstroke" of the press input). In some embodiments, the device outputs the character in response to detecting that the intensity of the contact decreases below the character output intensity threshold (e.g., an "upstroke" of the press input).
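The press-release-press behavior can be illustrated with another small Swift sketch. The values are assumed, and for simplicity a single threshold serves as both the first intensity threshold and the character output intensity threshold (the hysteresis variant above uses two).

    // Samples are (key under the focus selector, intensity) pairs for one
    // continuously detected contact; no liftoff occurs between samples.
    func simulateTyping(samples: [(Character, Double)],
                        firstThreshold: Double = 0.5) -> String {
        var output = ""
        var pressed = false // whether the current press cycle already fired
        for (key, intensity) in samples {
            if !pressed && intensity > firstThreshold {
                output.append(key)   // downstroke above the threshold: output once
                pressed = true
            } else if pressed && intensity < firstThreshold {
                pressed = false      // intensity fell back: the next press can fire
            }
        }
        return output
    }

    // Typing "AA" by an increase-decrease-increase sequence over "A", then "B":
    let samples: [(Character, Double)] = [
        ("A", 0.7), ("A", 0.2), ("A", 0.8),   // two press cycles over the "A" key
        ("B", 0.3), ("B", 0.9),               // move to the "B" key, then press
    ]
    print(simulateTyping(samples: samples)) // "AAB"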
In some embodiments, while continuously detecting the contact on the touch-sensitive surface, the device detects (17822) a second press input that includes detecting an increase in intensity of the contact above the first intensity threshold while the focus selector is over a second key. In response to detecting the second press input, the device outputs (17824) a character corresponding to the second key (e.g., as the user moves the focus selector around the keyboard, multiple different keys can be selected by increasing the intensity of the contact while the focus selector is over the different keys in the keyboard). In some embodiments, the device outputs the character in response to detecting that the intensity of the contact increases above the first intensity threshold (e.g., a "downstroke" of the press input). In some embodiments, the device outputs the character in response to detecting that the intensity of the contact decreases below the character output intensity threshold (e.g., an "upstroke" of the press input).
In some embodiments, while continuously detecting the contact on the touch-sensitive surface, the device detects (17826) movement of the contact that corresponds to movement of the focus selector over a second key, wherein a maximum intensity of the contact is below the first intensity threshold while the focus selector is over the second key. In response to detecting movement of the contact corresponding to movement of the focus selector over the second key, wherein the maximum intensity of the contact is below the first intensity threshold while the focus selector is over the second key, the device forgoes (17828) outputting the character corresponding to the second key.
In some embodiments, while continuously detecting the contact on the touch-sensitive surface, the device detects (17830) a plurality of inputs corresponding to an input character sequence (e.g., the character sequence "GETT" of FIG. 11M). In response to detecting the plurality of inputs, the device displays (17832) an auto-correcting user interface for changing the character sequence to a modified character sequence (e.g., displaying an auto-corrected character sequence with a cancel affordance, such as the auto-corrected character sequence "JETTY" of FIG. 11M, or displaying one or more auto-correct options for replacing the character sequence). While displaying the auto-correcting user interface, the device detects (17834) an auto-correcting input that includes an increase in intensity of the contact above the first intensity threshold while the focus selector is over a respective affordance (e.g., a space key or a delete key) in the user interface. In response to detecting the auto-correcting input, in accordance with a determination that the contact included in the auto-correcting input has an intensity above a second intensity threshold, the device performs (17836) a first operation associated with the sequence of characters, the second intensity threshold being higher than the first intensity threshold.
In some embodiments, in response to detecting the auto-correcting input, in accordance with a determination that the contact included in the auto-correcting input has an intensity between a first intensity threshold and a second intensity threshold, the device performs (17838) a second operation associated with the sequence of characters, the second operation being different from the first operation.
In some embodiments, the first operation includes (17840) rejecting the modified character sequence (e.g., rejecting the suggested auto-correction, as shown in FIGS. 11Q-11S), and the second operation includes replacing the character sequence with the modified character sequence (e.g., accepting the suggested auto-correction, as shown in FIGS. 11O-11P).
Alternatively, in some embodiments, the first operation includes (17842) replacing the character sequence with a modified character sequence, and the second operation includes rejecting the modified character sequence.
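The two-band dispatch of operations 17836-17842 can be sketched as follows. This is an illustrative Swift example; the enum, names, and threshold values are assumptions, and the accept/reject mapping follows the embodiment of FIGS. 11O-11S, which other embodiments reverse.

    enum AutocorrectOutcome { case rejectSuggestion, acceptSuggestion, noAction }

    func autocorrectOutcome(intensity: Double,
                            firstThreshold: Double = 0.5,   // IT_L analogue (assumed)
                            secondThreshold: Double = 0.8)  // IT_D analogue (assumed)
                            -> AutocorrectOutcome {
        if intensity > secondThreshold {
            return .rejectSuggestion    // first operation: keep the typed string (FIGS. 11Q-11S)
        } else if intensity > firstThreshold {
            return .acceptSuggestion    // second operation: replace with the suggestion (FIGS. 11O-11P)
        }
        return .noAction                // below the first threshold: not an auto-correct input
    }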
It should be understood that the particular order in which the operations in FIGS. 12A-12D have been described is merely exemplary and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., those listed in paragraph [0043]) are also applicable in an analogous manner to method 17800 described above with respect to FIGS. 12A-12D. For example, the contacts, gestures, characters, intensity thresholds, and focus selectors described above with reference to method 17800 optionally have one or more of the characteristics of the contacts, gestures, characters, intensity thresholds, and focus selectors described herein with reference to other methods described herein (e.g., those listed in paragraph [0043]). For brevity, these details are not repeated here.
Fig. 13 illustrates a functional block diagram of an electronic device 17900 configured in accordance with the principles of various described embodiments, in accordance with some embodiments. The functional blocks of the device are optionally implemented by hardware, software, or a combination of hardware and software which embody the principles of the various described embodiments. Those skilled in the art will understand that the functional blocks described in fig. 13 are optionally combined or separated into sub-blocks to implement the principles of the various described embodiments. Thus, the description herein optionally supports any possible combination or separation or further definition of the functional blocks described herein.
As shown in FIG. 13, the electronic device 17900 includes a display unit 17902 configured to display a virtual keyboard; a touch-sensitive surface unit 17904 configured to receive contacts; one or more sensor units 17906 configured to detect intensity of contacts with the touch-sensitive surface unit 17904; and a processing unit 17908 coupled to the display unit 17902, the touch-sensitive surface unit 17904, and the one or more sensor units 17906. In some embodiments, processing unit 17908 includes detection unit 17910, output unit 17912, auto-correcting replacement unit 17914, and auto-correcting rejection unit 17916.
The processing unit 17908 is configured to: while continuously detecting a contact on the touch-sensitive surface unit 17904: detect one or more movements of the contact on the touch-sensitive surface unit 17904 that correspond to movement of the focus selector over the virtual keyboard; and for each respective key of a plurality of keys of the virtual keyboard, upon detecting the focus selector over the respective key of the plurality of keys (e.g., with the detection unit 17910): in accordance with a determination that character output criteria for outputting a character corresponding to the respective key have been met, output the character (e.g., with the output unit 17912), wherein the character output criteria include a respective intensity of the contact being above a first intensity threshold while the focus selector is detected over the respective key; and in accordance with a determination that the character output criteria are not met, forgo outputting the character corresponding to the respective key.
In some embodiments, the character output criteria for outputting the character corresponding to the respective key comprises, when the focus selector is positioned over the respective key: the contact corresponding to the focus selector increasing from an intensity below the first intensity threshold to an intensity above the first intensity threshold.
In some embodiments, the character output criteria for outputting the character corresponding to the respective key comprises, when the focus selector is positioned over the respective key: the contact corresponding to the focus selector decreasing from an intensity above the first intensity threshold to an intensity below the character output intensity threshold.
In some embodiments, the character output criteria for outputting the character corresponding to the respective key comprises, upon successive detection of the focus selector over the respective key: the contact corresponding to the focus selector increasing from an intensity below the first intensity threshold and then decreasing from an intensity above the first intensity threshold to an intensity below the character output intensity threshold.
In some embodiments, the processing unit 17908 is further configured to, while continuously detecting the contact on the touch-sensitive surface unit 17904: detect an increase in intensity of the contact above the first intensity threshold while the focus selector is over the first key; and in response to detecting the increase in the intensity of the contact, output a character corresponding to the first key.
In some embodiments, the processing unit 17908 is further configured to, while continuously detecting the contact on the touch-sensitive surface unit 17904: detect movement of the contact corresponding to movement of the focus selector over the second key, wherein a maximum intensity of the contact is below the first intensity threshold while the focus selector is over the second key; and in response to detecting movement of the contact corresponding to movement of the focus selector over the second key, wherein the maximum intensity of the contact is below the first intensity threshold while the focus selector is over the second key, forgo outputting the character corresponding to the second key.
In some embodiments, the processing unit 17908 is further configured to, while continuously detecting the contact on the touch-sensitive surface unit 17904 and after outputting the character corresponding to the first key: detect a second press input that includes detecting an increase in intensity of the contact above the first intensity threshold while the focus selector is over the second key; and in response to detecting the second press input, output a character corresponding to the second key.
In some embodiments, the processing unit 17908 is further configured to, while continuously detecting the contact on the touch-sensitive surface unit 17904 and after outputting the character corresponding to the first key: detect a decrease in intensity of the contact below the first intensity threshold; after detecting the decrease in intensity of the contact below the first intensity threshold, detect a second press input that includes detecting an increase in intensity of the contact above the first intensity threshold while the focus selector is over the first key; and in response to detecting the second press input, output the character corresponding to the first key again as additional output.
In some embodiments, the processing unit 17908 is further configured to, while continuously detecting the contact on the touch-sensitive surface unit 17904: detect a plurality of inputs corresponding to an input character sequence; in response to detecting the plurality of inputs, display an auto-correcting user interface for changing the character sequence to a modified character sequence; while displaying the auto-correcting user interface, detect an auto-correcting input that includes an increase in intensity of the contact above the first intensity threshold while the focus selector is over a respective affordance in the user interface; and in response to detecting the auto-correcting input, in accordance with a determination that the contact included in the auto-correcting input has an intensity above a second intensity threshold, perform a first operation associated with the sequence of characters, the second intensity threshold being higher than the first intensity threshold.
In some embodiments, the processing unit 17908 is further configured to, in response to detecting the auto-correcting input, in accordance with a determination that the contact included in the auto-correcting input has an intensity between a first intensity threshold and a second intensity threshold, perform a second operation associated with the sequence of characters, wherein the second operation is different from the first operation.
In some embodiments, the first operation includes rejecting the modified character sequence (e.g., rejecting the auto-correction suggestion with auto-correction rejection unit 17916); and the second operation includes replacing the character sequence with the modified character sequence (e.g., accepting the auto-correction suggestion with the auto-correction replacement unit 17914).
Alternatively, in some embodiments, the first operation comprises replacing the character sequence with a modified character sequence, and the second operation comprises rejecting the modified character sequence.
The operations in the above-described information processing method are optionally implemented by running one or more functional modules in an information processing apparatus, such as a general-purpose processor (e.g., as described above with respect to fig. 1A and 3) or an application-specific chip.
The operations described above with reference to FIGS. 12A-12D are, optionally, performed by components depicted in FIGS. 1A-1B or FIG. 13. For example, detection operation 17804, output operation 17810, and auto-correct operation 17836 are, optionally, implemented by event sorter 170, event recognizer 180, and event handler 190. Event monitor 171 in event sorter 170 detects a contact on touch-sensitive display 112, and event dispatcher module 174 delivers the event information to application 136-1. A respective event recognizer 180 of application 136-1 compares the event information to respective event definitions 186 and determines whether a first contact at a first location on the touch-sensitive surface corresponds to a predefined event or sub-event, such as selection of an object on a user interface. When a respective predefined event or sub-event is detected, event recognizer 180 activates an event handler 190 associated with the detection of the event or sub-event. Event handler 190 optionally uses or calls data updater 176 or object updater 177 to update the application internal state 192. In some embodiments, event handler 190 accesses a respective GUI updater 178 to update what is displayed by the application. Similarly, it would be clear to a person having ordinary skill in the art how other processes can be implemented based on the components depicted in FIGS. 1A-1B.
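Purely for illustration, the dispatch path just described can be summarized with the following Swift sketch; the protocol and type names are hypothetical stand-ins for event sorter 170, event recognizer 180, and event handler 190, not an actual framework API.

    struct TouchEvent {
        let x: Double, y: Double    // location on the touch-sensitive display
        let intensity: Double       // contact intensity reported by the sensors
    }

    protocol EventRecognizer {
        func matches(_ event: TouchEvent) -> Bool   // compare against an event definition
        func handle(_ event: TouchEvent)            // the associated event handler
    }

    struct EventSorter {
        var recognizers: [EventRecognizer]

        // Deliver the event to the first recognizer whose definition matches,
        // mirroring the sorter -> recognizer -> handler flow described above.
        func dispatch(_ event: TouchEvent) {
            for recognizer in recognizers where recognizer.matches(event) {
                recognizer.handle(event)
                return
            }
        }
    }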
It should be understood that the particular order in which the operations have been described is merely exemplary and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that the various processes independently described herein (e.g., those listed in paragraph [0043]) can be combined with each other in different arrangements. For example, the contacts, user interface objects, tactile sensations, intensity thresholds, and/or focus selectors described above with reference to any one of the various processes independently described herein (e.g., those listed in paragraph [0043]) optionally have one or more of the characteristics of the contacts, gestures, user interface objects, tactile sensations, intensity thresholds, and focus selectors described herein with reference to one or more of the other methods described herein (e.g., those listed in paragraph [0043]). For brevity, not all of the various possible combinations are specifically enumerated here, but it should be understood that the claims described above may be combined in any manner that is not precluded by mutually exclusive claim features.
The foregoing description, for purposes of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the various described embodiments to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the various described embodiments and their practical applications, to thereby enable others skilled in the art to best utilize the various described embodiments with various modifications as are suited to the particular use contemplated.

Claims (104)

1. A method, comprising:
at an electronic device with a touch-sensitive surface and a display, wherein the device includes one or more sensors to detect intensity of contacts with the touch-sensitive surface:
displaying a first user interface object at a first location on the display;
detecting a contact with the touch-sensitive surface;
detecting a first movement of the contact across the touch-sensitive surface that corresponds to movement of a focus selector toward the first position;
in response to detecting the first movement of the contact:
moving the focus selector from a position away from the first user interface object to the first position; and
determining an intensity of the contact on the touch-sensitive surface while the focus selector is in the first position;
after detecting the first movement of the contact, detecting a second movement of the contact across the touch-sensitive surface that corresponds to movement of the focus selector away from the first location; and
in response to detecting the second movement of the contact:
in accordance with a determination that the contact satisfies selection criteria for the first user interface object, moving the focus selector and the first user interface object away from the first location in accordance with the second movement of the contact, wherein the selection criteria for the first user interface object includes the contact reaching a predefined intensity threshold while the focus selector is in the first location; and
in accordance with a determination that the contact does not satisfy the selection criteria for the first user interface object, moving the focus selector without moving the first user interface object in accordance with the second movement of the contact.
2. The method of claim 1, wherein:
the movement of the first user interface object is constrained to a predefined path in the user interface; and
moving the first user interface object includes moving the first user interface object along the predefined path according to a motion component of the focus selector that corresponds to an allowed direction of motion along the predefined path.
3. The method of claim 1, wherein:
the first user interface object has a two-dimensional range of motion; and
moving the first user interface object includes moving the first user interface object to a location on the display at or adjacent to the focus selector.
4. The method of any of claims 1-3, wherein the predefined intensity threshold is based at least in part on an amount of change in intensity of the contact.
5. The method of any of claims 1-3, wherein the predefined intensity threshold is based at least in part on a magnitude of the intensity of the contact.
6. The method of any of claims 1-5, wherein while displaying the first user interface object on the display, a second user interface object is displayed at a second location on the display, and the method comprises:
while continuing to detect the contact and move the first user interface object in accordance with movement of the focus selector:
after detecting the second movement of the contact, detecting a third movement of the contact across the touch-sensitive surface that corresponds to movement of the focus selector toward the second position;
in response to detecting the third movement of the contact:
moving the focus selector from a position away from the second user interface object to the second position; and
determining an intensity of the contact on the touch-sensitive surface while the focus selector is in the second position;
after detecting the third movement of the contact, detecting a fourth movement of the contact across the touch-sensitive surface that corresponds to movement of the focus selector away from the second location; and
in response to detecting the fourth movement of the contact:
in accordance with a determination that the contact satisfies selection criteria for the second user interface object, moving the focus selector, the first user interface object, and the second user interface object away from the second location in accordance with the fourth movement of the contact, wherein the selection criteria for the second user interface object includes the contact reaching the predefined intensity threshold while the focus selector is in the second location; and
in accordance with a determination that the contact does not satisfy the selection criteria for the second user interface object, moving the focus selector and the first user interface object without moving the second user interface object in accordance with the fourth movement of the contact.
7. The method of claim 6, comprising: after detecting the fourth movement of the contact, displaying a representation of the first user interface object and a representation of the second user interface object as moving on the display in accordance with movement of the focus selector.
8. The method of claim 6, comprising: after detecting the fourth movement of the contact, displaying representations of a set of objects corresponding to the first user interface object and the second user interface object as moving on the display in accordance with movement of the focus selector.
9. The method of any of claims 6 to 8, comprising, after detecting the first movement and before detecting the fourth movement:
detecting a decrease in intensity of the contact below the predefined intensity threshold; and
continuing to move the first user interface object in accordance with movement of the focus selector after detecting that the intensity of the contact decreases below the predefined intensity threshold.
10. An electronic device, comprising:
a display;
a touch-sensitive surface;
one or more sensors to detect intensity of contacts with the touch-sensitive surface;
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for:
displaying a first user interface object at a first location on the display;
detecting a contact with the touch-sensitive surface;
detecting a first movement of the contact across the touch-sensitive surface that corresponds to movement of a focus selector toward the first position;
in response to detecting the first movement of the contact:
moving the focus selector from a position away from the first user interface object to the first position; and
determining an intensity of the contact on the touch-sensitive surface while the focus selector is in the first position;
after detecting the first movement of the contact, detecting a second movement of the contact across the touch-sensitive surface that corresponds to movement of the focus selector away from the first location; and
in response to detecting the second movement of the contact:
in accordance with a determination that the contact satisfies selection criteria for the first user interface object, moving the focus selector and the first user interface object away from the first location in accordance with the second movement of the contact, wherein the selection criteria for the first user interface object includes the contact reaching a predefined intensity threshold while the focus selector is in the first location; and
in accordance with a determination that the contact does not satisfy the selection criteria for the first user interface object, moving the focus selector without moving the first user interface object in accordance with the second movement of the contact.
11. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface, cause the device to:
displaying a first user interface object at a first location on the display;
detecting a contact with the touch-sensitive surface;
detecting a first movement of the contact across the touch-sensitive surface that corresponds to movement of a focus selector toward the first position;
in response to detecting the first movement of the contact:
moving the focus selector from a position away from the first user interface object to the first position; and
determining an intensity of the contact on the touch-sensitive surface while the focus selector is in the first position;
after detecting the first movement of the contact, detecting a second movement of the contact across the touch-sensitive surface that corresponds to movement of the focus selector away from the first location; and
in response to detecting the second movement of the contact:
in accordance with a determination that the contact satisfies selection criteria for the first user interface object, moving the focus selector and the first user interface object away from the first location in accordance with the second movement of the contact, wherein the selection criteria for the first user interface object includes the contact reaching a predefined intensity threshold while the focus selector is in the first location; and
in accordance with a determination that the contact does not satisfy the selection criteria for the first user interface object, moving the focus selector without moving the first user interface object in accordance with the second movement of the contact.
12. A graphical user interface on an electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface, a memory, and one or more processors to execute one or more programs stored in the memory, the graphical user interface comprising:
a first user interface object;
wherein:
the first user interface object is displayed at a first location on the display;
a contact with the touch-sensitive surface is detected;
a first movement of the contact across the touch-sensitive surface is detected that corresponds to movement of a focus selector toward the first position;
in response to detecting the first movement of the contact:
the focus selector is moved from a position away from the first user interface object to the first position; and
an intensity of the contact on the touch-sensitive surface is determined while the focus selector is in the first position;
after the first movement of the contact is detected, a second movement of the contact across the touch-sensitive surface is detected that corresponds to movement of the focus selector away from the first location; and
in response to detecting the second movement of the contact:
in accordance with a determination that the contact satisfies selection criteria for the first user interface object, the focus selector and the first user interface object are moved away from the first location in accordance with the second movement of the contact, wherein the selection criteria for the first user interface object includes the contact reaching a predefined intensity threshold while the focus selector is in the first location; and
in accordance with a determination that the contact does not satisfy the selection criteria for the first user interface object, the focus selector is moved without moving the first user interface object in accordance with the second movement of the contact.
13. An electronic device, comprising:
a display;
a touch-sensitive surface;
one or more sensors to detect intensity of contacts with the touch-sensitive surface; and
means for displaying a first user interface object at a first location on the display;
means for detecting a contact with the touch-sensitive surface;
means for detecting a first movement of the contact across the touch-sensitive surface, the first movement corresponding to movement of a focus selector toward the first position;
means for, in response to detecting the first movement of the contact, performing:
moving the focus selector from a position away from the first user interface object to the first position; and
determining an intensity of the contact on the touch-sensitive surface while the focus selector is in the first position;
means for detecting a second movement of the contact across the touch-sensitive surface after detecting the first movement of the contact, the second movement corresponding to movement of the focus selector away from the first location; and
means for, in response to detecting the second movement of the contact, performing:
in accordance with a determination that the contact satisfies selection criteria for the first user interface object, moving the focus selector and the first user interface object away from the first location in accordance with the second movement of the contact, wherein the selection criteria for the first user interface object includes the contact reaching a predefined intensity threshold while the focus selector is in the first location; and
in accordance with a determination that the contact does not satisfy the selection criteria for the first user interface object, moving the focus selector without moving the first user interface object in accordance with the second movement of the contact.
14. An information processing apparatus for use in an electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface, comprising:
means for displaying a first user interface object at a first location on the display;
means for detecting a contact with the touch-sensitive surface;
means for detecting a first movement of the contact across the touch-sensitive surface, the first movement corresponding to movement of a focus selector toward the first position;
means for, in response to detecting the first movement of the contact, performing:
moving the focus selector from a position away from the first user interface object to the first position; and
determining an intensity of the contact on the touch-sensitive surface while the focus selector is in the first position;
means for detecting a second movement of the contact across the touch-sensitive surface after detecting the first movement of the contact, the second movement corresponding to movement of the focus selector away from the first location; and
means for, in response to detecting the second movement of the contact, performing:
in accordance with a determination that the contact satisfies selection criteria for the first user interface object, moving the focus selector and the first user interface object away from the first location in accordance with the second movement of the contact, wherein the selection criteria for the first user interface object includes the contact reaching a predefined intensity threshold while the focus selector is in the first location; and
in accordance with a determination that the contact does not satisfy the selection criteria for the first user interface object, moving the focus selector without moving the first user interface object in accordance with the second movement of the contact.
15. An electronic device, comprising:
a display;
a touch-sensitive surface;
one or more sensors to detect intensity of contacts with the touch-sensitive surface;
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing any of the methods of claims 1-9.
16. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface, cause the device to perform any of the methods of claims 1-9.
17. A graphical user interface on an electronic device with a display, a touch-sensitive surface, a memory, one or more sensors to detect intensity of contacts with the touch-sensitive surface, and one or more processors to execute one or more programs stored in the memory, the graphical user interface comprising user interfaces displayed in accordance with any of the methods of claims 1-9.
18. An electronic device, comprising:
a display;
a touch-sensitive surface;
one or more sensors to detect intensity of contacts with the touch-sensitive surface; and
apparatus for performing any of the methods of claims 1-9.
19. An information processing apparatus for use in an electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface, comprising:
apparatus for performing any of the methods of claims 1-9.
20. An electronic device, comprising:
a display unit configured to display a first user interface object at a first location on the display unit;
A touch-sensitive surface unit configured to detect a contact;
one or more sensor units configured to detect intensity of contacts with the touch-sensitive surface unit; and
a processing unit coupled to the display unit, the one or more sensor units, and the touch-sensitive surface unit, the processing unit configured to:
detect a first movement of the contact on the touch-sensitive surface unit that corresponds to movement of a focus selector toward the first position;
in response to detecting the first movement of the contact:
move the focus selector from a position away from the first user interface object to the first position; and
determine an intensity of the contact on the touch-sensitive surface unit while the focus selector is in the first position;
after detecting the first movement of the contact, detect a second movement of the contact on the touch-sensitive surface unit that corresponds to movement of the focus selector away from the first location; and
in response to detecting the second movement of the contact:
in accordance with a determination that the contact satisfies selection criteria for the first user interface object, move the focus selector and the first user interface object away from the first location in accordance with the second movement of the contact, wherein the selection criteria for the first user interface object includes the contact reaching a predefined intensity threshold while the focus selector is in the first location; and
in accordance with a determination that the contact does not satisfy the selection criteria for the first user interface object, move the focus selector without moving the first user interface object in accordance with the second movement of the contact.
21. The electronic device of claim 20, wherein:
the movement of the first user interface object is constrained to a predefined path in the user interface; and
moving the first user interface object includes moving the first user interface object along the predefined path according to a motion component of the focus selector that corresponds to an allowed direction of motion along the predefined path.
22. The electronic device of claim 20, wherein:
the first user interface object has a two-dimensional range of motion; and
moving the first user interface object includes moving the first user interface object to a location on the display unit at or adjacent to the focus selector.
23. The electronic device of any of claims 20-22, wherein the predefined intensity threshold is based at least in part on an amount of change in intensity of the contact.
24. The electronic device of any of claims 20-22, wherein the predefined intensity threshold is based at least in part on a magnitude of the intensity of the contact.
25. The electronic device of any of claims 20-24, wherein while displaying the first user interface object on the display unit, a second user interface object is displayed at a second location on the display unit, and the processing unit is further configured to:
while continuing to detect the contact and move the first user interface object in accordance with movement of the focus selector:
after detecting the second movement of the contact, detect a third movement of the contact across the touch-sensitive surface unit that corresponds to movement of the focus selector toward the second position;
in response to detecting the third movement of the contact:
move the focus selector from a position away from the second user interface object to the second position; and
determine an intensity of the contact on the touch-sensitive surface unit while the focus selector is in the second position;
after detecting the third movement of the contact, detect a fourth movement of the contact on the touch-sensitive surface unit that corresponds to movement of the focus selector away from the second location; and
in response to detecting the fourth movement of the contact:
in accordance with a determination that the contact satisfies selection criteria for the second user interface object, move the focus selector, the first user interface object, and the second user interface object away from the second location in accordance with the fourth movement of the contact, wherein the selection criteria for the second user interface object includes the contact reaching the predefined intensity threshold while the focus selector is in the second position; and
in accordance with a determination that the contact does not satisfy the selection criteria for the second user interface object, move the focus selector and the first user interface object without moving the second user interface object in accordance with the fourth movement of the contact.
26. The electronic device of claim 25, wherein the processing unit is further configured to, after detecting the fourth movement of the contact, display a representation of the first user interface object and a representation of the second user interface object as moving on the display unit in accordance with movement of the focus selector.
27. The electronic device of claim 25, wherein the processing unit is further configured to, after detecting the fourth movement of the contact, display a representation of a set of objects corresponding to the first user interface object and the second user interface object as moving on the display unit in accordance with movement of the focus selector.
28. The electronic device of any of claims 25-27, wherein the processing unit is further configured to, after detecting the first movement and before detecting the fourth movement:
detecting a decrease in intensity of the contact below the predefined intensity threshold; and
continuing to move the first user interface object in accordance with movement of the focus selector after detecting that the intensity of the contact decreases below the predefined intensity threshold.
29. A method, comprising:
at an electronic device with a touch-sensitive surface and a display, wherein the device includes one or more sensors to detect intensity of contacts with the touch-sensitive surface:
displaying a plurality of user interface objects on the display, the plurality of user interface objects including a first user interface object and a second user interface object;
detecting a first press input corresponding to an increase in intensity of a contact on the touch-sensitive surface above a first intensity threshold while a focus selector is positioned over the first user interface object;
in response to detecting the first press input, selecting the first user interface object; and
after selecting the first user interface object:
detecting a second press input that corresponds to an increase in intensity of a contact on the touch-sensitive surface above a second intensity threshold while the focus selector is positioned over the second user interface object; and
in response to detecting the second press input, selecting the second user interface object and maintaining selection of the first user interface object.
30. The method of claim 29, wherein:
The first press input corresponds to a first contact on the touch-sensitive surface; and is
The second press input corresponds to a second contact on the touch-sensitive surface that is different from the first contact.
31. The method of claim 30, further comprising, after selecting the first user interface object and the second user interface object:
detecting liftoff of the second contact;
detecting a third press input corresponding to a third contact after detecting liftoff of the second contact; and
in response to detecting the third press input, deselecting the first user interface object and the second user interface object.
32. The method of claim 29, wherein the first press input and the second press input are part of a single gesture that includes a continuously detected contact on the touch-sensitive surface.
33. The method of claim 32, further comprising, after selecting the first user interface object and the second user interface object:
detecting liftoff of the successively detected contacts; and
deselecting the first user interface object and the second user interface object in response to detecting liftoff of the continuously detected contact.
34. The method of claim 32, wherein:
the gesture includes an intermediate portion between the first press input and the second press input, the intermediate portion including movement of the continuously detected contact corresponding to movement of the focus selector from the first user interface object to the second user interface object.
35. The method of any of claims 29 to 34, wherein the device is configured to detect a range of contact intensity values and compare the detected intensity values to a plurality of different intensity thresholds, the plurality of different intensity thresholds comprising:
an alternative mode intensity threshold used by the device to transition from a first selection mode to a second selection mode; and
a selection intensity threshold used by the device to distinguish between input corresponding to movement of the focus selector on the display and input corresponding to selection of a user interface object on the display at a location at or near the location of the focus selector, wherein the selection intensity threshold is different from the alternative mode intensity threshold.
36. The method of claim 35, further comprising, after selecting the first user interface object and the second user interface object:
detecting a third press input comprising an increase in intensity of a contact above the alternative mode intensity threshold; and
in response to detecting the third press input, deselecting the first user interface object and the second user interface object.
37. The method of any one of claims 35 to 36, wherein:
the first intensity threshold is the alternative mode intensity threshold; and
the second intensity threshold is the alternative mode intensity threshold.
38. The method of any one of claims 35 to 36, wherein:
the first intensity threshold is the alternative mode intensity threshold; and
the second intensity threshold is the selection intensity threshold.
39. The method of claim 38, wherein:
the plurality of user interface objects includes a third user interface object representing a set of user interface objects; and
the method includes, after selecting the first user interface object and the second user interface object:
detecting a third press input corresponding to an increase in intensity of a contact on the touch-sensitive surface while a focus selector is positioned over the third user interface object; and
in response to detecting the third press input:
in accordance with a determination that the third press input includes an increase in intensity above the first intensity threshold, displaying a user interface having an area for adding the first user interface object and the second user interface object to the set of user interface objects represented by the third user interface object; and
in accordance with a determination that the third press input includes an increase in intensity to a maximum intensity above the second intensity threshold and below the first intensity threshold, selecting the third user interface object in addition to the first user interface object and the second user interface object.
40. The method of any of claims 29 to 39, further comprising:
displaying a first residual image at an original position of the first user interface object after selecting the first user interface object; and
displaying a second residual image at an original position of the second user interface object after selecting the second user interface object.
41. The method of claim 40, further comprising, after displaying the first residual image and the second residual image:
detecting an end of selection of the first user interface object and the second user interface object; and
in response to detecting the end of selection of the first user interface object and the second user interface object, displaying an animation of the representation of the first user interface object moving back to the first residual image and displaying an animation of the representation of the second user interface object moving back to the second residual image.
42. The method of any of claims 40-41, further comprising, after displaying the first residual image and the second residual image:
detecting a press input on a respective residual image; and
in response to detecting the press input on the respective residual image, deselecting a user interface object corresponding to the respective residual image.
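Purely as an illustration of the residual-image behavior in claims 40-42, here is one possible data model; the claims do not prescribe any particular implementation, and all names and positions below are invented.

```swift
// Hypothetical model of claims 40-42: each selected object leaves a
// residual image at its original position, and a press input on a
// respective residual image deselects the corresponding object.
struct ResidualImage {
    let objectID: String
    let originalPosition: (x: Double, y: Double)
}

var selectedObjects: Set<String> = ["first", "second"]
var residuals = [
    ResidualImage(objectID: "first", originalPosition: (x: 10, y: 20)),
    ResidualImage(objectID: "second", originalPosition: (x: 30, y: 40))
]

func pressResidual(_ residual: ResidualImage) {
    selectedObjects.remove(residual.objectID)               // deselect (claim 42)
    residuals.removeAll { $0.objectID == residual.objectID }
}
```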
43. The method of any of claims 29 to 42, further comprising:
after selecting the first user interface object, displaying a representation of the first user interface object adjacent to the focus selector; and
after selecting the second user interface object, displaying a representation of the second user interface object adjacent to the focus selector.
44. The method of any of claims 29 to 43, further comprising:
after selecting the first user interface object, changing display of the first user interface object to provide a visual indication that the first user interface object has been selected; and
after selecting the second user interface object, changing display of the second user interface object to provide a visual indication that the second user interface object has been selected.
45. An electronic device, comprising:
a display;
a touch-sensitive surface;
one or more sensors to detect intensity of contacts with the touch-sensitive surface;
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for:
displaying a plurality of user interface objects on the display, the plurality of user interface objects including a first user interface object and a second user interface object;
detecting a first press input corresponding to an increase in intensity of a contact on the touch-sensitive surface above a first intensity threshold while a focus selector is positioned over the first user interface object;
in response to detecting the first press input, selecting the first user interface object; and
after selecting the first user interface object:
detecting a second press input that corresponds to an increase in intensity of a contact on the touch-sensitive surface above a second intensity threshold while the focus selector is positioned over the second user interface object; and
in response to detecting the second press input, selecting the second user interface object and maintaining selection of the first user interface object.
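The core behavior restated across claims 45-49 is cumulative selection: a press above the relevant threshold over an object selects it while maintaining any prior selection. A minimal sketch; the types and threshold value are hypothetical.

```swift
// Hypothetical sketch of cumulative selection: each qualifying press
// adds the object under the focus selector to the selection without
// clearing earlier selections.
struct PressInput {
    let intensity: Double
    let objectUnderFocusSelector: String
}

func apply(_ press: PressInput, threshold: Double,
           to selection: inout Set<String>) {
    guard press.intensity > threshold else { return }
    selection.insert(press.objectUnderFocusSelector)  // prior items kept
}

var selection = Set<String>()
apply(PressInput(intensity: 0.9, objectUnderFocusSelector: "first"),
      threshold: 0.5, to: &selection)
apply(PressInput(intensity: 0.7, objectUnderFocusSelector: "second"),
      threshold: 0.5, to: &selection)
// selection now contains both "first" and "second"
```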
46. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface, cause the device to:
displaying a plurality of user interface objects on the display, the plurality of user interface objects including a first user interface object and a second user interface object;
detecting a first press input corresponding to an increase in intensity of a contact on the touch-sensitive surface above a first intensity threshold while a focus selector is positioned over the first user interface object;
in response to detecting the first press input, selecting the first user interface object; and
after selecting the first user interface object:
detecting a second press input that corresponds to an increase in intensity of a contact on the touch-sensitive surface above a second intensity threshold while the focus selector is positioned over the second user interface object; and
in response to detecting the second press input, selecting the second user interface object and maintaining selection of the first user interface object.
47. A graphical user interface on an electronic device with a display, a touch-sensitive surface, a memory, and one or more sensors to detect intensity of contacts with the touch-sensitive surface and one or more processors to execute one or more programs stored in the memory, the graphical user interface comprising:
a plurality of user interface objects displayed on the display, the plurality of user interface objects including a first user interface object and a second user interface object;
wherein:
detecting a first press input corresponding to an increase in intensity of a contact on the touch-sensitive surface above a first intensity threshold while a focus selector is positioned over the first user interface object;
in response to detecting the first press input, selecting the first user interface object; and
after selecting the first user interface object:
detecting a second press input that corresponds to an increase in intensity of a contact on the touch-sensitive surface above a second intensity threshold while the focus selector is positioned over the second user interface object; and
in response to detecting the second press input, selecting the second user interface object and maintaining selection of the first user interface object.
48. An electronic device, comprising:
a display;
a touch-sensitive surface;
one or more sensors to detect intensity of contacts with the touch-sensitive surface; and
means for displaying a plurality of user interface objects on the display, the plurality of user interface objects including a first user interface object and a second user interface object;
means for detecting a first press input corresponding to an increase in intensity of a contact on the touch-sensitive surface above a first intensity threshold while a focus selector is located over the first user interface object;
means for selecting the first user interface object in response to detecting the first press input; and
means for, after selecting the first user interface object:
detecting a second press input that corresponds to an increase in intensity of a contact on the touch-sensitive surface above a second intensity threshold while the focus selector is positioned over the second user interface object; and
in response to detecting the second press input, selecting the second user interface object and maintaining selection of the first user interface object.
49. An information processing apparatus for use in an electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface, comprising:
means for displaying a plurality of user interface objects on the display, the plurality of user interface objects including a first user interface object and a second user interface object;
means for detecting a first press input corresponding to an increase in intensity of a contact on the touch-sensitive surface above a first intensity threshold while a focus selector is located over the first user interface object;
means for selecting the first user interface object in response to detecting the first press input; and
means for, after selecting the first user interface object:
detecting a second press input that corresponds to an increase in intensity of a contact on the touch-sensitive surface above a second intensity threshold while the focus selector is positioned over the second user interface object; and
in response to detecting the second press input, selecting the second user interface object and maintaining selection of the first user interface object.
50. An electronic device, comprising:
a display;
a touch-sensitive surface;
one or more sensors to detect intensity of contacts with the touch-sensitive surface;
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing any of the methods of claims 29-44.
51. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface, cause the device to perform any of the methods of claims 29-44.
52. A graphical user interface on an electronic device with a display, a touch-sensitive surface, a memory, one or more sensors to detect intensity of contacts with the touch-sensitive surface, and one or more processors to execute one or more programs stored in the memory, the graphical user interface comprising user interfaces displayed in accordance with any of the methods of claims 29-44.
53. An electronic device, comprising:
a display;
a touch-sensitive surface;
one or more sensors to detect intensity of contacts with the touch-sensitive surface; and
means for performing any of the methods of claims 29-44.
54. An information processing apparatus for use in an electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface, comprising:
means for performing any of the methods of claims 29-44.
55. An electronic device, comprising:
a display unit configured to display a plurality of user interface objects including a first user interface object and a second user interface object;
a touch-sensitive surface unit configured to detect gestures;
one or more sensor units configured to detect intensity of contacts with the touch-sensitive surface unit; and
a processing unit coupled to the display unit, the touch-sensitive surface unit, and the one or more sensor units, the processing unit configured to:
detecting a first press input corresponding to an increase in intensity of a contact on the touch-sensitive surface unit above a first intensity threshold while a focus selector is positioned over the first user interface object;
in response to detecting the first press input, selecting the first user interface object; and
after selecting the first user interface object:
detecting a second press input that corresponds to an increase in intensity of a contact on the touch-sensitive surface unit above a second intensity threshold while the focus selector is positioned over the second user interface object; and
in response to detecting the second press input, selecting the second user interface object and maintaining selection of the first user interface object.
56. The electronic device of claim 55, wherein:
the first press input corresponds to a first contact on the touch-sensitive surface unit; and
the second press input corresponds to a second contact on the touch-sensitive surface unit that is different from the first contact.
57. The electronic device of claim 56, the processing unit further configured to, after selecting the first user interface object and the second user interface object:
detecting liftoff of the second contact;
detecting a third press input corresponding to a third contact after detecting liftoff of the second contact; and
in response to detecting the third press input, deselecting the first user interface object and the second user interface object.
58. The electronic device of claim 55, wherein the first press input and the second press input are part of a single gesture that includes a continuously detected contact on the touch-sensitive surface unit.
59. The electronic device of claim 58, the processing unit further configured to, after selecting the first user interface object and the second user interface object:
detecting liftoff of the continuously detected contact; and
deselecting the first user interface object and the second user interface object in response to detecting liftoff of the continuously detected contact.
60. The electronic device of any of claims 55 and 57-59, wherein:
the first and second press inputs are part of a single gesture that includes a contact continuously detected on the touch-sensitive surface unit; and
the gesture includes an intermediate portion between the first press input and the second press input, the intermediate portion including movement of the continuously detected contact corresponding to movement of the focus selector from the first user interface object to the second user interface object.
61. The electronic device of any of claims 55-60, wherein the processing unit is configured to detect a range of contact intensity values and compare the detected intensity values to a plurality of different intensity thresholds, the plurality of different intensity thresholds including:
an alternative mode intensity threshold for use by the processing unit to transition from a first selection mode to a second selection mode; and
a selection intensity threshold used by the processing unit to distinguish between an input corresponding to movement of the focus selector on the display unit and an input corresponding to selection of a user interface object on the display unit at a location at or near the location of the focus selector, wherein the selection intensity threshold is different from the alternative mode intensity threshold.
62. The electronic device of claim 61, the processing unit further configured to, after selecting the first user interface object and the second user interface object:
detecting a third press input comprising an increase in intensity of a contact above the alternative mode intensity threshold; and
in response to detecting the third press input, deselecting the first user interface object and the second user interface object.
63. The electronic device of any of claims 61-62, wherein:
the first intensity threshold is the alternative mode intensity threshold; and
the second intensity threshold is the alternative mode intensity threshold.
64. The electronic device of any of claims 61-62, wherein:
the first intensity threshold is the alternative mode intensity threshold; and
the second intensity threshold is the selection intensity threshold.
65. The electronic device of claim 64, wherein:
the plurality of user interface objects includes a third user interface object representing a set of user interface objects; and
the processing unit is further configured to, after selecting the first user interface object and the second user interface object:
detecting a third press input corresponding to an increase in intensity of a contact on the touch-sensitive surface unit while a focus selector is positioned over the third user interface object; and
in response to detecting the third press input:
in accordance with a determination that the third press input includes an increase in intensity above the first intensity threshold, displaying a user interface having an area for adding the first user interface object and the second user interface object to the set of user interface objects represented by the third user interface object; and
in accordance with a determination that the third press input includes an increase in intensity to a maximum intensity above the second intensity threshold and below the first intensity threshold, selecting the third user interface object in addition to the first user interface object and the second user interface object.
66. The electronic device of any of claims 55-65, the processing unit further configured to:
displaying a first residual image at an original position of the first user interface object after selecting the first user interface object; and
displaying a second residual image at an original position of the second user interface object after selecting the second user interface object.
67. The electronic device of claim 66, the processing unit further configured to, after displaying the first and second residual images:
detecting an end of selection of the first user interface object and the second user interface object; and
in response to detecting the end of selection of the first user interface object and the second user interface object, displaying an animation of the representation of the first user interface object moving back to the first residual image and displaying an animation of the representation of the second user interface object moving back to the second residual image.
68. The electronic device of any of claims 66-67, the processing unit further configured to, after displaying the first and second residual images:
detecting a press input on a respective residual image; and
in response to detecting the press input on the respective residual image, deselecting a user interface object corresponding to the respective residual image.
69. The electronic device of any of claims 55-68, the processing unit further configured to:
after selecting the first user interface object, displaying a representation of the first user interface object adjacent to the focus selector; and
after selecting the second user interface object, displaying a representation of the second user interface object adjacent to the focus selector.
70. The electronic device of any of claims 55-69, the processing unit further configured to:
after selecting the first user interface object, changing display of the first user interface object to provide a visual indication that the first user interface object has been selected; and
after selecting the second user interface object, changing display of the second user interface object to provide a visual indication that the second user interface object has been selected.
71. A method, comprising:
at an electronic device with a touch-sensitive surface and a display, wherein the device includes one or more sensors to detect intensity of contacts with the touch-sensitive surface:
displaying a virtual keyboard on the display;
detecting a contact on the touch-sensitive surface;
while continuously detecting the contact on the touch-sensitive surface:
detecting one or more movements of the contact on the touch-sensitive surface, the one or more movements corresponding to movement of a focus selector over the virtual keyboard; and
for each respective key of a plurality of keys of the virtual keyboard, upon detection of the focus selector over the respective key of the plurality of keys:
in accordance with a determination that a character output criterion for outputting a character corresponding to the respective key has been met, outputting the character, wherein the character output criterion includes a respective intensity of the contact being above a first intensity threshold when the focus selector is detected over the respective key; and
in accordance with a determination that the character output criteria are not satisfied, forgoing outputting the character corresponding to the respective key.
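Reduced to the single intensity test recited in claim 71, the per-key output decision might look like the following sketch. Names are hypothetical, and the additional criteria of claims 72-74 are omitted.

```swift
// Hypothetical sketch of claim 71: while one contact is continuously
// detected, a key outputs its character only if the contact intensity
// exceeds the first intensity threshold while the focus selector is
// over that key; otherwise output is forgone.
func characterOutput(for key: Character,
                     contactIntensity: Double,
                     firstIntensityThreshold: Double) -> Character? {
    contactIntensity > firstIntensityThreshold ? key : nil
}

let typed = [("q", 0.2), ("w", 0.7), ("e", 0.3)]
    .compactMap { characterOutput(for: Character($0.0),
                                  contactIntensity: $0.1,
                                  firstIntensityThreshold: 0.5) }
// typed == ["w"]: only the key pressed above threshold emits a character
```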
72. The method of claim 71, wherein the character output criteria for outputting the character corresponding to the respective key comprises, while the focus selector is located over the respective key:
the contact corresponding to the focus selector is increased from an intensity below the first intensity threshold to an intensity above the first intensity threshold.
73. The method of any of claims 71-72, wherein the character output criteria for outputting the character corresponding to the respective key includes, while the focus selector is located over the respective key:
the contact corresponding to the focus selector is reduced from an intensity above the first intensity threshold to an intensity below a character output intensity threshold.
74. The method of claim 71, wherein the character output criteria for outputting the character corresponding to the respective key comprises, upon successive detection of the focus selector over the respective key:
the contact corresponding to the focus selector is increased from an intensity below the first intensity threshold and then decreased from an intensity above the first intensity threshold to an intensity below a character output intensity threshold.
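The claim 74 criterion combines a rise above the first intensity threshold with a later fall below a lower character-output intensity threshold. One possible tracking mechanism, purely illustrative (the claims prescribe no implementation), is a two-state detector:

```swift
// Hypothetical state machine for the claim 74 criterion: output fires
// only after the contact rises above the first intensity threshold and
// then drops below the (lower) character-output intensity threshold,
// all while the focus selector remains over the key.
struct KeyOutputDetector {
    let firstIntensityThreshold: Double
    let characterOutputThreshold: Double  // assumed below firstIntensityThreshold
    var armed = false                     // saw a rise above the first threshold

    mutating func update(intensity: Double) -> Bool {
        if intensity > firstIntensityThreshold {
            armed = true                  // rising edge detected
        } else if armed && intensity < characterOutputThreshold {
            armed = false                 // falling edge: criterion satisfied
            return true
        }
        return false
    }
}

var detector = KeyOutputDetector(firstIntensityThreshold: 0.6,
                                 characterOutputThreshold: 0.3)
let fired = [0.1, 0.7, 0.5, 0.2].map { detector.update(intensity: $0) }
// fired == [false, false, false, true]
```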
75. The method of any of claims 71-74, comprising, while continuously detecting the contact on the touch-sensitive surface:
detecting a first press input comprising detecting an increase in intensity of the contact above the first intensity threshold while the focus selector is over a first key; and
in response to detecting the first press input, outputting a character corresponding to the first key.
76. The method of any of claims 71-75, comprising, while continuously detecting the contact on the touch-sensitive surface:
detecting movement of the contact corresponding to movement of the focus selector over a second key, wherein a maximum intensity of the contact is below the first intensity threshold when the focus selector is over the second key; and
in response to detecting movement of the contact corresponding to movement of the focus selector over the second key, forgoing outputting a character corresponding to the second key, wherein the maximum intensity of the contact while the focus selector is over the second key is below the first intensity threshold.
77. The method of claim 75, comprising, while continuously detecting the contact on the touch-sensitive surface and after outputting a character corresponding to the first key:
detecting a second press input comprising detecting an increase in intensity of the contact above the first intensity threshold while the focus selector is over a second key; and
in response to detecting the second press input, outputting a character corresponding to the second key.
78. The method of any of claims 75 and 77, comprising, while continuously detecting the contact on the touch-sensitive surface and after outputting a character corresponding to the first key:
detecting a decrease in intensity of the contact below the first intensity threshold;
after detecting that the intensity of the contact decreases below the first intensity threshold, detecting a second press input that includes detecting that the intensity of the contact increases above the first intensity threshold while the focus selector is over the first key; and
in response to detecting the second press input, outputting the character corresponding to the first key again as additional output.
79. The method of any of claims 71-78, comprising, while continuously detecting the contact on the touch-sensitive surface:
detecting a plurality of inputs corresponding to a sequence of input characters;
in response to detecting the plurality of inputs, displaying an auto-correcting user interface for changing the character sequence to a modified character sequence;
while displaying the auto-correcting user interface, detecting an auto-correcting input that includes an increase in intensity of the contact above the first intensity threshold when the focus selector is positioned over a respective affordance in the user interface; and
in response to detecting the auto-correcting input, in accordance with a determination that the contact included in the auto-correcting input has an intensity above a second intensity threshold, performing a first operation associated with the sequence of characters, the second intensity threshold being above the first intensity threshold.
80. The method of claim 79, comprising, in response to detecting the auto-correcting input, in accordance with a determination that the contact included in the auto-correcting input has an intensity between the first intensity threshold and the second intensity threshold, performing a second operation associated with the sequence of characters, wherein the second operation is different from the first operation.
81. The method of claim 80, wherein:
the first operation comprises rejecting the modified character sequence; and
the second operation includes replacing the character sequence with the modified character sequence.
82. The method of claim 80, wherein:
the first operation comprises replacing the character sequence with the modified character sequence; and
the second operation includes rejecting the modified character sequence.
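Claims 79-82 describe a two-tier response to a press on the auto-correcting interface: intensities between the two thresholds trigger one operation, intensities above the second (higher) threshold the other, and claims 81 and 82 allow either assignment of accept/reject to the two tiers. A sketch under the claim 81 assignment, with hypothetical names:

```swift
// Hypothetical sketch of claims 79-81: the autocorrect response is
// chosen by where the press intensity falls relative to the two
// thresholds (the second threshold is above the first, per claim 79).
enum AutocorrectOperation {
    case rejectModifiedSequence        // "first operation" under claim 81
    case replaceWithModifiedSequence   // "second operation" under claim 81
}

func autocorrectOperation(intensity: Double,
                          firstThreshold: Double,
                          secondThreshold: Double) -> AutocorrectOperation? {
    precondition(secondThreshold > firstThreshold)
    if intensity > secondThreshold {
        return .rejectModifiedSequence
    } else if intensity > firstThreshold {
        return .replaceWithModifiedSequence
    }
    return nil  // below the first threshold: not an autocorrect input
}
```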
83. An electronic device, comprising:
a display;
a touch-sensitive surface;
one or more sensors to detect intensity of contacts with the touch-sensitive surface;
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for:
displaying a virtual keyboard on the display;
detecting a contact on the touch-sensitive surface;
while continuously detecting the contact on the touch-sensitive surface:
detecting one or more movements of the contact on the touch-sensitive surface, the one or more movements corresponding to movement of a focus selector over the virtual keyboard; and
for each respective key of a plurality of keys of the virtual keyboard, upon detection of the focus selector over the respective key of the plurality of keys:
in accordance with a determination that a character output criterion for outputting a character corresponding to the respective key has been met, outputting the character, wherein the character output criterion includes a respective intensity of the contact being above a first intensity threshold when the focus selector is detected over the respective key; and
in accordance with a determination that the character output criteria are not satisfied, forgoing outputting the character corresponding to the respective key.
84. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface, cause the device to:
displaying a virtual keyboard on the display;
detecting a contact on the touch-sensitive surface;
while continuously detecting the contact on the touch-sensitive surface:
detecting one or more movements of the contact on the touch-sensitive surface, the one or more movements corresponding to movement of a focus selector over the virtual keyboard; and
for each respective key of a plurality of keys of the virtual keyboard, upon detection of the focus selector over the respective key of the plurality of keys:
in accordance with a determination that a character output criterion for outputting a character corresponding to the respective key has been met, outputting the character, wherein the character output criterion includes a respective intensity of the contact being above a first intensity threshold when the focus selector is detected over the respective key; and
in accordance with a determination that the character output criteria are not satisfied, forgoing outputting the character corresponding to the respective key.
85. A graphical user interface on an electronic device with a display, a touch-sensitive surface, a memory, and one or more sensors to detect intensity of contacts with the touch-sensitive surface and one or more processors to execute one or more programs stored in the memory, the graphical user interface comprising:
a virtual keyboard displayed on the display;
wherein:
in response to detecting a contact on the touch-sensitive surface, while continuously detecting the contact on the touch-sensitive surface:
detecting one or more movements of the contact on the touch-sensitive surface, the one or more movements corresponding to movement of a focus selector over the virtual keyboard; and
for each respective key of a plurality of keys of the virtual keyboard, upon detection of the focus selector over the respective key of the plurality of keys:
in accordance with a determination that a character output criterion for outputting a character corresponding to the respective key has been met, outputting the character corresponding to the respective key to the graphical user interface, wherein the character output criterion includes a respective intensity of the contact being above a first intensity threshold when the focus selector is detected over the respective key; and
in accordance with a determination that the character output criteria are not satisfied, forgoing outputting the character corresponding to the respective key.
86. An electronic device, comprising:
a display;
a touch-sensitive surface;
one or more sensors to detect intensity of contacts with the touch-sensitive surface; and
means for displaying a virtual keyboard on the display;
means for detecting a contact on the touch-sensitive surface;
means for processing the contact while continuously detecting the contact on the touch-sensitive surface, the means comprising:
means for detecting one or more movements of the contact on the touch-sensitive surface that correspond to movement of a focus selector over the virtual keyboard; and
means, corresponding to each respective key of a plurality of keys of the virtual keyboard, for use upon detection of the focus selector over the respective key of the plurality of keys, the means comprising:
means for, in accordance with a determination that a character output criterion for outputting a character corresponding to the respective key has been met, outputting the character corresponding to the respective key, wherein the character output criterion includes a respective intensity of the contact being above a first intensity threshold when the focus selector is detected over the respective key; and
means for, in accordance with a determination that the character output criteria are not met, forgoing output of the character corresponding to the respective key.
87. An information processing apparatus for use in an electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface, comprising:
means for displaying a virtual keyboard on the display;
means for detecting a contact on the touch-sensitive surface;
means for processing the contact while continuously detecting the contact on the touch-sensitive surface, the means comprising:
means for detecting one or more movements of the contact on the touch-sensitive surface that correspond to movement of a focus selector over the virtual keyboard; and
means, corresponding to each respective key of a plurality of keys of the virtual keyboard, for use upon detection of the focus selector over the respective key of the plurality of keys, the means comprising:
means for, in accordance with a determination that a character output criterion for outputting a character corresponding to the respective key has been met, outputting the character corresponding to the respective key, wherein the character output criterion includes a respective intensity of the contact being above a first intensity threshold when the focus selector is detected over the respective key; and
means for, in accordance with a determination that the character output criteria are not met, forgoing output of the character corresponding to the respective key.
88. An electronic device, comprising:
a display;
a touch-sensitive surface;
one or more sensors to detect intensity of contacts with the touch-sensitive surface;
one or more processors;
a memory; and
one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing any of the methods of claims 71-82.
89. A computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by an electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface, cause the device to perform any of the methods of claims 71-82.
90. A graphical user interface on an electronic device with a display, a touch-sensitive surface, a memory, one or more sensors to detect intensity of contacts with the touch-sensitive surface, and one or more processors to execute one or more programs stored in the memory, the graphical user interface comprising user interfaces displayed in accordance with any of the methods of claims 71-82.
91. An electronic device, comprising:
a display;
a touch-sensitive surface;
one or more sensors to detect intensity of contacts with the touch-sensitive surface; and
means for performing any of the methods of claims 71-82.
92. An information processing apparatus for use in an electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts with the touch-sensitive surface, comprising:
means for performing any of the methods of claims 71-82.
93. An electronic device, comprising:
a display unit configured to display a virtual keyboard;
a touch-sensitive surface unit configured to detect a contact;
one or more sensor units configured to detect intensity of contacts with the touch-sensitive surface unit; and
a processing unit coupled to the display unit, the touch-sensitive surface unit, and the one or more sensor units, the processing unit configured to:
while continuously detecting the contact on the touch-sensitive surface unit:
detecting one or more movements of the contact on the touch-sensitive surface unit, the one or more movements corresponding to movement of a focus selector over the virtual keyboard; and
for each respective key of a plurality of keys of the virtual keyboard, upon detection of the focus selector over the respective key of the plurality of keys:
in accordance with a determination that a character output criterion for outputting a character corresponding to the respective key has been met, outputting the character, wherein the character output criterion includes a respective intensity of the contact being above a first intensity threshold when the focus selector is detected over the respective key; and
in accordance with a determination that the character output criteria are not satisfied, forgoing outputting the character corresponding to the respective key.
94. The electronic device of claim 93, wherein the character output criteria for outputting the character corresponding to the respective key comprises, while the focus selector is located over the respective key:
the contact corresponding to the focus selector is increased from an intensity below the first intensity threshold to an intensity above the first intensity threshold.
95. The electronic device of any of claims 93 and 94, wherein the character output criteria for outputting the character corresponding to the respective key includes, while the focus selector is located over the respective key:
the contact corresponding to the focus selector is reduced from an intensity above the first intensity threshold to an intensity below a character output intensity threshold.
96. The electronic device of claim 93, wherein the character output criteria for outputting the character corresponding to the respective key comprises, upon successive detection of the focus selector over the respective key:
the contact corresponding to the focus selector is increased from an intensity below the first intensity threshold and then decreased from an intensity above the first intensity threshold to an intensity below a character output intensity threshold.
97. The electronic device of any of claims 93-96, the processing unit further configured to, while continuously detecting the contact on the touch-sensitive surface unit:
detecting a first press input comprising detecting an increase in intensity of the contact above the first intensity threshold while the focus selector is over a first key; and
in response to detecting the first press input, outputting a character corresponding to the first key.
98. The electronic device of any of claims 93-97, the processing unit further configured to, while continuously detecting the contact on the touch-sensitive surface unit:
detecting movement of the contact corresponding to movement of the focus selector over a second key, wherein a maximum intensity of the contact is below the first intensity threshold when the focus selector is over the second key; and
in response to detecting movement of the contact corresponding to movement of the focus selector over the second key, forgoing outputting a character corresponding to the second key, wherein the maximum intensity of the contact while the focus selector is over the second key is below the first intensity threshold.
99. The electronic device of claim 97, the processing unit further configured to, while continuously detecting the contact on the touch-sensitive surface unit and after outputting a character corresponding to the first key:
detecting a second press input comprising detecting an increase in intensity of the contact above the first intensity threshold while the focus selector is over a second key; and
in response to detecting the second press input, outputting a character corresponding to the second key.
100. The electronic device of any of claims 97 and 99, the processing unit further configured to, while continuously detecting the contact on the touch-sensitive surface unit and after outputting a character corresponding to the first key:
detecting a decrease in intensity of the contact below the first intensity threshold;
after detecting that the intensity of the contact decreases below the first intensity threshold, detecting a second press input that includes detecting that the intensity of the contact increases above the first intensity threshold while the focus selector is over the first key; and
in response to detecting the second press input, outputting the character corresponding to the first key again as additional output.
101. The electronic device of any of claims 93-100, the processing unit further configured to, while continuously detecting the contact on the touch-sensitive surface unit:
detecting a plurality of inputs corresponding to a sequence of input characters;
in response to detecting the plurality of inputs, displaying an auto-correcting user interface for changing the character sequence to a modified character sequence;
while displaying the auto-correcting user interface, detecting an auto-correcting input that includes an increase in intensity of the contact above the first intensity threshold when the focus selector is positioned over a respective affordance in the user interface; and
in response to detecting the auto-correcting input, in accordance with a determination that the contact included in the auto-correcting input has an intensity above a second intensity threshold, performing a first operation associated with the sequence of characters, the second intensity threshold being above the first intensity threshold.
102. The electronic device of claim 101, the processing unit further configured to, in response to detecting the auto-correcting input, in accordance with a determination that the contact included in the auto-correcting input has an intensity between the first intensity threshold and the second intensity threshold, perform a second operation associated with the sequence of characters, wherein the second operation is different from the first operation.
103. The electronic device of claim 102, wherein:
the first operation comprises rejecting the modified character sequence; and
the second operation includes replacing the character sequence with the modified character sequence.
104. The electronic device of claim 102, wherein:
the first operation comprises replacing the character sequence with the modified character sequence; and
the second operation includes rejecting the modified character sequence.
HK15108890.7A 2012-05-09 2013-05-08 Device, method, and graphical user interface for selecting user interface objects HK1208540B (en)

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US201261688227P 2012-05-09 2012-05-09
US61/688,227 2012-05-09
US201261747278P 2012-12-29 2012-12-29
US61/747,278 2012-12-29
US201361778413P 2013-03-13 2013-03-13
US61/778,413 2013-03-13
PCT/US2013/040101 WO2013169877A2 (en) 2012-05-09 2013-05-08 Device, method, and graphical user interface for selecting user interface objects

Publications (2)

Publication Number Publication Date
HK1208540A1 (en) 2016-03-04
HK1208540B HK1208540B (en) 2019-11-08

Also Published As

Publication number Publication date
CN104487927B (en) 2018-04-20
CN109062488A (en) 2018-12-21
AU2013259637B2 (en) 2016-07-07
KR20170136019A (en) 2017-12-08
JP6592496B2 (en) 2019-10-16
AU2013259637A1 (en) 2014-12-04
EP3410287B1 (en) 2022-08-17
EP3410287A1 (en) 2018-12-05
JP6259869B2 (en) 2018-01-10
JP2016197429A (en) 2016-11-24
CN104487927A (en) 2015-04-01
AU2018204236B2 (en) 2019-05-16
KR20160127162A (en) 2016-11-02
AU2016204411B2 (en) 2018-03-15
CN106201316B (en) 2020-09-29
EP2847660B1 (en) 2018-11-14
KR101670570B1 (en) 2016-10-28
CN109062488B (en) 2022-05-27
EP3096218A1 (en) 2016-11-23
US12340075B2 (en) 2025-06-24
KR101806350B1 (en) 2017-12-07
JP6031186B2 (en) 2016-11-24
AU2016204411A1 (en) 2016-07-21
WO2013169877A3 (en) 2014-03-13
KR20150013264A (en) 2015-02-04
US20210191602A1 (en) 2021-06-24
US10969945B2 (en) 2021-04-06
US20150067602A1 (en) 2015-03-05
US10095391B2 (en) 2018-10-09
JP2018067334A (en) 2018-04-26
HK1207171A1 (en) 2016-01-22
WO2013169877A2 (en) 2013-11-14
US20190042075A1 (en) 2019-02-07
JP2015521317A (en) 2015-07-27
KR101956082B1 (en) 2019-03-11
EP2847660A2 (en) 2015-03-18
AU2018204236A1 (en) 2018-07-05
EP3096218B1 (en) 2018-12-26
CN106201316A (en) 2016-12-07

Legal Events

Date Code Title Description
PC Patent ceased (i.e. patent has lapsed due to the failure to pay the renewal fee)

Effective date: 20230508