Control your iPhone and iPad using your eyes and voice with Apple's upcoming AI-powered accessibility features


Why it matters: Apple has announced a batch of upcoming iOS and iPadOS accessibility features in recognition of Global Accessibility Awareness Day. While the new technology is aimed at people with disabilities, Eye Tracking and Vocal Shortcuts could prove useful to anyone in certain situations.

Apple devices, including MacBooks, have supported eye tracking for some time, but it has always required external hardware. Thanks to advances in AI, iPhone and iPad owners will be able to control their devices without any peripherals.

Apple's Eye Tracking uses the front-facing camera to calibrate and track eye movement. As users look at different parts of the screen, interactive elements are highlighted, and a tap is registered when the user's gaze lingers on an element – a feature Apple calls "Dwell Control." Eye Tracking can also mimic physical button presses and swipe gestures, and because it operates as a layer of the operating system, it works with any iPhone or iPad app.
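Apple hasn't published how Eye Tracking works under the hood, but the dwell idea itself is simple to sketch. The Swift snippet below is a minimal illustration of the concept only – the gaze source, dwell threshold, and tolerance radius are all hypothetical stand-ins, not Apple's API:

```swift
import CoreGraphics
import Foundation

// Hypothetical sketch of the "Dwell Control" idea: fire a tap when the
// gaze point stays within a small radius for a set duration. The gaze
// feed and tap handler are stand-ins; Apple has not published an API.
final class DwellDetector {
    private let dwellDuration: TimeInterval = 1.0   // assumed threshold
    private let tolerance: CGFloat = 30             // assumed radius, in points
    private var anchor: CGPoint?
    private var anchorTime: Date?

    var onDwell: ((CGPoint) -> Void)?

    // Call with each new gaze sample from the (hypothetical) eye tracker.
    func process(gazePoint: CGPoint, at time: Date = Date()) {
        if let anchor, hypot(gazePoint.x - anchor.x, gazePoint.y - anchor.y) <= tolerance {
            // Gaze is still lingering near the anchor point; check elapsed time.
            if let start = anchorTime, time.timeIntervalSince(start) >= dwellDuration {
                onDwell?(anchor)     // register a "tap" at the fixation point
                self.anchor = nil    // reset so we don't fire repeatedly
                anchorTime = nil
            }
        } else {
            // Gaze moved away; start timing from the new fixation point.
            anchor = gazePoint
            anchorTime = time
        }
    }
}
```

In practice the real feature would also need filtering for the natural jitter of eye movement, which is presumably where the AI-based calibration comes in.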

Vocal Shortcuts are another way for users to get hands-free control. Apple didn't provide a detailed explanation, but the feature looks easier to use than the existing Shortcuts system, which automates tasks ranging from simple to complex.

We have tested standard Shortcuts and found the system to be more trouble than it's worth because of the manual programming involved. A decent, wide selection of pre-made shortcuts from third-party providers would make the feature more appealing. Vocal Shortcuts seem easier to set up, but without a fuller explanation, it's hard to tell whether the feature simply adds voice activation to the normal Shortcuts functionality.
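Apple hasn't said how Vocal Shortcuts will hook into this machinery. For context, the existing Shortcuts system already lets third-party apps expose actions through the App Intents framework; the sketch below shows a minimal intent with a made-up "Start Focus Timer" action. If Vocal Shortcuts work as described, a custom spoken phrase would presumably trigger an action much like this one:

```swift
import AppIntents

// A minimal App Intent – the building block the existing Shortcuts app
// (and Siri) can invoke. "Start Focus Timer" is an invented example action.
struct StartFocusTimer: AppIntent {
    static var title: LocalizedStringResource = "Start Focus Timer"

    @Parameter(title: "Minutes", default: 25)
    var minutes: Int

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would start the timer here; this sketch just responds.
        return .result(dialog: "Focus timer started for \(minutes) minutes.")
    }
}
```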

Another addition to Apple's suite of voice-assistive technology is Listen for Atypical Speech. The setting lets Apple's voice recognition learn an individual user's speech patterns, helping Siri understand people who have trouble speaking due to conditions such as ALS, cerebral palsy, or stroke.

“Artificial intelligence has the potential to improve speech recognition for millions of people with atypical speech, so we are thrilled that Apple is bringing these new accessibility features to consumers,” said Mark Hasegawa-Johnson, principal investigator for the Speech Accessibility Project at the Beckman Institute for Advanced Science and Technology at the University of Illinois Urbana-Champaign.

All of these new AI-powered features rely on on-device machine learning. Biometric data is securely stored and processed locally and is never sent to Apple or iCloud.
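Apple hasn't detailed how Listen for Atypical Speech is trained, but the on-device processing model it relies on is already visible in the public Speech framework. As a rough sketch under those assumptions, the snippet below forces recognition to run entirely on the device; the function name and file-based input are illustrative, and permission prompts are omitted for brevity:

```swift
import Speech

// Sketch: request speech recognition that never leaves the device,
// mirroring the local-processing model Apple describes. Audio capture
// and SFSpeechRecognizer.requestAuthorization are omitted for brevity.
func transcribeLocally(fileURL: URL) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
          recognizer.supportsOnDeviceRecognition else {
        print("On-device recognition unavailable")
        return
    }

    let request = SFSpeechURLRecognitionRequest(url: fileURL)
    request.requiresOnDeviceRecognition = true   // keep audio and results local

    recognizer.recognitionTask(with: request) { result, error in
        if let result, result.isFinal {
            print(result.bestTranscription.formattedString)
        } else if let error {
            print("Recognition failed: \(error)")
        }
    }
}
```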

Apple has a few other features coming, including Vehicle Motion Cues to reduce motion sickness when using a phone in a moving vehicle, along with Voice Control, Color Filters, and Sound Recognition for CarPlay. VoiceOver, Magnifier, Personal Voice, Live Speech, and other existing accessibility features are also getting improvements.

Apple didn't give a specific rollout timeline beyond "before the end of the year." However, the company celebrates accessibility throughout May, so launching the new features sooner rather than later would make sense – the developers are probably in the final stretch of working out the kinks.
