Apple Watch
watchOS 6 allows the Apple Watch to do more independently of the iPhone, with the launch of its own App Store and the ability for developers to build apps specifically for the watch.
Neat new use cases include a calculator app that splits bills and works out tips, and 'taptic' chimes, which give the user a gentle pulse to mark the passing of the hour.
Why are we excited?
Developing bespoke products for voice and wearable interfaces significantly opens up the range of use cases and interactions. Freed from being ancillary to apps on people's home screens, developers can create propositions for problems a wearable is uniquely placed to solve. It also continues the Apple Watch's long march towards being an independent device - the watch is likely to be a key component in the post-iPhone ecosystem of the future.
Health
watchOS developments also see Apple continuing to add to its health-tracking features. After its roundly mocked omission from previous releases, Apple is finally adding menstrual tracking, with an optional fertility-tracking component. Long-term, comparative tracking of exercise and fitness levels, at far more granular detail than has previously been offered, will enable much better tracking and coaching opportunities to improve people's health.
Why are we excited?
The Apple Watch is a natural home for many health and fitness applications, and its identity as a device in its own right has some interesting implications. An app designed specifically for the Apple Watch can leverage the range of biometrics supported by the platform in a way that the iPhone or iPad can't. And we really haven't scratched the surface of genuine medical use cases for these devices. With increasingly granular data and growing medical evidence of their value, we're hoping to see more and more health organisations making use of them.
SwiftUI
Undoubtedly a highlight for developers was the announcement of SwiftUI. SwiftUI is a declarative framework built to make building user interfaces with Swift, Apple's programming language, even faster and more immediately interactive, with live previews and some drag-and-drop editing in Xcode. It relieves developers of some of the most time-consuming, manual coding tasks, freeing them to focus on higher-value features. It also makes things like simple prototyping faster, and more accessible to non-developers.
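To give a flavour of what that looks like in practice, here is a minimal sketch of a SwiftUI view (the view name, labels and figures are purely illustrative): the interface is declared as a function of its state, and Xcode renders it live as you edit.

```swift
import SwiftUI

// Illustrative only: a tiny bill-and-tip view.
// The UI below is a declaration of what to show for the current state;
// SwiftUI re-renders it automatically whenever `bill` changes.
struct TipView: View {
    @State private var bill = 40.0            // example state driving the view

    var body: some View {
        VStack(spacing: 8) {
            Text("Bill: £\(bill, specifier: "%.2f")")
            Slider(value: $bill, in: 0...200)  // two-way binding to the state
            Text("Tip (12.5%): £\(bill * 0.125, specifier: "%.2f")")
        }
        .padding()
    }
}
```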
Why are we excited?
Lowering the effort required for simpler tasks frees up developer time for tougher, more valuable challenges. It also lowers the barrier to innovation by enabling faster prototype and proof-of-concept builds. Lightweight initial builds enable a tighter feedback loop, so products are built in an increasingly lean and efficient way - and you can try new things at lower risk.
Machine Learning
Among other enhancements to Apple's machine learning toolkit, Core ML 3 will for the first time allow developers with no machine learning expertise to use its preset components to build a range of models directly into apps, with on-device training that personalises them to each user.
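As a rough illustration of that on-device training flow (the function and argument names here are hypothetical, and it assumes a compiled, updatable model shipped with the app), Core ML 3's MLUpdateTask takes a model plus a batch of the user's own examples and writes a personalised model back to the device:

```swift
import CoreML

// Sketch: personalise a bundled, updatable model with this user's data.
// `modelURL` points at a compiled .mlmodelc; `trainingData` wraps the
// user's own examples. Nothing leaves the device.
func personaliseModel(at modelURL: URL, with trainingData: MLBatchProvider) throws {
    let task = try MLUpdateTask(forModelAt: modelURL,
                                trainingData: trainingData,
                                configuration: MLModelConfiguration()) { context in
        // Save the updated model so future predictions reflect this user's history.
        try? context.model.write(to: modelURL)
    }
    task.resume()
}
```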
Core ML capabilities are powering a range of performance and interface improvements in Apple's own services. tvOS will gauge what kind of film you might fancy watching based on your viewing history and serve you related content, much like Netflix's recommendation engine. Apple's native Photos app will use ML classification to organise images rather than leaving users to trawl through increasingly voluminous albums. Elsewhere, the HomePod's voice recognition will be used to identify each member of the household and deliver them personalised content - their calendar, reminders, and notes.
Why are we excited?
Core ML 3 is a powerful tool which can be easily embedded into apps to create deeply personalised experiences. Device-specific training means users drive their own interactions, prompting more frequent and more rewarding engagement with apps. It also strengthens any recommendations with the weight of the user's own experience and history.
Accessibility - voice
Apple's main accessibility news doubles up as another use case for its on-device ML functionality. Voice Control - which lets users control their devices entirely by voice - employs Siri's speech recognition technology. It is entirely personalised, training on and accepting only a single user's voice cues.
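For app developers, supporting Voice Control well largely comes down to good accessibility labelling. As a small, hedged example (the button and label strings are made up), iOS 13 lets you supply the alternative names a user might speak for a control:

```swift
import UIKit

// Hypothetical example: a payment button a Voice Control user might address.
let payButton = UIButton(type: .system)
payButton.setTitle("Pay now", for: .normal)

// Read aloud by VoiceOver.
payButton.accessibilityLabel = "Pay now"

// Alternative names a Voice Control user can speak, e.g. "Tap Checkout" (iOS 13+).
payButton.accessibilityUserInputLabels = ["Pay now", "Pay", "Checkout"]
```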
Why are we excited?
We make better apps for a wider range of users when we design with accessibility in mind - whether an impairment is permanent (e.g. someone who is partially sighted) or situational (e.g. someone working in poor light, or needing to do something with their hands full). Any tool that helps make our products available to a wider audience is valuable. Find out more about how our teams took an accessibility-first approach to building the new groceries app for Tesco.