Currently, Apple's voice-activated iOS features -- both Dictation and Siri -- use the convoluted but less memory-intensive method of sending voice recordings to the cloud for processing. This means that Apple users need internet access in order to use voice dictation, and it can cause problems when Apple's servers are struggling. Being able to process voice locally and offline could make things easier, but the processing software is likely to take up a hefty chunk of storage that iPhone users may be reluctant to give up.
(Last month it was discovered that Apple is offering offline Dictation in its next desktop operating system, OS X Mavericks, at the cost of 785MB of disk space.)
iOS's inability to process voice locally has made Dictation (and Siri) far less practical tools than Apple would like; its enthusiastic championing of Siri in particular suggests that the company foresees a future of entirely voice-controlled mobile computing devices. And if the iWatch is ever to see the light of day, voice control on the go will be crucial.
Offline Dictation is still at the exploratory stage, clearly, but if Apple moves ahead with it, Siri can't be far behind.
Google, unsurprisingly, has been more adventurous in the features it offers in this department: Android Jelly Bean has included offline dictation since the middle of last year. Apple would love to be able to match its rival. Although we would argue that neither firm's voice offering is truly practical at this point, voice control is going to be massive in the future.