Samsung is due to launch its Galaxy S8 on March 29, and the handset will come with a series of preinstalled, Bixby-enabled apps

We're a little over a week away from the launch of Samsung's next-generation flagship handset, the Galaxy S8. The phone would typically have been announced at Mobile World Congress, but its release was pushed back following the furore surrounding the firm's 'exploding' Note 7 devices.

While there have been countless leaks claiming to show renders of the Galaxy S8, Samsung is yet to confirm the phone's hardware features. It has, however, released details about software on the handset: a voice-powered AI assistant called Bixby.

"Technology is supposed to make our lives easier, but as the capabilities of machines such as smartphones, PCs, home appliances and IoT devices become more diverse, the interfaces on these devices are becoming too complicated for users to take advantage of these many functions conveniently," said InJong Rhee, executive vice president, head of R&D software and services at Samsung. "Samsung has a conceptually new philosophy to target this problem: instead of humans learning how a machine interacts with the world, it is the machine that needs to learn and adapt to us."

Rhee went on to claim that Bixby is "fundamentally different" from other voice assistants, offering a more "in-depth" experience thanks to three properties: "completeness, context awareness, and cognitive tolerance".

Take completeness first: once an app is compatible with Bixby, you will be able to use your voice to control and manage "almost every task" that app is capable of, tasks that would otherwise be performed by touch. Samsung makes the point that most existing AI assistants support only a limited set of hand-picked tasks for each application. "The completeness property of Bixby will simplify user education on the capability of the agent, making the behaviours of the agent much more predictable," Rhee said.
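Samsung hasn't published developer documentation for Bixby, so there's no way to show the real API yet. But in rough terms, "completeness" suggests something like the sketch below, in which every action reachable from an app's touch UI is also registered as a voice-invocable command, so the assistant's coverage matches the interface's. All names here (VoiceCommandRegistry, GalleryAction and so on) are invented for illustration:

```kotlin
// Hypothetical sketch only: Samsung has not published a Bixby API, so every
// name here is invented to illustrate the "completeness" idea.

// Each user-facing action the app exposes through its touch UI...
enum class GalleryAction { OPEN_ALBUM, SHARE_PHOTO, DELETE_PHOTO, CROP_PHOTO }

// ...is also registered as a voice command, so the assistant can reach
// "almost every task" the app supports, not a hand-picked subset.
class VoiceCommandRegistry {
    private val handlers = mutableMapOf<GalleryAction, (Map<String, String>) -> Unit>()

    fun register(action: GalleryAction, handler: (Map<String, String>) -> Unit) {
        handlers[action] = handler
    }

    // Invoked once the assistant has resolved an utterance to an action
    // plus any slots (parameters) the user mentioned.
    fun dispatch(action: GalleryAction, slots: Map<String, String>) {
        handlers[action]?.invoke(slots)
            ?: println("No handler registered for $action")
    }
}

fun main() {
    val registry = VoiceCommandRegistry()
    registry.register(GalleryAction.SHARE_PHOTO) { slots ->
        println("Sharing photo with ${slots["contact"]}")
    }
    // "Share this photo with Alex" -> SHARE_PHOTO, {contact=Alex}
    registry.dispatch(GalleryAction.SHARE_PHOTO, mapOf("contact" to "Alex"))
}
```

The point of such a design, per Rhee's framing, is predictability: if every touch action has a voice equivalent, users don't have to memorise which subset of an app the assistant happens to understand.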

On context awareness, Samsung claims that when using a Bixby-enabled app, you will be able to call on Bixby at any time and it will understand the current context, meaning voice and touch commands can be mixed freely. "Most existing agents completely dictate the interaction modality and, when switching among the modes, may either start the entire task over again, losing all the work in progress, or simply not understand the user’s intention," said Rhee.
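Again, Samsung hasn't said how this works under the hood, but the claim implies that touch and voice feed the same in-progress task state rather than separate flows. A minimal sketch of that idea, with entirely hypothetical names, might look like this:

```kotlin
// Hypothetical sketch only: illustrates the "context awareness" claim.
// Touch and voice input update one shared task state, so switching
// modality mid-task never throws away work in progress.

data class MessageDraft(var recipient: String? = null, var body: String? = null)

class ComposeTask {
    val draft = MessageDraft()

    // Called from the touch UI, e.g. tapping a name in a contact picker.
    fun onTouchSelectRecipient(name: String) {
        draft.recipient = name
    }

    // Called when the assistant parses an utterance mid-task: it fills in
    // whichever slot the user supplied and leaves earlier work intact.
    fun onVoiceInput(slot: String, value: String) {
        when (slot) {
            "recipient" -> draft.recipient = value
            "body" -> draft.body = value
        }
    }
}

fun main() {
    val task = ComposeTask()
    task.onTouchSelectRecipient("Alex")                        // user starts by touch...
    task.onVoiceInput("body", "Running late, be there at 8")   // ...and finishes by voice
    println(task.draft)  // both inputs land in the same draft
}
```

Contrast that with the behaviour Rhee criticises, where switching from touch to voice would spin up a fresh task and discard the half-composed message.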