So please, stop dreaming about a J.A.R.V.I.S.-like bot. AI will never be a personal assistant that knows everything about you, that understands your environment, your feelings, and your needs. An AI assistant will forever be a digital system that produces complex, pleasant outputs only because someone coded every kind of linguistic input a human can produce; this kind of assistant will never really understand what’s happening. The most advanced AI possible is the one with the biggest relational and semantic database, tested (manually!) by real operators (read “The Humans Hiding Behind the Chatbots” by Ellen Huet).
Natural language isn’t the key
Machines that understand some plain-language commands and can anticipate some user needs are possible, but computers able to understand every kind of phrase a human utters are, sorry, nowhere near.
Just as everybody today can understand the icons on expensive glass plates called smartphones, in the same way we must create a simplified language for communicating with and using bots.
In my opinion, nobody wants to waste time talking with a bot, even if companies would love the idea of millions of assertive virtual salespeople talking 24/7 with customers. Instead, the most amazing feature of AI bots isn’t their humanity, but the fact that users can treat them without any courtesy, that they will memorize users’ tastes and credentials, and that they will anticipate users’ needs thanks to a few “natural language” commands and some Facebook profile analysis.
All this doesn’t mean that companies shouldn’t care about language per se, but that they should drive users toward a simplified language, for the following reasons:
a simple language is easier to explain in a short tutorial during the first chats
a simple language is faster and more efficient than natural language. If receiving a piece of information in a chat takes far more taps than searching for it on a website, the chatbot is going to fail
creating a sort of standard simplified language for all bots would ease their usage exponentially.
The users’ fruition model will resemble the one that drives sites like Yahoo Answers, Quora, or common FAQ pages, where content is organized and requested using the “How to…” and “What is…” format.
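As a minimal sketch of what such a simplified command language could look like, here is a tiny pattern-based matcher in the “How to…” / “What is…” format. All patterns and answers are hypothetical examples, not from any real bot framework:

```python
import re

# Hypothetical mapping from simplified "How to…" / "What is…" patterns
# to canned answers, as a FAQ-style bot would use.
FAQ = {
    r"how to (reset|change) my password": "Go to Settings > Security and tap 'Reset password'.",
    r"what is my account balance": "Your balance is shown on the Home tab.",
}

def answer(message: str) -> str:
    """Match a user message against the simplified-language patterns."""
    text = message.lower().strip("?!. ")
    for pattern, reply in FAQ.items():
        if re.fullmatch(pattern, text):
            return reply
    # Fall back to teaching the simplified language instead of guessing,
    # which is exactly the "tutorial during the first chats" idea above.
    return "Try asking with 'How to…' or 'What is…'."

print(answer("How to reset my password?"))
```

The point of the sketch is that a closed vocabulary is cheap to document, cheap to test manually, and fails loudly instead of pretending to understand.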
Over the last few years I have developed a strange professional syndrome.
Every time I use an object, I analyze its usability and functions, trying to learn or imagine improvements. Today the interaction between humans and machines is powered by all kinds of sensors that can interpret input like natural voice commands, object movement, touch, hands-free gestures, etc.
Today I want to introduce my concept for in-ear headphone touch gestures. As you can see in the following GIFs, I imagined turning the headphone cables into a control device dedicated to the four most common commands used while listening to music: volume up, volume down, next song, and previous song.
In designing the in-ear headphone touch gestures, I was inspired by “traditional” touch-pattern gestures and by emerging smart-clothing technology. I admit that sometimes, during my training sessions or in a crowded metro, I would have appreciated these gestures because I had no way to skip that shitty song that everyone has in their library.
Here is the in-ear headphone touch gestures concept.
Volume up: thumb + index finger down on the right cable
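At the firmware level, such a concept boils down to a lookup from detected cable gestures to player commands. A minimal sketch follows; note that only the volume-up mapping comes from the concept above, while the other three entries are my hypothetical placeholders for the remaining commands:

```python
# Hypothetical gesture vocabulary for the in-ear headphone concept:
# each (fingers, direction, cable) combination maps to a playback command.
# Only the first entry is specified in the concept; the rest are assumptions.
GESTURES = {
    ("thumb+index", "down", "right"): "volume_up",
    ("thumb+index", "up", "right"): "volume_down",      # assumed
    ("thumb+index", "down", "left"): "next_song",       # assumed
    ("thumb+index", "up", "left"): "previous_song",     # assumed
}

def dispatch(fingers: str, direction: str, cable: str) -> str:
    """Translate a detected cable gesture into a player command."""
    # Unrecognized gestures are ignored rather than guessed at,
    # so accidental brushes against the cable do nothing.
    return GESTURES.get((fingers, direction, cable), "ignore")

print(dispatch("thumb+index", "down", "right"))
```

Keeping the vocabulary to four deliberate two-finger gestures is what makes the mapping robust: a single-finger touch, the most common accidental contact, never matches.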
I work in Digital Communication, and I have worked on the functional and user experience design of websites, mobile applications, advergames, digital signage systems, and info kiosks.
I have loved cars and motorcycles since I was a child. I remember very well the “procedure” my parents had to apply just to start our old Fiat 500, the incredible interior design of my neighbour’s Renault 4, and the unintelligible style of the Motobecane Mobyx parked in my garage.
I think cars and motorcycles are the most impressive demonstration of humankind’s power of imagination and adaptation. Imagination, because whoever put together the technology necessary for the “autonomous run” of a four- or two-wheeled object was, to me, an artist, not an engineer. Adaptation, because driving a car or a motorcycle involves one of the most complex mixtures of unnatural gestures we have on Earth.
Soli is a Google project that enables users to interact with digital devices without touching them. Nothing really new, except for the hardware technology that concentrates everything into one small “piece of sand” on the device’s electronics board.
Using the same “approach” already employed by dolphins, whales, and bats, the Google researchers created a single chip that can identify and translate simple gestures into effective commands, even through other materials.
Last Saturday, September 20th, IDF Milan organized its 4th event, “Design beyond visual boundaries,” in collaboration with the Italian startup Horus Technology. The main objective of the workshop was to start designing the user interface of their product.
Horus is a device that supports visually impaired people like a virtual assistant. It will be mounted on normal glasses and will interact through bone-conduction audio and a manual controller with buttons. Horus will have two 5 MP cameras and a separate battery pack, and it won’t rely on internet/Bluetooth connections to function.
I have worked in Digital Communication since 2005. I have a humanistic university background, but instead of specializing in Content and Social Media, I have always studied technical, graphic, and design subjects.
Since March of this year, I decided to add User Experience Design to what I had learned so far, attending the courses of the Interaction Design Foundation, where I have also become the European Continent Manager, the Italian Country Manager, and the Milan Local Leader.
But why should a Digital Product Manager know about User Experience and Interaction Design?
Because in digital, everything is user experience and everything is communication.
From the pleasantness of the interface, the colors, and the images, to the ease of use of the functions, the readability of the content, and the response speed of the digital product: everything is User Experience.
To promote a discussion in the IDF Milan group, I analyzed ANSA’s design and wrote down a list of critical UX issues. But before the list, I want to state that I have never used the ANSA website and that, beyond all the criticism, I think that overall they did a good job, except for point 4.
1) The header buttons are links to different services, functions, and sites. They are not coherent and not very visible; they look like graphical elements.
2) The search field is not visible and could be confused with the other icons.
3) The “temi caldi” (hot topics) section is confusing because it sits in the breadcrumb position.
Please identify an example where a search box has been used in a design and outline the various stages of implementing the search box in this particular instance.
I chose the amazing WordReference translation service as an example. I have used it since 2004, and I clearly know the users’ need: to translate.
In 2001 it didn’t have the search box as the focus of the home page. The multiple search boxes were displayed in the narrow left column. There was not enough space for long words, and users had to scroll the page for certain languages. Moreover, the centre of the page was full of content that was absolutely not useful to users. Visit it on the Wayback Machine.