I’m following the evolution of self-driving technologies with great interest. Many automotive companies say that by 2020–2022 they will commercialize autonomous cars reaching Level 4 or 5 of the SAE International automated-driving standard.
Below is the table commonly adopted across the automotive industry.
Wired frames the Level 3 human problem very clearly: humans are not capable of maintaining attention when they are neither interested nor required to. Put simply, a crash in self-driving mode cannot be avoided by the intervention of a driver who, in the meantime, could be reading a newspaper or watching a video. Humans are simply too slow, and in that situation too distracted, to recognize the risk and avoid a crash.
In recent months the automotive world has been talking a lot about autonomous and self-driving vehicles, both for private and public transportation. During my daily research I found the exciting call for collaboration for Olli, the self-driving vehicle produced by Local Motors.
Designing the autonomous bus user experience is a complex task: first, because self-driving buses will serve public transportation’s traditionally diverse, multi-age audience; second, because without a driver and, in some cases, without a fixed route, passengers will have new functional and informational needs.
The first part of my project started with a Service Design session focused on what kinds of transportation services a self-driving bus could provide.
Personal on-demand shuttle
It’s like a Taxi/Uber, but less exclusive and more spacious. It takes one or more people from A to B. It can be reserved days in advance and can make several stops during a single dedicated trip. The service area is restricted.
Shared on-demand shuttle
It’s like a public transport service, except that passengers can add a personalized stop to the route within the bus’s service area. The route is dynamically optimized depending on users’ destinations and pick-up calls. This level of complexity makes the service ideal for closed areas like small districts, big company campuses, entertainment parks, etc.
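The dynamic route optimization described above could be sketched as a cheapest-insertion heuristic: when a pick-up call arrives, the new stop is slotted into the current route at the position that adds the least extra driving distance. This is a minimal illustration of the idea, not the actual Olli routing logic; all stop coordinates are invented.

```python
import math

def dist(a, b):
    """Straight-line distance between two (x, y) stop coordinates."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def insert_stop(route, new_stop):
    """Insert new_stop into route at the position that adds the
    least extra driving distance (cheapest-insertion heuristic)."""
    best_pos, best_cost = 1, float("inf")
    for i in range(1, len(route)):
        extra = (dist(route[i - 1], new_stop)
                 + dist(new_stop, route[i])
                 - dist(route[i - 1], route[i]))
        if extra < best_cost:
            best_pos, best_cost = i, extra
    return route[:best_pos] + [new_stop] + route[best_pos:]

# Current loop: depot -> stop A -> depot
route = [(0, 0), (4, 0), (0, 0)]
# A passenger requests a pick-up at (2, 1)
route = insert_stop(route, (2, 1))
print(route)  # [(0, 0), (2, 1), (4, 0), (0, 0)]
```

A real system would of course use road distances and time windows instead of straight-line geometry, but the incremental-insertion idea is the same.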
It’s exactly the same public transport service as we know it.
It’s like shipping objects with a courier, but instead of handing the package to a human, users schedule the shipment using an app or a dedicated device on the bus, then store the package in a secured compartment inside the vehicle. The recipient can track the shipment in real time and is alerted when the bus reaches the delivery point (or their front door). This service can be added to the “Shared on-demand shuttle”, or it can be configured as a standalone automated delivery service with customized buses and dedicated physical hubs.
This delivery service model is useful for companies that need to transport small parts across a relatively large site, or in modern cities to create fully automated shipping/delivery hubs connecting wholesale and retail stores.
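The “alert the recipient when the bus arrives” step boils down to a geofence check on the bus’s live position. Here is a minimal sketch under my own assumptions (the 50 m radius and the coordinates are invented; real tracking would use proper geodesic distance):

```python
import math

ALERT_RADIUS_M = 50  # assumed notification distance, in meters

def should_alert(bus_pos, delivery_pos):
    """True when the bus is inside the geofence around the delivery
    point. Positions are (lat, lon) in degrees; a flat-earth
    approximation is fine for distances of a few hundred meters."""
    dx = (bus_pos[0] - delivery_pos[0]) * 111_320            # deg lat -> m
    dy = (bus_pos[1] - delivery_pos[1]) * 111_320 * math.cos(
        math.radians(delivery_pos[0]))                        # deg lon -> m
    return math.hypot(dx, dy) <= ALERT_RADIUS_M

# Bus roughly 8 m from the delivery point -> alert fires
print(should_alert((45.4642, 9.1900), (45.4642, 9.1901)))  # True
```

The tracking app would poll the bus position and push a notification the first time this check turns true.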
After this first Service Design session, I started a user-centered analysis focused on self-driving bus passengers’ needs. To design a truly accessible service, I defined only “analogue” needs, excluding all the information and functions a smartphone app could provide. What follows is what my grandmother, or a manager with a dead smartphone, would need in order to use an autonomous bus.
What self-driving bus passengers need outside the bus
– Passengers need a purchase and reservation system that is digital (app), physical (signs at street stops) and gestural (raising a hand to hail the bus).
So please, stop dreaming about a J.A.R.V.I.S.-like bot. AI will never be a personal assistant that knows everything about you and understands the environment, your feelings and your needs. An AI assistant will always be a digital system that produces complex, pleasant outputs only because someone coded every kind of linguistic input a human can produce; this kind of assistant will never really understand what’s happening. The most advanced AI possible is the one with the biggest relational and semantic database, tested (manually!) by real operators (read “The Humans Hiding Behind the Chatbots” by Ellen Huet).
Natural language isn’t the key
Machines that understand some plain-language commands and can anticipate some user needs are possible, but computers able to understand every kind of phrase a human utters are, sorry, nowhere near.
Just as everybody today can understand icons on expensive glass plates called smartphones, in the same way we must create a simplified language for communicating with and using bots.
In my opinion, nobody wants to waste time talking with a bot, even if companies would love the idea of millions of assertive virtual salespeople talking with customers 24/7. Instead, the most amazing feature of bot AI isn’t its humanity, but the fact that users can address bots without any courtesy, that bots will memorize users’ tastes and credentials, and that they will anticipate users’ needs thanks to a few “natural language” commands and some Facebook profile analysis.
None of this means that companies shouldn’t care about language per se, but that they should guide users toward a simplified language, for the following reasons:
a simple language is easier to explain in a short tutorial during the first chats
a simple language is faster and more efficient than natural language. If getting a piece of information in a chat takes many more taps than searching for it on a website, the chatbot is going to fail
creating a sort of standard simplified language for all bots would ease their usage exponentially.
The usage model will resemble the one that today drives sites like Yahoo Answers, Quora, or common FAQ pages, where content is organized and requested using the “How to…” and “What is…” format.
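A bot built on this model doesn’t need natural-language understanding at all: it only has to match a handful of fixed question templates against a curated answer database. A minimal sketch, assuming an invented set of topics and answers:

```python
# Simplified-language bot: only "How to ..." and "What is ..."
# templates are recognized, matched against a hand-curated database.
# All topics and answers here are invented for illustration.
ANSWERS = {
    ("how to", "reset my password"): "Open Settings > Account > Reset password.",
    ("what is", "the return policy"): "You can return any item within 30 days.",
}

def answer(message):
    """Match a message against the supported question templates."""
    text = message.strip().rstrip("?").lower()
    for prefix, topic in ANSWERS:
        if text == f"{prefix} {topic}":
            return ANSWERS[(prefix, topic)]
    return "Sorry, try asking 'How to…' or 'What is…' about a listed topic."

print(answer("How to reset my password?"))
# Open Settings > Account > Reset password.
```

The fallback message doubles as the tutorial: it teaches the user the simplified language instead of pretending to understand free text.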
Generation Z is composed of young people currently between 12 and 17 years old. For the automotive industry they are worth an estimated 3.2 trillion dollars by 2020, so it’s really interesting to understand what they will look for in their future cars.
The most important insights that I read are:
92% want to own a car
they don’t care much about style and design
they recall some old-fashioned brands like Ford, Chevrolet and Honda for their solidity
they care more about saving money (on purchase and running costs) than about saving the environment
they care more about safety than infotainment
they’d like autonomous vehicles for increased safety, but they don’t trust the technology at all
they’ll buy their car at a dealership, not online
Looking carefully at the slide describing the generations, I found some new keys to interpreting Gen Z’s purchase intentions.
Over the last few years I have developed a strange professional syndrome.
Every time I use an object I analyze its usability and functions, trying to learn from it or imagine improvements. Today the interaction between humans and machines is powered by all kinds of sensors that can interpret input like natural voice commands, object movements, touch and hands-free gestures, etc.
Today I want to introduce my concept for in-ear headphone touch gestures. As you can see in the following GIFs, I imagined turning the headphone cables into a control device dedicated to the four most common commands used while listening to music: volume up, volume down, next song and previous song.
To design the in-ear headphone touch gestures, I was inspired by “traditional” touch gesture patterns and by emerging smart-clothing technology. I admit that sometimes, during my workouts or on a crowded metro, I would have appreciated these gestures because I had no way to skip that awful song everyone has in their library.
Here is the in-ear headphone touch gesture concept.
Volume up: thumb + index finger down on the right cable
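The concept can be modeled as a simple lookup from a detected (fingers, direction, cable) triple to a playback command. Only the “volume up” encoding comes from the concept above; the other three entries are my own assumption, mirroring it for illustration:

```python
# Hypothetical mapping of cable touch gestures to playback commands.
# "Volume up" reflects the original concept; the other three are
# assumed mirror-image gestures for the sake of the sketch.
GESTURES = {
    ("thumb+index", "down", "right"): "volume_up",
    ("thumb+index", "up", "right"): "volume_down",
    ("thumb+index", "down", "left"): "next_song",
    ("thumb+index", "up", "left"): "previous_song",
}

def handle_gesture(fingers, direction, cable):
    """Translate a detected cable gesture into a player command,
    ignoring anything outside the four-command vocabulary."""
    return GESTURES.get((fingers, direction, cable), "ignore")

print(handle_gesture("thumb+index", "down", "right"))  # volume_up
```

Keeping the vocabulary this small is the point: like the simplified bot language, a four-gesture alphabet is learnable in seconds and robust against accidental touches.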
I work in Digital Communication and I’ve worked on the functional & user experience design of websites, mobile applications, advergames, digital signage systems and info kiosks.
I have loved cars and motorcycles since I was a child. I remember very well the “procedure” my parents had to perform to start our old Fiat 500, the incredible interior design of my neighbour’s Renault 4, and the unintelligible style of the Motobecane Mobyx parked in my garage.
I think cars and motorcycles are the most impressive demonstration of humankind’s powers of imagination and adaptation. Imagination, because whoever put together the technology needed to make a four- or two-wheeled object run autonomously was, to me, an artist, not an engineer. Adaptation, because driving a car or a motorcycle is one of the most complex mixtures of unnatural gestures we have on earth.
I have worked in Digital Communication since 2005. I have a humanistic university background, but instead of specializing in content and social media I have always studied technical, graphic and design subjects.
I have worked in digital communication since 2005 and, despite my strongly humanistic training, instead of specializing in content and social media I have always tried to deepen my knowledge of technical, graphic and design topics.
Since March of this year I have decided to add User Experience Design to what I had learned so far, attending the courses of the Interaction Design Foundation, for which I have also become European Continent Manager, Italian Country Manager and Milan Local Leader.
But why should a Digital Product Manager know about User Experience and Interaction Design?
Because in digital everything is user experience and everything is communication.
From the pleasantness of the interface, the colors and the images, to the ease of use of the functions, the readability of the contents and the response speed of the digital product: everything is User Experience.
I love to design even though I can’t develop my projects, because I can’t code. I have hundreds of sheets full of ideas, functional requirements and wireframes, but writing them up properly is really time-consuming and I never have enough time.
For the Drivin’ project I’m making an exception. I really want to turn my sheets into a real service, so I decided to write a presentation to introduce my idea and look for partnerships.
First of all, what is Drivin?
– Drivin’ is a service that helps users share car routes with their friends through social networks
– Drivin’ is a platform that puts people with similar transportation needs in touch with each other
– Drivin’ is a service that creates a new trusted car-pooling network and lays the foundation for a neighborhood social platform
Read the full presentation on Slideshare and, if you are interested, read to the end of this page.
I’m looking for someone who can help me develop Drivin’.
If you are a freelance coder or a company, contact me!
We could collaborate on building something socially meaningful that could become a startup.
PS I really believe in this project and I don’t care about intellectual property. I trust in web knowledge sharing and in execution. If someone copies Drivin’, it will never be the same as what I designed 🙂
I hope this transition will help the open source/data world like never before. It’s the first time an open-source project can be improved simply by using an easy, well-known and fun application like Foursquare.
The first reason for the change they considered is the economic one. Last October Google announced that sites exceeding 25,000 map loads per day would have to pay through the Google Premier API. Thor Mitchell wrote that this change affected just 0.35% of the world’s websites, but for that small group the pricing plans were really high.
Ed Freyfogle of Nestoria had the same experience as Foursquare and solved the problem in the same “open” way. In his post he wrote strong words about the commercial attitude of the Google employees who represent the real value of the map services for web and mobile.
Unfortunately Google’s sales process was not good. Having agreed to a time for a call, the sales rep missed the appointment with no warning, instead calling me 45 minutes late. It was quickly obvious he had done no research whatsoever about our service, what we do, or even where (in which countries) we do it. He was unable to explain the basics of the new charging regime – for example, what exactly is a “map-view”, telling me instead to “ask your developers”. Finally he quoted a price to continue using Google Maps (just on nestoria.co.uk, one of eight countries we operate in) that would have bankrupted our company.
The comments on the Foursquare post focus on OSM’s poor mapping of places like New Mexico and São Paulo, while some Russian regions are mapped better than on Google Maps. What does that mean?
Civil earth mapping is still generally incomplete, and its quality depends on the entity promoting it. Google Maps focuses on a commercial scope because it runs on advertising and business applications. OpenStreetMap focuses on the openness of its platform, but depends entirely on its users: advanced digital users like the Russians 🙂
Unfortunately, the paradox is that Google Maps’ success is due to how easy it is to personalize and integrate on the web. Everyone can build a little map or add their company to it. Using Google Maps gives a sense of openness that isn’t real!
When users put data on Google Maps, they hand their information to BigG, which first tracks their profiles and then fills its local databases. Yes, it gives us a great map service, but this isn’t open: it’s a business product that users are building with their data and heavy usage.
If I were OpenStreetMap, I would focus on small tools that ease access to the maps: maps for personal websites, small mobile applications, and so on. As long as companies are forced to consider OSM a Google Maps alternative only for economic reasons, the OSM project will probably never fill the mapping holes in its atlas.
Open source and open data should be easy and fun.
Foursquare has a unique opportunity to spread an open-source project that needs to be fed with users, gamification, badges, specials and so on. Foursquare could even innovate its own system by developing a personal reality-mapping function that goes beyond the simple check-in. Users could build their own maps using Foursquare’s mobile technology mixed with the OSM map framework, and then open e-commerce corners or sell local ads.