Hector Ouilhet, Google’s head of design for Search and Assistant, is part of a rare breed in the tech sector: he appreciates the sartorial grandeur of a fine necktie and regularly wears one to the office. Today, he’s chosen a tangerine-colored Hermès cravat crawling with blue alligators and parrots that perfectly matches the Pantone shade of the flower pinned to his left lapel. “Hermès ties are so intricate and interesting, and they usually help me strike up a conversation,” says Ouilhet. “And accents in somebody’s outfit say a lot of things about who they are.”
Ouilhet has routine days and experiment days, when he’ll either play it safe or start with an accessory, like the tie, belt, or pocket square, and build the ensemble from there. With age, he’s gotten more adventurous. “Years ago, there were certain colors I’d never wear together,” he says. “Now I wear things that don’t feel fantastic yet, but I can see will eventually come.”
An eye toward the future is also central to Ouilhet’s work at Google, where he leads the design team making the products and interfaces that a good chunk of the population will be using in the next few years. His career trajectory was anything but a straight line: he was born and raised in Mexico City, then moved to South Korea in his teenage years to live with a relative (it was a youth rebellion phase).
Later, he studied fine arts, sculpture, and computer engineering in Mexico, and then interaction design in Italy. Since 2008, he’s worked at Google, first in New York City and now in Mountain View, California, focusing on the intersection of design, communication, and technology. He remains one of the most stylish people in the Bay Area. 99U recently spoke with Ouilhet at Google’s San Francisco office about the company’s next-gen projects, including voice-controlled “conversational interfaces,” how his team is trying to make technology more human, and what he learns from watching his four-year-old daughter interact with his prototypes.
What’s the most exciting new idea you’re working on right now?
I’m currently working on how you design a platform that can give you the right answer no matter what you’re looking for. It could be something very specific, like “What is the weather right now?” Or something broader, like “When should I change the tires on my car?” Then, how do we apply that way of thinking to a new set of devices? That part is particularly interesting, because people like my four-year-old daughter won’t really know what certain devices are. She recently grabbed a keyboard and thought it was a guitar. She was touching the keys and asking, “Why is this not making music?” She saw a keyboard as an artifact from the past. I’m also excited to look at how you mimic this notion of human-to-human conversation in human-to-technology conversation.
Actual conversations, with back-and-forth dialogue where the machines understand us?
Oh, yeah; that is where we’re heading. Communication works with two main pieces: audio and visual. Depending on the device, we’ll be able to use both. Here’s an example: You go to a restaurant and you don’t know what to order, so you have two choices. One is that the waiter tells you the menu in a linear way. Or the waiter can give you a menu, and you scan it and are able to jump around, because the visual medium is nonlinear. So you go directly to the dessert. Or to the beer.
To find out more, you can ask the waiter. Or imagine if the menu gets to know you better: the next time you come in, the first thing you’ll see on your menu is the beer, then the dessert. The menu adjusts itself to what we know about you. We can then design things like, “Okay, it’s a rainy Thursday. You feel like whiskey?” “Yeah.” So the next time it’s a rainy Thursday, the whiskey shows up without you even asking.
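That “menu that gets to know you” behavior amounts to a simple context-keyed recommendation. Purely as an illustration of the idea, not anything Ouilhet or Google describes shipping, here is a minimal Python sketch; the Visit and GuestProfile names and the exact-match rule are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Visit:
    weather: str   # e.g., "rainy" or "sunny" (hypothetical context signal)
    weekday: str   # e.g., "Thursday"
    ordered: str   # what the guest ended up ordering

@dataclass
class GuestProfile:
    history: list = field(default_factory=list)

    def suggest(self, weather: str, weekday: str) -> Optional[str]:
        """Surface the item this guest most often ordered in a similar context."""
        matches = [v.ordered for v in self.history
                   if v.weather == weather and v.weekday == weekday]
        return max(set(matches), key=matches.count) if matches else None

# After a couple of rainy Thursdays, the menu leads with whiskey unprompted.
guest = GuestProfile()
guest.history += [Visit("rainy", "Thursday", "whiskey"),
                  Visit("rainy", "Thursday", "whiskey"),
                  Visit("sunny", "Saturday", "beer")]
print(guest.suggest("rainy", "Thursday"))  # -> whiskey
```

A real system would weigh many more signals and decay stale ones, but the shape is the same: context in, ranked suggestion out.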
What kind of timeline are we talking about here?
Five to ten years. I was in Berlin recently, and someone asked me if conversational interfaces would arrive in one big leap, a breakthrough. Well, no. It’s like human beings: A kid doesn’t suddenly become an adult. You go through these painful yet interesting learning phases. Same thing with technology: It’s going to learn from you, and you’re going to learn from it.
You mentioned your daughter earlier. What have you observed and learned from watching how a four-year-old interacts with your voice-recognition prototypes?
Kids have a constrained vocabulary, and they use context to say what they mean. A sign like this [points to the ground] can mean “Put me down” or “I’ve got something in my shoe.” It could also mean many different things, depending on the location in your house. If you translate that to technology, how can you use a device’s location or place to help you in the experience? Because technology, like kids, has a constrained vocabulary and understanding. How can you use these signals to make your experience better? Maybe the first thing you would tell your device is that you’re in the kitchen, so it knows you’re in the kitchen and is only going to say certain things. You start treating the device less as a general-purpose machine, like most phones are, and more as something specific, because this thing is in the bathroom, kitchen, or car. It’s fascinating to learn from little kids how much they rely on context to make sense of what they have in their hand.
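The room-aware device he describes can be pictured as a vocabulary lookup constrained by location. This is a hypothetical sketch of the design principle only; ROOM_COMMANDS and interpret are invented names:

```python
# Hypothetical per-room grammar: the same short utterance resolves
# differently depending on where the device sits.
ROOM_COMMANDS = {
    "kitchen": {"timer": "start a cooking timer", "list": "read the shopping list"},
    "bathroom": {"timer": "start a shower timer", "list": "read the toiletries list"},
    "car": {"timer": "estimate time to arrival", "list": "read today's errands"},
}

def interpret(utterance: str, room: str) -> str:
    """Resolve a constrained vocabulary against the device's known location."""
    commands = ROOM_COMMANDS.get(room, {})
    return commands.get(utterance, f"'{utterance}' isn't understood in the {room}")

print(interpret("timer", "kitchen"))  # -> start a cooking timer
print(interpret("timer", "car"))      # -> estimate time to arrival
```

The point of the sketch is the narrowing: a device that knows it lives in the kitchen only has to disambiguate kitchen-sized intents, just as a child’s pointing gesture is disambiguated by the room it happens in.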
You’ve said that Google technology must act more human. What do you mean by that?
I’m hoping that technology can get to know you, so the response you get from machines is better over time. Humans are predictable beings. Like when I ask for the temperature, the machine should know that I like Celsius because I’m from Mexico. I don’t know what Fahrenheit is. Things like that can make people appreciate that somebody’s listening. So if we’re talking and you make a reference to my daughter, I like that you’re trying to use the knowledge that we have of each other to enable a better conversation. That is how Search and Assistant should be, and are becoming, actually: more understanding of your intent. With that, we’re able to provide you with the right answer.
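The Celsius example reduces to letting a stored preference shape how the same fact is rendered. Here is a minimal sketch under that assumption; the user_profile fields are made up for illustration:

```python
# Hypothetical profile: the answer is the same fact, rendered in the
# units the user actually thinks in.
user_profile = {"home_country": "MX", "unit_system": "metric"}

def format_temperature(celsius: float, profile: dict) -> str:
    """Report a temperature according to the user's preferred unit system."""
    if profile.get("unit_system") == "metric":
        return f"{celsius:.0f} degrees Celsius"
    return f"{celsius * 9 / 5 + 32:.0f} degrees Fahrenheit"

print(format_temperature(22.0, user_profile))  # -> 22 degrees Celsius
```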
On a personal note, you grew up in Mexico City. Was there a big design scene there when you were growing up?
Not really. At the time I was really into fine arts; that’s what I wanted to study. I studied that for a bit because my mom is really good at it, and she encouraged me. But my dad was like, “You’re probably not going to make enough to live on in the fine arts.” He asked me to consider engineering, because I always loved tinkering with machines. My first business, which I started with some high school friends, was making digital yearbooks that we put on CD-ROMs instead of printing them. We took photos of everyone in the school, scanned the photos, and burned them onto CDs. We would stay up all night doing that. I liked the act of creating – bonding creativity and technology to make something powerful. I ultimately studied computer engineering at the University of the Americas Puebla, which, looking back, was the right choice.
Yet you’ve continued to dabble in the fine arts and even studied sculpture at one point. What impact has that had on your design process?
I love making something tangible, and now I apply that to how we work in our team at Google. We usually start our product reviews with a piece of paper the size of a table. Because something like Search is so deep and broad, we try to visualize it by drawing it, and then we draw on top of the original drawings to answer how we would code that design element. Drawing is a natural way to tell what you have in your head – once you see it, you can see your own gaps or your own possibilities.
This interview was originally published in 99U’s special issue for Adobe XD.