Unboxing Google’s 7 new principles on Artificial Intelligence

How many times have you heard that Artificial Intelligence (AI) is humanity’s biggest threat? Some people think that Google brought us a step closer to a dark future when it announced Duplex last month: a new capability of the Google Assistant that enables it to make phone calls on your behalf to book appointments with small businesses. You can see it in action here:

The root of the controversy lay in the fact that the Assistant successfully pretended to be a real human, never disclosing its true identity to the person on the other side of the call. Many tech experts wondered whether this is an ethical practice, or whether it is acceptable to hide the digital nature of the voice.


Google was also criticized last month over another sensitive topic: the company’s involvement in a Pentagon program that uses AI to interpret video imagery and could be used to improve the targeting of drone strikes. Thousands of employees signed a letter protesting the program and asking for change:

“We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.”

A “clear policy” around AI is a bold ask because none of the big players have ever done it before, and for good reason. AI is such a new and powerful technology that it’s still unclear how many areas of our lives we will dare to infuse with it, and it’s difficult to set rules around the unknown. Google Duplex is a good example: a technological development we would have considered “magical” 10 years ago scares many people today.

Regardless, Sundar Pichai not only complied with the request, but took it a step further by publishing 7 principles that the company will promote and enforce as one of the industry’s drivers of AI. Here are some remarks on each of them:

1. Be socially beneficial

For years, we have dealt with comfortable boundaries, creating increasingly intelligent systems in very focused areas. AI is now gaining the ability to switch between different domains in a way that is transparent to the user. For example, having an AI that knows your habits at home is very convenient, especially when your home appliances are connected to the same network. When that same AI also knows your habits outside the home, like your favorite restaurants, your friends and your calendar, its influence on your life can become scary. It’s precisely this convenience that is pushing us out of our comfort zone.

This principle is the most important one, since it vows to “respect cultural, social, and legal norms”. It’s a broad principle, but it’s intended to ease that uncomfortable feeling by adapting AI to our times and letting it evolve at the same pace as our social conventions do.

2. Avoid creating or reinforcing unfair bias

AI can become racist if we allow it. A good example of this happened in March 2016, when Microsoft unveiled Tay, an AI with a Twitter interface, and in less than a day people taught it the worst aspects of our humanity. AI learns by example, so ensuring that safeguards are in place to avoid this type of situation is critical. Our kids are going to grow up in a world increasingly assisted by AI, so we need to educate the system before it’s exposed to internet trolls and other bad actors.

3. Be built and tested for safety

This point goes hand in hand with the previous one. In fact, Microsoft’s response to the Tay fiasco was to take the bot down and admit an oversight in the type of scenarios it was tested against. Safety should always be one of the first considerations when designing an AI.

4. Be accountable to people

The biggest criticism Google Duplex received was about whether it is ethical to mimic a real human without letting other humans know. I’m glad that this principle simply states that “technologies will be subject to appropriate human direction and control”, since it doesn’t discount the possibility of building human-like AIs in the future.

An AI that makes a phone call on our behalf must sound as human as possible, since that’s the best way of ensuring a smooth interaction with the person on the other side. Human-like AIs should be designed with respect, patience and empathy in mind, but also with human monitoring and control capabilities.

5. Incorporate privacy design principles

When the convenience created by AI intersects with our personal feelings or private data, a new concern is revealed: our personal data can be used against us. The Cambridge Analytica incident, in which personal data was shared with unauthorized third parties, magnified the problem by jeopardizing users’ trust in technology.

Google didn’t use many words on this principle, probably because it’s the most difficult one to clarify without directly impacting their business model. However, it represents the biggest tech challenge of the decade: finding the balance between giving up privacy and getting a reasonable benefit in return. Providing “appropriate transparency and control over the use of data” is the right mitigation, but it won’t make us less uncomfortable when an AI knows the most intimate details of our lives.

6. Uphold high standards of scientific excellence

An open approach to building the AI systems of tomorrow is the best way of keeping any company honest. Providing access to “educational materials, best practices, and research that enable more people to develop useful AI applications” is a great commitment because it can expose problems faster and help find solutions sooner.

7. Be made available for uses that accord with these principles

It’s no accident that this last principle immediately precedes a section that outlines the applications Google promises not to pursue. Many people fear AI simply because they imagine what could go wrong if a faulty, uncontrollable system had the capability to make judgment calls on human behavior.

Google promises not to build AIs that can cause physical harm, that enable surveillance “violating internationally accepted norms”, or that violate laws or human rights.

These principles are an invaluable start and a commendable attempt to establish the rules that will drive us toward the future. But they are not enough; proper regulation is needed to protect consumers. We are at the beginning of a long journey with AI that will change us, and we need to push forward and ensure that the proper guidelines are in place. Self-regulation cannot be the only form of control over AI. We need to push the tech industry toward the highest standards, for our future’s sake.

You can read Google’s principles here.




Image via University of Canterbury


Fixing Facebook’s privacy problem

Facebook has been receiving criticism once again for how they handled users’ personal data. Here is a quick summary: in 2013, a third-party developer acquired large amounts of data from about 50 million users through an old platform capability (which Facebook itself removed one year later to prevent abuse); this data was then used to target US voters during the 2016 presidential election. The issue runs deep, and it highlights a bigger underlying problem: users’ privacy expectations are not aligned with the commitments of most tech companies.

Zuckerberg said in a recent interview with Wired, “early on […] we had this very idealistic vision around how data portability would allow all these different new experiences, and I think the feedback that we’ve gotten from our community and from the world is that privacy and having the data locked down is more important to people.”

Regardless, Facebook never committed to fully lock down users’ data, and their business model was in fact built around the value that data can have for advertisers through interest relevance and demographic targeting. Google and Facebook accounted for 73% of all US digital ad revenue in the second quarter of FY18, up from 63% two years before.

I can nonetheless relate to that idealistic vision of the relationship between privacy and technology. The more the Google Assistant knows about the music I like, the better it can personalize my listening experience. Richer actions become available too, like allowing me to control the Nest thermostat or the lights by voice. At the end of the day, I’m trusting Google with my music taste and the devices installed in my house, and I get the benefit of convenience in return.

Continue reading

I experienced the future of retail: Amazon Go

“The craziest thing I’ve seen is someone who came in dressed in a Pikachu costume”, said an Amazon employee as she handed me a promotional bag with the Amazon Go logo on one side and the text ‘good food fast’ on the other.

I arrived at the new store in downtown Seattle around 7:20 pm and was surprised to see that the line of people still reached the end of the block. It had been a cold day in Seattle, but that didn’t discourage the hundreds of people who came to see the ‘magical’ store on day 0. I don’t use the term ‘magical’ lightly here: the experience was truly unique, and it felt too good to be true. Amazon Go is probably the store with the most sensors on the planet right now, and it is intimidating:

The ceiling of the Amazon Go store

Each of those boxes on the ceiling is a camera connected to deep learning algorithms that analyze every move you make: which aisle you walk through, which items you grab to read and then return to the shelf, which items you put in your pockets or bag… everything to ensure you only get charged for what you take home. But also everything to ensure your shopping pattern is studied and well understood. Maybe not today, but that’s the inevitable next step and the ultimate dream for any retail store: knowing what their customers like and the type of advertisements that will work best on them.

Continue reading

Nintendo Labo: Thinking outside the box (or with the box?)

When Nintendo released the Switch last year, I was very surprised by what they had been able to achieve: taking the gaming industry for a spin (again). Once again they proved that they can innovate in a crowded space with deep-pocketed rivals. They built something fun and flexible that fits our new lifestyle, not by thinking about specs but by thinking about use cases. They understand that people still want to play, but no longer do it just in a living room, so Nintendo meets them where they are by providing play flexibility (great article about that here). Now, with Labo, they have done something I consider priceless: enabling kids to imagine, play and dream by connecting the physical world and the digital one.

I have to be honest: I did not buy the Switch right away, and when I did, I played it and then returned it. Sometimes there is a price for innovation. To me, the Switch has two big drawbacks. The first is the lack of games. I couldn’t care less about Zelda (yeah, yeah, hate me), and some of the other games are just “meh”. It was the release of Mario Odyssey that finally made me get one. I loved it: it was fun, I could play at home and take it with me. I bought my Switch just before my holiday trip and took it on the road, which meant playing with the Joy-Cons inserted to make a huge Game Boy. I’m a big guy, and I’m very jumpy and move around when playing. Towards the end of the trip my Switch started to break: my gameplay would stop every minute because the Joy-Cons would get disconnected (guess I can’t be that excited while playing). It turned out that the price to pay for the hardware’s flexibility was ruggedness. So when it was time to bring it back, I could either exchange it or return it, and I decided on the latter due to the lack of games.

I thought that would be the end of my Switch journey, but this week Nintendo announced Labo. Nintendo has always been great at thinking outside the box. Some of these products work (Wii, Amiibo) and some don’t (Virtual Boy, Wii U), and that is the price of trying new things. What amazes me is Nintendo’s relentless focus not on what the next big technology push could be, but on how to enable new ways to bring playfulness into our lives.

Continue reading

Would you give up your privacy for unlimited movies? interview with René Sánchez from CineSinFronteras.com

MoviePass is a subscription-based service that allows users to watch almost any movie in theaters for a flat monthly rate. In August, the company announced a surprisingly low price of $9.95, leaving many scratching their heads. I interviewed René Sánchez, cinema expert and movie critic at CineSinFronteras.com, and we discussed the privacy implications and the potential impact to the online streaming industry.

moviepass.png

Even though I’ve been using it for a month already, it still feels too good to be true. Were you surprised by the MoviePass announcement?

Yes, I was surprised by their announcement to reduce the monthly subscription price to just $9.95. It is such an amazing deal, especially when you consider that a regular 2D movie here in the Seattle metro area costs between $12 and $15. So even if you only watch one movie every month, you will be saving some dollars with MoviePass! What shocked me the most was that the major exhibitors and theater chains were on board with this change. I expected a lot of pushback from them, considering their old-school ways of operating. So far, only AMC has tried (and failed) to restrict the use of MoviePass in their theaters.

What’s the problem that MoviePass is trying to solve?

People don’t go to the movie theaters anymore. Studios and exhibitors keep blaming Netflix and other rival streaming platforms for their audience loss, instead of recognizing the real root cause: the movie-going experience has become very expensive and obsolete. Ticket prices rise every year (the same goes for concessions), studios keep releasing sequels and remakes no one asked for, and most multiplexes scream for renovations (uncomfortable seats, run-down interiors, and poor image and sound quality). To top it off, patrons can sometimes be rude and annoying.

Again, it’s really not Netflix’s fault that people want to stay at home, rather than going out to watch a movie. Who wants to pay more than $60 (including tickets, food and parking/Uber) to enjoy a mediocre movie in a rickety auditorium, while everyone else is either talking or staring at their phones?

Continue reading

Visiting the Oculus office in Seattle: is augmented reality (AR) or virtual reality (VR) the future of user interfaces?

Earlier this week I had the pleasure of visiting the Oculus Seattle office for a private tour, some cool demos and a very interesting conversation. During the whole visit, a question kept popping up in my mind: will augmented reality (AR) or virtual reality (VR) ever become the standard way of interacting with our desktop or mobile devices?

User interfaces have evolved over the years in very significant ways: we moved from punched cards to command-line interfaces, and from there to graphical interfaces, which ended up evolving into the mouse, keyboard and touch interfaces we know today. With recent advances in artificial intelligence, we are beginning to transition to conversational interfaces, where we can use natural language to get things done, sometimes without even touching a button or reading a line of text.

Is the future of user interfaces an (almost) invisible one? In many cases, yes. Just watch the 2013 movie Her to see a glimpse of where we will be in a few years (minus the “falling in love” part):

However, for many other tasks we will still need to read, type, touch and draw. This doesn’t mean that we will be tied forever to a screen, and here’s where VR and AR come in.

Continue reading

About the iPhone X notch controversy

The iPhone X was already controversial even before it was officially introduced last Tuesday, mostly due to the rumored removal of Touch ID in favor of Face ID.

However, Apple’s presentation caused a new controversy: the infamous notch. Even though the array of cameras and sensors had leaked long before the event, nobody knew what Apple was planning to do to integrate it with iOS 11. We have the answer now: Apple is so proud of that black bar that they decided to render the user interface around it.

Since Apple controls the operating system, they made sure it looks good with most first-party apps. But what happens with third-party content like a website? The notch gets in the way. Continue reading

3 reasons why I’m excited about the new iPhone

Apple will present the new iPhone this Tuesday and, as usual, most of the details have already been leaked.

What seems guaranteed is that we’ll see 3 models being introduced: the iPhone 7s, 7s Plus and a special edition to celebrate the 10th anniversary of the original iPhone. That special edition has been known until now as iPhone 8, iPhone Edition, or iPhone Pro, but the official name iPhone X has been confirmed (among other details) thanks to the final version of iOS 11 leaking.

These are the top 3 reasons why I’m excited about the iPhone X.

Continue reading

Facebook created a mess trying to take on Snapchat

If you use Facebook, Instagram, Messenger or WhatsApp, you have probably noticed recent updates that allow you to share a picture that expires after 24 hours.

Stories, Shared Days, or Status, all different names for the same feature across 4 different apps. This is what they look like side by side:

Facebook is trying to suffocate Snap by flooding every app they own with the one thing that made Snapchat special.

Continue reading

On Leadership

What is a leader? What traits do I want to make sure I have as a leader? What has my experience taught me so far? I’ve been trying to answer these questions and have been thinking about what leadership means to me. 

Even though the dictionary defines a leader as “the person who leads or commands a group”, I do not believe this simple definition captures what it means to be a true leader. A person who merely uses their “power” to intimidate others into getting things done is not a leader. Likewise, a person who uses fear to motivate people, or walks over others in a selfish attempt to achieve a goal, is not a leader either.

So, what qualifies someone as a great leader? Let’s start with a fact, one that many leaders refuse to acknowledge: there are no perfect leaders. We are human, and we have weaknesses. We are human, and we have strengths. The key is to recognize both, understand the pros and cons of each, and be a balanced leader.

Continue reading