Unboxing Google’s 7 new principles on Artificial Intelligence

How many times have you heard that Artificial Intelligence (AI) is humanity’s biggest threat? Some people think that Google brought us a step closer to a dark future when it announced Duplex last month, a new capability of Google’s digital Assistant that enables it to make phone calls on your behalf to book appointments with small businesses. You can see it in action here:

The root of the controversy lay in the fact that the Assistant successfully pretended to be a real human, never disclosing its true identity to the person on the other end of the call. Many tech experts wondered whether this practice is ethical, and whether it’s acceptable to hide the digital nature of the voice.


Google was also criticized last month over another sensitive topic: the company’s involvement in a Pentagon program that uses AI to interpret video imagery and could be used to improve the targeting of drone strikes. Thousands of employees signed a letter protesting the program and asking for change:

“We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.”

A “clear policy” around AI is a bold ask because none of the big players have ever done it before, and for good reason. The technology is so new and powerful that it’s still unclear how many areas of our lives we will dare to infuse with it, and it’s difficult to set rules around the unknown. Google Duplex is a good example of this: a technological development we would have considered “magical” 10 years ago, yet one that scares many people today.

Regardless, Sundar Pichai not only complied with the request, but took it a step further by creating 7 principles that the company will promote and enforce as one of the industry’s AI leaders. Here are some remarks on each of them:

1. Be socially beneficial

For years, we have dealt with comfortable boundaries, creating increasingly intelligent entities in very focused areas. AI is now gaining the ability to switch between different domains in a way that is transparent to the user. For example, having an AI that knows your habits at home is very convenient, especially when your home appliances are connected to the same network. But when that same AI also knows your habits outside the home, like your favorite restaurants, your friends, your calendar, and so on, its influence on your life can become scary. It’s precisely this convenience that is pushing us out of our comfort zone.

This principle is the most important one, since it vows to “respect cultural, social, and legal norms”. It’s a broad principle, but it’s intended to ease that uncomfortable feeling by adapting AI to our times and letting it evolve at the same pace as our social conventions do.

2. Avoid creating or reinforcing unfair bias

AI can become racist if we allow it to. A good example of this happened in March 2016, when Microsoft unveiled Tay, an AI chatbot with a Twitter interface, and in less than a day people taught it the worst aspects of our humanity. AI learns by example, so ensuring that safeguards are in place to avoid this type of situation is critical. Our kids are going to grow up in a world increasingly assisted by AI, so we need to educate the system before it’s exposed to internet trolls and other bad actors.
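As a toy illustration of what one such safeguard can look like, here is a minimal sketch of filtering clearly abusive examples out of a training set before a model ever learns from them. The blocklist and training examples below are hypothetical placeholders; real systems rely on far more sophisticated learned toxicity classifiers:

```python
# Minimal sketch of a pre-training safeguard: drop training examples that
# contain blocklisted terms so the model never learns from them.
# BLOCKLIST is a hypothetical placeholder, not a real moderation list.
BLOCKLIST = {"hate", "stupid"}

def is_safe(example: str) -> bool:
    """Return True if no word in the example is blocklisted."""
    return not any(word in BLOCKLIST for word in example.lower().split())

def filter_training_data(examples: list[str]) -> list[str]:
    """Keep only the examples that pass the safety check."""
    return [ex for ex in examples if is_safe(ex)]

raw = ["have a great day", "I hate everyone", "what time is it?"]
print(filter_training_data(raw))  # the abusive example is dropped
```

A real deployment would combine this kind of filtering with learned classifiers and human review, but the idea is the same: curate what the system learns from.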

3. Be built and tested for safety

This point goes hand in hand with the previous one. In fact, Microsoft’s response to the Tay fiasco was to take the bot down and admit an oversight in the type of scenarios it was tested against. Safety should always be one of the first considerations when designing an AI.

4. Be accountable to people

The biggest criticism Google Duplex received was whether it was ethical to mimic a real human without letting other humans know. I’m glad that this principle just states that “technologies will be subject to appropriate human direction and control”, since it doesn’t discount the possibility of building human-like AIs in the future.

An AI that makes a phone call on our behalf must sound as human as possible, since that’s the best way to ensure a smooth interaction with the person on the other side. Human-like AIs should be designed with respect, patience, and empathy in mind, but also with human monitoring and control capabilities.

5. Incorporate privacy design principles

When the convenience created by AI intersects with our personal feelings or private data, a new concern is revealed: our personal data can be used against us. The Cambridge Analytica incident, in which personal data was shared with unauthorized third parties, magnified the problem by jeopardizing users’ trust in technology.

Google didn’t use many words on this principle, probably because it’s the most difficult one to clarify without directly impacting their business model. However, it represents the biggest tech challenge of the decade: finding the balance between giving up our privacy and getting a reasonable benefit in return. Providing “appropriate transparency and control over the use of data” is the right mitigation, but it won’t make us any less uncomfortable when an AI knows the most intimate details about our lives.

6. Uphold high standards of scientific excellence

An open approach to building the AI systems of tomorrow is the best way of keeping any company honest. Providing access to “educational materials, best practices, and research that enable more people to develop useful AI applications” is a great commitment because it can expose problems faster and help find solutions sooner.

7. Be made available for uses that accord with these principles

It’s no accident that this last principle immediately precedes a section outlining the applications Google promises not to pursue. Many people fear AI simply because they imagine what could go wrong if a faulty, uncontrollable system had the capability to make judgment calls on human behavior.

Google promises not to build AIs that can cause physical harm, that promote surveillance “violating internationally accepted norms”, or that contravene international law and human rights.

These principles are an invaluable start and a commendable attempt to establish the rules that will drive us towards the future. But they are not enough: proper regulation is needed to protect consumers. We are on the verge of a long journey with AI that will change us, and we need to push forward and ensure that the proper guidelines are in place. Self-regulation cannot be the only form of control over AI. We need to push the tech industry towards the highest standards, for our future’s sake.

You can read Google’s principles here.




Image via University of Canterbury


What I learned moving from building enterprise to consumer software

Last April I decided to take a big jump from building enterprise software to building consumer products. I am very grateful to have found a place that allowed me to learn the ropes of the consumer business without sacrificing any of the internal goals. This past year has been a great learning experience, and here are my key takeaways.

Enterprise vs Consumer? What’s the big deal? 

Building enterprise software is a different beast from building software for consumers. The two share several core components, such as requiring a secure, reliable infrastructure and following software best practices, including sprint models. However, I see three key differences.

Continue reading

4 lessons I learned losing money on Bitcoin

I looked at the “Buy Bitcoin” button and paused. Was I ready to do it? Had I read enough articles explaining what blockchain is? 2017 had just closed at an all-time high for cryptocurrencies, and according to many enthusiasts, it was just the beginning. I felt like I was missing out, so I pushed the button and sat back. I felt confident, but in reality, I had no idea what I was doing.

I had passively consumed news about Bitcoin for years, but I never went deep enough to properly understand the technology behind it and its potential. Even though I followed the ultimate rule of “invest only what you can afford to lose”, the truth is that I only began to comprehend blockchain technology after I had already gotten my feet wet. I started losing money shortly after my first order completed; these are the 4 lessons I have learned since then.


1. A big Bitcoin dive can drag the rest of the crypto market with it

There is so much speculation around cryptocurrencies, and so many people investing in them without having a clue, that a moment of panic can snowball into a sudden market crash. A Bitcoin crash can shake many investors’ confidence in other cryptocurrencies (or altcoins), dragging their prices down as well.

Many altcoins are variants of Bitcoin with small code differences, making their prices move practically in parallel with Bitcoin’s.

Continue reading

Amazon Go: A.I.’s grim face?

I have been waiting since college for RFID to deliver on its failed promise of a walk-away checkout experience, and Amazon finally made it possible. After reading my co-blogger’s experience in the Amazon Go store, I had to check it out for myself, and I was excited to do so. All my friends’ pictures were of long lines, but thankfully I am a morning person and there was no line when I got there. My goal was to pretend I had no idea what the store was or how it worked.

My experience overall was good, with the exception of the on-boarding process. I was greeted with a condescending “oh, you don’t have the app?” and was asked to step aside. My T-Mobile reception was very poor, so it took me a while to get started. Once I downloaded the app and signed into my Amazon account, everything was smooth. Mission accomplished! In this post, I’m not going to talk about the actual store (Ivan did a great job already) but about the implications of the first tangible and successful AI-automated store.


Exterior of the Amazon Go store

Automation has always been part of our history, and it has helped us evolve into the society we have now: we automated how we grow and harvest food to secure a stable food supply, the industrial revolution made things faster and cheaper, the assembly line made them faster and cheaper still, and computers automated processes and tasks. Now AI is here, and it will automate all of our productivity.

Continue reading

Are you rude to your virtual assistant?

2017 has been the year of the smart speaker. Amazon’s Echo Dot and Google’s Home Mini are currently selling for around $30, which makes them a popular Christmas gift. Using an Artificial Intelligence (AI) assistant has never been cheaper, and the technology is finally reaching critical mass.

Companies are investing in AI more than ever: natural language recognition still has a lot of room to improve, but the current algorithms are already impressive. My favorite example: it’s now possible to ask “how long would it take me to get to Starbucks on 15th Ave?” and get an accurate response with the right assumptions. What a time to be alive!

All of this progress comes with a side effect: having to learn how to talk to a machine. Often, people start talking without the wake-up keyword, and sometimes they forget to check whether the device is actually listening, getting confused when there is no response to their inquiry. Talking to a machine is not easy, and it is usually very unsatisfying.

Perhaps that dissatisfaction is what makes us less mindful of our manners when addressing an AI. What would you think if someone interrupted you mid-sentence with a sudden “STOP”? What if someone kept giving you orders relentlessly, never pausing to thank you? That’s how most of us talk to AIs like Alexa or Siri, never saying “please” or “thank you”.

Continue reading

What’s on my phone’s home screen?

2017 is almost over, so I wanted to talk about the apps that have occupied the most important space on my phone during the year, and whether or not I think they’ll still be there next year.

Let’s start with a screenshot of my home screen:

I place apps on my home screen based on how frequently I use them. I try to minimize the number of times I have to go to other pages of the home screen, so these are truly the apps that keep me going. But are all of these apps equally important to my daily tech routine? Will they stay in such a prominent position next year? Let’s break them into categories.

Connecting with friends & family

Messages, WhatsApp, Facebook Messenger and Mail are absolutely critical to stay connected with family and friends, especially those in other countries. I’m convinced that I’ll keep these around since they are literally the first thing I check every morning.

Facebook, Instagram, and Snapchat have been part of an interesting migration during 2017: most of my friends stopped posting on Facebook and became more active on apps where posts expire after 24 hours. So far, most of them are choosing Instagram, probably because it also keeps a classic profile of everlasting posts. Snapchat will have a hard time recovering from Instagram’s aggressive takeover of its features, so I would not be surprised if it doesn’t survive on my phone through next year.

Continue reading

About the iPhone X notch controversy

The iPhone X was already controversial even before it was officially introduced last Tuesday, mostly due to the rumored removal of Touch ID in favor of Face ID.

However, Apple’s presentation caused a new controversy: the infamous notch. Even though the array of cameras and sensors leaked long before the event, nobody knew what Apple was planning to do to integrate it with iOS 11. We have the answer now: Apple is so proud of that black bar that they decided to render the user interface around it.

Since Apple controls the operating system, they made sure it looks good with most first-party apps. But what happens with third-party content, like a website? The notch gets in the way. Continue reading

Tech interviews and how to cope with them

‘The recruiter will call you back soon,’ the fourth (and last) interviewer told me after a long day of interviews at the Microsoft campus.

I was pretty psyched about getting an offer and moving to Redmond. I wasn’t desperate (I think), but I definitely was a Microsoft fanboy willing to change his entire world to work there. I had decided to tell the recruiter that although I preferred a position developing Word’s next great feature, I was willing to take pretty much any job there.

‘Let’s go straight to the point: I accept your offer,’ I had practiced many times in front of a mirror. You can imagine my disappointment when the recruiter didn’t call me back, didn’t pick up my calls, and didn’t reply to my emails.

Though I am not a black belt at interviewing with big tech companies, I have had my share of reality checks:

You had me at ‘hello’. I found that getting an interview with the tech titans requires a lot more than building a nice resume and submitting it through their careers page. I don’t think I am overstating things when I say that this worked for me once in a hundred tries. On the other hand, having someone refer me internally worked more often than not, and reaching out to recruiters through LinkedIn also turned out to be a pretty good option. But by far the best way to get these companies’ attention is to already be in the club: once I joined Microsoft, other companies started poaching me.

Continue reading

Why did you stop posting on Facebook?

Many of my friends have stopped posting on Facebook. Some have uninstalled the app and others even deleted their accounts.

They are not posting on Twitter either, and the more ephemeral Snapchat hasn’t reached critical mass among my closest friends.

Instagram is the only place where I still get a glimpse of the most intimate side of the people I love the most, but I’d say only 20% of my online friends actively use it.

What causes someone to stop sharing on social media? Is it a natural part of being over 30? Or is there an actual problem with the platform? After talking to 12 of these friends, I learned that they fall into several groups.

Continue reading

I broke Facebook

I’m sitting on a train, on my way to Whistler, and I decide to check Facebook. I’m hoping to see what my friends did last night, or what their plans are for Thanksgiving, but this is what I see:

I see almost no personal posts or pictures that help me connect with my friends, with the people I love. Isn’t that Facebook’s mission? Instead, I get irrelevant stories.

Continue reading