About 8 years ago, people were discussing how helpful technology had been during the election. It was the first time that candidates intensively used social networks to reach voters and raise donations. On election night, The New York Times had an app showing everything we needed to know about the results. At the time, it was new and exciting. Now, in 2016, the discussion is about how damaging technology can be.

I worked as a journalist before becoming a programmer, and discussions about the limits of journalism were (and are) frequent. How far does one go for a story? When is omitting information better than publishing it? Is trying to be impartial worse than clearly supporting or opposing something? Those questions never get old. Different time periods will have different answers to them. And the criticism remains: media outlets are accused of not knowing how to read the polls and of not providing balanced coverage.

But while the criticism of traditional media is nothing new, the criticism of tech companies is unexpected. We are not talking about privacy and security (which are still important issues). Now, the discussion is about the role of algorithms in the result of an election: Google and Facebook have been blamed for allowing the spread of fake news.

An algorithm, of course, is not biased. It is just a bunch of code that works flawlessly, exactly as it is supposed to. But that does not mean it cannot, or should not, be changed when its use has consequences bigger than a search result. The accusation of giving too much space to fake news has led Mark Zuckerberg to make a statement and list some changes, including fact-checking and input from journalists, which shows that humans are not that unimportant after all.

Why does it matter?

All of this discussion might seem very far from our day-to-day work. We just want to drink our coffee and write code. But how would we feel if our work were used for something that hurt other people? Who are the programmers using their skills to build software for companies that promote scams? Are we OK with being asked to build software that decides whether some people will be able to rent a house or find a job? Would it be better if we knew that the software would be used by a company that needs to filter out thousands of candidates? Would that make a difference in our opinion?

Of course, whoever invented the airplane did not think about it being used in wars. The creator of a painkiller probably did not set out to cause addiction. Some philosophers didn’t foresee the use of their ideas by dictatorial regimes. When we write code, we cannot predict how it is going to be used. But at least acknowledging the consequences of what we do might be a step towards more meaningful conversations.

In journalism, the discipline of Ethics discusses the consequences of mass communication and indoctrination discourses. Those are things to always keep in mind, but the daily work poses other questions that need more immediate answers. And journalists are used to talking about those questions, maybe just because newspapers have been in people’s hands longer than smartphones. In technology, Ethics discusses the consequences of Artificial Intelligence, the use of drones, and other issues that have the potential to transform the way humanity is organized. But it still dismisses more immediate consequences with the childish argument that “if you don’t like it, don’t use it”.

The importance of diversity in tech

I recently read an article about software that claims to determine whether we are going to be criminals based on features of our faces. I read it this morning. I’m not talking about some study from the 1930s or 1940s, during the Nazi period in some countries in Europe. No. It is November of 2016. I guess the authors of the study are unaware of how dangerous it is that people are still being profiled based on their physical characteristics. Which, in this case, just shows the stupidity of the study.

Tech needs diversity to burst the bubble and put some people in contact with the world. I mean, it does because it does. It does because it is horrifying that any industry, not only tech, still chooses its teams based on bias. But, taking into consideration only the ethics, and none of the many other benefits of having a diverse team, it does because it would make some people realize that the world is not only about themselves. It does because more people would bring different perspectives to the discussion about the consequences of software like the facial-profiling one, or about how letting fake news be the first thing on a search results page can impact people’s lives.

Paved with good intentions

The interesting thing is that when we think about the tech industry, we think about a bunch of young, idealistic, smart people trying to make the world a better place. There is a lot of misconception in that idea, starting with ageism. Daniel Lyons, a writer for the show Silicon Valley and someone who experienced bias while working for a tech company, made probably one of the best criticisms of the idea that saying “don’t be evil” or “do the right thing” is enough to guarantee that a company does the right thing. Here is the “make the world a better place” scene from the show:

Silicon Valley, make the world a better place

