"The algorithms will know who is an alcoholic before he knows it, just by analyzing the photos of Instagram"


He is 59 years old and his name is Ranga Yogeshwar. He is the son of a Luxembourgish artist and an Indian engineer. He spent his childhood in India, his adolescence in Luxembourg and his early adulthood between Switzerland and Germany. He studied astrophysics and experimental particle physics, worked at the Swiss Institute for Nuclear Research and at CERN, and for 35 years has dedicated himself to communicating scientific research.

He is a genuine star in Germany, where he has presented numerous radio and television programmes and where some of his books have sold over two million copies. The latest, titled "Next Season's Future", examines how science and technology will transform our lives and has just been published in Spain by the publisher Arpa.

We are surrounded by algorithms. Can algorithms modify human behaviour?

Yes, that can happen; the potential is there. There are fields in which we have to be very careful, because algorithms can in some way dictate our lives. Let me give you an example: in the coming years, health and life insurance companies will assess the risk that a given person falls ill, dies, and so on. As ever more data becomes available, they will be able to pinpoint these risks more precisely, and insurers will offer policy discounts to those customers who go running or do some kind of physical exercise. People will then go jogging in the morning not for fun or because they enjoy it, but because the insurer imposes it.

In fact there are already life insurance companies that give their customers rebates for exercising...

I know. And those insurers boast that they impose nothing, that their customers are free to choose. But what is clear is that if we are not careful, technology could end up becoming ever more oppressive; technology and algorithms can stop people from saying certain things or behaving in certain ways. In the coming years, algorithms will know virtually everything about us, in every aspect of our lives; we will be under ever closer watch. And one behaves differently when one knows one is being observed. I work in television and I am very well known in Germany; I know that if I go to a restaurant, people will be paying attention to what I do, what I order, whether I laugh, whether or not I am enjoying myself. If we know we are being watched, we behave differently. We are entering an era in which, as a society, we will gain in some respects, but we will lose freedom.

You argue that algorithms can even endanger democracy...

It is something we are already seeing. The main motivation of Facebook, Twitter and all the social networks is commercial.
Facebook's business is based on attention: getting users to spend as much time as possible on Facebook, because that both extends the time in which they are exposed to advertising and allows the company to gather more and more individual data about them. News organizations are now adopting these algorithms too; the major media use algorithms that push the most-read articles up the hierarchy into the preferred positions. It is not quality that matters, not even truth; what counts is the attention an article arouses. We are looking for mechanisms to capture more attention, and by now we all know that this increases the circulation of false news. Numerous studies, some from Harvard and MIT, show, for example, that on Twitter false news spreads six times faster, six, than true news. And when the priority is to win attention, there is an incentive to launch false news.

What is the effect of fake news at a social level?

The spread of false news, coupled with the existence of closed bubbles in which many people shut themselves in to reaffirm their beliefs, is dividing societies. Bubbles are being created in which each person is enclosed in his own truth, isolated, not taking part in common decisions. And that is very destabilizing for a democracy, because democracies need a platform of agreement at the social level.

In addition, the technological revolution is widening the gap between rich and poor, isn't it?

Twenty years ago, the most valuable companies in the world were oil and pharmaceutical firms. In 1998 the ranking was headed by General Electric, Exxon, Intel, Coca-Cola, Walmart... Now the most important companies in the world are Apple, Alphabet (that is, Google), Amazon, Facebook, Alibaba... Two of them, Apple and Amazon, are worth a trillion dollars each, a figure higher than the national budgets of many countries.
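The attention-driven ranking described here can be illustrated with a minimal sketch: stories ordered purely by engagement signals, with no quality or accuracy term at all. Everything below (the field names, the weights, the sample stories) is a hypothetical illustration, not any real platform's algorithm.

```python
# Toy sketch of attention-based ranking: order stories by raw engagement
# (clicks and shares), ignoring accuracy entirely. All data is hypothetical.

def rank_by_attention(stories):
    """Order stories by an engagement score; quality plays no role."""
    return sorted(
        stories,
        key=lambda s: s["clicks"] + 2 * s["shares"],  # shares weighted higher
        reverse=True,
    )

stories = [
    {"title": "Careful fact-checked report", "clicks": 120, "shares": 10},
    {"title": "Sensational false claim",     "clicks": 900, "shares": 400},
    {"title": "Routine policy update",       "clicks": 60,  "shares": 2},
]

for s in rank_by_attention(stories):
    print(s["title"])
```

Because the score rewards only attention, the sensational false claim rises to the top of the list, which is exactly the dynamic the interview describes.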
That is an enormous concentration in a very small number of companies, owned by a very small number of extremely rich people, while the rest of society in general is not seeing its wealth grow. In the world there is what is called a 'zero-sum game', in which what one person wins or loses comes at the expense of what somebody else wins or loses. If we look at the data, we see a breakdown of the middle classes and a widening of the gap between the very rich and the very poor. And in the coming years, with new algorithms and advances in artificial intelligence, the situation will worsen further, because there will be many jobs that machines do instead.

So automation will be accompanied by social destabilization?

Yes. Let me give you an example: a call centre. Within the next five years, call centres will not need people, because machines will be better than humans at that work. What will we do with the people who worked there? It is true that such things have always happened when new technologies arose, but for the first time, I think, we do not see viable alternatives. The changes are so fast, they happen at such a dizzying speed and in so short a time, that we have no time to find an alternative. And that can lead to enormous social instability; if the number of people unemployed or on very low incomes reaches a critical point, it is to be expected that there will be social conflict. In many countries it is already starting to have a destabilizing effect, with the emergence of parties that do not share democratic ideals. It is not something that happens only in Europe or in the United States with Donald Trump, but also in countries such as India; it is global.

Machines are beginning to learn by themselves, with their own patterns. Will they surpass us, will they dominate us?

We are now at a first stage in which machines are better than humans only at some very specific tasks, such as the classic example of chess or certain fields of medicine.
If you have an X-ray taken to determine whether you have a tumour, machines are already better than humans at interpreting those images. Little by little, machines will become better than us in more and more fields. Right now machines can speak in such a way that people cannot tell whether it is a machine or a human being. And in the future we will see astonishing things. Imagine, for example, an algorithm that takes the millions of pictures on Instagram and scans the faces: it will be able to predict who has a drinking problem even before the person who is an alcoholic is aware of it.

Honestly, that is a little frightening...

There are reasons to be alert. Machines, algorithms, are not neutral; they are subjective, dependent on the data fed into them. And that can give rise to problems of severe discrimination. We are already seeing it in some very important areas, such as crime prevention. In the United States there is a system called COMPAS that keeps track of people who have committed crimes and calculates the probability that they will reoffend. But it can be very discriminatory, and in fact COMPAS's predictions for black people are worse, much worse, than those for white people. And do you know why? Because its software is discriminatory, because the algorithm does not work well. That is dangerous, very dangerous. There are systems that work broadly but have flaws, and there is the temptation to use them in spite of everything; in fields of great social relevance, such as the law or the financial system, that can be terrible. A bank's computer system can deny someone credit without any compelling grounds for doing so, purely because of an algorithm.
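The kind of disparity described for COMPAS is often measured as a gap in false positive rates: the share of people in each group who did not reoffend but were nevertheless flagged as high risk. The sketch below computes that metric on entirely made-up data; the numbers have nothing to do with the real system and only illustrate how such a gap would be detected.

```python
# Toy fairness check: compare false positive rates across two groups.
# All records below are hypothetical; this is not COMPAS data.

def false_positive_rate(records):
    """Share of non-reoffenders who were wrongly flagged as high risk."""
    negatives = [r for r in records if not r["reoffended"]]
    if not negatives:
        return 0.0
    flagged = [r for r in negatives if r["predicted_high_risk"]]
    return len(flagged) / len(negatives)

group_a = [
    {"predicted_high_risk": True,  "reoffended": False},
    {"predicted_high_risk": True,  "reoffended": False},
    {"predicted_high_risk": False, "reoffended": False},
    {"predicted_high_risk": True,  "reoffended": True},
]
group_b = [
    {"predicted_high_risk": True,  "reoffended": False},
    {"predicted_high_risk": False, "reoffended": False},
    {"predicted_high_risk": False, "reoffended": False},
    {"predicted_high_risk": True,  "reoffended": True},
]

print(false_positive_rate(group_a))  # 2 of 3 non-reoffenders flagged
print(false_positive_rate(group_b))  # 1 of 3 non-reoffenders flagged
```

Even when overall accuracy looks similar, a gap between these two rates means one group bears far more wrongful "high risk" labels, which is the discrimination pattern the interview warns about.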
Updated Date: 28 October 2018, 19:01


