Ethics and algorithms in a “smart” world
Yeah, algorithms, "smart software," and so on are here to stay, and that's a reality we'll have to accept sooner or later. As time passes, we increasingly find ourselves standing at the intersection of humanity and technology, in a clash of ethics and needs to resolve, with every aspect of our lives so closely connected to technology that we don't even understand the ways in which it touches them.
We humans as a species have tied our evolution to technology, and that's OK; but our most recent efforts have been skewed because we weren't paying attention to some invisible aspects of technology and people's relationships with it, and that is coming back to bite us.
In a recent study of more than 4,000 people, the Pew Research Center looked at how people feel about algorithms making decisions in society. It found "broad concerns over the fairness and effectiveness of computer programs making important decisions in people's lives," and that the top two emotions people most frequently report feeling are amusement and anger.
This echoes a previous study of 7,000 New York Times stories, which revealed that articles inducing anger or anxiety are more likely to make the paper's most-emailed list. It is also consistent with another study of more than half a million tweets on political topics, in which researchers found that tweets "that include moral and emotional language are more likely to spread within the ideological networks of the sender."
Ethics as a work in progress
Some of these "connections" are downright threatening. Developments in previously disjointed fields such as artificial intelligence (AI), machine learning, robotics, nanotechnology, 3D printing, and genetics and biotechnology will cause widespread disruption over the next five years, not only to business models but also to labor markets. That, in turn, breeds enormous anxiety and mistrust in the public eye, directed not just at technology but also at the people who create it, and in some cases with reason: there is great concern about bias and unfairness baked into algorithms, sometimes because even those who wrote the code don't know how it does its "thing."
We, the people who create this future, are the ones who bear the greater responsibility here, because we are the ones who decide who takes the blame when our algorithms fail, and how the technology we build impacts people's lives. It is a daunting task, because the ethical issues involved in managing and developing information technology are many, and they are increasingly complicated by the power of individuals and infrastructures. There are multiple efforts to address this new problem, but there is no single, all-encompassing set of standards covering the entire industry. Perhaps that's because, as Yonatan Zunger writes in the Boston Globe, "… [T]he field of computer science, unlike other sciences, has not yet faced serious negative consequences for the work its practitioners do."
Is there a way to involve ethics in algorithms?
I know it can be difficult to see where the line between right and wrong lies in our context, even more so when we are the ones walking it, since we are already biased toward our own work. But understanding the ethical principles, staying alert, and keeping ourselves humane about it is the starting point of a very long journey, one I want to take.