by Dr. Ruwantissa Abeyratne
The greater the freedom of a machine, the more it will need moral standards
Rosalind Picard, Director, Affective Computing Group, MIT
(April 7, 2018, Montreal, Sri Lanka Guardian) The march of artificial intelligence (AI) into business is an incontrovertible fact. The latest issue of The Economist points out that, more and more, businesses are using AI to forecast demand, hire people and deal with customers. AI also adds economic value to a business. On the positive side, AI could be used to enhance productivity; achieve better schedules for both meetings and employee work hours; evaluate the speed with which employees finish their tasks; monitor conduct within the workplace; and ascertain when employees goof off. It can also determine when a staff member deserves a pay rise and flag patterns that the human brain might miss.
However, the use of AI in the workplace could also have questionable consequences, as reflected in The Economist, which records some startling facts in its opening article, titled “AI-spy”. For instance, the wristband that Amazon has introduced for line workers, which tracks their hand movements and goads them to work faster, could be seen as the modern, more humanitarian equivalent of the measures slave owners used to force slaves to work faster. The Economist goes on: “[A]nd surveillance may feel Orwellian—a sensitive matter now that people have begun to question how much Facebook and other tech giants know about their private lives. Companies are starting to monitor how much time employees spend on breaks. Veriato, a software firm, goes so far as to track and log every keystroke employees make on their computers in order to gauge how committed they are to their company. Firms can use AI to sift through not just employees’ professional communications but their social-media profiles, too. The clue is in Slack’s name, which stands for ‘searchable log of all conversation and knowledge’.”
Sutapa Amornvivat, CEO of an AI-powered analytics company in Bangkok, cautions us about what she calls the confirmation bias that AI could impose on us. This occurs when AI, which is essentially data-driven, establishes patterns that enable it to discriminate against a particular group of people or race. In an earlier article in this newspaper I said: “[D]espite its many advantages, the frightening thing about AI is that machines cannot take decisions the way humans do. AI has been defined as ‘the broadest term, applying to any technique that enables computers to mimic human intelligence, using logic, if-then rules, decision trees and machine learning’.”
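The mechanism Amornvivat warns about can be shown with a deliberately simplified sketch. The data, function names and threshold below are hypothetical, invented purely for illustration and drawn from no real system: a “model” that merely learns average past outcomes per group will convert a biased historical record into a biased future rule.

```python
# Hypothetical historical hiring records: (group, was_hired).
# Group "B" was hired less often in the past, for whatever reason.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def learn_hire_rates(records):
    """Learn each group's past hiring rate straight from the data."""
    totals, hires = {}, {}
    for group, hired in records:
        totals[group] = totals.get(group, 0) + 1
        hires[group] = hires.get(group, 0) + (1 if hired else 0)
    return {g: hires[g] / totals[g] for g in totals}

def predict_hire(group, rates, threshold=0.5):
    """Recommend hiring only when the group's historical rate clears a threshold."""
    return rates[group] >= threshold

rates = learn_hire_rates(history)
print(rates)                     # {'A': 0.75, 'B': 0.25}
print(predict_hire("A", rates))  # True
print(predict_hire("B", rates))  # False: past bias has become future policy
```

Real systems are far more elaborate, but the failure mode is the same: nothing in the data tells the machine which patterns reflect merit and which reflect past discrimination.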
The issue is whether the new innovations that monitor worker performance in terms of speed of productivity at the assembly line, or the distance travelled by a worker to report to duty, could affect the corporate social responsibility that is expected to ensure the worker gets a fair deal from the employer. Another issue is whether confirmation bias could mislead a company into discontinuing the services of a worker whose wage is the only source of income supporting his school-going children. The International Labour Organization (ILO) defines “corporate social responsibility” as “a way in which enterprises give consideration to the impact of their operations on society and affirm their principles and values, both in their internal methods and processes and in their interaction with other actors”. Frontstream, an online publication, says in an article titled “Why corporate social responsibility is so important”: “[O]ne of the greatest benefits of promoting social responsibility in the workplace is the positive environment you build for your employees. When employees and management feel they are working for a company that has a true conscience, they will likely be more enthusiastic and engaged in their jobs. This can build a sense of community and teamwork which brings everyone together and leads to happier, more productive employees”.
The ILO Forced Labour Convention of 1930 defines “forced or compulsory labour” as “all work or service which is exacted from any person under the menace of any penalty and for which the said person has not offered himself voluntarily”. To fall within this definition, work or service must be exacted “under the menace of any penalty”. It was unequivocally clear from the draft instrument of the Conference leading to the Convention that the penalty in question need not necessarily take the form of penal sanctions but may also take the form of a loss of rights or privileges. Additionally, ILO Convention No. 122 (the Employment Policy Convention of 1964) calls on each member State to pursue, as a major goal, a national policy for full, productive and freely chosen employment. The “freely chosen” element intrinsically requires the absence of compulsion or discrimination, and the presence of opportunities to acquire and apply skills.
The questions that might arise from this discussion are: does the wristband that measures the level and speed of a worker’s productivity and urges her to speed up her work come within the Convention’s definition of “forced or compulsory labour”? Would the monitoring of every keystroke an employee makes on the keyboard affect his freedom to work, considering that the Universal Declaration of Human Rights provides that “everyone has the right to work, to free choice of employment, to just and favourable conditions of work and to protection against unemployment”? Furthermore, the Employment Policy Convention of 1964 provides that there is a right to work as productively as possible. In this context one may well ask: is AI taking businesses too far?
Here’s my take.
The Economist offers three ways out of this dilemma: anonymity; transparency; and the entitlement of employees to access their own data and information. The first – anonymity – means managers receive aggregate information rather than individual details. This is counterintuitive, since it is hard to see how individual performance and competency could be assessed that way. The second – transparency – which informs employees about what data is gathered, is also of limited help, as the employee may have no choice over what information is collected. The third – access to the information gathered – may only make matters worse for the employee, making him anxious and unproductive.
At the core of this issue is human dignity, and the right of the individual to working conditions as enunciated in international treaties, which effectively preclude employees from being treated as robots. The human is an emotive being and should not be treated exclusively as a machine of productivity calculated to obviate the law of diminishing returns. An article on Google’s employees states: “Google has people whose sole job is to keep employees happy and maintain productivity. It may sound too controlling to some, but it’s how this world-changing organization operates…Google bases nearly everything off data, and while some of it may work best only for Google, there are surely other areas that can work for all companies, regardless of size”. A representative of Burlington-based Ben & Jerry’s has said: “We look out for our employees’ quality of life: and providing space and time for naps is just another way for us to take care of the people who work there”. Maybe one need not go that far. But at least this approach does not flout any provision of an international treaty.
The author is a visiting professor at McGill University and a former official of the United Nations. He is also Senior Associate, Aviation Law and Policy, Aviation Strategies International.