By Le Williams
According to Eric Horvitz, a director of Microsoft Research Labs, the company has turned down proposed agreements with customers over ethical concerns involving the potential misuse of AI technology.
In April, Horvitz spoke at the Carnegie Mellon University–K&L Gates Conference on Ethics and AI in Pittsburgh, describing how Microsoft's Aether Committee reviews potential misuse of the company's technology on a case-by-case basis.
“Significant sales have been cut off,” Horvitz stated. “And in other sales, various specific limitations were written down in terms of usage, including ‘may not use data-driven pattern recognition for use in face recognition or predictions of this type’.”
Microsoft has noted the caution displayed by customers amid the Cambridge Analytica and Facebook scandal, in which improperly obtained data was used to target voters during the 2016 U.S. presidential campaign.
Horvitz explained Microsoft’s concerns about AI being used to violate human rights, increase the risk of physical harm, or block access to critical services and resources. In one manipulation incident, the company’s ‘Tay’ chatbot was trained by people online to spew racist comments. “It’s a great example of things going awry,” Horvitz acknowledged.
Microsoft is positioning its AI efforts to complement rather than replace humans, focusing on tasks where a human would not be as effective.