A trade organization representing the interests of tech juggernauts like Amazon, Apple, Facebook, Google, IBM, and Microsoft released a list of principles it says should guide the responsible development of artificial intelligence technologies. The principles stress that companies should keep human safety in mind, but also urge governments to limit regulation.
The Information Technology Industry Council (ITIC), a Washington, D.C.-based trade group, wrote in its policy principles paper released on Wednesday that it wants to "be a catalyst for preparing for an AI world" and help solve "potential negative externalities" that arise from the technology, including potential job loss and threats to humans.
Dean Garfield, president and CEO of ITIC, tells The Hill that the group's major concern is to help foster the responsible development of A.I. by making sure technologies are built using data that's representative of all people. Garfield says the industry needs to work to prevent A.I. systems from being prejudiced or discriminatory, and that tech companies should test systems for potentially harmful bias before and during deployment.
The group also says A.I. systems need to be designed ethically to make sure "human dignity, rights, and freedoms" are not violated or impeded.
ITIC vaguely outlines how it will prevent A.I. from infringing on human rights and safety by ensuring tech companies will partner with "relevant stakeholders" to build a "reasonable accountability framework" to govern autonomous systems.
"There is concern that AI can do harm to people," Garfield told The Hill. "As we develop and design AI, human safety will be at the center of that deployment."
The group promotes a "flexible regulatory approach" when it comes to laws and regulations concerning A.I. technology, warning that too many regulations could snuff out startups and mid-sized businesses working on A.I. technologies.
"We encourage governments to evaluate existing policy tools and use caution before adopting new laws, regulations, or taxes that may inadvertently or unnecessarily impede the responsible development and use of AI," the paper states.
Peter W. Singer, political scientist and author, says you shouldn't be surprised to see ITIC pushing for pro-business policies like limited government regulation and protections for trade secrets and source code; the group represents tech companies, after all. But he is heartened that the trade group acknowledges A.I. will be revolutionary, with wide-ranging societal consequences both good and bad. He is encouraged that companies are describing the ripple effects of A.I. in a realistic way and trying to define how it should and shouldn't be used.
"The fact that they recognize A.I. isn't like a jump from the iPhone 6 to the iPhone 7, but rather that it's the equivalent of the steam engine, or the computer, or the Internet is promising," says Singer. "It's good they are asking the questions about what is lawful, what is proper, and what it means to be responsible. These are not science or tech questions; they're ethical, legal, and philanthropic questions. You cannot answer them with code or law, we need to define what it means to build responsible A.I. as a society."