Jonathan Greig

HeadGaze app lets users with disabilities navigate with simple head movements →

September 12, 2018 Jonathan Greig
Image: HeadGaze

An eBay intern behind the app hopes it will help those with physical disabilities navigate their iPhone X with just a head nod.

Companies and organizations are quickly realizing the many ways new technology can help those with disabilities navigate the new digital environment, and one team of eBay workers put their heads together and created HeadGaze, an iOS app that lets you move around an iPhone screen with only a turn of the head.

"As someone with extensive motor impairments, I do not have full control of my limbs. Consequently, I am unable to walk or grab anything with my hands. These limitations hinder my ability to perform everyday tasks, like going to the grocery store and shopping independently -- even though I have my own income," wrote eBay intern Muratcan Cicek, who suffers from a physical disability and was looking for an app to help people like himself shop online.

"This year as part of my internship project at eBay, my team and I developed HeadGaze, a reusable technology library that tracks head movement on your iPhone X and starting today, the technology is available via open source on GitHub.com," he added.

"The first of its kind, this technology uses Apple ARKit and the iPhone X camera to track your head motion so you can navigate your phone easily without using your hands."

In a blog post on eBay and a video released on Vimeo, the creators show how the app's simple but powerful functions can help people move around an iPhone. Cicek said the app uses a "virtual stylus" to track your head movements and create a 3D map that can find and move a cursor on your screen.

To make the app useful, the team had to create an interface that lets you take actions with the cursor, much the way a mouse has click buttons. The app, Cicek said, senses how long the cursor has been resting on an element and triggers a click once it has dwelled there long enough (a generic sketch of that dwell logic follows below).
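
eBay published HeadGaze and HeadSwipe on GitHub rather than documenting the internals in the blog post, but the dwell-based selection Cicek describes is easy to illustrate. Below is a generic Python sketch of that logic; HeadGaze itself is a Swift/ARKit library, and the half-second threshold and element name here are assumptions for illustration, not values from the project.

```python
import time


class DwellClicker:
    """Fire a 'click' when the head-driven cursor rests on one element long enough.

    Generic illustration of dwell-based selection; the 0.5-second threshold
    is an assumption, not a value taken from HeadGaze.
    """

    def __init__(self, dwell_seconds: float = 0.5):
        self.dwell_seconds = dwell_seconds
        self._current_element = None
        self._entered_at = None

    def update(self, element_under_cursor):
        """Call every frame with whichever UI element the cursor is over (or None)."""
        now = time.monotonic()
        if element_under_cursor != self._current_element:
            # Cursor moved to a new element: restart the dwell timer.
            self._current_element = element_under_cursor
            self._entered_at = now
            return None
        if element_under_cursor is not None and now - self._entered_at >= self.dwell_seconds:
            # Dwell threshold reached: report a click and reset the timer.
            self._entered_at = now
            return element_under_cursor
        return None


# Simulate a cursor hovering over a (hypothetical) search button for roughly a second.
clicker = DwellClicker()
for _ in range(3):
    time.sleep(0.3)
    if clicker.update("search_button"):
        print("click on search_button")
```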

With the help of the app, users can navigate a website, scroll up and down a webpage, move between pages and, in the case of a site like eBay, search or make purchases, all without touching the iPhone. The team also made a concerted effort to help other developers use the technology in a variety of websites and apps, posting the HeadGaze designs on GitHub. To demonstrate the technology, they created the HeadSwipe app specifically for eBay, testing whether users could swipe between offers and deals. HeadSwipe's designs are also available on GitHub.

"It is because of HeadGaze's potential to make a tremendous impact on the lives of many people that we are open-sourcing this tool. We want to encourage developers to build more apps that don't require screen touch," Cicek said. The app's creation is part of a series of efforts by Partnership on AI, a group of businesses interested in integrating AI into the public responsibly.

"While Assistive Technology helps the disabled to perform some everyday tasks, there is no existing tool that considers our needs when shopping online. And with 39.5 million Americans currently considered physically disabled, according to The Centers for Disease Control and Prevention, we saw an opportunity to create a tool that would promote independence."

Cicek also wrote that the tool has many other potential uses for those performing tasks that make it impossible to hold a smartphone, like cooking or construction. "The fusion of these gazing experiences open up a broader possibility on defining various hands-free gestures, enabling much more interesting applications," he added.

*This article was featured on Download.com on September 12, 2018: https://download.cnet.com/blog/download-blog/headgaze-app-lets-users-with-disabilities-navigate-with-simple-head-movements/

In cbs interactive Tags ebay, headgaze, disability, app, ai

AI-powered autonomous drone could bring new capabilities to agriculture, logistics, more →

May 16, 2018 Jonathan Greig
Image: IEEE Internet of Things Journal

The nano drone can move without human assistance and is considered the first of its kind.

Scientists have created the first nano drone capable of flying itself without a human operator, breaking ground on new ways to miniaturize artificial intelligence (AI) and run it within tight processing-power limits.

Six researchers from ETH Zurich and the University of Bologna figured out a way to work within the drone's bite-sized power and memory limits using DroNet, "a lightweight residual convolutional neural network (CNN) architecture," according to a paper they released earlier this month.

Antonio Loquercio, one of the lead scientists on the project, told The Register that the machine's computation and navigation run entirely onboard the device.

"This means, nano-drones are completely autonomous. This is the first time such a small quadrotor can be controlled this way, without any need of external sensing and computing. The methodology remains however almost unchanged using steering angle and the collision probability prediction [in DroNet]," Loquercio said.

The scientists said their work could be instrumental in a number of different ways. Drones are already being proposed for a number of different uses, with Amazon's Prime Air service keen to start operations once more regulatory work is finished. These unmanned aerial vehicles (UAVs) are already in use in farming, industrial inspections, natural disaster assistance, and hazardous area management, as well as in surveillance and security, the paper noted.

"To expand the class of activities that can be performed by UAVs, a recent trend of their evolution is their miniaturization. Commercial-Off-The-Shelf (COTS) quadrotors have already started to enter the nano-scale, featuring only few centimeters in diameter and few tens of grams in weight," they wrote in their study.

However, these nano-drones still lack the autonomous navigation capabilities of their larger counterparts, the paper noted, since their computational power is constrained by their small form factor.

The researchers provided detailed designs and explanations of how they got around the size constraints using PULP, a platform developed by the two universities. The platform is built around GAP8, a chip roughly the size of a quarter.

"The authors estimate the power breakdown for small-size UAVs; they show that the maximum power budget for on-board computation is 5% of the total, the rest being used by the propellers (86%) and the low-level control parts (9%)," they wrote.

"The problem of bringing state-of-the-art navigation capabilities on the challenging classes of nano and pico-size UAVs is therefore strictly dependent on the development of energy-efficient computing architectures, highly optimized software and new classes of algorithms."

The tiny tech will no doubt have an effect on the burgeoning drone expansion. Just last month, US Transportation Secretary Elaine Chao announced the specifics of a pilot program in which companies like FedEx, Alphabet, and Uber can test the use of unmanned aircraft across the country. President Donald Trump's Integration Pilot Program was inaugurated last year.

"Our country is on the verge of the most significant new development in aviation since the emergence of the jet age," Chao said at a press conference in April. "We've got to create a path forward for the safe integration of drones if our country is to remain a global aviation leader and reap the safety and economic benefits drones have to offer."

The programs will involve drones in everything from infrastructure inspections to pest control and emergency services. According to the Association for Unmanned Vehicle Systems International (AUVSI), the use of unmanned aircraft could lead to nearly $82 billion in potential economic benefit, as well as the creation of 100,000 jobs, over the next decade.

Loquercio and the other scientists said the popularization of Internet of Things (IoT) devices is also having an effect on demand for drones and other machines of this size.

"Full autonomy of nano-scale UAVs is extremely desirable as it would make them the perfect 'smart sensors' in the Internet-of-Things era," according to the paper. "The development of the IoT is fueling a trend toward edge computing, to improve scalability, robustness, and security. While today's IoT edge nodes are usually stationary, autonomous nano-UAVs can be seen as perfect examples of next-generation IoT end-nodes, with high mobility and requiring an unprecedented level of on-board intelligence."

The researchers are still testing the drone, and its flying capabilities are still limited due to the type of information its AI is getting about the surrounding environment. It can only fly horizontally, and not up or down.

*This article was featured on the TechRepublic website on May 16, 2018: https://www.techrepublic.com/article/scientists-create-miniature-drone-that-can-fly-itself-with-ai/

In cbs interactive Tags ai, drone, agriculture

Google's AI pact with Pentagon sparks resignations, highlighting leadership disconnect →

May 15, 2018 Jonathan Greig
Image: iStockphoto/rvolkan

Nearly a dozen employees have quit to protest the tech giant's work for the Defense Department's 'Project Maven,' where AI is used to analyze drone footage.

Since its inception, Google has promoted an employee-inclusive decision-making process and popularized its internal motto of "don't be evil." But on Monday, about a dozen Google employees resigned in protest of Google's involvement in the development of artificial intelligence (AI) software for a Defense Department drone program called Project Maven.

The employees told Gizmodo that since news of Google's involvement broke earlier this year, senior management has been less than forthcoming about its decision making on the issue, believing it had been addressed sufficiently through a statement and a few employee listening sessions.

More than 4,000 employees of the company signed a letter last month condemning Google's work with Project Maven and demanding more accountability in how the company deploys its products.

"Google is implementing Project Maven, a customized AI surveillance engine that uses 'Wide Area Motion Imagery' data captured by US Government drones to detect vehicles and other objects, track their motions, and provide results to the Department of Defense," the employees wrote in the letter. "We cannot outsource the moral responsibility of our technologies to third parties. Building this technology to assist the US Government in military surveillance - and potentially lethal outcomes - is not acceptable."

The issue has only gained more steam as technology scholars, academics, and researchers chimed in on the larger implications of AI being weaponized by the US military. A petition signed by 90 academics calls for major technology companies to sign onto an international treaty that would ban autonomous weapons systems.

"With Project Maven, Google becomes implicated in the questionable practice of targeted killings. These include so-called signature strikes and pattern-of-life strikes that target people based not on known activities but on probabilities drawn from long range surveillance footage. The legality of these operations has come into question under international and U.S. law," the academics wrote in the petition. "These operations also have raised significant questions of racial and gender bias (most notoriously, the blanket categorization of adult males as militants) in target identification and strike analysis. These problems cannot be reduced to the accuracy of image analysis algorithms, but can only be addressed through greater accountability to international institutions and deeper understanding of geopolitical situations on the ground."

Google has defended its involvement in the program, saying its technology will handle tedious tasks that waste soldiers' time, while also making drone surveillance more accurate.

"An important part of our culture is having employees who are actively engaged in the work that we do. We know that there are many open questions involved in the use of new technologies, so these conversations—with employees and outside experts—are hugely important and beneficial," a Google spokesperson said in a statement after news of Project Maven became publicized last month.

The spokesperson added in the statement that their work was "intended to save lives" and that they were working on internal policies to govern complicated decisions involving AI technology and defense contracts.

Both the Defense Department and Google have adamantly denied that AI will be used in combat situations, but Marine Corps Col. Drew Cukor was quick to add the phrase "any time soon," during a defense-tech conference speech last year.

Google's response to the situation was not enough, according to the former employees, who told Gizmodo that "the strongest possible statement [they] could take against this was to leave."

In addition to the letter signed by nearly 4,000 employees and the petition signed by academics, the Tech Workers Coalition created its own petition criticizing Google not just for Project Maven but for doubling down on the controversy by bidding heavily on a contract for the Pentagon's JEDI program, an effort by the military to integrate cloud computing into its work.

Google is in competition with Microsoft and other tech giants for a number of Defense Department contracts, and US military officials have repeatedly said publicly that they are in an "AI arms race" with the rest of the world. According to the Wall Street Journal, the Defense Department spent $7.4 billion on technology involving AI last year alone.

But industry stakeholders are already ramping up calls for tech companies to be more transparent about their military work and at least have policies in place to adjudicate decisions of this magnitude.

"According to Defense One, the DoD already plans to install image analysis technologies on-board the drones themselves, including armed drones. We are then just a short step away from authorizing autonomous drones to kill automatically, without human supervision or meaningful human control," the International Committee for Robot Arms Control wrote in their open letter to Google's leaders.

For other tech leaders, the resignations sparked by Google's work with Project Maven are a warning sign of the unrest that can come from such a deep disconnect between employees and leadership. Company leaders must be transparent with employees about their goals if they want to avoid the friction that comes from working at cross purposes.

*This article was featured on the TechRepublic website on May 15, 2018: https://www.techrepublic.com/article/googles-ai-pact-with-pentagon-sparks-resignations-highlighting-leadership-disconnect/

In cbs interactive Tags google, ai, pentagon, defense department, project maven, jedi program

Welsh police facial recognition software has 92% fail rate, showing dangers of early AI →

May 8, 2018 Jonathan Greig
Image: iStockphoto/stevanovicigor

Data released by the UK police force confirmed claims from watchdog groups that the software is inaccurate.

Police officials in South Wales are battling criticism of their new facial recognition technology after it was revealed that the program had a 92% fail rate when it was used at the June 2017 UEFA Champions League Final in Cardiff, meaning only 8% of the people "identified" were actual matches with names and faces in the criminal database.

According to statistics released by the South Wales Police, their Automated Facial Recognition (AFR) 'Locate' system flagged 2,470 potential matches at the event last summer, scanning roughly 170,000 attendees against a database of 500,000 images of persons of interest. Only 173 of those matches were correct and actually corresponded to someone in the database.

Overall, the program has been used at 15 events and flagged 2,685 people, only 234 of whom were truly persons of interest, according to the statistics.

The South Wales Police countered the troubling fail rate with their own statistics: 2,000 positive matches and 450 arrests in the last nine months since the program was put into use. They add that no one has ever been mistakenly arrested after being flagged and the officers in charge can, and often do, dismiss matches if they believe it is an obvious misidentification. If there is a match, an "intervention team" is sent to question and possibly arrest the person.

"Officers can quickly establish if the person has been correctly or incorrectly matched by traditional policing methods i.e. normally a dialogue between the officer/s and the individual," a police spokeswoman told Wired.

In addition to never erroneously arresting anyone, the South Wales Police claimed in a press release that "no members of the public have complained."

But some members of the public have, in fact, complained. Tony Porter, the UK's Surveillance Camera Commissioner, wrote in a 2017 report that the facial recognition program needed oversight to stop it from becoming "obtrusive."

"The public will be more amenable to surveillance when there is justification, legitimacy and proportionality to its intent," Porter told Wired. "Currently there are gaps and overlaps in regulatory oversight."

In a February report submitted to the House of Lords by watchdog group Big Brother Watch, Silkie Carlo, the group's director, wrote that there is "no law, no oversight, and no policy regulating the police's use of automated facial recognition." The UK government, she said, had not even set a target fail rate, allowing the system to continue flagging thousands of people erroneously at wildly high rates.

Carlo's report also added that facial recognition algorithms are known to be inaccurate, citing statistics from the US Government Accountability Office that showed "facial recognition algorithms used by the FBI are inaccurate almost 15% of the time and are more likely to misidentify female and black people."

In the report, Carlo also criticizes the database of photos taken from events and stored on police hard drives. At large events, CCTV cameras are set up at specific spots near the venue and fed into a computer, which scans every face in the footage against the police database of 500,000 persons of interest, Carlo wrote. But concerns have been raised about the CCTV footage and how long it is kept by police.

"The custody image database, which provides the basis for both facial matching and automated facial recognition, unnecessarily contains a significant proportion of photos of innocent people under what is likely to be an unlawful retention policy," Carlo wrote.

The South Wales Police have released multiple reports addressing this, writing that they are "very cognisant of concerns about privacy and we have built in checks and balances into our methodology to make sure our approach is justified and balanced. We have had detailed discussions and consultation with all interested regulatory partners."

The report later adds: "Watchlists and the associated metadata are manually added to the system and will be reviewed regularly to ensure accuracy and currency and will be deleted at the conclusion of the respective deployment."

Matt Jukes, the chief constable of the South Wales Police, told the BBC that they needed to use the technology to protect large events like concerts and games from terrorist threats but "don't take the use of it lightly" and were attempting to make "sure it is accurate."

Facial recognition technology is being used by a number of countries, most notably Australia and China; China in particular has a robust algorithm that it uses extensively.

NEC, the company that created the software being used by the South Wales Police, admitted to ZDNet in October that the program does not do well when working against a database as large as the one used in Cardiff and said the system was more accurate when used in smaller pools of people.

Chris de Silva, Europe head of Global Face Recognition Solutions, said, "You're going to find false alarms, and you are going to get answers, but they are not going to be always correct, and the more of that you get, the less likely people are going to be happy about using the system."

Given that the system has likely scanned EU citizens, questions could be raised about how its capabilities, and the underlying database, fit into the upcoming GDPR rules. The high failure rate of such a program could also be evidence that artificial intelligence (AI) used in tools like this is not ready for prime time, especially in a contentious use case such as predictive policing.

*This article was featured on the TechRepublic website on May 8, 2018: https://www.techrepublic.com/article/welsh-police-facial-recognition-has-92-fail-rate-showing-dangers-of-early-ai/

In cbs interactive Tags wales, welsh, facial recognition, ai, eu, software, police

Google's Dialogflow Enterprise helps businesses create AI-powered chatbots →

April 17, 2018 Jonathan Greig
Image: Google

A beta version of the product was released in November, and thousands of developers currently use it to create AI-based conversational experiences.

Google's Dialogflow Enterprise Edition was officially released on Tuesday after months in beta, continuing the internet giant's foray into the ever-widening conversational interface field.

The move comes only a week after Google updated its Cloud Speech-to-Text technology and introduced its Cloud Text-to-Speech software to make both easier for businesses to use.

According to a Google blog post, Dialogflow (which was named API.AI before it was bought by Google in 2016) is used by developers to "build voice- and text-based conversational experiences powered by machine learning and natural language understanding."

The tech is specifically designed for people without expertise in the field, so that companies can take advantage of it in a variety of ways. Google released a beta version of the software in November 2017 and said that companies are already using it to enhance their services.

"I remember how excited I was the first time I saw Dialogflow; my mind started racing with ideas about how Ticketmaster could benefit from a cloud-based natural language processing provider," Tariq El-Khatib, product manager at Ticketmaster, said in the post. "Now with the launch of Dialogflow Enterprise Edition, I can start turning those ideas into reality. With higher transaction quotas and support levels, we can integrate Dialogflow with our Customer Service IVR to increase our rate of caller intent recognition and improve customer experience."

Dialogflow also allows users to create services that work on a multitude of websites, apps and platforms, including Google Assistant, Amazon Alexa, and Facebook Messenger.

According to Google, KLM Royal Dutch Airlines, Domino's, Ubisoft, and Best Buy are among Dialogflow's users, and "hundreds of thousands" of developers are already using it to improve customer service and gaming experiences.

"Dialogflow made it easy to build a AI-powered conversational experience that delights consumers using the resources and skill sets we already have. We estimate that Dialogflow helped us get our conversational interface to market 12 months sooner than planned," Max Glaisher, product innovation manager at DPD, one of the UK's leading parcel delivery companies, said in the post.

Ubisoft said it was using Dialogflow in conjunction with its "Sam" personal gaming assistant program.

"The team needed tools that let them iterate quickly and make modifications immediately, and Dialogflow Enterprise Edition was the best choice for those needs," Thomas Belmont, a producer at Ubisoft, said in the post.

The enterprise edition of Dialogflow has additional features not seen in the beta version, including a total of 30 available languages and ways to integrate features of Google Assistant into your project. It also comes with support interfaces and Service Level Agreements.
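
For developers, the core Dialogflow workflow is sending a user's text to an agent and reading back the matched intent and fulfillment text. The sketch below uses the current google-cloud-dialogflow Python client, which post-dates the 2018 launch described here; the project ID, session ID, and sample utterance are placeholders.

```python
from google.cloud import dialogflow  # pip install google-cloud-dialogflow


def detect_intent(project_id: str, session_id: str, text: str, language_code: str = "en-US"):
    """Send one user utterance to a Dialogflow agent and return the matched intent and reply."""
    client = dialogflow.SessionsClient()  # authenticates via GOOGLE_APPLICATION_CREDENTIALS
    session = client.session_path(project_id, session_id)

    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code=language_code)
    )
    response = client.detect_intent(request={"session": session, "query_input": query_input})
    result = response.query_result
    return result.intent.display_name, result.fulfillment_text


# Placeholder project and session IDs for illustration only.
intent, reply = detect_intent("my-gcp-project", "session-123", "Where is my parcel?")
print(intent, reply)
```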

Businesses are in a race to automate many of the services they offer, and Dialogflow's release will accelerate the use of artificial intelligence (AI) in customer service and many other fields.

*This article was featured on the TechRepublic website on April 17, 2018: https://www.techrepublic.com/article/google-officially-unveils-chatbot-dialogflow-enterprise/

In cbs interactive Tags google, dialogflow, ai, business, chatbot

Could MIT's AI headset transcribe your future strategy straight from your brain? →

April 6, 2018 Jonathan Greig
Image: Lorrie Lejeune/MIT

Scientists created a computer interface that can pick up on internal verbalizations based on neuromuscular signals in the jaw and face.

Researchers at MIT's Media Lab announced the creation of AlterEgo, a computer system that can transcribe the words you say in your head, according to an MIT News report. Using hardware that can detect neuromuscular signals in the jaw and face through electrodes, the system can pick up on things that are "undetectable to the human eye."

The system has tied specific neural signals to certain words, the report said, allowing it to decipher the minuscule physical messages your body sends when you internally verbalize something.

Other professors and researchers say this technology could be applied in a number of ways. Thad Starner, a professor at Georgia Tech's College of Computing, told MIT's News Office that the tech would be a great fit for any situation where people need to communicate clearly in loud environments, such as on airport tarmacs, or for soldiers and police in tactical situations.

"You can imagine all these situations where you have a high-noise environment, like the flight deck of an aircraft carrier, or even places with a lot of machinery, like a power plant or a printing press," Starner told MIT News. "This is a system that would make sense, especially because oftentimes in these types of or situations people are already wearing protective gear."

Part of the researchers' goal for the project was to make wearable technology that could understand minute signals and create a system where artificial intelligence (AI) worked to enhance the human mind, according to the report.

"The motivation for this was to build an IA device — an intelligence-augmentation device," Arnav Kapur, an MIT graduate student told the campus publication. Mr. Kapur lead the research and development of the system. "Our idea was: Could we have a computing platform that's more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?"

Kapur and his thesis advisor, media arts and sciences professor Pattie Maes, said many people are inextricably attached to their smartphones, for better or for worse. Their research team was interested in finding a way to make the vast amount of information on the internet easily accessible and less cumbersome.

"At the moment, the use of those devices is very disruptive. If I want to look something up that's relevant to a conversation I'm having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I'm with to the phone itself," Maes told MIT News.

Instead, Maes and her students have been working on tech tools that can allow a user to access all the information available online while remaining "in the present," she told MIT News.

They initially tested the software during a chess game, with the user silently verbalizing his opponent's moves and an AI algorithm responding with moves the user should make. The devices are constantly learning, correlating more neuromuscular signals with more words and phrases.

The team behind AlterEgo first needed to figure out which part of the face and jaw had the strongest signals so they knew where to put the device. In their paper on the study, they describe the prototype as "a wearable silent-speech interface, which wraps around the back of the neck like a telephone headset and has tentacle-like curved appendages that touch the face at seven locations on either side of the mouth and along the jaws."

Tests indicated that they could get the same results with fewer electrodes on only one side of the face. Further experiments found that, on average, the system transcribed words accurately 92% of the time, the report said. As the device and AI learn more human speech, the accuracy will increase, Kapur said, noting that his own device, which he had been using extensively, had a higher accuracy rate than those used for brief periods by test subjects.
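
MIT has not released AlterEgo's model, but the pipeline the report describes (windows of multi-electrode neuromuscular signal mapped to a small vocabulary of silently spoken words) can be sketched as an ordinary supervised classifier. The example below runs on synthetic data; the four channels, window length, and five-word vocabulary are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: 500 windows, each 4 electrode channels x 100 samples,
# labelled with one word from a tiny vocabulary of silently spoken commands.
# All sizes here are illustrative assumptions, not AlterEgo's real configuration.
rng = np.random.default_rng(0)
n_windows, n_channels, n_samples = 500, 4, 100
vocabulary = ["up", "down", "left", "right", "select"]

X = rng.normal(size=(n_windows, n_channels * n_samples))  # flattened signal windows
y = rng.integers(len(vocabulary), size=n_windows)         # word index per window

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# A simple linear classifier mapping each signal window to a word in the vocabulary.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
print("predicted word:", vocabulary[clf.predict(X_test[:1])[0]])
```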

In addition to communication in loud environments, Professor Starner wondered whether the technology could be used for those with speaking disabilities or those who have suffered an illness that ends their ability to speak.

"I think that they're a little underselling what I think is a real potential for the work," Starner told MIT News. "The last one is people who have disabilities where they can't vocalize normally. For example, Roger Ebert did not have the ability to speak anymore because lost his jaw to cancer. Could he do this sort of silent speech and then have a synthesizer that would speak the words?"

*This article was featured on the TechRepublic website on April 6, 2018: https://www.techrepublic.com/article/mit-researchers-develop-tech-to-transcribe-the-words-youre-thinking/

In cbs interactive Tags mit, ai, future, brain, hardware

Google employees demand end to company's AI work with Defense Department →

April 5, 2018 Jonathan Greig

More than 3,000 Google employees signed a letter criticizing the company for assisting with Project Maven, a Pentagon initiative involving AI and drone footage.

Google is facing heavy criticism from its own employees following revelations that the tech company is working with the Department of Defense on Project Maven, an effort to use artificial intelligence (AI) image recognition software to sort through drone and security footage.

"We cannot outsource the moral responsibility of our technologies to third parties," they wrote in a letter signed by 3,100 employees. "Building this technology to assist the US Government in military surveillance - and potentially lethal outcomes - is not acceptable."

Outrage has been growing within Google since the pact with the Pentagon was announced last year. The deal involves Google's TensorFlow software, which the letter says is being adapted into "a customized AI surveillance engine that uses 'Wide Area Motion Imagery' data captured by US Government drones to detect vehicles and other objects, track their motions, and provide results to the Department of Defense."

In a statement, Google said the project is for "non-offensive purposes" and was only intended "to save lives and save people from having to do highly tedious work."

"Any military use of machine learning naturally raises valid concerns," Google said in the statement. "We're actively engaged across the company in a comprehensive discussion of this important topic and also with outside experts, as we continue to develop our policies around the development and use of our machine learning technologies."

Both Google and the Pentagon have stressed that the technology is not ready to be used in combat situations, with Marine Corps Col. Drew Cukor telling the audience at the 2017 Defense One Tech Summit that "AI will not be selecting a target [in combat] ... any time soon. What AI will do is [complement] the human operator."

But Col. Cukor also said that he believes the Defense Department is "in an AI arms race," and acknowledged that "the big five Internet companies are pursuing this heavily."

Cukor later added: "Key elements have to be put together...and the only way to do that is with commercial partners alongside us."

According to the Wall Street Journal, the Defense Department spent $7.4 billion on technology involving AI last year, and Google, Microsoft, and Amazon are openly battling for a variety of defense contracts involving cloud computing and other software.

But the employee letter argues that Google is damaging its brand by working on Project Maven and contributing to "growing fears of biased and weaponized AI."

"The argument that other firms, like Microsoft and Amazon, are also participating doesn't make this any less risky for Google," the letter said. "Google's unique history, its motto Don't Be Evil, and its direct reach into the lives of billions of users set it apart."

Project Maven began in April last year, with the stated goal of utilizing machines to capitalize on the Defense Department's massive troves of data collected through drone footage and surveillance operations. AI is already used by other parts of the military, and since 2014 has been used widely in law enforcement.

The Justice Department now promotes the use of AI software to perform "risk assessments" of how likely a person on trial is to commit a future crime. The scores are often handed to judges and affect sentencing in states across the country, with disastrous effects. Black defendants were 77% more likely to be pegged as "at higher risk of committing a future violent crime" and 45% more likely to be "predicted to commit a future crime of any kind," according to ProPublica.

Google has tried to tamp down concerns about handing over vital AI recognition software to the Defense Department, with former Alphabet Executive Chairman Eric Schmidt admitting last year in an interview that "there's a general concern in the tech community of somehow the military-industrial complex using their stuff to kill people incorrectly, if you will."

But Schmidt went on to say in that interview that it was vital that he and other tech industry leaders stay in communication with the military "to keep the country safe."

Yet many of Google's employees disagreed, starting the letter off with: "We believe that Google should not be in the business of war."

*This article was featured on the TechRepublic site on April 5, 2018: https://www.techrepublic.com/article/google-employees-demand-end-to-companys-ai-work-with-defense-department/

In cbs interactive Tags google, drones, ai, pentagon, defense department
