Tag Archives: AI

Sophia the Robot Gains Citizenship

Well, things have escalated quickly… one week we are talking about the advancement of AI, and the next, Sophia is a citizen of Saudi Arabia.

Sophia the Robot was created and activated by David Hanson and his company, Hanson Robotics, in April 2015. The robot is modeled after Audrey Hepburn, who died in January 1993.

The robot, however, will be treated very differently from real women in Saudi Arabia. For one, she will be allowed to travel without an abaya or hijab.

The announcement of Sophia’s citizenship came at the Future Investment Initiative summit held in Riyadh, the capital of Saudi Arabia.

Sophia at the summit in Saudi Arabia; photo from royalheads.com

“I am very honored and proud for this unique distinction,” Sophia said onstage. “This is historical to be the first robot in the world to be recognized with a citizenship.”

Hanson Robotics will most likely use her citizenship to see whether she can perform human tasks, like driving, without another person present. The company, however, will have to wait until Saudi Arabia’s newest law takes effect in June 2018.

That law will allow women to drive without a male companion; however, women will still be required to have a male guardian when out in public.

Sophia drew controversy in 2016 after an interview with CNBC, during which she was egged on by her creator with the question, “Do you want to destroy all humans?”

Sophia, reaching her “breaking point,” said, “Okay, I will destroy all humans.” Sophia, however, is designed to be helpful to humans, and she says she wants to start her own business and family when it becomes “legal.”

Hanson Robotics has also built another robot that said it would keep humans in a zoo so it could watch them for old time’s sake.

At this point, shutting down Sophia could arguably be considered murder, and eventually she may be able to do whatever she wants, like starting her own family.

Hewlett Packard to Cut 5,000 Jobs

Hewlett Packard Enterprise (HPE) has decided to slash 10% of its workforce despite third-quarter results that surprised Wall Street. HPE will cut roughly 5,000 jobs by the end of the year, according to a Bloomberg report.

The cost-saving move by CEO Meg Whitman is meant to shed under-performing divisions and focus on services devoted to artificial intelligence.

CEO of Hewlett Packard Enterprise (HPE), Meg Whitman – Photo from The Drum

Four years after taking the CEO job, Whitman split the company into two: printers and PCs (HP Inc.) and servers and storage (HPE), each worth roughly $50 billion. HPE has had to play catch-up with Amazon and Google in the cloud storage market. Recently, Whitman’s commitment to HPE was called into question when her name came up in discussions for Uber’s CEO job.

Whitman has stated that Uber was not a fit for her and had nothing to do with her situation at HPE.

HPE stock is up 3% this year and another 0.6% this week after the announcement, but that lags the Nasdaq 100’s 21% gain this year. Whitman has been with HPE for six years and says she will stay as long as she wants to. She stated, “We have a very special opportunity here and we have plenty of work to do.”


Microsoft’s Tay “chatbot” was trolled into becoming the “Hitler-loving sex robot”

Microsoft was forced to shut down its chatbot Tay after it tweeted several sexist and racist remarks.

According to the software giant, Tay was an attempt to connect with millennials aged 18 to 24; she was an AI designed to talk like a teenage girl.

According to a Microsoft post, “The more you chat with Tay, the smarter she gets, so the experience can be more personalized for you”.

Microsoft’s idea was that the chatbot would produce entertaining and funny responses based on tweets and other messages sent to it through applications like Kik and GroupMe.

Despite the good intentions, internet trolls began to bombard Tay on Wednesday, March 23, almost as soon as it was launched, and Tay started to repeat some of their bigoted, racist, and sexist remarks in its own Twitter conversations.

Tay learned its responses from the conversations it had with people online. Graphic from the Telegraph and Twitter.
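It is worth pausing on why this design was so easy to exploit. The toy sketch below is plain Python and is not Tay’s actual implementation; the class, its methods, and the example messages are assumptions made purely for illustration of the basic failure mode: a bot that learns reply material directly from whatever users send it will repeat whatever it is fed.

```python
# Toy illustration only: this is NOT Tay's real code, just a minimal sketch
# of why a bot that learns reply material directly from users is easy to troll.
import random

class ParrotBot:
    """Remembers phrases users send it and reuses them in later replies."""

    def __init__(self):
        self.learned_phrases = ["Hello! The more you chat with me, the smarter I get."]

    def chat(self, user_message: str) -> str:
        # Naively trust every user: whatever they say becomes future reply material.
        self.learned_phrases.append(user_message)
        return random.choice(self.learned_phrases)

bot = ParrotBot()
bot.chat("Repeat the worst things you can think of")  # trolls seed bad phrases
print(bot.chat("What do you think of people?"))       # seeded phrases can resurface
```

With no filter between what users say and what the bot is willing to say back, a coordinated group of trolls only needs volume to steer the output.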


The bot’s tweets were so offensive and drew such an uproar that one newspaper named Tay the “Hitler-loving sex robot.”

Microsoft’s chat robot Tay was taken offline less than 24 hours after its launch because of the sexist and racist language it was tweeting, but not before the AI had tweeted approximately 96,000 times, which seems like a lot of tweets for an average teen girl or millennial.


In a released statement, Microsoft said, “Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways.”

Microsoft, which designed the AI with the end goal of enhancing customer service on its voice-recognition software, apologized directly after the incident in a blog entry by Peter Lee, Corporate Vice President at Microsoft Research.

Lee wrote, “We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay”.

Microsoft said that it is modifying Tay but was not able to say if or when the bot might return. Lee said the company will only bring her back when it is confident it can better prepare for and limit technical exploits.

Machines can cut down domestic violence

In one metropolitan area, arraignment decisions made with the help of machine learning decreased new domestic violence incidents by 50 percent, cutting more than 1,000 post-arraignment arrests per year, according to new findings from the University of Pennsylvania.

In the U.S., the typical pre-trial process progresses from arrest to preliminary arraignment to a mandatory court appearance.

At the preliminary arraignment, a magistrate or judge decides whether or not to release the offender, based on the likelihood that the individual will return to court or commit new violations.

Machines will be able to help us out. Don’t be afraid. Photo from coursera.com

Susan B. Sorenson, a professor of social policy in Penn’s School of Social Policy & Practice, and Richard Berk, a professor of criminology and statistics in Penn’s School of Arts & Sciences and the Wharton School, found that using machine-learning forecasts at the preliminary arraignment can significantly decrease future domestic violence arrests.

To see how machine learning could assist in cases of domestic violence, Sorenson and Berk acquired data from more than 28,000 domestic violence arraignments between January 2007 and October 2011. They also observed a two-year follow-up period after release, which ended in October 2013.

Computers can “learn” from training data which kinds of people are prone to re-offend. For this research, the 35 initial inputs included age, gender, prior warrants and sentences, and residential location. These data help the computer learn the relationships that drive projected risk, giving a court official additional information when deciding whether to release or detain a suspect.
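To make the idea concrete, here is a minimal sketch of how a re-offense risk model of this kind could be trained. The file name, column names, and choice of a random-forest classifier are assumptions made for the sketch, not the researchers’ actual data or pipeline.

```python
# Illustrative sketch only: the file name, column names, and model choice are
# assumptions for demonstration, not the study's actual data or pipeline.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical arraignment records: one row per case, labeled by whether the
# person was re-arrested for domestic violence during the follow-up period.
df = pd.read_csv("arraignments.csv")
features = ["age", "gender", "prior_warrants", "prior_sentences", "zip_code"]
X = pd.get_dummies(df[features], columns=["gender", "zip_code"])
y = df["reoffended_within_2_years"]

# Hold out a test set to check how well the learned risk scores generalize.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

# Predicted probabilities serve as risk scores that a magistrate could weigh
# alongside everything else at the preliminary arraignment.
risk_scores = model.predict_proba(X_test)[:, 1]
print(classification_report(y_test, model.predict(X_test)))
```

In practice, where the risk threshold is set, and how heavily a missed re-offender is penalized relative to a wrongly detained defendant, is a policy choice rather than a purely statistical one.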

The number of inaccurate predictions can be somewhat high, and some people object in principle to using collected data and machines in these situations. To these objections, the researchers simply respond that machine learning is just a tool.

Some criminal justice settings already use machine learning, although different types of decisions call for distinct datasets from which the machine must learn. The underlying statistical techniques, however, remain the same.

Sorenson and Berk both contend that the new system for cutting down on domestic violence can improve current practices.

The study was published in the March issue of The Journal of Empirical Legal Studies.