Hewlett Packard Enterprise (HPE) has decided to slash 10% of its workforce despite third-quarter results that surprised Wall Street. HPE will lay off 5,000 people by the end of the year, according to a Bloomberg report.
The cost-saving move by CEO Meg Whitman is meant to shed under-performing divisions and focus on services devoted to artificial intelligence.
Four years after taking the CEO job, Whitman split Hewlett-Packard into two companies: HP Inc., covering printers and PCs, and HPE, covering servers and storage, each worth about $50 billion. HPE has had to play catch-up with Amazon and Google in the cloud storage competition. Recently, Whitman’s commitment to HPE came into question when her name surfaced in discussions for Uber’s CEO job.
Whitman has stated that Uber was not a fit for her and that the matter had nothing to do with her situation at HPE.
HPE stock is up 3% this year, including a 0.6% gain this week after the announcement, but that lags the Nasdaq 100’s 21% gain this year. Whitman has been with HPE for six years and says she will stay as long as she wants to. As she put it, “We have a very special opportunity here and we have plenty of work to do.”
Microsoft was forced to shut down its chatbot Tay after it tweeted several sexist and racist remarks.
According to the software giant, Tay was an attempt to connect with millennials aged 18 to 24: an AI designed to talk like a teenage girl.
According to a Microsoft post, “The more you chat with Tay, the smarter she gets, so the experience can be more personalized for you.”
Microsoft’s idea was that the chatbot would produce entertaining, funny responses based on the tweets and other messages sent to it through applications like Kik and GroupMe.
Despite these good intentions, internet trolls began to bombard Tay almost as soon as it launched on Wednesday, March 23. Tay started to work some of the bigoted, racist, and sexist remarks into its own Twitter conversations.
The bot’s tweets were so offensive and drew such an uproar that one newspaper named Tay the “Hitler-loving sex robot.”
Microsoft took Tay offline less than 24 hours after launch because of the sexist and racist language it was tweeting, but not before the bot had tweeted approximately 96,000 times, which seems like a lot of tweets for an average teen girl or millennial.
“Tay” went from “humans are super cool” to full nazi in <24 hrs and I’m not at all concerned about the future of AI pic.twitter.com/xuGi1u9S1A
In a statement, Microsoft said, “Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways.”
Microsoft, which designed the AI with the goal of improving customer service on its voice-recognition software, apologized soon after the incident in a blog post by Peter Lee, Corporate Vice President at Microsoft Research.
Lee wrote, “We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay.”
Microsoft said it is modifying Tay but could not say if or when the bot might return. Lee said the company will bring her back only when it is confident it can better prepare for and limit technical exploits.
In one metropolitan area, arraignment decisions made with the help of machine learning decreased new domestic-violence incidents by 50 percent, cutting more than 1,000 post-arraignment arrests yearly, according to new findings from the University of Pennsylvania.
In the U.S., the average pre-trial process progresses from arrest to preliminary arraignment to a mandatory court appearance.
During the preliminary arraignment, a magistrate or judge decides whether to release the defendant, based on the chance that the individual will return to court or commit new offenses.
To see how machine learning could assist in domestic-violence cases, Sorenson and Berk obtained data from over 28,000 domestic-violence arraignments between January 2007 and October 2011. They also observed a two-year follow-up period after release, which ended in October 2013.
Computers can “learn” from training data which sorts of people are prone to re-offend. For this research, the 35 initial inputs included age, gender, prior warrants and sentences, and residential location. From this data the computer learns relationships useful for projecting risk, giving a court official extra information when deciding whether to release or detain a suspect.
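The article does not say which learning algorithm the researchers used, so the sketch below is only a rough illustration of the idea: a simple logistic-regression classifier trained by stochastic gradient descent on synthetic records, followed by a release-or-detain threshold rule. The three features (`warrants`, `sentences`, `under_30`) and all coefficients are invented for illustration — the real study used 35 inputs.

```python
import math
import random

random.seed(0)

# Hypothetical inputs (the study used 35; these three are invented for
# illustration): prior warrants, prior sentences, and an under-30 flag.
def synth_record():
    warrants = random.randint(0, 5)
    sentences = random.randint(0, 3)
    under_30 = random.randint(0, 1)
    # Synthetic ground truth: more priors and youth raise the re-offense odds.
    logit = -2.0 + 0.6 * warrants + 0.4 * sentences + 0.8 * under_30
    reoffended = 1 if random.random() < 1 / (1 + math.exp(-logit)) else 0
    return [warrants, sentences, under_30], reoffended

data = [synth_record() for _ in range(2000)]

# Fit a logistic-regression model with plain stochastic gradient descent.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.01
for _ in range(50):
    for x, y in data:
        p = 1 / (1 + math.exp(-(b + sum(wi * xi for wi, xi in zip(w, x)))))
        grad = p - y                       # gradient of the log loss
        b -= lr * grad
        w = [wi - lr * grad * xi for wi, xi in zip(w, x)]

def predicted_risk(x):
    """Probability of re-offense the model assigns to feature vector x."""
    return 1 / (1 + math.exp(-(b + sum(wi * xi for wi, xi in zip(w, x)))))

def recommend(x, threshold=0.5):
    """Magistrate-style rule: detain when predicted risk crosses a threshold."""
    return "detain" if predicted_risk(x) >= threshold else "release"
```

In a real system the threshold would be tuned to reflect that releasing someone who then re-offends is usually treated as far costlier than detaining someone who would not have.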
The number of inaccurate predictions can be fairly high, and some people object in principle to using collected data and machines for these decisions. To these objections, the researchers simply reply that machine learning is just a tool.
Some criminal-justice settings already use machine learning, although different types of decisions call for distinct datasets from which the machine must learn. The underlying statistical techniques, however, remain the same.
Sorenson and Berk both contend that the new system for cutting domestic violence can improve on current practices.