Here is your December guide to the latest trends in technology, labor markets and industry insights. This month we’re focusing on all things AI! We’ve highlighted the biggest news affecting the industry and explained what to expect as new trends continue to emerge.
DID YOU REGIFT THIS CHATBOT?
Just in time for the holidays, Facebook recently rolled out a chatbot for its workers. No, not for greater work efficiency. The “Liam Bot” was created to help employees seamlessly navigate dinner table conversation. Considering the flurry of media attention the tech company has received lately, the bot provides PR-team-sanctioned answers to help employees field questions from loved ones.
“If a relative asked how Facebook handled hate speech, for example, the chatbot … would instruct the employee to answer with these points:
– Facebook consults with experts on the matter.
– It has hired more moderators to police its content.
– It is working on AI to spot hate speech.
– Regulation is important for addressing the issue.
It would also suggest citing statistics from a Facebook report about how the company enforces its standards.”
Liam Bot answers questions and links to reference material on the company’s blog and news releases. Whether or not employees toe the company line, this is an interesting use case for chatbots in the employment space. As adjacent conversations about employers’ stances on diversity, pay equity, etc., become more visible, it’s possible this type of information delivery to workers may become the norm. https://www.nytimes.com/2019/12/02/technology/facebook-chatbot-workers.html
THE POWER OF UNCONVENTIONAL THINKERS
Excited about what you’ve read on The Scoop? We’re always looking for people who are just as interested in these topics as we are! Join TMP and help us shape the future of recruitment marketing.
CIRCLING THE WAGONS ON AI LEGISLATION IN HR: THE EEOC FINALLY WEIGHS IN
Over the years, technology has streamlined the job application process. As the number of resumes for each job opening continues to grow, the tech market has again stepped in to boost efficiency in recruitment. From advertising to selection, a growing number of employers use AI-powered tools that are increasingly coming under scrutiny from advocacy groups and policymakers (check out the June edition of The Scoop, Artificial Intelligence Regulation and Legislation).
AI is showing up in more and more areas of our lives; there is no stopping progress. Proponents of these tools point to increased efficiency in spend, reduced guesswork about where to post jobs, and the use of data such as commute time to provide greater insight into candidate needs. The opposition, however, focuses on factors such as additional bias from tools that include functions like facial recognition, and the propensity of the algorithms to optimize for an end of which their users may be unaware.
“… Even if you explicitly instruct a machine learning tool not to discriminate against women, it might inadvertently learn to discriminate against other proxies associated with being female, like having graduated from a women’s college.”
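The proxy effect described in the quote is easy to see with a small simulation. The sketch below uses entirely synthetic, made-up numbers (the group sizes and the rate at which the hypothetical “women’s college” flag fires are assumptions for illustration): even when the protected attribute itself is withheld from a model, a correlated proxy feature can reconstruct much of it.

```python
import random

random.seed(0)

# Synthetic applicant pool. Each applicant is (female, attended_womens_college).
# The correlation rates below are invented for illustration only.
applicants = []
for _ in range(10_000):
    female = random.random() < 0.5
    # Hypothetical proxy: fires far more often for one group.
    womens_college = random.random() < (0.30 if female else 0.01)
    applicants.append((female, womens_college))

# A model "trained" without the female flag can still key on the proxy:
# look at who gets picked out when the proxy feature drives the decision.
flagged = [a for a in applicants if a[1]]
proxy_rate_female = sum(1 for f, _ in flagged if f) / len(flagged)
print(f"share of proxy-flagged applicants who are female: {proxy_rate_female:.0%}")
```

Under these assumed rates, nearly all proxy-flagged applicants are female, so any scoring rule that rewards (or penalizes) the proxy effectively discriminates on the withheld attribute.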
And while equal employment opportunity laws haven’t changed much in this regard since the 1970s, the EEOC is reportedly now investigating at least two discrimination cases involving algorithms used to help make hiring, promotion, and other job decisions.
POLITICS MAKES STRANGE BEDFELLOWS: THE RACE FOR PRESIDENT SHINES MORE LIGHT ON AI AND OUR DAILY LIVES
The conversation has shifted from “the robots will take our jobs” to a greater focus on the implementation of AI and automation. As presidential hopefuls lay out their plans for the future, they’re being asked how they would secure U.S. AI dominance against international rivals while also putting regulations in place to thwart misuse and steer the technology toward social good.
In the long run, whether each candidate’s strategic approach focuses on reframing the players, with an eye toward public-private partnerships, or tackles the topic from a broad national policy and initiative lens, they will be shaping the future of AI in America. In the short run, politics has created a unique spotlight that has thrust the topic more firmly into the minds of Americans who have become increasingly skeptical as a result of the national attention around issues dealing with bias and transparency. Ultimately, we will find the “middle way,” but with much more friction to get there than innovators had hoped.
GOOGLE’S NEW PUSH INTO EXPLAINABLE AI: A MOVE TOWARD MORE ALGORITHMIC DECISIONS
Beyond legislation, for tech companies developing solutions powered by AI, the increased focus on algorithmic bias in our everyday systems has led to efforts to address the “black box” problem. In 2017, Google committed to being an AI-first organization, so it should come as no surprise that the tech company has announced a new initiative, Explainable AI.
“Explainable AI is a set of tools and frameworks to help you develop interpretable and inclusive machine learning models and deploy them with confidence. With it, you can understand feature attributions in AutoML Tables and AI Platform and visually investigate model behavior using the What-If Tool.”
While still in beta, Google’s tools, such as “Model Cards,” which document the performance and potential shortcomings of its face- and object-detection models, and (my favorite) the scenario-analysis “What-If Tool,” aim to provide greater insight into the data and models behind the outcomes.
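To make “feature attribution” concrete, here is a minimal sketch of one common attribution technique, permutation importance: shuffle one feature at a time and measure how much the model’s outputs move. This is not Google’s implementation, and the toy model, feature names, and weights are all invented for illustration.

```python
import random

random.seed(1)

# Toy "model": a fixed linear scorer over three invented features.
WEIGHTS = {"experience": 0.7, "commute": -0.1, "shoe_size": 0.0}

def model(row):
    return sum(WEIGHTS[k] * row[k] for k in WEIGHTS)

# Synthetic candidate data: each feature drawn uniformly from [0, 1).
data = [{k: random.random() for k in WEIGHTS} for _ in range(500)]
baseline = [model(r) for r in data]

def importance(feature):
    # Shuffle one feature across rows; big output movement means the
    # model relies heavily on that feature.
    shuffled_vals = [r[feature] for r in data]
    random.shuffle(shuffled_vals)
    moved = [model({**r, feature: v}) for r, v in zip(data, shuffled_vals)]
    return sum(abs(a - b) for a, b in zip(baseline, moved)) / len(data)

for f in WEIGHTS:
    print(f"{f}: {importance(f):.3f}")
```

Because the toy model ignores “shoe_size” entirely, shuffling it moves nothing, while shuffling “experience” moves the scores the most; that ranking is exactly the kind of insight attribution tools surface about real trained models.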
ROUNDING OUT THE SCOOP: PSYCH, SOCIAL, LABOR
- The world of work is often watching the organizational machinations of tech giants to find the new trend in hiring practices. Contrary to popular lore, Google has moved away from brain teasers in favor of structured interviews. Learn why: https://www.thinkwithgoogle.com/marketing-resources/organizational-culture/structured-interviewing/
- We can’t all be data scientists, but Microsoft’s Machine Learning cheat sheet provides enough use-case context to be dangerous … in meetings. Here’s a flow: https://101.datascience.community/2019/12/14/machine-learning-algorithm-cheatsheet/