Pictured: Joy Buolamwini, graduate researcher at MIT Media Lab’s Civic Media Group
Image: Bryce Vickmark
Joy Buolamwini is a graduate researcher at the MIT Media Lab and the founder of the Algorithmic Justice League, an organization that fights bias in decision-making software. As a computer science undergraduate, Buolamwini worked on a robot that used computer vision to interact socially with humans. She noticed that the robot could detect her lighter-skinned peers but not her. At the time, Buolamwini assumed someone would soon fix the issue.
But the issue was never fixed. Her anecdote is one of many: Google's facial recognition tagged two black friends as gorillas, HP's webcams easily tracked a white face but could not track a black one, and Nikon's cameras repeatedly told at least one Asian user that their eyes were closed. Each case may look like an isolated glitch, but when so many facial recognition systems struggle with non-white faces, the cumulative message to non-white people is that facial recognition was not built for them.
According to a study by researchers at Stanford University and MIT, three commercially released facial recognition systems from major tech companies all show gender and skin-type bias. In these experiments, the error rate for determining the gender of light-skinned males never exceeded 0.8%. For dark-skinned women, however, the error rate jumped to 20% for one system and over 34% for the other two. These findings raise questions about the data these neural networks are trained on; after all, a network's accuracy depends on the variety of data it is given. Although some systems reached 97% accuracy overall, the benchmark data was more than 77% male and 83% white.
When companies think about error, they typically look at aggregate statistics: if a program is right 97% of the time, it is probably accurate enough. But most companies fail to check whether the system makes its errors randomly across a wide distribution or concentrates them on a particular group. A system that is right 95% of the time could still be wrong for nearly every Asian person in the United States; a system that is right 99% of the time could make its errors almost exclusively on transgender people.
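To make the distinction concrete, here is a minimal sketch (in Python, with invented toy numbers) of the disaggregated check described above: computing the error rate per demographic group rather than only overall.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the error rate separately for each demographic group.

    records: list of (group, predicted, actual) tuples.
    Returns {group: error_rate}.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy data: roughly 97% overall accuracy hides a 30% error rate for one group.
data = (
    [("lighter-skinned", "M", "M")] * 97 + [("lighter-skinned", "F", "M")] * 1
    + [("darker-skinned", "F", "F")] * 7 + [("darker-skinned", "M", "F")] * 3
)
rates = error_rates_by_group(data)
```

Reporting only the overall figure here (about 96% accurate) would hide a ten-fold gap between the two groups.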
So how does one solve this problem? The first step is awareness: companies must recognize that their algorithms need more diverse data sets to curb the bias so many programs exhibit. Joy Buolamwini and her co-author, Timnit Gebru, hope their paper will spur more work examining gender and racial disparities in other areas of computer science.
Interested in starting an AI club at your school?
At Allgirlithm, we are developing an AI club curriculum to help make AI education more accessible to middle and high school students in diverse communities. We understand that artificial intelligence is not a typical subject to be taught in school, but we want to change that. Our goal for launching the Allgirlithm Initiative Program is not only to teach students the fundamentals of AI, but also to spark interest, promote diversity, and break stereotypes in this emerging field.
The club runs for a total of 10 weeks, with students learning one or two concepts per meeting. Topics covered include an introduction to machine learning, supervised vs. unsupervised learning, clustering algorithms, neural networks, the future implications of AI, computer vision, natural language processing, autonomous systems, expert systems, and more. Along with lectures, we will also provide you with links to instructional videos, AI demonstrations, and AI/CS-related opportunities. If you are interested in starting your own chapter or know anyone who may be interested, email us in advance at email@example.com. In the email, please describe your previous experience(s) with artificial intelligence and/or computer science and what drew you to this field. All curriculum materials are free, and we will send them to you in early summer so that you can launch your own AI club for the 2018-2019 school year. Feel free to contact us with any questions.
There are already technologies that help writers follow the rules of good writing. The Hemingway Editor is one system that does this well, helping users make their writing more concise and grammatically correct.
Revision Assistant, an artificially intelligent system from Turnitin (the company best known for checking plagiarism), claims its algorithms can help people become better writers.
“Like any art, writing is not necessarily intuitive,” Elijah Mayfield, the founder of Revision Assistant told Quartz Magazine. “Learning to write is at least as hard as learning calculus or learning how to build a circuit. Different, but it’s a skill nonetheless and something you can learn.”
Revision Assistant is now used by 200,000 students in more than 100 US school districts, according to Quartz. The system recognizes patterns in a draft and decides what feedback to give. For example, if Revision Assistant spots a section that seems to need supporting examples, it highlights the area and encourages the writer to expand. It focuses the writer's attention on areas for improvement and leaves the "creative" work to them.
The repetition and instant feedback improve the writing process. Over many revisions, students learn to spot these issues on their own and become more confident writers. Revision Assistant helps show that tools can make us more skilled in creative fields.
AI Writer is another example, one that not only helps write an article but also researches it. Based on a headline the user inputs, the free software generates an article of 100 to 1,000 words. As AI becomes more incorporated into creative fields like art, writing benefits as well.
Harvard Business Review
Photo Credit: smartify.org
AI is being integrated into all aspects of our lives. One place where it has tremendous applications is in museums. Museums are full of huge amounts of data that hold many opportunities for AI.
One way AI can help is by sorting collections, since "more than 90 percent of (enterprise) data is unstructured, human-generated and sourced from various disparate entities" (IDC, 2015). Using image recognition, pattern recognition, machine vision and sentiment analysis (analysing the emotions conveyed through text or recognised faces), museums can find interesting new ways to sort quickly through their collections, and several museums have already used sentiment analysis to do exactly this.
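As a rough illustration of the simplest form of text sentiment analysis, here is a toy lexicon-based scorer. The word lists are invented for the example and bear no relation to any museum's actual system.

```python
# Illustrative word lists -- a real system would use a large curated lexicon
# or a trained model rather than a handful of words.
POSITIVE = {"joy", "serene", "beautiful", "hope", "love"}
NEGATIVE = {"grief", "dark", "fear", "loss", "anger"}

def sentiment_score(text):
    """Return a score in [-1, 1]: positive words raise it, negative lower it."""
    words = text.lower().split()
    hits = [1 for w in words if w in POSITIVE] + [-1 for w in words if w in NEGATIVE]
    return sum(hits) / len(hits) if hits else 0.0

# e.g. tag catalogue descriptions so works can be grouped by mood
score = sentiment_score("a serene portrait full of hope and joy")
```

Run over thousands of catalogue entries, even a crude score like this would let a museum cluster works by emotional tone.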
Another exciting avenue for museums to explore is the use of smartphones to easily and instantaneously recognise works and access additional information about them. The app Smartify, which launched in 30 museums this year including The Met in New York and the National Gallery in London, does exactly this. It has been described as the ‘Shazam of the art world’, as it uses image recognition to scan and identify art works. It then provides the user with information about the work as well as interviews with the artist and other audio-visual information pertaining to the work. Currently, the app does not recognise images that are not already stored on its database, but the company is working towards changing this.
Additionally, some museums, like the Anne Frank House in Amsterdam, have launched Messenger bots to interact with visitors. The one in the Anne Frank House tells users about the history of the museum and Anne Frank’s past. The SFMOMA also uses a chatbot that connects to visitors through texts. It allows users to send in keywords and in turn sends back answers with pictures of works at the museum including the title, artist and year. These chatbots do make mistakes occasionally, but showcase the great ways in which museums can use AI to make their works more accessible.
These are just a few of the many ways in which museums can innovatively use AI to their benefit. AI can help keep track of and sort through museums' vast amounts of data, make this information more accessible to the public, and give visitors more insightful and enjoyable museum visits.
Museums and the web
Photo credit: How-To Geek
All we hear about AI these days is how it has the potential to power ground-breaking technology and drastically change various aspects of the average person's life. But AI is already making a dent in the everyday side of human life.
For instance, Google's Nest line of products applies AI to everyday items. CNET gives the examples of the Nest Hello smart doorbell, the Nest Secure alarm system, the Nest Yale Lock and the Nest Cam IQ Outdoor security camera. The alarm system and security camera both use AI for facial recognition along with motion sensing. All of these applications are helping people make their homes "smarter."
Another Nest item is the Nest Learning Thermostat. The Nest website explains that the thermostat uses AI to "learn what temperature you like and build a schedule around yours." It recognizes daily patterns and learns a routine, which saves energy as well as adding convenience.
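As a toy sketch of this kind of routine-learning (an illustration only, not Nest's actual algorithm), one could average the temperatures a user sets at each hour of the day and use that average as the scheduled setpoint:

```python
from collections import defaultdict

class LearningThermostat:
    """Toy schedule learner: remember the temperatures the user chooses at
    each hour, then predict their average as the scheduled setpoint.
    (Illustrative only -- not Nest's actual algorithm.)"""

    def __init__(self):
        self.history = defaultdict(list)  # hour of day -> chosen temps (Celsius)

    def record(self, hour, temp):
        self.history[hour].append(temp)

    def setpoint(self, hour, default=20.0):
        temps = self.history.get(hour)
        return sum(temps) / len(temps) if temps else default

t = LearningThermostat()
for day in range(5):      # a week of habits: warm mornings, cool nights
    t.record(7, 21.0)
    t.record(23, 17.0)
```

After a few days of recordings, the thermostat can pre-heat at 7 a.m. and cool down at 11 p.m. without being told.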
AI has the potential to change life in many little ways, as well as leading technology into great breakthroughs.
Photo Credit: Business Insider
DeepArt.io is a website that uses deep neural networks to identify and combine the stylistic elements of two separate images, a technique known as style transfer. And despite the sophisticated machinery under the hood, no coding experience is required to use it.
The program relies on a neural algorithm developed by Leon Gatys and colleagues at the University of Tübingen in 2015. The technique has since been used in photo filters on Facebook and Prisma, as well as on moving images: Kristen Stewart used style transfer in Come Swim, her directorial debut short film, to redraw a brief dream sequence.
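The heart of Gatys' method is representing "style" as the Gram matrix of a network layer's feature maps and penalizing the difference between two images' Gram matrices. A minimal sketch of that computation follows; in practice the features come from a pretrained CNN such as VGG, whereas here they are just toy numbers.

```python
def gram_matrix(features):
    """features: list of C channels, each a flat list of activations.
    Returns the C x C Gram matrix whose (i, j) entry is the dot product of
    channels i and j -- the style representation used by Gatys et al."""
    C = len(features)
    return [[sum(a * b for a, b in zip(features[i], features[j]))
             for j in range(C)] for i in range(C)]

def style_loss(feats_a, feats_b):
    """Sum of squared differences between the two images' Gram matrices."""
    Ga, Gb = gram_matrix(feats_a), gram_matrix(feats_b)
    return sum((Ga[i][j] - Gb[i][j]) ** 2
               for i in range(len(Ga)) for j in range(len(Ga)))

# identical feature maps -> zero style loss
loss = style_loss([[1.0, 2.0], [0.5, 1.5]], [[1.0, 2.0], [0.5, 1.5]])
```

In full style transfer this loss is minimized by gradient descent on the output image's pixels, alongside a content loss that keeps the subject intact.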
In recent years these kinds of programs have proliferated, using different techniques to create AI-assisted works that are both sophisticated and beautiful. One study even found that viewers judged AI-generated art to be more convincingly human-made than works shown at Art Basel.
For good or ill, the consequences could transform mainstream art production and consumption, and the role of artists themselves.
First, a little history. The earliest known generative computergraphik, created around 1960 by Georg Nees, the German “father of computer art”, consisted mainly of black-and-white drawings of shapes. The first computer-generated music piece, Lejaren Hiller and Leonard Isaacson’s Illiac Suite for String Quartet, came in 1957. Both experiments were aimed at academic audiences, and not very “artistic.”
We’ve come a long way since then. Deepjazz, created by Princeton University Ph.D. student Ji-Sung Kim, used neural networks to detect jazz musical patterns and generate new songs.
Nvidia recently published a paper documenting how its researchers generate life-like images with incredibly convincing results; one algorithm takes images of a winter street and predicts what the same street would look like in summer. Gene Kogan, a generative artist and author of Machine Learning for Artists, has used similar methods to make realistic images of places.
Cornell University and Adobe researchers have also been working on a sophisticated version of style transfer for photos. The process they’re developing can even use the sunset lighting of one photo and apply it to a daytime photo of another location. Google, too, has been working on “supercharging style transfer.” Researchers developed a way to combine multiple styles, mixing them like paints.
These tools may change how we value artists, too. We're likely to see a proliferation of algorithmic art in mainstream culture. Such tools will take some of the burden off of artists, but may also lead to fewer job opportunities in the digital economy.
Taobao, a Chinese shopping website, created banner ads for its Singles' Day mega-shopping holiday by training algorithms on the design patterns of successful ads. Airbnb also showed off a tool that uses algorithmic art techniques to convert sketches into fully designed, functional prototypes.
The vibrant world of artistic potential that’s opened up by algorithms will be darkened by the potential for artists to lose control.
Photo credit: Pixabay
It has been less than two years since AlphaGo beat world champion Lee Sedol at Go, but Google's DeepMind has already launched a new AI program to take its place.
In December 2017, AlphaZero defeated Stockfish, a world-class chess engine, after only four hours of training. It had no previous experience with chess beyond the basic rules, yet the results were incredible: the AI went undefeated in a 100-game match, winning 28 games and drawing the rest. After this match, it went on to beat its predecessor AlphaGo at Go, as well as Elmo at shogi.
With this breakthrough, experts were able to learn more about the thought process of a machine. According to Demis Hassabis, the AI "doesn't play like a human, and it doesn't play like a program . . . It plays in a third, almost alien, way." Analyzing AlphaZero's games, he noticed it played some outlandish yet positionally profound moves, and he offers an explanation for this strange behavior: rather than supervised learning from example human games, AlphaZero was trained purely by reinforcement learning, playing games against itself without any human input. DeepMind also says it takes an "arguably more human-like approach," one that relies more on evaluation and planning than on calculating lengthy variations.
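As a toy illustration of learning purely from self-play, here is a tabular value learner for single-pile Nim (take one to three stones; whoever takes the last stone wins). This is nothing like AlphaZero's scale or its deep networks and tree search, but it shows how a program can discover strategy with no example games at all.

```python
import random

def self_play_train(pile=10, episodes=5000, alpha=0.5, eps=0.2, seed=0):
    """Learn single-pile Nim by self-play alone: play epsilon-greedy games
    against yourself and nudge each (state, move) value toward the outcome.
    Toy illustration of self-play learning, not DeepMind's actual method."""
    rng = random.Random(seed)
    Q = {}  # (stones_left, move) -> estimated chance the mover wins

    def q(s, a):
        return Q.get((s, a), 0.5)  # unknown moves start as a coin flip

    for _ in range(episodes):
        s, trajectory = pile, []
        while s > 0:
            moves = [m for m in (1, 2, 3) if m <= s]
            if rng.random() < eps:
                a = rng.choice(moves)                    # explore
            else:
                a = max(moves, key=lambda m: q(s, m))    # exploit
            trajectory.append((s, a))
            s -= a
        reward = 1.0  # the player who took the last stone won
        for s, a in reversed(trajectory):
            Q[(s, a)] = q(s, a) + alpha * (reward - q(s, a))
            reward = 1.0 - reward  # players alternate, so flip the outcome
    return Q

Q = self_play_train()
best = lambda s: max((m for m in (1, 2, 3) if m <= s),
                     key=lambda m: Q.get((s, m), 0.5))
```

With enough episodes the learner tends to discover the classic "leave your opponent a multiple of four" strategy, despite never seeing a single example game.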
Ever since Deep Blue beat world chess champion Garry Kasparov in 1997, computers have revolutionized the game of chess. Now powerful forms of machine learning like AlphaZero are making a drastic impact across the field of board games. It leaves us wondering: who, or what, will defeat AlphaZero?
Google Code-in and the Google Code-in logo are trademarks of Google Inc.
Are you interested in working on projects with open-source organizations along with peers from around the world? Getting first-hand experience in the world of project development? Interested in coding, quality control, documentation, or outreach? Just getting into the world of programming?
Google Code-in, which opened for registration on November 28th and runs until January 17th, is an annual event that lets participants do just that. Pre-university students of all skill levels, ages 13-17, are invited to participate. According to the contest webpage, over 4,500 students from 99 countries have completed work in the contest since 2010. Google partners with a number of open-source organizations (this year's bunch includes Ubuntu and JBoss), giving participants the opportunity to claim tasks and work with mentors to complete assignments.
Assignments range from developing new code for an application or webpage, to installing software and documenting the process, to designing company laptop stickers and t-shirts. Google Code-in has tasks for everyone and is a great way to be introduced to the world of programming. Additionally, participants can win prizes ranging from t-shirts to a trip to Google HQ! If you're interested, we encourage you to sign up at https://codein.withgoogle.com/.
Photo Credit: Gigaom
AI has been making great advances in social media. On November 27th, Guy Rosen, VP of Facebook Product Management, announced in a blog post that Facebook is using AI to help identify suicidal users and connect them to help.
This tool has been in use in the US for months and will now be implemented in other countries as well. Rosen wrote that in the last month alone, Facebook ‘worked with first responders on over 100 wellness checks based on reports’ thanks to this technology.
This tech uses AI for pattern recognition to ‘help accelerate the most concerning reports’ and inform local authorities, writes Rosen. Pattern recognition helps Facebook flag posts and live streams through which users may be expressing suicidal thoughts. It also searches for comments like, ‘Are you ok?’ and ‘Can I help?’ which can be strong indicators of someone needing support. It then prioritizes the posts and sends more pressing ones to be reviewed first.
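A minimal sketch of the comment-pattern signal described above: flag posts whose comments contain concern phrases, and rank posts for review. The phrase list and scoring are invented for illustration and are not Facebook's actual classifier.

```python
# Illustrative concern phrases -- a production system would use a trained
# text classifier, not a fixed list.
CONCERN_PHRASES = ["are you ok", "can i help", "please talk to me"]

def concern_score(comments):
    """Fraction of a post's comments containing a concern phrase."""
    if not comments:
        return 0.0
    flagged = sum(1 for c in comments
                  if any(p in c.lower() for p in CONCERN_PHRASES))
    return flagged / len(comments)

def prioritize(posts):
    """posts: list of (post_id, comments). Returns ids, most concerning first,
    so the most pressing reports are reviewed before the rest."""
    ranked = sorted(posts, key=lambda p: concern_score(p[1]), reverse=True)
    return [pid for pid, _ in ranked]
```

Scoring comments rather than only the post itself mirrors the insight in the paragraph above: friends' replies are often the strongest signal that someone needs support.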
Snapchat, too, recently unveiled AI image-recognition technology in its latest update. It recognises objects in pictures and then offers filters tailor-made to match them. For example, if you take a picture of food, Snapchat will offer filters with phrases like "get in my belly" and "eatin' good".
This is not the first time that the company has incorporated object recognition in its app. Snapchat already allows you to search for certain objects, places and events in ‘stories’. For example, if you search for ‘beach’, you will get snaps of people at beaches, and if you search for ‘football’ you will find snaps of people at football games.
This is just the beginning of AI being incorporated into social media and eventually all aspects of our daily lives.
Photo Credit: Daily Mail
Researchers at the University of California, San Diego, and Adobe have recently created a way for AI to both learn a person’s style and create images of items that match the style. The system could potentially allow retailers to create personalized clothing, or help predict fashion trends.
The system combines two algorithms: a convolutional neural network (CNN) and a generative adversarial network (GAN). Together, the networks improve the results and can create multiple item images for each user. There are still a few obstacles to these AI-generated textiles hitting the market, however. For example, researchers need to turn the two-dimensional computer images into the 3-D patterns used to produce an actual piece of clothing. And of course, fashion sense requires knowing which items pair well together.
Amazon has been working on using AI to spot fashion trends, and Alibaba, a Chinese retail giant, has introduced FashionAI, which recommends items based on what shoppers brought into the dressing room.
Vue.ai is a fashion AI startup that recently revealed a method for creating fake fashion models. Last fall, Burberry launched a Facebook Messenger bot during London Fashion Week that offered glimpses of the new collection, shared trivia, and included a live buying option. HighSnobiety, a website covering streetwear trends, also launched a Sneaker Bot on Facebook Messenger that quickly conveys information and news from different brands.
This is just the tip of the iceberg when it comes to AI applications in fashion. It’s an exciting field, with many high-profile clients and players.
MIT Tech Review