Photo Credit: Shutterstock
Back in November 2017, Facebook announced that it was using artificial intelligence to help
flag suicidal users. The technology identifies posts, videos and live streams that could be
indicative of suicidal thoughts, and the algorithm also prioritises the order in which the
Facebook team reviews these posts. According to Facebook, these accelerated cases reach
local authorities twice as fast as unaccelerated ones, which could save many lives.
Recently, the Canadian government also announced that it will use AI to help prevent
suicides in the nation. It has partnered with Advanced Symbolics, an AI firm that will research
and predict suicide rates by analysing Canadian social media posts. The firm aims to predict
which areas of Canada could see an increase in suicidal behaviour, so that the
government can plan and provide mental health resources in the right places, at the right time.
“To help prevent suicide, develop effective prevention programs and recognize ways to
intervene earlier, we must first understand the various patterns and characteristics of
suicide-related behaviours,” a Public Health Agency of Canada spokesperson told CBC in a statement. “PHAC is exploring ways to pilot a new approach to assist in identifying patterns,
based on online data, associated with users who discuss suicide-related behaviours.”
AI has tremendous potential in the field of suicide prevention, since many people who need
help go unnoticed by others. However, there are potential issues that must be addressed, including:
1. Safety: AI programs must learn to respond appropriately so as not to worsen
suicidal users’ emotional states.
2. Privacy: users’ health information must be protected.
3. Accuracy: the AI must reliably determine suicidal intent.
Nonetheless, small steps, like those being taken by Facebook and the Canadian government,
will provide valuable insights to improve and advance AI technology.
For our younger audiences, we have an awesome opportunity from ThinkSteam. Enter their global thinkBIG challenge. If you're a girl aged 8-14, you could win an iPad Mini or iPad Pro, among other prizes! Make a 1-4 minute video on the importance of STEM or STEAM (Science, Technology, Engineering, Art, and Math). Teams can range from 1-3 members, with two divisions, open until August 15th. To learn more about how to enter, read the full rules, and see past winners, visit: http://www.thinksteam4girls.org/tbc18/. We hope our elementary school and middle school readers will take advantage of this cool opportunity.
Photo credit: Dubai News Gate
Artificial intelligence is becoming prevalent in many aspects of everyday life, and its applications are starting to show in fashion and clothing. In fact, according to the MIT Technology Review, Ministry of Supply, a Boston-based clothing company, has launched a jacket that incorporates AI. The jacket syncs with Alexa, is controlled by an app and customizes its temperature using machine learning. The jacket is not perfect; there are some connectivity issues, although many of these kinks are being worked out.
Aside from being used in clothes, artificial intelligence is helping with fashion choices. The Verge explains that Amazon has an algorithm that analyzes pictures to design clothing. It helps Amazon to quickly realize a trend and design clothes based on the trends. Additionally, another article from the MIT Technology Review finds that researchers from the University of California, San Diego and Adobe have found a way for AI to learn about a person’s specific fashion style and generate outfits to suit that style. This could help consumers make clothing choices and help retailers decide what clothing to sell.
MIT Technology Review
Pictured: Joy Buolamwini, graduate researcher at MIT Media Lab’s Civic Media Group
Image: Bryce Vickmark
Joy Buolamwini is a graduate researcher at the MIT Media Lab and the founder of the Algorithmic Justice League, an organisation that fights bias in decision-making software. As a computer science undergraduate, Buolamwini worked on a robot that used computer vision to interact socially with humans. She realized that the robot, which could identify light-skinned faces, was unable to identify hers. At the time, Buolamwini assumed that someone would soon fix the issue.
But the issue was never solved. Her anecdote is one of many: Google’s facial recognition tagged two black friends as gorillas, HP’s webcams easily tracked a white face but were unable to track a black one, and Nikon’s cameras repeatedly told at least one Asian user that their eyes were closed. These failures may seem like minor glitches, but when so much facial recognition software struggles with non-white faces, it sends non-white people the message that the technology was not built for them.
According to a study released by researchers at Stanford University and MIT, commercially released facial recognition software from three major tech companies all exhibit gender and skin-type bias. In these experiments, the error rate in determining the gender of light-skinned males was never larger than 0.8%. For dark-skinned women, however, the error rate jumped to 20% in one system and over 34% in the other two. These findings raise questions about the data these neural networks are trained on; after all, a neural network’s accuracy depends on the variety of data provided to it. Although certain systems reported 97% accuracy, the data they were evaluated on was more than 77% male and more than 83% white.
When companies think about error, they typically look at aggregate statistics: if a program is right 97% of the time, it is probably accurate enough. But most companies fail to check whether the system makes its errors randomly across a wide distribution, or concentrates them on a particular group of people. A system that is right 95% of the time may be wrong for nearly every Asian person in the United States; a system that is right 99% of the time may make its errors almost exclusively on transgender people.
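To make this concrete, here is a minimal sketch (with made-up numbers, not data from the study) of disaggregating an error rate by group: a classifier can be 95% accurate overall while failing half the time on one group.

```python
# Toy illustration: overall accuracy can hide errors that are
# concentrated entirely in one demographic group.
def group_error_rates(records):
    """records: list of (group, correct) pairs -> {group: error rate}."""
    totals, errors = {}, {}
    for group, correct in records:
        totals[group] = totals.get(group, 0) + 1
        if not correct:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# 95% overall accuracy, but every single error falls on group "B".
data = [("A", True)] * 90 + [("B", True)] * 5 + [("B", False)] * 5
rates = group_error_rates(data)  # {"A": 0.0, "B": 0.5}
```

The aggregate number (95% correct) looks fine, while the per-group breakdown shows the system fails half the time on group "B" — exactly the kind of check the study argues companies should be making.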
But how does one solve this problem? The first step is awareness. Companies must be made aware of the fact that their algorithms and programs need a more diverse data set in order to curb the bias that many programs face. Joy Buolamwini and her partner, Timnit Gebru, hope that their research paper will spur more work into looking at gender and racial disparities in other fields of computer science.
Interested in starting an AI club at your school?
At Allgirlithm, we are developing an AI club curriculum to help make AI education more accessible to middle and high school students in diverse communities. We understand that artificial intelligence is not a typical subject to be taught in school, but we want to change that. Our goal for launching the Allgirlithm Initiative Program is not only to teach students the fundamentals of AI, but also to spark interest, promote diversity, and break stereotypes in this emerging field.
The club lasts for a total of 10 weeks, with students learning 1-2 concepts per meeting. Several topics and subcategories covered include introduction to machine learning, supervised vs. unsupervised learning, clustering algorithms, neural networks, future implications of AI, computer vision, natural language processing, autonomous systems, expert systems, etc. Along with lectures, we will also provide you with links to instructional videos, AI demonstrations, AI/CS-related opportunities, and more. If you are interested in starting your own chapter or know anyone who may be interested, email us in advance at email@example.com. In the email, please describe your previous experience(s) with artificial intelligence and/or computer science and what contributed to your interest in this field. All curriculum materials provided are free, and we will send them to you in early summer so that you can launch your own AI club for the 2018-2019 school year. Feel free to contact us with any questions.
There are already technologies that can help writers follow rules in writing. The Hemingway Editor is an example of a system that does this well, helping users make their writing more concise and grammatically correct.
Revision Assistant, an artificially intelligent system from Turnitin (the company that checks for literary plagiarism), claims its algorithms can help people become better writers.
“Like any art, writing is not necessarily intuitive,” Elijah Mayfield, the founder of Revision Assistant, told Quartz. “Learning to write is at least as hard as learning calculus or learning how to build a circuit. Different, but it’s a skill nonetheless and something you can learn.”
Revision Assistant is now being used by 200,000 students in more than 100 US school districts, according to Quartz. The system recognizes patterns in a student’s draft and decides on the right course of action. For example, if Revision Assistant spots a section that seems to need supporting examples, it highlights the area and encourages the writer to expand. It focuses the writer’s attention on areas for improvement, and leaves the “creative” work to them.
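A toy heuristic in the same spirit (assumed logic for illustration only, not Turnitin's actual model) might flag paragraphs that assert a claim but contain no example cue phrases:

```python
# Hypothetical "needs supporting examples" flagger: a paragraph with no
# example cue phrase gets highlighted for the writer to expand.
CUES = ("for example", "for instance", "such as", "e.g.")

def needs_examples(paragraph):
    """True if the paragraph contains none of the example cue phrases."""
    text = paragraph.lower()
    return not any(cue in text for cue in CUES)

draft = [
    "Exercise improves concentration and mood.",
    "Many schools, such as ours, have added morning runs.",
]
flags = [needs_examples(p) for p in draft]  # [True, False]
```

The real system is trained on scored essays rather than hand-written rules, but the interaction pattern — highlight, explain, let the writer revise — is the same.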
This repetition and instant feedback improve the writing process. Over many revisions, students learn to spot these issues on their own and become more confident writers. Revision Assistant helps prove that tools can make us more skilled in creative fields.
AI Writer is another example; it not only helps write an article but also researches it. Based on a headline the user inputs, the free software generates an article of 100-1000 words. As AI becomes more incorporated into creative fields like art, writing also stands to benefit.
Harvard Business Review
Photo Credit: smartify.org
AI is being integrated into all aspects of our lives. One place where it has tremendous applications is in museums. Museums are full of huge amounts of data that hold many opportunities for AI.
One way that AI can help is by sorting collections, as “more than 90 percent of (enterprise) data is unstructured, human-generated and sourced from various disparate entities” (IDC, 2015). Using image recognition, pattern recognition, machine vision and sentiment analysis (analysing the emotions conveyed through text or recognised faces), museums can find interesting new ways to quickly sort through their collections, and several have already used sentiment analysis to do exactly that.
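At its simplest, sentiment analysis just scores text against word lists. Here is a dependency-free toy sketch — the word lists are invented for illustration and real museum projects would use trained models, not hand-made lexicons:

```python
# Hand-made word lists for illustration only.
POSITIVE = {"beautiful", "joyful", "serene", "vibrant"}
NEGATIVE = {"dark", "grim", "mournful", "bleak"}

def sentiment_score(text):
    """Positive-minus-negative word count: >0 reads upbeat, <0 sombre."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

sentiment_score("A vibrant, joyful harvest scene")  # 2
sentiment_score("A grim and mournful portrait")     # -2
```

A museum could run a scorer like this over catalogue descriptions (or a face-emotion model over the images themselves) to group works by mood rather than only by date or artist.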
Another exciting avenue for museums to explore is the use of smartphones to easily and instantaneously recognise works and access additional information about them. The app Smartify, which launched in 30 museums this year including The Met in New York and the National Gallery in London, does exactly this. It has been described as the ‘Shazam of the art world’, as it uses image recognition to scan and identify art works. It then provides the user with information about the work as well as interviews with the artist and other audio-visual information pertaining to the work. Currently, the app does not recognise images that are not already stored on its database, but the company is working towards changing this.
Additionally, some museums, like the Anne Frank House in Amsterdam, have launched Messenger bots to interact with visitors. The one in the Anne Frank House tells users about the history of the museum and Anne Frank’s past. The SFMOMA also uses a chatbot that connects to visitors through texts. It allows users to send in keywords and in turn sends back answers with pictures of works at the museum including the title, artist and year. These chatbots do make mistakes occasionally, but showcase the great ways in which museums can use AI to make their works more accessible.
These are just a few of the millions of ways in which museums can innovatively use AI to their benefit. AI can help keep track of and sort through museums’ vast amounts of data, can make this information more reachable to the public, and can allow visitors to have more insightful and enjoyable museum visits.
Museums and the web
Photo credit: How-To Geek
All we hear about AI these days is how it has the potential to power ground-breaking technology and drastically change the average person’s life. But AI is already making a dent in the everyday aspects of human life.
For instance, Google’s line of Nest products applies AI to everyday items. CNET gives examples: the Nest Hello smart doorbell, the Nest Secure alarm system, the Nest Yale Lock and the Nest Cam IQ Outdoor security camera. The alarm system and security camera both use AI for facial recognition and motion sensing. All of these applications of AI are helping people make their homes “smarter.”
Another Google Nest item is the Nest Learning Thermostat. The Nest website explains that the thermostat uses AI to “learn what temperature you like and build a schedule around yours.” It recognizes daily patterns and learns a routine, which saves energy as well as adding convenience.
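The idea of "learning a routine" can be sketched very simply — this is assumed logic for illustration, not Nest's actual algorithm: record the temperature the user sets at each hour of the day, then predict the per-hour average as the schedule.

```python
from collections import defaultdict

def learn_schedule(setpoints):
    """setpoints: list of (hour, temperature) the user chose manually.
    Returns {hour: predicted temperature} as the learned schedule."""
    by_hour = defaultdict(list)
    for hour, temp in setpoints:
        by_hour[hour].append(temp)
    return {h: sum(temps) / len(temps) for h, temps in by_hour.items()}

# Two mornings of waking up warm, two nights of cooling down.
history = [(7, 21.0), (7, 21.5), (22, 18.0), (22, 18.5)]
schedule = learn_schedule(history)  # {7: 21.25, 22: 18.25}
```

The real thermostat also folds in occupancy sensing and weather, but the core loop is the same: observe what the user does, then automate it.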
AI has the potential to change life in many little ways, as well as leading technology into great breakthroughs.
Photo Credit: Business Insider
DeepArt.io is a website that uses deep neural networks to identify and combine the stylistic elements of two separate images, a technique known as style transfer. And despite the machinery under the hood, no coding experience is required to use it.
The program relies on a neural algorithm developed by Leon Gatys and colleagues at the University of Tübingen in 2015. It has been used in photo filters on Facebook and Prisma, as well as in moving images: Kristen Stewart used style transfer in her directorial debut, the short film Come Swim, to redraw a brief dream sequence.
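In Gatys's method, an image's "style" is summarised by Gram matrices — channel-to-channel correlations of a CNN's feature maps — while "content" comes from the raw activations; style transfer then optimises a new image to match one image's Gram matrices and the other's content. A dependency-free sketch of the Gram computation (toy numbers stand in for real CNN features):

```python
def gram_matrix(features):
    """features: list of channels, each a flat list of activations.
    Returns the normalised channel-by-channel correlation matrix."""
    n = len(features[0])
    return [[sum(a * b for a, b in zip(fi, fj)) / n for fj in features]
            for fi in features]

# Two toy "channels" of two activations each.
g = gram_matrix([[1.0, 2.0], [3.0, 4.0]])
# g[0][0] = (1*1 + 2*2) / 2 = 2.5; the matrix is symmetric by construction.
```

Because the Gram matrix discards where activations occur and keeps only which features co-occur, it captures texture and brushstroke-like statistics without copying the style image's layout — which is why the result looks like a repainting rather than a collage.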
In recent years, these kinds of programs have proliferated, using different techniques to create AI-assisted works that are both sophisticated and beautiful. In fact, one study found that viewers rated AI-generated art as more convincingly human-made than works exhibited at Art Basel.
For good or ill, the consequences could transform how mainstream art is produced and consumed, and what it means to be an artist.
First, a little history. The earliest known generative computergraphik, created around 1960 by Georg Nees, the German “father of computer art”, consisted mainly of black-and-white drawings of shapes. The first computer-generated music piece, Lejaren Hiller and Leonard Isaacson’s Illiac Suite for String Quartet, came in 1957. Both experiments were aimed at academic audiences, and not very “artistic.”
We’ve come a long way since then. Deepjazz, created by Princeton University Ph.D. student Ji-Sung Kim, used neural networks to detect jazz musical patterns and generate new songs.
Nvidia recently published a paper documenting how its researchers generate lifelike images with incredibly convincing results: an algorithm takes images of a winter street and predicts what it would look like in summer. Gene Kogan, a generative artist and the author of Machine Learning for Artists, has used similar methods to generate realistic images of places.
Cornell University and Adobe researchers have also been working on a sophisticated version of style transfer for photos. The process they’re developing can even use the sunset lighting of one photo and apply it to a daytime photo of another location. Google, too, has been working on “supercharging style transfer.” Researchers developed a way to combine multiple styles, mixing them like paints.
These may change how we value artists, too. We’re likely to see a proliferation of algorithmic art in mainstream culture. These tools will take some of the burden off of artists, but may lead to fewer job opportunities in the digital economy.
Taobao, a Chinese shopping website, created banner ads for its Singles’ Day mega-shopping holiday by training algorithms on the design patterns of successful ads. Airbnb also showed off a tool that uses algorithmic-art techniques to convert sketches into fully designed, functional prototypes.
The vibrant world of artistic potential that’s opened up by algorithms will be darkened by the potential for artists to lose control.
Photo credit: Pixabay
It hasn't been two years since AlphaGo beat world champion Lee Sedol at Go, but Google's DeepMind has already launched a new AI program to take its place.
In December 2017, AlphaZero single-handedly defeated a world-class chess engine, Stockfish, after only four hours of training. In fact, it had no previous experience with chess beyond the basic rules, but the results were incredible: the AI went undefeated in a 100-game match, winning 28 games and drawing the rest. After this match, it went on to beat its predecessor AlphaGo at Go, as well as Elmo at shogi.
With this breakthrough, experts were able to discover more about the thought process of a machine. According to Demis Hassabis, the AI "doesn't play like a human, and it doesn't play like a program . . . It plays in a third, almost alien, way." As he analyzed AlphaZero's games, he noticed it played some outlandish yet positionally profound moves. Hassabis offers an explanation for this strange behavior: rather than supervised learning from example human games, AlphaZero was trained solely by reinforcement learning, playing games against itself without any human input. DeepMind also says it takes an "arguably more human-like approach", one that involves more evaluation and planning instead of calculating lengthy variations.
Ever since 1997, when Deep Blue beat world chess champion Garry Kasparov, computers have revolutionized the game of chess. Now powerful forms of machine learning like AlphaZero are making a drastic impact on board games. It leaves us wondering: who will defeat AlphaZero?