5 AI Trends to Watch in 2020
What AI trends should you keep an eye on? As Udemy instructors and the founders of SuperDataScience, a common refrain we hear from students and companies is that there are too many artificial intelligence trends to keep up with — how do you know which ones matter and will still be in use in five years? If you train your team of data scientists in machine learning, will it have a lasting impact on the business? What other businesses are using this technology, and is it working for them?
We recently hosted a webinar on Udemy for Business that cuts through the AI hype and focuses on which technologies companies and individuals should consider adopting in the coming decade. As AI becomes ubiquitous, it can also be challenging to know which buzzword is worth the investment. In this article, we examine 5 AI trends that we’re telling students and businesses to follow in 2020 and beyond.
Additionally, exclusive to Udemy for Business users, I launched 7 new Executive Briefing courses for organizational leaders and non-technical professionals who would like to gain a better understanding of these skills and their application in the real world. Learn more about these courses with a Udemy for Business demo.
1. Robotic Process Automation (RPA)
Robotic Process Automation (RPA) is a simple AI technology, but also one of the most disruptive. Imagine your job requires you to perform a high-volume, repetitive task on the computer. Maybe it’s related to invoicing a client. This requires you to open an email attachment, copy data from the attachment into a CRM database, then grab related data from a different database, and send that new data in an email reply. The same task is done multiple times per day and prevents you from working on projects that you’re more interested in.
Robotic Process Automation is a type of software robot that can take on these manual repetitive tasks. Using the example above, an RPA tool would read the email, open the attachment, copy data into a CRM, get data from a different database, and even send the email reply. If there were an escalation requiring human intervention, the RPA would notify the employee to step in. In a nutshell, RPA removes mundane tasks and frees up people for more exciting work, a key AI trend for companies to consider.
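The invoicing workflow above can be sketched in a few lines of code. This is a toy illustration, not any real RPA product's API: the "attachment" is a CSV string and the CRM and billing "databases" are plain dictionaries, all hypothetical stand-ins for the systems named in the example.

```python
import csv
import io

# Hypothetical stand-ins: the "email attachment" is CSV text, and the
# CRM / billing "databases" are dictionaries.
ATTACHMENT = "invoice_id,client,amount\nINV-001,Acme Corp,1200.50\n"
crm_db = {}
billing_db = {"Acme Corp": {"terms": "net 30"}}

def process_invoice_email(attachment_text):
    """Automate the repetitive steps: parse the attachment, copy the data
    into the CRM, fetch related billing data, and draft the email reply."""
    replies = []
    for row in csv.DictReader(io.StringIO(attachment_text)):
        crm_db[row["invoice_id"]] = row             # copy data into the CRM
        terms = billing_db[row["client"]]["terms"]  # grab related data
        replies.append(
            f"Invoice {row['invoice_id']} for {row['client']} "
            f"({row['amount']}) recorded. Payment terms: {terms}."
        )
    return replies

print(process_invoice_email(ATTACHMENT)[0])
```

A production RPA tool adds the parts this sketch leaves out: connecting to the mail server, driving the GUI of legacy applications, and escalating exceptions to a human.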
Key RPA applications: Invoicing, billing, payroll processing, data extraction and aggregation, shipment scheduling and tracking.
RPA case study: Financial services company Vanguard has $5.6 trillion in global assets under management. It uses RPA to perform certain straightforward trading tasks ("when x happens, do y," etc.). The RPA tools have not diminished the need for human traders. Rather, the combination of the two allows humans to work on more complex jobs, thereby creating a better overall service for Vanguard clients.
2. Natural language processing (NLP)
Natural language processing applies machine learning models to teach computers how to understand what is said in written and spoken language. Because of its rich and growing applications, natural language processing is arguably one of the top branches of AI in overall economic value. It’s becoming especially popular as consumers adopt voice interface technology like Google Home or Amazon Alexa. Instead of writing or interacting with graphics on a screen, we talk to devices that can understand our casual language.
Natural language processing can be divided into two sub-applications:
- Natural language understanding, which consists of a machine reviewing a text and accurately interpreting its meaning.
- Natural language generation, where a system generates a logical response to a text or input.
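The two sub-applications can be illustrated with a deliberately tiny example. The hand-made word lists below stand in for a trained model (an assumption for brevity): `understand` does a crude form of natural language understanding (sentiment), and `generate` does a templated form of natural language generation.

```python
# Toy lexicon standing in for a trained sentiment model.
POSITIVE = {"great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "hate", "terrible", "slow"}

def understand(text):
    """Natural language understanding: interpret the text's sentiment."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def generate(sentiment):
    """Natural language generation: produce a logical response."""
    responses = {
        "positive": "Glad to hear it! Thanks for the feedback.",
        "negative": "Sorry about that. We'll look into it.",
        "neutral": "Thanks for letting us know.",
    }
    return responses[sentiment]

print(generate(understand("I love this product it is excellent")))
```

Real systems replace the word lists with statistical or neural models, but the pipeline shape, understanding first, then generation, is the same one chatbots use.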
Key natural language processing applications: Sentiment analysis, chatbots, machine translation, automatic summarization, auto video captioning.
Natural language processing case study: YouTube uses natural language processing in many applications across the platform. One use most people will be familiar with is auto-generated captions. Speech recognition software ingests a YouTube video and returns the output of video captions. This technology first went live on the site in 2009 and has been fine-tuned and translated across a dozen languages thanks to the growing dataset available to the company — the videos uploaded every day to the platform.
3. Reinforcement learning
In its simplest terms, reinforcement learning is an input- and output-based system that trains itself through trial and error to reach a certain goal, using a reward system to reinforce its decisions. The AI takes some data as input and returns an action as output. When it acts correctly, it receives a reward; the better it performs its task, the more rewards the system is given, and vice versa.
Imagine training an AI agent to predict whether an object is a carrot or a wood stick. If it correctly predicts a carrot, we give it a reward of plus one, and if it incorrectly predicts a wood stick, we give it a reward of minus one.
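The carrot-versus-stick example can be sketched as a minimal trial-and-error loop. Everything here is a hypothetical toy: the "world" maps a color feature to the true label, the agent keeps a running value estimate for each (color, label) pair, and a +1 / -1 reward nudges those estimates after every guess.

```python
import random

random.seed(0)

# Toy world: each object's color determines the true label.
TRUTH = {"orange": "carrot", "brown": "wood stick"}

# Estimated value of predicting each label for each observed color.
values = {color: {"carrot": 0.0, "wood stick": 0.0} for color in TRUTH}

def predict(color, epsilon=0.1):
    """Pick the higher-valued label, exploring a random one occasionally."""
    if random.random() < epsilon:
        return random.choice(["carrot", "wood stick"])
    v = values[color]
    return max(v, key=v.get)

# Trial-and-error training: guess, get a reward, update the estimate.
for _ in range(200):
    color = random.choice(list(TRUTH))
    guess = predict(color)
    reward = 1 if guess == TRUTH[color] else -1
    values[color][guess] += 0.1 * (reward - values[color][guess])

print(predict("orange", epsilon=0.0))
```

After training, the greedy prediction for "orange" converges to "carrot" because that choice accumulated positive rewards. Production systems use the same reward-driven loop at vastly larger scale, with neural networks replacing the value table.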
Key reinforcement learning applications: Personalized recommendations, advertising budget optimization, and advertising content optimization.
Reinforcement learning case study: Alibaba, the popular Chinese e-commerce site, leveraged reinforcement learning to increase its return on investment for online advertising by 240% without increasing the advertising budget. In a research paper, the Alibaba team explains how reinforcement learning was used to optimize a sponsored search campaign by creating a bidding model for impressions each hour and performing real-time bidding accordingly. In the paper, you can see how this reinforcement learning system outperformed the benchmark of the other bidding systems.
4. Edge computing
With smartphones, smartwatches, and Internet of Things-enabled devices in our homes and cars, there is a lot of data flying around. Processing all this data is a complex exercise requiring information to be sent to cloud computing machines on servers hundreds or even thousands of miles away. Lose a Wi-Fi connection and your smart device becomes a very expensive brick.
Enter edge computing, which takes the servers and data storage that devices rely on for their smarts and puts them directly on the device. This is real-time data processing that results in much faster computing responses and avoids network latency. If cloud computing is big data, edge computing is instant data.
Another type of edge computing is performed on nodes. An edge computing node is a mini-server close to a local telecommunications provider. Using a node creates a bridge between cloud and local computing options. This technique results in lower costs and less time spent on data computation, making for a faster experience for the consumer.
Key edge computing applications: the interconnection of more devices, growth of Internet of Things technology.
Edge computing case study: Consider the Amazon Echo on your kitchen counter. The Alexa assistant technology on the Echo is not actually in the device. It recognizes the “wake-word” of “Alexa,” but the Echo must connect to Wi-Fi to process your audio query via a cloud-based server, no matter how simple or complex the request is.
With a specially designed AI chip enabling edge computing, Amazon hopes to resolve simple questions such as “What time is it?” directly in the device, reducing the response time and providing a better user experience.
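The hybrid model described above can be sketched as a simple query router. This is a hypothetical illustration, not Amazon's actual design: a small table of skills is answered on-device (the edge), and anything else is forwarded to the cloud.

```python
import datetime

# Hypothetical on-device skills: simple, latency-sensitive queries
# that an edge chip could answer without a network round trip.
LOCAL_SKILLS = {
    "what time is it": lambda: datetime.datetime.now().strftime("%H:%M"),
    "set a timer": lambda: "Timer started.",
}

def handle_query(query):
    """Route a voice query: answer locally if possible, else go to the cloud."""
    normalized = query.lower().strip().rstrip("?")
    if normalized in LOCAL_SKILLS:                   # edge: answered on-device
        return ("edge", LOCAL_SKILLS[normalized]())
    return ("cloud", f"Forwarding '{normalized}' to a cloud server...")

print(handle_query("What time is it?")[0])
print(handle_query("Play some jazz")[0])
```

The payoff is exactly the one the article describes: the "edge" path skips network latency entirely, and the device keeps working for those skills even when Wi-Fi is down.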
5. Open-source AI frameworks
Thanks to the libraries and platforms built for AI functionality, highly complex artificial intelligence algorithms, models, pipelines, and training procedures are now accessible to anyone with an interest in the technology. Say you want to build a computer vision project: some open-source AI frameworks will let you implement a working computer vision system in very few lines of code.
Key open-source AI framework applications: Prototype and train complex AI algorithms; build pipelines to define, optimize, and assess an AI model; automate the training of a reinforcement learning module; build neural networks with just a few lines of code.
Open-source AI framework case study: TensorFlow is an AI framework developed by Google that can be used across any branch of artificial intelligence. With TensorFlow, you can build a convolutional neural network for image classification. Some TensorFlow modules will also help simplify the creation of NLP systems. This is among the most popular AI frameworks, especially since the development of TensorFlow 2.0, which allows users to create even more advanced AI systems.
There are many more open-source frameworks and libraries advancing artificial intelligence applications. We dive deeper into frameworks, as well as the real-life business use cases of these AI trends, in the full webinar, so be sure to watch the webinar here.