Artificial Intelligence innovation continues apace, with explosive growth in virtually all industries. So what did the last year bring, and what can we expect from AI in 2021?
In this article, I list five trends that I saw developing in 2020 that I expect will be even more dominant in 2021.
1. MLOps
MLOps (“Machine Learning Operations”, the practice of running Machine Learning in production) has been around for some time. During 2020, however, COVID-19 brought a new appreciation for the need to monitor and manage production Machine Learning instances. The massive change to operational workflows, inventory management, traffic patterns, and more caused many AIs to behave unexpectedly. This is known in the MLOps world as drift: incoming data no longer matches what the model was trained to expect. While drift and the other challenges of production ML were already familiar to companies with models in production, the disruptions of COVID-19 brought a much broader appreciation of the need for MLOps. Similarly, as privacy regulations such as the CCPA take hold, companies that operate on customer data have an increased need for governance and risk management. Finally, the first MLOps community gathering, the Operational ML Conference, which started in 2019, saw significant growth in ideas, experiences, and breadth of participation in 2020.
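To make drift concrete, here is a minimal sketch in Python of how a monitoring job might compare a feature's live distribution against its training distribution using a two-sample Kolmogorov-Smirnov test. The feature, the threshold, and the simulated data are illustrative assumptions, not a reference to any particular MLOps product.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(train_feature, live_feature, p_threshold=0.01):
    """Flag drift when the live distribution of a feature differs
    significantly from the training distribution (two-sample KS test)."""
    statistic, p_value = ks_2samp(train_feature, live_feature)
    return p_value < p_threshold, p_value

# Hypothetical example: a numeric feature whose distribution shifted in production
rng = np.random.default_rng(0)
train_values = rng.normal(loc=200, scale=20, size=5_000)  # what the model was trained on
live_values = rng.normal(loc=260, scale=35, size=5_000)   # what production now looks like

drifted, p = detect_drift(train_values, live_values)
print(f"drift detected: {drifted} (p={p:.4f})")
```

Real MLOps platforms track many features and metrics over sliding windows, but the underlying question is the same: does live data still look like the data the model was trained on?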
2. Low Code/No Code
AutoML (automated machine learning) has been around for some time. AutoML has traditionally focused on algorithm selection: finding the best Machine Learning or Deep Learning model for a particular dataset. Last year saw growth in the Low-Code/No-Code movement across the board, from applications to targeted vertical AI solutions for businesses. While AutoML enabled building high-quality AI models without in-depth Data Science knowledge, modern Low-Code/No-Code platforms go further, enabling entire production-grade, AI-powered applications to be built without deep programming knowledge.
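As a rough illustration of what AutoML automates, the sketch below uses scikit-learn to score a small, arbitrary set of candidate algorithms by cross-validation on a bundled demo dataset and keep the best one. Real AutoML systems search far larger spaces, including hyperparameters, features, and architectures; the candidates here are assumptions chosen for brevity.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# A tiny subset of the algorithms an AutoML system might consider
candidates = {
    "logistic_regression": LogisticRegression(max_iter=5_000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Score each candidate with 5-fold cross-validation and keep the best
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"best model: {best} ({scores[best]:.3f} mean accuracy)")
```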
3. Advanced pre-trained language models
The last few years have brought substantial advances to the Natural Language Processing space, the greatest of which may be the Transformer architecture and its attention mechanism, best known through BERT (Bidirectional Encoder Representations from Transformers). These models are extremely powerful and have revolutionized language translation, comprehension, summarization, and more. However, they are expensive and time-consuming to train. The good news is that pre-trained models (and, in some cases, APIs that give direct access to them) can spawn a new generation of effective and easy-to-build AI services. One of the largest examples of an advanced model accessible via API is GPT-3, which has been demonstrated on use cases ranging from writing code to writing poetry.
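As an example of how little code a pre-trained model can require, the snippet below uses the open-source Hugging Face transformers library, whose pipeline helper downloads a BERT-family model already fine-tuned for sentiment analysis. The input sentence and the printed output are illustrative only.

```python
from transformers import pipeline

# Downloads a pre-trained model fine-tuned for sentiment analysis
classifier = pipeline("sentiment-analysis")

result = classifier("Pre-trained language models make NLP features easy to ship.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```

The same pipeline interface exposes other pre-trained capabilities, such as summarization and translation, without any model training on the user's part.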
4. Synthetic content generation (and its cousin, the Deep Fake)
NLP is not the only AI area to see substantial algorithmic innovation. Generative Adversarial Networks (GANs) have also seen innovation, demonstrating remarkable feats in creating art and fake images. Like transformers, GANs have been complex to train and tune, in part because they require large training sets. However, recent innovations have dramatically reduced the amount of data needed to train a GAN. For example, Nvidia has demonstrated an augmented training method for GANs that requires far less data than its predecessors. This innovation could expand the use of GANs in everything from medical applications, such as synthetic cancer histology images, to even more deep fakes.
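For readers unfamiliar with how GANs learn, here is a minimal, illustrative training loop in PyTorch for a toy one-dimensional distribution. It shows only the adversarial generator/discriminator dynamic; it is not Nvidia's data-efficient augmentation method, and the network sizes, learning rates, and toy dataset are all assumptions.

```python
import torch
from torch import nn

# Toy generator (noise -> sample) and discriminator (sample -> real/fake score)
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(),
                              nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

real_data = torch.randn(10_000, 1) * 0.5 + 3.0  # stand-in for a real dataset

for step in range(2_000):
    real = real_data[torch.randint(0, len(real_data), (64,))]
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # Discriminator step: learn to tell real samples apart from generated ones
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator step: learn to fool the discriminator into labeling fakes as real
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```

The data-efficiency advances in the paragraph above change how the training data is augmented and regularized, not this basic adversarial structure.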
5. AI for kids
As low-code tools become prevalent, the age at which young people can build AIs is decreasing. It is now possible for an elementary or middle school student to build their own AI to do anything from classifying text to classifying images. High schools in the United States are starting to teach AI, with middle schools looking to follow. As an example, at Silicon Valley's Synopsys Science Fair in 2020, 31% of the winning software projects used AI in their innovations. Even more impressively, 27% of those AIs were built by students in grades 6-8. One winner, who went on to the national Broadcom MASTERS competition, was an eighth-grader who built a Convolutional Neural Network to detect diabetic retinopathy from eye scans.
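To give a sense of how approachable this has become, the sketch below defines a small image classifier in Keras of the kind a student might assemble with high-level tooling. The input size, layer sizes, and the two-class labeling are hypothetical placeholders, not the student's actual model.

```python
from tensorflow import keras
from tensorflow.keras import layers

# A compact convolutional image classifier built from high-level building blocks
model = keras.Sequential([
    layers.Input(shape=(128, 128, 3)),          # placeholder image size
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),      # e.g. "healthy" vs. "at-risk" scan
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```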
What does all this mean?
These are not the only trends in AI. However, they are noteworthy because they point in three significant directions:
- The increased real-world use of AI, as evidenced by the problems COVID-19 caused for production models and the growth of MLOps.
- Continued algorithmic innovation, as seen in BERT and GANs.
- Democratization beyond engineering into all industries and skill sets, as evidenced by low-code/no-code tools and their ability to put AI within reach of everyone from software engineers to school kids.