The Future of AI in the Face of Data Famine

The field of artificial intelligence research was founded as an academic discipline in 1956.

Despite a history of more than 60 years, the field is still at the very beginning, and compared to similar disciplines it faces a bumpy road ahead, driven mainly by challenges around ethics and the availability of data.

Fluctuating Fortunes of AI

Since its beginning, Artificial Intelligence has experienced three major breakthroughs and two periods of stagnation.

Its most recent renaissance was triggered in 2016 with the historic moment of AlphaGo defeating the world’s best Go players, a game long thought to be too complex for Artificial Intelligence.

As we have learned from previous cycles of AI, whenever the field makes a leap forward, there is a lot of scrutiny and concern over what this means for the world, both in industry and in society.

As a result, certain ideas for AI become highly controversial in the public eye, and the field enters a “Trough of Disillusionment”.

Thinking about why Artificial Intelligence remains so controversial, it turns out that there is a significant gap between what AI is expected to provide and what it is able to accomplish in reality.

Today, true real-world examples of AI are still rare and often focus on very niche cases, far away from the scenarios marketers write about to chase clicks on social media.

There is still a long way to go before AI goes mainstream.

While there is no lack of vision in this domain, we see growing doubts about what AI can truly accomplish today.

Now towards the end of this third rise of Artificial Intelligence, the fate of this emerging field is uncertain — again.

Data Famine Is Coming

The most recent rise of AI was largely fueled by the availability of big data, which has powered the development of deep learning in areas such as facial recognition, one of the main breakthroughs of this AI wave.

In more complex fields, such as disease diagnosis, deep learning still faces the challenge of bridging the gap between the businesses and institutions that hold the relevant data.

A major issue in this field is the accessibility of data. From a holistic perspective, the data is available, but for several reasons it is not accessible.

A common problem is that data is stored within silos.

These silos are often the result of physical separation within a company’s internal network, or even between the companies themselves.

Another prominent issue is data structure incompatibility.

As a result, there is no centralized data hub to train a powerful neural network via deep learning mechanisms.

Cloud-based computing is often cited as a potential solution to the data silo problem, but it has proven to be expensive and time-consuming for large amounts of data.

And then there are the increasingly stringent data privacy regulations, such as the General Data Protection Regulation (GDPR).

While such policies are important for protecting the privacy of consumers, they also place heavy constraints on the usage of data and require us to rethink how to build Artificial Intelligence applications in a compliant way.

Federated Learning — the promise of a 4th big breakthrough

Consumer protection practices and data privacy are non-negotiable; they are the baseline for establishing the needed trust.

On the other hand, this brings the risk of a data famine and a slowdown in the rise of AI.

Federated Learning is a new approach to Artificial Intelligence that has the potential to bring the next big breakthrough in AI and overcome the data privacy and trust challenges of this wave.

It is a machine learning framework that allows models to be trained on multiple datasets distributed across a variety of locations while preventing data leakage and complying with stringent data privacy regulations.
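To make the idea concrete, here is a minimal sketch of federated averaging, one common way such a framework can be realized. It uses plain NumPy on a toy linear-regression task; the parties, data sizes and training settings are illustrative assumptions, and a production system would add secure aggregation and encryption on top.

```python
# A minimal sketch of federated averaging (FedAvg) on a toy linear-regression task.
# All names and numbers are illustrative assumptions, not part of any specific framework.
import numpy as np

rng = np.random.default_rng(42)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's contribution: a few gradient steps on data that never leaves it."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Three hypothetical parties, each holding its own private dataset.
w_true = np.array([2.0, -1.0, 0.5])
clients = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = X @ w_true + rng.normal(scale=0.1, size=200)
    clients.append((X, y))

# The coordinating server only ever sees model weights, never raw data.
w_global = np.zeros(3)
for _ in range(20):
    local_weights = [local_update(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_weights, axis=0)

print(np.round(w_global, 2))  # converges towards w_true
```

The key point is visible in the loop: only model weights travel between the parties and the server, while every raw dataset stays where it was collected.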

In practice, Federated Learning has three major categories, depending on the distribution characteristics of the data.

Horizontal federated learning partitions data by users rather than by features, and is typically applied in cases where the parties’ features overlap far more than their users do.

For example, three logistics companies operating in different regions may keep similar data on their consumers, but the overlap between consumers themselves is relatively small.

Since their feature spaces are almost identical, the parties’ data can effectively be combined to train a shared model without ever being centralized.
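As a small sketch of what this horizontal setting looks like, the companies hold tables with the same columns but largely different customers, and the averaging step from the sketch above is simply weighted by how many samples each party holds. The company names, schema and weights below are invented purely for illustration.

```python
# Horizontal setting: identical feature columns, mostly disjoint customers.
# Company names and figures are invented for illustration only.
import numpy as np

rng = np.random.default_rng(7)
features = ["parcel_weight", "distance_km", "is_express"]   # shared schema

# Each regional company holds a different number of its own customers' rows.
company_rows = {"north_logistics": 300, "south_logistics": 120, "west_logistics": 80}
datasets = {name: rng.normal(size=(n, len(features))) for name, n in company_rows.items()}

# Stand-ins for locally trained weights; in practice these come from local training.
local_weights = {name: rng.normal(size=len(features)) for name in datasets}

# Sample-count-weighted average of the local weights (the usual FedAvg rule).
total = sum(company_rows.values())
w_global = sum((company_rows[name] / total) * local_weights[name] for name in datasets)
print(np.round(w_global, 2))
```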

Vertical federated learning is generally used when multiple datasets have a large overlap of users but have different features.

For example, a food delivery service and a hospital operating in the same area are likely to have a similar set of users, but each keeps track of different information: the hospital tracks health data, while the food delivery service tracks things like browsing habits and purchase data.

Vertical federated learning aggregates all of these features in order to build a model for both parties collaboratively.
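A simplified, plaintext sketch of the vertical case follows: the two parties first find their overlapping users and then combine their different feature columns for those users. The IDs, features and the least-squares model are assumptions for illustration only; real vertical federated learning exchanges encrypted intermediate results (for example via private set intersection and homomorphic encryption) rather than raw features.

```python
# Vertical setting: overlapping users, different feature columns held by different parties.
# All IDs and values are invented; real systems never share features in plaintext.
import numpy as np

rng = np.random.default_rng(1)

hospital_ids = np.array([101, 102, 103, 105, 108])
hospital_feats = rng.normal(size=(5, 3))           # e.g. health-related features

delivery_ids = np.array([102, 103, 104, 105, 109])
delivery_feats = rng.normal(size=(5, 4))           # e.g. browsing/purchase features
labels = rng.integers(0, 2, size=5).astype(float)  # label held by the delivery service

# 1) Find the overlapping users (a private set intersection in real deployments).
common, h_idx, d_idx = np.intersect1d(hospital_ids, delivery_ids, return_indices=True)

# 2) Align those users and concatenate the two parties' feature columns.
X = np.hstack([hospital_feats[h_idx], delivery_feats[d_idx]])
y = labels[d_idx]

# 3) Fit a joint model over the widened feature space.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(common, np.round(w, 2))
```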

When there is very little overlap between both the users and features of a dataset, federated transfer learning is used to overcome this lack of data or labels.

Take, for example, a manufacturer in China and a logistics provider in the USA.

Since they are geographically constrained, there is very little overlap between users; likewise, since they are different types of institutions, their features also have very little overlap.

In such cases, transfer learning is applied in conjunction with federated learning to define a common representation between the datasets and improve the overall performance of the model.
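The sketch below illustrates the underlying idea in a deliberately simplified, non-private form: a small set of aligned samples is used to learn a mapping from one party’s feature space into the other’s representation, after which the first party’s model can be reused. Party roles, sizes and the least-squares mapping are illustrative assumptions; actual federated transfer learning wraps this in encrypted, privacy-preserving protocols.

```python
# Federated transfer learning, stripped down: learn a bridge between two feature
# spaces from a tiny overlap, then reuse one party's model on the other's data.
# Everything here is an illustrative assumption, with no privacy machinery.
import numpy as np

rng = np.random.default_rng(0)

# Party A (say, the manufacturer): plenty of labelled data in its own feature space.
X_a = rng.normal(size=(500, 8))
w_true = rng.normal(size=8)
y_a = X_a @ w_true + rng.normal(scale=0.1, size=500)
w_a, *_ = np.linalg.lstsq(X_a, y_a, rcond=None)    # A's locally trained model

# A small set of samples for which both parties hold a representation.
bridge = rng.normal(size=(5, 8))
X_b_overlap = rng.normal(size=(20, 5))             # B's features for the overlap
X_a_overlap = X_b_overlap @ bridge                 # A's representation of the same samples

# Learn a mapping from B's feature space into A's representation (least squares).
M, *_ = np.linalg.lstsq(X_b_overlap, X_a_overlap, rcond=None)

# Party B (say, the logistics provider) can now score its own, non-overlapping data.
X_b_new = rng.normal(size=(10, 5))
preds = (X_b_new @ M) @ w_a
print(preds.shape)                                 # (10,)
```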

Despite its capabilities, an effective framework alone is not enough to completely address the challenges.

Federated Learning must be developed into a commercial application that offers a flexible, win-win business model for a certain industry.

By aggregating multiple isolated datasets across different institutions, federated learning makes it possible to develop an ideal model without the need to infringe on the privacy of each individual.

Simply put, it is a method of training an algorithm with data from multiple stakeholders while keeping the data in its silos: a data sharing economy in which data holders benefit by sharing their data, while application providers profit by providing the services necessary to develop those models.

Written by Cyrano Chen, Zion Chen and Michael Renz.
