Artificial Intelligence Demystified

Josef Bajada · Jan 4

A.I. is this year’s buzzword of choice across the Tech industry, and speculation about what this field can achieve is already running rife.

Let’s separate fact from fiction and make some sense of all the hype.

Photo by Rock'n Roll Monkey on Unsplash

As we start the new year, the Tech propaganda machine is already ramping up its next generation of buzzwords, promising paradigm shifts and silver bullets that will make whole industries obsolete, enable huge efficiency gains, and make the world a better place.

Blockchain, which used to top keyword search trends and social media posts, has suffered a significant decline in interest, partly because its initial hype was residual from the Bitcoin bubble.

It seems that this year’s buzzword of choice is going to be Artificial Intelligence.

It is the new gravy train, and everyone is jumping on the bandwagon out of the usual fear of missing out.

On the other hand, some very prominent people have been voicing serious concerns about the risks of mass proliferation of A.I. in society, leading to a lot of misconceptions, speculation, and also some seriously plausible concerns.

The number of people asking me to help them make sense of all this has been on the increase.

Here is a list of frequently asked questions, and my opinionated answers to them.

Is A.I. some new technology?

No.

Artificial Intelligence has been around for decades.

Ever since the first computers were built, researchers have speculated on how the fast computational abilities of these machines could be used to automate the various intelligent traits of the human mind.

A.I. is not even a technology per se; it is an academic sub-field of Computer Science that brings in various mathematical techniques, such as calculus, probability, statistics and linear programming.

Photo by Antoine Dautry on Unsplash

A.I. actually went through long periods of unpopularity, often referred to as the A.I. Winter, due to over-ambitious promises.

The current hype could indeed lead to a similar situation.

It regained popularity in recent years with successes that made it to mainstream media, such as autonomous vehicles and DeepMind’s AlphaGo beating a human champion player at the game of Go.

A.I. had made similar media hits in 1997, when IBM’s Deep Blue beat chess champion Garry Kasparov (even though he accused IBM of cheating), and in 2011, when Watson won against two champions in Jeopardy.

But isn’t A.I. advancing at an exponential rate?

Well, yes and no.

The core A.I. techniques are not.

All you have to do is look at serious journals and research conferences.

Algorithmic advances have been steady but slow over the past few decades.

The algorithms that are making the most noise in the media are just fine-tuned flavours of techniques developed back in the 50s and 60s.

EDSAC, one of the first general purpose computers.

What did change, however, is the technological environment within which these algorithms are now operating.

For example, Artificial Neural Networks were ahead of their time when they were initially conceived, and were limited in their scope of application due to lack of data availability.

Nowadays, an SSD memory chip, measuring less than half the size of a credit card, can hold terabytes of data.

Processors are much faster and can be found virtually in everyone’s pocket.

Massive data sets are available online and cloud-based infrastructure provides on-demand computational resources.

Sensors are cheap and embedded in almost every mobile device or smart watch, and high-speed wireless broadband networks cover most developed or developing countries, enabling real-time data transfer.

This has made hundreds of applications possible, from forecasting road traffic to predictive health analytics.

As more data sets and real-time data streams become available, and algorithms are fine-tuned to process this data, we are bound to see numerous new applications in a variety of fields over the next few years.

Why is it called Artificial Intelligence?

It is believed that the term Artificial Intelligence was coined way back in 1955, when a cross-disciplinary workshop was proposed, which took place the following year.

Its purpose was to bring various experts from different fields together and propose different ways to simulate human thinking.

Where A.I. stands out from the broader field of Computer Science is that it tries to tackle problems which are known to be very hard to solve computationally, or sometimes even to model in a traditional way.

This could be either due to sheer combinatorial explosion, missing information, or ambiguity in the input signals.

Photo by Olav Ahrens Røtne on Unsplash

Humans (and also animals) seem to have a natural disposition to solve such problems, which we attribute to intelligence.

It is in fact the result of a combination of genetic pre-programming, acquired skills, knowledge, experience, contextual information, out-of-band clues, reasoning abilities, and sometimes mere intuition.

For instance, we seem to have a natural ability to recognise faces, and we understand language, despite its ambiguities, much more effectively than computers.

Are A.I. and Machine Learning the same?

No! It is actually frustrating when supposed field experts use both terms interchangeably.

Machine Learning is just a subset of algorithms from a very broad field of A.I. techniques.

You only need to look at the de facto standard textbook, Artificial Intelligence: A Modern Approach, to see that it covers numerous other topics, such as search-based problem solving and constraint satisfaction, knowledge-based reasoning, planning, probabilistic reasoning, and natural language processing.

Each set of techniques is effective for certain kinds of problems and ineffective for others.

The de facto standard A.I. textbook.

Techniques that take some domain model as input and make use of search and logical reasoning are categorised as Symbolic A.I. On the other hand, techniques that make use of input and sample output data to infer the world model are categorised as Connectionist A.I., a name derived from the fact that such algorithms depend on networks of connected nodes, such as the neurons of an Artificial Neural Network.

An Artificial Neural Network.
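To make the symbolic side a little more concrete, here is a minimal, hypothetical sketch (not taken from any particular system): a breadth-first search over an explicitly written domain model, in this case a toy road map. The map, the location names and the function name are made up purely for illustration; a connectionist technique would instead be shown example journeys and would have to infer the map from data.

```python
from collections import deque

# A toy domain model: a road map written out explicitly as a graph.
# Symbolic techniques search and reason over such a model rather than learning it from data.
ROAD_MAP = {
    "home": ["shop", "school"],
    "shop": ["home", "office"],
    "school": ["home", "office"],
    "office": ["shop", "school"],
}

def find_route(start, goal):
    """Breadth-first search: returns a route with the fewest hops, or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbour in ROAD_MAP[path[-1]]:
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

print(find_route("home", "office"))  # e.g. ['home', 'shop', 'office']
```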

While Connectionist A.I. seems to be getting all the media hype, the truth is that real-world applications often need a hybrid solution.

Even AlphaGo Zero, which is a prime example of deep learning success, makes use of search (Monte Carlo Tree Search) and reinforcement learning to find the best move out of thousands of possible ones.

What is Deep Learning?

A.I. seems to have a fond affection for the term deep.

IBM’s chess playing computer was called Deep Blue.

Its original name was actually Deep Thought, probably paying homage to the fictional computer in “The Hitchhiker’s Guide to the Galaxy”.

The software used by Watson to win at Jeopardy was called DeepQA, and the company behind AlphaGo is called DeepMind (acquired by Google in 2014).

So no wonder that the term deep learning carries more marketing connotations than any strong technical significance.

Deep learning is mostly associated with specific configurations of Artificial Neural Networks, consisting of hidden processing layers that capture multiple levels of feature abstraction and enable richer non-linear classification capabilities.

Deep neural networks have been particularly successful in image recognition and problems where the inputs are bitmaps.
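As a rough sketch of what “hidden processing layers” means in practice, the toy feed-forward network below (plain numpy, with made-up layer sizes and random, untrained weights) passes a flattened bitmap through two hidden layers with a non-linear activation before producing class probabilities. It only illustrates the structure, and is not a claim about any particular deep learning system.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Non-linear activation: without it, stacked layers collapse into a single linear map.
    return np.maximum(0.0, x)

# Made-up layer sizes: 784 inputs (a 28x28 bitmap), two hidden layers, 10 output classes.
sizes = [784, 128, 64, 10]
weights = [rng.normal(0, 0.01, (n_in, n_out)) for n_in, n_out in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n_out) for n_out in sizes[1:]]

def forward(x):
    """Forward pass: each hidden layer re-represents its input at a higher level of abstraction."""
    for w, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ w + b)
    logits = x @ weights[-1] + biases[-1]          # final linear layer
    return np.exp(logits) / np.exp(logits).sum()   # softmax over the 10 classes

image = rng.random(784)          # a stand-in for a flattened bitmap
print(forward(image).round(3))   # class probabilities (untrained, so roughly uniform)
```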

So can machines really learn?

Machine Learning is very different from human learning.

Supervised machine learning often involves some kind of regression mechanism, which adjusts the parameters of a mathematical model to fit a training data set.

A more accurate name for it would have been automated model fitting, but it wouldn’t have sounded cool enough to attract the same level of investment and innovation interest.

What the machine is actually learning is the best mathematical model that fits the data with the least amount of error across the training data points.
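As a toy illustration of this “automated model fitting” view (with made-up data, not from any real application), the snippet below “learns” the slope and intercept of a straight line by minimising the squared error over the training points:

```python
import numpy as np

# Made-up training data: noisy samples of an underlying linear relationship.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 50)
y = 3.0 * x + 7.0 + rng.normal(0, 1.0, 50)   # "true" model plus noise

# "Learning" here is just fitting the parameters (slope, intercept) that minimise
# the squared error across the training data points.
X = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"fitted model: y ≈ {slope:.2f}x + {intercept:.2f}")
print("prediction for x=4:", slope * 4 + intercept)
```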

It is not assimilating any knowledge, not understanding concepts, and not acquiring any skills in the sense that humans do.

Computers are still the same dumb machines that do exactly as told, and in this respect “A.I. is currently very, very stupid”.

Photo by Franck V. on Unsplash

Reinforcement learning works by rewarding good actions and penalising bad ones.

In a certain sense it is somewhat similar to how humans acquire certain motor skills, or adjust their behaviour to be more socially acceptable, by trial and error.

This kind of machine learning has had some interesting success stories, such as Stanford’s autonomous helicopter control.

But it is still a process that finds the best fitting solution for the scenarios it encounters.
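Here is a minimal sketch of the idea, using a made-up one-dimensional “corridor” environment rather than anything as ambitious as a helicopter: a tabular Q-learning agent nudges its action-value estimates towards the rewards it observes, and ends up preferring the actions that lead to the goal.

```python
import random

# A made-up toy environment: a corridor of 5 cells, with the goal at the right end.
# Reaching the goal gives +1 reward; every other step carries a small penalty.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left, step right

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == GOAL else -0.01
    return nxt, reward, nxt == GOAL

# Q-table: estimated future reward for each (state, action) pair.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1

for _ in range(500):                      # training episodes
    state, done = 0, False
    while not done:
        a = random.randrange(2) if random.random() < epsilon else max((0, 1), key=lambda i: Q[state][i])
        nxt, reward, done = step(state, ACTIONS[a])
        # Reward good actions, penalise bad ones: nudge the estimate towards what was observed.
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

# The learned policy: action index 1 (go right) should dominate.
print([max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES)])
```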

Is A.I. really everywhere?

Yes, but it depends on what you consider to be intelligent.

When you switch on Netflix or browse Amazon’s products, a recommender system uses a machine learning algorithm to analyse your profile, match it with other users, and suggest items that have a high probability of being relevant to you.
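A stripped-down sketch of the idea behind such recommender systems follows (user-based collaborative filtering on a tiny made-up ratings matrix; Netflix and Amazon obviously use far more sophisticated pipelines than this):

```python
import numpy as np

# A made-up ratings matrix: rows are users, columns are items, 0 means "not rated yet".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine(u, v):
    # Cosine similarity between two users' rating vectors.
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

def recommend(user, top_n=1):
    """Score each unrated item using the ratings of similar users, weighted by similarity."""
    sims = np.array([cosine(ratings[user], ratings[other]) for other in range(len(ratings))])
    sims[user] = 0.0                               # ignore similarity with yourself
    scores = sims @ ratings / (sims.sum() + 1e-9)  # similarity-weighted average of ratings
    scores[ratings[user] > 0] = -np.inf            # don't recommend items already rated
    return np.argsort(scores)[::-1][:top_n]

print(recommend(user=0))  # item(s) that similar users rated highly but user 0 has not seen yet
```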

Search engines like Google have been applying natural language processing techniques for years, and personal assistants like Siri, Cortana and Alexa take that to another level with speech recognition.

Facebook uses face recognition to help you tag people in pictures.

Photo by Piotr Cichosz on Unsplash

When you enable location services on your smartphone, Google will collect all the information around you, such as GPS coordinates, cellular information and WiFi hotspots, to train its own models and not only improve the positioning accuracy for other users, but also predict how busy that place will be at different times of the day.

You have probably been using simpler A.I. components for longer than you think.

Appliances like air conditioners, washing machines and cameras have been using fuzzy logic controllers for ages.

Is an A.I. revolution inevitable?

Yes.

It is already happening, and it will go as fast as research and technological innovation allow.

It is not only inevitable but, in my opinion, our only hope of progressing further as a species without going to war, if we want to maintain and improve our standard of living.

A.I. is another level of automation, just like the various industrial revolutions before it.

We cannot keep moving factories to cheaper manufacturing countries to keep the cost of production down.

Eventually those countries will develop their economy, improve their infrastructure and standard of living, and become costlier.

If we want this modern day slavery to end while still satisfying our needs for consumer products at affordable prices, we need to think of smarter ways to automate our manufacturing processes.

China is already leading the race.

Hazardous jobs also need to be heavily automated.

Activities such as mining and drilling for oil and gas are high risk operations that cost lives and have a huge environmental impact when things go wrong.

A machine not only performs tasks more consistently, without human error, but can simply be abandoned if the site needs to be sealed off.

As we speak, 15 men have been trapped in a coal mine in India for the past 3 weeks.

A.I. will also make deep space exploration and satellite maintenance easier, and in the more distant future, help in commercial space activities such as lunar and asteroid mining.

Photo by NASA on Unsplash

A.I. will also have a direct positive impact on the way we live.

Autonomous vehicles could revolutionise the way we currently do personal transport.

You would not need to own a car, because you would be able to book the one you need and have it pick you up from your front door within a few minutes.

Once the technology is mature enough, the number of traffic accidents should reduce drastically, eliminating drunk driving casualties and fatalities caused by human error.

The time wasted looking for a parking space during busy hours would also be saved.

A personal healthcare assistant device could be available at your home, ready to check all the vital indicators, analyse symptoms, and deduce the most probable cause.

You won’t need to wake up your GP in the middle of the night because your child fell sick, or wait till morning to diagnose the illness, get a prescription, and queue at the pharmacy to get some badly needed medicine.

Such a device would have access to a wider central database of symptoms and causes, and could even analyse recent trends and identify outbreaks of infections in specific geographic areas.

It could generate the prescription for you, with a unique QR code that could be used to redeem medicine from a 24/7 automated dispenser, or even get it delivered to you by the pharmacy’s drone.

The patient’s vitals, ailments and treatment history are then recorded automatically for future reference (on a blockchain of course).

Are we going to face a robot uprising?

Photo by Franck V. on Unsplash

You can’t blame anyone for asking this question, especially after the charades some governments and sham technology conferences are trying to pull off with a puppet called Sophia.

However, the answer to this is not any time soon.

While all the recent advancements would make one think that progress is so fast that some dystopian future, where robots control our lives and turn against us, is imminent, this couldn’t be further from the truth.

The A.I. algorithms we have so far are very simplistic and are just focused on the one task they are designed or trained to do.

An autonomous vehicle just knows how to drive from point A to point B with the kinds of roads, obstacles and surrounding environment it was designed to handle.

Change a couple of rules of Go, or simply change the board’s grid size, and you will probably need to retrain AlphaGo for the new configuration (which takes hours).

We are very far from Artificial General Intelligence, where robots reason, adapt, and behave in an autonomous self-fulfilling way as humans or animals do.

For robots to even ponder taking over society, they not only need to have the problem solving skills, but they also need to have initiative.

They need to have some sort of central consciousness, self-awareness, and free will to “wake up” one day and decide to take matters into their own robotic hands.

We have very little understanding of how these processes work in our own minds, and while not impossible to solve (nature managed to achieve it through millions of years of evolution), it has been one of the most challenging questions of cognitive science.

As Prof. Andrew Ng puts it, worrying about this is like “worrying about overpopulation on Mars”.

This does not mean that A.I. proliferation does not bring any threats.

So much data is now available about each individual that it could fall into the wrong hands, intentionally or unintentionally.

In the best case it would be used for commercial reasons like targeted advertising.

But in the worst case it could heavily impinge on the privacy and freedom of an individual.

We could end up in a situation where companies and governments know literally everything about each individual, from who they talk to on social media to where they like to have lunch every day.

Citizens could be profiled and preemptively classified as potential criminals before they even do anything wrong.

Your data could end up on the black market due to a security breach, and be used to impersonate you, taint your reputation, or blackmail you.

Some jobs will obviously be displaced or become redundant.

This is nothing new, and comes with all forms of automation that mankind has managed to achieve.

Horse-drawn coaches were replaced by cars, and people working on assembly lines were replaced by robotic arms.

What makes A.I. different is its broad impact on a wide range of jobs at all levels of society.

Photo by Franki Chamaki on Unsplash

Wherever the job depends on data or prior knowledge, there is a good probability that A.I. can do better, because it will have access to larger, more comprehensive databases, can process data faster, and is not subject to human error.

Wherever the job requires repetition, control, or constant attention, like driving a car, train dispatching, or air traffic control, A.I. will be much more efficient and orders of magnitude less accident-prone.

New jobs will be created, the education system will adapt to train fewer people for the jobs that become redundant, and society will ultimately adjust, but the transition will be painful.

There are also ethical issues that come with uncovering humanity’s darkest innate secrets.

Powerful data analysis techniques can be used to analyse DNA patterns and relationships between genes.

An individual’s traits, strengths and weaknesses can be predicted.

Techniques like polygenic scoring could lead to a society where individuals are pre-assigned their role at birth or, even worse, reopen arguments about racial superiority.

When combined with other advances in DNA sequencing and editing technologies, it could help eliminate genetic disorders, but could also lead to a world where an offspring is designed according to the preferences of the parents, or even worse, according to what the government deems most “necessary” for society.

Josef Bajada has a Ph.D. in Computer Science, specialising in A.I. techniques for planning and scheduling. He works as a technology consultant developing A.I. solutions for logistics and oilfield technology applications.

Any opinions expressed in the above article are purely his own, and are not necessarily the view of any of the affiliated organisations.
