How many hidden layers should your neural network have? How deep or wide can a fully connected neural network be – and how deep or wide should it be?
All good questions; here we explore some answers.
This book chapter takes the cake on the question of how large or deep a fully connected neural network can or should be:
TensorFlow for Deep Learning
Chapter 4. Fully Connected Deep Networks This chapter will introduce you to fully connected deep networks. Fully connected networks are the workhorses of deep learning, used for thousands of applications. … - Selection from TensorFlow for Deep Learning [Book]
At present, it looks like theoretically demonstrating (or disproving) the superiority of deep networks is far outside the ability of our mathematicians.
One way of thinking about fully connected networks is that each fully connected layer effects a transformation of the feature space in which the problem resides. The idea of transforming the representation of a problem to render it more malleable is a very old one in engineering and physics. It follows that deep learning methods are sometimes called “representation learning.”
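The "each layer transforms the feature space" idea can be sketched in a few lines of NumPy – a minimal, illustrative dense layer, not any particular library's implementation:

```python
import numpy as np

def dense_layer(x, W, b):
    """One fully connected layer: an affine map of the feature space
    followed by a ReLU nonlinearity."""
    return np.maximum(0.0, x @ W + b)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))   # batch of 4 samples, 3 features each
W = rng.normal(size=(3, 5))   # transforms 3-D features into a 5-D space
b = np.zeros(5)

h = dense_layer(x, W, b)
print(h.shape)  # (4, 5) -- the representation now lives in a new 5-D space
```

Stacking several such layers composes several transformations, which is exactly what "representation learning" refers to.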
How to choose the number of hidden layers and nodes in a feedforward neural network?
Is there a standard and accepted method for selecting the number of layers, and the number of nodes in each layer, in a feed-forward neural network? I’m interested in automated ways of building neu...
An answer quotes:
Determining the Number of Hidden Layers

Number of Hidden Layers | Result
0 – Only capable of representing linear separable functions or decisions
1 – Can approximate any function that contains a continuous mapping from one finite space to another
2 – Can represent an arbitrary decision boundary to arbitrary accuracy with rational activation functions and can approximate any smooth mapping to any accuracy
From Introduction to Neural Networks for Java (second edition) by Jeff Heaton
Another answer says:
More than 2 [Number of Hidden Layers] – Additional layers can learn complex representations (a sort of automatic feature engineering) for later layers.
These nice academic folks wrote a whole paper exploring heuristics and things like genetic algorithms to find the optimal size and depth of a fully-connected neural network:
How many hidden layers and nodes?
(2009). How many hidden layers and nodes? International Journal of Remote Sensing: Vol. 30, No. 8, pp. 2133-2147.
Maximum accuracy was achieved with a network with 2 hidden layers, of which the topology was found using a genetic algorithm.
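As a toy illustration of the genetic-algorithm idea (not the paper's actual method), here is a sketch that evolves (layers, nodes) pairs. The fitness function is a made-up stand-in for "train the network and measure validation accuracy":

```python
import random

random.seed(0)

def fitness(layers, nodes):
    # Hypothetical stand-in for validation accuracy; here we simply
    # pretend accuracy peaks at 2 hidden layers of 32 nodes each.
    return -abs(layers - 2) - abs(nodes - 32) / 16

def mutate(ind):
    layers, nodes = ind
    layers = max(1, layers + random.choice([-1, 0, 1]))
    nodes = max(4, nodes + random.choice([-8, 0, 8]))
    return (layers, nodes)

# Random initial population of candidate topologies
pop = [(random.randint(1, 5), random.choice([8, 16, 32, 64])) for _ in range(10)]

for generation in range(20):
    pop.sort(key=lambda ind: fitness(*ind), reverse=True)
    survivors = pop[:5]                                       # selection: keep the fittest half
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(5)]

best = max(pop, key=lambda ind: fitness(*ind))
print(best)
```

In a real search, each fitness evaluation is a full training run, which is why such searches are expensive.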
I have extracted Table 2 from the paper for your viewing pleasure:
Note that for deeper topologies (i.e. more hidden layers), the variance of accuracy and the gap between maximum and minimum accuracies are far larger. This implies that more time and effort is needed to figure out the best training method for a deeper network.
It seems that deeper networks can achieve higher accuracy thanks to better representation learning; however, they are much more unstable during training, and many training iterations may be required to exceed the performance of a shallower fully-connected neural network. This implies that a system should be in place to search or learn the hyper-parameter space.
As for how many nodes per hidden layer, the evidence seems to point towards larger numbers, combined with the dropout hyperparameter to avoid overfitting the model.
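For reference, dropout can be sketched in NumPy as follows. This is the common "inverted dropout" formulation, where surviving activations are rescaled so the expected value is unchanged:

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(h, rate, training=True):
    """Inverted dropout: randomly zero activations during training and
    rescale the survivors so the expected activation is unchanged."""
    if not training or rate == 0.0:
        return h
    keep = rng.random(h.shape) >= rate
    return h * keep / (1.0 - rate)

h = np.ones((2, 8))
print(dropout(h, rate=0.5))                   # roughly half the units zeroed, the rest scaled to 2.0
print(dropout(h, rate=0.5, training=False))   # unchanged at inference time
```

Randomly silencing units forces the network not to rely on any single node, which is why large layers plus dropout tend to generalize better than small layers alone.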
The world of NLP already contains an assortment of pre-trained models and techniques. This article discusses how to best discern which model will work for your goals.
The post above examines current state-of-the-art (SOTA) models namely:
USE (Universal Sentence Encoder)
It goes on to introduce different methods to evaluate those models based on the task at hand.
A little explanation of why the models are different is also given.
They did not state which version of USE was used – there are two versions:
Transformer (USE-T)
Deep Averaging Network (USE-DAN)
The latter is less accurate but more performant on longer sentences.
Another thing to note is that ELMo, while contextual, is not deeply contextual – as declared by the people who created BERT. BERT, of course, is.
Also missing from the action is OpenAI’s GPT-2 – I would have liked to see it included.
There is some buzz about XLNet, but I have not read enough about it to comment, other than that it promises the ability to learn longer-term dependencies in text. However, given that a transformer model’s compute cost grows quadratically with input length, I am curious how they handled that.
…without specific fine-tuning, it seems that BERT is not suited to finding similar sentences.
…USE is trained on a number of tasks but one of the main tasks is to identify the similarity between pairs of sentences. The authors note that the task was to identify “semantic textual similarity (STS) between sentence pairs scored by Pearson correlation with human judgments”. This would help explain why the USE is better at the similarity task.
Pre-trained models are your friend: Most of the models published now are capable of being fine-tuned, but you should use the pre-trained model to get a quick idea of its suitability.
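To make the similarity task concrete: once a model like USE has produced sentence embeddings, comparing sentences usually boils down to cosine similarity between vectors. The 4-D embeddings below are made-up toy numbers – a real encoder returns much higher-dimensional vectors (USE produces 512-D) – but the comparison step is the same:

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity score in [-1, 1]; embeddings pointing the same way score near 1."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical toy embeddings for three sentences
emb_cat   = np.array([0.9, 0.1, 0.3, 0.0])   # "The cat sat on the mat"
emb_kitty = np.array([0.8, 0.2, 0.4, 0.1])   # "A kitten rested on the rug"
emb_stock = np.array([0.0, 0.9, 0.0, 0.8])   # "Stock markets fell sharply today"

print(cosine_similarity(emb_cat, emb_kitty))  # high: similar sentences
print(cosine_similarity(emb_cat, emb_stock))  # low: unrelated sentences
```

A model trained on an STS-style objective, like USE, is trained precisely so that these cosine scores track human similarity judgments – which is why it beats BERT's raw embeddings on this task.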
Being creative or artistic has long been the sole domain of humans. What if machines are able to be as creative? What if machines get better than humans at that?
Allow me to show you some contemporary developments in Artificial Intelligence that might just challenge our assumptions in these four areas:
Problem-solving and Teamwork
But first, let us have a brief discourse about creativity.
What is Creativity?
Mundane dictionary definition below:
For our purposes, I focus on the “create something not seen before” and “invent a new way to solve problems” part of creativity.
In February 2019, OpenAI released GPT-2 to the public – an AI that read 8 million web pages to learn English. It has about 1.5 billion parameters (the connection weights between its artificial neurons). In comparison, the human brain has about 100 billion biological neurons.
In school, our writing skills are evaluated in a number of ways – for example, the ubiquitous “Complete the story” exercise. That is, we get a prompt, usually in the form of a paragraph like:
In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.
We are then expected to write the rest of the story in a way that pleases our teachers. The following is one example of such a story:
The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.
Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.
Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.
Pérez and the others then ventured further into the valley. “By the time we reached the top of one peak, the water looked blue, with some crystals on top,” said Pérez.
Pérez and his friends were astonished to see the unicorn herd. These creatures could be seen from the air without having to move too much to see them – they were so close they could touch their horns.
While examining these bizarre creatures the scientists discovered that the creatures also spoke some fairly regular English. Pérez stated, “We can see, for example, that they have a common ‘language,’ something like a dialect or dialectic.”
Dr. Pérez believes that the unicorns may have originated in Argentina, where the animals were believed to be descendants of a lost race of people who lived there before the arrival of humans in those parts of South America.
While their origins are still unclear, some believe that perhaps the creatures were created when a human and a unicorn met each other in a time before human civilization. According to Pérez, “In South America, such incidents seem to be quite common.”
However, Pérez also pointed out that it is likely that the only way of knowing for sure if unicorns are indeed the descendants of a lost alien race is through DNA. “But they seem to be able to communicate in English quite well, which I believe is a sign of evolution, or at least a change in social organization,” said the scientist.
What do you think? Not too shabby eh?
As you might have guessed, that was written by GPT-2. Its writing is not perfect, but it was pretty entertaining to me. It took one paragraph – the prompt – and created a story of nine original paragraphs along the same theme. Along the way, it invented names and places, and introduced the notion that unicorns are “descendants of a lost alien race” and that the way of “knowing for sure” is through DNA.
One way to learn art is to view many drawings of a particular art style. Our brain then does something wonderful – it generalizes features of an art style so we can transform, mix and match the features to create original art. i.e. Copy from one artwork and it is plagiarism, copy from a number of artworks and that is being creative!
Generative Adversarial Networks (GANs) are one of the most interesting ideas in computer science today. Two models are trained simultaneously by an adversarial process. A generator (“the artist”) learns to create images that look real, while a discriminator (“the art critic”) learns to tell real images apart from fakes.
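Here is a deliberately tiny sketch of that adversarial loop – a 1-D “GAN” with a linear generator, a logistic discriminator, hand-derived gradients, and toy data drawn from a normal distribution. Real GANs use deep networks on images, but the push-and-pull between artist and critic is the same:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Generator G(z) = a*z + b tries to mimic real data drawn from N(3, 0.5).
a, b = 1.0, 0.0          # generator parameters ("the artist")
w, c = 0.0, 0.0          # discriminator D(x) = sigmoid(w*x + c) ("the critic")
lr = 0.05

for step in range(3000):
    real = rng.normal(3.0, 0.5)
    z = rng.normal()
    fake = a * z + b

    # --- discriminator step: push D(real) up and D(fake) down ---
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    c += lr * ((1 - d_real) - d_fake)

    # --- generator step: move fakes toward where D says "real" ---
    d_fake = sigmoid(w * fake + c)
    grad = (1 - d_fake) * w        # gradient of log D(G(z)) w.r.t. the fake sample
    a += lr * grad * z
    b += lr * grad

samples = a * rng.normal(size=1000) + b
print(round(float(samples.mean()), 2))  # should drift toward the real mean of 3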
So we feed the AI one thousand anime faces – here’s a subset of them (see all of them here):
After just half an hour of training on a humble PC, here is a sub-set of its output:
And there we go – original art by an AI. It is not perfect, but some of its “drawings” are pretty good. It is important to note that none of the drawings exists in the training set, and that they are not just augmentations of the drawings fed to it – the AI learned the art style and proportion features found in anime faces and “day-dreamed” the new drawings. On a more technical note, the AI was fed random noise, and out came those drawings.
We have seen this scene many times, someone plays a short piece (aka riff/motif) on say, a piano, then a fellow band member says: “Hey! That sounds rad.” and proceeds to play an extension of that motif in the same spirit.
How does it work? Well, it’s complicated but here is a glimpse:
Generating long pieces of music is a challenging problem, as music contains structure at multiple timescales, from millisecond timings to motifs to phrases to repetition of entire sections. We present Music Transformer, an attention-based neural network that can generate music with improved long-term coherence.
And here is a peek on how the AI is doing it:
To see the self-reference, we visualized the last layer of attention weights with the arcs showing which notes in the past are informing the future.
I wish I could play the piano half as well… the AI is paying “attention” to what is played to determine what to play next – hence the self-reference/self-attention.
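Self-attention itself can be sketched compactly. The version below omits the learned query/key/value projections of a real Transformer and attends over raw embeddings directly, which is enough to show where those attention weights come from:

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention: each position builds its output as a
    weighted mix of every position, with weights showing what it 'attends' to."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                    # how much each note relates to each other note
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: each row sums to 1
    return weights @ X, weights

rng = np.random.default_rng(1)
notes = rng.normal(size=(6, 4))        # 6 "notes" with toy 4-D embeddings
out, attn = self_attention(notes)
print(attn.shape)                      # (6, 6): who attends to whom
```

The `attn` matrix is precisely what the arcs in Magenta’s visualization depict: which past notes inform each future note.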
Problem-solving and Teamwork
In typical human fashion, we have left the best for last – problem-solving and teamwork.
Dota 2 is a game with 5 players on each of two opposing teams (for a total of 10 players per game), and it requires excellent teamwork to win.
A game between two teams takes 40 to 50 minutes to complete on average, which means some longer-term strategic planning is required to win – e.g. instead of scoring immediately, making a play that seems to compromise your chances early in the game because it sets up victory 10 to 30 minutes later.
Like any proper sport, every team in Dota 2 is ranked. Win games and your Matchmaking Rating (MMR) increases, vice versa if you lose games. As its name implies, it is pretty handy to use a team’s MMR to matchmake them with a team that has a similar MMR to have more enjoyable games.
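Dota 2’s exact MMR formula is not public, but the general idea follows Elo-style ratings, which can be sketched like this:

```python
def expected_win(mmr_a, mmr_b):
    """Elo-style expected win probability for team A."""
    return 1.0 / (1.0 + 10 ** ((mmr_b - mmr_a) / 400))

def update(mmr_a, mmr_b, a_won, k=32):
    """Shift both ratings by how surprising the result was (k caps the shift)."""
    e = expected_win(mmr_a, mmr_b)
    delta = k * ((1.0 if a_won else 0.0) - e)
    return mmr_a + delta, mmr_b - delta

a, b = update(2500, 2500, a_won=True)
print(a, b)   # 2516.0 2484.0 -- the winner gains exactly what the loser drops
```

Beating an evenly matched team moves ratings a little; an upset against a much stronger team moves them a lot, which is what makes MMR useful for matchmaking.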
With the above in mind, the AI (OpenAI Five) has beaten teams in the following order:
Best OpenAI employee team: 2.5k MMR (46th percentile)
Blitz – a professional Dota 2 commentator said that OpenAI Five used tactics that he only learned after 8 years of playing the game.
OpenAI Five was also observed to “deviate” from the current playstyle (i.e. the optimal way of playing the game as established by the pros). This suggests that it found better ways to win games that humans have not discovered yet.
OpenAI Five was also scaled up to play the Internet as a competitor or teammate, and it won 99.4% of the 7,656 games it played. It competed against 15,000 players and played cooperatively with 18,700 players.
OpenAI Five’s ability to play with humans presents a compelling vision for the future of human-AI interaction, one where AI systems collaborate and enhance the human experience. Our testers reported feeling supported by their bot teammates, that they learned from playing alongside these advanced systems, and that it was generally a fun experience overall.
Wait, what? Players feel supported and learned from playing alongside the AI? Here is more fuel to this fire:
It actually felt nice; my Viper gave his life for me at some point. He tried to help me, thinking “I’m sure she knows what she’s doing” and then obviously I didn’t. But, you know, he believed in me. I don’t get that a lot with [human] teammates. —Sheever
“…he [the AI] believed in me. I don’t get that a lot with [human] teammates”
Yep, I totally see a future where my best buddy that I play games with is an AI. Curious about how our buddies of the future work? Read on!
An over-simplified gloss of how OpenAI Five works
OpenAI Five uses a large-scale reinforcement-learning system: a Long Short-Term Memory (LSTM) network trained using Proximal Policy Optimization. See the paper here.
Large-scale means they managed to train it on many, many computers (128,000 CPU cores – the average computer these days has 6).
The primary reason so many computers are required is that it learns by playing against itself! Playing against humans to learn the game would be too slow and expensive – not to mention the AI might pick up their bad habits. It plays the equivalent of 900 years of games every day.
Reinforcement learning means it learns which actions are best to take given a particular state of the game. It does so by exploring (a.k.a. messing around) and exploiting – making use of what it has already learned. Rewards (e.g. scores, or kills) from taking those actions are used to determine which actions work best in which states.
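A minimal example of that explore-and-exploit loop is epsilon-greedy Q-learning on a toy chain world – nothing like Dota 2’s scale, but the same principle of rewards shaping state-action values:

```python
import random

random.seed(0)
N_STATES, GOAL = 5, 4                       # walk right along a chain to reach the goal
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action], actions: 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.2

def choose(s):
    """Explore with probability eps (and break ties randomly); else exploit."""
    if random.random() < eps or Q[s][0] == Q[s][1]:
        return random.randint(0, 1)
    return 0 if Q[s][0] > Q[s][1] else 1

for episode in range(200):
    s = 0
    while s != GOAL:
        a = choose(s)
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == GOAL else 0.0                         # reward only at the goal
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])  # the Q-learning update
        s = s2

print([round(max(q), 2) for q in Q])  # state values grow as states near the goal
```

After training, the learned values rise toward the goal, so exploitation alone now walks straight there.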
Long Short-Term Memory (LSTM) is much like what it sounds like – an artificial neural network that keeps track of things that happen in the short term and the long term.
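A single LSTM step can be sketched in NumPy, showing the short-term state h and long-term state c explicitly:

```python
import numpy as np

def lstm_cell(x, h, c, W, b):
    """One LSTM step: gates decide what to forget from long-term state c,
    what new input to store, and what to expose as short-term state h."""
    sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
    z = np.concatenate([x, h]) @ W + b
    n = len(h)
    f = sigmoid(z[:n])             # forget gate: keep or drop long-term memory
    i = sigmoid(z[n:2*n])          # input gate: admit new information
    o = sigmoid(z[2*n:3*n])        # output gate: what to reveal this step
    g = np.tanh(z[3*n:])           # candidate memory content
    c_new = f * c + i * g          # updated long-term state
    h_new = o * np.tanh(c_new)     # updated short-term state
    return h_new, c_new

rng = np.random.default_rng(0)
x, h, c = rng.normal(size=3), np.zeros(2), np.zeros(2)
W, b = rng.normal(size=(5, 8)), np.zeros(8)   # toy random weights, untrained
h, c = lstm_cell(x, h, c, W, b)
print(h.shape, c.shape)
```

Running this cell once per game tick is what lets the network carry context from minutes ago into the current decision.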
Proximal Policy Optimization (PPO) effectively makes the AI learn slowly, so that it fairly explores as many possibilities as it can before committing to a particular way of playing as better.
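The heart of PPO is its clipped surrogate objective, which is small enough to show directly. The numbers below are illustrative:

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO's clipped surrogate: the new-to-old policy probability ratio is
    clipped to [1-eps, 1+eps], so no single update moves the policy too far."""
    return np.minimum(ratio * advantage, np.clip(ratio, 1 - eps, 1 + eps) * advantage)

# A ratio of 2.0 means the new policy is twice as eager to take this action;
# clipping caps how much credit it can claim for a positive advantage.
print(ppo_clip_objective(np.array([2.0]), np.array([1.0])))   # capped near 1.2
print(ppo_clip_objective(np.array([1.1]), np.array([1.0])))   # within range: 1.1
```

That cap is the “learn slowly” behavior described above: even a seemingly great update is only allowed a bounded step.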
OpenAI Five Disclaimers
OpenAI Five does not play with all Dota 2 features; it uses a custom game mode made specifically for the AI. In particular, hero selection is restricted to only 17 heroes vs. 117 in the official game, and the game is played by controlling the selected heroes. Invisibility effects are also removed.
The captain of the team OG (which made history as the first two-time world champion team) said this:
I don’t believe in comparing OpenAI Five to human performance, since it’s like comparing the strength we have to hydraulics. Instead of looking at how inhuman and absurd its reaction time is, or how it will never get tired or make the mistakes you’ll make as a human, we looked at the patterns it showed moving around the map and allocating resources.
Needless to say, we have only scratched the surface of it all. These are interesting times; my hope is that the future is humans cooperating with AI to make the world a better place to live in. In the long run, however, human-AI integration might be inevitable for humans to keep up in an AI-dominated landscape.
Better Language Models and Their Implications
We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization.
Music Transformer: Generating Music with Long-Term Structure
Update (9/16/19): Play with Music Transformer in an interactive colab! Generating long pieces of music is a challenging problem, as music contains structure ...
At OpenAI, we’ve used the multiplayer video game Dota 2 [https://www.dota2.com/play/] as a research platform for general-purpose AI systems. Our Dota 2 AI, called OpenAI Five, learned by playing over 10,000 years of games against itself. It demonstrated the ability to achieve expert-level …
Run the command below in Terminal to get the latest index of Macports packages:
sudo port -v sync
Get MacPorts to install krusader; it downloads the source code and compiles the package during installation.
sudo port install krusader
As mentioned earlier, it will fail to install due to a compilation error. MacPorts will print a vague message telling you to take a look at a .log file to see what happened. Open the .log file (using your favorite text editor – I use Sublime, but the TextEdit that comes with OS X will work), scroll down to near the end, and note the filenames that gave an error during compilation.
What is the underlying focus of a KPI? It could be a telltale sign of the organization’s culture.
The following post is a riveting and sometimes incriminating commentary on how relationship-based goals can keep an organization from alienating the people most important to it, namely:
A handy “test” is included for anyone interested. 🙂
…measuring performance is measuring relationships.
Here was an organization with nearly 7,000 staff but none of its 29 KPIs related to employee satisfaction, safety, turnover, productivity, or innovation. This was not a good sign for the workforce, nor did it reflect positively on the way the CEO and the executive team thought about the organization.
Create KPIs That Reflect Your Strategic Priorities
Start by identifying your most important stakeholders.