Artificial intelligence (AI) is one of the most important and long-lived areas of research in computer science. It's a broad area, with crossover into philosophical questions about the nature of mind and consciousness. On the practical side, present-day AI is largely the field of machine learning (ML). Machine learning deals with software systems capable of changing in response to training data. A prominent architecture is the neural network, the foundation of so-called deep learning. This article introduces you to neural networks and how they work.

Neural networks and the human brain

Neural networks are inspired by the structure of the human brain. The basic idea is that a group of objects called neurons are combined into a network. Each neuron receives one or more inputs and produces a single output based on an internal computation. Neural networks are therefore a specialized kind of directed graph.
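
To make the idea concrete, here is a minimal sketch of a single neuron in Python. The sigmoid activation function and the specific inputs, weights, and bias are illustrative assumptions, not the only possible choices.

```python
import math

def neuron(inputs, weights, bias):
    """One neuron: weighted sum of inputs plus a bias, squashed by a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Two inputs, two weights, one bias -> a single output value.
print(neuron([0.5, 0.8], [0.4, -0.2], 0.1))  # ~0.535
```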

Many neural networks distinguish three kinds of layers of nodes: input, hidden, and output. Neurons in the input layer accept the raw input; the hidden layers transform that input; and the output layer produces the final result. The process of moving data forward through the network is called feedforward.
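
The snippet below sketches feedforward by pushing an input vector through one hidden layer and one output layer. The layer sizes, weights, and sigmoid activation are assumptions made for illustration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: every neuron sees all outputs of the previous layer."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

raw_input = [0.5, 0.8]                                    # input layer
hidden = layer(raw_input, [[0.4, -0.2], [0.3, 0.9]], [0.1, -0.3])
result = layer(hidden, [[0.7, -0.5]], [0.2])              # output layer
print(result)
```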

The network “learns” to perform better by consuming input, passing it forward through the ranks of neurons, and then comparing its final output against known results. The measured error is then fed backward through the system to alter how the nodes perform their computations. This reverse pass is known as backpropagation, and it is a central mechanism of machine learning with neural networks.
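
The training loop below sketches that cycle for a single sigmoid neuron learning a toy OR-style dataset: it feeds each input forward, compares the output against the known result, and nudges the weights in proportion to the error. The dataset, learning rate, and squared-error loss are illustrative assumptions.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy training data: inputs and their known results (logical OR).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
weights, bias = [0.0, 0.0], 0.0
lr = 0.5  # learning rate: how far each correction moves the weights

for _ in range(5000):
    for x, target in data:
        out = sigmoid(sum(xi * wi for xi, wi in zip(x, weights)) + bias)
        # Backward step: gradient of the squared error through the sigmoid.
        grad = (out - target) * out * (1 - out)
        weights = [wi - lr * grad * xi for wi, xi in zip(weights, x)]
        bias -= lr * grad

for x, target in data:
    out = sigmoid(sum(xi * wi for xi, wi in zip(x, weights)) + bias)
    print(x, target, round(out, 2))  # outputs approach the known results
```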

An enormous amount of variety is encompassed within the basic structure of a neural network. Every aspect of these systems is open to refinement for specific problem domains. Backpropagation algorithms, likewise, have any number of implementations; a common approach uses partial derivatives (gradient-based backpropagation) to determine how specific weights contribute to the network's overall performance. Neurons can have different numbers of inputs (from one to many) and can be connected in different ways to form a network. Two inputs per neuron is common.
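
The core idea behind gradient-based backpropagation can be illustrated numerically: nudge one weight and observe how the loss responds. The one-weight loss function below is a hypothetical stand-in; real frameworks compute these partial derivatives exactly via the chain rule rather than by finite differences.

```python
def loss(w):
    # Hypothetical loss as a function of a single weight.
    return (w * 0.5 - 0.8) ** 2

w, eps = 1.0, 1e-6
# Central-difference estimate of the partial derivative d(loss)/dw.
grad = (loss(w + eps) - loss(w - eps)) / (2 * eps)
print(grad)  # about -0.3: increasing w would reduce the loss
```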

Figure 1 shows the overall idea, with a network of two-input nodes.
