{"id":20406,"date":"2024-10-21T20:32:00","date_gmt":"2024-10-21T20:32:00","guid":{"rendered":"https:\/\/liquidinstruments.com\/?p=20406"},"modified":"2025-12-03T22:12:50","modified_gmt":"2025-12-03T22:12:50","slug":"what-is-a-neural-network","status":"publish","type":"post","link":"https:\/\/liquidinstruments.com\/blog\/what-is-a-neural-network\/","title":{"rendered":"What is a neural network?","gt_translate_keys":[{"key":"rendered","format":"text"}]},"content":{"rendered":"<p><span style=\"font-weight: 400;\"><a href=\"https:\/\/liquidinstruments.com\/blog\/moku-version-3-3-delivers-capability-and-efficiency-gains-with-integrated-ai\/\" target=\"_blank\" rel=\"noopener\">Moku Version 3.3<\/a> brings a new <a href=\"https:\/\/liquidinstruments.com\/integrated-instruments\/neural-network\/\" target=\"_blank\" rel=\"noopener\">Neural Network<\/a> instrument to <\/span><a href=\"https:\/\/liquidinstruments.com\/products\/hardware-platforms\/mokupro\/\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">Moku:Pro<\/span><\/a><span style=\"font-weight: 400;\"> that enables users to implement artificial neural networks for fast, flexible signal analysis, denoising, sensor conditioning, closed-loop feedback, and more. If you are unfamiliar with the basics of a neural network or want to know how one can benefit your research and development goals, read on to explore real-life applications and tutorials.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">First, it\u2019s important to note that \u201cartificial neural network\u201d (ANN) is the more accurate term for these systems, since they are software-based, rather than biological neurons. With this clarification in mind, we will use the term \u201cneural network\u201d to refer to an ANN. 
For this introduction, we\u2019ll consider only fully connected neural networks, rather than more complicated setups such as convolutional, recurrent, and transformer architectures.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">A neural network is a system of interconnected nodes, or neurons, arranged in layers. The first layer takes external data as its input. Each subsequent neuron computes a weighted sum of its inputs from the previous layer, and then adds a value called a bias. This value is then passed through an activation function, which introduces non-linearity, enabling the network to learn complex patterns. The final layer produces the network\u2019s output, while all layers in between are known as hidden layers. Through training, the network adjusts its weights and biases to improve its accuracy over time.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">In mathematical terms, imagine the input layer as an <\/span><i><span style=\"font-weight: 400;\">N \u2715 1 <\/span><\/i><span style=\"font-weight: 400;\">matrix, where <\/span><i><span style=\"font-weight: 400;\">N<\/span><\/i><span style=\"font-weight: 400;\"> is the number of nodes in the input layer and each element in the matrix corresponds to a node\u2019s activation value, as shown in Figure 1.<\/span><\/p>\n<p><img decoding=\"async\" class=\"aligncenter wp-image-20418 size-large\" src=\"https:\/\/liquidinstruments.com\/wp-content\/uploads\/2024\/10\/Screenshot-2024-10-10-at-9.16.41-AM-1-1024x573.png\" alt=\"\" width=\"900\" height=\"504\" srcset=\"https:\/\/liquidinstruments.com\/wp-content\/uploads\/2024\/10\/Screenshot-2024-10-10-at-9.16.41-AM-1-1024x573.png 1024w, https:\/\/liquidinstruments.com\/wp-content\/uploads\/2024\/10\/Screenshot-2024-10-10-at-9.16.41-AM-1-300x168.png 300w, https:\/\/liquidinstruments.com\/wp-content\/uploads\/2024\/10\/Screenshot-2024-10-10-at-9.16.41-AM-1-768x430.png 768w, 
https:\/\/liquidinstruments.com\/wp-content\/uploads\/2024\/10\/Screenshot-2024-10-10-at-9.16.41-AM-1-600x336.png 600w, https:\/\/liquidinstruments.com\/wp-content\/uploads\/2024\/10\/Screenshot-2024-10-10-at-9.16.41-AM-1.png 1439w\" sizes=\"(max-width: 900px) 100vw, 900px\" \/><\/p>\n<p style=\"text-align: center;\"><span style=\"font-weight: 400;\">Figure 1: Neural network architecture, showing input, hidden, and output layers.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Next are the hidden layers. The number of hidden layers and nodes within them depends on the complexity of the model and available computational power. The activations in the hidden layers are calculated using a combination of the activations from the previous layer, with each node applying different weights and biases to the input layer values. This is shown in linear algebra terms in Figure 2.&nbsp;<\/span><\/p>\n<p><a href=\"https:\/\/liquidinstruments.com\/wp-content\/uploads\/2024\/10\/equation.png\"><img decoding=\"async\" class=\"aligncenter wp-image-20555 size-large\" src=\"https:\/\/liquidinstruments.com\/wp-content\/uploads\/2024\/10\/equation-1024x232.png\" alt=\"\" width=\"900\" height=\"204\" srcset=\"https:\/\/liquidinstruments.com\/wp-content\/uploads\/2024\/10\/equation-1024x232.png 1024w, https:\/\/liquidinstruments.com\/wp-content\/uploads\/2024\/10\/equation-300x68.png 300w, https:\/\/liquidinstruments.com\/wp-content\/uploads\/2024\/10\/equation-768x174.png 768w, https:\/\/liquidinstruments.com\/wp-content\/uploads\/2024\/10\/equation-600x136.png 600w, https:\/\/liquidinstruments.com\/wp-content\/uploads\/2024\/10\/equation.png 1245w\" sizes=\"(max-width: 900px) 100vw, 900px\" \/><\/a><\/p>\n<p style=\"text-align: center;\"><span style=\"font-weight: 400;\">Figure 2: The activations in the hidden layers are calculated using a combination of the activations from the previous layer.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">If the input layer is an 
<\/span><i><span style=\"font-weight: 400;\">N \u2715 1<\/span><\/i><span style=\"font-weight: 400;\"> matrix (\\(n_1\\)<\/span><i><span style=\"font-weight: 400;\">,<\/span><\/i><span style=\"font-weight: 400;\"> \\(n_2\\)<\/span><span style=\"font-weight: 400;\">\u2026), the next activations are obtained by multiplying this with an <\/span><i><span style=\"font-weight: 400;\">M \u2715 N<\/span><\/i><span style=\"font-weight: 400;\"> matrix, where <\/span><i><span style=\"font-weight: 400;\">M <\/span><\/i><span style=\"font-weight: 400;\">is the number of nodes in the hidden layer. Each element in the matrix is a weight represented by \\(w_{mn}\\)<\/span><span style=\"font-weight: 400;\">, meaning that <\/span><i><span style=\"font-weight: 400;\">MN<\/span><\/i><span style=\"font-weight: 400;\"> parameters are needed for each layer. The result is an <\/span><i><span style=\"font-weight: 400;\">M \u2715 1 <\/span><\/i><span style=\"font-weight: 400;\">matrix, which is then offset by a bias value (\\(b_1\\)<i>,<\/i> \\(b_2\\)<\/span><span style=\"font-weight: 400;\">\u2026). These new activations are then passed through an \u201cactivation function\u201d. The activation function can provide nonlinear behavior such as clipping and normalizing, making the network much more powerful than it would be if it were simply a series of matrix multiplications.<\/span><\/p>\n<p>Each activation function has distinct properties, and the &#8220;correct&#8221; choice depends heavily on the application. Common options include ReLU (Rectified Linear Unit), which is computationally efficient but can stall learning when many neurons output zero, as it truncates all negative values. Others, like Tanh and Sigmoid functions, produce smooth, bounded outputs useful for classification but can slow learning in deep networks, as their curves flatten out at large input values and shrink the gradients. 
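In code, a single layer of this calculation (weighted sum, plus bias, then activation) can be sketched with NumPy. The layer sizes, weights, and activation choices below are illustrative only, not values from the Moku instrument:

```python
import numpy as np

def dense_layer(a_prev, W, b, activation="relu"):
    """One fully connected layer: weighted sum plus bias, then activation.

    a_prev: (N, 1) activations from the previous layer
    W:      (M, N) weight matrix, i.e. M*N parameters
    b:      (M, 1) bias vector
    """
    z = W @ a_prev + b             # (M, 1) pre-activations
    if activation == "relu":
        return np.maximum(z, 0.0)  # truncates negative values
    if activation == "tanh":
        return np.tanh(z)          # smooth and bounded in (-1, 1)
    return z                       # "linear": no non-linearity applied

# Illustrative 3-node input layer feeding a 2-node hidden layer
rng = np.random.default_rng(seed=1)
a0 = rng.normal(size=(3, 1))
W1 = rng.normal(size=(2, 3))
b1 = np.zeros((2, 1))
a1 = dense_layer(a0, W1, b1, activation="tanh")  # shape (2, 1)
```

Stacking several such calls, each with its own weights and biases, gives the multi-layer forward pass described above.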
Lastly, linear functions work well for regression tasks but limit the network&#8217;s ability to model complex, non-linear patterns. Different layers can use different activation functions, and the choice depends on the specific application requirements and network architecture.<\/p>\n<p><span style=\"font-weight: 400;\">After passing through several hidden layers, the data finally arrives at the output layer. In the output nodes, the value of the activation corresponds to some parameter of interest. As an example, suppose that time series data collected from an oscilloscope is fed into the input, and the goal of the network is to classify the signal as a sine wave, square wave, sawtooth wave, or DC signal. In the output layer, each node would correspond to one of these options, with the highest-valued activation representing the network\u2019s best guess as to the form of the signal. If one activation is close to 1 and the others are close to 0, the confidence of the network\u2019s guess is high. If the activations are similarly valued, this indicates low confidence in the answer.<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">How do neural networks work?<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">Without adjusting the weights and biases of the hidden layers, a neural network ends up being nothing more than a complex random number generator. To improve the accuracy of the model, the user must provide training data, where both the input dataset and expected answer are known. The model can then calculate its own answer to the training set, which can be compared to the true value. A measure of this difference, known as the cost function, gives a quantitative assessment of the model\u2019s performance.<\/span><\/p>\n<p>The model aims to reduce this cost function through training. For instance, Mean Squared Error (MSE) finds the average of the squared differences between the predicted and real values. 
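As a concrete sketch of MSE and one training update, here is a toy one-parameter model in NumPy; the data, initial weight, and learning rate are made up for illustration:

```python
import numpy as np

# Toy training set: inputs x and known answers y (here y = 3x, noiseless)
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 3.0 * x

def mse(pred, target):
    # Mean Squared Error: average of the squared prediction errors
    return np.mean((pred - target) ** 2)

w = 0.5                           # a single weight, badly initialized
cost_before = mse(w * x, y)

# One gradient-descent step: move w against d(cost)/dw
grad = np.mean(2.0 * (w * x - y) * x)
w -= 0.1 * grad
cost_after = mse(w * x, y)        # smaller than cost_before
```

Repeating this update drives the cost toward zero; a full network does the same thing for every weight and bias at once.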
An optimizer is the method used to efficiently minimize this cost function. Machine learning packages for Python, such as Keras, help guide the choice of the right cost function and optimizer.<\/p>\n<p><span style=\"font-weight: 400;\">After calculating the cost function for a given dataset, the weights and biases of the hidden layers can then be adjusted, with the goal of minimizing the cost function. This adjustment uses gradient descent from vector calculus and can be explored further in the literature [1]. This process, called backpropagation<\/span><i><span style=\"font-weight: 400;\">,<\/span><\/i><span style=\"font-weight: 400;\"> allows information obtained via the cost function to work backward through the layers, resulting in the model learning, or adjusting itself, without human input.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">Training data is often run through the neural network multiple times. Each complete pass of the training data through the model is known as an epoch. Typically, some training data will be reserved for validation. In validation, a trained network is used to infer outputs from this reserved data set, and its predictions are compared to the known correct output. This gives a more accurate picture of the model\u2019s performance than the cost function value alone, as it indicates how well the model can generalize results to new, previously unseen inputs.<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">What are the different types of neural networks?<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">Neural networks, while operating on similar principles, can take various forms depending on the application. A few common neural network examples include:<\/span><\/p>\n<ul>\n<li><span style=\"font-weight: 400;\">Feed-forward neural network (FNN): This is the standard format, such as the one discussed in the above examples. 
In an FNN, data is passed forward through the network without any feedback or memory of the previous input. A typical example is an image, where each pixel is an input into the neural network, and the output is a classification of that image.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Convolutional neural network (CNN): This is a subtype of FNN that is commonly used to detect features in images through the use of filters. Given the typically large size of images, these filters act to reduce the dimensions of the input image into a much smaller number of weights. Each neuron in the hidden layer can then scan for the same feature over the entire input, making CNNs robust to translation of images.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Recurrent neural network (RNN): As opposed to feed-forward networks, RNNs involve the use of feedback in the hidden layers. Feedback mechanisms give the system a memory, so that the output of a given layer can depend on prior inputs. This makes RNNs excellent choices for sequential data sets such as time series, speech, and audio data.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Autoencoder: An autoencoder is a special configuration of a neural network, typically an FNN or CNN, that encodes given data into a reduced-dimensional space and then reconstructs, or decodes, it from the encoded representation. Conceptually, this is very similar to principal component analysis (PCA), which is useful in statistics and bioinformatics.<\/span><\/li>\n<\/ul>\n<h2><span style=\"font-weight: 400;\">How are neural networks used in signal processing?&nbsp;<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">While neural networks are popular for applications like powering large language models, interpreting images, and translating languages, they are also incredibly useful for signal processing. 
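One common way (though not the only one) to turn raw output-layer activations into the confidence values described earlier is a softmax readout. The class names and activation values in this NumPy sketch are illustrative only:

```python
import numpy as np

# Hypothetical output-layer activations for four signal classes
classes = ["sine", "square", "sawtooth", "dc"]

def softmax(z):
    # Normalize raw outputs into a probability-like confidence vector
    e = np.exp(z - np.max(z))  # shift by max for numerical stability
    return e / e.sum()

confident = softmax(np.array([4.0, 0.1, 0.2, 0.1]))  # one clear winner
uncertain = softmax(np.array([1.0, 1.1, 0.9, 1.0]))  # similar values

best = classes[int(np.argmax(confident))]  # the network's best guess
```

A dominant activation maps to a confidence near 1, while similarly valued activations spread the confidence thinly across all classes.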
A few examples where machine learning can improve a measurement setup include:<\/span><\/p>\n<ul>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Control systems: In some systems, the inputs required to achieve a particular control state are hard to know in advance, or put differently, the plant model is hard to invert. In such cases, a waveform generator or function generator probes the plant, while an oscilloscope monitors the resulting state. The neural network then learns the reverse mapping from the difference between the current state and the required control. When used in conjunction with a PID controller, this could enable the self-tuning of PID parameters [2].<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Sensor conditioning: A neural network can take sensor data and compensate for systematic errors, such as phase distortion or delay in cables, or beam misalignment in a photodetector. This approach allows data to be corrected before it is passed to the next stage of the experiment.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\">Signal denoising: This technique uses the neural network as an autoencoder to pull out the key features of a signal, and then reconstructs it based on these features. Random noise is not a key feature, so the reconstructed signal should be noise-free, essentially using the neural network as a noise filter, as seen in Figure 3.<\/span><\/li>\n<li style=\"font-weight: 400;\" aria-level=\"1\"><span style=\"font-weight: 400;\"><span style=\"font-weight: 400;\">Signal classification: The neural network can compare an input signal, such as a time series, to a known template or series of templates. 
This enables the user to quickly categorize signal classes, identify outliers or errors in a dataset, detect random events, or identify quantum states based on IQ quadrature amplitudes [3].<\/span><\/span><\/li>\n<\/ul>\n<p><a href=\"https:\/\/liquidinstruments.com\/wp-content\/uploads\/2024\/10\/Screenshot-2024-10-09-at-11.33.25-AM-2.png\"><img decoding=\"async\" width=\"908\" height=\"470\" class=\"aligncenter size-full wp-image-20906\" src=\"https:\/\/liquidinstruments.com\/wp-content\/uploads\/2024\/10\/Screenshot-2024-10-09-at-11.33.25-AM-2.png\" alt=\"\" srcset=\"https:\/\/liquidinstruments.com\/wp-content\/uploads\/2024\/10\/Screenshot-2024-10-09-at-11.33.25-AM-2.png 908w, https:\/\/liquidinstruments.com\/wp-content\/uploads\/2024\/10\/Screenshot-2024-10-09-at-11.33.25-AM-2-300x155.png 300w, https:\/\/liquidinstruments.com\/wp-content\/uploads\/2024\/10\/Screenshot-2024-10-09-at-11.33.25-AM-2-768x398.png 768w, https:\/\/liquidinstruments.com\/wp-content\/uploads\/2024\/10\/Screenshot-2024-10-09-at-11.33.25-AM-2-600x311.png 600w\" sizes=\"(max-width: 908px) 100vw, 908px\" \/><\/a><\/p>\n<p style=\"text-align: center;\"><span style=\"font-weight: 400;\">Figure 3:&nbsp; A reconstructed, denoised signal after being fed through a neural network.&nbsp;<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">What are the benefits of an FPGA-based neural network?<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">Neural networks are typically built and run on combinations of CPUs and\/or GPUs. This approach gives incredible computing power, but it is also resource intensive. Large AI models are energy hungry and often excessive for the types of signal processing applications previously mentioned.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">FPGAs, by comparison, are not as intrinsically powerful as high-end computers. However, their flexibility makes them strong candidates for implementing small-scale neural networks. 
Their parallel processing capability benefits the linear algebra and other complex mathematics involved in the forward and backward propagation of information through the network. Larger neural networks often require an increased number of cycles for feed-forward passes on an FPGA because single-cycle processing is constrained by the FPGA&#8217;s spatial capacity and associated memory.<\/span><\/p>\n<p><span style=\"font-weight: 400;\">FPGA-based neural networks are ideal for experimental situations, as their speed in handling real-time data enables rapid control and decision-making, without having to communicate with a host PC. FPGAs are also reconfigurable, so the user can quickly configure the neural network to their own needs. Lastly, given their compact size, neural networks implemented on FPGAs can help reduce resource and energy consumption [4][5].<\/span><\/p>\n<h2><span style=\"font-weight: 400;\">What is the Moku Neural Network?<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">In addition to a reconfigurable suite of fast, flexible, FPGA-based test and measurement instruments, <\/span><a href=\"https:\/\/liquidinstruments.com\/products\/hardware-platforms\/mokupro\/\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">Moku:Pro<\/span><\/a><span style=\"font-weight: 400;\"> now offers the Moku <a href=\"https:\/\/liquidinstruments.com\/integrated-instruments\/neural-network\/\" target=\"_blank\" rel=\"noopener\">Neural Network<\/a>. 
Benefiting from the versatility and fast processing speed of FPGAs, the Neural Network can be used alongside other Moku instruments such as the <\/span><a href=\"https:\/\/liquidinstruments.com\/products\/integrated-instruments\/waveform-generator\/\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">Waveform Generator<\/span><\/a><span style=\"font-weight: 400;\">, <\/span><a href=\"https:\/\/liquidinstruments.com\/products\/integrated-instruments\/pid-controller\/\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">PID Controller<\/span><\/a><span style=\"font-weight: 400;\">, and <\/span><a href=\"https:\/\/liquidinstruments.com\/products\/integrated-instruments\/oscilloscope\/\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">Oscilloscope<\/span><\/a><span style=\"font-weight: 400;\"> in applications such as signal analysis, denoising, sensor conditioning, and closed-loop feedback.&nbsp;<\/span><\/p>\n<p><span style=\"font-weight: 400;\">You can use <\/span><a href=\"https:\/\/liquidinstruments.com\/products\/apis\/python-api\/\"><span style=\"font-weight: 400;\">Python<\/span><\/a><span style=\"font-weight: 400;\"> to develop and train your own neural networks and upload them to Moku:Pro using the Moku Neural Network in <\/span><a href=\"https:\/\/liquidinstruments.com\/multi-instrument-mode\/\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">Multi-Instrument Mode<\/span><\/a><span style=\"font-weight: 400;\">. This allows for analysis of up to four input channels, or one channel of time series data, and up to four outputs for processing experimental data in real time \u2014 all on Moku:Pro. 
The Moku Neural Network features up to five dense layers of up to 100 neurons each, and five different activation functions depending on your application.&nbsp;<\/span><\/p>\n<p><span style=\"font-weight: 400;\">If you\u2019re interested in seeing how the FPGA-based Moku Neural Network can benefit your research, check out this <a href=\"https:\/\/liquidinstruments.com\/blog\/creating-a-neural-network\/\" target=\"_blank\" rel=\"noopener\">step-by-step tutorial<\/a>. This guide walks through all the basics, including Python installation, Moku Neural Network construction and training, and implementation. If you\u2019re already familiar with the fundamental concepts of a neural network, you can find advanced, ready-to-use examples <a href=\"https:\/\/github.com\/liquidinstruments\/moku-examples\/blob\/main\/neural-network\/Autoencoder.ipynb\" target=\"_blank\" rel=\"noopener\">here<\/a>.<\/span><\/p>\n<p>Prefer a video tutorial?&nbsp;<a href=\"https:\/\/liquidinstruments.com\/webinar-registration-mastering-moku-introducing-the-moku-neural-network\/\" target=\"_blank\" rel=\"noopener\">Watch our webinar on demand<\/a>. You\u2019ll learn how to implement an FPGA-based neural network for fast, flexible signal analysis, closed-loop feedback, and more.<\/p>\n<h2><span style=\"font-weight: 400;\">Citations<\/span><\/h2>\n<p><span style=\"font-weight: 400;\">[1] K. Clark, Class Lecture, Topic: \u201cComputing Neural Network Gradients,\u201d CS224n, Stanford University, USA, 2019. <\/span><a href=\"https:\/\/web.stanford.edu\/class\/cs224n\/readings\/gradient-notes.pdf\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">https:\/\/web.stanford.edu\/class\/cs224n\/readings\/gradient-notes.pdf<\/span><\/a><\/p>\n<p><span style=\"font-weight: 400;\">[2] J. Wang, M. Li, W. Jiang, Y. Huang, and R. 
Lin, \u201cA Design of FPGA-Based Neural Network PID Controller for Motion Control System,\u201d <\/span><i><span style=\"font-weight: 400;\">Sensors,<\/span><\/i><span style=\"font-weight: 400;\"> vol. 22, no. 3, p. 889, Jan. 2022. <\/span><a href=\"https:\/\/doi.org\/10.3390\/s22030889\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">https:\/\/doi.org\/10.3390\/s22030889<\/span><\/a><span style=\"font-weight: 400;\">&nbsp;<\/span><\/p>\n<p><span style=\"font-weight: 400;\">[3] N. R. Vora <\/span><i><span style=\"font-weight: 400;\">et al., <\/span><\/i><span style=\"font-weight: 400;\">\u201cML-Powered FPGA-based Real-Time Quantum State Discrimination Enabling Mid-circuit Measurements,\u201d arXiv:2406.18807 [quant-ph], Jun. 2024.&nbsp;<\/span><\/p>\n<p><a href=\"https:\/\/arxiv.org\/abs\/2406.18807\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">https:\/\/arxiv.org\/abs\/2406.18807<\/span><\/a><span style=\"font-weight: 400;\">&nbsp;<\/span><\/p>\n<p><span style=\"font-weight: 400;\">[4] A. El Bouazzaoui, A. Hadjoudja, and O. Mouhib, \u201cReal-Time Adaptive Neural Network on FPGA: Enhancing Adaptability through Dynamic Classifier Selection,\u201d arXiv:2311.09516v2 [cs.AR], Nov. 2023.&nbsp;<\/span><\/p>\n<p><a href=\"https:\/\/arxiv.org\/html\/2311.09516v2\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">https:\/\/arxiv.org\/html\/2311.09516v2<\/span><\/a><\/p>\n<p><span style=\"font-weight: 400;\">[5] C. Wang and Z. Luo, \u201cA Review of the Optimal Design of Neural Networks Based on FPGA,\u201d <\/span><i><span style=\"font-weight: 400;\">Appl. Sci., <\/span><\/i><span style=\"font-weight: 400;\">vol. 12, no. 21, Art. no. 10771, Oct. 2022. 
<\/span><a href=\"https:\/\/doi.org\/10.3390\/app122110771\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400;\">https:\/\/doi.org\/10.3390\/app122110771<\/span><\/a><span style=\"font-weight: 400;\">&nbsp;<\/span><\/p>\n","protected":false,"gt_translate_keys":[{"key":"rendered","format":"html"}]},"excerpt":{"rendered":"<p>Moku Version 3.3 brings a new Neural Network instrument to Moku:Pro that enables users to implement artificial neural networks for fast, flexible signal analysis, denoising, sensor conditioning, closed-loop feedback, and more. If you are unfamiliar with the basics of a neural network or want to know how one can benefit your research and development goals, [&hellip;]<\/p>\n","protected":false,"gt_translate_keys":[{"key":"rendered","format":"html"}]},"author":36,"featured_media":20956,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"content-type":"","footnotes":""},"categories":[3],"tags":[],"class_list":["post-20406","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-blog","site-category-mokupro","site-category-multi-instrument-mode","site-category-neural-network"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.0 (Yoast SEO v27.0) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>What is a neural network?<\/title>\n<meta name=\"description\" content=\"Learn new ways to advance experimental research with a neural network, and the advantages of an FPGA-based approach.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/liquidinstruments.com\/blog\/what-is-a-neural-network\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"What is a neural 
network?\" \/>\n<meta property=\"og:description\" content=\"Learn new ways to advance experimental research with a neural network, and the advantages of an FPGA-based approach.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/liquidinstruments.com\/blog\/what-is-a-neural-network\/\" \/>\n<meta property=\"og:site_name\" content=\"Liquid Instruments\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/LiquidInstruments\/\" \/>\n<meta property=\"article:published_time\" content=\"2024-10-21T20:32:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-12-03T22:12:50+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/liquidinstruments.com\/wp-content\/uploads\/2024\/10\/Screen-Shot-2024-10-21-at-3.28.00-PM.png\" \/>\n\t<meta property=\"og:image:width\" content=\"2322\" \/>\n\t<meta property=\"og:image:height\" content=\"1394\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"mmcardle\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@liquidinstrmnts\" \/>\n<meta name=\"twitter:site\" content=\"@liquidinstrmnts\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"mmcardle\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"10 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/liquidinstruments.com\/blog\/what-is-a-neural-network\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/liquidinstruments.com\/blog\/what-is-a-neural-network\/\"},\"author\":{\"name\":\"mmcardle\",\"@id\":\"https:\/\/liquidinstruments.com\/#\/schema\/person\/a3d838bd1576c0f8f6fb52f263bd338a\"},\"headline\":\"What is a neural network?\",\"datePublished\":\"2024-10-21T20:32:00+00:00\",\"dateModified\":\"2025-12-03T22:12:50+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/liquidinstruments.com\/blog\/what-is-a-neural-network\/\"},\"wordCount\":2183,\"publisher\":{\"@id\":\"https:\/\/liquidinstruments.com\/#organization\"},\"image\":{\"@id\":\"https:\/\/liquidinstruments.com\/blog\/what-is-a-neural-network\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/liquidinstruments.com\/wp-content\/uploads\/2024\/10\/Screen-Shot-2024-10-21-at-3.28.00-PM.png\",\"articleSection\":[\"Blog\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/liquidinstruments.com\/blog\/what-is-a-neural-network\/\",\"url\":\"https:\/\/liquidinstruments.com\/blog\/what-is-a-neural-network\/\",\"name\":\"What is a neural network?\",\"isPartOf\":{\"@id\":\"https:\/\/liquidinstruments.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/liquidinstruments.com\/blog\/what-is-a-neural-network\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/liquidinstruments.com\/blog\/what-is-a-neural-network\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/liquidinstruments.com\/wp-content\/uploads\/2024\/10\/Screen-Shot-2024-10-21-at-3.28.00-PM.png\",\"datePublished\":\"2024-10-21T20:32:00+00:00\",\"dateModified\":\"2025-12-03T22:12:50+00:00\",\"description\":\"Learn new ways to advance experimental research with a neural network, and the advantages of an 