What is a Recurrent Neural Network?


Surprisingly, this wasn't asked before - at least I didn't find anything besides some vaguely related questions.



So, what is a recurrent neural network, and what are its advantages over regular NNs?














– Manuel Rodriguez, Apr 28 at 18:59: In the 1990s, Mark W. Tilden introduced the first BEAM robotics walker. The system is based on the nv-neuron, an oscillating neural network. Tilden called the concept "bicores", but it is essentially a recurrent neural network. Explaining the inner workings in a few sentences is a bit complicated; an easier way to introduce the technology is the autonomous boolean network. This logic-gate network contains a feedback loop, which means the system oscillates. In contrast to a boolean logic gate, a recurrent neural network has more features and can be trained by algorithms.
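The feedback-loop idea in that comment can be sketched in a few lines. This is my own toy example, not Tilden's design: the simplest autonomous boolean network is a single NOT gate whose output feeds its own input, and the feedback alone is enough to make the system oscillate.

```python
# Toy sketch (illustrative only): a one-gate autonomous boolean network.
# The NOT gate's output is wired back to its own input, so the state
# flips on every step -- the feedback loop alone produces oscillation.

def not_gate_loop(steps, state=0):
    """Iterate a boolean NOT gate fed back onto itself; return the state history."""
    history = []
    for _ in range(steps):
        history.append(state)
        state = 1 - state  # NOT: 0 -> 1, 1 -> 0
    return history

print(not_gate_loop(6))  # [0, 1, 0, 1, 0, 1]
```

A recurrent neural network replaces the hard logic gate with a trainable unit, but the oscillation-from-feedback principle is the same.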


















Tagged: recurrent-neural-networks






asked Apr 28 at 16:55









NetHacker







2 Answers







Recurrent neural networks (RNNs) are a class of artificial neural network architecture inspired by the cyclical connectivity of neurons in the brain. They use iterative function loops to store information.



The difference from traditional neural networks, illustrated with pictures from this book:



[figure: a feedforward neural network (multilayer perceptron)]



And, an RNN:



[figure: a recurrent neural network, with cyclic connections in the hidden layer]



Notice the difference -- feedforward neural networks' connections
do not form cycles. If we relax this condition, and allow cyclical
connections as well, we obtain recurrent neural networks (RNNs). You can see that in the hidden layer of the architecture.



While the difference between a multilayer perceptron and an RNN may seem
trivial, the implications for sequence learning are far-reaching. An MLP can only
map from input to output vectors, whereas an RNN can in principle map from
the entire history of previous inputs to each output. Indeed, the result
equivalent to the universal approximation theorem for MLPs is that an RNN with a
sufficient number of hidden units can approximate any measurable sequence-to-sequence
mapping to arbitrary accuracy.



Important takeaway:



The recurrent connections allow a 'memory' of previous inputs to persist in the
network's internal state, and thereby influence the network output.
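As a minimal sketch of that "memory" (my own illustrative code, not from the answer), a plain RNN carries a hidden state that is updated from each new input and its own previous value, so every earlier input can influence later outputs:

```python
import numpy as np

# Minimal "plain" RNN forward pass: h_t = tanh(W_xh x_t + W_hh h_{t-1} + b).
# The recurrent term W_hh @ h is what lets previous inputs persist in the state.

def rnn_forward(xs, W_xh, W_hh, b_h):
    """Run a sequence of input vectors through one recurrent layer."""
    h = np.zeros(W_hh.shape[0])  # initial hidden state
    states = []
    for x in xs:  # one step per sequence element
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        states.append(h)
    return np.stack(states)  # shape (seq_len, hidden_size)

rng = np.random.default_rng(0)
hidden, n_in = 4, 3
W_xh = rng.normal(size=(hidden, n_in)) * 0.1  # arbitrary untrained weights
W_hh = rng.normal(size=(hidden, hidden)) * 0.1
b_h = np.zeros(hidden)

seq = rng.normal(size=(5, n_in))  # a length-5 input sequence
states = rnn_forward(seq, W_xh, W_hh, b_h)
print(states.shape)  # (5, 4): one hidden state per time step
```

The weights here are random placeholders; in practice they would be learned, but the update rule is the whole recurrence.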



Talking in terms of advantages is not quite appropriate, as both are state-of-the-art and each is particularly good at certain tasks. A broad category of tasks that RNNs excel at is:



Sequence Labelling



The goal of sequence labelling is to assign sequences of labels, drawn from a fixed alphabet, to sequences of input data.



Ex: labelling a sequence of acoustic features with spoken words (speech recognition), or a sequence of video frames with hand gestures (gesture recognition).



Some of the sub-tasks in sequence labelling are:



Sequence Classification



Label sequences are constrained to be of length one. This is referred to as sequence classification, since each input sequence is assigned to a single class. Examples of sequence classification tasks include the identification of a single spoken word and the recognition of an individual handwritten letter.
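A sequence classification model is often "many-to-one": the RNN reads the whole sequence and only the final hidden state is fed to a classifier. A hedged sketch with arbitrary random weights (an untrained placeholder, not a working recogniser):

```python
import numpy as np

# Many-to-one sequence classification sketch: run the recurrence over the
# whole input sequence, then softmax a linear readout of the final state.

def classify_sequence(xs, W_xh, W_hh, W_hy):
    h = np.zeros(W_hh.shape[0])
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h)  # plain RNN recurrence
    logits = W_hy @ h  # one score per class
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(1)
W_xh = rng.normal(size=(8, 2)) * 0.2  # placeholder weights, not trained
W_hh = rng.normal(size=(8, 8)) * 0.2
W_hy = rng.normal(size=(3, 8)) * 0.2  # 3 output classes

probs = classify_sequence(rng.normal(size=(10, 2)), W_xh, W_hh, W_hy)
print(probs)  # three class probabilities summing to 1
```

Note that the same model accepts sequences of any length, which is exactly what an MLP with a fixed input size cannot do.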



Segment Classification



Segment classification refers to those tasks where the target sequences consist
of multiple labels, but the locations of the labels -- that is, the positions of the input segments to which the labels apply -- are known in advance.






– naive, answered 2 days ago (edited 2 days ago)

– NetHacker, 2 days ago: very nice answer, thanks! I am starting to regret not taking that Systems and Control theory class. Seems like useful stuff, feedback loops and all that, to know in the context of NNs.

– naive, 2 days ago: Welcome! They certainly are useful.


















A recurrent neural network (RNN) is an artificial neural network that contains backward or self-connections, as opposed to only forward connections, as in a feed-forward neural network (FFNN). The adjective "recurrent" thus refers to these backward or self-connections, which create loops in these networks.



An RNN can be trained using back-propagation through time (BPTT), such that these backward or self-connections "memorise" previously seen inputs. Hence, these connections are mainly used to track temporal relations between elements of a sequence of inputs, which makes RNNs well suited to sequence prediction and similar tasks.
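The dependence that BPTT back-propagates through can be checked numerically (my own sketch, with arbitrary random weights): because the hidden state is a function of every earlier input, the final state has a nonzero gradient with respect to inputs seen many steps earlier.

```python
import numpy as np

# Finite-difference check: perturb the very first input of a sequence and
# observe that the FINAL hidden state changes. This long-range dependence
# is exactly what back-propagation through time differentiates.

def final_state(xs, W_xh, W_hh):
    h = np.zeros(W_hh.shape[0])
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h)
    return h

rng = np.random.default_rng(2)
W_xh = rng.normal(size=(4, 2)) * 0.5  # arbitrary untrained weights
W_hh = rng.normal(size=(4, 4)) * 0.5
xs = rng.normal(size=(6, 2))  # a length-6 input sequence

eps = 1e-6
base = final_state(xs, W_xh, W_hh)
xs_pert = xs.copy()
xs_pert[0, 0] += eps  # nudge the very first input only
grad = (final_state(xs_pert, W_xh, W_hh) - base) / eps

print(float(np.abs(grad).max()))  # nonzero: the first input still matters 5 steps later
```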



There are several RNN models: for example, RNNs with LSTM or GRU units. An LSTM (or GRU) is an RNN whose units perform a more complex transformation than a unit in a "plain" RNN, which performs a linear transformation of the input followed by the application of a non-linear function (e.g. ReLU) to this linear transformation. In theory, plain RNNs are as powerful as RNNs with LSTM units. In practice, they suffer from the vanishing and exploding gradient problems; hence, LSTMs (or similar sophisticated recurrent units) are used.
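The vanishing-gradient problem can be illustrated with a toy computation of my own (not from the answer): the gradient flowing back T steps through a tanh recurrence is a product of T per-step Jacobians diag(1 - h^2) @ W_hh, whose norm shrinks geometrically when the recurrent weights are small.

```python
import numpy as np

# Toy demonstration of vanishing gradients in a plain tanh RNN: multiply the
# per-step Jacobians together and watch the product's norm collapse.

rng = np.random.default_rng(3)
n = 16
W_hh = rng.normal(size=(n, n)) * (0.3 / np.sqrt(n))  # deliberately small weights

h = np.zeros(n)
grad = np.eye(n)  # d h_T / d h_T: start of the backward product
norms = []
for _ in range(30):  # walk the gradient back 30 time steps
    h = np.tanh(W_hh @ h + rng.normal(size=n))  # forward step with noise input
    J = np.diag(1 - h**2) @ W_hh  # Jacobian of one tanh recurrence step
    grad = grad @ J
    norms.append(np.linalg.norm(grad))

print(norms[0], norms[-1])  # the norm decays by many orders of magnitude
```

LSTM and GRU cells mitigate this by routing the state through additive, gated paths instead of a pure matrix-times-tanh chain.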






– nbro, answered Apr 28 at 17:39
    Your Answer








    StackExchange.ready(function()
    var channelOptions =
    tags: "".split(" "),
    id: "658"
    ;
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function()
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled)
    StackExchange.using("snippets", function()
    createEditor();
    );

    else
    createEditor();

    );

    function createEditor()
    StackExchange.prepareEditor(
    heartbeatType: 'answer',
    autoActivateHeartbeat: false,
    convertImagesToLinks: false,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: null,
    bindNavPrevention: true,
    postfix: "",
    imageUploader:
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    ,
    noCode: true, onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    );



    );













    draft saved

    draft discarded


















    StackExchange.ready(
    function ()
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fai.stackexchange.com%2fquestions%2f12042%2fwhat-is-a-recurrent-neural-network%23new-answer', 'question_page');

    );

    Post as a guest















    Required, but never shown

























    2 Answers
    2






    active

    oldest

    votes








    2 Answers
    2






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    4












    $begingroup$

    Recurrent neural networks (RNNs) are a class of artificial neural network
    architecture inspired by the cyclical connectivity of neurons in the brain. It uses iterative function loops to store information.



    Difference with traditional Neural networks using pictures from this book:



    enter image description here



    And, an RNN:



    enter image description here



    Notice the difference -- feedforward neural networks' connections
    do not form cycles. If we relax this condition, and allow cyclical
    connections as well, we obtain recurrent neural networks (RNNs). You can see that in the hidden layer of the architecture.



    While the difference between a multilayer perceptron and an RNN may seem
    trivial, the implications for sequence learning are far-reaching. An MLP can only
    map from input to output vectors, whereas an RNN can in principle map from
    the entire history of previous inputs to each output. Indeed, the equivalent
    result to the universal approximation theory for MLPs is that an RNN with a
    sufficient number of hidden units can approximate any measurable sequence-to-sequence
    mapping to arbitrary accuracy.



    Important takeaway:



    The recurrent connections allow a 'memory' of previous inputs to persist in the
    network's internal state, and thereby influence the network output.



    Talking in terms of advantages is not appropriate as they both are state-of-the-art and are particularly good at certain tasks. A broad category of tasks that RNN excel at is:



    Sequence Labelling



    The goal of sequence labelling is to assign sequences of labels, drawn from a fixed alphabet, to sequences of input data.



    Ex: Transcribe a sequence of acoustic features with spoken words (speech recognition), or a sequence of video frames with hand gestures (gesture recognition).



    Some of the sub-tasks in sequence labelling are:



    Sequence Classification



    Label sequences are constrained to be of length one. This is referred to as sequence classification, since each input sequence is assigned to a single class. Examples of sequence classification task include the identification of a single spoken work and the recognition of an individual
    handwritten letter.



    Segment Classification



    Segment classification refers to those tasks where the target sequences consist
    of multiple labels, but the locations of the labels -- that is, the positions of the input segments to which the labels apply -- are known in advance.






    share|improve this answer











    $endgroup$












    • $begingroup$
      very nice answer thanks! I am starting to regret not taking that Systems and Control theory class. Seems like useful stuff, feedback loops and all that, to know in the context of NNs.
      $endgroup$
      – NetHacker
      2 days ago






    • 1




      $begingroup$
      Welcome! They certainly are useful.
      $endgroup$
      – naive
      2 days ago















    4












    $begingroup$

    Recurrent neural networks (RNNs) are a class of artificial neural network
    architecture inspired by the cyclical connectivity of neurons in the brain. It uses iterative function loops to store information.



    Difference with traditional Neural networks using pictures from this book:



    enter image description here



    And, an RNN:



    enter image description here



    Notice the difference -- feedforward neural networks' connections
    do not form cycles. If we relax this condition, and allow cyclical
    connections as well, we obtain recurrent neural networks (RNNs). You can see that in the hidden layer of the architecture.



    While the difference between a multilayer perceptron and an RNN may seem
    trivial, the implications for sequence learning are far-reaching. An MLP can only
    map from input to output vectors, whereas an RNN can in principle map from
    the entire history of previous inputs to each output. Indeed, the equivalent
    result to the universal approximation theory for MLPs is that an RNN with a
    sufficient number of hidden units can approximate any measurable sequence-to-sequence
    mapping to arbitrary accuracy.



    Important takeaway:



    The recurrent connections allow a 'memory' of previous inputs to persist in the
    network's internal state, and thereby influence the network output.



    Talking in terms of advantages is not appropriate as they both are state-of-the-art and are particularly good at certain tasks. A broad category of tasks that RNN excel at is:



    Sequence Labelling



    The goal of sequence labelling is to assign sequences of labels, drawn from a fixed alphabet, to sequences of input data.



    Ex: Transcribe a sequence of acoustic features with spoken words (speech recognition), or a sequence of video frames with hand gestures (gesture recognition).



    Some of the sub-tasks in sequence labelling are:



    Sequence Classification



    Label sequences are constrained to be of length one. This is referred to as sequence classification, since each input sequence is assigned to a single class. Examples of sequence classification task include the identification of a single spoken work and the recognition of an individual
    handwritten letter.



    Segment Classification



    Segment classification refers to those tasks where the target sequences consist
    of multiple labels, but the locations of the labels -- that is, the positions of the input segments to which the labels apply -- are known in advance.






    share|improve this answer











    $endgroup$












    • $begingroup$
      very nice answer thanks! I am starting to regret not taking that Systems and Control theory class. Seems like useful stuff, feedback loops and all that, to know in the context of NNs.
      $endgroup$
      – NetHacker
      2 days ago






    • 1




      $begingroup$
      Welcome! They certainly are useful.
      $endgroup$
      – naive
      2 days ago













    4












    4








    4





    $begingroup$

    Recurrent neural networks (RNNs) are a class of artificial neural network
    architecture inspired by the cyclical connectivity of neurons in the brain. It uses iterative function loops to store information.



    Difference with traditional Neural networks using pictures from this book:



    enter image description here



    And, an RNN:



    enter image description here



    Notice the difference -- feedforward neural networks' connections
    do not form cycles. If we relax this condition, and allow cyclical
    connections as well, we obtain recurrent neural networks (RNNs). You can see that in the hidden layer of the architecture.



    While the difference between a multilayer perceptron and an RNN may seem
    trivial, the implications for sequence learning are far-reaching. An MLP can only
    map from input to output vectors, whereas an RNN can in principle map from
    the entire history of previous inputs to each output. Indeed, the equivalent
    result to the universal approximation theory for MLPs is that an RNN with a
    sufficient number of hidden units can approximate any measurable sequence-to-sequence
    mapping to arbitrary accuracy.



    Important takeaway:



    The recurrent connections allow a 'memory' of previous inputs to persist in the
    network's internal state, and thereby influence the network output.



    Talking in terms of advantages is not appropriate as they both are state-of-the-art and are particularly good at certain tasks. A broad category of tasks that RNN excel at is:



    Sequence Labelling



    The goal of sequence labelling is to assign sequences of labels, drawn from a fixed alphabet, to sequences of input data.



    Ex: Transcribe a sequence of acoustic features with spoken words (speech recognition), or a sequence of video frames with hand gestures (gesture recognition).



    Some of the sub-tasks in sequence labelling are:



    Sequence Classification



    Label sequences are constrained to be of length one. This is referred to as sequence classification, since each input sequence is assigned to a single class. Examples of sequence classification task include the identification of a single spoken work and the recognition of an individual
    handwritten letter.



    Segment Classification



    Segment classification refers to those tasks where the target sequences consist
    of multiple labels, but the locations of the labels -- that is, the positions of the input segments to which the labels apply -- are known in advance.






    share|improve this answer











    $endgroup$



    Recurrent neural networks (RNNs) are a class of artificial neural network
    architecture inspired by the cyclical connectivity of neurons in the brain. It uses iterative function loops to store information.



    Difference with traditional Neural networks using pictures from this book:



    enter image description here



    And, an RNN:



    enter image description here



    Notice the difference -- feedforward neural networks' connections
    do not form cycles. If we relax this condition, and allow cyclical
    connections as well, we obtain recurrent neural networks (RNNs). You can see that in the hidden layer of the architecture.



    While the difference between a multilayer perceptron and an RNN may seem
    trivial, the implications for sequence learning are far-reaching. An MLP can only
    map from input to output vectors, whereas an RNN can in principle map from
    the entire history of previous inputs to each output. Indeed, the equivalent
    result to the universal approximation theory for MLPs is that an RNN with a
    sufficient number of hidden units can approximate any measurable sequence-to-sequence
    mapping to arbitrary accuracy.



    Important takeaway:



    The recurrent connections allow a 'memory' of previous inputs to persist in the
    network's internal state, and thereby influence the network output.



    Talking in terms of advantages is not appropriate as they both are state-of-the-art and are particularly good at certain tasks. A broad category of tasks that RNN excel at is:



    Sequence Labelling



    The goal of sequence labelling is to assign sequences of labels, drawn from a fixed alphabet, to sequences of input data.



    Ex: Transcribe a sequence of acoustic features with spoken words (speech recognition), or a sequence of video frames with hand gestures (gesture recognition).



    Some of the sub-tasks in sequence labelling are:



    Sequence Classification



    Label sequences are constrained to be of length one. This is referred to as sequence classification, since each input sequence is assigned to a single class. Examples of sequence classification task include the identification of a single spoken work and the recognition of an individual
    handwritten letter.



    Segment Classification



    Segment classification refers to those tasks where the target sequences consist
    of multiple labels, but the locations of the labels -- that is, the positions of the input segments to which the labels apply -- are known in advance.







    share|improve this answer














    share|improve this answer



    share|improve this answer








    edited 2 days ago

























    answered 2 days ago









    naivenaive

    2066




    2066











    • $begingroup$
      very nice answer thanks! I am starting to regret not taking that Systems and Control theory class. Seems like useful stuff, feedback loops and all that, to know in the context of NNs.
      $endgroup$
      – NetHacker
      2 days ago






    • 1




      $begingroup$
      Welcome! They certainly are useful.
      $endgroup$
      – naive
      2 days ago
















    • $begingroup$
      very nice answer thanks! I am starting to regret not taking that Systems and Control theory class. Seems like useful stuff, feedback loops and all that, to know in the context of NNs.
      $endgroup$
      – NetHacker
      2 days ago






    • 1




      $begingroup$
      Welcome! They certainly are useful.
      $endgroup$
      – naive
      2 days ago















    $begingroup$
    very nice answer thanks! I am starting to regret not taking that Systems and Control theory class. Seems like useful stuff, feedback loops and all that, to know in the context of NNs.
    $endgroup$
    – NetHacker
    2 days ago




    $begingroup$
    very nice answer thanks! I am starting to regret not taking that Systems and Control theory class. Seems like useful stuff, feedback loops and all that, to know in the context of NNs.
    $endgroup$
    – NetHacker
    2 days ago




    1




    1




    $begingroup$
    Welcome! They certainly are useful.
    $endgroup$
    – naive
    2 days ago




    $begingroup$
    Welcome! They certainly are useful.
    $endgroup$
    – naive
    2 days ago













    9












    $begingroup$

    A recurrent neural network (RNN) is an artificial neural network that contains backward or self-connections, as opposed to just having forward connections, like in a feed-forward neural network (FFNN). The adjective "recurrent" thus refers to this backward or self-connections, which create loops in these networks.



    An RNN can be trained using back-propagation through time (BBTT), such that these backward or self-connections "memorise" previously seen inputs. Hence, these connections are mainly used to track temporal relations between elements of a sequence of inputs, which makes RNNs well suited to sequence prediction and similar tasks.



    There are several RNN models: for example, RNNs with LSTM or GRU units. LSTM (or GRU) is an RNN whose single units perform a more complex transformation than a unit in a "plain RNN", which performs a linear transformation of the input followed by the application of a non-linear function (e.g. ReLU) to this linear transformation. In theory, "plain RNN" are as powerful as RNNs with LSTM units. In practice, they suffer from the "vanishing and exploding gradients" problem. Hence, in practice, LSTMs (or similar sophisticated recurrent units) are used.






    share|improve this answer









    $endgroup$

















      9












      $begingroup$

      A recurrent neural network (RNN) is an artificial neural network that contains backward or self-connections, as opposed to just having forward connections, like in a feed-forward neural network (FFNN). The adjective "recurrent" thus refers to this backward or self-connections, which create loops in these networks.



      An RNN can be trained using back-propagation through time (BBTT), such that these backward or self-connections "memorise" previously seen inputs. Hence, these connections are mainly used to track temporal relations between elements of a sequence of inputs, which makes RNNs well suited to sequence prediction and similar tasks.



      There are several RNN models: for example, RNNs with LSTM or GRU units. LSTM (or GRU) is an RNN whose single units perform a more complex transformation than a unit in a "plain RNN", which performs a linear transformation of the input followed by the application of a non-linear function (e.g. ReLU) to this linear transformation. In theory, "plain RNN" are as powerful as RNNs with LSTM units. In practice, they suffer from the "vanishing and exploding gradients" problem. Hence, in practice, LSTMs (or similar sophisticated recurrent units) are used.






      share|improve this answer









      $endgroup$















        9












        9








        9





        $begingroup$

        answered Apr 28 at 17:39









        nbro

        2,768


























