Is it a good idea to use CNN to classify 1D signal?



I am working on sleep stage classification. I have read several research articles on this topic, and many of them used an SVM or an ensemble method. Is it a good idea to use a convolutional neural network to classify a one-dimensional EEG signal?

I am new to this kind of work, so pardon me if anything I ask is wrong.










  • A 1D signal can be transformed into a 2D signal by breaking the signal up into frames and taking the FFT of each frame; for audio this is quite common. – MSalters, yesterday
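To illustrate the framing-plus-FFT idea from the comment, here is a minimal NumPy sketch; the frame length, hop size, sampling rate, and test tone are illustrative assumptions, not values from the comment:

```python
import numpy as np

def spectrogram(x, frame_len=256, hop=128):
    # Split the 1D signal into overlapping frames, window each frame,
    # and take the magnitude FFT -> a 2D time-frequency "image".
    frames = [x[i:i + frame_len] * np.hanning(frame_len)
              for i in range(0, len(x) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))  # shape: (n_frames, frame_len//2 + 1)

fs = 1000                         # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)    # a 50 Hz test tone
spec = spectrogram(x)
print(spec.shape)                 # 2D array, usable as input to a 2D CNN
```

The 50 Hz tone shows up as a bright horizontal band around frequency bin 50·frame_len/fs ≈ 13.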

















neural-networks svm conv-neural-network signal-processing

asked 2 days ago by Fazla Rabbi Mashrur (a new contributor to this site)
4 Answers
I guess that by a 1D signal you mean time-series data, where you assume temporal dependence between the values. In such cases convolutional neural networks (CNNs) are one of the possible approaches. The most popular neural-network approach to such data is to use recurrent neural networks (RNNs), but you can alternatively use CNNs, or a hybrid approach (quasi-recurrent neural networks, QRNNs) as discussed by Bradbury et al. (2016) and illustrated in their figure below. There are also other approaches, such as using attention alone, as in the Transformer network described by Vaswani et al. (2017), where the information about time is passed via sinusoidal (Fourier-like) positional features.



[figure from Bradbury et al. (2016): CNN, LSTM, and QRNN architectures compared]



With an RNN, you use a cell that takes the previous hidden state and the current input value and returns an output and a new hidden state, so information flows via the hidden states. With a CNN, you use a sliding window of some width that looks for certain (learned) patterns in the data, and you stack such windows on top of each other, so that higher-level windows look for patterns within the lower-level patterns. Such sliding windows can be helpful for finding things like repeating patterns in the data (e.g. seasonal patterns). QRNN layers mix both approaches. In fact, one of the advantages of CNN and QRNN architectures is that they are faster than RNNs.
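The sliding-window intuition can be shown in a few lines of NumPy; the signal and kernel below are made up for illustration (a real CNN would learn the kernel):

```python
import numpy as np

def conv1d(x, kernel):
    # One 1D convolutional "sliding window": the kernel acts as a
    # pattern template, and the output is largest where the pattern occurs.
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

x = np.array([0, 0, 1, 2, 1, 0, 0, 1, 2, 1, 0], dtype=float)  # repeating "bump"
kernel = np.array([1.0, 2.0, 1.0])                            # matches the bump shape
y = conv1d(x, kernel)
print(y)  # peaks exactly where the bump pattern sits in x
```

Stacking such layers (with nonlinearities and pooling in between) lets higher layers detect patterns of patterns, which is what gives CNNs their hierarchical view of the sequence.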






answered 2 days ago, edited yesterday – Tim

You can certainly use a CNN to classify a 1D signal. Since you are interested in sleep stage classification, see this paper. It describes a deep neural network called DeepSleepNet, which uses a combination of 1D convolutional and LSTM layers to classify EEG signals into sleep stages.



Here is the architecture:

[figure: DeepSleepNet architecture]



    There are two parts to the network:




• Representational learning layers: this consists of two convolutional networks in parallel. The main difference between the two networks is the kernel size and the max-pooling window size. The left one uses kernel size $F_s/2$ (where $F_s$ is the sampling rate of the signal), whereas the one on the right uses kernel size $F_s \times 4$. The intuition is that one network learns "fine" (high-frequency) features while the other learns "coarse" (low-frequency) features.

• Sequential learning layers: the embeddings (learned features) from the convolutional layers are concatenated and fed into LSTM layers to learn temporal dependencies between the embeddings.

At the end there is a 5-way softmax layer to classify the time series into one of five classes corresponding to sleep stages.
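A rough NumPy sketch of the two-branch idea follows. Random weights stand in for learned filters, and the sampling rate, pooling sizes, and 30-second epoch length are illustrative assumptions (the paper's actual layer stack is deeper):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100  # assumed EEG sampling rate (Hz)

def conv1d(x, kernel):
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def branch(x, kernel_size, pool):
    kernel = rng.standard_normal(kernel_size) / kernel_size  # stand-in for learned weights
    y = np.maximum(conv1d(x, kernel), 0)                     # ReLU
    n = len(y) // pool
    return y[:n * pool].reshape(n, pool).max(axis=1)         # max-pooling

x = rng.standard_normal(30 * fs)           # one 30-second EEG epoch
fine = branch(x, fs // 2, pool=8)          # small kernel: high-frequency detail
coarse = branch(x, fs * 4, pool=4)         # large kernel: low-frequency trends
features = np.concatenate([fine, coarse])  # concatenated embedding for the LSTMs
print(features.shape)
```

The point of the sketch is just the shape of the computation: two parallel conv/pool paths with very different kernel sizes, concatenated into one feature vector per epoch.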






answered 2 days ago, edited 2 days ago – kedarps

FWIW, I recommend checking out the Temporal Convolutional Network (TCN) from this paper (I am not the author). It is a neat way of using a CNN for time-series data: it is sensitive to time order and can model arbitrarily long sequences (but it doesn't have a memory).



[figure: TCN architecture with dilated causal convolutions]
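The building block of a TCN is the dilated causal convolution, which can be sketched in NumPy as follows (a toy kernel is used here so the effect is easy to see; real TCNs stack many such layers with learned weights):

```python
import numpy as np

def causal_dilated_conv(x, kernel, dilation):
    # Output at time t depends only on x[t], x[t-d], x[t-2d], ... ,
    # never on the future. Stacking layers with dilation 1, 2, 4, ...
    # grows the receptive field exponentially without recurrence.
    k = len(kernel)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])  # left-pad so the output stays causal
    return np.array([sum(kernel[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])

x = np.arange(8, dtype=float)
y = causal_dilated_conv(x, kernel=np.array([1.0, 1.0]), dilation=2)
print(y)  # y[t] = x[t] + x[t-2]
```

Because every layer is a convolution, all timesteps are processed in parallel at train time, which is where the speed advantage over RNNs comes from.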






answered 2 days ago – kampta

I want to emphasize the use of a stacked hybrid approach (CNN + RNN) for processing long sequences:



• As you may know, 1D CNNs are not sensitive to the order of timesteps (beyond a local scale). Of course, by stacking many convolution and pooling layers on top of each other, the final layers can observe longer sub-sequences of the original input, but that may not be an effective way to model long-term dependencies. CNNs are, however, very fast compared to RNNs.

• RNNs, on the other hand, are sensitive to the order of timesteps and can therefore model temporal dependencies well. However, they are known to be weak at modeling very long-term dependencies, where a timestep may depend on timesteps very far back in the input. Further, they are very slow when the number of timesteps is large.

So an effective approach might be to combine CNNs and RNNs: first use convolution and pooling layers to reduce the dimensionality of the input, which gives a rather compressed representation of the original input with higher-level features; then feed this shorter 1D sequence to the RNN layers for further processing. This takes advantage of the speed of CNNs and the representational capabilities of RNNs at the same time. Like any other method, though, you should experiment on your specific use case and dataset to find out whether it is effective.



        Here is a rough illustration of this method:



    --------------------------
    -                        -
    -    long 1D sequence    -
    -                        -
    --------------------------
                |
                v
    ==========================
    =                        =
    =  Conv + Pooling layers =
    =                        =
    ==========================
                |
                v
    ---------------------------
    -                         -
    - shorter representations -
    -    (higher-level CNN    -
    -        features)        -
    -                         -
    ---------------------------
                |
                v
    ===========================
    =                         =
    =  (stack of) RNN layers  =
    =                         =
    ===========================
                |
                v
    ===============================
    =                             =
    = classifier, regressor, etc. =
    =                             =
    ===============================
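The pipeline above can be sketched in plain NumPy. All sizes (sequence length, kernel width, pooling factor, hidden size) are arbitrary illustrative choices, and random weights stand in for learned ones:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_pool(x, kernel, pool):
    # Conv + max-pooling: compresses the long sequence roughly pool-fold.
    k = len(kernel)
    y = np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])
    n = len(y) // pool
    return y[:n * pool].reshape(n, pool).max(axis=1)

def rnn_last_state(seq, hidden=4):
    # A plain RNN over the *shortened* sequence; returns the final state,
    # which would feed the classifier/regressor head.
    Wx = rng.standard_normal(hidden)
    Wh = rng.standard_normal((hidden, hidden)) * 0.1
    h = np.zeros(hidden)
    for v in seq:                                # far fewer steps than the raw signal
        h = np.tanh(v * Wx + Wh @ h)
    return h

x = rng.standard_normal(10_000)                  # long raw 1D sequence
short = conv_pool(x, rng.standard_normal(9), 8)  # ~1/8 as many timesteps
h = rnn_last_state(short)
print(len(x), len(short), h.shape)
```

The key design choice is that the expensive sequential loop only runs over the pooled sequence, so the RNN cost drops by roughly the pooling factor while the CNN front end still sees every raw sample.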





        share|cite|improve this answer









        $endgroup$













          Your Answer








          StackExchange.ready(function()
          var channelOptions =
          tags: "".split(" "),
          id: "65"
          ;
          initTagRenderer("".split(" "), "".split(" "), channelOptions);

          StackExchange.using("externalEditor", function()
          // Have to fire editor after snippets, if snippets enabled
          if (StackExchange.settings.snippets.snippetsEnabled)
          StackExchange.using("snippets", function()
          createEditor();
          );

          else
          createEditor();

          );

          function createEditor()
          StackExchange.prepareEditor(
          heartbeatType: 'answer',
          autoActivateHeartbeat: false,
          convertImagesToLinks: false,
          noModals: true,
          showLowRepImageUploadWarning: true,
          reputationToPostImages: null,
          bindNavPrevention: true,
          postfix: "",
          imageUploader:
          brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
          contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
          allowUrls: true
          ,
          onDemand: true,
          discardSelector: ".discard-answer"
          ,immediatelyShowMarkdownHelp:true
          );



          );






          Fazla Rabbi Mashrur is a new contributor. Be nice, and check out our Code of Conduct.









          draft saved

          draft discarded


















          StackExchange.ready(
          function ()
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstats.stackexchange.com%2fquestions%2f403502%2fis-it-a-good-idea-to-use-cnn-to-classify-1d-signal%23new-answer', 'question_page');

          );

          Post as a guest















          Required, but never shown

























          4 Answers
          4






          active

          oldest

          votes








          4 Answers
          4






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          21












          $begingroup$

          I guess that by 1D signal you mean time-series data, where you assume temporal dependence between the values. In such cases convolutional neural networks (CNN) are one of the possible approaches. The most popular neural network approach to such data is to use recurrent neural networks (RNN), but you can alternatively use CNNs, or hybrid approach (quasi-recurrent neural networks, QRNN) as discussed by Bradbury et al (2016), and also illustrated on their figure below. There also other approaches, like using attention alone, as in Transformer network described by Vaswani et al (‎2017), where the information about time is passed via Fourier series features.



          enter image description here



          With RNN, you would use a cell that takes as input previous hidden state and current input value, to return output and another hidden state, so the information flows via the hidden states. With CNN, you would use sliding window of some width, that would look of certain (learned) patterns in the data, and stack such windows on top of each other, so that higher-level windows would look for patterns within the lower-level patterns. Using such sliding windows may be helpful for finding things such as repeating patterns within the data (e.g. seasonal patterns). QRNN layers mix both approaches. In fact, one of the advantages of CNN and QRNN architectures is that they are faster then RNN.






          share|cite|improve this answer











          $endgroup$

















            21












            $begingroup$

            I guess that by 1D signal you mean time-series data, where you assume temporal dependence between the values. In such cases convolutional neural networks (CNN) are one of the possible approaches. The most popular neural network approach to such data is to use recurrent neural networks (RNN), but you can alternatively use CNNs, or hybrid approach (quasi-recurrent neural networks, QRNN) as discussed by Bradbury et al (2016), and also illustrated on their figure below. There also other approaches, like using attention alone, as in Transformer network described by Vaswani et al (‎2017), where the information about time is passed via Fourier series features.



            enter image description here



            With RNN, you would use a cell that takes as input previous hidden state and current input value, to return output and another hidden state, so the information flows via the hidden states. With CNN, you would use sliding window of some width, that would look of certain (learned) patterns in the data, and stack such windows on top of each other, so that higher-level windows would look for patterns within the lower-level patterns. Using such sliding windows may be helpful for finding things such as repeating patterns within the data (e.g. seasonal patterns). QRNN layers mix both approaches. In fact, one of the advantages of CNN and QRNN architectures is that they are faster then RNN.






            share|cite|improve this answer











            $endgroup$















              21












              21








              21





              $begingroup$

              I guess that by 1D signal you mean time-series data, where you assume temporal dependence between the values. In such cases convolutional neural networks (CNN) are one of the possible approaches. The most popular neural network approach to such data is to use recurrent neural networks (RNN), but you can alternatively use CNNs, or hybrid approach (quasi-recurrent neural networks, QRNN) as discussed by Bradbury et al (2016), and also illustrated on their figure below. There also other approaches, like using attention alone, as in Transformer network described by Vaswani et al (‎2017), where the information about time is passed via Fourier series features.



              enter image description here



              With RNN, you would use a cell that takes as input previous hidden state and current input value, to return output and another hidden state, so the information flows via the hidden states. With CNN, you would use sliding window of some width, that would look of certain (learned) patterns in the data, and stack such windows on top of each other, so that higher-level windows would look for patterns within the lower-level patterns. Using such sliding windows may be helpful for finding things such as repeating patterns within the data (e.g. seasonal patterns). QRNN layers mix both approaches. In fact, one of the advantages of CNN and QRNN architectures is that they are faster then RNN.






              share|cite|improve this answer











              $endgroup$



              I guess that by 1D signal you mean time-series data, where you assume temporal dependence between the values. In such cases convolutional neural networks (CNN) are one of the possible approaches. The most popular neural network approach to such data is to use recurrent neural networks (RNN), but you can alternatively use CNNs, or hybrid approach (quasi-recurrent neural networks, QRNN) as discussed by Bradbury et al (2016), and also illustrated on their figure below. There also other approaches, like using attention alone, as in Transformer network described by Vaswani et al (‎2017), where the information about time is passed via Fourier series features.



              enter image description here



              With RNN, you would use a cell that takes as input previous hidden state and current input value, to return output and another hidden state, so the information flows via the hidden states. With CNN, you would use sliding window of some width, that would look of certain (learned) patterns in the data, and stack such windows on top of each other, so that higher-level windows would look for patterns within the lower-level patterns. Using such sliding windows may be helpful for finding things such as repeating patterns within the data (e.g. seasonal patterns). QRNN layers mix both approaches. In fact, one of the advantages of CNN and QRNN architectures is that they are faster then RNN.







              share|cite|improve this answer














              share|cite|improve this answer



              share|cite|improve this answer








              edited yesterday

























              answered 2 days ago









              TimTim

              60.5k9133230




              60.5k9133230























                  9












                  $begingroup$

                  You can certainly use a CNN to classify a 1D signal. Since you are interested in sleep stage classification see this paper. Its a deep neural network called the DeepSleepNet, and uses a combination of 1D convolutional and LSTM layers to classify EEG signals into sleep stages.



                  Here is the architecture:



                  DeepSleepNet



                  There are two parts to the network:




                  • Representational learning layers:
                    This consists of two convolutional networks in parallel. The main difference between the two networks is the kernel size and max-pooling window size. The left one uses kernel size = $F_s/2$ (where $F_s$ is the sampling rate of the signal) whereas the one the right uses kernel size = $F_s times 4$. The intuition behind this is that one network tries to learn "fine" (or high frequency) features, and the other tries to learn "coarse" (or low frequency) features.


                  • Sequential learning layers: The embeddings (or learnt features) from the convolutional layers are concatenated and fed into the LSTM layers to learn temporal dependencies between the embeddings.

                  At the end there is a 5-way softmax layer to classify the time series into one-of-five classes corresponding to sleep stages.






                  share|cite|improve this answer











                  $endgroup$

















                    9












                    $begingroup$

                    You can certainly use a CNN to classify a 1D signal. Since you are interested in sleep stage classification see this paper. Its a deep neural network called the DeepSleepNet, and uses a combination of 1D convolutional and LSTM layers to classify EEG signals into sleep stages.



                    Here is the architecture:



                    DeepSleepNet



                    There are two parts to the network:




                    • Representational learning layers:
                      This consists of two convolutional networks in parallel. The main difference between the two networks is the kernel size and max-pooling window size. The left one uses kernel size = $F_s/2$ (where $F_s$ is the sampling rate of the signal) whereas the one the right uses kernel size = $F_s times 4$. The intuition behind this is that one network tries to learn "fine" (or high frequency) features, and the other tries to learn "coarse" (or low frequency) features.


                    • Sequential learning layers: The embeddings (or learnt features) from the convolutional layers are concatenated and fed into the LSTM layers to learn temporal dependencies between the embeddings.

                    At the end there is a 5-way softmax layer to classify the time series into one-of-five classes corresponding to sleep stages.






                    share|cite|improve this answer











                    $endgroup$















                      9












                      9








                      9





                      $begingroup$

                      You can certainly use a CNN to classify a 1D signal. Since you are interested in sleep stage classification see this paper. Its a deep neural network called the DeepSleepNet, and uses a combination of 1D convolutional and LSTM layers to classify EEG signals into sleep stages.



                      Here is the architecture:



                      DeepSleepNet



                      There are two parts to the network:




                      • Representational learning layers:
                        This consists of two convolutional networks in parallel. The main difference between the two networks is the kernel size and max-pooling window size. The left one uses kernel size = $F_s/2$ (where $F_s$ is the sampling rate of the signal) whereas the one the right uses kernel size = $F_s times 4$. The intuition behind this is that one network tries to learn "fine" (or high frequency) features, and the other tries to learn "coarse" (or low frequency) features.


                      • Sequential learning layers: The embeddings (or learnt features) from the convolutional layers are concatenated and fed into the LSTM layers to learn temporal dependencies between the embeddings.

                      At the end there is a 5-way softmax layer to classify the time series into one-of-five classes corresponding to sleep stages.






                      share|cite|improve this answer











                      $endgroup$



                      You can certainly use a CNN to classify a 1D signal. Since you are interested in sleep stage classification see this paper. Its a deep neural network called the DeepSleepNet, and uses a combination of 1D convolutional and LSTM layers to classify EEG signals into sleep stages.



                      Here is the architecture:



                      DeepSleepNet



                      There are two parts to the network:




                      • Representational learning layers:
                        This consists of two convolutional networks in parallel. The main difference between the two networks is the kernel size and max-pooling window size. The left one uses kernel size = $F_s/2$ (where $F_s$ is the sampling rate of the signal) whereas the one the right uses kernel size = $F_s times 4$. The intuition behind this is that one network tries to learn "fine" (or high frequency) features, and the other tries to learn "coarse" (or low frequency) features.


                      • Sequential learning layers: The embeddings (or learnt features) from the convolutional layers are concatenated and fed into the LSTM layers to learn temporal dependencies between the embeddings.

                      At the end there is a 5-way softmax layer to classify the time series into one-of-five classes corresponding to sleep stages.







                      share|cite|improve this answer














                      share|cite|improve this answer



                      share|cite|improve this answer








                      edited 2 days ago

























                      answered 2 days ago









                      kedarpskedarps

                      790720




                      790720





















                          3












                          $begingroup$

                          FWIW, I'll recommend checking out the Temporal Convolutional Network from this paper (I am not the author). They have a neat idea for using CNN for time-series data, is sensitive to time order and can model arbitrarily long sequences (but doesn't have a memory).



                          answered 2 days ago by kampta
                                  I want to emphasize the use of a stacked hybrid approach (CNN + RNN) for processing long sequences:



                                  • As you may know, 1D CNNs are not sensitive to the order of timesteps beyond a local scale; of course, by stacking many convolution and pooling layers on top of each other, the final layers can observe longer sub-sequences of the original input. Still, that may not be an effective way to model long-term dependencies. CNNs are, however, very fast compared to RNNs.


                                  • On the other hand, RNNs are sensitive to the order of timesteps and can therefore model temporal dependencies very well. However, they are known to be weak at modeling very long-term dependencies, where a timestep depends on timesteps very far back in the input. They are also very slow when the number of timesteps is high.


                                  So an effective approach might be to combine CNNs and RNNs as follows: first use convolution and pooling layers to reduce the dimensionality of the input. This gives a rather compressed representation of the original input with higher-level features. Then feed this shorter 1D sequence to the RNN layers for further processing. This way we take advantage of the speed of CNNs as well as the representational capabilities of RNNs at the same time. That said, like any other method, you should experiment on your specific use case and dataset to find out whether it's effective or not.



                                  Here is a rough illustration of this method:



                                   --------------------------
                                   -                        -
                                   -    long 1D sequence    -
                                   -                        -
                                   --------------------------
                                               |
                                               |
                                               v
                                   ==========================
                                   =                        =
                                   = Conv + Pooling layers  =
                                   =                        =
                                   ==========================
                                               |
                                               |
                                               v
                                   ---------------------------
                                   -                         -
                                   - shorter representations -
                                   -     (higher-level       -
                                   -      CNN features)      -
                                   -                         -
                                   ---------------------------
                                               |
                                               |
                                               v
                                   ===========================
                                   =                         =
                                   =  (stack of) RNN layers  =
                                   =                         =
                                   ===========================
                                               |
                                               |
                                               v
                                   ===============================
                                   =                             =
                                   = classifier, regressor, etc. =
                                   =                             =
                                   ===============================
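The pipeline above can be sketched in plain NumPy: a couple of conv + pooling stages compress a long 1D sequence into a much shorter one that an RNN could then consume. The kernel and pool sizes here are arbitrary illustrative choices, not recommendations.

```python
import numpy as np

def conv1d_valid(x, kernel):
    """1D 'valid' convolution (cross-correlation, as in most DL libraries)."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def max_pool1d(x, size):
    """Non-overlapping max pooling; drops any ragged tail."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

seq = np.sin(np.linspace(0, 20, 1000))                # long 1D input sequence
h = max_pool1d(conv1d_valid(seq, np.ones(5) / 5), 4)  # first conv + pool stage
h = max_pool1d(conv1d_valid(h, np.ones(5) / 5), 4)    # second conv + pool stage

print(len(seq), len(h))   # 1000 -> 61 timesteps: short enough for an RNN
```

Each stage shrinks the sequence roughly 4x, so two stages turn 1000 timesteps into 61, which is the "shorter representations" box an RNN stack would then process.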





                                  answered 2 days ago by today




















                                          Fazla Rabbi Mashrur is a new contributor. Be nice, and check out our Code of Conduct.