
What is the difference between an Embedding Layer and an Autoencoder?


I'm reading about Embedding layers, especially applied to NLP and word2vec, and they seem nothing more than an application of Autoencoders for dimensionality reduction. Are they different? If so, what are the differences between them?










      nlp word2vec word-embeddings dimensionality-reduction embeddings






      asked Jun 21 at 15:52









Leevo
747 • 10 bronze badges




2 Answers


















Actually these are three different things (embedding layer, word2vec, autoencoder), though they can all be used to solve similar problems, i.e. producing a dense representation of data.

An autoencoder is a type of neural network in which the inputs and outputs are the same, but the hidden layer has reduced dimensionality, forcing the network to learn a denser representation of the data.

Word2vec also contains only one hidden layer, but its inputs are the neighboring words and its output is the word itself (or the other way around, depending on the variant). So it is not an autoencoder, because its inputs and outputs are different.

An embedding layer is just a "simple" layer in a neural network. You can think of it as a dictionary in which a category (e.g. a word) is represented as a vector (a list of numbers). The values of the vectors are learned by backpropagating the errors of the network.
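The dictionary view above can be sketched in a few lines. This is purely illustrative (the vocabulary size, embedding dimension, and word ids below are made up); a real embedding layer, e.g. in Keras or PyTorch, is exactly this lookup table, except that the matrix is a trainable weight updated by backpropagation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 10-word vocabulary, 4-dimensional embeddings.
vocab_size, embed_dim = 10, 4
E = rng.normal(size=(vocab_size, embed_dim))  # the "dictionary": one row per word id

# A tiny "sentence" of word ids; the embedding lookup is just row selection.
word_ids = np.array([2, 7, 2])
vectors = E[word_ids]

print(vectors.shape)  # (3, 4): one dense vector per word
# In a real network, E would be updated by backpropagating the task loss.
```

Note that the same id (2 here) always maps to the same vector, which is what makes the layer behave like a dictionary.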







In short:

Input vector --> embedding layer --> embedding vector

vs. an autoencoder:

Input vector --> encoder --> embedding vector --> decoder --> input vector

So the goal of an embedding layer is the same as that of the encoder part of an autoencoder.
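A minimal sketch of that contrast, with purely illustrative sizes and untrained linear layers: the autoencoder pushes the embedding back through a decoder and compares the reconstruction to the input, whereas an embedding layer alone stops at the embedding vector:

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_features, n_latent = 5, 8, 3       # hypothetical sizes
x = rng.normal(size=(n_samples, n_features))    # batch of input vectors

# Linear encoder and decoder weights (untrained, for illustration only).
W_enc = rng.normal(size=(n_features, n_latent))
W_dec = rng.normal(size=(n_latent, n_features))

z = x @ W_enc        # input vector --> encoder --> embedding vector
x_hat = z @ W_dec    # embedding vector --> decoder --> reconstructed input

# An autoencoder is trained to minimize this reconstruction error;
# an embedding layer stops at z and is trained by the downstream task's loss.
loss = np.mean((x - x_hat) ** 2)
print(z.shape, x_hat.shape)  # (5, 3) (5, 8)
```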






                  answered Jun 21 at 16:26









Viktor
484 • 1 gold badge, 3 silver badges, 14 bronze badges





















                          answered Jun 22 at 18:36









Ismael EL ATIFI
167 • 5 bronze badges



























                              Thanks for contributing an answer to Data Science Stack Exchange!

