What is the difference between an Embedding Layer and an Autoencoder?
I'm reading about embedding layers, especially as applied to NLP and word2vec, and they seem to be nothing more than an application of autoencoders for dimensionality reduction. Are they different? If so, what are the differences between them?
Tags: nlp, word2vec, word-embeddings, dimensionality-reduction, embeddings
asked Jun 21 at 15:52 by Leevo
2 Answers
Actually, these are three different things (embedding layer, word2vec, autoencoder), though they can all be used to solve similar problems, i.e. to obtain a dense representation of data.
An autoencoder is a type of neural network whose inputs and outputs are the same, but whose hidden layer has reduced dimensionality, in order to obtain a denser representation of the data.
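For concreteness, here is a minimal autoencoder sketch in Keras (the answer names no library; Keras and all the sizes below are illustrative assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

input_dim = 784   # e.g. a flattened 28x28 image (assumed example)
latent_dim = 32   # the reduced, denser representation

inputs = tf.keras.Input(shape=(input_dim,))
encoded = layers.Dense(latent_dim, activation="relu")(inputs)     # encoder: compress
decoded = layers.Dense(input_dim, activation="sigmoid")(encoded)  # decoder: reconstruct

autoencoder = models.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
# Inputs and targets are the same data, which is what makes it an autoencoder:
# autoencoder.fit(x_train, x_train, epochs=10)
```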
Word2vec contains only one hidden layer, but its inputs are the neighboring words and its output is the word itself (or the other way around). So it cannot be an autoencoder, because the inputs and outputs are different.
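As an illustration of that training setup (the answer mentions no particular library; gensim is assumed here), training word2vec can look like this:

```python
from gensim.models import Word2Vec  # gensim >= 4 API assumed

sentences = [["the", "cat", "sat"], ["the", "dog", "ran"]]
# sg=1 selects skip-gram: predict the neighboring words from the center word
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)
vector = model.wv["cat"]  # the learned 50-dimensional vector for "cat"
```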
An embedding layer is just a "simple" layer in a neural network. You can think of it as a dictionary in which each category (e.g. a word) is represented by a vector (a list of numbers). The values of those vectors are learned by backpropagating the errors of the network.
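A minimal Keras sketch of this (again an assumed, illustrative setup): the Embedding layer is a trainable lookup table whose rows are updated by backpropagation along with the rest of the model:

```python
import tensorflow as tf
from tensorflow.keras import layers

vocab_size = 10000  # number of categories (words); illustrative
embed_dim = 64      # length of each word vector

model = tf.keras.Sequential([
    layers.Embedding(input_dim=vocab_size, output_dim=embed_dim),  # lookup table
    layers.GlobalAveragePooling1D(),        # average the word vectors in a text
    layers.Dense(1, activation="sigmoid"),  # e.g. binary text classification
])
model.compile(optimizer="adam", loss="binary_crossentropy")
# Training this classifier updates the embedding vectors by backpropagation.
```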
answered Jun 21 at 16:26 by Viktor
In short:
Input vector --> embedding layer --> embedding vector
vs. an autoencoder:
Input vector --> encoder --> embedding vector --> decoder --> input vector
So the goal of an embedding layer is the same as that of the encoder part of an autoencoder.
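To make that correspondence concrete, here is a sketch (in Keras, which this answer does not itself mention) of carving the encoder out of an autoencoder so that, once trained, it maps inputs straight to embedding vectors, just like an embedding layer:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

inputs = tf.keras.Input(shape=(784,))                       # illustrative size
encoded = layers.Dense(32, activation="relu")(inputs)       # embedding vector
decoded = layers.Dense(784, activation="sigmoid")(encoded)  # reconstruction

autoencoder = models.Model(inputs, decoded)  # input --> encoder --> decoder --> input
encoder = models.Model(inputs, encoded)      # input --> encoder --> embedding vector

autoencoder.compile(optimizer="adam", loss="mse")
# After autoencoder.fit(x_train, x_train, ...), the shared layers are trained,
# so encoder.predict(x) yields the embedding vectors directly.
```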
answered Jun 22 at 18:36 by Ismael EL ATIFI