Proof of (weak) consistency for an unbiased estimator
I want to prove a theorem stating:
An unbiased estimator $\hat{\theta}_n$ of the unknown parameter $\theta$ is consistent if $V(\hat{\theta}_n) \to 0$ as $n \to \infty$.
I've tried using the definition of consistency, which is $\lim_{n\to\infty} \mathbb{P}(|\hat{\theta}_n - \theta| \geq \epsilon) = 0$, together with Markov's inequality. However, I am having trouble evaluating the expected value of $|\hat{\theta}_n - \theta|$. Can anyone explain the process of deriving this theorem?
Tags: expected-value, markov-process, unbiased-estimator, consistency
edited Jun 22 at 23:43 by Ben
asked Jun 22 at 21:29 by Johnny Yang
2 Answers
The standard method of proving (weak) consistency is to use Chebyshev's inequality and apply the triangle inequality to deal with the bias in the estimator. From the triangle inequality, you have:
$$|\hat{\theta}_n - \theta|
= |(\hat{\theta}_n - \mathbb{E}(\hat{\theta}_n)) - (\theta - \mathbb{E}(\hat{\theta}_n))|
\leqslant |\hat{\theta}_n - \mathbb{E}(\hat{\theta}_n)| + |\theta - \mathbb{E}(\hat{\theta}_n)|.$$
In your problem you have an unbiased estimator, so the last term is zero. We therefore obtain:
$$\begin{aligned}
\mathbb{P}(|\hat{\theta}_n - \theta| \geqslant \epsilon)
&\leqslant \mathbb{P}(|\hat{\theta}_n - \mathbb{E}(\hat{\theta}_n)| + |\theta - \mathbb{E}(\hat{\theta}_n)| \geqslant \epsilon) \\[6pt]
&= \mathbb{P}(|\hat{\theta}_n - \mathbb{E}(\hat{\theta}_n)| \geqslant \epsilon) \\[6pt]
&\leqslant \frac{\mathbb{V}(\hat{\theta}_n)}{\epsilon^2}.
\end{aligned}$$
Taking $n \rightarrow \infty$ with $\mathbb{V}(\hat{\theta}_n) \rightarrow 0$ gives the desired result. Note here that the triangle inequality has allowed us to isolate the term required for the Chebyshev inequality, and in the present case the other term is zero since the estimator is unbiased. In the more general case you can still proceed with this method, and convergence occurs so long as the estimator is asymptotically unbiased.
answered Jun 22 at 23:42 by Ben
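As a quick numerical illustration of this answer (not part of the original post), here is a minimal NumPy sketch using the sample mean of i.i.d. normal data, which is unbiased for the mean $\theta$ with variance $\sigma^2/n \to 0$. The values of `theta`, `sigma`, `eps`, and the sample sizes are hypothetical choices for the sketch, not anything taken from the question.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (hypothetical) values: true mean theta, noise sd sigma, and the
# epsilon from the definition of consistency.
theta, sigma, eps = 2.0, 3.0, 0.5
n_reps = 5_000  # Monte Carlo replications per sample size

for n in (10, 100, 1_000):
    # theta_hat is the sample mean: unbiased for theta, with V(theta_hat) = sigma^2 / n.
    theta_hat = rng.normal(theta, sigma, size=(n_reps, n)).mean(axis=1)

    exceed = np.mean(np.abs(theta_hat - theta) >= eps)  # empirical P(|theta_hat - theta| >= eps)
    bound = (sigma**2 / n) / eps**2                      # Chebyshev bound V(theta_hat) / eps^2

    print(f"n={n:5d}  P(|theta_hat - theta| >= {eps}) ~ {exceed:.4f}  <=  bound {bound:.4f}")
```

The empirical exceedance probability stays below the Chebyshev bound, and both shrink toward zero as $n$ grows, which is exactly the convergence the proof establishes.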
Another method might be the following:
$$P(|\hat{\theta}_n-\theta|\geq\epsilon)=P(|\hat{\theta}_n-\theta|^2\geq\epsilon^2)\underbrace{\leq}_{\text{Markov Ineq.}}\frac{E[|\hat{\theta}_n-\theta|^2]}{\epsilon^2}\underbrace{=}_{E[\hat{\theta}_n]=\theta}\frac{\mathbb{V}(\hat{\theta}_n)}{\epsilon^2}$$
So, when the RHS goes to $0$, so does the LHS, which is what we want.
answered Jun 23 at 0:51 by gunes
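The last step of this answer uses the fact that, for an unbiased estimator, the mean squared error $E[(\hat{\theta}_n - \theta)^2]$ equals the variance $\mathbb{V}(\hat{\theta}_n)$. Below is a small Monte Carlo sketch of that identity for the sample mean; all numerical values are illustrative assumptions, not taken from the question.

```python
import numpy as np

rng = np.random.default_rng(1)

theta, sigma, n = 2.0, 3.0, 50  # illustrative (hypothetical) values
n_reps = 100_000                # Monte Carlo replications

# Sample mean of n i.i.d. N(theta, sigma^2) draws: unbiased for theta.
theta_hat = rng.normal(theta, sigma, size=(n_reps, n)).mean(axis=1)

mse = np.mean((theta_hat - theta) ** 2)  # E[(theta_hat - theta)^2]
var = theta_hat.var()                    # V(theta_hat); equals the MSE when unbiased

print(f"E[(theta_hat - theta)^2] ~ {mse:.5f}")
print(f"V(theta_hat)             ~ {var:.5f}")
print(f"sigma^2 / n              = {sigma**2 / n:.5f}")
```

All three printed quantities agree up to simulation noise, and since $\sigma^2/n \to 0$, the Markov bound above also goes to zero as $n \to \infty$.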