Proof of (weak) consistency for an unbiased estimator































I want to prove a theorem stating:

An unbiased estimator $\hat\theta_n$ of the unknown parameter $\theta$ is consistent if $V(\hat\theta_n) \to 0$ as $n \to \infty$.

I've tried using the definition of consistency, $\lim_{n\to\infty} \mathbb{P}(|\hat\theta_n-\theta| \geq \epsilon)=0$, together with Markov's inequality. However, I am having trouble evaluating the expected value of $|\hat\theta_n-\theta|$. Can anyone explain the process of deriving this theorem?



























      expected-value markov-process unbiased-estimator consistency








      edited Jun 22 at 23:43









      Ben











      asked Jun 22 at 21:29









Johnny Yang





















          2 Answers






























The standard method of proving (weak) consistency is to use Chebyshev's inequality and apply the triangle inequality to deal with the bias in the estimator. From the triangle inequality, you have:

$$|\hat\theta_n - \theta|
= |(\hat\theta_n - \mathbb{E}(\hat\theta_n)) - (\theta - \mathbb{E}(\hat\theta_n))|
\leqslant |\hat\theta_n - \mathbb{E}(\hat\theta_n)| + |\theta - \mathbb{E}(\hat\theta_n)|.$$

In your problem you have an unbiased estimator, so the last term is zero. We therefore obtain:

$$\begin{aligned}
\mathbb{P}(|\hat\theta_n - \theta| \geqslant \epsilon)
&\leqslant \mathbb{P}(|\hat\theta_n - \mathbb{E}(\hat\theta_n)| + |\theta - \mathbb{E}(\hat\theta_n)| \geqslant \epsilon) \\[6pt]
&= \mathbb{P}(|\hat\theta_n - \mathbb{E}(\hat\theta_n)| \geqslant \epsilon) \\[6pt]
&\leqslant \frac{\mathbb{V}(\hat\theta_n)}{\epsilon^2}.
\end{aligned}$$

Taking $n \rightarrow \infty$ with $\mathbb{V}(\hat\theta_n) \rightarrow 0$ gives the desired result. Note here that the triangle inequality has allowed us to isolate the term required for Chebyshev's inequality, and in the present case the other term is zero since the estimator is unbiased. In the more general case you can still proceed with this method, and convergence occurs so long as the estimator is asymptotically unbiased.

answered Jun 22 at 23:42 · Ben
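The Chebyshev bound in this answer is easy to check numerically. Here is a minimal sketch; the exponential model, the sample-size grid, and $\epsilon = 0.1$ are illustrative assumptions, not part of the original question. The sample mean of i.i.d. Exponential($\theta$) draws is unbiased for $\theta$ with variance $\theta^2/n \to 0$, so its empirical deviation probability can be compared with the bound $\mathbb{V}(\hat\theta_n)/\epsilon^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, eps, reps = 2.0, 0.1, 2000  # true mean, tolerance, Monte Carlo replications
p_dev = {}

for n in [10, 100, 1000, 10000]:
    # reps independent samples of size n; the sample mean is unbiased for theta
    est = rng.exponential(theta, size=(reps, n)).mean(axis=1)
    # empirical P(|theta_hat - theta| >= eps)
    p_dev[n] = np.mean(np.abs(est - theta) >= eps)
    # Chebyshev bound: V(theta_hat)/eps^2 = theta^2 / (n * eps^2), capped at 1
    bound = min(theta**2 / (n * eps**2), 1.0)
    print(f"n={n:6d}  P(|deviation| >= eps) = {p_dev[n]:.3f}  bound = {bound:.3f}")
```

Both columns shrink toward zero as $n$ grows, which is the consistency statement; the empirical probability is typically far below the bound, since Chebyshev's inequality is conservative.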
































Another method might be the following:
$$P(|\hat\theta_n-\theta|\geq\epsilon)=P(|\hat\theta_n-\theta|^2\geq\epsilon^2)\underbrace{\leq}_{\text{Markov Ineq.}}\frac{E[(\hat\theta_n-\theta)^2]}{\epsilon^2}\underbrace{=}_{E[\hat\theta_n]=\theta}\frac{\mathbb{V}(\hat\theta_n)}{\epsilon^2}$$
So, when the RHS goes to $0$, the LHS does too, which is what we want.

answered Jun 23 at 0:51 · gunes
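The last step of this argument is that $E[(\hat\theta_n-\theta)^2]$ coincides with $\mathbb{V}(\hat\theta_n)$ exactly when the estimator is unbiased. A quick numerical sanity check of that step, using an illustrative exponential model with the sample mean as the unbiased estimator (not part of the original answer):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 50, 100_000

# Sample mean of n i.i.d. Exponential(theta) draws: unbiased, so E[theta_hat] = theta
est = rng.exponential(theta, size=(reps, n)).mean(axis=1)

mse = np.mean((est - theta) ** 2)  # E[(theta_hat - theta)^2], the Markov numerator
var = est.var()                    # V(theta_hat); equals the MSE when the bias is zero
print(mse, var)                    # both should be close to theta^2 / n = 0.08
```

If the estimator were biased, `mse` would exceed `var` by the squared bias, and the final equality in the displayed chain would fail; that is exactly where unbiasedness enters the proof.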































































