
Two matrices that are not similar have (almost) same eigenvalues [closed]


I have two matrices



$$
A=\begin{pmatrix}
a & 0 & 0 \\
0 & b & 0 \\
0 & 0 & c
\end{pmatrix}
\quad
\text{and}
\quad
B=\begin{pmatrix}
d & e & f \\
d & e & f \\
d & e & f
\end{pmatrix}
$$



In reality my matrices are more like 1000 × 1000, but the only thing that matters for now is that the left matrix is diagonal and the right one consists of a single row repeated.



Obviously the eigenvalues of the left matrix are its diagonal entries. I now want to create a new matrix $C$:



$$C = A+B=\begin{pmatrix}
a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{pmatrix}+\begin{pmatrix} d & e & f \\ d & e & f \\ d & e & f \end{pmatrix}=\begin{pmatrix} a+d & e & f \\ d & b+e & f \\ d & e & c+f \end{pmatrix}$$



I am now wondering how the eigenvalues of this new matrix $C$ are related to the eigenvalues of the diagonal matrix $A$. Can I use a row-reduction argument to relate the eigenvalues of the two matrices?



The reason I am asking is that my 1000 × 1000 matrix (implemented in Mathematica), constructed as described above, gives almost the same eigenvalues as the corresponding diagonal matrix (only a few eigenvalues differ), and I really cannot think of any reason why that should be the case.



EDIT:



I wrote a small piece of Mathematica code to illustrate what I mean. One can see that every eigenvalue of the diagonal matrix $A$ appears among the eigenvalues of $C$:



dim = 50;

A = DiagonalMatrix[Flatten[RandomInteger[{0, 10}, {1, dim}]]];

mat = RandomReal[{0, 100}, {1, dim}];
B = ArrayFlatten[ConstantArray[mat, dim]];

c = A + B;

Abs[Eigenvalues[A]]
Round[Abs[Eigenvalues[c]], 0.01]

(* {10, 10, 10, 10, 10, 10, 9, 9, 9, 9, 9, 9, 8, 8, 8, 8, 7, 7, 7, 7, 7,
6, 6, 6, 6, 5, 5, 5, 5, 5, 4, 4, 4, 4, 3, 3, 3, 3, 3, 2, 2, 2, 2, 2,
1, 1, 1, 0, 0, 0} *)

(* {2084.89, 10., 10., 10., 10., 10., 9.71, 9., 9., 9., 9., 9., 8.54,
8., 8., 8., 7.72, 7., 7., 7., 7., 6.61, 6., 6., 6., 5.44, 5., 5., 5.,
5., 4.29, 4., 4., 4., 3.51, 3., 3., 3., 3., 2.28, 2., 2., 2., 2.,
1.21, 1., 1., 0.33, 0., 0.} *)
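A likely reason so many eigenvalues reappear *exactly* (not merely approximately) is that `RandomInteger[{0, 10}, ...]` over 50 draws produces many repeated diagonal entries: if a value $d$ occurs $m$ times on the diagonal of $A$, then $d$ stays an eigenvalue of $A+B$ with multiplicity at least $m-1$, because any vector supported on those positions and orthogonal to the repeated row satisfies $(A+B)x = dx$. A minimal pure-Python sketch of the extreme case $A = dI$ (my own illustration with arbitrary numbers, not part of the original post):

```python
# Sketch: D = diag(2, 2, 2) (a repeated diagonal entry), B = u v^T with
# u = (1, 1, 1). Then C = D + B keeps 2 as an eigenvalue of multiplicity
# >= 2, and by the trace the remaining eigenvalue is 2 + v.u.

def outer(u, v):
    return [[ui * vj for vj in v] for ui in u]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def det3(M):
    (a, b, c), (d, e, f), (g, h, i) = M
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

d = 2.0
u = [1.0, 1.0, 1.0]
v = [3.0, 5.0, 7.0]          # the repeated row (hypothetical values)

D = [[d if i == j else 0.0 for j in range(3)] for i in range(3)]
C = mat_add(D, outer(u, v))

# C - d*I equals u v^T, which has rank one, so det(C - d*I) = 0 and
# d = 2 is still an eigenvalue:
CmdI = [[C[i][j] - (d if i == j else 0.0) for j in range(3)]
        for i in range(3)]
print(det3(CmdI))                        # 0.0

# Eigenvalues sum to the trace 3*d + v.u, so the single moved
# eigenvalue is d + v.u = 2 + 15 = 17:
print(sum(C[i][i] for i in range(3)))    # 21.0 = 2 + 2 + 17
```

With distinct but clustered diagonal entries the picture is the perturbative one discussed in the answers below, rather than exact preservation.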














closed as off-topic by Michael E2, m_goldberg, MarcoB, rhermans, C. E. Aug 12 at 17:08



  • The question does not concern the technical computing software Mathematica by Wolfram Research. Please see the help center to find out about the topics that can be asked here.
If this question can be reworded to fit the rules in the help center, please edit the question.












  • x-posted on math.se: math.stackexchange.com/q/3320509/289977 – AccidentalFourierTransform, Aug 12 at 1:46

  • @AccidentalFourierTransform Yes, it seems an appropriate question for Math.SE, but perhaps not for here. – Michael E2, Aug 12 at 2:01

















matrix linear-algebra eigenvalues






asked Aug 11 at 23:47 by xabdax (edited Aug 12 at 18:19)



2 Answers
























The reason is that your second matrix is a rank-one update of your first matrix:
$$
B \equiv uv^{t}
$$
where $u=(1,1,1)$ and $v=(d,e,f)$. Therefore, the new eigenvalues are typically a small perturbation of the old ones, and there are some known formulas for special cases. See e.g. these lectures or the references in this MathOverflow post.
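The rank-one structure can be made concrete with the matrix determinant lemma: $\det(D-\lambda I+uv^t)=\det(D-\lambda I)\,(1+v^t(D-\lambda I)^{-1}u)$, so eigenvalues of $C=D+uv^t$ that are not diagonal entries of $D$ solve the "secular equation" $1+\sum_i u_i v_i/(d_i-\lambda)=0$. A small pure-Python check on a 2 × 2 example (my own illustration; the numbers are arbitrary):

```python
import math

# For C = D + u v^T with D = diag(d_1, ..., d_n), off-diagonal
# eigenvalues t solve  1 + sum_i u_i*v_i/(d_i - t) = 0.

d = [1.0, 2.0]    # diagonal of D (hypothetical values)
u = [1.0, 1.0]    # B has identical rows, so u = (1, 1)
v = [3.0, 4.0]    # the repeated row

# Here C = [[4, 4], [3, 6]]; its characteristic polynomial is
# t^2 - 10 t + 12, with roots 5 +/- sqrt(13).
roots = [5.0 + math.sqrt(13.0), 5.0 - math.sqrt(13.0)]

def secular(t):
    return 1.0 + sum(ui * vi / (di - t)
                     for ui, vi, di in zip(u, v, d))

for t in roots:
    print(abs(secular(t)) < 1e-12)   # True: both roots satisfy it
```

For large diagonal matrices with well-separated entries, each root of the secular equation is trapped between consecutive poles $d_i$, which is one way to see why most eigenvalues barely move.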








  • This looks very useful. Are the new eigenvalues always only a small perturbation of the old ones? Is there any chance that they can diverge from the initial eigenvalues significantly when one chooses v to include very big numerical values? – xabdax, Aug 12 at 1:19






  • @xabdax Most eigenvalues are indeed often small perturbations, but not all of them. And it very much depends on $u,v$. For example, if either of them is an eigenvector of $A$, then only one eigenvalue gets modified, and the rest stay the same. Otherwise, there will typically be modifications that scale with $u,v$. If they are very large, so will the modification. – AccidentalFourierTransform, Aug 12 at 1:36
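The eigenvector case mentioned in this comment is easy to see when $A$ is diagonal, since its eigenvectors are the standard basis vectors: if $u=e_k$, then $A+uv^t$ differs from $A$ only in row $k$, and the characteristic polynomial factors as $(a_k+v_k-\lambda)\prod_{i\neq k}(a_i-\lambda)$, so only the $k$-th eigenvalue moves. A quick pure-Python sketch (my own illustration with arbitrary numbers):

```python
# If u is the standard basis vector e_k (an eigenvector of the diagonal
# A), the update A + u v^T changes only row k. For k = 0 the result is
# upper triangular, so the eigenvalues sit on the diagonal: only
# a_0 moves, to a_0 + v_0.

a = [1.0, 2.0, 3.0]    # diagonal of A (hypothetical values)
v = [5.0, 6.0, 7.0]
k = 0                  # u = e_0

C = [[(a[i] if i == j else 0.0) + (v[j] if i == k else 0.0)
      for j in range(3)] for i in range(3)]
# C = [[6, 6, 7],
#      [0, 2, 0],
#      [0, 0, 3]]

eigs = sorted(C[i][i] for i in range(3))
print(eigs)            # [2.0, 3.0, 6.0]: only a_0 = 1 moved, to 1 + 5
```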










  • Thanks for that. I am having a hard time believing that the modifications will scale with u and/or v. I don't know why, but if you modify the above code with mat = RandomReal[{10^6, 10^9}, {1, dim}]; one gets modified eigenvalues that are barely distinguishable from the initial ones, even though the numbers in v have a much larger magnitude. – xabdax, Aug 12 at 3:24


















It doesn't happen here:

SeedRandom[0];
aa = RandomReal[{-10, 10}, {1000, 1000}];
bb = ConstantArray[RandomReal[{-10, 10}, 1000], 1000];

eva = Eigenvalues@aa;
evc = Eigenvalues[aa + bb];

ListPlot[{ReIm@eva, ReIm@evc}, ImageSize -> Large, MaxPlotPoints -> 1000]

[plot: the eigenvalues of aa and of aa + bb shown as two point sets in the complex plane]

OTOH, it does happen here:

bb = ConstantArray[RandomReal[{-1, 1} 1*^-8, 1000], 1000];

eva - Eigenvalues[aa + bb] // Abs // Max
(* 5.4818*10^-7 *)

Of course, the explanations should be obvious.
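One way to quantify why the tiny bb barely moves the spectrum: bb is rank one, $uv^t$ with $u$ the all-ones vector, and the spectral norm of a rank-one matrix is $\lVert u\rVert\,\lVert v\rVert$. Eigenvalues of a diagonalizable matrix move by at most this norm times an eigenvector-conditioning factor (the Bauer-Fike theorem), so a row with entries of size $10^{-8}$ caps the shift at roughly $10^{-5}$ times that factor. A pure-Python estimate of the norm (my own sketch, not part of the answer):

```python
import math
import random

# The tiny perturbation is bb = ones(n) . row^T with entries of row
# uniform in [-1e-8, 1e-8]. Its spectral norm is |u| * |v|, which
# bounds how far the eigenvalues can drift (up to an eigenvector-
# conditioning factor, by the Bauer-Fike theorem).

random.seed(0)
n = 1000
row = [random.uniform(-1e-8, 1e-8) for _ in range(n)]

norm_u = math.sqrt(n)                        # u = (1, ..., 1)
norm_v = math.sqrt(sum(x * x for x in row))
print(norm_u * norm_v)                       # of order 1e-6 to 1e-5,
                                             # consistent with the
                                             # observed shift 5.5e-7
```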








  • It’s not so obvious to me tbh. Why does it happen in the latter case but not in the former? The sample code that I added to my question has the same problem. – xabdax, Aug 12 at 1:38

  • @xabdax The size of the perturbation bb affects the size of the effect on the eigenvalues. In the second case, the perturbation is about $10^{-9}$ times as small. I meant to imply that you can have a difference in the eigenvalues, somewhere between modest and small, depending on the matrices $A$ and $B$. If you want an explanation of your case, you'd have to give us the actual matrices. – Michael E2, Aug 12 at 2:00

  • If one sorts the eigenvalues by magnitude, then even in the first code the eigenvalues you calculated would be almost the same. So apparently it does not really matter whether your perturbation is big or small. – xabdax, Aug 12 at 2:27

  • @xabdax What do you mean "almost the same"? It's clear there are blue dots not very close to any gold dots. (But "close" is relative, after all.) -- oops, I had the wrong image. – Michael E2, Aug 12 at 2:49

  • If you sort your eigenvalues (either the real or imaginary part) by magnitude and plot both eva and evc, you will get two S-shaped curves that lie on top of each other. Does that not imply that the eigenvalues have barely changed? I'm not sure how Mathematica sorts the eigenvalues, which is why I usually sort them manually by magnitude. – xabdax, Aug 12 at 2:56




















answered Aug 12 at 0:44 by AccidentalFourierTransform














  • $begingroup$
    This looks very useful. Are the new eigenvalues always only a small perturbation of the old ones? Is there any chance that they can diverge from the initial eigenvalues significantly when one chooses v to include very big numerical values?
    $endgroup$
    – xabdax
    Aug 12 at 1:19






  • 1




    $begingroup$
    @xabdax Most eigenvalues are indeed often small perturbations, but not all of them. And it very much depends on $u,v$. For example, if either of them is an eigenvector of $A$, then only one eigenvalue gets modified, and the rest stay the same. Otherwise, there will typically be modifications that scale with $u,v$. If they are very large, so will the modification.
    $endgroup$
    – AccidentalFourierTransform
    Aug 12 at 1:36










  • $begingroup$
    Thanks for that. I am having a hard time believing that the modifications will scale with u and/or v. I don't know why but if you modify the above code with mat = RandomReal[10^(6), 10^(9), 1, dim]; one gets modified eigenvalues which are barely distinguishable from the initial ones even though the numbers in v have a much larger magnitude.
    $endgroup$
    – xabdax
    Aug 12 at 3:24
















  • $begingroup$
    This looks very useful. Are the new eigenvalues always only a small perturbation of the old ones? Is there any chance that they can diverge from the initial eigenvalues significantly when one chooses v to include very big numerical values?
    $endgroup$
    – xabdax
    Aug 12 at 1:19






  • 1




    $begingroup$
    @xabdax Most eigenvalues are indeed often small perturbations, but not all of them. And it very much depends on $u,v$. For example, if either of them is an eigenvector of $A$, then only one eigenvalue gets modified, and the rest stay the same. Otherwise, there will typically be modifications that scale with $u,v$. If they are very large, so will the modification.
    $endgroup$
    – AccidentalFourierTransform
    Aug 12 at 1:36










  • $begingroup$
    Thanks for that. I am having a hard time believing that the modifications will scale with u and/or v. I don't know why but if you modify the above code with mat = RandomReal[10^(6), 10^(9), 1, dim]; one gets modified eigenvalues which are barely distinguishable from the initial ones even though the numbers in v have a much larger magnitude.
    $endgroup$
    – xabdax
    Aug 12 at 3:24















$begingroup$
This looks very useful. Are the new eigenvalues always only a small perturbation of the old ones? Is there any chance that they can diverge from the initial eigenvalues significantly when one chooses v to include very big numerical values?
$endgroup$
– xabdax
Aug 12 at 1:19




$begingroup$
This looks very useful. Are the new eigenvalues always only a small perturbation of the old ones? Is there any chance that they can diverge from the initial eigenvalues significantly when one chooses v to include very big numerical values?
$endgroup$
– xabdax
Aug 12 at 1:19




1




1




$begingroup$
@xabdax Most eigenvalues are indeed often small perturbations, but not all of them. And it very much depends on $u,v$. For example, if either of them is an eigenvector of $A$, then only one eigenvalue gets modified, and the rest stay the same. Otherwise, there will typically be modifications that scale with $u,v$. If they are very large, so will the modification.
$endgroup$
– AccidentalFourierTransform
Aug 12 at 1:36




$begingroup$
@xabdax Most eigenvalues are indeed often small perturbations, but not all of them. And it very much depends on $u,v$. For example, if either of them is an eigenvector of $A$, then only one eigenvalue gets modified, and the rest stay the same. Otherwise, there will typically be modifications that scale with $u,v$. If they are very large, so will the modification.
$endgroup$
– AccidentalFourierTransform
Aug 12 at 1:36












$begingroup$
Thanks for that. I am having a hard time believing that the modifications will scale with u and/or v. I don't know why but if you modify the above code with mat = RandomReal[10^(6), 10^(9), 1, dim]; one gets modified eigenvalues which are barely distinguishable from the initial ones even though the numbers in v have a much larger magnitude.
$endgroup$
– xabdax
Aug 12 at 3:24




$begingroup$
Thanks for that. I am having a hard time believing that the modifications will scale with u and/or v. I don't know why but if you modify the above code with mat = RandomReal[10^(6), 10^(9), 1, dim]; one gets modified eigenvalues which are barely distinguishable from the initial ones even though the numbers in v have a much larger magnitude.
$endgroup$
– xabdax
Aug 12 at 3:24













4













$begingroup$

It doesn't happen here:



SeedRandom[0];
aa = RandomReal[-10, 10, 1000, 1000];
bb = ConstantArray[RandomReal[-10, 10, 1000], 1000];

eva = Eigenvalues@aa;
evc = Eigenvalues[aa + bb];

ListPlot[ReIm@eva, ReIm@evc, ImageSize -> Large, MaxPlotPoints -> 1000]


enter image description here



OTOH, it does happen here:



bb = ConstantArray[RandomReal[-1, 1 1*^-8, 1000], 1000];

eva - Eigenvalues[aa + bb] // Abs // Max
(* 5.4818*10^-7 *_)


Of course, the explanations should be obvious.






share|improve this answer











$endgroup$














  • $begingroup$
    It’s not so obvious to me tbh. Why does it happen in the latter case but not in the former? The sample code that I added to my question has the same problem.
    $endgroup$
    – xabdax
    Aug 12 at 1:38






  • 1




    $begingroup$
    @xabdax The size of the perturbation bb affects the size of the effect on the eigenvalues. In the second case, the size is about $10^-9$ times as small. I meant to imply that you can have a difference in the eigenvalues, somewhere between a modest to a small one, depending on the matrices $A$ and $B$. If you want an explanation of your case, you'd have to give us the actual matrices.
    $endgroup$
    – Michael E2
    Aug 12 at 2:00










  • $begingroup$
    If one sorts the eigenvalues by magnitude, then even in the first code, the eigenvalues that you calculated would be almost the same. So apparently it does not really matter whether your perturbation is big or small.
    $endgroup$
    – xabdax
    Aug 12 at 2:27










  • $begingroup$
    @xabdax What do you mean "almost the same"? It's clear there are blue dots not very close to any gold dots. (But "close" is relative, after all.) -- oops, I had the wrong image.
    $endgroup$
    – Michael E2
    Aug 12 at 2:49











  • $begingroup$
    If you sort your eigenvalues (either the real or imaginary part) by magnitude and plot both eva and evc, you will get two S-shaped curves that lie on top of each other. Does that not imply that the eigenvalues have barely changed? I'm not sure by which way mathematica sorts the eigenvalues which is why I usually sort them manually by magnitude.
    $endgroup$
    – xabdax
    Aug 12 at 2:56
















4













$begingroup$

It doesn't happen here:



SeedRandom[0];
aa = RandomReal[-10, 10, 1000, 1000];
bb = ConstantArray[RandomReal[-10, 10, 1000], 1000];

eva = Eigenvalues@aa;
evc = Eigenvalues[aa + bb];

ListPlot[ReIm@eva, ReIm@evc, ImageSize -> Large, MaxPlotPoints -> 1000]


enter image description here



OTOH, it does happen here:



bb = ConstantArray[RandomReal[-1, 1 1*^-8, 1000], 1000];

eva - Eigenvalues[aa + bb] // Abs // Max
(* 5.4818*10^-7 *_)


Of course, the explanations should be obvious.






share|improve this answer











$endgroup$














  • $begingroup$
    It’s not so obvious to me tbh. Why does it happen in the latter case but not in the former? The sample code that I added to my question has the same problem.
    $endgroup$
    – xabdax
    Aug 12 at 1:38






  • 1




    $begingroup$
    @xabdax The size of the perturbation bb affects the size of the effect on the eigenvalues. In the second case, the size is about $10^-9$ times as small. I meant to imply that you can have a difference in the eigenvalues, somewhere between a modest to a small one, depending on the matrices $A$ and $B$. If you want an explanation of your case, you'd have to give us the actual matrices.
    $endgroup$
    – Michael E2
    Aug 12 at 2:00










  • $begingroup$
    If one sorts the eigenvalues by magnitude, then even in the first code, the eigenvalues that you calculated would be almost the same. So apparently it does not really matter whether your perturbation is big or small.
    $endgroup$
    – xabdax
    Aug 12 at 2:27










  • $begingroup$
    @xabdax What do you mean "almost the same"? It's clear there are blue dots not very close to any gold dots. (But "close" is relative, after all.) -- oops, I had the wrong image.
    $endgroup$
    – Michael E2
    Aug 12 at 2:49











  • $begingroup$
    If you sort your eigenvalues (either the real or the imaginary part) by magnitude and plot both eva and evc, you will get two S-shaped curves that lie on top of each other. Does that not imply that the eigenvalues have barely changed? I'm not sure how Mathematica sorts the eigenvalues, which is why I usually sort them manually by magnitude.
    $endgroup$
    – xabdax
    Aug 12 at 2:56
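An aside on the sorting point raised in the comments: two spectra can have identical sorted-magnitude curves while the eigenvalues themselves are completely different, so overlapping S-curves alone do not show that the eigenvalues "barely changed". A tiny hand-made illustration (my own example, not from the thread):

```python
# Two made-up "spectra" with identical sorted magnitudes {2, 4, 6}
# but no eigenvalue in common; every cross-pair is at least 2 apart.
s1 = [2 + 0j, -4 + 0j, 6j]
s2 = [-2 + 0j, 4j, -6 + 0j]

def mags(s):
    return sorted(abs(z) for z in s)

print(mags(s1) == mags(s2))  # True: the sorted-magnitude curves coincide
print(min(abs(a - b) for a in s1 for b in s2))  # 2.0: nearest cross-pair
```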














edited Aug 12 at 3:09

























answered Aug 12 at 0:40









Michael E2

159k13 gold badges216 silver badges515 bronze badges



