Computing elements of a 1000 x 60 matrix exhausts RAM


I am trying to compute a 1000 x 60 matrix or list of lists (and ideally this should go up to 1000 x 500 or 1000 x 1000).



Each element is the result of a FindRoot operation, so I make my list by doing



Table[Flatten[{h} /. FindRoot[h == F[h, b, g], {h, -0.01}]], {b, 1, 1000}, {g, 1, 60}]


but this fills up 16 GB of RAM. I think I should be able to hold lists of lists much bigger than that, so using Table with FindRoot is probably causing Mathematica to store a lot of unneeded stuff in memory.
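
A quick way to test that suspicion (this check is an editorial sketch, not part of the original post; MemoryInUse and MaxMemoryUsed are standard kernel functions) is to sample the kernel's memory use while the table is being built and see whether it keeps climbing:

mem = {};
result = Table[
   If[Mod[i, 100] == 0, AppendTo[mem, MemoryInUse[]]];
   i^2, (* stand-in for the FindRoot call for one (b, g) pair *)
   {i, 1, 1000}];
ListLinePlot[mem] (* a curve that keeps climbing points at accumulating definitions *)
MaxMemoryUsed[]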



Here is the code:



ι[m_, n_] := Binomial[n, n*(1 - m)/2]*2^(-n);
f[m_, h_, b_, g_, n_] := (h*m + g/2*m^2) + 1/(n*b)*Log[ι[m, n]];
μ[m_, h_, b_, g_, n_] :=
  Exp[b*n*f[m, h, b, g, n] + b*n*(-h + g/2)]/
   Sum[Exp[b*n*f[x, h, b, g, n] + b*n*(-h + g/2)], {x, -1 + 2/n, 1 - 2/n, 2/n}];
moment[h_, x_, b_, g_, n_] := Sum[m^x*μ[m, h, b, g, n], {m, -1 + 2/n, 1 - 2/n, 2/n}];
var[h_, b_, g_, n_] := moment[h, 2, b, g, n] - moment[h, 1, b, g, n]^2;
cov[h_, b_, g_, n_] := moment[h, 3, b, g, n] - moment[h, 1, b, g, n]*moment[h, 2, b, g, n];
F[h_, b_, g_, n_] := -d*b*(cov[h, b, g, n] + 2 var[h, b, g, n]);
n = 100;
d = 0.9;

glist = Table[g, {g, 0.4, 1, 0.01}];
blist = Table[b, {b, 1.1, 10.1, 0.01}];

heatdata = Table[
   Flatten[{h} /. FindRoot[h == F[h, b, g, n], {h, -0.01}]][[1]],
   {b, blist}, {g, glist}];









  • Please show a complete minimal example that reproduces the problem. – Szabolcs May 19 at 8:31

  • If it's a lot of code, that would be your actual code, not a minimal example. Please make an effort to track down the cause of the problem, and construct a small example that illustrates the problem. See here for guidance: mathematica.meta.stackexchange.com/q/2126/12 – Szabolcs May 19 at 8:34

  • One possible issue is the memoization. Did you check how many values are actually saved? If you are working with floating point numbers, it may be the thing that eats up the memory. – Szabolcs May 19 at 8:35

  • Thanks, I've added the code as it isn't really that long. I've removed the memoization and I'm looking to see if this works now (the computation does take a while to run). – Three Diag May 19 at 8:40
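
Regarding the memoization comment above: a quick way to check how many values a memoized definition has accumulated, and how much memory they occupy, is to inspect its DownValues. This is an editorial sketch, not from the thread; f stands for whichever function was memoized (i.e. defined as f[x_] := f[x] = ...):

Length[DownValues[f]]    (* number of cached argument combinations *)
ByteCount[DownValues[f]] (* memory occupied by the cache, in bytes *)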















1 Answer
Your function F is implemented really, really inefficiently. By quite simple means, and in the situation at hand, it can be sped up by a factor of 20000. The key is to switch to machine-precision arithmetic as early as possible and to store frequently used data in packed arrays. I call the sped-up implementation Fnew:



n = 100;
mlist = Range[-1. + 2/n, 1. - 2/n, 2./n];
m2list = mlist^2;
m3list = mlist^3;
logiotalist = Log[Binomial[n, n*(1 - mlist)/2]*2^(-n)];

d = 0.9;
glist = Range[0.4, 1, 0.01];
blist = Range[1.1, 10.1, 0.01];

ClearAll[Fnew];
Fnew[h_?NumericQ, b_, g_] :=
  Module[{var, cov, explist, μlist, mom1, mom2, mom3},
   explist = Exp[(b n h) mlist + (b n g/2) m2list + logiotalist + b n (-h + g/2)];
   μlist = explist/Total[explist];
   mom1 = μlist.mlist;
   mom2 = μlist.m2list;
   mom3 = μlist.m3list;
   var = Subtract[mom2, mom1 mom1];
   cov = Subtract[mom3, mom1 mom2];
   (-d b) (cov + 2. var)
   ];


Just a quick test for precision and speed:



{t1, r1} = F[0.1, blist[[1]], glist[[1]], n] // RepeatedTiming;
{t2, r2} = Fnew[0.1, blist[[1]], glist[[1]]] // RepeatedTiming;
Abs[r1 - r2]/r1
t1/t2



-1.32375*10^-14



2.1*10^4




Now the parallelized solve loop requires about 10 seconds on my Quad Core Haswell CPU:



ParallelEvaluate[Off[General::munfl]];
heatdata = Developer`ToPackedArray[
     ParallelTable[
      Block[{h0, h},
       h0 = -0.01;
       Developer`ToPackedArray[
        Table[
         h0 = h /. FindRoot[h == Fnew[h, b, g], {h, h0}],
         {b, blist}]
        ]
       ],
      {g, glist}]
     ]; // AbsoluteTiming // First



10.072




Memory considerations



You can also see that a limited amount of RAM is not an issue here; the exhaustion must have been caused by excessive memoization. For the timing, it is crucial how information is stored and retrieved.
Storing computed values in a packed array for later retrieval is significantly more efficient than memoization. Memoization into DownValues uses a complex data structure (essentially a hash table) at its back end, and this data structure has a certain overhead. In contrast, a packed array is basically a contiguous block of memory, accompanied by a few bytes of meta information (array dimensions and maybe some row pointers). Moreover, computation with data stored in packed arrays can take advantage of vectorization, which is most crucially employed in the following line:



explist = Exp[(b n h) mlist + (b n g/2) m2list + logiotalist + b n (-h + g/2)];
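
To make the storage difference concrete, here is an editorial comparison (not part of the original answer; ByteCount, Developer`ToPackedArray and DownValues are standard functions) of the footprint of the same 10^5 machine reals held in a packed array versus memoized into DownValues:

vals = RandomReal[{0, 1}, 100000];
packed = Developer`ToPackedArray[vals];
ByteCount[packed] (* roughly 8 bytes per element plus a small header *)

ClearAll[memo];
Do[memo[i] = vals[[i]], {i, Length[vals]}];
ByteCount[DownValues[memo]] (* considerably larger per stored value *)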


Remark on precision



Finally, I have to note that numerical underflow occurs in the course of the computation. This is probably caused by calling Exp with negative numbers of very large absolute value. I decided to turn off the warning message, but this may lead to a significant loss of precision, so use with care. If one wants to do it correctly, one should investigate this further and apply, e.g., Clip or Threshold.
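
One possible way to avoid the underflow warnings, sketched here as an assumption rather than taken from the answer, is to clip the argument of Exp to the machine range before exponentiating; Log[$MinMachineNumber] is roughly -708, and values below that threshold would underflow in machine arithmetic anyway:

(* hypothetical helper, not from the original answer *)
safeExp[x_] := Exp[Clip[x, {Log[$MinMachineNumber], Log[$MaxMachineNumber]}]];

(* inside Fnew one would then write, for example:
   explist = safeExp[(b n h) mlist + (b n g/2) m2list + logiotalist + b n (-h + g/2)]; *)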






  • Thanks for this very helpful and informative answer! – Three Diag May 19 at 23:07

  • You're welcome! – Henrik Schumacher May 20 at 6:40










