Why is my read in of data taking so long?
I'm just trying to read in some data before I do some calculations. The data set is over 200,000 values that look like this:

Here is the code I am using:

ClearAll["Global`*"];
Hrank = Flatten[Import["C:\\Projects\\Points_Analysis\\H_Rank.txt", "Table"]];
HighRank = Cases[Hrank,
  x_?NumericQ /;
   x <= Mean[Hrank] + 3*StandardDeviation[Hrank] &&
    x > Mean[Hrank] - 3*StandardDeviation[Hrank]];

It's just determining the values that lie within 3 sigma of the mean, but the calculation has been running for hours (6 so far!) and doesn't seem to want to end. Could someone please tell me what is going wrong?

probability-or-statistics import data
asked Jul 26 at 12:02 by Angus
1 Answer
Your code is recalculating Mean[Hrank] and StandardDeviation[Hrank] with each comparison, making it extremely slow. Store them separately, and the calculation becomes much faster. I simulated an Hrank vector using:

Hrank = RandomInteger[{0, 1000}, 200000];
And then did:

mean = Mean[Hrank];
stddev = StandardDeviation[Hrank];
HighRank =
  Cases[Hrank,
   x_?NumericQ /;
    x <= mean + 3*stddev && x > mean - 3*stddev]; // AbsoluteTiming

All as one cell, so that it would automatically update if Hrank changed. This gave a timing (via AbsoluteTiming) of 3.421 seconds on my system for 200,000 elements. Using a smaller sample, it appears that this gives exactly the same results as the original code, since Hrank is never altered during the computation.
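The speedup comes from hoisting the invariant statistics out of the per-element test. As a rough cross-language sketch (plain Python, not part of the original answer), the same pitfall and fix look like this:

```python
import random
import statistics

random.seed(0)
data = [random.randint(0, 1000) for _ in range(2000)]  # small, for illustration

def filter_slow(xs):
    # Recomputes the mean and standard deviation for EVERY element,
    # turning an O(n) filter into an O(n^2) one -- the same mistake
    # as calling Mean[Hrank] inside the Cases test.
    return [x for x in xs
            if statistics.mean(xs) - 3 * statistics.stdev(xs)
            < x <= statistics.mean(xs) + 3 * statistics.stdev(xs)]

def filter_fast(xs):
    # Computes the statistics once, then filters.
    m = statistics.mean(xs)
    s = statistics.stdev(xs)
    return [x for x in xs if m - 3 * s < x <= m + 3 * s]

assert filter_slow(data) == filter_fast(data)
```

Both return the same list; only the asymptotic cost differs, which is why the stored-statistics version finishes in seconds rather than hours.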
It's possible to go faster than this using Compile and some cleverness. Since it's actually more elegant than using Cases, it's presented below. Define a family of test functions testf[m, s]:

testf[m_, s_] :=
  Compile[{{x, _Real}}, x <= m + 3 s && x > m - 3 s,
   RuntimeAttributes -> {Listable}, Parallelization -> True];
Note that testf[m, s] returns a compiled function object which tests whether a real number is within 3 standard deviations (True) or not (False). This is then used with Pick to pick out the final set:

HighRank2 = Pick[Hrank, testf[mean, stddev][Hrank]]; // AbsoluteTiming

It is roughly 200 times faster than the version above using Cases (on my system; probably faster still if you have more than 8 threads available).
@Shadowray provides an even faster version utilizing UnitStep, RealAbs, and numericizing the input list to machine numbers beforehand. Some cursory examination suggests the results match identically in at least most cases, so I'll preserve it here:

threeSigmaFilter[list_] :=
  With[{nlist = N[list]},
   Pick[list,
    UnitStep[
     RealAbs[nlist - Mean[nlist]] - 3 StandardDeviation[nlist]], 0]]

This centers the whole data list on the mean, takes the absolute value of that, and subtracts 3 standard deviations. If the result is less than 0, then UnitStep returns 0, and Pick's third argument tells Pick to keep the corresponding element from the original list. It doesn't seem like Compile can be used to speed this up further. This isn't very surprising: UnitStep and RealAbs (or Abs; there seems to be little performance difference between them) are already Listable and quite fast as built-ins, and there is a time overhead to compiling a function.
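The UnitStep/Pick trick is just mask-based selection: build a 0/1 mask over the whole list in one pass, then keep the elements whose mask entry is 0. A plain-Python analogue (an illustrative sketch, not the original Mathematica code; note it uses a strict inequality at the 3-sigma boundary, so edge values may differ slightly from the Cases version):

```python
import statistics

def three_sigma_filter(xs):
    # Analogue of the UnitStep/Pick version: a mask entry is 0 when
    # |x - mean| - 3*stdev < 0 (UnitStep of a negative number is 0),
    # and only the 0-masked elements are kept, as Pick's third
    # argument specifies.
    m = statistics.mean(xs)
    s = statistics.stdev(xs)
    mask = [0 if abs(x - m) - 3 * s < 0 else 1 for x in xs]
    return [x for x, keep in zip(xs, mask) if keep == 0]

sample = [5] * 100 + [1000]
assert three_sigma_filter(sample) == [5] * 100  # the lone outlier is dropped
```

Separating the mask construction from the selection is what lets the Mathematica version exploit fast Listable built-ins over the whole array at once.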
Comments:

– Shadowray (Jul 26 at 18:33): Can be further optimized to something like: threeSigmaFilter[list_] := With[{nlist = N[list]}, Pick[list, UnitStep[RealAbs[nlist - Mean[nlist]] - 3 StandardDeviation[nlist]], 0]]

– eyorble (Jul 26 at 20:26): @Shadowray Thanks for pointing that out. I was hesitant to numericize the list because it introduces a little bit of inaccuracy, but it is fair to say that it's quite a bit faster. I'm always surprised at how fast UnitStep is too.

– Angus (Jul 27 at 12:12): I know we're not supposed to express thanks, but I just wanted to let both of you know how much I appreciated your solutions!
edited Jul 26 at 20:25, answered Jul 26 at 12:25 by eyorble (6,418 reputation; 1 gold, 11 silver, 30 bronze badges)