

Why is my read in of data taking so long?


I'm just trying to read in some data before I do some calculations. The data set is over 200,000 values that look like this:




[sample data omitted in the original post]




Here is the code I am using:



ClearAll["Global`*"];

Hrank = Flatten[Import["C:\\Projects\\Points_Analysis\\H_Rank.txt", "Table"]];
HighRank = Cases[Hrank,
   x_?NumericQ /; x <= Mean[Hrank] + 3*StandardDeviation[Hrank] &&
     x > Mean[Hrank] - 3*StandardDeviation[Hrank]];


It's just determining the values that lie within 3 sigma of the mean, but the calculation is taking hours (so far 6!) and doesn't seem to want to end. Could someone please tell me what is going wrong?










Tags: probability-or-statistics, import, data






      asked Jul 26 at 12:02









      Angus























          1 Answer



















          Your code recalculates Mean[Hrank] and StandardDeviation[Hrank] for every element it tests, which is what makes it so slow. Compute them once, store the results, and the calculation becomes much faster. I simulated an Hrank vector using:



          Hrank = RandomInteger[{0, 1000}, 200000];


          And then did:



          mean = Mean[Hrank];
          stddev = StandardDeviation[Hrank];
          HighRank = Cases[Hrank,
             x_?NumericQ /;
              x <= mean + 3*stddev && x > mean - 3*stddev]; // AbsoluteTiming


          All as one cell, so that it would automatically update if Hrank changed. This gave a timing (via AbsoluteTiming) of 3.421 seconds on my system for 200,000 elements. Checking on a smaller sample, this gives exactly the same results as the original code, since Hrank is never altered along the way.
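The pitfall is language-independent: an aggregate recomputed inside a per-element test turns a linear scan into a quadratic one. As a cross-language illustration (a Python sketch of the same idea, not part of the original answer), hoisting the statistics out of the filter produces the identical result at a fraction of the cost:

```python
import random
import statistics

random.seed(0)
hrank = [random.randint(0, 1000) for _ in range(200)]

# Slow pattern (what the original Cases code does): the mean and standard
# deviation are recomputed for every element tested, so the scan is O(n^2).
slow = [x for x in hrank
        if statistics.mean(hrank) - 3 * statistics.stdev(hrank)
        < x <= statistics.mean(hrank) + 3 * statistics.stdev(hrank)]

# Fast pattern: compute the invariants once, then do a plain linear scan.
m = statistics.mean(hrank)
s = statistics.stdev(hrank)
fast = [x for x in hrank if m - 3 * s < x <= m + 3 * s]

assert fast == slow  # identical output; only the cost changed
```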



          It's possible to go faster still by using Compile and some cleverness. Since the result is arguably more elegant than the Cases version, it's presented below:



          Define a family of test functions (testf[m,s]) below:



          testf[m_, s_] :=
           Compile[{{x, _Real}}, x <= m + 3 s && x > m - 3 s,
            RuntimeAttributes -> {Listable}, Parallelization -> True];


          Note that testf[m, s] returns a compiled function object that tests whether a real number lies within 3 standard deviations of the mean (True) or not (False). This is then used with Pick to pick out the final set:



          HighRank2 = Pick[Hrank, testf[mean, stddev][Hrank]]; // AbsoluteTiming


          It is roughly 200 times faster than the Cases version above (on my system; probably faster still if you have more than 8 threads available).
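The mask-then-Pick structure also ports directly to other languages: build a Boolean mask over the whole list in one pass, then select by it. A minimal Python sketch of the same two-phase idea (the function name is hypothetical; itertools.compress plays the role of Pick):

```python
import statistics
from itertools import compress

def three_sigma_pick(data):
    """Two-phase selection: build a Boolean mask, then pick elements by it."""
    m = statistics.mean(data)
    s = statistics.stdev(data)
    mask = [m - 3 * s < x <= m + 3 * s for x in data]  # analogue of testf[mean, stddev][Hrank]
    return list(compress(data, mask))                  # analogue of Pick[Hrank, mask]

# The lone value 11 sits just over 3 sample standard deviations from the
# mean of this list, so it is dropped while the zeros survive.
sample = [0] * 10 + [11]
assert three_sigma_pick(sample) == [0] * 10
```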



          @Shadowray provides an even faster version using UnitStep and RealAbs, and numericizing the input list to machine numbers beforehand. A cursory examination suggests the results match identically in at least most cases, so I'll preserve it here:



          threeSigmaFilter[list_] :=
           With[{nlist = N[list]},
            Pick[list,
             UnitStep[
              RealAbs[nlist - Mean[nlist]] - 3 StandardDeviation[nlist]], 0]]


          This centers the whole data list on the mean, takes the absolute value, and subtracts 3 standard deviations. Where the result is less than 0, UnitStep returns 0, and Pick's third argument tells Pick to keep the corresponding element from the original list. Compile doesn't seem to speed this up any further, which isn't surprising: UnitStep and RealAbs (or Abs; there's little performance difference between them here) are already Listable and quite fast as built-ins, and compiling a function carries its own overhead.
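The structure of @Shadowray's version, arithmetic comparison over the whole array followed by selection keyed off the 0/1 step values, can be mirrored in plain Python (a sketch for illustration; the function name is mine, not from the answer):

```python
import statistics

def three_sigma_filter(data):
    m = statistics.mean(data)
    s = statistics.stdev(data)
    # UnitStep[t] is 0 for t < 0 and 1 for t >= 0; t < 0 here means
    # |x - mean| < 3*stdev, i.e. the element lies inside the band.
    steps = [0 if abs(x - m) - 3 * s < 0 else 1 for x in data]
    # Pick[list, steps, 0] keeps exactly the elements whose step is 0.
    return [x for x, u in zip(data, steps) if u == 0]

sample = [0] * 10 + [11]
assert three_sigma_filter(sample) == [0] * 10  # the outlier's step value is 1
```

Note that the band here is symmetric and strict (|x - m| < 3s), which can differ from the Cases version exactly at the boundaries, consistent with the answer's "match in at least most cases" caveat.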






          – Shadowray (Jul 26 at 18:33): Can be further optimized to something like: threeSigmaFilter[list_] := With[{nlist = N[list]}, Pick[list, UnitStep[RealAbs[nlist - Mean[nlist]] - 3 StandardDeviation[nlist]], 0]]

          – eyorble (Jul 26 at 20:26): @Shadowray Thanks for pointing that out. I was hesitant to numericize the list because it introduces a little bit of inaccuracy, but it is fair to say that it's quite a bit faster. I'm always surprised at how fast UnitStep is too.

          – Angus (Jul 27 at 12:12): I know we're not supposed to express thanks, but I just wanted to let both of you know how much I appreciated your solutions!














          edited Jul 26 at 20:25

























          answered Jul 26 at 12:25









          eyorble










          • 2




            $begingroup$
            Can be further optimized to something like: threeSigmaFilter[list_] := With[nlist=N[list], Pick[list, UnitStep[RealAbs[nlist-Mean[nlist]] - 3 StandardDeviation[nlist]], 0] ]
            $endgroup$
            – Shadowray
            Jul 26 at 18:33










          • $begingroup$
            @Shadowray Thanks for pointing that out. I was hesitant to numericize the list because it introduces a little bit of inaccuracy, but it is fair to say that it's quite a bit faster. I'm always surprised at how fast UnitStep is too.
            $endgroup$
            – eyorble
            Jul 26 at 20:26










          • $begingroup$
            I know we're not supposed to express thanks, but I just wanted to let both of you know how much I appreciated your solutions!
            $endgroup$
            – Angus
            Jul 27 at 12:12












          • 2




            $begingroup$
            Can be further optimized to something like: threeSigmaFilter[list_] := With[nlist=N[list], Pick[list, UnitStep[RealAbs[nlist-Mean[nlist]] - 3 StandardDeviation[nlist]], 0] ]
            $endgroup$
            – Shadowray
            Jul 26 at 18:33










          • $begingroup$
            @Shadowray Thanks for pointing that out. I was hesitant to numericize the list because it introduces a little bit of inaccuracy, but it is fair to say that it's quite a bit faster. I'm always surprised at how fast UnitStep is too.
            $endgroup$
            – eyorble
            Jul 26 at 20:26










          • $begingroup$
            I know we're not supposed to express thanks, but I just wanted to let both of you know how much I appreciated your solutions!
            $endgroup$
            – Angus
            Jul 27 at 12:12







          2




          2




          $begingroup$
          Can be further optimized to something like: threeSigmaFilter[list_] := With[nlist=N[list], Pick[list, UnitStep[RealAbs[nlist-Mean[nlist]] - 3 StandardDeviation[nlist]], 0] ]
          $endgroup$
          – Shadowray
          Jul 26 at 18:33




          $begingroup$
          Can be further optimized to something like: threeSigmaFilter[list_] := With[nlist=N[list], Pick[list, UnitStep[RealAbs[nlist-Mean[nlist]] - 3 StandardDeviation[nlist]], 0] ]
          $endgroup$
          – Shadowray
          Jul 26 at 18:33












          $begingroup$
          @Shadowray Thanks for pointing that out. I was hesitant to numericize the list because it introduces a little bit of inaccuracy, but it is fair to say that it's quite a bit faster. I'm always surprised at how fast UnitStep is too.
          $endgroup$
          – eyorble
          Jul 26 at 20:26




          $begingroup$
          @Shadowray Thanks for pointing that out. I was hesitant to numericize the list because it introduces a little bit of inaccuracy, but it is fair to say that it's quite a bit faster. I'm always surprised at how fast UnitStep is too.
          $endgroup$
          – eyorble
          Jul 26 at 20:26












          $begingroup$
          I know we're not supposed to express thanks, but I just wanted to let both of you know how much I appreciated your solutions!
          $endgroup$
          – Angus
          Jul 27 at 12:12




          $begingroup$
          I know we're not supposed to express thanks, but I just wanted to let both of you know how much I appreciated your solutions!
          $endgroup$
          – Angus
          Jul 27 at 12:12

















          draft saved

          draft discarded
















































          Thanks for contributing an answer to Mathematica Stack Exchange!


