


Why did ML only become viable after Nvidia's chips were available?


















I listened to a panel talk featuring two influential Chinese scientists, Wang Gang and Yu Kai, among others.



When asked about the biggest bottleneck in the development of artificial intelligence in the near future (3 to 5 years), Yu Kai, who has a background in the hardware industry, said that hardware would be the essential problem and that we should pay most of our attention to it. He gave us two examples:



  1. In the early days of computing, we compared machines by their chips;

  2. Artificial intelligence, which is very popular these days, would be almost impossible without the power of Nvidia's GPUs.

The fundamental algorithms already existed in the 1980s and 1990s, but artificial intelligence went through three AI winters and did not become practical until we could train models on GPU-boosted mega servers.



Dr. Wang then responded that we should also develop software systems, because we could not build an autonomous car even if we combined all the GPUs and computing power in the world.



Then, as usual, my mind wandered off, and I started thinking: what if those who could operate supercomputers in the 1980s and 1990s had used the then-existing neural network algorithms and trained them on tons of scientific data? Some people at that time could obviously have attempted to build the AI systems we are building now. So why did AI only become a hot topic, and only become practical, decades later? Is it only a matter of hardware, software, and data?










history hardware software-development

asked Jul 7 at 6:14 by Lerner Zhang, edited Jul 8 at 14:32 by Oliver Mason


















  • This question presupposes that AI is only machine learning, which is patently wrong. It has been around for 60+ years, and only the very narrow field of deep learning/neural networks has been accelerated by the currently available hardware. AI has been a hot topic several times, pushed back by having been over-hyped each time. – Oliver Mason, Jul 8 at 10:05










  • @OliverMason Yes. In that context, we narrowed AI down to only machine learning and deep learning. – Lerner Zhang, Jul 8 at 13:37










  • OK, I amended the title accordingly. – Oliver Mason, Jul 8 at 14:33

















3 Answers

























There are many factors behind the boom of the AI industry. What many people miss, though, is that the boom has mostly been in the Machine Learning part of AI. It can be attributed to several fairly simple reasons, together with comparisons to earlier times:




  • Mathematics: The maths behind ML algorithms is pretty simple and has been known for a long time (whether it would work or not was another question). In earlier times it was not possible to implement algorithms that require high-precision numerical calculation on a chip in an acceptable amount of time. Division, one of the basic arithmetic operations, still takes many cycles even on modern processors. Older processors were orders of magnitude slower than modern ones (more than 100x); this bottleneck made it impossible to train sophisticated models on the processors of the day.


  • Precision: Precision in calculations is an important factor in ML algorithms. 32-bit precision arrived in processors in the '80s and was probably commercially widespread by the late '90s (x86), but it was still far slower than current processors. This led scientists to improvise on precision: the most basic Perceptron Learning Algorithm, invented in the 1960s to train a classifier, uses only $1$'s and $0$'s, so it is essentially a binary classifier, and it was run on special-purpose computers (a minimal sketch of its update rule appears after this list). Interestingly, we have now come full circle: Google is using TPUs with 8-16 bit precision to run ML models with great success.


  • Parallelization: The concept of parallelizing matrix operations is nothing new. Only when we started to see Deep Learning as just a set of matrix operations did we realize that it can easily be parallelized on massively parallel GPUs (see the GPU sketch after this list); still, if your ML algorithm is not inherently parallel, it hardly matters whether you use a CPU or a GPU (e.g. RNNs).


  • Data: Probably the biggest cause of the ML boom. The Internet has made it possible to collect huge amounts of data from users and also to make it available to interested parties. Since an ML algorithm is just a function approximator fit to data, data is the single most important ingredient: the more data, the better the performance of the model.


  • Cost: The cost of training an ML model has gone down significantly. Using a supercomputer to train a model might have been possible, but was it worth it? Supercomputers, unlike normal PCs, are tremendously resource hungry in terms of cooling, space, etc. A recent article in MIT Technology Review points out the carbon footprint of training a Deep Learning model (a sub-branch of ML). It is a good indicator of why training on supercomputers would have been infeasible in earlier times (modern processors consume much less power and run much faster).
    I am not sure, but I think earlier supercomputers were specialised in "parallel + very high precision computing" (needed for weather, astronomy, military applications, etc.), and the "very high precision" part is overkill in the machine learning scenario.
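To make the Precision and Parallelization points concrete, here are two minimal sketches in Python. They are illustrative only: the function names, data, and sizes are made up for this answer, and they assume NumPy and PyTorch are installed.

The classic perceptron update rule from the 1960s needs nothing more than comparisons and additions, which is why it could run on the limited hardware of its day:

    import numpy as np

    def train_perceptron(X, y, epochs=10, lr=1.0):
        # Classic rule: nudge the weights only when a point is misclassified.
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            for xi, yi in zip(X, y):              # yi is 0 or 1
                pred = 1 if xi @ w + b > 0 else 0
                w += lr * (yi - pred) * xi        # no change when pred == yi
                b += lr * (yi - pred)
        return w, b

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # toy data: logical AND
    y = np.array([0, 0, 0, 1])
    w, b = train_perceptron(X, y)
    print([1 if xi @ w + b > 0 else 0 for xi in X])  # [0, 0, 0, 1]

A deep-learning layer, by contrast, is essentially one large matrix multiplication, which is exactly the kind of work a GPU parallelizes well. With a framework such as PyTorch the same code runs on CPU or GPU; only the device changes:

    import torch

    # Pick the GPU if one is available, otherwise fall back to the CPU.
    device = "cuda" if torch.cuda.is_available() else "cpu"

    batch = torch.randn(4096, 1024, device=device)   # 4096 input vectors
    weights = torch.randn(1024, 512, device=device)  # one layer's weights
    bias = torch.randn(512, device=device)

    hidden = torch.relu(batch @ weights + bias)      # one forward step
    print(hidden.shape, hidden.device)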

Another important aspect is that nowadays everyone has access to powerful computers. Thus, anyone can build new ML models, re-train pre-existing models, modify models, and so on. This was simply not possible in earlier times.



All these factors have led to a huge surge in interest in ML and have caused the boom we are seeing today. Also check out this question on how we are moving beyond digital processors.






answered Jul 7 at 9:25 by DuttaA, edited Jul 8 at 5:04












GPUs were ideal for the AI boom because:



  • They hit the market at the right time.

    AI has been researched for a long time, almost half a century. However, that was all exploration of how the algorithms would work and look. When Nvidia saw that AI was about to go mainstream, they looked at their GPUs and realized that their huge parallel processing power, combined with relative ease of programming, was ideal for the era that was coming. Many other people realized that too.



  • GPUs are general-purpose accelerators of a sort.

    GPGPU is the concept of using the GPU's parallel processing for general tasks. You can accelerate graphics, or you can make your algorithm utilize the thousands of cores available on a GPU (see the sketch below). That makes the GPU a great target for all kinds of use cases, including AI. Given that GPUs were already available and are not too hard to program, they were the ideal choice for accelerating AI algorithms.
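As a minimal sketch of the GPGPU idea (assuming the CuPy library and an Nvidia GPU; the array and the arithmetic are arbitrary examples), the same array code that NumPy would run on the CPU can be dispatched across the GPU's thousands of cores simply by moving the data there:

    import numpy as np
    import cupy as cp

    x_cpu = np.random.rand(10_000_000).astype(np.float32)
    x_gpu = cp.asarray(x_cpu)              # copy the array into GPU memory

    y_gpu = cp.sqrt(x_gpu) * 2.0 + 1.0     # elementwise work, spread across GPU cores
    y_cpu = cp.asnumpy(y_gpu)              # copy the result back to the host
    print(y_cpu[:5])

Nothing here is specific to graphics or to neural networks; that generality is what made GPUs such a convenient accelerator once ML workloads arrived.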






answered Jul 7 at 7:30 by Aleksandar Kostovic, edited Jul 7 at 16:04























  • Almost half a century? – MartynA, Jul 7 at 15:40










  • @MartynA Indeed it is, thanks for pointing that out! – Aleksandar Kostovic, Jul 7 at 16:03



















Sorry, but Artificial Intelligence hasn't been invented yet. In the FIRST Lego League, the robots aren't able to follow a simple line; in the DARPA Robotics Challenge, the humanoid robots struggle to open a valve; and the Tesla Autopilot isn't recommended for real traffic situations. The only situation in which deep learning works is on PowerPoint slides, where the accuracy of detecting a cat is 100%; in reality, normal image search engines don't find anything.



Let us take a step back: what kind of AI application is available today? Right, none. The only control system available in reality is the ordinary refrigerator, which holds its temperature at 5 degrees, but that has nothing to do with machine learning; it is a thermostat.



The reason why deep learning is everywhere is not that it is a powerful technology for detecting images, but that it is part of the curriculum for teaching humans. Deep learning means that a person learns something about statistics, Python programming, and edge detection algorithms. It is not the computers that become smarter, but the students.



Books about the subject



Even if deep learning itself isn't a very powerful technique for controlling robots, the amount and quality of books about the subject is great. Since 2010, many mainstream publications have appeared that helped introduce Artificial Intelligence to a larger audience. All of them have something about GPU-supported neural networks in the title, and they explain very well what image recognition, motion planning, and speech recognition are. Even a reader who decides not to use machine learning at all and instead builds a robot project with the conventional paradigm will profit from reading these newly created tutorials.






answered Jul 7 at 9:32 by Manuel Rodriguez, edited Jul 7 at 17:50



















  • -1 It's not true that ML has no applications today. In the information security industry, ML is widely used in classifiers for anomalies that could signify an attack (though, to be fair, it's mostly a buzzword). It's also used in computer networking and regularly used in statistics. I have personally used ML for tasks that I could not have done otherwise. Not all ML is about making robots. Anyway, you're focusing on deep learning, but that's not the only kind of ML. It's good for specific things. It's not "the next-gen AI". – forest, Jul 8 at 2:59











  • AI is used in medical screening. – JCRM, Jul 8 at 7:30






  • When the summary of your answer is "AI doesn't yet work" on ai.stackexchange.com... how does this remain positive in upvotes? – Ng Oon-Ee, Jul 8 at 7:34






  • @NgOon-Ee No idea. Out of curiosity I looked through some of this guy's other posts, and they're all just ignorant bashing of AI/ML, peppered with fundamental misconceptions about mathematics (apparently math is useless and was just invented so rich people can count their money!). HNQ hits hard. – forest, Jul 8 at 7:38







  • This answer assumes a very limited definition of AI. AI has been around for 60+ years, and there are loads of applications of it in everyday life. – Oliver Mason, Jul 8 at 10:02













    Your Answer








    StackExchange.ready(function()
    var channelOptions =
    tags: "".split(" "),
    id: "658"
    ;
    initTagRenderer("".split(" "), "".split(" "), channelOptions);

    StackExchange.using("externalEditor", function()
    // Have to fire editor after snippets, if snippets enabled
    if (StackExchange.settings.snippets.snippetsEnabled)
    StackExchange.using("snippets", function()
    createEditor();
    );

    else
    createEditor();

    );

    function createEditor()
    StackExchange.prepareEditor(
    heartbeatType: 'answer',
    autoActivateHeartbeat: false,
    convertImagesToLinks: false,
    noModals: true,
    showLowRepImageUploadWarning: true,
    reputationToPostImages: null,
    bindNavPrevention: true,
    postfix: "",
    imageUploader:
    brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
    contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
    allowUrls: true
    ,
    noCode: true, onDemand: true,
    discardSelector: ".discard-answer"
    ,immediatelyShowMarkdownHelp:true
    );



    );













    draft saved

    draft discarded


















    StackExchange.ready(
    function ()
    StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fai.stackexchange.com%2fquestions%2f13233%2fwhy-did-ml-only-become-viable-after-nvidias-chips-were-available%23new-answer', 'question_page');

    );

    Post as a guest















    Required, but never shown

























    3 Answers
    3






    active

    oldest

    votes








    3 Answers
    3






    active

    oldest

    votes









    active

    oldest

    votes






    active

    oldest

    votes









    12












    $begingroup$

    There is a lot of factors for the boom of AI industry. What many people miss though is the boom has mostly been in the Machine Learning part of AI. This can be attributed to various simple reasons along with their comparisons during earlier times:




    • Mathematics: The maths behind ML algorithms are pretty simple and known for a long time (whether it would work or not was not known though). During earlier times it was not possible to implement algorithms which require high precision of numbers, to be calculated on a chip, in an acceptable amount of time. One of the main arithmetic operations division of numbers still takes a lot of cycles in modern processors. Older processors were magnitude times slower than modern processors (more than 100x), this bottleneck made it impossible to train sophisticated models on contemporary processors.


    • Precision: Precision in calculations is an important factor in ML algorithms. 32 bit precision in processor was made in the 80's and was probably commercially available in the late 90's (x86), but it was still hella slow than current processors. This resulted in scientists improvising on the precision part and the most basic Perceptron Learning Algorithm invented in the 1960's to train a classifier uses only $1$'s and $0$'s, so basically a binary classifier. It was run on special computers. Although, it is interesting to note that we have come a full circle and Google is now using TPU's with 8-16 bit accuracy to implement ML models with great success.


    • Parallelization : The concept of parallelization of matrix operations is nothing new. It was only when we started to see Deep Learning as just a set of matrix operations we realized that it can be easily parallelized on massively parallel GPU's, still if your ML algorithm is not inherently parallel then it hardly matters whether you use CPU or GPU (e.g. RNN's).


    • Data: Probably the biggest cause in the ML boom. The Internet has provided opportunities to collect huge amounts of data from users and also make it available to interested parties. Since an ML algorithm is just a function approximator based on data, therefore data is the single most important thing in a ML algorithm. The more the data the better the performance of your model.


    • Cost: The cost of training a ML model has gone down significantly. So using a Supercomputer to train a model might be fine, but was it worth it? Super computers unlike normal PC's are tremendously resource hungry in terms of cooling, space, etc. A recent article on MIT Technology Review points out the carbon footprint of training a Deep Learning model (sub-branch of ML). It is quite a good indicator why it would have been infeasible to train on Supercomputers in earlier times (considering modern processors consume much lesser power and gives higher speeds).
      Although, I am not sure but I think earlier supercomputers were specialised in "parallel+very high precision computing" (required for weather, astronomy, military applications, etc) and the "very high precison part" is overkill in Machine Learning scenario.

    Another important aspect is nowadays everyone has access to powerful computers. Thus, anyone can build new ML models, re-train pre-existing models, modify models, etc. This was quite not possible during earlier times,



    All this factors has led to a huge surge in interest in ML and has caused the boom we are seeing today. Also check out this question on how we are moving beyond digital processors.






    share|improve this answer











    $endgroup$

















      12












      $begingroup$

      There is a lot of factors for the boom of AI industry. What many people miss though is the boom has mostly been in the Machine Learning part of AI. This can be attributed to various simple reasons along with their comparisons during earlier times:




      • Mathematics: The maths behind ML algorithms are pretty simple and known for a long time (whether it would work or not was not known though). During earlier times it was not possible to implement algorithms which require high precision of numbers, to be calculated on a chip, in an acceptable amount of time. One of the main arithmetic operations division of numbers still takes a lot of cycles in modern processors. Older processors were magnitude times slower than modern processors (more than 100x), this bottleneck made it impossible to train sophisticated models on contemporary processors.


      • Precision: Precision in calculations is an important factor in ML algorithms. 32 bit precision in processor was made in the 80's and was probably commercially available in the late 90's (x86), but it was still hella slow than current processors. This resulted in scientists improvising on the precision part and the most basic Perceptron Learning Algorithm invented in the 1960's to train a classifier uses only $1$'s and $0$'s, so basically a binary classifier. It was run on special computers. Although, it is interesting to note that we have come a full circle and Google is now using TPU's with 8-16 bit accuracy to implement ML models with great success.


      • Parallelization : The concept of parallelization of matrix operations is nothing new. It was only when we started to see Deep Learning as just a set of matrix operations we realized that it can be easily parallelized on massively parallel GPU's, still if your ML algorithm is not inherently parallel then it hardly matters whether you use CPU or GPU (e.g. RNN's).


      • Data: Probably the biggest cause in the ML boom. The Internet has provided opportunities to collect huge amounts of data from users and also make it available to interested parties. Since an ML algorithm is just a function approximator based on data, therefore data is the single most important thing in a ML algorithm. The more the data the better the performance of your model.


      • Cost: The cost of training a ML model has gone down significantly. So using a Supercomputer to train a model might be fine, but was it worth it? Super computers unlike normal PC's are tremendously resource hungry in terms of cooling, space, etc. A recent article on MIT Technology Review points out the carbon footprint of training a Deep Learning model (sub-branch of ML). It is quite a good indicator why it would have been infeasible to train on Supercomputers in earlier times (considering modern processors consume much lesser power and gives higher speeds).
        Although, I am not sure but I think earlier supercomputers were specialised in "parallel+very high precision computing" (required for weather, astronomy, military applications, etc) and the "very high precison part" is overkill in Machine Learning scenario.

      Another important aspect is nowadays everyone has access to powerful computers. Thus, anyone can build new ML models, re-train pre-existing models, modify models, etc. This was quite not possible during earlier times,



      All this factors has led to a huge surge in interest in ML and has caused the boom we are seeing today. Also check out this question on how we are moving beyond digital processors.






      share|improve this answer











      $endgroup$















        12












        12








        12





        $begingroup$

        There is a lot of factors for the boom of AI industry. What many people miss though is the boom has mostly been in the Machine Learning part of AI. This can be attributed to various simple reasons along with their comparisons during earlier times:




        • Mathematics: The maths behind ML algorithms are pretty simple and known for a long time (whether it would work or not was not known though). During earlier times it was not possible to implement algorithms which require high precision of numbers, to be calculated on a chip, in an acceptable amount of time. One of the main arithmetic operations division of numbers still takes a lot of cycles in modern processors. Older processors were magnitude times slower than modern processors (more than 100x), this bottleneck made it impossible to train sophisticated models on contemporary processors.


        • Precision: Precision in calculations is an important factor in ML algorithms. 32 bit precision in processor was made in the 80's and was probably commercially available in the late 90's (x86), but it was still hella slow than current processors. This resulted in scientists improvising on the precision part and the most basic Perceptron Learning Algorithm invented in the 1960's to train a classifier uses only $1$'s and $0$'s, so basically a binary classifier. It was run on special computers. Although, it is interesting to note that we have come a full circle and Google is now using TPU's with 8-16 bit accuracy to implement ML models with great success.


        • Parallelization : The concept of parallelization of matrix operations is nothing new. It was only when we started to see Deep Learning as just a set of matrix operations we realized that it can be easily parallelized on massively parallel GPU's, still if your ML algorithm is not inherently parallel then it hardly matters whether you use CPU or GPU (e.g. RNN's).


        • Data: Probably the biggest cause in the ML boom. The Internet has provided opportunities to collect huge amounts of data from users and also make it available to interested parties. Since an ML algorithm is just a function approximator based on data, therefore data is the single most important thing in a ML algorithm. The more the data the better the performance of your model.


        • Cost: The cost of training a ML model has gone down significantly. So using a Supercomputer to train a model might be fine, but was it worth it? Super computers unlike normal PC's are tremendously resource hungry in terms of cooling, space, etc. A recent article on MIT Technology Review points out the carbon footprint of training a Deep Learning model (sub-branch of ML). It is quite a good indicator why it would have been infeasible to train on Supercomputers in earlier times (considering modern processors consume much lesser power and gives higher speeds).
          Although, I am not sure but I think earlier supercomputers were specialised in "parallel+very high precision computing" (required for weather, astronomy, military applications, etc) and the "very high precison part" is overkill in Machine Learning scenario.

        Another important aspect is nowadays everyone has access to powerful computers. Thus, anyone can build new ML models, re-train pre-existing models, modify models, etc. This was quite not possible during earlier times,



        All this factors has led to a huge surge in interest in ML and has caused the boom we are seeing today. Also check out this question on how we are moving beyond digital processors.






        share|improve this answer











        $endgroup$



        There is a lot of factors for the boom of AI industry. What many people miss though is the boom has mostly been in the Machine Learning part of AI. This can be attributed to various simple reasons along with their comparisons during earlier times:




        • Mathematics: The maths behind ML algorithms are pretty simple and known for a long time (whether it would work or not was not known though). During earlier times it was not possible to implement algorithms which require high precision of numbers, to be calculated on a chip, in an acceptable amount of time. One of the main arithmetic operations division of numbers still takes a lot of cycles in modern processors. Older processors were magnitude times slower than modern processors (more than 100x), this bottleneck made it impossible to train sophisticated models on contemporary processors.


        • Precision: Precision in calculations is an important factor in ML algorithms. 32 bit precision in processor was made in the 80's and was probably commercially available in the late 90's (x86), but it was still hella slow than current processors. This resulted in scientists improvising on the precision part and the most basic Perceptron Learning Algorithm invented in the 1960's to train a classifier uses only $1$'s and $0$'s, so basically a binary classifier. It was run on special computers. Although, it is interesting to note that we have come a full circle and Google is now using TPU's with 8-16 bit accuracy to implement ML models with great success.


        • Parallelization : The concept of parallelization of matrix operations is nothing new. It was only when we started to see Deep Learning as just a set of matrix operations we realized that it can be easily parallelized on massively parallel GPU's, still if your ML algorithm is not inherently parallel then it hardly matters whether you use CPU or GPU (e.g. RNN's).


        • Data: Probably the biggest cause in the ML boom. The Internet has provided opportunities to collect huge amounts of data from users and also make it available to interested parties. Since an ML algorithm is just a function approximator based on data, therefore data is the single most important thing in a ML algorithm. The more the data the better the performance of your model.


        • Cost: The cost of training a ML model has gone down significantly. So using a Supercomputer to train a model might be fine, but was it worth it? Super computers unlike normal PC's are tremendously resource hungry in terms of cooling, space, etc. A recent article on MIT Technology Review points out the carbon footprint of training a Deep Learning model (sub-branch of ML). It is quite a good indicator why it would have been infeasible to train on Supercomputers in earlier times (considering modern processors consume much lesser power and gives higher speeds).
          Although, I am not sure but I think earlier supercomputers were specialised in "parallel+very high precision computing" (required for weather, astronomy, military applications, etc) and the "very high precison part" is overkill in Machine Learning scenario.

        Another important aspect is nowadays everyone has access to powerful computers. Thus, anyone can build new ML models, re-train pre-existing models, modify models, etc. This was quite not possible during earlier times,



        All this factors has led to a huge surge in interest in ML and has caused the boom we are seeing today. Also check out this question on how we are moving beyond digital processors.







        share|improve this answer














        share|improve this answer



        share|improve this answer








        edited Jul 8 at 5:04

























        answered Jul 7 at 9:25









        DuttaADuttaA

        2,8992 gold badges10 silver badges34 bronze badges




        2,8992 gold badges10 silver badges34 bronze badges























            2












            $begingroup$

            GPUs were ideal for AI boom becouse



            • They hit the right time

            AI has been researched for a LONG time. Almost half a century. However, that was all exploration of how would algorithms work and look. When NV saw that the AI is about to go mainstream, they looked at their GPUs and realized that the huge parellel processing power, with relative ease of programing, is ideal for the era that is to be. Many other people realized that too.



            • GPUs are sort of general purpose accelerators

            GPGPU is a concept of using GPU parallel processing for general tasks. You can accelerate graphics, or make your algorithm utalize 1000s of cores available on GPU. That makes GPU awesome target for all kinds of use cases including AI. Given that they are already available and are not too hard to program, its ideal choice for accelerating AI algorithms.






            share|improve this answer











            $endgroup$












            • $begingroup$
              Almost half a century?
              $endgroup$
              – MartynA
              Jul 7 at 15:40










            • $begingroup$
              @MartynA indeed it is thanks for pointing out!
              $endgroup$
              – Aleksandar Kostovic
              Jul 7 at 16:03















            2












            $begingroup$

            GPUs were ideal for AI boom becouse



            • They hit the right time

            AI has been researched for a LONG time. Almost half a century. However, that was all exploration of how would algorithms work and look. When NV saw that the AI is about to go mainstream, they looked at their GPUs and realized that the huge parellel processing power, with relative ease of programing, is ideal for the era that is to be. Many other people realized that too.



            • GPUs are sort of general purpose accelerators

            GPGPU is a concept of using GPU parallel processing for general tasks. You can accelerate graphics, or make your algorithm utalize 1000s of cores available on GPU. That makes GPU awesome target for all kinds of use cases including AI. Given that they are already available and are not too hard to program, its ideal choice for accelerating AI algorithms.






            share|improve this answer











            $endgroup$












            • $begingroup$
              Almost half a century?
              $endgroup$
              – MartynA
              Jul 7 at 15:40










            • $begingroup$
              @MartynA indeed it is thanks for pointing out!
              $endgroup$
              – Aleksandar Kostovic
              Jul 7 at 16:03













            2












            2








            2





            $begingroup$

            GPUs were ideal for AI boom becouse



            • They hit the right time

            AI has been researched for a LONG time. Almost half a century. However, that was all exploration of how would algorithms work and look. When NV saw that the AI is about to go mainstream, they looked at their GPUs and realized that the huge parellel processing power, with relative ease of programing, is ideal for the era that is to be. Many other people realized that too.



            • GPUs are sort of general purpose accelerators

            GPGPU is a concept of using GPU parallel processing for general tasks. You can accelerate graphics, or make your algorithm utalize 1000s of cores available on GPU. That makes GPU awesome target for all kinds of use cases including AI. Given that they are already available and are not too hard to program, its ideal choice for accelerating AI algorithms.






            share|improve this answer











            $endgroup$



            GPUs were ideal for AI boom becouse



            • They hit the right time

            AI has been researched for a LONG time. Almost half a century. However, that was all exploration of how would algorithms work and look. When NV saw that the AI is about to go mainstream, they looked at their GPUs and realized that the huge parellel processing power, with relative ease of programing, is ideal for the era that is to be. Many other people realized that too.



            • GPUs are sort of general purpose accelerators

            GPGPU is a concept of using GPU parallel processing for general tasks. You can accelerate graphics, or make your algorithm utalize 1000s of cores available on GPU. That makes GPU awesome target for all kinds of use cases including AI. Given that they are already available and are not too hard to program, its ideal choice for accelerating AI algorithms.







            share|improve this answer














            share|improve this answer



            share|improve this answer








            edited Jul 7 at 16:04

























            answered Jul 7 at 7:30









            Aleksandar KostovicAleksandar Kostovic

            234 bronze badges




            234 bronze badges











            • $begingroup$
              Almost half a century?
              $endgroup$
              – MartynA
              Jul 7 at 15:40










            • $begingroup$
              @MartynA indeed it is thanks for pointing out!
              $endgroup$
              – Aleksandar Kostovic
              Jul 7 at 16:03
















            • $begingroup$
              Almost half a century?
              $endgroup$
              – MartynA
              Jul 7 at 15:40










            • $begingroup$
              @MartynA indeed it is thanks for pointing out!
              $endgroup$
              – Aleksandar Kostovic
              Jul 7 at 16:03















            $begingroup$
            Almost half a century?
            $endgroup$
            – MartynA
            Jul 7 at 15:40




            $begingroup$
            Almost half a century?
            $endgroup$
            – MartynA
            Jul 7 at 15:40












            $begingroup$
            @MartynA indeed it is thanks for pointing out!
            $endgroup$
            – Aleksandar Kostovic
            Jul 7 at 16:03




            $begingroup$
            @MartynA indeed it is thanks for pointing out!
            $endgroup$
            – Aleksandar Kostovic
            Jul 7 at 16:03











            0












            $begingroup$

            Sorry, but Artificial Intelligence wasn't invented yet. In the FIRST Lego League, the robots aren't able to drive on a simple line, in the DARPA robotics challenge the humanoid robots struggle to open the valve, and the Tesla Autopilot isn't recommended for real traffic situations. The only situation in which deeplearning works is on the powerpoint slides in which the accuracy to detect a cat is 100%, but in reality the normal image search engines doesn't find anything.



            Let us go a step backward: what kind of AI application is available today? Right, nothing. The only control system which is available in reality is the normal refrigerator which holds the temperature at 5 degree, but this has nothing to do with machine learning but with a thermostat.



            The reason why Deeplearning is available everywhere is not because it's a powerful technology for detecting images, but because it's part of the curriculum to teach humans. Deeplearning means, that the human should learn something about statistics, python programming and edge detection algorithm. Not computers will become smarter but students.



            Books about the subject



            Even if Deeplearning itself isn't a very powerful technique to control robots, the amount and the quality of books about the subject is great. Since the year 2010 lots of mainstream publications were published which helped to introduce Artificial Intelligence into a larger audience. All of them have something with GPU supported neural networks in the title and they are explaining very well what image recognition, motion planning and speech recognition is. Even if the readers decides not using machine learning at all but realize a robot project with the conventional paradigm he will profit from reading the newly created tutorials.






            share|improve this answer











            $endgroup$








            • 3




              $begingroup$
              -1 It's not true that ML has no applications today. In the information security industry, ML is being widely used as classifiers for anomalies that could signify an attack (though to be fair, it's mostly a buzzword). It's also used in computer networking and regularly used in statistics. I have personally used ML for tasks that I could not have done otherwise. Not all ML is about making robots. Anyway, you're focusing on deep learning but that's the only kind of ML. It's good for specific things. It's not "the next gen AI".
              $endgroup$
              – forest
              Jul 8 at 2:59











            • $begingroup$
              AI is used in medical screening.
              $endgroup$
              – JCRM
              Jul 8 at 7:30






            • 2




              $begingroup$
              When the summary of your answer is "AI doesn't yet work" on ai.stackexchange.com.... How does this remain positive in upvotes?
              $endgroup$
              – Ng Oon-Ee
              Jul 8 at 7:34






            • 1




              $begingroup$
              @NgOon-Ee No idea. Out of curiosity I looked through some of this guy's other posts, and they're all just ignorant bashing of AI/ML, peppered with fundamental misconceptions about mathematics (apparently math is useless and was just invented so rich people can count their money!). HNQ hits hard.
              $endgroup$
              – forest
              Jul 8 at 7:38







            • 1




              $begingroup$
              This answer assumes a very limited definition of AI. AI has been around for about 60+ years, and there a loads of applications of it used in everyday life.
              $endgroup$
              – Oliver Mason
              Jul 8 at 10:02















            0












            $begingroup$

            Sorry, but Artificial Intelligence wasn't invented yet. In the FIRST Lego League, the robots aren't able to drive on a simple line, in the DARPA robotics challenge the humanoid robots struggle to open the valve, and the Tesla Autopilot isn't recommended for real traffic situations. The only situation in which deeplearning works is on the powerpoint slides in which the accuracy to detect a cat is 100%, but in reality the normal image search engines doesn't find anything.



            Let us go a step backward: what kind of AI application is available today? Right, nothing. The only control system which is available in reality is the normal refrigerator which holds the temperature at 5 degree, but this has nothing to do with machine learning but with a thermostat.



            The reason why Deeplearning is available everywhere is not because it's a powerful technology for detecting images, but because it's part of the curriculum to teach humans. Deeplearning means, that the human should learn something about statistics, python programming and edge detection algorithm. Not computers will become smarter but students.



            Books about the subject



            Even if Deeplearning itself isn't a very powerful technique to control robots, the amount and the quality of books about the subject is great. Since the year 2010 lots of mainstream publications were published which helped to introduce Artificial Intelligence into a larger audience. All of them have something with GPU supported neural networks in the title and they are explaining very well what image recognition, motion planning and speech recognition is. Even if the readers decides not using machine learning at all but realize a robot project with the conventional paradigm he will profit from reading the newly created tutorials.






            share|improve this answer











            $endgroup$








            • 3




              $begingroup$
              -1 It's not true that ML has no applications today. In the information security industry, ML is being widely used as classifiers for anomalies that could signify an attack (though to be fair, it's mostly a buzzword). It's also used in computer networking and regularly used in statistics. I have personally used ML for tasks that I could not have done otherwise. Not all ML is about making robots. Anyway, you're focusing on deep learning but that's the only kind of ML. It's good for specific things. It's not "the next gen AI".
              $endgroup$
              – forest
              Jul 8 at 2:59











            • $begingroup$
              AI is used in medical screening.
              $endgroup$
              – JCRM
              Jul 8 at 7:30






            • 2




              $begingroup$
              When the summary of your answer is "AI doesn't yet work" on ai.stackexchange.com.... How does this remain positive in upvotes?
              $endgroup$
              – Ng Oon-Ee
              Jul 8 at 7:34






            • 1




              $begingroup$
              @NgOon-Ee No idea. Out of curiosity I looked through some of this guy's other posts, and they're all just ignorant bashing of AI/ML, peppered with fundamental misconceptions about mathematics (apparently math is useless and was just invented so rich people can count their money!). HNQ hits hard.
              $endgroup$
              – forest
              Jul 8 at 7:38







            • 1




              $begingroup$
              This answer assumes a very limited definition of AI. AI has been around for about 60+ years, and there a loads of applications of it used in everyday life.
              $endgroup$
              – Oliver Mason
              Jul 8 at 10:02













            0












            0








            0





            $begingroup$

            Sorry, but Artificial Intelligence wasn't invented yet. In the FIRST Lego League, the robots aren't able to drive on a simple line, in the DARPA robotics challenge the humanoid robots struggle to open the valve, and the Tesla Autopilot isn't recommended for real traffic situations. The only situation in which deeplearning works is on the powerpoint slides in which the accuracy to detect a cat is 100%, but in reality the normal image search engines doesn't find anything.



            Let us go a step backward: what kind of AI application is available today? Right, nothing. The only control system which is available in reality is the normal refrigerator which holds the temperature at 5 degree, but this has nothing to do with machine learning but with a thermostat.



            The reason why Deeplearning is available everywhere is not because it's a powerful technology for detecting images, but because it's part of the curriculum to teach humans. Deeplearning means, that the human should learn something about statistics, python programming and edge detection algorithm. Not computers will become smarter but students.



            Books about the subject



            Even if Deeplearning itself isn't a very powerful technique to control robots, the amount and the quality of books about the subject is great. Since the year 2010 lots of mainstream publications were published which helped to introduce Artificial Intelligence into a larger audience. All of them have something with GPU supported neural networks in the title and they are explaining very well what image recognition, motion planning and speech recognition is. Even if the readers decides not using machine learning at all but realize a robot project with the conventional paradigm he will profit from reading the newly created tutorials.






            share|improve this answer











            $endgroup$











            edited Jul 7 at 17:50

























            answered Jul 7 at 9:32









            Manuel Rodriguez

            1,439 • 1 gold badge • 4 silver badges • 27 bronze badges











            • 3




              $begingroup$
              -1 It's not true that ML has no applications today. In the information security industry, ML is widely used to build classifiers for anomalies that could signify an attack (though, to be fair, it's mostly a buzzword). It's also used in computer networking and regularly in statistics. I have personally used ML for tasks I could not have done otherwise. Not all ML is about making robots. Anyway, you're focusing on deep learning, but that's not the only kind of ML. It's good for specific things; it's not "the next gen AI". (A minimal sketch of such an anomaly classifier follows after this comment thread.)
              $endgroup$
              – forest
              Jul 8 at 2:59











            • $begingroup$
              AI is used in medical screening.
              $endgroup$
              – JCRM
              Jul 8 at 7:30






            • 2




              $begingroup$
              When the summary of your answer is "AI doesn't yet work" on ai.stackexchange.com... how does this remain positively upvoted?
              $endgroup$
              – Ng Oon-Ee
              Jul 8 at 7:34






            • 1




              $begingroup$
              @NgOon-Ee No idea. Out of curiosity I looked through some of this guy's other posts, and they're all just ignorant bashing of AI/ML, peppered with fundamental misconceptions about mathematics (apparently math is useless and was just invented so rich people can count their money!). HNQ hits hard.
              $endgroup$
              – forest
              Jul 8 at 7:38







            • 1




              $begingroup$
              This answer assumes a very limited definition of AI. AI has been around for 60+ years, and there are loads of applications of it in everyday life.
              $endgroup$
              – Oliver Mason
              Jul 8 at 10:02
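            Picking up forest's point about anomaly classifiers in security monitoring, here is a hedged sketch of what such a detector can look like, using scikit-learn's IsolationForest on made-up connection features (bytes transferred and request rate). The features, values and contamination level are invented for illustration and are not drawn from the comment.

                # Illustrative only: an unsupervised anomaly detector over synthetic traffic features.
                import numpy as np
                from sklearn.ensemble import IsolationForest

                rng = np.random.default_rng(0)
                # Normal traffic: moderate byte counts and request rates (synthetic data).
                normal = rng.normal(loc=[500.0, 10.0], scale=[50.0, 2.0], size=(200, 2))
                # A few suspicious bursts: huge transfers at very high request rates.
                attacks = np.array([[5000.0, 200.0], [4500.0, 180.0], [6000.0, 250.0]])

                X = np.vstack([normal, attacks])
                detector = IsolationForest(contamination=0.02, random_state=0).fit(X)

                # predict() returns +1 for inliers and -1 for flagged anomalies.
                labels = detector.predict(X)
                print("flagged rows:", np.where(labels == -1)[0])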











