What sort of mathematical problems are there in AI that people are working on?
I recently got an 18-month postdoc position in a math department. The position comes with relatively light teaching duties and a lot of freedom in the type of research I want to do.
Previously I mostly did research in probability and combinatorics, but I am considering more application-oriented work, e.g., in AI. (There is also a good chance that I will not get a tenure-track position at the end of my current position; learning a bit of AI might open up other career paths.)
What sort of mathematical problems in AI are people working on? From what I have heard, people are studying:
- Deterministic finite automata
- Multi-armed bandit problems
- Monte Carlo tree search
- Community detection
Any other examples?
Tags: research, math
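To make the multi-armed bandit item concrete, here is a minimal epsilon-greedy sketch; the arm probabilities and hyperparameters are invented for illustration:

```python
import random

def epsilon_greedy(true_means, steps=5000, epsilon=0.1, seed=42):
    """Bernoulli bandit: with probability epsilon explore a random arm,
    otherwise pull the arm with the highest estimated value."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = [0] * n_arms          # pulls per arm
    values = [0.0] * n_arms        # running average reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)
        else:
            arm = max(range(n_arms), key=lambda a: values[a])
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts, values

counts, values = epsilon_greedy([0.2, 0.5, 0.8])
# the best arm (index 2) ends up pulled most often
```

The same interface extends to UCB or Thompson sampling by swapping the arm-selection rule, which is where the mathematical analysis (regret bounds) comes in.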
Comment – Manuel Rodriguez (Jun 21 at 10:20): The first two items on the list (deterministic state machines and bandit problems) are a good hint. In contrast, Monte Carlo tree search isn't very common in AI, because probability theory there is the same as ordinary graph theory. Classical mathematics and a bit of Boolean logic are all the modern AI researcher needs; they allow him to describe the world from an objective standpoint.
Comment – drerD (Jun 21 at 11:47): Optimization is probably the most impactful field for AI/ML. Proofs of convergence, as in reinforcement learning, are lacking.
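A toy version of the convergence statements the comment alludes to: gradient descent on the quadratic f(x) = x², whose gradient is 2x, contracts by the factor |1 − 2η| per step. The function and step size here are chosen only for illustration:

```python
def gradient_descent(grad, x0, eta, steps):
    """Plain gradient descent: x <- x - eta * grad(x)."""
    x = x0
    for _ in range(steps):
        x -= eta * grad(x)
    return x

# f(x) = x**2 is L-smooth with L = 2; eta = 0.1 < 2/L gives the
# geometric contraction |x_k| = 0.8**k * |x_0|
x_final = gradient_descent(lambda x: 2.0 * x, x0=1.0, eta=0.1, steps=50)
# |x_final| = 0.8**50, about 1.4e-5
```

Convergence-rate proofs for nonconvex, stochastic settings (as in deep RL) are exactly where such clean guarantees break down.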
asked Jun 21 at 9:37 by ablmf, edited Jun 21 at 20:30 by nbro
3 Answers
In artificial intelligence (sometimes called machine intelligence or computational intelligence), there are several problems that are based on mathematical topics, especially optimization, statistics, probability theory, calculus and linear algebra.
Marcus Hutter has worked on a mathematical theory for artificial general intelligence, called AIXI, which is based on several mathematical and computer science concepts, such as reinforcement learning, probability theory (e.g. Bayes' theorem and related topics), measure theory, algorithmic information theory (e.g. Kolmogorov complexity), optimization, Solomonoff induction, universal Levin search and the theory of computation (e.g. universal Turing machines). His book Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability, a highly technical and mathematical book, describes his theory of optimal Bayesian non-Markov reinforcement learning agents.
There is also the research field called computational learning theory, which is devoted to the design and analysis of machine learning algorithms. More precisely, the field focuses on the rigorous mathematical analysis of machine learning algorithms, using techniques from fields such as probability theory, statistics, optimization, information theory and geometry. Several people have worked on computational learning theory, including Michael Kearns and Vladimir Vapnik. There is also a related field called statistical learning theory.
A lot of research effort is also dedicated to approximations (heuristics) of combinatorial optimization and NP-complete problems, such as ant colony optimization.
There is also some work on AI-completeness, but it has not received much attention (compared to the research areas mentioned above).
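As a small worked example from computational learning theory: for a consistent learner over a finite hypothesis class H, in the realizable PAC setting, m ≥ (1/ε)(ln|H| + ln(1/δ)) samples suffice. The numbers below are arbitrary:

```python
import math

def pac_sample_bound(h_size, epsilon, delta):
    """Samples sufficient for a consistent learner over a finite class H
    to output an epsilon-accurate hypothesis with probability >= 1 - delta
    (realizable case): m >= (ln|H| + ln(1/delta)) / epsilon."""
    return math.ceil((math.log(h_size) + math.log(1.0 / delta)) / epsilon)

m = pac_sample_bound(h_size=1000, epsilon=0.05, delta=0.01)  # → 231
```

Note the logarithmic dependence on |H| and 1/δ versus the linear dependence on 1/ε; VC dimension replaces ln|H| for infinite classes.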
answered Jun 21 at 14:48, edited Jun 21 at 21:15 – nbro
Most of the math work being done in AI that I'm familiar with is already covered in nbro's answer. One thing that I do not believe is covered yet in that answer is proving algorithmic equivalence and/or deriving equivalent algorithms. One of my favourite papers on this is Learning to Predict Independent of Span by Hado van Hasselt and Richard Sutton.
The basic idea is that we may first formulate an algorithm (in math form, for instance some update rules/equations for parameters that we're training) in one way, and then find different update rules/equations (i.e. a different algorithm) for which we can prove that it is equivalent to the first one (i.e. always results in the same output).
A typical case where this is useful is if the first algorithm is easy to understand / appeals to our intuition / is more convenient for convergence proofs or other theoretical analysis, and the second algorithm is more efficient (in terms of computation, memory requirements, etc.).
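A toy instance of the idea (not the algorithm from the paper): the batch mean and the incremental running mean are two different update rules that provably compute the same quantity, and the second needs only O(1) memory:

```python
def batch_mean(xs):
    """Straightforward definition: sum everything, divide once."""
    return sum(xs) / len(xs)

def incremental_mean(xs):
    """Equivalent one-pass update: m_n = m_{n-1} + (x_n - m_{n-1}) / n."""
    m = 0.0
    for n, x in enumerate(xs, start=1):
        m += (x - m) / n
    return m

xs = [1.0, 4.0, 4.0, 7.0]
result = incremental_mean(xs)  # equals batch_mean(xs) == 4.0
```

The equivalence proof is a two-line induction, which mirrors the structure of the paper's argument: derive the convenient form, prove it equal to the definitional one.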
answered Jun 21 at 15:27 – Dennis Soemers
Specifically for the mathematical apparatus of neural networks: random matrix theory. Non-asymptotic random matrix theory has been used in some proofs of convergence of gradient descent for neural networks, and high-dimensional random landscapes, in connection with the Hessian spectrum, are related to the loss surfaces of neural networks.
Topological data analysis is another area of intense research related to ML and AI that has been applied to neural networks.
There has also been some work on the tropical geometry of neural networks.
Homotopy type theory also has connections to AI.
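A quick pure-Python illustration (not from the answer) of the random-matrix facts cited: the eigenvalues of an n×n symmetric Gaussian matrix scaled by 1/√n follow Wigner's semicircle law, so the top eigenvalue concentrates near the spectral edge at 2. Matrix size and iteration count are arbitrary:

```python
import random

def wigner(n, rng):
    """Symmetric matrix with independent N(0, 1) entries, scaled by 1/sqrt(n)."""
    a = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            v = rng.gauss(0.0, 1.0) / n ** 0.5
            a[i][j] = a[j][i] = v
    return a

def top_eigenvalue(a, iters=300, shift=3.0):
    """Power iteration on A + shift*I (the shift keeps the dominant
    eigenvalue positive, so the iteration does not oscillate between the
    +/- spectral edges); returns the Rayleigh-quotient estimate for A."""
    n = len(a)
    v = [1.0 / n ** 0.5] * n
    for _ in range(iters):
        w = [sum(a[i][j] * v[j] for j in range(n)) + shift * v[i] for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    av = [sum(a[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(vi * ai for vi, ai in zip(v, av))

lam = top_eigenvalue(wigner(100, random.Random(0)))
# lam lands close to the semicircle edge at 2
```

The non-asymptotic results mentioned above quantify exactly how fast this concentration kicks in at finite n.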
answered Jun 22 at 7:44 – mirror2image