Artificial Intelligence

What is Artificial Intelligence (AI)

Artificial Intelligence is a branch of computer science that enables computers and machines to mimic the perception, learning, problem-solving, and decision-making of the human mind. It uses algorithms and mathematical models to make computer systems behave intelligently.

Artificial Intelligence involves the simulation of human intelligence processes by machines and specialized computer systems. These processes include learning, reasoning, and self-correction.

History of Artificial Intelligence

Artificial Intelligence (AI) has been studied for decades and remains one of the most fascinating and elusive branches of Computer Science. The term Artificial Intelligence (AI) was coined by the American computer scientist and cognitive scientist John McCarthy in 1956. He also invented the programming language Lisp, the second-oldest high-level programming language still in use today.

Other pioneers and early precursors of Artificial Intelligence include Alan Turing, Wolfgang von Kempelen, Jacques de Vaucanson, and Semyon Korsakov.

Artificial intelligence was born in the 1950s when a handful of pioneers from the nascent field of computer science started asking whether computers could be made to “think”—a question whose ramifications we’re still exploring today. A concise definition of the field would be as follows: the effort to automate intellectual tasks normally performed by humans. As such, AI is a general field that encompasses machine learning and deep learning but also includes many more approaches that don’t involve any learning. Early chess programs, for instance, only involved hardcoded rules crafted by programmers and didn’t qualify as machine learning. For a fairly long time, many experts believed that human-level artificial intelligence could be achieved by having programmers handcraft a sufficiently large set of explicit rules for manipulating knowledge. This approach is known as symbolic AI, and it was the dominant paradigm in AI from the 1950s to the late 1980s. It reached its peak popularity during the expert systems boom of the 1980s.

AI Process Loop

  • Observe
  • Plan
  • Optimize
  • Action
  • Learn and adapt
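
The loop above can be read as a simple agent cycle. Below is a minimal, illustrative Python sketch of that cycle; the `Environment` class and the `plan`/`optimize` helpers are hypothetical stand-ins invented for this example, not part of any real framework.

```python
# Minimal sketch of the observe -> plan -> optimize -> act -> learn loop.
# Environment, plan, and optimize are hypothetical stand-ins used only
# to illustrate the cycle described above.

class Environment:
    def __init__(self):
        self.state = 3

    def observe(self):
        return self.state

    def apply(self, action):
        self.state += action           # the action changes the world
        return -abs(self.state)        # reward: stay close to zero


def plan(observation):
    # Plan: enumerate candidate actions for the current observation.
    return [-1, 0, 1]


def optimize(observation, candidates):
    # Optimize: pick the candidate expected to do best.
    return min(candidates, key=lambda a: abs(observation + a))


def run(steps=5):
    env = Environment()
    experience = []                    # Learn and adapt: remember outcomes
    for _ in range(steps):
        obs = env.observe()            # Observe
        candidates = plan(obs)         # Plan
        action = optimize(obs, candidates)  # Optimize
        reward = env.apply(action)     # Action
        experience.append((obs, action, reward))
    return experience


print(run())
```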

Evolution of Artificial Intelligence

  • Narrow intelligence: Machine intelligence that equals or exceeds human intelligence or efficiency at a specific task. Examples include MatBot, IBM Watson, Siri, and Alexa.
  • General intelligence: A machine with the ability to apply intelligence to any problem area, rather than just one specific problem.
  • Superintelligence: An intellect that is much smarter than the best human brains in practically every field, including general wisdom, social skills, and scientific creativity.

Branches of Artificial Intelligence

  • Machine learning
  • Deep learning
  • Natural language processing (NLP)
  • Computer Vision
  • Robotics
  • Expert systems

Machine Learning

Machine learning is about designing algorithms that automatically extract valuable information from data. It is the art and science of getting computers to learn from data and act without being explicitly programmed for every task. Many researchers think machine learning is the best way to make progress towards human-level AI.

Machine learning is a subfield of computer science that evolved from the study of pattern recognition and computational learning theory in Artificial Intelligence (AI).

Machine Learning Categories
  • Supervised learning
    • Regression
    • Classification
  • Unsupervised learning
    • Clustering
    • Anomaly detection
    • Dimensionality reduction
  • Reinforcement learning
    • Q-learning
    • Markov decision process
    • Temporal difference methods
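
As a concrete illustration of the first two categories, here is a minimal Python sketch using scikit-learn (an assumption: the library is not named in this article, though it ships with the Anaconda distribution listed under tools below). The toy data and model choices are illustrative only.

```python
# Supervised vs. unsupervised learning on toy data (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Four samples with two features each.
X = np.array([[1.0, 1.2], [0.9, 1.1], [3.0, 3.2], [3.1, 2.9]])
y = np.array([0, 0, 1, 1])                      # labels known -> supervised

# Supervised learning (classification): learn a mapping from X to y.
clf = LogisticRegression().fit(X, y)
print(clf.predict([[1.0, 1.0], [3.0, 3.0]]))    # expected: [0 1]

# Unsupervised learning (clustering): no labels, group similar samples.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)                               # two groups discovered
```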

Deep Learning

Deep learning is a subfield of machine learning: a new approach to learning representations from data that emphasizes learning successive layers of increasingly meaningful representations.

Deep learning uses algorithms inspired by the structure and function of the brain, known as artificial neural networks.
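
To make the idea of "successive layers of representations" concrete, here is a minimal sketch of a two-layer neural network learning the XOR function with plain NumPy. The layer sizes, learning rate, and iteration count are arbitrary illustrative choices.

```python
# A tiny two-layer neural network (multilayer perceptron) learning XOR.
# Each layer transforms its input into a new, more useful representation.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases for two layers (sizes chosen arbitrarily for the demo).
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # Forward pass: each layer produces a new representation of the input.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (binary cross-entropy loss), propagated layer by layer.
    d_out = out - y
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # predictions should approach [0, 1, 1, 0]
```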

Applications of Artificial Intelligence (AI)

  • Online advertising
  • Medical diagnosis
  • Natural language processing
  • Face detection
  • Self-driving cars

Skills Needed to Study Artificial Intelligence

  • Math
  • Data science
  • Data mining
  • Data analysis
  • Hacking skills

Math Topics Used in AI

  • Basic numerical processes
  • Statistics
  • Linear algebra
  • Probability
  • Analytic geometry
  • Matrices
  • Vector calculus
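
A quick taste of a few of these topics using Python and NumPy; the numbers are illustrative only.

```python
# Touching several topics from the list above: linear algebra
# (vectors, matrices, dot products), statistics, and probability.
import numpy as np

v = np.array([1.0, 2.0, 3.0])              # a vector
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])            # a 2x3 matrix

print(A @ v)                               # matrix-vector product -> [7. 5.]
print(v @ v)                               # dot product -> 14.0

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
print(data.mean(), data.std())             # statistics -> 5.0 2.0

# Probability: relative frequency of heads in simulated coin flips.
flips = np.random.default_rng(0).integers(0, 2, size=1000)
print(flips.mean())                        # close to 0.5
```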

Tools Used by AI Experts

  • Anaconda
  • Orange 3
  • Python programming language
  • R programming language
  • The command line (CMD)
  • JupyterLab
  • C programming language
  • Jupyter Notebook
  • Spyder
  • JavaScript programming language

Advantages of Artificial Intelligence (AI)

  • Facilitates faster, smarter decision-making.
  • Reduces human error.
  • Helps in detecting criminal activity.
  • Enables accurate predictions and diagnoses.
  • Helps in information gathering.
  • Can work 24/7.
  • Enables advanced capabilities such as deep learning.

Disadvantages of Artificial Intelligence (AI)

  • Can cause unemployment.
  • Can lead to a lack of creativity.
  • High cost of implementation, installation, and running.
  • Needs constant power.
  • Restricted to its programming.
  • Performs relatively few tasks.
  • Has no emotions.
  • Reduces human interaction.
  • Requires expertise to set up.

Companies Using Artificial Intelligence (AI)

  • IBM
  • Microsoft
  • Google
  • Amazon
  • Facebook
  • Baidu
  • OpenAI
  • Tesla
  • Nvidia
  • Element AI
  • Matcite Solution Ltd.
