
Friday, 12 June 2020

What is the History of Artificial Intelligence?

Artificial Intelligence has been a topic of growing prominence in the media and mainstream culture since 2015, as well as in the investment world, where companies that merely mention the term in their business plans have attracted massive amounts of funding.


While the hype around AI may appear sudden to many, the concepts behind advanced artificial intelligence have been around for over a century, and the broader idea of artificial, unnatural beings has been in the minds of humans for thousands of years.

To better understand and appreciate this technology and those who brought it to us, as well as to gain insight into where it will take us, sit back, relax, and join me in an exploration of the history of artificial intelligence.

Since at least the times of ancient Greece, mechanical men and artificial beings have appeared in stories, such as the Greek myths of Hephaestus, the Greek god of smithing, who designed automatic men and other autonomous machines.

Progressing forward toward the Middle Ages, and away from the fables and myths of ancient times, realistic humanoid automatons and other self-operating machines were built by craftsmen from various civilizations. Some of the more prominent examples come from Ismail al-Jazari of the Turkish Artuqid dynasty in 1206 and Leonardo da Vinci in the 1500s.

Al-Jazari designed what is considered to be the first programmable humanoid robot, a boat carrying four mechanical musicians powered by the flow of water, and da Vinci's various mechanical inventions included a knight automaton that could wave its arms and move its mouth.

Moving forward to the 1600s, the brilliant philosophers and mathematicians Thomas Hobbes, Gottfried Leibniz, and René Descartes believed that all rational thought could be made as systematic as algebra or geometry.

This concept was initially birthed by Aristotle in the fourth century BC and is referred to as syllogistic logic, where a conclusion is drawn from two or more propositions. As Thomas Hobbes stated in his book "Leviathan": "When a man reasons, he does nothing else but conceive a total from the addition of parcels, or conceive a remainder from subtraction of one sum from another."

"The logicians teach the same consequences of words, "adding together two names to make an affirmation, "and two affirmations to make a syllogism, "and many syllogisms to make a demonstration."

Leibniz took Hobbes's philosophies a step further and laid the foundations for the language machines communicate in today: binary. His motivation was the realization that mathematical computing processes could be carried out much more easily in a number encoding with fewer digits.
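As a quick illustration of that idea (my own example, not Leibniz's notation), the same quantity can be written using only the digits 0 and 1, and arithmetic on it reduces to a handful of simple rules:

```python
# A minimal sketch of binary representation and arithmetic in Python.
n = 42
print(bin(n))            # '0b101010' -- the same value using only 0s and 1s
print(int("101010", 2))  # 42 -- converting the binary string back to decimal

# Binary addition needs only a few rules (0+0=0, 0+1=1, 1+1=10),
# which is part of what makes it so convenient for machines.
a, b = 0b1010, 0b0110    # 10 and 6 in binary
print(bin(a + b))        # '0b10000' -- 16
```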

Descartes examined the concept of thinking machines and even proposed a test to determine intelligence. In his 1637 book "Discourse on the Method," in which he famously stated the line "I think, therefore I am," Descartes also wrote: "If there were machines that bore a resemblance to our bodies, and imitated our actions as closely as possible, we should still have two very certain means of recognizing that they are not real humans."

"The first is that such a machine "should produce arrangements of words, "as to give an appropriately meaningful answer "to whatever in its presence.

"Secondly, even though some machines "might do things as well as we do them, "or perhaps even better, "they would inevitably fail in others, "which would reveal "that they are not acting from understanding."

Around this same period there was also alchemy, which was more of a pseudoscience, attempting to transform the ordinary into the rare, and even matter into mind. Countless stories of this period also portray the concept, such as the Golem of Jewish folklore, a being created from inanimate matter.

Progressing forward, we see this trope again in stories such as Frankenstein, first published in 1818, with a creature reanimated from dead flesh.

After the height of the first Industrial Revolution in the mid-1800s, when machines began replacing human muscle and the field of modern computing was just beginning, these stories took a turn toward modern sci-fi elements, portraying technology as evolving into human form.

Take, for example, the humanoid robot from the silent film Metropolis.

The field of modern computing was officially born with Charles Babbage's mechanical Analytical Engine in the 1840s. Although it was never built, for a variety of reasons, modern reconstructions of his designs show that they would have worked.

By that measure, Ada Lovelace was the world's first programmer, with her algorithm for calculating Bernoulli numbers on Babbage's machine. Early computers had to be hardcoded to solve problems, and Lovelace, that first programmer, had serious doubts about the feasibility of artificial intelligence.

Nearly 200 years after Descartes, she shared similar sentiments, stating about the Analytical Engine: "It has no pretensions whatsoever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis, but it has no power of anticipating any analytical relations or truths."

"To as Lovelace's Objection. As a side note, be sure to check my video on the history of computing if you want more background knowledge on the evolution of the computing field.

A decade after Babbage's Analytical Engine, in the 1850s, George Boole, an English mathematician and philosopher, revolutionized the field of computing and took the first real steps toward computing-based artificial intelligence.

Boole, like those before him, believed human thinking could be described by laws expressed in mathematics. He took the principles of syllogistic reasoning from Aristotle.

He then expanded, much more deeply than Leibniz had, on the relationship between logic and mathematics, resulting in the birth of Boolean logic, which effectively replaces multiplication with AND and addition with OR, with the output being either true or false.

This abstraction of logic by Boole was the first step in giving computers the ability to reason. As the field of computing evolved, several researchers noticed that binary numbers, one and zero, in conjunction with Boolean logic, true and false, could be used to analyze electrical switching circuits. This is referred to as combinational logic: in other words, logic gates that produce an output based on their inputs.

There are a variety of different types of gates: AND, OR, XOR, NOT, et cetera. As the connections between these gates grew more complex, they led to the design of electronic computers. Combinational logic is the first layer of automata theory, in other words, the study of abstract, self-operating machines.
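To make this concrete, here is a minimal sketch (my own illustration, assuming nothing beyond standard Python) of Boole's correspondence between arithmetic and logic, and of the basic gates just mentioned:

```python
# A minimal sketch of Boolean logic and basic gates using 0/1 values.

def AND(a, b):  # Boole's "multiplication": 1 only when both inputs are 1
    return a & b

def OR(a, b):   # Boole's "addition": 1 when at least one input is 1
    return a | b

def XOR(a, b):  # exclusive or: 1 when the inputs differ
    return a ^ b

def NOT(a):     # inversion: flips 0 to 1 and 1 to 0
    return 1 - a

# Print a small truth table for the two-input gates.
print("a b | AND OR XOR")
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} {b} |  {AND(a, b)}   {OR(a, b)}   {XOR(a, b)}")
```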

As computing evolved, additional layers began to be established, the next one being finite state machines. These machines essentially black-box sets of logic gates and use logic between the black boxes to trigger more complicated events. For an illustrative example of a simple state machine, think of an oven that has three states: off, heating, and idle. In state diagrams, we can illustrate the state transitions and the values that trigger them.

For example, the on and off button presses of the oven, the oven getting too hot, the oven getting too cold, et cetera. Before continuing, I want to point out that this is an extremely simplistic overview of a small subset of automata theory, so do research other sources for a more in-depth look. A minimal code sketch of the oven state machine follows below.
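Here is that oven example as a tiny finite state machine in Python, an illustrative sketch of my own rather than anything from the original video; the state names and triggering events are just the ones described above:

```python
# A minimal finite state machine for the oven example: states off, heating, idle.
# Transitions are triggered by events such as button presses and temperature readings.

TRANSITIONS = {
    ("off", "power_button"):     "heating",  # turning the oven on starts heating
    ("heating", "too_hot"):      "idle",     # reached temperature, stop heating
    ("idle", "too_cold"):        "heating",  # dropped below temperature, heat again
    ("heating", "power_button"): "off",      # turning the oven off from either state
    ("idle", "power_button"):    "off",
}

def step(state, event):
    """Return the next state, or stay in the current one if the event doesn't apply."""
    return TRANSITIONS.get((state, event), state)

state = "off"
for event in ["power_button", "too_hot", "too_cold", "power_button"]:
    state = step(state, event)
    print(f"after {event!r}: {state}")
# after 'power_button': heating
# after 'too_hot': idle
# after 'too_cold': heating
# after 'power_button': off
```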

Finally, the last layer of automata theory in the machines we use today is Turing machines. This layer is based on a mathematical model of computation that Alan Turing proposed in 1936, dubbed the universal Turing machine. Once again, like those before him, Turing broke logic down into a mathematical form, in this case translating it into a machine that reasons through abstract symbol manipulation, much like the symbolic reasoning done in our minds.

As stated earlier, early computing devices were hardcoded to solve problems. Turing's idea with this universal computer was that, instead of deciding what a computer should do when you build it, you design the machine so that it can compute anything computable, so long as it is given precise instructions.

This concept is the basis of modern computing.
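To give a rough feel for the idea (a simplified sketch, not Turing's original formalism), here is a tiny Turing machine simulator in Python. The simulator itself is fixed; what it computes depends entirely on the rule table it is given, which is the essence of the "universal" part. This particular rule table increments a binary number:

```python
# A minimal sketch of a Turing machine in Python: one tape, one head, and a
# table of rules mapping (state, symbol) -> (symbol to write, move, next state).

def run_turing_machine(tape, rules, state="start", blank="_", max_steps=1000):
    """Simulate a one-tape Turing machine until it reaches the 'halt' state."""
    cells = dict(enumerate(tape))        # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

INCREMENT = {
    ("start", "0"): ("0", "R", "start"),   # scan right past the digits
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),   # found the end, go back and add one
    ("carry", "1"): ("0", "L", "carry"),   # 1 plus a carry is 0, carry continues
    ("carry", "0"): ("1", "L", "halt"),    # 0 plus a carry is 1, we're done
    ("carry", "_"): ("1", "L", "halt"),    # carried past the leftmost digit
}

print(run_turing_machine("1011", INCREMENT))  # 1100  (11 + 1 = 12)
print(run_turing_machine("111", INCREMENT))   # 1000  (7 + 1 = 8)
```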

At this point in the 1930s, with the field of modern computing officially born and rapidly evolving, the concept of artificial beings and intelligence based on computing technology began permeating the mainstream society of the time. The first popular display of this was Elektro, the nickname of a humanoid robot built by the Westinghouse Electric Corporation and shown at the 1939 New York World's Fair, where it announced: "Ladies and gentlemen, I'll be happy to tell you my story. I am a smart fellow, as I have a beautiful brain."

One could say Elektro is the basis of how mainstream society thinks of computing-based artificial intelligence, as evidenced by the various movies, TV shows, books, and other entertainment media portraying the concept. As a side note, Westinghouse's Elektro draws many parallels to Hanson Robotics' modern-day Sophia.

Neither is truly intelligent; they are more a way for mainstream society to get a glimpse of future technology. In other words, they imitate intelligence. Going back to Alan Turing: in the 1950s, he pondered this dilemma of real versus imitated intelligence in section one of his paper "Computing Machinery and Intelligence," titled "The Imitation Game." In this paper, he laid the foundations for what we now refer to as the Turing test.

The first serious proposal in the philosophy of computing-based artificial intelligence, the Turing test essentially states that if a machine acts as intelligently as a human being, then it is as intelligent as a human being. An example often thrown around is an online chat room: if we are talking to an AI bot, aren't told so until afterward, and believed during the conversation that it was a human, then the bot passes the Turing test and is deemed intelligent.

Around the same time as Turing's proposal, another titan of the field of computing, the father of the information age, Claude Shannon, published the basis of information theory in his landmark 1948 paper, "A Mathematical Theory of Communication."

Information theory is the backbone of all digital systems today, and it is a very complex topic. In layman's terms, as it relates to computing, Shannon's theory states that all information in the entire universe can be represented in binary.

This has profound implications for artificial intelligence: it means that, in principle, we can break down human logic, and even the human brain, and replicate their processes with computing technology.
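As a tiny illustration of the "everything can be represented in binary" claim (my own example, not Shannon's), here is a short message reduced to a stream of bits, along with Shannon's entropy formula, H = −Σ p·log2(p), which measures how many bits per symbol the message really needs:

```python
import math
from collections import Counter

message = "artificial intelligence"

# Any text can be reduced to bits: here each character becomes the 8 bits
# of its ASCII/UTF-8 code.
bits = "".join(f"{byte:08b}" for byte in message.encode("utf-8"))
print(bits[:32], "...")  # the first 32 bits of the encoded message

# Shannon entropy: H = -sum(p * log2(p)) over the symbol frequencies.
counts = Counter(message)
total = len(message)
entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
print(f"{entropy:.2f} bits of information per character")
```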

This possibility was demonstrated a few years later, in 1955, by what is dubbed the first artificial intelligence program, Logic Theorist, a program able to prove 38 of the first 52 theorems in "Principia Mathematica," a three-volume work on the foundations of mathematics.

Logic Theorist was written by Allen Newell, Herbert Simon, and Cliff Shaw, who, like the philosophers and mathematicians before them, believed human thought could be broken down, stating that "the mind can be viewed as a device operating on bits of information according to formal rules." That is, they realized that a machine that can manipulate numbers could also manipulate symbols, and that symbol manipulation is the essence of human thought.

As a fun side note, Herbert Simon stated that "a system composed of matter can have the properties of mind," a throwback to the alchemy of the Middle Ages, in which people attempted to turn matter into mind. Around this period, in 1951, Marvin Minsky, one of the founding fathers of the field of artificial intelligence, built the first machine incorporating a neural net: the Stochastic Neural Analog Reinforcement Calculator, or SNARC for short.

As you can see, by this point in the mid-1900s, with computers becoming more capable every year, research into abstracting human logic and behavior was accelerating. The development of the first neural net and various other innovations led to the birth of the field of modern computing-based artificial intelligence.

We'll cover the official birth of AI and the road to the present day in the next video in this AI series. Speaking of the present day and of advanced algorithms that require more and more of your data: now more than ever, privacy online is essential, and tools such as VPNs can provide it by encrypting your internet connection.

One such VPN that I use myself, and the sponsor of this video, is Surfshark. They use military-grade AES-256-GCM encryption, which is used by banks and financial institutions around the world.

Surfshark is also one of the few genuinely no-log VPNs out there, as they are based in the British Virgin Islands, which has no data retention laws. The VPN is supported on all major platforms, including Android and iOS.

While there are many VPN options out there, Surfshark has become one of my favorites, and as a bonus, it is one of the most affordable as well. To support Futurology and learn more about Surfshark, go to surfshark.deals/futurology. By using that link and entering the promo code Futurology at checkout, you will save 83% and get one additional month added to your plan for free.

Thanks for reading.
