• This forum is a machine-generated translation of www.cad3d.it/forum1, the Italian design community. Several terms may not be translated correctly.

the humble servant

  • Thread starter: Fulvio Romano
  • Start date
it depends on what the software does. I just wanted to draw attention to the fact that the first important step in solving a problem is defining it carefully, and the problem of intelligence has not yet been well defined...
There, this is where I wanted to get to!
In the end, you are trying to emulate human intelligence, whatever it may be, using things that are already intelligent, but differently from us.
Am I right?
 
prepare the ground... I will follow you in silence :)
Now and then I may ask you a few questions.

I like the subject very much but I never had the chance to go deeper into it.
 
There, this is where I wanted to get to!
In the end, you are trying to emulate human intelligence, whatever it may be, using things that are already intelligent, but differently from us.
Am I right?
I think you're wrong.
the main objective of AI is to put a machine in a position to "decide on its own".
this is possible only if it can "understand" the commands it is given.

Unfortunately, the PC executes the commands it receives only because there is software that gives the machine the "relation" between input and output.
In the end, given an input, the PC returns output = f(input), where the function is the software.
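The output = f(input) idea can be sketched in a few lines of Python. This is a toy illustration, not from the original discussion; the command table is invented:

```python
# A classical program is a fixed mapping: output = f(input).
# The "software" below never produces anything outside this table.
def f(command: str) -> str:
    responses = {
        "open file": "file opened",
        "close file": "file closed",
    }
    # For any input not foreseen by the programmer, the PC has no real answer.
    return responses.get(command, "unknown command")

print(f("open file"))       # a foreseen input gets the programmed output
print(f("make me coffee"))  # an unforeseen input gets nothing useful
```

The machine adds nothing of its own: the whole behavior is the table the programmer wrote.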

the PC adds nothing of its own.

with AI we try to change this concept
 
There, this is where I wanted to get to!
In the end, you are trying to emulate human intelligence, whatever it may be, using things that are already intelligent, but differently from us.
Am I right?
when Deep Blue beat Kasparov, many claimed that the machine was not really intelligent because it was simply running a program. To which came the reply:
saying that Deep Blue, when it plays chess, is not actually thinking is like saying that a plane does not fly because it does not flap its wings
this and much more in the next post, which I have already written but don't have at hand right now...
 
Maybe we are going off topic, maybe we should open a dedicated thread, but...
I think you're wrong.
the main objective of AI is to put a machine in a position to "decide on its own".
this is possible only if it can "understand" the commands it is given.

Unfortunately, the PC executes the commands it receives only because there is software that gives the machine the "relation" between input and output.
In the end, given an input, the PC returns output = f(input), where the function is the software.

the PC adds nothing of its own.
I understand, but...
and a human being, what does he do? he receives "input" from the surrounding environment and from his own "body", analyzes it, performs reasoning and emits an "output".
shall I give a silly example?
he sees an apple (input), analyzes the quality of the apple according to certain parameters (look, smell, shape...), considers other "internal" inputs (am I hungry? am I intolerant? do I feel like eating it?) and emits an output (I eat it / I leave it where it is)
and more or less the same holds for everything
think about what you do at work:
you are assigned a task (input), you receive or look for information (more input), you process the information (thought), and you spit out an answer (output)
Basically, you're a PC...
 
...you are assigned a task (input), you receive or look for information (more input), you process the information (thought), and you spit out an answer (output)
Basically, you're a PC...
except for the "thought" part, yes

the PC already has the solutions inside itself, and each time it spits out the "desired" one.
that's why the PC "looks" smart when it really isn't: the PC is unable to find a solution, it can only return the right answer to the right question.

I hope I have explained myself now
 
Who knows, maybe PCs do think but prefer not to let us know...
You joke about it, but think about what could be done with AI.
we would arrive at the long-sought "automatic factory".

imagine a factory made only of PCs and robots, able to take production orders, organize themselves according to the type of work, produce, etc.
 
you are assigned a task (input), you receive or look for information (more input), you process the information (thought), and you spit out an answer (output)
Basically, you're a PC...
There are two substantial differences.
the first is that you do the reasoning you do because your ultimate purpose is to feed yourself, and therefore to put into practice the techniques that allow you to survive; the PC instead does it because someone asked it to.

the second is that... no one explained to you how to evaluate an apple, you learned it by yourself. put a PC in a field and after a year you would find it rusty, but it will not be able to distinguish an apple from a pear if no one has written the dedicated code.

oh well... anyway, I dug up the post...
 
alan turing was a British mathematician and logician, considered the father of modern computer science. one of the greatest mathematicians of the twentieth century, he worked as a cryptanalyst during World War II, and is considered one of the most brilliant cryptanalysts in history.
his name is linked to the so-called "Turing test", which belongs to the family of "thought experiments". thought experiments are experiments, much used in the sciences, which can support or, more often, refute theories by pure philosophical speculation, without any practical laboratory experiment.
the most famous thought experiment is Schrödinger's cat, which in the thirties tried to demonstrate the incompleteness of classical mechanics. the experiment has become very famous because it is apparently simple to understand, and it is therefore still used even though the scientific community agrees in considering it flawed by sophism. there is a fundamental error that completely invalidates the experiment, which nevertheless shows something that is correct for other reasons. although classical mechanics is indeed insufficient to explain quantum phenomena, it is also true that the Geiger counter of the experiment generates a physical action in a world well described by classical mechanics, and therefore the state of the cat is well determined regardless of its observation.

the Turing test involves a black box, a box you cannot look into, with a person or a computer inside. an external observer must understand whether the box contains the person or the computer, simply by asking questions and analyzing the answers. if from the answers it is not possible to distinguish the computer from the person, it means that artificial intelligence has been achieved, that is, a computer has been built that can "emulate" a human being.
one interesting thing is that a similar test is offered to us very often on the internet. when a website must "understand" whether there is a human being at the other end of the wire, or a "bot", i.e. a PC attempting automated registrations (e.g. on a forum), it proposes a captcha,
i.e. a heavily distorted image that cannot be deciphered by OCR algorithms (optical character recognition, an artificial intelligence algorithm), but only (hopefully) by humans. captcha stands for "completely automated public Turing test to tell computers and humans apart".

the Turing test, as defined, can easily be fooled; it has in fact been invalidated and rewritten several times in attempts to improve it. over time the definition has been modified and polished, to try to make it as much as possible a "razor", i.e. a [philosophical] tool able to clearly divide two categories, for example right from wrong, or humans from computers.

the two most important refutations of the Turing test were a computer program that imitates a human being, and a human being that imitates a PC:
the first, a "chatterbot", i.e. a "chatting robot", made the first breach in the Turing test in 1966. its name was eliza, borrowed from Eliza Doolittle (not the singer!), a flower girl, coarse and of the lowest social status, protagonist of "Pygmalion" by George Bernard Shaw. she managed to learn by heart the expressions and way of speaking of the nobility, repeating them parrot-fashion, so that she could appear elegant and refined without actually being so.
eliza uses an algorithm, by the way, also put into practice by our current political class. the software is written to imitate a therapist.
when it is asked a question, eliza replaces words with others from pre-packaged categories, reversing the syntax and turning the sentence into a question. a dialogue could go like this: [Man] "I have a bad headache" [Eliza] "why your head?" [Man] "because my mother hates me" [Eliza] "who else in your family hates you?"
the dialogue can go on indefinitely, and a not particularly alert interlocutor may have the impression of talking to a human being.
this comes from the fact that man tends to give meaning to words that for a PC are pure symbols. although a linguist is aware that the symbolic, syntactic and semantic links between words follow fixed rules, the fact remains that we read into those words a deeper cognitive meaning.
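The substitution trick described above can be sketched in a few lines of Python. The patterns and pronoun swaps here are invented for illustration and are not ELIZA's actual script:

```python
import re

# Minimal ELIZA-style sketch: match a pattern, swap first-person
# words for second-person ones, and turn the statement into a question.
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

RULES = [
    (re.compile(r"i have (.*)", re.I), "Why do you have {0}?"),
    (re.compile(r"my (.*) hates me", re.I), "Who else in your family hates you?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
]

def reflect(fragment: str) -> str:
    # Swap pronouns word by word ("my head" -> "your head").
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def eliza(sentence: str) -> str:
    for pattern, template in RULES:
        m = pattern.match(sentence.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please tell me more."  # fallback when no pattern matches

print(eliza("I have a bad headache"))
print(eliza("my mother hates me"))
```

The program manipulates symbols without attaching any meaning to them, which is exactly the point made above.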
 
You joke about it, but think about what could be done with AI.
we would arrive at the long-sought "automatic factory".

imagine a factory made only of PCs and robots, able to take production orders, organize themselves according to the type of work, produce, etc.
it would be a nightmare!
the next step, for me who am a fatalist, is that they "decide" that we are a "harmful species".
There are two substantial differences.
the first is that you do the reasoning you do because your ultimate purpose is to feed yourself, and therefore to put into practice the techniques that allow you to survive; the PC instead does it because someone asked it to.
Yes, on this I agree.
the second is that... no one explained to you how to evaluate an apple, you learned it by yourself. put a PC in a field and after a year you would find it rusty, but it will not be able to distinguish an apple from a pear if no one has written the dedicated code.
mah... this is more complicated. we know what an apple is because it is "transmitted" from generation to generation. your father/mother told you that that's an apple; your father/mother was told by their parents, and so on...
who told the "first" one... this becomes philosophy!
 
the second refutation of the Turing test comes from another thought experiment, the so-called "Chinese room". we are again talking about linguistic problems, because the Turing test is based on questioning what is inside the black box. suppose we design a computer and make it able to converse in a given language, for example Chinese. not like eliza, but exactly like a human being; that is, suppose we can build a PC capable of understanding Chinese just as a human being would. the computer would take ideograms in, process them, and give other ideograms out. it could therefore be said that the computer understands Chinese, because there is no difference between the answers given by the machine and those that would be given by a human being able to understand Chinese.
let us now suppose that I, who do not speak Chinese, shut myself in a room, called the "Chinese room". I have in my hands, printed in Italian, the listing of the program used by the computer. I could take ideograms in, look them up in the listing, and read in Italian what I have to do, i.e. which ideograms to give out. I would behave exactly like the computer, but I do not understand Chinese: I have no idea what the incoming symbols mean and no idea of the meaning of those I give out. and yet I can do the same job. so if I do not understand Chinese, I must conclude that the computer does not understand Chinese either; it has simply been instructed on how to manipulate symbols so as to emulate human thought.

ultimately, it is so complicated to decide whether a PC can think, or can never think, simply because we have not agreed on what that means. if a PC behaves in a way identical to a human being, just because it is very capable of manipulating symbols, is it thinking or not? and if not, why?

given all these difficulties, a philosopher, John Searle, decided to divide artificial intelligence into two branches. the so-called strong artificial intelligence is the classic one of science-fiction films, where the machine can actually think, in such a way as to be able, to all intents and purposes, to replace man. what that means exactly is not yet clear.
completely different is weak artificial intelligence, the kind already realized by various programs that play chess, forecast the weather, etc.

so if Deep Blue, the computer that in 1996 beat Kasparov at chess, is not intelligent but only fast at computing, what does that mean? what is the difference between Einstein and Rain Man? as was said: "saying that Deep Blue, when it plays chess, is not actually thinking is like saying that a plane does not fly because it does not flap its wings", meaning that Deep Blue simply reasons differently from man, but not in a less effective way.
there is however a single element, identified by Searle, that distinguishes strong artificial intelligence from weak: intentionality. Kasparov plays chess because he wants to win, because he wants to demonstrate his superiority over a machine; Deep Blue plays because it is running a program.
 
in logic, in mathematics, but often also in everyday life, we are led to think of the concepts of true and false as the only labels to attach to events. this is nothing but a simplification of reality, always used to simplify mental processes. it is a way of thinking so widespread that it has generated entire philosophical-logical branches. for example, classical set theory rests mainly on two axioms: the principle of non-contradiction, which says that if an element belongs to a set A, it cannot at the same time be outside it, i.e. belong to the set non-A; and the principle of the excluded middle, which says that the union of A and non-A must necessarily contain all the elements, that is, no element belongs to a third set.

set theory is a mathematical construct which, like all mathematical constructs, is born with the purpose of describing reality, or at least part of it. unfortunately, the dichotomous vision of set theory, its binary handling of membership, cannot accurately describe a whole series of phenomena. let us look at a couple.

the first category of phenomena not accurately described by a dichotomous vision is, for example, the one linked to definitions such as hot/cold or young/old. if it is true that in winter it is cold and in summer it is hot, what happens in spring? if a newborn is certainly young and a centenarian is certainly old, how is a fifty-year-old to be classified? if the set A is the set of young people, the newborn will belong to A and the centenarian to non-A, but the fifty-year-old puts the axioms of non-contradiction and the excluded middle in serious difficulty.

a second category, a little thornier, is that of the self-referential paradoxes. if the set A is the set of true sentences, and therefore non-A that of false sentences, in which set must the sentence "this sentence is false" be placed? and where should "I am a liar" go?

this shows that classical dichotomous logic cannot describe everything that is real. the desire for something new arose in the early sixties: the so-called "fuzzy sets" were born, i.e. nuanced sets. the theory of fuzzy sets does not include the principles of non-contradiction and the excluded middle, but introduces a new concept, the degree of membership. if A is the fuzzy set of young people, the newborn will belong to A and the centenarian to non-A, but a twenty-year-old will belong 80% to A and 20% to non-A, a fifty-year-old 50% to A, and so on.
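The degrees of membership in the "young" example can be written as a simple membership function. The linear ramp and the breakpoints (0 and 100 years) are arbitrary choices for illustration:

```python
def young(age: float) -> float:
    """Degree of membership in the fuzzy set 'young', from 1.0 down to 0.0.

    A linear ramp is assumed: a newborn is fully young (1.0),
    a centenarian is not young at all (0.0).
    """
    if age <= 0:
        return 1.0
    if age >= 100:
        return 0.0
    return 1.0 - age / 100.0

# The fifty-year-old no longer breaks the axioms: he simply belongs
# 50% to 'young' and 50% to 'non-young'.
for age in (0, 20, 50, 100):
    print(age, young(age))
```

With this ramp the twenty-year-old is 80% young and the fifty-year-old 50%, exactly the degrees quoted above.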
this seems pure philosophical speculation, and perhaps it is, but let us return to the example of temperature. take a split air conditioner, like those in our homes. household air conditioners (at least those without an inverter) work with a relay logic: once a reference temperature (setpoint) is set, the air conditioner will switch off n degrees below that temperature and switch on n degrees above it. n is called the hysteresis band, and it serves to avoid the switching frequency becoming too high near the setpoint temperature. this means that the conditioner will be either off or at maximum power, and that the temperature in the room will never be the "right" one, but will oscillate between right−n and right+n.
now imagine a fuzzy logic control. there will be two membership functions, the blue one is "cold" and the red one is "warm". in the zone where it is "a little cold and a little warm" it will be possible to drive the air conditioner so as not to let it go to maximum power. in this way consumption is optimized and the temperature stays closer to the "right" one for longer.

in industrial reality there are many simple machines, from washing machines to automatic coffee machines, that use fuzzy logic. in Japan, where these appliances first appeared, the term "fuzzy" took on the meaning of "intelligent" and had enormous marketing power.
 

Attachments

  • Fuzzy.webp
his name was eliza, borrowed from Eliza Doolittle. . .
Just think, the only Doolittle I knew was Jimmy Doolittle, protagonist of the air raid on Tokyo of 18 April 1942, known as the Doolittle Raid. . .
all very interesting, fulvio, but when we talk about AI in videogames, are we using the term improperly?
because the market matters a great deal for the "quality" of the AI.
 
Just think, the only Doolittle I knew was Jimmy Doolittle, protagonist of the air raid on Tokyo of 18 April 1942, known as the Doolittle Raid. . .
all very interesting, fulvio, but when we talk about AI in videogames, are we using the term improperly?
because the market matters a great deal for the "quality" of the AI.
Video games? What do you mean? I don't know them, never heard of them. . .
rather than on the artificial intelligence of the video game, though, I would focus on the natural stupidity of the person in front of it... :tongue:
 
Video games? What do you mean? I don't know them, never heard of them. . .
rather than on the artificial intelligence of the video game, though, I would focus on the natural stupidity of the person in front of it... :tongue:
May I play the troll?? ?

It's like the iPhone, the only smartphone smarter than its owner! :biggrin::biggrin:
 
the only Doolittle I knew was Jimmy Doolittle
a little gem... if you go to Google Earth, choose the planet Mars and search for "meliza", the program takes you to a point on the planet where there is an extraterrestrial.
you can start a chat with him and he will answer you... with the original eliza code
:36_1_13:
 

Attachments

  • Meliza.webp
