• This forum is the machine-generated translation of www.cad3d.it/forum1 - the Italian design community. Several terms are not translated correctly.

the humble servant

  • Thread starter: Fulvio Romano
What do you mean by the artificial intelligence of video games? I don't know it, I've never heard of it...
rather than the artificial intelligence of the video game, though, I would focus on the natural stupidity of the player in front of it... :tongue:
I'll give you a stupid example.
If you play (I know you never will :biggrin:) a first-person shooter (one of the best is "Call of Duty"), your opponents will not just let themselves be slaughtered by automatic fire, but will seek shelter behind walls, crates, natural cover, etc.
this passive self-defence ability (or even active, when they return fire against you from cover) is what is called artificial intelligence, and for fans it makes the game very close to reality.
one of the key features in these games is the "behaviour" of the computer-controlled virtual characters. their "attitude" greatly influences playability: opponents that are too stupid make the challenge too easy and boring; on the contrary, opponents that are too smart can make it impossible and nerve-racking.
This detail, often overlooked by software houses, can ruin an otherwise excellent product, making it unbearable to the public and, not a negligible fact, unsold.
 
Now I understand... (and I still play them... I'm not keen enough to spend money on this, but if the chance comes up with friends, why not... )

considering that calling it artificial intelligence is, in my opinion, not an abuse of terms. First of all, if that set of pixels shooting at you seems in every respect a human being, it means it would pass the Turing test, and therefore it can be considered AI. Moreover, each individual fighter runs an algorithm that, on the basis of the circumstances (cover, ammunition, position of the enemy, etc.), elaborates a strategy aimed at optimizing a cost functional with two parameters: the character's own health and the reduction of the number of enemies. I would say that is exactly what a fighter does in war, so why shouldn't it be AI?
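as a toy illustration of such a two-parameter cost functional (all names, weights and numbers here are invented, not taken from any real game engine):

```python
# Hypothetical NPC decision: score each candidate action on expected
# health lost and enemies left alive, then pick the cheapest trade-off.
W_HEALTH, W_ENEMIES = 4.0, 2.0  # made-up weights of the two parameters

def cost(action):
    # Lower cost is better: lose less health, leave fewer enemies alive.
    return W_HEALTH * action["health_lost"] + W_ENEMIES * action["enemies_left"]

actions = [
    {"name": "charge",     "health_lost": 0.9, "enemies_left": 1},
    {"name": "take_cover", "health_lost": 0.1, "enemies_left": 3},
    {"name": "cover_fire", "health_lost": 0.3, "enemies_left": 2},
]
best = min(actions, key=cost)
print(best["name"])  # cover_fire: returning fire from cover wins
```

with these weights the NPC prefers shooting from cover, exactly the behaviour described above for a well-tuned shooter.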
 
for example, this is one reason why pes2012 (a football game) flopped.
this year's 2013 edition is also a colossal piece of crap: the goalkeepers seem blind, with their hands covered in soap.
 
Enough with football... if you have to spend money on a video game besides the cod (call of duty) mentioned above,
the best for escaping and venting a bit is definitely gta (grand theft auto)... the fifth of which is about to come out. http://www.youtube.com/watch?v=jg2m8cj1au4
 
Let's get a little deeper into what we mean today by "artificial intelligence". This term is closely linked to neural networks, not so much for functional reasons as for topological ones. While the definition of artificial intelligence is purely functional (it says what a machine must, or should, do to be considered intelligent), a neural network is structured like the human brain, and therefore recalls intelligence itself topologically.

in the human brain, the cells devoted to the functions related to thought are the neurons. a neuron consists mainly of three elements: the "dendrites", i.e. the inputs of the neuron; the "axon", i.e. the output; and the cell body. The latter takes the incoming information, in the form of electrical voltages, computes a weighted sum of it, and if the resulting value exceeds a certain threshold it activates its output. the output of the neuron goes on to form the input of other neurons, thus forming a real network.
Many scholars agree that the power of the human brain (and of animal brains too) lies exclusively in its topology, that is, in how the neurons are connected to each other. It is therefore natural to think that by copying this structure into an artificial system it should be possible to replicate intelligence. It is not so, or at least, it has not been so until today. in this post: http://www.cad3d.it/forum1/showpost.php?p=274783&postcount=71 I anticipated something.

In reality, a remarkable advantage of "hardware" structures, where the neural network is physical, as in the human brain, is the parallel processing that develops. the human brain is extremely slower than a pc, yet many operations (such as face recognition, ocr reading, etc.) are carried out at comparable speeds. Why? simply because the neural networks programmed on pcs are actually sequential. Although the topology may be similar, there is one cpu and it computes one thing at a time, albeit at speeds of the order of 3x10^9 operations per second. a human neuron can make about 200 switches per second and its outputs travel at about 100 m/s, but the human brain has about 10^11 neurons with an enormous number of synapses (connections). Therefore, in all operations where the topology of the human neural network is exploited to the full, its processing speed can compete with that of a pc. in doing arithmetic, instead, the pc is obviously unbeatable.

to understand the concepts of speed and parallelism properly, let us take a very simple example problem. on the table there are 9 equal pencils and one shorter than the others. the task is to identify the shortest pencil. a pc performs something like the following algorithm:

shortest = pencil 1
repeat for i = 2 to 10:
Is pencil 'i' shorter than pencil 'shortest'?
yes -> shortest = pencil 'i'
no -> shortest unchanged


at the end of the cycle, the output is the shortest pencil. the human brain instead glances at the table and picks directly, with a single operation, the shortest pencil. with 10 pencils the pc is faster, but with 100, or 1000, or 10^6, or 10^9 pencils? the pc has to perform an ever-increasing number of comparisons, while the number of (parallel) operations needed by the human brain will always be just one.
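the sequential scan above can be written out in a few lines of Python (the pencil lengths are made-up numbers):

```python
# Sequential search for the shortest pencil: one comparison per pencil,
# so the work grows linearly with the number of pencils.
def shortest_pencil(lengths):
    shortest = 0  # index of the shortest pencil seen so far
    for i in range(1, len(lengths)):
        if lengths[i] < lengths[shortest]:
            shortest = i
    return shortest

pencils = [18.0, 18.0, 12.5, 18.0, 18.0]  # made-up lengths in cm
print(shortest_pencil(pencils))  # 2: the index of the 12.5 cm pencil
```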
However, there are also so-called "neurocomputers", i.e. computers that replicate the structure of a neural network in hardware.
 
So, let us begin this process of "copying" the human brain.
the mathematical object at the base of a neural network is the "deterministic neuron", or "McCulloch-Pitts neuron", also called "perceptron". the neuron can be represented as in the figure. there is a series of synapses, each carrying its own value. these synapses enter the neuron, are first weighted, and then the results are summed. the result of this sum is processed by the "activation function" of the neuron. the simplest is the threshold function: below a certain value the neuron returns 0, above it 1. there are, however, other more elaborate activation functions, such as the piecewise-linear function, which varies linearly between 0 and 1, the sigmoid function, which resembles an 's', etc.
the threshold value is fixed and equal for all neurons; what changes is the weight of the different synapses. the distribution of these weights is the subject of the so-called "training" of the neural network, but let's get there by degrees.
the neuron just described is called "deterministic" because, once weights and threshold are known, its output value is known.
There are other types of neuron. There are networks with polarized neurons, where the summing node of the neuron, in addition to the weighted synapses, also receives a "bias" signal, which serves to polarize the neuron and stabilize certain situations, for example keeping the network alive even in the absence of stimuli, or in a condition where all neurons would be mute. And there are stochastic neurons, i.e. neurons characterized by a certain "temperature", according to which they have a higher or lower probability of activating. This type of system is used in the analysis of particularly noisy signals.
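the deterministic neuron described above, with an optional bias input, can be sketched in a few lines (the weights, inputs and thresholds are made-up numbers):

```python
# McCulloch-Pitts neuron: weighted sum of the synapses plus a bias,
# passed through a threshold activation function (0 below, 1 above).
def neuron(inputs, weights, bias=0.0, threshold=0.5):
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if s > threshold else 0

print(neuron([1, 0, 1], [0.4, 0.9, 0.3]))                 # 0.7 > 0.5 -> 1
print(neuron([0, 1, 0], [0.4, 0.9, 0.3], threshold=1.0))  # 0.9 < 1.0 -> 0
```

swapping the `1 if s > threshold else 0` line for a sigmoid of `s` gives the smoother activation functions mentioned above.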

but this is just a single neuron; how is a whole neural network structured?
the network is organized in layers. there is always an input layer and an output layer, of course. between the two there can be more layers, called "hidden". the information content, the "knowledge" of a network, i.e. its ability to solve the problems submitted to it, is concentrated in two places: in the synaptic weights of each neuron, and in the topology of the network, i.e. how many layers it has, how many neurons the hidden layers are composed of*, and how the different neurons are connected.
the topology of the network is "a priori" knowledge, inherent in the network itself; that of the synaptic weights is instead "a posteriori", created by training. to draw a parallel: if I had a traditional algorithm computing a certain parametric function, the parameters would be comparable to the synaptic weights, while the shape of the function would correspond to the topology of the network.

there are also the so-called recurrent networks, i.e. networks where some synapses go backwards and, at the next step, feed neurons located before those that generated them. this type of network has two very interesting features.
the first, quite intuitive, is that it makes it possible to analyze the time variable, without it being explicitly present on the input layer. in fact, with equal inputs (and synaptic weights) the network, if properly trained, evolves and can follow time-dependent events. this phenomenon is absent in non-recurrent networks, which cannot behave as a dynamic system: given an input and a set of synaptic weights, they will always give the same output.
the second, much more important, is that certain areas of the network can settle into stationary conditions. that is, if stimulated correctly, they reach a state and remain there until disturbed. they can therefore work as memory, storing information. so, as predicted by neuroscientific theories, by imitating the neural topology of the brain and merely looking for a structure able to perform pseudo-cognitive processes, we have created a network that, surprisingly, organizes itself by dedicating certain areas to the storage of information, which is exactly what the human mind does.
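the memory effect can be seen even with a single recurrent neuron whose own output feeds back as an extra input (the self-weight and threshold below are invented numbers):

```python
# One recurrent threshold neuron: its previous output is fed back as an
# input, so a brief stimulus leaves it "latched" in the active state.
def step(state, inp, w_self=1.0, threshold=0.5):
    s = inp + w_self * state
    return 1 if s > threshold else 0

state = 0
trace = []
for inp in [0, 0, 1, 0, 0, 0]:   # a single brief stimulus at step 3
    state = step(state, inp)
    trace.append(state)
print(trace)  # [0, 0, 1, 1, 1, 1]: the state persists after the stimulus
```

the neuron stays in its stationary active state until some disturbance (e.g. an inhibitory input) knocks it out: a one-bit memory.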

This means that the hope of seeing the human brain replicated inside a computer may be far off in time, but it is not a pure chimera.

(*)
the number of neurons in the hidden layers is a topological feature of the network. the number of nodes of the input and output layers, instead, depends on the dimensionality of the problem and is not a free choice of the designer.
 

Attachments

  • Neurone.webp
  • rete.webp
to answer mbt's first post, let's see how the neural network of a person choosing an apple could work. imagine extracting from the image the three pieces of information of the input layer, i.e. colour, shape and size of an object. What mbt said was wrong: I don't eat the apple because my father taught me to. A child puts everything in his mouth, throws things, bangs them on his head, bites them, licks them. It all looks like a suicidal instinct, but in reality he is simply "training" his neural network, and this is why he should be left free to do it, at least until he gets into trouble. in practice, he is putting the right numbers into the circles of the figure. One day he puts a tennis ball (round, yellow, 8 cm) in his mouth and realizes that, if he is hungry, it is best for the colour neurons to learn to discard yellow. Then he will try with a basketball (round, red, 50 cm) and will understand that red stuff of that size does not feed him much. Then he will grab a lego brick (red, square, 10 cm) and will not even manage to bite it. when the network is fully trained, and he is hungry, he will be able to recognize an apple and bite into it, because his neural network will say that the object has the right colour, shape and size to be an apple.
if the apple is yellow he will probably eat it anyway, because the colour neuron has a small weight in the output layer. if it is a rounded cube, red, 10 cm, it will instead be discarded, because the shape has great weight. and it has acquired great weight because, through training, the network has come to the conclusion that, while a yellow apple is still plausible, a square apple is impossible.

In this way children, animal puppies and computer neural networks learn.
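the weights the child ends up with could look like this hypothetical "edible apple" neuron (the numbers are invented; the only point is that shape carries more weight than colour, as in the example above):

```python
# Inputs say how apple-like the colour, shape and size of an object are
# (0 = not at all, 1 = perfectly); shape has the largest made-up weight.
def looks_like_apple(colour, shape, size):
    s = 0.2 * colour + 0.6 * shape + 0.3 * size
    return s > 0.5

print(looks_like_apple(colour=0.0, shape=1.0, size=1.0))  # yellow apple: True
print(looks_like_apple(colour=1.0, shape=0.0, size=1.0))  # red cube: False
```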
 

Attachments

  • mela.webp
but in this learning phase, what determines the intelligence of one subject compared to another?
the sheer speed of learning, or the number of wrong attempts before learning the right sequence?
that is, how many tennis balls will the child have to examine before understanding which one is the apple, or how long will it take him to understand the difference?
 
in human beings I don't know; in artificial intelligence it depends a lot on the structure of the network. a network with more layers and more connections will have more "intellective" capability, but it will also have more tendency to "get stuck", as we will see in a while when we introduce back-propagation. why the human brain never gets stuck is not known to me.

As we will see in a while, however, it also depends on how the learning data are presented (the training set). if the data are passed with zero mean, decorrelated and equalized in covariance, the learning curve will be much faster.
 
Maybe I've already posted them somewhere else. I have no idea how this monstrosity is made: http://www.youtube.com/watch?v=w1czbcnx1ww but if I had to do it, I would use a neural network, and I would generate the training set by applying sensors to a real dog. after a while the network should train itself to behaviours of this kind, which would make you tear your hair out if you had to generate them with a model-based system.

another beast here: http://www.youtube.com/watch?v=nuqsrpj1dyw while these systems that are all the rage: http://www.youtube.com/watch?v=aicftmdrvhm are not based on neural systems. usually rather simple rules are adopted and the swarm "adapts". for example, if each quadcopter measures the distance from its nearest neighbours and optimizes a potential cost functional based on these distances*, you get something similar to the first figures.

(*)
Imagine virtual springs connecting the various quadcopters, and imagine that their goal is to minimize the energy stored in the springs.
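a 1-D toy version of that idea (spring constant, rest length, step size and positions are all invented): each element descends the gradient of the total elastic energy of the springs to its neighbours.

```python
# Three "quadcopters" on a line, joined by virtual springs with rest
# length L; gradient descent on the elastic energy spreads them out.
L, K, STEP = 2.0, 1.0, 0.1

def energy(xs):
    return sum(0.5 * K * (xs[i + 1] - xs[i] - L) ** 2 for i in range(len(xs) - 1))

def grad(xs, i):
    g = 0.0
    if i > 0:
        g += K * (xs[i] - xs[i - 1] - L)   # spring to the left neighbour
    if i < len(xs) - 1:
        g -= K * (xs[i + 1] - xs[i] - L)   # spring to the right neighbour
    return g

def relax(xs, iters=500):
    for _ in range(iters):
        xs = [x - STEP * grad(xs, i) for i, x in enumerate(xs)]
    return xs

xs = relax([0.0, 0.3, 0.5])   # bunched-up starting positions
print(round(energy(xs), 4))   # near zero: the spacing has relaxed to L
```

each element only needs the distance to its nearest neighbours, which is exactly why such simple rules scale to a whole swarm.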
 
in the first video, the way it stays on its feet after the kick is really creepy!

the quadcopters are stunning when they change formation to get around the obstacle.
 
Oh, how is it possible that you find them all?!?! :tongue:
Every now and then I apply myself...
If I put in some "good will" I could dig out interesting things seen in movies, cartoons, etc.

sent via tapatalk 2 from a tab tablet or sony smartphone... :-)
 
Maaa...
is the "control" of these "things" on board, or are they commanded remotely?
 
the control of the bigdog is on board, and the power supply comes from batteries too. however, there are videos on youtube showing it with an "umbilical cord": probably for laboratory tests the batteries are removed and it is powered by cable. at that point it is not certain that the control is not remote as well. given its size, though, one option is as good as the other; there are no big differences.

for the swarms of quadcopters, instead, the difference matters. for control purposes, unless you cheat, which is pointless in research activities, it is indifferent to have one of the following architectures:
1. a single processing device and a set of cameras (visible in the video) that identify the position and speed of the elements, "tag" them (distinguish them from each other), elaborate the strategy for each element and then transmit, via radio, the movement instructions to the various quadcopters;

2. on board each element, both the processing part and the sensing part that sees the nearby quadcopters.

If from the control point of view the two architectures are equivalent, from that of cost and flexibility the first has a clear advantage, and that is why it is preferred.
 
in the first case you can have an enormous computing power acting as a "director" that guides the "stupid" elements, with no limits on bulk, required energy or anything else.
In the second case... you have to miniaturize everything!
However, the first architecture makes the devices useless in environments that are not specially equipped.
 
let us now see in more detail how an "mlp" network is trained, that is a multi-layer perceptron, i.e. the kind of network we have talked about so far.
First of all, the network is "initialized" using rather small values for the synaptic weights; they will evolve towards knowledge through training. initializing to zero means making the network inert, so it does not work; moreover, the initialization weights should be random. inserting non-random weights, for example all equal or with recognizable sequences, tends to polarize the network, instilling a kind of pre-existing knowledge, a natural instinct, which, if wrong, is likely to worsen the learning ability. once again, exactly what happens in the human mind.
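a minimal sketch of such an initialization (the number of inputs, the scale and the seed are invented):

```python
import random

# Small random initial weights: non-zero, so training can move them,
# and without recognizable patterns, so no spurious "instinct" is
# instilled in the network before it has seen any data.
def init_weights(n_inputs, scale=0.1, seed=42):
    rng = random.Random(seed)
    return [rng.uniform(-scale, scale) for _ in range(n_inputs)]

w = init_weights(3)
print(all(-0.1 <= wi <= 0.1 for wi in w))  # True: small and random
```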

We speak here of supervised training (it is not the only kind possible), where there is a "teacher" who holds the knowledge of the problem to be put to the network. the teacher produces a training set, i.e. a set of inputs with the corresponding correct outputs. The steps are two. first a forward propagation: a stimulus is applied to the network and it produces an output; the weights are not updated. the output is compared with the correct one in the training set, and an error is computed. then, propagating the error backwards and evaluating for each layer the contribution of each neuron to the output, all the synaptic weights are updated. then one moves on to the next value of the training set, updating the weights again, and so on. once the training set is completed, a so-called "epoch" of training has been completed.
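the two steps can be sketched for the simplest possible case, a single sigmoid neuron trained on made-up data (with hidden layers the error term is propagated backwards layer by layer, but the structure of the loop is the same):

```python
import math
import random

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def train(samples, epochs=2000, rate=1.0, seed=0):
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in range(len(samples[0][0]))]
    b = 0.0
    for _ in range(epochs):                 # one pass over the set = one epoch
        for x, target in samples:
            # Forward propagation: compute the output, weights untouched.
            y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            # Error term, scaled by the sigmoid derivative y*(1-y).
            delta = (target - y) * y * (1 - y)
            # Backward step: update every weight (and the bias).
            w = [wi + rate * delta * xi for wi, xi in zip(w, x)]
            b += rate * delta
    return w, b

# Made-up training set: the teacher's function is the logical AND.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
out = [round(sigmoid(w[0] * x[0] + w[1] * x[1] + b)) for x, _ in data]
print(out)  # [0, 0, 0, 1]: the neuron has learned the AND
```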

Obviously at this point, if the first value of the training set is presented again, the output will not be identical to the "right" one but will only be close to it. the network reaches a local optimum that approximates the teacher's function, but it will never overlap it perfectly. However, by re-passing all the training set values a second time (a second epoch) the network can be specialised further, and at the end of the second training cycle the outputs will be closer to the "right" ones. Is that good? When is it time to stop?

There are at least two stopping criteria. the first, trivial: when the error drops below a predetermined threshold. the second: when the change in error between one epoch and the next falls below a prefixed threshold. the first method stops when the network has "learned enough", the second when the network gives the impression of "not being able to learn any further".
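the two criteria, applied to an invented history of per-epoch errors (both thresholds are made-up numbers):

```python
# Stop when the error is small enough, or when it has stopped improving.
ERR_MIN, DELTA_MIN = 0.01, 1e-4

def should_stop(errors):
    if errors[-1] < ERR_MIN:                 # "learned enough"
        return True
    if len(errors) > 1 and abs(errors[-2] - errors[-1]) < DELTA_MIN:
        return True                          # "cannot learn any further"
    return False

print(should_stop([0.50, 0.20, 0.008]))    # True: error below threshold
print(should_stop([0.50, 0.20, 0.19995]))  # True: error barely changing
print(should_stop([0.50, 0.20, 0.10]))     # False: still learning
```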

As anticipated, to optimize the learning curve of a network it is often worth "treating" the data. imagine a function that for each x outputs a y=f(x), and suppose we want to "imitate" the function f with a neural network. to optimize the learning curve, and to ensure that all neurons learn at the same speed, the following steps are useful:
- normalization: brings the data to a uniform range of values
- decorrelation: eliminates trends that can polarize the network
- covariance equalization: if the variances of domain and codomain are equal, the network will learn at the same speed at every point.
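a 1-D sketch of this treatment on invented numbers (shift to zero mean, then rescale to unit variance; applying the same recipe to inputs and targets equalizes their variances):

```python
# Centre the data on zero and scale it to unit variance, so every
# neuron sees values in a comparable range and learns at the same speed.
def treat(values):
    n = len(values)
    mean = sum(values) / n
    centred = [v - mean for v in values]
    var = sum(c * c for c in centred) / n
    scale = var ** 0.5 or 1.0          # guard against constant data
    return [c / scale for c in centred]

xs = treat([10.0, 20.0, 30.0])
print(round(sum(xs), 6))                            # 0.0: zero mean
print(round(sum(x * x for x in xs) / len(xs), 6))   # 1.0: unit variance
```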
 

Attachments

  • TrattamentoDati.webp
That's right. But the control problem, the algorithms that "steer" the aircraft, is exactly the same. If you have to develop those algorithms, why not choose the most convenient architecture?
 
