• This forum content is a machine-generated translation of www.cad3d.it/forum1 - the Italian design community. Several terms are not translated correctly.

the humble servant

  • Thread starter: Fulvio Romano
I went to sgnappaland to find him :-)
and he has settled in well!!! he hasn't picked up the accent yet :biggrin:
What are you doing?

You know them too: harmonic drive, cycloidal, planetary... But you have forgotten the orthogonal, parallel, coaxial and worm gearboxes (maybe better left out!). :biggrin:
 
and he has settled in well!!! he hasn't picked up the accent yet :biggrin:
What are you doing?

You know them too: harmonic drive, cycloidal, planetary... But you have forgotten the orthogonal, parallel, coaxial and worm gearboxes (maybe better left out!). :biggrin:
and he is only 30 years old... alimortè :-)
he should be taken as an example by many people: proof that you can use your head instead of using it as a spacer between your ears.
 
p.s.: but where had you been hiding this "character"?!??
he should be taken as an example by many people
You are embarrassing me... Thank you for the appreciation...
I still have a lot more to write; I hope to keep the interest alive.
But you have forgotten the orthogonal, parallel, coaxial and worm gearboxes (maybe better left out!). :biggrin:
Well, I'm talking about robotics here, so I'm limiting myself to the gearboxes most used in this field.
 
I read the first post: "ok, we're talking automation, what a bore..."
I read the second: "but..."
I read the third: "fuck!"

then, the hat-trick: spectacular!

p.s.: but where had you been hiding this "character"?!?!?
:smile:
fulvio, with this statement you can consider yourself "anointed by the Lord".
and he is only 30 years old... alimortè :-)
he should be taken as an example by many people: proof that you can use your head instead of using it as a spacer between your ears.
That's not true.
I also need it as a base for my nose. Otherwise I don't know where to rest my glasses.
 
fulvio, with this statement you can consider yourself "anointed by the Lord".
the one "anointed by the president"?
That's not true.
I also need it as a base for my nose. Otherwise I don't know where to rest my glasses.
same problem...
I would like to say that I also use it as a base for growing hair, but by now... :redface:
 
- torque transducers: normally they are not used on industrial manipulators. In some cases, however, for the study of particular control architectures, it is useful to know the instantaneous torque on a joint through a direct measurement; a torque meter is then fitted.
I will not go into constructional detail; it is not necessary. they are instruments similar to load cells, i.e. with a deformable element onto which one or more strain gauges are glued (resistors whose resistance varies when they are deformed). the strain gauges are inserted in a bridge (e.g. a Wheatstone bridge) to make the measurement independent of temperature and humidity conditions.
in the case of direct-drive torque motors they allow a direct measurement of the torque, more immediate and easier to condition than an indirect measurement obtained from the analysis of the currents. in the case of geared motors they provide a measurement unaffected by the masking action of the reducer.

As we will see later, from the measurement of the joint torques we can in general reconstruct, with sufficient precision, the forces acting on the tool, defined in the operational space. such measurements therefore play a fundamental role in all the control architectures built around compliant manipulators, i.e. manipulators that can be made "soft" in certain directions.
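As a toy illustration of the strain-gauge bridge measurement described above, here is a minimal sketch; the gauge factor, excitation voltage and torsional calibration constant are invented numbers for illustration, not values from the post.

```python
GAUGE_FACTOR = 2.0     # typical metal-foil strain gauge (assumed)
V_EXCITATION = 5.0     # bridge excitation voltage, V (assumed)
K_TORSION = 2.4e-6     # shaft strain per N*m, from calibration (assumed)

def bridge_strain(v_out):
    """full Wheatstone bridge with 4 active gauges: v_out / V_exc = GF * strain"""
    return v_out / (V_EXCITATION * GAUGE_FACTOR)

def torque_from_bridge(v_out):
    """torque in N*m from the measured bridge output voltage"""
    return bridge_strain(v_out) / K_TORSION

print(round(torque_from_bridge(0.0012), 3))   # 50.0 N*m with these constants
```

Because the four gauges sit in opposite arms of the bridge, equal thermal drifts cancel out, which is the temperature independence mentioned above.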

- inclinometers: here too I will not go into constructional detail, because it is not of particular interest. I would rather point out the use of this transducer in robotics.
an inclinometer, also called an "electronic spirit level", as the name itself suggests, is a transducer that provides a measure of the inclination of a plane with respect to the vertical. it is typically realized with an accelerometer which, calibrated when perfectly vertical, will measure a component of the gravitational acceleration when tilted. knowing that component, you can recover the angle.
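The accelerometer-to-angle relation can be sketched in a few lines: the axis that reads zero when the plane is perfectly vertical measures g·sin(θ) when tilted, so the angle is recovered with an arcsine (a generic illustration, not a specific device).

```python
import math

G = 9.80665   # standard gravity, m/s^2

def tilt_angle_deg(a_measured):
    """tilt from the vertical, given the gravity component read by an
    accelerometer axis that measures 0 when the plane is perfectly vertical"""
    # a_measured = G * sin(theta)  ->  theta = asin(a_measured / G)
    return math.degrees(math.asin(a_measured / G))

print(round(tilt_angle_deg(0.0), 3))      # 0.0  -> perfectly vertical
print(round(tilt_angle_deg(G / 2), 3))    # 30.0 -> half of g means 30 degrees
```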
robots, once assembled, must undergo three types of calibration that we will discuss in depth later. one of these calibrations is the zeroing of the joints. one of the methods to perform this calibration is the following:
on each arm a machined pad is made, whose plane is in close tolerance with the reference plane of the arm itself, for example the plane passing through the axes of the joints at its ends. a differential inclinometer is used, that is an instrument consisting of two inclinometers which returns the angle between them. both are rested on the pad of link 0, i.e. the base of the robot, and the differential is zeroed. then one of the two is moved onto the pad of link 1 and the joint is rotated until the inclinometer reads 0; at this point the resolver of axis 1 is zeroed. and so on, calibrating all the axes one by one.
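The zeroing loop above can be sketched with a toy simulation: jog the joint until the differential inclinometer reads approximately zero, then store the resolver zero. `SimAxis`, the jog gain and the offsets are invented stand-ins, not a real robot API.

```python
class SimAxis:
    """toy joint with an unknown zero error (degrees); stand-in for a real axis"""
    def __init__(self, offset_deg):
        self.offset_deg = offset_deg

def read_differential(ax):
    # the differential inclinometer reads the residual misalignment of the pad
    return ax.offset_deg

def zero_axis(ax, tol_deg=1e-4, jog_gain=0.9):
    """jog the joint until the differential inclinometer reads ~0"""
    while abs(read_differential(ax)) > tol_deg:
        # an imperfect jog (gain < 1) still converges by iterating
        ax.offset_deg -= jog_gain * read_differential(ax)

axes = [SimAxis(0.73), SimAxis(-1.20), SimAxis(0.05)]
for ax in axes:
    zero_axis(ax)     # in a real robot, the resolver zero is stored here

print(all(abs(ax.offset_deg) < 1e-4 for ax in axes))   # True: all axes zeroed
```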
this calibration must be repeated whenever a motor, a resolver, a gearbox, an arm, etc. is replaced.

- touch probes: some robots instead have calibration notches that replace the machined pads mentioned in the previous point. using a touch probe and an automatic procedure, the robot searches for the notch and zeroes itself.
Although this method seems simpler, there are at least two contraindications. the first is that the probe, as it is built, has a precision lower than that of the inclinometer. the second, much more important, is that in this case each joint is zeroed between the arm that precedes it and the one that follows it, thus accumulating the measurement error along the chain. with the method of the previous point, instead, each arm is compared with the base of the robot, so the magnitude of the error remains stable along the structure of the manipulator.
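A quick Monte Carlo sketch of the second contraindication, with an invented per-measurement error: chaining each joint to the previous arm accumulates error roughly as √n along the kinematic chain, while referencing every joint to the base keeps it constant.

```python
import math
import random

random.seed(42)
sigma = 0.01      # error of a single zeroing measurement, degrees (invented)
n_joints = 6
trials = 20000

# probe method: joint k is zeroed against joint k-1, so errors add up the chain
rms_chained = math.sqrt(sum(
    sum(random.gauss(0.0, sigma) for _ in range(n_joints)) ** 2
    for _ in range(trials)) / trials)

# inclinometer method: every joint is zeroed against the base, one error each
rms_base = math.sqrt(sum(
    random.gauss(0.0, sigma) ** 2 for _ in range(trials)) / trials)

print(rms_chained > rms_base)   # True: last-joint error is ~sqrt(6) times larger
```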

- vision: here we speak exclusively of vision techniques used as proprioceptive transduction. so-called "machine vision" we will see in the next post.
There are various vision techniques used for robot calibration. they are more complex and expensive than the methods seen so far, but give higher precision. in particular these methods are the only ones that provide an "absolute" calibration, i.e. one referred to the outside world and not to the robot itself.

the simplest method is to use a point, for example a fiber-optic photocell, and bring the robot to that point with different orientations. a pointed tool is mounted on the flange and the robot "searches for" the point until the light beam is interrupted. when it finds it, it moves away, changes orientation and searches for the same point again. after a number of acquisitions you will have a set of joint coordinates, known up to an error, that reach a known point, with an estimated orientation. with simple mathematical operations it is possible to correct the joint zeros quite precisely.

a more complex and more accurate method is to build a tool, bolted and pinned to the flange of the robot, that carries some "targets", i.e. spheres that can be identified by a camera. in general spheres are chosen because they are invariant with respect to the viewing angle, and the position of their center in the acquired image is invariant, or at least only slightly variable, with respect to a large number of optical aberrations.
the robot assumes a number of positions (how? well... a little patience...), and the camera or cameras take pictures at each location. with some calculations it is possible to go from the image plane of each camera back to the coordinates of the special tool in space, as well as to its orientation. once the joint positions and the corresponding tool poses are known for each acquisition, you can compute the joint and link parameters that realize this transformation. it is therefore possible to know geometrically all the arms and all the joints.

an even more precise, but even more expensive, method is to use an electronic theodolite. the target is mounted on the tool of the robot and the theodolite on the ground. conceptually you have the same situation seen with the cameras, but with a simpler and more robust measurement algorithm* and a more precise measurement**.

(*) measuring two angles is certainly more robust than a stereoscopic analysis of image planes or, worse with a single camera, than estimating distance on a perspective basis.
(**) a theodolite typically has a resolution greater than that of the pixels of a camera at the distances in question.
 
Let's talk now about exteroceptive transducers, i.e. those used by the robot to "feel" the surrounding world. the very definition of robotics as an "intelligent connection between perception and action" in fact implies a need to perceive the environment in which the robot is called to operate.
until a few years ago the vast majority of industrial robots worked blindly, with no need to know the world: they simply executed fixed-coordinate instructions. nowadays industrial robots are increasingly called to solve problems not fully known a priori. with the advancement of technology it in fact becomes increasingly simple to equip robots with sensors, and to write programs whose behavior can be partly modified by those sensors.
industrial robots still move in partially known but highly structured environments, and I would like to clarify this point well. I therefore open a brief digression on the environments in which robots move.
knowledge of an environment: an environment can be known, partially known, or unknown. the definition is rather simple and intuitive. a known environment is one of which we know the geometry of every point (of interest) in terms of coordinates. in a partially known environment there may be elements whose position and/or existence we do not know, but the main points of interest are known. in an unknown environment you know nothing: you do not know what exists and, if something exists, where it is.
structure of an environment: an environment can be structured, partially structured or unstructured. this concept is complementary to the previous one, without being linked to it in the least. we can generically call a "primitive" an environmental element with certain characteristics. in a structured environment only primitives known in detail are present. if the environment is known, the positions of these primitives will also be known. if it is unknown, you will not know whether they exist, or where they are, but you know that when you meet something, that something must be one of the known primitives. in a partially structured environment you do not know all the primitives, or the known ones may vary in size or shape while keeping the salient characteristics of the mother primitive. in an unstructured environment there are no known primitives that you can meet.

Let us make some examples to understand better. a few posts ago I talked about the floor-washing robot. it moves in a highly structured unknown environment. this means that, before it is boxed, the programmer has no way of knowing the house in which the robot will operate. however he knows that the primitives it will deal with are quite few, and all time-invariant (cats are not contemplated!). so what must it watch out for?
- chairs and tables
- cabinets and ascending stairs
- descending stairs

only three primitives, completely known a priori. nothing can be said about their position, except that it is independent of time. it is then necessary to equip the robot with a set of sensors and algorithms such that its behavior is "robust" (i.e. independent) with respect to the impact with these primitives. one immediately sees that the first two can be identified by impacts: the discriminant between the first and the second is the variation of the forces in a second impact a short distance from the first. for the third primitive a ring of brightness sensors just below the edge is sufficient.
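The discrimination logic for the three primitives can be sketched as follows; all thresholds and sensor readings are invented for illustration.

```python
from enum import Enum

class Primitive(Enum):
    CHAIR_OR_TABLE = 1        # light obstacle: the second bump force differs
    CABINET_OR_STAIR_UP = 2   # rigid obstacle: the second bump force is the same
    STAIR_DOWN = 3            # edge sensors see a drop, no bump at all

def classify(first_bump_n, second_bump_n, edge_brightness, floor_brightness=100.0):
    """classify the primitive from two close-spaced bump forces (N) and the
    reading of the downward-looking brightness ring (invented thresholds)"""
    if edge_brightness < 0.5 * floor_brightness:   # drop-off detected
        return Primitive.STAIR_DOWN
    # rigid cabinets/stairs give nearly identical forces on repeated impact
    if abs(first_bump_n - second_bump_n) < 0.2 * first_bump_n:
        return Primitive.CABINET_OR_STAIR_UP
    return Primitive.CHAIR_OR_TABLE   # light furniture shifts, so forces vary

print(classify(5.0, 5.1, 95.0))   # Primitive.CABINET_OR_STAIR_UP
print(classify(5.0, 2.0, 95.0))   # Primitive.CHAIR_OR_TABLE
print(classify(0.0, 0.0, 10.0))   # Primitive.STAIR_DOWN
```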

an industrial welding robot lives in a world that is almost completely known and fully structured. it is enough to "feel" the sticking of the welding gun through the motor overcurrents, and the variations of the welding current, to understand whether the bead is the right size and, if it is not, to correct it.

the rovers that run in open country, under the sea or on other planets instead live in a completely unknown and unstructured world. they are rich in sensors and have quite complex algorithms that I intend to introduce later (much later) in this thread.

man lives in an unknown but strongly structured environment. a robot living in the same environment lives in a totally unstructured world. the difference lies in the fact that the human brain contains a quantity of information organized in such a way that it is not possible, at least for now, to reconstruct the same structure of knowledge in a computer.
to give an example, a kitten is in the same condition as the robot mentioned above, with more or less the same chances of getting hurt.
 
Well, let's resume now with the exteroceptive sensors:

- cameras: although they are not the most used transducer, they are certainly the one that most recalls the idea of "seeing" the environment. cameras are made of a sensitive element (CCD or CMOS, but let's not get into technicalities) that receives the incident light and returns an electrical signal proportional to it. the sensitive element consists of a grid of "points" called pixels. the term "pixel" is a contraction of "picture element", i.e. image element. in fact each pixel returns a signal proportional to the amount of light striking it, and thus allows the image to be reconstructed point by point. you could say that the pixel represents the minimum unit of information in an image, but this is true only for unstructured images: very often, knowing the nature of an image, it is possible to extract information much finer than the individual pixel.
There are cameras where each pixel has three elements, each sensitive only to a certain frequency range, in particular centered around the bands of red, green and blue. these are color cameras, although they are rather rarely used in robotics.

in general a camera provides a large amount of information. the problem therefore is almost never the lack of information, but filtering it to extract what really matters.

the camera can be used mainly in two ways.
the "eye in hand" architecture consists in fixing the camera to the end effector of the robot. the camera must be calibrated: with a particular procedure it is possible to know precisely the position of the camera with respect to the robot. at this point everything the camera sees can be interpreted relative to the pose of the robot. the advantages of this architecture are a simpler calibration and a narrower field of view, but generally with a greater resolution precisely because it is closer.
the "eye to hand" architecture instead consists of a fixed camera that frames both the robot and the object on which it has to work. calibration is more complex because you first need to georeference the camera with respect to the world, then the manipulator with respect to the world, and then calculate the position of the camera with respect to the manipulator. the field of view is typically wider, but usually with a lower resolution due to the greater distance.
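The "eye to hand" calibration chain can be sketched as a composition of rigid transforms: camera pose in the robot frame = (robot pose in the world)⁻¹ · (camera pose in the world). A planar (2D) toy example with invented poses:

```python
import math

def pose(x, y, theta):
    """homogeneous 3x3 rigid transform for a planar pose (x, y, heading)"""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inverse(t):
    """inverse of a rigid transform: [R | p]^-1 = [R^T | -R^T p]"""
    r = [[t[0][0], t[1][0]], [t[0][1], t[1][1]]]          # R transposed
    p = [-(r[0][0] * t[0][2] + r[0][1] * t[1][2]),
         -(r[1][0] * t[0][2] + r[1][1] * t[1][2])]
    return [[r[0][0], r[0][1], p[0]],
            [r[1][0], r[1][1], p[1]],
            [0.0, 0.0, 1.0]]

# invented example: robot base and fixed camera both georeferenced in the world
T_world_robot = pose(1.0, 2.0, math.pi / 2)   # base at (1, 2), facing world +y
T_world_cam = pose(4.0, 2.0, math.pi)         # camera at (4, 2), facing world -x
# camera pose seen from the robot: chain the two world calibrations
T_robot_cam = matmul(inverse(T_world_robot), T_world_cam)
print(round(T_robot_cam[0][2], 6), round(T_robot_cam[1][2], 6))  # 0.0 -3.0
```

The camera sits 3 m to the robot's right (its local -y), which matches the world layout.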

a camera typically provides a lot of information on the image plane, but almost nothing along the direction orthogonal to it. in a partially known environment, where at least the distance of the objects is known, the camera adds all the missing information.
the classic example is the identification of pieces on a conveyor belt. the position of the belt is known; what is unknown is the position of the piece on the belt. placing the camera parallel to the plane of the belt, I have all the information I need.
if, however, the environment is unknown in distance as well, you can operate in three ways. if it is at least partially structured, in the sense that you know the size of the objects, then knowing the perspective characteristics of the camera (known from the calibration procedure) you can work back to the distance of the objects (the smaller an object appears, the farther away it is), typically with a precision some orders of magnitude worse than the positioning in the image plane. alternatively you can add a distance transducer that provides the system with the one piece of information the camera cannot give. the third technique is stereoscopy: a fairly complex technique in which two cameras are pointed at the same scene, obviously from different angles. since the uncertainty of a camera is concentrated along its focal axis, and since the two cameras frame the same scene with different focal axes, it follows that by combining the two sets of information you gain knowledge of depth. the greater the angle between the two cameras, and therefore the greater their distance, the greater the information that can be obtained.
if then one of the two cameras is "eye in hand" and the other "eye to hand", you can also combine the advantages of the two architectures.
after all, man has stereoscopic vision precisely for this reason.
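The depth information from stereoscopy can be sketched with the parallel-camera pinhole model, where depth Z = f·B/d for focal length f, baseline B and disparity d; the numbers below are invented for illustration.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """parallel-camera pinhole stereo: Z = f * B / d"""
    return focal_px * baseline_m / disparity_px

f_px = 800.0    # focal length in pixels (invented)
base_m = 0.12   # baseline between the two cameras, meters (invented)
print(round(depth_from_disparity(f_px, base_m, 48.0), 6))  # 2.0
print(round(depth_from_disparity(f_px, base_m, 24.0), 6))  # 4.0: half the disparity, twice the depth
```

Note how a larger baseline B yields a larger disparity at the same depth, i.e. better depth resolution, which is the point made above about spacing the two cameras farther apart.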
 
...one thing is the university of calabria, where gelmini went to take her exam (no offense, just gossip), and quite another thing is the federico ii...
I'm sorry to jump on your post, but I felt personally involved. :biggrin:

I studied at the university of calabria (cosenza) and gelmini has nothing whatsoever to do with unical.
She went to university in Brescia and took the state exam at the University of Reggio Calabria... which has nothing to do with the University of Calabria.
In any case, her ignorance took her a long way anyway.

I am always the first to criticize the south (I am Calabrian, so I know what I am talking about), but I can assure you that the university is one of the few things that works well... at least as far as engineering is concerned.

read this:
http://it.wikipedia.org/wiki/università_della_calabria
As for the lynching, you can reply to your friends that ignorance, unfortunately, is found in all universities.

remind your friends that I did my thesis at ferrari (while studying at unical... and whoever is my friend on facebook can check the photos I took with schumacher), and that I spent 6 months in torino at a well-known company that works for the aeronautics industry, where I met people who graduated from the famous politecnico di torino who didn't know a blessed thing.
Now I work in Emilia Romagna and I have already had the opportunity to work for two rather structured companies, which let me deal with people who graduated from various polytechnics and universities (modena, padova, pisa, firenze, etc.)... and I can assure you that they are no great geniuses.

a person's education is not made by geographical position but by the person himself: if you have the will to do and to learn, you can become a good engineer even studying through remote video courses from an Indian university.
 
In fact, in advanced robotics we often make comparisons with children and adults to understand which limits belong to the sensory apparatus and which to the cognitive architecture. In the 1970s, artificial intelligence based on neural architectures seemed to have broken God's exclusive on creation, only to fail miserably twenty years later, because evidently something still escapes us.

the evolution of a self-learning system is compared with the brain of children; expert systems are trained to build heuristics by sampling human behavior.

It is a fact that a child's brain has an ever-increasing learning curve, and it is not known why, since even complex neural systems often tend to "get stuck" and degrade their cognitive level.
 
I took an exam in robotics (of course only the basics, because the study plan didn't provide for advanced courses) and I remember that at the end of the program future developments were discussed.

we talked about integrating neural networks to let robots learn different ways of interacting with the outside world on different occasions.
the famous "learning curve" you mentioned.
 
It is a fact that a child's brain has an ever-increasing learning curve, and it is not known why, since even complex neural systems often tend to "get stuck" and degrade their cognitive level.
Maybe because, fortunately, we are "alive" and they are not... :rolleyes:
 
Maybe because, fortunately, we are "alive" and they are not... :rolleyes:
It's a long diatribe that ends up in philosophy.
if you consider the human being as a body and soul that must answer to heavenly laws, then what you say is correct. But if you consider man as a set of carbon-based chemical compounds, which is exactly how science and medicine consider him, the horizon changes completely.

If man is able to do what he does for purely topological reasons of his own architecture, it means that a topologically equivalent architecture, even if based on different chemical compounds (e.g. silicon instead of carbon), must be able to replicate the human being. The fact that we haven't managed to do it yet means we haven't guessed the right topology yet.

I don't want to go deeper into these debates, which can touch someone's sensibilities. Scientists have lost their lives for much less. In any case, both factions have their wrongs and their reasons, and I wouldn't feel like siding a priori with one or the other.
 
returning to the "external senses":
in our company we have some anthropomorphic robots that load/unload the machining centers.
the pieces are machined in two phases, and between one phase and the other they are taken out of the machine and put on a sort of bench (the robot does everything).
the robots are equipped with a camera that takes a photograph.
They told me that through this photo the robot remembers how to pick the piece up from the bench to reposition it in the machining center for the 2nd phase.

But if no one touches the piece, why does the robot need to "memorize" the grip?
 
There should be no need, unless the piece moves for some reason. It may also be that the robot does not actually know how it picked the piece up after the first phase, in the sense that there could be more than one possible orientation, while the second phase must be done with a given orientation; so, while the unloading can be done blind, the second grip must be made with full knowledge.

I don't know... try to explain better how the piece is made, how the bench is made, etc. etc.
 
exactly: the second phase requires a precise orientation because it has to perform a fairly delicate machining.

the first phase is nothing but grinding and finishing of the die-cast head of a piston pump.
the second phase involves drilling and grinding of what will become the valve ducts and seats.

Sorry, but couldn't the gripping position have been "memorized" during programming?
or is that not possible because the gripping surfaces still have to be "created"?
 
