• This forum is a machine-generated translation of www.cad3d.it/forum1, the Italian design community. Several terms are not translated correctly.

coincident entities at one point

  • Thread starter: Gio_S

Gio_S

Guest
Good morning, everyone. Maybe this is implemented in other programs, but not in my CAD.
I wrote two routines that I use to find and delete duplicates, either faces or lines. They are really heavy procedures, because the loop progressively compares each entity against all the following entities in the selection, then starts again with the next element. Heavy also because they must take into account how the entities were drawn, and compare them independently of the order of the vertices. Can anyone think of a more efficient system? Taking each entity in turn, building a small intersection window around it with a slight tolerance, and checking only against that, seems neither lean nor elegant to me. Of course that would let me do a single scan, with the added kindness of simplifying the selection by removing the entities already deleted, but I would rather not struggle to write it if there is a better idea.
If someone has an idea, thank you!
g.
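The single-scan idea asked about here can be sketched outside LISP. A minimal Python sketch (the function name and data layout are hypothetical, not CAD API calls): instead of comparing every entity against every later one, build an order-independent key per line, rounded to a tolerance grid, and keep only the first occurrence — one pass instead of nested loops.

```python
def dedup_lines(lines, tol=1e-8):
    """Keep the first occurrence of each line; drop duplicates.

    lines: list of ((x, y, z), (x, y, z)) endpoint pairs.
    A line a-b and its reverse b-a get the same key, so vertex
    order does not matter. Rounding to the tolerance grid makes
    nearly-equal coordinates compare equal.
    """
    def key(p):
        return tuple(round(c / tol) for c in p)

    seen = set()
    kept = []
    for a, b in lines:
        k = tuple(sorted((key(a), key(b))))  # order-independent key
        if k not in seen:
            seen.add(k)
            kept.append((a, b))
    return kept
```

This trades the pairwise comparison for one set lookup per entity; the caveat is that grid rounding can, in rare cases, place two points within tolerance on opposite sides of a grid boundary.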
 
What do you mean?
You wrote "my CAD" and it's not very clear.
I use IntelliCAD with AutoLISP, which is perfectly compatible with AutoCAD, but I don't have the various VBA and VBScript functions; in practice I work only with LISP.
I specified that because maybe AutoCAD can delete duplicates natively, and I didn't want to be told "use such-and-such item in such-and-such menu" :)
 
Okay, thank you for clarifying.

It is not entirely true that IntelliCAD or other "clones" (the term is debatable) of AutoCAD, or AutoCAD itself, cannot use VBA, VBS or other programming languages.

I have tried, and still use, different programming languages to automate CAD and MS Office packages.

automation scripts/applications use what are called client/bridge connections: they attach to the software and use its APIs for automation (when available).
 
That's why I wrote that "mine" doesn't support it. IntelliCAD is a consortium, so there is no counting the brands and versions in circulation. Mine is very old, but I keep it both because it consumes few resources and because it has everything I need; what it didn't have, I made to measure.
But this cleaning feature doesn't satisfy me because of its slowness. I don't use it often, but I happen to need it when I flatten the 3D onto a plane for various illustration purposes and explanations for those who do the 3D printing, and I find myself having to clean up huge redundancies.
Used on drawing parts, the function is fast, but if I'm in a hurry and want to create a clean 2D in one go from a full 3D, the routine described above takes forever.
I know well that AutoCAD can generate views and sections, but for my general use what I have built to measure over many years is more functional for me than all the rest, and above all it doesn't demand resources.
I work simultaneously with IntelliCAD, MeshLab and POV-Ray, and this combination, customised as it is, satisfies my needs and generates files much cleaner and lighter than any automatic modeller.
Now, to fill this gap, I'd just need a selection function based on getpoint, but I can't come up with a way to make it faster than the routine works now. In practice the problem is: "it is useless to scan entities far away from the one being examined, but how do I skip them without forcing myself into multiple selections with small intersection windows?"
 
Okay, I understand, but without seeing the script you wrote, I can't give you targeted advice.

However, if you want to avoid scanning distant entities, you must create groupings of entities by zone/coordinates.
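The zone/coordinate grouping suggested here can be sketched with a hash grid. A Python sketch (the data layout is hypothetical): bucket each entity by the grid cell of its midpoint, so the pairwise check only runs inside each bucket. Two exactly coincident entities always share a midpoint and therefore a cell, so entities "straddling" a cell boundary are only a concern for tolerance-based matching (in which case neighbouring cells must be checked too).

```python
from collections import defaultdict

def group_by_cell(entities, cell=10.0):
    """Bucket line entities into cubic grid cells by their midpoint.

    entities: list of ((x, y, z), (x, y, z)) endpoint pairs.
    Entities whose midpoints fall in different cells can never be
    exact duplicates of each other, so the nested comparison loop
    can run per bucket instead of over the whole selection.
    """
    buckets = defaultdict(list)
    for a, b in entities:
        mid = tuple((a[i] + b[i]) / 2.0 for i in range(3))
        cell_id = tuple(int(c // cell) for c in mid)
        buckets[cell_id].append((a, b))
    return buckets
```

With buckets of roughly constant occupancy, the quadratic pairwise cost applies only within each small bucket rather than to the whole selection.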
 
The way my LISP works is very simple: the main loop runs through all the elements of the selected list, starting from the first element and moving on through the later ones. If, in the nested loop, the pilot element meets an equal one, it exits the nested loop immediately, the pilot is deleted, and control passes to the next element. Obviously, as the main loop advances, its index grows, and the nested loops become shorter and shorter. The check for equal faces is no small matter either: you must take into account that the order of the vertices can be clockwise or anticlockwise, and can begin from the first vertex as well as from the second, the third, or the fourth. Moreover, if the face has three sides, any of its vertices could be the duplicated one, although typically the duplicated one is the last. On lines it is faster: just check whether a-b equals b-a.
I do not think that establishing grouping areas is a robust method; there may be entities straddling the boundaries, and managing them becomes complicated.
If we see no other way, I think, as I wrote, I will have to create a second small selection, in a loop that handles every element of the selection. But that is not a trivial task either: the window must be three-dimensional and match the bounding box of the pilot element, so as to grab and analyse all and only the entities that intersect it. . .
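The vertex-order problem described above (clockwise or anticlockwise, starting from any vertex) can also be reduced to a single canonical key per face, instead of checking every combination pairwise in the nested loop. A hedged Python sketch of the idea:

```python
def face_key(verts, tol=1e-8):
    """Canonical key for a face given as a list of 3 or 4 vertices.

    The same polygon written clockwise or anticlockwise, or started
    from any vertex, yields the same key: generate all rotations of
    the vertex cycle and of its reversal, and pick the
    lexicographically smallest.
    """
    snapped = [tuple(round(c / tol) for c in v) for v in verts]
    candidates = []
    for cycle in (snapped, snapped[::-1]):
        for i in range(len(cycle)):
            candidates.append(tuple(cycle[i:] + cycle[:i]))
    return min(candidates)
```

With such a key, face deduplication becomes the same one-pass lookup trick as for lines: the 2·n variants of each face are generated once, rather than compared against every later face.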
 
I do not think that establishing grouping areas is a robust method; there may be entities straddling the boundaries, and managing them becomes complicated.
If we see no other way, I think, as I wrote, I will have to create a second small selection, in a loop that handles every element of the selection. But that is not a trivial task either: the window must be three-dimensional
you say establishing a grouping area is not robust, but then how do you group?
Maybe we are saying the same thing in different words: small selections ≈ groupings.

Still going by your detailed description of your code, I wanted to point out a detail that maybe I am misreading, but let's see if I can explain myself.

If the main cycle runs from 1 to 1000 (a number, say) and the nested cycle eliminates 100 (again, say) entities, how does the main cycle behave?
 
If the main cycle runs from 1 to 1000 (a number, say) and the nested cycle eliminates 100 (again, say) entities, how does the main cycle behave?
No, that's not it. If the nested cycle eliminated an entity the moment it met it, that would break the selection for the later nested cycles, and the main cycle would have to somehow take that into account. But there is a simple solution.
The main cycle works like this: it takes the elements in progressive order; call the current index idx for clarity.
Each nested cycle starts with idx as pilot, comparing it against the following elements, from idx+1 on.
If the nested cycle finds an idx+n equal to the comparison idx, it ends its nested cycle and jumps out.
The main cycle resumes by incrementing idx, but in the meantime it deletes the element at idx, which has already been used and of which a repetition has been found.
I said that the procedure speeds up as it goes, not because it shrinks the list being scanned, but because each nested loop is shorter and shorter, starting every time from idx+1.
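A side note on the "shorter and shorter" nested loops: the triangular sum of the shrinking passes is still on the order of n²/2 comparisons in the worst case (no duplicates found). A quick Python check of the arithmetic, illustrative only:

```python
def worst_case_comparisons(n):
    """Total comparisons when nothing is deleted: each pilot idx is
    checked against elements idx+1 .. n-1, for every idx in 0 .. n-1."""
    return sum(n - idx - 1 for idx in range(n))
```

So doubling the selection roughly quadruples the run time, which is why shrinking the per-pilot search area pays far more than micro-optimising the inner comparison.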
 
I said that the procedure speeds up as it goes, not because it shrinks the list being scanned, but because each nested loop is shorter and shorter, starting every time from idx+1.
In fact, that was clear, like all the explanations of the code you gave.

On the subject of improving performance: every time you eliminate a single entity, several operations are carried out that can make the code slower, especially with large quantities of entities to process.

Under certain conditions, when I delete an element from the "selection set" (say I have 10 entities in the "selection set" and delete the one in the middle), a "shift" of the "selection set" is performed to keep the data contiguous, and this (automatic) operation degrades performance.

So if this is your case, this could be a point to re-evaluate.
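Whether the CAD's selection sets really shift on every deletion is an assumption here, but the general effect is easy to show with any contiguous container. A Python sketch of the two strategies:

```python
def delete_each(items, doomed):
    """Delete doomed items one by one: each removal from the middle
    of a contiguous list shifts everything after it (O(n) per delete)."""
    items = list(items)
    for d in doomed:
        items.remove(d)          # linear search + shift, every time
    return items

def delete_deferred(items, doomed):
    """Mark first, compact once: a single O(n) pass at the end."""
    doomed = set(doomed)
    return [x for x in items if x not in doomed]
```

If the selection set behaves like the contiguous case, collecting the doomed entities first and deleting them in one compaction at the end avoids the repeated shifting.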
 
I see the concept you mean, but...
I think the selection set remains intact because the indexing does not change: I do the incrementing myself, and entdel deletes, as needed, only what I have to delete.
Look at how the main loop starts: I read the size max once and then never touch it again. In my opinion a kind of nil is left in place of the deleted entity; in any case I will check: as soon as I test the program with a nice comment in front of entdel, I'll let you know.
I think it is the comparison that takes up the time, but I'll try removing the deletion and compare the respective timings.
(if (/= nil selez)
  (progn
    (setq max (sslength selez) contasel 0)
    (while (> max contasel)
      (setq en (ssname selez contasel) alist (entget en))
      (if (or (= "3dline" (cdr (assoc 0 alist))) (= "line" (cdr (assoc 0 alist))))
        (progn
          .......................
 
Better to get practical. Frankly I don't see how to optimize it, so I'm putting the entire code on the line; the one for faces is not here because its comparison checks are kilometric.
The scanning concept there is identical, but this code is compact.
Note:
the third tolerance argument of equal is indispensable because of the well-known problem with doubles: if the second point comes from a rotation, it will never be exactly equal.
Note 2:
the princ "." I know wastes time, but I prefer to see the progress on screen; it becomes a progress bar.


(defun deletelinecopy ()
  (prompt "\ndelete line copy new: ") (setq selez (ssget)) (setq dead 0 spy 0)
  (if (/= nil selez)
    (progn
      (setq max (sslength selez) contasel 0)
      (while (> max contasel)
        (setq en (ssname selez contasel) alist (entget en))
        (if (or (= "3dline" (cdr (assoc 0 alist))) (= "line" (cdr (assoc 0 alist))))
          (progn
            (setq dead 0)
            (setq pnta (cdr (assoc 10 alist)) pntb (cdr (assoc 11 alist)))
            (setq spy 1)
            ; nested loop: compare pilot en against the entities after it
            (while (and (> (- max contasel) spy) (/= dead 1))
              (setq en2 (ssname selez (+ contasel spy)) alist2 (entget en2))
              (setq pnta2 (cdr (assoc 10 alist2)) pntb2 (cdr (assoc 11 alist2)))
              ; same direction a-b = a'-b', or reversed a-b = b'-a'
              (if (and (equal pnta pnta2 0.00000001) (equal pntb pntb2 0.00000001)) (setq dead 1))
              (if (and (equal pnta pntb2 0.00000001) (equal pntb pnta2 0.00000001)) (setq dead 1))
              (setq spy (+ spy 1))
            ) ; wend nested
            (if (= dead 1) (entdel en))
            (setq dead 0 spy 0)
          ) ; end progn
        ) ; end if
        ;(princ (rtos contasel))
        (princ ".")
        (setq contasel (+ 1 contasel))
      ) ; end while
    ) ; end progn
  ) ; end if
)
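The note about the third argument of equal reflects standard floating-point behaviour, not a LISP quirk; the same effect shows up in any language. A Python illustration (the tolerance mirrors the 0.00000001 used in the listing; the helper names are hypothetical):

```python
import math

def rotate_z(p, angle):
    """Rotate a point around the Z axis; rotating a full turn should
    return the starting point, but floating-point error creeps in."""
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (x * c - y * s, x * s + y * c, z)

def almost_equal(p, q, tol=1e-8):
    """Counterpart of LISP's (equal p q tol): componentwise fuzz test."""
    return all(abs(a - b) <= tol for a, b in zip(p, q))

p = (1.0, 2.0, 0.0)
q = p
for _ in range(360):                 # 360 one-degree rotations
    q = rotate_z(q, math.radians(1))
# q is typically not bit-for-bit equal to p, but almost_equal holds.
```

This is exactly the case described in Note: a point that has been through a rotation will practically never compare equal without the tolerance argument.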

 
Okay, later today I'll look at it more carefully.

About Note 2: I also prefer to see the progression of the process, and I don't think it represents a problem for performance, but if you have any doubts, just comment it out of the code and see the difference in execution.

Did you profile the script to figure out which part of the code takes longer?
 
I did it just now, with results even more striking than I expected.
I stressed the code to the maximum, with an extreme selection in which I had doubled all the elements; in practice it always had to delete.
So: with deletion, 13 minutes 20 seconds;
without deletion, 13 minutes 12 seconds.
See that the deletion does not weigh? The code is up there; anyone who wants can test it.
Note how I suppressed "entdel", though:
I had to keep the weight of the comparison if, so I swapped the two lines of code

(if (= dead 1)(entdel en))
(setq dead 0 spy 0)

to become

(setq dead 0 spy 0)
(if (= dead 1)(entdel en))

that is, resetting the variable "dead" (the flag meaning "a duplicate was found, delete") before the test, so that entdel never fires.

Instead, believe me, the princ "." output that grows my status line, that one certainly does weigh, but I can take it out at any moment.
The problem is to find a smart way to limit the selection from the outset, because, the cycles being nested, the gain would be exponential.

Why, in my opinion, does princ "." weigh on every cycle?
I haven't even tested it. With other languages, if you want text output to the screen while doing heavy computation, you have to insert a rather costly "doevents" or something similar, otherwise the runtime buffers the output and refreshes it only when it has time, that is, when your calculation cycles are finished.
The "doevents", or its equivalent, obliges you to look after that output instruction yourself; in LISP I don't have to add it, which means the interpreter already does that work... otherwise you would see the classic spinning wheel until the end of the loops, with a frozen status bar. That's what usually happens...
I could test it, but it's pointless. Sooner or later I will move it next to entdel, so that it marks only each actual deletion instead of every pass of the main cycle.
Thank you!
 
Do you realize that I have never seen your code in full, and that I have been making assumptions based on your descriptions?

After your last post, I don't know whether to continue the conversation or not; I have the legitimate doubt that this "code to optimize" is just a pretext. Would you please give me your confirmation?

Thank you.
 
You also answered me, "okay, later today I'll look at it more carefully."
What pretext for what? Are you kidding me?
 
How could you not have seen it?
I published it all here.
Yes, I have seen it, but when I say "in full" I mean complete in every part: you posted only the part that concerns lines.
Anyway, I saw the code this morning and hadn't had a chance to read it yet.
You also answered me, "okay, later today I'll look at it more carefully."
What pretext for what? Are you kidding me?
On a first reading I misinterpreted the closing notes of your message; now that I reread it... you know, the rush... :oops:

Maybe now I can take a look at the code you posted. . .

You said that, in your tests, the execution of the script took just over 13 minutes (13:12 - 13:20): how many entities was it working on?

Can you add some extra instrumentation to the code to get a more precise picture of the execution times?

It would be useful to understand which part of the code requires the most time.

ps:
Can you share an example file so that I can test the script?
 
The code I posted, which you skimmed over, is quite complete, granted that it concerns the linear entities; it is a twin of the one for faces, which I have not posted.
And it would be just as much a twin if it compared polyline entities or anything else.
This is because, as I had already explained, the code for faces is identical in how the scanning loops are set up, although within the cycles the vertices to compare and the checks on vertex order obviously multiply, so it is less immediate and compact to analyse. The big problem is limiting the selection, to decrease the number of cycles. But I find myself repeating things already said and written, so I think we aren't understanding each other.
If you think you can also optimize the method of loops and comparisons, the code is there to see, and I would be happy if you told me concretely where you would intervene and how. All the better; but I'm convinced there's not much left to invent. The test, conducted on a few thousand faces, served to verify the "weight", as you suggested, of the deletion inside the cycle, which proved proportionally insignificant even when suppressed. This, I repeat, is because what weighs is the multiplicity of cycles.
My intention, already expressed in the topic, is to find a way to limit the selection a priori, because what weighs is the number of comparisons, which grows steeply as the number of entities increases. I thank you, but since I don't like the turn the discussion has taken, maybe it's better to say goodbye.
 
