• This forum is the machine-generated translation of www.cad3d.it/forum1 - the Italian design community. Several terms are not translated correctly.

RAM memory usage

  • Thread starter: tele5palla

tele5palla

Guest
Hello everyone,
I am trying for the first time to run a FEM analysis of a scale model with ANSYS Workbench, on a workstation with 12 GB of RAM, Windows 7, and two 1 TB hard drives in RAID 0.
I am using a model created in SolidWorks, and for the first time I am running into the limits of the PC's capacity.
I managed to build a fairly dense mesh of about 700,000 elements, but the analysis crashes.
Observing the Task Manager, I can clearly see that all the RAM fills up at the moment the PC crashes.
Reducing to 130,000 elements (if I remember correctly), the analysis completes successfully.
I would like to understand why, when I launch the analysis, it does not tell me the amount of memory needed.
Then I would like to understand what I could do to run the analysis with 700,000 elements... would setting up the swap file be a solution?
Thank you.
 
some considerations:
1-what kind of analysis is it? linear or nonlinear?
2-what kind of solver do you use? direct (sparse) or iterative?
3-do you really need 700k elements? that's an enormous number!

some tips:
1-review your model in SolidWorks, removing all unnecessary features and simplifying the geometry to eliminate small faces and edges.

2-review the way you mesh, taking care over the settings. if you are willing to tolerate longer meshing times, introduce the Hex Dominant method in the mesh controls. this method first meshes with tetrahedra and then combines them where possible to form hexahedra. the result is a mesh with, on average, 1/3 of the elements, with better quality and much faster solve times.

3-use when possible the direct solver (Analysis Settings > Solver Type > Direct) and, after launching the solve, click on Solution > Solution Information > Solver Output. there, in the first 2-3 minutes of the analysis, you will clearly see the amount of allocated memory. check that the solver runs, if possible, "in-core", i.e. avoiding swapping to disk. when swapping, calculation times increase by 1-2 orders of magnitude.
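As a rough aid, the memory figures mentioned above can be pulled out of the solver output with a few lines of script. This is only a sketch: the exact wording of these lines varies between ANSYS versions, so the filter below simply keeps any line that mentions "memory", and the sample text is made up for illustration.

```python
def memory_lines(solver_output: str) -> list[str]:
    """Return the lines of a solver output that mention memory.

    The exact phrasing differs between ANSYS versions, so we keep
    any line containing the word "memory" (case-insensitive).
    """
    return [line.strip()
            for line in solver_output.splitlines()
            if "memory" in line.lower()]

# Made-up fragment of solver output (illustrative only, not real wording):
sample = """\
 Sparse solver selected.
 Memory allocated for solver           =  9500.0 MB
 Memory required for in-core solution  = 18200.0 MB
 Optimal out-of-core mode selected.
"""
for line in memory_lines(sample):
    print(line)
```

Comparing the "required for in-core" figure against installed RAM tells you immediately whether the solve will stay in-core or spill to disk.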
 
Thanks Stefano!

My problem is that, to mesh with hexahedra, I only know how to use the sweep method. So I am forced to cut the model into pieces and glue them back together with contact elements.
I would like to do a linear structural analysis... but now that you make me think about it, the contacts make the analysis nonlinear, right?
As if that weren't enough, I have a lot of small parts and plates, which is why the element count is so high.

I had understood that it was better to exploit contacts, in my head only to reduce the number of elements, but evidently linearity also comes into it.
I think I'll start by trying Hex Dominant.


I read a lot in the help about the sparse direct solver and the in-core and out-of-core memory modes, and I saw in the solver output the amount of memory "requested" to work fully in-core. But the requests are greater than 12 GB, even much greater, so I use the "optimal out of core" mode, which should also use virtual memory at the expense of calculation times. I just can't tell whether it is actually doing that... In short, it is not clear to me how the virtual memory is being used here.

At this point I launch the analysis... there are moments when RAM usage hits 100% and the computer freezes.
What is happening at that moment? Is it working on the disk and slowing down because of that?
Is there any solution?


Thank you very much for your interest!
claudio
 
when the ANSYS solver goes into "optimal out of core" mode, the solution keeps going, but muuuch slower! it's a matter of patience, but the result will come!
out of curiosity, what are you analyzing that needs so many elements?
 
Something big :tongue::tongue:

Well, now you've planted a doubt in me. Do you know how I can see how much of the hard disk is actually working as virtual memory?

In any case, I understand that the right approach is to substantially reduce the number of elements. I'll try!
 
the amount of disk used is the difference between the "in-core memory" and the "optimal out of core" memory figures.
if you look in the subfolders of the Workbench project, you should find a ".page" file, which is the on-disk image of the memory in use.
paradoxically, you could even delete the Windows page file: ANSYS takes care of allocating its own disk swap by itself.
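To answer the question about how much disk is actually serving as solver swap, one crude check is to total the size of the .page files under the project directory. A minimal sketch, assuming a hypothetical project path; the folder layout of a real Workbench project may differ:

```python
from pathlib import Path

def page_file_usage(project_dir: str) -> int:
    """Sum the sizes (in bytes) of all .page files below project_dir."""
    return sum(p.stat().st_size
               for p in Path(project_dir).rglob("*.page"))

# Hypothetical usage (the path is an example, not a real convention):
# used = page_file_usage(r"C:\sim\my_project_files")
# print(f"{used / 2**30:.1f} GiB of disk used as solver swap")
```

Watching this total grow during a solve is a direct way to confirm the solver really is running out-of-core.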
 
But be careful: swap is bad! Since the hard drive is hundreds of times slower than the RAM, the simulation time will increase accordingly; you risk getting the solution in a few years. Don't you have a server you could use? Or, if you still have free slots in your machine, adding 8 GB now costs about a hundred euros.
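To put rough numbers on that warning, here is a back-of-the-envelope sketch. The bandwidth figures are illustrative assumptions (a few GB/s effective for RAM, tens of MB/s for sustained access on a spinning disk), not measurements:

```python
# Illustrative bandwidths only; real values depend heavily on the
# hardware and the access pattern.
ram_bandwidth_mb_s = 8000   # assumed effective RAM bandwidth
hdd_bandwidth_mb_s = 60     # assumed effective HDD bandwidth

slowdown = ram_bandwidth_mb_s / hdd_bandwidth_mb_s
print(f"Memory traffic served from disk is ~{slowdown:.0f}x slower")

# If an in-core solve would take 1 hour, a fully disk-bound solve
# at this ratio would take on the order of:
in_core_hours = 1
print(f"~{in_core_hours * slowdown:.0f} hours, i.e. days instead of hours")
```

In practice only part of the memory traffic hits the disk, so the real penalty sits somewhere between this worst case and no penalty at all, which matches the "1-2 orders of magnitude" figure quoted earlier in the thread.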
 
For now I'm trying to lighten the analysis precisely so I can try to work in-core only.
I'm also considering replacing the six 2 GB modules with 4 GB ones, assuming our machine supports it.
 
