Problem during boundary condition assembly

Hello Kostas,

The result I got was

Variable       f                              1 copy   fem dependant     4096 doubles.
Variable       mult_on_f                      1 copy   fem dependant      670 doubles.
Variable       mult_on_f_2                    1 copy   fem dependant      715 doubles.
Variable       mult_on_f_3                    1 copy   fem dependant      682 doubles.
Variable       mult_on_f_4                    1 copy   fem dependant      731 doubles.
Variable       mult_on_f_5                    1 copy   fem dependant      791 doubles.
Variable       mult_on_f_6                    1 copy   fem dependant      246 doubles.
Variable       mult_on_f_7                    1 copy   fem dependant      257 doubles.
Variable       mult_on_f_8                    1 copy   fem dependant        0 double.
Data           B1                             1 copy   fem dependant     4096 doubles.
Data           B2                             1 copy   fem dependant     4096 doubles.
Data           B3                             1 copy   fem dependant     4096 doubles.
Data           DirichletData                  1 copy   fem dependant     4096 doubles.
Data           E1                             1 copy   fem dependant     4096 doubles.
Data           E2                             1 copy   fem dependant     4096 doubles.
Data           E3                             1 copy   fem dependant     4096 doubles.
Data           zeros                          1 copy   fem dependant     4096 doubles.

f is the plasma density function in particles per meter^3 per (meter/millisecond)^3, and E and B are the electric and magnetic fields. I am not sure what the mult_on_f variables are, but I would guess from context that they are derivatives (though I am not sure why 8 would be needed in 6D space).

DirichletData is my ambient plasma density function (used for the face the plasma enters on), and zeros is a matrix of zeros I was using for boundary conditions.

Sincerely,
Eric Comstock

mult_on_f_... are multiplier variables that are added to the model automatically when you use add_Dirichlet_condition_with_multipliers or some similar add_..._with_multipliers function. In general I prefer to implement all boundary conditions myself using GWFL and the add_linear_term method instead of black-box methods like add_..._with_multipliers, but this is not essential.

The issue here is that mult_on_f_8 has 0 degrees of freedom, which indicates that you applied this condition on an empty region. For some reason some of your mesh regions become empty in the new version of GetFEM. Try to find out where this deviation from the old version occurs.
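
You can also check for this directly before assembling; a minimal sketch (assuming the Mesh.regions/Mesh.region calls of the Python interface, with m being your gf.Mesh):

# print the number of (convex, face) entries in each declared region;
# a region that has become empty will report zero entries
for rid in m.regions():
    print(rid, m.region(rid).shape[1])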

Have you compiled the last released version 5.4.4 or the development version from git?

Hello Kostas,

Got it - I will see if I can switch to add_linear_term.

I figured out what was going on: I was building my mesh slightly differently between the two file versions, because one was actually a file I had been using to test whether my meshing was causing the delay. The one that works correctly gave

List of model variables and data:
Data           B1                             1 copy   fem dependant     4096 doubles.
Data           B2                             1 copy   fem dependant     4096 doubles.
Data           B3                             1 copy   fem dependant     4096 doubles.
Data           DirichletData                  1 copy   fem dependant     4096 doubles.
Data           E1                             1 copy   fem dependant     4096 doubles.
Data           E2                             1 copy   fem dependant     4096 doubles.
Data           E3                             1 copy   fem dependant     4096 doubles.
Variable       f                              1 copy   fem dependant     4096 doubles.
Variable       mult_on_f                      1 copy   fem dependant      653 doubles.
Variable       mult_on_f_2                    1 copy   fem dependant      690 doubles.
Variable       mult_on_f_3                    1 copy   fem dependant      687 doubles.
Variable       mult_on_f_4                    1 copy   fem dependant      780 doubles.
Variable       mult_on_f_5                    1 copy   fem dependant      804 doubles.
Variable       mult_on_f_6                    1 copy   fem dependant      247 doubles.
Variable       mult_on_f_7                    1 copy   fem dependant      231 doubles.
Data           zeros                          1 copy   fem dependant     4096 doubles.

It is still somewhat odd, because the only difference between the meshes is that the newer one had its interior points closer together and its exterior points farther apart; they had the same number of points, arranged in a very similar way. The boundaries should have also been the same.

I got the source directly from https://git.savannah.nongnu.org/git/getfem.git with the git clone command, as described in https://getfem.org/install/install_linux.html.

Sincerely,
Eric Comstock

The syntax is

md.add_filtered_fem_variable("mult1", mf, RG1)
md.add_linear_term(mim, "mult1*(f-f0)", RG1)

which is equivalent to

md.add_filtered_fem_variable("mult1", mf, RG1)
md.add_linear_term(mim, "Test_mult1*(f-f0)", RG1)
md.add_linear_term(mim, "mult1*Test_f", RG1)

If mf is for a vector field, then * needs to be replaced with . or another appropriate contraction.
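
For example, for a hypothetical vector unknown u with prescribed value u0 on a vector MeshFem mf_vec (names assumed, just to illustrate the contraction):

md.add_filtered_fem_variable("mult1", mf_vec, RG1)
md.add_linear_term(mim, "mult1.(u-u0)", RG1)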


Hello Kostas,

Just to confirm, will

md.add_filtered_fem_variable("mult1", mf, 40)
md.add_linear_term(mim, 'mult1*(DirichletData - f)', 40)

have the same effect (adding the initialized FEM data DirichletData as a Dirichlet boundary condition over region 40) as the expression

md.add_Dirichlet_condition_with_multipliers(mim, 'f', mf, 40, 'DirichletData')

?

When I run the code using the former, I get the ‘Solve with MUMPS failed: error -9, increase ICNTL(14)’ error, but not with the latter.

Sincerely,
Eric Comstock

yes, you understood correctly; the two codes should be equivalent. Is f a scalar field or a vector field?

Weird that you get this error; it means that your stiffness matrix is not very sparse after you change from one syntax to the other.

Hello Kostas,

f is a scalar field - I am just simulating the density of ions.

I finally figured out how to recompile it with a different ICNTL(14) for MUMPS (I set it to add 800 instead of 80 in gmm_MUMPS_interface.h), which led to a different error:

Level 1 Warning in ../../src/gmm/gmm_MUMPS_interface.h, line 150: Solve with MUMPS failed: matrix is singular

I am not sure why this is happening, since it does run and produces results with the previous syntax.

Edit: Actually, in the current version I have just replaced one boundary condition with the new syntax, to minimize the changes between the codes. Could this be the issue? Do I need to switch all BCs to the new syntax for it to work?

Sincerely,
Eric Comstock

I did not want to tell you to change ICNTL(14), because the tangent matrix should not have changed between the 2 syntaxes in the first place. You do not need to change all of your BCs to this syntax. Just one is fine for testing. You can try to see if the error persists with this 3-line syntax instead of the 2-line syntax you tried before:

md.add_filtered_fem_variable("mult1", mf, 40)
md.add_linear_term(mim, 'Test_mult1*(DirichletData - f)', 40)
md.add_linear_term(mim, '-mult1*Test_f', 40)

you can also try to replace add_linear_term with add_nonlinear_term in either the 2-line or the 3-line syntax.
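
For concreteness, the 3-line variant with nonlinear terms would just be the direct substitution (same names as above):

md.add_filtered_fem_variable("mult1", mf, 40)
md.add_nonlinear_term(mim, 'Test_mult1*(DirichletData - f)', 40)
md.add_nonlinear_term(mim, '-mult1*Test_f', 40)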


Hello Kostas,

Thank you! I am keeping only 1 BC with the new syntax, as before.

I tried the three-line syntax, and I got the same “matrix is singular” error message as before.

For the three-line syntax with nonlinear terms, I got the error message

Trace 2 in getfem_models.cc, line 2652: Global generic assembly RHS
 iter   0 residual  1.16746e+15
Trace 2 in getfem_models.cc, line 2654: Global generic assembly tangent term
Level 1 Warning in ../../src/gmm/gmm_MUMPS_interface.h, line 150: Solve with MUMPS failed: matrix is singular
Singular tangent matrix: perturbation of the state vector.
Trace 2 in getfem_models.cc, line 2652: Global generic assembly RHS
Trace 2 in getfem_models.cc, line 2654: Global generic assembly tangent term
Level 1 Warning in ../../src/gmm/gmm_MUMPS_interface.h, line 150: Solve with MUMPS failed: matrix is singular
Singular tangent matrix: perturbation of the state vector.
Trace 2 in getfem_models.cc, line 2652: Global generic assembly RHS
Trace 2 in getfem_models.cc, line 2654: Global generic assembly tangent term
Level 1 Warning in ../../src/gmm/gmm_MUMPS_interface.h, line 150: Solve with MUMPS failed: matrix is singular
Singular tangent matrix: perturbation of the state vector.
Trace 2 in getfem_models.cc, line 2652: Global generic assembly RHS
Trace 2 in getfem_models.cc, line 2654: Global generic assembly tangent term
Level 1 Warning in ../../src/gmm/gmm_MUMPS_interface.h, line 150: Solve with MUMPS failed: matrix is singular
Singular tangent matrix: perturbation failed, aborting.

For the two-line syntax with nonlinear terms, I got the same error as above.

Sincerely,
Eric Comstock

please start with checking that mult1 has the same size as the respective mult_on_f_? in the output of md.variable_list().
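
If you prefer to check programmatically rather than reading the printed list, md.variable(name) returns the current value vector of a variable, so its length is the dof count; a small sketch:

# compare the multiplier sizes directly
print(len(md.variable("mult1")))
print(len(md.variable("mult_on_f")))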

Hello Kostas,

Thank you! It looks like they do not have the same size - mult1 is 3058 doubles, while the mult_on_f_? variables are around 200 to 650.

Sincerely,
Eric Comstock

Hello Kostas,

I have not been able to replicate the Dirichlet BCs with linear and nonlinear terms. I think the problem may be in mult1, since it is much larger than the other multipliers that are automatically generated.

Do you know why this might be the case?

Sincerely,
Eric Comstock

there is a small difference between the two syntaxes. I think that in order to get the exact equivalent of the syntax that is working for you, you need to replace

md.add_filtered_fem_variable("mult1", mf, 40)

with

md.add_multiplier("mult1", mf, "f", mim, 40)

the add_multiplier method will perform some extra checks and remove the null space of the multiplier. This is rarely needed, but if it is indeed needed then it is good to know it, so that you know how exactly your model is defined.
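
So in the context of the 2-line syntax from before, the full replacement reads (sketch with region 40, as in your code):

md.add_multiplier("mult1", mf, "f", mim, 40)
md.add_linear_term(mim, 'mult1*(DirichletData - f)', 40)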


Hello Kostas,

Thank you! That seems to have worked.

Sincerely,
Eric Comstock

Now you need to understand why your multiplier has a null space that needs to be removed. Knowing exactly what you are doing is the main motivation for not using black-box methods like add_Dirichlet_condition_with_multipliers.


Hello Kostas,

I got the program to work using only add_linear_term for my Dirichlet boundary conditions (and add_source_term_brick for my Neumann conditions), and it is generating the same result again. However, it seems to be taking about twice as long to run as before.

How would I go about finding out why the multiplier has a null space? Is there a way to view the multiplier using GetFEM?

Sincerely,
Eric Comstock

difficult to say without more details about your model.

sure, like this:

mfmult = md.mesh_fem_of_variable("mult")
R = mfmult.reduction_matrix()
E = mfmult.extension_matrix()

Then you can examine the matrix R or E to understand how the null space is eliminated.

if you want to convert these sparse matrices to some other format and export them, this is the conversion to scipy:

import scipy.sparse
RR = scipy.sparse.csc_matrix((R.csc_val(), *(R.csc_ind()[::-1])))

But to be honest, the best way to understand why your multiplier has a non-empty null space is just by reflection: think about what your region is, what the FE basis on this region is, calculate by hand how many dofs you should have there, and so on.
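
As a starting point for that hand count, you can ask the mesh_fem how many of its basic dofs sit on the region; a sketch (assuming the basic_dof_on_region call of the Python interface):

# number of basic dofs of mf located on region 40, to be compared with
# the multiplier sizes reported by md.variable_list()
print(len(mf.basic_dof_on_region(40)))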


Hello Kostas,

Thank you again for the help!

I have been running some more tests, and I have found some very odd results.

First, using the md.add_multiplier("mult40", mf, "f", mim, 40) method, the solver appears to converge, but only for very simple test cases where the output forces are all zero. If I try to use it for a real case, the matrix becomes singular with the error message

Level 1 Warning in ../../src/gmm/gmm_MUMPS_interface.h, line 150: Solve with MUMPS failed: matrix is singular

Second, the md.add_multiplier("mult40", mf, "f", mim, 40) method appears to give similar multiplier sizes (around 200 to 700) to md.add_Dirichlet_condition_with_multipliers(mim, 'f', mf, 46, data_for_velocity_boundaries), but they are not exactly the same. Both are very different, however, from the 3000-4000 multiplier sizes I got when just using md.add_filtered_fem_variable("mult1", mf, 40).

However, when I ran the

mfmult = md.mesh_fem_of_variable("mult")
R = mfmult.reduction_matrix()
E = mfmult.extension_matrix()

code, the resulting reduction and extension matrices were both 0x0:

2025-05-17 09:32:29,024 [INFO] Reduction matrix: matrix(0, 0)
2025-05-17 09:32:29,025 [INFO] Extension matrix: matrix(0, 0)

(I printed them in my log file)

So I am confused about how this is happening, since if these are 0x0 it would imply that the U and V matrices are also 0, which (I think?) would imply that the entire linear problem being solved is a zero matrix. These boundary conditions should be applied to 5D slices of 6D space, so I am wondering if I could be applying the bulk assembly term wrong. One thought I had is that I might need to apply my main linear term only on the bulk of the simulation and not the edges, but I am unsure how to do this in GetFEM.

If it helps, the part of my code calling GetFEM is below.

    # Define hyperrectangular mesh around the spacecraft at 0, 0 and the plasma momentum
    m               = gf.Mesh('regular simplices', grid_x, grid_x, grid_x, grid_p + v_x, grid_p + v_y, grid_p)
    
    # Confirm that this function operates in 6 dimensions
    dims            = 6
    
    # create a MeshFem for a field of dimension 1 (i.e. a scalar field)
    mf              = gf.MeshFem(m, 1)
    
    # assign the P1 simplex fem to all convexes of the MeshFem
    mf.set_fem(gf.Fem('FEM_PK(' + str(dims) + ',1)'))
    
    # view the expression of its basis functions on the reference convex
    logging.debug('Basis functions per element: ' + str(gf.Fem('FEM_PK(' + str(dims) + ',1)').poly_str()))
    
    # an exact integration will be used
    mim             = gf.MeshIm(m, gf.Integ('IM_NC(' + str(dims) + ',1)'))
    
    # detect the border of the mesh
    fb0             = m.outer_faces_with_direction([ 1., 0., 0., 0., 0., 0.], 0.01)
    fb1             = m.outer_faces_with_direction([-1., 0., 0., 0., 0., 0.], 0.01)
    fb2             = m.outer_faces_with_direction([0.,  1., 0., 0., 0., 0.], 0.01)
    fb3             = m.outer_faces_with_direction([0., -1., 0., 0., 0., 0.], 0.01)
    fb4             = m.outer_faces_with_direction([0., 0.,  1., 0., 0., 0.], 0.01)
    fb5             = m.outer_faces_with_direction([0., 0., -1., 0., 0., 0.], 0.01)
    fb6             = m.outer_faces_with_direction([0., 0., 0.,  1., 0., 0.], 0.01)
    fb7             = m.outer_faces_with_direction([0., 0., 0., -1., 0., 0.], 0.01)
    fb8             = m.outer_faces_with_direction([0., 0., 0., 0.,  1., 0.], 0.01)
    fb9             = m.outer_faces_with_direction([0., 0., 0., 0., -1., 0.], 0.01)
    fb10            = m.outer_faces_with_direction([0., 0., 0., 0., 0.,  1.], 0.01)
    fb11            = m.outer_faces_with_direction([0., 0., 0., 0., 0., -1.], 0.01)
    
    # mark it as boundary #4X
    m.set_region(40, fb0)# Face the plasma is entering on
    m.set_region(41, fb1)# Face the plasma is exiting
    m.set_region(42, fb2)# Side face
    m.set_region(43, fb3)# Side face
    m.set_region(44, fb4)# Side face
    m.set_region(45, fb5)# Side face
    m.set_region(46, fb6)# Velocity extreme
    m.set_region(47, fb7)# Velocity extreme
    m.set_region(48, fb8)# Velocity extreme
    m.set_region(49, fb9)# Velocity extreme
    m.set_region(50, fb10)# Velocity extreme
    m.set_region(51, fb11)# Velocity extreme
    
    # empty real model
    md              = gf.Model('real')
    
    # declare that "f" is an unknown of the system, representing density in
    # ions / (kg^3 * m^6 / s^3)
    # on the finite element method `mf`
    md.add_fem_variable('f', mf)
    
    # Define electric and orthogonal magnetic field. E3, B1, B2 have no in-plane
    #   acceleration.
    md.add_initialized_fem_data('E1', mf, mf.eval('0 - B_earth * v_y', {**locals(), **globals()}))
    md.add_initialized_fem_data('E2', mf, mf.eval('0 + B_earth * v_x', {**locals(), **globals()}))
    md.add_initialized_fem_data('E3', mf, mf.eval('0', {**locals(), **globals()}))# No E3 since Earth's magnetic field is in that direction
    md.add_initialized_fem_data('B1', mf, mf.eval('0', {**locals(), **globals()}))#*
    md.add_initialized_fem_data('B2', mf, mf.eval('0', {**locals(), **globals()}))#*
    md.add_initialized_fem_data('B3', mf, mf.eval('B_earth', {**locals(), **globals()}))#*
    
    # Define the 6D Vlasov equation
    #   6 dimensions are: x1, x2, x3, p1, p2, p3, in order. x is position, and p is momentum.
    vlasov          = '([X(4); X(5); X(6); 0; 0; 0].Grad_f + ([0; 0; 0; E1; E2; E3] + [0; 0; 0; X(5) * B3 - X(6) * B2; X(6) * B1 - X(4) * B3; X(4) * B2 - X(5) * B1]).Grad_f) * Test_f'
    
    # add generic assembly brick for the bulk of the simulation
    md.add_nonlinear_term(mim, vlasov)
    
    # add Dirichlet conditions for inlet and momentum-space edges
    g               = mf.eval('p_0 * np.exp(-0.5*((u - v_x)/v_therm)**2-0.5*((v - v_y)/v_therm)**2-0.5*((w)/v_therm)**2)', {**locals(), **globals()})
    md.add_initialized_fem_data('DirichletData', mf, g)
    g2              = mf.eval('0', globals())
    md.add_initialized_fem_data('zeros', mf, g2)
    
    # Test why filtered FEM variable has a null space
    md.add_filtered_fem_variable("mult", mf, 40)
    mfmult = md.mesh_fem_of_variable("mult")
    R = mfmult.reduction_matrix()
    E = mfmult.extension_matrix()
    logging.info('Reduction matrix: ' + str(R))
    logging.info('Extension matrix: ' + str(E))
    
    
    # Use linear terms with Lagrange multipliers (null space removed by add_multiplier)
    md.add_multiplier("mult40", mf, "f", mim, 40)
    md.add_linear_term(mim, 'mult40 * (DirichletData - f)', 40)
    
    md.add_multiplier("mult46", mf, "f", mim, 46)
    md.add_linear_term(mim, '-mult46 * f', 46)
    md.add_multiplier("mult47", mf, "f", mim, 47)
    md.add_linear_term(mim, '-mult47 * f', 47)
    md.add_multiplier("mult48", mf, "f", mim, 48)
    md.add_linear_term(mim, '-mult48 * f', 48)
    md.add_multiplier("mult49", mf, "f", mim, 49)
    md.add_linear_term(mim, '-mult49 * f', 49)
    md.add_multiplier("mult50", mf, "f", mim, 50)
    md.add_linear_term(mim, '-mult50 * f', 50)
    md.add_multiplier("mult51", mf, "f", mim, 51)
    md.add_linear_term(mim, '-mult51 * f', 51)
    
    # add Neumann condition
    md.add_source_term_brick(mim, 'f', 'zeros', 41)
    md.add_source_term_brick(mim, 'f', 'zeros', 42)
    md.add_source_term_brick(mim, 'f', 'zeros', 43)
    md.add_source_term_brick(mim, 'f', 'zeros', 44)
    md.add_source_term_brick(mim, 'f', 'zeros', 45)
    
    # List variables
    md.variable_list()
    
    # solve the system with MUMPS as the linear solver
    md.solve("noisy", "lsolver", "mumps")

Sincerely,
Eric Comstock

Hello Kostas,

I solved the issue I described in the previous post, and now have the code running with all add_linear_terms. I have access to the source code, and am running a build compiled from source in WSL Ubuntu.

Do you know any methods that I could use to manually partition the mesh for parallelization?

Sincerely,
Eric Comstock

Hi, I am out of office these days, but you need to learn a few different things that you need to combine. The first is whether you need MPI or OpenMP parallelism. I guess MPI is probably better in your case, so try to compile a version of GetFEM with OpenMPI. Search other threads in this forum where this has been discussed; otherwise I can give you detailed instructions next week.

Then you need to learn how to use mpi4py and how to define regions with elements (instead of element faces). Each region will be your integration domain on each CPU.
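
A very rough sketch of that partitioning step (untested; the exact set_region signature for element regions should be checked against the documentation):

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
cvids = m.cvid()                                     # all convex ids of the mesh m
chunk = np.array_split(cvids, comm.size)[comm.rank]  # this rank's share of elements
# a one-row list of convex ids should declare a region of whole convexes,
# as opposed to the two-row convex/face lists used for the boundaries above
m.set_region(100 + comm.rank, chunk)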
