
Work in progress!!!

Global physics fields

As you hopefully already know, GEM can run in parallel, using MPI and/or OpenMP. In the physics, when using MPI, the model domain gets divided horizontally into tiles (or rather cubes, since each tile spans all vertical levels). Each MPI process takes care of one of these cubes.

...

This is a special bus which is only valid for the surface schemes, which do not have access to any of the busses described above! However, fields from this bus still need to be declared in the volatile or permanent bus, in sfc_businit.F90 or phyvar.hf. To get them copied to the surface bus, one needs to add them in sfcbus_mod.F90 with:



SFCVAR(name_sfc, name_bus)


Where:
    'name_sfc' is the name the field will have in the surface bus and
    'name_bus' is the "VN" name of the field in one of the busses above.
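As a minimal sketch, such an entry could look as follows (the field name 'tsurf' is a hypothetical placeholder for illustration, not necessarily a field in the real sfcbus_mod.F90):

```fortran
! Hypothetical entry in sfcbus_mod.F90: expose the bus field 'tsurf'
! in the surface bus under the same name ('tsurf' is a placeholder).
SFCVAR(tsurf, tsurf)
```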

...

To be able to treat the fields from the busses like "normal" 1-D (row) or 2-D (slab) fields (remember, the physics only sees one row or slab at a time - see above), they usually get "transformed" into regular 1-D or 2-D fields by using pointers in the following way:

a) Declaration of the field as a 1-D or 2-D pointer, for example:

     real, pointer, dimension(:) :: zpmoins
     real, pointer, dimension(:,:) :: ztmoins


In general, the 1-D or 2-D field name is the same as the name of the field's starting position in the bus, preceded by a 'z'.


b) Assigning the pointer

In utils/phymkptr.hf, several macros are defined to assign a pointer to the right space on the bus, depending on the dimension and the type of level (if multi-level). They can be used to assign a pointer to an address in a bus so that the pointer can then be used in the routine like a "normal" array, for example:

MKPTR1D(zpmoins, pmoins, f)     ! 1-D field from the permanent bus (f)
MKPTR2D(ztmoins, tmoins, d)      ! 2-D field from the dynamic bus (d)


Where
      'zpmoins'/'ztmoins' are the 1-D resp. 2-D "arrays" in the routine,
      'pmoins'/'tmoins' the starting positions in the bus as declared in phyvar.hf and
      'f'/'d' the names of the busses as received by the routine.

Now, the pointers (zpmoins, ztmoins) can be used in the routine like "normal" arrays.
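The bus/pointer idea itself can be sketched in a small self-contained program (this is not GEM code; the names mirror the example above, and the plain pointer assignments stand in for what the MKPTR macros do):

```fortran
! Self-contained sketch of the bus/pointer idea (not actual GEM code).
! A 1-D "bus" holds several fields back to back; rank-remapping
! pointer assignment (Fortran 2008) gives each field a normal
! array interface.
program bus_pointer_sketch
   implicit none
   integer, parameter :: ni = 4, nk = 3
   real, target  :: bus(ni + ni*nk)   ! "pmoins" followed by "tmoins"
   real, pointer :: zpmoins(:)        ! 1-D field (surface pressure)
   real, pointer :: ztmoins(:,:)      ! 2-D field (temperature)
   integer :: pmoins, tmoins          ! starting positions in the bus

   pmoins = 1
   tmoins = pmoins + ni
   bus    = 0.0

   ! In spirit, this is what MKPTR1D/MKPTR2D do:
   zpmoins(1:ni)      => bus(pmoins : pmoins + ni - 1)
   ztmoins(1:ni,1:nk) => bus(tmoins : tmoins + ni*nk - 1)

   ! The pointers can now be used like normal arrays; any changes
   ! are visible in the bus itself.
   zpmoins(:)   = 101325.0
   ztmoins(:,1) = 273.15
   print *, bus(1), bus(ni+2)   ! prints 101325.0 and 273.15
end program bus_pointer_sketch
```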


From the surface schemes

In the surface, one passes the surface bus, its size, the list of variables and the size of that list to the surface routine.

Explain what a bus is: slab with just land points etc.

In the main surface routine, surface/sfc_main.F90, all fields needed by the surface schemes get copied from the three main busses (dynamic, permanent and volatile) to the surface bus in calls to the routine 'copybus'. The routine 'copybus' gets called for each surface scheme separately. It copies to the surface bus only those points of the three busses for which the surface fraction of the respective surface scheme is greater than a critical value. For example, before the soil routine (ISBA/SVS/CLASS) gets called, 'copybus' will copy all points of the needed fields that contain a soil fraction greater than or equal to 'critmask' to the surface bus. This is done in the call:

    call copybus3(bus_soil, siz_soil, &
          ptr_soil, nvarsurf, &
          rg_soil, ni_soil, &
          ni, indx_soil, trnch, CP_TO_SFCBUS)

The starting address parameters (the ones declared in sfc_businit.F90 resp. phyvar.hf) get new values assigned, now matching the start of the variable in the surface bus.
This means that fields in the surface routines are usually shorter than in the rest of the physics, and they usually differ from one surface scheme to the next!

After the call to the specific surface routine, the output fields from that routine will get copied back into the respective main busses, again with the routine 'copybus', for example:

     call copybus3(bus_soil, siz_soil, &
          ptr_soil, nvarsurf, &
          rg_soil, ni_soil, &
          ni, indx_soil, trnch, CP_FROM_SFCBUS)


When called from the main surface routine, surface/sfc_main.F90, the main routines of the different surface schemes then get called, passing to them:

  • the surface bus itself
  • the size of the surface bus
  • the new starting addresses of the variables in the surface bus and
  • the number of variables
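A hedged sketch of what such a call could look like (the routine name 'soil_main' and the exact argument order are assumptions for illustration, not the real interface):

```fortran
! Illustration only: routine name and argument order are assumptions.
call soil_main(bus_soil, siz_soil, &  ! the surface bus and its size
               ptr_soil, nvarsurf)    ! new starting addresses and number of variables
```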


Inside the respective main surface routines for the different schemes the fields from the surface bus can then get accessed by:

a) Declaration of the field as 1-D or 2-D pointer (same as in other physics routines), for example:


       real,pointer,dimension(:)   :: TA, ZSNOW
       real,pointer,dimension(:,:) :: TS


Often, the 1-D or 2-D field name is the same as the starting position of the field in the bus, preceded by a 'z'.


b) Assigning the pointer


1-D fields:
    PS    (1:n)      => bus( x(pplus, 1, 1)  : )  ! Input surface pressure at t+dt
    TA    (1:n)      => bus( x(tplus, 1, nk) : )  ! Input temperature at t+dt on lowest model level (nk)

2-D fields:
    TS    (1:N,1:IG) => bus( x(TSOIL, 1, 1)  : )  ! inout Temperature of soil layers [K]

Fields defined for the different surface fractions:
    ZSNOW (1:N)      => bus( x(SNODP, 1, indx_sfc) : )  ! output snow depth; indx_sfc is set to the specific surface fraction


Now, the pointers (PS, TA, TS, ZSNOW) can be used in the routine like "normal" arrays.

For example, in class_main each variable gets declared as a pointer with the correct dimension:
  • all usually 2-D fields are 1-D because we only get one row
  • all usually 3-D fields are 2-D because we only get one slab