hydrogym.nek.env

Core Nek5000 environment with Gymnasium interface. Single-agent with array-based observations/actions.

Supports two initialization patterns:

  1. MAIA pattern (recommended): env = NekEnv.from_hf('EnvName', nproc=10)
  2. Legacy pattern: env = NekEnv(conf=config_obj)

ConfigError Objects

class ConfigError(Exception)

Exception raised for configuration-related errors.

mpi_split

def mpi_split(comm_world: MPI.Comm, nproc: Optional[int] = None) -> MPI.Comm

Split the MPI world into a controller/worker inter-communicator.

Arguments:

  • comm_world - MPI communicator
  • nproc - Expected number of Nek workers (for validation)

Returns:

Inter-communicator between controller and workers

NekEnv Objects

class NekEnv(gym.Env)

Core Nek5000 environment with Gymnasium interface.

This is a single-agent environment where the agent controls multiple actuators (control points) on the mesh. Observations and actions are flat arrays representing all actuators.

Supports two initialization patterns:

  1. MAIA pattern (recommended):

     env = NekEnv.from_hf(
         'MiniChannel_Re180',
         nproc=10,
         hostfile='',
     )

  2. Legacy pattern (deprecated):

     conf = OmegaConf.load('config.yaml')
     env = NekEnv(conf=conf)

Args (MAIA pattern via env_config):

  • environment_name - Name of environment on HuggingFace
  • nproc - Number of MPI workers for Nek (required)
  • hostfile - MPI hostfile path (default: '')
  • hf_repo_id - HuggingFace repository (default: 'dynamicslab/HydroGym-environments')
  • use_clean_cache - Use fresh workspace (default: True)
  • local_fallback_dir - Local directory for offline usage
  • configuration_file - Override config file path
  • run_root - Root directory for outputs (default: 'runs')
  • run_name - Name for this run (default: '' = no subdirectory, use run_root directly)
  • reward_agg - Reward aggregation method ("mean" or "sum")
  • ... (runtime overrides for config parameters)

Args (Legacy pattern):

  • conf - Configuration object (OmegaConf)
  • run_root - Root directory for run outputs
  • run_name - Name for this run (defaults to MPI rank)
  • reward_agg - How to aggregate per-actuator rewards ("mean" or "sum")
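The two reward_agg options can be illustrated with a short numpy sketch. The per-actuator reward values below are made up for illustration; only the mean/sum aggregation step reflects the documented behavior:

```python
import numpy as np

# Hypothetical per-actuator rewards from one step (4 actuators).
per_actuator_rewards = np.array([1.0, 2.0, 3.0, 2.0])

# reward_agg="mean": scalar reward is the mean over actuators.
reward_mean = float(np.mean(per_actuator_rewards))

# reward_agg="sum": scalar reward is the sum over actuators.
reward_sum = float(np.sum(per_actuator_rewards))

print(reward_mean)  # 2.0
print(reward_sum)   # 8.0
```

Note that "sum" scales the reward magnitude with the number of actuators, while "mean" keeps it comparable across environments with different actuator counts.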

__init__

def __init__(conf: Optional[Config] = None,
             env_config: Optional[Dict] = None,
             run_root: str = ".",
             run_name: Optional[str] = None,
             reward_agg: str = "mean",
             **kwargs)

Initialize NekEnv with either legacy conf or MAIA env_config pattern.

Arguments:

  • conf - Legacy OmegaConf object (deprecated)
  • env_config - MAIA-style configuration dict (recommended)
  • run_root - Output directory root
  • run_name - Run name (auto-generate if None)
  • reward_agg - Reward aggregation method
  • **kwargs - Additional parameters (for backward compatibility)

from_hf

@classmethod
def from_hf(cls,
            environment_name: str,
            nproc: int,
            hostfile: str = "",
            **kwargs)

Create environment from HuggingFace Hub (MAIA pattern).

Arguments:

  • environment_name - Name of the environment (e.g., 'MiniChannel_Re180')
  • nproc - Number of MPI workers for Nek (required)
  • hostfile - MPI hostfile path (default: '')
  • **kwargs - Additional env_config parameters:
    • hf_repo_id: HF repository (default: 'dynamicslab/HydroGym-environments')
    • use_clean_cache: Fresh workspace (default: True)
    • local_fallback_dir: Local directory
    • configuration_file: Override config path
    • run_root: Output directory (default: 'runs')
    • run_name: Run name (auto-generate if None)
    • reward_agg: 'mean' or 'sum' (default: 'mean')
    • normalize_input: Override normalization strategy
    • nb_interactions: Override episode length
    • random_init: Override IC randomization
    • rescale_actions: Override action rescaling
    • rew_mode: Override reward mode

Returns:

NekEnv instance

Example:

env = NekEnv.from_hf('MiniChannel_Re180', nproc=10)

env = NekEnv.from_hf(
    'MiniChannel_Re180',
    nproc=10,
    hostfile='hosts.txt',
    use_clean_cache=True,
    normalize_input='utau',
)

reset

def reset(seed=None, options=None) -> Tuple[np.ndarray, dict]

Reset the environment and return the initial observation and an info dict.

step

def step(action: np.ndarray) -> Tuple[np.ndarray, float, bool, bool, dict]

Step the environment.

Arguments:

  • action - Flat array of actions for all actuators, shape (n_actuators,)

Returns:

  • observation - Flat array of observations, shape (n_actuators * obs_per_actuator,)
  • reward - Scalar reward
  • terminated - Whether episode is done
  • truncated - Whether episode was truncated (always False for Nek)
  • info - Additional information
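The flat observation/action layout described above can be sketched with numpy. The sizes (n_actuators=4, obs_per_actuator=3) and the observation values are hypothetical; the point is the shape convention:

```python
import numpy as np

# Hypothetical sizes: 4 actuators, 3 observation components each.
n_actuators = 4
obs_per_actuator = 3

# step() returns observations as one flat array of shape
# (n_actuators * obs_per_actuator,); actions are flat, shape (n_actuators,).
flat_obs = np.arange(n_actuators * obs_per_actuator, dtype=float)
action = np.zeros(n_actuators)

# A per-actuator view can be recovered by reshaping the flat array:
per_actuator_obs = flat_obs.reshape(n_actuators, obs_per_actuator)

print(per_actuator_obs.shape)  # (4, 3)
```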

render

def render(mode="human")

Render the environment (not implemented).

close

def close()

Close the environment.

RingBuffer Objects

class RingBuffer()

N-dimensional ring buffer using numpy arrays.

extend

def extend(x)

Add array x to the ring buffer.

get

def get()

Return the ring buffer contents in first-in, first-out order.

average

def average()

Return the average of the entries in the ring buffer.
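A minimal sketch of the buffer semantics described above, assuming a fixed-capacity buffer that overwrites its oldest entry when full. This is illustrative only, not HydroGym's actual implementation; the class name, constructor arguments, and internals are assumptions:

```python
import numpy as np

class RingBufferSketch:
    """Illustrative N-dimensional numpy ring buffer (assumed semantics)."""

    def __init__(self, capacity, shape):
        self._data = np.zeros((capacity,) + tuple(shape))
        self._capacity = capacity
        self._idx = 0    # next write position
        self._count = 0  # number of valid entries

    def extend(self, x):
        # Overwrite the oldest slot once the buffer is full.
        self._data[self._idx] = np.asarray(x)
        self._idx = (self._idx + 1) % self._capacity
        self._count = min(self._count + 1, self._capacity)

    def get(self):
        # Return entries in first-in-first-out order.
        if self._count < self._capacity:
            return self._data[:self._count]
        return np.roll(self._data, -self._idx, axis=0)

    def average(self):
        # Average over the stored entries.
        return self.get().mean(axis=0)

# Example (assumed semantics): capacity 3, entries of shape (2,).
buf = RingBufferSketch(capacity=3, shape=(2,))
for v in ([1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]):
    buf.extend(v)
# The oldest entry [1, 1] has been overwritten by the fourth extend().
print(buf.average())  # [3. 3.]
```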