
Comparing changes

This is a direct comparison between two commits made in this repository or its related repositories.

base repository: equinor/ert
base: 494347af38132129cb822b2c2a4a5f530ff6e9c3
head repository: equinor/ert
compare: ff892b413bd49fe1e56c67be94c623f18ba03fd5
2 changes: 1 addition & 1 deletion ci/testkomodo.sh
@@ -37,7 +37,7 @@ run_ert_with_opm () {
run_everest_tests () {
python -m pytest tests/everest -s \
--ignore-glob "*test_visualization_entry*" \
-m "not simulation_test and not ui_test"
-m "not requires_eclipse and not ui_test"
xvfb-run -s "-screen 0 640x480x24" --auto-servernum python -m pytest tests/everest -s -m "ui_test"
}
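For context, a test deselected by the marker expression above would be tagged with the requires_eclipse marker roughly as follows; the test name is hypothetical, and the marker is assumed to be registered under the pytest markers list (see the pyproject.toml hunk below):

    import pytest

    # Deselected by: -m "not requires_eclipse and not ui_test"
    @pytest.mark.requires_eclipse
    def test_full_eclipse_simulation():
        ...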

68 changes: 67 additions & 1 deletion docs/everest/config_generated.rst
@@ -956,7 +956,7 @@ Simulation settings


**queue_system (optional)**
Type: *Optional[Literal['lsf', 'local', 'slurm']]*
Type: *Optional[Literal['lsf', 'local', 'slurm', 'torque']]*

Defines which queue system the everest server runs on.

@@ -1031,6 +1031,72 @@ Simulation settings
optimizer.


**qsub_cmd (optional)**
Type: *Optional[str]*

The submit command


**qstat_cmd (optional)**
Type: *Optional[str]*

The query command


**qdel_cmd (optional)**
Type: *Optional[str]*

The kill command


**qstat_options (optional)**
Type: *Optional[str]*

Options to be supplied to the qstat command. This defaults to -x, which tells the qstat command to include exited processes.


**cluster_label (optional)**
Type: *Optional[str]*

The name of the cluster you are running simulations in.


**memory_per_job (optional)**
Type: *Optional[str]*

You can specify the amount of memory you will need for running your job. This ensures that not too many jobs run on a single shared-memory node at once, which could otherwise crash the compute node by running it out of memory.
You can get an indication of the memory requirement by watching a local run with the htop utility. Whether you should set the peak memory usage as your requirement or a lower figure depends on how much the jobs' memory peaks overlap in time.
The value is supplied as a string in the qsub argument. You must specify the unit, either gb or mb (for example 16gb).



**keep_qsub_output (optional)**
Type: *Optional[int]*

Set to 1 to keep error messages from qsub. Usually only to be used if something is seriously wrong with the queue environment/setup.


**submit_sleep (optional)**
Type: *Optional[float]*

To avoid stressing the TORQUE/PBS system, you can instruct the driver to sleep between submit requests. The value is the number of seconds to sleep for every submit, and it can be a fraction such as 0.5.


**queue_query_timeout (optional)**
Type: *Optional[int]*


The driver allows the backend TORQUE/PBS system to be flaky, i.e. it may intermittently not respond and give error messages when submitting jobs or asking for job statuses. The timeout (in seconds) determines how long ERT will wait before it will give up. Applies to job submission (qsub) and job status queries (qstat). Default is 126 seconds.
ERT will do exponential sleeps, starting at 2 seconds, and the provided timeout is a maximum. Let the timeout be a sum of the series 2+4+8+16+32+64 in order to be explicit about the number of retries. Set it to zero to disallow flakiness; a value of 2 allows one re-attempt, and 6 gives two re-attempts. For example, the default of 126 seconds (2+4+8+16+32+64) allows six retries.
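A minimal sketch (not ERT code) of how the exponential sleep series relates the timeout to the number of retries:

    def retries_allowed(timeout_seconds: int) -> int:
        # Sleeps double starting at 2 seconds; stop once the next sleep
        # would push the total waiting time past the configured timeout.
        waited, sleep, retries = 0, 2, 0
        while waited + sleep <= timeout_seconds:
            waited += sleep
            sleep *= 2
            retries += 1
        return retries

    assert retries_allowed(2) == 1    # one re-attempt
    assert retries_allowed(6) == 2    # two re-attempts
    assert retries_allowed(126) == 6  # the default allows six retries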



**project_code (optional)**
Type: *Optional[str]*

String identifier used to map hardware resource usage to a project or account. The project or account does not have to exist.



install_jobs (optional)
-----------------------
1 change: 0 additions & 1 deletion pyproject.toml
@@ -176,7 +176,6 @@ markers = [
"slow",
"everest_models_test",
"integration_test",
"simulation_test",
"ui_test",
"fails_on_macos_github_workflow", # Tests marked fail due to gui-related issues
]
16 changes: 7 additions & 9 deletions src/ert/config/design_matrix.py
@@ -149,26 +149,24 @@ def read_design_matrix(
def _read_excel(
file_name: Union[Path, str],
sheet_name: str,
usecols: Optional[Union[int, List[int]]] = None,
usecols: Optional[List[int]] = None,
header: Optional[int] = 0,
skiprows: Optional[int] = None,
dtype: Optional[str] = None,
) -> pd.DataFrame:
"""
Make dataframe from excel file
:return: Dataframe
:raises: OsError if file not found
:raises: ValueError if file not loaded correctly
Reads an Excel file into a DataFrame, with options to filter columns and rows,
and automatically drops columns that contain only NaN values.
"""
dframe: pd.DataFrame = pd.read_excel(
file_name,
sheet_name,
df = pd.read_excel(
io=file_name,
sheet_name=sheet_name,
usecols=usecols,
header=header,
skiprows=skiprows,
dtype=dtype,
)
return dframe.dropna(axis=1, how="all")
return df.dropna(axis=1, how="all")

@staticmethod
def _validate_design_matrix(design_matrix: pd.DataFrame) -> List[str]:
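As a usage sketch of the behavior the new docstring describes (workbook and sheet names are hypothetical), the helper boils down to a filtered pandas read followed by dropping all-NaN columns:

    import pandas as pd

    df = pd.read_excel(
        io="design_matrix.xlsx",     # hypothetical workbook
        sheet_name="DesignSheet01",  # hypothetical sheet
        usecols=[0, 1, 2],           # optional column filter
        header=0,
    )
    df = df.dropna(axis=1, how="all")  # drop columns that contain only NaN values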
11 changes: 7 additions & 4 deletions src/ert/gui/simulation/evaluate_ensemble_panel.py
@@ -14,7 +14,7 @@
from ert.gui.simulation.experiment_config_panel import ExperimentConfigPanel
from ert.mode_definitions import EVALUATE_ENSEMBLE_MODE
from ert.run_models.evaluate_ensemble import EvaluateEnsemble
from ert.validation import RangeStringArgument
from ert.validation import EnsembleRealizationsArgument


@dataclass
@@ -47,9 +47,10 @@ def __init__(self, ensemble_size: int, run_path: str, notifier: ErtNotifier):
ActiveRealizationsModel(ensemble_size, show_default=False), # type: ignore
"config/simulation/active_realizations",
)
self._active_realizations_field.setValidator(
RangeStringArgument(ensemble_size),
self._realizations_validator = EnsembleRealizationsArgument(
self._ensemble_selector.selected_ensemble, max_value=ensemble_size
)
self._active_realizations_field.setValidator(self._realizations_validator)
self._realizations_from_fs()
layout.addRow("Active realizations", self._active_realizations_field)

@@ -68,7 +69,7 @@ def isConfigurationValid(self) -> bool:
return (
self._active_realizations_field.isValid()
and self._ensemble_selector.currentIndex() != -1
and bool(self._active_realizations_field.text())
and self._active_realizations_field.isValid()
)

def get_experiment_arguments(self) -> Arguments:
@@ -80,7 +81,9 @@ def get_experiment_arguments(self) -> Arguments:

def _realizations_from_fs(self) -> None:
ensemble = self._ensemble_selector.selected_ensemble
self._active_realizations_field.setEnabled(ensemble is not None)
if ensemble:
self._realizations_validator.set_ensemble(ensemble)
parameters = ensemble.get_realization_mask_with_parameters()
missing_responses = ~ensemble.get_realization_mask_with_responses()
failures = ~ensemble.get_realization_mask_without_failure()
14 changes: 8 additions & 6 deletions src/ert/gui/simulation/manual_update_panel.py
@@ -17,7 +17,7 @@
from ert.gui.simulation.experiment_config_panel import ExperimentConfigPanel
from ert.mode_definitions import MANUAL_UPDATE_MODE
from ert.run_models.manual_update import ManualUpdate
from ert.validation import ProperNameFormatArgument, RangeStringArgument
from ert.validation import EnsembleRealizationsArgument, ProperNameFormatArgument


@dataclass
@@ -72,14 +72,13 @@ def __init__(
ActiveRealizationsModel(ensemble_size, show_default=False), # type: ignore
"config/simulation/active_realizations",
)
self._active_realizations_field.setValidator(
RangeStringArgument(ensemble_size),
self._realizations_validator = EnsembleRealizationsArgument(
self._ensemble_selector.selected_ensemble, max_value=ensemble_size
)
self._active_realizations_field.setValidator(self._realizations_validator)
self._realizations_from_fs()
layout.addRow("Active realizations", self._active_realizations_field)

self.setLayout(layout)

self._active_realizations_field.getValidationSupport().validationChanged.connect(
self.simulationConfigurationChanged
)
@@ -88,12 +87,13 @@ def __init__(
self.simulationConfigurationChanged
)
self._ensemble_selector.currentIndexChanged.connect(self._realizations_from_fs)
self.setLayout(layout)

def isConfigurationValid(self) -> bool:
return (
self._active_realizations_field.isValid()
and self._ensemble_selector.currentIndex() != -1
and bool(self._active_realizations_field.text())
and self._active_realizations_field.isValid()
)

def get_experiment_arguments(self) -> Arguments:
@@ -106,7 +106,9 @@ def get_experiment_arguments(self) -> Arguments:

def _realizations_from_fs(self) -> None:
ensemble = self._ensemble_selector.selected_ensemble
self._active_realizations_field.setEnabled(ensemble is not None)
if ensemble:
self._realizations_validator.set_ensemble(ensemble)
parameters = ensemble.get_realization_mask_with_parameters()
responses = ensemble.get_realization_mask_with_responses()
mask = np.logical_and(parameters, responses)
3 changes: 0 additions & 3 deletions src/ert/gui/tools/plot/plottery/plots/histogram.py
@@ -133,9 +133,6 @@ def plotHistogram(
),
)
else:
if minimum is not None and maximum is not None and minimum == maximum:
minimum -= 0.1
maximum += 0.1
config.addLegendItem(
ensemble.name,
_plotHistogram(
@@ -521,7 +521,7 @@ def __init__(self) -> None:
str(
(
Path(__file__)
/ "../../../resources/forward_models/templating/script/template_render.py"
/ "../../../resources/forward_models/template_render.py"
).resolve()
),
"-i",