Feat/55/press-fault-data-simulator-service#64
Conversation
- Add Spring Boot URL setting (settings.py)
- Change the simulator to send data to Spring Boot instead of the model server: create spring_boot_service.py, update scheduler_service.py logic
- Remove the no-longer-used model integration service (prediction_api_service.py) and the tests that used it
Fix an issue where the simulator would not start again when restarted via /start after /stop
- Change the Spring Boot port to 8088 - remove unnecessary logs
….com/smart-factory-team/smart_FAST into feat/55/update-press-fault-simulator
Walkthrough
Added SPRING_BOOT_BASE_URL setting. Introduced SpringBootService for sending sensor data and health checks, and removed PredictAPIService and its tests. The scheduler switched from API-based predictions to SpringBootService transmissions, with updated counters and flows. AzureStorageService gained reset_connection and adjusted close behavior. predict_service changed reconstruction_error from max to mean.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant S as SchedulerService
    participant AZ as AzureStorageService
    participant SB as SpringBootService
    participant API as Spring Boot API
    rect rgb(247,250,252)
        note over S: Start simulation
        S->>AZ: reset_connection()
        AZ-->>S: ack
        S->>SB: health_check()
        SB->>API: GET /pressFaultDetectionLogs
        API-->>SB: 200/404 or other
        SB-->>S: OK / FAIL
    end
    alt health OK
        loop each cycle
            S->>AZ: read next sensor rows
            AZ-->>S: sensor data
            S->>SB: send_sensor_data(payload)
            SB->>API: POST /pressFaultDetectionLogs/data
            API-->>SB: 200/201/202 or error
            SB-->>S: success True / False
            opt retries on failure
                SB->>API: POST (retry with backoff)
            end
        end
    else health FAIL
        note over S: Startup aborted / logged
    end
```
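The "retry with backoff" step in the diagram can be sketched generically. This is an illustrative asyncio-only sketch; the names `send_with_backoff` and `flaky_send` are hypothetical and not from the PR's code:

```python
import asyncio

async def send_with_backoff(send_once, payload, retries=3, base_delay=0.01):
    """Try send_once(payload); on failure wait base_delay * 2**attempt and retry."""
    for attempt in range(retries):
        try:
            return await send_once(payload)
        except ConnectionError:
            if attempt == retries - 1:
                return False
            await asyncio.sleep(base_delay * (2 ** attempt))

# Demo: a sender that fails twice, then succeeds on the third attempt.
calls = {"n": 0}

async def flaky_send(payload):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return True

result = asyncio.run(send_with_backoff(flaky_send, {"sensor": 1}))
```

The actual service would call the Spring Boot endpoint inside `send_once`; the backoff schedule here is only a placeholder.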
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
🤖 Smart FAST automated code review
📦 Changed services (2):
🔍 Services detected via git diff
📊 Overall check results:
🧪 Test & code quality results:
🔍 Failed tests:
🔒 Security scan results:
✅ Check checklist:
🚀 Deployment info:
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
services/press-fault-data-simulator-service/app/services/scheduler_service.py (1)
170-191: Dangling class-level code will crash at import time
The method signature is commented out, but the body (lines 171-191) remains active at class scope. It will execute at class definition time and immediately raise NameError on `result`. Remove this dead block.
```diff
-    # def _handle_prediction_result(self, result, data_source: str):
-        """예측 결과 처리 (로그 기록)"""
-        status = "FAULT DETECTED" if result.is_fault else "✅ NORMAL"
-
-        # fault_probability가 None일 수 있으므로 안전하게 처리
-        probability_str = (
-            f"{result.fault_probability:.4f}"
-            if result.fault_probability is not None
-            else "N/A"
-        )
-
-        message = (
-            f"{status} - Prediction: {result.prediction}, "
-            f"Probability: {probability_str}, "
-        )
-
-        if result.is_fault:
-            # 이상 감지 시 WARNING 레벨 (파일 저장)
-            prediction_log.warning(message)
-        else:
-            # 정상 시 INFO 레벨 (콘솔만)
-            prediction_log.info(message)
+    # (삭제됨) 예측 결과 처리 로직은 Spring Boot 전송 기반으로 전환됨
```
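As a minimal illustration of why the leftover body fails: statements at class scope run during class definition, so referencing the old `result` parameter raises NameError as soon as the module is imported. The `Broken` class below is a hypothetical reduction, not the project's code:

```python
# Class bodies execute at definition time, so a leftover method body
# referencing a now-undefined name fails immediately on import.
definition_failed = False
try:
    class Broken:
        # def _handle_prediction_result(self, result, data_source):
        status = "FAULT" if result.is_fault else "NORMAL"  # NameError: result
except NameError:
    definition_failed = True
```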
🧹 Nitpick comments (7)
services/press-fault-data-simulator-service/app/config/settings.py (1)
25-26: New Spring Boot base URL setting added — looks good
The addition of SPRING_BOOT_BASE_URL cleanly enables the new Spring Boot integration and remains overridable via env. No issues here.
If you want to guard against accidental trailing slashes in env, consider normalizing it (e.g., rstrip("/")) at usage sites (see my comment in spring_boot_service.py).
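A tiny sketch of that normalization, assuming it is done where the URL is consumed (the helper name `normalize_base_url` is illustrative, not from the PR):

```python
def normalize_base_url(url: str) -> str:
    """Strip trailing slashes so joined paths never produce '//'."""
    return url.rstrip("/")

# Hypothetical usage mirroring the reviewer's suggestion:
base = normalize_base_url("http://localhost:8088/")
api_url = f"{base}/pressFaultDetectionLogs/data"
```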
services/press-fault-data-simulator-service/app/services/scheduler_service.py (1)
93-94: Consider logging both total and successful transmissions on shutdown
This provides clearer operational visibility.
```diff
-        system_log.info(f" └─ 총 전송 횟수: {self.total_transmissions}")
+        system_log.info(
+            f" └─ 총 전송 횟수: {self.total_transmissions} (성공: {self.successful_transmissions})"
+        )
```
services/press-fault-data-simulator-service/app/services/spring_boot_service.py (5)
5-5: Remove unused Optional import
Ruff F401: typing.Optional is imported but unused.
```diff
-from typing import Optional
```
16-16: Normalize base URL to avoid accidental double slashes
If SPRING_BOOT_BASE_URL ends with "/", this prevents "//pressFaultDetectionLogs…".
```diff
-        self.api_url = f"{settings.SPRING_BOOT_BASE_URL}/pressFaultDetectionLogs/data"
+        base = settings.SPRING_BOOT_BASE_URL.rstrip("/")
+        self.api_url = f"{base}/pressFaultDetectionLogs/data"
```
54-61: Optional: collapse nested async context managers (SIM117)
Functionally identical, a bit cleaner.
```diff
-        async with aiohttp.ClientSession(timeout=timeout) as session:
-            async with session.post(
-                url=self.api_url,
-                json=request_data,
-                headers={'Content-Type': 'application/json'}
-            ) as response:
-                response_time = time.time() - start_time
+        async with aiohttp.ClientSession(timeout=timeout) as session, session.post(
+            url=self.api_url,
+            json=request_data,
+            headers={'Content-Type': 'application/json'}
+        ) as response:
+            response_time = time.time() - start_time
```
86-87: Duplicate unreachable return False
There are two consecutive `return False` statements; the second is unreachable.
```diff
-
-        return False
-
```
97-101: Normalize base URL and optionally collapse async with (SIM117)
```diff
-        base_url = f"{settings.SPRING_BOOT_BASE_URL}/pressFaultDetectionLogs"
-
-        timeout = aiohttp.ClientTimeout(total=10)
-        async with aiohttp.ClientSession(timeout=timeout) as session:
-            async with session.get(base_url) as response:
+        base_url = f"{settings.SPRING_BOOT_BASE_URL.rstrip('/')}/pressFaultDetectionLogs"
+        timeout = aiohttp.ClientTimeout(total=10)
+        async with aiohttp.ClientSession(timeout=timeout) as session, session.get(base_url) as response:
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (6)
- services/press-fault-data-simulator-service/app/config/settings.py (1 hunks)
- services/press-fault-data-simulator-service/app/services/azure_storage_service.py (1 hunks)
- services/press-fault-data-simulator-service/app/services/prediction_api_service.py (0 hunks)
- services/press-fault-data-simulator-service/app/services/scheduler_service.py (7 hunks)
- services/press-fault-data-simulator-service/app/services/spring_boot_service.py (1 hunks)
- services/press-fault-data-simulator-service/tests/services/test_prediction_api_service.py (0 hunks)
💤 Files with no reviewable changes (2)
- services/press-fault-data-simulator-service/tests/services/test_prediction_api_service.py
- services/press-fault-data-simulator-service/app/services/prediction_api_service.py
🧰 Additional context used
🧬 Code Graph Analysis (4)
services/press-fault-data-simulator-service/app/config/settings.py (2)
- services/press-fault-data-simulator-service/tests/services/test_prediction_api_service.py (1)
  - patch_settings (140-148)
- services/vehicle-assembly-process-defect-detection-model-service/app/core/config.py (2)
  - Settings (14-192)
  - ProductionSettings (251-257)

services/press-fault-data-simulator-service/app/services/spring_boot_service.py (3)
- services/press-fault-data-simulator-service/app/services/prediction_api_service.py (1)
  - PredictAPIService (11-142)
- services/press-fault-data-simulator-service/tests/services/test_prediction_api_service.py (3)
  - post_side_effect (157-173)
  - DummySession (62-96)
  - session_factory (156-175)
- services/press-fault-detection-model-service/tests/test_predict_router.py (1)
  - test_predict_fault_service_called_with_correct_data_structure (313-330)

services/press-fault-data-simulator-service/app/services/azure_storage_service.py (1)
- services/press-fault-data-simulator-service/tests/services/test_azure_storage_service.py (6)
  - test_close_closes_underlying_client (431-437)
  - test_clear_cache_resets_state (418-428)
  - test_move_to_next_file_resets_state (393-404)
  - test_connect_success (184-198)
  - test_get_current_status_no_file_loaded (407-415)
  - test_connect_failure_logs_and_returns_false (201-216)

services/press-fault-data-simulator-service/app/services/scheduler_service.py (5)
- services/press-fault-data-simulator-service/app/services/spring_boot_service.py (3)
  - SpringBootService (12-113)
  - health_check (88-113)
  - send_sensor_data (22-86)
- services/press-fault-data-simulator-service/app/models/data_models.py (2)
  - PredictionRequest (5-42)
  - from_csv_data (16-42)
- services/press-fault-data-simulator-service/app/services/azure_storage_service.py (2)
  - reset_connection (208-220)
  - connect (32-50)
- services/press-fault-data-simulator-service/tests/services/test_scheduler_service.py (1)
  - test_run_single_simulation_happy_path_updates_stats_and_logs (222-233)
- services/press-fault-data-simulator-service/app/services/prediction_api_service.py (1)
  - PredictAPIService (11-142)
🪛 Ruff (0.12.2)
services/press-fault-data-simulator-service/app/services/spring_boot_service.py
5-5: typing.Optional imported but unused
Remove unused import: typing.Optional
(F401)
54-59: Use a single with statement with multiple contexts instead of nested with statements
(SIM117)
100-101: Use a single with statement with multiple contexts instead of nested with statements
(SIM117)
🔇 Additional comments (4)
services/press-fault-data-simulator-service/app/services/scheduler_service.py (4)
6-7: Import switch to SpringBootService is appropriate
The migration away from PredictAPIService is reflected correctly here.

42-47: Startup flow sanity check: good defensive health check
Resetting Azure state before a Spring Boot health check is sensible. Flow will short-circuit early on health issues. LGTM.

141-159: Transmission flow via Spring Boot is clean and robust
Converting the CSV chunk to a PredictionRequest, sending it, and keeping counters is straightforward. Early-return on failure avoids double-counting. Nicely done.

211-213: Status fields updated correctly
get_simulation_status now reflects the new counters. Looks consistent with the new flow.
```python
    async def reset_connection(self):
        """연결 상태 초기화 (재시작 시 사용)"""
        try:
            if hasattr(self, '_current_client'):
                await self._current_client.close()
                delattr(self, '_current_client')

            self.current_file_name = None
            self.current_row_index = 0

            system_log.info("Azure Storage 연결 상태 초기화 완료")
        except Exception as e:
            system_log.warning(f"연결 초기화 중 오류 (무시됨): {str(e)}")
```
🛠️ Refactor suggestion
reset_connection uses an undefined _current_client and doesn’t ensure the main client is closed
reset_connection() currently targets self._current_client, which is never initialized elsewhere in this class, and it doesn't close blob_service_client. This can leave connections hanging and diverges from existing tests that expect blob_service_client.close() to be awaited.
Proposed fix: delegate to close(), reset state, and mark disconnected.
```diff
-    async def reset_connection(self):
+    async def reset_connection(self) -> None:
         """연결 상태 초기화 (재시작 시 사용)"""
         try:
-            if hasattr(self, '_current_client'):
-                await self._current_client.close()
-                delattr(self, '_current_client')
-
-            self.current_file_name = None
-            self.current_row_index = 0
-
-            system_log.info("Azure Storage 연결 상태 초기화 완료")
+            await self.close()
+            self.current_file_name = None
+            self.current_row_index = 0
+            self.is_connected = False
+            system_log.info("Azure Storage 연결 상태 초기화 완료")
         except Exception as e:
             system_log.warning(f"연결 초기화 중 오류 (무시됨): {str(e)}")
```
This aligns behavior with expectations and avoids leaking the underlying client.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
    async def reset_connection(self) -> None:
        """연결 상태 초기화 (재시작 시 사용)"""
        try:
            await self.close()
            self.current_file_name = None
            self.current_row_index = 0
            self.is_connected = False
            system_log.info("Azure Storage 연결 상태 초기화 완료")
        except Exception as e:
            system_log.warning(f"연결 초기화 중 오류 (무시됨): {str(e)}")
```
```diff
     async def close(self):
         """연결 종료 (선택적)"""
         try:
-            if hasattr(self, "blob_service_client") and self.blob_service_client:
-                await self.blob_service_client.close()
-                system_log.info("Azure Storage 연결 종료 완료")
+            if hasattr(self, '_current_client'):
+                await self._current_client.close()
+                delattr(self, '_current_client')
+                system_log.info("Azure Storage 연결 종료 완료")
         except Exception as e:
-            system_log.error(f"Azure Storage 연결 종료 중 오류: {str(e)}")
-        finally:
-            self.is_connected = False
+            system_log.warning(f"연결 종료 중 오류 (무시됨): {str(e)}")
```
close() no longer closes BlobServiceClient; potential resource leak and failing tests
close() checks only _current_client and never closes blob_service_client. Existing tests (e.g., test_close_closes_underlying_client) expect blob_service_client.close() to be awaited. This is both a functional leak and a test regression.
Apply this minimal, backward-compatible fix:
```diff
-    async def close(self):
+    async def close(self) -> None:
         """연결 종료 (선택적)"""
         try:
-            if hasattr(self, '_current_client'):
-                await self._current_client.close()
-                delattr(self, '_current_client')
-                system_log.info("Azure Storage 연결 종료 완료")
+            # 우선 실제 BlobServiceClient를 닫음
+            if hasattr(self, 'blob_service_client') and self.blob_service_client:
+                await self.blob_service_client.close()
+            # 호환성: 개별 클라이언트가 있다면 닫음
+            if hasattr(self, '_current_client'):
+                await self._current_client.close()
+                delattr(self, '_current_client')
+            system_log.info("Azure Storage 연결 종료 완료")
         except Exception as e:
-            system_log.warning(f"연결 종료 중 오류 (무시됨): {str(e)}")
+            system_log.warning(f"연결 종료 중 오류 (무시됨): {str(e)}")
+        finally:
+            self.is_connected = False
```
This will also restore the test expectation that the underlying client is closed.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
    async def close(self) -> None:
        """연결 종료 (선택적)"""
        try:
            # 우선 실제 BlobServiceClient를 닫음
            if hasattr(self, 'blob_service_client') and self.blob_service_client:
                await self.blob_service_client.close()
            # 호환성: 개별 클라이언트가 있다면 닫음
            if hasattr(self, '_current_client'):
                await self._current_client.close()
                delattr(self, '_current_client')
            system_log.info("Azure Storage 연결 종료 완료")
        except Exception as e:
            system_log.warning(f"연결 종료 중 오류 (무시됨): {str(e)}")
        finally:
            self.is_connected = False
```
🤖 Prompt for AI Agents
In
services/press-fault-data-simulator-service/app/services/azure_storage_service.py
around lines 222 to 231, close() only checks and closes _current_client but
never closes blob_service_client, causing a resource leak and failing tests;
update close() to also check for an existing blob_service_client attribute,
await its close() call, and delete that attribute (while keeping the existing
handling for _current_client and the try/except logging) so the underlying
BlobServiceClient is properly closed and tests expecting
blob_service_client.close() to be awaited pass.
```python
        # self.total_predictions = 0
        # self.fault_detections = 0
        self.start_time: Optional[datetime] = None

        self.total_transmissions = 0
        self.successful_transmissions = 0
```
💡 Verification agent
🧩 Analysis chain
Counters renamed; ensure references/tests are updated
You replaced total_predictions/fault_detections with total_transmissions/successful_transmissions. Ensure no stale references remain in tests or other modules (e.g., test_scheduler_service).
Run this to locate legacy identifiers and logger usage that may now be dead:
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
# Find old counters/loggers that should be removed/updated
rg -nP -C2 '\b(total_predictions|fault_detections|prediction_log)\b'
```
Length of output: 21033
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
sed -n '140,200p' services/press-fault-data-simulator-service/app/services/scheduler_service.py
```
Length of output: 2067
🏁 Script executed:
```bash
#!/bin/bash
set -euo pipefail
sed -n '180,260p' services/press-fault-data-simulator-service/app/services/scheduler_service.py
```
Length of output: 1367
Update test references after renaming counters
The scheduler service now uses total_transmissions and successful_transmissions instead of total_predictions and fault_detections. Tests in services/press-fault-data-simulator-service/tests/services/test_scheduler_service.py still assert the old properties—these must be updated:
• Around lines 154–155, replace
```python
assert service.total_predictions == 0
assert service.fault_detections == 0
```
with
```python
assert service.total_transmissions == 0
assert service.successful_transmissions == 0
```
• Around lines 228–229 and 243–244 (after `_run_single_simulation()`):
  - replace `service.total_predictions` → `service.total_transmissions`
  - replace `service.fault_detections` → `service.successful_transmissions`
• Around lines 260–261 and 272–273 (error cases): same replacements as above
• Around lines 338–339 (in `get_simulation_status` checks):
  - replace `status["total_predictions"]` → `status["total_transmissions"]`
  - replace `status["fault_detections"]` → `status["successful_transmissions"]`
Make sure all references to the old counters in tests are updated to the new names.
🤖 Prompt for AI Agents
In
services/press-fault-data-simulator-service/tests/services/test_scheduler_service.py
(around lines 154–155, 228–229, 243–244, 260–261, 272–273, and 338–339), update
all test assertions and status checks that reference the old counters
`total_predictions` and `fault_detections` to the new names: replace
`total_predictions` with `total_transmissions` and `fault_detections` with
`successful_transmissions` (including in assertions after
`_run_single_simulation()`, in error-case assertions, and in the
`get_simulation_status` dict checks).
- Change the reconstruction error in the response data from max to mean
Actionable comments posted: 0
🧹 Nitpick comments (7)
services/press-fault-detection-model-service/app/services/predict_service.py (7)
52-57: Add a lightweight shape guard after model.predict to avoid silent shape mismatches
If the model returns a different shape than sequences, the subtraction will raise or produce incorrect results. Add a guard with a clear log to ease debugging.
```diff
 reconstructed_sequences = model.predict(sequences)
+if getattr(reconstructed_sequences, "shape", None) != sequences.shape:
+    logger.error(
+        "Model output shape mismatch: expected %s, got %s",
+        sequences.shape, getattr(reconstructed_sequences, "shape", None)
+    )
+    raise ValueError("Model output shape mismatch")
```
74-76: Extract probability threshold to a named constant or config
Hardcoding 0.05 makes tuning harder in ops. Promote it to a constant (or settings/env) for clarity and runtime configurability.
```diff
-    is_fault = fault_probability > 0.05
+    is_fault = fault_probability > PROBABILITY_THRESHOLD
```
Outside this range (near the module header):
```python
PROBABILITY_THRESHOLD = 0.05  # TODO: make configurable via settings/env
```
65-66: Fix comment typo
Minor nit: spelling of "threshold".
```diff
-    # 오차(error)가 임곗값(thresshold)을 초과하는 시퀀스의 개수
+    # 오차(error)가 임곗값(threshold)을 초과하는 시퀀스의 개수
```
105-111: Vectorize flatten to avoid Python loops
This can be expressed with a single slice for clarity and speed.
```diff
 def flatten(X):
-    if X.shape[1] < 1:
-        return np.empty((X.shape[0], X.shape[2]))
-    flattened = np.empty((X.shape[0], X.shape[2]))
-    for i in range(X.shape[0]):
-        flattened[i] = X[i, X.shape[1] - 1, :]
-    return flattened
+    if X.shape[1] == 0:
+        return np.empty((X.shape[0], X.shape[2]))
+    return X[:, -1, :]
```
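A quick equivalence check of the loop versus the slice, assuming NumPy is available (the toy array shape is made up for illustration):

```python
import numpy as np

def flatten_loop(X):
    # Original loop-based version: copy the last timestep of each sequence.
    flattened = np.empty((X.shape[0], X.shape[2]))
    for i in range(X.shape[0]):
        flattened[i] = X[i, X.shape[1] - 1, :]
    return flattened

def flatten_vec(X):
    # Vectorized version: last timestep of every sequence in one slice.
    return X[:, -1, :]

X = np.arange(24, dtype=float).reshape(2, 3, 4)  # (batch, time, features)
```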
114-123: Refactor create_sequences to sliding_window_view for readability and safety
Equivalent behavior without manual stride math.
```diff
 def create_sequences(data, seq_length):
     """2D배열 데이터로부터 슬라이딩 윈도우 방식으로 시퀀스를 생성"""
     if len(data) < seq_length:
         return np.array([])
-    sequences = len(data) - seq_length + 1
-    return as_strided(
-        data,
-        shape=(sequences, seq_length, data.shape[1]),
-        strides=(data.strides[0], data.strides[0], data.strides[1]),
-    )
+    # data: (N, F) -> windows: (N - L + 1, L, F)
+    windows = sliding_window_view(data, window_shape=(seq_length, data.shape[1]))
+    return windows.reshape(-1, seq_length, data.shape[1])
```
Note: if your numpy version lacks sliding_window_view, keep the current code.
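The suggested refactor can be sanity-checked on a small array; each window should match the corresponding manual slice of the input (sample data below is made up):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def create_sequences(data, seq_length):
    """Sliding windows over the first axis of a 2-D array -> (N-L+1, L, F)."""
    if len(data) < seq_length:
        return np.array([])
    windows = sliding_window_view(data, window_shape=(seq_length, data.shape[1]))
    return windows.reshape(-1, seq_length, data.shape[1])

data = np.arange(10, dtype=float).reshape(5, 2)  # 5 rows, 2 features
seqs = create_sequences(data, 3)                 # expect 5 - 3 + 1 = 3 windows
```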
12-20: Update the docstring to reflect the new reconstruction_error definition
The docstring should document that reconstruction_error now reports the mean of per-sequence last-step MSE (previously max), to avoid ambiguity for API consumers.
```diff
-def predict_press_fault(data: SensorData) -> dict:
-    """_summary_
-    입력된 센서 데이터를 기반으로 고장 예측
-    Args:
+def predict_press_fault(data: SensorData) -> dict:
+    """Press fault prediction based on input sensor data.
+    입력된 센서 데이터를 기반으로 고장 예측을 수행합니다.
+
+    Args:
         data (SensorData): 상/하부 진동과 전류 센서 데이터
-
-    Returns:
-        dict: 예측한 정상/고장 값, 재구성 오차 값, is_fault boolean 값
+    Returns:
+        dict: 예측 결과(정상/고장), 재구성 오차 값, is_fault 여부, 고장 확률 등.
+    Notes:
+        - reconstruction_error는 시퀀스별(last-step) MSE의 평균(mean)입니다.
+          (이전 버전은 최대값(max)이었으므로 소비처에서 확인 필요)
     """
```
6-6: Use `sliding_window_view` for safer, clearer windowing (numpy >= 1.20 is available)
I've confirmed that in `services/press-fault-detection-model-service/requirements.txt` numpy is pinned at 1.26.4, so `sliding_window_view` is supported and safe to use. Recommended change:
```diff
-from numpy.lib.stride_tricks import as_strided
+from numpy.lib.stride_tricks import sliding_window_view
```
This makes the intent explicit and avoids the pitfalls of manually managing strides.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
services/press-fault-detection-model-service/app/services/predict_service.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
services/press-fault-detection-model-service/app/services/predict_service.py (1)
services/press-fault-detection-model-service/tests/test_schemas_output.py (2)
- test_model_copy_with_updates (451-467)
- test_negative_reconstruction_error (263-273)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: test-and-lint (press-fault-data-simulator-service)
- GitHub Check: test-and-lint (press-fault-detection-model-service)
🔇 Additional comments (1)
services/press-fault-detection-model-service/app/services/predict_service.py (1)
77-92: Confirm external API consumer expectations or add backward-compatible field
- Internal scan found no references to a "max" aggregation for `reconstruction_error` in docs, tests, or other services; nothing in the repository assumes the old max-of-errors semantics.
- However, any external clients or dashboards that consume this API might still expect the previous "max" behavior. Please verify with those consumers before removing it.
- If you need to maintain backward compatibility, consider preserving the max error under `reconstruction_error` and exposing the mean as a new field (e.g. `reconstruction_error_mean`), or ship this change behind a versioned contract.
- If you intentionally prefer mean-only, add explicit metadata (for example, `"reconstruction_error_agg": "mean"`) so downstream code knows which aggregation was used.

Suggested optional diff:
```diff
@@ services/press-fault-detection-model-service/app/services/predict_service.py
-    mean_error = np.mean(errors) if total_sequences > 0 else 0.0
+    mean_error = np.mean(errors) if total_sequences > 0 else 0.0
+    max_error = np.max(errors) if total_sequences > 0 else 0.0
@@ services/press-fault-detection-model-service/app/services/predict_service.py
-    "reconstruction_error": float(mean_error),
+    "reconstruction_error": float(max_error),  # preserved for backward compatibility
+    "reconstruction_error_mean": float(mean_error),  # new metric
```
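As a concrete illustration of the dual-field payload the reviewer proposes, here is a minimal sketch; the field names mirror the suggestion and the error values are made up, not real model output:

```python
# Per-sequence reconstruction errors (hypothetical values for illustration).
errors = [0.01, 0.02, 0.90]

payload = {
    "reconstruction_error": max(errors),                     # backward-compatible max
    "reconstruction_error_mean": sum(errors) / len(errors),  # new mean metric
}
```

With a single large outlier, max and mean diverge sharply, which is exactly why downstream consumers need to know which aggregation a given field carries.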
✨ Description
✅ Task
✅ Test Result
📸 Screenshot (optional)
📌 Etc
Summary by CodeRabbit
New Features
Refactor
Bug Fixes
Changes