
feat/24/welding-machine-data-simulator-service, welding-machine-data-simulator-service: Spring Boot gateway integration & simulator update #69

Merged

TaeHyunaivle merged 1 commit into main from feat/24/welding-simulator-improve on Aug 27, 2025
Conversation

@dahxxn (Member) commented Aug 27, 2025

✨ Description

  • Simulator–gateway–Spring Boot integration: added a communication layer so the simulator sends the welding-machine sensor data it collects to the API via the gateway
  • Automatic per-environment URL switching: gateway and model service URLs are selected automatically for the local / docker / kubernetes / production environments
  • Improved initial health check: checks not only the model service but also the Spring Boot health endpoint, improving startup stability
  • Refactored periodic job logic: reorganized the flow to collect data periodically and then send it to Spring Boot (current/vibration sent individually, with stronger retries/logging)
  • CORS and routing cleanup: CORS allowed for the (local) frontend and router prefixes unified
  • Improved operational visibility: more detailed runtime logs for start/stop/transmission status, target URLs, etc.
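The per-environment URL switching described above can be sketched as follows; the environment names match the PR, but the URLs, function names, and detection rules here are illustrative assumptions, not the actual settings.py values:

```python
import os
from typing import Optional

# Illustrative environment → gateway URL map (values are placeholders).
GATEWAY_URLS = {
    "local": "http://localhost:8080",
    "docker": "http://gateway-service:8080",
    "kubernetes": "http://gateway-service.default.svc.cluster.local:8080",
    "production": "https://gateway.example.com",
}

def detect_environment() -> str:
    """Use an explicit ENVIRONMENT variable if set, else auto-detect."""
    explicit = os.getenv("ENVIRONMENT")
    if explicit:
        return explicit
    if os.getenv("KUBERNETES_SERVICE_HOST"):
        return "kubernetes"
    return "local"  # safe default for development

def gateway_url(environment: Optional[str] = None) -> str:
    """Resolve the gateway base URL for the given (or detected) environment."""
    env = environment or detect_environment()
    return GATEWAY_URLS.get(env, GATEWAY_URLS["local"])
```

Falling back to the local URL for unknown environments avoids accidentally routing development traffic to production.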

✅ TO-DO

  • Settings (environment/endpoints):
    services/welding-machine-data-simulator-service/app/config/settings.py

    • Added per-environment gateway/model URL mapping, production defaults, Spring endpoints (health, welding_data), and environment auto-detection logic
  • FastAPI app configuration (CORS/routing/lifecycle):
    services/welding-machine-data-simulator-service/app/main.py

    • CORS allowed for the local frontend domain; /simulator and /test prefix routing; log directory preparation and safe scheduler shutdown applied in lifespan
  • Scheduler (collect→send flow):
    services/welding-machine-data-simulator-service/app/services/scheduler_service.py

    • Initial health check now includes Spring Boot; added individual transmission methods for current/vibration data; strengthened retries, exception handling, and logging; status query now includes the Spring Boot URL/transmission mode
  • Spring Boot client (via gateway):
    services/welding-machine-data-simulator-service/app/services/spring_client.py

    • httpx async client; retry/timeout/status-code handling; sensor data DTO (payload) construction; /health check implementation
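The retry behavior listed above for the client could be sketched like this; `send_with_retry` and its parameters are hypothetical names, and the real spring_client wraps httpx.AsyncClient with settings-driven timeout/retry values:

```python
import asyncio

async def send_with_retry(send, payload, max_retries=3, backoff=0.5):
    """Call the async `send(payload)` callable, retrying on failure.

    Assumes max_retries >= 1; re-raises the last error when exhausted.
    """
    last_error = None
    for attempt in range(1, max_retries + 1):
        try:
            return await send(payload)
        except Exception as exc:  # real code would catch httpx.HTTPError etc.
            last_error = exc
            if attempt < max_retries:
                await asyncio.sleep(backoff * attempt)  # linear backoff
    raise last_error
```

Injecting the `send` callable keeps the retry policy testable without a running gateway.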

🔗 Reference

  • None

📌 Etc

  • Example environment variables: ENVIRONMENT, AZURE_CONNECTION_STRING
  • The target transmission endpoint is configured to go through the gateway

Summary by CodeRabbit

  • New Features

    • Environment-aware configuration with auto-detection and dynamic service URLs.
    • CORS enabled for http://localhost:3000.
    • Integration with a gateway-backed Spring service for sensor data transmission, with retries and health checks.
  • Refactor

    • Simulator now sends data to the Spring service instead of using model predictions; status output updated accordingly.
    • Routers mounted under /simulator and /test; existing endpoints and behavior remain unchanged.

Add spring_client for communication with the Spring Boot gateway
@coderabbitai (coderabbitai bot, Contributor) commented Aug 27, 2025

Walkthrough

Introduces environment-aware settings with dynamic service URL selection, adds CORS in FastAPI app, replaces model-based data handling with a Spring Boot gateway client, updates scheduler to transmit sensor data to Spring Boot and adjust health/status flows, and implements a new HTTP client with retries and health checks.

Changes

Cohort / File(s), with summary of changes:

  • Environment-driven settings: services/welding-machine-data-simulator-service/app/config/settings.py
    Added environment field and per-environment gateway/model URLs; auto-detection in init; computed properties for selected URLs and endpoints; environment helpers; reorganized HTTP settings with new Spring Boot timeout/retry fields.
  • FastAPI app and CORS: services/welding-machine-data-simulator-service/app/main.py
    Added CORSMiddleware with localhost origin; mounted routers with explicit prefixes; minor formatting/comment adjustments; removed some endpoint docstrings without changing behavior.
  • Scheduler switched to Spring Boot transmission: services/welding-machine-data-simulator-service/app/services/scheduler_service.py
    Replaced model prediction flow with Spring Boot data sends; added per-signal transmission helper; updated health checks to use Spring client; revised start/stop logs and status fields; adjusted error/cancellation handling.
  • New Spring Boot HTTP client: services/welding-machine-data-simulator-service/app/services/spring_client.py
    Implemented async client with retries for sending sensor data via gateway; health_check method; helpers for machine ID extraction, timestamp formatting, and sensor value mapping; exported singleton spring_client.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User as Operator
  participant App as FastAPI App
  participant Sched as SchedulerService
  participant SBC as SpringBootClient
  participant GW as API Gateway
  participant SB as Spring Boot Service

  User->>App: Start simulator
  App->>Sched: start()
  note right of Sched: Logs Spring Boot URL<br/>Initial health check
  Sched->>SBC: health_check()
  SBC->>GW: GET /health (gateway headers)
  GW->>SB: Proxy GET /health
  SB-->>GW: 200 OK / status
  GW-->>SBC: 200 OK
  SBC-->>Sched: True/False

  loop Each tick
    Sched->>Sched: Generate current/vibration data
    par Send current
      Sched->>SBC: send_sensor_data(current)
      SBC->>GW: POST /welding_data
      GW->>SB: Proxy POST /welding_data
      SB-->>GW: 200/5xx/4xx
      GW-->>SBC: Response
      alt 5xx/timeout/error
        SBC-->>SBC: Retry (<= max_retries)
      end
    and Send vibration
      Sched->>SBC: send_sensor_data(vibration)
      SBC->>GW: POST /welding_data
      GW->>SB: Proxy POST /welding_data
      SB-->>GW: Response
      GW-->>SBC: Response
    end
    SBC-->>Sched: Results per signal
    Sched->>Sched: Log success/failure
  end
sequenceDiagram
  autonumber
  participant Sched as SchedulerService
  participant SBC as SpringBootClient

  rect rgba(230,245,255,0.7)
  note over Sched,SBC: Startup Health Check (new/updated flow)
  Sched->>SBC: health_check()
  SBC-->>Sched: True/False (no exception raised)
  end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

I twitch my ears at envs that shift,
From local burrows to clouds that drift—
I hop through gateways, swift and bright,
Send sparks of data into night.
With retries packed in my backpack too,
“UP,” says Spring—so on I chew.
Thump-thump: deploy anew! 🐇⚙️




@dahxxn changed the title from "feat: add spring_client.py for gateway" to "feat/24/welding-machine-data-simulator-service, welding-machine-data-simulator-service: Spring Boot gateway integration & simulator update" on Aug 27, 2025
@github-actions

🤖 Smart FAST automated code review

📦 Changed services (2):

  • press-defect-detection-model-service
  • welding-machine-data-simulator-service

🔍 Services detected via Git diff

📊 Overall check results:

⚠️ Issues found in some checks - review required

🧪 Test & code quality results:

⚠️ Test & lint status unknown

  • Total tests: 3
  • Passed: 0 ✅
  • Failed: 3 ❌
  • Coverage: 0%
  • Lint errors: 4
  • Lint warnings: 0

🔍 Failed tests:

  • press-defect-detection-model-service

🔒 Security scan results:

⚠️ Security scan status unknown

  • No security issues were found ✅

✅ Check checklist:

  • ❌ Tests failed
  • ❌ Lint errors found
  • ❌ Security issues found

🚀 Deployment info:

  • ⚠️ Checks failed - deployment possible after resolving the issues
    • 🧪 Test issues need to be resolved
    • 🔒 Security issues need to be resolved
  • 📋 Detailed logs available in the Actions tab

@coderabbitai (coderabbitai bot) left a comment

Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
services/welding-machine-data-simulator-service/app/config/settings.py (1)

10-13: Make azure_connection_string optional or Settings() may fail at import

With Pydantic BaseSettings, a bare str is required and will raise if ENV is missing. You print a warning in main.py, so this should be optional.

-from typing import List, Dict
+from typing import List, Dict, Optional
@@
-    azure_connection_string: str
+    azure_connection_string: Optional[str] = None
services/welding-machine-data-simulator-service/app/services/scheduler_service.py (1)

22-28: Refactor azure_storage.connect to handle missing connection string gracefully

The current implementation in
services/welding-machine-data-simulator-service/app/services/azure_storage.py (and its siblings in painting-process and press-fault simulators) unconditionally raises when settings.azure_connection_string is empty:

async def connect(self):
    """Azure Blob Storage 클라이언트 초기화"""
    if not self.connection_string:
        raise ValueError("Azure connection string이 설정되지 않았습니다.")
    …

Since Settings now allows azure_connection_string to be absent, connect() must degrade to a no-op (e.g., local-only mode) rather than throwing.

Please update all three service implementations accordingly (welding-machine, painting-process, press-fault):

• services/welding-machine-data-simulator-service/app/services/azure_storage.py
• services/painting-process-data-simulator-service/app/services/azure_storage.py
• services/press-fault-data-simulator-service/app/services/azure_storage_service.py

Suggested diff for each:

 async def connect(self):
     """Azure Blob Storage 클라이언트 초기화"""
-    if not self.connection_string:
-        raise ValueError("Azure connection string이 설정되지 않았습니다.")
+    if not self.connection_string:
+        # Missing connection string → run in local-only mode
+        logger.warning(
+            "azure_connection_string not set; skipping Azure Blob Storage and using local data only"
+        )
+        return
     # …proceed with normal initialization…

This ensures scheduler_service can always call await azure_storage.connect() without wrapping in try/catch and will silently fall back to local-only behavior when credentials are absent.

🧹 Nitpick comments (11)
services/welding-machine-data-simulator-service/app/config/settings.py (2)

103-112: 'status' endpoint duplicates 'welding_data'

If a status endpoint exists, map it; otherwise remove to avoid confusion.

Do you actually need a separate status URL? If not, delete the key. If yes, update it to the correct path (e.g., /actuator/info or a dedicated status route).


124-130: Instantiation on import + required fields can abort app startup

Given settings = Settings() runs at import time, making any required field optional (done above) prevents hard failures during local/dev usage. Consider adding lightweight validation in lifespan/startup instead of constructor-time throws.

services/welding-machine-data-simulator-service/app/main.py (2)

21-24: Startup log still references model services; align with Spring Boot gateway

Replace service-count log with the actual Spring gateway target to reduce confusion.

-    print(f"🎯 대상 서비스 수: {len(settings.model_services)}")
+    print(f"🎯 대상 게이트웨이: {settings.spring_boot_service_url}")

39-47: CORS: derive origins by environment; avoid broad perms in non-local

Hard-coding localhost is fine for dev, but make it env-driven for other tiers. Keep credentials only if required.

-# ✅ CORS (개발 환경)
-origins = ["http://localhost:3000"]
+# ✅ CORS (환경별)
+origins = (
+    ["http://localhost:3000"]
+    if settings.is_local_environment
+    else [o for o in os.getenv("ALLOWED_ORIGINS", "").split(",") if o]
+)
 app.add_middleware(
     CORSMiddleware,
-    allow_origins=origins,      # 프론트 도메인 명시
-    allow_credentials=True,     # 쿠키/인증 헤더 사용 시 True
+    allow_origins=origins,
+    allow_credentials=True,
     allow_methods=["*"],        # 또는 ["GET","POST","OPTIONS",...]
     allow_headers=["*"],
 )
services/welding-machine-data-simulator-service/app/services/scheduler_service.py (2)

30-37: Prevent overlapping runs

If a run exceeds the interval, jobs can overlap. Limit concurrency per job and coalesce.

 self.scheduler.add_job(
     func=self._simulate_data_collection,
     trigger=IntervalTrigger(
         minutes=settings.scheduler_interval_minutes),
     id='data_simulation',
     name='Data Collection Simulation',
-    replace_existing=True
+    replace_existing=True,
+    max_instances=1,
+    coalesce=True,
+    misfire_grace_time=30,
 )

106-115: Surface more robust per-signal result details

result.get('message', 'OK') may be absent. Consider printing HTTP status or known fields, and handle non-dict responses gracefully.
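A hedged sketch of such handling (the helper name and fallback fields are illustrative, not part of the PR):

```python
def describe_result(result) -> str:
    """Summarize a per-signal send result without assuming it is a dict."""
    if isinstance(result, dict):
        return str(result.get("message") or result.get("status") or "OK")
    return f"unexpected response type: {type(result).__name__}"
```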

services/welding-machine-data-simulator-service/app/services/spring_client.py (5)

112-114: Ruff F541: f-string without placeholders

Remove the unnecessary f-prefix.

-        logger.error(f"게이트웨이를 통한 스프링부트 전송 실패 - 모든 재시도 완료")
+        logger.error("게이트웨이를 통한 스프링부트 전송 실패 - 모든 재시도 완료")

181-191: Ruff E722: avoid bare except

Scope the exception type to avoid masking errors.

-                    except:
+                    except Exception:

173-179: Use configured timeout in health check

Keep timeouts consistent (or cap it), instead of a hard-coded 5s.

-            async with httpx.AsyncClient(timeout=5) as client:
+            async with httpx.AsyncClient(timeout=min(self.timeout, 5)) as client:

15-19: Unused attribute: self.base_url

You compute endpoints from settings each call; drop this field to avoid confusion or use it consistently.

-        # spring_boot_service_url → gateway_service_url 변경
-        self.base_url = settings.gateway_service_url
         self.timeout = settings.spring_boot_timeout
         self.max_retries = settings.spring_boot_max_retries

20-44: DTO construction: KeyError risk if fields missing

sensor_data["signal_type"] / ["values"] will KeyError. Scheduler currently ensures they exist, but consider defensive .get(..., defaults) if this client is reused elsewhere.
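A defensive variant of the payload construction might look like this; the camelCase keys follow the payload fields quoted elsewhere in this review, while the helper name and default values are illustrative assumptions:

```python
from datetime import datetime, timezone

def build_payload(sensor_data: dict) -> dict:
    """Build the DTO using .get() defaults instead of risking KeyError."""
    return {
        "machineId": sensor_data.get("machine_id", "UNKNOWN"),
        "timestamp": sensor_data.get(
            "timestamp", datetime.now(timezone.utc).isoformat()),
        "signalType": sensor_data.get("signal_type", "unknown"),
        "sensorValues": sensor_data.get("values", []),
    }
```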

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between abddb82 and 9aa21ef.

📒 Files selected for processing (4)
  • services/welding-machine-data-simulator-service/app/config/settings.py (2 hunks)
  • services/welding-machine-data-simulator-service/app/main.py (2 hunks)
  • services/welding-machine-data-simulator-service/app/services/scheduler_service.py (5 hunks)
  • services/welding-machine-data-simulator-service/app/services/spring_client.py (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
services/welding-machine-data-simulator-service/app/services/spring_client.py (1)
services/welding-machine-data-simulator-service/app/config/settings.py (2)
  • gateway_service_url (68-77)
  • spring_boot_endpoints (104-112)
services/welding-machine-data-simulator-service/app/services/scheduler_service.py (3)
services/welding-machine-data-simulator-service/app/config/settings.py (1)
  • spring_boot_service_url (99-101)
services/welding-machine-data-simulator-service/app/services/azure_storage.py (1)
  • disconnect (29-32)
services/welding-machine-data-simulator-service/app/services/spring_client.py (2)
  • health_check (163-198)
  • send_sensor_data (20-114)
🪛 Ruff (0.12.2)
services/welding-machine-data-simulator-service/app/services/spring_client.py

113-113: f-string without any placeholders

Remove extraneous f prefix

(F541)


187-187: Do not use bare except

(E722)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: test-and-lint (press-defect-detection-model-service)
🔇 Additional comments (2)
services/welding-machine-data-simulator-service/app/main.py (1)

51-53: Router prefixes look good

Clear, consistent prefixes for simulator and test routes.

services/welding-machine-data-simulator-service/app/services/scheduler_service.py (1)

42-45: Good: startup logs include Spring Boot target

Helpful for ops. Keep.

Comment on lines +7 to +9
# ✅ 환경 구분 (기본값을 local로 변경)
environment: str = "production" # 기본값을 다시 local로 변경


🛠️ Refactor suggestion

Default env should be 'local' (not 'production') to avoid accidental prod traffic

Comment and value conflict. Use a safe default.

-    environment: str = "production"  # 기본값을 다시 local로 변경
+    environment: str = "local"  # 안전한 기본값 (개발)
🤖 Prompt for AI Agents
In services/welding-machine-data-simulator-service/app/config/settings.py around
lines 7 to 9, the environment default is set to "production" while the comment
indicates it should be "local"; change the hardcoded default value to "local" so
the line reads environment: str = "local" (or adjust the configuration source to
default to "local") to ensure a safe non-production default and keep the comment
and value consistent.

Comment on lines +52 to +66
def __init__(self, **kwargs):
super().__init__(**kwargs)
# ✅ 명시적으로 environment가 설정되지 않은 경우에만 자동 감지
if not kwargs.get('environment') and not os.getenv('ENVIRONMENT'):
# 특정 조건에서만 production으로 감지 (더 엄격한 조건)
if (hasattr(self, 'azure_connection_string') and
self.azure_connection_string and
self.azure_connection_string != "your_azure_connection_string" and
"localhost" not in self.azure_connection_string and
# 추가 조건: Kubernetes 환경 변수 존재 여부
(os.getenv('KUBERNETES_SERVICE_HOST') or os.getenv('WEBSITE_HOSTNAME'))):
self.environment = "production"
else:
self.environment = "local"


⚠️ Potential issue

Kubernetes auto-detection currently sets environment='production' → wrong URL map

When KUBERNETES_SERVICE_HOST is present, you should select 'kubernetes' to use in-cluster URLs. Today this will route pods to the external prod IP.

-        if not kwargs.get('environment') and not os.getenv('ENVIRONMENT'):
-            # 특정 조건에서만 production으로 감지 (더 엄격한 조건)
-            if (hasattr(self, 'azure_connection_string') and
-                self.azure_connection_string and
-                self.azure_connection_string != "your_azure_connection_string" and
-                "localhost" not in self.azure_connection_string and
-                # 추가 조건: Kubernetes 환경 변수 존재 여부
-                    (os.getenv('KUBERNETES_SERVICE_HOST') or os.getenv('WEBSITE_HOSTNAME'))):
-                self.environment = "production"
-            else:
-                self.environment = "local"
+        if not kwargs.get('environment') and not os.getenv('ENVIRONMENT'):
+            # 우선순위: Kubernetes → Production(웹앱 등) → Local
+            if os.getenv('KUBERNETES_SERVICE_HOST'):
+                self.environment = "kubernetes"
+            elif (
+                self.azure_connection_string
+                and self.azure_connection_string != "your_azure_connection_string"
+                and "localhost" not in self.azure_connection_string
+            ) or os.getenv('WEBSITE_HOSTNAME'):
+                self.environment = "production"
+            else:
+                self.environment = "local"
🤖 Prompt for AI Agents
In services/welding-machine-data-simulator-service/app/config/settings.py around
lines 52 to 66, the auto-detection sets environment="production" whenever
KUBERNETES_SERVICE_HOST is present which causes pods to use external prod URLs;
change the logic so that if KUBERNETES_SERVICE_HOST is set you assign
environment="kubernetes" (to use in-cluster URLs) before falling back to
production (e.g. when WEBSITE_HOSTNAME indicates an App Service) and finally to
"local"; keep the guard so this auto-detection only runs when no explicit
kwargs['environment'] or ENVIRONMENT is set.

Comment on lines +127 to +191
async def _send_data_to_spring_boot(self, current_data: dict, vibration_data: dict) -> dict:
"""
수집된 데이터를 스프링부트 서비스로 전송

Args:
current_data: 전류 센서 데이터
vibration_data: 진동 센서 데이터

Returns:
전송 결과 딕셔너리
"""
results = {}
timestamp = datetime.now().isoformat()
machine_id = "WELDING_MACHINE_001"

try:
# 전류 데이터 전송
current_sensor_data = {
"signal_type": current_data["signal_type"],
"values": current_data["values"],
"machine_id": machine_id,
"timestamp": timestamp
}

print(
f"📤 전류 데이터 전송 중... (signal_type: {current_data['signal_type']})")
current_result = await spring_client.send_sensor_data(current_sensor_data)
results["current"] = current_result

# 잠시 대기 (API 호출 간격)
try:
await asyncio.sleep(0.5)
except asyncio.CancelledError:
print("⚠️ 작업이 취소되었습니다.")
return {}

# 진동 데이터 전송
vibration_sensor_data = {
"signal_type": vibration_data["signal_type"],
"values": vibration_data["values"],
"machine_id": machine_id,
"timestamp": timestamp
}

print(
f"📤 진동 데이터 전송 중... (signal_type: {vibration_data['signal_type']})")
vibration_result = await spring_client.send_sensor_data(vibration_sensor_data)
results["vibration"] = vibration_result

return results

except asyncio.CancelledError:
print("⚠️ 스프링부트 데이터 전송이 취소되었습니다.")
return {}
except Exception as e:
print(f"❌ 스프링부트 데이터 전송 실패: {str(e)}")
# 전송 실패시 로컬 로그에 기록 (백업용)
anomaly_logger.log_error("spring-boot-transmission", {
"error": str(e),
"current_data_size": len(current_data.get("values", [])),
"vibration_data_size": len(vibration_data.get("values", [])),
"timestamp": timestamp
})
return {}


🛠️ Refactor suggestion

Don’t drop partial successes on multi-signal send

If current succeeds and vibration fails, the blanket except returns {} and loses success info. Return accumulated results.

-        except Exception as e:
+        except Exception as e:
             print(f"❌ 스프링부트 데이터 전송 실패: {str(e)}")
             # 전송 실패시 로컬 로그에 기록 (백업용)
             anomaly_logger.log_error("spring-boot-transmission", {
                 "error": str(e),
                 "current_data_size": len(current_data.get("values", [])),
                 "vibration_data_size": len(vibration_data.get("values", [])),
                 "timestamp": timestamp
             })
-            return {}
+            # 이미 전송된 결과가 있으면 그대로 반환하여 부분 성공을 보존
+            return results

Comment on lines +55 to +57
# DEBUG -> INFO로 변경하여 로그에서 확인
logger.info(f"Payload: {payload}")


🛠️ Refactor suggestion

Log sanitized payload at DEBUG, not full body at INFO

Avoid huge logs and potential data leakage. Log a preview and move to DEBUG.

-                    # DEBUG -> INFO로 변경하여 로그에서 확인
-                    logger.info(f"Payload: {payload}")
+                    # 대용량/민감 데이터는 요약만 DEBUG로 출력
+                    payload_preview = {
+                        "machineId": payload["machineId"],
+                        "timestamp": payload["timestamp"],
+                        "signalType": payload["signalType"],
+                        "sensorValues": f"<{len(payload['sensorValues'])} values>"
+                    }
+                    logger.debug(f"Payload: {payload_preview}")
🤖 Prompt for AI Agents
In services/welding-machine-data-simulator-service/app/services/spring_client.py
around lines 55 to 57, the code currently logs the full payload at INFO which
can leak sensitive data and create huge logs; change this to log a sanitized
preview at DEBUG instead. Replace the logger.info call with logger.debug, log
only a truncated/preview string (e.g., first N chars) or a sanitized copy where
sensitive fields are masked, and keep the full payload out of INFO logs; if
needed, add a small helper to produce the preview/sanitized dict before logging.

@TaeHyunaivle TaeHyunaivle merged commit dbfad2e into main Aug 27, 2025
10 of 11 checks passed
