
when subscriber can not get argo pod logs, make pod log message intuitive #4926

Open

kwx4957 wants to merge 12 commits into master from feat/pod-log-intutive
Conversation

@kwx4957 (Contributor) commented Oct 19, 2024

Proposed changes

Fixes: #4532

When the subscriber fails to get logs from the workflow pod or the CRD, it always returns the same message, which is not intuitive for users.

This change fetches the phase of the chaos experiment run and returns a different log message for each phase.

One problem remains: I'm not sure what the best wording for each message is.

	switch phase {
	case "Completed":
		return "Experiment pod was deleted."
	case "Stopped":
		return "Experiment is stopped."
	case "Running":
		return "Experiment pod is initializing."
	case "Queue":
		return "Queue."
	case "NA":
		return "NA."
	case "Terminated":
		return "Terminated."
	case "Completed_With_Error":
		return "Completed_With_Error."
	case "Timeout":
		return "Timeout."
	case "Error":
		return "Experiment could not start."
	default:
		return "Experiment pod is initializing."
	}
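For reference, the mapping above can be exercised as a standalone function. This is a simplified sketch, not the PR's actual code: the real method is defined on `*k8sSubscriber`, and the placeholder cases ("Queue", "NA", "Terminated", "Completed_With_Error", "Timeout") are collapsed into the default branch here.

```go
package main

import "fmt"

// categorizeLogByPhase maps an experiment-run phase to a human-readable
// log message (simplified sketch of the switch discussed in this PR).
func categorizeLogByPhase(phase string) string {
	switch phase {
	case "Completed":
		return "Experiment pod was deleted."
	case "Stopped":
		return "Experiment is stopped."
	case "Error":
		return "Experiment could not start."
	default:
		// "Running" and the remaining placeholder phases fall back to
		// a generic initializing message in this sketch.
		return "Experiment pod is initializing."
	}
}

func main() {
	for _, phase := range []string{"Completed", "Error", "Running"} {
		fmt.Printf("%s -> %s\n", phase, categorizeLogByPhase(phase))
	}
}
```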
  • Completed — [screenshot]
  • Stopped — [screenshot]
  • Running — when the workflow pod isn't ready vs. when it is ready, the log changes [screenshots]

@kwx4957 changed the title from "when experiment run can not get argo pod logs, make pod log message intuitive" to "when subscriber can not get argo pod logs, make pod log message intuitive" on Oct 19, 2024
@namkyu1999 (Member) commented:

Can you show us result screenshots? @kwx4957

@kwx4957 (Contributor, Author) commented Oct 19, 2024:

Yes, I will add them.

@S-ayanide added the need-approvers-review label (reminder label to the codeowners/maintainers for stale PRs that are more than 2 weeks old) on Nov 13, 2024
Review comment on:

	func (k8s *k8sSubscriber) categorizeLogByPhase(phase string) string {
		switch phase {
		case "Completed":
			return "Experiment pod is deleted"

@Saranya-jena (Contributor) commented:

There can be a situation where users haven't added the cleanup chaos step; in that case, the experiment resources won't be deleted. We have to handle that situation as well.

@kwx4957 (Contributor, Author) commented Nov 23, 2024:

@Saranya-jena I think this function will not work in that case, because it only gets data from MongoDB when fetching the pod logs fails, but I will test that case.
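The fallback path being discussed can be sketched roughly as follows. This is an illustration only: `fetchPodLogs` and `getPhaseFromDB` are hypothetical stand-ins, not the project's actual API, and the point is simply that the phase lookup happens only after the log fetch fails.

```go
package main

import (
	"errors"
	"fmt"
)

// fetchPodLogs is a hypothetical stand-in for the call that streams logs
// from the workflow pod; here it always fails so the fallback is exercised.
func fetchPodLogs(pod string) (string, error) {
	return "", errors.New("pod not found")
}

// getPhaseFromDB is a hypothetical stand-in for looking up the stored phase
// of the experiment run (the PR discussion mentions MongoDB as the source).
func getPhaseFromDB(runID string) string {
	return "Stopped"
}

// podLogMessage returns real logs when available; otherwise it falls back
// to a phase-specific message instead of a single generic error string.
func podLogMessage(pod, runID string) string {
	logs, err := fetchPodLogs(pod)
	if err == nil {
		return logs
	}
	switch getPhaseFromDB(runID) {
	case "Completed":
		return "Experiment pod was deleted."
	case "Stopped":
		return "Experiment is stopped."
	case "Error":
		return "Experiment could not start."
	default:
		return "Experiment pod is initializing."
	}
}

func main() {
	fmt.Println(podLogMessage("demo-pod", "run-1"))
}
```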

@kwx4957 (Contributor, Author) commented Nov 23, 2024:

@Saranya-jena I removed the podGC field and the cleanup chaos step. Is this the correct procedure? The pod logs are returned as expected.

[screenshot]

@kwx4957 force-pushed the feat/pod-log-intutive branch from 428deef to 53a51e1 on December 13, 2024
Labels: mentorship-kr, need-approvers-review (reminder label to the codeowners/maintainers for stale PRs that are more than 2 weeks old)

Projects: Status: In Review

Successfully merging this pull request may close these issues:

"Failed to get argo pod logs" is not intuitive

5 participants