
tests: boxes: Add tests for saving drafts. #1495

Open
Niloth-p wants to merge 5 commits into main from 1495-save-draft/pr

Conversation

Niloth-p
Collaborator

What does this PR do, and why?

Add 2 test functions:

  1. Drafting a message to a channel
  2. Drafting a message to a private narrow (one-to-one DM and group DM/huddle)

Each test case is exercised both with and without an existing saved draft.

The tests verify:

  • a call to display the save-draft-confirmation popup when a saved draft already exists
  • the created draft object
  • that the key is returned without creating a draft when recipients are invalid

These tests do not cover the _tidy_valid_recipients_and_notify_invalid_ones or session_draft_message functions; they are mocked. Tests for them would need to be added separately.

External discussion & connections

How did you test this?

  • Manually - Behavioral changes
  • Manually - Visual changes
  • Adapting existing automated tests
  • Adding automated tests for new behavior (or missing tests)
  • Existing automated tests should already cover this (only a refactor of tested code)

Self-review checklist for each commit

  • It is a minimal coherent idea
  • It has a commit summary following the documented style (title & body)
  • It has a commit summary describing the motivation and reasoning for the change
  • It individually passes linting and tests
  • It contains test additions for any new behavior
  • It flows clearly from a previous branch commit, and/or prepares for the next commit

@zulipbot zulipbot added the size: XL [Automatic label added by zulipbot] label May 11, 2024

@zormit zormit left a comment

Generally this looks fine to me. I left some comments on things that came to mind. I don't know enough about the codebase yet to give an approval, but given that the test is green and it looks to me like it's testing something useful, I probably would approve it if I had to :P

Hope that helps :)

Collaborator

@neiljp neiljp left a comment

I'll take another look later, but wanted to get this comment out that I'd drafted previously, before mentoring had started.

@Niloth-p Niloth-p force-pushed the 1495-save-draft/pr branch 2 times, most recently from 8a3f2b4 to 96bdbba Compare May 18, 2024 02:52
@Niloth-p
Collaborator Author

Updated to add the mocked_saved_drafts list; moved it outside both test functions to reduce redundancy.
Also switched from 'huddle' to 'dm_group' in the test ids and messages.

Comment on lines 236 to 237
assert not write_box.model.save_draft.called
assert not write_box.view.controller.save_draft_confirmation_popup.called
Member

NIT: Should we prefer using these instead?

write_box.model.save_draft.assert_not_called()
write_box.view.controller.save_draft_confirmation_popup.assert_not_called()

Collaborator

These are equivalent - or should be!

However, the great things about using assert with the core called-style values are:

  • pytest can dig through levels of code using assert, if assertions fail
  • called and similar results are values, not assertions in a function

The most useful aspect here, though, is that this form can be more easily extracted into parametrize blocks.
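As a self-contained illustration (not the PR's actual code; the save_draft dispatcher below is a hypothetical stand-in), this is roughly how the boolean .called values slot into a parametrize block:

import pytest
from unittest.mock import MagicMock

# Hypothetical stand-in for the behavior under test: pop up a confirmation
# if a draft is already saved this session, otherwise save directly.
def save_draft(box, draft, saved_draft_exists):
    if saved_draft_exists:
        box.view.controller.save_draft_confirmation_popup(draft)
    else:
        box.model.save_draft(draft)

@pytest.mark.parametrize(
    "saved_draft_exists, expect_popup_called, expect_save_called",
    [
        (True, True, False),
        (False, False, True),
    ],
)
def test_save_draft_dispatch(saved_draft_exists, expect_popup_called, expect_save_called):
    box = MagicMock()

    save_draft(box, {"type": "stream"}, saved_draft_exists)

    # Because .called is a plain boolean value, the expected outcome itself
    # can be parametrized, unlike assert_called_once_with()-style methods.
    assert box.view.controller.save_draft_confirmation_popup.called == expect_popup_called
    assert box.model.save_draft.called == expect_save_called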

Member

Hmm, the parametrization part makes sense. Should we stick to using .called from now on?

@rsashank
Member

rsashank commented Jun 5, 2024

LGTM! Great work @Niloth-p 👍

I'm trying to see if we can simplify the conditionals in the test by adding more to the parametrize, but I'm not sure.

@Niloth-p
Collaborator Author

Thank you for catching those details, @rsashank! Great review, that was very helpful.
I've made the requested changes.

Collaborator

@neiljp neiljp left a comment

@Niloth-p I was mid-review yesterday, so I'm just posting the notes I made plus a few more, since I see you've already updated this :)

Comment on lines 109 to 130
mocked_saved_drafts = [
case(None, id="no_saved_draft_exists"),
case(
StreamComposition(
type="stream",
to="Random Stream",
content="Random stream draft",
subject="Topic name",
read_by_sender=True,
),
id="saved_stream_draft_exists",
),
case(
PrivateComposition(
type="private",
to=[5140],
content="Random private draft",
read_by_sender=True,
),
id="saved_private_draft_exists",
),
]
Collaborator

I appreciate the intent to reduce repetition here.

That said, normally I would convert the input into a pytest fixture, which would mean you could skip the parametrize on the test function and put the name of the fixture in the list of test parameters.

Another advantage of converting this into a fixture, which we haven't done as much to date, is that you should be able to extract common mocking or other setup too.

(note that it's fine to have the fixture local to this file (i.e. not conftest.py), if it is specific to this set of tests)
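For illustration, a minimal sketch of that local-fixture approach (the fixture name and the plain-dict drafts are placeholders, not the PR's StreamComposition/PrivateComposition objects):

import pytest

# Local to this test module (i.e. not conftest.py), since it is specific to these tests.
@pytest.fixture(
    params=[
        None,
        {"type": "stream", "to": "Random Stream", "content": "Random stream draft",
         "subject": "Topic name", "read_by_sender": True},
        {"type": "private", "to": [5140], "content": "Random private draft",
         "read_by_sender": True},
    ],
    ids=["no_saved_draft_exists", "saved_stream_draft_exists", "saved_private_draft_exists"],
)
def saved_draft(request):
    # Any common mocking or other setup shared by these tests could live here too.
    return request.param

def test_save_draft_with_existing_drafts(saved_draft):
    # The test takes the fixture name as a parameter instead of a parametrize
    # decorator, and runs once per saved-draft scenario.
    ...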

Comment on lines 150 to 156
draft_composition = StreamComposition(
type="stream",
to="Current Stream",
content="Random message",
subject="Topic",
read_by_sender=True,
)
Collaborator

It may be worth noting here, and also asserting (in this test and the other one), that draft_composition is not equal to the current draft_saved_in_current_session - i.e. the test explicitly does not cover that case, at least right now.

That kind of assert can be useful to demonstrate that the test is doing what it is supposed to do.

Given the above, we may also want to add a test for the behavior of if the draft exists and matches separately.
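For example (using the names from the quoted snippet), a guard assertion along these lines would make that explicit:

# Guard: this test only covers a saved draft that differs from the new draft;
# the "draft exists and matches" case would be tested separately.
assert draft_composition != draft_saved_in_current_session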

Comment on lines 168 to 173
if draft_saved_in_current_session is not None:
write_box.view.controller.save_draft_confirmation_popup.assert_called_once_with(
draft_composition
)
else:
write_box.model.save_draft.assert_called_once_with(draft_composition)
Collaborator

As per Sashank's comment, it would be preferable to include these in the parametrize.
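As a rough sketch (the attribute paths and the draft_composition fixture are assumptions here, not the PR's code), the if/else could fold into the parametrize by naming which mock is expected to handle the draft:

from operator import attrgetter

import pytest

@pytest.mark.parametrize(
    "draft_saved_in_current_session, expected_handler",
    [
        (None, "model.save_draft"),
        ({"type": "stream"}, "view.controller.save_draft_confirmation_popup"),
    ],
)
def test_save_draft_handler(write_box, draft_composition,
                            draft_saved_in_current_session, expected_handler):
    # ... setup and the keypress that triggers saving would go here ...

    # Look up the mock that should have handled the draft and assert on it,
    # with no conditional branching in the test body.
    handler = attrgetter(expected_handler)(write_box)
    handler.assert_called_once_with(draft_composition)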

Comment on lines 235 to 238
if not is_every_recipient_valid:
write_box.model.save_draft.assert_not_called()
write_box.view.controller.save_draft_confirmation_popup.assert_not_called()
else:
Collaborator

To simplify this, since the test has a nested if, I'm wondering if it would be cleaner to extract this first part into a separate test?

That avoids a parametrize over is_every_recipient_valid.
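A rough sketch of what that separate test might look like (the keybinding and setup details are placeholders; the actual trigger in the PR may differ):

def test_save_draft_invalid_recipients_does_nothing(mocker, write_box):
    # Assumed setup: the (mocked) recipient-tidying helper reports invalid recipients.
    mocker.patch.object(
        write_box,
        "_tidy_valid_recipients_and_notify_invalid_ones",
        return_value=False,
    )

    write_box.keypress((80,), "meta d")  # placeholder key; use the actual save-draft key

    # Neither a direct save nor the confirmation popup should happen.
    assert not write_box.model.save_draft.called
    assert not write_box.view.controller.save_draft_confirmation_popup.called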

@neiljp neiljp added PR awaiting update PR has been reviewed & is awaiting update or response to reviewer feedback and removed PR needs review PR requires feedback to proceed labels Jun 13, 2024
@Niloth-p Niloth-p force-pushed the 1495-save-draft/pr branch from 8e3b967 to 88584cd Compare June 15, 2024 06:38
Test functions added:
- 2 test functions for stream compositions
  - when a draft previously exists, call the confirmation popup
  - when new draft matches saved draft, do not save again
- 3 test functions for private compositions
  - valid recipients,
  - invalid recipients,
  - new draft matches already saved draft

Fixtures added for:
- the saved drafts list
- the stream compositions list
- the private compositions list
- setup for private draft tests
- setup for stream draft tests

Factories added:
- composition factory (generates both stream and private compositions)
- saved draft factory
@Niloth-p Niloth-p force-pushed the 1495-save-draft/pr branch from 88584cd to 696eba6 Compare June 30, 2024 08:33
@Niloth-p Niloth-p added PR needs mentor review and removed PR awaiting update PR has been reviewed & is awaiting update or response to reviewer feedback labels Jul 22, 2024
@Niloth-p Niloth-p requested a review from zormit July 22, 2024 11:00
@zormit

zormit commented Aug 5, 2024

@Niloth-p I have tried simplifying your code and adding type annotations in https://github.com/zormit/zulip-terminal/tree/1495-save-draft/pr (713bf50, in case I change the branch). What do you think?

I'm still not fully happy with all the layers of indirection; it can probably be simplified even more, but this might give you an idea. I have to stop for now, so I'm sending you this idea; please take it if you like it. (edit: I realized the tests are not even all passing, but that seems fixable :))

zormit and others added 4 commits August 6, 2024 15:13
Use different recipients for saved drafts and new drafts, in the
composition factory.
Check new drafts with
- same content, different recipients (already exists)
- same recipients, different content (newly added)

Previously, we were using different recipients in the composition
factory, to differentiate the new draft from the saved draft.
Now, it has been extended to allow creating different content as well.
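For readers following along, a rough sketch of what such a composition factory might look like (the signature and defaults here are illustrative, not the PR's actual implementation):

from typing import Any, Dict, Optional

def composition_factory(
    *,
    msg_type: str = "stream",
    recipients: Any = "Current Stream",
    content: str = "Random message",
    subject: Optional[str] = "Topic",
) -> Dict[str, Any]:
    # Build a stream or private composition dict; varying `recipients` or
    # `content` lets a new draft differ from a saved one in either field.
    composition: Dict[str, Any] = {
        "type": msg_type,
        "to": recipients,
        "content": content,
        "read_by_sender": True,
    }
    if msg_type == "stream":
        composition["subject"] = subject
    return composition

# e.g. same recipients, different content => treated as a newly added draft
saved_draft = composition_factory(content="old text")
new_draft = composition_factory(content="new text")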
@Niloth-p
Collaborator Author

Niloth-p commented Aug 9, 2024

@zormit Thank you so much for making those changes yourself! That does look much better.

Sorry for the delay in replying; I'd absorbed your commits right away, but hadn't managed to type out a reply here.

If you find this interesting, please feel free to push your own PR with further improvements and add me as a co-author; that could also help get it merged, since I've done several iterations on this already and have been struggling to tell the good code from the bad here.

So, no, I'm not really clear on how we can further reduce the indirection, my apologies. I can think of several ways to change the tests, but I have no idea what improves them and what worsens them. If someone can guide me with one-liners like the commit messages, I'll try my best to implement them and get back.

Being new to the complexities of testing, I'm not sure to what extent I should follow the best practices and guidance I come across. I realise I might have overdone it here to the point of complexity, and I'm still not clear on all the tradeoffs, so please do let me know how we can proceed from here.

Also, I'm not sure how many commits this is supposed to be. I was initially under the impression it should be a single commit, since it's just tests and we wouldn't want to track the additions incrementally. But after @neiljp's suggestion to break down the refactoring commits of lint-hotkeys, I've been wondering whether we should break this down too. Do we need to?

I've added some new test cases, but I'm not sure whether we want to be that thorough or whether it bulks up the testing time unnecessarily.

@Niloth-p
Collaborator Author

Niloth-p commented Aug 9, 2024

GitHub will report the tests as failing because I've kept my additions separate from Moritz's commits, but they'll pass once squashed, so please ignore the failures.

Labels: area: tests, PR needs mentor review, size: XL [Automatic label added by zulipbot]
5 participants