
Bound preprocess and parallel signing calls #4564

Merged
merged 4 commits on Dec 27, 2024

Conversation

jonathanmetzman (Collaborator):

Bound preprocess to 30 minutes (it shouldn't take more than 5).
Bound parallel signing to 2 minutes, raising an exception if we exceed this.
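For illustration, here is a minimal sketch of how a 2-minute bound on parallel signing could look with concurrent.futures. The names (`sign_url`, `parallel_sign`, `SIGNING_TIMEOUT_SECONDS`) are assumptions, not the PR's actual code; only the timeout-then-exception behavior described above is being sketched.

```python
# Minimal sketch (assumed names, not the PR's real code): bound a batch of
# parallel signing calls and raise if the bound is exceeded.
import concurrent.futures as futures

SIGNING_TIMEOUT_SECONDS = 2 * 60  # The 2-minute bound from the description.


def sign_url(remote_path):
  # Placeholder for the real, CPU-bound signing call.
  return 'signed://' + remote_path


def parallel_sign(remote_paths, pool_size=8):
  with futures.ProcessPoolExecutor(pool_size) as executor:
    # executor.map raises concurrent.futures.TimeoutError if the next result
    # is not ready within the timeout, so a stuck batch fails loudly instead
    # of hanging forever.
    return list(
        executor.map(sign_url, remote_paths, timeout=SIGNING_TIMEOUT_SECONDS))


if __name__ == '__main__':
  print(parallel_sign(['corpus/a', 'corpus/b']))
```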

```diff
   else:
-    yield multiprocessing.Pool(pool_size)
+    yield futures.ProcessPoolExecutor(pool_size)
```
Collaborator:

Why processes and not threads?

jonathanmetzman (Author):

This is mostly used for CPU-bound tasks such as cryptographic signing. In addition, Python threads are hampered by the GIL, and processes are usually faster even for I/O-bound work.
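As an aside, a small illustrative comparison (not project code) of that point in CPython: a pure-Python loop holds the GIL, so threads serialize CPU-bound work, while separate processes can actually run it in parallel.

```python
# Illustrative only: compare threads vs. processes on CPU-bound work.
import concurrent.futures as futures


def cpu_bound_work(n):
  # Pure-Python arithmetic; the GIL serializes this across threads.
  total = 0
  for i in range(n):
    total += i * i
  return total


def run_with_threads(workloads):
  with futures.ThreadPoolExecutor(max_workers=4) as executor:
    return list(executor.map(cpu_bound_work, workloads))


def run_with_processes(workloads):
  with futures.ProcessPoolExecutor(max_workers=4) as executor:
    return list(executor.map(cpu_bound_work, workloads))


if __name__ == '__main__':
  # On a multi-core machine the process version typically finishes first.
  print(run_with_threads([2_000_000] * 4))
  print(run_with_processes([2_000_000] * 4))
```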

```diff
@@ -616,6 +656,9 @@ def run(self):
         if time_left <= 0:
           logs.info('Lease reached maximum lease time of {} seconds, '
                     'stopping renewal.'.format(self._max_lease_seconds))
+          if self._ack_on_timeout:
+            logs.info('Acking on timeout')
+            self._message.ack()
```
Collaborator:


Why acking? Why is this preferred to exiting and retrying later?

jonathanmetzman (Author):

For fuzz tasks, I'd rather things starve for now than gum up the queue.
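For context, a rough sketch of the lease-renewal loop this hunk modifies; the class name and message methods below are assumptions standing in for ClusterFuzz's real Pub/Sub wrapper, not its actual code.

```python
# Rough sketch of the ack-on-timeout pattern the diff adds (assumed names).
import time


class LeaseRenewerSketch:

  def __init__(self, message, max_lease_seconds, ack_on_timeout=False):
    self._message = message
    self._max_lease_seconds = max_lease_seconds
    self._ack_on_timeout = ack_on_timeout

  def run(self):
    deadline = time.time() + self._max_lease_seconds
    while True:
      time_left = deadline - time.time()
      if time_left <= 0:
        # Max lease reached: stop renewing. Acking here drops the message
        # instead of letting it be redelivered and clog the queue.
        if self._ack_on_timeout:
          self._message.ack()
        return
      # Extend the lease and sleep before checking again (assumed API).
      self._message.modify_ack_deadline(int(min(time_left, 600)))
      time.sleep(30)
```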

```diff
@@ -1377,17 +1378,24 @@ def get_arbitrary_signed_upload_url(remote_directory):
       get_arbitrary_signed_upload_urls(remote_directory, num_uploads=1))[0]


-def maybe_parallel_map(func, arguments):
+def parallel_map(func, argument_list):
```
Collaborator:


The if statement is being removed because the rearch is complete, right?

jonathanmetzman (Author):

Correct!

jonathanmetzman merged commit 36505ab into oss-fuzz on Dec 27, 2024
3 checks passed
vitorguidi added a commit that referenced this pull request Dec 28, 2024
vitorguidi added a commit that referenced this pull request Dec 28, 2024
vitorguidi mentioned this pull request on Dec 28, 2024
vitorguidi added a commit that referenced this pull request Dec 28, 2024
The preprocess count for fuzz tasks went to zero after #4564 was deployed, so it is being reverted.

#4528 is also being reverted because it introduced the following error
into the fuzz task scheduler, which caused fuzz tasks to stop being
scheduled:

```
Traceback (most recent call last):
  File "/mnt/scratch0/clusterfuzz/src/python/bot/startup/run_cron.py", line 68, in <module>
    sys.exit(main())
             ^^^^^^
  File "/mnt/scratch0/clusterfuzz/src/python/bot/startup/run_cron.py", line 64, in main
    return 0 if task_module.main() else 1
                ^^^^^^^^^^^^^^^^^^
  File "/mnt/scratch0/clusterfuzz/src/clusterfuzz/_internal/cron/schedule_fuzz.py", line 304, in main
    return schedule_fuzz_tasks()
           ^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/scratch0/clusterfuzz/src/clusterfuzz/_internal/cron/schedule_fuzz.py", line 284, in schedule_fuzz_tasks
    available_cpus = get_available_cpus(project, regions)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/mnt/scratch0/clusterfuzz/src/clusterfuzz/_internal/cron/schedule_fuzz.py", line 247, in get_available_cpus
    result = pool.starmap_async(  # pylint: disable=no-member
             ^^^^^^^^^^^^^^^^^^
AttributeError: 'ProcessPoolExecutor' object has no attribute 'starmap_async'

```
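For reference, the mismatch behind that traceback: multiprocessing.Pool provides starmap_async, but concurrent.futures.ProcessPoolExecutor does not, so existing starmap_async call sites break after the swap and have to be ported, e.g. to submit(). A small illustrative sketch (not the scheduler's actual code):

```python
# Illustrative only: the Pool API that broke, and one ProcessPoolExecutor
# equivalent. The add() helper is a stand-in, not scheduler code.
import concurrent.futures as futures
import multiprocessing


def add(a, b):
  return a + b


def main():
  args = [(1, 2), (3, 4)]

  # multiprocessing.Pool: starmap_async unpacks each argument tuple.
  with multiprocessing.Pool(2) as pool:
    result = pool.starmap_async(add, args)
    print(result.get())  # [3, 7]

  # ProcessPoolExecutor has no starmap_async; submit each call instead.
  with futures.ProcessPoolExecutor(2) as executor:
    pending = [executor.submit(add, *arg) for arg in args]
    print([f.result() for f in pending])  # [3, 7]


if __name__ == '__main__':
  main()
```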
jonathanmetzman added a commit that referenced this pull request Jan 8, 2025
Bound preprocess to 30 minutes (shouldn't be more than 5)
Bound parallel signing to 2 minutes, exception if we pass this.
jonathanmetzman pushed a commit that referenced this pull request Jan 8, 2025