Laurel - Cedar #56
from ntpath import join
from sqlalchemy import true
def grouped_anagrams(strings):
    """ This method will return an array of arrays.
        Each subarray will have strings which are anagrams of each other
        Time Complexity: O(n * m log m)
        ^ Not sure on the time complexity. Here, n represents the number of words in the
        input list, while m represents the length of an individual word.
        Space Complexity: O(n)
    """
    word_dict = {}
    for word in strings:
        # Anagrams share the same sorted spelling, so the sorted word
        # works as a grouping key.
        sorted_word = ''.join(sorted(word))
        if sorted_word in word_dict:
            word_dict[sorted_word].append(word)
        else:
            word_dict[sorted_word] = [word]
    return list(word_dict.values())

Reviewer comment (on the time-complexity note): This is correct! However, we can make a simplifying assumption. Since we know the words are English words and English words do not get too long (about 5 letters per word on average), the effect they have on this algorithm is dwarfed by the number of words in the list (which could easily get into the hundreds and thousands for a big list). Thus, we can just say this is O(n).
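Following up on that comment: if the per-word sort ever did matter, a letter-count key groups anagrams in O(n * m) with no sorting at all, since anagrams have identical letter counts. A quick sketch of the idea, not code from this PR (the names count_key and grouped_anagrams_counting are illustrative, and it assumes lowercase ASCII words):

def count_key(word):
    # Anagrams share identical letter counts, so a 26-slot count
    # tuple is a valid (and hashable) grouping key.
    counts = [0] * 26
    for ch in word:
        counts[ord(ch) - ord('a')] += 1
    return tuple(counts)

def grouped_anagrams_counting(strings):
    groups = {}
    for word in strings:
        groups.setdefault(count_key(word), []).append(word)
    return list(groups.values())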
def top_k_frequent_elements(nums, k):
    """ This method will return the k most common elements
        In the case of a tie it will select the first occurring element.
        Time Complexity: O(n log n), where n is the number of elements in nums
        (this is the cost of the list sort)
        Space Complexity: O(n)
    """
    if not nums or k < 1:
        return []
    freq_map = {}
    for num in nums:
        freq_map[num] = 1 + freq_map.get(num, 0)
    freq_list = [(num, freq) for num, freq in freq_map.items()]
    # sort() is stable, so tied frequencies keep first-seen order
    freq_list.sort(key=lambda x: x[1], reverse=True)
    # Slice rather than index so a k larger than the number of
    # distinct elements does not raise an IndexError
    return [num for num, _ in freq_list[:k]]
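If the full O(n log n) sort ever feels heavy, collections.Counter.most_common (which is backed by heapq.nlargest) trims this to roughly O(n log k). A sketch of that variant, not part of the PR (the name top_k_frequent_counter is illustrative):

from collections import Counter

def top_k_frequent_counter(nums, k):
    if not nums or k < 1:
        return []
    # most_common resolves ties toward earlier elements, matching
    # the stable-sort behavior of the version above.
    return [num for num, _ in Counter(nums).most_common(k)]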
def valid_sudoku(table):
Reviewer comment (on the imports at the top of the file): Looks like these import statements never got used, so you can remove them to keep the code clean as a good style practice.
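The diff cuts off before the body of valid_sudoku, so for context, here is one common shape for such a validator, a hedged sketch rather than the code from this PR (it assumes a 9x9 table where empty cells hold "."):

def valid_sudoku_sketch(table):
    # Track the digits seen so far in each row, column, and 3x3 box.
    rows = [set() for _ in range(9)]
    cols = [set() for _ in range(9)]
    boxes = [set() for _ in range(9)]
    for r in range(9):
        for c in range(9):
            val = table[r][c]
            if val == ".":
                continue
            b = (r // 3) * 3 + c // 3
            # A repeat in any row, column, or box makes the board invalid.
            if val in rows[r] or val in cols[c] or val in boxes[b]:
                return False
            rows[r].add(val)
            cols[c].add(val)
            boxes[b].add(val)
    return True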