feat: add few missing SparkLikeExpr methods #1721

Merged: FBruzzesi merged 11 commits into narwhals-dev:main from Dhanunjaya-Elluri:feat/spark-expr-methods on Jan 7, 2025
Changes from all commits (11 commits):

- ea03eaf  feat(spark): add missing methods to SparkLikeExpr (Dhanunjaya-Elluri)
- 57ab8a0  feat(spark): add few missing methods (Dhanunjaya-Elluri)
- f3ab9e2  fix: add xfail to median when python<3.9 (Dhanunjaya-Elluri)
- c470ece  fix: fixing reviewd requests & updated tests (Dhanunjaya-Elluri)
- e569f83  fix: fix `PYSPARK_VERSION` for `median` calculation (Dhanunjaya-Elluri)
- 120ea3b  fix: fix refactor issue (Dhanunjaya-Elluri)
- b00c6dc  fix: remove `is_nan` method (Dhanunjaya-Elluri)
- fbdd61c  Merge branch 'main' of https://github.com/Dhanunjaya-Elluri/narwhals … (Dhanunjaya-Elluri)
- 6e8292b  Merge branch 'main' of https://github.com/Dhanunjaya-Elluri/narwhals … (Dhanunjaya-Elluri)
- 9ac23e6  fix: fixing `is_duplicated` & `is_unique` & remove `n_unique` (Dhanunjaya-Elluri)
- 155f654  Merge branch 'main' into feat/spark-expr-methods (FBruzzesi)
Diff (SparkLikeExpr backend):
@@ -160,6 +160,11 @@ def __gt__(self, other: SparkLikeExpr) -> Self:
             returns_scalar=False,
         )
 
+    def abs(self) -> Self:
+        from pyspark.sql import functions as F  # noqa: N812
+
+        return self._from_call(F.abs, "abs", returns_scalar=self._returns_scalar)
+
     def alias(self, name: str) -> Self:
         def _alias(df: SparkLikeLazyFrame) -> list[Column]:
             return [col.alias(name) for col in self._call(df)]
@@ -179,44 +184,42 @@ def _alias(df: SparkLikeLazyFrame) -> list[Column]:
         )
 
     def count(self) -> Self:
-        def _count(_input: Column) -> Column:
-            from pyspark.sql import functions as F  # noqa: N812
+        from pyspark.sql import functions as F  # noqa: N812
 
-            return F.count(_input)
-
-        return self._from_call(_count, "count", returns_scalar=True)
+        return self._from_call(F.count, "count", returns_scalar=True)
 
     def max(self) -> Self:
-        def _max(_input: Column) -> Column:
-            from pyspark.sql import functions as F  # noqa: N812
+        from pyspark.sql import functions as F  # noqa: N812
 
-            return F.max(_input)
-
-        return self._from_call(_max, "max", returns_scalar=True)
+        return self._from_call(F.max, "max", returns_scalar=True)
 
     def mean(self) -> Self:
-        def _mean(_input: Column) -> Column:
-            from pyspark.sql import functions as F  # noqa: N812
+        from pyspark.sql import functions as F  # noqa: N812
 
-            return F.mean(_input)
+        return self._from_call(F.mean, "mean", returns_scalar=True)
 
-        return self._from_call(_mean, "mean", returns_scalar=True)
+    def median(self) -> Self:
+        def _median(_input: Column) -> Column:
+            import pyspark  # ignore-banned-import
+            from pyspark.sql import functions as F  # noqa: N812
+
+            if parse_version(pyspark.__version__) < (3, 4):
+                # Use percentile_approx with default accuracy parameter (10000)
+                return F.percentile_approx(_input.cast("double"), 0.5)
+
+            return F.median(_input)
+
+        return self._from_call(_median, "median", returns_scalar=True)
 
     def min(self) -> Self:
-        def _min(_input: Column) -> Column:
-            from pyspark.sql import functions as F  # noqa: N812
+        from pyspark.sql import functions as F  # noqa: N812
 
-            return F.min(_input)
-
-        return self._from_call(_min, "min", returns_scalar=True)
+        return self._from_call(F.min, "min", returns_scalar=True)
 
     def sum(self) -> Self:
-        def _sum(_input: Column) -> Column:
-            from pyspark.sql import functions as F  # noqa: N812
+        from pyspark.sql import functions as F  # noqa: N812
 
-            return F.sum(_input)
-
-        return self._from_call(_sum, "sum", returns_scalar=True)
+        return self._from_call(F.sum, "sum", returns_scalar=True)
 
     def std(self: Self, ddof: int) -> Self:
         from functools import partial
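The median implementation above is version-gated because F.median only exists in PySpark 3.4+. A minimal standalone sketch of the same idea (assuming only a local SparkSession and a toy column named "a"; this is an illustration, not the narwhals code itself):

    import pyspark
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame([(1.0,), (2.0,), (10.0,)], ["a"])

    # PySpark gained F.median in 3.4; older versions fall back to the
    # approximate 0.5 quantile.
    if tuple(int(p) for p in pyspark.__version__.split(".")[:2]) < (3, 4):
        median_col = F.percentile_approx(F.col("a").cast("double"), 0.5)
    else:
        median_col = F.median(F.col("a"))

    df.select(median_col.alias("a_median")).show()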
@@ -239,3 +242,133 @@ def var(self: Self, ddof: int) -> Self:
         func = partial(_var, ddof=ddof, np_version=parse_version(np.__version__))
 
         return self._from_call(func, "var", returns_scalar=True, ddof=ddof)

Every remaining line in this hunk is an addition; the new methods appended to SparkLikeExpr are shown below.

    def clip(
        self,
        lower_bound: Any | None = None,
        upper_bound: Any | None = None,
    ) -> Self:
        def _clip(_input: Column, lower_bound: Any, upper_bound: Any) -> Column:
            from pyspark.sql import functions as F  # noqa: N812

            result = _input
            if lower_bound is not None:
                # Convert lower_bound to a literal Column
                result = F.when(result < lower_bound, F.lit(lower_bound)).otherwise(
                    result
                )
            if upper_bound is not None:
                # Convert upper_bound to a literal Column
                result = F.when(result > upper_bound, F.lit(upper_bound)).otherwise(
                    result
                )
            return result

        return self._from_call(
            _clip,
            "clip",
            lower_bound=lower_bound,
            upper_bound=upper_bound,
            returns_scalar=self._returns_scalar,
        )

Review comment on lines +246 to +250 (the clip signature): "We recently introduced support for …"
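A rough standalone illustration of the same F.when / otherwise clipping pattern (the SparkSession, toy DataFrame, and bounds are made up for the example):

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame([(-5,), (3,), (99,)], ["a"])

    col = F.col("a")
    clipped = F.when(col < 0, F.lit(0)).otherwise(col)            # lower bound 0
    clipped = F.when(clipped > 10, F.lit(10)).otherwise(clipped)  # upper bound 10

    # Yields 0, 3, 10
    df.select(clipped.alias("a_clipped")).show()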
    def is_between(
        self,
        lower_bound: Any,
        upper_bound: Any,
        closed: str,
    ) -> Self:
        def _is_between(_input: Column, lower_bound: Any, upper_bound: Any) -> Column:
            if closed == "both":
                return (_input >= lower_bound) & (_input <= upper_bound)
            if closed == "none":
                return (_input > lower_bound) & (_input < upper_bound)
            if closed == "left":
                return (_input >= lower_bound) & (_input < upper_bound)
            return (_input > lower_bound) & (_input <= upper_bound)

        return self._from_call(
            _is_between,
            "is_between",
            lower_bound=lower_bound,
            upper_bound=upper_bound,
            returns_scalar=self._returns_scalar,
        )
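To make the four closed variants concrete, here is a hedged sketch using plain PySpark Column expressions (the column name and bounds are invented for illustration; the fall-through branch corresponds to closed="right"):

    from pyspark.sql import functions as F

    a = F.col("a")
    both = (a >= 2) & (a <= 5)   # closed="both"
    none_ = (a > 2) & (a < 5)    # closed="none"
    left = (a >= 2) & (a < 5)    # closed="left"
    right = (a > 2) & (a <= 5)   # closed="right" (the fall-through case)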
    def is_duplicated(self) -> Self:
        def _is_duplicated(_input: Column) -> Column:
            from pyspark.sql import Window
            from pyspark.sql import functions as F  # noqa: N812

            # Create a window spec that treats each value separately.
            return F.count("*").over(Window.partitionBy(_input)) > 1

        return self._from_call(
            _is_duplicated, "is_duplicated", returns_scalar=self._returns_scalar
        )

    def is_finite(self) -> Self:
        def _is_finite(_input: Column) -> Column:
            from pyspark.sql import functions as F  # noqa: N812

            # A value is finite if it's not NaN, not NULL, and not infinite
            return (
                ~F.isnan(_input)
                & ~F.isnull(_input)
                & (_input != float("inf"))
                & (_input != float("-inf"))
            )

        return self._from_call(
            _is_finite, "is_finite", returns_scalar=self._returns_scalar
        )
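A quick, hedged check of the finiteness logic on a toy column (column name and values are invented; the isnull guard matters because comparisons against SQL NULL would otherwise propagate NULL rather than a boolean):

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame(
        [(1.0,), (float("nan"),), (float("inf"),), (None,)], ["a"]
    )

    a = F.col("a")
    is_finite = (
        ~F.isnan(a) & ~F.isnull(a) & (a != float("inf")) & (a != float("-inf"))
    )
    # Only the 1.0 row should come back as finite.
    df.select(a, is_finite.alias("a_is_finite")).show()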
    def is_in(self, values: Sequence[Any]) -> Self:
        def _is_in(_input: Column, values: Sequence[Any]) -> Column:
            return _input.isin(values)

        return self._from_call(
            _is_in,
            "is_in",
            values=values,
            returns_scalar=self._returns_scalar,
        )

    def is_unique(self) -> Self:
        def _is_unique(_input: Column) -> Column:
            from pyspark.sql import Window
            from pyspark.sql import functions as F  # noqa: N812

            # Create a window spec that treats each value separately
            return F.count("*").over(Window.partitionBy(_input)) == 1

        return self._from_call(
            _is_unique, "is_unique", returns_scalar=self._returns_scalar
        )
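Both is_duplicated (above) and is_unique lean on the same window trick: partition by the value itself and count the rows in each partition. A standalone sketch with a made-up column name:

    from pyspark.sql import SparkSession, Window
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame([(1,), (1,), (2,)], ["a"])

    # One partition per distinct value of "a"; the row count per partition
    # tells us whether that value occurs more than once.
    w = Window.partitionBy(F.col("a"))
    df.select(
        F.col("a"),
        (F.count("*").over(w) > 1).alias("is_duplicated"),
        (F.count("*").over(w) == 1).alias("is_unique"),
    ).show()
    # The two rows with a=1 are duplicated; the a=2 row is unique.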
    def len(self) -> Self:
        def _len(_input: Column) -> Column:
            from pyspark.sql import functions as F  # noqa: N812

            # Use count(*) to count all rows including nulls
            return F.count("*")

        return self._from_call(_len, "len", returns_scalar=True)

    def round(self, decimals: int) -> Self:
        def _round(_input: Column, decimals: int) -> Column:
            from pyspark.sql import functions as F  # noqa: N812

            return F.round(_input, decimals)

        return self._from_call(
            _round,
            "round",
            decimals=decimals,
            returns_scalar=self._returns_scalar,
        )

    def skew(self) -> Self:
        from pyspark.sql import functions as F  # noqa: N812

        return self._from_call(F.skewness, "skew", returns_scalar=True)
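Taken together, the new expressions can be exercised from the narwhals side roughly like this (a hedged sketch; the column name, data, and output handling are illustrative and may not match the PR's tests):

    import narwhals as nw
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    spark_df = spark.createDataFrame([(-1.0,), (2.0,), (2.0,)], ["a"])

    # The PySpark backend is lazy, so from_native gives a narwhals LazyFrame.
    lf = nw.from_native(spark_df)
    result = lf.select(
        nw.col("a").abs().alias("a_abs"),
        nw.col("a").clip(lower_bound=0, upper_bound=1).alias("a_clipped"),
        nw.col("a").is_between(0, 2, closed="both").alias("a_between"),
        nw.col("a").round(1).alias("a_rounded"),
    )
    result.to_native().show()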
Review comment:
Should narwhals suggest (and maintain :)) a guide to installing Java for pyspark? Or should we just add a note saying that pyspark needs Java, with a link to the pyspark documentation? There may be different ways one wants to install Java on their machine. For example, on macOS I prefer using openjdk installed via Homebrew. What do you think?
Reply:
I think it would be simplest to just say that pyspark needs Java installed and add a link to the pyspark documentation.