# between() Function
In PySpark, the between() function is used to filter rows or create boolean expressions based on whether a column value lies within a specified range (inclusive). It's a convenient way to check if a value falls between two boundaries.

Syntax:

```python
df.filter(column.between(lower_bound, upper_bound))
```

OR

```python
df.select(column.between(lower_bound, upper_bound))
```

- `column`: the column to evaluate.
- `lower_bound`: the inclusive lower bound of the range.
- `upper_bound`: the inclusive upper bound of the range.
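
A minimal runnable sketch of the filter() form; the DataFrame, column name, and bounds below are made up for illustration:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("between_demo").getOrCreate()

# Hypothetical sample data: (name, age)
df = spark.createDataFrame(
    [("Alice", 25), ("Bob", 35), ("Cara", 45)],
    ["name", "age"],
)

# Keep only rows whose age lies in the inclusive range [30, 45]
df.filter(col("age").between(30, 45)).show()
```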

Both bounds are inclusive, just like the BETWEEN operator in MySQL.
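
The select() form does not drop any rows; it returns a boolean column, and values equal to either bound evaluate to true. A small sketch, again with made-up data:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("between_bounds").getOrCreate()

# Values chosen so that both boundaries (10 and 20) appear in the data
df = spark.createDataFrame([(10,), (15,), (20,), (25,)], ["value"])

# in_range is True for 10, 15 and 20 (bounds included), False for 25
df.select("value", col("value").between(10, 20).alias("in_range")).show()
```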