The format is based on Keep a Changelog and this project adheres to Semantic Versioning.
All release versions should be documented here with release date and types of changes. Unreleased changes and pre-releases (i.e. alpha/beta versions) can be documented under the section Unreleased.
Possible types of changes are:
- Added for new features
- Changed for changes in existing functionality
- Deprecated for soon-to-be removed features
- Removed for now removed features
- Fixed for any bug fixes
- Security in case of vulnerabilities
- Relaxed version constraints for the dependencies `packaging` and `google-cloud-storage`.
- Added the missing `preserve_time` keyword argument to the methods `copy` and `move`
- Removed calls to the deprecated methods `getbasic`, `setbytes`, and `getbytes`.
- Added missing `requests` dependency.
- Added missing `urllib3` and `packaging` dependencies.
- Ensured compatibility with `urllib3` versions older than 1.26.0, the first version that introduced the `allowed_methods` argument and deprecated the `method_whitelist` argument.
- Changed the keyword argument passed to the underlying dependency urllib3 from `method_whitelist` to `allowed_methods` to avoid a deprecation warning (#39)
- Fixed a bug in `GCSFile.readinto` that surfaced on Python >= 3.8 when reading data via e.g. `numpy.load` (#24, #36)
- The underlying HTTP client is now configured to automatically retry requests that return one of the status codes "429 Too Many Requests", "502 Bad Gateway", "503 Service Unavailable", or "504 Gateway Timeout".
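A version-compatible retry configuration along these lines can be sketched with urllib3. This is a minimal sketch, not the library's actual code: the retry budget of 5 and the signature inspection used to pick the right keyword name are assumptions for illustration.

```python
# Sketch: build a urllib3 Retry that retries on 429/502/503/504 and works
# both before and after the method_whitelist -> allowed_methods rename.
import inspect

from urllib3.util.retry import Retry

retry_kwargs = {
    "total": 5,  # illustrative retry budget, not necessarily the library's value
    "status_forcelist": [429, 502, 503, 504],
}
# urllib3 >= 1.26 renamed `method_whitelist` to `allowed_methods`; detect
# which spelling this installation accepts instead of pinning a version.
if "allowed_methods" in inspect.signature(Retry.__init__).parameters:
    retry_kwargs["allowed_methods"] = None  # None = retry on all HTTP methods
else:
    retry_kwargs["method_whitelist"] = None  # pre-1.26 spelling
retry = Retry(**retry_kwargs)
```

Passing `None` for the method list makes the retries apply to all HTTP methods, which matches the status-code-driven policy described above.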
- Some tests were still calling `get_bucket()` from the constructor of `GCSFS`.
- Removed the (non-required) call to `get_bucket()` from the constructor of `GCSFS`. This avoids the need for `storage.buckets.get` permissions on that bucket and thus allows users to simplify access management by using tight, predefined IAM roles. This change has two implications: a) the constructor is now slightly faster, as one fewer RPC is performed; b) the error message when a bucket does not exist is slightly less informative.
- `open_fs` now supports providing a custom "project" and "api_endpoint", e.g. `open_fs("gs://bucket_name?project=test")` or `open_fs("gs://bucket_name?api_endpoint=http%3A//localhost%3A8888")` (#26)
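The "api_endpoint" value must be percent-encoded when embedded in the `gs://` URL, as in the example above. A small sketch using only the standard library shows one way to produce that encoded form (the endpoint value here is just the illustrative one from the entry above):

```python
# Percent-encode an endpoint URL so it can be passed as a query parameter
# inside a gs:// filesystem URL.
from urllib.parse import quote

endpoint = "http://localhost:8888"
# quote() keeps "/" unescaped by default but encodes ":" as %3A,
# yielding exactly the form shown in the changelog entry.
fs_url = "gs://bucket_name?api_endpoint=" + quote(endpoint)
print(fs_url)  # gs://bucket_name?api_endpoint=http%3A//localhost%3A8888
```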
- Added `GCSFS.get_mapper()`, which returns a `GCSMap` that wraps a `GCSFS` as a `MutableMapping`. The keys of the mapping become files and the values (which must be bytes) the contents of those files. This is particularly useful with libraries such as xarray or zarr. (#21)
- `GCSFS.fix_storage()` no longer creates a directory marker if `root_path` is the actual root of the bucket. Apart from having no advantage, this caused subsequent `GCSFS.fix_storage()` calls as well as `GCSFS.walk()` to get stuck in endless loops. (#19)
- Instead of uploading all blobs as `application/octet-stream`, the MIME type is now guessed via `mimetypes.guess_type()`. This enables e.g. hotlinking images directly from GCS. (#15)
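The guessing step can be illustrated with the standard library alone. This is a minimal sketch of the idea, not the package's actual upload code; the helper name and the fallback to the previous default are illustrative.

```python
# Guess a blob's MIME type from its file name, falling back to the old
# default of application/octet-stream when the extension is unknown.
import mimetypes

def guess_content_type(blob_name: str) -> str:
    """Return a MIME type for the given name, or the generic default."""
    content_type, _encoding = mimetypes.guess_type(blob_name)
    return content_type or "application/octet-stream"

print(guess_content_type("photo.png"))      # image/png
print(guess_content_type("notes.xyz123"))   # application/octet-stream
```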
- Fixed a bug where the URL parameter `strict` was not considered by GCSFS, e.g. in `open_fs("gs://bucket_name?strict=False")` (#11)
- Fixed a bug where `create=True` in combination with an empty-ish `root_path` like `""`, `"."` or `"/"` would create a directory marker.
- Implemented the `create` property on `GCSFS` and the corresponding opener. By default, all new `GCSFS` instances have `create=False` (the PyFilesystem default), which means they will raise a `CreateFailed` exception if `root_path` does not exist (#8)
- This is the first release available on conda-forge
- Removed the `delimiter` property from `GCSFS`, as it was not fully functional and we currently have no use case for it
- `GCSFS.listdir()` and `GCSFS.scandir()` now also correctly list blobs on the root level of a bucket
- Open-sourced GCSFS by moving it to GitHub
- `GCSFS.getinfo()` no longer magically fixes missing directory markers. Instead, there is a new method, `GCSFS.fix_storage()`, which can be called explicitly to check and fix the entire filesystem.
- Removed the `project` and `credentials` properties from `GCSFS`. Instead, one can now optionally pass a `client` of type `google.cloud.storage.Client`.
- `GCSFS.makedirs()` is now suitable for multiprocessing
- The `bucket` and `client` properties of `GCSFS` are now computed only once, on instance initialization (performance improvement)
- `GCSFS.exists()` now correctly handles existing directories that are not marked with an empty file
- Added a custom implementation of `FS.opendir()` in order to be able to skip the directory check if `strict=False` (performance improvement)
- Fixed a bug where `listdir`/`scandir` on the root level of a bucket would always return an empty result